Ethical Dilemmas of AI

November 4, 2022

Can AI be trusted? It's already woven, visibly and invisibly, into our world, from Google Translate and video game bots to critical workloads in healthcare, manufacturing, and banking. Can we effectively leverage this game-changing technology while navigating the inherent ethical dilemmas around bias, trust, and transparency? There is a way forward, but it will take continuous, diligent conversations about AI ethics as the technology evolves.

Do You Trust AI?

The question of trust often comes up when a human task is handed over to AI. For example, Tesla and other car manufacturers are experimenting with AI-driven auto-drive and auto-park capabilities. Auto-drive pushes the boundaries of trust because testing of that technology has resulted in human fatalities. Lines are quickly drawn in the sand over whether we can trust a car even to park itself.

However, trusting AI is already inherent in some use cases; people do it without even realizing it. In fraud detection, for instance, texts about questionable transactions arrive before you even realize your wallet or credit card is missing. This detection happens in real time and can save you a big financial headache.

Even in industries where AI is part of critical business workloads, the question of trust is still on the table. In mainframe operations, for example, some businesses do not automatically take action when AI uncovers an anomaly. Although the AI has done its job detecting the anomaly, it does not understand that shutting down a job on a mainframe could have catastrophic consequences for the business. In this instance, operators do not trust the AI to make a better judgment than they could. As AI evolves, companies and use cases will pressure-test when, where, and how much to trust this technology, ultimately scrutinizing whether the data and results are plausible and unbiased.
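
To make that pattern concrete, here is a minimal Python sketch of human-in-the-loop gating for AI-detected anomalies. The event type, threshold, and approval and shutdown functions are hypothetical stand-ins for illustration, not any vendor's actual API.

```python
# Minimal sketch: the AI detects, a human decides.
# All names here are hypothetical illustrations, not a real product API.

from dataclasses import dataclass

@dataclass
class AnomalyEvent:
    job_id: str
    score: float        # model's anomaly score, 0.0-1.0
    description: str

AUTO_ACT_THRESHOLD = 0.99   # act without a human only on near-certain cases

def request_operator_approval(event: AnomalyEvent) -> bool:
    """Stand-in for paging a human operator and awaiting their decision."""
    answer = input(f"Anomaly on {event.job_id}: {event.description}. Shut down? [y/N] ")
    return answer.strip().lower() == "y"

def shut_down_job(job_id: str) -> None:
    print(f"Shutting down job {job_id}")

def handle_anomaly(event: AnomalyEvent) -> None:
    # Detection alone never triggers action; an operator stays in the loop
    # unless the evidence is overwhelming.
    if event.score >= AUTO_ACT_THRESHOLD or request_operator_approval(event):
        shut_down_job(event.job_id)
    else:
        print(f"Logged anomaly on {event.job_id} for review; no action taken.")

handle_anomaly(AnomalyEvent("PAYROLL01", 0.87, "CPU spike outside normal window"))
```

The design choice is the point: the model's confidence score decides only how the alert is routed, never whether a business-critical job is killed.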

"... AI is already visibly and invisibly woven into the way our world operates."

Bias In, Bias Out

Just like humans, AI systems are expected to follow social norms and to be fair and unbiased. The issue of bias isn't unique to AI models; humans have difficulty navigating bias as well. With AI, however, the potential outcomes of bias can have a staggering impact. In AI, bias correlates strongly with input data: unclean, unrefined, or flawed input data will skew the results. The important thing to grasp about bias is that it takes sensitivity, insight, and openness to navigate ethically.

Humans ultimately control bias in AI: practitioners select the original input data, and that selection can introduce bias that influences outcomes. Take Amazon, for example. Amazon receives a massive number of job applications. When the company tested applying AI to its recruitment process, it used the resumes of current employees as input data. So what was the outcome?

Amazon shared widely that, with the demographic sampling it had selected, the results were biased against women. During testing, the retailer found that if the word "women" appeared anywhere on a resume, that individual never got a call. Amazon recognized that the input data was part of the issue and never deployed the model to hiring managers.

Sharing this information and being sensitive to the results is essential as we continue discovering the best uses of this technology. Because malice is a matter of intent, the Amazon case is not an example of malicious use of AI. Instead, it demonstrates the necessity of introspection in the use of AI. Amazon corrected the outcomes by factoring the bias into the model, helping it achieve a more balanced result.

It's increasingly important to ask questions and continue conversations around AI ethics.

AI has very quickly become an essential part of business, if not an important differentiator for some organizations, so ethical issues such as bias should be expected. The keys to overcoming bias are making sure input data is as clean as possible and being willing to investigate unethical outcomes with openness and transparency. One concrete form that investigation can take is sketched below.
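
One way to investigate outcomes is to measure them per group. The following minimal sketch, using hypothetical screening results rather than any company's real data or process, computes selection rates by group and flags a large gap using the common "80% rule" heuristic.

```python
# Minimal bias audit: compare selection rates across groups.
# The data and model are hypothetical; this illustrates the kind of
# outcome check the article describes, not Amazon's actual process.

from collections import defaultdict

# (group, selected) pairs from a hypothetical screening model
results = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

counts = defaultdict(lambda: [0, 0])   # group -> [selected, total]
for group, selected in results:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
print("Selection rates:", rates)

# The "80% rule" heuristic: the lowest group's selection rate should be
# at least 80% of the highest group's rate.
low, high = min(rates.values()), max(rates.values())
if low < 0.8 * high:
    print("Potential bias: investigate input data and model before deployment.")
```

A check like this doesn't explain why a disparity exists, but it surfaces the disparity early enough to question the input data before a model reaches production.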

The Role of Transparency

Transparency in AI can be defined as the ability to explain a system's behavior to employees and customers. The problem is that AI isn't inherently transparent, so transparency will be an essential element to navigate as AI becomes more refined. At the business level, the question becomes: how do we set generally applicable regulations when outcomes vary so widely in impact? And how will we know whether an AI system that produced a less favorable outcome was transparent?

A lack of transparency is especially significant for consumers, who want to understand what personal data companies collect, how they use it, how their algorithms work, and who is accountable when things go wrong.

In some cases, such as Spotify's, the algorithm is tied to the organization's competitive advantage. The value consumers get from Spotify lies in the recommendations it makes based on the information it collects about the listener. The question in this case is: where is the ethical line for transparency? How much should a company share, and how much should consumers be allowed to see and know?

Transparency is a moving target; even so, assessing the impact as algorithms change is crucial. When a change occurs, being transparent about it and how it affects various stakeholders will be key to helping the technology progress to an even more innovative place. A potential solution lies in balance: organizations willing to explain why particular decisions were made within their models can contribute positively to transparency without giving up proprietary information.
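
As a sketch of what that balance could look like in practice, a model owner can surface the top factors behind a single decision without publishing the model itself. The feature names, weights, and user values below are hypothetical, and the linear model is a deliberately simple stand-in.

```python
# Minimal sketch: explain one decision's main factors without exposing
# the full model. Features and weights are hypothetical.

# A simple linear scoring model: score = sum(weight * feature value)
weights = {"listening_hours": 0.6, "genre_overlap": 1.2, "skip_rate": -0.9}
user = {"listening_hours": 3.5, "genre_overlap": 0.8, "skip_rate": 0.4}

contributions = {name: weights[name] * user[name] for name in weights}
score = sum(contributions.values())

# Share the top reasons for the decision, not the proprietary weights.
top_reasons = sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)[:2]
print(f"Recommendation score: {score:.2f}")
print("Main factors in this recommendation:", ", ".join(top_reasons))
```

The consumer learns which factors mattered most to the decision; the company's proprietary parameters stay private.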

Is an Ethical Balance Possible?

The short answer is yes, an ethical balance is possible. However, figuring out how to navigate the ethics of AI is a continuous process. Some in our industry call for transparency, while businesses see it as necessary to protect their technology because it is a differentiator. Both sides make significant, valid points, but where does that leave the inherent ethical dilemma?

There are a few key factors, no matter which side of the conversation you support.

  • As AI evolves, it will require ongoing human oversight and input to ensure ethical and functional application.
  • Sensitivity concerning bias is essential for input data, model adjustments, and monitoring outcomes.
  • Transparency about missteps and successes with AI encourages conversations around ethics and contributes to advancing AI technology.

As AI continues to influence the global community, we should remain committed to sharing lessons and asking ethical questions about trust, bias, and transparency. The more we do, the better decisions we’ll make, and the easier it will be to understand how to improve and leverage AI in the future—whether for business, employee hiring (HR), customer engagement, or managing infrastructure operations.


If you’d like to discuss the ethics of AI further, please contact me directly at Nicole.Fagen@broadcom.com.


Tag(s): AIOps, Mainframe