AI has the potential to both benefit and harm society. With responsible AI, organizations can minimize harm and encourage fairness and transparency.
Responsible AI is safe, trustworthy, and ethical. It’s AI designed to benefit society by aligning with human values, respecting privacy, and encouraging fairness.
Worry about the ethical and legal problems of AI isn’t going away. This makes sense: more companies than ever are adopting AI into their business. Almost 75% of US companies used AI in 2023. But ethical standards can’t keep up with this increased adoption (or the rapid development of the systems themselves). Over half of executives in one survey didn’t even know if their company had ethical AI guidelines, leaving them open to ethical and legal problems.
Let’s take a look at the ethical problems facing organizations and how responsible AI can minimize the risks.
Bias and automated decisions
Bias in AI decisions comes from biases present during development. Behind every AI output are decisions made by human developers and designers and a huge amount of training data (also human-generated).
People hold biases. Without proper oversight and careful design, AI decisions reflect the biases of their developers and the training data.
For example, racial bias has shown up in real-world algorithms. Loan-approval AI in the USA was more likely to deny loans to non-white applicants than to white applicants, even when they had the same credit scores. Another study found that AI used in health care led to Black Americans not getting the care they needed.
Solution: AI systems should be free of bias and discrimination. To get there, training data needs to be representative of the population the AI serves, reflecting its diversity and characteristics.
And the same is true for the development and oversight teams. Inclusive teams are more likely to spot biases within systems and their output. Business leaders, ethics advisors, and social scientists should join AI programmers and data scientists in the oversight teams to make sure no biases slip through the cracks.
Oversight isn’t only important at the output stage. There also needs to be strict oversight at the design and development stage to make sure the developers’ decisions don’t introduce bias into the model. Companies need to consistently monitor and test automated decisions. Careful human oversight can spot biases in the system.
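As a rough sketch of what that monitoring can look like in practice, here’s a minimal Python example that compares approval rates across demographic groups and flags a large gap. The data, column names, and 0.8 threshold are all hypothetical; real bias audits use multiple metrics and human review.

```python
import pandas as pd

# Hypothetical audit data: one row per automated loan decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,    1,    0,   1,   0,   0,   0,   1],
})

# Approval rate for each demographic group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group rate divided by highest.
# The 0.8 ("four-fifths") figure is a common rule of thumb, not a legal test.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"ratio={ratio:.2f}")

if ratio < 0.8:
    print("Warning: possible disparate impact; flag for human review.")
```

A check like this won’t prove a system is fair, but run regularly on real decisions it gives the oversight team something concrete to investigate.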
Accountability
AI systems make mistakes, from hallucinations and racially biased decisions to brittleness and catastrophic forgetting. And, as we’ve already seen, these mistakes can have significant consequences for people.
But who is responsible when AI makes a mistake? Is it the AI, the parent company, the programmer, or the user? This tricky legal question has no clear answer.
Solution: It’s a legal gray zone at the moment. But organizations that develop AI systems should take responsibility. They also need guidelines in place to address and fix any harm their system causes.
Privacy and data protection
AI is built on data. And that comes with privacy risks. Researchers from Carnegie Mellon and the University of Oxford found that AI creates or increases the following types of data privacy risks.
Surveillance. AI allows the collection of personal data on a huge scale. Before AI, it was impossible to sort, organize, and make sense of such vast amounts of data. AI now makes this easy. And because developers need to give their AI models so much training data, they have more reason than ever to collect as much data as possible.
Identification. AI helps organizations link data points to people’s identities at a scale we haven’t seen before. Wide-scale facial recognition systems are an example of how AI can encroach on your privacy.
Aggregation. Because AI systems can automate complex processes, they can analyze large amounts of personal data to make predictions and inferences.
Phrenology/physiognomy. Some AI systems make predictions based on external characteristics. For example, predicting sexual orientation from images, or estimating how likely someone is to commit a crime from their facial features.
Secondary use. Consent is a big issue with AI — especially when it comes to training data. Some companies have used data they collected for one purpose (their original AI model) for a different purpose (different AI models) without user consent.
Insecurity. All the data AI collects needs to be stored securely. Unfortunately, it isn’t always (although this isn’t unique to AI), and data breaches happen. Other problems include unencrypted chatbots, human access to training data for labeling, and AI accidentally revealing personal data.
Exposure. Generative AI can manipulate, create, and reconstruct content to reveal sensitive information. For example, the recent controversy about AI-generated nude images.
Distortion. Generative AI can create realistic images, videos, and audio clips of people. This content is so realistic people have no idea it’s fake. Users can misuse this technology to spread false information.
Increased accessibility. People who develop AI models have shared datasets within the AI community. Open-source sharing like this can increase transparency and make audits easier. But it can also mean sharing large amounts of personal data publicly without consent.
Solution: Organizations need frameworks and standards to make sure their data is accurate, protected from theft or misuse, and compliant with government regulations.
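As one small, illustrative piece of such a framework, the sketch below pseudonymizes direct identifiers before a record is stored or reused for training. The field names and key handling are assumptions for the example; a real pipeline also needs encryption at rest, access controls, and retention policies.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key-store-this-in-a-secrets-manager"  # illustrative only

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash so records can be
    linked internally without storing the raw identifier."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39", "outcome": "approved"}

# Keep only the fields the model needs, and hash the identifier.
stored = {
    "user_id": pseudonymize(record["email"]),
    "age_band": record["age_band"],
    "outcome": record["outcome"],
}
print(stored)
```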
Transparency, interpretability, and explainability
AI is complicated. Too complicated for most people to truly understand. Three keys to make it understandable are transparency, interpretability, and explainability.
These terms are close in meaning, so let’s look at each in more detail.
Transparency. You can look at a transparent AI model and see how it works, how it reaches decisions, and what data it uses. This openness means you have a clear picture of the purpose and inner workings of the system.
Interpretability. When AI is interpretable, humans can understand the output and the results, and why the system reached these decisions. Interpretability helps people trust a system’s outputs. It ensures fairness and makes it easier to identify biases.
Explainability. This is all about giving understandable explanations of the system’s results and decisions, explanations even non-experts can follow. Even if a system is interpretable, most people still won’t understand it without explainability; the algorithms are too complex. Explainable AI generates explanations that use examples and natural language to help understanding.
When these work together, decision-making processes are clear. It’s easier to identify problems and biases within the systems. And users can trust the AI’s output.
Solution: AI systems must be clear and understandable. Developers and users need to understand how a system reaches its decisions. To do this, developers must document the development of the model and validation processes.
Visuals and natural language in the documentation — and the AI outputs — are fantastic ways to make complex algorithms easier to understand too.
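To make that concrete, here’s a hedged sketch of one common interpretability technique: permutation feature importance, run with scikit-learn on a toy model. The feature names and data are made up; in practice you’d pair a summary like this with plain-language, per-decision explanations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy data: three hypothetical features behind an automated decision.
feature_names = ["income", "credit_history_len", "postcode_density"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features so a reviewer can see what is driving the decisions.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The point isn’t the numbers themselves; it’s that a reviewer (or a regulator) can see which inputs carry the most weight and ask whether they should.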
Misuse
Misuse of AI can have life-changing consequences for people. Criminals are using deepfake videos in scams. Political misinformation is having a significant effect on elections around the world. And terrorists could even use drones and autonomous vehicles in attacks.
Solution: To protect users (and non-users) from harm, organizations need to ensure their systems resist manipulation, respond safely to unexpected conditions, and are reliable and consistent.
Responsible AI mitigates these risks
There’s no doubt AI is a powerful and revolutionary technology. To maximize the benefits and minimize the risks, organizations need to develop and deploy responsible AI. Systems need to be effective, safe, transparent, accountable, and fair. Ethics should be a priority for all organizations involved with AI.
Here’s how companies can ensure responsible AI: Establish clear guidelines, conduct regular assessments, provide education, engage stakeholders, and continuously monitor the outputs and impacts of their systems.
To live up to its potential as a benefit to society, AI must be responsible. Otherwise, the potential harm and misuse will outweigh the benefits.