Achieving AI fairness for people with disabilities

Extending inclusion to the world of artificial intelligence

Drawn by lower operational costs and greater efficiency, more and more companies are adopting artificial intelligence (AI) in their businesses. Banks use AI to help detect fraud. Clinics use AI to support patient assessments. Recruiters use AI to identify high-potential candidates.

Concerns arise as AI is increasingly used in decision-making. Can we really trust AI to make fair decisions for everyone? How much influence does AI have on our decisions?

Machine learning and how it works

A term we often hear alongside AI is machine learning. In this process, data is fed to a computer, which recognizes patterns in it. Based on what it has learned, the computer improves or creates algorithms and applies them to future data. In theory, this should help it make more precise and thus better decisions.
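
To make this loop concrete, here is a minimal sketch in Python using scikit-learn; the dataset and model are purely illustrative, not from the article:

```python
# A minimal sketch of the machine-learning loop described above,
# using synthetic example data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Provide data to the computer: 1,000 synthetic examples with 5 features.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# 2. Let it recognize patterns: fit a simple model to part of the data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# 3. Apply what it learned to future (held-out) data.
print("accuracy on unseen data:", model.score(X_test, y_test))
```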

One reason AI has become a popular business solution is that computers can process large amounts of data and provide immediate feedback much faster than humans. This saves time and money. It can also avoid human errors and biases – if the AI system is set up properly.

[Image: glasses in front of a laptop monitor]

AI is known for its capability of immediate analysis. But can it avoid past prejudice and make fair decisions?

Now “properly” is the key word, because computers cannot think like humans. They depend on the data we provide to simulate human intelligence processes such as generalizing and problem-solving. If not handled properly, AI can therefore reinforce existing prejudices in society.

Bias and discrimination in AI

An AI system can be biased in several ways. According to a paper by Shari Trewin of IBM Accessibility Research, bias can be passed on to the system if the data used to train it contains biased human decisions. For example, if recruiters systematically overlook job applications from people with disabilities, an AI system trained on that data will imitate this behavior and neglect such applications in the future.
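
As a hedged illustration of this mechanism, the following sketch trains a model on synthetic hiring decisions that systematically overlook applicants with disabilities; all data, rates and variable names are invented for the example:

```python
# How biased historical decisions leak into a model (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)               # the signal we *want* to hire on
disability = rng.integers(0, 2, size=n)  # 1 = applicant has a disability

# Historical recruiters hired on skill, but systematically overlooked
# 70% of otherwise qualified applicants with disabilities.
hired = ((skill > 0) & ~((disability == 1) & (rng.random(n) < 0.7))).astype(int)

# Train on those biased decisions, with disability visible as a feature.
X = np.column_stack([skill, disability])
model = LogisticRegression().fit(X, hired)

# The model now imitates the prejudice: equally distributed skill,
# yet very different predicted hiring rates per group.
pred = model.predict(X)
for d in (0, 1):
    print(f"disability={d}: predicted hire rate {pred[disability == d].mean():.2f}")
```

Because the biased pattern is present in the training labels themselves, the model reproduces it faithfully on future applicants.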

The lack of representation in datasets is another major challenge to achieving AI fairness. Disabilities and underlying health conditions vary greatly in intensity and impact, and often change over time. This heterogeneity of disability makes it harder to train an AI system to be fair for people with disabilities than, for instance, across gender or race.
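
One modest, practical response is to check representation before training. The sketch below simply counts how many examples each group contributes to a dataset; the group names and the 5% threshold are assumptions for the example, not from the article:

```python
# A small representation check run before training a model.
from collections import Counter

samples = (["no disability"] * 9200 + ["motor impairment"] * 450
           + ["low vision"] * 250 + ["chronic condition"] * 100)

counts = Counter(samples)
total = sum(counts.values())
for group, count in counts.most_common():
    share = count / total
    flag = "  <- under-represented" if share < 0.05 else ""
    print(f"{group:18s} {count:5d} ({share:.1%}){flag}")
```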

[Image: a woman and a wheelchair user in front of a podcast setup]

Building representative datasets to achieve AI fairness for people with disabilities is difficult because of the heterogeneity of disability. (Source: https://mockmate.com/)

Furthermore, bias can occur depending on how algorithms are designed. In 2019, a recruiting-technology firm whose AI system was used by more than 100 employers, including international corporations, raised serious concerns about discrimination. According to a report by The Washington Post, candidates’ facial movements, word choice and speaking voice were analyzed via a computer or phone camera. These analyses fed into the employability score the AI system generated for shortlisting.

A number of AI ethics experts criticized such an algorithm as nothing but pseudoscience: it is extremely difficult to infer emotions and personality traits from facial expressions, let alone in the case of people with disabilities, who experience diverse and ever-changing conditions. Using such an AI system to shortlist job candidates means that people with certain disabilities are rejected by the system outright.

Making AI fairer for people with disabilities

So, after all, AI biases originate from humans. Life may never be perfectly fair, but we should at least mitigate these AI biases so that everyone is treated with a similar level of fairness.

[Image: a scale with a computer and a doctor]

AI fairness for people with disabilities is also important because ever more medical decisions are made with the help of AI algorithms. (Source: https://news.mit.edu)

Inclusiveness is fundamental to achieving fairness in AI. People with disabilities should be involved in the development of AI systems as early as possible. Their first-hand experience can help AI developers identify biases, contributing to more objective and fairer decision-making algorithms.

At the same time, we should use AI wisely. In an interview about AI and health, Kerstin N. Vokinger, Assistant Professor at the Faculty of Law, University of Zurich, said,

“Artificial Intelligence is not smarter than us. But in combination with our skills, it can offer new opportunities.”

We should neither fear AI nor over-rely on it. Instead, we should work on measures to ensure that algorithms are explainable, auditable and transparent.
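
What “auditable” can mean in practice: for simple models, one can inspect which inputs drive the decisions. This sketch continues the synthetic hiring example from above; the feature names are illustrative assumptions:

```python
# One simple form of auditing: inspect a linear model's weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=1000),           # "skill score"
                     rng.integers(0, 2, size=1000)])  # "disability flag"
y = (X[:, 0] > 0).astype(int)                         # unbiased labels here

model = LogisticRegression().fit(X, y)
for name, coef in zip(["skill score", "disability flag"], model.coef_[0]):
    print(f"{name:15s} weight = {coef:+.2f}")
# An auditor would flag any model where a protected attribute (or a
# close proxy for it) carries substantial weight in the decision.
```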

In an interview about AI and bias, Anikó Hannák, Assistant Professor at the Department of Informatics, University of Zurich, emphasized the obligation to apply the same anti-discrimination rules online as in the real world. Regular monitoring of AI usage would help achieve AI fairness. She noted, however,

“Many companies optimize their online platforms for profit, and fairness is left behind.”

AI fairness monitoring costs money: either companies cannot afford regular monitoring, or they lack the motivation or awareness to pursue AI fairness.
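
Yet a basic recurring check need not be expensive. The sketch below computes selection-rate ratios between groups, one common fairness screen; the 0.8 threshold echoes the “four-fifths rule” used in US employment guidance, and the data and group labels are made up:

```python
# A cheap recurring fairness check: compare positive-decision rates
# across groups against a reference group.
def selection_rate_ratio(decisions, groups, reference):
    """Ratio of each group's positive-decision rate to the reference group's."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return {g: rate / rates[reference] for g, rate in rates.items()}

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

for group, ratio in selection_rate_ratio(decisions, groups, "a").items():
    status = "OK" if ratio >= 0.8 else "review needed"
    print(f"group {group}: ratio {ratio:.2f} -> {status}")
```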

The good news is that measures are already in place to tackle these problems. The EU is currently investing heavily in systems that monitor AI fairness, and it is working on regulating and penalizing the misuse of AI. For example, Google was fined for distorting competition in 2019 because the platform favored its own products in search results.

Earning trust through transparency

Another big step toward AI fairness would be improving the transparency of AI development. For example, companies should make it publicly known to what extent, and how, they use AI for decision-making. Transparency helps users of AI technology, especially those with disabilities, avoid being discriminated against or taken advantage of.

Now back to the question: can we really trust AI to make decisions for us? Bradley Hayes, Assistant Professor of Computer Science at the University of Colorado Boulder, answers with the concept of explainable AI in the following TEDx Talk.

“Making our robots and AI systems explainable, we can figure out when they’ve captured the right rules and if we can trust them. When we can bridge the gaps in understanding between how we think and how our AI systems think, we can be sure that we invent the future that we intend to.”

What do you think about AI? Do you see more risks or benefits in it?
