Artificial Intelligence Has An Implicit Bias Diversity Dilemma

Image credit: Piqsels

Artificial intelligence has the potential to transform our daily lives. It has increased efficiency and is arming society with new capabilities. The technology is becoming less a distant vision from sci-fi movies and more an everyday reality. With applications that include self-driving cars, better understanding of human speech, and facial recognition, artificial intelligence could lead to many positive societal improvements. However, its transformative impact also has negative consequences. One major area of concern is diversity and inclusion.

Growing Concern About Implicit Bias

There is growing concern about implicit biases built into artificial intelligence systems, and about the unfair outcomes they can produce. These systems rest on two foundations: the data they are trained on and the people who design them. If the implicit biases of the people building the technology skew the data, those skews carry forward into the outputs. After all, machine learning algorithms are limited by the data sets available to them.
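
To make this concrete, here is a minimal synthetic sketch (all data, group names, and features are invented for illustration): a classifier trained on data dominated by one group tends to perform noticeably worse on the underrepresented group.

```python
# Minimal illustration: a model trained on skewed data inherits the skew.
# All data here is synthetic; group names and feature meanings are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples whose labels follow a group-specific pattern."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", model.score(X_test, y_test))
```

In runs of this sketch, the model typically scores well on the dominant group but near chance on the underrepresented one, not because anyone intended harm, but because it never saw enough of that group's data.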

Few Women, Fewer Black People in Artificial Intelligence Research

The AI Now Institute is a research think tank studying the social implications of artificial intelligence. It found that women make up only 15% of artificial intelligence researchers at Facebook; at Google, the figure is just 10%. The numbers for Black employees in research roles are starker still: fewer than 5% of artificial intelligence researchers at Facebook, Google, and Microsoft are Black.

This has implications for the products under development. A lack of diverse engineers and researchers can lead to products with built-in gender and racial biases, which can then spread at scale. Tech giants such as Facebook, Google, and Microsoft exert significant control over the future course of artificial intelligence development, so these structural inequalities can propagate to billions of people.

Artificial Intelligence Turns President Obama White

In June 2020, a major controversy arose on Twitter that cast a spotlight on the deep-rooted biases of artificial intelligence research. A tool designed to convert pixelated photos of people into high-resolution images produced some startling results. Given a low-resolution photo of Barack Obama, the algorithm produced a high-resolution image of a distinctly white man. Low-resolution photos of other people of color likewise yielded high-resolution images of distinctly white faces.


The incident spoke volumes about the potential dangers of biased artificial intelligence systems. The tool, known as PULSE, was built on StyleGAN, a face-generation model developed by researchers at NVIDIA, a leading technology company. The algorithm processed low-resolution photos by upscaling them, in effect imagining a plausible higher-resolution version. Subsequent testing showed that it generated faces with Caucasian features at a far higher rate than its input photos would warrant. This bias was likely inherited from the data set the underlying model was trained on.
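
Roughly speaking, PULSE-style upscaling searches a face generator's latent space for an image whose downscaled version matches the low-resolution input. The sketch below is schematic, not the authors' implementation: `generator` stands in for a pretrained face model such as StyleGAN, `downscale` for a fixed downsampling operator, and `latent_dim` is a hypothetical attribute.

```python
# Schematic sketch of PULSE-style upscaling: search a face generator's
# latent space for an image whose downscaled version matches the input.
# `generator`, `downscale`, and `latent_dim` are stand-ins, not a real API.
import torch

def upscale(low_res, generator, downscale, steps=500, lr=0.1):
    # Start from a random latent vector and optimize it directly.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        high_res = generator(z)                       # candidate face
        loss = ((downscale(high_res) - low_res) ** 2).mean()
        loss.backward()                               # match low-res evidence
        opt.step()
    # The result is whichever face the generator finds most "plausible"
    # under this constraint -- and plausibility reflects its training data.
    return generator(z).detach()
```

The key point is that many different high-resolution faces downscale to the same pixelated input. The optimization settles on whichever face the generator considers most typical, and typicality is defined by the training data, so a generator trained mostly on white faces will tend to resolve the ambiguity toward white features.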

Algorithms Must Have Diverse Inputs

Algorithms built without input from people with varied perspectives, backgrounds, and life experiences can end up trained on incomplete data. For example, engineers may make short-sighted decisions about which data points to feed the algorithm, and every decision the system makes later is constrained by the data it never saw. The data used to train machine learning algorithms often mirrors the demographics of the developers. Since white men dominate artificial intelligence research and engineering positions, it is not surprising that the outputs default to the characteristics of this demographic.
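
One practical first step is to audit a training set's composition before training on it. The sketch below is illustrative only: the `group` metadata field and the reference shares are hypothetical, and collecting real demographic metadata raises its own consent and privacy questions.

```python
# A minimal audit: compare a training set's group composition
# against a reference distribution before training on it.
# The `group` field and reference shares are hypothetical.
from collections import Counter

def audit_composition(records, reference_shares, key="group"):
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        flag = "UNDERREPRESENTED" if actual < 0.5 * expected else "ok"
        print(f"{group}: {actual:.1%} of data vs {expected:.1%} reference [{flag}]")

# Toy example: group B is 10% of the data but 40% of the reference population.
records = [{"group": "A"}] * 900 + [{"group": "B"}] * 100
audit_composition(records, {"A": 0.6, "B": 0.4})
```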

Some companies have paused their use of facial recognition technology because of concerns about the gender and racial biases built into it. A number of Big Tech companies backed away from selling the technology to police as the potential for misuse became increasingly apparent: IBM, Microsoft, and Amazon all announced that they would not sell facial recognition technology to police until federal legislation provides better technological oversight.

Ways Forward

There are a number of potential ways to address artificial intelligence’s implicit bias problem. Awareness of the issue makes researchers more likely to consider diverse data inputs when training machine learning algorithms. It also means examining the fairness of artificial intelligence technologies not just at an individual level, but at an organizational level.
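
That kind of examination can be made routine. One simple, commonly used check is the demographic parity gap: the difference in positive-prediction rates across groups. The sketch below computes it on toy data; the predictions and group labels are invented, and no single metric captures fairness on its own.

```python
# Compute per-group positive-prediction rates and the demographic
# parity gap -- one simple, commonly used fairness check.
import numpy as np

def demographic_parity_gap(predictions, groups):
    preds, groups = np.array(predictions), np.array(groups)
    rates = {g: preds[groups == g].mean() for g in set(groups)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy data: group A is selected 75% of the time, group B only 25%.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates, gap = demographic_parity_gap(preds, groups)
print(rates, "gap:", round(gap, 2))  # gap: 0.5
```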

Relatedly, diversity in leadership matters. As the profession becomes more diverse, so will its thinking, leading to a greater range of ideas and approaches to designing artificial intelligence technology. However, recruiting and retaining diverse talent takes time, and artificial intelligence is developing at a rapid pace. While a more diverse profession is the right long-term goal, addressing bias in the near term means bringing greater awareness of the issue to the engineers building the technology today.