Artificial intelligence exacerbates racial injustice – Massachusetts Daily Collegian

As industries increasingly develop and deploy AI algorithms, long-standing injustices targeting marginalized individuals are being exacerbated

Daily Collegian Archive (2020)

Bias and injustice are ingrained at seemingly every level of our society. People of color face discrimination and barriers not only in the criminal justice system, but also in the housing market, job market, in the technology industry and in almost every other system they might encounter in their lives.

In contrast to these systemic issues, some believe that artificial intelligence can make impartial and unbiased judgments on many of the issues we encounter in life. There is hope that AI could play a significant role in combating and resolving racial and other injustices in our society.

In reality, the increasing development and use of artificial intelligence exacerbates the injustice and inequality that pervade our society. The technology industry significantly lacks representation from the marginalized groups most affected by this technology, whose perspectives are critical to addressing its impact. Additionally, tech developers are trained to focus primarily on technical problems and often lack the education essential to understanding the societal impact of their work.

AI algorithms tend to be biased against people of color and others who don’t share the characteristics of their developers, who are disproportionately white, male and affluent. Through applications such as facial recognition and housing screening, these algorithms perpetuate the injustices already inflicted on people of color. Training AI algorithms in ways that produce these biased conclusions is clearly unfair.

Facial recognition systems have been consistently shown to be far more accurate at detecting white faces than others. This is not an exaggerated accusation; these systems are designed and trained largely by white developers, on data that reflects them.


Joy Buolamwini, founder of the Algorithmic Justice League and a graduate researcher at MIT, encountered two separate interactive robots that failed to detect her face, an apparent result of bias in their development. While working with these robots at MIT, she had to wear a white mask to be recognized.

What makes this problem worse is that facial recognition systems are being integrated into law enforcement. With pervasive injustice already woven into our criminal justice system, racial and other injustices are amplified by this new technology.

In Detroit, a police department’s use of a facial recognition system led to an innocent Black man, Robert Julian-Borchak Williams, being wrongly charged with a crime. The algorithm had incorrectly matched Williams’ driver’s license photo to an image of the alleged thief. Williams was arrested and forced to spend the night in a detention center.

In some housing sectors, screening algorithms draw on data such as eviction and criminal records to evaluate potential tenants. These algorithms reflect existing inequalities in the housing and criminal justice systems that disproportionately harm marginalized people. A screening algorithm can easily classify someone as ineligible for an apartment, no matter how minor the prior offense or eviction was.

Many of the people who train AI algorithms are low-wage workers. Some of these employees train algorithms to flag hate speech, an arduous task that routinely exposes them to disturbing and explicit content. They must also decide within seconds whether content is problematic, which makes it easy for their own biases to be embedded in the system. Reviewing potentially harmful content is complex and subjective work that demands far more time and thought than these conditions allow.


Federal agencies are responsible for regulating the industries that use AI, but they have not taken the steps necessary to hold AI systems accountable for their impact on people. Federal agencies and government departments need to prioritize addressing how AI algorithms exacerbate racial inequality.

Juliette Perez can be reached at [email protected].