Maryland researchers are working to curb prejudice and injustice in artificial intelligence

BALTIMORE – Artificial intelligence touches every aspect of our lives, and critical research is being conducted in Maryland to ensure it is equitable.

Dr. Kofi Nyarko, a professor of electrical and computer engineering at Morgan State University, says if AI is trained on biased data, the end result will be biased.

“Everyone wants the same opportunities as everyone else, right?” he says. “But if the AI being used is unfair, some individuals may experience bias and have fewer choices than others.”

This comes into play with something as simple as washing your hands.

“You want to put your hand under the faucet to get water. If you have lighter skin, the sensor is much better at picking up your hand. But if you have darker skin, it’s more of a technical challenge,” he says.

Here is an example of bias affecting college applications:

“A while back there was a scandal where it was discovered that the system sorting applicants automatically kicked out names that didn’t sound European,” says Dr. Nyarko.

Gabriella Waters is with the Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS. Through research, the center aims to uncover bias, improve transparency, and develop standards. AI systems affect whether you get a home, a job, or health insurance.

“This technology impacts our lives every day,” says Waters. “It can make the difference between someone receiving cancer treatment or not. So when it comes down to that level, it’s about someone living or not based on a decision made by a piece of technology.”

Waters says the more AI becomes embedded in our lives, the greater the responsibility of making sure it’s safe for everyone.


“If your autonomous vehicle decides that the person at the crosswalk isn’t actually a person, because it’s been trained on data that says they don’t look like one, what then?” says Waters.

Dr. Nyarko says we need protocols, best practices and, eventually, laws.

“We want to make algorithmic justice something people pay attention to. And companies need to understand that it isn’t just about your bottom line. It’s the right thing to do.”

The White House has introduced a blueprint for an “AI Bill of Rights,” setting out principles that should guide the design, use, and deployment of automated systems to protect the American public.

Linh Bui