Opinion | When the technology of the future keeps people trapped in the past


March 16, 2023 at 1:53 p.m. EDT

(Video: Glenn Harvey for the Washington Post)

If you apply for life insurance as a chain smoker, you might consider it fair to pay a higher premium, since your lifestyle raises your risk of dying young. If you tend to collect speeding tickets and occasionally run red lights, you might grudgingly accept a higher price for car insurance.

But would you think it fair to be denied life insurance based on your zip code, your online shopping habits or your social media posts? Or to pay a higher interest rate on a student loan because you studied history instead of science? What if you were passed over for a job interview or an apartment because of where you grew up? How would you feel about an insurance company using data from your Fitbit or Apple Watch to figure out how much you should pay for your health plan?

Political leaders in the United States have largely ignored such questions of fairness as insurers, lenders, employers, hospitals and landlords use predictive algorithms to make decisions that profoundly affect people’s lives. Consumers have been forced to accept automated systems that now scour the internet and our personal devices for artifacts of life that were once private – from genealogical records to what we do on weekends – and that can unwittingly and unfairly deprive us of medical care, or keep us from finding jobs and housing.

With Congress so far unable to pass an algorithmic accountability bill, some state and local politicians are stepping forward to fill the void. Draft regulations issued last month by Colorado’s insurance commissioner, along with recently proposed reforms in D.C. and California, point to what policymakers could do to bring about a future where algorithms better serve the common good.

The promise of predictive algorithms is that they make better decisions than humans – freed from our whims and prejudices. But today’s decision-making algorithms too often use the past to predict – and thus create – human fate. They assume that we will follow in the footsteps of others who looked like us and grew up where we grew up or studied where we studied – that we will do the same work and earn the same salaries.

Predictive algorithms can serve you well if you grew up in an affluent neighborhood, enjoyed good nutrition and health care, attended an elite college, and always behaved like a model citizen. But anyone who stumbles through life, learning, growing and changing in the process, can be steered into an unwanted future. Overly simple algorithms reduce us to stereotypes, deny us our individuality and the ability to shape our own future.

For companies trying to pool risk, offer services, or match people with jobs or homes, automated decision-making systems create efficiencies. The use of algorithms creates the impression that their decisions are based on unbiased, neutral reasoning. But all too often, automated systems reinforce existing prejudices and long-standing injustices.

For example, consider the research showing that an algorithm kept Black patients with severe kidney disease at several Massachusetts hospitals off transplant waitlists; it rated their conditions as less serious than those of White patients with the same symptoms. A ProPublica investigation found that offenders in Broward County, Fla., were given risk scores, and sentenced accordingly, based on flawed predictions of their likelihood of committing future violent crimes. And Consumer Reports recently found that poorer and less-educated people are charged more for auto insurance.

Because many companies shield their algorithms and data sources from scrutiny, people cannot see how such decisions are made. Someone who is denied a mortgage or quoted a steep insurance premium has no way of knowing whether the decision reflects anything other than their underlying risk or ability to pay. Willful discrimination based on race, sex and disability is illegal in the United States. In many cases, however, it is legal for companies to discriminate based on socioeconomic status, and algorithms can unintentionally reinforce racial and gender disparities.

The new rules proposed in several places would require companies that rely on automated decision-making tools to monitor them for bias against protected groups – and to adjust them if they produce results that most of us would find unfair.

In February, Colorado advanced the most ambitious of these reforms. The state insurance commissioner released draft rules that would require life insurers to test their predictive models for unfair bias in setting prices and plan eligibility, and to disclose the data they use. The proposal builds on a landmark 2021 state law, passed despite intense lobbying by the insurance industry, that is designed to protect insurance customers of all kinds from unfair discrimination by algorithms and other AI technologies.

In D.C., five city council members last month reintroduced a bill that would require companies using algorithms to audit their technologies for patterns of bias – and make it illegal to use algorithms to discriminate in education, employment, housing, credit, health care and insurance. And just a few weeks ago, California’s privacy regulator began an effort to prevent bias in the use of consumer data and algorithmic tools.

While these policies still lack clear provisions for how they will work in practice, they deserve public support as a first step toward a future of fairer algorithmic decision-making. Trying out these reforms at the state and local levels could also give federal lawmakers the insight to craft better national policies for emerging technologies.

“Algorithms don’t need to project human bias into the future,” said Cathy O’Neil, who runs an algorithm-testing firm that advises insurance regulators in Colorado. “We can actually project the best of human ideals onto future algorithms. And if you want to be optimistic, it will be better because it will be human values but taken to a higher level to uphold our ideals.”

I want to be optimistic – but also vigilant. Instead of fearing a dystopian future where artificial intelligence overwhelms us, we can prevent predictive models from treating us unfairly today. The technology of the future should not constantly haunt us with ghosts from the past.