Concerns over biased algorithms grow as computers make more decisions

February 23, 2021

Technology is increasingly taking the place of humans in making decisions. (Getty Images)

When the US started distributing COVID-19 vaccines late last year, an essential question emerged: Who should get priority access to the shots? Many medical facilities and health officials decided to first vaccinate workers, including nurses and janitors, who came into close contact with infected people. Stanford Medicine, part of one of the country’s top universities, instead built an algorithm to determine the order. 

The only problem with letting a computer decide who should get the vaccine first is that its “very complex algorithm” — which turned out not to be very complicated at all — was built on faulty assumptions and data. Namely, the algorithm prioritized medical workers over a certain age without accounting for the fact that many older doctors weren’t regularly seeing patients. Only seven of the 5,000 doses in Stanford Medicine’s initial batch of COVID-19 vaccines went to front-line resident physicians. Most went to senior faculty and doctors who work from home or have little contact with COVID-19-infected patients. Stanford quickly scrapped its algorithm.

“Our algorithm that the ethicists and infectious disease experts worked on for weeks to use age, high-risk work environments [and] prevalence of positivity within job classes … clearly didn’t work right,” Tim Morrison, a director of Stanford’s ambulatory care team, said in a video posted on Twitter in mid-December. 
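
To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of a rule-based priority score built from the factors Morrison lists. Every name, weight and number below is invented for illustration; this is not Stanford’s actual algorithm. The point is what the score never asks: whether a worker currently sees patients.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    age: int
    job_class_positivity: float  # share of this job class testing positive (made up)
    high_risk_setting: bool      # assigned to a high-risk work environment
    sees_patients: bool          # direct patient contact -- never used below

def priority_score(w: Worker) -> float:
    """Toy score loosely modeled on age, work environment and job-class positivity."""
    score = 0.0
    if w.age >= 65:
        score += 3.0                        # heavy bonus for older workers
    score += w.job_class_positivity * 10.0  # reward high-positivity job classes
    if w.high_risk_setting:
        score += 1.0
    # Note what is missing: no term for whether the worker actually sees patients.
    return score

workers = [
    Worker("senior faculty, working from home", 67, 0.02, False, False),
    Worker("front-line resident, COVID ward", 29, 0.05, True, True),
]
for w in sorted(workers, key=priority_score, reverse=True):
    print(f"{priority_score(w):.2f}  {w.name}")
```

In this toy version the work-from-home faculty member outranks the front-line resident, mirroring the outcome Stanford saw, because exposure is inferred from age and job class rather than checked directly.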



Stanford’s vaccine debacle is only one example of the many ways algorithms can be biased, a problem that’s becoming more visible as computer programs take the place of human decision makers. Algorithms hold the promise of making decisions based on data without the influence of emotions: Rulings could be made more quickly, fairly and accurately. In practice, however, algorithms aren’t always based on good data, a shortcoming that’s magnified when they’re making life-and-death decisions such as distribution of a vital vaccine. 

The effects are even broader, according to a report released Tuesday by the Greenlining Institute, an Oakland, California-based nonprofit working for racial and economic justice, because computers determine whether someone gets a home loan, who gets hired and how long a prisoner is locked up. Often, algorithms retain the same racial, gender and income-level biases as human decision makers, said Greenlining CEO Debra Gore-Mann. 

“You’re seeing these tools being used for criminal justice assessments, housing assessments, financial credit, education, job searches,” Gore-Mann said in an interview. “It’s now become so pervasive that most of us probably don’t even know that some sort of automation and assessment of data is being done.” 

The Greenlining report examines how poorly designed algorithms threaten to amplify systemic racism, gender discrimination and prejudices against people with lower incomes. Because the technology is created and trained by people, the algorithms — intentionally or not — can reproduce patterns of discrimination and bias, often without people being aware it’s happening. Facial recognition is one area of technology that’s proved to be racially biased. Fitness bands have struggled to be accurate in measuring the heart rates of people of color. 

“The same technology that’s being used to hyper-target global advertising is also being used to charge people different prices for products that are really key to economic well-being, like mortgage products, insurance, as well as not-so-important things like shoes,” said Vinhcent Le, technology equity legal counsel at Greenlining. 

In another example, Optum Health designed an algorithm to determine which patients would get better resources and medical attention. In theory, the algorithm would ensure that the sickest people received the best care. In practice, the technology “overwhelmingly chose to provide white patients with higher quality care while denying that treatment to equally sick Black patients.”

That’s because the algorithm was based on data about who spent more money on health care, Greenlining said in its report. The assumption was that sicker people spent more money, but the technology didn’t take into account that people with less money sometimes had to choose between paying rent or paying medical bills, Gore-Mann said. 

“That isn’t passing the common sense test,” she said. Instead, Optum should have made race a factor in its decision process and had more diverse groups examine the data, she said. 

“The bias arose because doctors thought the algorithm was predicting which patients were the least healthy, when in fact the algorithm was actually predicting which patients had the highest expected health care costs and using that as a proxy for health,” Greenlining said in its report. “While costs may be a good predictor of health needs for white patients, it is much less effective as a predictor of health for Black ones.”
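
The proxy problem the report describes can be boiled down to a few lines. In the hypothetical Python sketch below, built on entirely synthetic numbers rather than Optum’s data, patients are ranked by historical spending as a stand-in for need, and an equally sick patient who spends less falls down the list.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    id: str
    chronic_conditions: int  # rough stand-in for actual health need
    annual_spend: float      # historical cost -- the proxy being predicted

# Synthetic records for illustration only.
patients = [
    Patient("A (white)", chronic_conditions=4, annual_spend=12_000),
    Patient("B (Black)", chronic_conditions=4, annual_spend=6_500),  # equally sick, lower spend
    Patient("C (white)", chronic_conditions=2, annual_spend=8_000),
]

# The flawed ranking: highest expected cost gets extra care first.
by_cost = sorted(patients, key=lambda p: p.annual_spend, reverse=True)
# A need-based ranking looks at health status directly.
by_need = sorted(patients, key=lambda p: p.chronic_conditions, reverse=True)

print("ranked by cost proxy:", [p.id for p in by_cost])
print("ranked by need:      ", [p.id for p in by_need])
```

Patient B, as sick as patient A, drops below a healthier patient in the cost-based ranking, which is exactly the gap between proxy and target that the report flags.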

No easy fix

In its report, Greenlining presents three ways for governments and companies to ensure the technology does better. Greenlining recommends that organizations practice algorithm transparency and accountability; work to develop race-aware algorithms in instances where they make sense; and specifically seek to include disadvantaged populations in the algorithm assumptions. 

Ensuring that happens will fall to lawmakers. 

“The whole point [of the report] is build the political will to start regulating AI,” Le said. 

In California, the state legislature is considering Assembly Bill 13, also known as the Automated Decision Systems Accountability Act of 2021. Introduced Dec. 7 and sponsored by Greenlining, it would require businesses that use “an automated decision system” to test for bias and the impacts it would have on marginalized groups. If there’s an impact, the organizations have to explain why the discriminatory treatment isn’t illegal. “You can treat people differently, but it’s illegal when it’s based on protected characteristics like race, gender and age,” Le said. 
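
AB 13 doesn’t spell out how that testing should be done. One common starting point for this kind of check is a disparate-impact ratio, which compares favorable-outcome rates across groups. The Python sketch below uses made-up decision records and the “four-fifths” threshold borrowed from US employment guidelines; it is an illustration of the general idea, not anything the bill prescribes.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, approved) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in records:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(records, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = selection_rates(records)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Synthetic loan decisions: 80% approval for group_a, 50% for group_b.
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 50 + [("group_b", False)] * 50)

for group, ratio in disparate_impact(decisions, reference_group="group_a").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 for group_b is the kind of result that, under a law like AB 13, would require the business to explain why the difference in treatment isn’t unlawful discrimination.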

In April 2019, Sens. Cory Booker of New Jersey and Ron Wyden of Oregon and Rep. Yvette D. Clarke of New York, all Democrats, introduced the Algorithmic Accountability Act, which would have required companies to study and fix flawed computer algorithms that resulted in inaccurate, unfair, biased or discriminatory decisions impacting Americans. A month later, New Jersey introduced a similar bill, the New Jersey Algorithmic Accountability Act. Neither bill made it out of committee. 

If California’s AB 13 passes, it would be the first such law in the US, Le said, but it may fail because it’s too broad as currently written. Greenlining instead hopes to narrow the bill’s mandate to focus first on government-created algorithms. The hope is that the bill will set an example for a national effort. 

Most of the issues with algorithms aren’t because people are biased on purpose, Le said. “They are just taking shortcuts in developing these programs.” In the case of the Stanford vaccine program, the algorithm developers “didn’t think through the consequences,” he said.  

“No one’s really quite sure [about] all the things that need to change,” Le added. “But what [we] do know is that the current system is not well equipped to handle AI.”


