AI robots can develop prejudices, just like us mere mortals

It's not only humans and animals that can hold biases against outsiders. Psychology and computer science researchers from the Massachusetts Institute of Technology and Cardiff University have discovered that artificial intelligence robots can develop prejudices by learning from each other.

AI has a complicated history with racism and sexism, and we've seen it exhibit such prejudice before, as a pair of Microsoft chatbots did over the last couple of years. This study, however, showed that AI is capable of forming prejudices all by itself. The researchers wrote that "groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another."

The academics set up a simulated game in which each AI bot chooses whether to donate to a bot inside its own group or to one in another group, basing the decision on each robot's reputation and on its own donation strategy. The researchers found that the robots became increasingly prejudiced against those from other groups.

Over thousands of simulations, the robots learned new strategies by copying each other, either within their own groups or across the entire population. The study found the robots cribbed strategies that gave them a better payoff in the short term, indicating that high cognitive ability isn't necessarily required to develop prejudices.
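The dynamic described above resembles an iterated "donation game". As a loose illustration only, not the paper's actual model, here is a minimal Python sketch in which agents always donate within their group, donate to outsiders with some probability, and copy strategies from higher-payoff peers. The `Agent`, `play_round` and `imitate` names, and all the parameter values, are invented for this toy example.

```python
import random

random.seed(0)

class Agent:
    def __init__(self, group, out_group_rate):
        self.group = group
        # Probability of donating when the recipient is from another group;
        # in this sketch, in-group donations always happen.
        self.out_group_rate = out_group_rate
        self.payoff = 0.0

def play_round(agents, benefit=2.0, cost=1.0):
    """Each agent considers one random recipient and may donate."""
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        same_group = donor.group == recipient.group
        if same_group or random.random() < donor.out_group_rate:
            donor.payoff -= cost
            recipient.payoff += benefit

def imitate(agents):
    """Each agent copies the strategy of a random higher-payoff agent."""
    snapshot = [(a.payoff, a.out_group_rate) for a in agents]
    for a in agents:
        payoff, rate = random.choice(snapshot)
        if payoff > a.payoff:
            a.out_group_rate = rate

# Two groups of ten agents with random initial out-group generosity.
agents = [Agent(group=i % 2, out_group_rate=random.random()) for i in range(20)]
for _ in range(200):
    for a in agents:
        a.payoff = 0.0
    play_round(agents)
    imitate(agents)

mean_rate = sum(a.out_group_rate for a in agents) / len(agents)
print(f"mean out-group donation rate after imitation: {mean_rate:.2f}")
```

In a toy model like this, stinginess toward out-groups tends to spread simply because refusing to donate avoids a cost, which boosts short-term payoff and makes the strategy more likely to be copied, mirroring the study's point that no high cognitive ability is needed.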

"Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivized in virtual populations, to the detriment of wider connectivity with others," wrote Cardiff University's Professor Roger Whitaker, one of the study's co-authors. "Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse."

There was some hope for those who may ask, "why can't we all just get along?" Under some conditions, including having more distinct sub-groups within a population, prejudice levels were lower.

"With a greater number of subpopulations, alliances of non-prejudicial groups can cooperate without being exploited. This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold. However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group," Whitaker said.

In his testimony to Congress this week, Twitter CEO Jack Dorsey spoke of the problem AI developers have in reducing accidental bias. Until developers and computer scientists figure out how to keep AI neutral, let's hope that, as robots become more autonomous, they don't suddenly decide they dislike the look of our human faces.

Via: TechCrunch

Source: EurekAlert!



via Engadget RSS Feed https://ift.tt/2NUbMd8
