
Facebook seeks to calm AI fears after bots ‘invent own language’

Facebook is seeking to quell fears following (false) reports that it shut down an AI experiment due to the bots “inventing their own language”.

The experiment, conducted in June, pitted two chatbots against each other to negotiate a trade, attempting to swap hats, balls and books, each of which was assigned a certain value.

But the negotiations quickly broke down as the bots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.
Crucially, they were not told to use comprehensible English, which allowed them to create their own “shorthand”, according to the researchers.
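To make the setup concrete, here is a minimal sketch, in Python, of how a negotiation task of this kind can be scored. It is illustrative only, not Facebook’s actual code: the item counts, the value ranges and the function names are assumptions made for the example.

```python
import random

ITEMS = ["book", "hat", "ball"]


def random_scenario():
    """A shared pool of items plus private, per-agent values (illustrative numbers)."""
    pool = {item: random.randint(1, 3) for item in ITEMS}
    values = [{item: random.randint(0, 5) for item in ITEMS} for _ in range(2)]
    return pool, values


def reward(split, values_for_agent):
    """Reward = total private value of the items this agent walks away with."""
    return sum(values_for_agent[item] * count for item, count in split.items())


def score_deal(pool, values, split_for_agent0):
    """If agent 0 takes `split_for_agent0`, agent 1 gets the remainder.

    Returns (reward_agent0, reward_agent1), or (0, 0) if the claimed split
    exceeds the pool -- mirroring the idea that no valid agreement means
    no reward for either side.
    """
    if any(split_for_agent0.get(item, 0) > pool[item] for item in ITEMS):
        return 0, 0
    remainder = {item: pool[item] - split_for_agent0.get(item, 0) for item in ITEMS}
    return reward(split_for_agent0, values[0]), reward(remainder, values[1])


if __name__ == "__main__":
    pool, values = random_scenario()
    # A trivial "negotiation": agent 0 claims everything it values and leaves the rest.
    claim = {item: pool[item] if values[0][item] > 0 else 0 for item in ITEMS}
    print("pool:", pool)
    print("rewards:", score_deal(pool, values, claim))
```

Training agents to maximise a score like this, with no added pressure to keep their messages close to natural English, is how a private “shorthand” such as the one in the exchanges below can emerge.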

The actual negotiations appear very odd, and don’t make much sense:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

However, there appear to be some rules to the speech. The way the chatbots keep stressing their own names appears to be part of their negotiations, not simply a glitch in the way the messages are read out.

Indeed, some of the negotiations conducted entirely in this bizarre language were even concluded successfully.

However, AI researchers have in recent days been speaking out against media reports that dramatise the research Facebook conducted.

In the past week or so, some media outlets have published alarmist reports on the work, claiming the researchers “shut down” the experiment (suggesting they were losing control). That claim is false.

The researchers finished their experiment, and indeed they noticed that the agents even figured out how to pretend to be interested in something they didn’t actually want, “only to later ‘compromise’ by conceding it,” Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh and Dhruv Batra of Facebook’s Artificial Intelligence Research group wrote in the paper.

On Monday evening Batra weighed in on the situation in a Facebook post:

“While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.

“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize reward. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI.” If that were the case, every AI researcher has been “shutting down AI” every time they kill a job on a machine.”

Batra called certain media reports “clickbaity and irresponsible.” What’s more, the negotiating agents were never used in production; it was simply a research experiment.

Other researchers have been critical of the fear-mongering reports on social media in recent days.

