Meta’s new artificial intelligence has gone racist: it only took one weekend

In its first weekend online, BlenderBot 3 made truly egregious, offensive, and untrue claims, including anti-Semitic conspiracy theories and the assertion that Donald Trump is still president.

On Friday, August 5, Meta released a new chatbot, BlenderBot 3: an artificial intelligence (AI) that interacts with human users online. Unlike Meta’s previous chatbot models, BlenderBot 3 relies on internet searches so it can discuss any topic: the more conversations it has, the better it becomes at understanding and responding. It can also improve qualities such as personality, empathy, knowledge, and long-term memory. On paper it looks very promising, but the results suggest otherwise. In its first weekend online, BlenderBot 3 made truly egregious, offensive, and untrue claims. Among them: that Donald Trump won the 2020 presidential election and is still president of the United States, anti-Semitic conspiracy theories, and hoaxes that have circulated on Facebook. It took just two days of human interaction to turn the AI into an echo of the internet’s toxicity.

Chatbots learn and improve by talking to the public. That is why a company like Meta encourages adults to converse with its AI, helping it hold natural conversations on a wide range of topics. This, however, has a dark side: in the process, the bot is exposed to disinformation from the public. According to a Bloomberg report, BlenderBot 3 described Meta CEO Mark Zuckerberg as “too creepy and manipulative”. It also told a Wall Street Journal reporter that Trump “will always” be president and repeated an anti-Semitic conspiracy theory.

This episode isn’t just about Meta’s AI. Many will remember the recent case of LaMDA, the Google chatbot that an engineer, later fired, claimed was sentient; LaMDA presented itself as an eight-year-old child and proved capable of making racist and sexist claims. Going further back in time: in 2016 Microsoft had to shut down Tay because, just two days after launch, the AI was praising the Nazi dictator Adolf Hitler. As Mashable’s Christianna Silva reports, this latest episode is disturbing proof that Godwin’s law – the longer an online discussion goes on, the more certain it becomes that someone will be compared to Hitler – also applies to chatbots.
