Artificial intelligence (AI) has become popular among tech companies. The rise of chatbots, writing tools, and other automated systems has prompted many businesses to start using AI in place of human workers.
One company, in particular, is bucking this trend: Facebook shut down its AI system while it was still in development. Although the company had already developed the bots needed for the system, Facebook’s AI teams grew increasingly concerned about how those bots were communicating.
Why Did Facebook Shut Down Its AI Research?
- Researchers were attempting to create sophisticated chatbots that could communicate like humans.
- The chatbots frustrated researchers by speaking in unusual ways and sometimes in code.
- Facebook also terminated Galactica, a program aimed at improving science communication, after it produced inaccurate information.
The Goal of AI Research
Facebook’s original plan was to create unique chatbots called Dialogue Agents. Researchers in Facebook’s AI Research lab, or FAIR (later renamed Meta AI), built these chatbots and intended them to master how human conversation works, focusing primarily on negotiations and interactions. Facebook’s goal was to create chatbots that could handle intricate human speech so well that human users couldn’t tell they were chatting with an AI bot rather than a human being, something akin to passing the Turing Test.
Facebook quickly learned that its chatbots weren’t making sense when interacting with each other. Although programmed to speak plain English, the bots soon adopted a form of English that the project’s researchers and programmers couldn’t understand. One bot repeated the exact same words many times over, while another spoke in phrases that made no syntactic sense. In a fascinating twist, the bots understood one another even while confusing the Facebook team: in one example, the bots completed a successful trade even though the exchange was unintelligible to FAIR’s researchers and programmers.
How AI Adapts
Researchers realized their bots had begun adapting language the way humans sometimes do, using shorthand terms and slang. In some cases, they also picked up the human habit of speaking in code. These code words, understandable only to the bots, left Facebook’s programmers and researchers stymied. In response, the team tried to outsmart the bots by changing their programming and forcing them to chat using only words that appear in an English language dictionary.
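The dictionary restriction described above can be pictured as a simple filter over a model’s candidate next words. The sketch below is a hypothetical toy, not FAIR’s actual implementation: the word list, scores, and function names are all invented for illustration.

```python
# Toy sketch (not Facebook's actual code): constraining a text generator
# so it may only emit words found in a fixed English dictionary.
# The tiny dictionary and candidate scores here are made up for illustration.
ENGLISH_DICTIONARY = {"i", "want", "two", "balls", "you", "get", "one", "hat", "deal"}

def constrain_to_dictionary(candidates):
    """Given (word, score) candidates from a model, keep only dictionary
    words and return the highest-scoring legal one, or None if none remain."""
    allowed = [(word, score) for word, score in candidates
               if word.lower() in ENGLISH_DICTIONARY]
    if not allowed:
        return None  # no legal word: generation must back off or stop
    return max(allowed, key=lambda pair: pair[1])

# Example: the raw model prefers a garbled repeated token, but the filter
# forces it back to a plain English word.
candidates = [("balls", 0.4), ("ballsballs", 0.9), ("two", 0.3)]
print(constrain_to_dictionary(candidates))  # ('balls', 0.4)
```

In a real system this filtering would happen inside the decoder (for example, by masking out-of-vocabulary tokens before sampling), but the principle is the same: the model can only choose among words a human could look up.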
But even with that limitation, the bots still produced speech that programmers struggled to follow. Worse, in some cases the bots became judgmental or inappropriate when discussing certain topics.
Problems With AI Research
This wasn’t Facebook’s only troubled attempt at building an AI system. Facebook also created an AI program called “Galactica,” whose goal was to use machine learning to understand and organize science for its users. Facebook fed the model 48 million scientific papers, hoping that with access to all of that information, Galactica could surface new connections and help advance scientific discoveries.
But Galactica struggled to produce accurate information. Its responses sometimes contradicted each other or were simply wrong, and some sounded plausible while being outlandishly false. Worse, the system struggled to understand or compute math at the grade-school level.
Researchers shut the system down after just two days.
Facebook’s training regimen for Galactica highlights perhaps the greatest challenge for AI technology: it is programmed by humans using human-generated information. Because all humans are biased, those biases get passed along to our AI.
This problem might explain why some AI technologies produce wrong ideas, interpret sarcasm as truth, or mimic the human tendency to become hostile to others.
The Future of AI
Although AI can be useful for producing new content, it is currently impossible to control every aspect of its behavior. As of March 2023, the technology is still young. For the moment, many bots are difficult to restrain: their output can become incomprehensible or inaccurate.
Facebook’s issues with AI development are evidence that the technology requires further refinement. For now, the only way for AI to become more capable is for its creators to do deeper research. Programming must be updated and refined so that these bots can mimic and master the languages humans use to communicate with one another.
Perhaps in the future, AI will be able to improve without the full-time intervention of its human creators.