Meta is putting its latest AI chatbot on the web for the public to talk to


Meta’s artificial intelligence research lab has created a new state-of-the-art chatbot and is letting the public talk to the system to gather feedback on its capabilities.

The bot is called BlenderBot 3 and can be accessed on the web. (Though, at the moment, it appears that only U.S. residents can do so.) BlenderBot 3 is capable of general small talk, Meta says, but can also answer the kinds of questions you might ask a digital assistant, “from talking about healthy food recipes to finding child-friendly amenities in the city.”

The bot is a prototype built on Meta’s previous work with so-called large language models, or LLMs – powerful but flawed text-generation software of which OpenAI’s GPT-3 is the most widely known example. Like all LLMs, BlenderBot was initially trained on massive text datasets, which it mines for statistical patterns in order to generate language. Such systems have proven to be extremely flexible and have been used for everything from generating code for programmers to helping authors write their next best-selling book. However, these models also have serious flaws: they regurgitate biases in their training data and often invent answers to users’ questions (a big problem if they’re going to be used as digital assistants).

This latter problem is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it can search the internet in order to discuss specific topics. Even more importantly, users can click on its replies to see where it got its information from. In other words, BlenderBot 3 can cite its sources.

By releasing the chatbot to the public, Meta hopes to gather feedback on the various issues facing large language models. Users chatting with BlenderBot will be able to flag any suspicious responses from the system, and Meta says it’s working to “minimize the bot’s use of foul language, defamation, and culturally insensitive comments.” Users must opt in to have their data collected, and if they do, their conversations and feedback will be stored and later published by Meta for use by the general AI research community.

“We’re committed to publicly releasing all the data we’ve collected in our demos in the hope that we can improve conversational AI,” Meta research engineer Kurt Shuster, who helped create BlenderBot 3, told The Verge.

A sample conversation with BlenderBot 3 on the web. Users can give feedback and reactions to specific answers.
Image: Meta

Historically, releasing a prototype AI chatbot to the public has been a risky move for tech companies. In 2016, Microsoft launched a chatbot on Twitter called Tay that learned from its interactions with the public. Predictably, Twitter users quickly coaxed Tay into regurgitating a range of racist, anti-Semitic, and misogynistic remarks. In response, Microsoft pulled the bot offline less than 24 hours later.

Meta says the world of AI has changed a lot since Tay went down, and BlenderBot has various safety rails that should stop Meta from repeating Microsoft’s mistakes.

Crucially, while Tay was designed to learn in real time from user interactions, BlenderBot is a static model, said Mary Williamson, a research engineering manager at Facebook AI Research (FAIR). That means it can remember what users say within a conversation (and will even retain this information via browser cookies if a user exits the program and returns later), but this data is only used to further improve the system later on.

“This is just my personal opinion, but that [Tay] episode was relatively unfortunate, because it created this chatbot winter where every institution was afraid to put out public chatbots for research,” Williamson told The Verge.

Most chatbots in use today are narrow and task-oriented, Williamson said. Consider, for example, customer service bots, which typically just present users with a pre-programmed dialogue tree, narrowing down their query before handing them off to a human agent who can actually get the job done. The real prize is building a system that can hold a conversation as free-ranging and natural as a human’s, and Meta says the only way to achieve this is to let bots have free-ranging and natural conversations.

“Broadly speaking, this lack of tolerance for bots saying unhelpful things is unfortunate,” Williamson said. “And what we’re trying to do is release this very responsibly and push the research forward.”

In addition to putting BlenderBot 3 on the web, Meta is also publishing the underlying code, training dataset, and smaller model variants. Researchers can request access to the largest model, which has 175 billion parameters, via a form here.


