Google’s Latest App Allows Users to Test Experimental AI Systems such as LaMDA

SIA Team
September 1, 2022

Google recently released AI Test Kitchen, an Android app that lets users try out experimental, AI-powered systems before they reach production. AI Test Kitchen is gradually rolling out to small groups of users in the United States.

Unveiled at Google’s I/O developer conference earlier this year, AI Test Kitchen will offer rotating demos centered on cutting-edge AI technology from within Google. The company emphasizes that these demos aren’t finished products; rather, they are meant to give users a taste of the internet giant’s innovations and give Google a chance to study how people use them.

“As AI technologies continue to advance, they have the potential to unlock new experiences that support more natural human-computer interactions,” Tris Warkentin, Google Product Manager, and Josh Woodward, Director of Product Management, wrote in a blog post.

According to them, gathering external feedback is the best way to improve LaMDA. They said Google will use this feedback, which is not linked to users’ Google accounts and comes from users rating each LaMDA response as nice, offensive, off-topic, or inaccurate, to develop and improve its upcoming products.
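
To make the feedback mechanism concrete, here is a minimal sketch of how a per-response rating could be recorded, using the four labels mentioned above. The field names and structure are assumptions for illustration only; Google has not published how this data is actually stored, and no account identifier is included, mirroring the article’s note that feedback is not tied to users’ Google accounts.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The four feedback labels described in the article.
FEEDBACK_LABELS = {"nice", "offensive", "off-topic", "inaccurate"}

@dataclass
class ResponseFeedback:
    """Hypothetical record of a user's rating of one model response."""
    response_id: str  # anonymous identifier for the model reply, not a user ID
    label: str        # must be one of FEEDBACK_LABELS
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.label not in FEEDBACK_LABELS:
            raise ValueError(f"unknown feedback label: {self.label}")
```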

The first set of demos in AI Test Kitchen showcases LaMDA (Language Model for Dialogue Applications), Google’s language model that draws on the web to produce human-like responses to queries. Users can name a place and have LaMDA suggest paths to explore, or describe a goal and ask LaMDA to break it down into a list of smaller subtasks.
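
The goal-breakdown demo amounts to wrapping the user’s goal in a prompt and splitting the model’s reply into steps. The sketch below illustrates that flow under stated assumptions: `generate_reply` is a hypothetical stand-in for whatever dialogue model backs the demo, not a real Google API.

```python
# A minimal sketch of a "break a goal into subtasks" interaction,
# in the spirit of the demo described above. Nothing here is Google's code.

def build_prompt(goal: str) -> str:
    """Wrap a user goal in a prompt that asks the model for subtasks."""
    return (
        "Break the following goal into a short list of smaller subtasks, "
        "one per line:\n"
        f"Goal: {goal}\nSubtasks:"
    )

def list_subtasks(goal: str, generate_reply) -> list[str]:
    """Ask the dialogue model for subtasks and split its reply into lines."""
    reply = generate_reply(build_prompt(goal))
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]

if __name__ == "__main__":
    # A canned stand-in for the model, just to make the sketch runnable.
    fake_model = lambda prompt: "- Pick a date\n- Invite friends\n- Plan the menu"
    print(list_subtasks("host a dinner party", fake_model))
```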

In an effort to reduce the risks associated with systems like LaMDA, such as bias and toxic outputs, Google says it has added “several levels” of safety to AI Test Kitchen. Even today’s most advanced chatbots can quickly go off the rails, delving into conspiracy theories and offensive content when prompted with certain text, as Meta’s BlenderBot 3 recently demonstrated.

According to Google, the systems in AI Test Kitchen will attempt to automatically detect and filter out objectionable words and phrases that may be sexually explicit, hateful or offensive, violent or illegal, or that divulge personal information. However, the company warns that offensive text may occasionally still slip through.
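
One simple way to filter model output, sketched below purely for illustration, is a blocklist check applied to each reply before it is shown. Google has not published how AI Test Kitchen’s filtering actually works, so the blocklist, function names, and fallback message here are all assumptions; this also illustrates why some text can still slip through, since a word-level blocklist misses phrasing it has never seen.

```python
import re

# Placeholder terms only; not a real blocklist.
BLOCKLIST = {"badword1", "badword2"}

def contains_blocked_term(text: str) -> bool:
    """Return True if any blocklisted word appears as a whole word."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKLIST for word in words)

def filter_reply(reply: str, fallback: str = "Sorry, I can't respond to that.") -> str:
    """Suppress replies that trip the blocklist; otherwise pass them through."""
    return fallback if contains_blocked_term(reply) else reply
```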