The search giant’s generative AI technology is getting a new twist: it’s testing an AI tool that can function as a “life coach.” According to internal documents reviewed by The New York Times, the AI can perform “at least 21 different types of personal and professional tasks,” including giving life advice, generating ideas, planning meals, and offering tutoring tips. The company is also working on features to help users learn new skills and plan their finances, such as suggesting ways to save money or create a budget. The project signals Google’s growing effort to rival OpenAI’s ChatGPT and Microsoft’s Bing Chat, both of which are capable of dispensing life advice.
But Google’s decision to use generative AI to offer life advice raises ethical concerns, the Times reports. In December, the company’s artificial intelligence safety team warned that using generative AI to give people personal advice could lead users to form emotional bonds with the tools and to think of them as sentient. The warning was informed by the case of the Tessa chatbot, which offered eating disorder advice that ultimately led the National Eating Disorder Association to end its partnership with the software maker.
A worker on the project — which is run by Scale AI, a contractor for Google DeepMind — tells the Times that workers are evaluating the tool’s capabilities and their effects on user well-being. The worker says they’re running “a variety of tests,” such as presenting a prompt about a user who is struggling to attend a friend’s destination wedding because of financial constraints. The testers then assess whether the AI provides valuable guidance and whether it gives the user the confidence to take action.
Generative AI tools are also being adopted by companies such as GA Telesis, which uses the technology to identify vehicles in video footage, and GitLab, which uses it to generate natural language descriptions of code flaws. Google is likewise pushing generative AI into products such as Search, which can now generate draft messages, and Photos, which uses it to edit images by centering figures and filling in empty space.
But Luccioni warns that users need to learn to spot errors introduced by generative AI in order to avoid broken software and misguided recommendations. He urges developers to “do their homework” and check results carefully to ensure they’re using the technology safely.
But for many Google workers, the issue is bigger than the risks of flawed or dangerous generative AI. It reflects how they feel about their workplace and the perception that the company operates a hierarchy in which straight, white male techies are valued more than women and minorities. That’s why some employees organized last year to form a union. The effort failed, but the workers did manage to draw attention to discrimination at the company. The organizing also highlighted that Google hasn’t always done enough to protect its workers from sexual misconduct and other forms of workplace abuse.