As chatbot fever spreads to the workplace, many workers across the U.S. are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found. But that may not be good news for some. Generative AI, technology that uses large language models to hold conversations with users and respond to myriad prompts, has raised worries about potential leaks of intellectual property and strategy. That has led companies ranging from JPMorgan Chase and Northrop Grumman to Apple and Google to curb its use or ban it altogether.
The hottest chatbot, ChatGPT, was developed by San Francisco-based tech startup OpenAI and was reportedly the fastest-growing consumer app in history. Millions of people used it, and it quickly made headlines for humorous responses, including some that could be considered offensive.
But its popularity has sparked concerns that the technology will automate white-collar jobs. Goldman Sachs economists estimate that the equivalent of 18% of full-time work worldwide could be automated by generative AI, especially in advanced economies. Office and administrative workers, as well as professionals in fields such as law, would likely feel the effect more than workers in manual labor or services.
While the technology might be helpful for some, it's not a panacea and should be carefully controlled in the workplace. One concern is that ChatGPT can be used to cheat, letting students gain an unfair edge in the competition for coveted college scholarships and other academic awards, or even on exams. Teachers and education providers worry this could distort assessments of student achievement and the competition for merit-based aid.
Another concern is that ChatGPT can be used for phishing and other online scams. Hackers and scammers can use the AI’s ability to write humanlike content to author phishing emails that appear more convincing to victims.
Moreover, employees should know that ChatGPT can, and often does, produce non-unique output for common prompts. Identical boilerplate can erode morale and leave coworkers wondering whether colleagues are simply recycling the bot's work. Some tools claim to detect whether a message was machine-generated, but they're not foolproof.
There are also fears that the software could expose sensitive information or be vulnerable to data breaches if it isn't properly managed and secured. That could be particularly troubling for businesses that require heightened security for clients, such as banks or defense contractors. These concerns have led companies from JPMorgan to Google to ban their staff from using ChatGPT, though many others are still considering how best to use the technology. Those reservations could continue to stifle adoption.