It took the better part of a day, but ChatGPT, the world-leading generative artificial intelligence (AI) tool that lets you type in a prompt and get a human-sounding response, returned to its senses. After spouting nonsensical answers to users’ queries for hours Tuesday into Wednesday, it is once again coming up with something meaningful.
The outburst, reported in a Reddit thread and elsewhere, is another reminder that AI can be unpredictable, and it demonstrates the limits of even the most powerful AI systems. It’s a testament to how much we still have to learn about human language and thought, and how difficult it will be to build machines that can mimic the way humans form them.
While ChatGPT’s rambling responses might be annoying for anyone trying to use it, they’re fascinating to people who delight in breaking the system in exciting and surprising ways. It’s already possible to use it to write full-length essays, generate song lyrics or lines of programming code, and do many other things its creators never intended.
In late 2022, the upstart tech company OpenAI released ChatGPT, a chatbot that lets people use natural language to generate human-sounding responses to questions. It quickly exploded in popularity, with people using it to pen school essays, songs, and lines of software code. People have also used it to practice for job interviews and to help prepare for psychotherapy sessions.
ChatGPT is based on so-called large language models, which start with huge samples of existing text from the web, books, and other sources and train a neural network to produce text that’s “like” that. That training helps the model pick up the patterns of grammar, syntax, and other structures that give language meaning.
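The core idea described above, predicting the next word from the words that came before, can be sketched with a toy bigram model. This is an illustrative simplification, not ChatGPT’s neural architecture, and the function names here are invented:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each word, how often each next word follows it."""
    words = text.split()
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=10, seed=0):
    """Sample a sequence by repeatedly picking a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the"))
```

A real large language model replaces these word-pair counts with a neural network over billions of documents, but the mechanism, sampling the next token from learned probabilities, is the same in spirit.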
It’s also trained to consider a person’s style, mannerisms, and other characteristics. This means it can create conversations that genuinely feel like they’re being held with a real person.
But as ChatGPT’s recent outburst shows, there are plenty of other rules the machine doesn’t understand and isn’t bound by.
OpenAI said a software tweak to the system “introduced a bug with how the model processes language.” Once it identified the cause, the company rolled out a fix and confirmed the issue had been resolved, adding, “We apologize to anyone affected by this issue.” The problem may have stemmed from an incompatibility between ChatGPT’s code and certain libraries that needed to be updated at the same time. The company said it is releasing a new version of the chatbot with the fix, to be rolled out to all users on Monday.
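A bug “with how the model processes language” can be surprisingly small yet catastrophic. Language models pick each word by sampling a token number that is then mapped back to text; if, hypothetically, that mapping were offset by even a few positions, every chosen token would become the wrong word, producing output with the shape of fluent language but nonsensical content. A purely hypothetical sketch with an invented vocabulary (not OpenAI’s actual inference code):

```python
# Hypothetical illustration of a token-selection bug.
# A model emits token IDs; a decoder maps each ID back to a word.
# An off-by-a-few offset in that mapping scrambles every word while
# keeping the output superficially sentence-shaped.

vocab = ["the", "cat", "sat", "on", "mat", "ran", "fast", "away"]

def decode(token_ids, offset=0):
    """Map token IDs to words; a nonzero offset simulates the bug."""
    return " ".join(vocab[(i + offset) % len(vocab)] for i in token_ids)

ids = [0, 1, 2, 3, 0, 4]        # intended: "the cat sat on the mat"
print(decode(ids))               # correct mapping
print(decode(ids, offset=3))     # buggy mapping: fluent-shaped nonsense
```

Running this prints “the cat sat on the mat” for the correct mapping and a garbled sentence for the buggy one, which is roughly the flavor of output users reported during the outage.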