ChatGPT was introduced just seven weeks ago, but it has already garnered a lifetime’s worth of hype. It’s anybody’s guess whether this particular technology marks AI’s breakthrough moment for good or is just a blip before the next AI winter sets in, but one thing is certain: It has kickstarted an important conversation about AI, including what level of transparency we should expect when working with AI and how to tell when it’s lying.
Since it was launched on November 30, OpenAI’s newest language model, which was trained on a very large corpus of human knowledge, has demonstrated an uncanny capability to generate compelling responses to text-based prompts. It not only raps like Snoop Dogg and rhymes like Nick Cave (to the songwriter’s great chagrin), but also solves complex mathematical problems and writes computer code.
Now that ChatGPT can churn out mediocre and (mostly) correct writing, the era of the student essay has been declared officially over. “Nobody is prepared for how AI will transform academia,” Stephen Marche writes in “The College Essay Is Dead,” published last month.
“Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.”
It’s possible that Marche was off a bit in his timing. For starters, schools have already started to respond to the plagiarism threat posed by ChatGPT, with bans in place in public school districts in Seattle, Washington, and New York City. And thanks to the same relentless march of technology that gave us ChatGPT, we’re gaining the ability to detect when generative AI is being used.
Over the weekend, news began to percolate out about a tool that can detect when ChatGPT was used to generate a given bit of text. Dubbed GPTZero, the tool was written by Edward Tian, who is a computer science major at Princeton University in New Jersey.
“I spent New Years building GPTZero — an app that can quickly and efficiently detect whether an essay is ChatGPT or human written,” Tian wrote on Twitter. “[T]he motivation here is increasing AI plagiarism. [T]hink: are high school teachers going to want students using ChatGPT to write their history essays? [L]ikely not.”
The tool works by analyzing two characteristics of text: its “perplexity” and its “burstiness,” according to an article on NPR. Tian determined that ChatGPT tends to generate text with lower perplexity than human-written text. He also found that ChatGPT produces sentences that are more uniform in length, and therefore less “bursty,” than those written by humans.
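GPTZero’s actual implementation isn’t public, but the two signals are straightforward to illustrate. The sketch below is a simplified stand-in: burstiness is computed as the spread of sentence lengths, and perplexity is scored against a toy unigram model (a real detector would use a neural language model such as GPT-2 for this step).

```python
import math
import re
from collections import Counter

def burstiness(text):
    """Standard deviation of sentence lengths in words.
    Higher values suggest the varied rhythm of human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return (sum((n - mean) ** 2 for n in lengths) / len(lengths)) ** 0.5

def unigram_perplexity(text, corpus):
    """Perplexity of `text` under a toy unigram model fit on `corpus`.
    Lower perplexity means the text is more predictable to the model."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    log_prob = 0.0
    words = text.lower().split()
    for w in words:
        # Add-one smoothing so unseen words don't zero out the probability.
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))
```

Text whose sentences all run to the same length scores a burstiness near zero, and text that closely matches the scoring model’s training distribution scores a low perplexity; a detector in this style would flag writing that is low on both counts as likely machine-generated.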
GPTZero isn’t perfect (no AI is), but in demonstrations, it seems to work. On Sunday, Tian announced on his Substack that he’s in talks with school boards and scholarship funds to provide a new version of the tool, called GPTZeroX, to 300,000 schools. “If your organization might be interested, please let us know,” he writes.
Tracking Down Hallucinations
Meanwhile, other developers are building additional tools to help with another problem that has come to light with ChatGPT’s meteoric rise to fame: hallucinations.
“Any large language model that’s given an input or a prompt–it’s sort of not a choice–it is going to hallucinate,” says Peter Relan, a co-founder and chairman with Got It AI, a Silicon Valley firm that develops custom conversational AI solutions for clients.
The Internet is full of examples of ChatGPT going off the rails. The model will give you exquisitely written (and wrong) text about the record for walking across the English Channel on foot, or will write a compelling essay about why mayonnaise is a racist condiment, if properly prompted.
Roughly speaking, the hallucination rate for ChatGPT is 15% to 20%, Relan says. “So 80% of the time, it does well, and 20% of the time, it makes up stuff,” he tells Datanami. “The key here is to find out when it is [hallucinating], and make sure that you have an alternative answer or a response you deliver to the user, versus its hallucination.”
Got It AI last week announced a private preview for a new truth-checking component of Autonomous Articlebot, one of two products at the company. Like ChatGPT, the company’s truth-checker is based on a large language model, one trained to detect when ChatGPT (or another large language model) is telling a fib.
The new truth-checker is 90% accurate at the moment, according to Relan. So if ChatGPT or another large language model is used to generate a response 100 times and 20 of them are wrong, the truth-checker will be able to spot 18 of those fabrications before the answer is sent to the user. That effectively increases ChatGPT’s accuracy rate to 98%, Relan says.
“Now you’re in the range of acceptable. We’re shooting for 95% next,” he says. “If you can detect 95% of those hallucinations, you’re down to one out of 100 responses [being] inaccurate. Now you’re into a real enterprise-class system.”
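Relan’s arithmetic checks out, and it can be written down in a few lines. This sketch assumes the simple model implied by his numbers: errors and detections are independent, and every detected hallucination is replaced by a safe fallback rather than shown to the user.

```python
def effective_accuracy(base_accuracy, detection_rate):
    """Fraction of delivered answers that are correct, assuming every
    detected hallucination is swapped for a fallback response."""
    error_rate = 1.0 - base_accuracy
    uncaught = error_rate * (1.0 - detection_rate)
    return 1.0 - uncaught

# ChatGPT at 80% accuracy with a 90% truth-checker:
# 20 errors per 100 responses, 18 caught, 2 slip through -> ~98% accurate.
print(round(effective_accuracy(0.80, 0.90), 4))

# The 95% detection target leaves one bad answer per 100 -> ~99% accurate.
print(round(effective_accuracy(0.80, 0.95), 4))
```

The lever here is the detection rate, not the base model: each halving of the truth-checker’s miss rate halves the number of hallucinations that reach the user.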
OpenAI, the maker of ChatGPT, has yet to release an API for the large language model that has captured the world’s attention. However, the underlying model used by ChatGPT is known to be GPT-3, which does have an API available. Got It AI’s truth-checker can be used now with the latest release of GPT-3, dubbed text-davinci-003, which was released November 28.
“The closest model we have found in an API is GPT-3 davinci,” Relan says. “That’s what we think is close to what ChatGPT is using behind the scenes.”
The hallucination problem will never fully go away with conversational AI systems, Relan says, but it can be minimized, and OpenAI is making progress on that front. For example, the error rate for GPT-3.5 is close to 30%, so the 20% rate with ChatGPT, which Relan attributes to OpenAI’s adoption of reinforcement learning from human feedback (RLHF), is already a big improvement.
“I do believe that OpenAI…will solve some of the core platform’s tendencies to hallucinate,” Relan says. “But it’s a stochastic model. It’s going to do pattern matching and come up with something, and occasionally it will make up stuff. That’s not our challenge. That’s OpenAI’s challenge: How to reduce its hallucination rate from 20% to 10% to 5% to very little over time.”