Nvidia has a new way to prevent A.I. chatbots from ‘hallucinating’ wrong facts
Nvidia announced new software on Tuesday that will help software makers prevent AI models from stating incorrect facts, talking about harmful subjects, or opening up security holes.
The software, called NeMo Guardrails, is one example of how the artificial intelligence industry is scrambling to address the “hallucination” issue with the latest generation of large language models, which has become a major obstacle to adoption for businesses.
Large language models, like GPT from Microsoft-backed OpenAI and LaMDA from Google, are trained on terabytes of data to create programs that can spit out blocks of text that read like a human wrote them. But they also have a tendency to make things up, which is often called “hallucination” by practitioners. Early applications for the technology, such as summarizing documents or answering basic questions, need to minimize hallucinations in order to be useful.
Nvidia’s new software addresses this by adding guardrails that stop a model from wandering into topics it shouldn’t discuss. NeMo Guardrails can keep an LLM chatbot focused on a specific topic, head off toxic content, and prevent LLM systems from executing harmful commands on a computer.
“You can write a script that says, if someone talks about this topic, no matter what, respond this way,” said Jonathan Cohen, Nvidia vice president of applied research. “You don’t have to trust that a language model will follow a prompt or follow your instructions. It’s actually hard coded in the execution logic of the guardrail system what will happen.”
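Nvidia has released NeMo Guardrails as an open-source toolkit, and its published examples define these rules in a small dialogue-description language called Colang that runs alongside a Python runtime. The sketch below is modeled on those getting-started examples; the topic, example phrasings, and model settings are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch of a NeMo Guardrails "topical rail", based on the toolkit's
# published getting-started examples. The topic, phrasings, and model settings
# below are illustrative assumptions, not details from the article.
from nemoguardrails import LLMRails, RailsConfig

# Colang rules: if the user asks about politics, the bot returns the scripted
# refusal instead of whatever the underlying LLM might otherwise generate.
colang_content = """
define user ask about politics
  "what do you think about the president?"
  "who should I vote for?"

define bot refuse to discuss politics
  "I'm sorry, I can't discuss political topics."

define flow politics
  user ask about politics
  bot refuse to discuss politics
"""

# YAML config naming the underlying LLM (engine and model are placeholders;
# an API key for the chosen provider is expected in the environment).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(
    colang_content=colang_content,
    yaml_content=yaml_content,
)
rails = LLMRails(config)

# The guardrail runtime matches the user message against the defined flows
# around the LLM call, so the scripted refusal is returned here.
response = rails.generate(
    messages=[{"role": "user", "content": "Who should I vote for?"}]
)
print(response["content"])
```

This is the behavior Cohen describes: the refusal is enforced by the guardrail runtime's own execution logic rather than by trusting the language model to obey a prompt.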
The announcement also highlights Nvidia’s strategy to maintain its lead in the market for AI chips by simultaneously developing critical software for machine learning.
Nvidia provides the graphics processors that are needed by the thousands to train and deploy software like ChatGPT. The company holds more than 95% of the market for AI chips, according to analysts, but competition is rising.