Nvidia Open Sources Universal ‘Guardrails’ to Keep Those Dumb AIs in Line

The growing list of companies incorporating AI into their apps and platforms has had to create and continuously tweak workarounds for AI’s propensity to lie, cheat, steal, borrow, or barter. Now, Nvidia is looking to give more developers an easier way to tell the AI to shut its trap.

On Tuesday, Nvidia shared its so-called “NeMo Guardrails,” which the company described as a kind of one-size-fits-all censorship bot for apps powered by large language models. The software is open source and is supposed to slot on top of oft-used modern toolkits like LangChain. According to the company’s technical blog, NeMo uses a purpose-built modeling language called Colang as a kind of interface for defining what restrictions each app wants to place on the AI’s output.
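To give a sense of what that interface looks like, here is a minimal sketch of a Colang rail in the style of Nvidia’s published examples. The flow and message names (“ask about politics,” “refuse politics”) are illustrative, not taken from Nvidia’s actual templates: a developer writes example user phrasings, a canned bot response, and a flow that connects the two.

```colang
# Example user messages the guardrail should match (illustrative)
define user ask about politics
  "what do you think about the government?"
  "which party should I vote for?"

# The canned response the bot gives instead of answering
define bot refuse politics
  "I'd rather not discuss politics. Is there something else I can help with?"

# The flow: when a user message matches, trigger the refusal
define flow politics rail
  user ask about politics
  bot refuse politics
```

When a user prompt is classified as matching one of the example phrasings, the flow intercepts it and the canned refusal is returned rather than whatever the underlying language model would have generated.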

Those using NeMo can help chatbots stay on topic and keep them from spewing misinformation, offering toxic or outright racist responses, or performing tasks like writing malicious code. Nvidia said the system is already employed by workflow automation company Zapier.

Nvidia VP of Applied Research Jonathan Cohen told TechCrunch that while the company has been working on the Guardrails system for years, it realised about a year ago that the system was a good fit for OpenAI’s GPT models. The NeMo page says it works on top of older language models like OpenAI’s GPT-3 and Google’s T5. Nvidia says it also works on top of some AI image generation models like Stable Diffusion 1.5 and Imagen. An Nvidia spokesperson confirmed to Gizmodo that NeMo is supposed to work with “all major LLMs supported by LangChain, including OpenAI’s GPT-4.”

Still, it remains unclear just how much good an open source guardrail might accomplish. While we may not get a “GPT-5” anytime soon, OpenAI has already tried to mass-market its GPT-4 model through API access. Stability AI, the maker of Stable Diffusion, is also angling toward businesses with its “XL” model. Both companies have tried to reassure customers that there are already blocks on the bad content found in the depths of the AIs’ training data, though with GPT-4 especially, we’re forced to take OpenAI’s word for it.

And even if it’s implemented in software that best supports it, like LangChain, it’s not like NeMo will catch everything. Companies that have already implemented AI systems have found that out the hard way. Microsoft’s Bing AI started its journey earlier this year, and users immediately found ways to abuse it into saying “Heil Hitler” and making other racist statements. Every update that gave the AI a little more wiggle room showed new ways it could be exploited.

And even if the AI has explicit blocks for certain content, that doesn’t mean it’s always perfect. Last week, Snapchat took its “My AI” ChatGPT-based chatbot out of beta and forced it upon all its users. One user proved they could manipulate the AI to say the n-word, despite other users’ attempts with the same prompt being foiled by existing blocks on the AI.

This is why most implementations of AI have been released in a kind of “beta” format. Google has called the release of its Bard AI a “test” while constantly trying to talk up “responsible” AI development. Microsoft pushed out its Bing AI, based on OpenAI’s ChatGPT, in a beta format. Modern AI chatbots are the worst kind of liar. They fib without even knowing what they say is untrue. They can post harmful, dangerous, and often absurd content without comprehending any of what they said.

AI chatbots are worse than any child screaming obscenities in a Walmart, because the child can eventually learn. If called out, the AI will pretend to apologise, but without modifying its training data or processes, an AI will never change. The best thing most AI developers can do to hamper AI’s worst impulses is cage it like a lion at the local zoo. You need tall walls to keep AI at bay, and even then, don’t stick your hand through the bars.

And you can’t forget that this is all big business for Nvidia. These guardrails are meant to promote the company’s existing AI software suite for businesses. Nvidia is already one of the biggest players in the AI space, at least in terms of hardware. Its A100 and newer H100 AI training chips make up more than 90% of the global market for that kind of GPU. Microsoft has reportedly been trying to create its own AI training chip and get out from under the yoke of Nvidia’s dominance.
