Nvidia started out as a graphics card company — it basically invented the performance GPU. But Nvidia does a lot more than desktop graphics these days. VR, AR, mixed reality, autonomous vehicles, deep learning neural networks: all of these use technology that started out running the first 3D-accelerated PC games 20 years ago.
Jen-Hsun Huang, founder and CEO of Nvidia, is one of the driving forces behind the $56 billion company. He’s been the face of Nvidia as it pioneered new parts of the business — artificial intelligence, deep learning, self-driving cars — that rely on Nvidia’s particular expertise in the parallel processing chips that started out powering gaming graphics.
Bringing PC Gaming To Any PC
Video gaming is the single largest entertainment industry in the world, and competitive gaming is the world’s largest sporting event, says Huang. Nvidia’s gaming business has doubled its revenue in the last five years; the company says 200 million players use GeForce graphics cards, and there are 100 million MOBA gamers, 325 million esports spectators and 600 million Twitch viewers around the planet.
Nvidia is opening up its GeForce Experience streaming and social platform directly to Facebook Live, letting gamers broadcast live to their entire network of friends — including those who might not be traditional gamers. To commemorate the occasion, Nvidia showed off some new footage from Mass Effect Andromeda.
A billion PC users around the world aren’t gaming-ready, says Nvidia — their machines, half of the estimated 2 billion PCs on the planet, run low-power integrated graphics or outdated cards, with no way to upgrade. Nvidia’s cloud aspirations — putting a gaming supercomputer on the internet for anyone on a low-powered machine to use — are its answer: the company is launching its GeForce Now cloud gaming platform for PC users.
GeForce Now previously worked on Nvidia’s Android devices like the Shield portable and Shield Tablet, but it’s now also coming to PC and Mac — with compatibility for any game, regardless of the platform it’s played on. Third-party, non-Nvidia storefronts — EA’s Origin, Valve’s Steam, Ubisoft’s uPlay — work too, and save games are stored between sessions. Everything works.
GeForce Now for PC will be out in March, and will cost $US25 for 20 hours of play — it’s intended almost as a taster for anyone who wants to see what PC gaming is like. You’ll be able to jump to different grades of graphics and performance, too, using more power from the cloud at a commensurately higher cost — essentially, fewer hours of gameplay for that $US25 fee. No word yet on whether it’ll come to Australia, but stay tuned.
Nvidia Shield And Spot: AI For The Home
Nvidia has a new $US199 Shield set-top box that supports 4K HDR video across the board, whether it’s Netflix or Amazon Video or YouTube. It’s intended as a central home hub, but unlike Amazon’s Echo or Google’s Home, it uses your TV as its central point of contact, with AI and voice control to streamline your life.
Nvidia and Google have collaborated on Google Assistant for TV — a world first — using natural language processing and the same voice-recognition tech to power Android TV through Shield. Beyond that, though, Nvidia has a new standalone device that takes that voice control beyond the TV and around the house.
Called the Nvidia Spot, it plugs directly into a wall socket, and it’s basically a super-advanced AI-powered microphone. Place a couple around a room, and they’ll work out where you are by measuring the different arrival times of the sound of your voice — and respond in kind. Your commands are interpreted by Shield, with responses spoken back through Shield or a Spot.
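The arrival-time trick Spot relies on is a well-known technique: time-difference-of-arrival localisation. Here’s a minimal sketch of the idea for just two microphones and a far-field source — the function name, mic spacing and geometry are illustrative assumptions, not Nvidia’s actual implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate a sound source's angle (degrees) from the arrival-time
    difference between two microphones. 0 degrees is straight ahead;
    positive angles lean toward the later-arriving mic's partner."""
    # The delay implies a path-length difference between the two mics.
    path_diff = SPEED_OF_SOUND * delay_s
    # Far-field approximation: path_diff = spacing * sin(angle).
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# A source 30 degrees off-axis with mics 20cm apart produces a delay
# of spacing * sin(30°) / c, i.e. about 0.29 milliseconds.
delay = 0.2 * math.sin(math.radians(30)) / SPEED_OF_SOUND
print(round(bearing_from_tdoa(delay, 0.2), 1))  # → 30.0
```

With more than two Spots listening, the same principle extends from a bearing to a full position fix — which is presumably how a pair of Spots can work out where in the room you are.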
GPUs Power AI And Self-Driving Cars, Too
GPUs, unlike CPUs, are massively parallel: they lend themselves to the many simultaneous calculations required for things like an autonomous car’s real-time data processing or a data centre supercomputer. Nvidia is the one company in the world that has translated its history in gaming graphics into artificial intelligence.
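The parallelism point is easy to see in the maths of deep learning: each output of a neural-network layer is an independent dot product, so a GPU can compute them all at the same time. A toy sketch of that per-output independence, written serially in plain Python — the names and numbers are purely illustrative:

```python
def layer(inputs, weights):
    """One dense neural-network layer: each row of weights produces one
    output, and no output depends on any other — so a GPU can compute
    every row simultaneously. This loop is the serial CPU equivalent."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

inputs = [1.0, 2.0]
weights = [[0.5, 0.5],    # neuron 1
           [1.0, -1.0]]   # neuron 2
print(layer(inputs, weights))  # → [1.5, -1.0]
```

A GPU assigns a thread to each independent piece of that work — which is why the same silicon that shades millions of independent pixels per frame suits neural networks so well.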
“One day, AI researchers met the GPU, and the Big Bang happened. Some of the things we’ve been able to achieve in the last few years are incredible,” says Huang. Nvidia tech that evolved from the humble gaming GPU now powers Tesla’s self-driving cars, for example.
The best example of that is Nvidia’s AI research, which — courtesy of huge gains in GPU computing performance over the last few years — has made possible things like self-driving cars that learn from human drivers. Huang is bullish on the achievement: “We’ve been able to teach a car how to drive. Driving is a skill — we do no computation in our head, we just drive — but we’ve been able to teach a car to do that.”
Nvidia’s stated goals for self-driving cars are noble and kinda simple: reduce wasteful fuel emissions, save lives, and so on. But using AI to make a self-driving car in the first place is complex. The AI handles perception, reasoning and the actual driving process, all by interpreting in real time a huge array of data from the cameras placed around a self-driving car.
There’s one big problem with self-driving tech at the moment: it’s incredibly processing-intensive, which takes a lot of energy. But, of course, the company has a solution for that. Nvidia’s new Xavier supercomputer, which it intends for future self-driving cars, uses a fraction of the power of today’s systems. With a custom 8-core ARM64 processor and a 512-core Volta GPU, the much smaller machine draws just 30 Watts to do the same work that current machines need 20 times the power or more to handle.
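The power claim is simple arithmetic on the figures quoted above — note the two-hour drive below is a hypothetical for scale, not something Nvidia specified:

```python
# Xavier draws 30 W; current prototype rigs need at least 20x that.
xavier_watts = 30
current_watts = xavier_watts * 20            # 600 W or more
hours = 2.0                                  # hypothetical daily drive
xavier_kwh = xavier_watts * hours / 1000     # energy Xavier would use
current_kwh = current_watts * hours / 1000   # energy a current rig uses
print(xavier_kwh, current_kwh)  # → 0.06 1.2
```

In a battery-powered electric car, that difference between a light bulb’s worth of draw and a small space heater’s matters directly to range.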
Two incredibly powerful and influential automotive suppliers, Bosch and ZF, are working with Nvidia to put its self-driving tech and computers into production cars over the next few years. Nvidia is also partnering with Audi to build fully integrated self-driving cars from the ground up by 2020.
The ideal future of entirely self-driving tech is great, but Nvidia is also investigating a future that might be more interesting to driving purists and enthusiasts — the people who don’t want to let a computer take over their driving. The company’s just-announced AI co-pilot gives you surround and environmental awareness while you’re driving, using the same hardware and software suite to keep you aware of pedestrians, motorcyclists and cyclists on the road around you. It can monitor you, the driver, too: tracking fatigue through facial recognition, and using head- and gaze-tracking to check you’re looking the right way to spot dangers. Lip reading can even work alongside voice recognition to take commands — “start autopilot, take me to Starbucks” — for this goddamn futuristic self-driving supercomputer car to execute.
The future is gonna be amazing, guys.