NVIDIA CES 2015 Live Blog: Follow The News As It Happened


NVIDIA had a cracking year in 2014. With a brand new Shield gaming tablet with sweet free games, amazing new laptop graphics chips and the excellent GeForce range, it was an impressive 12 months. So what’s next? We’ll be at NVIDIA’s CES 2015 press conference this afternoon to bring you all the news as it happens!

All times are in AEDT.

12 midday, 5 January
The presser kicks off in earnest at 3pm AEDT. Tune back in then!

And we’re seated! Solid turnout for a sneaky press conference late on day zero of CES 2015!

What do you want to see from NVIDIA in 2015? A new Shield handheld? Better desktop processors? Let us know in the comments!

Green is tonight’s theme.

Right on time, co-founder and CEO Jen-Hsun Huang takes the stage.

We’re going over the company’s recent achievements, including K1 and the Maxwell GPU.

Meet the Tegra X1! It’s a Maxwell chip, shrunk down for mobile. It’s incredible: an 8-core, 64-bit CPU paired with a 256-core Maxwell GPU, and it can handle 4K video at 60fps in both H.265 and VP9. Oh my.

It smashes rivals out of the park on benchmark tests, and manages to sip power.


The Tegra X1 is able to run the amazing Elemental Unreal Engine 4 benchmark test using just 10W of power. When it was first demonstrated two months ago, it needed a desktop processor drawing 300W to run.

And now we’re watching that demo. This looks great.

In the year 2000, the ASCI Red supercomputer needed 1 million watts to produce 1 teraflop. Fifteen years later, the Tegra X1 can pump out the same amount with just 10 watts.
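For the curious, here’s a quick back-of-the-envelope check on that efficiency jump. This is just our own Python sketch doing the division on the figures quoted on stage, nothing official from NVIDIA:

```python
# Rough efficiency comparison from the figures NVIDIA quoted on stage.
asci_red_flops = 1e12   # ~1 teraflop (year 2000)
asci_red_watts = 1e6    # ~1 megawatt
tegra_x1_flops = 1e12   # ~1 teraflop, per NVIDIA's slide
tegra_x1_watts = 10

asci_red_eff = asci_red_flops / asci_red_watts  # FLOPS per watt
tegra_x1_eff = tegra_x1_flops / tegra_x1_watts

print(f"ASCI Red: {asci_red_eff:.0e} FLOPS/W")
print(f"Tegra X1: {tegra_x1_eff:.0e} FLOPS/W")
print(f"Improvement: {tegra_x1_eff / asci_red_eff:,.0f}x")
```

That works out to a 100,000-fold improvement in performance per watt over 15 years.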

So what’s all that power going to be used for? “Surely not a phone,” Jen-Hsun Huang says. It’s going to power the smart car and all of its displays and computing needs.

So that Tegra X1 is being put to good use inside the NVIDIA Drive CX: a digital cockpit computer.

Designed for future cars, it’s meant to power a bunch of different high-resolution screens, a number of virtual machines and comes with support for QNX, Linux and Android.

The software suite that designers will get access to with it is called Drive Studio. “We’ll be able to render Tron-like graphics for your cockpit in the future,” Jen-Hsun Huang says with a demonstration.


These graphics look gorgeous.


This digital cockpit software is completely amazing, and it’s all being powered by this Mighty Mouse of a chip. Awesome.


Features include dynamic lighting on navigation maps, customisable lighting cues on rev gauges and fantastic rear-view cameras that give you a bird’s-eye view of the car in a single video render.


We’re getting a demo of NVIDIA’s design studio replicating different lighting scenarios on various materials in a speedometer. From carbon fibre gauges with red lighting through to porcelain gauges with dimmed blue lighting, NVIDIA boffins are able to replicate and simulate it in NVIDIA Drive CX for car designers.


We’re now talking about the “car of the future”, and all the advanced driver aid sensors it will have and already has today. NVIDIA predicts that these sensor technologies will be replaced by smart cameras, with computer vision technology powering warnings for the driver.


How’s this for some future?

NVIDIA just announced a new auto-pilot computer system called the NVIDIA Drive PX. It’s packing two Tegra X1 processors, which gives it incredible processing capabilities.

Combined, they have 2.3 Teraflops of computing power. It can connect to up to 12 HD cameras and process 1.3 billion pixels per second. That way it can work out where you’re going.
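To give a sense of how quickly twelve camera feeds add up, here’s a quick pixel-rate calculation. The camera count and the 1.3 billion figure are NVIDIA’s; the resolution and frame rate below are our own illustrative assumptions, not numbers from the keynote:

```python
# Aggregate pixel throughput for a multi-camera rig.
# The resolution and frame rate here are illustrative guesses,
# not NVIDIA's actual camera spec.
def pixels_per_second(num_cameras, width, height, fps):
    return num_cameras * width * height * fps

# e.g. twelve 1080p cameras at 30 frames per second:
rate = pixels_per_second(12, 1920, 1080, 30)
print(f"{rate / 1e9:.2f} billion pixels per second")
```

Even at a modest 30fps that’s roughly 0.75 billion pixels a second; higher frame rates or resolutions push it towards the 1.3 billion NVIDIA quoted.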

It’s designed for driverless cars with 4K cameras, amazing displays and smart camera technologies all over it.



There’s also a new Deep Neural Network sensor system that detects and classifies objects as it sees them. It will know what a pedestrian looks like, what a bicycle looks like, what an emergency vehicle looks like and what road signs look like.

It’s about making the car “situationally aware” according to NVIDIA. Which sounds terrifying.


What NVIDIA’s Deep Neural Network systems are able to do is take apart images at the pixel level, learn exactly what each pixel depicts, and react accordingly.

For example, if you detect a car in front of you, you’ll drive differently than if you see a school bus with its lights on. The car can tell the difference.
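To give a feel for the idea, per-pixel classification boils down to scoring every candidate label for each pixel and keeping the best one. This is a toy Python sketch of that shape, not NVIDIA’s actual pipeline (which uses deep convolutional networks), and the class list is our own:

```python
# Toy per-pixel classification: each pixel gets one score per class,
# and the highest-scoring label wins. A real system computes these
# scores with a deep neural network; here they're just given.
CLASSES = ["road", "car", "pedestrian", "sign"]

def classify_pixel(scores):
    """scores: one number per class for a single pixel."""
    best = max(range(len(CLASSES)), key=lambda i: scores[i])
    return CLASSES[best]

print(classify_pixel([0.1, 0.7, 0.15, 0.05]))  # "car"
```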


We’re now taking a look at how NVIDIA has been testing this on real roads. It’s amazing.

These cameras and this Deep Neural Network Computer Vision system can see everything from occluded pedestrians through to distant speed limit signs and the colour of traffic signals and react accordingly.


It can also get into nitty-gritty subclassifications: it can see all the cars, but it can sub-classify them by sports cars, trucks (heavy and light) and SUVs.
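As a toy illustration of that class/subclass idea (the labels below are our own, not NVIDIA’s taxonomy), the hierarchy is just nested labels hanging off a top-level class:

```python
# Hypothetical class/subclass hierarchy, in the spirit of the
# sub-classification described on stage. Not NVIDIA's taxonomy.
TAXONOMY = {
    "vehicle": ["sports car", "light truck", "heavy truck", "SUV"],
    "person": ["pedestrian", "cyclist"],
    "sign": ["speed limit", "stop"],
}

def subclasses_of(label):
    """Return the known subclasses of a top-level class."""
    return TAXONOMY.get(label, [])

print(subclasses_of("vehicle"))
```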

Ricky Hudi, Executive VP of Electronics Development from Audi is up talking about the developments in driverless car tech.

Finally, we’re talking about NVIDIA Surround Vision. It pieces every camera angle around the car together to make a fantastic full-car view.


And that’s all she wrote, folks! Thanks for joining us for the first live blog of CES 2015. We’ll have some amazing stuff coming up this week, so stay tuned to Gizmodo Australia!
