Nvidia GTC 2016 Keynote Live Blog: Follow All The News As It Happened

Attention, graphics card fanatics: at approximately 2am AEST, Nvidia will be kicking off its keynote address for GTC 2016 — one of the premier events for GPU developers. This year, the focus is on artificial intelligence, virtual reality and self-driving cars, along with some big hardware announcements that we’re not allowed to talk about (yet). Once again, Gizmodo will be blogging live from the event. Get your cat naps in and we’ll see you at 2am sharp!

The 2016 GTC keynote begins at 9am PDT, which works out to 2am Sydney time (AEST). Keep updating this page for all the biggest announcements!

All times are in AEST.

1:30am, 6 April 2016

Good morning, hardcore GPU fans! (I’m assuming you’re pretty hardcore if you’ve stayed up to watch the GTC 2016 keynote live.) In a little over 30 minutes, Nvidia CEO Jen-Hsun Huang will take to the stage to kick off what is sure to be a tech-packed presentation. There are rumours that a mystery graphics card will make its debut. (Last year, Nvidia elected to unveil the Titan X a few weeks prior to GTC.) There are also going to be some fresh announcements surrounding virtual reality (VR) and autonomous driving. What are you hoping to see in action? Not long to go now…

1:45am, 6 April 2016

The press queue is packed to the gills with eager technology reporters from all over the world. I’d hate to see what the regular line looks like. (The press only make up a tiny fraction of attendees.)

1:59am, 6 April 2016

And we’re inside the convention hall! Not long to go now. Nvidia has cranked up deadmau5 to get everybody pumped. (Heads up, we’re experiencing flaky Wi-Fi atm, so if this blog suddenly stops updating you know the reason why.)

2:10am, 6 April 2016

Following a brief video about killer computing apps ranging from Google’s AlphaGo to the Higgs boson, Jen-Hsun Huang takes to the stage to talk up Nvidia’s supercomputing credentials. Without further ado, we get the first announcement — new updates for the Nvidia SDK. This is a catch-all devkit covering every conceivable GPU-based industry and school of learning, from cutting-edge gaming and photo-realistic design to VR projects and autonomous driving.

The reworked SDK is divided into six categories: ComputeWorks, GameWorks, VRWorks, DesignWorks, DriveWorks and JetPack.

2:15am, 6 April 2016

The Nvidia ComputeWorks SDK packs in cuDNN 5, nvGRAPH, the IndeX plug-in for ParaView, AmgX, cuSOLVER, cuSPARSE, OpenACC, Nsight, Thrust and the next generation of CUDA — CUDA 8.

Nvidia VRWorks integrates Oculus Rift and HTC Vive support into the Unreal (Epic), Max Play and Unity game engines. Naturally, the SDK can also be used to develop non-gaming applications.

DriveWorks is a suite of algorithm libraries that allows developers to create self-driving cars; more on that later in the keynote.

Nvidia JetPack is a deep learning SDK that will debut a new GPU inference engine dubbed “GIE”, which will be available in May.

2:30am, 6 April 2016

Now we’re getting into VR, which Jen-Hsun describes as a “brand new platform” made possible by lighter head-mounted displays. He’s bigging up the benefits for gamers, virtual designers and travel — including to “dangerous” places. He also gave a brief shout-out to Microsoft’s HoloLens.

We’re now getting a demo of Everest VR; a painstaking reconstruction of the world’s tallest mountain, simulated via an astonishing 108 billion pixels. We’ll be getting to experience this first-hand after the keynote, so stay tuned for a hands-on.

Next we were shown a VR simulation of life on Mars dubbed Mars 2030 — with none other than Steve Wozniak participating via live stream. When asked what he would like to do on Mars, he quipped “use VR”. He also complained that the demo was making him dizzy. “That is not a helpful comment, Woz,” Jen-Hsun interjected. Bless.

We watched Steve trundle about in the simulation, but to observers it just looked like an okay FPS video game. This is the problem with VR demonstrations, as Jen-Hsun freely admits: “We can show it on screen, but it won’t come close to the grandeur you will experience.”

2:50am, 6 April 2016

And we have a new tech announcement: Iray VR! Nvidia’s big boy 3D modelling platform is coming to virtual reality starting in June. It will allow developers to create pre-rendered light probes in regions of interest, rasterize depth for optimal headset eye position and reconstruct images for new viewpoints quickly and efficiently from within the platform.

From June, there will also be a consumer version available, dubbed Iray VR Lite. This will work on Android devices using a range of available headsets, including Google Cardboard. Nice.

3:00am, 6 April 2016

Now we’re talking AI. According to Jen-Hsun, 2016 marks a defining year for humanity, with a multitude of milestones including Microsoft and Google’s “super human” image recognition, Berkeley’s self-learning Brett robot, Baidu’s dual-language Deep Speech 2 speech recognition network and Google’s AlphaGo triumphing against the Go world champion. (Interestingly, Jen-Hsun focused on the human achievement of Lee Sedol, or rather, his “amazing genes”, for being able to go toe-to-toe with AlphaGo and win one of their five matches.)

We’re now pontificating about the importance of artificial intelligence and deep learning, and how it will basically take over every industry (in a good way) and dominate all computer applications. Jen-Hsun reckons one of the chief advantages of deep learning is that it’s easy to apply: “super human results without super human training”.

3:15am, 6 April 2016

He’s now showing off Nvidia’s graphics cards for deep learning applications: the Tesla M40 and Tesla M4. These cards debuted back in 2015 and are mainly used by research boffins.

We’re now looking at the slightly creepy-sounding Facebook AI Research project; a computer with “artistic skills”. This platform is an example of (deep breath) unsupervised representation learning with deep convolutional generative adversarial networks. When asked, it can generate its own landscapes based on images it has seen before. We were shown examples of a field and a beach. They were rather pretty. It can also create complete “turn” vectors from four averaged samples of faces looking left and right. By interpolating, it fills in the blanks to show the real person’s face from every angle.
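For the curious, the “fills in the blanks” trick described above boils down to interpolating between points in the network’s latent space, then decoding each intermediate point with the trained generator. Here’s a minimal Python/NumPy sketch of just the interpolation step; the generator itself is omitted, and the vector size and four-sample averaging setup are our own illustrative assumptions, not Facebook’s actual code.

```python
import numpy as np

def interpolate(z_left, z_right, steps):
    """Linearly blend between two latent vectors (e.g. 'face looking left'
    and 'face looking right') to produce the in-between viewpoints."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_left + a * z_right for a in alphas]

# Average several random samples per direction to get a robust "looking left"
# and "looking right" vector, as the demo reportedly did with four faces each.
rng = np.random.default_rng(0)
z_left = rng.normal(size=(4, 100)).mean(axis=0)
z_right = rng.normal(size=(4, 100)).mean(axis=0)

frames = interpolate(z_left, z_right, steps=8)
print(len(frames), frames[0].shape)  # 8 latent vectors, each of length 100
```

In the real DCGAN demo, each of those intermediate latent vectors would then be fed through the generator network to render an in-between face.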

3:30am, 6 April 2016

And we have a new chip announcement: say hello to the Tesla P100. This Pascal-based processor packs in a ridiculous 150 billion transistors. HBM2 memory, 5.3TF of FP64, 10.6TF of FP32, 21.2TF of FP16, 14MB of SM register files and 4MB of L2 cache = “Christmas in April”. It’s in large-scale production from today, with Tesla P100-based enterprise servers coming in Q1 2017.
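A quick sanity check for spec-sheet readers: those three throughput figures aren’t independent. Pascal doubles throughput each time the precision is halved, so FP32 runs at twice the FP64 rate, and FP16 at twice the FP32 rate. A tiny Python sketch of the arithmetic:

```python
# The Tesla P100 figures quoted above follow a 1:2:4 scaling across
# precisions: halving the precision doubles the teraflops.
fp64_tflops = 5.3
fp32_tflops = 2 * fp64_tflops  # 10.6 TF
fp16_tflops = 2 * fp32_tflops  # 21.2 TF

print(fp32_tflops, fp16_tflops)  # 10.6 21.2
```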

We were also shown the new Nvidia DGX-1 — the world’s first deep learning supercomputer, described as “120 servers in a box”. Specifically engineered for deep learning, it boasts eight 16GB Tesla P100 GPUs, a 7TB SSD, a pair of Xeon processors and an NVLink Hybrid Cube Mesh interconnect. It represents an astonishing 12x speed-up in a single year.

3:40am, 6 April 2016

Baidu senior researcher Bryan Catanzaro has just taken to the stage. He’s talking about how Pascal benefits model and data parallelism, and I’m understanding about one word in five. The chief takeaway is that NVLink makes deep learning faster and more efficient.

And now we’re hearing from Rajat Monga, who leads development of TensorFlow, the open-source deep learning software library used in a ton of Google products. Monga is keen to see the community push TensorFlow in new directions.
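For anyone wondering what TensorFlow actually is under the hood: its core abstraction is a dataflow graph, where nodes are tensor operations and evaluating an output walks the graph. Here’s a toy, pure-Python illustration of that idea (emphatically not TensorFlow’s real API) evaluating a tiny dense layer, y = relu(Wx + b):

```python
# A toy dataflow graph: each Node holds an operation and its input nodes,
# and eval() recursively computes inputs before applying the op.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        return self.op(*(i.eval() for i in self.inputs))

class Const(Node):
    """A leaf node holding a fixed value (like a tf constant)."""
    def __init__(self, value):
        self.value = value

    def eval(self):
        return self.value

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def add(a, b):
    return [ai + bi for ai, bi in zip(a, b)]

def relu(v):
    return [max(0.0, vi) for vi in v]

# Build the graph for y = relu(W @ x + b), then evaluate it.
W = Const([[1.0, -1.0], [0.5, 0.5]])
x = Const([2.0, 1.0])
b = Const([0.0, -2.0])
y = Node(relu, Node(add, Node(matvec, W, x), b))

print(y.eval())  # [1.0, 0.0]
```

TensorFlow does the same thing at vastly greater scale, with automatic differentiation and GPU kernels sitting behind each node.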

We have a price for the DGX-1: $US129,000. Stanford University, Berkeley, NYU and the University of Oxford will be among the first institutions to get DGX-1s. Nvidia will also be partnering with Massachusetts General Hospital to bring the power of the DGX-1 to medical research; specifically in the areas of radiology, pathology and genomics.

4:00am, 6 April 2016

Jen-Hsun Huang totally just pulled a “one more thing” (his actual words): he’s now talking about Nvidia Drive PX for self-driving cars. On the KITTI car-detection benchmark, Drive PX has an accuracy rating of 83.76% in hard conditions and 89.81% in medium conditions — the highest of any self-driving platform.

Here’s a pic of Baidu’s self-driving vehicle supercomputer, which fits into the trunk of a car. It can detect up to 1.8 million points of interest per second, which are assessed in the cloud via DGX-1.

Now we’re seeing a video of a self-driving car called BB-8. (No relation to the Star Wars droid.) A brave tester played with his smartphone as the car navigated an obstacle course. It successfully drove in the rain, changed lanes to avoid witch hats and altered its driving to suit a transition from asphalt to gravel. Wow.

4:10am, 6 April 2016

Okay, now things are getting ridiculously sci-fi: behold the world’s first autonomous race car. Powered by the Drive PX 2, these vehicles will actually compete in the 2016/2017 Formula E season as part of a new event dubbed ROBORACE. Ten teams will compete, each with two identical cars. Expect shrapnel to fly.

And that’s it! There wasn’t a whole lot for gamers or other consumers here, but the deep learning stuff and VR developments are certainly an intriguing snapshot into the future. We’ll be here for the rest of the week, so keep your eye out for plenty of hands-on demonstrations from the GTC showroom floor.

Let us know which announcement tickled your fancy the most in the comments!

Gizmodo travelled to GTC 2016 in San Jose, California as a guest of Nvidia.