Neon is the name of the company and also the name of its only product, if you can call it that. I'm told a Neon is a living entity, but it doesn't have a physical body. I'm told a Neon is computationally created, but it's not an AI assistant. So what the fuck is Neon and what is all this talk about artificial humans?
At this point, if you're totally confused, that's a perfectly reasonable feeling. Prior to CES 2020, the hype started ramping up around Neon, its relationship with Samsung, and the far-out presentation it would give this week. Now I've seen everything Neon has to offer and I'm pretty sure I get it. So let's break this thing down.
Neon is a project helmed by Star Labs, which is an independent startup/incubator funded by Samsung. The CEO of Neon and Star Labs is Pranav Mistry, who is also a global vice president of research at Samsung, and was responsible for helping create gadgets like the original Sixth Sense wearable computer, and more.
So what is a Neon?
In short, a Neon is an artificial intelligence in the vein of Halo's Cortana or Red Dwarf's Holly, a computer-generated life form that can think and learn on its own, control its own virtual body, has a unique personality, and retains its own set of memories, or at least that's the goal. A Neon doesn't have a physical body (aside from the processor and computer components that its software runs on), so in a way, you can sort of think of a Neon as a cyberbrain from Ghost in the Shell too. Mistry describes Neon as a way to discover the "soul of tech."
But unlike a lot of the AIs we interact with today, like Siri and Alexa, Neons aren't digital assistants. They weren't created specifically to help humans and they aren't supposed to be all-knowing. They are fallible and have emotions, possibly even free will, and presumably, they have the potential to die. Though that last one isn't quite clear.
OK, but those things look A LOT like humans. What's the deal?
Thatâ€™s because Neons were originally modelled on humans. The company used computers to record different peopleâ€™s faces, expressions, and bodies, and then all that info was rolled into a platform called Core R3, which forms the basis of how Neons appear to look, move, and react so naturally.
If you break it down even further, the three Rs in Core R3 stand for reality, realtime, and responsiveness, each R representing a major tenet of what defines a Neon. Reality is meant to show that a Neon is its own thing, not simply a copy or motion-capture footage of an actor. Realtime is supposed to signify that a Neon isn't just a preprogrammed line of code, scripted to perform a certain task without variation like you would get from a robot. Finally, responsiveness represents that Neons, like humans, can react to stimuli, with Mistry claiming latency as low as a few milliseconds.
Whoo, that's quite a doozy. Is that it?
Oh, I see, a computer-generated human simulation with emotions, free will, and the ability to die isn't enough for you? Well, there's also Spectra, which is Neon's (the company) learning platform that's designed to teach Neons (the artificial humans) how to learn new skills, develop emotions, retain memories, and more. It's the other half of the puzzle. Core R3 is responsible for the look, mannerisms, and animations of a Neon's general appearance, including its voice. Spectra is responsible for a Neon's personality and intelligence.
Oh yeah, did we mention they can talk too?
So is Neon Skynet?
Yes. No. Maybe. It's too early to tell.
That all sounds nice, but what actually happened at Neon's CES presentation?
After explaining the concept behind Neon's artificial humans and how the company started off creating their appearance by recording and modelling humans, Mistry showed how, after becoming adequately sophisticated, the Core R3 engine allows a Neon to animate a realistic-looking avatar on its own.
Then, Mistry and another Neon employee attempted to present a live demo of a Neon's abilities, which is sort of when things went awry. To Neon's credit, Mistry did preface everything by saying the tech is still very early, and given the complexity of the task and issues with doing a live demo at CES, it's not really a surprise the Neon team ran into technical difficulties.
At first, the demo went smoothly, as Mistry introduced three Neons whose avatars appeared on a row of nearby displays: Karen, an airline worker; Cathy, a yoga instructor; and Maya, a student. From there, each Neon was commanded to laugh, smile, and talk through controls on a nearby tablet. To be clear, in this case, the Neons weren't moving on their own but were manually controlled to demonstrate their lifelike mannerisms.
If you're thinking of a digital version of the creepy Sophia-bot, you're not far off.
For the most part, each Neon did appear quite realistic, avoiding nearly all the awkwardness you get from even high-quality CGI, like the kind Disney used to animate young Princess Leia in recent Star Wars movies. In fact, when the Neons were asked to move and laugh, the crowd at Neon's booth let out a small murmur of shock and awe (and maybe fear).
From there, Mistry introduced a fourth Neon along with a visualisation of the Neon's neural network, which is essentially an image of its brain. And after getting the Neon to talk in English, Chinese, and Korean (which sounded a bit robotic and less natural than what you'd hear from Alexa or the Google Assistant), Mistry attempted to demo even more actions. But that's when the demo seemed to freeze, with the Neon not responding properly to commands.
At this point, Mistry apologised to the crowd and promised that the team would work on fixing things so it could run through more in-depth demos later this week. I'm hoping to revisit the Neon booth to see if that's the case, so stay tuned for potential updates.
So what's the actual product? There's a product, right?
Yes, or at least there will be eventually. Right now, even in such an early state, Mistry said he just wanted to share his work with the world. However, sometime near the end of 2020, Neon plans to launch a beta version of the Neon software at Neon World 2020, a convention dedicated to all things Neon. This software will feature Core R3 and will allow users to tinker with making their own Neons, while Neon the company continues to work on developing its Spectra software to give Neons life and emotion.
How much will Neon cost? What is Neon's business model?
Supposedly there isn't one. Mistry says that instead of worrying about how to make money, he just wants Neon to "make a positive impact." That said, Mistry also mentioned that Neon (the platform) would be made available to business partners, who may be able to tweak the Neon software to sell things or serve in call centres or something. The bottom line is this: If Neon can pull off what it's aiming to pull off, there would be a healthy business in replacing countless service workers.
Can I fuck a Neon?
Get your mind out of the gutter. But at some point, probably yes. Everything we do eventually comes around to sex, right? Furthermore, this does bring up some interesting concerns about consent.
How can I learn more?
Go to Neon.life.
So what happens next?
Neon is going to Neon, I don't know. I'm a messenger trying to explain the latest chapter of CES quackery. Don't get me wrong, the idea behind Neon is super interesting and is something sci-fi writers have been writing about for decades. But for right now, it's not even clear how legit all this is.
It's unclear how much a Neon can do on its own, and how long it will take for Neon to live up to its goal of creating a truly independent artificial human. What is really real? It's weird, ambitious, and could be the start of a new era in human development. For now? It's still quackery.