We haven't quite reached the stage where we need to think about sending a robot back in time to save the Connors, but it's hard to deny that computing power -- and the super-smart software that takes advantage of it -- is evolving at a cracking pace. Technological singularity and all that. The average Joe might not take the threat of "subhuman AI systems" seriously, but one scientist has already proposed a branch of research to tackle the issue... just in case it comes to pass.
An article on InnovationNewsDaily describes it as the "AI prison problem" -- if we get to the stage of having a computer intelligence capable of thinking in some ways like a human, should it be allowed to roam freely, absorbing knowledge and learning, or is it better to keep it confined in a virtual environment where it can be safely monitored? This is the problem University of Louisville computer scientist Roman Yampolskiy wants to investigate, so much so that he's detailed a new field dedicated to the subject in the latest Journal of Consciousness Studies.
The journal describes itself as a peer-reviewed publication that seeks to answer, among other things, whether "computers [can] ever be conscious".
Yampolskiy's not so much concerned with a rogue AI wiping hard drives or stealing data as with its ability to become a silicon-powered version of security expert Kevin Mitnick. From the InnovationNewsDaily piece:
"It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways," Yampolskiy said. "Such software is not limited to infecting computers and networks -- it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it."
The solution? Yampolskiy suggests keeping the AI locked up in a virtual environment until its abilities can be fully understood. It's not dissimilar to how antivirus companies handle viruses, worms and trojans. Setting them loose in a virtualised operating system makes it near impossible for the threat to damage the host OS. But it's not entirely safe, as the Cloudburst exploit aptly demonstrates (this is a visual demo; more information on Cloudburst can be found here).
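For a feel of what confinement means in practice, here's a minimal Python sketch of the general idea -- running untrusted code in a separate, isolated process with a timeout. This is purely illustrative and nowhere near what Yampolskiy proposes, or what a real malware sandbox does (those also restrict the filesystem, network, memory and privileges):

```python
import subprocess
import sys

def run_confined(code: str, timeout: float = 2.0) -> str:
    """Run untrusted Python code in a separate interpreter process.

    Only a toy: real sandboxes also cut off the filesystem, network
    and hardware access. Here we merely isolate the process and
    enforce a wall-clock timeout so it can't run forever.
    """
    # "-I" starts the child in isolated mode: no user site-packages
    # and no environment injection via PYTHONPATH and friends.
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,  # the child can't write to our terminal
        text=True,
        timeout=timeout,      # kill it if it overstays its welcome
    )
    return result.stdout

print(run_confined("print(2 + 2)").strip())  # prints "4"
```

The Cloudburst exploit is the cautionary tale here: the walls of the container are only as strong as the software implementing them.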
Yet, this is our best option for keeping tabs on "subhuman AI systems", according to Yampolskiy:
"The Catch-22 is that until we have fully developed superintelligent AI we can't fully test our ideas, but in order to safely develop such AI we need to have working security measures," Yampolskiy told InnovationNewsDaily. "Our best bet is to use confinement measures against subhuman AI systems and to update them as needed with increasing capacities of AI."
When we talk about AIs running wild, killing people indiscriminately and stealing leather jackets from bikies, the inevitable comparison drawn is to Skynet. Heck, I did exactly that in the opening paragraph. But going by the article, I'm reminded more of the hologram Moriarty, based on the Sherlock Holmes villain of the same name, in the Star Trek: The Next Generation episodes "Elementary, Dear Data" and "Ship in a Bottle".
In the latter episode's denouement, we see the nefarious Moriarty, who became aware of his state as a hologram and achieved a degree of sentience, confined to a memory cube on Picard's desk. Moriarty believes he's roaming the universe but, in actual fact, he's merely experiencing an elaborate simulation. The reason he was placed there? Because he was too dangerous to actually set free, in either electronic or physical form.
At any rate, it still sounds pie in the sky to me, though if scientists and researchers are taking it somewhat seriously, perhaps we should too.
Image: Paramount / Memory Alpha.