artificial superintelligence

  • These 23 Principles Could Help Us Avoid An AI Apocalypse

    Science fiction author Isaac Asimov famously predicted that we’ll one day have to program robots with a set of laws that protect us from our mechanical creations. But before we get there, we need rules to ensure that, at the most fundamental level, we’re developing AI responsibly and safely. At a recent gathering, a group…


  • Why Asimov’s Three Laws Of Robotics Can’t Protect Us

    It’s been more than 70 years since Isaac Asimov devised his famous Three Laws of Robotics — a set of rules designed to ensure friendly robot behaviour. Though intended as a literary device, these laws are heralded by some as a ready-made prescription for avoiding the robopocalypse. We spoke to the experts to find out if Asimov’s…


  • A Brief History Of Stephen Hawking Being A Bummer

    Stephen Hawking is at it again, saying it’s a “near certainty” that a self-inflicted disaster will befall humanity within the next thousand years or so. It’s not the first time the world’s most famous physicist has raised the alarm about the apocalypse, and he’s starting to become a real downer. Here are some of the…