Powerful Lobby Group Wants To Keep AI Unregulated

The Information Technology Industry Council (ITI) — a Washington D.C.-based lobby group that boasts Google, Amazon, and Microsoft among its many clients — is telling governments to think twice about establishing laws to regulate AI. But given mounting safety, ethical, and social justice concerns, is that such a good idea?

On Tuesday, ITI released its “AI Policy Principles,” in which the lobby group outlined “specific areas where industry, governments, and others can collaborate, as well as specific opportunities for public-private partnership.” In the new document, ITI acknowledged the need for the tech sector to promote the responsible development and use of AI, while calling upon governments to support, incentivise, and fund AI research efforts. But as for letting governments take a peek at an ITI client’s source code, or enact laws to steer the safe and ethical development of AI, that’s something it’s a bit less enthused about.

“We also encourage governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI,” notes ITI in its new list of AI principles. “This extends to the foundational nature of protecting source code, proprietary algorithms, and other intellectual property. Failure to do so could present a significant cyber risk.”

According to its mandate, ITI seeks to “encourage all governments around the world — including the US government — to develop policies, standards, and regulations that promote innovation and growth for the tech industry.” It represents some of the heaviest hitters in the tech sector, including Amazon, Facebook, Google, IBM, and Microsoft, while claiming to be “the global voice of the tech sector” and “a catalyst for preparing an AI world.”

ITI’s document is timely given just how important AI is starting to become, both in terms of its burgeoning influence on our lives (whether it be a photo-sorting app or an algorithm that invents new medicines) and its growing weight in the global economy (ITI estimates that AI will add at least $US7 trillion to the global economy by 2025). But it’s also timely given the recent calls for oversight and regulation. As Bloomberg reporters Gerrit De Vynck and Ben Brody write:

Big tech companies, and their software, are coming under more scrutiny in the wake of news that Russian-sponsored accounts used social networks to spread discord and try to influence the outcome of the 2016 U.S. presidential election. Algorithms designed by Facebook, Twitter Inc. and Google have also been criticised for increasing political polarization by giving people the type of news they already agree with, creating so-called “filter bubbles.”

And the concerns don’t stop there. Developers are starting to be criticised for allowing their AI systems to adopt human biases and prejudices (a recent Princeton study, for example, showed that some AI systems are sexist and racist). There’s also uncertainty about how AI will contribute to technological unemployment, automated warfare, and computer hacking. And there’s still no consensus on the specific ethical or moral codes that need to be imbued into these systems.

There’s also the frightening potential, as thinkers such as Elon Musk and Stephen Hawking have pointed out, for something to go horribly wrong with AI. As the recent AI breakthrough by Google-owned DeepMind suggests, a fast takeoff event, in which AI bootstraps itself into a superintelligent form, could arrive with little warning, introducing catastrophic, and possibly existential, threats.

As all of this is happening, it shouldn’t come as a surprise that some concerned observers are calling for the government to step in. Musk has warned that governments need to implement regulations “before it’s too late,” and that it’s only after things get out of hand that we tend to act. Two years ago, the White House released a preliminary AI strategy, saying AI needs to be ethical, that it must augment, and not replace, humans, and that everyone should have a chance to participate in the development of these systems. But as for formal regulations, the White House said it was still premature. As former US president Obama told Wired last year, “Most people aren’t spending a lot of time right now worrying about the Singularity — they are worrying about ‘Well, is my job going to be replaced by a machine?’”

Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, says that regulating new technologies is always a delicate balancing act.

“If you set regulation too early, then you may be betting on the wrong standards, and that would be terrible for commercialization, which is important,” Lin told Gizmodo. “The same problem exists with setting too many or unnecessary regulations; they can create barriers to innovation. But commercialization isn’t the only value at stake here; public safety is another value in the equation. So, if there’s little or no regulation for technologies that can have serious impact on our lives — from self-driving cars to AI systems that make criminal sentencing and bank lending decisions — then that will be bad for society. It’s a mistake to have a knee-jerk reaction either way, reflexively for or against regulation. Each technology is different and needs to be considered carefully on its own merits.”

Lin says this conversation is currently happening with regard to autonomous vehicles, with one camp arguing that regulatory standards would put manufacturers on the same page about safety-critical functions (which would also shield the industry from some liability), while the other camp says we don’t yet know enough to start forging standards.

“A middle path between no regulation and state regulation is to let industry regulate itself, which is the ITI approach,” says Lin. “But this is far from ideal as well: it’s letting the fox guard the henhouse. There’s no teeth to enforce self-regulations if a company breaks rank; there may be even less transparency than with government regulators; and many other problems.”

Currently, the US has no federal agency dedicated to regulating or monitoring AI, and it will probably be a while before we see anything like that (if ever). In the meantime, it will be up to various groups inside and outside the government to monitor developments in AI: the National Highway Traffic Safety Administration oversees autonomous vehicles, for instance, while the Department of Homeland Security monitors cybersecurity threats. Some private individuals and companies have created their own groups, such as Musk’s OpenAI initiative and Google’s DeepMind Ethics & Society group. But as Lin points out, there’s a “having your cake and eating it, too” aspect to self-regulation.

“On one hand, industry (correctly) says that AI is going to be this game-changing, super-revolutionary thing, but on the other hand, they often tell us not to worry about it, that they have it handled,” he said. “Worse, because the AI industry is so fragmented and full of start-ups — or even individuals without formal education or professional training, working from their basements — you couldn’t possibly get them all on board with your self-imposed regulations, whereas government regulations can use the full force of law to achieve compliance.”

Lin says that self-regulation may still be better than no regulation, or than uninformed regulation, especially for a technology that could cause major problems for society. As examples of such problems, he points to fake news, Dieselgate, biased decision systems, and so on.

In an email to Gizmodo, Jaan Tallinn, the co-founder of Skype, said “we need regulation eventually, but first we need more research into what a positive and effective regulation should look like.” And indeed, those arguing for regulations are having some difficulty articulating what actually needs regulating, and how it should be implemented and enforced. Thankfully, however, these conversations have started, and frameworks for AI regulation are starting to emerge.

As Tallinn noted, we’re going to need regulations eventually. The self-serving principles set out by ITI can be seen as a pre-emptive attack to delay the inevitable, and to protect its clients from what it sees as meddlesome and potentially costly intrusions.

And self-serving it is. It can hardly be said, for example, that the ITI clientele needs the additional government financial support the new list of principles calls for. As Oxford philosopher Nick Bostrom wrote last year, “Great resources are devoted to making [progress in AI] happen, with major (and growing) investments from both industry and academia in many countries.” At the same time, investment “in long-term AI safety…remains orders of magnitude less than investment in increasing AI capabilities.” That ITI did not list the funding of AI safety initiatives by industry, government, and private sources as an “AI principle” is as problematic as it is revealing. Moreover, it’s not obvious that profit-driven companies with cranky shareholders in the background have any interest in constraints imposed by outside forces, or in voluntarily contributing to the public good. Regulation and government oversight exist precisely because such pro-social forces cannot be assumed within the overarching capitalist framework.

“We can hope that corporate self-interest will align with public interests, but that is a giant leap of faith, and many companies in ITIC don’t exactly have a great track record at winning public trust,” Lin told Gizmodo. “It’s important to remember that they’re not in the business of protecting the public or promoting democracy — their business is business. When profit motives and humanitarian motives collide, take a wild guess which one usually wins.”

[Bloomberg]

