20 Years Of Copyright Wars

Gizmodo is 20 years old! To celebrate the anniversary, we’re looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools.

In the revisionist history of the internet, we were all sold down the river by the “techno-optimists,” who assumed that once we “connected every person in the world” we’d enter a kind of post-industrial nirvana, a condition so wondrous that it behooved us all to “move fast,” even if that meant that we’d “break things.”

The problem is that this is the Facebook story, not the internet story, and notwithstanding decades of well-financed attempts at enclosure, “Facebook” is not “the internet” (also, it’s not even Facebook’s story: “connecting every person in the world” was always a euphemism for “spy on every person in the world in order to enrich our shareholders”).

20 years ago, the internet’s early adopters were excited about the digital world’s potential, but we were also terrified about how it could go wrong.

Take the Electronic Frontier Foundation, an organisation that hired me 20 years ago, just a few months after the large record companies filed suit against the P2P file-sharing service Napster, naming its investors and their backers (giant insurance companies and pension funds) in the suit.

The Copyright Wars were kicking off. Napster showed that the record industry’s plan to capitalise on the internet with another “format shift” was doomed: we may have re-bought our vinyl on 8-track, re-bought our 8-tracks on cassette, and re-bought our cassettes on CD, but now we were in the driver’s seat. We were going to rip our CDs, make playlists, share the tracks, and bring back the 80 per cent of recorded music that the labels had withdrawn from sale, one MP3 at a time.

The rate of Napster’s ascent was dizzying. In 18 short months, Napster attracted 52 million users, making it the fastest-adopted technology in history, surpassing DVD players. For comparison, this was shortly after the 2000 presidential elections, in which 50,999,897 votes were cast for the loser (the “winner” got 50,456,002 votes).

Napster’s fall was just as dizzying. In July 2001, the service shut down after a 25-month run. The following year, it declared bankruptcy. The labels’ attempt to drag Napster’s VCs and their backers into the suit failed, but it didn’t matter – the investor class got the message, and P2P file-sharing became balance-sheet poison.

P2P users didn’t care. They just moved from “platforms” to “protocols”, switching to increasingly decentralized systems like Gnutella and BitTorrent – systems that, in turn, eliminated their own central points of failure in a relentless drive to trackerlessness.

P2P users interpreted the law as damage and routed around it. What they didn’t do, for the most part, was develop a political consciousness. If “P2P users” were a political party, they could have elected a president. Instead, they steered clear of politics, committing the original sin of nerd hubris: “Our superior technology makes your inferior laws irrelevant.”

P2P users may not have been interested in politics, but politics was interested in P2P users. The record industry sued 19,000 kids, singling out young P2P developers for special treatment. For example, there was the college computer science major who maintained a free software package called FlatLAN, which indexed the shared files on any local network. The labels offered him a settlement: if he changed majors and gave up programming computers, they wouldn’t seek $US150,000 in statutory damages for each track in his MP3 collection.

This phase of the P2P wars was a race between civil disobedience and regulatory capture. Senate Commerce Chairman Fritz Hollings introduced a bill that would make it a crime to sell a computer unless it had a special copyright enforcement chip (wags dubbed this hypothetical co-processor the “Fritz Chip”) that would (somehow) block all unauthorised uses of copyrighted works. The Hollings bill would have required total network surveillance of every packet going in or out of the USA to block the importation of software that might defeat this chip.

The Hollings bill died, but the entertainment industry had a backup plan: the FCC enacted “the Broadcast Flag regulation,” a rule that would require all digital devices and their operating systems to be approved by a regulator who would ensure that they were designed to foil their owners’ attempts to save, copy, or manipulate high-definition digital videos. (EFF subsequently convinced a federal appeals court that this order was illegal.)

Thus did the DRM Wars begin: a battle over whether our devices would be designed to obey us, or police us. The DRM Wars had been percolating for many years, ever since Bill Clinton signed the Digital Millennium Copyright Act into law in 1998.

The DMCA is a complex, multifaceted law, but the clause relevant to this part of history is Section 1201, the “anti-circumvention” rule that makes it a jailable felony to provide tools or information that would help someone defeat DRM (“access controls for copyrighted works”). DMCA 1201 is so broadly worded that it bans removing DRM even when it does not lead to copyright infringement. For example, bypassing the DRM on a printer-ink cartridge lets you print using third-party ink, which is in no way a violation of anyone’s copyright, but because you have to bypass DRM to do it, anyone who gives you a printer jailbreaking tool risks a five-year prison sentence and a $US500,000 fine… for a first offence.

DMCA 1201 makes it illegal to remove DRM. The Hollings Bill and the Broadcast Flag would have made it a crime to sell a device unless it had DRM. Combine the two and you get a world where everything has DRM and no one is allowed to do anything about it.

DRM on your media is gross and terrible, a way to turn your media collection into a sandcastle that melts away when the tide comes in. But the DRM Wars are only incidentally about media. The real action is in the integration of DRM in the Internet of Things, which lets giant corporations dictate which software your computer can run, and who can fix your gadgets (this also means that hospitals in the middle of a once-in-a-century pandemic can’t fix their ventilators). DRM in embedded systems also means that researchers who reveal security defects in widely used programs face arrest on federal charges, and it means that scientific conferences risk civil and criminal liability for providing a forum to discuss such research.

As microprocessors plummeted in price, it became practical to embed them in an ever-expanding constellation of devices, turning your home, your car and even your toilet into sensor-studded, always-on, networked devices. Manufacturers seized on the flimsiest bit of “interactivity” as justification for putting their crap on the internet, but the true motivation is to be found in DMCA 1201: once a gadget has a chip, it can have a thin skin of DRM, which is a felony to remove.

You may own the device, but it pwns you: you can’t remove that DRM without facing a prison sentence, so the manufacturer can booby-trap its gizmos so that any time your interests conflict with its commercial imperatives, you will lose. As Jay Freeman says, DMCA 1201 is a way to turn DRM into a de facto law called “Felony Contempt of Business Model.”

The DRM Wars rage on, under many new guises. These days, it’s often called the “Right to Repair” fight, but that’s just a corner of the raging battle over who gets to decide how the digital technology that you rely on for climate control, shelter, education, romance, finance, politics and civics works.

The copyright maximalists cheered DRM on as a means to prevent “piracy,” and dismissed anyone who warned about the dangers of turning our devices into ubiquitous wardens and unauditable reservoirs of exploitable software bugs as a deranged zealot.

It’s a natural enough mistake for anyone who treats networked digital infrastructure as a glorified video-on-demand service, and not as the nervous system of 21st Century civilisation. That worldview – that the internet is cable TV for your pocket rectangle – is what led those same people to demand copyright filters for every kind of online social space.

Filtering proposals have been there all along, since the days of the Broadcast Flag and even the passage of the DMCA, but they only came into widespread use in 2007, when Google announced a filtering system for YouTube called Content ID.

Google bought YouTube in 2006, to replace its failing in-house rival Google Video (Google is a buying-things company, not a making-things company; with the exception of Search and Gmail, all its successes are acquisitions, while its made-at-Google alternatives, from Glass to G+ to Reader, have failed).

YouTube attracted far more users than Google Video – and also far more legal trouble. A bruising, multi-billion-dollar lawsuit from Viacom was an omen of more litigation to come.

Content ID was an effort to head off future litigation. Selected media companies were invited to submit the works they claimed to hold the copyright to, and Content ID scoured all existing and new user uploads for matches. Rightsholders got to decide how Content ID handled these matches: they could “monetise” them (taking the ad revenue that the user’s video generated) or they could block them.

Content ID is one of those systems that works well, but fails badly. It has three critical failings:

  1. YouTube is extremely tolerant of false copyright claims. Media companies have claimed everything from birdsong to Brahms without being kicked off the system.
  2. Content ID tolerates false positives. The designers of any audio fingerprinting system have to decide how close two files must be to trigger a match. If the system is too strict, it can be trivially defeated by adding a little noise, slicing out a few seconds of the stream, or imperceptibly shifting the tones. On the other hand, very loose matching creates a dragnet that scoops up a lot of dolphins with the tuna. Content ID is tuned to block infringement even if that means taking down non-infringing material. That’s how a recording of white noise can attract multiple Content ID claims, and why virtually any classical music performance (including those by music teachers) gets claimed by Sony.
  3. It is impossible for Content ID to understand and accommodate fair use. Fair use is a badly understood but vital part of copyright; as the Supreme Court says, fair use is the free expression escape-valve in copyright, the thing that makes it possible to square copyright (in which the government creates a law about who is allowed to publish certain phrases) with the First Amendment (which bars the government from creating such a law). There is no bright-line test for whether something is fair use; rather, there is a large body of jurisprudence and some statutory factors that have to be considered in their totality to determine whether a use is fair. Here are some uses that have been found fair under some circumstances: making copies of Hollywood blockbusters and bringing them to your friends’ houses to watch at viewing parties; copying an entire commercial news article into an ad-supported message board so participants can read and discuss it; publishing a commercial bestselling novel that is an unauthorised retelling of another bestseller, specifically for the purpose of discrediting and replacing the original book in the reader’s imagination. Now, these were highly specific circumstances, and I’m not trying to say that all copying is fair, but Google’s algorithms can’t ever make the fine distinctions that created these exceptions, and it doesn’t even try. Instead, YouTube mostly acts as if fair use didn’t exist. Creators whose work is demonetised or removed can argue fair use in their appeals, but the process is beyond baroque and generally discourages fair use.
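The threshold trade-off in point 2 can be sketched in code. Everything below is invented for illustration – the one-bit-per-frame fingerprint, the frame size, the toy signals – and real systems like Content ID use far more sophisticated acoustic fingerprints, but the core dilemma is the same: a strict match threshold lets slightly perturbed copies slip through, while a loose one sweeps in material that was never a copy at all.

```python
import math

def fingerprint(samples, frame=256):
    """Toy acoustic fingerprint: one bit per frame, set when the
    frame's energy rises relative to the previous frame."""
    bits = []
    prev = 0.0
    for i in range(0, len(samples) - frame, frame):
        energy = sum(s * s for s in samples[i:i + frame])
        bits.append(1 if energy > prev else 0)
        prev = energy
    return bits

def distance(fp_a, fp_b):
    """Normalised Hamming distance: fraction of fingerprint bits that differ."""
    n = min(len(fp_a), len(fp_b))
    return sum(a != b for a, b in zip(fp_a, fp_b)) / n

def matches(fp_a, fp_b, threshold):
    """A match is declared when the distance falls at or below the threshold."""
    return distance(fp_a, fp_b) <= threshold

# A toy "song": a tone with a slowly swelling and fading loudness envelope.
song = [math.sin(i / 5) * (1 + math.sin(i / 2000)) for i in range(20000)]
# The same song with a little deterministic "noise" added - the trivial
# evasion attempt described above (add noise, hope the match fails).
noisy = [s + 0.05 * math.sin(i * 1.7) for i, s in enumerate(song)]

ref, upload = fingerprint(song), fingerprint(noisy)
d = distance(ref, upload)
# A threshold of 0.0 demands a bit-perfect match, so light noise can defeat
# it; a loose threshold like 0.5 catches the noisy copy, but in a real
# system it would also scoop up dolphins with the tuna.
```

The design question is where to set `threshold`: the passage above describes Content ID as tuned toward the loose end, accepting false positives (white noise, birdsong, classical performances) as the price of catching evasive copies.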

Content ID resulted in billions of dollars in revenue for rightsholders, but in no way ended copyright infringement on YouTube, as Big Content lobbyists frequently remind us. YouTube spent $US100,000,000 (and counting) on the system, which explains why only the largest Big Tech companies, like Facebook, have attempted their own filters.

Copyright filters are derided as inadequate by rightsholder groups, but that doesn’t stop them from demanding more of them. In 2019, the EU erupted in controversy over Article 13 of the Copyright in the Digital Single Market Directive, which requires platforms for user-generated content to prevent “re-uploading” of material that has been taken down following a copyright complaint. Article 13 triggered street demonstrations in cities all over Europe, and a petition opposing it attracted more signatories than any petition in EU history.

The official in charge of the Article 13 push, a German politician named Axel Voss, repeatedly insisted that its goal of preventing re-uploading could be accomplished without automated filters – after all, the existing E-Commerce Directive banned “general monitoring obligations” and the General Data Protection Regulation (GDPR) bans “processing” of your uploads without consent.

Article 13 came up for a line-item vote in March 2019 and carried by five votes. Afterward, ten Members of the European Parliament claimed they were confused and pressed the wrong button; their votes were switched in the official record, but under EU procedures, the outcome of the (now losing) vote was not changed.

Almost simultaneously, Axel Voss admitted that there was no way this would work without automated filters. This wasn’t surprising: after all, this is what everyone had said all along, including lobbyists for Facebook and YouTube, who endorsed the idea of mandatory filters as a workable solution to copyright infringement.

European national governments are now struggling to implement Article 13 (renumbered in the final regulation and now called Article 17), and when that’s done, there’s a whole slew of future filter mandates requiring implementation, like the terror regulation that requires platforms to identify and block “terrorist” and “extremist” content and keep it down. This has all the constitutional deficiencies of Article 13/17, and even higher stakes, because the false positives that “terrorism filters” take down aren’t white noise or birdsong – they’re the war-crime evidence painstakingly gathered by survivors.

In the USA, the Copyright Office is pondering its own copyright filter mandate, which would force all platforms that allow users to publish text, audio, code or video to compare users’ submissions to a database of copyrighted works and block anything that someone has claimed as their own.

As with Article 17 (née Article 13), such a measure will come at enormous cost. Remember, Content ID cost more than $US100 million to build, and Content ID only accomplishes a minute sliver of the obligations envisioned by Article 17 and the US Copyright Office proposal.

Adding more than $US100 million to the startup costs of any new online platform only makes sense if your view of the internet is five giant websites filled with screenshots of text from the other four. But if you hold out any hope for a more decentralized future built on protocols, not platforms, then filtering mandates should extinguish it.

Which brings me back to 20 years ago, and the naivete of the techno-optimists. 20 years ago, technology activists understood and feared the possibilities for technological dystopia. The rallying cry back then wasn’t “this will all be amazing,” it was “this will all be great…but only if we don’t screw it up.”

The Napster Wars weren’t animated by free music, but by a free internet – by the principle that we should be free to build software that let people talk directly to one another, without giving corporations or governments a veto over who could connect or what they could say.

The DRM Wars weren’t about controlling the distribution of digital videos, they were fought by people who feared that our devices would be redesigned to control us, not take orders from us, and that this would come to permeate our whole digital lives so that every appliance, gadget and device, from our speakers to our cars to our medical implants, would become a locus of surveillance and control.

The filter wars aren’t about whether you can upload music or movies – they’re about whether cops can prevent you from sharing videos of their actions by playing pop music to trigger filters and block your uploads.

From Napster to DRM to filters, the fight has always had the same stakes: will our digital nervous system be designed to spy on us and boss us around, or will it serve as a tool to connect us and let us coordinate our collective works?

But the techno-optimists – myself included – did miss something important 20 years ago. We missed the fact that antitrust law was a dead letter. Having lived through the breakup of AT&T (which unleashed modems and low-cost long-distance on America, setting the stage for the commercial internet); having lived through the 12-year IBM antitrust investigation (which led Big Blue to build a PC without making its own operating system and without blocking third-party clones of its ROMs); having lived through Microsoft’s seven-year turn in the antitrust barrel (which tamed Microsoft so that it spared Google from the vicious monopoly tactics that it used to destroy Netscape); we thought that we could rely on regulators to keep tech fair.

That was a huge mistake. In reality, by 1982, antitrust law was a dead man walking. The last major action of antitrust enforcers was breaking up AT&T. They were too weak to carry on against IBM. They were too weak to prevent the “Baby Bells” that emerged from AT&T’s breakup from re-merging with one another. They were even too weak to win their slam-dunk case against Microsoft.

That was by design. Under Reagan, the business lobby’s “consumer welfare” theory of antitrust (which holds that monopolies are actually “efficient” and should only be challenged when there is mathematical proof that a merger will drive up prices) moved from the fringes to the mainstream. Nearly half of all US Federal judges attended cushy junkets in Florida where these theories were taught, and afterwards they consistently ruled in favour of monopolies.

This process was a slow burn, but now, in hindsight, it’s easy to see how it drastically remade our entire economy, including tech. The list of concentrated industries includes everything from eyeglasses to glass bottles, shipping to finance, wrestling to cheerleading, railroads to airlines, and, of course, tech and entertainment.

40 years after the neutering of antitrust, it’s hard to remember that we once lived in a world that barred corporations from growing by buying their small, emerging competitors, merging with their largest rivals, or driving other businesses out of the marketplace with subsidized “predatory pricing.”

Yet if this regime had been intact for the rise of tech, we’d live in a very different world. Where would Google be without the power to gobble up small competitors? Recall that Google’s in-house projects (with the exception of Search and Gmail) have either failed outright or amounted to very little, and it was through buying up other businesses that Google developed its entire ad-tech stack, its mobile platform, its video platform, even its server infrastructure tools.

Google’s not alone in this – Big Tech isn’t a product-inventing machine, it’s a company-buying machine. Apple buys companies as often as you buy groceries. Facebook buys companies specifically to wipe out potential competitors.

But this isn’t just a Big Tech phenomenon. The transformation of the film industry – which is now dominated by just four studios – is a story of titanic mergers between giant, profitable companies, and not a tale of a few companies succeeding so wildly that their rivals go bust.

Here is an area where people with legitimate concerns over creators’ falling share of the revenues their labour generates and people who don’t want a half-dozen tech bros controlling the future have real common ground.

The fight to make Spotify pay artists fairly is doomed for so long as Spotify and the major labels can conspire to rip off artists. The fight to get journalists paid depends on ending illegal Google-Facebook collusion to steal ad-revenue from publishers. The fight to get mobile creators paid fairly runs through ending the mobile duopoly’s massive price-gouging on apps.

All of which depends on fighting a new war, an anti-monopoly war: the natural successor to the Napster Wars and the DRM Wars and the Filter Wars. It’s a war with innumerable allies, from the people who hate that all the beer is being brewed by just two companies to the people who are outraged that all the shipping in the world is (mis)managed by four cartels, to the people who are coming to realise that “inflation” is often just CEOs of highly concentrated industries jacking up prices because they know that no competitor will make them stop.

The anti-monopoly war is moving so swiftly, and in so many places, that a lot of combatants in the old tech fights haven’t even noticed that the battleground has shifted.

But this is a new era, and a new fight, a fight over whether a world where the line between “offline” and “online” has blurred into insignificance will be democratically accountable and fair, or whether it will be run by a handful of giant corporations and their former executives who are spending a few years working as regulators.

All the tech antitrust laws in the world won’t help us if running an online platform comes with an obligation to spend hundreds of millions of dollars to spy on your users and block their unlawful or unsavoury speech; nor will reform help us if it continues to be illegal to jailbreak our devices and smash the chains that bind our devices to their manufacturers’ whims.

The Copyright Wars have always been premised on the notion that tech companies should be so enormous that they can afford to develop and maintain the invasive technologies needed to police their users’ conduct to a fine degree. The Anti-Monopoly Wars are premised on the idea that tech and entertainment companies must be made small enough that creative workers and audiences can fit them in a bathtub… and drown them.

Cory Doctorow is a science fiction author, activist and journalist. His next book is Chokepoint Capitalism (co-authored with Rebecca Giblin), on how Big Tech and Big Content rigged creative labour markets – and how to unrig them.