What if, instead of being a copyright infringement threat, wearable technology became copyright's ultimate enforcement tool?
Copyright enforcement will be a major challenge in the medium of augmented reality. The mass lawsuits of the past two decades against file-sharers and signal pirates have required a significant amount of detective work and discovery to connect individual users to allegedly infringing downloads. Pursuing legal action against those who share infringing content in AR will not differ categorically from these efforts. After all, augmented content only appears to exist in three dimensions; in reality, it will still reside on a hard drive, device, or server somewhere that can be located and tracked. Indeed, the earliest versions of digital eyewear available now have barely any on-board memory or processing power at all. Google Glass, for example, connects to the internet only through a wireless link to a mobile phone, and its apps reside in the cloud. Newer devices such as Meta’s Space Glasses and the Atheer One require a physically wired connection to a mobile device.
As augmented content proliferates across the internet of things, however, the substance and sources of data will become that much harder to track. The entire world will eventually become a giant peer-to-peer sharing network; think of AR channels as BitTorrent sites that users can walk through, see, and touch. So-called “darknets” (sealed digital communities with no visible connection to the internet) will become much more common.
One can imagine that it will become even more difficult to prove that a particular user viewed a particular work when the “display” occurred entirely within a mobile headset. I expect that many litigators will soon be conducting “v-discovery,” in which they must determine not only the device to which virtual data was routed, but also where individual users were located, and in what direction they were looking, when the data was displayed.
AR eyewear could also be used, however, as a copyright enforcement mechanism. The YouTube video A Read-Only Future depicts life through the eyes of someone wearing digital eyewear regulated by the entertainment industry. The glasses recognize copyrighted content in the wearer’s field of view, such as a photo hanging on the wall or a song being played on the sidewalk, and obscure it unless he agrees to a micro-license payment. Just as in concept videos for actual digital headsets, the eyewear in this video can share content directly to Facebook, but it refuses to do so if it detects unlicensed content. It even alerts the authorities if the user stumbles across an unauthorized reproduction published by someone else. Excerpts from copyleft advocate Larry Lessig feature prominently in A Read-Only Future, which plays out as if it were Lessig’s nightmare.
This scenario is entirely plausible in light of how most AR apps function today. A mobile device scans the ambient world looking for one of the targets it is pre-programmed to recognize. Each time it captures a view, the device sends that image to the cloud to check it against the portfolio of targets. If a match is found, the cloud server sends back the digital content associated with that target. Several non-AR apps already operate in a similar way; for example, the popular app Shazam listens to ambient music and identifies it in real time, allowing users to purchase a copy of the song or follow along with the lyrics. (Port that app over to Glass and add the ability to project the lyrics in three dimensions, and you would have an augmented karaoke machine. Let's get on that, developers!)
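To make that loop concrete, here is a minimal sketch in Python of the cloud-side matching step described above. Everything in it is hypothetical: the target names, the content URLs, and the fingerprint function, which merely stands in for the computer-vision feature matching a real AR service would use.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class Target:
    """One pre-programmed image the recognition service knows about."""
    name: str
    fingerprint: str    # stand-in for a real feature descriptor
    content_url: str    # the augmentation sent back on a match


def fingerprint(image_bytes: bytes) -> str:
    # Placeholder matcher: a real AR service compares feature descriptors
    # (so partial or skewed views still match), not exact hashes.
    return hashlib.sha256(image_bytes).hexdigest()


# Hypothetical catalog of targets the app was published with.
CATALOG = {
    t.fingerprint: t
    for t in (
        Target("movie poster", fingerprint(b"poster pixels"), "https://example.com/ar/trailer"),
        Target("cereal box", fingerprint(b"box pixels"), "https://example.com/ar/game"),
    )
}


def handle_frame(frame: bytes) -> Optional[str]:
    """Cloud-side step: match a captured view and return its augmentation, if any."""
    target = CATALOG.get(fingerprint(frame))
    return target.content_url if target else None


print(handle_frame(b"poster pixels"))   # https://example.com/ar/trailer
print(handle_frame(b"random street"))   # None: nothing to overlay
```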
It would be child’s play to add a roster of copyrighted images to that cloud-based catalog of targets. Every time the cloud server recognized one of the protected files in its database, it could trigger a request for a micropayment, obscure the image, or even issue a warning to law enforcement or to the copyright owner itself. The fine print in our mobile app stores already prohibits us from using apps to commit copyright infringement; this would go one step further and turn mobile devices into the eyes and ears of the copyright police.
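A sketch of how that extra step might look, building on the example above. The protected-works registry, the work titles, and the policy attached to each entry are all hypothetical; the point is simply that the same matching step could be pointed at a second catalog and wired to a different response.

```python
import hashlib
from enum import Enum, auto


def fingerprint(image_bytes: bytes) -> str:
    # Same placeholder matcher as in the sketch above.
    return hashlib.sha256(image_bytes).hexdigest()


class Action(Enum):
    REQUEST_MICROPAYMENT = auto()   # prompt the wearer to buy a micro-license
    OBSCURE = auto()                # blur or black out the work in the display
    REPORT = auto()                 # notify the rights holder or the authorities


# Hypothetical registry of protected works, keyed the same way as the AR target catalog.
PROTECTED = {
    fingerprint(b"hit song artwork"): ("Hit Song", Action.REQUEST_MICROPAYMENT),
    fingerprint(b"bootleg print"): ("Bootleg Print", Action.REPORT),
}


def enforce(frame: bytes) -> str:
    """Check a captured view against the protected-works registry."""
    match = PROTECTED.get(fingerprint(frame))
    if match is None:
        return "no protected work detected"
    title, action = match
    return f"{title}: {action.name}"


print(enforce(b"hit song artwork"))   # Hit Song: REQUEST_MICROPAYMENT
print(enforce(b"street scene"))       # no protected work detected
```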
Such an enforcement mechanism could be so effective, and offer functionality unavailable by any other means, that the company able to provide it would be foolish not to monetize it. Today, mobile devices (including eyewear such as Glass) receive their internet connections through such providers as AT&T, Verizon, Sprint, Virgin, and the like. In the near future, we may instead get online directly through mass wireless signals emitted by Google or Facebook. Whichever company provides that service could easily sell copyright owners the ability to police copyright compliance through the network of AR-capable devices it serves. Internet service providers (ISPs) would then become analogous to today’s performance rights organizations (PROs), such as ASCAP, BMI, and SESAC, which rely on human investigators to overhear unlicensed public performances of copyrighted music. Indeed, some day PROs could theoretically contract directly with ISPs to enforce their entire catalogs, deputizing every end user as an investigator.
With such an arrangement in place, ISPs might even share the wealth in order to incentivize users to cooperate. Imagine if AR users received a micropayment each time they used their device to report an observed copyright infringement. Knowing that anyone you meet is a potential copyright cop would certainly be a powerful disincentive to would-be casual infringers.
Five years from now, instead of movie theaters detaining and interrogating digital eyewear users, they may be rewarding them.