MediaTech Law

By MIRSKY & COMPANY, PLLC

Copyright, Fair Use, and the Kissing Picture: Storms v. New England Sports Network, Inc.

Recently, a photojournalist, Michael Storms, filed an intriguing lawsuit in the U.S. District Court for the Southern District of New York against a website that published photographs taken by Mr. Storms without his permission and without paying Mr. Storms a licensing fee. The photos were of New York Mets pitcher Matt Harvey kissing Victoria's Secret model Adriana Lima at a restaurant in Miami, not long after Ms. Lima broke up with New England Patriots wide receiver Julian Edelman. The pictures were posted on the website of the New England Sports Network (NESN). (The case is Storms v. New England Sports Network, Inc.)

On its face, the complaint is relatively short and generic, but it will be interesting to see the defendant’s reply, whether the network argues that its use of the photos constitutes permissible “fair use,” and the potential effect of the court’s decision on copyright law as a whole.

Under the U.S. Copyright Act, 17 U.S.C. §§ 101 et seq., the “fair use of a copyrighted work, including . . . for purposes such as . . . news reporting . . . is not an infringement of copyright.” While there is no strict formula for how a court determines “fair use,” the Copyright Act (17 U.S.C. § 107) requires consideration of four factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

Circuits Weigh In on PII Under the VPPA

The Video Privacy Protection Act (VPPA) was enacted in 1988 in response to Robert Bork’s Supreme Court confirmation hearings before the Senate Judiciary Committee, during which his family’s video rental history was used to great effect and in excoriating detail. This was the age of brick-and-mortar video rental stores, well before the age of instant video streaming and on-demand content. Nonetheless, VPPA compliance is an important component of any privacy and data security program for online video-content providers, websites that host streaming video, and others in the business of facilitating consumers’ viewing of streaming video.

Judicial application of the VPPA to online content has produced inconsistent results, including in how the statute’s definition of personally identifiable information (PII)—the disclosure of which triggers VPPA liability—has been interpreted. Under the VPPA, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3). Courts and commentators alike have noted that this definition is vague, particularly when applied to new technological situations, as it describes what counts as PII rather than providing an absolute definition. In the streaming video context specifically, the dispute over the definition of PII typically turns on whether a static identifier, such as an internet protocol (IP) address or another identifier uniquely assigned to a consumer, counts as PII under the VPPA.


Federal Judge Tosses Stingray Evidence

In a first, a federal judge ruled that evidence found through the use of a stingray device is inadmissible. Reuters reports on the case, United States v. Raymond Lambis, which involved a man targeted in a US Drug Enforcement Administration (DEA) investigation. The DEA used a stingray, a surveillance tool used to reveal a phone’s location, to identify Raymond Lambis’ apartment as the most likely location of a cell phone identified during a drug trafficking probe. Upon searching the apartment, the DEA discovered a kilogram of cocaine.

According to Ars Technica, the DEA sought a warrant seeking location information and cell-site data for a particular 646 area code phone number. The warrant was based on communications obtained from a wiretap order that suggested illegal drug activity. With the information provided by the cell-site location, the DEA was able to determine the general vicinity of the targeted cell phone, which pointed to the intersection of Broadway and 177th Street in Manhattan. The DEA then used a stingray device, which mimics a cell phone tower and forces cell phones in the area to transmit “pings” back to the device. This enabled law enforcement to pinpoint a particular phone’s location.
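The location step described above can be pictured geometrically: readings taken from several known points can narrow a signal source to a single spot. The sketch below is purely illustrative and not drawn from the case record — the coordinates, the distance-style readings, and the linearized least-squares trilateration are assumptions for demonstration, not how a stingray actually computes location.

```python
import math

def estimate_position(measurements):
    """Estimate an (x, y) position from (point_x, point_y, distance)
    readings using simple linearized least-squares trilateration.
    Toy illustration only."""
    # Subtract the first circle equation from the others to get
    # a linear system in (x, y).
    x0, y0, d0 = measurements[0]
    rows, rhs = [], []
    for x, y, d in measurements[1:]:
        rows.append((2 * (x - x0), 2 * (y - y0)))
        rhs.append(d0**2 - d**2 + x**2 - x0**2 + y**2 - y0**2)
    # Solve the 2x2 normal equations by hand (no external libraries).
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three readings around a device actually located at (3, 4):
readings = [(0, 0, 5.0), (10, 0, math.hypot(7, 4)), (0, 10, math.hypot(3, 6))]
print(estimate_position(readings))  # approximately (3.0, 4.0)
```

With exact distances the linear system resolves to the true location; real-world signal measurements are noisy, which is why the least-squares form is used.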


Protecting Children’s Privacy in the Age of Siri, Echo, Google and Cortana

“OK Google”, “Hey Cortana”, “Siri…”, “Alexa,…”

These statements are more and more common as artificial intelligence (AI) becomes mainstream. They serve as the default statements that kick off the myriad services offered by Google, Microsoft, Apple and Amazon, respectively, and are at the heart of the explosion of voice-activated search and services now available through computers, phones, watches, and stand-alone devices. Once activated, these devices record the statements being made and digitally process and analyze them in the cloud. The service then returns the search results to the device in the form of answers, helpful suggestions, or an array of other responses.

A recent investigation by the UK’s Guardian newspaper, however, claims these devices likely run afoul of the U.S. Children’s Online Privacy Protection Act (COPPA), which regulates the collection and use of personal information from anyone younger than 13. If true, the companies behind these services could face multimillion-dollar fines.

COPPA details, in part, responsibilities of an operator to protect children’s online privacy and safety, and when and how to seek verifiable consent from a parent or guardian. COPPA also includes restrictions on marketing to children under the age of 13. The purpose of COPPA is to provide protection to children when they are online or interacting with internet-enabled devices, and to prevent the rampant collection of their sensitive personal data and information. The Federal Trade Commission (FTC) is the agency tasked with monitoring and enforcing COPPA, and encourages industry self-regulation.

The Guardian investigation states that voice-enabled devices like the Amazon Echo, Google Home and Apple’s Siri are recording and storing data provided by children interacting with the devices in their homes. While the investigation concluded that these devices are likely collecting information of family members under the age of 13, it stops short of concluding whether it can be proven that these services primarily target children under the age of 13 as their audience – a key determining factor for COPPA. Furthermore, according to the FTC’s own COPPA FAQ page, even if a child provides personal information to a general audience online service, so long as the service has no actual knowledge that the particular individual is a child, COPPA is not triggered.

While the details of COPPA will need to be refined and re-defined in the era of always-on digital assistants and AI, the harsh FTC crackdown the Guardian anticipates is unlikely, and the potential large fines are unlikely to materialize. Rather, what will likely occur is that the FTC will provide guidance and recommendations to such services, allowing them to modify their practices and stay within the bounds of the law, so long as they’re acting in good faith. For example, services like Amazon, Apple and Google could update their services to request on installation the age and number of individuals in the home, paired with an update to the terms of service requesting parental permission for the use of data provided by children under 13. For children outside of the immediate family who access the device, the services can claim they lacked actual knowledge that a child interacted with the service, again satisfying COPPA’s requirements.


License Plate Numbers: A Valuable Data Point in Big-Data Retention

What can you get from a license plate number?

At first glance, a person’s license plate number may not be considered that valuable a piece of information. When tied to a formal Motor Vehicle Administration (MVA) request it can yield the owner’s name, address, type of vehicle, vehicle identification number, and any lienholders associated with the vehicle. While this does reveal some sensitive information, such as a likely home address, there are generally easier ways to go about gathering that information. Furthermore, states have made efforts to protect such data, revealing owner information only to law enforcement officials or certified private investigators. The increasing use of Automated License Plate Readers (ALPRs), however, is revealing a treasure trove of historical location information that is being used by law enforcement and private companies alike. Also, unlike historical MVA data, policies and regulations surrounding ALPRs are in their infancy and provide far weaker safeguards for protecting personal information.

ALPR – what is it?

Consisting of either a stationary or mobile-mounted camera, ALPRs use pattern recognition software to scan up to 1,800 license plates per minute, recording the time, date and location a particular car was encountered.
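As a purely illustrative sketch (not drawn from any actual ALPR product), the pattern-recognition step can be thought of as validating camera OCR output against an assumed plate format and stamping each hit with time and place. The plate pattern, function name, and record fields below are hypothetical; real plate formats vary widely by state.

```python
import re
from datetime import datetime, timezone

# Hypothetical plate format: 1-3 letters followed by 1-4 digits.
PLATE_PATTERN = re.compile(r"^[A-Z]{1,3}[0-9]{1,4}$")

def record_plate_read(ocr_text, camera_location):
    """Validate an OCR result against the plate pattern and, if it matches,
    return the kind of record an ALPR system retains: plate, time, place."""
    candidate = ocr_text.strip().upper().replace(" ", "")
    if not PLATE_PATTERN.match(candidate):
        return None  # discard OCR noise that doesn't look like a plate
    return {
        "plate": candidate,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "location": camera_location,
    }

hit = record_plate_read("abc 1234", "Main St & 5th Ave camera")
print(hit["plate"])  # ABC1234
```

It is the retained record — plate plus time plus location, accumulated across thousands of reads — that produces the historical location trail the post describes.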


Please Don’t Take My Privacy (Why Would Anybody Really Want It?)

Legal issues with privacy in social media stem from the nature of social media – an inherently communicative and open medium. A cliché is that in social media there is no expectation of privacy because the very idea of privacy is inconsistent with a “social” medium. Scott McNealy of Sun Microsystems reportedly made this point with his famous aphorism: “You have zero privacy anyway. Get over it.”

But in evidence law, there’s a rule barring assumption of facts not in evidence. In social media, by analogy: Where was it proven that we cannot find privacy in a new communications medium, even one as public as the internet and social media?

Let’s go back to basic principles. Everyone talks about how privacy has to “adapt” to a new technological paradigm. I agree that technology and custom require adaptation by a legal system steeped in common law principles with foundations from the 13th century. But I do not agree that the legal system isn’t up to the task.

All you really need to do is take a wider look at the law.

Privacy writers talk about the law of appropriation in privacy. The law of appropriation varies from state to state, though it is a fairly established aspect of privacy law.


Privacy For Businesses: Any Actual Legal Obligations?

For businesses, is there an obligation in the United States to do anything more than simply have a privacy policy?  The answer is not much of an obligation at all.

Put another way, is it simply a question of disclosure – so long as a business tells users what it intends to do with their personal information, can the business pretty much do anything it wants with personal information?  This would be the privacy law equivalent of the “as long as I signal, I am allowed to cut anyone off” theory of driving.

Much high-profile enforcement (via the Federal Trade Commission and State Attorneys General) has definitely focused on breaches by businesses of their own privacy statements.  Plus, state laws in California and elsewhere either require that companies have privacy policies or specify what types of disclosures must be in those policies, but these laws again focus on disclosure rather than mandating specific substantive actions that businesses must or must not take when using personal information.

As The Economist recently noted in its Schumpeter blog, “Europeans have long relied on governments to set policies to protect their privacy on the internet.  America has taken a different tack, shunning detailed prescriptions for how companies should handle people’s data online and letting industries regulate themselves.”   This structural (or lack of structural) approach to privacy regulation in the United States can also be seen – vividly – in the legal and business commentary that greeted Google’s recent privacy overhaul.  Despite howls of displeasure and the concerted voices of dozens of State Attorneys General, none of the complaints relied on any particular violations of law.  Rather, arguments (by the AGs) are made about consumer expectations in the name of consumer advocacy, as in “[C]onsumers may be comfortable with Google knowing their search queries but not with it knowing their whereabouts, yet the new privacy policy appears to give them no choice in the matter, further invading their privacy.”

Again, there’s little reliance on codified law because, for better or worse, there is no relevant codified law to rely upon.  Google, Twitter and Facebook have been famously the subjects of enforcement actions by the states and the Federal Trade Commission, and accordingly Google has been careful in its privacy rollout to provide extensive advance disclosures of its intentions.

As The Economist also reported, industry trade groups have stepped in with self-regulatory “best practices” for online advertising, search and data collection, as well as “do not track” initiatives including browser tools, while the Obama Administration last month announced a privacy “bill of rights” that it hopes to move in the current or, more realistically, a future Congress.

None of this should ignore common law rights against invasion of privacy, such as those underlying the criminal charges successfully brought in New Jersey against the Rutgers student spying on his roommate.   These rights are not new and for the time being remain the main source of consumer recourse for privacy violations in the absence of meaningful contract remedies (for breaches of privacy policies) and legislative remedies targeted to online transactions.

More to come on this topic shortly.
