MediaTech Law

By MIRSKY & COMPANY, PLLC

Legal Issues in Ad Tech: De-Identified vs. Anonymized in a World of Big Data

In the booming world of Big Data, consumers, governments, and even companies are rightfully concerned about the protection and security of their data and how to keep one’s personal and potentially embarrassing details of life from falling into nefarious hands. At the same time, most would recognize that Big Data can serve valuable purposes, such as lifesaving medical research and the improvement of commercial products. A question at the center of this discussion is whether, and how, data can be effectively “de-identified” or even “anonymized” to limit privacy concerns – and whether the distinction between the two terms is more theoretical than practical. (As I mentioned in a prior post, “de-identified” data can potentially be re-identified, while, at least in theory, anonymized data cannot.)
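
To make the distinction concrete, here is a minimal Python sketch (the names and data are hypothetical, and this illustrates the concept rather than any legally blessed method). A pseudonymized record is merely de-identified: anyone holding the lookup table can re-identify it. True anonymization would destroy that link entirely, for example by releasing only aggregate statistics.

```python
# A conceptual sketch of why "de-identified" is weaker than "anonymized":
# pseudonyms produced this way can be re-identified by anyone holding
# the lookup table. All names and data are hypothetical.
import secrets

pseudonym_table: dict[str, str] = {}    # pseudonym -> real identity

def pseudonymize(name: str) -> str:
    """Replace a name with a random token (de-identification)."""
    token = secrets.token_hex(4)
    pseudonym_table[token] = name       # the link back still exists
    return token

token = pseudonymize("Jane Doe")
print(token)                            # e.g. '9f2ab3c1'
print(pseudonym_table[token])           # 'Jane Doe' -- re-identified
```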

Privacy of health data is particularly important, and so the U.S. Health Insurance Portability and Accountability Act (HIPAA) includes strict rules on the use and disclosure of protected health information. These privacy constraints do not apply if the health data has been de-identified – either through a safe harbor-blessed process that removes 18 key identifiers or through a formal determination by a qualified expert – in either case presumably because these mechanisms are seen as a reasonable way to make re-identification difficult.
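
By way of illustration only, here is a simplified sketch of a safe-harbor-style pass over a single record; the field names are hypothetical, and a real implementation would need to address all 18 identifier categories (names, geographic units smaller than a state, dates more specific than a year, phone numbers, Social Security numbers, and so on).

```python
# A simplified, hypothetical sketch of safe-harbor-style de-identification.
# A real pass must cover all 18 HIPAA identifier categories, not the
# handful of hypothetical field names shown here.
IDENTIFIER_FIELDS = {"name", "street_address", "phone", "ssn", "email"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize the birth date to a year
    (the safe harbor disallows date elements more specific than the year)."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    if "birth_date" in clean:                     # e.g. "1975-03-14"
        clean["birth_year"] = clean.pop("birth_date")[:4]
    return clean

patient = {"name": "Jane Doe", "ssn": "123-45-6789",
           "birth_date": "1975-03-14", "diagnosis": "hypertension"}
print(deidentify(patient))
# {'diagnosis': 'hypertension', 'birth_year': '1975'}
```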

Read More

“Do Not Track” and Cookies – European Commission Proposes New ePrivacy Regulations

The European Commission recently proposed new regulations to align privacy rules for electronic communications with the much-anticipated General Data Protection Regulation (GDPR) (the GDPR was adopted in 2016 and goes into effect in May 2018). Referred to as the Regulation on Privacy and Electronic Communications, or “ePrivacy” regulation, these final additions to the EU’s new data protection framework make a number of important changes, including expanding privacy protections to over-the-top applications (like WhatsApp and Skype), requiring consent before metadata can be processed, and adding restrictions on spam. But the provisions relating to “cookies” and the tracking of consumers’ online activity are particularly interesting and applicable to a wide range of companies.

Cookies are small data files stored on a user’s computer or mobile device by a web browser. The files help websites remember information about the user and track a user’s online activity. Under the EU’s current ePrivacy Directive, a company must get a user’s specific consent before a cookie can be stored and accessed. While well-intentioned, this provision has caused frustration and resulted in consumers facing frequent pop-up windows (requesting consent) as they surf the Internet.
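
To make the mechanics concrete, here is a minimal sketch using only Python’s standard library (the cookie name and value are hypothetical): the site asks the browser to store the cookie by sending a Set-Cookie header, and the browser sends it back on later requests – the storage and access that the ePrivacy Directive conditions on consent.

```python
# A minimal sketch of cookie mechanics with Python's standard library.
# The cookie name and value are hypothetical.
from http.cookies import SimpleCookie

# Server side: ask the browser to store a small piece of data by
# sending a Set-Cookie header with the HTTP response.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600     # remember the user for an hour
cookie["session_id"]["httponly"] = True    # hide from page scripts
print(cookie.output())
# Set-Cookie: session_id=abc123; HttpOnly; Max-Age=3600

# Browser side: later requests carry the value back in a Cookie header,
# which is how the site "remembers" the user and can track activity.
returned = SimpleCookie("session_id=abc123")
print(returned["session_id"].value)        # abc123
```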

Read More

Circuits Weigh In on PII Under the VPPA

The Video Privacy Protection Act (VPPA) was enacted in 1988 in response to Robert Bork’s Supreme Court confirmation hearings before the Senate Judiciary Committee, during which his family’s video rental history was used to great effect and in excoriating detail. This was the age of brick-and-mortar video rental stores, well before the age of instant video streaming and on-demand content. Nonetheless, VPPA compliance is an important component of any privacy and data security program for online video-content providers, websites that host streaming videos, and others in the business of facilitating consumers’ viewing of streaming video.

Judicial application of the VPPA to online content has produced inconsistent results, including in how the statute’s definition of personally identifiable information (PII) – the disclosure of which triggers VPPA liability – has been interpreted. Under the VPPA, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3). Courts and commentators alike have noted that this definition is vague, particularly when applied to new technologies, because it describes what PII includes rather than providing an exhaustive definition. In the streaming video context specifically, disputes over the PII definition typically turn on whether a static identifier, like an internet protocol (IP) address or a similar identifier uniquely assigned to a consumer, counts as PII under the VPPA.

Read More

Blogs and Writings We Like

This week we highlight three fine posts on timely subjects in media tech law: Beverly Berneman writing about website terms of service and fair use, Leonard Gordon writing about “astroturfing” in advertising law, and John Buchanan and Dustin Cho writing about a gaping coverage gap with cybersecurity insurance.

Hot Topic: Fake News

Beverly Berneman’s timely post, “Hot Topic: Fake News” (on the “IP News For Business” blog of Chicago firm Golan Christie Taglia), offers a simple cautionary tale about publishing your copyrighted artwork on the internet – in this case, on a website (DeviantArt) promoting the works of visual artists. One such artist’s posting subsequently appeared for sale, unauthorized, on t-shirts promoted on the website of another company (Hot Topic). The aggrieved artist then sought recourse from DeviantArt. Berneman (like DeviantArt) pointed to DeviantArt’s terms of use, which prohibit downloading or using artwork for commercial purposes without permission from the copyright owner – leaving the artist with no claim against DeviantArt.

Berneman correctly highlights the need to read website terms of use before publishing your artwork on third-party sites, especially if you expect that website to police piracy by other parties. Berneman also dismisses arguments about fair use made by some commentators about this case, adding “If Hot Topic used the fan art without the artist’s permission and for commercial purposes, it was not fair use.”

What we like: We like Berneman’s concise and spot-on guidance about the need to read website terms of use and, of course, when fair use is not “fair”. Plus her witty tie-in to “fake news”.

*            *            *

NY AG Keeps Up the Pressure on Astroturfing

Leonard Gordon, writing in Venable’s “All About Advertising Law” blog, offered a nice write-up of several recent settlements of “Astroturfing” enforcement actions by New York State’s Attorney General. First, what is Astroturfing? Gordon defines it as “the posting of fake reviews”, although blogger Sharyl Attkisson put it more vividly: “What’s most successful when it appears to be something it’s not? Astroturf. As in fake grassroots.” (And for the partisan spin on this, Attkisson follows that up with her personal conclusions as to who makes up the “Top 10 Astroturfers”, including “Moms Demand Action for Gun Sense in America and Everytown” and The Huffington Post. Ok now. But we digress ….)

The first case involved an urgent care provider (Medrite), which evidently contracted with freelancers and firms to write favorable reviews on sites like Yelp and Google Plus. Reviewers were not required to have been actual urgent care patients, nor were they required to disclose that they were compensated for their reviews.

The second case involved a car service (Carmel). The AG claimed that Carmel solicited favorable Yelp reviews from customers in exchange for discount cards on future use of the service. As with Medrite, reviewers were not required to disclose compensation for favorable reviews, and customers posting negative reviews were not given discount cards.

Both settlements involved monetary penalties and commitments not to compensate reviewers without requiring them to disclose that compensation. And in the Carmel settlement, Carmel took on affirmative obligations to educate its industry against these practices.

What we like: We like Gordon’s commentary about these cases, particularly his advisory conclusion: “Failure to do that could cause you to end up with a nasty case of ‘turf toe’ from the FTC or an AG.” Very nice.

*            *            *

Insurance Coverage Issues for Cyber-Physical Risks

John Buchanan and Dustin Cho write in Covington’s Inside Privacy blog about a gaping gap in insurance coverage for risks to physical property from cybersecurity attacks, as opposed to the more familiar privacy breaches. Buchanan and Cho discuss a recently published report from the U.S. Government’s National Institute of Standards and Technology (NIST), helpfully titled “Systems Security Engineering Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems”. Rolls off the tongue.

The NIST report is a dense read (257 pages) and covers much more than insurance issues, in particular recommendations for improving systems security engineering for (among other things) critical infrastructure, medical devices and hospital equipment, and networked home devices (IoT, or the Internet of Things).

Buchanan and Cho’s post addresses insurance issues, noting that “purchasers of cyber insurance are finding that nearly all of the available cyber insurance products expressly exclude coverage for physical bodily injury and property damage”.

What we like: Insurance is always an important and underappreciated business issue, with even less public understanding of the property and injury risks to (and coverage from) cyber damage. We like how Buchanan and Cho took the time to plow through an opaque government report to tell a simple and important story.

Read More

Legal Issues in Ad Tech: IP Addresses Are Personal Data, Says the EU (well … sort of)

Much has been written in the past two weeks about the U.S. Presidential election. Time now for a diversion into the exciting world of data privacy and “personal data”. Because in the highly refined world of privacy and data security law, important news actually happened in the past few weeks. Yes, I speak breathlessly of the European Court of Justice (ECJ) decision on October 19th that IP (internet protocol) addresses are “Personal Data” for purposes of the EU Data Protection Directive. This is bigly news (in the data privacy world, at least).

First, what the decision actually said, which leads immediately into a riveting discussion of the distinction between static and dynamic IP addresses.

The ECJ ruled on a case brought by a German politician, Patrick Breyer, who sought an injunction preventing a website and its operator – here, publicly available websites operated by the German government – from collecting and storing his IP address when he lawfully accessed those sites. Breyer claimed that the government’s actions violated his privacy rights under EU Directive 95/46/EC (the Data Protection Directive). As the ECJ reported in its opinion, the government websites “register and store the IP addresses of visitors to those sites, together with the date and time when a site was accessed, with the aim of preventing cybernetic attacks and to make it possible to bring criminal proceedings.”
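
For a sense of what was at stake technically, here is a minimal sketch (with hypothetical handler, port, and file names) of the kind of logging the opinion describes: the server stores each visitor’s IP address together with a timestamp of the access.

```python
# A minimal sketch of the logging practice described in the opinion:
# each visitor's IP address is stored with the date and time of access.
# Handler, port, and log file names are hypothetical.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip, _port = self.client_address
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        # Under Breyer, this stored pairing can itself be personal data.
        with open("access.log", "a") as log:
            log.write(f"{stamp} {ip} {self.path}\n")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("", 8000), LoggingHandler).serve_forever()
```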

The case is Patrick Breyer v Bundesrepublik Deutschland, Case C-582/14, and the ECJ’s opinion was published on October 19, 2016.

Read More

Please Don’t Take My Privacy (Why Would Anybody Really Want It?)

Legal issues with privacy in social media stem from the nature of social media – an inherently communicative and open medium. A cliché is that in social media there is no expectation of privacy because the very idea of privacy is inconsistent with a “social” medium. Scott McNealy of Sun Microsystems reportedly made this point with his famous aphorism: “You have zero privacy anyway. Get over it.”

But in evidence law, there’s a rule barring assumption of facts not in evidence. In social media, by analogy: Where was it proven that we cannot find privacy in a new communications medium, even one as public as the internet and social media?

Let’s go back to basic principles. Everyone talks about how privacy has to “adapt” to a new technological paradigm. I agree that technology and custom require adaptation by a legal system steeped in common law principles with foundations from the 13th century. But I do not agree that the legal system isn’t up to the task.

All you really need to do is take a wider look at the law.

Privacy writers talk about the law of appropriation, which varies from state to state but is a fairly established aspect of privacy law.

Read More

Privacy: Consent to Collecting Personal Information

Gonzalo Mon writes in Mashable that “Although various bills pending in Congress would require companies to get consent before collecting certain types of information, outside of COPPA, getting consent is not a uniformly applicable legal requirement yet. Nevertheless, there are some types of information (such as location-based data) for which getting consent may be a good idea.  Moreover, it may be advisable to get consent at the point of collection when sensitive personal data is in play.”

First, what current requirements – laws, agency regulations and quasi-laws – require obtaining consent, even if not “uniformly applicable”?

1. Government Enforcement. The Federal Trade Commission’s November 2011 consent decree with Facebook requires users’ express consent to sharing of nonpublic user information that “materially exceeds” the users’ privacy settings. The FTC was acting under its authority under Section 5 of the FTC Act against “unfair and deceptive” trade practices, an authority the FTC has liberally used in enforcement actions involving not just claimed breaches of privacy policies but also data security cases involving the management of personal data without adequate security.

2. User Expectations Established by Actual Practice.  The mobile space offers some of the most progressive (and aggressive) examples of privacy rights seemingly established by practice rather than stated policy.  For example, on the PrivacyChoice blog, the CEO of PlaceIQ explained that “Apple and Android have already established user expectations about [obtaining] consent.  Location-based services in the operating system provide very precise location information, but only through a user-consent framework built-in to the OS.  This creates a baseline user expectation about consent for precise location targeting.”  (emphasis added)
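
The consent framework described in that quote can be sketched generically. The following Python sketch is purely conceptual – it is not Apple’s or Android’s actual API, and every name in it is hypothetical – but it shows the baseline expectation: precise data is released only after an affirmative, per-app user opt-in.

```python
# A conceptual sketch of a consent-gated location framework. This is
# not any real mobile OS API; all names here are hypothetical.
class ConsentRequiredError(Exception):
    pass

class LocationService:
    def __init__(self) -> None:
        self._consented_apps: set[str] = set()

    def grant_consent(self, app_id: str) -> None:
        """Record consent; called only after the user affirmatively opts in."""
        self._consented_apps.add(app_id)

    def precise_location(self, app_id: str) -> tuple[float, float]:
        # The OS refuses precise location data absent per-app consent.
        if app_id not in self._consented_apps:
            raise ConsentRequiredError(f"{app_id} lacks user consent")
        return (38.9072, -77.0369)   # placeholder coordinates

service = LocationService()
try:
    service.precise_location("example.app")   # no consent yet -> raises
except ConsentRequiredError as err:
    print(err)
service.grant_consent("example.app")
print(service.precise_location("example.app"))
```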

Read More