MediaTech Law

By MIRSKY & COMPANY, PLLC

Encrypted Data: Still “Personal Data” under GDPR?

An interesting question is whether encrypted personal data is still “personal data” for purposes of the European Union’s General Data Protection Regulation (GDPR), such that processing of that data remains subject to the GDPR’s library of compliance obligations.  The answer depends on the meaning of encryption: it is not enough to claim that encrypted data is “anonymized” and therefore no longer relates to an “identified or identifiable natural person” within the meaning of the personal data definition.

If an organization encrypts data in its care, with the encryption thereby rendering the data no longer “identified”, is it still “identifiable”?  Maybe.  If the data is neither identified nor identifiable, then it is no longer “personal data”.

First, what is encryption?  Josh Gresham writes on IAPP’s blog that encryption involves a party “tak[ing] data and us[ing] an ‘encryption key’ to encode it so that it appears unintelligible.  The recipient uses the encryption key to make it readable again.  The encryption key itself is a collection of algorithms that are designed to be completely unique, and without the encryption key, the data cannot be accessed.  As long as the key is well designed, the encrypted data is safe.” (emphasis added)
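
To make that concrete, here is a minimal sketch (not from Gresham’s post) of symmetric encryption using Python’s cryptography package; the data and variable names are purely illustrative:

```python
# Illustrative only: symmetric encryption with Python's "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the "encryption key"
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"jane.doe@example.com")   # appears unintelligible
print(ciphertext)

# Only a holder of the key can make the data readable again.
print(cipher.decrypt(ciphertext))  # b'jane.doe@example.com'
```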

Read More

Legal Issues in Ad Tech: De-Identified vs. Anonymized in a World of Big Data

In the booming world of Big Data, consumers, governments, and even companies are rightfully concerned about the protection and security of their data and about keeping the personal and potentially embarrassing details of one’s life from falling into nefarious hands.  At the same time, most would recognize that Big Data can serve a valuable purpose, such as supporting lifesaving medical research and improving commercial products. A question therefore at the center of this discussion is how, and whether, data can be effectively “de-identified” or even “anonymized” to limit privacy concerns – and whether the distinction between the two terms is more theoretical than practical. (As I mentioned in a prior post, “de-identified” data is data that has the possibility of being re-identified, while, at least in theory, anonymized data cannot be re-identified.)

Privacy of health data is particularly important, and so the U.S. Health Insurance Portability and Accountability Act (HIPAA) includes strict rules on the use and disclosure of protected health information. These privacy constraints do not apply if the health data has been de-identified – either through a safe harbor-blessed process that removes 18 key identifiers or through a formal determination by a qualified expert – in either case presumably because these mechanisms are seen as a reasonable way to make it difficult to re-identify the data.
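
As a rough illustration of the safe harbor idea (and only that – this is not the full list of the 18 identifier categories, and the field names are invented), a Python sketch might strip direct identifiers from a record like this:

```python
# Illustrative sketch: drop a handful of HIPAA Safe Harbor identifier fields.
# A real de-identification program must address all 18 identifier categories
# (or rely on a formal expert determination instead).
IDENTIFIER_FIELDS = {"name", "street_address", "phone", "email", "ssn",
                     "medical_record_number"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with the listed identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "J45.909"}
print(strip_identifiers(patient))   # {'diagnosis': 'J45.909'}
```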

Read More

Blogs and Writings We Like

This week we highlight three pieces discussing timely subjects in media tech law: Sandy Botkin writing about zombie cookies and targeted advertising, Geoffrey Fowler writing about the new world of phishing and “phishermen” (yes, that’s a thing), and Justin Giovannettone and Christina Von der Ahe writing about nonsolicitation agreements and social media.

FTC vs Turn, Inc.: Zombie Hunters

Sandy Botkin, writing on the TaxBot Blog, reports amusingly on the FTC’s December 2016 settlement with digital advertising data provider Turn, Inc., stemming from an enforcement action against Turn for violating its own consumer privacy policy. Botkin uses the analogy of a zombie attack to illustrate the effect of the steps Turn took to end-run users’ efforts to block targeted advertising on websites and apps.

According to the FTC in its complaint, Turn’s participation in Verizon Wireless’ tracking header program – attaching unique IDs to all unencrypted mobile internet traffic of Verizon subscribers – enabled Turn to re-associate a Verizon subscriber with his or her usage history. By so doing, according to Botkin, this further enabled Turn to “recreate[] cookies that consumers had previously deleted.” Or better yet: “Put another way, even when people used the tech equivalent of kerosene and machetes [to thwart zombies], Turn created zombies out of consumers’ deleted cookies.”
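
The mechanism Botkin describes might look something like the following sketch – hypothetical code, with the header name and profile store standing in as placeholders – showing how a persistent carrier-injected ID could be used to restore a tracking cookie the user had deleted:

```python
# Hypothetical sketch of "zombie cookie" resurrection via a carrier tracking
# header: because the injected ID persists across requests, a server can
# re-set a tracking cookie even after the user has deleted it.
PROFILE_STORE = {"uidh-42": {"tracking_id": "previously-deleted-cookie"}}

def handle_request(headers: dict, cookies: dict) -> dict:
    """Return the cookies to send back in the response."""
    response_cookies = dict(cookies)
    if "tracking_id" not in cookies:                      # user cleared cookies
        profile = PROFILE_STORE.get(headers.get("X-UIDH", ""))
        if profile:                                       # re-associate via header
            response_cookies["tracking_id"] = profile["tracking_id"]
    return response_cookies

print(handle_request({"X-UIDH": "uidh-42"}, {}))          # the cookie comes back
```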

What we like: We like Botkin’s zombie analogy, although not because we like zombies. We don’t. Like. Zombies. But we do think it’s a clever explanatory tool for an otherwise arcane issue.

*            *            *

Your Biggest Online Security Risk Is You

Geoffrey Fowler writes in The Wall Street Journal (here ($), with an even fuller version of the story available here via Dow Jones Newswires) about the latest in the world of phishing, that large category of online scams that, one way or another, has the common goals of accessing your data, your money or your life, or someone else’s who might be accessed through your unsuspecting gateway.

“If you’re sure you already know all about them, think again. Those grammatically challenged emails from overseas ‘pharmacies’ and Nigerian ‘princes’ are yesterday’s news. They’ve been replaced by techniques so insidious, they could leave any of us feeling like a sucker.”

Oren Falkowitz of Area 1 Security told Fowler that about 97% of all cyberattacks start with phishing. Phishing is a big deal.

Fowler writes of the constantly increasing sophistication of “phishermen” – yes, that’s a term – weakening the effectiveness of old common-sense precautions:

In the past, typos, odd graphics or weird email addresses gave away phishing messages, but now, it’s fairly easy for evildoers to spoof an email address or copy a design perfectly. Another old giveaway was the misfit web address at the top of your browser, along with the lack of a secure lock icon. But now, phishing campaigns sometimes run on secure websites, and confuse things with really long addresses, says James Pleger, security director at RiskIQ, which tracked 58 million phishing incidents in 2016.

What we like: Fowler is helpful with advice about newer precautions, including keeping web browser security features updated and employing 2-factor authentication wherever possible. We also like his admission of his own past victim-hood to phishing, via a malware attack. He’s not overly cheery about the prospects of stopping the bad guys, but he does give confidence to people willing to take a few extra regular precautions.

*            *            *

Don’t Friend My Friends: Nonsolicitation Agreements Should Account for Social Media Strategies

This is an employment story about former employees who signed agreements with their former employers restricting their solicitations of customers of their former employers. In the traditional nonsolicitation context, it wasn’t that hard to tell when a former employee went about trying to poach his or her former company’s business. Things have become trickier in the age of social media, when “friend”-ing, “like”-ing, or “following” a contact on Facebook, Twitter, Instagram or LinkedIn might or might not suggest nefarious related behavior.

Justin Giovannettone and Christina Von der Ahe of Orrick’s “Trade Secrets Watch” survey a nice representative handful of recent cases from federal and state courts on just such questions.

In one case, the former employee – now working for a competitor of his former employer – remained linked via LinkedIn with connections he made while at his former company. His subsequent action in inviting his contacts to “check out” his new employer’s updated website drew a lawsuit for violating his nonsolicitation agreement. For various reasons, the lawsuit failed, but of most interest was Giovannettone and Von der Ahe’s comment that “[t]he court also noted that the former employer did not request or require the former employee to ‘unlink’ with its customers after he left and, in fact, did not discuss his LinkedIn account with him at all.”

What we like: Giovannettone and Von der Ahe point out the inconsistencies in court opinions on this subject and, therefore, smartly recognize the takeaway for employers, namely to be specific about what’s expected of former employees. That may seem obvious, but for me it was surprising to learn that an employer could potentially – and enforceably – prevent a former employee from “friend”-ing on Facebook.

Read More

“Do Not Track” and Cookies – European Commission Proposes New ePrivacy Regulations

The European Commission recently proposed new regulations that will align privacy rules for electronic communications with the much-anticipated General Data Protection Regulation (GDPR) (the GDPR was fully adopted in May 2016 and goes into effect in May 2018). Referred to as the Regulation on Privacy and Electronic Communications or “ePrivacy” regulation, these final additions to the EU’s new data protection framework make a number of important changes, including expanding privacy protections to over-the-top applications (like WhatsApp and Skype), requiring consent before metadata can be processed, and providing additional restrictions on spam. But the provisions relating to “cookies” and tracking of consumers’ online activity are particularly interesting and applicable to a wide range of companies.

Cookies are small data files stored on a user’s computer or mobile device by a web browser. The files help websites remember information about the user and track a user’s online activity. Under the EU’s current ePrivacy Directive, a company must get a user’s specific consent before a cookie can be stored and accessed. While well-intentioned, this provision has caused frustration and resulted in consumers facing frequent pop-up windows (requesting consent) as they surf the Internet.
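
As a simple illustration of consent-gated cookies (a hypothetical sketch using Flask; cookie names and values are invented, and this is not a statement of what the Directive technically requires):

```python
# Hypothetical sketch: only set a non-essential cookie after the user consents.
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello")
    # Check a consent flag (here itself stored as a cookie) before setting
    # an analytics/tracking cookie on the user's browser.
    if request.cookies.get("cookie_consent") == "yes":
        resp.set_cookie("analytics_id", "abc123", max_age=60 * 60 * 24 * 30)
    return resp

if __name__ == "__main__":
    app.run()
```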

Read More

Circuits Weigh In on PII Under the VPPA

The Video Privacy Protection Act (VPPA) was enacted in 1988 in response to Robert Bork’s Supreme Court confirmation hearings before the Senate Judiciary Committee, during which his family’s video rental history was used to great effect and in excoriating detail. This was the age of brick-and-mortar video rental stores, well before the age of instant video streaming and on-demand content. Nonetheless, VPPA compliance is an important component of any privacy and data security program for online video-content providers, websites that host streaming videos, and others in the business of facilitating consumers’ viewing of streaming video.

Judicial application of the VPPA to online content has produced inconsistent results, including in how the statute’s definition of personally identifiable information (PII) – the disclosure of which triggers VPPA liability – has been interpreted. Under the VPPA, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3). Courts and commentators alike have noted that this definition is vague, particularly when applied to new technological situations, as it describes what counts as PII rather than providing an absolute definition. Specifically in the streaming video context, the dispute over the PII definition typically turns on whether a static identifier, like an internet protocol (IP) address or other similar identifier uniquely assigned to consumers, counts as PII under the VPPA.

Read More

Legal Issues in Ad Tech: Who Owns Marketing Performance Data?

Does a marketer own data related to the performance of its own marketing campaigns? It might surprise marketers to learn that such ownership isn’t automatic. Or, more broadly, who does own that data? A data rights clause in contracts with DSPs or agencies might state something like this:

“Client owns and retains all right, title and interest (including without limitation all intellectual property rights) in and to Client Data”,

… where “Client Data” is defined as “Client’s data files”. Or this:

“As between the Parties, Advertiser retains and shall have sole and exclusive ownership and Intellectual Property Rights in the … Performance Data”,

… where “Performance Data” means “campaign data related to the delivery and tracking of Advertiser’s digital advertising”.

Both clauses are vague, although the second is broader and more favorable to the marketer. In neither case are “data files” or “campaign data” defined with any particularity, and neither case includes any delivery obligation much less specifications for formatting, reporting or performance analytics. And even if data were provided by a vendor or agency, these other questions remain: What kind of data would be provided, how would it be provided, and how useful would the data be if it were provided?

Read More

Legal Issues in Ad Tech: Anonymized and De-Identified Data

Recently, in reviewing a contract with a demand-side platform (DSP), I came across this typical language in a “Data Ownership” section:

“All Performance Data shall be considered Confidential Information of Advertiser, provided that [VENDOR] may use such Performance Data … to create anonymized aggregated data, industry reports, and/or statistics (“Aggregated Data”) for its own commercial purposes, provided that Aggregated Data will not contain any information that identifies the Advertiser or any of its customers and does not contain the Confidential Information of the Advertiser or any intellectual property of the Advertiser or its customers.” (emphasis added).

I was curious what makes data “anonymized”, and I was even more curious whether the term was casually and improperly used. I’ve seen the same language alternately used substituting “de-identified” for “anonymized”. Looking into this opened a can of worms ….

What are Anonymized and De-Identified Data – and Are They the Same?

Here’s how Gregory Nelson described it in his casually titled “Practical Implications of Sharing Data: A Primer on Data Privacy, Anonymization, and De-Identification”:

“De-identification of data refers to the process of removing or obscuring any personally identifiable information from individual records in a way that minimizes the risk of unintended disclosure of the identity of individuals and information about them. Anonymization of data refers to the process of data de-identification that produces data where individual records cannot be linked back to an original as they do not include the required translation variables to do so.” (emphasis added)

Or in other words, both methods have the same purpose and both technically remove personally identifiable information (PII) from the data set. But while de-identified data can be re-identified, anonymized data cannot. To use a simple example, if a spreadsheet column containing Social Security numbers is removed from a dataset and discarded, the data would be “anonymized”.
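
In code, that spreadsheet example might look like this minimal pandas sketch (column names and values are invented):

```python
# Illustrative sketch: discard the column of direct identifiers (SSNs).
import pandas as pd

df = pd.DataFrame({
    "ssn":       ["123-45-6789", "987-65-4321"],
    "zip_code":  ["20001", "20002"],
    "diagnosis": ["J45.909", "E11.9"],
})

deidentified = df.drop(columns=["ssn"])   # the SSNs are gone for good
print(deidentified)
```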

But first … what aspects or portions of data must be removed in order to either de-identify or anonymize a set?

But What Makes Data “De-Identified” or “Anonymous” in the First Place?

Daniel Solove has written that, under the European Union’s Data Directive 95/46/EC, “Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.” This makes things complicated in a hurry. After all, in the above example where Social Security numbers are removed, remaining columns might include normally non-PII information such as zip codes, birth dates, or gender. But the Harvard researchers Olivia Angiuli, Joe Blitzstein, and Jim Waldo show how even these three data points in an otherwise “de-identified” data set (the “medical data” in the image below) can be used to re-identify individuals when combined with an outside data source that shares those same points (the “voter list” in the image below):

Data Sets Overlap Chart

(Source: How to De-Identify Your Data, by Olivia Angiuli, Joe Blitzstein, and Jim Waldo, http://queue.acm.org/detail.cfm?id=2838930)
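
A toy version of that linkage (invented values, with two pandas DataFrames standing in for the “medical data” and “voter list” sets) shows how the join works:

```python
# Illustrative sketch: re-identification by joining on shared quasi-identifiers.
import pandas as pd

medical = pd.DataFrame({          # "de-identified" medical data: no names
    "zip_code":   ["20001"],
    "birth_date": ["1980-05-01"],
    "gender":     ["F"],
    "diagnosis":  ["J45.909"],
})

voters = pd.DataFrame({           # public voter list: same fields, plus names
    "name":       ["Jane Doe"],
    "zip_code":   ["20001"],
    "birth_date": ["1980-05-01"],
    "gender":     ["F"],
})

reidentified = medical.merge(voters, on=["zip_code", "birth_date", "gender"])
print(reidentified[["name", "diagnosis"]])   # name re-attached to diagnosis
```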

That helps explain the Advocate General’s opinion recently issued in the European Union Court of Justice (ECJ), finding that dynamic IP addresses can, under certain circumstances, be “personal data” under the European Union’s Data Directive 95/46/EC. The case turns on the same point made by Daniel Solove cited above, namely the scope of the “personal data” definition, including this formulation in Recital 26 of the Directive:

“(26) … whereas, to determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person …”

There was inconsistency among the EU countries on the level of proactivity required of a data controller in order to render an IP address “personal data”.  Consider, for example, the United Kingdom’s definition of “personal data”: “data which relate to a living individual who can be identified – (a) from those data, or (b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller” (emphasis added). Not so in Germany and, according to a White & Case report on the ECJ case, not so according to the Advocate General, whose position was that “the mere possibility that such a request [for further identifying information] could be made is sufficient.”

Which then circles things back to the question at the top, namely: Are Anonymized and De-Identified Data the Same? They are not the same. That part is easy to say. The harder part is determining which is which, especially with the ease of re-identifying presumably scrubbed data sets. More on this topic shortly.

Read More

Please Don’t Take My Privacy (Why Would Anybody Really Want It?)

Legal issues with privacy in social media stem from the nature of social media – an inherently communicative and open medium. A cliché holds that in social media there is no expectation of privacy, because the very idea of privacy is inconsistent with a “social” medium. Scott McNealy of Sun Microsystems reportedly made this point with his famous aphorism: “You have zero privacy anyway. Get over it.”

But in evidence law, there’s a rule barring assumption of facts not in evidence. In social media, by analogy: Where was it proven that we cannot find privacy in a new communications medium, even one as public as the internet and social media?

Let’s go back to basic principles. Everyone talks about how privacy has to “adapt” to a new technological paradigm. I agree that technology and custom require adaptation by a legal system steeped in common law principles with foundations from the 13th century. But I do not agree that the legal system isn’t up to the task.

All you really need to do is take a wider look at the law.

Privacy writers talk about the law of appropriation in privacy. The law of appropriation varies from state to state, though it is a fairly established aspect of privacy law.

Read More

Privacy: Consent to Collecting Personal Information

Gonzalo Mon writes in Mashable that “Although various bills pending in Congress would require companies to get consent before collecting certain types of information, outside of COPPA, getting consent is not a uniformly applicable legal requirement yet. Nevertheless, there are some types of information (such as location-based data) for which getting consent may be a good idea.  Moreover, it may be advisable to get consent at the point of collection when sensitive personal data is in play.”

First, what current requirements – laws, agency regulations and quasi-laws – require obtaining consent, even if not “uniformly applicable”?

1. Government Enforcement.  The Federal Trade Commission’s November 2011 consent decree with Facebook required express user consent to sharing of nonpublic user information that “materially exceeds” the user’s privacy settings.  The FTC was acting under its authority under Section 5 of the FTC Act against “unfair and deceptive trade practices”, an authority the FTC has liberally used in enforcement actions involving not just claimed breaches of privacy policies but also data security cases involving the handling of personal data without adequate security.

2. User Expectations Established by Actual Practice.  The mobile space offers some of the most progressive (and aggressive) examples of privacy rights seemingly established by practice rather than stated policy.  For example, on the PrivacyChoice blog, the CEO of PlaceIQ explained that “Apple and Android have already established user expectations about [obtaining] consent.  Location-based services in the operating system provide very precise location information, but only through a user-consent framework built-in to the OS.  This creates a baseline user expectation about consent for precise location targeting.”  (emphasis added)

Read More