MediaTech Law

By MIRSKY & COMPANY, PLLC

Change Your Password Every [Blank] Days!

Takeaways from Microsoft’s announcement in May that it would be “Dropping the password-expiration policies that require periodic password changes” in baseline settings for Windows 10 and Windows Server:

First: The major security problem with passwords – the most major of the major problems – is not a failure to change passwords often enough. Rather, it is choosing weak passwords. Making passwords much harder for supercomputers (and humans, too) to guess – for example, by requiring a minimum of 11 characters, randomly generated, using both upper- and lower-case letters, symbols and numbers – is much more “real-world security” (in Microsoft’s formulation). As Dan Goodin recently wrote in Ars Technica, “Even when users attempt to obfuscate their easy-to-remember passwords – say by adding letters or symbols to the words, or by substituting 0’s for the o’s or 1’s for L’s – hackers can use programming rules that modify the dictionary entries.”
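For illustration only, here is a minimal sketch in Python of a generator along those lines. It uses the standard library’s secrets module; the 11-character minimum and the mix of character classes mirror the example above, not any published Microsoft requirement.

```python
# Minimal sketch (illustration only): generate a random password of at least
# 11 characters drawn from upper- and lower-case letters, digits and symbols,
# using the cryptographically secure `secrets` module from the standard library.
import secrets
import string

def generate_password(length: int = 11) -> str:
    if length < 11:
        raise ValueError("use at least 11 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep drawing until all four character classes are represented.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

if __name__ == "__main__":
    print(generate_password())
```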

Read More

Confusion in “Cookie”-Land: Consent Requirements for Placing Cookies under GDPR and ePrivacy Directive

Must a website get consent from a user before placing cookies in the user’s browser?  The EU’s ePrivacy Directive says that yes, consent from the user is required prior to placement of most cookies (regardless of whether the cookies track personal data).  But under the General Data Protection Regulation (GDPR), consent is only one of several “lawful bases” available to justify collection of personal data.  If cookies are viewed as “personal data” under the GDPR – specifically, the placement of cookies in a user’s browser – must a website still get consent in order to place cookies, or instead can the site rely on one of those other “lawful bases” for dropping cookies?

First, are cookies “personal data” governed by the GDPR?  Or to be more precise, do cookies that may identify individuals fall under the GDPR?  This blog says yes: “when cookies can identify an individual, it is considered personal data.  … While not all cookies are used in a way that could identify users, the majority (and the most useful ones to the website owners) are, and will therefore be subject to the GDPR.”  This blog says no: “cookie usage and its related consent acquisition are not governed by the GDPR, they are instead governed by the ePrivacy Directive.” (emphasis added)  Similarly with this blog.
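For readers less familiar with the mechanics, “placing” a cookie simply means that a site’s server asks the browser to store a value and send it back on later visits. Here is a minimal sketch using only Python’s standard library; the cookie name and value are hypothetical, and real tracking cookies typically carry a unique identifier capable of singling out a returning visitor.

```python
# Minimal sketch of cookie placement at the HTTP level: the server sends a
# Set-Cookie response header, and the browser stores and returns that value
# with subsequent requests to the same site.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CookieHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Hypothetical cookie name and value, kept for up to one year.
        self.send_header("Set-Cookie", "visitor_id=abc123; Path=/; Max-Age=31536000")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>Cookie placed.</p>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CookieHandler).serve_forever()
```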

Read More

Encrypted Data: Still “Personal Data” under GDPR?

An interesting question is whether encrypted personal data is still “personal data” for purposes of the European Union’s General Data Protection Regulation (GDPR), and therefore whether processing of that data is subject to the GDPR’s library of compliance obligations. The answer depends on what the encryption actually accomplishes: it is not enough simply to claim that encrypted data is “anonymized”, and it is therefore inaccurate to conclude that the data no longer relates to an “identified or identifiable natural person” within the meaning of the personal data definition.

If an organization encrypts data in its care, with the encryption thereby rendering the data no longer “identified”, is the data still “identifiable”? Maybe. If it is neither identified nor identifiable, then the data is no longer “personal data”.

First, what is encryption?  Josh Gresham writes on IAPP’s blog that encryption involves a party “tak[ing] data and us[ing] an ‘encryption key’ to encode it so that it appears unintelligible.  The recipient uses the encryption key to make it readable again.  The encryption key itself is a collection of algorithms that are designed to be completely unique, and without the encryption key, the data cannot be accessed.  As long as the key is well designed, the encrypted data is safe.” (emphasis added)
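To make that description concrete, here is a minimal sketch of symmetric encryption using the third-party Python cryptography package. It is an illustration only; real deployments differ in algorithms and key management. On this sketch, whether the individual remains “identifiable” turns largely on who holds, or could obtain, the key.

```python
# Minimal sketch of symmetric encryption: the same key encrypts the data and
# decrypts it again; without the key, the ciphertext appears unintelligible.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # the "encryption key"
cipher = Fernet(key)

# Hypothetical personal data.
ciphertext = cipher.encrypt(b"Jane Doe, jane@example.com")
print(ciphertext)                      # unreadable without the key

# The recipient, holding the same key, recovers the original data.
plaintext = Fernet(key).decrypt(ciphertext)
print(plaintext)
```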

Read More

Legal Considerations of Agile Development

An interesting change has occurred in software development projects over the past several years: the practice of Agile software development has overtaken the traditional Waterfall model. Rooted in the 2001 Agile Manifesto, Agile development favors greater interaction between technical and business teams, resulting in a more fluid development lifecycle. That is in contrast to the Waterfall approach, which operates on the basis of clearly defined stages and objectives within the project.

In the past, with a Waterfall approach, a software development project would be scoped out in full, with every detail and eventuality planned out, and with a completion date identified. So when asked “When is the project launching?”, a project manager or stakeholder would confidently reply with a set date, possibly months or years into the future.

With Agile development, the understanding is that not every detail can be mapped out, and requirements may change as the project advances. Agile allows for shifting of goals and deliverables as requirements shift during the development lifecycle. For that reason, work is done in small increments – referred to as sprints – with each sprint resulting in some working piece of code or “minimum viable product” (MVP). So when asked “When is the project launching?”, a project manager or stakeholder will likely not have a firm date, and instead reply “We expect a working version of this piece of the project by the end of the next two-week sprint.”

Read More

Dataveillance Protection: The E.U.-U.S. Privacy Shield

For many years, technology outpaced policy when it came to standards and protections around ownership of and access to personal data. Privacy policies are not set by governments but rather by technology companies that created the digital world as it is experienced today. Many if not all of the dominant players in this space are American technology companies that include Alphabet (i.e. Google), Apple, Amazon, Facebook and Microsoft. These companies have more say about a user’s online life than any individual local, state or national government.

Read More

Legal Issues in Ad Tech: IP Addresses Are Personal Data, Says the EU (well … sort of)

Much has been written in the past 2 weeks about the U.S. Presidential election. Time now for a diversion into the exciting world of data privacy and “personal data”. Because in the highly refined world of privacy and data security law, important news actually happened in the past few weeks. Yes, I speak breathlessly of the European Court of Justice (ECJ) decision on October 19th that IP (internet protocol) addresses are “Personal Data” for purposes of the EU Data Directive. This is bigly news (in the data privacy world, at least).

First, what the decision actually said, which leads immediately into a riveting discussion of the distinction between static and dynamic IP addresses.

The decision ruled on a case brought by a German politician named Patrick Breyer, who sought an injunction preventing a website and its owner – here, publicly available websites operated by the German government – from collecting and storing his IP address when he lawfully accessed the sites. Breyer claimed that the government’s actions were in violation of his privacy rights under the EU Directive 95/46/EC – The Data Protection Directive (Data Protection Directive). As the ECJ reported in its opinion, the government websites “register and store the IP addresses of visitors to those sites, together with the date and time when a site was accessed, with the aim of preventing cybernetic attacks and to make it possible to bring criminal proceedings.”

The case is Patrick Breyer v Bundesrepublik Deutschland, Case C-582/14, and the ECJ’s opinion was published on October 19th.

Read More

Federal Judge Tosses Stingray Evidence

In a first, a federal judge ruled that evidence found through the use of a stingray device is inadmissible. Reuters reports on the case, United States v. Raymond Lambis, which involved a man targeted in a US Drug Enforcement Administration (DEA) investigation. The DEA used a stingray, a surveillance tool used to reveal a phone’s location, to identify Raymond Lambis’ apartment as the most likely location of a cell phone identified during a drug trafficking probe. Upon searching the apartment, the DEA discovered a kilogram of cocaine.

According to Ars Technica, the DEA sought a warrant seeking location information and cell-site data for a particular 646 area code phone number. The warrant was based on communications obtained from a wiretap order that suggested illegal drug activity. With the cell-site location information, the DEA was able to determine the general vicinity of the targeted cell phone, which pointed to the intersection of Broadway and 177th Street in Manhattan. The DEA then used a stingray device, which mimics a cell phone tower and forces cell phones in the area to transmit “pings” back to the device. This enabled law enforcement to pinpoint the particular phone’s location.

Read More

Legal Issues in Ad Tech: Who Owns Marketing Performance Data?

Does a marketer own the data related to performance of its own marketing campaigns? It might surprise marketers to learn that such ownership isn’t automatic. Or, more broadly, who does own that data? A data rights clause in contracts with DSPs or agencies might state something like this:

“Client owns and retains all right, title and interest (including without limitation all intellectual property rights) in and to Client Data”,

… where “Client Data” is defined as “Client’s data files”. Or this:

“As between the Parties, Advertiser retains and shall have sole and exclusive ownership and Intellectual Property Rights in the … Performance Data”,

… where “Performance Data” means “campaign data related to the delivery and tracking of Advertiser’s digital advertising”.

Both clauses are vague, although the second is broader and more favorable to the marketer. In neither case are “data files” or “campaign data” defined with any particularity, and neither case includes any delivery obligation much less specifications for formatting, reporting or performance analytics. And even if data were provided by a vendor or agency, these other questions remain: What kind of data would be provided, how would it be provided, and how useful would the data be if it were provided?

Read More

Legal Issues in Ad Tech: Anonymized and De-Identified Data

Recently, in reviewing a contract with a demand-side platform (DSP), I came across this typical language in a “Data Ownership” section:

“All Performance Data shall be considered Confidential Information of Advertiser, provided that [VENDOR] may use such Performance Data … to create anonymized aggregated data, industry reports, and/or statistics (“Aggregated Data”) for its own commercial purposes, provided that Aggregated Data will not contain any information that identifies the Advertiser or any of its customers and does not contain the Confidential Information of the Advertiser or any intellectual property of the Advertiser or its customers.” (emphasis added).
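As a rough illustration of what that clause contemplates (hypothetical column names and values, using the pandas library), “aggregated data” might be nothing more than row-level campaign records rolled up into statistics, with the advertiser and user identifiers dropped:

```python
# Minimal sketch: roll row-level performance data up into aggregate statistics,
# dropping the columns that identify the advertiser or individual users.
# Column names and values are hypothetical. Requires pandas (pip install pandas).
import pandas as pd

performance_data = pd.DataFrame({
    "advertiser":  ["Acme", "Acme", "Acme", "Acme"],
    "user_id":     ["u1", "u2", "u3", "u4"],
    "ad_format":   ["video", "video", "banner", "banner"],
    "impressions": [3, 5, 2, 7],
    "clicks":      [1, 0, 0, 2],
})

aggregated_data = (
    performance_data
    .drop(columns=["advertiser", "user_id"])   # remove identifying columns
    .groupby("ad_format")
    .sum()                                     # statistics only, per ad format
)
print(aggregated_data)
```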

I was curious what makes data “anonymized”, and I was even more curious whether the term was casually and improperly used. I’ve seen the same language alternately used substituting “de-identified” for “anonymized”. Looking into this opened a can of worms ….

What are Anonymized and De-Identified Data – and Are They the Same?

Here’s how Gregory Nelson described it in his casually titled “Practical Implications of Sharing Data: A Primer on Data Privacy, Anonymization, and De-Identification”:

“De-identification of data refers to the process of removing or obscuring any personally identifiable information from individual records in a way that minimizes the risk of unintended disclosure of the identity of individuals and information about them. Anonymization of data refers to the process of data de-identification that produces data where individual records cannot be linked back to an original as they do not include the required translation variables to do so.” (emphasis added)

In other words, both methods have the same purpose and both technically remove personally identifiable information (PII) from the data set. But while de-identified data can be re-identified, anonymized data cannot be. To use a simple example, if a column containing Social Security numbers is removed from an Excel spreadsheet and discarded, the remaining data would be “anonymized”.
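A minimal sketch of that distinction in code (hypothetical column names, using the pandas library): de-identification keeps a route back to the identifiers, while anonymization discards it.

```python
# Minimal sketch of de-identification vs. anonymization as described above.
# Column names and values are hypothetical. Requires pandas (pip install pandas).
import pandas as pd

records = pd.DataFrame({
    "ssn":       ["123-45-6789", "987-65-4321"],   # direct identifier
    "zip_code":  ["20001", "20002"],
    "diagnosis": ["flu", "asthma"],
})

# De-identified: the identifier is removed from the working data set, but a
# lookup table (the "translation variables") is retained, so the rows can be
# re-linked to individuals later.
lookup_table  = records["ssn"]
de_identified = records.drop(columns=["ssn"])

# Anonymized (in Nelson's sense): the identifier column is removed and no
# lookup table is kept, so the rows cannot be linked back to the originals.
anonymized = records.drop(columns=["ssn"])

print(de_identified)
print(anonymized)
```

Notice that the two resulting tables look identical; the difference lies entirely in whether the linking key survives somewhere.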

But first … what aspects or portions of data must be removed in order to either de-identify or anonymize a set?

But What Makes Data “De-Identified” or “Anonymous” in the First Place?

Daniel Solove has written that, under the European Union’s Data Directive 95/46/EC, “Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.” This makes things complicated in a hurry. After all, in the above example where Social Security numbers are removed, the remaining columns might include normally non-PII information such as zip code, birth date and gender. But the Harvard researchers Olivia Angiuli, Joe Blitzstein, and Jim Waldo show how even these 3 data points in an otherwise “de-identified” data set (i.e. the “medical data” in the image below) can be used to re-identify individuals when combined with an outside data source that shares these same points (i.e. the “voter list” in the image below):

[Image: chart showing the overlap between a de-identified “medical data” set and a “voter list”. Source: How to De-Identify Your Data, by Olivia Angiuli, Joe Blitzstein, and Jim Waldo, http://queue.acm.org/detail.cfm?id=2838930]
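A minimal sketch of that re-identification, as a join on the shared quasi-identifiers (the column names and values are hypothetical):

```python
# Minimal sketch of re-identification by linkage: a "de-identified" medical
# data set and a voter list share zip code, birth date and gender; joining on
# those columns reattaches names to diagnoses. Values are hypothetical.
# Requires pandas (pip install pandas).
import pandas as pd

medical_data = pd.DataFrame({
    "zip_code":   ["20001", "20002"],
    "birth_date": ["1970-01-01", "1985-06-15"],
    "gender":     ["F", "M"],
    "diagnosis":  ["flu", "asthma"],
})

voter_list = pd.DataFrame({
    "name":       ["Jane Doe", "John Roe"],
    "zip_code":   ["20001", "20002"],
    "birth_date": ["1970-01-01", "1985-06-15"],
    "gender":     ["F", "M"],
})

re_identified = medical_data.merge(voter_list, on=["zip_code", "birth_date", "gender"])
print(re_identified)   # names now sit alongside diagnoses
```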

That helps explain the Advocate General opinion recently issued to the European Court of Justice (ECJ), finding that dynamic IP addresses can, under certain circumstances, be “personal data” under the European Union’s Data Directive 95/46/EC. The case turns on the same point made by Daniel Solove above, namely the reach of the “personal data” definition, including this formulation in Recital 26 of the Directive:

“(26) … whereas, to determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person …”

There was inconsistency among the EU countries on the level of pro-activity required of a data controller before an IP address becomes “personal data”. Consider, for example, the United Kingdom’s definition of “personal data”: “data which relate to a living individual who can be identified – (a) from those data, or (b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller” (emphasis added). Not so in Germany and, according to a White & Case report on the ECJ case, not so according to the Advocate General, whose position was that “the mere possibility that such a request [for further identifying information] could be made is sufficient.”

Which then circles things back to the question at the top, namely: Are Anonymized and De-Identified Data the Same? They are not the same. That part is easy to say. The harder part is determining which is which, especially with the ease of re-identifying presumably scrubbed data sets. More on this topic shortly.

Read More

Protecting Children’s Privacy in the Age of Siri, Echo, Google and Cortana

“OK Google”, “Hey Cortana”, “Siri…”, “Alexa,…”

These phrases are more and more common as artificial intelligence (AI) becomes mainstream. They serve as the default commands that kick off the myriad services offered by Google, Microsoft, Apple and Amazon respectively, and they are at the heart of the explosion of voice-activated search and services now available through computers, phones, watches, and stand-alone devices. Once activated, these devices record the statements being made and digitally process and analyze them in the cloud. The service then returns the search results to the device in the form of answers, helpful suggestions, or an array of other responses.

A recent investigation by the UK’s Guardian newspaper, however, claims these devices likely run afoul of the U.S. Children’s Online Privacy Protection Act (COPPA), which regulates the collection and use of personal information from anyone younger than 13. If true, the companies behind these services could face multimillion-dollar fines.

COPPA details, in part, responsibilities of an operator to protect children’s online privacy and safety, and when and how to seek verifiable consent from a parent or guardian. COPPA also includes restrictions on marketing to children under the age of 13. The purpose of COPPA is to provide protection to children when they are online or interacting with internet-enabled devices, and to prevent the rampant collection of their sensitive personal data and information. The Federal Trade Commission (FTC) is the agency tasked with monitoring and enforcing COPPA, and encourages industry self-regulation.

The Guardian investigation states that voice-enabled devices like the Amazon Echo, Google Home and Apple’s Siri are recording and storing data provided by children interacting with the devices in their homes. While the investigation concluded that these devices are likely collecting information from family members under the age of 13, it stops short of concluding whether these services primarily target children under 13 as their audience – a key determining factor under COPPA. Furthermore, according to the FTC’s own COPPA FAQ page, even if a child provides personal information to a general audience online service, so long as the service has no actual knowledge that the particular individual is a child, COPPA is not triggered.

While the details of COPPA will need to be refined and re-defined in the era of always-on digital assistants and AI, the harsh FTC crackdown the Guardian anticipates is unlikely to happen, and the potentially large fines are unlikely to materialize. Rather, what will likely occur is that the FTC will provide guidance and recommendations to such services, allowing them to modify their practices and stay within the bounds of the law, so long as they are acting in good faith. For example, companies like Amazon, Apple and Google could update their services to request, on installation, the age and number of individuals in the home, paired with an update to the terms of service requesting parental permission for the use of data provided by children under 13. For children outside of the immediate family who access the device, the services can claim they lacked actual knowledge that a child interacted with the service, again satisfying COPPA’s requirements.

Read More

Can Social Media Use Save a Trademark?

Maintaining a social media profile has become standard practice for most businesses advertising their services. Savvy trademark owners may also know that they must “use” their mark in order to establish trademark rights – meaning that the mark must be actually used in connection with providing a good or service. But what type of use is sufficient? Is simply using a mark on a Facebook or Twitter profile enough to show “use” of the mark for trademark purposes? A Trademark Trial and Appeal Board (TTAB) decision says no, but offers useful guidance to trademark owners on using “analogous” trademark use to establish trademark rights. The decision is The PNC Financial Services Group, Inc. v. Keith Alexander Ashe dba Spendology and Spendology LLC.

Spendology attempted to register the mark SPENDOLOGY for web-based personal finance tools. PNC Financial Services Group (PNC), which used the same mark for an “online money management tool,” opposed Spendology’s application, claiming that PNC had used the mark first. Both parties filed motions for summary judgment on likelihood of confusion and priority.

Read More