MediaTech Law


Dataveillance Protection: The E.U.-U.S. Privacy Shield

For many years, technology outpaced policy when it came to standards and protections around ownership of and access to personal data. Privacy policies are not set by governments but rather by technology companies that created the digital world as it is experienced today. Many if not all of the dominant players in this space are American technology companies that include Alphabet (i.e. Google), Apple, Amazon, Facebook and Microsoft. These companies have more say about a user’s online life than any individual local, state or national government.

Read More

Legal Issues in Ad Tech: IP Addresses Are Personal Data, Says the EU (well … sort of)

Much has been written in the past 2 weeks about the U.S. Presidential election. Time now for a diversion into the exciting world of data privacy and “personal data”. Because in the highly refined world of privacy and data security law, important news actually happened in the past few weeks. Yes, I speak breathlessly of the European Court of Justice (ECJ) decision on October 19th that IP (internet protocol) addresses are “Personal Data” for purposes of the EU Data Directive. This is bigly news (in the data privacy world, at least).

First, what the decision actually said, which leads immediately into a riveting discussion of the distinction between static and dynamic IP addresses.

The decision ruled on a case brought by a German politician named Patrick Breyer, who sought an injunction preventing a website and its owner – here, publicly available websites operated by the German government – from collecting and storing his IP address when he lawfully accessed the sites. Breyer claimed that the government’s actions were in violation of his privacy rights under the EU Directive 95/46/EC – The Data Protection Directive (Data Protection Directive). As the ECJ reported in its opinion, the government websites “register and store the IP addresses of visitors to those sites, together with the date and time when a site was accessed, with the aim of preventing cybernetic attacks and to make it possible to bring criminal proceedings.”

The case is Patrick Breyer v Bundesrepublik Deutschland, Case C-582/14, and the ECJ’s opinion was published on October 19th.

Read More

Federal Judge Tosses Stingray Evidence

In a first, a federal judge ruled that evidence found through the use of a stingray device is inadmissible. Reuters reports on the case, United States v. Raymond Lambis, which involved a man targeted in a US Drug Enforcement Administration (DEA) investigation. The DEA used a stingray, a surveillance tool used to reveal a phone’s location, to identify Raymond Lambis’ apartment as the most likely location of a cell phone identified during a drug trafficking probe. Upon searching the apartment, the DEA discovered a kilogram of cocaine.

According to ArsTechnica, the DEA sought a warrant seeking location information and cell-site data for a particular 646 area code phone number. The warrant was based on communications obtained from a wiretap order that suggested illegal drug activity. With the information provided by the cell-site location, the DEA was able to determine the general vicinity of the targeted cell phone, which pointed to the intersection of Broadway and 177th streets in Manhattan. The DEA then used a stingray device, which mimics a cell phone tower and forces cell phones in the area to transmit “pings” back to the device. This enabled law enforcement to pinpoint a particular phone’s location.

Read More

Legal Issues in Ad Tech: Who Owns Marketing Performance Data?

Does a marketer own data related to the performance of its own marketing campaigns? It might surprise marketers to know that ownership of that data isn’t automatic. Or, more broadly, who does own that data? A data rights clause in contracts with DSPs or agencies might state something like this:

“Client owns and retains all right, title and interest (including without limitation all intellectual property rights) in and to Client Data”,

… where “Client Data” is defined as “Client’s data files”. Or this:

“As between the Parties, Advertiser retains and shall have sole and exclusive ownership and Intellectual Property Rights in the … Performance Data”,

… where “Performance Data” means “campaign data related to the delivery and tracking of Advertiser’s digital advertising”.

Both clauses are vague, although the second is broader and more favorable to the marketer. In neither case are “data files” or “campaign data” defined with any particularity, and neither clause includes any delivery obligation, much less specifications for formatting, reporting or performance analytics. And even if data were provided by a vendor or agency, other questions remain: what kind of data would be provided, in what form, and how useful would it be?

Read More

Legal Issues in Ad Tech: Anonymized and De-Identified Data

Recently, in reviewing a contract with a demand-side platform (DSP), I came across this typical language in a “Data Ownership” section:

“All Performance Data shall be considered Confidential Information of Advertiser, provided that [VENDOR] may use such Performance Data … to create anonymized aggregated data, industry reports, and/or statistics (“Aggregated Data”) for its own commercial purposes, provided that Aggregated Data will not contain any information that identifies the Advertiser or any of its customers and does not contain the Confidential Information of the Advertiser or any intellectual property of the Advertiser or its customers.” (emphasis added).

I was curious what makes data “anonymized”, and I was even more curious whether the term was casually and improperly used. I’ve seen the same language alternately used substituting “de-identified” for “anonymized”. Looking into this opened a can of worms ….

What are Anonymized and De-Identified Data – and Are They the Same?

Here’s how Gregory Nelson described it in his casually titled “Practical Implications of Sharing Data: A Primer on Data Privacy, Anonymization, and De-Identification”:

“De-identification of data refers to the process of removing or obscuring any personally identifiable information from individual records in a way that minimizes the risk of unintended disclosure of the identity of individuals and information about them. Anonymization of data refers to the process of data de-identification that produces data where individual records cannot be linked back to an original as they do not include the required translation variables to do so.” (emphasis added)

In other words, both methods have the same purpose, and both technically remove personally identifiable information (PII) from the data set. But while de-identified data can be re-identified, anonymized data cannot be. To use a simple example, if a column of an Excel spreadsheet containing Social Security numbers is removed from a dataset and discarded, the data would be “anonymized”.
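The distinction can be made concrete in a few lines of code. The sketch below uses made-up records (the field names and pseudonym scheme are illustrative, not drawn from any statute or standard): de-identification swaps the identifier for a pseudonym but keeps a translation table, while anonymization discards the identifier outright, leaving nothing to link back with.

```python
# Illustrative only: hypothetical records showing de-identification
# (reversible via a key table) versus anonymization (irreversible).

records = [
    {"ssn": "123-45-6789", "zip": "20036", "diagnosis": "flu"},
    {"ssn": "987-65-4321", "zip": "20007", "diagnosis": "asthma"},
]

def de_identify(rows):
    """Replace SSNs with opaque IDs, but KEEP the translation table.
    Anyone holding key_table can re-identify the rows."""
    key_table = {}
    out = []
    for i, row in enumerate(rows):
        pseudo = f"P{i:04d}"
        key_table[pseudo] = row["ssn"]          # the "translation variable"
        clean = {k: v for k, v in row.items() if k != "ssn"}
        clean["id"] = pseudo
        out.append(clean)
    return out, key_table

def anonymize(rows):
    """Drop the SSN column outright and keep no translation table,
    so the records cannot be linked back to the originals."""
    return [{k: v for k, v in row.items() if k != "ssn"} for row in rows]
```

The only difference between the two functions is whether `key_table` survives; that single artifact is what makes a data set re-identifiable.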

But first … what aspects or portions of data must be removed in order to either de-identify or anonymize a set?

But What Makes Data “De-Identified” or “Anonymous” in the First Place?

Daniel Solove has written that, under the European Union’s Data Directive 95/46/EC, “Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.” This makes things complicated in a hurry. After all, in the above example where Social Security numbers are removed, remaining columns might include normally non-PII information such as zip code, birth date or gender. But the Harvard researchers Olivia Angiuli, Joe Blitzstein, and Jim Waldo show how even these three data points in an otherwise “de-identified” data set (i.e. “medical data” in the image below) can be used to re-identify individuals when combined with an outside data source that shares those same points (i.e. “voter list” in the image below):

[Chart: overlapping data sets, in which the “medical data” set and the public “voter list” share zip code, birth date and sex]

(Source: How to De-Identify Your Data, by Olivia Angiuli, Joe Blitzstein, and Jim Waldo)
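The re-identification attack shown in the chart can be sketched in a few lines. The records below are hypothetical (names, values and field names invented for illustration): joining the “de-identified” medical set to a public voter list on the shared quasi-identifiers restores the names.

```python
# Illustrative only: re-identifying a "de-identified" data set by
# joining it with a public record on shared quasi-identifiers.

medical = [  # names removed, so this looks "de-identified"
    {"zip": "20036", "birth_date": "1961-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "20007", "birth_date": "1985-02-02", "sex": "M", "diagnosis": "asthma"},
]
voter_list = [  # public record, includes names
    {"name": "Jane Doe", "zip": "20036", "birth_date": "1961-07-31", "sex": "F"},
]

QUASI_IDS = ("zip", "birth_date", "sex")

def re_identify(medical_rows, voters):
    """Attach a name to every medical row whose quasi-identifiers
    match exactly one entry in the voter list."""
    index = {tuple(v[q] for q in QUASI_IDS): v["name"] for v in voters}
    matches = []
    for row in medical_rows:
        key = tuple(row[q] for q in QUASI_IDS)
        if key in index:
            matches.append({**row, "name": index[key]})
    return matches
```

One matching row is enough: the combination of zip code, birth date and sex is unique for a large share of the population, which is exactly why the remaining “non-PII” columns can still be PII in Solove’s sense.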

That helps explain the Advocate General’s opinion recently issued in a case before the European Union Court of Justice (ECJ), finding that dynamic IP addresses can, under certain circumstances, be “personal data” under the European Union’s Data Directive 95/46/EC. Those circumstances arise, in short, where the operator of a website has legal means of obtaining from a third party (such as an internet service provider) the additional information needed to identify the visitor. The case involves interpretation of the same point made by Daniel Solove cited above, namely discerning the “personal data” definition, including this formulation in Recital 26 of the Directive:

“(26) … whereas, to determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person …”

There was inconsistency among the EU countries on the level of pro-activity required of a data controller before an IP address becomes “personal data”. So, for example, the United Kingdom defines “personal data” as “data which relate to a living individual who can be identified – (a) from those data, or (b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller” (emphasis added). Not so in Germany and, according to a White & Case report on the ECJ case, not so according to the Advocate General, whose position was that “the mere possibility that such a request [for further identifying information] could be made is sufficient.”

Which then circles things back to the question at the top, namely: Are Anonymized and De-Identified Data the Same? They are not the same. That part is easy to say. The harder part is determining which is which, especially with the ease of re-identifying presumably scrubbed data sets. More on this topic shortly.

Read More

Protecting Children’s Privacy in the Age of Siri, Echo, Google and Cortana

“OK Google”, “Hey Cortana”, “Siri…”, “Alexa,…”

These statements are more and more common as artificial intelligence (AI) becomes mainstream. They serve as the default statements that kick off the myriad of services offered by Google, Microsoft, Apple and Amazon respectively, and are at the heart of the explosion of voice-activated search and services now available through computers, phones, watches, and stand-alone devices. Once activated, these devices record the statements being made and digitally process and analyze them in the cloud. The service then returns the search results to the device in the form of answers, helpful suggestions, or an array of other responses.

A recent investigation by the UK’s Guardian newspaper, however, claims these devices likely run afoul of the U.S. Children’s Online Privacy Protection Act (COPPA), which regulates the collection and use of personal information from anyone younger than 13. If true, the companies behind these services could face multimillion-dollar fines.

COPPA details, in part, responsibilities of an operator to protect children’s online privacy and safety, and when and how to seek verifiable consent from a parent or guardian. COPPA also includes restrictions on marketing to children under the age of 13. The purpose of COPPA is to provide protection to children when they are online or interacting with internet-enabled devices, and to prevent the rampant collection of their sensitive personal data and information. The Federal Trade Commission (FTC) is the agency tasked with monitoring and enforcing COPPA, and encourages industry self-regulation.

The Guardian investigation states that voice-enabled devices like the Amazon Echo, Google Home and Apple’s Siri are recording and storing data provided by children interacting with the devices in their homes. While the investigation concluded that these devices are likely collecting information from family members under the age of 13, it stops short of concluding whether these services target children under 13 as their primary audience – a key determining factor under COPPA. Furthermore, according to the FTC’s own COPPA FAQ page, even if a child provides personal information to a general-audience online service, so long as the service has no actual knowledge that the particular individual is a child, COPPA is not triggered.

While the details of COPPA will need to be refined and re-defined in the era of always-on digital assistants and AI, the harsh FTC crackdown the Guardian anticipates is unlikely, and the potential large fines are unlikely to materialize. Rather, the FTC will more likely provide guidance and recommendations to such services, allowing them to modify their practices and stay within the bounds of the law, so long as they are acting in good faith. For example, services like Amazon, Apple and Google could update their services to request, on installation, the age and number of individuals in the home, paired with an update to the terms of service requesting parental permission for the use of data provided by children under 13. For children outside the immediate family who access the device, the services could claim they lacked actual knowledge that a child interacted with the service, again satisfying COPPA’s requirements.

Read More

Can Social Media Use Save a Trademark?

Maintaining a social media profile has become standard practice for most businesses advertising their services. Savvy trademark owners may also know that they must “use” their mark in order to establish trademark rights – meaning that the mark must be actually used in connection with providing a good or service. But what type of use is sufficient? Is simply using a mark on a Facebook or Twitter profile enough to show “use” of the mark for trademark purposes? A Trademark Trial and Appeal Board (TTAB) decision says no, but offers useful guidance to trademark owners on using “analogous” trademark use to establish trademark rights. The decision is The PNC Financial Services Group, Inc. v. Keith Alexander Ashe dba Spendology and Spendology LLC.

Spendology attempted to register the mark SPENDOLOGY for web-based personal finance tools. PNC Financial Services Group (PNC), which used the same mark for an “online money management tool,” opposed Spendology’s application, claiming that PNC had used the mark first. Both parties filed motions for summary judgment for likelihood of confusion and priority.

Read More

Deceptive Software: Breaking Down VW’s Emissions Cheating Code Scandal


After a university study uncovered code designed to cheat emissions testing standards, Volkswagen (VW) has been on the defensive, admitting wrongdoing and bracing for an onslaught of regulatory fines, class action suits, and major repairs and recalls.

The code at the heart of the controversy places the car in one of two operating modes. When the car appears to be driving under conditions simulating an emissions test, the “cheat code” is enabled, delivering compliant emissions results and better gas mileage. When driving conditions indicate real-world driving, cheat mode is disabled, delivering increased power and torque but decreasing gas mileage and emitting pollutants at up to 40 times the legal limit set by the Environmental Protection Agency (EPA).
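VW’s actual engine-control code has not been published, so the following is purely a hypothetical sketch of the two-mode logic described above, with invented function names, inputs and thresholds. The idea is simply that a test bench produces a sensor signature rarely seen on a real road, and the software keys off that signature.

```python
# Hypothetical illustration only -- NOT VW's actual code. A dynamometer
# test spins the front wheels while the steering wheel never moves and
# the rear wheels stay still; real driving almost never looks like that.

def looks_like_emissions_test(speed_kmh, steering_angle_deg, rear_wheel_speed_kmh):
    """Infer a stationary test-bench run from the sensor signature."""
    return (
        speed_kmh > 0
        and steering_angle_deg == 0
        and rear_wheel_speed_kmh == 0
    )

def select_engine_mode(speed_kmh, steering_angle_deg, rear_wheel_speed_kmh):
    if looks_like_emissions_test(speed_kmh, steering_angle_deg, rear_wheel_speed_kmh):
        return "low_emissions"   # full exhaust treatment, lower power
    return "performance"         # more power and torque, far higher emissions
```

This is also why the WVU researchers’ mobile testing rig (described below) defeated the scheme: on the open road, no input combination matches the test-bench signature, so the car stays in its high-emissions mode while the tailpipe is being measured.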

Discovering the Cheat Code

Researchers at West Virginia University (WVU) uncovered the higher emissions during a study funded by the International Council on Clean Transportation, a nonprofit with offices in the U.S. and Europe, to test the emissions of diesel vehicles while driving. Traditionally, emissions testing occurred in a stationary location by placing the front wheels of a car on a rolling treadmill while the rear wheels remained static. Emissions escaping through the tailpipe were then collected and measured. The WVU researchers took the tests to the open road by creating a mobile testing rig: sensors attached to the tailpipe captured the emissions and fed the data to testing equipment stored in the trunk and backseat of the cars. These tests captured the greater emissions and lower fuel efficiency because the cheat code was disabled under open-road conditions. Upon discovering the discrepancies and conducting multiple follow-up tests, WVU contacted the EPA and the California Air Resources Board, which conducted their own tests and issued a citation to VW.

Read More

Budweiser Protects Its Throne From the Queen of Beer

Anheuser-Busch’s Budweiser brands itself as the king of beer and the company’s recent trademark defense shows it’s not willing to share the throne. A California craft beer company named She Beverage Company recently filed a trademark application with the U.S. Patent and Trademark Office (PTO) for THE QUEEN OF BEER for “beer,” and Anheuser-Busch quickly moved to oppose it.

Anheuser-Busch argued in its opposition that She Beverage Co.’s trademark would cause consumer confusion with several of its KING OF BEERS word and design marks, the oldest of which was registered in 1968 for “beer.”

Anheuser-Busch also argued that THE QUEEN OF BEER would dilute the distinctive nature of Budweiser’s famous trademarks. Famous marks are afforded heightened protection from similar marks because of the strong connection in the mind of the public between the source of the product and the mark. And there is little doubt that Anheuser-Busch’s marks, including Budweiser, qualify as famous considering the hundreds of millions of dollars that it spends annually on advertising, and its place as one of the world’s most valuable brands.  

Read More

Appellate Court Upholds FTC’s Authority to Fine and Regulate Companies Shirking Cybersecurity

In a case determining the scope of the Federal Trade Commission’s (FTC) ability to govern data security, the 3rd U.S. Circuit Court of Appeals in Philadelphia upheld a 2014 ruling allowing the FTC to pursue a lawsuit against Wyndham Worldwide Corp. for failing to protect customer information after three data breaches that occurred in 2008 and 2009. The theft of credit card and personal details from over 600,000 consumers resulted in $10.6 million in fraudulent charges and the transfer of consumer account information to a website registered in Russia.

In 2012, the FTC sued Wyndham, whose brands include Days Inn, Howard Johnson, Ramada, Super 8 and Travelodge. The claim stated that Wyndham’s conduct was an unfair practice and its privacy policy deceptive. The suit further alleged the company “engaged in unfair cybersecurity practices that unreasonably and unnecessarily exposed consumers’ personal data to unauthorized access and theft.”

The appellate court’s decision is important because it declares that the FTC has the authority to regulate cybersecurity under the unfairness doctrine of §45 of the FTC Act. This doctrine allows the FTC to declare a business practice unfair if it is oppressive or harmful to consumers, even though the practice is not an antitrust violation. Under this decision, the FTC has the authority to levy civil penalties against companies found to have engaged in unfair practices.

What exactly did Wyndham do to possibly merit the claim of unfair practices?

According to the FTC’s original complaint, the company:

  • allowed for the storing of payment card information in clear readable text;
  • allowed for the use of easily guessed passwords to access property management systems;
  • failed to use commonly available security measures, like firewalls, to limit access between hotel property management systems, corporate networks and the internet; and
  • failed to adequately restrict and measure unauthorized access to its network.
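To make the first failure above concrete, here is a minimal, hypothetical sketch (not Wyndham’s system, and far short of a full payment-security standard such as PCI DSS) of tokenizing a card number so that the clear-text value never sits in the main database alongside the booking record.

```python
# Illustrative only: instead of storing payment card numbers in clear
# readable text, store a random token plus the last four digits, and
# keep the token-to-card mapping in a separate, access-controlled vault.

import secrets

VAULT = {}  # stand-in for a separate, locked-down (and encrypted) store

def tokenize_card(pan: str) -> dict:
    """Return a record safe to store with the booking; only the vault
    can resolve the token back to the full card number."""
    token = secrets.token_hex(16)
    VAULT[token] = pan
    return {"token": token, "last4": pan[-4:]}
```

With this design, a stolen copy of the bookings database exposes only opaque tokens and last-four digits, rather than the full card numbers that produced the fraudulent charges in the Wyndham breaches.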

Furthermore, the FTC alleged the company’s privacy policy was deceptive, stating:

“a company does not act equitably when it publishes a privacy policy to attract customers who are concerned about data privacy, fails to make good on that promise by investing inadequate resources in cybersecurity, exposes its unsuspecting customers to substantial financial injury, and retains the profits of the business.”

Wyndham requested that the suit be dismissed, arguing that the FTC did not have the authority to regulate cybersecurity. The appellate court found otherwise, however, stating that Wyndham failed to show that its alleged conduct fell outside the plain meaning of “unfair.”

The appellate court’s ruling highlights the need for companies to take special care in crafting a privacy policy to ensure it reflects the company’s cybersecurity standards and practices. This includes staying up-to-date on the latest best practices, and being familiar with the ever-changing industry standard security practices, including encryption and firewalls.

Read More

Delayed Results of Google’s “Mobilegeddon” Show Small Sites Suffer on Mobile

On April 21st online behemoth Google altered its search engine algorithm to favor websites it considered mobile-friendly. This change, dubbed “Mobilegeddon” by web developers and search engine optimization (SEO) specialists, sought to reward sites that used responsive design and other mobile-friendly practices to ensure sites display well on smartphones and other mobile devices. Conversely, sites that were not mobile friendly would ultimately be penalized by ranking lower on mobile search results.

At the time, it was unclear just how large an impact this change would have on companies’ appearance in organic mobile search results. A recent report by Adobe Digital Index, however, shows that the impact has indeed been substantial. The report determined that traffic to non-mobile-friendly sites from Google mobile searches fell more than 10% in the two months after the change, with the impact growing weekly since April. This means that non-mobile-friendly sites have dropped sharply in mobile search rankings, while mobile-friendly sites have risen in rankings, showing up higher on the mobile search results page. This change has had the greatest impact on small businesses that likely underestimated the value of mobile search traffic, and has also affected financial services firms and law firms.

In a recent article in the Wall Street Journal, Adobe analyst Tamara Gaffney noted that companies unprepared for the impact on search results have tried to offset the decrease in organic traffic by buying mobile search ads from Google, keeping mobile users visiting their sites through paid placements. Substituting paid results for organic results may work in the short term but is usually not a sound long-term approach. A sustainable long-term online ad strategy usually consists of a balanced approach: building brand and consumer trust through organic search, and strategically supplementing that with paid ads.

What is a company adversely affected by Mobilegeddon to do?

One obvious course of action for a site that has suffered from Mobilegeddon is to become mobile friendly. This means putting in place a responsive theme and implementing best practices that aid the mobile user experience, including using larger, easier-to-read text and separating links to make them easier to tap on a smaller screen. Those unsure of how their site fares can use Google’s Mobile-Friendly Test tool to see what recommendations may be made to improve the mobile user’s experience.

With mobile search queries outpacing desktop, Google is sending a clear message that it is willing to reward sites that provide a good mobile experience, and businesses that fail to heed that message will suffer in the search rankings.

Read More