MediaTech Law

By MIRSKY & COMPANY, PLLC

Change Your Password Every [Blank] Days!

Takeaways from Microsoft’s announcement in May that it would be “[d]ropping the password-expiration policies that require periodic password changes” in baseline settings for Windows 10 and Windows Server:

First: The major security problem with passwords – the most major of the major problems – is not a failure to change passwords often enough. Rather, it is choosing weak passwords. Making passwords much harder for supercomputers (and humans, too) to guess – for example, requiring minimums of 11 characters, randomly generated, using both upper- and lower-case letters, symbols and numbers – is much more “real-world security” (in Microsoft’s formulation). As Dan Goodin recently wrote in Ars Technica, “Even when users attempt to obfuscate their easy-to-remember passwords – say by adding letters or symbols to the words, or by substituting 0’s for the o’s or 1’s for L’s – hackers can use programming rules that modify the dictionary entries.”
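For a sense of what that guidance buys you, here is a minimal sketch (ours, not Microsoft’s) of generating an 11-character random password and estimating its strength:

```python
import math
import secrets
import string

# Full pool: upper- and lower-case letters, digits, and symbols (94 characters).
POOL = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 11) -> str:
    """Return a randomly generated password drawn from the full pool."""
    return "".join(secrets.choice(POOL) for _ in range(length))

# Rough strength estimate: each character adds log2(94) ~ 6.6 bits, so an
# 11-character random password has about 72 bits of entropy -- far beyond
# any dictionary word dressed up with 0-for-o or 1-for-L substitutions.
print(generate_password(), f"~{11 * math.log2(len(POOL)):.0f} bits")
```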

Read More

Confusion in “Cookie”-Land: Consent Requirements for Placing Cookies under GDPR and ePrivacy Directive

Must a website get consent from a user before placing cookies in the user’s browser? The EU’s ePrivacy Directive says yes: consent from the user is required prior to placement of most cookies (regardless of whether the cookies track personal data). But under the General Data Protection Regulation (GDPR), consent is only one of several “lawful bases” available to justify collection of personal data. If cookies – specifically, the placement of cookies in a user’s browser – are viewed as “personal data” under the GDPR, must a website still get consent in order to place cookies, or can the site instead rely on one of those other “lawful bases” for dropping cookies?

First, are cookies “personal data” governed by the GDPR?  Or to be more precise, do cookies that may identify individuals fall under the GDPR?  This blog says yes: “when cookies can identify an individual, it is considered personal data.  … While not all cookies are used in a way that could identify users, the majority (and the most useful ones to the website owners) are, and will therefore be subject to the GDPR.”  This blog says no: “cookie usage and its related consent acquisition are not governed by the GDPR, they are instead governed by the ePrivacy Directive.” (emphasis added)  Similarly with this blog.

Read More

Do We Need to Appoint a (GDPR) Data Protection Officer?

Does your organization need to appoint a “Data Protection Officer”? Articles 37-39 of the EU’s General Data Protection Regulation (GDPR) require certain organizations that process personal data of EU citizens to appoint a Data Protection Officer (DPO) to record their data processing activities. Doing so is a lot more than perfunctory – you can’t just say, “Steve, our HR Director, you’re now our DPO. Congratulations!” The qualifications for the job are significant, and the organizational impact of having a DPO is extensive. You may be better off not appointing a DPO if you don’t have to – but if you are required to appoint one, failing to do so can expose your organization to serious enforcement penalties.

Read More

Blogs and Writings we Like

This week we highlight three writers discussing timely subjects in copyright, technology, and advertising law. Susan Neuberger Weller and Anne-Marie Dao from Mintz Levin discussed a circuit split on when a copyright is officially registered for purposes of filing an infringement lawsuit; Jeffrey Neuburger from Proskauer wrote an interesting article reflecting on technology-related legal issues in 2017 and looking forward to potential hot issues in 2018; and Leonard Gordon posted a piece on Venable’s All About Advertising Law Blog about cancellation methods for continuity sales offers.

When is a Copyright “Registered” for Purposes of Filing Suit?

In a recent post, Susan Neuberger Weller and Anne-Marie Dao from Mintz Levin discuss a split among the Federal Courts of Appeals about when a copyright is registered. Weller and Dao note that registration of a US copyright is required before an infringement suit can be initiated (or statutory damages obtained) in federal court, but there is no agreement on when “registration” actually occurs. Some circuit courts have found that registration happens when the application is filed, while others hold that it occurs only when the Register of Copyrights actually issues the copyright registration. The article recounts a recent 11th Circuit case in which the court dismissed an infringement suit because the copyright holder had filed its application but the US Copyright Office had not yet acted on it.

The authors note that the issue could be resolved if the US Supreme Court agrees to hear an appeal by the plaintiff in the 11th Circuit case, although as of April 16, 2018 the Supreme Court had not acted on the plaintiff’s certiorari petition.

What We Like: The article raises an important issue for copyright holders that can be critical in copyright infringement cases. In addition to raising the topic, we particularly like the authors’ summary of the various positions among the federal appeals courts about when copyright registration actually occurs. This list is a good reference for any lawyers considering whether (and maybe even where) to bring an infringement case.

***

Reflections on Technology-Related Legal Issues: Looking Back at 2017; Will 2018 Be a Quantum Leap Forward?

Jeffrey Neuburger from Proskauer wrote an interesting article reflecting on technology-related legal issues in 2017 and looking forward to issues that will likely be in play in 2018. Neuburger mentions a number of 2017 developments ranging from cybersecurity to privacy. He also discusses the expansion of blockchain (“a continuously growing list of records, called blocks, which are linked and secured using cryptography,” which is a “core component of bitcoin”) into areas beyond cryptocurrencies and poses questions about potential legal issues that may arise. In the privacy realm, Neuburger opines that “2018 also promises to be the year of Europe’s General Data Protection Regulation” (GDPR) and notes that mobile tracking also is likely to be a hot issue in the new year.

Most interestingly, Neuburger spends almost half the article on quantum computing. He explains that quantum computers operate on the laws of quantum mechanics and use quantum bits or “qubits” (“a qubit can store a 0, 1, or a summation of both 0 and 1”), and states that quantum computers could be up to 100 million times faster than current computers. The article further sets out four areas of legal issues related to quantum computers: (i) encryption and cryptography; (ii) blockchain; (iii) the securities industry; and (iv) military applications. Neuburger ominously notes that “quantum computers may be powerful enough (perhaps) to break the public key cryptography systems currently in use that protects secure online communications and encrypted data.”
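For readers who want a concrete picture of that quoted line about qubits, here is a toy illustration (our sketch, not from Neuburger’s article) of a single qubit in superposition:

```python
import numpy as np

# A qubit modeled as a 2-component state vector: |0> = [1, 0], |1> = [0, 1].
ket0 = np.array([1.0, 0.0])

# A Hadamard gate puts the qubit into an equal superposition -- the
# "summation of both 0 and 1" quoted above.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposition = H @ ket0

# Measurement probabilities are the squared amplitudes: 50/50 here.
print(np.abs(superposition) ** 2)  # [0.5 0.5]
```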

What We Like: We’ve always looked forward to Jeff Neuburger’s commentary on new media and tech law issues, especially his extensive recent blogging on the GDPR and other privacy issues. But we particularly liked his discussion of quantum computing, a topic not ordinarily covered in these types of summaries and somewhat challenging for non-scientists to tackle. As is clear from Neuburger’s analysis, many aspects of the law may be affected as this technology advances.

***

Sex, Golf, and the FTC – And, of course, Continuity Sales Programs

On Venable’s All About Advertising Law Blog, Leonard Gordon discusses a recent Federal Trade Commission complaint and settlement with a lingerie online retailer related to a continuity sales promotion – “A continuity program is a company’s sales offer where a buyer/consumer is agreeing to receive merchandise or services automatically at regular intervals (often monthly), without advance notice, until they cancel.” (Gordon included a passing reference to a similar case involving golf balls, but did not provide many details – thus, the reference in the title.)

Read More

Apple Touts Differential Privacy, Privacy Wonks Remain Skeptical, Google Joins In

(Originally published January 19, 2017, updated July 24, 2017)

Apple has traditionally distinguished itself from rivals like Google and Facebook by emphasizing its respect for user privacy. It has taken deliberate steps to avoid vacuuming up all of its users’ data, providing encryption at the device level as well as during data transmission. It has done so, however, at the cost of forgoing the benefits that pervasive data collection and analysis have to offer. Such benefits include improving the growing and popular on-demand search and recommendation services like Google Now, Microsoft’s Cortana, and Amazon’s Echo. Like Apple’s Siri technology, these services act as digital assistants, responding to search requests and making recommendations. Now Apple, pushing to remain competitive in this line of its business, is taking a new approach to privacy, in the form of differential privacy (DP).

Announced in June 2016 during Apple’s Worldwide Developers Conference in San Francisco, DP is, as Craig Federighi, Apple’s senior vice president of software engineering, stated, “a research topic in the area of statistics and data analytics that uses hashing, subsampling and noise injection to enable … crowdsourced learning while keeping the data of individual users completely private.” More simply put, DP is the statistical science of attempting to learn as much as possible about a group while learning as little as possible about any individual in it.
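Apple has not published its production code, but the flavor of “noise injection” can be shown with randomized response, a classic differential-privacy mechanism (a sketch for illustration only, not Apple’s actual implementation):

```python
import random

def randomized_response(true_answer: bool, p: float = 0.75) -> bool:
    """With probability p report the truth; otherwise report a coin flip.
    Any single report is deniable, so no individual's answer is exposed."""
    if random.random() < p:
        return true_answer
    return random.random() < 0.5

# Across many users the noise averages out:
# observed = p * true_rate + (1 - p) * 0.5, so true_rate is recoverable.
reports = [randomized_response(True) for _ in range(100_000)]
observed = sum(reports) / len(reports)
print((observed - 0.125) / 0.75)  # close to 1.0, the true rate
```

The aggregate statistic comes out accurate while any one user’s report tells you almost nothing – which is exactly the group-versus-individual trade-off described above.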

Read More

Legal Issues in Ad Tech: De-Identified vs. Anonymized in a World of Big Data

In the booming world of Big Data, consumers, governments, and even companies are rightfully concerned about the protection and security of their data, and about keeping the personal and potentially embarrassing details of one’s life from falling into nefarious hands. At the same time, most would recognize that Big Data can serve valuable purposes, such as lifesaving medical research and improving commercial products. A question at the center of this discussion is therefore whether, and how, data can be effectively “de-identified” or even “anonymized” to limit privacy concerns – and whether the distinction between the two terms is more theoretical than practical. (As I mentioned in a prior post, “de-identified” data can potentially be re-identified, while, at least in theory, anonymized data cannot.)

Privacy of health data is particularly important, and so the U.S. Health Insurance Portability and Accountability Act (HIPAA) includes strict rules on the use and disclosure of protected health information. These privacy constraints do not apply if the health data has been de-identified – either through a safe harbor-blessed process that removes 18 key identifiers or through a formal determination by a qualified expert – in either case presumably because these mechanisms are seen as a reasonable way to make it difficult to re-identify the data.
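To make the safe harbor mechanics concrete, here is a toy sketch (hypothetical column names, and only three of the 18 identifier categories shown) of stripping identifiers from a record set:

```python
import pandas as pd

# Hypothetical patient records -- column names are illustrative only.
records = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "zip": ["20001", "20002"],
    "diagnosis": ["flu", "asthma"],
})

# A few of HIPAA's 18 safe-harbor identifiers (names, Social Security
# numbers, geographic detail, etc.); a real process must address all 18.
IDENTIFIER_COLUMNS = ["name", "ssn", "zip"]

deidentified = records.drop(columns=IDENTIFIER_COLUMNS)
print(deidentified)  # only the diagnosis column survives
```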

Read More

Blogs and Writings We Like

This week we highlight three writers discussing timely subjects in media tech law: Sandy Botkin writing about zombie cookies and targeted advertising, Geoffrey Fowler writing about the new world of phishing and “phishermen” (yes, that’s a thing), and Justin Giovannettone and Christina Von der Ahe writing about nonsolicitation agreements and social media law.

FTC vs Turn, Inc.: Zombie Hunters

Sandy Botkin, writing on the TaxBot Blog, reports amusingly on the FTC’s December 2016 settlement with digital advertising data provider Turn, Inc., stemming from an enforcement action against Turn for violating its own consumer privacy policy. Botkin uses the analogy of a zombie attack to illustrate the actions Turn took to get around users’ efforts to block targeted advertising on websites and apps.

According to the FTC’s complaint, Turn’s participation in Verizon Wireless’ tracking header program – which attached unique IDs to all unencrypted mobile internet traffic of Verizon subscribers – enabled Turn to re-associate a Verizon subscriber with his or her use history. This, according to Botkin, further enabled Turn to “recreate[] cookies that consumers had previously deleted.” Or better yet: “Put another way, even when people used the tech equivalent of kerosene and machetes [to thwart zombies], Turn created zombies out of consumers’ deleted cookies.”
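Conceptually – and this is an illustrative sketch, not Turn’s or Verizon’s actual code – the mechanism works because the carrier-injected header survives no matter what the user deletes:

```python
profiles = {}  # tracking-header ID -> accumulated browsing history

def handle_request(header_id, url, cookie=None):
    """Simulate a server seeing a request with a carrier tracking header."""
    profiles.setdefault(header_id, []).append(url)
    if cookie is None:                  # user cleared their cookies...
        cookie = "uid-" + header_id     # ...but the header resurrects them
    return cookie

c1 = handle_request("hdr-1234", "site-a.example")        # cookie issued
c2 = handle_request("hdr-1234", "site-b.example", None)  # "zombie" returns
print(c1 == c2)  # True: deleting the cookie accomplished nothing
```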

What we like: We like Botkin’s zombie analogy, although not because we like zombies. We don’t. Like. Zombies. But we do think it’s a clever explanatory tool for an otherwise arcane issue.

* * *

Your Biggest Online Security Risk Is You

Geoffrey Fowler writes in The Wall Street Journal (here ($), with an even fuller version of the story available here via Dow Jones Newswires) about the latest in the world of phishing, that large category of online scams that, one way or another, has the common goals of accessing your data, your money or your life, or someone else’s who might be accessed through your unsuspecting gateway.

“If you’re sure you already know all about them, think again. Those grammatically challenged emails from overseas ‘pharmacies’ and Nigerian ‘princes’ are yesterday’s news. They’ve been replaced by techniques so insidious, they could leave any of us feeling like a sucker.”

Oren Falkowitz of Area 1 Security told Fowler that about 97% of all cyberattacks start with phishing. Phishing is a big deal.

Fowler writes of the constantly increasing sophistication of “phishermen” – yes, that’s a term – weakening the effectiveness of old common-sense precautions:

In the past, typos, odd graphics or weird email addresses gave away phishing messages, but now, it’s fairly easy for evildoers to spoof an email address or copy a design perfectly. Another old giveaway was the misfit web address at the top of your browser, along with the lack of a secure lock icon. But now, phishing campaigns sometimes run on secure websites, and confuse things with really long addresses, says James Pleger, security director at RiskIQ, which tracked 58 million phishing incidents in 2016.

What we like: Fowler is helpful with advice about newer precautions, including keeping web browser security features updated and employing two-factor authentication wherever possible. We also like his admission of his own past victimhood to phishing, via a malware attack. He’s not overly cheery about the prospects of stopping the bad guys, but he does give confidence to people willing to take a few extra regular precautions.
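On the two-factor point, it may help to see how little machinery a second factor involves. Here is a minimal sketch of an RFC 6238 time-based one-time password – the kind produced by authenticator apps (the secret below is a made-up example):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second window."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a new 6-digit code every 30 seconds
```

A phished password alone won’t unlock an account protected this way, since the code changes every half-minute – though the most sophisticated phishing kits now try to relay codes in real time, which is why the other precautions still matter.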

* * *

Don’t Friend My Friends: Nonsolicitation Agreements Should Account for Social Media Strategies

This is an employment story about former employees who signed agreements with their former employers restricting their solicitations of customers of their former employers. In the traditional nonsolicitation context, it wasn’t that hard to tell when a former employee went about trying to poach his or her former company’s business. Things have become trickier in the age of social media, when “friend”-ing, “like”-ing, or “following” a contact on Facebook, Twitter, Instagram or LinkedIn might or might not suggest nefarious related behavior.

Justin Giovannettone and Christina Von der Ahe of Orrick’s “Trade Secrets Watch” survey a nice representative handful of recent cases from federal and state courts on just such questions.

In one case, the former employee – now working for a competitor of his former employer – remained linked via LinkedIn with connections he made while at his former company. His subsequent action in inviting those contacts to “check out” his new employer’s updated website drew a lawsuit for violating his nonsolicitation agreement. For various reasons, the lawsuit failed, but of most interest was Giovannettone and Von der Ahe’s comment that “[t]he court also noted that the former employer did not request or require the former employee to ‘unlink’ with its customers after he left and, in fact, did not discuss his LinkedIn account with him at all.”

What we like: Giovannettone and Von der Ahe point out the inconsistencies in court opinions on this subject and, therefore, smartly recognize the takeaway for employers, namely to be specific about what’s expected of former employees. That may seem obvious, but for me it was surprising to learn that an employer could potentially – and enforceably – prevent a former employee from “friend”-ing on Facebook.

Read More

Blogs and Writings We Like

This week we highlight three fine writers discussing timely subjects in media tech law: Beverly Berneman writing about website terms of service and fair use, Leonard Gordon writing about “astroturfing” in advertising law, and John Buchanan and Dustin Cho writing about a gaping coverage gap in cybersecurity insurance.

Hot Topic: Fake News

Beverly Berneman’s timely post, “Hot Topic: Fake News” (on the “IP News For Business” blog of Chicago firm Golan Christie Taglia), offers a simple cautionary tale about publishing your copyrighted artwork on the internet – in this case, on a website (DeviantArt) promoting the works of visual artists. One such artist’s posting subsequently appeared for sale, unauthorized, on t-shirts promoted on the website of another company (Hot Topic). The aggrieved artist then sought recourse from DeviantArt. Berneman (like DeviantArt) pointed to DeviantArt’s terms of use, which prohibit downloading or using artwork for commercial purposes without permission from the copyright owner – leaving the artist with no claim against DeviantArt.

Berneman correctly highlights the need to read website terms of use before publishing your artwork on third-party sites, especially if you expect the website to police piracy by other parties. Berneman also dismisses the fair use arguments made by some commentators about this case, adding, “If Hot Topic used the fan art without the artist’s permission and for commercial purposes, it was not fair use.”

What we like: We like Berneman’s concise and spot-on guidance about the need to read website terms of use and, of course, when fair use is not “fair”. Plus her witty tie-in to “fake news”.

* * *

NY AG Keeps Up the Pressure on Astroturfing

Leonard Gordon, writing in Venable’s “All About Advertising Law” blog, offered a nice write-up of several recent settlements of “Astroturfing” enforcement actions by New York State’s Attorney General. First, what is Astroturfing? Gordon defines it as “the posting of fake reviews”, although blogger Sharyl Attkisson put it more vividly: “What’s most successful when it appears to be something it’s not? Astroturf. As in fake grassroots.” (And for the partisan spin on this, Attkisson follows that up with her personal conclusions as to who makes up the “Top 10 Astroturfers”, including “Moms Demand Action for Gun Sense in America and Everytown” and The Huffington Post. Ok now. But we digress ….)

The first case involved an urgent care provider (Medrite), which evidently contracted with freelancers and firms to write favorable reviews on sites like Yelp and Google Plus. Reviewers were not required to have been actual urgent care patients, nor were they required to disclose that they were compensated for their reviews.

The second case involved a car service (Carmel). The AG claimed that Carmel solicited favorable Yelp reviews from customers in exchange for discount cards on future use of the service. As with Medrite, reviewers were not required to disclose compensation for favorable reviews, and customers posting negative reviews were not given discount cards.

The settlements both involved monetary penalties and commitments not to compensate reviewers without requiring the reviewers to disclose that compensation. And in the Carmel settlement, Carmel took on affirmative obligations to educate its industry against these practices.

What we like: We like Gordon’s commentary about this case, particularly its advisory conclusion: “Failure to do that could cause you to end up with a nasty case of ‘turf toe’ from the FTC or an AG.” Very nice.

* * *

Insurance Coverage Issues for Cyber-Physical Risks

John Buchanan and Dustin Cho write in Covington’s Inside Privacy blog about a gaping gap in insurance coverage for physical-property risks from cybersecurity attacks, as opposed to the more familiar privacy breaches. Buchanan and Cho discuss a recently published report from the U.S. Government’s National Institute of Standards and Technology (NIST), helpfully titled “Systems Security Engineering Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems”. Rolls off the tongue.

The NIST report is a dense read (257 pages) and covers much more than insurance issues, including recommendations for improving systems security engineering for (among other things) critical infrastructure, medical devices, hospital equipment, and networked home devices (the Internet of Things, or IoT).

Buchanan and Cho’s post addresses insurance issues, noting that “purchasers of cyber insurance are finding that nearly all of the available cyber insurance products expressly exclude coverage for physical bodily injury and property damage”.

What we like: Insurance is always an important and underappreciated business issue, with even less public understanding of the property and injury risks to (and coverage from) cyber damage. We like how Buchanan and Cho took the time to plow through an opaque government report to tell a simple and important story.

Read More

Dataveillance Protection: The E.U.-U.S. Privacy Shield

For many years, technology outpaced policy when it came to standards and protections around ownership of and access to personal data. Privacy policies are not set by governments but rather by technology companies that created the digital world as it is experienced today. Many if not all of the dominant players in this space are American technology companies that include Alphabet (i.e. Google), Apple, Amazon, Facebook and Microsoft. These companies have more say about a user’s online life than any individual local, state or national government.

Read More

Legal Issues in Ad Tech: IP Addresses Are Personal Data, Says the EU (well … sort of)

Much has been written in the past two weeks about the U.S. Presidential election. Time now for a diversion into the exciting world of data privacy and “personal data”. Because in the highly refined world of privacy and data security law, important news actually happened in the past few weeks. Yes, I speak breathlessly of the European Court of Justice (ECJ) decision on October 19th that IP (internet protocol) addresses are “personal data” for purposes of the EU Data Protection Directive. This is bigly news (in the data privacy world, at least).

First, what the decision actually said, which leads immediately into a riveting discussion of the distinction between static and dynamic IP addresses.

The decision ruled on a case brought by a German politician named Patrick Breyer, who sought an injunction preventing a website and its owner – here, publicly available websites operated by the German government – from collecting and storing his IP address when he lawfully accessed the sites. Breyer claimed that the government’s actions violated his privacy rights under EU Directive 95/46/EC (the Data Protection Directive). As the ECJ reported in its opinion, the government websites “register and store the IP addresses of visitors to those sites, together with the date and time when a site was accessed, with the aim of preventing cybernetic attacks and to make it possible to bring criminal proceedings.”

The case is Patrick Breyer v Bundesrepublik Deutschland, Case C-582/14, and the ECJ’s opinion was published on October 19th.

Read More

Legal Issues in Ad Tech: Who Owns Marketing Performance Data?

Does a marketer own data related to the performance of its own marketing campaigns? It might surprise marketers to learn that such ownership isn’t automatic. And more broadly, who does own that data? A data rights clause in contracts with DSPs or agencies might state something like this:

“Client owns and retains all right, title and interest (including without limitation all intellectual property rights) in and to Client Data”,

… where “Client Data” is defined as “Client’s data files”. Or this:

“As between the Parties, Advertiser retains and shall have sole and exclusive ownership and Intellectual Property Rights in the … Performance Data”,

… where “Performance Data” means “campaign data related to the delivery and tracking of Advertiser’s digital advertising”.

Both clauses are vague, although the second is broader and more favorable to the marketer. In neither case are “data files” or “campaign data” defined with any particularity, and neither clause includes any delivery obligation, much less specifications for formatting, reporting or performance analytics. And even if data were provided by a vendor or agency, other questions remain: What kind of data would be provided, how would it be provided, and how useful would it be?

Read More

Legal Issues in Ad Tech: Anonymized and De-Identified Data

Recently, in reviewing a contract with a demand-side platform (DSP), I came across this typical language in a “Data Ownership” section:

“All Performance Data shall be considered Confidential Information of Advertiser, provided that [VENDOR] may use such Performance Data … to create anonymized aggregated data, industry reports, and/or statistics (“Aggregated Data”) for its own commercial purposes, provided that Aggregated Data will not contain any information that identifies the Advertiser or any of its customers and does not contain the Confidential Information of the Advertiser or any intellectual property of the Advertiser or its customers.” (emphasis added).

I was curious what makes data “anonymized”, and I was even more curious whether the term was casually and improperly used. I’ve seen the same language alternately used substituting “de-identified” for “anonymized”. Looking into this opened a can of worms ….

What are Anonymized and De-Identified Data – and Are They the Same?

Here’s how Gregory Nelson described it in his casually titled “Practical Implications of Sharing Data: A Primer on Data Privacy, Anonymization, and De-Identification”:

“De-identification of data refers to the process of removing or obscuring any personally identifiable information from individual records in a way that minimizes the risk of unintended disclosure of the identity of individuals and information about them. Anonymization of data refers to the process of data de-identification that produces data where individual records cannot be linked back to an original as they do not include the required translation variables to do so.” (emphasis added)

Or in other words, both methods have the same purpose, and both technically remove personally identifiable information (PII) from the data set. But while de-identified data can be re-identified, anonymized data cannot be. To use a simple example, if a spreadsheet column containing Social Security numbers is removed from the dataset and discarded, the data would be “anonymized”.
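A toy contrast under Nelson’s definitions (illustrative data; the point is whether a “translation variable” survives):

```python
import pandas as pd

data = pd.DataFrame({
    "ssn": ["123-45-6789", "987-65-4321"],
    "zip": ["20001", "20002"],
    "purchase": [42.10, 17.95],
})

# De-identified: swap the SSN for a pseudonym but KEEP the lookup table.
# Anyone holding key_table can translate records back -- re-identifiable.
key_table = {ssn: f"id-{i}" for i, ssn in enumerate(data["ssn"])}
deidentified = data.assign(ssn=data["ssn"].map(key_table))

# Anonymized (the spreadsheet example above): drop the SSN column and
# discard it entirely -- no translation variable survives.
anonymized = data.drop(columns=["ssn"])
```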

But first … what aspects or portions of data must be removed in order to either de-identify or anonymize a set?

But What Makes Data “De-Identified” or “Anonymous” in the First Place?

Daniel Solove has written that, under the European Union’s Data Protection Directive 95/46/EC, “Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.” This makes things complicated in a hurry. After all, in the above example where Social Security numbers are removed, the remaining columns might include normally non-PII information such as zip codes or gender. But the Harvard researchers Olivia Angiuli, Joe Blitzstein, and Jim Waldo show how even three such data points in an otherwise “de-identified” data set (the “medical data” in the image below) can be used to re-identify individuals when combined with an outside data source that shares those same points (the “voter list” in the image below):

[Data Sets Overlap Chart: the “medical data” and “voter list” sets intersect on the quasi-identifiers they share]

(Source: How to De-Identify Your Data, by Olivia Angiuli, Joe Blitzstein, and Jim Waldo, http://queue.acm.org/detail.cfm?id=2838930)
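The re-identification trick in that chart is nothing more exotic than a database join. A sketch with made-up records (hypothetical names and values, following the Angiuli/Blitzstein/Waldo example):

```python
import pandas as pd

# "De-identified" medical data: names removed, quasi-identifiers kept.
medical = pd.DataFrame({
    "zip": ["20001"], "birth_date": ["1970-01-02"], "sex": ["F"],
    "diagnosis": ["asthma"],
})

# Public voter list sharing the same three fields -- plus names.
voters = pd.DataFrame({
    "name": ["Jane Doe"],
    "zip": ["20001"], "birth_date": ["1970-01-02"], "sex": ["F"],
})

# Joining on the shared quasi-identifiers re-attaches a name to the
# supposedly de-identified diagnosis.
reidentified = medical.merge(voters, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```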

That helps explain the Advocate General opinion recently issued in the European Court of Justice (ECJ), finding that dynamic IP addresses can, under certain circumstances, be “personal data” under the European Union’s Data Protection Directive 95/46/EC. The case turns on the same point made by Daniel Solove above, namely the scope of the “personal data” definition, including this formulation in Recital 26 of the Directive:

“(26) … whereas, to determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person …”

There was inconsistency among the EU countries on the level of pro-activity required of a data controller in order to render an IP address “personal data”. Consider, for example, the United Kingdom’s definition of “personal data”: “data which relate to a living individual who can be identified – (a) from those data, or (b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller” (emphasis added). Not so in Germany and, according to a White & Case report on the ECJ case, not so according to the Advocate General, whose position was that “the mere possibility that such a request [for further identifying information] could be made is sufficient.”

Which then circles things back to the question at the top: Are anonymized and de-identified data the same? They are not. That part is easy to say. The harder part is determining which is which, especially given the ease of re-identifying presumably scrubbed data sets. More on this topic shortly.

Read More