MediaTech Law

By MIRSKY & COMPANY, PLLC

Change Your Password Every [Blank] Days!

Takeaways from Microsoft’s announcement in May that it would be “Dropping the password-expiration policies that require periodic password changes” in baseline settings for Windows 10 and Windows Server:

First: The major security problem with passwords – the most major of the major problems – is not a failure to change passwords often enough.  Rather, it is choosing weak passwords.  Making passwords much harder for supercomputers (and humans, too) to guess – for example, requiring minimums of 11 characters, randomly generated, using both upper- and lower-case letters, symbols and numbers – is much more “real-world security” (in Microsoft’s formulation).  As Dan Goodin recently wrote in Ars Technica, “Even when users attempt to obfuscate their easy-to-remember passwords – say by adding letters or symbols to the words, or by substituting 0’s for the o’s or 1’s for L’s – hackers can use programming rules that modify the dictionary entries.”
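The random-generation approach described above can be sketched in a few lines of Python. This is an illustrative sketch only (the parameters are our own, not Microsoft’s), using the standard library’s `secrets` module, which is designed for cryptographically strong randomness:

```python
import secrets
import string

def generate_password(length: int = 11) -> str:
    """Generate a random password drawing from upper- and lower-case
    letters, digits, and symbols, per the recommendation above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain all four character classes.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

A password built this way from a 90-plus-character alphabet is vastly harder for dictionary-based cracking tools to guess than an obfuscated dictionary word.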

Read More

Confusion in “Cookie”-Land: Consent Requirements for Placing Cookies under GDPR and ePrivacy Directive

Must a website get consent from a user before placing cookies in the user’s browser?  The EU’s ePrivacy Directive says that yes, consent from the user is required prior to placement of most cookies (regardless of whether the cookies track personal data).  But under the General Data Protection Regulation (GDPR), consent is only one of several “lawful bases” available to justify collection of personal data.  If cookies are viewed as “personal data” under the GDPR – specifically, the placement of cookies in a user’s browser – must a website still get consent in order to place cookies, or instead can the site rely on one of those other “lawful bases” for dropping cookies?

First, are cookies “personal data” governed by the GDPR?  Or to be more precise, do cookies that may identify individuals fall under the GDPR?  This blog says yes: “when cookies can identify an individual, it is considered personal data.  … While not all cookies are used in a way that could identify users, the majority (and the most useful ones to the website owners) are, and will therefore be subject to the GDPR.”  This blog says no: “cookie usage and its related consent acquisition are not governed by the GDPR, they are instead governed by the ePrivacy Directive.” (emphasis added)  Similarly with this blog.

Read More

Encrypted Data: Still “Personal Data” under GDPR?

An interesting question is whether encrypted personal data is still “personal data” for purposes of the European Union’s General Data Protection Regulation (GDPR), thereby making processing of that data subject to the GDPR’s library of compliance obligations.  The answer depends on the meaning of encryption: It is not enough to claim that encrypted data is “anonymized” and therefore no longer relates to an “identified or identifiable natural person” within the meaning of the personal data definition.

If an organization encrypts data in its care, with the encryption thereby rendering the data no longer “identified”, is it still “identifiable”?  Maybe.  If neither identified nor identifiable, then data is no longer “personal data”.

First, what is encryption?  Josh Gresham writes on IAPP’s blog that encryption involves a party “tak[ing] data and us[ing] an ‘encryption key’ to encode it so that it appears unintelligible.  The recipient uses the encryption key to make it readable again.  The encryption key itself is a collection of algorithms that are designed to be completely unique, and without the encryption key, the data cannot be accessed.  As long as the key is well designed, the encrypted data is safe.” (emphasis added)
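Gresham’s description – encoding data with a key so that it appears unintelligible, and decoding it with the same key – can be illustrated with a toy symmetric cipher. The XOR scheme below is for illustration only and is not real-world encryption (production systems use vetted algorithms such as AES), but it shows the key property he describes: without the key, the ciphertext is unreadable; with it, the original data is recovered.

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the repeating key. Applying the
    # same function twice with the same key recovers the original,
    # so one function serves as both "encrypt" and "decrypt".
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_crypt(b"personal data", b"secret-key")
assert ciphertext != b"personal data"          # unintelligible without the key
assert xor_crypt(ciphertext, b"secret-key") == b"personal data"
```

The GDPR question then becomes whether the party holding the ciphertext (or anyone else) can obtain the key – if so, the data arguably remains “identifiable.”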

Read More

Do We Need to Appoint a (GDPR) Data Protection Officer?

Does your organization need to appoint a “Data Protection Officer”?  Articles 37-39 of the EU’s General Data Protection Regulation (GDPR) require certain organizations that process personal data of EU citizens to appoint a Data Protection Officer (DPO) to record their data processing activities.  Doing so is a lot more than perfunctory – you can’t just say, “Steve, our HR Director, you’re now our DPO.  Congratulations!”  The qualifications for the job are significant, and the organizational impact of having a DPO is extensive.  You may be better off not appointing a DPO if you don’t have to, but if you are required to appoint one, failing to do so can expose your organization to serious enforcement penalties. 

Read More

Blogs and Writings we Like

This week we highlight three writers discussing timely subjects in copyright, technology, and advertising law. Susan Neuberger Weller and Anne-Marie Dao from Mintz Levin discussed a split in thought on when a copyright is officially registered for purposes of filing an infringement lawsuit; Jeffrey Neuburger from Proskauer wrote an interesting article reflecting on technology-related legal issues in 2017 and looking forward to potential hot issues in 2018; and Leonard Gordon posted a piece on Venable’s All About Advertising Law Blog about cancellation methods for continuity sales offers.

When is a Copyright “Registered” for Purposes of Filing Suit?

In a recent post, Susan Neuberger Weller and Anne-Marie Dao from Mintz Levin discuss a split among Federal Courts of Appeal about when a copyright is registered. Weller and Dao note that registration of a US copyright is required prior to being able to initiate an infringement suit (or to obtain statutory damages) in federal court, but there is no agreement on when “registration” actually occurs. Some circuit courts have found that registration happens when the application is filed, but others have held that it occurs only when the Register of Copyrights actually issues the copyright registration. The article recounts a recent case in the 11th Circuit in which the court dismissed an infringement case because the copyright holder had filed the application but no action had been taken by the US Copyright Office.

The authors note that the issue could be resolved if the US Supreme Court agrees to hear an appeal by the plaintiff in the 11th Circuit case, although, as of April 16, 2018, the Supreme Court had not acted on the plaintiff’s certiorari petition.

What We Like: The article raises an important issue for copyright holders that can be critical in copyright infringement cases. In addition to raising the topic, we particularly like the authors’ summary of the various positions among the federal appeals courts about when copyright registration actually occurs. This list is a good reference for any lawyers considering whether (and maybe even where) to bring an infringement case.

***

Reflections on Technology-Related Legal Issues: Looking Back at 2017; Will 2018 Be a Quantum Leap Forward?

Jeffrey Neuburger from Proskauer wrote an interesting article reflecting on technology-related legal issues in 2017 and looking forward to issues that will likely be in play in 2018. Neuburger mentions a number of things that came up in 2017 ranging from cybersecurity to privacy. He also discusses the development of blockchain (“a continuously growing list of records, called blocks, which are linked and secured using cryptography,” which is a “core component of bitcoin”) into areas beyond cryptocurrencies and poses questions about potential legal issues that may arise. In the privacy realm, Neuburger opines that “2018 also promises to be the year of Europe’s General Data Privacy Regulation” (GDPR) and notes that mobile tracking also is likely to be a hot issue in the new year.
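The quoted definition – records linked and secured using cryptography – can be made concrete with a toy hash chain in Python. This is a minimal sketch of the linking mechanism only, not any production blockchain:

```python
import hashlib

def block_hash(index: int, data: str, prev_hash: str) -> str:
    """Hash a block's contents together with the previous block's hash,
    which is what cryptographically links the records into a chain."""
    return hashlib.sha256(f"{index}|{data}|{prev_hash}".encode()).hexdigest()

# Build a tiny three-block chain.
chain = [{"index": 0, "data": "genesis", "hash": block_hash(0, "genesis", "0")}]
for i, data in enumerate(["record A", "record B"], start=1):
    chain.append({"index": i, "data": data,
                  "hash": block_hash(i, data, chain[-1]["hash"])})

# Altering an earlier record changes its hash, which would break the
# link stored in every subsequent block -- tampering is detectable.
assert block_hash(1, "record A (altered)", chain[0]["hash"]) != chain[1]["hash"]
```

Because each block’s hash depends on its predecessor’s, rewriting history requires recomputing every later block – the property that makes the “continuously growing list of records” tamper-evident.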

Most interesting, Neuburger spends almost half the article talking about quantum computing. He explains that quantum computers operate on the laws of quantum mechanics and use quantum bits or “qubits” (“a qubit can store a 0, 1, or a summation of both 0 and 1”), and states that quantum computers could be up to 100 million times faster than current computers. The article further sets out four areas of legal issues related to quantum computers: (i) encryption and cryptography; (ii) blockchain; (iii) securities industry; and (iv) military applications. Neuburger ominously notes that “quantum computers may be powerful enough (perhaps) to break the public key cryptography systems currently in use that protects secure online communications and encrypted data.”

What We Like: We’ve always looked forward to Jeffrey Neuburger’s commentary on new media and tech law issues, particularly his extensive recent blogging on the GDPR and other privacy issues. But we particularly liked his discussion of quantum computing, a topic not ordinarily discussed in these types of summaries and somewhat challenging for non-scientists to tackle. As is clear from Neuburger’s analysis, many aspects of the law may be affected as this technology advances.

***

Sex, Golf, and the FTC – And, of course, Continuity Sales Programs

On Venable’s All About Advertising Law Blog, Leonard Gordon discusses a recent Federal Trade Commission complaint and settlement with an online lingerie retailer related to a continuity sales promotion – “A continuity program is a company’s sales offer where a buyer/consumer is agreeing to receive merchandise or services automatically at regular intervals (often monthly), without advance notice, until they cancel.” (Gordon included a passing reference to a similar case involving golf balls, but did not provide many details – thus, the reference in the title.)

Read More

Blogs and Writings We Like

This week we highlight three writers discussing timely subjects in privacy and trademark law. Brandon Vigliarolo wrote in TechRepublic about Google’s new app privacy standards; Sarah Pearce from Cooley wrote a practical guide to the EU’s General Data Protection Regulation that includes a 6-month compliance plan; and Scott Hervey posted a piece on the IP Law Blog analyzing whether there was trademark infringement under an interesting situation involving a strain of pot.

Google’s new app privacy standards mean big changes for developers

In TechRepublic, Brandon Vigliarolo wrote about Google’s new app privacy standards that take effect on January 30, 2018. At the forefront, app developers will need to explain what data is being used, how it is used, and when it is used – and get user consent. Vigliarolo anticipates that most developers will need to make changes to their app design in order to comply with the new standards. In addition, any transmission of data (even in a crash report) has to be explained and accepted by the user. While Vigliarolo writes that it is not completely clear how Google will enforce these standards, beginning at the end of January users will be given warnings if an app (or a website leading to an app) is known by Google to collect user data without consent. Non-compliant developers could see lower ratings and less traffic.

Read More

Apple Touts Differential Privacy, Privacy Wonks Remain Skeptical, Google Joins In

(Originally published January 19, 2017, updated July 24, 2017)

Apple has traditionally distinguished itself from its rivals, like Google and Facebook, by emphasizing its respect of user privacy. It has taken deliberate steps to avoid vacuuming up all of its users’ data, providing encryption at the device level as well as during data transmission. It has done so, however, at the cost of foregoing the benefits that pervasive data collection and analysis have to offer. Such benefits include improving on the growing and popular on-demand search and recommendation services, like Google Now, Microsoft’s Cortana and Amazon’s Echo. Like Apple’s Siri technology, these services act as a digital assistant, providing responses to search requests and making recommendations. Now Apple, pushing to remain competitive in this line of its business, is taking a new approach to privacy, in the form of differential privacy (DP).

Announced in June 2016 during Apple’s Worldwide Developers’ Conference in San Francisco, DP is, as Craig Federighi, senior vice president of software engineering, stated, “a research topic in the area of statistics and data analytics that uses hashing, subsampling and noise injection to enable … crowdsourced learning while keeping the data of individual users completely private.” More simply put, DP is the statistical science of attempting to learn as much as possible about a group while learning as little as possible about any individual in it.
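The “noise injection” idea Federighi mentions can be illustrated with randomized response, a classic differential-privacy mechanism (this is a textbook sketch with parameters of our own choosing, not Apple’s actual implementation). Each user flips a coin: on heads they answer a sensitive yes/no question truthfully, on tails they answer with a second random coin flip. Any single answer is plausibly deniable, yet the aggregate statistic can still be recovered:

```python
import random

def randomized_response(truth: bool) -> bool:
    """Answer truthfully with probability 1/2; otherwise answer at random.
    No individual answer reveals that person's true value."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

# With many respondents, the observed "yes" rate q relates to the true
# rate p by q = p/2 + 1/4, so the aggregate estimate is p = 2q - 1/2.
answers = [randomized_response(True) for _ in range(100_000)]
q = sum(answers) / len(answers)
estimated_p = 2 * q - 0.5   # should be close to 1.0 here
```

This is exactly the trade Federighi describes: the pooled data remains useful for learning about the group, while each individual’s contribution stays private.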

Read More

Copyright, Fair Use, and the Kissing Picture: Storms v. New England Sports Network, Inc.

Recently, a photojournalist, Michael Storms, filed an intriguing lawsuit in the U.S. District Court for the Southern District of New York against a website that published photographs taken by Mr. Storms without his permission and without paying Mr. Storms a licensing fee. The photos were of New York Mets pitcher Matt Harvey kissing Victoria’s Secret model Adriana Lima at a restaurant in Miami, not long after Ms. Lima broke up with New England Patriots wide receiver Julian Edelman. The pictures were posted on the website of the New England Sports Network (NESN). (The case is Storms v. New England Sports Network, Inc.)

On its face, the complaint is relatively short and generic, but it will be interesting to see the defendant’s reply, whether the network argues that its use of the photos constitutes permissible “fair use,” and the potential effect of the court’s decision on copyright law as a whole.

Under the U.S. Copyright Act, 17 U.S.C. §§ 101 et seq., the “fair use of a copyrighted work, including . . . for purposes such as . . . news reporting . . . is not an infringement of copyright.” While there is no strict formula for how a court determines “fair use”, the Copyright Act (17 U.S. Code § 107) requires consideration of 4 factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.
Read More

Legal Issues in Ad Tech: De-Identified vs. Anonymized in a World of Big Data

In the booming world of Big Data, consumers, governments, and even companies are rightfully concerned about the protection and security of their data and how to keep one’s personal and potentially embarrassing details of life from falling into nefarious hands.  At the same time, most would recognize that Big Data can serve a valuable purpose, such as being used for lifesaving medical research and to improve commercial products. A question therefore at the center of this discussion is how, and if, data can be effectively “de-identified” or even “anonymized” to limit privacy concerns – and if the distinction between the two terms is more theoretical than practical. (As I mentioned in a prior post, “de-identified” data can potentially be re-identified, while anonymized data – at least in theory – cannot be.)

Privacy of health data is particularly important and so the U.S. Health Insurance Portability and Accountability Act (HIPAA) includes strict rules on the use and disclosure of protected health information. These privacy constraints do not apply if the health data has been de-identified – either through a safe harbor-blessed process that removes 18 key identifiers or through a formal determination by a qualified expert, in either case presumably because these mechanisms are seen as a reasonable way to make it difficult to re-identify the data.
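Mechanically, the safe-harbor approach amounts to stripping the enumerated identifier fields from each record before the data is shared. The sketch below is purely illustrative – the field names are hypothetical and cover only a small subset of the 18 Safe Harbor categories:

```python
# Hypothetical field names for a few of the 18 Safe Harbor identifier
# categories (names, contact details, SSNs, etc.) -- illustrative only.
SAFE_HARBOR_FIELDS = {"name", "address", "birth_date", "ssn", "email", "phone"}

def deidentify(record: dict) -> dict:
    """Drop the identifier fields, keeping the remaining health data."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

patient = {"name": "Jane Doe", "ssn": "123-45-6789",
           "diagnosis": "hypertension", "zip3": "554"}
deidentified = deidentify(patient)   # {"diagnosis": ..., "zip3": ...}
```

The policy question the post raises is whether what survives this process (diagnosis, partial geography, and the like) can nonetheless be combined with outside data to re-identify the individual.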

Read More

Blogs and Writings We Like

This week we highlight 3 writers discussing timely subjects in media tech law: Sandy Botkin writing about zombie cookies and targeted advertising, Geoffrey Fowler writing about the new world of phishing and “phishermen” (yes, that’s a thing), and Justin Giovannettone and Christina Von der Ahe writing about nonsolicitation agreements and social media law.

FTC vs Turn, Inc.: Zombie Hunters

Sandy Botkin, writing on TaxBot Blog, reports amusingly on the FTC’s December 2016 settlement with digital advertising data provider Turn, Inc., stemming from an enforcement action against Turn for violating Turn’s own consumer privacy policy. Botkin used the analogy of a human zombie attack to illustrate the effect of actions Turn took to end-run around user actions to block targeted advertising on websites and apps.

According to the FTC in its complaint, Turn’s participation in Verizon Wireless’ tracking header program – attaching unique IDs to all unencrypted mobile internet traffic for Verizon subscribers – enabled Turn to re-associate the Verizon subscriber with his or her use history. By so doing, according to Botkin, this further enabled Turn to “recreate[] cookies that consumers had previously deleted.” Or better yet: “Put another way, even when people used the tech equivalent of kerosene and machetes [to thwart zombies], Turn created zombies out of consumers’ deleted cookies.”
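Why deleting cookies doesn’t help against a carrier-injected tracking header can be sketched in a few lines. This is a toy model of the mechanism only (the header name and profile store are our own illustration, not Turn’s or Verizon’s actual systems): because the unique ID rides on every unencrypted request at the network level, the server can keep building a profile no matter what the browser deletes.

```python
# Toy sketch: a server-side profile keyed on a carrier-injected header.
profiles: dict[str, list[str]] = {}

def handle_request(headers: dict, url: str) -> None:
    """Accumulate browsing history keyed on the injected unique ID,
    independent of any cookies the browser sends (or has deleted)."""
    uid = headers.get("X-UIDH")  # hypothetical tracking-header name
    if uid:
        profiles.setdefault(uid, []).append(url)

handle_request({"X-UIDH": "subscriber-42"}, "news.example/article")
# The user deletes all cookies... but the carrier re-attaches the ID:
handle_request({"X-UIDH": "subscriber-42"}, "shop.example/cart")
# Both visits land in the same profile -- the "zombie" rises again.
```

The cookie is just a convenient local copy of that server-side profile, which is why Turn could “recreate” deleted cookies from it.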

What we like: We like Botkin’s zombie analogy, although not because we like zombies. We don’t. Like. Zombies. But we do think it’s a clever explanatory tool for an otherwise arcane issue.

*            *            *

Your Biggest Online Security Risk Is You

Geoffrey Fowler writes in The Wall Street Journal (here ($), with an even fuller version of the story available here via Dow Jones Newswires) about the latest in the world of phishing, that large category of online scams that, one way or another, has the common goals of accessing your data, your money or your life, or someone else’s who might be accessed through your unsuspecting gateway.

“If you’re sure you already know all about them, think again. Those grammatically challenged emails from overseas ‘pharmacies’ and Nigerian ‘princes’ are yesterday’s news. They’ve been replaced by techniques so insidious, they could leave any of us feeling like a sucker.”

Oren Falkowitz of Area 1 Security told Fowler that about 97% of all cyberattacks start with phishing. Phishing is a big deal.

Fowler writes of the constantly increasing sophistication of “phishermen” – yes, that’s a term – weakening the effectiveness of old common-sense precautions:

In the past, typos, odd graphics or weird email addresses gave away phishing messages, but now, it’s fairly easy for evildoers to spoof an email address or copy a design perfectly. Another old giveaway was the misfit web address at the top of your browser, along with the lack of a secure lock icon. But now, phishing campaigns sometimes run on secure websites, and confuse things with really long addresses, says James Pleger, security director at RiskIQ, which tracked 58 million phishing incidents in 2016.

What we like: Fowler is helpful with advice about newer precautions, including keeping web browser security features updated and employing 2-factor authentication wherever possible. We also like his admission of his own past victim-hood to phishing, via a malware attack. He’s not overly cheery about the prospects of stopping the bad guys, but he does give confidence to people willing to take a few extra regular precautions.

*            *            *

Don’t Friend My Friends: Nonsolicitation Agreements Should Account for Social Media Strategies

This is an employment story about former employees who signed agreements with their former employers restricting their solicitations of customers of their former employers. In the traditional nonsolicitation context, it wasn’t that hard to tell when a former employee went about trying to poach his or her former company’s business. Things have become trickier in the age of social media, when “friend”-ing, “like”-ing, or “following” a contact on Facebook, Twitter, Instagram or LinkedIn might or might not suggest nefarious related behavior.

Justin Giovannettone and Christina Von der Ahe of Orrick’s “Trade Secrets Watch” survey a nice representative handful of recent cases from federal and state courts on just such questions.

In one case, the former employee – now working for a competitor of his former employer – remained linked via LinkedIn with connections he made while at his former company. His subsequent action in inviting his contacts to “check out” his new employer’s updated website drew a lawsuit for violating his nonsolicitation. For various reasons, the lawsuit failed, but of most interest was Giovannettone and Von der Ahe’s comment that “The court also noted that the former employer did not request or require the former employee to ‘unlink’ with its customers after he left and, in fact, did not discuss his LinkedIn account with him at all.”

What we like: Giovannettone and Von der Ahe point out the inconsistencies in court opinions on this subject and, therefore, smartly recognize the takeaway for employers, namely to be specific about what’s expected of former employees. That may seem obvious, but for me it was surprising to learn that an employer could potentially – and enforceably – prevent a former employee from “friend”-ing on Facebook.

Read More

“Do Not Track” and Cookies – European Commission Proposes New ePrivacy Regulations

The European Commission recently proposed new regulations that will align privacy rules for electronic communications with the much-anticipated General Data Protection Regulation (GDPR) (the GDPR was fully adopted in May 2016 and goes into effect in May 2018). Referred to as the Regulation on Privacy and Electronic Communications or “ePrivacy” regulation, these final additions to the EU’s new data protection framework make a number of important changes, including expanding privacy protections to over-the-top applications (like WhatsApp and Skype), requiring consent before metadata can be processed, and providing additional restrictions on SPAM. But the provisions relating to “cookies” and tracking of consumers’ online activity are particularly interesting and applicable to a wide range of companies.

Cookies are small data files stored on a user’s computer or mobile device by a web browser. The files help websites remember information about the user and track a user’s online activity. Under the EU’s current ePrivacy Directive, a company must get a user’s specific consent before a cookie can be stored and accessed. While well-intentioned, this provision has caused frustration and resulted in consumers facing frequent pop-up windows (requesting consent) as they surf the Internet.
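For readers who want to see the mechanism, cookies travel as plain HTTP headers: the server sends a `Set-Cookie` header to store one, and the browser sends it back in a `Cookie` header on later requests. A minimal sketch using Python’s standard library (the cookie names here are hypothetical):

```python
from http.cookies import SimpleCookie

# Parse a Cookie header a browser would send back to a website.
incoming = SimpleCookie()
incoming.load("session_id=abc123; consented=true")
session = incoming["session_id"].value        # "abc123"

# Build a Set-Cookie header a server would send to store a cookie.
outgoing = SimpleCookie()
outgoing["prefs"] = "en"
outgoing["prefs"]["max-age"] = 3600           # persists for one hour
header = outgoing["prefs"].OutputString()     # e.g. "prefs=en; Max-Age=3600"
```

Because the storing step is a routine server response header, consent rules like the ePrivacy Directive’s require the site to interpose a user prompt before that header is ever honored – hence the ubiquitous pop-ups.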

Read More

Circuits Weigh In on PII Under the VPPA

The Video Privacy Protection Act (VPPA) was enacted in 1988 in response to Robert Bork’s Supreme Court confirmation hearings before the Senate Judiciary Committee, during which his family’s video rental history was used to great effect and in excoriating detail. This was the age of brick-and-mortar video rental stores, well before the age of instant video streaming and on-demand content. Nonetheless, VPPA compliance is an important component of the privacy and data security programs of online video-content providers, websites that host streaming videos and others that are in the business of facilitating consumers viewing streaming video.

Judicial application of the VPPA to online content has produced inconsistent results, including how the statute’s definition of personally-identifiable information (PII)—the disclosure of which triggers VPPA-liability—has been interpreted. Under the VPPA, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3). Courts and commentators alike have noted that this definition is vague particularly when applied to new technological situations, as it describes what counts as PII rather than providing an absolute definition. Specifically in the streaming video context, the dispute over the definition of PII typically turns on whether a static identifier, like an internet protocol (IP) address or other similar identifier uniquely assigned to consumers, counts as PII under the VPPA.

Read More