MediaTech Law

By MIRSKY & COMPANY, PLLC

Do We Need to Appoint a (GDPR) Data Protection Officer?

Does your organization need to appoint a “Data Protection Officer”?  Articles 37-39 of the EU’s General Data Protection Regulation (GDPR) require certain organizations that process the personal data of individuals in the EU to appoint a Data Protection Officer (DPO) to oversee and record their data processing activities.  Doing so is far more than perfunctory – you can’t just say, “Steve, our HR Director, you’re now our DPO.  Congratulations!”  The qualifications for the job are significant, and the organizational impact of having a DPO is extensive.  You may be better off not appointing a DPO if you don’t have to, but if you do have to, failing to appoint one can expose your organization to serious enforcement penalties.

Read More

Equifax Breach Ignites Discussions about Open Source Software

In the recent Equifax data breach, massive amounts of personal information (including the names, Social Security numbers, birth dates, addresses and driver’s license numbers of 145.5 million U.S. consumers) were potentially accessed by hackers. As a result, Equifax parted ways with its CEO and other executives. While Equifax has offered credit monitoring and identity theft protection to victims, the full extent of the damage may not be known for some time.

Interestingly, the incident has sparked a discussion about companies’ use of open source software, because Equifax claims the breach was caused by a vulnerability in an open source application framework called Apache Struts (the vulnerability is formally designated CVE-2017-5638). Apache Struts is a very popular framework for building web applications and was used by Equifax in a web portal that allowed consumers to dispute the accuracy of credit information. For context, the Apache Struts flaw is only one of many known and widely exploited security vulnerabilities in open source projects, including, among others, OpenSSL Heartbleed, gSOAP Devil’s Ivy, and Shellshock.
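For readers wondering how this class of problem gets caught in practice, here is a minimal, illustrative Python sketch of a dependency audit: it checks a project’s declared library versions against a hand-maintained list of version ranges publicly reported as affected by CVE-2017-5638. The artifact names, version ranges and dependency list below are assumptions for illustration; real teams would rely on dedicated scanning tools and official vulnerability feeds rather than a hard-coded table.

```python
# Illustrative only: compare declared dependency versions against a
# hand-maintained list of known-vulnerable version ranges. The ranges
# below reflect public reporting on CVE-2017-5638; verify against the
# official advisory before relying on them.

VULNERABLE = {
    "org.apache.struts:struts2-core": [
        ("2.3.5", "2.3.31"),
        ("2.5.0", "2.5.10"),
    ],
}

def parse_version(v):
    """Turn '2.3.31' into a comparable tuple like (2, 3, 31)."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(artifact, version):
    """Return True if the artifact/version falls within a known-bad range."""
    for low, high in VULNERABLE.get(artifact, []):
        if parse_version(low) <= parse_version(version) <= parse_version(high):
            return True
    return False

# Hypothetical flat list of dependencies pulled from a build file.
dependencies = [
    ("org.apache.struts:struts2-core", "2.3.30"),
    ("commons-io:commons-io", "2.5"),
]

for artifact, version in dependencies:
    if is_vulnerable(artifact, version):
        print(f"WARNING: {artifact} {version} matches a known CVE range")
```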

Equifax’s use of open source software is not unique. In a 2016 article in Wired, Klint Finley explained that open source can be the best way to develop software, in part because it “lets companies share the burden of developing common infrastructure and compatibility standards.”

Read More

Blogs and Writings We Like

This week we highlight three fine pieces on timely subjects in media tech law: Beverly Berneman writing about website terms of service and fair use, Leonard Gordon writing about “astroturfing” in advertising law, and John Buchanan and Dustin Cho writing about a gaping coverage gap in cybersecurity insurance.

Hot Topic: Fake News

Beverly Berneman’s timely post, “Hot Topic: Fake News” (on the “IP News For Business” blog of Chicago firm Golan Christie Taglia), offers a simple cautionary tale about publishing your copyrighted artwork on the internet, or in this case on a website (DeviantArt) promoting the works of visual artists. One such artist’s posting subsequently appeared for sale, unauthorized, on t-shirts promoted on the website of another company (Hot Topic). The aggrieved artist then sought recourse from DeviantArt. Berneman (like DeviantArt) pointed to DeviantArt’s terms of use, which prohibit downloading or using artwork for commercial purposes without permission from the copyright owner – leaving the artist with no claim against DeviantArt.

Berneman correctly highlights the need to read website terms of use before publishing your artwork on third-party sites, especially if you expect that website to police piracy by other parties. Berneman also dismisses the fair use arguments made by some commentators about this case, adding, “If Hot Topic used the fan art without the artist’s permission and for commercial purposes, it was not fair use.”

What we like: We like Berneman’s concise and spot-on guidance about the need to read website terms of use and, of course, when fair use is not “fair”. Plus her witty tie-in to “fake news”.

*            *            *

NY AG Keeps up the Pressure on Astroturfing

Leonard Gordon, writing in Venable’s “All About Advertising Law” blog, offered a nice write-up of several recent settlements of “Astroturfing” enforcement actions by New York State’s Attorney General. First, what is Astroturfing? Gordon defines it as “the posting of fake reviews”, although blogger Sharyl Attkisson put it more vividly: “What’s most successful when it appears to be something it’s not? Astroturf. As in fake grassroots.” (And for the partisan spin on this, Attkisson follows that up with her personal conclusions as to who makes up the “Top 10 Astroturfers”, including “Moms Demand Action for Gun Sense in America and Everytown” and The Huffington Post. Ok now. But we digress ….)

The first case involved an urgent care provider (Medrite), which evidently contracted with freelancers and firms to write favorable reviews on sites like Yelp and Google Plus. Reviewers were not required to have been actual urgent care patients, nor were they required to disclose that they were compensated for their reviews.

The second case involved a car service (Carmel). The AG claimed that Carmel solicited favorable Yelp reviews from customers in exchange for discount cards on future use of the service. As with Medrite, reviewers were not required to disclose compensation for favorable reviews, and customers posting negative reviews were not given discount cards.

Both settlements involved monetary penalties and commitments not to compensate reviewers without requiring the reviewers to disclose that compensation. In the Carmel settlement, Carmel also took on affirmative obligations to educate its industry against these practices.

What we like: We like Gordon’s commentary about this case, particularly its advisory conclusion: “Failure to do that could cause you to end up with a nasty case of ‘turf toe’ from the FTC or an AG.” Very nice.

*            *            *

Insurance Coverage Issues for Cyber-Physical Risks

John Buchanan and Dustin Cho write in Covington’s Inside Privacy blog about a gaping gap in insurance coverage for risks to physical property from cybersecurity attacks, as opposed to the more familiar privacy breaches. Buchanan and Cho discuss a recently published report from the U.S. Government’s National Institute of Standards and Technology (NIST), helpfully titled “Systems Security Engineering Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems”. Rolls off the tongue.

The NIST report is a dense read (257 pages) and covers far more than insurance issues, including recommendations for improvements to systems security engineering for (among other things) critical infrastructure, medical devices, hospital equipment and networked home devices (IoT, or the Internet of Things).

Buchanan and Cho’s post addresses insurance issues, noting that “purchasers of cyber insurance are finding that nearly all of the available cyber insurance products expressly exclude coverage for physical bodily injury and property damage”.

What we like: Insurance is always an important and underappreciated business issue, and there is even less public understanding of the property and injury risks of cyber damage and the coverage available for them. We like how Buchanan and Cho took the time to plow through an opaque government report to tell a simple and important story.

Read More

What’s Behind the Decline in Internet Privacy Litigation?

The number of privacy lawsuits filed against big tech companies has significantly dropped in recent years, according to a review of court filings conducted by The Recorder, a California business journal.

According to The Recorder, the period 2010-2012 saw a dramatic spike in cases filed against Google, Apple, or Facebook (as measured by filings in the Northern District of California naming one of the three as a defendant). The peak year was 2012, with 30 cases filed against the three tech giants, followed by a dramatic drop-off in 2014 and 2015, with only five privacy cases naming one of the three as a defendant filed across those two years combined. So what explains the sudden drop-off in privacy lawsuits?

One theory, according to privacy litigators interviewed for The Recorder article, is that the decline reflects the difficulty of applying federal privacy statutes to modern methods of collecting, monetizing, or disclosing online data. Many privacy class action claims are based on statutes passed in the 1980s, like the Electronic Communications Privacy Act (ECPA) and the Stored Communications Act (SCA), both passed in 1986, and the Video Privacy Protection Act (VPPA), passed in 1988. These statutes were originally written to address specific privacy intrusions like government wiretaps or disclosures of video rental history.

Read More

License Plate Numbers: A Valuable Data Point in Big-Data Retention

What can you get from a license plate number?

At first glance, a person’s license plate number may not seem that valuable a piece of information. When tied to a formal Motor Vehicle Administration (MVA) request, it can yield the owner’s name, address, type of vehicle, vehicle identification number, and any lienholders associated with the vehicle. While this does reveal some sensitive information, such as a likely home address, there are generally easier ways to gather that information. Furthermore, states have made efforts to protect such data, revealing owner information only to law enforcement officials or certified private investigators. The increasing use of Automated License Plate Readers (ALPRs), however, is revealing a treasure trove of historical location information that is being used by law enforcement and private companies alike. Also, unlike historical MVA data, policies and regulations surrounding ALPRs are in their infancy and provide far weaker safeguards for protecting personal information.

ALPR – what is it?

ALPRs consist of either stationary or vehicle-mounted cameras and use pattern recognition software to scan up to 1,800 license plates per minute, recording the time, date and location at which a particular car was encountered.
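To make the data trail concrete, here is a minimal Python sketch (with assumed field names) of what a single ALPR read might look like once stored: each scan ties a plate to a timestamp and a location, and millions of such rows accumulate quickly into a movement history.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    """One ALPR scan: a plate tied to a time and a place (field names assumed)."""
    plate: str            # plate text as recognized by the camera software
    captured_at: datetime
    latitude: float
    longitude: float
    camera_id: str        # which stationary or vehicle-mounted camera saw it

# A hypothetical day's worth of reads for one plate reconstructs its movements.
reads = [
    PlateRead("ABC1234", datetime(2017, 10, 2, 8, 15), 38.9072, -77.0369, "cam-17"),
    PlateRead("ABC1234", datetime(2017, 10, 2, 17, 42), 38.8951, -77.0364, "cam-03"),
]
for r in sorted(reads, key=lambda r: r.captured_at):
    print(r.captured_at, r.latitude, r.longitude)
```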

Read More

Liability for Data Loss in the Cloud: Why Does No One Accept Liability? Why Carve It Out?

Why is liability for data loss typically carved out or tightly limited in cloud service and IT outsourcing contracts?  A common disclaimer in contracts for cloud services (and sometimes plain old IT outsourcing) runs like this:

You agree to take full responsibility for files and data transferred, and to maintain all appropriate backup of files and data stored on our servers. We will not be responsible for any data loss from your account.  (From http://techtips.salon.com/liability-loss-data-under-hosting-agreement-2065.html (emphasis added))

What is the Liability from Data Loss?

First, what exactly is the liability – from data loss – that is being disclaimed?  What is the risk?  For that, we turn to Dan Eash, writing in Salon’s “Tech Tips”:

  1. Your site might be corrupted by hackers and spammers because your host didn’t properly secure the servers.
  2. Your host might do weekly backups, but something goes wrong and you lose days of work.
  3. You might have customers in a hosting reseller account who lose data because the host you bought the account from didn’t do regular backups.
  4. You might even have an e-commerce site where new customers make daily purchases.  If something goes wrong, how do you restore lost orders and customer details without a current backup?

I would add a 5th scenario: You just don’t know. 
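Given that the disclaimer quoted above pushes backup responsibility onto the customer, the practical takeaway is to keep your own copies. The following is a minimal Python sketch of a timestamped local backup, assuming you have already pulled your files down from the host into a local folder; the folder names are invented for illustration, and any real backup routine should also be tested by actually restoring from it.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Assumed locations: a local mirror of the hosted files, and a backup folder.
SOURCE = Path("site_mirror")   # e.g., synced down from the host beforehand
BACKUP_ROOT = Path("backups")

def make_backup():
    """Copy the mirrored site into a timestamped folder so older copies are kept."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = BACKUP_ROOT / stamp
    shutil.copytree(SOURCE, destination)
    return destination

if __name__ == "__main__":
    print("Backed up to", make_backup())
```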

Read More

SaaS: Software License or Service Agreement? Start with Copyright

SaaS, short for “Software as a Service”, is a software delivery model that grants users access to a program while the software itself and its accompanying data are stored off-site, on a vendor’s (or another third party’s) servers.  A user accesses the program via the internet, and the access is provided as a service.  Hence … “Software as a Service”.

In terms of user interface functionality, a SaaS offering – typically accessed via a subscription model – is identical to the traditional software model in which a user purchases (or, more typically, licenses) a physical copy of the software for installation on, and access via, the user’s own computer.  In enterprise structures, the software is installed on an organization’s servers and accessed via dedicated “client” end machines, under one of many client-server setups.  In that sense, SaaS is much like the traditional client-server enterprise model: in both cases the servers will likely be offsite, the difference being that SaaS servers are owned and managed by the software vendor.  The “cloud” really just refers to the invisibility of the legal and operational relationship of the servers to the end user, since even in traditional client-server structures the servers may well be offsite and accessed only via the internet.
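To make the delivery-model distinction concrete, here is a hedged Python sketch of what “accessing the software as a service” looks like from the user side: the program and data stay on the vendor’s servers, and the user reaches them over the internet with a subscription credential. The service URL, endpoint and token below are invented for illustration and do not refer to any real product.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical SaaS endpoint and subscription credential (not a real service).
SERVICE_URL = "https://app.example-saas.com/api/v1/reports"
SUBSCRIPTION_TOKEN = "token-from-your-subscription"

def fetch_report(report_id):
    """Call the vendor's hosted service; the software and data live on their servers."""
    request = Request(
        f"{SERVICE_URL}/{report_id}",
        headers={"Authorization": f"Bearer {SUBSCRIPTION_TOKEN}"},
    )
    with urlopen(request) as response:
        return json.load(response)

# Contrast with the traditional model, where the equivalent function would run
# against software and data installed on the user's own machine or servers.
```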

Read More

Privacy: Consent to Collecting Personal Information

Gonzalo Mon writes in Mashable that “Although various bills pending in Congress would require companies to get consent before collecting certain types of information, outside of COPPA, getting consent is not a uniformly applicable legal requirement yet. Nevertheless, there are some types of information (such as location-based data) for which getting consent may be a good idea.  Moreover, it may be advisable to get consent at the point of collection when sensitive personal data is in play.”

First, what current requirements – laws, agency regulations and quasi-laws – require obtaining consent, even if not “uniformly applicable”?

1. Government Enforcement.  The Federal Trade Commission’s November 2011 consent decree with Facebook requires express user consent to any sharing of nonpublic user information that “materially exceeds” the user’s privacy settings.  The FTC was acting under its authority under Section 5 of the FTC Act against an “unfair and deceptive trade practice”, an authority the FTC has liberally used in enforcement actions involving not just claimed breaches of privacy policies but also data security cases involving the handling of personal data without adequate security.

2. User Expectations Established by Actual Practice.  The mobile space offers some of the most progressive (and aggressive) examples of privacy rights seemingly established by practice rather than stated policy.  For example, on the PrivacyChoice blog, the CEO of PlaceIQ explained that “Apple and Android have already established user expectations about [obtaining] consent.  Location-based services in the operating system provide very precise location information, but only through a user-consent framework built-in to the OS.  This creates a baseline user expectation about consent for precise location targeting.”  (emphasis added)
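As a generic illustration of “consent at the point of collection” (not a sketch of any particular OS framework or legal standard), the following Python fragment gates collection of precise location data on an affirmative, recorded user choice; the function names and in-memory storage are assumptions for illustration.

```python
from datetime import datetime

# Hypothetical in-memory consent ledger; a real app would persist this.
consent_records = {}

def record_consent(user_id, purpose, granted):
    """Store the user's choice, when it was made, and what it covered."""
    consent_records[(user_id, purpose)] = {
        "granted": granted,
        "timestamp": datetime.now(),
    }

def collect_precise_location(user_id, get_location):
    """Only collect location if the user affirmatively consented to that purpose."""
    record = consent_records.get((user_id, "precise_location"))
    if not record or not record["granted"]:
        return None  # no consent on file: do not collect
    return get_location()

# Usage: consent is requested and recorded before any collection occurs.
record_consent("user-42", "precise_location", granted=True)
print(collect_precise_location("user-42", lambda: (38.9072, -77.0369)))
```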

Read More

Privacy For Businesses: Any Actual Legal Obligations?

For businesses, is there an obligation in the United States to do anything more than simply have a privacy policy?  The answer: not much of an obligation at all.

Put another way, is it simply a question of disclosure – so long as a business tells users what it intends to do with their personal information, can the business pretty much do anything it wants with personal information?  This would be the privacy law equivalent of the “as long as I signal, I am allowed to cut anyone off” theory of driving.

Much high-profile enforcement (via the Federal Trade Commission and State Attorneys General) has definitely focused on breaches by businesses of their own privacy statements.  Plus, state laws in California and elsewhere either require that companies have privacy policies or specify what types of disclosures must be in those policies; but again, these laws focus on disclosure rather than mandating specific substantive actions that businesses must or must not take when using personal information.

As The Economist recently noted in its Schumpeter blog, “Europeans have long relied on governments to set policies to protect their privacy on the internet.  America has taken a different tack, shunning detailed prescriptions for how companies should handle people’s data online and letting industries regulate themselves.”   This structural (or lack-of-structure) approach to privacy regulation in the United States can also be seen – vividly – in the legal and business commentary that met Google’s recent privacy overhaul.  Despite howls of displeasure and the concerted voices of dozens of State Attorneys General, none of the complaints relied on any particular violations of law.  Rather, the AGs’ arguments are made about consumer expectations, in the spirit of consumer advocacy, as in “[C]onsumers may be comfortable with Google knowing their search queries but not with it knowing their whereabouts, yet the new privacy policy appears to give them no choice in the matter, further invading their privacy.”

Again, there’s little reliance on codified law because, for better or worse, there is no relevant codified law to rely upon.  Google, Twitter and Facebook have famously been the subjects of enforcement actions by the states and the Federal Trade Commission, and accordingly Google has been careful in its privacy rollout to provide extensive advance disclosures of its intentions.

As The Economist also reported, industry trade groups have stepped in with self-regulatory “best practices” for online advertising, search and data collection, as well as “do not track” initiatives including browser tools, while the Obama Administration last month announced a privacy “bill of rights” that it hopes to move in the current or, more realistically, a future Congress.

This discussion also should not ignore common law and statutory rights against invasion of privacy, such as those underlying the criminal charges successfully brought in New Jersey against the Rutgers student who spied on his roommate.   These rights are not new, and for the time being they remain the main source of consumer recourse for privacy violations in the absence of meaningful contract remedies (for breaches of privacy policies) and legislative remedies targeted to online transactions.

More to come on this topic shortly.

Read More