MediaTech Law

By MIRSKY & COMPANY, PLLC

Protecting Children’s Privacy in the Age of Siri, Echo, Google and Cortana

“OK Google”, “Hey Cortana”, “Siri…”, “Alexa…”

These phrases are increasingly common as artificial intelligence (AI) becomes mainstream. They are the default wake phrases that kick off the myriad services offered by Google, Microsoft, Apple and Amazon, respectively, and are at the heart of the explosion of voice-activated search and services now available through computers, phones, watches, and stand-alone devices. Once activated, these devices record the statements being made and digitally process and analyze them in the cloud. The service then returns the results to the device in the form of answers, helpful suggestions, or an array of other responses.

A recent investigation by the UK’s Guardian newspaper, however, claims these devices likely run afoul of the U.S. Children’s Online Privacy Protection Act (COPPA), which regulates the collection and use of personal information from anyone younger than 13. If true, the companies behind these services could face multimillion-dollar fines.

COPPA details, in part, an operator’s responsibilities to protect children’s online privacy and safety, and when and how to seek verifiable consent from a parent or guardian. COPPA also restricts marketing to children under the age of 13. The purpose of COPPA is to protect children when they are online or interacting with internet-enabled devices, and to prevent the rampant collection of their sensitive personal data and information. The Federal Trade Commission (FTC) is the agency tasked with monitoring and enforcing COPPA, and it encourages industry self-regulation.

The Guardian investigation states that voice-enabled devices like the Amazon Echo, Google Home and Apple’s Siri are recording and storing data provided by children interacting with the devices in their homes. While the investigation concluded that these devices are likely collecting information from family members under the age of 13, it stops short of concluding whether it can be proven that these services target children under 13 as their primary audience – a key determining factor under COPPA. Furthermore, according to the FTC’s own COPPA FAQ page, even if a child provides personal information to a general-audience online service, so long as the service has no actual knowledge that the particular individual is a child, COPPA is not triggered.

While the details of COPPA will need to be refined and re-defined in the era of always-on digital assistants and AI, the harsh FTC crackdown the Guardian anticipates is unlikely to occur, and the potential large fines are unlikely to materialize. More likely, the FTC will provide guidance and recommendations to such services, allowing them to modify their practices and stay within the bounds of the law, so long as they are acting in good faith. For example, Amazon, Apple and Google could update their services to request, on installation, the ages and number of individuals in the home, paired with an update to the terms of service requesting parental permission for the use of data provided by children under 13. For children outside the immediate family who access the device, the services can claim they lacked actual knowledge that a child interacted with the service, again satisfying COPPA’s requirements.

Read More

Delayed Results of Google’s “Mobilegeddon” Show Small Sites Suffer on Mobile

On April 21st, online behemoth Google altered its search engine algorithm to favor websites it considered mobile-friendly. This change, dubbed “Mobilegeddon” by web developers and search engine optimization (SEO) specialists, rewarded sites that used responsive design and other mobile-friendly practices to ensure they display well on smartphones and other mobile devices. Conversely, sites that were not mobile-friendly would be penalized with lower rankings in mobile search results.

At the time, it was unclear just how large an impact this change would have on companies’ appearance in organic mobile search results. A recent report by Adobe Digital Index, however, shows that the impact has indeed been substantial. The report determined that traffic to non-mobile-friendly sites from Google mobile searches fell more than 10% in the two months after the change, with the impact growing weekly since April. In other words, non-mobile-friendly sites have dropped sharply in mobile search rankings, while mobile-friendly sites have risen, showing up higher on the mobile results page. The change has hit hardest the small businesses that likely underestimated the value of mobile search traffic, and has also affected financial services firms and law firms.

In a recent article in the Wall Street Journal, Adobe analyst Tamara Gaffney found that companies caught unprepared for the impact on search results have tried to offset the decrease in organic traffic by buying mobile search ads from Google. This tactic kept mobile users visiting their sites through paid ads. Substituting paid results for organic results may work in the short term, but it is usually not a sound long-term approach. A sustainable long-term online ad strategy usually balances building brand and consumer trust through organic search with strategic supplementation through paid ads.

What is a company adversely affected by Mobilegeddon to do?

One obvious course of action for a site that has suffered from Mobilegeddon is to become mobile-friendly. This means putting in place a responsive theme and implementing best practices that aid the mobile user experience, such as using larger, easier-to-read text and separating links to make them easier to tap on a smaller screen. Those unsure of how their site fares can use Google’s Mobile-Friendly Test Tool to see what recommendations the company makes to improve the mobile user’s experience.

With mobile search queries outpacing desktop, Google is sending a clear message that it is willing to reward sites that provide a good mobile experience, and businesses that fail to heed that message will suffer in the search rankings.

Read More

Website Policies and Terms: What You Lose if You Don’t Read Them

When was the last time you actually read the privacy policy or terms of use of your go-to social media website or your favorite app? If you’re a diligent internet user (like me), it might take you an average of 10 minutes to skim a privacy policy before clicking “ok” or “I agree.” But after you click “ok,” have you properly consented to all the ways in which your information may be used?

As consumers become more aware of how companies profit from the use of their personal information, the way a company discloses its data collection methods and obtains consent from its users becomes more important, both to the company and to users.  Some critics even advocate voluntarily paying social media sites like Facebook in exchange for more control over how their personal information is used. In other examples, courts have scrutinized whether websites can protect themselves against claims that they misused users’ information, simply because they presented a privacy policy or terms of service to a consumer, and the user clicked “ok.”

The concept of “clickable consent” has gained more attention because of the cross-promotional nature of many leading websites and mobile apps. 

Read More

Privacy: Consent to Collecting Personal Information

Gonzalo Mon writes in Mashable that “Although various bills pending in Congress would require companies to get consent before collecting certain types of information, outside of COPPA, getting consent is not a uniformly applicable legal requirement yet. Nevertheless, there are some types of information (such as location-based data) for which getting consent may be a good idea.  Moreover, it may be advisable to get consent at the point of collection when sensitive personal data is in play.”

First, what current requirements – laws, agency regulations and quasi-laws – require obtaining consent, even if not “uniformly applicable”?

1. Government Enforcement.  The Federal Trade Commission’s November 2011 consent decree with Facebook requires express user consent to sharing of nonpublic user information that “materially exceeds” the user’s privacy settings.  The FTC was acting under its authority under Section 5 of the FTC Act against “unfair and deceptive trade practices”, an authority the FTC has liberally used in enforcement actions involving not just claimed breaches of privacy policies but also data security cases involving the management of personal data without adequate security.

2. User Expectations Established by Actual Practice.  The mobile space offers some of the most progressive (and aggressive) examples of privacy rights seemingly established by practice rather than stated policy.  For example, on the PrivacyChoice blog, the CEO of PlaceIQ explained that “Apple and Android have already established user expectations about [obtaining] consent.  Location-based services in the operating system provide very precise location information, but only through a user-consent framework built-in to the OS.  This creates a baseline user expectation about consent for precise location targeting.”  (emphasis added)

Read More

Privacy For Businesses: Any Actual Legal Obligations?

For businesses, is there an obligation in the United States to do anything more than simply have a privacy policy?  The short answer: not much of an obligation at all.

Put another way, is it simply a question of disclosure – so long as a business tells users what it intends to do with their personal information, can the business pretty much do anything it wants with personal information?  This would be the privacy law equivalent of the “as long as I signal, I am allowed to cut anyone off” theory of driving.

Much high-profile enforcement (via the Federal Trade Commission and State Attorneys General) has definitely focused on breaches by businesses of their own privacy statements.  Plus, state laws in California and elsewhere either require that companies have privacy policies or specify what types of disclosures must appear in those policies, but these laws again focus on disclosure rather than mandating specific substantive actions that businesses must or must not take when using personal information.

As The Economist recently noted in its Schumpeter blog, “Europeans have long relied on governments to set policies to protect their privacy on the internet.  America has taken a different tack, shunning detailed prescriptions for how companies should handle people’s data online and letting industries regulate themselves.”   This structural approach (or lack of one) to privacy regulation in the United States can also be seen – vividly – in the legal and business commentary that met Google’s recent privacy overhaul.  Despite howls of displeasure and the concerted voices of dozens of State Attorneys General, none of the complaints relied on any particular violations of law.  Rather, the AGs make arguments about consumer expectations, in the name of consumer advocacy, as in “[C]onsumers may be comfortable with Google knowing their search queries but not with it knowing their whereabouts, yet the new privacy policy appears to give them no choice in the matter, further invading their privacy.”

Again, there’s little reliance on codified law because, for better or worse, there is no relevant codified law to rely upon.  Google, Twitter and Facebook have famously been the subjects of enforcement actions by the states and the Federal Trade Commission, and accordingly Google has been careful in its privacy rollout to provide extensive advance disclosures of its intentions.

As The Economist also reported, industry trade groups have stepped in with self-regulatory “best practices” for online advertising, search and data collection, as well as “do not track” initiatives including browser tools, while the Obama Administration last month announced a privacy “bill of rights” that it hopes to move in the current or, more realistically, a future Congress.

Nor should this ignore common law privacy rights, such as those underlying the criminal charges successfully brought in New Jersey against the Rutgers student who spied on his roommate.   These rights are not new, and for the time being they remain the main source of consumer recourse for privacy violations in the absence of meaningful contract remedies (for breaches of privacy policies) and legislative remedies targeted to online transactions.

More to come on this topic shortly.

Read More

Citizen Journalism: Vetting Quality Via Lessons from Gaming

Unlike traditional newsroom journalists, “citizen journalists” have no formal way to ensure that everyone maintains similar quality standards.  That does not mean quality standards are necessarily (or consistently) maintained at traditional newsrooms, but rather that a traditional hierarchical editorial structure imposes at least theoretical guidelines.

By definition, citizen journalism’s inherent difference from the traditional editorial process is the dispersion of responsibility for editorial choice.  Nonetheless, “trustiness” in journalism is a concept still heavily dependent on a reporter’s or editor’s reputation.  Is the New York Times trusted because it’s trustworthy?  Or is it trustworthy because it’s trusted?

The “Generated By Users” journalism blog recently reported the results of its reader poll, “Do you TRUST user generated content in news?”

Read More

Dropbox TOS – In Praise of Clarity

Earlier this month, Dropbox spawned a new kerfuffle in internet-land with changes to its Terms of Service (TOS).

The outrage was fast and furious.  A good deal of blog, Tumblr and other commentary zeroed in on changes Dropbox announced to its TOS before the 4th of July holiday, and in particular on how this or that provision “won’t hold up in court”.  See, for example, J. Daniel Sawyer’s commentary here.

Sawyer was referring to language in the TOS for cloud-server services granting ownership rights to Dropbox or other cloud services.

At least I think that’s what he was referring to, because the Dropbox TOS did not actually grant those ownership rights to Dropbox.  Dropbox’s TOS – like the similar TOS for SugarSync and Box.net – granted limited use rights to enable Dropbox to actually provide the service.  Here is the offending provision:

… you grant us (and those we work with to provide the Services) worldwide, non-exclusive, royalty-free, sublicenseable rights to use, copy, distribute, prepare derivative works (such as translations or format conversions) of, perform, or publicly display that stuff to the extent we think it necessary for the Service.

To be clear, if Dropbox actually claimed ownership rights to customer files – and actually provided for the same in its TOS – there’s no particular reason such a grant “won’t hold up in court”.   There are certainly cases of unenforceable contracts – contracts that are fraudulently induced or in contravention of public policy, for example – but a fully and clearly disclosed obligation in exchange for a mutual commitment of service is enforceable.

Read More

Podcast #10: BitTorrent Copyright Infringement: Trouble for DMCA?

 

Today, I discuss BitTorrent, and a particular case in California challenging the copyright validity of what one service provider is doing.  BitTorrent has been in the (copyright) news lately – and not surprisingly – after the movie studios set their sights on bringing down the latest iteration of file-sharing technology.

Some of the issues I discuss are these:

  • What is the BitTorrent file sharing technology? And how is it different from Napster and its peer-to-peer progeny?
  • What are the 2 biggest distinctions between BitTorrent and peer-to-peer and, in particular, BitTorrent’s distributive approach to file-sharing?
  • Why is BitTorrent in the (copyright) news? I will particularly discuss a case in federal court in California, in which Columbia Pictures and other film studios sued a BitTorrent company called isoHunt, together with its founder, Gary Fung.
  • What were the relevant legal issues in this case? Several important copyright arguments were made, but of most significance were 2 particular issues: inducement of copyright infringement, and the safe harbor for providers of “information location tools” under Section 512 of the Digital Millennium Copyright Act (the DMCA).
  • Why did Google get involved? I discuss how this case was an unusual instance where a court ruled that DMCA safe harbor protection was not available to a provider of “information location tools” who knew or should have known about potential or actual copyright infringement happening on its service.


Read More

BitTorrent Copyright Infringement: Trouble for DMCA?

BitTorrent has been in the (copyright) news lately – and not surprisingly – after the movie studios set their sights on bringing down the latest iteration of file-sharing technology.

2 great background sources on what BitTorrent is and how it works can be found here and here.  In short, BitTorrent is a file-sharing technology, different from Napster and its peer-to-peer progeny in that it draws down pieces of large data files from multiple computers – rather than from a single computer to a single computer, peer to peer – based on a “community” structure of participating individual users.  The two biggest distinctions are (1) no single source for the compiled file contributes more than a very small portion of the total, and (2) the distributive structure finesses the constant file-sharing problem of large data transfers demanding large broadband resources.
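The piece-based, multi-source approach described above can be sketched in a toy example (purely illustrative – this is not BitTorrent’s actual wire protocol, and the function names are hypothetical): a file is split into small fixed-size pieces, each piece may arrive from a different peer in any order, and the client reassembles the file locally once every piece index is accounted for.

```python
import hashlib

def split_into_pieces(data: bytes, piece_size: int) -> list[bytes]:
    """Split a file into fixed-size pieces, as a torrent does."""
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def assemble(pieces_from_peers: dict[int, bytes], piece_count: int) -> bytes:
    """Reassemble the file once every piece index has arrived, from any peer."""
    assert set(pieces_from_peers) == set(range(piece_count)), "missing pieces"
    return b"".join(pieces_from_peers[i] for i in range(piece_count))

# Simulate a swarm: the pieces arrive from many sources, in any order,
# and no single peer needs to supply more than a small fraction of the file.
original = b"example payload " * 64          # 1,024-byte stand-in for a large file
pieces = split_into_pieces(original, piece_size=64)
received = {i: p for i, p in enumerate(pieces)}   # piece index -> piece bytes
rebuilt = assemble(received, len(pieces))

# Integrity check, analogous to per-piece hashes in a .torrent file.
assert hashlib.sha1(rebuilt).hexdigest() == hashlib.sha1(original).hexdigest()
```

Real clients additionally verify each piece against a hash listed in the torrent’s metadata before accepting it, which is what lets a downloader safely mix pieces from strangers.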

Why is BitTorrent in the (copyright) news?

BitTorrent is in the news not simply because Netflix’s CEO stated that “we’ve finally beaten bitTorrent.”  (“We”, by the way, presumably refers to Netflix’s full-file streaming capabilities.)

Read More