
Privacy


EPIC to Senate: Weaknesses in Cybersecurity Threaten Both Consumers and Democratic Institutions

EPIC - 0 sec ago

EPIC submitted a statement to the Senate Homeland Security Committee in advance of a hearing on "Cyber Threats Facing America." Last year, the White House National Security Strategy report set out the administration's goals for global policy. EPIC supports several of the goals in the National Strategy report, including enhanced cybersecurity, support for democratic institutions, and protection of human rights. EPIC wrote to the Senate Committee to seek assurances that those goals will remain priorities for this administration. Quoting former world chess champion Garry Kasparov, EPIC also said "perhaps it is a firewall and not a border wall that the United States needs to safeguard our national interests at this moment in time."

Categories: Privacy

Net Neutrality Did Not Die Today

EFF News - Mon, 2018-04-23 17:01

When the FCC’s “Restoring Internet Freedom Order,” which repealed net neutrality protections the FCC had previously issued, was published on February 22nd, many interpreted that to mean the repeal would go into effect on April 23. That’s not true, and we still don’t know when the previous net neutrality protections will end.

On the Federal Register’s website—which is the official daily journal of the United States Federal Government and publishes all proposed and adopted rules—the so-called “Restoring Internet Freedom Order” has an “effective date” of April 23. But that date only applies to a few cosmetic changes. The majority of the rules governing the Internet—the prohibitions on blocking, throttling, and paid prioritization—remain in place.

Before the FCC’s repeal of those protections can take effect, the Office of Management and Budget has to approve the new order, which it hasn’t done. Once that happens, we’ll get another notice in the Federal Register. And that’s when we’ll know for sure when ISPs will legally be able to start changing their behavior.

If your Internet experience hasn’t changed today, don’t take that as a sign that ISPs aren’t going to start acting differently once the rule actually does take effect;  for example, Comcast changed the wording on its net neutrality pledge almost immediately after last year’s FCC vote.

Net neutrality protections didn’t end today, and you can help make sure they never do. Congress can still stop the repeal from going into effect by using the Congressional Review Act (CRA) to overturn the FCC’s action. All it takes is a simple majority vote held within 60 legislative working days of the rule being published. The Senate is only one vote short of the 51 votes necessary to stop the rule change, but there is a lot more work to be done in the House of Representatives. See where your members of Congress stand and voice your support for the CRA here.

Take Action

Save the net neutrality rules

Categories: Privacy

Stupid Patent of the Month: Suggesting Reading Material

EFF News - Mon, 2018-04-23 16:49

Online businesses—like businesses everywhere—are full of suggestions. If you order a burger, you might want fries with that. If you read Popular Science, you might like reading Popular Mechanics. Those kinds of suggestions are a very old part of commerce, and no one would seriously think of them as a patentable technology.

Except, apparently, for Red River Innovations LLC, a patent troll that believes its patents cover the idea of suggesting what people should read next. Red River filed a half-dozen lawsuits in East Texas throughout 2015 and 2016. Some of those lawsuits were against retailers like home improvement chain Menards, clothier Zumiez, and cookie retailer Ms. Fields. Those stores all got sued because they have search bars on their websites.

In some lawsuits, Red River claimed the use of a search bar infringed US Patent No. 7,958,138. For example, in a lawsuit against Zumiez, Red River claimed [PDF] that “after a request for electronic text through the search box located at www.zumiez.com, the Zumiez system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text, as described and claimed in the ’138 Patent.” In that case, the “reading material” is text like product listings for jackets or skateboard decks.

In another lawsuit, Red River asserted a related patent, US Patent No. 7,526,477, which is our winner this month. The ’477 patent describes a system of electronic text searching, where the user is presented with “related concepts” to the text they’re already reading. The examples shown in the patent display a kind of live index, shown to the right of a block of electronic text. In a lawsuit against Infolinks, Red River alleged [PDF] infringement because “after a request for electronic text, the InText system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text.”   

Suggesting and providing reading material isn’t an invention, but rather an abstract idea. The final paragraph of the ’477 patent’s specification makes it clear that the claimed method could be practiced on just about any computer. Under the Supreme Court’s decision in Alice v. CLS Bank, an abstract idea doesn’t become eligible for a patent merely because you suggest performing it with a computer. But hiring lawyers to make this argument is an expensive task, and it can be daunting to do so in a faraway locale, like the East Texas district where Red River has filed its lawsuits so far. That venue has historically attracted “patent troll” entities that see it as favorable to their cases.

The ’477 patent is another of the patents featured in Unified Patents’ prior art crowdsourcing project Patroll. If you know of any prior art for the ’477 patent, you can submit it (before April 30) to Unified Patents for a possible $2,000 prize.

The good news for anyone being targeted by Red River today is that it’s no longer so easy to drag businesses from all over the country into a court of the patent owner’s choice. The Supreme Court’s TC Heartland decision, combined with a Federal Circuit case called In re Cray, means that patent owners have to sue in a venue where defendants actually do business.

It’s also a good example of why fee-shifting in patent cases, and upholding the case law of the Alice decision, are so important. Small companies using basic web technologies shouldn’t have to go through a multi-million dollar jury trial to get a chance to prove that a patent like the ’477 is abstract and obvious.

Categories: Privacy

Paid Prioritization: We Have Solved This Problem Before

CDT - Mon, 2018-04-23 16:20

Net neutrality does not end today. Although today does mark 60 days since the publication of the FCC’s order repealing its own rules, that repeal (due to some obscure and protracted administrative procedure) has not yet taken effect. Keep this in mind if you read or hear any arguments pointing out that ISPs haven’t ruined the internet, even without the net neutrality rules. For now, they still exist. And if the current effort to shut down the repeal through the Congressional Review Act (CRA) succeeds, the net neutrality protections will survive even longer.* But that doesn’t mean the debate is standing still. Instead, opponents of the rules are using the recent and repeated regulatory swings (that they caused) as justification for a legislative compromise. Specifically, some in the telecom industry have argued for watered-down consumer protections, most recently on the subject of paid prioritization.

Although it has been a key tenet of the net neutrality discussion for years, paid prioritization has recently become a more prominent focal point. Commonly spoken of in terms of “fast lanes,” paid prioritization is when online companies pay ISPs to give their data traffic preferential treatment. It allows ISPs to double charge by charging both the customer for service and edge providers to reach customers, and lets well-funded companies buy an advantage over their competitors. Because the value (and therefore the price) of paid prioritization increases as networks become more congested, it also rewards ISPs for letting their networks become clogged rather than upgrading their capacity.

Last week, the House Energy and Commerce Subcommittee on Communications and Technology held a hearing on the subject, ostensibly to “have a realistic discussion” about it and to develop a “nuanced approach.” This language fits nicely with the industry’s calls for compromise legislation, but conveniently discounts the decades-long discussion that led up to the 2015 Open Internet Order (OIO).

In some ways, the focus on paid prioritization represents progress. (It even sells hamburgers!) Practices like blocking websites or applications or throttling certain net traffic have become so universally disapproved of that they have faded from the debate. Most ISPs either have no interest in blocking or throttling or have given up fighting for the ability to do so, and even the current ISP-friendly legislative proposals would prohibit these practices. Paid prioritization, however, remains a core source of disagreement.

Unfortunately, ISPs and their advocates have tried to confuse the issue to hide the negative effects and incentives paid prioritization creates. They have claimed that banning paid prioritization jeopardizes telemedicine applications and autonomous vehicle safety and would inhibit emergency first responders and 911 systems. They have claimed that content delivery networks (CDNs) do the same thing as paid prioritization. They have talked about beneficial network traffic management techniques and paid prioritization as though they are one and the same. They have argued that small businesses would benefit from paid prioritization. They have claimed that paid prioritization would somehow lower the cost of internet access and have even used TSA PreCheck as a positive example of paid prioritization. But these claims are either misleading, ridiculous, or just plain wrong.

The net neutrality rules created by the OIO banned paid prioritization because its potential for harm to innovation and competition at the edges of the internet was “overwhelming.” The rules (which the current FCC has voted to repeal) applied only to broadband internet access service (BIAS) and did not apply to “specialized” or non-BIAS services, such as telemedicine applications or autonomous vehicle support. The rules also created exceptions for emergency services. So, under the OIO, ISPs would still be able to offer paid prioritization for the use cases they list, because those services do not constitute broadband internet access.

The arguments about CDNs and network management amount to semantic sleight-of-hand. CDNs allow companies to store information, like the files that make up websites or the music and movie files for streaming, closer to end users. This decentralized distribution makes for a better, faster experience by minimizing the distance and number of network segments between the user and the information. Prioritization, on the other hand, involves giving favorable treatment to some traffic as it crosses a network. For instance, an ISP can prioritize the traffic from an affiliate’s video streaming service by letting those packets jump the queue at the ISP’s routers, or by creating a separate queue just for the affiliate’s traffic.

Beyond the structural differences between paid prioritization and CDNs, they also have different effects on both network function and competition. Not only do CDNs offer more efficient delivery for their customers, they also reduce traffic loads between distant parts of the internet, improving speeds for everyone else. There is no limit to how many companies can benefit from CDNs, nor do CDNs create a disadvantage for non-customers; no traffic is made slower by CDN usage. Paid prioritization, however, cannot benefit everyone; by definition, it is impossible to prioritize everyone. By the same token, paid prioritization necessarily disadvantages all those who do not, or cannot, pay for preferential treatment.

Supporters also try to blur the line between paid prioritization and reasonable network traffic management. Traffic management consists of several techniques by which network operators like ISPs can improve the overall functionality of their network. For instance, operators may be able to provide a better quality of experience for subscribers using real-time video applications by prioritizing that traffic over less time-sensitive traffic like email or software updates. Done properly, no one’s quality of experience is degraded and all similar kinds of traffic enjoy the same treatment. The network works better and no one loses.
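
To make the distinction concrete, here is a minimal sketch (in Python, with invented traffic classes) of prioritization as a queueing decision: packets tagged as real-time jump ahead of bulk traffic such as email or software updates. Real routers use far more sophisticated schedulers; this only illustrates the basic mechanism that, when sold to the highest bidder, becomes paid prioritization.

import heapq
from itertools import count

# Hypothetical traffic classes for this sketch; a lower number is sent first.
PRIORITY = {"realtime": 0, "bulk": 1}
_seq = count()  # tie-breaker keeps first-in, first-out order within a class
queue = []

def enqueue(packet, traffic_class):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_seq), packet))

def dequeue():
    return heapq.heappop(queue)[2]

enqueue("email chunk", "bulk")
enqueue("video-call frame", "realtime")
enqueue("software update chunk", "bulk")
print(dequeue())  # the video-call frame jumps the queue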

The protections against blocking, throttling, and unreasonable discrimination in the OIO each had exceptions for reasonable network management. The rule against paid prioritization, however, did not. According to the Order, paid prioritization, by definition, is not a network management practice because it “does not primarily have a technical network management purpose.” Although (unpaid) prioritization can be a network traffic management technique, it takes on a completely different character when compensation is part of the deal, creating perverse incentives for ISPs and distorting competition online. This is why it’s so important to distinguish paid prioritization from everything else and not fall for the trickery of using paid prioritization and other, harmless terms interchangeably.

The claims that paid prioritization could somehow give small businesses an advantage are almost laughable. Paid prioritization is all about buying an advantage; how can small businesses hope to out-spend their deep-pocketed competitors? Equally ludicrous are the claims that ISPs would somehow drop broadband subscription prices if they could charge for prioritized treatment. As we’ve already said, paid prioritization monetizes network congestion, giving ISPs a way to charge more for getting around traffic jams that they create. In this light, Congresswoman Blackburn’s comparison of paid prioritization to the TSA PreCheck program is somewhat accurate, but it’s also illustrative of the perverse incentives paid prioritization creates for ISPs.

The conversation about paid prioritization is far from over, and you can be sure that efforts to confuse the issue will continue. Just remember this: the problems with paid prioritization all stem from the “paid” aspect. Whatever other aspects of prioritization ISPs may talk about, getting paid is what they want. But net neutrality cannot coexist with paid prioritization of web traffic; real net neutrality protections must prohibit paid prioritization. The 2015 Open Internet Order did this, while also allowing flexibility to perform reasonable network management and to support limited-purpose “specialized” services like telemedicine. That sounds like a compromise to me.

* There are also two court cases pending: one to strike down the 2015 rules is stalled in front of the Supreme Court, and one to strike down the 2018 repeal of the rules is gearing up for briefing. The outcome of either of these could alter the existing rule set. To add to the complexity, litigation against the various state initiatives to put net neutrality protections in place will emerge as soon as the repeal takes effect.

Categories: Privacy

EPIC Sues FTC for Release of Facebook's Audits

EPIC - Fri, 2018-04-20 17:45

EPIC has filed a Freedom of Information Act lawsuit to obtain the release of the unredacted Facebook Assessments from the FTC. The FTC Consent Order required Facebook to provide to the FTC biennial assessments conducted by an independent auditor. In March, EPIC filed a Freedom of Information Act request for the 2013, 2015, and 2017 Facebook Assessments and related records. EPIC's FOIA request drew attention to a version of the 2017 report available at the FTC website. But that version is heavily redacted. EPIC is now suing for the release of the unredacted reports. EPIC has an extensive open government practice and has previously obtained records from many federal agencies. The case is EPIC v. FTC, No. 18-942 (D.D.C. filed April 20, 2018).

Categories: Privacy

EPIC Obtains Partial Release of 2017 Facebook Audit

EPIC - Fri, 2018-04-20 17:10

EPIC has obtained a redacted version of the 2017 Facebook Assessment required by the 2012 Federal Trade Commission Consent Order. The Order required Facebook to conduct biennial assessments from a third-party auditor of Facebook's privacy and security practices. In March, EPIC filed a Freedom of Information Act request for the 2013, 2015, and 2017 Facebook Assessments as well as related records. The 2017 Facebook Assessment, prepared by PwC, stated that "Facebook's privacy controls were operating with sufficient effectiveness" to protect the privacy of users. This assessment was prepared after Cambridge Analytica harvested the personal data of 87 million Facebook users. In a statement to Congress for the Facebook hearings last week, EPIC noted that FTC Commissioners represented that the Consent Order protected the privacy of hundreds of millions of Facebook users in the United States and Europe.

Categories: Privacy

We’re in the Uncanny Valley of Targeted Advertising

EFF News - Fri, 2018-04-20 14:22

Mark Zuckerberg, Facebook’s founder and CEO, thinks people want targeted advertising. The “overwhelming feedback,” he said multiple times during his congressional testimony, was that people want to see “good and relevant” ads. Why then are so many Facebook users, including lawmakers in the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point that it’s so “good,” it’s bad—for users, who feel surveilled by the platform, and for Facebook, which is rapidly losing its users’ trust. But there’s a solution, which Facebook must prioritize: stop collecting data from users without their knowledge or explicit, affirmative consent.

It should never be the user’s responsibility to have to guess what’s happening behind the curtain.

Right now, most users don’t have a clear understanding of all the types of data that Facebook collects or how it’s analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you’re being targeted for an ad, for example, you’re mostly out of luck. Sure, there’s a “why was I shown this” option on an individual ad, but it generally reveals only bland categories like “Over 18 and living in California”—and to get an even semi-accurate picture of all the ways you can be targeted, you’d have to click through various sections, one at a time, on your “Ad Preferences” page.

Text from Facebook explaining why an ad has been shown to the user

Even more opaque are categories of targeting called “Lookalike audiences.” Because Facebook has so many users—over 2 billion per month—it can automatically take a list of people supplied by advertisers (such as current customers or people who like a Facebook page) and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying “the common qualities” of the people in the uploaded list, such as their related demographic information or interests, and finding people who are similar to (or "look like") them, to create an all-new list. But those comparisons are made behind the curtain, so it’s impossible to know what data, specifically, Facebook is using to decide you look like another group of users. And to top it off: much of what’s being used for targeting generally isn’t information that users have explicitly shared—it’s information that’s been actively—and silently—taken from them.
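
Facebook has never published how its lookalike matching works, but the general idea of expanding a seed audience by attribute similarity can be sketched in a few lines. The toy example below (with invented users and interests, and a simple Jaccard score standing in for whatever Facebook actually computes) ranks other users by how much their interests overlap with an advertiser-supplied list:

# Toy sketch of "lookalike" expansion: rank other users by how similar
# their interest sets are to an advertiser-supplied seed audience.
# The users, interests, and similarity measure are all invented here;
# Facebook's real pipeline is proprietary and far more complex.

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def lookalike(seed, others, k=2):
    """Return the k users most similar, on average, to the seed audience."""
    scores = {
        user: sum(jaccard(interests, s) for s in seed.values()) / len(seed)
        for user, interests in others.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

seed_audience = {"alice": {"hiking", "camping", "photography"},
                 "bob": {"camping", "cycling"}}
other_users = {"carol": {"camping", "hiking"},
               "dave": {"opera", "chess"},
               "erin": {"cycling", "photography"}}
print(lookalike(seed_audience, other_users))  # likely ['carol', 'erin']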

Telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself, instead bringing up more unanswerable questions about how data is collected

Just as vague is targeting that uses data provided by third-party “data brokers.” In March, Facebook announced changes to discontinue one aspect of this data sharing, called partner categories, in which data brokers like Acxiom and Experian combine their own massive datasets with Facebook’s to target users. These are the kinds of changes Facebook has touted to “help improve people’s privacy”—but they won’t have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook—and other places online where behaviors are tracked to target users—creep us out. Whether they’re for shoes that we’ve been considering buying to replace ours, for restaurants we happened to visit once, or even for toys that our children have mentioned, the ads can indicate a knowledge of our private lives that the company has consistently failed to admit to having, and moreover, knowledge that was supplied via Facebook’s AI, which makes inferences about people—such as their political affiliation and race—that are clearly out of many users’ comfort zones. This AI-based ad targeting on Facebook is so obscured in its functioning that even Zuckerberg thinks it’s a problem. “Right now, a lot of our AI systems make decisions in ways that people don't really understand,” he told Congress during his testimony. “And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions.”

But we don’t have 10 or 20 years. We’ve entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that both the House and the Senate mentioned the spreading myth that the company wiretaps its users’ phones. It’s understandable that users have reached conclusions like this to explain the creeped-out feelings they rightfully experience. The concern that you’re being surveilled persists, essentially, because you are being surveilled—just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience interacting with a not-quite-human-like robot, being targeted highly accurately by machines based on private, behavioral information that we never actively gave out feels creepy, uncomfortable, and unsettling.

The trouble isn’t that personalization is itself creepy. When AI is effective it can produce amazing results that feel personalized in a delightful way—but only when we actively participate in teaching the system what we like and don't like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled, but instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily show us advertisements for products we only vaguely remember considering buying, or for people we met just once or thought about only recently—all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg told Congress multiple times that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and offering advertisers access to users’ private data for targeting purposes, it’s unlikely we’ll escape the discomfort we get when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how the company uses it, having no way of controlling data collection, and more importantly, no say in the collection in the first place, will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke said famously that “any sufficiently advanced technology is indistinguishable from magic”—and we should insist that the magic makes us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, becoming increasingly distrustful of AI in general, and instead of enjoying its benefits, fear its unsettling, not-quite-human understanding.  

Categories: Privacy

Minnesota Supreme Court Ruling Will Help Shed Light on Police Use of Biometric Technology

EFF News - Fri, 2018-04-20 12:43

A decision by the Minnesota Supreme Court on Wednesday will help the public learn more about law enforcement’s use of privacy-invasive biometric technology.

The decision in Webster v. Hennepin County is mostly good news for the requester in the case, who sought the public records as part of a 2015 EFF and MuckRock campaign to track mobile biometric technology use by law enforcement across the country. EFF filed a brief in support of Tony Webster, arguing that the public needed to know more about how officials use these technologies.

Across the country, law enforcement agencies have been adopting technologies that allow cops to identify subjects by matching their distinguishing physical characteristics to giant repositories of biometric data. This could include images of faces, fingerprints, irises, or even tattoos. In many cases, police use mobile devices in the field to scan and identify people during stops. However, police may also use this technology when a subject isn’t present, such as grabbing images from social media, CCTV, or even lifting biological traces from seats or drinking glasses.

Webster’s request to Hennepin County officials sought a variety of records, and included a request for the agencies to search officials’ email messages for keywords related to biometric technology, such as “face recognition” and “iris scan.”

Officials largely ignored the request, and when Webster brought a legal challenge, they claimed that searching their email for keywords would be burdensome and that the request was improper under the state’s public records law, the Minnesota Government Data Practices Act.

Webster initially prevailed before an administrative law judge, who ruled that the agencies had failed to comply with the Data Practices Act in several respects. The judge also ruled that requesting a keyword search of email records was proper under the law and was not burdensome.

County officials appealed that decision to a state appellate court. That court agreed that Webster’s request was proper and not burdensome. But it disagreed that the agencies had violated the Data Practices Act by not responding to Webster’s request or that they had failed to set up their records so that they could be easily searched in response to records requests.

Webster appealed to the Minnesota Supreme Court, which on Wednesday agreed with him that the agencies had failed to comply with the Data Practices Act by not responding to his request. The court, however, agreed with the lower appellate court that county officials did not violate the law in how they had configured their email service or arranged their records systems.

In a missed opportunity, however, the court declined to rule on whether searching for emails by keywords was appropriate under the Data Practices Act and not burdensome. The court claimed that it didn’t have the ability to review that issue because Webster had prevailed in the lower court and county officials failed to properly raise the issue.

Although this means that the lower appellate court’s decision affirming that email keyword searches are proper and not burdensome still stands, it would have been nice if the state’s highest court had weighed in on the issue.

EFF is nonetheless pleased with the court’s decision as it means Webster can finally access records that document county law enforcement’s use of biometric technology. We would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.

For more on biometric identification, such as face recognition, check out EFF’s Street-Level Surveillance project.

Categories: Privacy

Senator Blumenthal Calls On FTC To Enforce Consent Order Against Facebook

EPIC - Fri, 2018-04-20 10:25

Senator Richard Blumenthal (D-CT) has called for "monetary penalties that provide redress for consumers and stricter oversight" in a letter to the Federal Trade Commission. Senator Blumenthal focused on the FTC's 2011 Consent Order that EPIC and a coalition of consumer groups obtained after preparing a detailed complaint in 2009. Referring to the Cambridge Analytica scandal, Senator Blumenthal wrote that "three of the FTC's claims concerned the misrepresentation of verification and privacy preferences of third-party apps." Senator Blumenthal also raised questions about the FTC's monitoring of the consent order, noting that "even the most rudimentary oversight would have uncovered these problematic terms of service." And the Senator stated, "The Cambridge Analytica matter also calls into question Facebook's compliance with the consent decree's requirements to respect privacy settings and protect private information." EPIC and other consumer groups recently urged the FTC to reopen the investigation. The FTC has confirmed that an investigation of Facebook is now underway.

Categories: Privacy

Dear Canada: Accessing Publicly Available Information on the Internet Is Not a Crime

EFF News - Thu, 2018-04-19 23:00

Canadian authorities should drop charges against a 19-year-old Canadian accused of “unauthorized use of a computer service” for downloading thousands of public records hosted and available to all on a government website. The whole episode is an embarrassing overreach that chills the right of access to public records and threatens important security research.

At the heart of the incident, as reported by CBC news this week, is the Nova Scotian government’s embarrassment over its own failure to protect the sensitive data of 250 people who used the province’s Freedom of Information Act (FOIA) to request their own government files. These documents were hosted on the government web server that also hosted public records containing no personal information. Every request hosted on the server contained very similar URLs, which differed only in a single document ID number at the end of the URL. The teenager took a known ID number, and then, by modifying the URL, retrieved and stored all of the FOIA documents available on the Nova Scotia FOIA website.
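
To illustrate how little technical skill this required, here is a minimal sketch of that kind of sequential-ID download. The URL pattern, ID range, and output directory are invented for this example; it is not the actual Nova Scotia portal or the teenager's script, only a picture of what fetching predictably numbered, publicly accessible documents looks like:

# Hypothetical illustration of downloading documents that sit at
# predictable, sequentially numbered URLs with no authentication.
import os
import requests

BASE_URL = "https://foia.example.gov/documents/{doc_id}.pdf"  # invented URL

def fetch_documents(start_id, end_id, out_dir="docs"):
    """Save every document found in a contiguous ID range."""
    os.makedirs(out_dir, exist_ok=True)
    for doc_id in range(start_id, end_id + 1):
        resp = requests.get(BASE_URL.format(doc_id=doc_id), timeout=30)
        if resp.status_code == 200:
            with open(os.path.join(out_dir, f"{doc_id}.pdf"), "wb") as f:
                f.write(resp.content)

if __name__ == "__main__":
    fetch_documents(1, 100)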

Beyond the absurdity of charging someone with downloading public records that were available to anyone with an Internet connection, if anyone is to blame for this mess, it’s Nova Scotia officials. They insecurely set up their public records server, permitting public access to others’ private information. Officials should accept responsibility for failing to secure such sensitive data rather than ginning up a prosecution. The fact that the government was publishing documents that contained sensitive data on a public website without any passwords or access controls demonstrates its own failure to protect the private information of individuals. Moreover, it does not appear that the site even deployed minimal technical safeguards to exclude widely-known indexing tools such as Google search and the Internet Archive from archiving all the records published on the site, as both appear to have cached some of the documents.

The lack of any technical safeguards shielding the Freedom of Information responses from public access would make it difficult for anyone to know that they were downloading material containing private information, much less provide any indication that such activity was “without authorization” under the criminal statute. According to the report, more than 95% of the 7,000 Freedom of Information responses in question included redactions for any information properly excluded from disclosure under Nova Scotia’s FOI law. Freedom of Information laws are about furthering public transparency, and information released through the FOI process is typically considered to be public to everyone.

But beyond the details of this case, automating access to publicly available freedom of information requests is not conduct that should be criminalized: Canadian law criminalizes unauthorized use of  computer systems, but these provisions are only intended to be applied when the use of the service is both unauthorized and carried out with fraudulent intent. Neither element should be stretched to meet the specifics in this case. The teenager in question believed he was carrying out a research and archiving role, preserving the results of freedom of information requests. And given the setup of the site, he likely wasn’t aware that a few of the documents contained personal information. If true, he would not have had any fraudulent intent.

“The prosecution of this individual highlights a serious problem with Canada’s unauthorized intrusion regime,”  Tamir Israel, Staff Lawyer at CIPPIC, told us. “Even if he is ultimately found innocent, the fact that these provisions are sufficiently ambiguous to lay charges can have a serious chilling effect on innovation, free expression and legitimate security research.”

The deeper problem with this case is that it highlights how concerns about computer crime can lead to absurd prosecutions. The law the Canadian police are using to prosecute the teen was implemented after Canada signed the Budapest Cybercrime Convention. The convention’s original intent was to punish those who break into protected computers to steal data or cause damage.

Criminalizing access to publicly available data over the Internet twists the Cybercrime Convention’s purpose. Laws that offer the possibility of imposing criminal liability on someone simply for engaging with freely available information on the web pose a continuing threat to the openness and innovation of the Internet. They also threaten legitimate security research. As technology law professor Orin Kerr describes it, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Canada should follow the lead of the United States federal court’s decision in Sandvig v. Sessions, which made clear that using automated tools to access freely available information is not a computer crime. As the court wrote:

"Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

The same is true in the case of the Canadian teen.

We've long defended the use of “automated scraping,” which is the process of using web crawlers or bots — applications that run automated tasks over the Internet—to extract content and data from a website. Scraping provides a wide range of valuable tools and services that Internet users, programmers, journalists, and researchers around the world rely on every day to the benefit of the broader public.
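
For readers unfamiliar with the technique, a scraper can be only a few lines of code. The sketch below (in Python, against a made-up news page and CSS selector) fetches a page and pulls out headlines and links; a real scraper would also respect robots.txt and rate limits:

# Minimal scraping sketch: fetch a page and extract headlines and links.
# The URL and the CSS selector describe an invented page layout.
import requests
from bs4 import BeautifulSoup

def scrape_headlines(url):
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    results = []
    for link in soup.select("article h2 a"):  # hypothetical page structure
        results.append((link.get_text(strip=True), link.get("href", "")))
    return results

if __name__ == "__main__":
    for title, href in scrape_headlines("https://news.example.org/"):
        print(title, href)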

The value of automated scraping goes well beyond curious teenagers seeking access to freedom of information requests. The Internet Archive has long been scraping public portions of the world wide web and preserving them for future researchers. News aggregation tools, including Google’s Crisis Map, which aggregated critical information about California’s October 2016 wildfires, involve scraping. ProPublica journalists used automated scrapers to investigate Amazon’s algorithm for ranking products by price and uncovered that Amazon’s pricing algorithm was hiding the best deals from many of its customers. The researchers who studied racial discrimination on Airbnb also used bots, and found that guests with distinctively African American names were 16 percent less likely to be accepted than identical guests with distinctively white names.

Charging the Canadian teen with a computer crime for what amounts to his scraping publicly available online content has severe consequences for him and the broader public. As a result of the charges against him, the teen is banned from using the Internet and is concerned he may not be able to complete his education.

More broadly, the prosecution is a significant deterrent to anyone who wanted to use common tools such as scraping to collect public government records from websites, as the government’s own failure to adequately protect private information can now be leveraged into criminal charges against journalists, activists, or anyone else seeking to access public records.

Even if the teen is ultimately vindicated in court, this incident calls for a re-examination of Canada’s unauthorized intrusion regime and law enforcement’s use of it. The law was not intended for cases like this, and should never have been raised against an innocent Internet user.

Categories: Privacy

A Tale of Two Poorly Designed Cross-Border Data Access Regimes

EFF News - Thu, 2018-04-19 22:57

On Tuesday, the European Commission published two legislative proposals that could further cement an unfortunate trend towards privacy erosion in cross-border state investigations. Building on a foundation first established by the recently enacted U.S. CLOUD Act, these proposals compel tech companies and service providers to ignore critical privacy obligations in order to facilitate easy access when facing data requests from foreign governments. These initiatives collectively signal the increasing willingness of states to sacrifice privacy as a way of addressing pragmatic challenges in cross-border access that could be better solved with more training and streamlined processes.

The EU proposals (which consist of a Regulation and a Directive) apply to a broad range of companies1 that offer services in the Union and that have a “substantial connection” to one or more Member States.2 Practically, that means companies like Facebook, Twitter, and Google, though not based in the EU, would still be affected by these proposals. The proposals create a number of new data disclosure powers and obligations, including:

  • European court orders that compel internet companies and service providers to preserve data they already stored at the time the order is received (European preservation orders);
  • European court orders for content and ‘transactional’ data3 for investigation of a crime that carries a custodial sentence of at least 3 years (European production orders for content data);
  • European orders for some metadata defined as “access data” (IP addresses, service access times) and customer identification data (including name, date of birth, billing data and email addresses) that could be issued for any criminal offense (European production orders for access and subscriber data);4
  • An obligation for some service providers to appoint an EU legal representative who will be responsible for complying with data access demands from any EU Member State;
  • The package of proposals does not address real-time access to communications (in contrast to the CLOUD Act).
Who Is Affected and How?

Such orders would affect Google, Facebook, Microsoft, Twitter, instant messaging services, voice over IP, apps, Internet Service Providers, and e-mail services, as well as cloud technology providers, domain name registries, registrars, privacy and proxy service providers, and digital marketplaces.

Moreover, tech companies and service providers would have to comply with law enforcement orders for data preservation and delivery within 10 days or, in the case of an imminent threat to life or physical integrity of a person or to a critical infrastructure, within just six hours. Complying with these orders would be costly and time-consuming.

Alarmingly, the EU proposals would compel affected companies (which include diverse entities ranging from small ISPs and burgeoning startups to multibillion dollar global corporations) to develop extensive resources and expertise in the nuances of many EU data access regimes. A small regional German ISP  will need the capacity to process demands from France, Estonia, Poland, or any other EU member state in a manner that minimizes legal risks. Ironically, the EU proposals are presented as beneficial to businesses and service providers on the basis that they provide ‘legal certainty and clarity’. In reality, they do the opposite, forcing these entities to devote resources to understanding the law of each member state. Even worse, the proposal would immunize businesses from liability in situations where good faith compliance with a data request might conflict with EU data protection laws. This creates a powerful incentive to err on the side of compliance with a data demand at cost to privacy. There is no comparable immunity from the heavy fines that could be levied for ignoring a data access request on the basis of good-faith compliance with EU data protection rules.

No such liability limitation at all is available to companies and service providers subject to non-EU privacy protections. In some instances, the companies would be forced to choose between complying with EU data demands issued further to EU standards and complying with legal restrictions on data exposure imposed by other jurisdictions. For example, mechanisms requiring service providers to disclose customer identification data on the basis of a prosecutorial demand could conflict with Canada’s data protection regime. The Personal Information Protection and Electronic Documents Act (PIPEDA), a Canadian privacy law, has been held to prevent service providers from identifying customers associated with anonymous online activity in the absence of a court order. As the European proposals purport to apply to domain name registries as well, these mechanisms could also interfere with efforts at ICANN to protect anonymity in website registration by shielding customer registration information.

The EU package could also compel U.S.-based providers to violate the Stored Communications Act (SCA), which prevents the disclosure of stored communications content in the absence of a court order.5 The recent U.S. CLOUD Act created a new mechanism for bypassing these safeguards—allowing certain foreign nations (if the United States enters into an “executive agreement” with them under the CLOUD Act) to compel data production from U.S.-based providers without following U.S. law or getting an order from a U.S. judge. However, the United States has not entered into any such agreement with the EU or any EU member states at this stage, and the European package would require compliance even in the absence of one.

No Political Will to Fix the MLAT Process

The unfortunate backdrop to this race to subvert other states’ privacy standards is a regime that already exists for navigating cross-border data access. The Mutual Legal Assistance Treaty (MLAT) system creates global mechanisms by which one state can access data hosted in another while still complying with privacy safeguards in both jurisdictions. The MLAT system is in need of reform, as the volume of cross-border requests in modern times has strained some of its procedural mechanisms to the point where delays in responses can be significant. However, the fundamental basis of the MLAT regime remains sound and the pragmatic flaws in its implementation are far from insurmountable. Instead of reforming the MLAT regime in a way that would retain the current safeguards it respects, the European Commission and the United States seem to prefer to jettison these safeguards.

Perhaps ironically, much of the delay within the MLAT system arises from a lack of expertise among state agencies and officials in the data access laws of foreign states. Developing such expertise would allow state agencies to formulate foreign data access requests faster and more efficiently. It would also allow state officials to process incoming requests with greater speed. The EU proposals seek to bypass this requirement by effectively privatizing the legal assessment process, which means losing a real judge making real judgments. Service providers will now need to decide whether foreign requests are properly formulated under foreign laws. Yet judicial authorities are far better placed to make these assessments—not only from a resource management perspective, but also from a legitimacy perspective.

Contrary to this trend, European courts have continued to assert their own domestic privacy standards when protecting EU individuals’ data from access by foreign state agencies. Late last week, an Irish court questioned whether U.S. state agencies (particularly the NSA and FBI, which are granted broad powers under U.S. foreign intelligence surveillance law) are sufficiently restrained in their ability to access EU individuals’ data. The matter was referred to the EU’s highest court, and an adverse finding could prevent global communications platforms from exporting EU individuals’ data to the U.S. Such a finding could even prevent those same platforms from complying with some U.S. data demands regarding EU individuals’ data if additional privacy safeguards and remedies are not added. It is not yet clear what role such restrictions might ultimately play in any EU-U.S. agreement that might be negotiated under the U.S. CLOUD Act.

Ultimately, both the U.S. CLOUD Act and the EU proposals are a missed opportunity to work toward a cross-border data access regime that facilitates efficient law enforcement access while respecting privacy, due process, and freedom of expression.

Conclusion

Unlike the last-minute rush to approve the U.S. CLOUD Act, there is still a long way to go before finalizing the EU proposals. Both documents need to be reviewed by the European Parliament and the Council of the European Union, and be subject to amendments. Once approved by both institutions, the regulation will become immediately enforceable as law in all Member States simultaneously, and it will override all national laws dealing with the same subject matter. The directive, however, will need to be transposed into national law.

We call on EU policy-makers to avoid the privatization of law enforcement and work instead to enhance judicial cooperation within and outside the European Union.

  • 1. Specifically listed are: providers of electronic communications service, social networks, online marketplaces, hosting service providers, and Internet infrastructure providers such as IP address and domain name registries. See Article 2, Definitions.
  • 2. A substantial connection is defined in the regulation as having an establishment in one or more Member States. In the absence of an establishment in the Union, a substantial connection is shown by the existence of a significant number of users in one or more Member States, or by the targeting of activities towards one or more Member States (based on factors such as the use of a language or a currency generally used in a Member State, the availability of an app in the relevant national app store, the provision of local advertising or advertising in the language used in a Member State, or the use of information originating from persons in Member States in the course of the provider’s activities, among others). See Article 3, Scope of the Regulation.
  • 3. Transactional data is “generally pursued to obtain information about the contacts and whereabouts of the user and may serve to establish a profile of an individual concerned.” The regulation describes transactional data as “the source and destination of a message or another type of interaction, data on the location of the device, date, time, duration, size, route, format, the protocol used and the type of compression, unless such data constitutes access data.”
  • 4. The draft regulation states that access data is “typically recorded as part of a record of events (in other words a server log) to indicate the commencement and termination of a user access session to a service. It is often an individual IP address (static or dynamic) or other identifier that singles out the network interface used during the access session.”
  • 5. Most large U.S. providers insist on a warrant based on probable cause to disclose content, although the SCA allows disclosure on a weaker standard in some cases.

Categories: Privacy

A Little Help for Our Friends

EFF News - Thu, 2018-04-19 21:01

In periods like this one, when governments seem to ignore the will of the people as easily as companies violate their users’ trust, it’s important to draw strength from your friends. EFF is glad to have allies in the online freedom movement like the Internet Archive. Right now, donations to the Archive will be matched automatically by the Pineapple Fund.

Founded 21 years ago by Brewster Kahle, the Internet Archive has a mission to provide free and universal access to knowledge through its vast digital library. Their work has helped capture the massive—yet now too often ephemeral—proliferation of human creativity and knowledge online. Popular tools like the Wayback Machine have allowed people to do things like view deleted and altered webpages and recover public statements to hold officials accountable.

EFF and the Internet Archive have stood together in a number of digital civil liberties cases. We fought back when the Archive became the recipient of a National Security Letter, a tool often used by the FBI to force Internet providers and telecommunications companies to turn over the names, addresses, and other records about their customers, and frequently accompanied by a gag order. EFF and the Archive have worked together to fight threats to free expression, online innovation, and the free flow of information on the Internet on numerous occasions. We have even collaborated on community gatherings like EFF’s own Pwning Tomorrow speculative fiction launch and the recent Barlow Symposium exploring EFF co-founder John Perry Barlow’s philosophy of the Internet.

EFF co-founder John Perry Barlow with the Internet Archive’s Brewster Kahle.

This month, the Bitcoin philanthropist behind the Pineapple Fund is challenging the world to support the Internet Archive and the movement for online freedom. The Pineapple Fund will match up to $1 million in donations to the Archive through April 30. (EFF was also the grateful recipient of a $1 million Pineapple Fund grant in January of this year.) If you would like to support the future of libraries and preserve online knowledge for generations to come, consider giving to the Internet Archive today. We salute the Internet Archive for supporting privacy, free expression, and the open web.

Categories: Privacy

Patent Office Throws Out GEMSA’s Stupid Patent on a GUI For Storage

EFF News - Thu, 2018-04-19 18:14

The Patent Trial and Appeal Board has issued a ruling [PDF] invalidating claims from US Patent No. 6,690,400, which had been the subject of the June 2016 entry in our Stupid Patent of the Month blog series. The patent owner, Global Equity Management (SA) Pty Ltd. (GEMSA), responded to that post by suing EFF in Australia. Eventually, a U.S. court ruled that EFF’s speech was protected by the First Amendment. Now the Patent Office has found key claims from the ’400 patent invalid.

The ’400 patent described its “invention” as “a Graphic User Interface (GUI) that enables a user to virtualize the system and to define secondary storage physical devices through the graphical depiction of cabinets.” In other words, virtual storage cabinets on a computer. eBay, Alibaba, and Booking.com filed a petition for inter partes review arguing that claims from the ’400 patent were obvious in light of the Partition Magic 3.0 User Guide (1997) from PowerQuest Corporation. Three administrative patent judges from the Patent Trial and Appeal Board (PTAB) agreed.

The PTAB opinion notes that Partition Magic’s user guide teaches each part of the patent’s Claim 1, including the portrayal of a “cabinet selection button bar,” a “secondary storage partitions window,” and a “cabinet visible partition window.” This may be better understood through diagrams from the opinion. The first diagram below reproduces a figure from the patent labeled with claim elements. The second is a figure from Partition Magic, labeled with the same claim elements.

GEMSA argued that the ’400 patent was non-obvious because the first owner of the patent, a company called Flash Vos, Inc., “moved the computer industry a quantum leap forward in the late 90’s when it invented Systems Virtualization.” But the PTAB found that “Patent Owner’s argument fails because [it] has put forth no evidence that Flash Vos or GEMSA actually had any commercial success.”

The constitutionality of inter partes review is being challenged in the Supreme Court in the Oil States case. (EFF filed an amicus brief in that case in support of the process.) A decision is expected in Oil States before the end of June. The successful challenge to GEMSA’s patent shows the importance of inter partes review. GEMSA had sued dozens of companies alleging infringement of the ’400 patent. GEMSA can still appeal the PTAB’s ruling. If the ruling stands, however, it should end those suits as to this patent.

Related Cases: EFF v. Global Equity Management (SA) Pty Ltd

Categories: Privacy

New York Judge Makes the Wrong Call on Stingray Secrecy

EFF News - Thu, 2018-04-19 15:15

A New York judge has ruled that the public and the judiciary shouldn’t second-guess the police when it comes to secret snooping on the public with intrusive surveillance technologies.

He couldn’t be more wrong. 

A core part of EFF’s mission is questioning the decisions of our law enforcement and intelligence agencies over digital surveillance. We’ve seen too many cases where police have abused databases, hidden the use of invasive technologies, targeted people exercising their First Amendment rights, disparately burdened immigrants and people of color, and captured massive amounts of unnecessary information on innocent people. 

We’re outraged about New York Judge Shlomo Hager’s recent ruling against the New York Civil Liberties Union in a public records case. The judge upheld the New York Police Department’s decision to withhold records about its purchases of cell-site simulator equipment (colloquially known as Stingrays), including the names of surveillance products and how much they cost taxpayers. 

As the judge said in the hearing [PDF]: 

The case law is clear … "It is bad law and bad policy to second-guess the predictive judgments made by the government’s intelligence agencies" … Therefore, this Court will defer to Detective Werner, as well as to Inspector Gregory Antonsen’s expertise, that disclosure of the names of the StingRay devices, as well as the prices, would pose a substantial threat and would reveal the nonroutine information to bad actors that would use it to evade detection.

We wholeheartedly disagree. Holding police accountable and shining light on the criminal justice system is absolutely good law, good policy, and good for community relations. Questioning authority is one of the most important ways to defend democracy.

Up until a few years ago, a lot of law enforcement agencies around the country went to extreme lengths to hide the existence of cell-site simulators. These devices mimic cell towers in order to connect to people’s phones. Police would reject public records requests about this technology, while prosecutors would sometimes drop cases rather than let information come to light. One of the main vendors, Harris Corp., even had agencies sign non-disclosure agreements.

Transparency advocates sued and the technology’s capabilities began to surface. Police departments were using the technology to track phones without a warrant. They were sucking up data on thousands of innocent phone owners with each use. They were surveilling protesters. The technology reportedly interferes with cellphone coverage, which disparately impacts people of color because police much more frequently deploy cell-site simulators in their neighborhoods. 

In California, legislators were so outraged by the secrecy that they passed a law requiring any agency using a cell-site simulator to publish a privacy and usage policy online and hold public meetings before acquiring the technology. California also passed a law requiring a warrant before police can use a cell-site simulator as well as mandating annual public disclosures about these warrants. 

What’s good enough for California should be good for New York. Transparency in New York City about high-tech spying is especially important, given the NYPD’s track record of civil liberties violations—including illegal surveillance of Muslims and the practice of “testilying.” 

The argument that transparency is going to put more information in the hands of criminals is a weak diversion. By that logic, nothing law enforcement does should be open to public scrutiny, and we should resign ourselves to an Orwellian America monitored only by secret police. That argument failed to hold water in California. In the years since California legislators mandated greater transparency about acquisition and use of cell-site simulators, there is no evidence that these laws contributed to any crime. In recent years, many other agencies have handed over documents about cell-site simulators with little objection. 

The judge’s misguided ruling is a reminder that we must seek transparency through all available means. That’s why we support efforts in the New York City Council to pass the Public Oversight of Surveillance Technology (POST) Act. This measure would require the NYPD to publish a use policy for each electronic surveillance technology it has or seeks to use in the future. We’re also supporting a variety of measures across the country that would require even stronger oversight of spy tech, including a public process before equipment is acquired. Already, Santa Clara County, Davis, and Berkeley in California have passed such ordinances.

The time for secrecy over cell-site simulators has passed. The Stingray is out of the bag, and we’re going to keep fighting to make sure it remains in the open.

Learn more about cell-site simulators at EFF’s Street-Level Surveillance project.

Categories: Privacy

Hearing Monday in Groundbreaking Lawsuit Over Border Searches of Laptops and Smartphones

EFF News - Thu, 2018-04-19 12:46

EFF and ACLU Fight Government’s Move to Dismiss Case

Boston – The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) will appear in federal court in Boston Monday, fighting the U.S. government’s attempts to block their lawsuit over illegal laptop and smartphone searches at the country’s borders.

The case, Alasaad v. Nielsen, was filed last fall on behalf of 10 U.S. citizens and one lawful permanent resident who had their digital devices searched without a warrant. The lawsuit challenges the government’s fast-growing practice of searching travelers’ electronics at airports and other border crossings—often confiscating the items for weeks or months at a time—without any individualized suspicion that a traveler has done anything wrong.

The government has moved to dismiss this case.  In court on Monday, EFF Senior Staff Attorney Adam Schwartz will argue that the plaintiffs have legal standing to challenge these illegal searches, and ACLU Staff Attorney Esha Bhandari will argue that the searches are unconstitutional, violating the First and Fourth Amendments.

What:
Hearing in Alasaad v. Nielsen

When:
Monday, April 23
3 pm

Where:
Courtroom 11 (Judge Casper)
Moakley U.S. Courthouse
1 Courthouse Way
Boston, Massachusetts

For more information on this case:
https://www.eff.org/cases/alasaad-v-duke
https://www.aclu.org/cases/alasaad-v-neilsen-challenge-warrantless-phone-and-laptop-searches-us-border

Contact: Adam Schwartz, Senior Staff Attorney, adam@eff.org

Categories: Privacy

Latin American Consumer Groups Urge Facebook to Comply with GDPR in All Countries

EPIC - Thu, 2018-04-19 12:00

A coalition of 14 consumer groups in Latin America has sent a letter to Facebook CEO Mark Zuckerberg, urging him to comply with the EU General Data Protection Regulation (GDPR) at a global level. The groups wrote, "The GDPR provides a solid foundation for the protection of personal data: it establishes clear responsibilities for companies that collect and process personal data and provides data subjects, Facebook users whose data your company collects and processes, with clear rights. These are protections that all users should be entitled to, regardless of where they are located." Earlier this month, the Transatlantic Consumer Dialogue (TACD), a coalition of consumer groups in North America and Europe, also sent a letter to Facebook advocating for the GDPR to be implemented as a baseline standard of data protection for all users.

Categories: Privacy

EPIC Tells House Committee: Require Transparency for Government Use of AI

EPIC - Thu, 2018-04-19 11:50

In advance of a hearing on "Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy," EPIC told the House Oversight Committee that Congress must implement oversight mechanisms for the use of AI by federal agencies. EPIC said that Congress should require algorithmic transparency, particularly for government systems that involve the processing of personal data. EPIC also said that Congress should amend the E-Government Act to require disclosure of the logic of algorithms that profile individuals. EPIC made similar comments to the UK Privacy Commissioner on issues facing the EU under the GDPR. A recent GAO report explored challenges with AI, including the risk that machine-learning algorithms may not comply with legal requirements or ethical norms. EPIC has pursued several criminal justice FOIA cases, and FTC consumer complaints to promote transparency and accountability. In 2015, EPIC launched an international campaign for Algorithmic Transparency.

Categories: Privacy

Initial Observations on the European Commission’s E-Evidence Proposals

CDT - Wed, 2018-04-18 15:04

On April 17, the European Commission (EC) published its long-awaited draft legislation on E-Evidence (“E-Evidence”) to facilitate cross-border demands for internet users’ communications content and metadata. Commissioners Jourova (Justice), Avramopoulos (Home Affairs), and King (Security) proposed two separate pieces of legislation: (i) a Regulation (“Regulation”) that enables law enforcement authorities in European Union (EU) Member States to issue production orders on communications and cloud providers based in other Member States or based outside of the European Union, regardless of where the data is located; and (ii) a Directive (“Directive”) that would require Member States to enact legislation compelling providers that offer services in an EU Member State to establish a legal representative in an EU Member State for the receipt of cross-border demands.  

EU Member States and the European Parliament will now begin their review of the proposed legislation. CDT will contribute to this debate. We recognise the concerns about difficulties in obtaining electronic data relevant for criminal investigations that motivate the EC’s initiative. We also recognise that cooperation with communications providers may be enhanced, and that existing MLAT processes may not always be able to scale with the volume of requests. We have participated in a series of stakeholder meetings and a public consultation leading up to these proposals. During this process, we have argued that enhanced access to electronic data by law enforcement authorities cannot come at the expense of fundamental privacy and procedural rights protections. This is the core principle we will base our advocacy on as the legislative process moves forward.

If enacted and implemented, the Regulation and Directive will effectively give each EU Member State access for law enforcement purposes to the data of internet users worldwide. This is because each provider in the scope of the Regulation can be compelled to disclose its users’ data no matter where the user is located and no matter the country of citizenship of the user. This can create an enormous risk to privacy worldwide. Because EU Member States have different national laws that can provide different levels of protection, it is necessary to build strong human rights standards into the E-Evidence proposals.

CDT set out ten human rights standards the EC’s proposals must meet, and has now shown how they match up to these criteria. These are our initial observations. We will develop more detailed positions and suggestions once we have analysed the proposals more comprehensively.

Directive

The preamble to the 10-page proposed Directive paints a picture of inconsistent practices among Member States that the Directive is intended to address partially. Some already require providers to have local legal representatives for the service of process; others take the position that their process works extraterritorially. Member States apply different “connecting factors” to determine whether they have jurisdiction over a provider: some base jurisdiction on the location of the provider’s main office; others base jurisdiction on location of data sought; others base jurisdiction on whether services are offered in the territory of the country. Member States are also inconsistent with respect to whether the demands they issue to providers are obligatory or voluntary.  

The Directive requires certain providers to establish in an EU Member State a legal representative for the receipt of law enforcement demands, including the European Production Orders established in the Regulation described below. The Directive chooses the most minimal of connecting factors as the one that obligates a company to establish a local legal representative: the offer of services in a Member State. Thus, a start-up in the U.S. that successfully offers its service on a global basis would have to have a legal representative in an EU Member State. To partially offset the burden this will create, the EC notes that the legal representative can be a third party shared by multiple providers and could be the same representative the company chose for purposes of compliance with the GDPR. The Directive’s recital 13 indicates that mere accessibility of services in a Member State is not sufficient: there must also be a significant number of users in one or more Member States, or targeting of activities or advertising to one or more Member States.

The Directive describes very broadly the entities that would have to designate a legal representative: providers of electronic communications services; providers of information society services that store data for users, including social networks, online marketplaces, and other hosting service providers; and providers of internet name and number services. Entities that offer services for which storage of data is not a defining component are not required to designate a representative, but domain name registrars and registries, and privacy and proxy service providers, are required to do so. Additional clarity is needed to delineate the entities that must appoint a representative. The provider can choose to designate a representative only in a Member State in which the provider has an office or provides services, and particular Member States cannot obligate providers to designate a legal representative on their territory.

Missing from the Directive is a requirement that disclosure orders issued to a provider’s representative come through a central authority in each Member State.  Such a requirement would promote uniformity and quality in such demands. The absence of a Single Point of Contact (SPOC) is among the features of E-Evidence that drew fire from EuroISPA, the leading trade association among Europe-based ISPs.  

Regulation

The 29-page proposed Regulation would authorise judicial authorities in one Member State to issue “European Production Orders” (“Production Orders”) that compel a provider or a provider’s representative in another Member State to disclose stored communications content and transactional records in a criminal investigation. Production Orders for subscriber information and a new category of information called “access data” do not require judicial authorisation or approval. “Access data” is data related to the commencement and termination of a user access session to a service that is used, with IP address, by an access service provider to identify the user. The Regulation would also authorise prosecuting authorities in one Member State to issue “European Preservation Orders” (“Preservation Orders”) that compel a provider in another Member State to preserve content, transactional records, access information, and subscriber information until a Production Order or a request under a Mutual Legal Assistance Treaty or similar instrument can be obtained. Preservation Orders, including those for content, do not require judicial authorisation or approval and can be issued in investigations of petty crimes.
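To make these categories concrete, the following is an illustrative sketch, in Python, of what the classes of data distinguished above might look like for a single account. The field names and values are hypothetical and are not drawn from the proposal's text; the point is only the rising sensitivity from subscriber data, to access data, to transactional records and content.

    # Hypothetical records illustrating the Regulation's data categories.
    subscriber_data = {
        "name": "Jane Doe",
        "billing_address": "1 Example Street, Dublin",
        "email_on_file": "jane@example.org",
    }

    access_data = {
        # session commencement and termination, plus the IP address an
        # access service provider would use to identify the user
        "session_start": "2018-04-17T08:02:11Z",
        "session_end": "2018-04-17T08:47:53Z",
        "source_ip": "203.0.113.42",
    }

    transactional_records = [
        {"timestamp": "2018-04-17T08:05:00Z", "event": "message_sent",
         "recipient": "contact@example.com"},
    ]

    content = "The body of the message itself, the most protected category."

    # Under the draft Regulation, Production Orders for the first two
    # categories would not require judicial authorisation; orders for the
    # last two would.
    print(subscriber_data["name"], access_data["source_ip"])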

The Regulation will effectively operate against providers that offer services in a Member State which have no physical presence in a Member State, other than the representative that must be designated under the proposed Directive. Like the Directive, the Regulation broadly describes the providers on whom such orders can be served to include all of the entities covered by the proposed Directive.  

Production Orders for subscriber information and access data can be issued in investigations of petty crimes and without judicial authorisation. This creates a risk that providers will be inundated with such demands. Production Orders for content and transactional records can only be issued in criminal investigations of cyber crimes, fraud and counterfeiting of non-cash means of payment, child pornography and child sexual abuse and exploitation, and terrorism, as well as in investigations of any other crime for which the maximum penalty is at least three years in custody.  Limiting Production Orders for content and transactional records to serious crimes is a sensible step, and the European Parliament and Council should consider further limitations for Production Orders for subscriber and access information.

The Regulation states that, as a general matter, when data being sought is held by an entity which is not in the scope of the Regulation, but the entity uses an infrastructure service of a provider covered by the Regulation, a data request should be addressed to the entity, not the service provider. This is a sensible principle.

The Regulation does not require that Member States reimburse providers for costs incurred in reviewing and executing orders. Article 12 says that if a Member State reimburses domestic service providers for their costs, it must reimburse providers elsewhere for their compliance costs. Instead, reimbursement of costs should be mandatory. This would serve a dual purpose of protecting small providers against excessive costs, and more importantly, it would have a privacy-protective effect by making it less likely that Production Orders are issued unless there is a clear need and justification, particularly with respect to orders for access data and subscriber data, which can be sought in investigation of petty crimes.

The provider does not see the information in a Production or Preservation Order that shows the grounds upon which the order was determined to be necessary and proportionate. Instead, they see a Certificate that the order has been issued, and the Certificate provides in a standardised format the information necessary to identify the account from which data are sought. Articles 9 and 15 indicate that a provider can challenge a Production Order that, if complied with, would violate the rights of the individual concerned. Such challenges may be brought in the jurisdiction in which the order is served. However, the Regulation and Annex 1 make it clear that the provider will generally not receive the information that would be necessary to bring such a challenge, particularly in the case of a Production Order that would violate fundamental rights.

In addition, the Regulation does not require dual criminality, that is, a requirement that the conduct alleged to be criminal is a crime both in the issuing Member State and in the Member State in which the provider’s representative is present, or in the Member State in which the person to whom the data pertains resides or of which that person is a national. This presumes a high level of confidence in the adherence to fundamental rights in all Member States, because all Member States can issue Production Orders.

The Regulation imposes tight deadlines for provider response: 10 days normally, and six hours in an emergency when there is an imminent threat to life or physical integrity of a person, or to critical infrastructure. This creates a risk that providers will comply with requests that are improper just because the deadline for compliance is approaching. The 10-day limitation creates a risk that providers will prioritize less important demands (including demands in petty criminal cases) as the clock on them runs out instead of responding promptly in just a few days to more important, non-emergency demands. Annex 1, which contains the form for the European Production Order Certificate that the provider receives, is not faithful to these deadlines. It permits issuing authorities to specify other deadlines in non-emergency situations and does not contain any parameters for the duration of those deadlines.  

The confidentiality provisions of the Regulation in Article 11 may deprive persons whose data is being sought of notice of a Production Order in many circumstances.  The Regulation authorises issuing authorities to gag a provider receiving a Production Order when notice to the person to whom the data pertains would obstruct the criminal proceedings. It does not require issuing authorities to provide notice to such person, except in the case where the provider is gagged. Notice can be delayed to avoid obstructing the criminal proceedings. The question is whether the Law Enforcement Data Protection Directive’s (2016/680) Article 13 ensures that individuals are notified in such cases.

Categories: Privacy

Privacy as an Afterthought: ICANN's Response to the GDPR

EFF News - Wed, 2018-04-18 14:44

Almost three years ago, the global domain name authority ICANN chartered a working group to consider how to build a replacement for the WHOIS database, a publicly-accessible record of registered domain names. Because it includes the personal information of millions of domain name registrants with no built-in protections for their privacy, the legacy WHOIS system exposes registrants to the risk that their information will be misused by spammers, identity thieves, doxxers, and censors.

But at the same time, the public availability of the information contained in the WHOIS database has become taken for granted, not only by its regular users, but by a secondary industry that repackages and sells access to its data, providing services like bulk searches and reverse lookups for clients as diverse as marketers, anti-abuse experts, trademark attorneys, and law enforcement authorities.
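For readers unfamiliar with just how freely this data flows, here is a minimal sketch, in Python, of a legacy WHOIS lookup over the plain-text protocol on TCP port 43. The server shown is the VeriSign registry server commonly used for .com domains; a thin-registry response will in turn point to the sponsoring registrar's own WHOIS server, which holds the registrant contact fields discussed above. The queried domain is illustrative only.

    # Minimal WHOIS client: open a TCP connection to port 43, send the domain
    # name followed by CRLF, and read the plain-text response until the server
    # closes the connection. No authentication or accreditation is involved.
    import socket

    def whois_query(domain, server="whois.verisign-grs.com"):
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    if __name__ == "__main__":
        # Every field returned here is public, which is the privacy gap the
        # working group was chartered to close.
        print(whois_query("example.com"))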

The working group tasked with replacing this outdated system, formally known as the Next Generation gTLD RDS to Replace WHOIS PDP Working Group, did not get far. Despite holding 90-minute weekly working meetings for more than two years, deep divisions within the group have resulted in glacial progress, even as the urgency of its work has increased. A key privacy advocate within that Working Group, EFF Pioneer Award winner Stephanie Perrin, ended up resigning from the group in frustration this March, saying, "I believe this process is fundamentally flawed and does not reflect well on the multi-stakeholder model."

With the impending commencement of Europe's General Data Protection Regulation or GDPR on May 25, which will make the continued operation of the existing WHOIS system illegal under European law, ICANN's board has been forced to step in. On April 3, members of the Working Group were informed that it had been "decided to suspend WG meetings until further notice while we await guidance from the Board regarding how this WG will be affected by the GDPR compliance efforts."

ICANN Board Cookbook

With this, the Board has floated its own interim solution aimed at bringing the legacy WHOIS system into compliance with the GDPR. The ingredients of this so-called "Cookbook" proposal [PDF] are drawn from responses to a call for public submissions, to which EFF contributed [PDF]. In short, it would make the following changes to the WHOIS regime:

  • Although full contact information of domain name registrants will still be collected, most of this information will become hidden from public view, unless the registrant affirmatively "opts in" to displaying that information publicly. A tiered access model will be put in place to ensure that only parties who have a "legitimate interest" in obtaining access to a registrant's address, phone number, or email address, will be able to do so.
  • Although email addresses will not be displayed in the public WHOIS data record, they will be replaced by a contact form or anonymized email address, which would still allow members of the public to make contact with a domain owner if they need to. (This idea is one of those that EFF had suggested in our submission, with the additional suggestion that the contact form be protected by a CAPTCHA to minimize the potential for misuse.) A rough sketch of how such an alias relay could work appears just after this list.
  • No attempt is made to differentiate between domains registered to individuals and those registered to companies. This makes sense, because many company domain records do include personal contact information for individuals who act as the administrative or technical contacts for the domain. In practical terms, it would be impossible to weed out the entries that do contain such personal information from those that don't.
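Here is a rough, conceptual sketch in Python of how the anonymized-contact idea in the second bullet could work: the public record carries only an opaque alias, and a relay operated by the registrar forwards messages to the registrant's real address. Every name in it (the relay domain, the class, the alias format) is hypothetical and not part of ICANN's Cookbook; the CAPTCHA and rate limiting EFF suggested would sit in front of the forwarding step.

    # Hypothetical alias relay: the public WHOIS record shows only an opaque
    # address; the registrar maps it back to the registrant privately.
    import hashlib

    class ContactRelay:
        def __init__(self, relay_domain="relay.example-registrar.net"):
            self.relay_domain = relay_domain
            self._real_addresses = {}  # maps opaque alias -> real address

        def publish_alias(self, domain, real_email):
            """Return the opaque address that would appear in the public record."""
            token = hashlib.sha256(domain.encode()).hexdigest()[:16]
            alias = "{}@{}".format(token, self.relay_domain)
            self._real_addresses[alias] = real_email
            return alias

        def forward(self, alias, message):
            """Forward a message to the registrant behind an alias, if known."""
            real = self._real_addresses.get(alias)
            if real is None:
                return None
            return "forwarding to {}: {}".format(real, message)

    relay = ContactRelay()
    public_alias = relay.publish_alias("example.com", "owner@example.com")
    print(public_alias)  # the only address the public would ever see
    print(relay.forward(public_alias, "Question about your domain name"))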

The board proposal is an improvement on the status quo, but it doesn't go as far in protecting privacy as we would like. For example, it leaves it up to individual registrars whether to apply these privacy protections to all domain owners worldwide, or to attempt to limit them to those within the European Economic Area. It also contains an overly expansive suggested list of acceptable purposes for the collection and processing of WHOIS personal data, including "to address issues involving domain name registrations, including but not limited to: consumer protection, investigation of cybercrime, DNS abuse, and intellectual property protection."

The ICANN board's Cookbook proposal was submitted to the European Data Protection Authorities, who come together in a group called the Article 29 Working Party, for consideration at its next meeting, which took place on April 10-11. The board had hoped to receive [PDF] the group's agreement to a moratorium on enforcement of the GDPR over WHOIS until ICANN is able to get its act together and establish its interim accreditation program. But the Working Party's reply of April 11 [PDF] offers no such moratorium, and instead affirms that the purposes for data collection listed by the board are too broad and will require further work if they are to comply with the GDPR.

Another fundamental limitation of the Cookbook proposal is that while it sets up the idea that there should be an accreditation program for "legitimate" users, it leaves unanswered key questions about how that accreditation program should operate in practice, and in particular how it would assess the legitimacy of claimants seeking access to user data. Since there is not enough time to develop an accreditation system before May 25, the board floats the option of an interim self-accreditation process, which somewhat undermines the purposes of limiting access. The other option is that, by default, access to WHOIS data would "go dark" for all users, until a suitable accreditation system was in place.

Business and IP Constituencies Accreditation and Access Model

This prospect has disturbed stakeholders accustomed to receiving free access to registrant data; one goes so far as to describe the Cookbook proposal as "the most serious threat to the open and public Internet for decades." ICANN's Business and Intellectual Property constituencies have responded by proposing an accreditation and access model [PDF] aimed at keeping the WHOIS door open for three loosely-defined categories of actors: cybersecurity and opsec investigators, intellectual property rights holders and their agents, and law enforcement and other government agencies. It attempts to fill in the gaps of the Board's proposal by suggesting how these users might be accredited.

The biggest problem with the Business and IP constituencies' proposal is that the bar for accreditation to access full registrant data would be set so low that it would become essentially meaningless, while still managing to exclude the wider public and keep them in the dark about who might be viewing their personal data. For example, it could allow anyone who has registered a trademark to enjoy carte blanche access to the entire WHOIS database. In a token effort to prevent misuse of WHOIS access there would be random audits, but penalties for misuse might be limited to de-accreditation.

The proposal would structurally elevate the financial interests of intellectual property owners above the privacy and access rights of ordinary users. While the GDPR does allow data sharing that is necessary for the purposes of legitimate interests of third parties, these interests must be balanced with and can be overridden by the interests, rights or freedoms of the domain name registrant. This proposed accreditation and access model doesn't even attempt to strike such a balance.

Although EFF would have preferred a model requiring a court order or warrant for access to such personal information, it seems inevitable that tiered access will be based on some kind of ICANN-administered accreditation system. Community discussions on what that accreditation program should look like continue on a new ICANN discussion list, using the Business and IP constituencies' proposal as a starting point. But this is work that should have been finished long ago. The commencement date of the GDPR has been known since the rule was adopted on April 27, 2016. Although its edges will be difficult for ICANN to navigate, its basic outlines are not rocket science; it has been obvious for over two years that more would need to be done to secure the personal information of domain name registrants.

Unfortunately, ICANN's version of a multi-stakeholder process has broken down over this contentious issue of registrant data privacy. It therefore falls to ICANN's board to make the interim changes necessary to ensure that the WHOIS system is brought into compliance with European Union law. While this interim model may be replaced by a community-based access model in the future, institutional inertia is likely to see to it that the Board's "interim" policy constrains the outlines of that future model. This makes it all the more important that the ICANN Board listens to all segments of its community, and to the advice of the Article 29 Working Party, in order to ensure that the solutions developed strike an appropriate balance between stakeholders' competing interests, and that the human rights of users are put first.

Categories: Privacy

Assessing the European Commission’s E-Evidence Proposals on Ten Human Rights Criteria

CDT - Wed, 2018-04-18 14:14

Earlier this week, CDT described and made initial observations on the E-Evidence Directive and Regulation. We also issued a list of 10 human rights criteria that the E-Evidence proposals should meet. With the draft text of both now published, we have assessed each against those criteria.

1. Legality: Data demands must be connected to a crime published in a statute that gives sufficient detail to give an accused person notice that her actions are unlawful.

This criterion is partially met because European Production Orders and European Preservation Orders authorised in the Regulation may only be issued for criminal proceedings relating to a criminal offence for which a legal person may be held liable or punished in the issuing State. Whether a Member State’s criminal code provides sufficient notice that a person’s actions are unlawful depends on the text of the code, and the Regulation sets no requirements in this regard.

2. Judicial Authorisation: Data demands must be authorised by an independent entity – preferably judicial in nature – that is independent from the prosecutorial function.

This criterion is fully met for Production Orders for content and transactional data. The Regulation provides that judicial authorisation is necessary for Production Orders seeking this data.  However, prosecutors can issue Production Orders for access and subscriber data, and they can issue Preservation Orders for all types of data, without judicial authorisation.

3. High Probability: There must be a high degree of probability: (i) that a crime has been, is being, or will be committed; and (ii) that evidence of the crime would be revealed by the compelled disclosure.

If this criterion is met, it is met implicitly. The Regulation could, but does not, explicitly require a high degree of probability that a crime has been committed and that the information sought will reveal evidence of the crime. Issuing authorities are required to assess necessity and proportionality before issuing orders, and decisions of the European Court of Human Rights call for “reasonable suspicion” and even “probable cause” as part of such assessments.

4. Particularity: Demands should be limited to seeking only data relevant to the crime and should specify the device, account, or person to whom the data demanded relates.

This criterion seems to have been met. The Regulation provides that Production Orders must include, among other things, the persons whose data is being requested, except where the sole purpose of the order is to identify a person. Annex I prompts the issuing authority to specify device and account identifiers.  

5. Least Intrusive Means: If less intrusive mechanisms could readily be used to obtain the information necessary to prosecute the case, they should be used instead.

This criterion has not been met explicitly. The issuing authority has to demonstrate that the Production Order is necessary and proportionate, but how it meets that threshold is not clear. Different Member States may apply different thresholds, which would justify including explicit language in the Regulation on this matter. As a general point, standards should not be lessened across Member States.

6. Seriousness: Demands should be limited to serious crimes only, which can be articulated by type of crime (e.g. terrorism) and maximum sentence.

This criterion has been partially met. The Regulation permits Production Orders for content and transactional records only for cyber crimes, fraud and counterfeiting of non-cash means of payment, child pornography and child sexual abuse and exploitation, and terrorism, as well as in investigations of any other crime for which the maximum penalty is at least three years in custody. These are serious crimes or crimes that cannot be investigated effectively without electronic evidence. However, these limitations do not apply to Production Orders for access and subscriber data, and they do not apply to Preservation Orders.  

7. Notice: Users must be notified that their information has been sought or obtained.  Notice can be delayed in limited circumstances to protect the integrity of an investigation.  Provider notice should be permitted, but is no substitute for required notice from the government.

The confidentiality provisions of the Regulation in Article 11 may deprive persons whose data is being sought of notice of a Production Order in many circumstances. The Regulation authorises issuing authorities to gag a provider receiving a Production Order when notice to the person to whom the data pertains would obstruct the criminal proceedings.  It does not require issuing authorities to provide notice to such person, except in the case where the provider is gagged. Notice can be delayed to avoid obstructing the criminal proceedings. National measures implementing Article 13 of the Law Enforcement Data Protection Directive (2016/680) will determine whether individuals are notified in cases where the provider is not gagged.

8. Minimisation: Only information necessary to the investigation can be retained, and excess information must be destroyed or returned.

This criterion has not been met explicitly. The Regulation does not include provisions on data minimisation. The GDPR (2016/679) and the Law Enforcement Data Protection Directive (2016/680) have provisions on minimisation. It is necessary to consider whether such provisions should be added to the Regulation.

9. Transparency: Publication of numbers of data demands made and granted, and types of offences specified.

This criterion has not been met. Article 19 obliges Member States to maintain comprehensive statistics and report them to the EC annually. However, it does not oblige the EC to publish this information. This criterion would be met if that obligation were imposed. It would also be essential that Data Protection Authorities have full access to the data and can assess the use of the instrument, to verify whether privacy rules are respected.

10. Redress: There must be a process through which a person whose rights are interfered with because these criteria were not met can obtain redress.

The right to redress is addressed in Article 17, which provides that the person whose data was obtained, as well as suspects and accused persons, “shall have the right to effective remedies against a [Production Order] in the issuing State, without prejudice to remedies available under Directive (EU) 2016/680 and Regulation (EU) 2016/679.”  We will consider whether these remedies are sufficient and may provide further suggestions on this point.

Conclusion

Some of the human rights protections set out above have not been fully met, or are only met implicitly. We believe that improvements in the text are necessary to provide these protections. We look forward to working with the EC, the Council, and the Parliament to ensure that the human rights criteria that we have set forth are more fully and clearly met, and to make other improvements as well.

 

Categories: Privacy