

The New Music Modernization Act Has a Major Fix: Older Recordings Will Belong to the Public, Orphan Recordings Will Be Heard Again

EFF News - Wed, 2018-09-19 14:31

The Senate passed a new version of the Music Modernization Act (MMA) as an amendment to another bill this week, a marked improvement over the version passed by the House of Representatives earlier in the year. This version contains a new compromise amendment that could preserve early sound recordings and increase public access to them.

Until recently, the MMA (formerly known as the CLASSICS Act) was looking like the major record labels’ latest grab for perpetual control over twentieth-century culture. The House of Representatives passed a bill that would have given the major labels—the copyright holders for most recorded music before 1972—broad new rights in those recordings, ones lasting all the way until 2067. Copyright in these pre-1972 recordings, already set to last far longer than even the grossly extended copyright terms that apply to other creative works, would a) grow to include a new right to control public performances like digital streaming; b) be backed by copyright’s draconian penalty regime; and c) be without many of the user protections and limitations that apply to other works.

Fundamentally, Congress should not be adding new rights in works created decades ago.

The drafting process was also troubling. It seemed a return to the pattern of decades past, where copyright law was written behind closed doors by representatives from a few industries and then passed by Congress without considering the views of a broader public. Star power, in the form of famous musicians flown to Washington to shake hands with representatives, eased things along.

Two things changed the narrative. First, a broad swath of affected groups spoke up and demanded to be heard. Tireless efforts by library groups, music libraries, archives, copyright scholars, entrepreneurs, and music fans made sure that the problems with MMA were made known, even after it sailed to near-unanimous passage in the House. You contacted your Senators to let them know the House bill was unacceptable to you, and that made a big difference.

Second, the public found a champion in Senator Ron Wyden, who proposed a better alternative in the ACCESS to Recordings Act. Instead of layering bits of federal copyright law on top of the patchwork of state laws that govern pre-1972 recordings, ACCESS would have brought these recordings completely under federal law, with all of the rights and limitations that apply to other creative works. While that still would have brought them under the long-lasting and otherwise deeply-flawed copyright system we have, at least there would be consistency.

Weeks of negotiation led to this week’s compromise. The new “Classics Protection and Access Act” section of MMA clears away most of the varied and uncertain state laws governing pre-1972 recordings, and in their place applies nearly all of federal copyright law. Copyright holders—again, mainly record labels—gain a new digital performance right equivalent to the one that already applies to recent recordings streamed over the Internet or satellite radio. But older recordings will also get the full set of public rights and protections that apply to other creative work. Fair use, the first sale doctrine, and protections for libraries and educators will apply explicitly. That’s important, because many state copyright laws—California’s, for example—don’t contain explicit fair use or first sale defenses.

The new bill also brings older recordings into the public domain sooner. Recordings made before 1923 will exit from all copyright protection after a 3-year grace period. Recordings made from 1923 to 1956 will enter the public domain over the next several decades. And recordings from 1957 onward will continue under copyright until 2067, as before. These terms are still ridiculously long—up to 110 years from first publication, which is longer than any other U.S. copyright. But our musical heritage will leave the exclusive control of the major record labels sooner than it would have otherwise.

The bill also contains an “orphan works”-style provision that could allow for more use of old recordings even when the rightsholder can’t be found. By filing a notice with the Copyright Office, anyone can use a pre-1972 recording for non-commercial purposes, after first checking to make sure the recording isn’t in commercial use. The rightsholder then has 90 days to object. And if they do object, the potential user can still argue that their use is fair. This provision will be an important test case for solving the broader orphan works problem.

The MMA still has many problems. With the compromise, the bill becomes even more complex, extending to 186 pages. And fundamentally, Congress should not be adding new rights in works created decades ago. Copyright law is about building incentives for new creativity, enriching the public. Adding new rights to old recordings doesn’t create any incentives for new creativity. And copyrights as a whole, including sound recording copyrights, still last for far too long.

Still, this compromise gives us reason for hope. Music fans, non-commercial users, and the broader public have a voice—a voice that was heard—in shaping copyright law as long as legislators will listen and act.

Categories: Privacy

Hill-Climbing Our Way to Defeating DRM

EFF News - Tue, 2018-09-18 16:17

Computer science has long grappled with the problem of unknowable terrain: how do you route a packet from A to E when B, C, and D are nodes that keep coming up and going down as they get flooded by traffic from other sources? How do you shard a database when uncontrollable third parties are shoving records into it all the time? What's the best way to sort some data when spammers are always coming up with new tactics for re-sorting it in ways that suit them, but not you or your users?

One way to address the problem is the very useful notion of "hill-climbing." Hill-climbing is modeled on a metaphor of a many-legged insect, like an ant. The ant has forward-facing eyes and can't look up to scout the terrain and spot the high ground, but it can still ascend towards a peak by checking to see which foot is highest and taking a step in that direction. Once it's situated in that new place, it can repeat the process, climbing stepwise toward the highest peak that is available to it (of course, that might not be the highest peak on the terrain, so sometimes we ask our metaphorical ant to descend and try a different direction, to see if it gets somewhere higher).
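The ant’s procedure maps directly onto the textbook algorithm: evaluate the neighboring states, step to the best one, and repeat until no neighbor is higher, optionally restarting elsewhere to escape a low local peak. A minimal sketch in Python (the objective function, step size, and restart range here are illustrative assumptions, not anything from the post):

```python
import random

def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy hill-climbing: repeatedly move to the best
    neighboring point until no neighbor improves on f(x)."""
    for _ in range(max_iters):
        neighbors = [x - step, x + step]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x  # local peak: neither neighbor is higher
        x = best
    return x

def random_restart(f, trials=10, lo=-10.0, hi=10.0):
    """The 'descend and try a different direction' trick:
    restart from random points and keep the best peak found."""
    starts = (random.uniform(lo, hi) for _ in range(trials))
    return max((hill_climb(f, s) for s in starts), key=f)
```

For a simple single-peaked objective like `f = lambda x: -(x - 3) ** 2`, `hill_climb` walks stepwise to the neighborhood of `x = 3`; random restarts only matter when the terrain has multiple peaks, which is exactly the adversarial, shifting landscape the post describes.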

This metaphor is not just applicable to computer science: it's also an important way to think about big, ambitious, fraught policy fights, like the ones we fight at EFF. Our Apollo 1201 Project aims to kill all the DRM in the world inside of a decade, but we don't have an elaborate roadmap showing all the directions we'll take on the way.

There's a good reason for that. Not only is the terrain complex to the point of unknowability; it's also adversarial: other, powerful entities are rearranging the landscape as we go, trying to head us off. As the old saying goes, "The first casualty of any battle is the plan of attack."

Instead of figuring out the whole route from A to Z, we deploy heuristics: rules of thumb that help us chart a course along this complex, adversarial terrain as we traverse it.

Like the ant climbing its hill, we're feeling around for degrees of freedom where we can move, ascending towards our goal. There are four axes we check as we ascend:

1. Law: What is legal? What is illegal? What chances are there to change the law? For example, we're suing the US government to invalidate Section 1201 of the Digital Millennium Copyright Act (DMCA), the law that imposes penalties for breaking DRM, even for legal reasons. If it were legal to break DRM for a legal purpose, the market would be full of products that let you unlock more value in the products you own, and companies would eventually give up on trying to restrict legal conduct.

We're also petitioning the US Copyright Office to grant more exemptions to DMCA 1201, despite the fact that those exemptions are limited in practice (e.g., "use" exemptions that let you jailbreak a device, but not "tools" exemptions that let you explain to someone how to jailbreak their device or give them a tool to do so).

Why bother petitioning the Copyright Office if they can only make changes that barely rise above the level of cosmetic? Glad you asked.

2. Norms: What is socially acceptable? A law that is widely viewed as unreasonable is easier to change than a law that is viewed as perfectly understandable. Copyright law is complicated and boring, and overshadowed by emotive appeals to save wretched "creators" (like me—my full-time job is as a novelist, and I work part-time for EFF as an activist because sitting on the sidelines while technology was perverted to control and oppress people was unbearable).

But in the twenty-first century, a tragic category error (using copyright, a body of law intended to regulate the entertainment industry's supply chain, to regulate the Internet, which is the nervous system of the entire digital world) has led to disastrous and nonsensical results. Thanks to copyright law, computer companies and car companies and tractor companies and voting machine companies and medical implant companies and any other company whose product has a computer in it can use copyright to make it a crime to thwart their commercial plans—to sell you expensive ink, or to earn a commission on every app, or to monopolize the repair market.

From long experience, I can tell you that the vast majority of people do not and will never care about copyright or DRM. But they do care about the idea that vast corporations have bootstrapped copyright and DRM into a doctrine that amounts to "felony contempt of business model." They care when their mechanic can't fix their car any longer, or the insulin for their artificial pancreas goes up 1000 percent, or when security experts announce that they can't audit their state's voting machines.

The Copyright Office proceedings can carve out some important freedoms, but more importantly, they are a powerful normative force, an official recognition from the branch of the US government charged with crafting and regulating copyright that DRM is messed up and getting in the way of legitimate activity.

3. Code: What is technically possible? DRM is rarely technologically effective. For the most part, DRM does not survive contact with the real world, where technologists take it apart, see how it works, find its weak spots, and figure out how to switch it off. Unfortunately, laws like DMCA 1201 make developing anti-DRM code legally perilous, and people who try face both civil and criminal jeopardy. But despite the risks, we still see technical interventions like papers at security conferences on the weaknesses in DRM or tools for bypassing and jailbreaking DRM. EFF's Coders' Rights project stands up for the right of developers to create these legitimate technologies, and our intake desk can help coders find legal representation when they're threatened.

4. Markets: What's profitable? When a policy goal intersects with someone else's business model, you get an automatic power-up. People who want to sell jailbreaking tools; third-party inkjet cartridges and other consumables; independent repair services; or apps and games for locked platforms are all natural opponents of DRM, even if they're not particularly worried about DRM itself, and only care about the parts of it that get in the way of earning their own living.

There are many very successful products that were born with DRM—like iPhones—around which no competing commercial interest was ever able to develop. It's a long battle to convince app makers that competition in app stores would let them keep more of the 30 percent commission they currently pay to Apple.

But in other domains, like the independent repair sector, there are huge independent commercial markets that are thwarted by DRM. Independent repair shops create local, middle-class jobs (no one sends a phone or a car overseas for service!), and they rely on third-party replacement parts and diagnostic tools. Farmers are a particularly staunch ally in the repair fight, grossly affronted at the idea of having to pay John Deere a service charge to unlock the parts they swap into their own tractors (and even more furious at having to wait days for a John Deere service technician to put in an appearance in order to enter the unlock code).

Law, Norms, Code, and Markets: these are the four forces that former EFF Board member Lawrence Lessig first identified in his 1999 masterpiece Code and Other Laws of Cyberspace, the forces that regulate all our policy outcomes. The fight to rescue the world from DRM needs all four.

When we're hill-climbing, we're always looking for chances to invoke one of these four forces, or better yet, to combine them. Is there a business that's getting shafted by DRM who will get their customers to write to the Copyright Office? Is there a country that hasn't yet signed a trade agreement banning DRM-breaking, and if so, are they making code that might help the rest of us get around our DRM? Is there a story to tell about a ripoff in DRM (like the time HP pushed a fake security update to millions of printers in order to insert DRM that prevented third-party ink) and if so, can we complain to the FTC or a state Attorney-General to punish them? Can that be brought to a legislature considering a Right to Repair bill?

On the way, we expect more setbacks than victories, because we're going up against commercial entities who are waxing rich and powerful by using DRM as an illegitimate means to cement monopolies, silence critics, and rake in high rents.

But even defeats are useful: as painful as it is to lose a crucial battle, such a loss can galvanize popular opposition, convincing apathetic or distracted sideliners that there's a real danger that the things they value will be forever lost if they don't join in (that would be a "normative" step towards victory).

As we've said before, the fight to keep technology free, fair and open isn't a destination, it's a journey. Every day, there are new reasons that otherwise reasonable people will find to break the tech we use in increasingly vital and intimate ways—and every day, there will be new people who are awoken to the need to fight against this temptation.

These new allies may get involved because they care about Net Neutrality, or surveillance, or monopolies. But these are all part of the same information ecology: what would it gain us to have a neutral internet if all the devices we connect to it use DRM to control us to the benefit of distant corporations? How can we end surveillance if our devices are designed to treat us as their enemies, and thus able to run surveillance code that, by design, we're not supposed to be able to see or stop? How can we fight monopolies if corporations get to use DRM to decide who can compete with them—or even criticize the security defects in their products?

On this Day Against DRM, in a year of terrible tech setbacks and disasters, it could be easy to despair. But despair never got the job done: when life gives you SARS, you make sarsaparilla. Every crisis and catastrophe brings new converts to the cause. And if the terrain seems impassable, just look for a single step that will take you to higher ground. Hill-climbing may not be the most direct route, but as every programmer knows, it's still the best way to traverse unknowable terrain.

What step will you take today?

(Image: Jacob_Eckert, Creative Commons Attribution 3.0 Unported)

Related Cases: Green v. U.S. Department of Justice


EFF to Court: The First Amendment Protects Criticism of Patent Trolls

EFF News - Tue, 2018-09-18 14:38

EFF has submitted an amicus brief [PDF] to the New Hampshire Supreme Court asking it to affirm a lower court ruling that found criticism of a patent owner was not defamatory. The trial judge hearing the case ruled that “patent troll” and other rhetorical characterizations are not the type of factual statements that can be the basis of a defamation claim. Our brief explains that both the First Amendment and the common law of defamation support this ruling.

This case began when patent assertion entity Automated Transactions, LLC (“ATL”) and inventor David Barcelou filed a defamation complaint [PDF] in New Hampshire Superior Court. Barcelou claims to have come up with the idea of connecting automated teller machines to the Internet. As the complaint explains, he tried to commercialize this idea but failed. Later, ATL acquired an interest in Barcelou’s patents and began suing banks and credit unions.

ATL’s patent litigation did not go well. In one case, the Federal Circuit ruled that some of ATL’s patent claims were invalid and that the defendants did not infringe. ATL’s patents were directed to ATMs connected to the Internet and it was “undisputed” that the defendants’ products “are not connected to the Internet and cannot be accessed over the Internet.” ATL filed a petition asking the U.S. Supreme Court to overturn the Federal Circuit. The Supreme Court denied that petition.

Unsurprisingly, ATL’s licensing revenues went down after its defeat in the federal courts. Rather than accept this, ATL and Barcelou filed a defamation suit in New Hampshire state court blaming their critics for ATL’s financial decline.

In the New Hampshire litigation, ATL and Barcelou allege that statements referring to them as a “patent troll” are defamatory. They also claim that characterizations of ATL’s litigation campaign as a “shakedown,” “extortion,” or “blackmail” are defamatory. The Superior Court found these statements were the kind of rhetorical hyperbole that is not capable of defamatory meaning and dismissed the complaint. ATL and Barcelou appealed.

EFF’s amicus brief [PDF], filed together with ACLU of New Hampshire, explains that Superior Court Judge Brian Tucker got it right. The First Amendment provides wide breathing room for public debate and does not allow defamation actions based solely on the use of harsh language. The common law of defamation draws a distinction between statements of fact and pure opinion or rhetorical hyperbole. A term like “patent troll,” which lacks any settled definition, is classic rhetorical hyperbole. Similarly, using terms like “blackmail” to characterize patent litigation is non-actionable opinion.

ATL and Barcelou, like some other critics of the Superior Court’s ruling, spend much of their time arguing that “patent troll” is a pejorative term. This misunderstands the Superior Court’s decision. At one point in his opinion, Judge Tucker noted that some commentators have presented the patent assertion, or troll, business model in a positive light. But the court wasn’t saying that “patent troll” is never used pejoratively or even that the defendants didn’t use it pejoratively. The law reports are filled with cases where harsh, pejorative language is found not capable of defamatory meaning, including “creepazoid attorney,” “pitiable lunatics,” “stupid,” “asshole,” “Director of Butt Licking,” etc.

ATL and Barcelou may believe that their conduct as inventors and patent litigants should be praised rather than criticized. They are entitled to hold that view. But their critics are also allowed to express their opinions, even with harsh and fanciful language. Critics of patent owners, like all participants in public debate, may use the “imaginative expression” and “rhetorical hyperbole” which “has traditionally added much to the discourse of our Nation.”

Related Cases: Automated Transactions LLC v. American Bankers Association


EPIC FOIA Docs Show FBI and CBP Accessed "Hemisphere" Records

EPIC - Tue, 2018-09-18 10:55

The Drug Enforcement Administration has released to EPIC a new FOIA production about the AT&T "Hemisphere" program. Hemisphere is a massive call records database made available to government agents by the nation's largest telecommunications company. AT&T discloses to the government billions of detailed customer phone records, including location data, without judicial review. The new release to EPIC reveals that both the FBI and CBP obtained access to these call detail records. EPIC filed suit against the DEA in 2013 after the agency failed to respond to EPIC's FOIA request for information about the Hemisphere program. EPIC previously argued that the names of other agencies with access to Hemisphere records should be released. In June, the Supreme Court held in Carpenter v. US that government access to location data is a search subject to Fourth Amendment review. EPIC filed an amicus brief in the Carpenter case.


EPIC Sues for Release of Kavanaugh White House Records on Warrantless Surveillance, Patriot Act

EPIC - Tue, 2018-09-18 10:50

EPIC has filed a lawsuit to compel the National Archives and Records Administration to release Brett Kavanaugh's White House records about warrantless surveillance and the Patriot Act. EPIC's lawsuit follows the agency's failure to respond to EPIC's two urgent Freedom of Information Act requests. In the complaint, EPIC explains that timely release of these records is now essential to assess Kavanaugh's role in the White House surveillance programs. In Senate testimony, Kavanaugh claimed that he knew nothing about these programs, but documents indicate that he drafted President Bush's speech on the Patriot Act, communicated with John Yoo, the architect of the warrantless surveillance program, and defended suspicionless surveillance of the American public. Last week, EPIC sent a letter to the Senate Judiciary Committee urging postponement of the committee vote on Kavanaugh until the documents EPIC requested are released. EPIC highlighted concerns about Kavanaugh’s White House years in an earlier letter to the Committee.


Border Searches of Electronic Devices: Oh, The Places Your Data Will Go

CDT - Mon, 2018-09-17 18:06

Rejhane Lazoja, a Muslim American, is challenging U.S. Customs and Border Protection’s (CBP) seizure of her cell phone, as well as the possible retention and sharing of her device’s data without any articulation of reasonable suspicion, probable cause, or procurement of a warrant. CDT is supporting pending litigation challenging these searches under the First and Fourth Amendments. Lazoja’s lawsuit highlights one aspect of these searches that we have not addressed: what happens to the data once a device has been subjected to a search at the border?

Lazoja arrived in Newark from Switzerland on February 26, 2018. She was flagged for secondary screening, during which CBP requested that she open her phone. When CBP refused to explain the request, Lazoja in turn refused to unlock the device: the phone contained photos of her without her hijab (religious headscarf), which, pursuant to her religion, men outside her family may not view. She also explained that the device contained communications with her attorney. CBP seized her device and returned it on July 6, 2018, 130 days later. Lazoja then asked CBP to state whether copies of her device’s data had been made, whether the data had been shared with any third parties, and, if copies existed, to expunge them. Receiving no information in response, Lazoja, represented by CAIR-NY and CAIR-NJ, filed a Federal Rules of Criminal Procedure 41(g) Motion for Return of Property in Federal District Court in New Jersey, to compel the return of any copies of her data that may be in CBP’s possession, and to be informed of other parties that may also have a copy.

Her brief cites to CBP’s policy directive on border searches of electronic devices, which provides some guidance on what may have happened to her data. By our read of that directive, Lazoja is right to be concerned.

CBP’s Border Search of Electronic Devices Policy

CBP’s policy on border searches addresses when CBP officials may search electronic devices and the process that governs device and data retention, sharing, and destruction. We discuss each of these in turn below.


CBP policy allows officers to conduct two types of searches at the border without a warrant: basic and advanced. A basic search is a manual search and, according to CBP, requires no individualized suspicion. An advanced search involves connecting external equipment to the device in order to gain access to, review, copy, and analyze its contents, frequently while detaining the device for a number of days. This type of search requires supervisory approval, but only an articulation of reasonable suspicion or a national security concern.


A traveler is expected to present CBP with a device that is ready to be searched—in other words, unlocked. If the traveler does not, CBP is allowed, with a supervisor’s approval, to detain the device or make a copy of the data in order to effectuate its search. There is no absolute time limit on these detentions: initial approval allows a detention of 5 days, after which extensions require approval in increments of no more than 7 days. Again, in this case CBP held Lazoja’s device for over 4 months, meaning CBP may have approved the continued detention roughly 18 times.

CBP officials may seize and retain a device or copies of the data, when “they determine there is probable cause to believe that the device, or copy of the contents from the device, contains evidence of a violation of law that CBP is authorized to enforce or administer.” CBP is tasked with regulating and facilitating international trade, securing the border from terrorism, and enforcing U.S. regulations, including trade, customs, and immigration. In short, it enforces many, many laws. Without a finding of probable cause, “CBP may retain only information relating to immigration, customs, and other enforcement matters if such retention is consistent with the applicable system of records notice.” Practically speaking, this is hardly a limitation and depends greatly on how broadly CBP reads “relating.” This is certainly a lower standard than reasonable suspicion.

Information related to the inspections and the data itself may be retained in a number of locations. CBP officials note in the TECS database the details and impressions of all secondary inspections, including why they took place, and whether and why a device was searched or detained. TECS is the information sharing platform managed by CBP and is its primary screening tool. TECS hosts data collected by CBP and other agencies, and allows CBP officers to query other law enforcement data streams nonresident in TECS, like the FBI’s National Crime Information Center or the Terrorist Screening Center. If an officer was able to review the device during the inspection, notes about the inspection could include what information was on the device, like what the traveler was reading or with whom they were communicating. This information is retained for 75 years according to a 2008 SORN, or for “the life of the law enforcement matter to support that activity and other enforcement activities that may become related.”

Data copied by CBP may be retained in the Automated Targeting System to “further review, analyze, and assess the information physically resident on the electronic devices, or copies thereof.” The Automated Targeting System (ATS) is a subset of data connected to TECS and is “a decision support tool that compares traveler, cargo, and conveyance information against law enforcement, intelligence, and other enforcement data using risk-based scenarios and assessments.” In short, ATS alerts CBP to perceived high-risk passengers and cargo. Information stored in ATS is retained for 15 years and then deleted. However, data “linked to active law enforcement lookout records, CBP matches to enforcement activities, and/or investigations or cases (i.e., specific and credible threats; flights, individuals, and routes of concern; or other defined sets of circumstances) will remain accessible for the life of the law enforcement matter to support that activity and other enforcement activities that may become related.”

Data may also be retained in other systems, including if relevant, any of the immigration systems like the Alien File, Central Index System, E3 (CBP’s access portal to ICE and DHS’s biometric databases, ENFORCE/IDENT), or others. CBP operates and has access to a staggering number of systems related to its responsibilities, all with their own data retention schedules. We do not have complete information on all aspects of the search and retention of data because, according to a 2009 DHS Privacy Impact Assessment on border searches of electronic devices, “providing specific transparency to the general public about all aspects of the program could compromise law enforcement or national security sensitive information.”

CBP policy does not affirm a traveler’s right to be notified that their data has been copied or retained.

In short, CBP has broad authority to retain data from electronic devices subjected to a search, and can retain such data for extensive periods of time. As we do not know what was resident on her device or why she was stopped, it is difficult to assess where Lazoja’s data might be, if retained at all. Certainly TECS will have a record of the fact of the search, as well as CBP’s notes about the search.


As Lazoja points out in her motion, CBP can share information copied from a device in a border search broadly. According to the CBP directive on border searches of electronic devices, “[n]othing in this Directive limits the authority of CBP to share copies of information contained in electronic devices (or portions thereof), which are retained in accordance with this Directive, with federal, state, local, and foreign law enforcement agencies to the extent consistent with applicable law and policy.” In case there was any doubt about the breadth of this sharing authority, CBP assures us that “[a]s a federal law enforcement agency, CBP has broad authority to share lawfully seized and/or retained information with other federal, state, local, and foreign law enforcement agencies in furtherance of law enforcement investigations, counterterrorism, and prosecutions (consistent with applicable SORNs).”

A review of the SORNs reveals this to be the case. For example, information in TECS can be shared pursuant to 15 different routine uses such as “[t]o appropriate Federal, State, local, tribal, or foreign governmental agencies or multilateral governmental organizations responsible for investigating or prosecuting the violations of, or for enforcing or implementing, a statute, rule, regulation, order, license, or treaty where DHS determines that the information would assist in the enforcement of civil or criminal laws.” ATS is no different, and asserts similar broad sharing authority pursuant to routine use.

Furthermore, CBP may convey devices or copies of the data to third parties in order to receive technical assistance to search the device or get something translated. With reasonable suspicion of activities in violation of the laws enforced or administered by CBP, or where there is a national security concern, CBP may convey devices or data to third parties for subject matter assistance. Then, the agency providing the assistance may retain copies of the information “only if and to the extent that it has the independent legal authority to do so.” Thus, for example, if a traveler’s cellphone is shared with the NSA to obtain its assistance in defeating a security measure applied to the device, the NSA may retain and ingest into its databases any information it obtains from the device, to the extent retention furthers its mission and is consistent with its legal authorities. CBP policy states that when an electronic device or information is conveyed to a third party for assistance, the traveler will be notified, unless the “notification would impair national security, law enforcement, officer safety, or other operational interests.”

CBP has ample permission to share data retained from a border search of a device. Indeed, the authority to share information that would “assist in the enforcement of civil or criminal laws” is broad. Again, without knowing the specifics of Lazoja’s case, we cannot predict how data from her cell phone may have been shared. As noted above, the fact that the search took place, as well as CBP’s observations, will be retained on record in TECS, and that information is available to many entities.

Data Destruction

CBP policy states that if, after reviewing the information, there is not probable cause to seize the device or the information therein, copies of the data held by CBP must be destroyed. CBP has imposed on itself a deadline of deleting the data within 7 days of determining there is no probable cause; if a supervisory official grants an extension, the data must be deleted within 21 days. Of course, this deletion does not apply to the data CBP may retain that is “related” to its enforcement mission. Nor does this data deletion mandate extend to information shared with third parties that have an independent authority to retain the data.
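The 7- and 21-day windows described in the policy reduce to simple date arithmetic. The sketch below is purely illustrative: the function name is invented, and it assumes the deadlines run in calendar days from the date of the no-probable-cause determination, which the directive does not spell out.

```python
from datetime import date, timedelta

def destruction_deadline(no_probable_cause_date: date,
                         extension_granted: bool = False) -> date:
    """Latest date by which copied data must be deleted under the
    policy described above: 7 days by default, 21 days if a
    supervisory official grants an extension (assumption: calendar days)."""
    days = 21 if extension_granted else 7
    return no_probable_cause_date + timedelta(days=days)

determination = date(2018, 9, 1)
print(destruction_deadline(determination))                          # 2018-09-08
print(destruction_deadline(determination, extension_granted=True))  # 2018-09-22
```

As the paragraph above notes, neither deadline touches data CBP deems “related” to its mission or copies already shared with third parties.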


Based on the broad authority conferred in CBP’s policy on border searches of electronic devices, it is certainly possible that some of Lazoja’s data has been retained and shared. It’s difficult to determine whether this is the case without knowing the basis for the seizure of her data and device, or whether the information resident on her device revealed evidence of a violation of a statute enforced by CBP. What we do know is that these searches at the border implicate the fundamental freedoms of all travelers and raise the risk that incredibly personal and sensitive data will be retained for years and broadly shared throughout the government.

As of now, a date has not been set to hear Lazoja’s motion.

The post Border Searches of Electronic Devices: Oh, The Places Your Data Will Go appeared first on Center for Democracy & Technology.

Categories: Privacy

California Law Could be a Big Step Forward for Police Transparency

EFF News - Mon, 2018-09-17 11:21

Government can’t be accountable unless it is transparent. Voters and taxpayers can only know whether they approve of the actions of public officials and public employees if they know what they’re doing. That transparency is especially important when it comes to the actions of local police, who carry weapons and have the power of arrest.

In the age of the Internet, for most of us, access to the state, local and federal laws that we must follow is just a click away. But if a resident of a particular city wants to know the rules that the police she pays for must follow, it’s a lot more difficult. In the state of California, accessing records about basic police policies often requires the filing of a California Public Records Act (CPRA) request.

There’s a chance now to make it much easier. Both houses of the California legislature have passed S.B. 978, which requires local police departments to publish their “training, policies, practices, and operating procedures” on their websites. That’s exactly as it should be, with transparency as the default—not a special privilege that journalists or activists have to request.

In an age when police are enhancing their powers with extraordinary surveillance tools like automated license plate readers, facial recognition, drones, and social media monitoring, transparency in police procedures is especially important—because without it, it's much harder to hold law enforcement personnel accountable. 

The bill has exceptions that give us real concern. Governor Brown vetoed a similar bill last year that we also supported, which led the bill’s author to exempt several important state agencies that would have been covered under the earlier bill, including the Department of Justice and the Department of Corrections and Rehabilitation. Also, S.B. 978 doesn’t provide enforcement mechanisms or consequences for police agencies that fail to post the required information.

Despite those limitations, S.B. 978 will be a big step forward in creating a more transparent government, at a time when trust between police and vulnerable communities needs to be rebuilt. Join us in urging Governor Jerry Brown to sign this important bill.

Take Action

Tell the Governor to Sign S.B. 978


Microsoft Clears the Air About Fighting CLOUD Act Abuses

EFF News - Fri, 2018-09-14 20:24

Five of the largest U.S. technology companies pledged support this year for a dangerous law that makes our emails, chat logs, online videos and photos vulnerable to warrantless collection by foreign governments.

Now, one of those companies has voiced a meaningful pivot, instead pledging support for its users and their privacy. EFF appreciates this commitment, and urges other companies to do the same.

Microsoft’s long-titled “Six Principles for International Agreements Governing Law Enforcement Access to Data” serves as the clearest set of instructions by a company to oppose the many privacy invasions possible under the CLOUD Act. (Dropbox published similar opposition earlier this year, advocating for many safeguards.)

In brief, Microsoft’s principles are:

  • The universal right to notice
  • Prior independent judicial authorization and required minimum showing
  • Specific and complete legal process and clear grounds to challenge
  • Mechanisms to resolve and raise conflicts with third-country laws
  • Modernizing rules for seeking enterprise data
  • Transparency

To understand how these principles could serve as a bulwark for privacy, we have to first revisit how the CLOUD Act does the opposite.

The CLOUD Act, Revisited

Signed into law in March after bypassing responsible legislative procedure, and without ever receiving a stand-alone floor vote, the CLOUD Act created new mechanisms for U.S. and foreign police to seize data across the globe.

Under the CLOUD Act, the president can enter into “executive agreements” that allow police in foreign countries to request data directly from U.S. companies, so long as that data does not belong to a U.S. person or person living in the United States. Now, you might wonder: Why should a U.S. person worry about their privacy when foreign governments can’t specifically request their data? Because even though foreign governments can’t request U.S. person data, that doesn’t mean they won’t get it.

As we wrote before, here is an example of how a CLOUD Act data request could work:

“London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect those messages. The London police would receive no prior judicial review for this request. The London police could avoid notifying U.S. law enforcement about this request. The London police would not need a probable cause warrant for this collection.

Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages could be used to criminally charge the U.S. person with potentially unrelated crimes, too.”

Many of the CLOUD Act’s privacy failures—failure to require notice, failure to require prior judicial authorization, and the failure to provide a clear path for companies and individuals to challenge data requests—are addressed by Microsoft’s newly released principles.

The Microsoft Principles

Microsoft’s principles encompass both itself and other U.S. technology companies that handle foreign data, including cloud technology providers. That’s because the principles sometimes demand changes to the actual executive agreements—changes that will affect how any company that receives CLOUD Act data requests can publicize, respond to, or challenge them. (No agreements have been finalized, but EFF anticipates the first one between the United States and the United Kingdom to be released later this year.)

Microsoft has committed to the “universal right to notice,” saying that “absent narrow circumstances, users have a right to know when the government accesses their data, and cloud providers must have a right to tell them.”

EFF agrees. For years, we have graded companies explicitly on their policies to inform users about U.S. government data requests prior to fulfilling such requests, barring narrow emergency exceptions. It is great to see Microsoft’s desire to continue this practice for any CLOUD Act data request it receives. The company has also demanded that it and other companies be allowed to fight nondisclosure orders that are tied to a data request. This is similar to another practice that EFF supports.

Providing notice is vital to empowering individuals to legally defend themselves from overbroad government requests. The more companies that do this, the better.

Further, Microsoft committed itself to “transparency,” saying that “the public has a right to know how and when governments seek access to digital evidence, and the protections that apply to their data.”

Again, EFF agrees. This principle, while similar to universal notice, serves a wider public. Microsoft’s desire is to not only inform users whose data is requested about those data requests, but to also spread broader information to everyone. For instance, Microsoft wants all cloud providers to “have the right to publish regular and appropriate transparency reports” that unveil the number of data requests a company receives, what governments are making requests, and how many users are affected by requests. This type of information is crucial to understanding, for instance, whether certain governments make a disproportionate number of requests and, if so, which countries’ persons, if any, they are targeting. Once again, EFF has graded companies on this issue.

Microsoft’s interpretation on transparency also includes a demand that any executive agreement negotiated under the CLOUD Act must be published “prior to its adoption to allow for meaningful public input.” This is the exact type of responsible procedure that Congressional leadership robbed from the American public when sneaking the CLOUD Act into the back of a 2,232-page government spending bill just hours before a vote. Removing the public from a conversation about their right to privacy was unacceptable then, and it remains unacceptable now.

Microsoft additionally demanded that any CLOUD Act data requests include “prior independent judicial authorization and required minimum showing.” This is a big deal. Microsoft is demanding a “universal requirement” that all data requests for users’ content and “other sensitive digital evidence” be first approved by a judicial authority before being carried out. This safeguard is nowhere in the CLOUD Act itself.

One strong example of this approval process, which Microsoft boldly cites, is the U.S. requirement for a probable cause warrant. This standard requires a judicial authority, often a magistrate judge, to approve a government search application prior to the search taking place. It is one of the strongest privacy standards in the world and a necessary step in preventing government abuse. It serves as a bedrock to the right to privacy, and we are happy to see Microsoft mention it.

Elsewhere in the principles, Microsoft said that all CLOUD Act requests must include a “specific and complete legal process and clear grounds to challenge.”

Currently, the CLOUD Act offers individuals no avenue to fight a request that sweeps up their data, even if that request was wrongfully issued, overbroad, or illegal. Instead, the only party that can legally challenge a data request is the company that receives it. This structure forces individuals to rely on technology companies to serve as their privacy stewards, battling for their rights in court.

Microsoft’s demand is for a clear process to do just that, both for itself and other companies. Microsoft wants all executive agreement data requests to show proof that prior independent judicial review was obtained, a serious crime is under investigation as defined by the executive agreement, and that the data request is not for an investigation that infringes human rights.

Finally, a small absence: EFF would like to see Microsoft commit to “minimization procedure” safeguards for how requested data is stored, used, shared, and eventually deleted by governments.

You can read the full set of principles here.

A Broader Commitment

Microsoft’s principles are appreciated, but it must be noted that some of their demands require the work of people outside the company’s walls. For example, lawmakers will decide how much to include the public when negotiating executive agreements under the CLOUD Act. And lawmakers will decide what actually goes in those agreements, including restrictions on the universal right to notice, language about prior judicial review, and instructions for legal challenges.

That said, Microsoft is powerful enough to influence CLOUD Act negotiations. And so are the four companies that, as far as we know, still unconditionally support the CLOUD Act—Apple, Google, Facebook, and Oath (formerly Yahoo). EFF urges these four companies to make the same commitment as Microsoft and to publish principles that put privacy first when responding to CLOUD Act data requests.

EFF also invites all companies affected by the CLOUD Act to publish their own sets of principles similar to Microsoft’s.

As for Microsoft, Apple, Google, Facebook, and Oath, we can at least say that some have scored well on EFF’s Who Has Your Back reports, and some have shown a healthy appetite for defending privacy in court, challenging government gag orders, search warrants, and surveillance requests. And, of course, if these companies falter, EFF and its supporters will hold them accountable.

The CLOUD Act has yet to produce its first executive agreement. Before that day comes, we urge technology companies: support privacy and fight this dangerous law, both for your users and for everyone.


The Game is Rigged: Congress Invites No Consumer Privacy Advocates to its Consumer Privacy Hearing

EFF News - Fri, 2018-09-14 18:01

The Senate Commerce Committee is getting ready to host a much-anticipated hearing on consumer privacy—and consumer privacy groups don’t get a seat at the table. Instead, the Committee is seeking only the testimony of big tech and Internet access corporations: Amazon, Apple, AT&T, Charter Communications, Google, and Twitter. Some of these companies have spent heavily to oppose consumer privacy legislation and have never supported consumer privacy laws. They know policymakers are considering new privacy protections, and are likely to view this hearing as a chance to encourage Congress to adopt the weakest privacy protections possible—and eviscerate stronger state protections at the same time.


It is no coincidence that, in the past week, two leading industry groups (the Chamber of Commerce and the Internet Association) have called for federal preemption of state data privacy laws in exchange for weaker federal protections. For example, laws in California and Illinois require companies to have user consent to certain uses of their personal information (Nevada and Minnesota have these requirements for Internet access providers), while the industry proposals would only require transparency. That means that companies would be allowed to collect information without your permission as long as they tell you they’re doing it. The upcoming hearing at the Senate Commerce Committee may be the launch pad for this strategy of undoing stronger state laws.

Since we can’t be there to say this ourselves, we’ll say it here: EFF will oppose any federal legislation that weakens today’s hard-fought privacy protections or destroys the states’ ability to protect their citizens’ personal information. EFF has had a long and continuous battle with some of the testifying companies, such as Google and AT&T, regarding your right to data privacy, and we’re not going to give up now.

To be clear, we would look closely at sensible federal legislation that offers meaningful protections for data privacy. Uniform laws offer predictability, making life easier for smaller companies, nonprofits, and others that may struggle to meet the rules of different states. But a uniform law is only a good alternative if it’s actually a good law—not a weak placeholder designed only to block something stronger.

The State Consumer Privacy Laws That Big Tech and ISPs Want Congress to Nullify

California’s recently passed consumer privacy legislation has some valuable protections as well as room for improvement, but even this modest set of privacy protections is apparently too much for some big tech companies and the ISPs. If Congress passes the industry’s wish list, it won’t just kill the California privacy law. It will also preempt Illinois’ biometric privacy law, which landed Facebook in a class action lawsuit for allegedly collecting facial data without permission. And there’s more: Such a federal law would also block strong state data breach notification laws that forced companies like Equifax to tell us when they compromised the data of 145.5 million Americans. The upcoming one-sided congressional hearing will not yield valuable insights for the Senate Commerce Committee; rather, it will give the industry ample time to repeat talking points that reinforce their lobbyists’ arguments, in hopes of persuading Congress to once again vote against our privacy rights.

The state legislators in California and Illinois who passed these laws did what they were supposed to do: protect the privacy of their residents. The absence of these state laws would mean that big companies face fewer consequences for compromising our personal information.

This Congress Has a Terrible Record on Protecting Privacy

There’s a reason states are taking action: They are filling a void. What did this Congress do when Facebook’s Cambridge Analytica scandal broke, besides hold a hearing? What did it do when Equifax failed to protect the personal data of 145 million Americans, causing lasting damage to their financial security, besides hold a hearing? Absolutely nothing. Despite overwhelming public support for privacy—a resounding 89 percent of Americans support privacy being a legal right and 91 percent believe we have lost control over our privacy—this legislature has taken little real action.

In fact, when this Congress has taken action on privacy hazards, whether from the government or from corporations, it has pro-actively stripped us of our privacy protections. When companies like AT&T, Verizon, and Comcast wanted to get away from strong federal broadband privacy regulations, Congress took the dramatic step of repealing those privacy protections. When the NSA requested an expansion of its warrantless surveillance program, Congress readily agreed.

Given this track record, Internet users should wonder whether the upcoming Senate Commerce hearing is just a prelude to yet another rollback of privacy protections. If so, policymakers can expect to hear the voices they excluded loud and clear in opposition.


New Federal Law Makes Credit Freezes Free for All Consumers

EPIC - Fri, 2018-09-14 18:00

Starting next week, consumers will be able to "freeze" their credit reports at no cost. A credit freeze restricts public access to a consumer's credit report, making it much more difficult for identity thieves to open fraudulent accounts. Previously, state laws allowed credit bureaus to charge consumers $2 to $10 to place or lift credit freezes. Amendments to the Fair Credit Reporting Act also extend the time period for a fraud alert in a consumer's file and create new safeguards for the protection of credit records of minors. Following the Equifax data breach in 2017, EPIC President Marc Rotenberg testified before the Senate Banking Committee and recommended free credit freezes and other consumer safeguards to mitigate the risk of identity theft.


More Bay Area Jurisdictions Adopt Civilian Control of Police Spy Tech

EFF News - Fri, 2018-09-14 16:16

This week, two California jurisdictions joined the growing movement to subject government surveillance technology to democratic transparency and civilian control. Each culminated a local process spearheaded by concerned residents who campaigned for years.

First, on Monday, the City of Palo Alto voted 8-1 to adopt an ordinance to “Establish Criteria and Procedures for Protecting Personal Privacy When Considering the Acquisition and Use of Surveillance Technologies, and Provide for Ongoing Monitoring and Reporting.” Like a handful of similar ordinances adopted across the Bay Area over the past two years, it includes several requirements.

The new ordinance requires any proposed acquisition of surveillance technology to go through a public process. First, law enforcement must announce the proposal publicly, provide a formal analysis supporting the rationale, and also document potential impacts on privacy. Then, there is an opportunity for public comment to inform a transparent, public vote by local elected officials. Only with their approval may the proposal proceed.

We are disappointed that the Palo Alto measure lacks a provision through which the public can enforce its protections. Instead, it empowers only Council members to hold law enforcement accountable if they violate the ordinance’s process requirements. This weakness aside, the adoption of the measure is an important step forward in the expansion of civilian oversight across the Bay Area, California, and beyond.

Three days later, the Board of Bay Area Rapid Transit (BART) voted unanimously to adopt a similar measure. This comes on the heels of a controversial proposed BART face surveillance program that lacked any public process. It also follows the activation of automated license plate readers (ALPRs) at a BART station without the Board’s prior approval, and the transfer of the resulting ALPR data to a regional fusion center, where it was accessible to U.S. Immigration and Customs Enforcement (ICE). Thus, the new oversight ordinance reflects a dramatic turn for BART.

Like the Palo Alto ordinance, the one adopted by BART is flawed in some respects. It includes a potentially dangerous exception for law enforcement to conduct a “trial” period use of unapproved spy tech for up to 60 days at a single station. We hope the limited duration for a trial suggests that it will not become a back door to permanence. The BART Board will need to actively ensure that potential trials remain truly temporary.

In June 2016, the first local surveillance oversight measure in the nation was adopted in Santa Clara County, the heart of Silicon Valley. These laws also have been adopted in Berkeley, Davis, and Oakland. By subjecting any proposed surveillance technology to a public process, these laws not only ensure community control over whether police acquire these tools. They also force into the open the increasingly common domestic use of powerful spy tech designed for use in foreign battlefields, which has proceeded largely in secret, despite being the subject of explicit warnings by the last U.S. President to command a wartime army.

Each of these measures was spearheaded by local community organizations, including Oakland Privacy, a member of the Electronic Frontier Alliance. Oakland Privacy was formed during the Occupy movement in response to a proposed Domain Awareness Center, and continues to champion civilian oversight across Oakland and beyond. It was joined in Palo Alto by the Peninsula Peace and Justice Center, another group in the Alliance.


EFF Helps Launch Anti-SLAPP Task Force ‘Protect the Protest’

EFF News - Fri, 2018-09-14 13:29

Aboard the Arctic Sunrise, a working icebreaker that has sailed to the Arctic Circle, the Congo, and the Amazon under Greenpeace’s flag, EFF joined several civil liberties and environmental rights groups to send a message: no longer will we be bullied by malicious lawsuits that threaten our freedom of speech.

“We have the Constitution, we have our rights, and now, we have each other,” said Greenpeace executive director Annie Leonard.

On September 5, EFF helped launch Protect the Protest, a coalition of nearly 20 organizations committed to fighting back against Strategic Lawsuits Against Public Participation, also known as SLAPPs. The coalition includes EFF, ACLU, Greenpeace, Freedom of the Press Foundation, Amnesty International, and Human Rights Watch.

(Left to right) Mother Jones CEO Monika Bauerlein, Greenpeace executive director Annie Leonard, Rainforest Action Network director of communications Christopher Herrera, Wikimedia legal counsel Jacob Rogers, and EFF Civil Liberties Director David Greene discuss their civil liberties work aboard the Greenpeace ship The Arctic Sunrise.

SLAPPs are malicious lawsuits often filed by large corporations and wealthy individuals to silence journalists, activists, nonprofit organizations, and those who speak truth to power. The goal is not to win a SLAPP based on legal merits. Instead, it is to abuse the court system in a way that forces a victim to spend time and money to fight the lawsuit itself—draining their resources and chilling their right to free speech.

Countless Americans are hit with these lawsuits every year.

From 2014 to 2016, the online technology blog Techdirt published multiple detailed articles disputing claims from Shiva Ayyadurai that he invented email. In 2016, Techdirt published an article with the headline “Here’s the Truth: Shiva Ayyadurai Didn’t Invent Email.” Months later, Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit. The lawsuit and anticipated legal fees threatened Masnick’s entire business.

“It affects just about everything we do,” Masnick said last week.

Last year, former Weed, CA mayor Bob Hall was attacked with a SLAPP for standing up for his city’s water rights. At the launch event, Hall empathized with every SLAPP victim who feels bullied into backing down.

“How many times has what’s good and right been destroyed because you don’t have the financial wherewithal to fight?” Hall said.

Every SLAPP recipient speaking at the Protect the Protest launch realized that they were a lucky minority: while many SLAPP victims are eventually silenced by crushing legal fees and intimidation, the men and women on stage found lawyers to fight for them, experts to help their cases, and support within their communities.

For the individuals and organizations that feel alone, Protect the Protest is here to help.

No longer will SLAPPs be fought in the dark. No longer will their recipients feel isolated. No longer will we defend the First Amendment without one another. If one of our organizations faces a SLAPP, Protect the Protest is committed to amplifying that wrongful attack. If our group is truly effective, said Greenpeace’s Leonard, perhaps SLAPP will no longer mean Strategic Lawsuits Against Public Participation. Perhaps it will mean Strategic Lawsuits Accelerating Public Participation. With enough resistance from us, hopefully First Amendment opponents will no longer benefit from filing SLAPPs at all and we can put an end to this entire practice that hurts organizations and individuals alike.

EFF feels right at home in Protect the Protest. We’ve represented individuals facing SLAPPs, we’ve connected others with legal help, and we’ve repeatedly advocated for a strong federal anti-SLAPP law.

The Internet should allow every person—no matter their income, assets, or connections in high places—the opportunity to participate in public debates. That is only possible when everyone can speak freely without the fear of legal bullying. Our constitutionally protected right to free speech carries through both online and off.

As EFF civil liberties director David Greene explained at the event, this is a crucial moment for our organizations and communities to stand together.

“We realized that we, the organizations that we are, have an obligation to make sure that there is a structure in place to support those who don’t always have a group around them all the time,” Greene said.

Together with our allies in the Protect the Protest coalition, EFF is committed to providing that structure and support.

Related Cases: Carreon v. Inman


Sony Finally Admits It Doesn’t Own Bach and It Only Took Public Pressure

EFF News - Thu, 2018-09-13 16:50

Here’s the thing about different people playing the same piece of music: sometimes, they’re going to sound similar. And when the music was written by a composer who died 268 years ago, placing it in the public domain, a lot of people might record it, and some of them might put those recordings online. In this situation, a combination of copyright bots and corporate intransigence led to a Kafkaesque attack on music.

Musician James Rhodes put a video of himself playing Bach on Facebook. Sony Music Entertainment claimed that 47 seconds of that performance belonged to them. Facebook muted the video as a result.

So far, this is stupid but not unusually stupid in the world of takedowns. It’s what happened after Rhodes got Sony’s notice that earned it a place in the Hall of Shame.

One argument in favor of this process is that there are supposed to be checks and balances. Takedown notices are supposed to only be sent by someone who owns the copyright in the material and actually believes that copyright’s been infringed. And if a takedown notice is wrong, a counter-notice can be sent by someone explaining that they own the work or that it’s not infringement.

Counter-notices have a lot of problems, not the least of which is that the requirements are onerous for small-time creators, requiring a fair bit of personal information. There’s always the fear, even for someone who knows they own the work, that the other side will sue anyway—a fight they cannot afford.

Rhodes did dispute the claim, explaining that “this is my own performance of Bach. Who died 300 years ago. I own all the rights.” Sony rejected this reasoning.

While we don’t know for sure what Sony’s process is, we can guess that a copyright bot, or a human acting just as mechanically, was at the center of this mess. A human doing actual analysis would have looked at a video of a man playing a piece of music older than American copyright law and determined that it was not something they owned. It almost feels like an automatic response also rejected Rhodes’ appeal, because we certainly hope a thoughtful person would have received his notice and accepted it.
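To make the failure mode concrete, here is a purely illustrative toy sketch—not Sony’s or Facebook’s actual system; the fingerprints, names, and threshold are all invented. A matching bot compares an upload’s audio fingerprint against a rights-holder catalog and flags any sufficient overlap. It has no concept of “public domain” or “different performer,” which is exactly why a fresh recording of Bach can trigger a claim.

```python
def flag_matches(upload_fp: set, catalog: dict, threshold: float = 0.8) -> list:
    """Return catalog titles whose fingerprint overlap with the upload
    meets the threshold. Note the logic never asks who performed the
    piece or whether the underlying work is in the public domain."""
    claims = []
    for title, ref_fp in catalog.items():
        overlap = len(upload_fp & ref_fp) / len(ref_fp)
        if overlap >= threshold:
            claims.append(title)
    return claims

# Hypothetical data: the label's recording of a Bach piece, and a new
# performance of the same piece by someone else.
catalog = {"Bach Partita (label recording)": {1, 2, 3, 4, 5}}
new_performance = {1, 2, 3, 4, 9}  # same notes, different performer

print(flag_matches(new_performance, catalog))
# ['Bach Partita (label recording)'] -- claimed, despite being public domain
```

Real systems use far more sophisticated acoustic fingerprints, but the structural blind spot is the same: similarity to a catalog entry is treated as ownership.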

Rhodes took his story to Twitter, where it picked up some steam, and emailed the heads of Sony Classical and Sony’s public relations, eventually getting his audio restored. He tweeted “What about the thousands of other musicians without that reach…?” He raises a good point.

None of the supposed checks worked. Public pressure and the persistence of Rhodes was the only reason this complaint went away, despite how the rules are supposed to protect fair use and the public domain.

How many more ways do we need to say that copyright bots and filters don’t work? That mandating them, as the European Union is poised to do, is dangerous and shortsighted? We hear about these misfires roughly the same way they get resolved: because they generate enough noise. How many more lead to a creator’s work being taken down with no recourse?


EU Copyright Reform: Parliamentary Vote Seriously Undermines Core Online Freedoms

CDT - Thu, 2018-09-13 16:32

On 12 September, the European Parliament unfortunately missed the opportunity to redraft the ill-advised copyright provisions adopted in the Legal Affairs (JURI) committee last June. It is disheartening to see parliamentarians blatantly ignore the advice received from leading European academics; the UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression; European research organisations; European and international human rights and digital rights groups; and dozens of internet pioneers and innovators from around the world.

The outcome of the plenary vote largely mirrors the European Commission’s proposal. We cannot help but stress the potentially disastrous consequences of certain provisions for the ability of European citizens to communicate freely and share and access information online:

Article 13: The proposal forces internet companies into using content identification technology to prevent users from uploading unlicensed copyrighted content. This creates an obligation to filter each and every piece of content uploaded by users. The provision is thus in violation of fundamental rights and freedoms, as well as EU law. It is also bound to create new legal uncertainty as well as risks and costs for a broad range of intermediaries. Ultimately, the tech giants will be the ones with the resources to comply with such requirements, entrenching their position in the market.

Article 11: The proposal for an ancillary right for publishers, enabling them to charge licensing fees for links to their content, will risk impacting much more than just news snippets. The Parliament missed the opportunity to enable press publishers to act against infringing uses of their publications by providing them with a presumption of representation in court. This would target an enforcement problem with an enforcement solution.

Article 3: The text and data mining (TDM) exception remains very limited in scope, and if not expanded to other entities and purposes, the exception could potentially restrict the advancement of EU competitiveness and research. A limited TDM exception conflicts with the Commission’s call for a robust AI strategy.

While Articles 13 and 11 should ideally be rejected outright, we consider that the amendments tabled by the cross-party coalition of parliamentarians ahead of the vote addressed the most serious concerns for free expression and access to information. We commend this coalition and other parliamentarians on their efforts to reach balanced and fair solutions.

Despite the setback this vote represents, the fight is not over. Now the European Commission, Parliament and Member States will begin trilogue negotiations. The final vote in Parliament is expected in January 2019. We will continue to follow the institutional discussions closely and advocate for a progressive, innovation-friendly, and flexible copyright regime in the EU that safeguards internet users’ rights and freedoms.

The post EU Copyright Reform: Parliamentary Vote Seriously Undermines Core Online Freedoms appeared first on Center for Democracy & Technology.

Categories: Privacy

Offline: Activists and Technologists Still Face Grave Threats for Expression

EFF News - Thu, 2018-09-13 12:54

A decade ago, before social media was a widespread phenomenon and blogging was still a nascent activity, it was nearly unthinkable outside of a handful of countries—namely China, Tunisia, Syria, and Iran—to detain citizens for their online activity. Ten years later, the practice has become all too common, and remains on the rise in dozens of countries. In 2017, the Committee to Protect Journalists found that more than seventy percent of imprisoned journalists were arrested for online activity, while Reporters Without Borders’ 2018 press freedom barometer cited 143 imprisoned citizen journalists globally, and ten citizen journalists killed. While Tunisia has inched toward democracy, releasing large numbers of political prisoners following the 2011 revolution, China, Syria, and Iran remain major offenders, and are now joined by several countries, including the Philippines, Saudi Arabia, and Egypt.

When we first launched Offline in 2015, we featured five cases of imprisoned or threatened bloggers and technologists, and later added several more. We hoped to raise awareness of their plight, and advocate for their freedom, but we knew it would be an uphill struggle. In two cases, our advocacy helped to secure their release: Ethiopian journalist Eskinder Nega was released from prison earlier this year, and the Zone 9 Bloggers, also from Ethiopia, were acquitted in 2015 following a sustained campaign for their freedom.

Award-winning Ethiopian journalist Eskinder Nega on the power of the Internet and journalism. 

Today, the situation in several countries is dire. In Egypt, where a military coup brought the country back toward dictatorship, dozens of individuals have been imprisoned for expressing themselves. Activist Amal Fathy was detained earlier this year after a video she posted to Facebook detailing her experiences with sexual harassment in Cairo went viral, and awaits trial. And Wael Abbas, an award-winning journalist whose experiences with censorship we’ve previously documented, has been detained without trial since May 2018. We also continue to advocate for the release of Alaa Abd El Fattah, the Egyptian activist whose five-year sentence was upheld by an appeals court last year.

Three new Offline cases demonstrate the lengths to which states will go to silence their critics. Eman Al-Nafjan, a professor, blogger, and activist from Saudi Arabia, was arrested in May for her advocacy against the country’s ban on women driving, which was repealed just one month later. Ahmed Mansoor is currently serving a ten-year sentence for “cybercrimes” in his home country of the United Arab Emirates after being targeted several times in the past for his writing and human rights advocacy. And Dareen Tatour, a Palestinian citizen of Israel, recently began a five-month prison sentence after several years of house arrest and a lengthy trial for content she posted on social media that had been misinterpreted by police.

Advocacy and campaigns on behalf of imprisoned technologists, activists, and bloggers can make a difference. In the coming months, we will share more details and actions that the online community can take to support these individuals, defend their names, and keep them safe.

To learn more about these and other cases, visit Offline.

Categories: Privacy

What We Mean When We Say "Data Portability"

EFF News - Thu, 2018-09-13 12:42

“Data portability” is a feature that lets a user take their data from a service and transfer or “port” it elsewhere. This often comes up in discussions about leaving a particular social media platform and taking your data with you to a rival service. But bringing data to a competing service is just one use for data portability; other, just-as-important goals include analyzing your data to better understand your relationship with a service, building something new out of your data, self-publishing what you learn, and generally achieving greater transparency.

Regardless of whether you are “porting” your data to a different service or to a personal spreadsheet, data that is “portable” should be easy to download, organized, tagged, and machine-parsable.
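To make "machine-parsable" concrete, here is a minimal sketch of what a well-structured export might look like. The format and field names are hypothetical, not any real service's export schema:

```python
import json

# Hypothetical machine-parsable data export: each record is tagged with its
# type and timestamp, so any program can parse it without guessing at the
# layout. (Illustrative format only, not any particular service's.)
export = {
    "account": {"username": "alice", "created": "2015-03-01"},
    "posts": [
        {"type": "post", "created": "2018-09-01T12:00:00Z", "text": "Hello!"},
        {"type": "photo", "created": "2018-09-02T08:30:00Z", "caption": "Sunset"},
    ],
}

# Serialize to a well-structured file that a rival service, a spreadsheet
# importer, or a personal analysis script can all consume.
blob = json.dumps(export, indent=2)

# Any consumer can now parse and filter the data programmatically.
parsed = json.loads(blob)
photos = [record for record in parsed["posts"] if record["type"] == "photo"]
print(len(photos))  # count of photo records in the export
```

The point is not the specific format but the property: organized, tagged records that a program can consume, rather than a screenshot or an unstructured dump.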

EFF supports users’ legal right to obtain a copy of the data they have provided to an online service provider. Once you move beyond that, however, the situation gets more complicated. Data portability interacts, and sometimes even conflicts, with other digital rights priorities, including privacy and security, transparency, interoperability, and competition. Here are some of the considerations EFF keeps in mind when looking at the dynamics of data portability.

Privacy and Security

Any conversation about data portability in practice should keep privacy and security considerations front and center.

First off, security is a critical concern. Ported data can contain extremely sensitive information about you, and companies need to be clear about the potential risks before users move their data to another service. Users shouldn’t be encouraged to share information with untrustworthy third parties. And data must always be protected with strong security in transit and at its new location.

How do we unravel the data you provide about yourself to a service from the data your friends provide about you?

Second, it’s not always clear what data a user should have the right to port. There are a lot of questions to grapple with here: When does "data portability" presume inclusion of one's social graph, including friends' contact information? What are all the ways that can go wrong for those friends’ privacy and security? How do we unravel the data you provide about yourself, the data your friends provide about you, and all the various posts, photos, and comments you may interact with? And then, how can we ensure data portability respects all of those users’ right to have control over their information?

While there are no easy answers, the concept of consent is a starting point. For example, a service could ask friends for their specific, informed consent to share contact information when you initiate a download of all your data. Companies should also explore technical solutions that might allow users to export lists of friends in an obfuscated, privacy-protective form.
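One common technical approach to an obfuscated export, sketched here in simplified form (the function and scheme are illustrative, not any particular service's), is to export one-way hashes of contacts rather than raw addresses:

```python
import hashlib

def obfuscate_contact(email: str) -> str:
    """Export a contact as a one-way hash instead of a raw address.

    A receiving service can match hashes of addresses it already knows,
    without the export revealing addresses it doesn't.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Your exported friends list contains only hashes...
exported = {obfuscate_contact(e) for e in ["Friend@Example.com", "pal@example.org"]}

# ...so the new service learns only about contacts it can already identify.
known_to_new_service = {obfuscate_contact(e) for e in ["friend@example.com"]}
mutual = exported & known_to_new_service
print(len(mutual))  # one mutual contact matched
```

A deployed scheme would need more than plain hashing, since email addresses are guessable by brute force; salting, rate limiting, or private set intersection protocols are the usual hardening steps.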


Transparency

Portability works hand-in-hand with transparency. If some of your data is easy to download and use (portable) but the rest is secret (not transparent), then you are left with an incomplete picture of your relationship with a service. Conversely, if you are able to find out all the information a company has about you (transparent) but have no way to take it and interact with it (not portable), you are denied opportunities to further understand and analyze it.

Companies first should be transparent about the profile data that they collect or generate about you for marketing or advertising purposes, including data from third parties and inferences the company itself makes about you. Comprehensive portability should include this information, too; these data should be just as easy for you to access and use as the information you share voluntarily.

Portability works hand-in-hand with transparency to return power to users.

Both portability and transparency return power to users. For example, a comprehensive download of the data Facebook stores about a user’s browsing habits and advertising preferences might help her reverse-engineer Facebook’s processes for making inferences about users for targeted advertising. Or, in another example, the ability to take complete metadata about one’s music preferences and listening patterns from Spotify to another streaming service might make for a better user experience; Spotify might have figured out over time that you can’t stand a certain genre of music, and your next streaming service can immediately accommodate that too.


Interoperability

Data portability can also work alongside “interoperability.” Interoperability refers to the extent to which one platform’s infrastructure can work with others. In software parlance, interoperability is usually achieved through Application Programming Interfaces (APIs)—interfaces that allow other developers to interact with an existing software service.

This can allow “follow-on innovators” to not only interact with and analyze but also build on existing platforms in ways that benefit users. For example, PadMapper started by organizing data about rental housing pulled from Craigslist posts and presenting it in a useful way; Trillian allowed users to use multiple IM services through the same client and added features like encryption on top of AIM, Skype, and email. On a larger scale, digital interoperability enables decentralized, federated services like email, modern telephony networks, and the World Wide Web.
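The follow-on innovation pattern can be sketched briefly. The API shape and field names below are hypothetical; a real client would fetch this JSON over HTTP from a platform's documented API:

```python
import json

# Simulated response from a platform's listings API (hypothetical schema).
api_response = json.dumps([
    {"title": "Sunny 1BR", "rent": 1400, "lat": 37.77, "lon": -122.42},
    {"title": "Quiet studio", "rent": 1100, "lat": 37.76, "lon": -122.41},
])

def listings_under(raw_json: str, max_rent: int) -> list:
    """Re-present the platform's data in a new, useful way (cf. PadMapper):
    here, filtering rental listings by price, e.g. for plotting on a map."""
    listings = json.loads(raw_json)
    return [listing for listing in listings if listing["rent"] <= max_rent]

affordable = listings_under(api_response, 1200)
print(affordable[0]["title"])  # prints "Quiet studio"
```

The follow-on innovator adds value (filtering, mapping, aggregation) without the platform having to build those features itself; that is the benefit interoperability delivers to users.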


Competition

Depending on the context and platform, data portability is vital but not sufficient for encouraging competition. In many markets, it’s hard for competition to exist without portability, so we must get this part right.

Data portability can support users’ right to “vote with their feet” by leaving a platform or service that isn’t working for them.

But on its own, data portability cannot magically improve competition; the ability to take your data to another service is not helpful if there are no viable competitors. Similarly, data portability cannot fend off increasing centralization as big players buy up or squash smaller competitors. Initiatives like the Data Transfer Project among Facebook, Microsoft, Twitter, and Google could ultimately be important, but won’t meaningfully help competition unless they allow users to move their data beyond a small cabal of incumbent services. Right now they don’t.

Combined with other substantive changes, data portability can support users’ right to “vote with their feet” by leaving a platform or service that isn’t working for them and taking their data and connections to one that does. Making these options real for people can encourage companies to work to keep their users, rather than hold them hostage.

Categories: Privacy

EPIC v. IRS: D.C. Circuit Hears Arguments in FOIA Case for Trump's Tax Returns

EPIC - Thu, 2018-09-13 10:20

The D.C. Circuit heard oral arguments today in EPIC v. IRS, EPIC's Freedom of Information Act case to obtain public release of President Trump's tax returns. EPIC argued that the IRS has the authority, under a provision known as "(k)(3)," to disclose the President's returns to correct numerous misstatements of fact concerning his financial ties to Russia. For example, President Trump falsely tweeted that "Russia has never tried to use leverage over me. I HAVE NOTHING TO DO WITH RUSSIA - NO DEALS, NO LOANS, NO NOTHING." EPIC Counsel John Davisson told the court that "If ever there were a situation that justified the use of (k)(3), this is it." Judge Patricia Millett questioned the IRS's claim that it can only process EPIC's FOIA with the President's consent. "It would be ludicrous to require consent of the taxpayer under (k)(3)," Millett said. A broad majority of the American public favor the release of the President's tax returns. EPIC v. IRS is one of several FOIA cases EPIC has pursued concerning Russian interference in the 2016 Presidential election, including EPIC v. FBI (response to Russian cyberattack) and EPIC v. DHS (election cybersecurity). In a related case, EPIC v. IRS II, EPIC is seeking the release of tax settlement information concerning Donald Trump's businesses. These "offers in compromise" are "an agreement between a taxpayer and the Internal Revenue Service that settles a taxpayer's tax liabilities for less than the full amount owed."

Categories: Privacy

European Court of Human Rights Rules UK Surveillance Violated Human Rights

EPIC - Thu, 2018-09-13 10:10

The European Court of Human Rights has ruled that the UK's surveillance regime, revealed by Edward Snowden, violates human rights set out in the European Convention. In the consolidated cases Big Brother Watch v. UK, Bureau of Investigative Journalism v. UK, and 10 Human Rights Organizations v. UK, the Court ruled that the UK surveillance system violated Article 8, the right to privacy, because there were "inadequate" safeguards for selecting the data subject to surveillance. The Court also said that "all interception regimes...have the potential to be abused," and that bulk surveillance regimes must include safeguards so as "to be sufficiently foreseeable to minimise the risk of abuses of power." The Court also ruled that UK surveillance violated the right of free expression because the law did not sufficiently protect confidential journalistic material. EPIC filed a brief in the case explaining that the US, which transfers intelligence data to the UK, has "technological capacities" enabling "wide scale surveillance" and that US law does not restrict surveillance of non-U.S. persons abroad. EPIC's casebook Privacy Law and Society explores a wide range of privacy issues, including recent decisions of the European Court of Human Rights.

Categories: Privacy

EPIC Asks Senate Committee for Delay on Kavanaugh Vote, Seeks Records Release

EPIC - Wed, 2018-09-12 22:13

In a letter to the Senate Judiciary Committee, EPIC urged the Committee to postpone the vote in the Executive Business Meeting on the nomination of Judge Brett Kavanaugh, pending the release of documents concerning the development, defense, and promotion of surveillance programs during the period 2001-2006. EPIC said “[t]he documents are necessary for a full consideration of the qualifications of the nominee to serve on the United States Supreme Court.” In an earlier letter to the Committee, EPIC asked the Senate to determine Judge Kavanaugh's role, while in the Bush White House, in the unlawful warrantless wiretapping program and the secret expansion of the Patriot Act. The records of Supreme Court nominees who served in the White House are routinely made available prior to committee hearings. Last month, EPIC submitted two urgent Freedom of Information Act requests for the records. EPIC regularly shares its views with the Senate concerning nominees to the Supreme Court, including Justice Gorsuch, Justice Kagan, Justice Sotomayor, Justice Alito, and Chief Justice Roberts.

Categories: Privacy

FTC to Explore Competition and Consumer Protection Issues at Hearings this Week

EPIC - Wed, 2018-09-12 15:25

The FTC is holding a hearing this week to examine the regulation of consumer data, the consumer welfare standard in antitrust law, and vertical mergers. This is the first in a series of hearings on "Competition and Consumer Protection in the 21st Century" that will examine how changes in the economy affect the FTC's enforcement priorities. EPIC and a coalition of consumer groups submitted extensive comments for the hearings. EPIC and the groups said that privacy protection is critical for competition and innovation. EPIC and the groups told the FTC that it should: 1) unwind the Facebook-WhatsApp deal; 2) require Facebook and Google to spin off their advertising units; 3) block future acquisitions by Facebook and Google that would extend monopoly control over consumer data; 4) impose privacy safeguards for all mergers that implicate data privacy; and 5) perform audits of algorithmic tools to promote accountability and to limit anticompetitive conduct. The FTC reopened the investigation of Facebook in March after EPIC and consumer groups filed a formal complaint, but has still taken no action. The UK Information Commissioner completed its initial investigation, published a report, and issued a substantial fine in July.

Categories: Privacy