

It's Repair Day: No One Should Be Punished for "Contempt of Business Model"

EFF News - Sat, 2018-10-20 15:06

Repair is one of the secret keys to a better life. Repairs keep our gadgets in use longer (saving our pocketbooks) and divert e-waste from landfills or toxic recycling processes (saving our planet). Repair is an engine of community prosperity: when you get your phone screen fixed at your corner repair shop, your money goes to a locally owned small business (my daughter and the phone screen guy's daughter go to the same school and he always tut-tuts over the state of my chipped and dented phone at parent-teacher nights).

Fixing stuff has deep roots in the American psyche, from the motorheads who rebuilt and souped-up their cars, to the farmers whose ingenuity wrung every last bit of value out of their heavy equipment, to the electronics tinkerers who are lionized today as some of the founders of Silicon Valley.

Repairs are amazing: they account for up to 4% of GDP; they create local jobs (fix a ton of electronics and you generate 200 jobs; send a ton of electronics to a dump to be dismantled and recycled and you create a measly 15 jobs, along with a mountain of toxic waste – reuse is always greener than recycling); and they generate a stream of low-cost, refurbished devices and products that are within reach of low-income Americans.

The twenty-first century should be a golden age of repairs. A simple web-search can yield up instructions for fixing your stuff, a wealth of replacement-part options, and thriving communities of other people in the same boat as you, ready to brainstorm solutions when you hit a wall.

But instead, digital technology has been a godsend for big corporations that want to control how you use, fix, and replace your property.

One trick is to put small, inexpensive microprocessors on each part in a complex product -- everything from tractors to phones -- that force you to use the manufacturer's authorized parts and service technicians. Third-party parts may be functionally identical to the manufacturer's own parts (or even better!), but your device won't recognize them unless they have the manufacturer's "security" chip and its associated cryptographic authentication systems. Even if you put an original manufacturer's part in your device (say, one you've bought from the manufacturer or harvested from a scrapped system), some devices won't start using the original part until an authorized service technician inputs an activation code.
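The parts-pairing mechanism described above amounts to a cryptographic challenge-response handshake between the device and each component. Here is a minimal illustrative sketch in Python (the key, the class names, and the HMAC-based design are assumptions for illustration; actual manufacturers' schemes are proprietary and undocumented):

```python
import hashlib
import hmac
import os

# Hypothetical: the manufacturer burns a secret key into every "authorized"
# part's security chip. Third-party part makers never get this key.
MANUFACTURER_KEY = b"factory-secret"


def part_response(part_secret: bytes, challenge: bytes) -> bytes:
    # The part's security chip answers a random challenge with an HMAC.
    return hmac.new(part_secret, challenge, hashlib.sha256).digest()


class AuthorizedPart:
    def respond(self, challenge: bytes) -> bytes:
        return part_response(MANUFACTURER_KEY, challenge)


class IdenticalThirdPartyPart:
    # Functionally identical hardware, but without the manufacturer's secret.
    def respond(self, challenge: bytes) -> bytes:
        return part_response(b"no-key", challenge)


def device_accepts(part) -> bool:
    # The device sends a fresh random challenge and checks the answer.
    challenge = os.urandom(16)
    expected = hmac.new(MANUFACTURER_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(part.respond(challenge), expected)


print(device_accepts(AuthorizedPart()))           # True
print(device_accepts(IdenticalThirdPartyPart()))  # False
```

The sketch makes the lock-in visible: the third-party part can be electrically identical, but without the manufacturer's secret it cannot produce the expected response, so the device refuses it.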

As if that wasn't bad enough, corporations routinely withhold service manuals, lists of diagnostic codes, and parts.

This would be merely unconscionable and obnoxious, but thanks to some toxic technology laws, these practices become more than a hurdle for independent service technicians to overcome – they become a legal risk.

Section 1201 of the Digital Millennium Copyright Act (DMCA 1201) contains broad prohibitions on bypassing "access controls" for "copyrighted works," with potentially stiff criminal penalties (five years in prison and a $500,000 fine for a first offense) for "commercial trafficking" in tools to bypass an access control. Manufacturers have interpreted this law very broadly, asserting that the software in their gadgets – cars, medical implants, HVAC systems and thermostats, phones, TVs, etc. – is a "copyrighted work" and the systems that block independent service (checks for original parts, activation codes for new parts, access to diagnostic systems) are "access controls." If the firmware in your car is a "copyrighted work" and the system that stops it from recognizing a new engine part is an "access control," then your auto manufacturer can threaten a competitor with lawsuits and prison time for making a gadget that allows your corner mechanic to figure out what's wrong with your car and fix it.

Manufacturers can also look to other notorious tech laws, like the Computer Fraud and Abuse Act (CFAA), as well as end-user license agreements, nondisclosure agreements, trade secrecy, and onerous supply-chain deals. Taken together, these rules and agreements have allowed the country's increasingly concentrated industries to turn purchasing a simple device, appliance, or vehicle into a long-term relationship with the manufacturer, like it or not.

The corporations involved make all kinds of bad faith arguments, claiming that they are protecting their customers from safety risks, like getting malware on their phone via an unscrupulous service technician or winding up with defective replacement parts.

But the reality is that anyone can screw up a repair, including the manufacturer's authorized technicians. Only in the bizarro universe of monopoly corporatethink do consumers get a better deal and more reliable service when companies don't have to compete to get their business (and of course, controlling repairs means controlling product life: in California, a law requires manufacturers to supply parts for seven years, and in California, laptops and phones and other electronics last for seven years, while manufacturers in neighboring states often declare their products to be obsolete after three or five years).

Last year, 18 states introduced Right to Repair legislation that requires manufacturers to get out of the way of independent repair: to make parts, manuals and diagnostic codes available to third-party service depots and refrain from other practices that limit your ability to decide who can fix your things.

The corporate blowback from these bills was massive: with so much money at stake when it comes to monopolizing repair, and so many manufacturers using Big Tech's tricks for freezing out indie service, millions were on the line, and the lobbying money managed to stifle these proposals – for now.

But there is nothing harder to kill than an idea whose time has come. This is the golden age of repairs, a moment made for a renaissance of shade-tree mechanics, electronics tinkerers, jackleg fixit shops, and mom-and-pop service depots. It has to be: our planet, our pocketbooks, and our neighborhoods all benefit when our property lasts longer, works better and does more.

Categories: Privacy

We’re Telling a Court (Again) That President Trump and Other Government Officials Can’t Block People on Twitter For Disagreeing With Them

EFF News - Fri, 2018-10-19 19:25

President Donald Trump and his lawyers still believe he can block people on Twitter because he doesn’t like their views, so today we’ve filed a brief telling a court, again, that doing so violates the First Amendment. We’re hopeful that the court, like the last one that considered the case, will side with the plaintiffs, seven individuals blocked by Trump who are represented by the Knight First Amendment Institute. As we explain in the brief, the case has broad implications for the public as social media use by the government becomes more and more ubiquitous.

Trump lost the first round of the case when a judge sided with the plaintiffs, who include a university professor, a surgeon, a comedy writer, a community organizer, an author, a legal analyst, and a police officer. The judge agreed with the Knight Institute, which argued that the interactive spaces associated with the @realDonaldTrump account are “public forums” under the First Amendment, meaning that the government cannot exclude people from them simply because it disagrees with their views. In a brief filed in round one, we argued that governmental use of social media platforms to communicate to and with the public—and allow the public to communicate with each other—is now the rule of democratic engagement, not the exception. As a result, First Amendment rights of both access to those accounts and the ability to speak in them must apply in full force.

The ruling in round one was a great victory for free speech, recognizing that in the digital age, when a local, state, or federal official communicates with the public about the government’s business through Twitter, he or she doesn’t get to block people from receiving those messages because they’ve used the forum to express their disagreement with the official’s policies. Trump was forced to unblock the plaintiffs.

The president’s attorneys are now trying to convince an appeals court to overturn this ruling, making the same arguments they made in the lower court: that @realDonaldTrump, Trump’s Twitter handle, is the president’s private property and that he can block people if he wants.

In the brief we filed today we’ve told the appeals court that those arguments—which were wrong on the law in the first place—are still wrong. The president has chosen to use his longtime Twitter handle to communicate his administration’s goals, announce policy decisions, and talk about government activity. Similarly, public agencies and officials, from city mayors and county sheriff offices to U.S. Secretaries of State and members of Congress, routinely use social media to communicate official positions, services, and important public safety and policy messages. Twitter has become a vital communications tool for government, allowing local and federal officials to transmit information when natural disasters such as hurricanes and wildfires strike, hold online town halls, and answer citizens’ questions about programs.

When governmental officials and agencies choose a particular technique or technology to communicate with the public about governmental affairs, they have endowed the public with First Amendment rights to receive those messages. And this right, we told the appeals court, is infringed when government denies access to these messages because it disagrees with someone’s viewpoints.

Categories: Privacy

Tech Talk: Trustworthy VPNs

CDT - Fri, 2018-10-19 18:00

CDT’s Tech Talk is a podcast where we dish on tech and Internet policy, while also explaining what these policies mean to our daily lives. You can find Tech Talk on SoundCloud, iTunes, and Google Play, as well as Stitcher and TuneIn.

Are all of you using a VPN to mask your internet browsing and protect your privacy? My hunch is that our tech savvy listeners are all about using a VPN, but of course, not all VPNs are created equal.

CDT has launched a new initiative aimed at helping internet users better assess the trustworthiness of VPNs, and a number of VPN providers were active partners in this process, which is awesome.

In this episode, our Data & Privacy mensch Joe Jerome joins us to talk about the effort and what makes for a trustworthy VPN.



Categories: Privacy

EPIC v. FTC: EPIC Obtains Facebook-FTC Emails About 2011 Consent Order

EPIC - Fri, 2018-10-19 17:05

In response to EPIC's Freedom of Information Act lawsuit, the FTC has released agency emails about the 2011 Facebook Consent Order. Following a detailed complaint by EPIC and other consumer privacy organizations, the FTC issued an order in 2011 that required biennial audits of Facebook's privacy practices. EPIC pursued public release of these reports and related emails to understand why the FTC failed to bring an enforcement action against the company. Today the FTC released to EPIC 89 emails between the FTC and Facebook spanning 2011 through 2018. In March 2018, following the Cambridge Analytica data breach, the FTC announced it was reopening the Facebook investigation. To date, there is still no announcement, no report, and no fine.

Categories: Privacy

Federal Circuit Overturns Fee Award In Crowdsourcing Patent Case

EFF News - Fri, 2018-10-19 15:50

Patent trolls know that it costs a lot of money to defend a patent case. The high cost of defensive litigation means that defendants are pressured to settle even if the patent is invalid. Fee awards can change this calculus and give defendants a chance to fight back against weak claims. A recent decision [PDF] from the Federal Circuit has overturned a fee award in a case involving an abstract software patent on crowdsourcing. This disappointing ruling may encourage other patent trolls to file meritless cases.

Patent troll AlphaCap Ventures claimed that its patent covered various forms of online equity financing. It filed suit against ten different crowdfunding platforms. Most of the defendants settled quickly. But one defendant, Gust, fought back. After nearly two years of litigation in both the Eastern District of Texas and the Southern District of New York, AlphaCap Ventures dismissed its claim against Gust. The judge in the Southern District of New York ruled that AlphaCap Ventures’ attorneys had litigated unreasonably and ordered them to pay Gust’s attorneys’ fees. Those lawyers then appealed.

EFF filed an amicus brief [PDF] to respond to one of the lawyers’ key arguments. AlphaCap Ventures’ attorneys argued that the law of patent eligibility—particularly the law regarding when a claimed invention is an abstract idea and thus ineligible for patent protection under the Supreme Court’s decision in Alice v. CLS Bank—is so unsettled that a court should never award fees when a party loses on the issue. Our brief argued that such a rule could embolden lawyers to file suits with patents they should know are invalid.

As we were drafting our brief in the AlphaCap Ventures case, the Federal Circuit issued a decision in Inventor Holdings v. Bed Bath & Beyond. The patent owner in Inventor Holdings had asked the court to overturn a fee award against it on the ground that the law of patent eligibility was too uncertain for its arguments to have been unreasonable. The Federal Circuit rejected this in a unanimous panel opinion. It wrote:

[W]hile we agree with [Inventor Holdings] as a general matter that it was and is sometimes difficult to analyze patent eligibility under the framework prescribed by the Supreme Court . . . , there is no uncertainty or difficulty in applying the principles set out in Alice to reach the conclusion that the ’582 patent's claims are ineligible.

In other words, it rejected an argument very similar to the one advanced by AlphaCap Ventures’ lawyers.

In the AlphaCap Ventures decision, in contrast, the two-judge majority emphasized that “abstract idea law was unsettled” and found that the lawyers’ arguments were not so unreasonable as to warrant fees. The majority did not distinguish or even cite Inventor Holdings. (Judge Wallach’s dissent does cite Inventor Holdings.) The appeals involved different patents, and the fee awards were made under different statutes, but it was still surprising that the majority did not discuss the Inventor Holdings decision at all.

We hope that the decision in AlphaCap Ventures does not encourage other patent trolls to bring suits with invalid patents. The Inventor Holdings decision remains good law and shows that, at least sometimes, they will be held to account for bringing unreasonable cases.

Categories: Privacy

Open Access Is the Law in California

EFF News - Thu, 2018-10-18 21:33

Governor Jerry Brown recently signed A.B. 2192, a law requiring that all peer-reviewed, scientific research funded by the state of California be made available to the public no later than one year after publication.

EFF applauds Governor Brown for signing A.B. 2192 and the legislature for unanimously passing it—particularly Assemblymember Mark Stone, who introduced the bill and championed it at every step. To our knowledge, no other state has adopted an open access bill this comprehensive.

As we’ve explained before, it’s a problem when cutting-edge scientific research is available only to people who can afford expensive journal subscriptions and academic databases. It insulates scientific research from a broader field of innovators: if the latest research is only available to people with the most resources, then the next breakthroughs will only come from that group.

A.B. 2192 doesn’t solve that problem entirely, but it does limit it. Under the new law, researchers can still publish their papers in subscription-based journals so long as they upload them to public open access repositories no later than one year after publication.

What Now? Future Wins for Open Access

While legislators were considering passing A.B. 2192, we urged them to consider passing a stronger law making research available to the public on the date of publication. In the fast-moving world of science, a one-year embargo period is simply too long.

The best way to maximize the public benefit of state-funded research is to publish it in an open access journal, so that everyone can read it for free on the day it’s published—ideally under an open license that allows anyone to adapt and republish it.

Opponents of open access sometimes claim that open publishing hurts researchers’ reputations, but increasingly, the exact opposite is true; indeed, some of the most important discoveries of the modern era were published in open access journals. That change in practices has come thanks in no small part to a growing list of foundations requiring their grantees to publish in open access journals. Funders can use their influence to change norms in publishing to benefit the public. With the majority of scientific research in the United States funded by government bodies, lawmakers ought to use their power to push for open access. Ultimately, requiring government grantees to publish in open access journals won’t hurt scientists’ reputations; it will help open access’ reputation.

While A.B. 2192’s passage is good news, Congress has still failed to pass an open access law covering science funded by the federal government. FASTR—the Fair Access to Science and Technology Act (S. 1701, H.R. 3427)—is very similar to the California law. It would require every federal agency that spends more than $100 million on grants for research to adopt an open access policy. The bill gives each agency flexibility to choose a policy suited to the work it funds, as long as research is made available to the general public no later than one year after publication. Like the California law, FASTR isn’t perfect, but it’s a great start. Unfortunately, despite strong support in both political parties, FASTR has floundered in Congressional gridlock for five years.

As we celebrate the win for open access in California, please take a moment to write your members of Congress and urge them to pass FASTR.

Take action

Tell Congress: It’s time to move FASTR

Categories: Privacy

EPIC Files Amicus in Case Concerning Government Searches and Google's Email Screening Practices

EPIC - Thu, 2018-10-18 14:55

EPIC has filed an amicus brief with the U.S. Court of Appeals for the Sixth Circuit in United States v. Miller, arguing that the Government must prove the reliability of Google's email screening technique. The lower court held that law enforcement could search any images that Google's algorithm had flagged as apparent child pornography. EPIC explained that a search is unreasonable when the government cannot establish the reliability of the technique. EPIC also warned that the government could use this technique "to determine if files contain religious viewpoints, political opinions, or banned books." EPIC has promoted algorithmic transparency for many years. EPIC routinely submits amicus briefs on the application of the Fourth Amendment to investigative techniques. EPIC previously urged the government to prove the reliability of investigative techniques in Florida v. Harris.
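The brief at issue concerns Google's system, whose details are not public, but screening of this sort is typically built on hash matching: each file is reduced to a digest and compared against a database of digests of known prohibited images. A minimal sketch of the exact-match variant (the data here is hypothetical; real deployments such as Microsoft's PhotoDNA use perceptual hashes instead):

```python
import hashlib

# Hypothetical database of SHA-256 digests of known flagged files.
flagged_digests = {hashlib.sha256(b"known-bad-file").hexdigest()}


def is_flagged(file_bytes: bytes) -> bool:
    # Flag a file only if its digest exactly matches a database entry.
    return hashlib.sha256(file_bytes).hexdigest() in flagged_digests


print(is_flagged(b"known-bad-file"))    # True: exact copy
print(is_flagged(b"known-bad-file2"))   # False: any change defeats an exact hash
```

The design choice matters: an exact cryptographic hash essentially never produces false positives but misses trivially altered copies, while perceptual hashes tolerate alteration at the cost of possible false matches. That trade-off is precisely the reliability question EPIC's brief asks the government to answer.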

Categories: Privacy

EPIC FOIA: Records Show DHS Ignored Privacy, First Amendment Threats of Media Monitoring Program

EPIC - Wed, 2018-10-17 19:05

EPIC has obtained records concerning "Media Monitoring Services," a controversial DHS project to track journalists, news outlets, and social media accounts. The records, released in EPIC's FOIA lawsuit against the federal agency, reveal that the DHS bypassed the agency's own privacy officials and ignored the privacy and First Amendment implications of monitoring the coverage by particular journalists of a federal agency. As a result of EPIC's lawsuit, the agency previously admitted that it did not conduct a Privacy Impact Assessment for the program, as required by law. EPIC has successfully obtained several Privacy Impact Assessments, including for a related media tracking system (EPIC v. DHS) and for facial recognition technology (EPIC v. FBI). In EPIC v. Presidential Election Commission, EPIC challenged the Commission's failure to publish a Privacy Impact Assessment prior to the collection of state voter data.

Categories: Privacy

From Canada to Argentina, Security Researchers Have Rights—Our New Report

EFF News - Tue, 2018-10-16 19:44

EFF is introducing a new Coders' Rights project to connect the work of security research with the fundamental rights of its practitioners throughout the Americas. The project seeks to support the right of free expression that lies at the heart of researchers' creations and use of computer code to examine computer systems, and relay their discoveries among their peers and to the wider public.  

To kick off the project, EFF published a whitepaper today, “Protecting Security Researchers' Rights in the Americas” (PDF), to provide the legal and policy basis for our work, outlining human rights standards that lawmakers, judges, and, most particularly, the Inter-American Commission on Human Rights should use to protect the fundamental rights of security researchers.

We started this project because hackers and security researchers have never been more important to the security of the Internet. By identifying and disclosing vulnerabilities, hackers are able to improve security for every user who depends on information systems for their daily life and work.

Computer security researchers work, often independently of large public and private institutions, to analyze, explore, and fix the vulnerabilities scattered across the digital landscape. While most of this work is done unobtrusively by consultants or employees, some of it is done in the public interest, which earns researchers headlines and plaudits but can also attract civil or criminal suits. They can be targeted and threatened under laws intended to prevent malicious intrusion, even when their own work is anything but malicious. The result is that security researchers operate in an environment of legal uncertainty, even as their work becomes more vital to the orderly functioning of society.

Drawing on rights recognized by the American Convention on Human Rights, and examples from North and South American jurisprudence, this paper analyzes what rights security researchers have, how those rights are expressed in the Americas’ unique arrangement of human rights instruments, and how we might best interpret the requirements of human rights law—including rights of privacy, free expression, and due process—when applied to the domain of computer security research and its practitioners. In cooperation with technical and legal experts across the continent, we explain that:

  • Computer programming is expressive activity protected by the American Convention on Human Rights. We explain how free expression lies at the heart of researchers’ creation and use of computer code to examine computer systems and to relay their discoveries among their peers and to the wider public.
  • Courts and the law should guarantee that the creation, possession, or distribution of tools related to cybersecurity are protected by Article 13 of the American Convention on Human Rights, as legitimate acts of free expression, and should not be criminalized or otherwise restricted. These tools are critical to the practice of defensive security and have legitimate, socially desirable uses, such as identifying and testing practical vulnerabilities.
  • Lawmakers and judges should discourage the use of criminal law as a response to behavior by security researchers which, while technically in violation of a computer crime law, is socially beneficial. Cybercrime laws should include malicious intent and actual damage in their definitions of criminal liability.
  • The “Terms of service” (ToS) of private entities have created inappropriate and dangerous criminal liability for researchers by redefining “unauthorized access” in the United States. In Latin America, under the Legality Principle, ToS provisions cannot be used to meet the vague and ambiguous standards established in criminal provisions (for example, "without authorization"). Criminal liability cannot be based on how private companies would like their services to be used. On the contrary, criminal liability must be based on laws which describe in a precise manner which conduct is forbidden and which is punishable.
  • Penalties for crimes committed with computers should, at a minimum, be no higher than penalties for analogous crimes committed without computers.
  • Criminal law punishment provisions should be proportionate to the crime, especially when cybercrimes demonstrate little harmful effects, or are comparable to minor traditional infractions.
  • Proactive actions that will secure the free flow of information in the security research community are needed.

We’d like to thank EFF Senior Staff Attorney Nate Cardozo, Deputy Executive Director and General Counsel Kurt Opsahl, International Rights Director Katitza Rodríguez, Staff Attorney Jamie Lee Williams, as well as consultant Ramiro Ugarte and Tamir Israel, Staff Attorney at Canadian Internet Policy and Public Interest Clinic at the Centre for Law, Technology and Society at the University of Ottawa, for their assistance in researching and writing this paper.

Categories: Privacy

What To Do If Your Account Was Caught in the Facebook Breach

EFF News - Tue, 2018-10-16 19:10

Keeping up with Facebook privacy scandals is basically a full-time job these days. Two weeks ago, Facebook announced a massive breach with scant details. Then, this past Friday, it released more information, revising earlier estimates of the number of affected users and outlining exactly what types of user data were accessed. Here are the key details you need to know, as well as recommendations about what to do if your account was affected.

30 Million Accounts Affected

The number of users whose access tokens were stolen is lower than Facebook originally estimated. When Facebook first announced this incident, it stated that attackers may have been able to steal access tokens—digital “keys” that control your login information and keep you logged in—from 50 to 90 million accounts. Since then, further investigation has revised that number down to 30 million accounts.

The attackers were able to access an incredibly broad array of information from those accounts. The 30 million compromised accounts fall into three main categories. For 15 million users, attackers accessed names and phone numbers, emails, or both (depending on what people had listed).

For 14 million, attackers accessed those two sets of information as well as extensive profile details, including:

  • Username
  • Gender
  • Locale/language
  • Relationship status
  • Religion
  • Hometown
  • Self-reported current city
  • Birthdate
  • Device types used to access Facebook
  • Education
  • Work
  • The last 10 places they checked into or were tagged in
  • Website
  • People or Pages they follow
  • Their 15 most recent searches

For the remaining 1 million users whose access tokens were stolen, attackers did not access any information.

Facebook is in the process of sending messages to affected users. In the meantime, you can also check Facebook’s Help Center to find out if your account was among the 30 million compromised—and if it was, which of the three rough groups above it fell into. Information about your account will be at the bottom in the box titled “Is my Facebook account impacted by this security issue?”

What Should You Do If Your Account Was Hit?

The most worrying potential outcome of this hack for most people is what someone might be able to do with this mountain of sensitive personal information. In particular, adversaries could use this information to turbocharge their efforts to break into other accounts, particularly by using phishing messages or exploiting legitimate account recovery flows. With that in mind, the best thing to do is stay on top of some digital security basics: look out for common signs of phishing, keep your software updated, consider using a password manager, and avoid using easy-to-guess security questions that rely on personal information.

The difference between a clumsy, obviously fake phishing email and a frighteningly convincing phishing email is personal information. The information that attackers stole from Facebook is essentially a database connecting millions of people’s contact information to their personal information, which amounts to a treasure trove for phishers and scammers. Details about your hometown, education, and places you recently checked in, for example, could allow scammers to craft emails impersonating your college, your employer, or even an old friend.

In addition, the combination of email addresses and personal details could help someone break into one of your accounts on another service. All a would-be hacker needs to do is impersonate you and pretend to be locked out of your account—usually starting with the “Forgot your password?” option you see on log-in pages. Because so many services across the web still have insecure methods of account recovery like security questions, information like birthdate, hometown, and alternate contact methods like phone numbers could give hackers more than enough to break into weakly protected accounts.
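The weakness of personal-information "security questions" comes down to search-space size, which a back-of-the-envelope comparison makes concrete (the categories and counts below are illustrative assumptions, not measured figures):

```python
import math

# Rough number of answers an attacker would have to try for each secret.
search_spaces = {
    "birthdate (any day in ~100 years)": 366 * 100,
    "hometown (plausible guesses)": 20_000,
    "random 12-char password (62-char alphabet)": 62 ** 12,
}

for secret, size in search_spaces.items():
    # Express each search space in bits of entropy for easy comparison.
    print(f"{secret}: about 2^{math.log2(size):.0f} guesses")
```

A birthdate offers only around 2^15 possibilities, brute-forceable in moments next to a random password's roughly 2^71, and with a breached profile in hand the attacker does not need to guess at all.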

Facebook stated that it has not seen evidence of this kind of information being used “in the wild” for phishing attempts or account recovery break-ins. Facebook has also assured users that no credit card information or actual passwords were stolen (which means you don’t need to change those), but for many that is cold comfort. Credit card numbers and passwords can be changed, but the deeply private insights revealed by your 15 most recent searches or 10 most recent locations cannot be so easily reset.

What Do We Still Need To Know?

Because it’s cooperating with the FBI, Facebook cannot discuss any findings about the hackers’ identity or motivations. However, from Facebook’s more detailed description of how the attackers carried out the attack, it’s clear that they were determined and coordinated enough to find an obscure, complex vulnerability in Facebook’s code. It’s also clear that they had the resources necessary to automatically exfiltrate data on a large scale.

We still don’t know what exactly the hackers were after: were they targeting particular individuals or groups, or did they just want to gather as much information as possible? It’s also unclear if the attackers abused the platform in ways beyond what Facebook has reported, or used the particular vulnerability behind this attack to launch other, more subtle attacks that Facebook has not yet found.

There is only so much individual users can do to protect themselves from this kind of attack and its aftermath. Ultimately, it is Facebook’s and other companies’ responsibility to not only protect against these kinds of attacks, but also to avoid retaining and making vulnerable so much personal information in the first place.

Categories: Privacy

EPIC Publishes "Privacy Law Sourcebook 2018"

EPIC - Tue, 2018-10-16 18:00

EPIC proudly announces the 2018 edition of the Privacy Law Sourcebook, the definitive reference guide to US and international privacy law. The Privacy Law Sourcebook is an edited collection of the primary legal instruments for privacy protection in the modern age, including United States law, international law, and recent developments. The Privacy Law Sourcebook 2018 has been updated and expanded to include the modernized Council of Europe Convention on Privacy, the Judicial Redress Act, the CLOUD Act, and new materials from the United Nations. The EPIC Privacy Law Sourcebook also includes the full text of the GDPR. EPIC will make the Privacy Law Sourcebook freely available to NGOs and human rights organizations. EPIC publications and the publications of EPIC Advisory Board members are available at the EPIC Bookstore.

Categories: Privacy

Lawsuit Seeking to Unmask Contributors to ‘Shitty Media Men’ List Would Violate Anonymous Speakers’ First Amendment Rights

EFF News - Tue, 2018-10-16 17:10

A lawsuit filed in New York federal court last week against the creator of the “Shitty Media Men” list and its anonymous contributors exemplifies how individuals often misuse the court system to unmask anonymous speakers and chill their speech. That’s why we’re watching this case closely, and we’re prepared to advocate for the First Amendment rights of the list’s anonymous contributors.

On paper, the lawsuit is a defamation case brought by the writer Stephen Elliott, who was named on the list. The Shitty Media Men list was a Google spreadsheet shared via link and made editable by anyone, making it particularly easy for anonymous speakers to share their experiences with men identified on the list. But a review of the complaint suggests that the lawsuit is focused more broadly on retaliating against the list’s creator, Moira Donegan, and publicly identifying those who contributed to it.

For example, after naming several anonymous defendants as Jane Does, the complaint stresses that “Plaintiff will know, through initial discovery, the names, email addresses, pseudonyms and/or ‘Internet handles’ used by Jane Doe Defendants to create the List, enter information into the List, circulate the List, and otherwise publish information in the List or publicize the List.”

In other words, Elliott wants to obtain identifying information about anyone and everyone who contributed to, distributed, or called attention to the list, not just those who provided information about Elliott specifically.

The First Amendment, however, protects anonymous speakers like the contributors to the Shitty Media Men list, who were trying to raise awareness about what they see as a pervasive problem: predatory men in media. As the Supreme Court has ruled, anonymity is a historic and essential way of speaking on matters of public concern—it is a “shield against the tyranny of the majority.”

Anonymity is particularly critical for people who need to communicate honestly and openly without fear of retribution. People rely on anonymity in a variety of contexts, including reporting harassment, violence, and other abusive behavior they’ve experienced or witnessed. This was the exact purpose of the Shitty Media Men list. Donegan, after learning she would be identified as the creator of the list, came forward and wrote that she “wanted to create a place for women to share their stories of harassment and assault without being needlessly discredited or judged. The hope was to create an alternate avenue to report this kind of behavior and warn others without fear of retaliation.”

It’s easy to understand why contributors to the list did so anonymously, and that they very likely would not have provided the information had they not been able to remain anonymous. By threatening that anonymity, lawsuits like this one risk discouraging anyone in the future from creating similar tools that share information and warn people about violence, abuse, and harassment.

To be clear, our courts do allow plaintiffs to pierce anonymity if they can show a need to do so in order to pursue legitimate claims. That does not seem to be the case here, because the claims against Donegan appear to be without merit. Given that she initially created the spreadsheet as a platform to allow others to provide information, Donegan is likely immune from suit under Section 230, the federal law that protects creators of online forums like the “Shitty Media Men” list from being treated as the publishers of information added by other users, here the list’s contributors. And even if Donegan did in fact create the content about Elliott, she could still argue that the First Amendment requires him to show that the allegations were not only false but also made with actual malice.

EFF has long fought for robust protections for anonymous online speakers, representing speakers in court cases and also pushing courts to adopt broad protections for them. Given the potential dangers to anonymous contributors to this list and the thin allegations in the complaint, we hope the court hearing the lawsuit quickly dismisses the case and protects the First Amendment rights of the speakers who provided information to it. We also applaud Google, which has said that it will fight any subpoenas seeking information on its users who contributed to the list. 

EFF will continue to monitor the case and advocate for the First Amendment rights of those who contributed to the list should it become necessary. If you contributed to the list and are concerned about being identified, or otherwise have questions, contact us. As with all inquiries about legal assistance from EFF, the attorney/client privilege applies, even if we can’t take your case.

Categories: Privacy

Techsplanations: Part 5, Virtual Private Networks

CDT - Tue, 2018-10-16 17:09

Previously in this series, we talked about what the internet is and how it works, what the web is, and net neutrality. As before, please refer to this glossary for quick reference to some of the key terms and concepts (in bold).

One increasingly popular and prominent privacy-enhancing tool is the virtual private network, or VPN. In this post, we explain a bit about how VPNs work and discuss some of the challenges internet users face when using them.

What is a VPN?

Illustration by Joseph Jerome.

When browsing the internet or connecting “smart” technologies, we leave a trail of information that is valuable to companies, governments, and bad actors. Unsecured Wi-Fi networks are everywhere and easily accessible to anyone with a bit of technical skill who might be curious. A means of shielding oneself from these prying eyes is undoubtedly attractive. A virtual private network, or VPN, creates a virtual tunnel that encrypts and obscures some of this information.

Illustration by Joseph Jerome.

A VPN is a tool that disguises your actual network IP address and encrypts internet traffic between a computer (or phone or any networked “smart” device) and a VPN’s server. A VPN acts as a sort of tunnel for your internet traffic, preventing outsiders from monitoring or modifying it. Traffic in the tunnel is encrypted and sent to your VPN, which makes it much harder for third parties like internet service providers (ISPs) or hackers on public Wi-Fi to snoop on a VPN user’s traffic or execute man-in-the-middle attacks. The traffic then leaves the VPN for its ultimate destination, masking the user’s original IP address. This helps disguise a user’s physical location from anyone looking at traffic after it leaves the VPN. A VPN thus offers you more privacy and security, but it does not make you completely anonymous online: your traffic is still visible to the operator of the VPN.
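The tunneling idea above can be sketched in a few lines. This is a deliberately simplified toy, using an XOR keystream built from hashing; it is not a real cipher, and real VPNs use vetted encryption such as AES-GCM or ChaCha20-Poly1305. The point is only to show encapsulation: the inner traffic is encrypted, and an observer on the local network sees only an opaque payload addressed to the VPN server.

```python
import hashlib
from itertools import count

def toy_keystream(key: bytes):
    """Toy keystream from iterated hashing -- for illustration only,
    NOT secure; real VPNs use vetted ciphers like AES-GCM."""
    for counter in count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(b ^ k for b, k in zip(data, toy_keystream(key)))

# The "inner packet": the traffic you actually want to send.
inner = b"GET /search?q=private HTTP/1.1 -> dst: example.com"

key = b"shared-secret-negotiated-with-vpn"
ciphertext = toy_encrypt(key, inner)

# The "outer packet" an observer on the local network sees:
# it is addressed to the VPN server and its payload is opaque.
outer = {"dst": "vpn.example.net", "payload": ciphertext}
assert outer["payload"] != inner                    # contents hidden in transit
assert toy_encrypt(key, outer["payload"]) == inner  # the VPN server can recover it
```

The local network only learns that you are talking to `vpn.example.net` (a hypothetical address); the VPN server decrypts the payload and forwards it to its real destination.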

It is also important to recognize that a VPN is not an ad blocker. A VPN can mask your IP address, but it does not, by default, block ads or the tracking technologies that follow your activities across websites and devices. The protections offered by a VPN are also not the same as those offered by web browsers, such as private modes that clear cookies on exit, or by the security-focused Tor Browser. VPNs are generally faster than Tor, but Tor can provide stronger anonymity.

Why should someone use a VPN?

Why you might use a VPN really depends on your threat model, or what traffic you want to disguise and from whom. Enterprise VPNs have long been used by employers for teleworking or to give remote employees access to employers’ computer networks, but data breaches, government surveillance, and debates about net neutrality have driven additional VPN use by individuals in the United States. For example, when Congress rolled back the FCC’s broadband privacy rules in 2017, VPNs were suggested as one tool to limit the amount of web browsing activities and network information available to ISPs.

While it is true that a trustworthy VPN can shield you from having your ISP see your browsing activities, VPNs provide their best benefits by shielding your activities from other third parties that can monitor traffic on local networks. A VPN is a good tool to have if you do any of the following:

  • Look for any unsecured Wi-Fi networks to connect to while traveling.
  • Frequently take advantage of Wi-Fi networks at coffee shops or airports.
  • Connect to secured networks at hotels or other businesses that monitor internet usage.

Unsecured networks are a big problem: hackers can position themselves between you and the internet access point, or even pose as the access point itself, potentially exposing anything you do online. A VPN can protect you from this sort of snooping. More generally, anywhere a network can be monitored by a curious customer, a bad actor, or an interested employer, a VPN can be warranted.

Many people seek out VPNs to access content restricted by geography or to torrent and download media. This is because VPNs can disguise your general physical location by changing the IP address seen by the receiving end of your communications. A VPN hides your true IP address, which reveals your general location and can be valuable to anyone from advertisers to law enforcement, and shows your traffic as coming from an IP address assigned by the VPN. This not only protects your privacy by disassociating your web traffic from your home IP address, but also allows VPNs to effectively “tunnel” your traffic to another country or physical location. In this way, VPNs can help people get around content restrictions, blocked websites, and government censorship. VPNs, for example, have been an important tool to help people in China circumvent internet access restrictions and access sites blacklisted by the government.

How does a VPN work?

VPNs rely on servers, protocols, and encryption to disguise your data. If you’ve been reading this series, you already know what the internet and the World Wide Web that sits atop it are. Web servers receive packets of information from your modem, which has an IP address. This content and metadata can be very revealing. While users are often most concerned about unencrypted data packets, even our metadata reveals a lot about us. Every piece of technology your modem interacts with can learn a bit about you, and those bits can reveal a great deal cumulatively, over time.

For example, if I visit a website to see what medical help I can get for a rash or infection, that site will learn my IP address and it will frequently log that information. This information can be used to analyze how many people visit a site or where traffic is coming from; it can also be shared with marketers or law enforcement, who can learn other information attached to that IP address. A VPN essentially replaces this visible metadata.
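The example above can be made concrete with a minimal sketch of the metadata a web server typically logs for each request, and what a VPN changes. The values here are hypothetical (using documentation-reserved IP ranges), not real log data:

```python
# What a site learns from one visit: the client IP maps back to your
# ISP, your rough physical location, and potentially your identity.
visit_without_vpn = {
    "client_ip": "203.0.113.45",      # hypothetical home IP address
    "path": "/rash-treatment",
    "timestamp": "2018-10-16T17:09:00Z",
}

# Behind a VPN, the site logs the same request contents, but the
# source IP is the VPN's exit server, not your home connection.
visit_with_vpn = dict(visit_without_vpn, client_ip="198.51.100.7")

# The request itself is unchanged -- only the visible origin differs.
assert visit_with_vpn["path"] == visit_without_vpn["path"]
assert visit_with_vpn["client_ip"] != visit_without_vpn["client_ip"]
```

This is what the post means by a VPN “replacing” visible metadata: the content of the request still reaches the site, but the IP address that gets logged points at the VPN, not at you.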

Illustration by Joseph Jerome.

When you launch your VPN, either as software on your computer or as an app on your phone, your traffic is encrypted, sent to your VPN’s servers, and then sent onward to your ultimate online destination. Third parties see this traffic as coming from a VPN server and its location, not from your computer or your general location. For example, if I look up the IP address of my computer at CDT, I can see that it is located in Arlington, Virginia. When I turn on one VPN, my apparent IP address changes to one based in the United Kingdom. Another defaults to placing me in Los Angeles. A third keeps me close to home in Washington, DC.

This ability to make your traffic appear to come from elsewhere is how and why VPNs are used to circumvent online censorship controls or to access geoblocked movies and other digital content. These activities can violate a service’s terms and conditions, and many popular services block IP addresses known to be associated with VPNs. That said, the number, variety, and location of the servers a VPN offers is often an important selling point. Many VPNs rely on third parties to host their servers, though this comes at the expense of physical, in-house control over them.

In addition to servers, VPNs rely on protocols to ensure traffic travels securely to a VPN server and back. These protocols make up the “tunnel” for your traffic and combine transmission protocols with encryption standards, which can affect the security or speed of your connection. Transmission protocols like PPTP, L2TP/IPSec, SSTP, IKEv2, and OpenVPN are instructions for how a VPN makes an encrypted connection. Unfortunately, there is no single standard protocol, and each has different pros and cons. CDT recommends that most people use connections based on the OpenVPN protocol. Like HTTPS websites, OpenVPN relies on SSL/TLS. It is a well-regarded and, importantly, open source protocol, but it may require additional work to download, install, and configure.
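Since OpenVPN’s secure channel is built on the same TLS machinery that protects HTTPS, a rough illustration of that machinery (of TLS generally, not of OpenVPN itself) is how a TLS client context is configured in Python: by default it requires the server to prove its identity with a certificate, the same style of authentication a VPN handshake performs.

```python
import ssl

# Build a client-side TLS context with secure defaults. The peer must
# present a certificate that chains to a trusted root, and the
# certificate must match the hostname we intended to reach -- the same
# kind of certificate-based authentication OpenVPN uses to make sure
# you are talking to your real VPN server, not an impostor.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # server must prove its identity
assert ctx.check_hostname                    # and match the expected name
```

A connection made through this context would refuse to proceed if a man-in-the-middle presented an untrusted or mismatched certificate, which is precisely the guarantee a VPN tunnel needs before any traffic enters it.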

There are different levels of encryption to consider, as well. Encryption is like a lock that protects information. For maximum security, bigger is generally better: AES 256-bit encryption provides a good security baseline. A 256-bit encryption key, for instance, would have 1.1579 × 10^77 different lock combinations. Put another way, if the fastest computer in the world were to start guessing combinations, it could take longer than the entire lifespan of the universe to crack your key.
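The key-space figure above is easy to verify. As a back-of-the-envelope sketch (the guessing rate is a made-up, wildly generous assumption), a 256-bit key has 2^256 possible values, and even searching half of them at an absurd speed takes astronomically long:

```python
# Number of possible AES-256 keys.
key_space = 2 ** 256
print(f"{key_space:.4e}")  # 1.1579e+77

# Hypothetical brute-force machine trying 10^18 keys per second --
# far beyond any real hardware. Expected time to search half the
# key space, converted to years:
guesses_per_second = 10 ** 18
seconds = key_space / 2 / guesses_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")  # on the order of 1e51 years, vs. a universe age of ~1.4e10
```

Even with every assumption tilted toward the attacker, the result dwarfs the age of the universe, which is why the post says key length is not the weak point; trust in the VPN operator is.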

VPNs use encryption for a number of different purposes. For instance, encryption is used in VPNs both to protect information from anyone monitoring the tunnel and to authenticate that both the user of the VPN and the VPN provider are who they claim to be. The level of encryption really depends on what the security risk is, and a user-friendly VPN will explain what encryption method it employs and what options users have.

Sounds great! What’s the catch?

Despite how VPNs are often marketed, they do not make a person absolutely anonymous online. They only disguise your traffic to some third parties. A VPN will not stop services like Google or Amazon from recognizing you if you sign into their services, and VPNs also cannot stop the types of invasive data fingerprinting or web tracking technologies that are pretty good at guessing who you are without your knowledge or participation. VPNs are just one tool among many to protect your online privacy.

And they are a tool that requires you to trust the provider of the VPN service, which can be easier said than done. Trust, or the lack thereof, is a huge problem in the world of VPNs. Even the Federal Trade Commission has essentially said “buyer beware” when it comes to researching and choosing a VPN. The need to trust a commercial provider can be reduced by installing your own VPN server with tools such as Algo, Jigsaw’s Outline, or Streisand, but none of these is as easy to install or use as a commercial VPN service. They also require signing up with a cloud provider like Amazon Web Services to host the VPN server.

These tools have gotten more accessible, but for most users, commercial VPN services are the easier solution. This makes a VPN’s reputation incredibly important. Two years ago, Ars Technica detailed the struggle to create a list of trustworthy, safe, and secure VPNs, and even today, while there are countless VPN review sites, it is often unclear how biased or accurate they are. That One Privacy Site has become perhaps the most critical source of reviews and comparison information about hundreds of different VPNs, and Wirecutter’s recent exploration into VPNs (which, in full disclosure, CDT contributed to) also highlighted it as a useful resource.

CDT has been working with a number of VPNs to promote better practices. You can learn more by visiting CDT’s Signals of Trustworthy VPNs resource page.

The post Techsplanations: Part 5, Virtual Private Networks appeared first on Center for Democracy & Technology.

Categories: Privacy

Federal Circuit (Finally) Makes Briefs Immediately Available to the Public

EFF News - Tue, 2018-10-16 14:14

In a victory for transparency, the Federal Circuit has changed its policies to give the public immediate access to briefs. Previously, the court had marked submitted briefs as “tendered” and withheld them from the public pending review by the Clerk’s Office. That process sometimes took a number of days. EFF wrote a letter [PDF] asking the court to make briefs available as soon as they are filed. The court has published new procedures [PDF] that will allow immediate access to submitted briefs.

Regular readers might note that this is the second time we have announced this modest victory. Unfortunately, our earlier blog post was wrong and arose out of a miscommunication with the court (the Clerk’s Office informed us of our mistake and we corrected that post). This time, the new policy clearly provides for briefs to be immediately available to the public. The announcement states:

The revised procedure will allow for the immediate filing and public availability of all electronically-filed briefs and appendices. … As of December 1, 2018, when a party files a brief or appendix with the court, the document will immediately appear on the public docket as filed, with a notation of pending compliance review.

In our letter to the Federal Circuit, we had explained that the public’s right of access to courts includes a right to timely access. The Federal Circuit is the federal court of appeal that hears appeals in patent cases from all across the country, and many of its cases are of interest to the public at large. We are glad that the court will now give the press and the public immediate access to filed briefs.

Overall, the Federal Circuit has a good record on transparency. The court has issued rulings making it clear that it will only allow material to be sealed for good reason. The court’s rules of practice require parties to file a separate motion if they want to seal more than 15 consecutive words in a motion or a brief. The Federal Circuit’s new filing policy brings its docketing practices in line with this record of transparency and promotes timely access to court records.

Categories: Privacy

Ten Legislative Victories You Helped Us Win in California

EFF News - Tue, 2018-10-16 13:44

Your strong support helped us persuade California’s lawmakers to do the right thing on many important technology bills debated on the chamber floors this year. With your help, EFF won an unprecedented number of victories, supporting good bills and stopping those that would have hurt innovation and digital freedoms.

Here’s a list of victories you helped us get the legislature to pass and the governor to sign, through your direct participation in our advocacy campaigns and your other contributions to support our work.

Net Neutrality for California

Our biggest win of the year, the quest to pass California’s net neutrality law and set a gold standard for the whole country, was hard-fought. S.B. 822 prevents Internet service providers not only from blocking or interfering with traffic, but also from prioritizing their own services in ways that discriminate.

California made a bold declaration to support the nation’s strongest protections of a free and open Internet. As the state fights for the ability to enact its law—following an ill-conceived legal challenge from the Trump administration—you can continue to let lawmakers know that you support its principles.

Increased Transparency into Local Law Enforcement Policies

Transparency is the foundation of trust. Thanks to the passage of S.B. 978, California police departments and sheriff’s offices will now be required to post their policies and training materials online, starting in January 2020. The California Commission on Peace Officer Standards and Training will be required to make its vast catalog of trainings available as well. This will encourage better and more open relationships between law enforcement agencies and the communities they serve.

Increasing public access to police materials about training and procedures benefits everyone by making it easier to understand what to expect from a police encounter. It also helps ensure that communities have a better grasp of new police surveillance technologies, including body cameras and drones.

Public Access to Footage from Police Body Cameras

Cameras worn by police officers are increasingly common. While intended to promote police accountability, unregulated body cams can instead become high tech police snooping devices.

Some police departments have withheld recordings of high-profile police use of force against civilians, even when communities demand release. Prior to this bill’s introduction, Los Angeles, for example, had a policy that didn’t allow for any kind of public access at all.

The public now has the right to access those recordings. A.B. 748 ensures that starting July 1, 2019, you will have the right to access this important transparency resource.

EFF sent a letter stating its support for this law, which makes it more likely that body-worn cameras will be used as a tool for holding officers accountable, rather than a tool of police surveillance against the public.

Privacy Protections for Cannabis Users

As the legal marijuana market develops in California, it is critical that the state protects the data privacy rights of cannabis users. A.B. 2402 is a step in the right direction, providing modest but vital privacy measures.

A.B. 2402 stops cannabis distributors from sharing the personal information of their customers without their consent, granting cannabis users an important data privacy right. The bill also prohibits dispensaries from discriminating against a customer who chooses to withhold that consent.

As more vendors use technology such as apps and websites to market marijuana, the breadth of their data collection continues to grow. News reports have found that dispensaries are scanning and retaining driver license data, as well as requiring names and phone numbers before purchases. This new law ensures that users can deny consent to having their personal information shared with other companies, without penalty.

Better DNA Privacy for Youths

DNA information reveals a tremendous amount about a person – their medical conditions, their ancestry, and many other immutable traits – and handing over a sample to law enforcement has long-lasting consequences. Unfortunately, at least one police agency has demanded DNA from youths in circumstances that are confusing and coercive.

A.B. 1584 makes sure that before this happens, kids will have an adult in the room to explain the implications of handing a DNA sample over to law enforcement. Once this law takes effect in January 2019, law enforcement officials must have the consent of a parent, guardian, or attorney, in addition to consent from the minor, to collect a DNA sample.

EFF wrote a letter supporting this bill as a vital protection for California’s youths, particularly in light of press reports about police demanding DNA from young people without a clear reason. In one case, police approached kids coming back from a basketball game at a rec center and had them sign forms “consenting” to cheek swabs.

A.B. 1584 adds sensible privacy protections for children, to ensure that they fully understand how police may use these DNA samples. It also guarantees that, if the sample doesn’t implicate them in a crime, it will be deleted from the system promptly.

Guaranteed Internet Access for Kids in Foster Care and Juvenile Detention

Internet access is vital to succeeding in today’s world. With your support, we persuaded lawmakers to recognize how important it is for some of California’s most vulnerable young people—those involved in the child welfare and juvenile justice systems— to be able to access the Internet, as a way to further their education. A.B. 2448 guarantees that access.

EFF testified before a Senate committee to advocate for the 2017 version of this bill, which the governor vetoed with the condition that he would sign a narrower text. The second version, however, passed muster with Gov. Brown. Throughout the process, EFF launched email campaigns and enlisted the help of tech companies, including Facebook, to lend their support to the effort.

This law affirms that some of the state’s most at-risk young people have access to all the resources the Internet has to offer. And it shows the country that if California can promise Internet access to disadvantaged youth, then other states can, too.

Better Privacy Protections for ID Scanning

Getting your ID card checked at a bar? The bouncer may be extracting digital information from your ID, and the bar may then be sharing that information with others. California law limits bars from sharing information they collected through swiping your ID, but some companies and police departments believed they could bypass those safeguards as long as IDs were “scanned” rather than “swiped.”

A.B. 2769 closes this loophole. It makes sure that you have the same protections against having your information shared without your consent whether the bouncer checking you out is swiping your card or scanning it.

EFF sent a letter in support of this bill to the governor. People shouldn’t lose the right to consent to data sharing simply because the place they go chooses a different method of checking their identification.

Thankfully, the governor signed this common-sense bill.

Open Access to Government-funded Research

A.B. 2192 was a huge victory for open access to knowledge in the state of California. It gives everyone access to research that’s been funded by the government within a year of its publication date.

EFF went to Sacramento to testify in support of this bill. We also wrote to explain that it would have at most a negligible financial impact on the state budget to require researchers to make their reports open to the public. This prompted lawmakers to reconsider the bill after previously setting it aside.

A.B. 2192 is a good first step. EFF would like to see other states adopt similar measures. We also want California to take further strides to make research available to other researchers looking to advance their work, and to the general public.

No Government Committee Deciding What is “Fake News”

Fighting “fake news” has become a priority for a lot of lawmakers, but S.B. 1424, a bill EFF opposed, was not the way to do it. The bill would have set up a state advisory committee to recommend ways to “mitigate” the spread of “fake news.” That would have created an excessive risk of new laws that restrict the First Amendment rights of Californians.

EFF sent a letter to the governor, outlining our concerns about having the government be the arbiter of what is true and what isn’t. This is an especially difficult task when censors examine complex speech, such as parody and satire.

Gov. Brown vetoed this bill, ultimately concluding that it was not needed. “As evidenced by the numerous studies by academic and policy groups on the spread of false information, the creation of a statutory advisory group to examine this issue is not necessary,” he wrote.

Helped Craft a Better Bot-Labeling Law

California's new bot-labeling bill, S.B. 1001, initially included overbroad language that would have swept up bots used for ordinary and protected speech activities. Early drafts of the bill would have regulated accounts used for poetry, political speech, or satire. The original bill also created a takedown system that could have been used to censor or discredit important voices, like civil rights leaders or activists.

EFF worked with the bill's sponsor, Senator Robert Hertzberg, to remove the dangerous language and think through the original bill's unintended negative consequences. We thank the California legislature for hearing our concerns and amending this bill.

On to 2019!

You spoke, and California’s legislature and governor listened. In 2018, we made great progress for digital liberty. With your help, we look forward to more successes in 2019. Thank you!

Categories: Privacy

EPIC v. FTC: EPIC Obtains Emails about Facebook Audits

EPIC - Mon, 2018-10-15 18:10

In response to EPIC's Freedom of Information Act lawsuit, the FTC has released communications about Facebook's biennial audits. The audits are required by the FTC's 2011 Consent Order with Facebook, which followed a detailed complaint by EPIC and other consumer privacy organizations. The emails show that the FTC had concerns about the scope of Facebook's 2015 assessment, stating "PwC's report does not demonstrate whether and how Facebook addressed the impact of acquisitions on its Privacy Program." In other emails, the FTC expressed similar concerns about the 2017 assessment, including whether the audit evaluated the impact of the company's acquisitions on Facebook's privacy program. EPIC had previously opposed Facebook's acquisition of WhatsApp and submitted detailed comments for the FTC's review of the merger remedy process. In March 2018, following the Cambridge Analytica breach, the FTC announced it was reopening the Facebook investigation, but still there is no announcement, no report, and no fine.

Categories: Privacy

EPIC, Coalition Warn Australian Bill Would Weaken Encryption

EPIC - Fri, 2018-10-12 18:25

EPIC and a coalition of civil society organizations told the Australian Parliament that pending legislation would weaken digital security and increase the risks to human rights. The proposal is one of several that promotes weak encryption for digital services. In 2016, Apple refused a demand by the FBI to redesign iPhones to enable law enforcement access. The FBI sued Apple, and EPIC filed an amicus brief in support of Apple, arguing that the FBI's demand "places at risk millions of cell phone users across the United States." The FBI eventually dropped the case.

Categories: Privacy

EPIC Files Appeal with D.C. Circuit, Seeks Release of 'Predictive Analytics Report'

EPIC - Fri, 2018-10-12 17:55

EPIC has appealed a federal district court decision for the release of a "Predictive Analytics Report." The district court backed the Department of Justice when the agency claimed the "presidential communications privilege." But neither the D.C. Circuit Court of Appeals nor the Supreme Court has ever permitted a federal agency to invoke that privilege in a FOIA case. EPIC sued the agency in 2017 to obtain records about "risk assessment" tools in the criminal justice system. These controversial techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. EPIC has pursued numerous FOIA cases concerning algorithmic transparency, passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The D.C. Circuit will likely hear EPIC's appeal next year.

Categories: Privacy
