
Privacy


EPIC to FTC: Google's Location Tracking Violates Consent Order

EPIC - Fri, 2018-08-17 16:15

Following a report that Google tracks user location even when users opt out, EPIC wrote to the FTC that Google violated the 2011 consent order. EPIC said "Google's subsequent changes to its policy, after it has already obtained location data on Internet users, fails to comply with the 2011 order." EPIC also told the FTC that "The Commission's inactions have made the Internet less safe and less secure for users and consumers." The 2011 settlement with Google followed a detailed complaint brought by EPIC and a coalition of consumer organizations. The groups charged that Google had engaged in unfair and deceptive trade practices when it changed the privacy settings of Gmail users and opted them into Google Buzz. The FTC agreed with the consumer groups, Google entered into a settlement, and Buzz was shuttered. FTC Chairman Jon Leibowitz said at the time, "When companies make privacy pledges, they need to honor them. This is a tough settlement that ensures that Google will honor its commitments to consumers and build strong privacy protections into all of its operations."

Categories: Privacy

Free Expression Activist and Poet Birgitta Jónsdóttir Joins EFF’s Advisory Board

EFF News - Fri, 2018-08-17 13:42

EFF is thrilled to welcome Birgitta Jónsdóttir as a Technical Advisor on our Advisory Board. The founder of Iceland’s Pirate Party and a former member of Iceland’s Parliament, Birgitta is a poet, artist, and free expression and digital rights activist who is one of the world’s most inspiring voices for the possibility of the Internet as a force for freedom.

Birgitta’s activism has been an inspiration to many, including EFF. In 2010, she worked with WikiLeaks to release a video of a U.S. helicopter gunning down a group of civilians and journalists in Baghdad. That put her on the radar screen of the U.S. Justice Department, which sought to obtain her Twitter account records in an investigation of WikiLeaks.

When Twitter notified Birgitta and others about the government request, EFF stepped in to ask a court to block the government from forcing Twitter to turn over Birgitta’s records. We sought to encourage other companies to follow Twitter’s lead and notify customers when law enforcement demands user data, which led to the creation of our annual “Who Has Your Back” report examining tech companies’ policies for protecting their users from the government.

In 2008, EFF co-founder John Perry Barlow proposed in a speech that Iceland become a “Switzerland of bits,” where data gathered by whistleblowers, bloggers, and journalists could be safely stored in the public domain and remain online. Barlow’s proposal helped lead Birgitta to start the International Modern Media Institute, better known as IMMI, a nonprofit that seeks to enhance and empower freedom of expression and speech, the dissemination of information, and publication within Iceland, as well as to ensure source and whistleblower protection. Iceland’s parliament unanimously voted in favor of the proposals in 2010; Birgitta is now part of a steering committee working to finalize all IMMI measures into law. In 2016, Fortune magazine named Birgitta one of the World’s Most Powerful Women.

Birgitta brings exceptional experience as a trailblazer on international issues concerning privacy, transparency, and free expression. We’re honored to have her as a technical advisor.

Related Cases: Government demands Twitter records of Birgitta Jonsdottir

Categories: Privacy

Techsplanations: Part 3, What is net neutrality?

CDT - Fri, 2018-08-17 10:20


In the last two posts, we talked about what the internet and the web are and how they work. In the next two posts we will look at the concept and principles of net neutrality and some of the ways to preserve them. As before, please refer to this glossary for quick reference to some of the key terms and concepts (in bold).

What is this “Net Neutrality” you speak of?

Net neutrality is the idea that the net (as in the internet) should be neutral towards the information crossing it and should not treat some traffic differently based on what kind of traffic it is, who sent it, or who will receive it. The concern is that access networks like ISPs will use their position (between you and everything else) and their ability to control how traffic flows into, out of, and across their networks to influence or control competition among providers of goods and services online. Because most of these providers connect to the internet, like you do, at the “edge” of an access network, they are sometimes called “edge providers.”

Acting as a gatekeeper, an ISP has the ability to pick and choose which content crosses its network, and at what speed and price. Because access networks are the only way for edge providers to reach customers over the internet, they have been called "terminating access monopolies." This is not because the ISP holds a monopoly from the customer's perspective; customers may have a choice of access providers. But from the perspective of an edge provider, there is only one way to reach each customer online: via whichever access network that customer uses. This position gives greater bargaining power to the network operator and puts edge providers at a disadvantage.

Ok, I get that ISPs can control traffic on their networks, but why do they care what edge providers do?

In addition to the ability to control how traffic flows across their networks, ISPs also have several incentives to do so. First, ISPs can create an additional source of income by charging edge providers for access to you. That is, rather than just charging you for the ability to access and retrieve information from the web via the ISP’s access network, the ISP could also charge web entities for delivering to you all the information you requested from them. Second, ISPs can charge edge providers higher prices for more favorable treatment of the traffic they send across the network. This practice, known as paid prioritization, lets some edge providers pay ISPs to move their traffic more quickly than others across the ISP’s network. If you want to learn more, check out one of our past posts on paid prioritization. Finally, many ISPs own or are affiliated with some edge providers. This creates an incentive to give those providers better treatment and to disadvantage their competitors.
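
To make paid prioritization concrete, here is a minimal sketch in Python of the difference between a neutral, first-come-first-served queue and one that lets paying senders jump the line. It illustrates the concept only; the class names and the "paid" flag are our inventions, not a model of any real ISP's equipment.

    import heapq
    from collections import deque

    class NeutralScheduler:
        """First in, first out: every packet waits its turn."""
        def __init__(self):
            self.queue = deque()

        def enqueue(self, packet):
            self.queue.append(packet)

        def dequeue(self):
            return self.queue.popleft() if self.queue else None

    class PaidPriorityScheduler:
        """Packets from paying senders are served before all others."""
        def __init__(self):
            self.heap = []
            self.arrivals = 0  # tie-breaker that preserves arrival order

        def enqueue(self, packet, paid=False):
            rank = 0 if paid else 1  # lower rank is served first
            heapq.heappush(self.heap, (rank, self.arrivals, packet))
            self.arrivals += 1

        def dequeue(self):
            return heapq.heappop(self.heap)[2] if self.heap else None

    # Three packets arrive in the order A, B, C; only B's sender has paid.
    neutral, prioritized = NeutralScheduler(), PaidPriorityScheduler()
    for packet, has_paid in [("A", False), ("B", True), ("C", False)]:
        neutral.enqueue(packet)
        prioritized.enqueue(packet, paid=has_paid)

    print([neutral.dequeue() for _ in range(3)])      # ['A', 'B', 'C']
    print([prioritized.dequeue() for _ in range(3)])  # ['B', 'A', 'C']

Under sustained load the effect compounds: packets whose senders haven't paid can wait indefinitely behind a steady stream of prioritized traffic, which is exactly the bargaining leverage over edge providers described above.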

Before we move on, a point of clarification: net neutrality relates to the practices of network operators, but not to the practices of edge providers. Therefore, only network operators can violate net neutrality principles. Although similar principles could apply to platforms and device manufacturers, it is important to think of (and treat) them separately because the differences between their positions on the network (edge versus gatekeeper) and the lack of choices for internet access (you’re lucky if you have a choice) give ISPs greater leverage to use discriminatory practices to their advantage.

From what I’ve heard, net neutrality is pretty popular. Wouldn’t competition among ISPs lead to them offering neutral network access?

In theory, maybe. It's true that the vast majority of people support the concept of net neutrality, regardless of political affiliation. However, there currently are not, and may never be, enough competing ISPs to prompt the largest ones to offer neutrality as a perk. This is partly due to the nature of network infrastructure and the cost of building it: networks are so big and so expensive that there is little incentive to build multiple, overlapping ones (especially if that means burying conduits or cables). In a high-density neighborhood, there might be enough potential customers living close enough together to support two or more competing networks, but in many places it just doesn't make sense to build a second or third network. Beyond the practical limitations on network deployment, the financial incentives inherent in an unregulated two-sided market are significant. In other words, if ISPs can charge both customers and edge providers for carrying traffic between them, they have more ways to make more money.

Not that there's anything wrong with making money. The problems start when ISPs use their position in ways that affect competition outside the market for internet access. This potential for market distortion is troubling when better-funded companies can buy an advantage over their competition, rather than competing on the merits of their offerings. A pay-to-play system helps cement established businesses in place and makes it very difficult for new businesses to compete. It is even more troubling when vertically integrated ISPs give their own affiliated edge offerings better treatment than other similar offerings.

So, net neutrality is about competition? Is that it?

Many of the concerns about the future of the web without neutrality center around competition policy, but there are other problems. For instance, ISPs have the ability to block access to specific websites or to otherwise censor the content or applications their subscribers may access. In the past, ISPs have prevented customers from using popular apps like FaceTime and have blocked certain peer-to-peer file transfer protocols. Although there may be fewer financial incentives for blocking, other incentives, such as a desire to influence political views, may exist. Regardless of motivation, the potential harms to free expression and competition online are sufficient to justify some kind of regulation.

Isn’t there already some kind of regulation?

There was. However, the FCC recently reversed its position on regulating ISPs and removed all the rules. Let's quickly cover the history of net neutrality regulation before we get to where we are now. Basically, the Federal Communications Commission (FCC) has been working on protecting consumers from undesirable practices of network operators for decades, and has been working on protecting internet openness and net neutrality since about 2005. That year, the Commission issued a Policy Statement with a set of four principles to "foster creation, adoption, and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition."

Between 2005 and 2010, the Commission incorporated those principles as conditions for telecom merger agreements. In 2010, the FCC lost a court case against Comcast because the Court found that the Commission had not based its action against Comcast on the right source of authority. So, later that year the Commission issued its first Open Internet Order, which relied on part of Title I of the Communications Act to turn the Policy Statement's principles into actual rules against blocking and unreasonable discrimination. Verizon sued the FCC and won because the Court found that the rules could only apply if ISPs were officially classified as "common carriers."

The Commission went back to the drawing board and quickly returned with a new regulatory proposal: classify ISPs as common carriers and then put rules against blocking, throttling, and unreasonable discrimination in place. The Commission adopted this approach in 2015 in its second Open Internet Order. Then the ISPs sued again, but this time they lost. The court found that the FCC had grounded its rules properly in its authority under Title II and upheld the Commission’s strongest net neutrality regulation to date. A few months later, we elected a new president and the balance of the FCC shifted in the ISPs’ favor. The Commission immediately began to unmake the rules it had finally succeeded in making. Which brings us to now.

The FCC has adopted its Restoring Internet Freedom Order, repealing all of the previously adopted rules and leaving the Federal Trade Commission in charge of protecting consumers from unscrupulous ISPs. CDT is among a sizeable group of petitioners currently suing the FCC to undo the repeal. At the same time, there is an effort in Congress to use the Congressional Review Act to reverse the repeal. So that’s where we are.

So that’s it? We had rules, now we’re throwing them out…the end?

No, not the end. ISPs are unlikely to voluntarily abide by the rules they worked so hard to get rid of, and net neutrality advocates are unlikely to stop fighting for effective regulations. In one venue or another, the fight will continue. In our next post, we will talk about some of the possible regulatory approaches we may see in the future and sift through some of the rhetoric around those approaches.

The post Techsplanations: Part 3, What is net neutrality? appeared first on Center for Democracy & Technology.

Categories: Privacy

Court Blocks EPIC's Efforts to Obtain "Predictive Analytics Report"

EPIC - Thu, 2018-08-16 11:00

A federal court in the District of Columbia has blocked EPIC's efforts to obtain a secret "Predictive Analytics Report" in a FOIA case against the Department of Justice. The court sided with the agency, which had withheld the report and claimed the "Presidential communications privilege." Neither the Supreme Court nor the D.C. Circuit has ever permitted a federal agency to invoke that privilege. EPIC sued the agency in 2017 to obtain records about "risk assessment" tools in the criminal justice system. These techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. Many criminal justice experts oppose their use. EPIC has pursued several FOIA cases to promote "algorithmic transparency," including cases concerning passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The case is EPIC v. DOJ (D.D.C. Aug. 14, 2018). EPIC is considering an appeal.

Categories: Privacy

EPIC, Consumer Groups Urge FTC to conclude Facebook Investigation

EPIC - Wed, 2018-08-15 20:20

EPIC and a coalition of consumer groups have asked the FTC to conclude the Facebook-Cambridge Analytica investigation by September 1, 2018. The groups said, “It is critical that the FTC conclude the Facebook matter, issue a significant fine, and ensure that the company upholds its privacy commitments to users.” Congress and the European Parliament have both conducted extensive hearings on the Cambridge Analytica matter. The U.K. Information Commissioner’s Office conducted an extensive investigation, published a substantial report, and issued a significant fine in July. The FTC announced in March that it would reopen the Facebook investigation.

Categories: Privacy

EPIC FOIA: EPIC Obtains DOD Inspector General Audits of Hotline Allegations

EPIC - Wed, 2018-08-15 17:30

Through a Freedom of Information Act request, EPIC has obtained the Department of Defense Inspector General's report on an audit of hotline allegations involving improper use of agency funds for foreign counterintelligence billets. The report found that the Defense Intelligence Agency followed proper appropriation authorities but did not ensure proper function and management of the program. The Inspector General found that "employees were performing duties not aligned with their position descriptions and funding." In a 2012 FOIA case, EPIC v. CIA, EPIC uncovered an Inspector General's report which revealed that the CIA, in collaboration with the NYPD, conducted domestic surveillance of mosques, Muslim student groups, and Muslim stores and businesses. EPIC continues to pursue the release of government documents to improve oversight and accountability through litigation and EPIC's Open Government Project.

Categories: Privacy

Google Needs To Come Clean About Its Chinese Plans

EFF News - Wed, 2018-08-15 15:23

Eight years after Google initially took a stand against Internet censorship by exiting the Chinese search market, we are disappointed to learn the company has been secretly reconsidering an extended collaboration with that massive censorship- and surveillance-wielding state. According to an Intercept report released at the beginning of the month, Google is working on a censored version of its search service for release in China.

In 2010, EFF and many other organizations praised Google for refusing to sacrifice the company's values for access to the Chinese market. That move followed public backlash and a series of attacks on Google's infrastructure that targeted the personal data of prominent Chinese human rights activists. Google's departure from China showed that strong core values in fundamental human rights could beat out short-term economic gain in the calculus of an Internet company.

But now it seems the company has reversed course.

This news comes amid other reports of American tech giants compromising their values to enter or remain within China: Facebook has piloted a censored version of its own platform, and Apple recently faced criticism for moving its customers' data onto China-hosted servers and for adding code to filter the Taiwanese flag emoji in Chinese locales.

Within China, Google's direct competitor, Baidu, has been facing significant social, regulatory, and economic backlash over practices such as monetizing questionable medical advertisements, heavily deprioritizing non-Baidu services, and allegedly promoting phishing sites. There may well be a growing demand for competition within the Chinese search engine market.

In even considering these changes, Google needs to tread carefully. In the wake of the last wave of engagement with the Chinese market, and to prevent Internet companies from being complicit in human rights violations, the company joined with Microsoft and Yahoo! to create a set of standards for working in countries with poor human rights records: the Global Network Initiative's Implementation Guidelines. EFF was a founding member of the GNI, but subsequently left the coalition in 2013 due to concerns that the companies were unable to be forthcoming about their involvement in state surveillance, even within a confidential environment.

From the outside, it's unclear to us whether this project has yet been considered in the light of that agreement. GNI's Executive Director has told reporters, in part, that "All member companies are expected to implement the GNI Principles wherever they operate, and are subject to independent assessment, which is overseen by our multi-stakeholder Board of Directors." It would reassure Google's own staff and external critics to be told that this process is being followed, and it would help if both the GNI and Google were more public about its results.

But for now, it seems the company has opted to prepare new Chinese plans outside the view of the public, and even behind the backs of many of their own employees.

From 2006 to 2018: Both Google and China are more powerful than ever

Our original concerns from 2006 still stand today, but in 2018, the potential for damage when large tech companies co-operate with repressive states has grown.

Since 2006, Google’s capabilities have expanded massively. We live in an era in which Google-owned tracking scripts are present on an incredible 75% of the top million websites. Google’s personalized profiles of its users across several online services help it provide “relevant” search results and advertisements.

Simultaneously, in order to sustain its strict censorship regime, the Chinese government has had to implement broad and pervasive surveillance laws and technology. In particular, the explosive dominance of centralized applications like Weibo and WeChat, whose communications and transactions are regularly surveilled and censored, has transformed the digital landscape in China.

2017 in particular saw a new wave of regulatory crackdowns aimed at strengthening digital surveillance practices across the Chinese Internet. The government began restricting tools used for anonymity and privacy by arresting local VPN providers, banning end-to-end encrypted chat applications like WhatsApp, and mandating that Internet platforms require offline identity verification. In certain regions of China, citizens merely attempting to use foreign or encrypted applications like WhatsApp or Telegram can have their service cut off and be asked to report to the police.

It’s not clear how or whether Google’s planned offerings will comply with these new national regulations, or whether exemptions would be worked out for the tech giant.

At this early stage, it’s this lack of transparency that concerns us most.

What happened to transparency within Google?

Google once prided itself on its internal organizational transparency, especially when compared to giants like Apple, famous for its secrets veiled in black cloth. However, as we saw with Project Maven, Google's controversial AI contract with the Department of Defense, executives within the organization are willing to keep projects quiet in the face of potential backlash. That initiative was not publicized, and came to light only when employees noticed it and brought it to the forefront of internal discussion forums.

Unlike in 2006, when Google was open (and even apologetic) about the quality and nature of its service in China, this new iteration was developed with little external or internal visibility. The Intercept reports that knowledge about Google's China project was "restricted to just a few hundred members of the Internet giant's 88,000-strong workforce." Though Project Maven was not publicized, information about it was at least available to employees. This time, the vast majority of employees discovered the existence of the China project only after the plans were leaked to the press.

That means certain questions remain unanswered, not just publicly, but even among Google's own staff. What sacrifices will Google make to its own operating practices in order to enter the Chinese market? Will it have to comply with China's strict internal regulations, and how will those compromises affect its offerings outside of China?

The public, Google’s users, and Google’s employees have been kept increasingly in the dark about compromises on the company’s own values that could massively affect the lives of not only citizens within China or the U.S., but also Internet users around the world. Google has already committed to processes that consider human rights when entering new markets in the Global Network Initiative. Is it following them?

Google is an effective gatekeeper of the Internet for a large majority of the world. It’s the portal through which many access the Internet, and through which Google itself continues to collect troves of information about these users across a variety of platforms. With that kind of responsibility, everyone — inside and outside Google — needs to stay vigilant and continue to hold the giant accountable. Avoiding internal oversight and criticism will not evade the backlash that will come from launching a complicit service, or the damaging consequences to Chinese users when Google’s compromises are used against them. It is better to have this debate now, in public, than to pick up the pieces when the damage has been done.

Categories: Privacy

Telling the Truth About Defects in Technology Should Never, Ever, Ever Be Illegal. EVER.

EFF News - Wed, 2018-08-15 15:16

Congress has never made a law saying, "Corporations should get to decide who gets to publish truthful information about defects in their products" — and the First Amendment wouldn't allow such a law — but that hasn't stopped corporations from conjuring one out of thin air, and then defending it as though it were a natural right they'd had all along.

Some background: in 1986, Ronald Reagan, spooked by the Matthew Broderick movie WarGames (true story!), worked with Congress to pass a sweeping cybercrime bill called the Computer Fraud and Abuse Act (CFAA) that was exceedingly sloppily drafted. The CFAA makes it a felony to "exceed[] authorized access" on someone else's computer in many instances.

Fast forward to 1998, when Bill Clinton and his Congress enacted the Digital Millennium Copyright Act (DMCA), a giant, gnarly hairball of digital copyright law that included section 1201, which bans bypassing any "technological measure" that "effectively controls access" to copyrighted works, or "traffic[ing]" in devices or services that bypass digital locks.

Notice that neither of these laws bans disclosure of defects, including security disclosures! But decades later, corporate lawyers and federal prosecutors have constructed a body of legal precedents that twist these overbroad laws into a rule that effectively gives corporations the power to decide who gets to tell the truth about flaws and bugs in their products.

Businesses and prosecutors have brought civil and criminal actions against researchers and whistleblowers who violated a company's terms of service in the process of discovering a defect. The argument goes like this: "Our terms of service ban probing our system for security defects. When you log in to our server for that purpose, you 'exceed your authorization,' and that violates the Computer Fraud and Abuse Act."

Likewise, businesses and prosecutors have used Section 1201 of the DMCA to attack researchers who exposed defects in software and hardware. Here's how that argument goes: "We designed our products with a lock that you have to get around to discover the defects in our software. Since our software is copyrighted, that lock is an 'access control for a copyrighted work' and that means that your research is prohibited, and any publication you make explaining how to replicate your findings is illegal speech, because helping other people get around our locks is 'trafficking.'"

The First Amendment would certainly not allow Congress to enact a law that banned making true, technical disclosures. Even (especially!) if those disclosures revealed security defects that the public needed to be aware of before deciding whether to trust a product or service.

But the presence of these laws has convinced the tech industry — and corporations that have added 'smart' tech to their otherwise 'dumb' products — that it's only natural for them to be the sole custodians of the authority to decide who may embarrass or inconvenience them. The worst of these actors use threats of invoking the CFAA and DMCA 1201 to silence researchers altogether, so the first time you discover that you've been trusting a defective product is when it is so widely exploited by criminals and grifters that it's impossible to keep the problem from becoming widely known.

Even the best, most responsible corporate actors get this wrong. Tech companies like Mozilla, Dropbox and, most recently, Tesla, have crafted "coordinated disclosure" policies in which they make sincere and legally enforceable promises to take security disclosures seriously and act on them within a defined period, and they even promise not to use laws like DMCA 1201 to retaliate against security researchers who follow their guidelines.

This is a great start, but it's a late and limited solution to a much bigger problem.

The point is that almost every company is a "tech company" — from medical implant vendors to voting machine companies — and not all of them are as upstanding and public-spirited as Mozilla.

Many of these companies do have "coordinated disclosure" policies by which they hope to tempt security researchers into coming to them first when they discover problems with their products and services. But these companies don't make these policies out of the goodness of their hearts: those policies exist because they're the companies' best hope of keeping security researchers from embarrassing them and leaving them scrambling by just publishing the bug without warning.

If corporations can simply silence researchers who don't play ball, we should expect them to do so. There is no shortage of CEOs who are lulling themselves to sleep tonight with fantasies about getting to shut their critics up.

EFF is currently suing the US government to invalidate DMCA 1201 and the ACLU is trying to chip away at CFAA, and there will come a day when we succeed, because the idea of suppressing bug reports (even ones made in disrespectful or rude ways) is totally incompatible with the First Amendment.

Rather than crafting a disclosure policy that says "We'll stay away from these unjust and absurd interpretations of these badly written laws, provided you only tell the truth in ways we approve of," companies that want to lead by example could do so by putting something like this in their disclosure policies:

We believe that conveying truthful warnings about defects in systems is always legal. Of course, we have a strong preference for you to use our disclosure system [LINK] where we promise to investigate your bugs and fix them in a timely manner. But we don't believe we have the right to force you to use our system.

Accordingly, we promise to NEVER invoke any statutory right — for example, rights we are granted under trade secret law, anti-hacking law, or anti-circumvention law — against ANYONE who makes a truthful disclosure about a defect in one of our products or services, regardless of the manner of that disclosure.

We really do think that the best way to keep our customers safe and our products bug-free is to enter into a cooperative relationship with security researchers and that's why our disclosure system exists and we really hope you'll use it, but we don't think we should have the right to force you to use it.

Companies should not rely on these laws to silence security researchers who displease them with the time and manner of their truthful disclosures — if their threats ever materialize into full-blown lawsuits, there's a reasonable chance that they'll find themselves facing down public-spirited litigators (ahem) who will use those suits as a fast-track to overturning these laws in the courts.

But while we wait for the slow wheels of justice to turn, the specter of legal retaliation haunts the best and most public-spirited security researchers (the researchers who work for cyber-criminals and state surveillance contractors don't have to worry about these laws, because they never make their findings public). That is bad for all of us, because for every Tesla, Dropbox and Mozilla, there are a thousand puny tyrants who are using these good-citizen companies' backhanded insistence that disclosure should be subject to their corporate approval to intimidate their own critics into silence.

Those intimidated researchers? They've discovered true facts about why we shouldn't trust systems with our data, our finances, our personal communications, the security of our homes and businesses, and even our lives.

EFF has sued the US government to overturn DMCA 1201 and we just asked the US Copyright Office to reassure security researchers that DMCA 1201 does not prevent them from telling the truth.

We're discussing all this in a Reddit AMA next Tuesday, August 21, from 12-3PM Pacific (3-6PM Eastern). We hope you'll come and join us.

Related Cases: Green v. U.S. Department of Justice

Categories: Privacy

EPIC Urges Senate Committee to Press FCC on Privacy

EPIC - Wed, 2018-08-15 15:05

EPIC has sent a statement to the Senate Commerce Committee for a hearing on the Federal Communications Commission. EPIC urged the Committee to push the FCC to protect online privacy. EPIC also asked the Committee to press the FCC to repeal a regulation that requires the retention of telephone customer records for 18 months. EPIC filed the petition urging repeal of this mandate more than two years ago. Every comment received by the FCC favored the EPIC petition. EPIC has submitted multiple comments to the FCC to strengthen online privacy and has recommended an industry-neutral and comprehensive privacy framework.

Categories: Privacy

Help Send EFF to SXSW 2019

EFF News - Tue, 2018-08-14 19:30

Want to see the Electronic Frontier Foundation at the annual SXSW conference and festival in 2019? Help us get there by voting for our panels in the SXSW Panel Picker!

Every year, the Internet has a chance to choose what panels will be featured at the event. We’re asking friends and fans to take a moment to vote for us.

Here's how you can help EFF:

  1. Visit the Panel Picker site and log in or register for a new account.
  2. Click each of the links below.
  3. Click the “Vote up” button on the left of the page, next to the panel description.
  4. Share this blog post!
    Suggested tweet: Help @EFF get to SXSW! You can vote in SXSW's Panel Picker: https://www.eff.org/deeplinks/2018/08/help-send-eff-sxsw-2019

Here are the panels with EFF staff members—please upvote!

With four exciting panel proposals on subjects ranging from combating misinformation on the web to whether science fiction is doing a good job of talking about AI, you can help us keep SXSW an incubator of cutting-edge technologies and digital creativity, and a place where experts discuss what those technologies mean for digital rights.

Here is more info on the panels we’re hoping to join:

8-Bit Policies in a 4K World: Adapting Law to Tech

The speed at which technology is developing is unprecedented in our history, yet politicians are as jammed up and at loggerheads as ever. The Senate hearing with Mark Zuckerberg revealed how little our political leaders actually understand about what's going on, but we're still bound by the decisions they make regarding the technology we use on a daily basis. SOPA, PIPA, and the FCC's vote against Net Neutrality are specific instances of politicians being at odds with public opinion. Technology enthusiasts feel a constant struggle to stem the tide of harmful legislation, and many may be left wondering: where is this going?

Speakers:

  • Alex Shahrestani, Board Member, EFF-Austin, Digital Arts Coalition 
  • Shahid Buttar, Director of Grassroots Advocacy, Electronic Frontier Foundation
  • Jan Gerlach, Public Policy Manager, Wikimedia Foundation

Join us as we discuss how to engage with our representatives and help them craft flexible policies that address the ever-changing tech landscape.

Fighting Misinformation and Defending the Open Web

The spread of misinformation is becoming an increasing problem in countries around the world. Particularly during election times, social media platforms have been used strategically to influence public opinion – from the Philippines to Kenya, from Germany to the USA. The lack of net neutrality and the dominance of platforms like Facebook, with its zero-rating services, are making this a growing problem for democracy.

Internet activists from Africa, Europe and the USA will give insights into different government attempts to introduce new legislation combating the spread of misinformation as well as civil society strategies to defend freedom of speech and promote access to pluralistic information sources.

Speakers:

  • Geraldine de Bastion, Founder / International Executive Director, Global Innovation Gathering
  • Nanjira Sambuli, Consultant, Web Foundation
  • Markus Beckedahl, Founder, Netzpolitik
  • Jillian York, Director for International Freedom of Expression, EFF

Beyond the Surveillance Business Model: Why & How

It’s time to talk about the future – how technology developers and companies can successfully move beyond the surveillance business model.

Trump, Cambridge Analytica, and the growing scope of cybersecurity crises have been a wake-up call to the public, tech employees, and investors about the high price of the collect-it-all business model and the grave impact it can have on society. New comprehensive European and California privacy laws have changed the landscape and the risks for surveillance business models.

Get the inside track from Silicon Valley journalist and author Brad Stone, DuckDuckGo Founder and CEO Gabriel Weinberg, EFF’s Executive Director Cindy Cohn, and the ACLU’s Nicole Ozer on why and how to build a successful business model beyond surveillance.

Speakers:

  • Nicole Ozer, Technology & Civil Liberties Director, ACLU of California
  • Gabriel Weinberg, Founder and CEO, DuckDuckGo
  • Brad Stone, Senior Executive Editor, Bloomberg Technology
  • Cindy Cohn, Executive Director, Electronic Frontier Foundation

Untold AI: Is Sci-Fi Telling Us the Right Stories?

How do depictions of Artificial Intelligence in popular science fiction affect how we think about real AI and its future? How has fiction about AI influenced the development of AI technology and policy in the real world? (And do we really have to talk about Terminator’s Skynet or 2001’s HAL 9000 every damned time we talk about the risks of AI?) Join bestselling sci-fi authors Cory Doctorow and Malka Older, scifiinterfaces.com editor Chris Noessel, along with futurism and AI policy experts as they examine what TV, movies, games, and sci-fi literature are telling us about AI, compare those lessons to real-world AI tech & policy, and identify the stories that we should be telling ourselves about AI, but aren't.

Speakers:

  • Christopher Noessel, Designer, IBM
  • Cory Doctorow, Apollo 1201, Electronic Frontier Foundation
  • Malka Older, Author, Self-employed
  • Rashida Richardson, Director of Policy Research, AI Now Institute

Thanks for your help!

Categories: Privacy

D.C. Circuit Announces Panel in EPIC v. IRS, FOIA Case for Trump's Tax Returns

EPIC - Tue, 2018-08-14 18:05

The D.C. Circuit has announced the three-judge panel that will decide EPIC v. IRS, EPIC's Freedom of Information Act case to obtain public release of President Trump's tax returns. Arguments will be held in the case on Thursday, September 13, 2018 before Judge Karen LeCraft Henderson, Judge Patricia A. Millett, and Judge Harry T. Edwards. EPIC has argued that the IRS has the authority to disclose the President's returns to correct numerous misstatements of fact concerning his financial ties to Russia. For example, President Trump tweeted that "Russia has never tried to use leverage over me. I HAVE NOTHING TO DO WITH RUSSIA - NO DEALS, NO LOANS, NO NOTHING"—a claim "plainly contradicted by his own attorneys, family members, and business partners." As EPIC told the Court, "there has never been a more compelling FOIA request presented to the IRS." A broad majority of the American public favors the release of the President's tax returns. EPIC v. IRS is one of several FOIA cases EPIC has pursued concerning Russian interference in the 2016 Presidential election, including EPIC v. FBI (response to Russian cyber attack) and EPIC v. DHS (election cybersecurity).

Categories: Privacy

EPIC Comments on Second Annual Privacy Shield Review

EPIC - Tue, 2018-08-14 17:50

EPIC provided comments to the European Commission to inform the second annual review of the EU-U.S. Privacy Shield, a framework that permits the processing of the personal data of Europeans in the United States. EPIC detailed the latest privacy developments in the U.S., including the extension of Fourth Amendment protection to cell phone location data in Carpenter v. United States, passage of the CLOUD Act, the FTC's failure to enforce its legal judgment against Facebook, the vacancies at the PCLOB, the absence of a Privacy Shield Ombudsman at the Commerce Department, and the nomination of Judge Brett Kavanaugh to the Supreme Court. The Commission approved Privacy Shield last year, but sought additional steps by the United States. The European Parliament has called for suspension of the pact if the U.S. does not fully comply by September 1st. The European Commission will make a final determination this fall.

Categories: Privacy

Following EPIC Comments, FTC Strengthens Safeguards for Kids' Data in Gaming Industry

EPIC - Tue, 2018-08-14 16:45

The FTC has unanimously voted to approve EPIC’s recommendations to strengthen safeguards for children's data in the gaming industry. In a 5-0 vote, the FTC adopted EPIC's proposals to revise the Entertainment Software Rating Board's industry rules to (1) extend children's privacy protections in COPPA to all users worldwide; and (2) implement privacy safeguards for the collection of data "rendered anonymous." The FTC wrote, "the Commission agrees with EPIC's comment. As COPPA's protections are not limited only to U.S. residents, the definition of 'child' in the ESRB program has been revised to remove the limitation." The Commission also strengthened protections for de-identified children's data: "companies must provide notice and obtain verifiable parental consent if personal information is collected, even if it is later anonymized." EPIC has testified several times before Congress on protecting children's data and supported the 2013 updates to COPPA.

Categories: Privacy

How Militaries Should Plan for AI

EFF News - Tue, 2018-08-14 13:51

Today we are publishing a new EFF white paper, The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI. This paper analyzes the risks and implications of military AI projects in the wake of Google's decision to discontinue AI assistance to the US military's drone program and adopt AI ethics principles that preclude many forms of military work.

The key audiences for this paper are military planners and defense contractors, who may find the objections to military uses of AI from Google's employees and others in Silicon Valley hard to understand. Hoping to bridge the gap, we urge our key audiences to consider several guiding questions. What are the major technical and strategic risks of applying current machine learning methods in weapons systems or military command and control? What are the appropriate responses that states and militaries can adopt in response? What kinds of AI are safe for military use, and what kinds aren't?

We are at a critical juncture. Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation. They also lack the basic forms of common sense and judgment on which humans usually rely.[1]

Militaries must make sure they don't buy into the machine learning hype while missing the warning label. There's much to be done with machine learning, but plenty of reasons to keep it away from things like target selection, fire control, and most command, control, and intelligence (C2I) roles in the near future, and perhaps beyond that too.

The U.S. Department of Defense and its counterparts have an opportunity to show leadership and move AI technologies in a direction that improves our odds of security, peace, and stability in the long run—or they could quickly push us in the opposite direction. We hope this white paper will help them chart the former course.

Part I identifies how military use of AI could create unexpected dangers and risks, laying out four major dangers:

  • Machine learning systems can be easily fooled or subverted: neural networks are vulnerable to a range of novel attacks, including adversarial examples (a toy version is sketched after this list), model stealing, and data poisoning. Until these attacks are better understood and defended against, militaries should avoid ML applications that are exposed to input (either direct input or anticipatable indirect input) by their adversaries.
  • The current balance of power in cybersecurity significantly favors attackers over defenders. Until that changes, AI applications will necessarily be running on insecure platforms, and this is a grave concern for command, control, and intelligence (C2I), as well as autonomous and partially autonomous weapons.
  • Many of the most dramatic and hyped recent AI accomplishments have come from the field of reinforcement learning (RL), but current state-of-the-art RL systems are particularly unpredictable, hard to control, and unsuited to complex real-world deployment.
  • The greatest risk posed by military applications of AI, increasingly autonomous weapons, and algorithmic C2I is that the interactions between the deployed systems will be extremely complex, impossible to model, and subject to catastrophic forms of failure that are hard to mitigate. This is true of the systems a single military deploys over time and, even more importantly, of interactions between the systems of opposing nations. As a result, there is a serious risk of accidental conflict, or accidental escalation of conflict, if ML or algorithmic automation is used in these kinds of military applications.
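
To give a flavor of the first risk in the list above, here is a minimal sketch (ours, not the white paper's) of an adversarial example against a toy linear classifier, written in Python with NumPy. Real attacks target deep neural networks, but the principle is the same: a small, targeted perturbation of the input flips the model's output.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear classifier: label = sign(w . x + b).
    w = rng.normal(size=50)
    b = 0.1
    x = rng.normal(size=50)  # an arbitrary "clean" input

    def predict(v):
        return np.sign(w @ v + b)

    # Fast-gradient-sign-style attack: nudge every input coordinate a
    # small step against the model's score. For a linear model the
    # gradient of the score with respect to x is just w, and a uniform
    # step of size epsilon shifts the score by epsilon * sum(|w|), so
    # any epsilon past |score| / sum(|w|) is enough to flip the label.
    score = w @ x + b
    label = np.sign(score)
    epsilon = 1.1 * abs(score) / np.sum(np.abs(w))
    x_adv = x - epsilon * label * np.sign(w)

    print("clean prediction:      ", predict(x))
    print("adversarial prediction:", predict(x_adv))  # flipped
    print("per-coordinate change: ", epsilon)         # small

Against a deep network, the perturbation direction comes from the loss gradient computed by backpropagation rather than from w directly, but the lesson carries over: any model exposed to adversary-controlled input inherits this attack surface.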

Part II offers and elaborates on an agenda for mitigating these risks:

  • Support and establish international institutions and agreements for managing AI, and AI-related risks, in military contexts.
  • Focus on machine learning applications that lie outside of the "kill chain," including logistics, system diagnostics and repair, and defensive cybersecurity.
  • Focus R&D effort on increasing the predictability, robustness, and safety of ML systems.
  • Share predictability and safety research with the wider academic and civilian research community.
  • Focus on defensive cybersecurity (including fixing vulnerabilities in widespread platforms and civilian infrastructure) as a major strategic objective, since the security of hardware and software platforms is a precondition for many military uses of AI. The national security community has a key role to play in changing the balance between cyber offense and defense.
  • Engage in military-to-military dialogue, and pursue memoranda of understanding and other instruments, agreements, or treaties to prevent the risks of accidental conflict, and accidental escalation, that increasing automation of weapons systems and C2I would inherently create.

Finally, Part III provides strategic questions to consider in the future that are intended to help the defense community contribute to building safe and controllable AI systems, rather than making vulnerable systems and processes that we may regret in decades to come.

Read the full white paper as a PDF or on the Web.

Categories: Privacy

International Privacy Experts Adopt Recommendations for Cross-Border Law Enforcement Requests for Data

EPIC - Tue, 2018-08-14 12:05

The International Working Group on Data Protection in Telecommunications has adopted new recommendations to protect individual rights during criminal cross-border law enforcement. The Berlin-based Working Group includes Data Protection Authorities and experts who assess emerging privacy challenges. The Working Group on Data Protection calls on governments and international organisations to ensure law enforcement requests accord with international human rights norms. The Working Group recommends specific safeguards for data protection and privacy, including accountability, procedural fairness, notice and an opportunity to challenge. EPIC addressed similar issues in an amicus brief for the US Supreme Court in the Microsoft case. EPIC and a coalition of civil society organizations recently urged the Council of Europe to protect human rights in the proposed revision to the Convention on Cybercrime. In April 2017, EPIC hosted the 61st meeting of the IWG in Washington, D.C. at the Goethe-Institut, Germany's cultural institute.

Categories: Privacy

Two More Nominees for Intelligence Oversight Board

EPIC - Mon, 2018-08-13 13:00

The White House announced the nomination of two board members to the Privacy and Civil Liberties Oversight Board (PCLOB). Travis LeBlanc is a partner at Boies Schiller and a former Federal Communications Commission Enforcement Bureau Chief. Aditya Bamzai is a law professor at the University of Virginia and a former Department of Justice attorney. The intelligence oversight body has been unable to act due to long-term vacancies. The European Parliament has called for suspension of the Privacy Shield if the U.S. does not improve data protection and restore the PCLOB. Three other members have been nominated but have yet to be confirmed. EPIC opposed the nomination of Adam Klein to serve as Chairman of the Board. EPIC previously testified before the PCLOB, made recommendations for the PCLOB's handling of FOIA requests, and set out a broad agenda for the work of the independent agency. EPIC previously sought public release of the PCLOB report on Executive Order 12333.

Categories: Privacy

Kavanaugh White House Counsel: PATRIOT Act, "measured, careful, responsible, and constitutional approach"

EPIC - Sat, 2018-08-11 12:39

On Thursday, the Senate Judiciary Committee released the first production of records for Supreme Court nominee Brett M. Kavanaugh from his time as associate counsel to President George W. Bush. Roughly 5,700 pages of documents were made available to the public. The documents show that Kavanaugh assisted in the effort to pass the PATRIOT Act and drafted a statement that President Bush incorporated into his remarks at the bill signing. Kavanaugh wrote that the PATRIOT Act would “update laws authorizing government surveillance,” which he claimed, and President Bush then restated, were from an era of “rotary phones.” In fact, the PATRIOT Act weakened numerous U.S. privacy laws, including the subscriber privacy provisions in the Cable Act and the email safeguards in the Electronic Communications Privacy Act. Both laws were enacted after the era of rotary phones. Congress amended the Foreign Intelligence Surveillance Act after it was revealed that the White House had authorized warrantless wiretapping of Americans beginning in 2002. In an email exchange, Kavanaugh wrote that the PATRIOT Act was a "measured, careful, responsible, and constitutional approach . . . .” EPIC recently submitted two urgent Freedom of Information Act requests for Judge Kavanaugh’s records during his time serving as Staff Secretary for President Bush.

Categories: Privacy
