Nine reasons you don’t have “nothing to hide”

By Sean Rintel. This article (with different images) was originally published on 11th June 2013 in The Conversation.

In the wake of former CIA employee Edward Snowden’s revelations of the PRISM NSA mass surveillance, people are once again asking why the general public should care if they’ve got nothing to hide.

“Nothing to hide” hides a lot behind an absolutist gloss. It puts the focus on the individual rather than on the real problem: a society-wide loss of data control at many levels.

Is this a fair question? Not really. Below, I give nine reasons why we must care – regardless of our innocent intentions.

1) Presumption of guilt

Mass surveillance and data retention overturn the foundation of the modern legal system: the presumption of innocence. Not only is the presumption lost for gathering evidence, it also weakens the effect of that presumption throughout the rest of the legal process.

If a weakened presumption of innocence becomes normalised in the public consciousness, we have compromised the effectiveness of our legal system.

2) The loss of personal data control

Mass surveillance circumvents our right to personal data control, also known as informational self-determination. As the late Professor of Public Law Alan F. Westin put it in his 1970 book, Privacy and Freedom:

The right of the individual to decide what information about himself [sic] should be communicated to others and under what circumstances.

We have envelopes for our letters and curtains on our windows not because we’re doing something wrong but because we are choosing how (or whether) to share that business. Governments and security organisations should have no part in that choice without a specific, targeted and legally warranted reason.

3) Transferring power to security organisations

Allowing security organisations to have far-reaching capabilities without strict oversight effectively transfers power from governments to the security organisations themselves.

The power of voting for elected officials is weakened if security organisations make choices based on securing their own position rather than the interests of the country.

Image: topgold

Vladimir Putin is reported to be finding the siloviki (the “men of power” from state security) who helped build his regime more demanding now than in the past. Such transfers of power are not limited to a shadowy few in a far-off land, nor confined to the highest level.

In this kind of climate, the power to invoke – or even just threaten – a search based on mass surveillance can devolve even to front-line law enforcement.

4) False positives

Anyone searching for information on “topics of concern” to security agencies, for legitimate reasons (such as researchers, journalists, students) or even personal curiosity, could be falsely identified as a person of interest in an investigation.

As security technologist and author Bruce Schneier argued in a guest blog post last year, this is one of the fundamental problems of profiling.

The ramifications for the individual might range from inclusion on no-fly lists and denial of access to some jobs through to false arrest.

5) Changing definitions of issues of concern

What counts as a problematic topic in the eyes of security organisations changes over time, especially in the wake of an incident. We are all still taking off our shoes at many airports because of one “shoe bomber”, Richard Reid, in 2001.

When something as seemingly benign as shoes is suddenly linked to security concerns, the potential for large retrospective data sweeps – as well as having shoe-related topics then included in future sweeps – increases, with concurrent increases in the possibility of embarrassing and/or gravely serious mistakes.

6) Political corruption

The potential exists for the government of the day to request detailed information that falls well outside the scope of legality. Watergate is the classic example of data-gathering about political adversaries, but compared to the potential corruption made possible by mass surveillance, that was a drop in the ocean.

Mass surveillance could be directed not only at direct political adversaries but also their official supporters and those who might fall into a demographic of potential support.

7) Personal abuse of power

While most security agents work within the law, there are occasions when they abuse their power. London’s Metropolitan Police were found to be complicit in the News of the World hacking scandal and, as ABC journalist Nick Ross noted in an article last September, many small-scale examples of abuse of power are captured on the news website Reddit.

Communication data gathered for abusive private purposes could include emails, texts and pictures, used for revenge, extortion or prurience.

Image: Free Press Pics

8) Honeypots

Large collections of telecommunications data – be it the content or the metadata – attract hackers. Unfortunately, governments and their sub-contractors have a poor track record of safeguarding such data.

Even without blunders, the data can be stolen or individuals with direct access can be manipulated to hand over this information through social engineering, bribery, or coercion.

9) Big data and the problem of patterns

The entire premise of “big data” – large and complex sets of computer data – is to find patterns from aggregates. While you may feel that, post-by-Facebook-post, you have “nothing to hide”, mass surveillance creates the possibility of finding patterns that catch the interest of security organisations.

Such patterns risk sweeping up the innocent with the guilty. Worse, such aggregations make it possible not just to find patterns but to “create” them, framing the innocent as potentially guilty.

Everything to lose

As security expert Bruce Schneier wrote for Wired in 2006 – and it is even more true today – we must not “accept the premise that privacy is about hiding a wrong”.

The issue with the NSA PRISM program, and other such programs around the world, is not that we have “nothing to hide” – it’s that we have everything to lose.

Sean Rintel is a life member and former Chair of Electronic Frontiers Australia.



How far from 1984?

The ABC has done a brilliant radio program about surveillance and how, in so many ways, we are close to a George Orwell-esque situation.

Check it out here




Posted in Privacy, Surveillance

New intelligence powers: what’s included

Last week, Attorney-General George Brandis introduced an omnibus bill to the Senate that seeks to update the legislation governing the operations of Australia’s intelligence services.

Three weeks ago, we reviewed the proposals that were expected to be included in this tranche of legislation. The more controversial of these include provisions for “disrupting” target computers in the course of executing a computer access warrant and allowing access via a third-party computer.

Now that the legislation has been tabled we can see exactly how the government intends to apply the PJCIS recommendations.

Image: Commonwealth of Australia

Firstly, the good news

EFA is pleased that the government has reversed its decision to abolish the Independent National Security Legislation Monitor, a role which provides an important holistic overview of the web of interconnecting legislation in this area. EFA is also pleased that the Attorney-General has referred this legislation to the Parliamentary Joint Committee on Intelligence & Security (PJCIS) for review before it is considered by the parliament in full.

These are both small but significant wins in terms of ensuring proper oversight of these significant proposed changes.

Disruption of target computers

The standards that ASIO must meet when executing a computer access warrant have been lowered. Under existing legislation there is effectively a blanket ban on activities that would impact on lawful users. This has been changed in two ways:

• If some action “is necessary to do one or more of the things specified in the warrant”, provided there is no “material loss or damage” then the impact on lawful users is now completely disregarded.
• Interfering with, interrupting or obstructing a “communication in transit” is now explicitly permitted.

It is concerning that the rights of lawful users are discarded so easily. Under this proposed amendment, if a website was being used to conduct illegal activities then an attempt to disrupt its server could potentially affect thousands of legitimate organisations and ordinary people. This sort of action should only be used as a last resort and when the seriousness of the threat warrants that level of surveillance.

We’ve already seen the effects of such action, when ASIC last year inadvertently blocked tens of thousands of legitimate websites when they meant to target only one or two. While the technical incompetence shown by ASIC in this case (they failed to understand that a very large proportion of websites exist on shared servers – with the same IP address), is unlikely to be paralleled within the intelligence services, this is a telling example of the sorts of collateral damage that can occur. It should be noted that ASIC’s actions were taken under section 313 of the Telecommunications Act, which is in dire need of review. We are therefore very pleased that Communications Minister Malcolm Turnbull has recently announced an inquiry into this use of that section of the Act.

The Inspector General of Intelligence and Security (IGIS), the body responsible for overseeing the activities of intelligence agencies, also expressed support for ensuring the impact on unrelated persons resulting from any use of this power is minimised.

Access to third-party computers

A computer targeted in a computer access warrant may now include “a computer associated with, used by or likely to be used by, a person (whose identity may or may not be known).”

This gives ASIO the right, with a suitable warrant, to break into computers belonging to innocent people in order to obtain covert access to a target.

This opens a potentially enormous can of worms. As an example, ASIO could use this power to access the servers of an email provider, in order to access the emails of an individual of interest. This email provider might be a legitimate Australian business that does its best to provide a secure and private service for its customers. Not only does this provision give ASIO the right to violate that trust – it is now even in their interest to stockpile possible attacks against Australian businesses rather than help them to secure themselves.

There are many other scenarios to ponder. If the business notices ASIO’s intrusion and announces to all their customers that they have been attacked, could that count as disclosing information about a special intelligence operation? Would there be any penalty for detecting and stopping ASIO? What about publishing details of their method of attack? If ASIO accidentally damages a computer or a business’ reputation, is there any accountability requiring ASIO to own up to that and provide compensation?

This new power could therefore have many negative unintended consequences.

Surveillance of Australians by ASIS

Until now, ASIO has been the only intelligence agency that can (legally) spy on Australians. This bill seeks to empower the Australian Secret Intelligence Service (ASIS) to collect intelligence not only about foreign individuals and organisations, but also to assist ASIO in investigations targeting Australians. This will be permitted provided that ASIS’s activities occur outside Australia.

Although there is much talk of Australians fighting in Syria and Iraq, it is interesting to note that nothing in this section of the bill requires the Australian target to be overseas. With much of Australians’ data and communications going overseas this raises important questions about what else ASIS might be asked to do that ASIO can’t.

This is precisely the sort of power that enabled the United States’ National Security Agency (NSA) to conduct comprehensive surveillance on American citizens and residents, in contravention of their supposed role as a foreign-focused organisation.

Imprisonment for disclosure of classified information

A new category of “special intelligence operations” is to be created, carrying indemnity for participants from civil and criminal liability, subject to certain limitations. There are to be strict penalties for disclosure of such ‘special operations’: five years’ imprisonment, or ten years if disclosure endangers somebody’s health and safety.

The wording of these new offences is clearly intended to cover both whistle-blowers and journalists who might publish leaked information.

EFA supports whistle-blowers who responsibly reveal illegal and immoral activities. If abuses like those disclosed by Edward Snowden were revealed in Australia under the terms of this proposed legislation, then the actions of journalist Glenn Greenwald and the Guardian, New York Times and Washington Post could all be illegal; this, despite the enormous public interest value in revealing the far-reaching and indiscriminate NSA surveillance, and despite the reforms that have occurred in response to the public outcry.

Former Iraq war whistle-blower and now MP for Denison, Andrew Wilkie said in a statement: “The increase in penalties for the disclosure of intelligence material must also be accompanied by an amendment to the Public Interest Disclosure Act 2013 to ensure protection for intelligence whistle-blowers.”

Mandatory data retention – not just yet

A proposal for mandatory data retention by ISPs is not included in this bill. It is widely expected that the government will attempt to legislate for this at a later date.

At a press conference on Wednesday the head of ASIO David Irvine described data retention as “absolutely crucial”. The effectiveness of data retention in improving national security is highly disputed and the implications for the privacy and rights of Australians are significant. See our previous coverage of this issue.

EFA is committed to opposing unnecessary mass surveillance in general and mandatory data retention in particular.  If you support us, please join or donate today.

Posted in Privacy, Surveillance

Perfect Forward Secrecy: an Important Web Privacy Protection

This is a modified version of an article first published by Parker Higgins on EFF’s Deeplinks blog on 28th August 2013. See the original article.

Love locks - Seoul

Image: J. Lawrence

When you access a Web site over an encrypted connection, you’re using a protocol called HTTPS. But not all HTTPS connections are created equal. In the first few milliseconds after a browser connects securely to a server, an important choice is made: the browser sends a list of preferences for what kind of encryption it’s willing to support, and the server replies with a verification certificate and picks a choice for encryption from the browser’s list. These different encryption choices are called “cipher suites.” Most of the time, users don’t have to worry about which suite the browsers and servers are using, but in some cases it can make a big difference.
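
The cipher-suite negotiation described above can be inspected from code. As a minimal sketch using Python’s standard `ssl` module (this is an illustration, not part of the original article), the following lists the suites a client built with Python’s recommended defaults would offer to a server:

```python
import ssl

# Build a client context with Python's recommended defaults, then list
# the cipher suites it would offer to a server during the handshake.
ctx = ssl.create_default_context()
suites = ctx.get_ciphers()  # a list of dicts describing each enabled suite

for suite in suites[:5]:
    print(suite["name"], suite["protocol"])

# Modern defaults lead with forward-secret suites: TLS 1.3 suites
# (names starting with "TLS_") and TLS 1.2 ECDHE suites.
fs = [s["name"] for s in suites
      if s["name"].startswith("TLS_") or "ECDHE" in s["name"]]
print(f"{len(fs)} of {len(suites)} enabled suites support forward secrecy")
```

Which suite is actually used depends on the server’s reply, as described above; the list only shows the client’s side of the negotiation.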

One important property is called “perfect forward secrecy,” but only some servers and only some browsers are configured to support it. Sites that use perfect forward secrecy can provide better security to users in cases where the encrypted data is being monitored and recorded by a third party. That particular threat may have once seemed unlikely, but we now know that the NSA does exactly this kind of long-term storage of at least some encrypted communications as they flow through telecommunications hubs, in a collection effort it calls “upstream.”

How can perfect forward secrecy help protect user privacy against that kind of threat? In order to understand that, it’s helpful to have a basic idea of how HTTPS works in general. Every Web server that uses HTTPS has its own secret key that it uses to encrypt data that it sends to users. Specifically, it uses that secret key to generate a new “session key” that only the server and the browser know. Without that secret key, the traffic traveling back and forth between the user and the server is incomprehensible, to the NSA and to any other eavesdroppers.

But imagine that some of that incomprehensible data is being recorded anyway—as leaked NSA documents confirm the agency is doing. An eavesdropper who gets the secret key at any time in the future—even years later—can use it to decrypt all of the stored data! That means that the encrypted data, once stored, is only as secure as the secret key, which may be vulnerable to compromised server security or disclosure by the service provider.

That’s where perfect forward secrecy comes in. When an encrypted connection uses perfect forward secrecy, that means that the session keys the server generates are truly ephemeral, and even somebody with access to the secret key can’t later derive the relevant session key that would allow her to decrypt any particular HTTPS session. So intercepted encrypted data is protected from prying eyes long into the future, even if the website’s secret key is later compromised.
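
The mechanism behind this is ephemeral Diffie-Hellman key exchange: both sides pick fresh random secrets for every session, derive the same session key, and then discard the secrets. The toy sketch below (an illustration only – the prime is far too small for real use, and real TLS uses vetted groups and elliptic curves) shows why a recorded session can’t be decrypted later:

```python
import hashlib
import secrets

# Toy ephemeral Diffie-Hellman. Each session uses fresh random exponents,
# so nothing stored long-term can recreate a past session key.
# Illustration only: real TLS uses standardised groups/curves.
P = 2**127 - 1   # a Mersenne prime; far too small for real-world security
G = 3

def session_key():
    a = secrets.randbelow(P - 2) + 2   # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 2   # Bob's ephemeral secret
    A = pow(G, a, P)                   # Alice sends A to Bob
    B = pow(G, b, P)                   # Bob sends B to Alice
    shared_alice = pow(B, a, P)
    shared_bob = pow(A, b, P)
    assert shared_alice == shared_bob  # both ends derive the same secret
    # a and b go out of scope here; only A and B ever crossed the wire.
    return hashlib.sha256(str(shared_alice).encode()).hexdigest()

k1, k2 = session_key(), session_key()
print(k1 == k2)  # False: every session gets an independent key
```

An eavesdropper who recorded A and B would still have to solve the discrete logarithm problem to recover the session key, and compromising the server’s long-term certificate key afterwards doesn’t help at all.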

It’s important to note that no flavor of HTTPS, on its own, will protect the data once it’s on the server. Web services should definitely take precautions to protect that data, too. Services should give user data the strongest legal protection possible, and minimize what they collect and store in the first place. But against the known threat of “upstream” data collection, supporting perfect forward secrecy is an essential step.

So who protects long-term privacy by supporting perfect forward secrecy? Unfortunately, it’s not a very long list—but it’s growing. Google made headlines when it became the first major web player to enable the feature in November of 2011. Facebook announced last month that, as part of security efforts that included turning on HTTPS by default for all users, it would enable perfect forward secrecy soon. And while they don’t serve the same volume as those other sites, EFA’s websites are also configured to use perfect forward secrecy, including:
- this website:
- EFA’s main website at:
- EFA’s Member website at:

Outside of the web, emails encrypted using the OpenPGP standard do not have forward secrecy, but instant messages (or text messages) encrypted using the OTR protocol do.

Supporting the right cipher suites—and today, for the Web, that means ones that support perfect forward secrecy—is an important component of doing security correctly. But sites may need encouragement from users because, like HTTPS generally, supporting perfect forward secrecy doesn’t come completely without a cost. In particular, it requires more computational resources to calculate the truly ephemeral session keys required.
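
On the server side, opting in largely comes down to restricting the cipher string. As a hedged sketch in Python’s `ssl` module (the cipher string `"ECDHE+AESGCM"` is one example of an OpenSSL selection, not a universal recommendation), a site might configure:

```python
import ssl

# Sketch of a server-side context restricted to forward-secret suites.
# "ECDHE+AESGCM" is an OpenSSL cipher string selecting TLS 1.2 suites
# that use ephemeral elliptic-curve Diffie-Hellman key exchange.
# TLS 1.3 suites stay enabled and are forward-secret by design.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM")

for suite in ctx.get_ciphers():
    name = suite["name"]
    # Every remaining suite is either a TLS 1.3 suite or an ECDHE suite.
    assert name.startswith("TLS_") or "ECDHE" in name
    print(name)
```

Real deployments would also load a certificate and key before serving connections; the point here is only that the forward-secrecy choice is a one-line configuration with a modest CPU cost for the ephemeral key computation.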

It may not be as obvious a step as simply enabling HTTPS, but turning on perfect forward secrecy is an important improvement that protects users. More sites should enable it, and more users should demand it of the sites they trust with their private data.

Posted in Data Retention, Encryption, PRISM, Privacy, Surveillance
