Reading Sacha Molitorisz’s Net Privacy: How We Can Be Free in an Age of Surveillance, I have appreciated the background and philosophical backing he provides for protecting privacy. In particular, I think the relational approach to privacy he describes is a brilliant way to expand the scope of what constitutes privacy as an individual and societal concern. There are, however, a few points I disagree with or wish were explored more; the role blockchain technology, young as it is, has played and could play in ensuring net privacy is not mentioned even once. Instead, the chapter I am currently reading speaks of regulation and legislation, discussing the privacy of individuals and society but then falling back on the State or global institutions to uphold it – a tad problematic.
Perhaps I am jumping slightly ahead and Molitorisz qualifies this statement with further comments in the book, but I feel he places way too much faith in political actors to do what’s right regarding privacy laws. One line in particular prompted this piece:
“A vast difference exists between a hacker gaining unauthorised access to a webcam for personal gain and an AI system employed by the government for national security that automatically records webcam footage no human will ever see unless a court orders otherwise.”
– Net Privacy: How We Can Be Free in an Age of Surveillance, Sacha Molitorisz, pg. 260.
I agree a vast difference exists, but in the reverse order. While both are wrong, a hacker obtaining access to another’s webcam, or even a small group of devices, only breaches the privacy of those targeted. Important ethical considerations involved there aside, the mass surveillance of the State indiscriminately targets everyone – a far larger breach. The reasons given to excuse this do not hold up to scrutiny when one looks at governments’ records, at least here in Australia and especially the US.
Firstly, “national security” is an intentionally vague concept. Molitorisz is obviously referring to the mainstream definition, using examples of terrorism often touted by governments as shields against criticism of policy. In June of last year, I wrote a piece discounting the myth of national security, using the explanation brought forward by Clinton Fernandes in Island off the Coast of Asia:
“[he] described security as a “useful concept because of its elasticity.” It not only refers to security in terms of military strength and cybersecurity and privacy, but also the economic interests of the State. Fernandes describes these interests as “what a nation or a dominant group within it possesses or thinks it ought to possess.””
What is often described as national security, and the measures implemented to “defend” it, is usually in direct opposition to the actual public interest. Government expansions of surveillance and data collection inarguably fall into this category, not least because they rarely ever work. Edward Snowden revealed in 2013 how extensive and invasive government programs were, run by agencies like the NSA and GCHQ, and various abuses of this data. Arun Kundnani, in his book The Muslims Are Coming!, describes how Muslims were targeted after 9/11, altering individuals’ and groups’ behaviours, occasionally entrapping otherwise innocent people in conspiracies set up by intelligence agencies, and even resulting in further terrorist cultivation.
In fact, it could be argued that for governments, invasive mass surveillance is critical to their “national security” interests. During the Arab Spring uprisings in 2011, many activists and protestors used browsers like Tor and/or encrypted services to communicate and organise. Democracy and the security of the people were protected by defying and overcoming State surveillance. You could argue that this only applies because the governments of, say, Egypt and Tunisia were dictatorial, but you would be wrong. It applies equally in so-called free and democratic countries, particularly those in which mass surveillance is secretly expanded and carried out.
AI Systems and Humans
In a perfect world, the vision that Molitorisz describes regarding AI might be possible, but that is often not the case. He even notes in the book that privacy should be – but is not – coded into technology from the start; since ARPANET, it has been treated as an afterthought. This concept applies across the board, however. Biases and design oversights are an inherent part of all IT systems, and AI is no different.
Some major examples of this are facial recognition, drone warfare, and (more US-centric, but potentially applicable elsewhere) policing. Companies like Amazon have come under fire for creating and selling facial recognition technology that is inherently racist – it identifies lighter-skinned people with much higher accuracy than those with darker skin, with (if I recall) black women faring worst. It is an imperfect system that, by inherent (not necessarily intentional) design, contributes to the institutionally racist nature of “Western” society, especially when police departments purchase these systems.
Drone warfare, a devastating development that exploded in use under the Obama administration, has created an even further divide between the victims of war and those perpetrating it. Human beings – terrorists or not – become abstract, disconnected from the humanity that’s owed them. With the introduction of AI, a system that already sees extreme abuse and blatantly criminal conduct could well become an automated “army”, where the line over who is responsible and culpable will become even hazier. Given that data collection in these operations can be very inaccurate and difficult to obtain, what’s to stop an AI from “acting recklessly” and with impunity?
Domestically, the US also carries out a war against its own people with increasingly militarised police forces. Crime data is collated and resources allocated based on this, a practice that disproportionately targets people of colour. AI systems can only do so much, and it is humans who draw conclusions based on that data, conclusions that are subject to bias and a lack of understanding (elaboration in linked post).
How do you think an AI given the power to monitor everyone’s webcams will fare? It is equally ludicrous to believe that such a system won’t be abused. In Queensland, there are instances of police sharing the data of abuse victims with their abusers; Edward Snowden mentioned frequent occurrences of agency staff spying on people’s webcams, mostly women’s. Even if humans never explicitly look at the webcam footage, massive amounts of data can still be inferred from it. And who gets to authorise “legitimate access” to it? Those who give themselves the power to collect it in the first place?
The State Beyond Scrutiny
Molitorisz dedicates a reasonably large section of the book to discussing the effects privacy issues can have on democracy and our own free, rational thought. Cambridge Analytica is the main example, running extremely personalised advertisement campaigns for wannabe despots like Boris Johnson and Donald Trump. As mentioned above, surveillance is considered necessary for “national security”, but it is clear such data collection can be, and routinely is, used for other, much more nefarious purposes.
In the US, the record barely needs covering – ever since 9/11, the concept of net privacy has been virtually non-existent. Data collection and sharing is a global enterprise for governments, with “loopholes” through which the US government can spy on its own citizens despite this being against their oh so sacred Constitution. I do not know any specifics about the UK, but I can only imagine the Conservative Party over there has a similar disregard for privacy to our (Australian) Liberal government’s.
With this in mind, Molitorisz appears not to notice the inconsistency in his argument. If data collection and mass surveillance, in the way they have been used by State actors, are a direct threat to democracy and the privacy – the very humanity – of citizens, then what use are suggestions for legislation and regulation from the State? A State whose self-preservation and acquisition of power relies on the subservience and surveillance of the masses cannot be expected to act in our best interests.
It isn’t that I think the laws suggested wouldn’t work or lack value, but the State has shown it cannot be trusted when it comes to privacy, and therefore citizens themselves must be made aware of ways they can directly control their online activity and data. Blockchain and privacy-focused tools like the Tor browser or DuckDuckGo are popular examples, but even education on tech literacy would go a long way toward helping people make informed decisions about their data and privacy.
In a true democracy, in the theoretical realm, a State legislating and regulating privacy might work. But in reality, where the State itself is the threat, we cannot expect the government to regulate itself properly. The people need to take matters into their own hands, to protect the privacy of the individual and society as a whole.
Previous piece: Loss of Rationality: Kant, Consumerism, and Democracy
Liked this? Read Anarchism and the Neutrality of Technology