Alt. vs. Ctrl.: Thinking About Alternative Internets

Editorial notes for the Journal of Peer Production #9 on Alternative Internets

By Félix Tréguer, Panayotis Antoniadis, Johan Söderberg

The hopes of past generations of hackers weigh like a delirium on the brains of the newbies. Back in the days when Bulletin Board Systems metamorphosed into the Internet, the world’s digital communications networks – hitherto confined to military, corporate and elite academic institutions – were suddenly within the grasp of ordinary individuals. To declare the independence of the Internet from nation states and the corporate world seemed like no more than stating the bare facts. Even encrypted communication – the brainchild of military research – had leaked into the public’s hands and become a tool wielded against state power. Collectives of all stripes could use the possibilities offered by the Web to bypass traditional media, broadcast their own voice and assemble in new ways in a new public sphere. For some time, at least, the Internet as a whole embodied “alternativeness.”

Already by the mid-nineties, however, states began to reshape the communications infrastructure into something more manageable. Through a series of international treaties, legislation and market developments, ownership of this infrastructure was concentrated in the hands of a few multinational companies (McChesney, 2013). On top of this legal and technical basis, a new breed of informational capitalism sprang up, in which value is siphoned from deterritorialized “open” flows (Fuchs, 2015). Meanwhile, the ecological footprint of communication technologies has come to represent a formidable challenge (Flipo et al., 2013).

It is in the light of these transformations that the emancipatory promises inherited from the 1980s and 1990s must be assessed. With every new wave of high-tech products, these promises have been renewed. For instance, when WiFi antennas were rolled out in the 2000s, community WiFi activists hoped to rebuild the communications infrastructure from the bottom up (Dunbar-Hester, 2014). With the advent of crypto-currencies, some claimed to believe that bankers’ control over global currency flows would be demolished (Karlström, 2014). The technology at hand might be new, but the storyline bundled with it is made up of recycled materials. It basically says: “Technology x has leveled the playing field; now individuals can outsmart the combined, global forces of state and capital.”

Underlying this claim is a grander narrative about (information) technology as the harbinger of a brighter future. Although progressivism goes all the way back to the Scientific Revolution, it was given a particular, informational twist during the Cold War. In the 1950s and 1960s, disillusioned US Trotskyists – most notably Daniel Bell – rebranded historical materialism as the post-industrialism hypothesis. In this remake of hist-mat, history no longer culminated in socialism but in a global consumer village. Furthermore, the motor of transition was no longer class struggle but the development of technology itself (Barbrook, 2007). A spark of conflict has of course survived in the post-industrial hypothesis, and this technological determinism flares up anew every time hackers and Internet activists rally behind, say, the inevitable demise of copyright or the impending triumph of decentralised communication networks (Söderberg, 2013). Determinism is performative, and never more so than when it is mobilized in political struggles.

This observation points to the instability of the meanings invested in computers and in the Internet itself. It suffices to recall the twin roots of these technologies, one in the military-industrial complex (Agar, 2003; Edwards, 1996), the other in the counter-culture and peace movement (Turner, 2006; 2013). The same undecidedness prevails today, as exemplified by the global controversies unleashed by NSA whistleblower Edward Snowden. The documents leaked by Snowden revealed the extent to which communications surveillance has been built into the pipes of a supposedly flat network, giving rise to unprecedented mobilisations aimed at resisting it. But paradoxically, this wave of resistance is now leading to the legalisation of mass surveillance (Tréguer, 2016). Because of these persistent ambiguities, it would be as wrong to denounce the inherent oppressiveness of the Internet as it would be to celebrate the alternative essence of this technology. Either position amounts to the same thing: a foreclosing of the struggle in which the future meaning of the technology is determined. Both Alt. and Ctrl. are possible and competing scenarios, and they evolve in constant interaction.

How can we, as scholars and/or activists, sort out this complexity and assess the balance of forces, while reinvigorating hope for the future? Can we learn from the past to ward off the eternal return of a dystopian future? Posing these questions – and perhaps contributing to answers – is the task that we have set for ourselves in this special issue of the Journal of Peer Production on “alternative Internets.”

If the meaning of the “Internet” is unstable, then the definition of “alternative” in “alternative Internets” is even more so. Alternativeness is never an absolute. It is relative to something else, the non-alternative, which must also be defined. In this respect, Paschal Preston locates alternative Internets in online applications that “manage to challenge and resist domination by commercial and other sectional interests”, in particular those “operating as alternative and/or minority media for the exchanges of news and commentary on political and social developments which are marginalized in mainstream media and debates” (Preston, 2001). Likewise, Chris Atton writes that alternative Internets are “produced outside the forces of market economics and the state” (Atton, 2003). As seen from these rather conventional definitions, alternativeness is measured as distance from the centres of state and capital.

How can we move past this pair of “useful others” (the state, the market) to better grasp alternativeness? The tools, applications and media that form part of the Internet can be assessed as composites made up of different dimensions. Some important parameters include the underlying funding and economic models, the governance schemes for taking decisions and allocating tasks, or the modes of production. Nick Couldry puts emphasis on this last dimension when discussing alternative online media, stressing that what matters most for them is to challenge big corporate mass media by overcoming ‘‘the entrenched division of labour (producers of stories vs. consumers of stories)” (Couldry, 2003:45).

Another crucial line of inquiry for evaluating an alternative Internet relates to the underlying content or ideology that it circulates. For Sandoval and Fuchs, this is the most important dimension: anything claiming to be alternative must adopt a critical stance, “try to contribute to emancipatory societal transformation” and “question dominative social relations” (Sandoval & Fuchs, 2010). When we consider the Internet, ideology is found in the values that underlie the design of a technology or application, structure its uses or populate the online social space that this application brings about.

Of course, ideology is also embedded in the discourses and practices of the many actors trying to influence the Internet’s development at the technical, social or legal level. The Internet is indeed a social space made up of a myriad of contentious actors: hackers, software developers and makers who hack, code and make; advocacy groups with their value-laden proclamations and legalese; Internet users making claims to an enlarged citizenship; and, of course, all the entrepreneurs, crooks, bureaucrats, agents provocateurs and politicians they fight against or – less often – ally with. All of these actors produce, use or advocate for particular technologies, fight against or encourage dystopic trends, work towards or oppose emancipatory projects, and in doing so produce political discourses and imaginaries that weigh on the social construction of the Internet. As such, they are part of our field of inquiry when we talk about “alternative Internets.” Their own contradictions further complicate the analysis. A protagonist might go to bed as a subversive hacker but wake up the next day as a piece-rate worker in someone else’s pension plan, or worse.

This speaks to the more general fact that a socio-technical dispositif that is “alternative” on one level tends to be preconditioned by the status quo on some other level. For instance, openness in terms of software licenses often comes hand in hand with closure in terms of technical expertise. To put it in more general terms, the alternative, if it is to be effective, is necessarily compromised by the dominant. Here as elsewhere, a maximising strategy is paralysing: as the proverb goes, “the perfect is the enemy of the good.” In this spirit, Marisol Sandoval and Christian Fuchs have argued for “politically effective alternative media that in order to advance transformative politics can include certain elements of capitalist mass media” (Sandoval & Fuchs, 2010:147). According to the authors, subscription fees or even advertising might be required if a project is to break out of its niche and reach a broader audience. Assessing trade-offs is part of the alternative game.

In this issue of the JoPP, we present contributions that explore these questions and shed light on the blind spots of alternative Internets.

With “In Defense of the Digital Craftsperson,” James Losey and Sascha D. Meinrath offer a conceptual framework for analyzing control in Internet technical architectures along five dimensions: networks, devices, applications/services, content, and data. By updating prior analysis of threats to communicational autonomy and to the ability to tinker with digital technologies, they identify key challenges and help us think systematically about strategies of resistance.

Stefano Crabu, Federica Giovanella, Leonardo Maccari, and Paolo Magaudda consider the bottom of the “network” layer of Losey and Meinrath’s framework by offering an interdisciplinary perspective on Ninux, a network of wireless community networks in Italy. Their paper, “Hacktivism, Infrastructures and Legal Frameworks in Community Networks: The Italian Case of”, benefits from the active participation of one of the authors in Ninux, and presents interesting evidence about the limited levels of decentralization in a network built around precisely this vision. It is also one of the very few papers that bring insights into the legal aspects of community networks, focusing on the question of liability and the different organizational forms that can protect these networks against legal action.

Christina Haralanova and Evan Light offer an insider’s look at a much smaller community network in Montreal, called Réseau Libre. In their paper entitled “Enmeshed Lives? Examining the Potentials and the Limits in the Provision of Wireless Networks,” they try to understand two other important contradictions in community networks. First, they examine such networks’ possible role both as an “alternative Internet provider” and as an “alternative to the Internet altogether,” that is to say a local infrastructure providing local services for the members of the network. Second, they identify the lack of adequate security against surveillance, despite the fact that many people cite enhanced privacy and security options as a reason for their participation in the community. As the paper shows, even though they might foster knowledge-sharing around issues such as computer security, these networks remain “as insecure as the Internet itself.”

The paper “Going Off-the-Cloud: The Role of Art in the Development of a User-Owned & Controlled Connected World” by Daphne Dragona and Dimitris Charitos also explores various forms of user-owned network infrastructure, this time focusing on an “alternative to the Internet altogether” imagined and experimented with by artists and activists. The scale here is much smaller, with most networks consisting of a single wireless router acting as a hotspot that allows only local interactions between those in physical proximity. Such “off-the-cloud” networks have been given numerous telling names like Netless, PirateBox, Occupy here, Hot probs, Datafield, Hive networks, Autonomous Cube. According to the authors, these and many more similar inspiring projects work towards “new modes of organization and responsibility (…) beyond the sovereignty of the cloud.”

In “Gesturing Towards ‘Anti-Colonial Hacking’ and its Infrastructure,” Sophie Toupin draws on a historical example to investigate the opportunities and limitations of appropriating cryptography today. Her interviews with some of the key actors of this glorious moment of hacker politics are particularly inspiring, as is Toupin’s willingness to expand our understanding of “hacktivism” by looking beyond Europe and North America.

Primavera De Filippi’s piece focuses on “The Interplay between Decentralization and Privacy,” using blockchain technologies as a case study. She shows that while decentralized architectures are often key to the design of alternative Internets, they come with important challenges with regard to privacy protection. Her critical assessment is particularly timely, as blockchain technologies are rapidly being co-opted by the bureaucratic organizations they were originally meant to subvert.

In “Finding an Alternate Route: Circumventing Conventional Models of Agricultural Commerce and Aid,” Stephen Quilley, Jason Hawreliak and Katie Kish present a case study on Open Source Ecology (OSE). OSE started in the United States but has sprouted similar initiatives in Europe and South America. It is now developing a series of open source industrial machines and publishes the designs online. One of the primary goals of OSE is to provide collaboratively produced blueprints for relatively inexpensive agricultural machinery, such as tractors, backhoes, and compressed earth brick presses for constructing buildings. The authors argue that the proliferation of open source networks can reshape domains that have traditionally relied on state and inter-state actors, such as international aid.

Lastly, Melanie Dulong de Rosnay’s experimental text on “Alternative Policies for Alternative Internets” raises awareness of the importance of the terms of use of Internet platforms. By quoting numerous such policies – from both mainstream and alternative platforms – on topics like copyright or data protection, she manages to create a diverse mix of feelings, all the way from anger to laughter. Most importantly, this collection warns us about the legal issues that alternative platforms have to deal with, and provides inspiration and useful information on how to address them in practice.

Each of these papers addresses one or more of the “layers” described by Losey and Meinrath, analysing different facets of alternativeness. But there are other dimensions outside this framework that we have not touched upon. For instance, the staggering ecological impact of Internet technologies and their environmental unsustainability is not addressed, despite the growing attention of scholars and engineers to these crucial issues (Chen, 2016). Although two papers focus on urban community networks, other aspects of the urban dimension of alternative Internets are overlooked. Together with the notion of locality, urbanity appears to be crucial in helping actualise the potential of alternative Internets to become autonomous infrastructures operating outside the commercial Internet. It is also an avenue for thinking about resistance strategies: as the urban space becomes increasingly hybrid and renders the digital and the physical ever more intertwined, the movements fighting for the “right to the city” (Lefebvre, 1996) and those working towards the “right to the Internet” will have renewed opportunities to join forces (Antoniadis & Apostol, 2014).

For sure, advancing alternative Internets will require a very diverse set of actors to go beyond traditional boundaries and engage in effective collaboration. In academia too, transdisciplinary collaboration – though still in its infancy – is extremely promising. We hope that this issue of the JoPP will be read as an invitation to work further in that direction.

As editors, we would like to thank Bryan Hugill for helping us copy-edit the papers, and express our gratitude to both authors and reviewers. We hope that readers will be as inspired as we are by these very diverse contributions, which each in their own ways point towards a more democratic and more inclusive Internet.


Agar, J. (2003). The Government Machine: A Revolutionary History of the Computer, MIT Press.

Antoniadis, P. & Apostol, I. (2014). The right to the hybrid city and the role of DIY networking, Journal of Community Informatics, special issue on Community Informatics and Urban Planning, vol. 10.

Atton, C. (2005). An Alternative Internet: Radical Media, Politics and Creativity, Edinburgh University Press.

Barbrook, R. (2007). Imaginary Futures: From Thinking Machines to the Global Village, Pluto Press.

Chen, J. (2016). “A Strategy for Limits-aware Computing”, LIMITS’16, Irvine, California, June 9th.

Dunbar-Hester, C. (2014). Low Power to the People: Pirates, Protest, and Politics in FM Radio Activism, MIT Press.

Flipo, F., Dobré, M, & Michot, M. (2013). La Face cachée du numérique. L’impact environnemental des nouvelles technologies, L’Échappée.

Fuchs, C. (2015). Culture and Economy in the Age of Social Media, New York: Routledge.

Edwards, P. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America, MIT Press.

Karlström, H. (2014). “Do Libertarians Dream of Electric Coins? The Material Embeddedness of Bitcoin”, Scandinavian Journal of Social Theory, 15(1), pp. 23–36.

Lefebvre, H. (1996 [1968]). “The right to the city”. In H. Lefebvre (auth), E. Kofman & E. Lebas (Eds.), Writings on Cities, Blackwell, pp. 63-184.

Preston, P. (2001). Reshaping Communications: Technology, Information and Social Change, Sage.

Sandoval, M. & Fuchs, C. (2010). “Towards a Critical Theory of Alternative Media”, Telematics and Informatics, 27(2), pp. 141–150.

Söderberg, J. (2013). “Determining Social Change: The Role of Technological Determinism in the Collective Action Framing of Hackers”, New Media & Society. 15(8) pp. 1277–1293.

Tréguer, F. (2016). “From Deep State Illegality to Law of the Land: The Case of Internet Surveillance in France”, 7th Biennial Surveillance & Society Conference, Barcelona, April 20th.

Turner, F. (2006).  From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism, University of Chicago Press.

——— (2013). The Democratic Surround: Multimedia & American Liberalism From World War II to the Psychedelic Sixties, University of Chicago Press.


Supporting Community Networks Through Law and Policy

This post was first published on the website of the netCommons project.

During the workshop on community networking infrastructures held in Barcelona on June 17th, 2016, I talked about the regulatory hurdles faced by Community Networks (CNs) in Europe, as well as a few potential solutions. Here is the long version of the talk…

For the most part, the following is based on research conducted for an article I co-authored with Primavera De Filippi after interviewing several leading community networks in Europe that use wireless networks to provide Internet connectivity to their members, such as Freifunk and Ninux. As we at netCommons start looking into the legal landscape surrounding community networks, I thought it would be useful to provide an updated version as a starting point for our research.

Regulation creates hurdles for Community Networks

First, it is clear that, despite the potential of community networks for fostering public interest goals in telecom policy, policy-makers have so far failed to support their efforts. More often than not, public policy actually puts important hurdles in their way by focusing solely on the needs of big incumbent players.

Exclusion from public networks

The most striking example of such hurdles is the fact that several community ISPs have been precluded from using public broadband networks funded with taxpayers’ money. In France, for instance, many local governments have invested in rolling out fiber networks in both urban and rural areas. These networks are built and managed by a private company contracted by the public authority, which then leases access to traditional access providers; ISPs in turn sell their Internet access offers to subscribers. Yet the fee charged to access the network is designed for big commercial ISPs and is often prohibitive for nonprofit community networks. Several French community ISPs in the Federation FDN have been unable to afford such fees, and are thus effectively denied access.

There is also an issue of transparency and access to public sector information. In at least one reported case, the network operator even refused to communicate its price listing to an interested CN. In a neighboring country, a community ISP similarly underlined “the lack of collaboration with public administrations” in securing access to landline infrastructure.

Short-term policies: the case of radio spectrum

Another, connected problem is the short-termism of current telecom policies, much of which can be linked to what economists call “regulatory capture” – the fact that regulators and policy-makers listen to and serve those they are supposed to regulate, i.e. the players who have the resources to develop full-fledged lobbying strategies.

Let’s look at the issue of spectrum management. Here, as in many other areas, regulatory capture by commercial interests leads to regulatory choices that systematically overlook the potential of more flexible and citizen-centric policies. The allocation of the so-called “digital dividend” (i.e. the frequencies left vacant by the switch from analog to digital television) is a textbook case. In France, for instance, it was proposed to use part of the spectrum dividend to create new digital TV channels and to develop mobile television as well as digital radio (neither of these technologies has taken off thus far). The remaining half of these “golden frequencies” of the UHF band (sought-after for their long-range propagation) was then auctioned off to telecom operators for their 4G mobile Internet access offers (the lucrative license auctioning took place between October 2011 and January 2012 and brought €3.5bn to the French state). Similar policies have been devised in other European countries.

In the process, one option was never seriously considered: extending “unlicensed” access to some of these frequencies – that is, effectively turning them into a commons open for all to use, regardless of public, private or non-profit status. Long thought to be unreasonable because of the risk of radio interference, opening up the spectrum to multiple, non-coordinated radio users was actually tried on a worldwide basis more than a decade ago with the WiFi frequencies. Needless to say, it has proved to be a very wise policy choice. At the time, those frequencies were referred to as “junk bands,” because few actually thought they could have valuable applications. Today, they carry about half of the world’s Internet traffic. Even exclusive licensees in the telecom sector providing Internet access over 3G and 4G increasingly resort to WiFi’s open spectrum to offload their Internet traffic.

Although property-based allocations of spectrum and exclusive licensing still have the upper hand, they have in many regards fallen short of fostering public interest goals, creating a very significant underutilization of a public resource. Moreover, not only does the regulatory focus on exclusive licensing create an enormous opportunity cost by favoring established players over innovative new entrants (such as CNs), human rights NGOs have even argued that it may breach international law on freedom of expression.

But despite the successes of WiFi and the fact that, as Yochai Benkler has shown, market adoption favors open spectrum policies, unlicensed access remains marginal. For CNs, this is worrying considering that they are increasingly victims of the rapid growth of WiFi traffic. Freifunk and other community networks, for instance, report having a hard time maintaining the quality of their networks in urban areas because of the saturation of the 5 GHz frequency bands. In some instances, they would theoretically be allowed to use the other portion of spectrum open to unlicensed uses, in the 2.4 GHz band; yet this remains a niche market for manufacturers of radio transmitters, and the gear necessary to deploy wireless networks in these bands is simply too costly for them.

Another issue for CNs is linked to the topography of their environment: the WiFi bands have important technical limitations, in particular in terms of propagation, and signals are easily blocked by tall buildings or trees. In such cases, CNs are faced with the choice of either renouncing the creation of a new radio link in a given location or pushing emission power levels beyond the legal limits to overcome these obstacles. A change in spectrum policy would therefore be most welcome.

New software restrictions on radio equipment

Another regulatory challenge for CNs (and many other actors in the radio field) relates to recent changes in legislation in both the US and the European Union. In the EU, a directive on radio equipment was adopted in 2014 and is currently being transposed at the national level. Article 3.3 of this directive might put in jeopardy the ability to flash radio hardware with “unauthorized” software (unauthorized by the manufacturer, that is). As the Free Software Foundation Europe explains in its analysis, this provision “implies that device manufacturers have to check every software which can be loaded on the device regarding its compliance with applicable radio regulations (e.g. signal frequency and strength). Until now, the responsibility for the compliance rested on the users if they modified something, no matter if hardware- or software-wise.”

How does this impact CNs? First, because the directive shifts the responsibility for legal compliance onto manufacturers, the latter could decide to protect themselves by locking down the devices they sell (as is already happening in the US). This would prevent CNs from installing custom software on the radio equipment that supports their infrastructure. Second, the FSFE notes that, anticipating this legal shift, manufacturers have already “installed modules on their devices checking which software is loaded.” According to the organization, “this is done by built-in non-free and non-removable modules disrespecting users’ rights and demands to use technology which they can control.” There is a fear that such software will evolve towards a built-in spying system checking on the user’s behavior or location, which needless to say runs counter to fundamental rights and, more generally, to the political values defended by CNs.

In the past few months, in Sweden, France, Germany and elsewhere, radio professionals and hobbyists as well as CNs and digital rights groups have urged policymakers to ensure that national transposition texts clarify that radio hardware must remain open to free software and other forms of technical tinkering. Unfortunately, these last-minute advocacy efforts might have come too late.

Towards a public policy for the telecom commons

These hurdles already hint at policy reforms that could support the development of community networks. Here are a few other items that should be put on the agenda.

Lifting unnecessary regulatory burdens

First, there is a range of regulations which make CNs’ work and very existence significantly and often unnecessarily difficult. In a country such as Belgium, for instance, the registration fee that telecom operators must pay to the NRA is 676€ for the first registration, plus 557€ every following year (for those whose revenues are below 1M€). In France, Spain or Germany, registration is free, which may explain why the movement is much more dynamic in these countries. Registration procedures could therefore be harmonized at the EU level, ensuring that they are free for nonprofit ISPs.

Promoting open WiFi

Second, several laws seek to prevent the sharing of Internet connections amongst several users by making people responsible (and potentially liable) for all communications made through their WiFi connection. This is the case in France, for instance, where the 2009 three-strikes copyright law against peer-to-peer file-sharing (the infamous HADOPI) also introduced a tort for improperly securing one’s Internet connection against the unlawful activity of other users. As a result of such legal rules, many community ISPs who would like to establish open WiFi networks in public spaces, such as parks and streets, refrain from doing so. A case regarding the so-called “secondary liability” of the provider of an open WiFi hotspot currently pending before the EU Court of Justice – the McFadden case – could soon bring useful clarifications (for other liability issues surrounding CNs, see this paper by netCommons researcher Federica Giovanella and this other paper by fellow netCommons researcher Mélanie Dulong de Rosnay).

Expanding the spectrum commons

Third, as I have already suggested, it is not just Internet wireless access points that can be shared, but also the intangible infrastructure on which radio signals travel. WiFi, as unlicensed spectrum, is a key asset for CNs willing to set up affordable and flexible last-mile infrastructure, but it is currently very limited. In the US, the FCC has initiated promising policies in this field in the past years. But for the moment, the EU has shied away from similar moves.

Yet, in 2012, the EU adopted its first Radio Spectrum Policy Programme (RSPP). During the legislative process, the EU Parliament voted in favor of ambitious amendments to open the spectrum to unlicensed uses. Even if some of these amendments were later scrapped by national governments, the final text still states for instance that “wireless access systems, including radio local area networks, may outgrow their current allocations on an unlicensed basis. The need for and feasibility of extending the allocations of unlicensed spectrum for wireless access systems, including radio local area networks, at 2,4 GHz and 5 GHz, should be assessed in relation to the inventory of existing uses of, and emerging needs for, spectrum (…).” On mesh networks, it adds that “member states shall, in cooperation with the Commission (…) take full account of (…) the shared and unlicensed use of spectrum to provide the basis for wireless mesh networks, which can play a key role in bridging the digital divide.”

In late 2012, as EU lawmakers were finalising the RSPP, a study (pdf) commissioned by the EU Commission also called for 100 MHz of new license-exempt bands (half in the sub-1 GHz range and the other half around 1.4 GHz), as well as for higher power output limits in rural areas to reduce the cost of broadband Internet access deployment. It also warned against the underutilization of current spectrum allocations (by the military, by incumbent operators, etc.). Since then, however, EU work on unlicensed spectrum and on more flexible authorization schemes accessible to community ISPs has stalled. At the national level too, save for a few exceptions, concrete steps have been virtually non-existent.

Opening access to public networks

Fourth – and this also relates to what I was explaining before – networks built with taxpayers’ money could also be treated as a commons and should, as such, remain free from corporate capture. Regulators should ensure that nonprofit community networks can access publicly funded and subsidized physical infrastructures without unnecessary financial or administrative hurdles. Accordingly, they should review existing policies and current practices in this field, provide transparent information to map publicly funded networks, and mandate rules allowing grassroots, nonprofit ISPs to use these networks on a preferential basis.

Offering targeted, direct public support

Of course, countless other policy initiatives can help support grassroots networks: small grants, crowdfunding schemes and subsidies to help these groups buy servers and radio equipment or communicate around their initiatives; access to public infrastructure (for instance the roof of a church on which to install an antenna); and support for their research on radio transmission, routing methods, software or encryption. The experience of the most successful of these groups suggests that even a little governmental support – whether municipal, regional or national – can make a big difference in their ability to accomplish the ambitious objectives they set for themselves.

Inviting CNs to the policy table

But all of these policies point to an overarching issue, namely the need to democratize telecom policy and establish procedures that can institutionalize “subversive rationalization” in this field. In some countries, regulators have already started to reach out to community networks. In Slovenia, on one occasion, Wlan-SI was asked to contribute to the policy discussion on a piece of telecom legislation. In Greece, the Athens Wireless Metropolitan Network has also been invited by the NRA to respond to consultations, and in France, FFDN has sometimes been invited to technical meetings. However, save for a few exceptions (like the Net neutrality provisions introduced in Slovenian law in late 2012), their input has so far never translated into actual policies. In many other countries, such as Italy, even though city councils may occasionally support these organizations to the extent that they provide better Internet access to their citizens, regional governments and national regulators have so far largely neglected them. Finally, at the EU level, where much of the telecom regulation applicable in Europe is ultimately crafted, community networks are virtually absent from policy debates.

Given the revival of CNs in the past years, it is not enough for regulatory authorities to treat citizens as mere consumers by occasionally inviting consumer organizations to the table. Regulators and policy-makers need to recognize that the Internet architecture is a contested space, and that citizen groups across Europe and beyond are showing that, for the provision of Internet access, commons-based forms of governance are not only possible but also represent effective and viable alternatives to the most powerful telecom operators. What is more, their participants have both the expertise and the legitimacy to take an integral part in technical and legal debates over broadband policy, in which traditional, commercial ISPs are over-represented. They can bring an informed and dissenting view to these debates, and eventually help alleviate regulatory capture, allowing policy-making to become more aligned with the public interest.

Of course, a potential problem is the fact that these networks are often run by volunteers whose lack of time and resources may sometimes make it difficult for them to participate as actively as the full-time and well-resourced lobbyists of incumbent operators. But over time, as the movement grows, it may be able to sustain its engagement with public authorities, especially if the latter adapt and establish ad hoc contact channels and remote participation mechanisms.

Twenty years after the privatization of national networks in Europe, there is certainly a long way to go before telecom policy balances the interests of all the various stakeholders. But it is clear that community networks have an important role to play in this process. As we move forward with the netCommons project, I hope we can help the policy debate move in that direction as the EU starts updating its telecom laws.

Rollback or Legalisation? Mass Surveillance in France and the Snowden Paradox

This piece was first published on ExplosivePolitics and MappingSecurity.

On June 5th, it will be exactly three years since Pulitzer Prize-winning journalist Glenn Greenwald wrote the first article based on the trove of secret documents disclosed by the now famous NSA whistleblower Edward Snowden. Three years that saw the unfolding of an unprecedented controversy over the surveillance capabilities of the world’s most powerful intelligence agencies, thanks to the combined work of investigative journalists, computer experts, lawyers, activists and scholars. Since 2013, France has become the first liberal European regime to undergo a vast reform of its legal framework regulating secret state surveillance. What follows is a reader’s digest of research presented at the 7th Biennial Surveillance & Society Conference.

Download the full paper (pdf)

For many observers, the first Snowden disclosures and the global scandal that followed held the promise of an upcoming rollback of the techno-legal apparatus developed by the American National Security Agency (NSA), the British Government Communications Headquarters (GCHQ) and their counterparts to intercept and analyse large portions of the world’s Internet traffic. State secrets and the “plausible deniability” doctrine often used by these secretive organisations could no longer stand in the face of such overwhelming documentation. Intelligence reform, one could then hope, would soon be put on the agenda to crack down on these undue surveillance powers and relocate surveillance within the boundaries of the rule of law. Three years later, however, what were then reasonable expectations have likely been crushed. Intelligence reform is indeed being passed, but mainly to secure the legal basis for large-scale surveillance, to a degree of detail that was hard to imagine just a few years ago. Despite an unprecedented resistance to surveillance practices developed in the shadows of the reason of state, these practices are progressively being legalised.

This is the Snowden paradox.

France: Mass surveillance à la mode

France is a good case in point. Before the adoption of the Intelligence Act in the summer of 2015, the surveillance capabilities of French intelligence agencies were regulated by a 1991 law. In the early 1990s the prospect of Internet surveillance was still very distant, and the law was drafted with landline and wireless telephone communications in mind. So when tapping into Internet traffic became an operational necessity for intelligence agencies at the end of the 1990s, it developed on the basis of secret and extensive interpretations of existing provisions (one notable exception being the provisions adopted in 2006 to authorize administrative access to metadata records for the sole purpose of anti-terrorism). From 2008 onward, the French foreign intelligence agency (DGSE) was even allowed to spend hundreds of millions of euros to tap extensively into the international fibre-optic cables landing on French shores.


French officials looking back at these developments have often resorted to euphemisms, going on record to describe this secret creep in surveillance capabilities as a zone of “a-legality.” Although “a-legality” may be used to characterize the legal grey areas in which citizens operate to exert and claim new rights that have yet to be sanctioned by either the parliament or the courts – for instance the disclosure of huge swathes of digital documents – it cannot adequately characterize these instances of “deep state” legal tinkering aimed at escaping the safeguards associated with the rule of law. Indeed, when the state interferes with civil rights like privacy and freedom of communication, a detailed, public and proportionate legal basis authorizing it to do so is required. Otherwise, such interferences are, quite plainly, illegal. Secret legal interpretations are of course a common feature in the field of surveillance. In France, they could prosper all the more easily given the shortcomings of human rights advocacy against Internet surveillance. Indeed, prior to 2013, French activists had, by and large, remained outside of the transnational networks working on this issue.

French national security policymakers were very much aware that the existing framework failed to comply with the standards of the European Court of Human Rights (ECHR). And so intelligence reform was announced for the first time in 2008, under the presidency of Nicolas Sarkozy (2007-2012). But the reform then lingered, which in turn created the political space for the parliamentary opposition to carry the torch. By the time the Socialist Party got back to power, in 2012, its officials in charge of security affairs were the ones pushing for a sweeping new law that would secure the work of people in the intelligence community and, incidentally, put France in line with democratic standards.

Then came Edward Snowden. At first, the Snowden disclosures deeply destabilized these plans, creating a new dilemma for the proponents of legalization: on the one hand, the disclosures helped document the growing gap between the existing legal framework and actual surveillance practices, exposing the latter to litigation and thereby reinforcing the rationale for legalization. On the other hand, they put the issue of surveillance at the forefront of the public debate and therefore made such a legislative reform politically risky and unpredictable. In late 2013, a first attempt at partial legalization (widening access to metadata) gave rise to new coordination among civil society groups opposed to large-scale surveillance, which reinforced these fears. It was only with the spectacular rise of the threat posed by the Islamic State in 2014 and the Paris attacks of January 2015 that new securitization discourses created the political conditions for the passage of the Intelligence Act – the most extensive piece of legislation ever adopted in France to regulate the work of intelligence agencies.

The 2015 French Intelligence Act in a nutshell

On January 21st 2015, Prime Minister Manuel Valls turned the long-awaited intelligence reform into an essential part of the government’s political response to the Paris attacks carried out earlier that month. Presenting a package of “exceptional measures” that formed part of the government’s proclaimed “general mobilization against terrorism,” Valls claimed that a new law was “necessary to strengthen the legal capacity of intelligence agencies to act” against that threat. During the expeditious parliamentary debate that ensued (April-June 2015), the Bill’s proponents never missed an opportunity to stress, as Valls did while presenting the text to the National Assembly, that the new law had “nothing to do with the practices revealed by Edward Snowden”. Political rhetoric notwithstanding, the Act’s provisions actually demonstrate how important the sort of practices revealed by Snowden have become in the geopolitical arms race in communications intelligence.

The Intelligence Act creates whole new sections in the Code of Internal Security. It starts off by widening the scope of public-interest motives for which surveillance can be authorized. Besides terrorism, economic intelligence, organized crime and counter-espionage, the list now includes vague notions such as the promotion of “major interests in foreign policy” or the prevention of “collective violence likely to cause serious harm to public peace.” As for the agencies allowed to use this new legal basis for extra-judicial surveillance, they include a “second circle” of law enforcement agencies that are not part of the official “intelligence community” and whose combined staff is well over 45,000.

In terms of technical capabilities, the Act seeks to align the range of tools that intelligence agencies can use with the regime applicable to judicial investigations. These include targeted telephone and Internet wiretaps, access to metadata and geolocation records, as well as computer intrusion and exploitation (i.e. “hacking”). But the Act also authorizes techniques that directly echo the large-scale surveillance practices at the heart of post-Snowden controversies. Such is the case of the so-called “black boxes,” scanning devices that will use Big Data techniques to sort through Internet traffic in order to detect “weak signals” of terrorism (intelligence officials have given the example of encryption as the sort of thing these black boxes would look for). Similarly, there is a whole chapter on “international surveillance,” which legalizes the massive programme deployed by the DGSE since 2008 to tap into submarine cables.

As for oversight, all national surveillance activities are authorized by the Prime Minister. An oversight commission (the CNCTR), composed of judges and members of Parliament, has 24 hours to issue non-binding opinions on authorization requests. The main innovation here is the creation of a new redress mechanism before the Conseil d’Etat (France’s supreme court for administrative law), but the procedure is veiled in secrecy and fails to respect defence rights. As for the regime governing foreign communications – which is vague enough to be invoked to spy on domestic communications – it comes with important derogations, not least of which is the fact that it remains completely outside of this redress procedure. Among other notable provisions, one forbids the oversight body from reviewing communications data obtained from foreign agencies. Another gives criminal immunity to agents hacking into computers located outside of French borders. The law also fails to provide any framework to regulate (and limit) access to the collected intelligence once it is stored by intelligence and law enforcement agencies.

Mobilisation against the controversial French Intelligence Bill

By the time the Intelligence Bill was debated in Parliament, civil society organizations had built the kind of networking and expertise that made them better suited to campaigning against national security legislation. As I show in the paper and in the online narrative below, human rights advocates led the contention during the three-month-long parliamentary debate on the Bill, while benefiting from the support of a variety of other actors typical of post-Snowden contention, including engineers and hackers, digital entrepreneurs as well as leading national and international organizations.

Overall, contention played an important role in barring amendments that would have given intelligence agencies even more leeway than originally afforded by the Bill. Whereas the government hoped for a “union sacrée,” contention managed to fracture the initial display of unanimity. MPs from across the political spectrum (including many within both socialist and conservative ranks) fought against the Bill, pushing its proponents to amend the text with corrections that, while significant, remained relatively marginal. In the end, some of the parameters were changed compared to the government’s proposal, but the general philosophy remained intact. In June 2015, the Bill was eventually adopted with 438 votes in favour, 86 against and 42 abstentions at the National Assembly, and 252 for, 67 against and 26 abstentions at the Senate. A number of legal challenges against the new law are now pending before both French and European courts.

The limits of post-Snowden contention

France’s passage of the 2015 Intelligence Act makes it an “early adopter” of post-Snowden intelligence reform among liberal regimes. But lawmakers in several other European countries are now following suit. The British Parliament is currently debating the much-criticized Investigatory Powers Bill. The Dutch government has recently adopted its own reform proposal, which has also raised strong concerns. The new conservative Polish government has announced plans to expand the access of law enforcement agencies to communications data, amid heated condemnations of the regime’s “orbanization” (in reference to Hungary’s controversial Prime Minister Viktor Orban). And in Germany, the Bundestag’s Interior Committee will soon start working on amendments to the so-called “G-10 law,” which regulates the surveillance powers of the country’s intelligence agencies.

Each country has its own specific context, and post-Snowden contention around intelligence reform will most likely have different outcomes according to these varying contexts. As Bigo and Tsoukala highlight,

“the actors never know the final results of the move they are doing, as the result depends on the field effect of many actors engaged in competitions for defining whose security is important, and of different audiences liable to accept or not that definition” (2008:8).

These field effects are exactly what made post-Snowden intelligence reform hazardous for intelligence officials and their political backers. And it may be that, in these other countries, human rights defenders will have greater success than their French counterparts in defeating the false “liberty versus security” dilemma, framing strong privacy safeguards and the rule of law as core components of individual and collective security. However, the ongoing British debate and the US’s tepid reform of the PATRIOT Act in June 2015 indicate that the case of France is likely more telling than its decrepit political institutions may suggest.

In the same way that 9/11 brought an end to the controversy over the NSA’s ECHELON program and paved the way for the adoption of the PATRIOT Act, the threat of terrorism and the associated processes of securitization now tend to hinder the global episode of contention opened by Edward Snowden in June 2013. Securitization is having a “chilling effect” on civil society contention, making legalization politically possible and leading to a “ratchet effect” in the development of previously illegal deep state practices and, more generally, of executive powers.

Fifteen years after 9/11, the French intelligence reform thus stands as a stark reminder of the fact that, once coupled with securitization, “a-legality” and national security become two convenient excuses for legalization and impunity, allowing states to navigate the legal and political constraints created by human rights organizations and institutional pluralism. This is yet another “resonance” of the French Intelligence Act with the PATRIOT Act.

The proponents of the French reform were probably right to claim that it amounts to neither Schmitt’s nor Agamben’s state of exception. But the fact that it is “legal,” or includes some oversight and redress mechanisms, does not mean that large-scale surveillance and secret procedures do not represent a formidable challenge to the rule of law. Rather than a state of exception, legalization carried out under the guise of the reason of state amounts to what Sidney Tarrow calls “rule by law.” In his comparative study of the relationships between states, wars and contention, he writes of the US “war on terror”:

“Is the distinction between rule of law and rule by law a distinction without difference? I think not. First, rule by law convinces both decision makers and operatives that their illegal behavior is legally protected (…). Second, engaging in rule by law provides a defense against the charge they are breaking the law. Over time, and repeated often enough, this can create a ‘new normal,’ or at least a new content for long-legitimated symbols of the American creed. Finally, ‘legalizing’ illegality draws resources and energies away from other forms of contention (…)” (2015:165-166).

The same process is happening with regard to present-day state surveillance: large-scale collection of communications and Big Data preventive policing are becoming the “new normal.” At this point in time, it seems difficult to argue that post-Snowden contention has hindered in any significant and lasting way the formidable growth of the surveillance capabilities of the world’s most powerful intelligence agencies.

And yet, the jury is still out. Post-Snowden contention has documented state surveillance like never before, undermining the secrecy that surrounds deep state institutions, prevents their democratic accountability, and helps sustain taken-for-granted assumptions about them. It has provided fresh political and legal arguments to reclaim privacy as a “part of the common good” (Lyon 2015:9), leading courts – and in particular the ECHR – to admit several cases of historic importance which will be decided in the coming months. Judges now appear as the last institutional resort against large-scale surveillance. If litigation fails, the only possibility left for resisting it will lie in what would by then represent a most transgressive form of political action: upholding the right to encryption and anonymity, and more generally subverting the centralized and commodified technical architecture that made such surveillance possible in the first place.