Shrinking Civic Space: A Digital Perspective


The concept of shrinking civic space has been explored by civil society experts, policy makers and activists for more than a decade. During that time, the UN Human Rights Council’s Special Rapporteurs on the rights to freedom of peaceful assembly and of association have highlighted the increasing threats and restrictions to civic freedoms. Their reports indicated that internationally guaranteed freedoms enabling participation in democratic processes were under threat. Several international non-profit institutions have also confirmed the trend. Freedom House declared 2019 the 13th consecutive year of decline in global freedom, citing curbs on civil liberties. In its 2018 State of Civil Society report, CIVICUS highlighted an alarming situation for civil society in 109 countries with closed, repressed or obstructed civic space, citing the detention of activists as the most frequently used tactic, followed by attacks on journalists, censorship, excessive force, protest disruption, harassment, intimidation, prevention of protests, and bureaucratic and legislative restrictions. In 2017 the Transnational Institute published a framing paper that provided a comprehensive analysis and critique of “shrinking space” as a trend, defining shrinking space as “a concept or framework that captures the dynamic relationship between repressive methods and political struggle.”

These reports successfully characterise the restrictions and attacks imposed on the physical spaces of civil society. This paper looks at shrinking civic space from a digital perspective, in particular the role that digital technologies can play in restricting the spaces of civil society organisations and their activities.

When it comes to digital rights, repressive laws have proliferated, negatively impacting freedom of expression and the privacy of politically engaged actors. This is matched by increased surveillance and more arrests and prosecutions based on internet behaviour. Current debates largely treat digital technologies as just one of the many pillars of the restrictive methods employed by governments and non-state actors – while largely ignoring how digital technologies are intertwined with all our social, political and economic interactions. Whether through smartphones, computers, smart cities, CCTV cameras or government surveillance, technology provides the infrastructure with which governments and third parties attempt to limit political participation.

The escalation of attacks on civil society organisations (CSOs) and on the spaces for political and civic participation, in parallel with a resurgence of populist politics and parties and their rise to power in some countries, is cause for concern. The attacks are no longer confined to countries traditionally regarded as having compromised democratic systems. Nowadays, shrinking spaces affect not only local NGOs and CSOs but also international organisations such as Greenpeace and Amnesty International. The trend can be seen in denials of funding as much as in the government-sponsored spread of stigmatising misinformation (as seen in the case of US reproductive rights groups like Planned Parenthood), restrictive laws and police incursions (such as raids of NGOs in Hungary) and the criminalisation and confiscation of resources (exemplified by the Italian government’s impounding of a migrant-rescue boat belonging to a Spanish NGO).

The increasingly normalised surveillance and analysis of troves of citizens’ data by governments and the private sector also contributes to the shrinking space of civil society. Data about individuals and communities is collected from sources ranging from social media activity to Smart City and IoT technology and CCTV cameras. The ways that members of CSOs communicate with others, express their views, organise events, mobilise support, build solidarity, and process, create, store and exchange information are all inevitably part of the digital sphere. Consequently, data-driven practices – either by governments or corporations – significantly impact activists, CSOs and social movements. To ensure their safety and wellbeing, activists need to navigate a trade-off between visibility and anonymity due to risks and threats from state and non-state actors.

Beyond the surveillance and monitoring of data and activities by governments and private companies, activists and CSOs have experienced an escalation in covert attacks using malware, phishing and spyware. In Egypt, for example, CSOs were the primary targets of an organised phishing attack. Such attacks are not restricted to so-called “countries of risk” but can extend globally. Amnesty International documented Operation Kingphish, in which an unknown actor created a fake social media persona and, according to a report by a senior technologist at the organisation, used phishing attacks to gain access to “dozens of journalists, human rights defenders, trade unions and labour rights activists, many of whom are seemingly involved in the issue of migrants’ rights in Qatar and Nepal.”

With this landscape as the backdrop, this paper focuses on the threats, risks and implications of shrinking civic space from a digital perspective. We examine how tech platforms, data collection, surveillance and other digital means can threaten, restrict or curtail the work of CSOs and the spaces for political and civic participation. It is important to understand this landscape because many CSOs depend on tech platforms and tools for their work, including their outreach and research. Furthermore, their activities generate data that is collected and can be used to undermine their causes, and in certain cases, affect individuals’ wellbeing and personal safety. Though technology and digital platforms have helped CSOs expand their efforts, they have also expanded the area of risk and led to an increase in the invasiveness of the methods used.

Digital technologies have become a fundamental part of CSOs' ability to collaborate, raise awareness and mobilise their constituents. Everyday core functions of their work, such as travel, organising events, collaborating with other actors and conducting outreach, are enabled and mediated through centralised, free or low-cost digital tools that not only create data traces but also operate on a data-based business model. Mechanisms to receive funds and garner support, from financial transactions and donations to membership and participation, all leave online traces in a networked environment which can be monitored, exposed or appropriated. In an unstable and shifting political environment, understanding how this data is collected and utilised is essential for making informed decisions and weighing up potential risks for the future.

The adversaries who most commonly use or could use technology and data for invasive attacks against CSOs can be grouped into four categories: state actors, private corporations, individuals and organised groups. State actors range from police forces to cyber-surveillance and intervention departments, ruling parties, and the laws and regulations they enact. Private corporations include for-profit, privately owned companies that collect data as part of their business models and sell it either as profiles or for targeting, which can be used for political manipulation. Individual actors may include those who target CSO personnel on social media, possibly endangering their wellbeing through doxxing attacks, for example, or hackers who seek financial gain through phishing attacks or who are interested in data to sell or leak. Organised groups may include those with opposing political views, motivated by populist politics, religious or political fundamentalism, conservative tendencies, or hate speech and discriminatory attitudes.

The digital divide

It is estimated that four billion people worldwide have internet access; however, that access is not equally distributed. People in the Global South and in rural areas pay a steep price for slow and unreliable connections, and many have no access at all due to a lack of basic infrastructure. Furthermore, countries where mobile internet is most expensive have the lowest percentages of women online. People who only have mobile internet access cannot as easily write essays, apply for jobs or perform other actions that can influence economic growth. This divide also further excludes minorities that have a long history of voicelessness in civic processes.

Large platforms including Facebook and Google have concentrated internet usage within their services and have become – in many cases in the Global South – the primary means by which people search for and receive news and information. In rural areas and among indigenous peoples, these platforms are also crucial tools for organising and mobilising. In these regions, the values of Silicon Valley’s tech culture often conflict with local cultures, needs and values. As digital rights activists Paz Peña and Joana Varon state: “Let’s make no mistake: that ethnic minorities have to change their names to be understood as ‘western names’ or that LGBT people should inform a private company of their biological sex is not a marginal negative consequence of Facebook’s policy. It is, on the contrary, a segregating policy that, by persisting, will end up marginalizing every dissident person to the values of Facebook which, it seems, is no different from those embraced by the white men of Silicon Valley.” As Safiya Umoja Noble pointed out in her book Algorithms of Oppression: How Search Engines Reinforce Racism, Google Search is primarily an advertising tool, yet it is also the primary gateway to the internet (especially in the Global South). With around 90 percent of the global market share for searches on mobile, a single company ultimately defines the way people worldwide, across different cultural contexts, explore the internet.

When these large tech companies set the agenda for the digital sphere, it can become more difficult for activists and CSOs to counter them. A 2018 report published by The New York Times showed how Facebook’s algorithm-driven newsfeed can have concrete effects on people’s lives: “by pushing out whatever content drew the most engagement from users, it does more than amplify existing prejudices or boost extremists [...] In countries with weak institutions but where Facebook use is widespread, that can allow misinformation to run rampant. And in societies with histories of deep social distrust, it can turn deadly.”

Governments (including the European Union, where the GDPR has set the standard for privacy regulations) have largely relied on pressuring big tech corporations like Facebook, Twitter, Microsoft and Google to regulate and fight hate speech and terrorist propaganda themselves. Unfortunately, this has led to cases of censorship and limits on the freedom of expression of communities and minorities. Although companies’ decisions about how to moderate content have consequences for democracies around the world, such decisions are not transparent. The reliance on tech companies to self-regulate depends on their commitment to “educate their users,” which includes “identifying and promoting counter-narratives” to what they consider hate speech or prejudice. This means that companies that initially acted as platforms enabling the dissemination of third-party content now also play the role of promoters of certain content. As researcher Wolfie Christl states, such programmes by big companies are “the perfect representation of how principles such as ‘choice’, ‘transparency’ and ‘educating users’ are being used as a placebo by today’s personal data industry.”

Data traces and activism

Through our use of digital technologies, we expose data about ourselves and our surroundings, revealing details of our movements, activities, behaviours and preferences. When this information is subject to surveillance, collection and profiling by the state, it can have negative effects on freedom of expression, freedom of peaceful assembly and freedom of association. Data collected from various sources (apps, social media profiles etc.) is analysed and cross-referenced at unprecedented rates by private companies and public institutions alike, and can be aggregated into profiles to reveal details and patterns about a person or organisation. Irrespective of their accuracy, these profiles may shape the treatment or response that individuals or organisations receive from state and non-state actors. For example, attendance at conferences might pose additional risks to CSOs or their communities because they are subject to increased data collection, scrutiny and surveillance when travelling. As Tactical Tech’s research underscores, “The contacts made during a trip, or the events attended and the documents obtained, can be of interest for a security apparatus. Journalists and researchers could be a prime target for such attacks.”

Risk by association

CSOs, and the individuals who directly collaborate with and support them, live within a broader environment of pervasive data collection, making it impossible to separate personal data traces, such as those created by everyday habits and life patterns, from professional ones. This means that CSOs require a good understanding not only of how their professional activities are impacted by the mass collection of data, but also of how this intersects with their personal choices. Digital security, encryption, alternatives to existing platforms and safer tech practices are not only about protecting the actual data that CSOs handle – which is a legitimate and crucial part of the work. They are also about self-protection, both for individuals and organisations.

CSOs work with sensitive data when it comes to freedom of expression and association, the right to protest and privacy, as well as personal data and other sensitive information about the people they work with. Paradoxically, using technology as a safeguard against threats and attacks can also put them at higher risk of government surveillance or crackdowns. In recent years there has been an increase in attacks on secure communication, encryption of devices and emails, and on capacity building in this field. In Spain, when 11 individuals were arrested for allegedly forming part of an anarchist group and damaging public property, the court admitted their use of an encrypted email provider as evidence of criminal activity. In 2017, Turkish authorities arrested digital security trainers and members of Amnesty International who were present at a digital security workshop. These are just two cases. It is becoming more common for state actors to claim the use of encryption and secure communication as reasons to investigate CSOs. But not using these tools can leave them vulnerable to third-party attacks, and put the sensitive data they handle under threat.

Governments across the world continuously attempt to enact anti-encryption laws or give authorities legal grounds to hack. The Egyptian government blocked the secure messaging app Signal, and the German parliament passed a law that allows authorities to install software on phones to intercept messages before they are encrypted. Though this might be framed as a “counter terrorism” measure, the law also permits spying on individuals suspected of tax evasion or sports betting fraud. On 20 June 2018, German security services raided the homes of several board members of Zwiebelfreunde, a non-profit group that supports privacy and anonymity projects. According to media reports, the raids “took place after Krawalltouristen, a left-wing blog which translates to ‘riot tourists,’ had called for protest action around the annual convention of the right-wing Alternative for Germany (AfD) party, the largest opposition party in the German parliament.”

From using Skype to hold international meetings, to using Google Docs to work on joint projects, technological tools designed by large tech companies can enable more effective communication and collaboration among activists, while presenting new challenges. Although telecommunication networks and the internet have made international collaboration and solidarity-building quicker, easier and more widespread, the business models of the companies that design data-driven technologies often expose activists and social movements to particular risks and threats. If activists and CSOs don’t implement measures to protect their privacy, their phone calls, text messages, emails, VoIP calls, video chats, and social media messages may be vulnerable to surveillance.

Digital censorship

In countries where governments hold a tight grip over dissent, technology platforms serve as an important outlet for documentation, campaigning and solidarity-building. At the same time, though, arbitrary restrictions on such platforms can present a significant barrier to this work. In the Syrian town of Kafr Nabel, for example, the Local Coordination Committee and media centre ran a Facebook page used to disseminate news, document human rights abuses and provide safety tips for residents. In 2014, however, Facebook shut down the page – along with a dozen other opposition pages – on the basis that they displayed graphic imagery. This move negatively impacted the flow of information among international human rights organisations and their local partners, who rely primarily on Facebook pages to stay up to date on events in Syria and provide solidarity to those affected.

State and non-state actors have also curtailed the work of CSOs by permanently or temporarily shutting down internet-enabled services. Although the United Nations issued a statement in 2015 declaring that internet “kill switches” are a clear violation of international human rights law even in times of conflict, they are still implemented across the globe. Following nationwide protests in 2016, Morocco’s three telecommunications companies barred access to VoIP on mobile networks, impacting communication on WhatsApp, Viber and Skype. This meant citizens were forced to rely on local services provided by telecommunication companies. The same year, Etisalat shut down Egypt’s access to Free Basics, Facebook’s zero-rated internet services, as part of an attempt by the government to silence dissent in the weeks leading up to the anniversary of the 2011 uprising and fall of the Mubarak regime. The governments in Egypt and the United Arab Emirates also censored Signal, which activists heavily rely on to communicate and collaborate. In January 2018, Iranian authorities disrupted internet access across the country and blocked Instagram and the encrypted messaging app Telegram, which had been instrumental in allowing activists and opposition figures to reach their constituencies and international partners.

The double-edged sword of social media

Social media platforms have become an integral tool for CSOs. Organisations depend on them to share information, communicate and engage with their supporters, organise events, measure impact and response based on the analytics of those platforms, and even collect donations. Though these platforms have proved helpful, they have also raised concerns regarding the harvesting of data, which is analysed and used by the corporations themselves, by third-party companies or by governments.

Government requests for data from and about social media users have increased over the years. According to Facebook’s Transparency Report (January–June 2018), the company received 103,815 requests from governments for data, which can range from an account’s entire history to the list of IP addresses from which a user connected to Facebook or the personal details behind an account.

In parallel, there has been an increase in arrests based on social media behaviour when it intersects with other restrictions imposed by a government. Whether the arrests are directly linked to the data requests remains to be seen, but it is unrealistic to assume there have been no cases where the two are connected. The Egyptian government has a record of monitoring social media and taking action based on what people post, as in the case of women’s rights activist Amal Fathy. According to Amnesty International’s report: “Amal Fathy was arrested at around 2:30 AM today [11 May 2018], along with her husband Mohamed Lotfy, a former Amnesty International researcher and the current director of the Egyptian Commission for Rights and Freedoms. Police raided the couple’s Cairo home and brought them both to the police station, along with their three-year-old child.” Fathy was arrested for spreading ‘fake news’ after she posted a video on Facebook in which she criticised the Egyptian government for failing to protect women.

In another instance, during the World Trade Organization meeting in Buenos Aires in December 2017, over 60 activists, NGO officials and journalists who had been accredited by the host organisation had their credentials revoked by the Argentinian government. In a statement responding to the incident, Argentina’s foreign ministry claimed that some attendees who had their accreditations revoked had made “explicit calls for manifestations of violence through social media.”

A chilling effect

The risks, threats and attacks mentioned above have serious physical repercussions on the safety and wellbeing of CSO personnel. They also risk creating a chilling effect on a crucial part of society. CSOs are at the forefront of defending basic human rights and liberties such as freedom of expression, freedom of assembly and association, holding power to account, raising awareness, promoting participation and effecting change – all of which are fundamental to a free and democratic society. Furthermore, the digital space can be an important catalyst for wider civil political participation in physical spaces. When the very space for such work is attacked, restricted or shrunk, it has repercussions for civic participation in general.

Though we are looking at this from a digital perspective and on an organisational level, it is important to keep in mind the impact on people’s lives. Digital attacks and restrictions affect individuals and their families, and may play a role in their decision to continue to do their work, change tactics, or quit due to burn-out, fear of persecution and prosecution, or loss of financial means. When we take this closer look, we see the chilling effect in action. In a paper published by the University of York titled “Families and Loved Ones in the Security and Protection of Defenders at Risk”, interviews with human rights defenders from Colombia, Egypt, Indonesia, Kenya and Mexico revealed the toll that attacks from their opponents can have on them and their families. They explain how leaks of personal information, fabricated news, and direct threats have discouraged or deterred them from their work. Digital technologies make these kinds of tactics even more pervasive and devastating.

Vulnerable minorities and the digitisation of existing threats

Certain groups are at even greater risk from shrinking space, simply due to their gender, race or sexual orientation. Women journalists, for instance, are subject to more online abuse and threats than their male counterparts. A study published by Demos revealed that “women (journalists) received more abuse [on Twitter] than men, with female journalists and TV news presenters receiving roughly three times as much abuse as their male counterparts.” This increased risk of abuse also applies to CSOs focused on women’s rights. In certain countries, the data of a CSO that focuses on women’s rights could compromise the safety of the women who work there as well as those who receive its support. For years, doxxing has been used against feminist and women’s rights activists and organisations; more recently, it has been adopted by far-right extremists to attack those they consider adversaries, including students, journalists and university professors. Doxxing is one of many ways of compromising a CSO and thereby shrinking the space for its activities. Slander and smear campaigns, intimidation, blackmail, and threats against individuals, organisations and, in some cases, their families and close networks are other means used to attack civil society or intimidate activists.

In the US, a major organisation concerned with women’s reproductive rights, Planned Parenthood, has frequently been the target of attacks both online and offline, including a DDoS attack on its website. Similarly, Red Salud de las Mujeres Latinoamericanas y del Caribe (RSMLAC) (Health Network of Latin American and Caribbean Women) was targeted with attacks ranging from the hacking of its website to actors falsely reporting its content on Facebook, which led to the suspension of its page during a crucial public outreach campaign. In an interview with GenderIT, RSMLAC general coordinator Sandra Castañeda Martínez said, “If we do not stop these aggressions, if we do not fight to maintain these spaces, we run the risk of running out of voice.”

These cyber and digital attacks are not isolated cases; they can translate into physical violence. For example, a 2018 attack in Toronto that took the lives of 10 people was perpetrated by a self-proclaimed member of the “INCEL Revolution,” a culture of misogyny fomented on online platforms including Reddit and 4chan. Discussions on these platforms have translated into physical violence against women and women’s organisations, posing serious threats not only to their voices online but to their lives.


We can assume that the trend toward shrinking space will accelerate, particularly given the growth of smart cities and networked devices. Big data is increasingly prevalent: besides smartphones, data can be collected from IoT components in common spaces, CCTV cameras on the streets, and sensors embedded in garbage cans, street lights or retail screens. Data collection is not restricted to urban spaces; it also includes technology in rural areas, such as sensors installed in tractors. Even when the information relates to issues of public importance in developing countries – such as data on road networks or vital resources like water and land – it remains hidden by contract rules, and citizens cannot access or benefit from it. At the same time, there is no way for users to “opt out” of providing such data, whether around the city or inside their homes.

When it comes to digital-based repression and the use of surveillance and data collection to impose restrictions, there is a striking lack of accountability. Some of the main challenges of holding power to account include:

  • Methods used to surveil CSOs tend to be opaque or covert, deployed without the knowledge of the target organisations or the general public and, more often than not, without the knowledge of a judge or monitoring authority. Even when a country has a judicial procedure for wiretapping an organisation, cyber surveillance either lacks the necessary laws or is framed within the wider scope of “intelligence” and “counter-terrorism”.

  • A lack of clearly defined laws and regulations setting out the procedure by which a governmental actor must obtain a warrant or permission from an independent judicial body. There are also few institutions or bodies through which organisations or individuals can safely denounce surveillance, demand access to legal help via a lawyer, or request an investigation in cases of suspicion.

  • The clandestine nature of many attacks gives the actor the opportunity to pin the accusation on other adversaries, hired actors, or simply the “unknown”, making accountability difficult to attain.

  • The inaction of platforms and tech companies in response to these concerns and accusations, especially when a government or a powerful political party is involved.

  • The dependence of tech platforms (for example social media platforms) on government authorisations to operate: a powerful negotiation card in the hands of willing regimes.

  • Cooperation between certain platforms and governments, which results in policing of the platform for the best interests of a government against critical voices, civil society organisations that don’t follow the political interests of the ruling party, or press freedom.


Technology is embedded in all aspects of our lives, including activism and civic participation. It is important for CSOs, activists and their communities to develop an enhanced understanding of the impact of digital technologies on their work and of the global trend of shrinking civic space.

Despite a growing body of literature on the impact of technology on civil society, we still lack a holistic, practitioner-level analysis of how technology exacerbates restrictions on activists, civil society organisations and social movements. This paper illustrates that broader global trends restricting the operating space for rights-based groups are closely intertwined with data-driven technologies, which are instrumentalised by governments and non-state actors to curb opposition and dissent. We hope this paper will start a discussion on a global level and motivate our partners and constituents to prioritise understanding the role of digital technologies in restricting their operating environment and the safety and wellbeing of their constituents.

By Tactical Tech

Contributors: Andrea Figari, Cade Diehm, Rose Regina Lawrence and the Tactical Tech team

Copy Editor: Natalie Holmes

This work has been made possible through the generous support of Tactical Tech's funders.