On Weaponised Design

“When people fail to follow these bizarre, secret rules, and the machine does the wrong thing, its operators are blamed for not understanding the machine, for not following its rigid specifications. With everyday objects, the result is frustration. With complex devices and commercial and industrial processes, the resulting difficulties can lead to accidents, injuries, and even deaths. It is time to reverse the situation: to cast the blame upon the machines and their design. It is the machine and its design that are at fault. It is the duty of machines and those who design them to understand people. It is not our duty to understand the arbitrary, meaningless dictates of machines.”1
– Donald Norman, The Design of Everyday Things

User Experience Design has blossomed from a niche industry in the halls of Silicon Valley’s early startup darlings to a force that architects our digital lives. Today, user experience design is wholly responsible for modelling human expression and self-identity, and for enabling interaction and participation online. Design now has as much in common with information security research as it does with behavioural science or aesthetics. By failing to criticise common design practices or to form cooperative relationships with other technology fields, user experience designers are effectively refusing to recognise and respond to traumatic cases of their own work being used to harm users.

Every major technology company has grappled with recent or ongoing uses of its platforms to maliciously target and harm users, while those platforms perform entirely within their expected behaviour. This is weaponised design: electronic systems whose designs either do not account for abusive application or whose user experiences directly empower attackers.

Examples of weaponised design are numerous and horrific: Using a smartphone, a technology reporter uncovers the home address of a friend by interacting with a new Snapchat feature that broadcasts her location on a map each time she posts2. After viewing her friend’s posts, the journalist spends a few moments looking up the location on Google Maps and asks her subject to confirm the address. Her friend is ‘creeped out’ as she realises that, despite Snapchat’s claims to the contrary, her phone has violated her privacy and her trust by effortlessly exposing her home address to all of her followers without her consent. The experience makes international headlines and authorities issue another warning over the app’s potential for targeted misuse.

Meanwhile, in an effort to address targeted harassment of users, the design team at Twitter announces an adjustment that silences notifications sent to trolling victims when they are added to malicious Twitter lists3. The outcry is immediate: removing list notifications allows harassers to compile and share lists of targets undetected, leaving victims unaware that they are shared targets and unable to fight back. The violation is so obvious that the feature is reversed within hours4. Harassment across Twitter continues.

Elsewhere, two Facebook designers write5 about the data-driven tweaks their team has made to News Feed. They conclude that the changes make Facebook meaningfully easier to use, engage with and navigate, but fail to discuss the ethical or sociological implications of their work6, which researchers consider significant7. The article proves popular with the design community. A month later, the US government enters Facebook’s offices with a warrant, looking for evidence of Facebook’s ability to exert major influence on voting behaviour through psychology research enabled by interface and systems design8.

These examples happened in 2017 alone and are all dramatic scenarios of weaponised design. The most common way design is weaponised is through poorly considered tradeoffs, where the ideal user is prioritised over empathetic threat modelling, resulting in features that are at high risk of exploitation. Alongside this year’s Snapchat example, another example of this famously occurred in 2014, when attackers accessed automatic camera roll cloud backups and leaked intimate photos of mostly female victims from Apple servers9. In that case, the critical issue is not the actions of the attackers, but rather one of informed consent. One can assume Apple’s designers took a puritan, anti-sex approach to the problem of centralised photo backup: users don’t sext, and if they do, they wouldn’t object to intimate personal photos being synced to the cloud. As the company transferred millions of users into an opt-out automatic backup service, it failed to articulate the personal implications to its user base.

Design can also be weaponised through team apathy or inertia, where user feedback is ignored or invalidated by an arrogant, culturally homogenous or inexperienced team designing a platform. This is a notable criticism of Twitter’s product team, whose perceived lack of design-led response is seen as a core factor in enabling targeted, serious harassment of women10 by #GamerGate11, from at least 2014 to the present day.

Finally, design can be directly weaponised by the design team itself. Examples of this include Facebook’s designers conducting secret and non-consensual experiments on voter behaviour from 2012 to 2016 and on the emotional states of users in 201212, and Target, which in 2012, through surveillance ad tech and careful communications design, informed a father of his daughter’s unannounced pregnancy13. In these examples, designers collaborate with other teams within an organisation, facilitating problematic outcomes whose impact scales dramatically with the quality of the design input.

These three situations have one thing in common: they are the result of designers failing their users through systems that behave more or less as designed. This reflects a decade-long misrepresentation of the relationship and responsibilities of designers to their users; a dangerous lack of professionalism, ethics and self-regulation; and a failure to understand how multi-disciplinary design is leveraged to exploit and harm the community. As appointed user advocates, designers are yet to embrace the new tools and practices needed to keep working in an increasingly user-hostile digital world.

Corrupting Design

In 1993, Apple Computer’s Human-Computer Interface research team hired cognitive psychologist Donald Norman to work with the company’s early-stage interaction team. While Apple was not the first to recognise the need for human-centric approaches to interface design14, Norman’s role as a ‘User Experience Architect’ marked a subtle but fundamental shift in the practice behind the design of commodified personal computer systems. By drawing on mid-twentieth-century concepts of product design and applying psychology and behavioural science to information architecture, Norman and his colleagues laid the foundations for user-centric software design, contributing to the company’s early successes in the West.

Today, user experience design no longer requires a sociological or behavioural science background, but these origins linger. The field encompasses everything from aesthetics, visual communication and branding to deep system and information architecture, all working in concert to define and anticipate the activities of a user through applied user research. As platforms have become more commodified – especially through mobile touch interfaces – UX designers have grown progressively more reliant on existing work, creating a feedback loop that promotes playfulness, obviousness and assumed trust at the expense of user safety.

The focus on details and delight can be traced to manifestos like Steve Krug’s Don’t Make Me Think, which propose a dogmatic adherence to cognitive obviousness and celebrate frictionless interaction as the ultimate design accomplishment. Users should never have to question an interface. Instead, designers should anticipate and cater to their needs: “It doesn’t matter how many times I have to click, as long as each click is a mindless, unambiguous choice.”15

A “mindless, unambiguous choice” is not without cultural, social and political context. In Universal UX Design: Building Multicultural User Experience, Alberto Ferreira explores the futility of ‘universal design’ in a globalised world, where the expression of design varies wildly. Amongst many cross-cultural examples, his most striking are the failings of Western designers as they adapt their work to users in China or Japan. These hyper-connected middle-class audiences share comparable levels of wealth, connectivity and education with their Western counterparts, but the cultural and aesthetic variances between these societies are pronounced. Designers who produce or adapt work for these populations without a sturdy conceptual framework repeatedly fail their users16. ‘Mindless and unambiguous’ is only true for those who have both the cultural context to effortlessly decode an interface and the confidence that their comprehension is solid. Not only is this dogma an unreasonable constraint, it also frequently fails.

In 2014, Eric Meyer, an influential web developer, was shown a celebratory, algorithmically generated “Facebook Year in Review” featuring images of his daughter, who had died from cancer that same year. He wrote:

“Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves. [...] In creating this Year in Review app, there wasn’t enough thought given to cases like mine or anyone who had a bad year. The design is for the ideal user, the happy, upbeat, good-life user. It doesn’t take other use cases into account17.”

That the distinguished design team at Facebook chose to deploy an application capable of such an insensitive intrusion betrays a lack of diverse life experiences within its ranks, yet this is a comparatively tame example of the homogenous foundations of user experience design. At its extreme, Twitter is often singled out for its initial ideal-user design; by designing with such optimism, it has become optimised for abuse – “a honeypot for assholes”18. Now that the platform is an incumbent in the social media landscape, investor demand for user attention – measured as ‘platform stickiness’ – has created business incentives against modifying the weaponised effects of homogenous ideal design, preventing the implementation of subsequent protections. In direct contradiction to public claims by the company’s leadership, former19 and current20 Twitter employees describe an organisation-wide lack of interest in creating new models to moderate abuse on the platform, or even in deploying already-developed solutions.

As design has become commodified and weaponised by both platform operators and attackers, the response from designers has largely been to rearrange deckchairs on the Titanic. As designer David Rudnick wrote in a series of tweets:

“Its not just that 20th [century] design laid the groundwork for the centralisation of global power, it actively belittled strategies for resistance. It sneered & recast radical voices, voices of colour, of poverty, as aesthetics of failure. Then told us to trust a system it helped rig.”21

Today’s designers continue to maintain the status quo without industry or external criticism or pressure. Whereas other industries have ethical review boards or independent investigative bodies, popular design is apathetic22 to harm reduction or political discussion. Because of this, the industry is yet to fully examine its contribution to the current techno-political climate. In 2017 – as wealthy democracies destabilise and technology is embraced in conflict-stricken societies – the ethical and empathetic failings of user experience design harm users with increasing regularity and intensity.

Responding to a user-hostile world

Design is inherently political, but it is not inherently good. With few exceptions, the motivations of a design project are constrained by the encompassing platform or system first, and by the experiences and values of its designers second. The result is designers working in a user-hostile world, where even seemingly harmless platforms or features are exploited for state or interpersonal surveillance and violence.

As people living in societies, we cannot be separated from our political contexts. However, design practitioners research and implement systems based on a process of abstracting their audience through user stories. A user story is “a very high-level definition of a requirement, containing just enough information so that the developers can produce a reasonable estimate of the effort to implement it23.” In most cases, users are grouped through shared financial or biographical data, by their chosen devices, or by their technical or cognitive abilities.

When designing for the digital world, user stories ultimately determine what is or is not an acceptable area of human variation. The practice empowers designers and engineers to communicate via a common, problem-focused language. But practising design that views users through a politically naive lens leaves practitioners blind to the potential weaponisation of their design. User-storied design abstracts an individual user from a person of lived experience into a collection of designer-defined generalisations. In this approach, their political and interpersonal experiences are also generalised or discarded, creating a shaky foundation that allows assumptions to form from the biases of the design team. This is at odds with the personal lived experience of each user, and with the complex interpersonal interactions that occur within a designed digital platform.
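To make this abstraction concrete, here is a minimal sketch of what a user story might look like when captured as a data structure. The persona, fields and story text are hypothetical illustrations, not drawn from any platform discussed in this essay:

    # A hypothetical user story captured as a data structure. Note what the
    # abstraction keeps (demographic bucket, device, goal) and what it silently
    # discards: threat context and lived experience sit outside the record.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UserStory:
        persona: str                 # designer-defined generalisation
        want: str                    # the requirement, in "I want..." form
        so_that: str                 # the assumed benefit
        threats: List[str] = field(default_factory=list)  # rarely considered

    story = UserStory(
        persona="A commuter aged 25-34 using the iOS app",
        want="I want my location shared with friends automatically",
        so_that="so that meeting up requires no effort",
    )

    # story.threats is empty: the stalker, the abusive ex-partner and the data
    # broker fall outside the designed scope of the system.
    print(story)

Nothing in such a record prompts the team to ask who else might read that location, or what it costs the user when they do.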

When a design transitions from theoretical to tangible, individual user problems and motivations become part of a larger interpersonal and highly political human network, affecting communities in ways that we do not yet fully understand. In Infrastructural Games and Societal Play, Eleanor Saitta writes of the rolling anticipated and unanticipated consequences of systems design: “All intentionally-created systems have a set of things the designers consider part of the scope of what the system manages, but any nontrivial system has a broader set of impacts. Often, emergence takes the form of externalities — changes that impact people or domains beyond the designed scope of the system24.” These are no doubt challenges in an empathetically designed system, but in the context of design homogeneity, these problems cascade.

In a talk entitled From User Focus to Participation Design, Andie Nordgren argues that participatory design is a step towards developing empathy for users:

“If we can’t get beyond ourselves and our [platforms] – even if we are thinking about the users – it’s hard to transfer our focus to where we actually need to be when designing for participation which is with the people in relation to each other25.”

Through inclusion, participatory design extends a design team’s focus beyond the hypothetical or ideal user, prioritising the interactions between users and other stakeholders over abstract user stories. When implemented with the aim of engaging a diverse range of users throughout a project, participatory design becomes more political, forcing teams to address opportunities for weaponised design during all stages of the process.

Beyond better design paradigms, designers must also look outside the field, toward practices that directly criticise or oppose their work. In particular, security research and user experience design overlap significantly in practice, yet the relationship between the two is often antagonistic. Both fields focus primarily on the systems of wide-scale interactions between users and technology, but their stated goals point in opposite directions: design seeks to create the best possible experience for a user, while security seeks to create the worst possible experience for an attacker. Judged by outcomes, however, security research is a form of user experience design. Design should reciprocate, and become a form of security research.

At-risk users themselves are educating each other in operational security and threat modelling techniques. Robert Gehl describes how, increasingly, the threats faced by dark web communities mirror those experienced by the general public, acting as predictors for the future of the internet. In his essay, Proactive Paranoia, he writes:

“Proactive paranoia was forged in the toxic cultural contexts of increasingly militarized states, ubiquitous surveillance, global neoliberalism, and an all-out hustle for online money. In other words, OPSEC politics need not be limited to the dark web. In a world of arrests, scams, fines and fees, constant monitoring, and extreme caveat emptor, suspicion and paranoia are rational responses — not just in dark web markets, but in our daily lives26.”

There is tremendous opportunity for designers and information security researchers to cooperatively apply operational security and threat modelling practices to their work. If adopting participatory design enables greater focus on interpersonal interactions within a network, then threat modelling and operational security practices offer concrete foundations for addressing the socio-political threats that face people within a designed platform.
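As a rough illustration of what such a pairing could look like in practice – a sketch only, using hypothetical adversaries, harms and mitigations rather than any established methodology or any platform discussed above – a design team might refuse to sign off on a feature until it carries an explicit threat model:

    # A minimal, hypothetical sketch of a design-stage threat model for one
    # feature. The feature, adversaries and mitigations are illustrative only.
    feature = "broadcast my location on a map when I post"

    threat_model = [
        # (adversary,           harm to the targeted user,  design mitigation)
        ("abusive ex-partner",  "physical stalking",        "explicit, revocable opt-in per audience"),
        ("harassment campaign", "home address exposed",     "coarse location by default, visible off switch"),
        ("data broker",         "location history resold",  "no retention beyond message delivery"),
    ]

    def review(feature: str, threats: list) -> None:
        """Fail the design review if a feature ships without modelled threats."""
        if not threats:
            raise ValueError(f"'{feature}' has no threat model; do not ship")
        for adversary, harm, mitigation in threats:
            print(f"{feature}: {adversary} -> {harm}; mitigated by {mitigation}")

    review(feature, threat_model)

The value is not in the data structure but in the conversation it forces: every feature is discussed in terms of who it could be turned against before it ships.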

The ‘ethical’ designer

It is critical that user experience design begins to deconstruct the outcomes of our collective body of work, especially as technology becomes more embedded, less visible and more easily ignored. Saitta writes, “All infrastructure is political; indeed, one might better say that all politics is infrastructural; we ignore it at our peril.” Developing new design practices helps to reduce cases of weaponisation through trade-offs in a system’s design, but practice alone is not enough.

Despite the many problematic elements of contemporary design, the community is somewhat capable of self-examination and cultural change. Counterintuitively, within user experience and its related fields, gender inequality seems less pronounced than in other technology fields, albeit against the incredibly low bar those fields have set27. Self-reported research suggests that entry-level roles are both well paid and tend towards equality, with a smaller pay and representation gap between men and women28. This is not to say that the industry is an inclusive field, and it remains unclear whether these trends extend into senior or managerial positions.

The introduction of the Open Code of Conduct29 is one example of industry self-reflection and regulation. Designed to hold the contributors of a project to account, codes of conduct commit members to a minimum level of expected behaviour; they have their roots in organiser and maintainer responses to incidents of harassment at conferences and in open source projects30.

But addressing the problematic internal culture of design teams is not enough. As an industry, we must also confront the real-world socio-political outcomes of our practice. If we accept a code of conduct as necessary, we must also accept a code of outcomes as necessary. We must create ethical frameworks to evaluate our work at all stages, especially once it is alive in the world. Our lack of ongoing critical evaluation of our profession means that design continues to reinforce a harmful status quo, creating exploitable systems at the expense of societies.

User experience is more than just aesthetics and interfaces. It is a form of cooperative authorship, bound deeply to infrastructure; each platform and its designers work together to represent a piece of an individual’s digital self and self-expression within a wider online community. This is a responsibility on a par with publishing literature that transports the reader so fully as to transform their self-understanding, or producing cinema that ensconces the viewer so deeply that at the end there is a moment of pause to remember which understanding is reality. The online lives we collectively live are inherently mediated by technology, but how we experience this is also mediated by design.

Technology inherits the politics of its authors, but almost all technology can be harnessed in ways that transcend these frameworks. Even in its harmful and chaotic state, it is still possible to marvel at the opportunities afforded to us through design. But arrogant practices that produce brutally naive outcomes must be transcended to facilitate empathetic products and broader platforms. We must stop making decisions inside a rose-coloured echo chamber. As designers, it is time to collectively address the political, social and human costs of weaponised design. ■

Cade researches the political and social role of design at Tactical Tech. He has spent ten years examining systems, aesthetics and interface design and their influence on infrastructure, decentralisation and online self-identity. Prior to Tactical Tech, he led design at SpiderOak, collaborated with Open Whisper Systems and worked on projects in Australia, Singapore, Korea, Japan and the US.

Special thanks to Rose Regina Lawrence, Semanur Karaman, Dalia Othman, Louis Center, Ignatius Gilfedder, Arikia Millikan and Maya Ganesh for their assistance with this essay.

Endnotes

1: Donald Norman, The Design of Everyday Things, Basic Books, Revised 2013.↩︎

2: Dani Deahl, Snapchat’s newest feature is also its biggest privacy threat, The Verge, 2017.↩︎

3: Twitter first announced the change via its own platform. Responses from prominent critics are threaded contextually with the original tweet.↩︎

4: Sarah Perez, Twitter quickly kills a poorly thought out anti-abuse measure, TechCrunch, 2017.↩︎

5: Shali Nguyen & Ryan Freitas, Evolving the Facebook News Feed to Serve You Better, Medium.com / Facebook Design, 2017.↩︎

6: While not a direct criticism of the News Feed team’s work, Zeynep Tufekci’s Engineering the public: Big data, surveillance and computational politics (2014) was an early influential investigation of Facebook’s practices.↩︎

7: Jason J. Jones, Robert M. Bond, Eytan Bakshy, Dean Eckles & James H. Fowler, Social influence and political mobilization: Further evidence from a randomized experiment in the 2012 U.S. presidential election, PLOS ONE, 2017.↩︎

8: At time of writing, the US Justice Department had only just begun its investigation into electioneering via social media platforms and ad buys.↩︎

9: Relevant ‘celebgate’ context, detailing the full extent of the phishing attack and resulting fallout.↩︎

10: Matias, J. N., Johnson, A., Boesel, W. E., Keegan, B., Friedman, J., & DeTar, C., Reporting, Reviewing, and Responding to Harassment on Twitter, Women, Action, and the Media, 2015.↩︎

11: ‘#GamerGate’ is an extremely visible example of gendered online violence.↩︎

12: Adam D. I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock, Experimental evidence of massive-scale emotional contagion through social networks, Proceedings of the National Academy of Sciences of the United States of America, 2014.↩︎

13: This is an anecdote in a 2012 New York Times piece, How Companies Learn Your Secrets, and is an early publicised example of big data and corporate surveillance on the middle class. When initially published, this story went viral.↩︎

14: There were several other key players responsible for pioneering graphical user interfaces and user experience, including Susan Kare, Alan Kay, Larry Tesler, Bob Taylor, and many others.↩︎

15: Steve Krug, Don’t Make Me Think, New Riders Press, 2000.↩︎

16: Alberto Ferreira, Universal UX Design: Building Multicultural User Experience, Morgan Kaufmann, 2016.↩︎

17: Eric’s experience was not unique, nor was he the only person to write about it. Facebook’s Year in Review has generated annual criticism since it debuted.↩︎

18: Charlie Warzel, “A Honeypot For Assholes”: Inside Twitter’s 10-Year Failure To Stop Harassment, Buzzfeed, 2016.↩︎

19: Criticism by @saffree, a former Twitter employee.↩︎

20: Joshua Cohen (@heyjoshua) was a current Twitter employee while posting criticism of the company’s inaction against abuse.↩︎

21: Quoted from a Twitter thread by @David_Rudnick.↩︎

22: This 2016 collection of popular design writing is an example of the industry’s fixation on more trivial issues as larger societal problems were beginning to converge.↩︎

23: A deeper introduction to user stories and the Agile software development methodology.↩︎

24: Eleanor Saitta, Infrastructural Games and Societal Play, Dymaxion.org, 2016.↩︎

25: Andie Nordgren, From User Focus To Participation Design, Alibis for Interaction, 2014.↩︎

26: Robert Gehl, Proactive Paranoia, Real Life Magazine, 2017.↩︎

27: Kate Crawford, Artificial Intelligence’s White Guy Problem, New York Times, 2016.↩︎

28: The majority of employment and gender reporting is presented by industry bodies. In this case, the cited research was published by the User Experience Professionals Association, a US organisation.↩︎

29: For an introduction to codes of conduct within the open source software community, see the Open Source Code of Conduct guide.↩︎

30: In 2015, GitHub adopted codes of conduct as an official feature and policy, influencing the further spread of the concept.↩︎