Overcoming Tech Exceptionalism: How to Improve Societal Impact by Technology Firms in Fragile and Conflict Settings

By John E. Katsos and Jason Miklian - 09 January 2019

Social media and technology companies are quickly branching beyond the developed world. As their need for growth expands their geographic footprint, tech firms increasingly find themselves working within fragile and conflict-affected states to expand global market share. But in entering these markets, these firms often ignore the difficult lessons learned by companies in other industries entering similar (or even the same) types of states, a phenomenon we describe as “tech exceptionalism.” As a cure, we offer the lessons of firms in other industries operating in fragile and conflict-affected states, combined with the particular strengths that tech companies, and social media firms especially, might bring to do no harm and perhaps even to enhance peace in these locations.

Introduction

In August 2018, social media giant Facebook took what it called an “unprecedented” step, banning 18 Myanmar military officials from using its platform to spread messages of genocide against the country’s Rohingya minority. Facebook human rights manager Alex Warofka called the decision “courageous,” and investors like Norway’s $1 trillion oil fund applauded Facebook’s transparency as it offered solutions that ensured continued operations. (1) Facebook also commissioned and released an independent report on the matter. Executives promised to do better next time, tinkering at the margins to produce an action plan for a “better user experience” in the country. For Facebook, the firm’s role in the genocide was simply an unfortunate and unforeseen side effect, one on which it is “making progress – with better technology.”

But this feel-good story has a problem – it’s based on deception. Myanmar’s ethnic cleansing began in 2014, four years before Facebook did anything about it. Rights activists repeatedly notified Facebook about the military’s use of the platform for violence. By 2018, the hatred enabled by Facebook had led to the killing of at least 10,000 people and the displacement of over 700,000. During this same period, Facebook’s user growth in Myanmar was over 2,500% and over 90% of internet users in Myanmar used the platform – remarkably, without a single Myanmar-based employee. By waiting until 2018 to implement its ban, Facebook ensured that neither the violence nor its own growth was hindered.

Facebook is not an outlier, but a figurehead of technology firms treating users as commodities. It followed the blueprint of what is considered best practice by a socially responsible technology company: allow users to use the platform as they wish in the name of free expression, even if that includes socially destructive actions. Then intervene only if sufficient numbers of policymakers condemn the company’s actions, typically long after the negative consequences have occurred. In contrast to their initial “move fast and break things” growth strategy, technology companies seem to follow a different strategy for their social obligations: “move slow and let someone else pick up the pieces”.

This approach is remarkable given the language that tech firms use to describe their services and industry. ‘Disruption’ and ‘innovation’ are not just buzzwords, but ingrained assumptions within nearly every tech startup. Slogans like Facebook’s “Bringing the World Closer Together” and Apple’s “Think Different” typify how the tech industry presents itself as instrumental to a new global operating system. The message is intoxicatingly simple: tech makes the world a better place, and does it in ways you haven’t even thought of.

But if Facebook truly cared about the damage it was doing to the Rohingya, why didn’t it act when it was notified in 2014? And why did the international investment and policy communities celebrate a response that was too little, too late?

The answers lie in what we call tech exceptionalism, the pervasive idea that technology companies can bypass the social lessons of previous industries because they are a unique breed of business.

In conflict contexts, the failure of technology to achieve its lofty aims – often through its use and misuse by conflict actors – can mean violence, suppression, and death. Tackling tech exceptionalism requires challenging the hubris of many tech firms that treat their products as panaceas for complex social problems. It also requires incorporating hard-won lessons from other industries to maximize the chances of positive impact while minimizing the possibility that those products will be used to harm. Here, we offer a brief glimpse of what such an approach might entail.

The Myths and Assumptions of Tech Exceptionalism

Tech exceptionalism is rooted in an old idea that has again come to the fore: the belief that there are technological solutions for the world’s complex social problems. We see this mentality, often rooted in what Evgeny Morozov has called “cyber-utopianism”, among today’s tech titans, from Tesla founder Elon Musk’s lofty claims to rocket or tunnel humanity to utopia, to the dozens of startups promising to ‘reboot’ or otherwise disrupt democratic procedures, bypassing imperfect politicians and institutions in the process.

But there is one realm where technology companies, especially in social media, are truly exceptional. Nearly all western social media firms and new tech startups argue that their products are forces for global social good. When they enter new markets under the promise of connecting people, of delivering essential needs, or of ‘doing good by doing well’, they do something that almost no other industry does: they claim that social good is a core attribute of their business.

The most dangerous assumption in tech exceptionalism is the belief that new technologies can be deployed wholesale in fragile and conflict-affected states to deliver positive societal change. Twitter CEO Jack Dorsey claims that tech firms inspire “revolution” through importing their values and purpose. Yet he also celebrated his own Buddhist retreat in Myanmar without mentioning the genocide, or the fact that many of the Myanmar military officials that Facebook banned have simply migrated to Twitter.

‘Social good’ myths in Silicon Valley are pervasive, from women in rural villages being empowered by using mobile technologies to social media platforms as community resistance vehicles. Unfulfilled promises of positive impact range from Google’s Internet Saathi initiative in rural India to pro-democracy protests in Moldova, Iran, Egypt, and Ukraine. These myths communicate that modern technology companies are enablers of social change of the liberal, democratic type. Implicitly, they promise that their app can do in a few weeks what countries couldn’t do themselves in a few decades.

Yet these myths leave out two important points.

First, technology companies need users. Profitability for most tech products, from Facebook to Bitcoin, is pinned to Metcalfe’s Law, which describes the network effect required for most internet-enabled technologies. Most are also dependent on advertising for revenue, and the more users in the network, the more profitable the enterprise. But in conflict settings, the very human rights abusers that a platform might want to ban are often the gatekeepers who can grant access to society more broadly. Thus, firms strike devilish deals to gain access to the market, promising themselves – and any others who will listen – that exposing these miscreants to their western moral compass will eventually help them see the light of democracy and human rights.
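To sketch the arithmetic behind that law (our illustration, not any firm’s internal model): Metcalfe’s Law holds that a network’s value grows roughly with the square of its user count, because n users can form n(n-1)/2 – on the order of n²/2 – possible connections. Doubling a user base thus roughly quadruples a network’s value, which is why an untapped market of tens of millions of potential users is so commercially irresistible, whatever its political conditions.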

Second, bad actors are acutely aware of the dual-use nature of these new technologies. Authoritarian governments are among the most sophisticated and dominant users of technology, with hackers funded alongside missiles and firewalls as strong as physical walls. For tech firms, entering such markets means running on the local government’s internet, the underlying backbone that they need but cannot build themselves. Governments can use the same technology as an intelligence-gathering tool, using metadata to find the exact location of rabble-rousers along with details about whom they associate with. Such governments can use the very technologies designed for “empowerment” to disempower more effectively than ever.

Technology can be used for good or evil. To wit: of the five “revolutions” conducted over social media since 2009, all but Tunisia’s were ultimately dispatched handily by authoritarian regimes. In Moldova and Ukraine, it was the shift from social media alone to word of mouth and community organizing that eventually got those revolutions off the ground three years later. In Iran and Egypt, the use of social media in lieu of community organizing allowed both countries’ governments to more effectively identify and punish those who organized and participated in the protests. In Egypt in particular, social media helped accomplish something that five decades of repression could not – the complete decimation of the Muslim Brotherhood.

How Other Industries Do Business in Fragile and Conflict Settings

Many of us know these pitfalls. How, then, can we overcome them to deliver better guidance to tech firms? Here we draw upon our research on companies in other sectors working for peace and development in conflict areas.

In industries like extractives and consumer goods, most firms operating in impoverished and fragile places also try to build societal benefit. These firms grapple with hard problems, like how to protect vulnerable societies in operational areas without ‘taking sides’ in local conflict. Yet these industries maintain higher human rights standards and a greater degree of social impact in their operations than the tech sector, with the possible exception of sourcing and supply chains. This is partially due to increased scrutiny, but also to increased expectations that their social impact initiatives will be monitored and assessed. Unlike tech firms, these companies rarely claim to deliver such benefits up front.

In short, we see humility. Humility that was ‘earned’ through failure, born out of decades of mistakes and corporate violations, from conflict minerals to disasters like the Rana Plaza garment factory collapse in Bangladesh. Once a firm’s societal standing and reputation are damaged, it can take decades to recover.

We note two hard-won lessons of this humility.

First, firms must engage with whoever governs the territory, in the same way that they do in a developed country. Social media does not subvert governments through its very use. And the idea that a morally sophisticated western firm can hoodwink local government yokels into delivering better human rights is bound to backfire. For example, 20 years before Facebook entered Myanmar, another California company bringing cutting-edge technology made the same bet. Unocal attempted to build a pipeline using state-of-the-art technology that would require little local presence in a difficult country. The government simply used the ground-clearing as an excuse to accomplish a long-sought goal: widespread human rights abuses against the Karen ethnic minority.

Governments can and do use technology to accomplish the same aims that they did before the tech existed. If those who govern the territory are serial human rights abusers or war criminals, new technologies will simply make those actions more sophisticated. Unocal learned that lesson the hard way. Similarly, if the government lacks real power or control, you can again expect more of the same from the deployment of your technology. Tech companies are not exempt.

Second, something that extractive firms in particular have learned is that entering fragile, conflict-affected markets is not only more expensive but carries the added risk that you could do everything ‘right’ and still fail. One example is the oil company Chevron, which in 2005 implemented an extensive community engagement program across 95 different communities in Nigeria to better secure operations. Chevron allocated nearly $100 million to the endeavour, followed corporate best practice to the letter, and was confident of a successful social return.

Ten years later, Chevron’s facilities were being attacked by militants as a symbol of the Nigerian government, and unrest was endemic amongst the local communities left out of the program. Chevron’s risk assessment had looked only at risks to the company (security, political, and operational). Working with risk assessment teams but not peace practitioners, Chevron did not think to ask what the local definition of peace was, or how its well-intended activities would impact fragile socio-economic structures. Had Chevron better understood how its largesse would upend local ecosystems of conflict and power, its resultant actions – and local standing in the Niger Delta – would likely be completely different today.

That’s why companies in other industries are very careful when entering fragile markets. The reward can be worth the risk – more users, greater network effects, more revenue – but tech firms are just as exposed to the risks of working in these countries as their more established counterparts. The faster tech firms adopt these lessons, the faster they can be a real force for good in conflict-affected and fragile states.

Tech Firms Are Different in Some Ways, but Not in the Way They Think

Unfortunately, most tech firms seem utterly uninterested in these lessons. They promise to shoulder the responsibility of positive social development and to mitigate any negative consequences their company might generate. But then they don’t follow through.

Leaving a profitable market, no matter how the product contributes to conflict or rights violations, is almost never realistically considered. Of the tens of thousands of cases of tech firms working in conflict or human rights-sensitive contexts, there are five documented cases of firms choosing to exit. Even in those cases, like South African mobile firm MTN weighing exits from Syria and South Sudan, issues like corruption, profitability, and infrastructure were more important to the decision than the harmful application of technology.

The typical justification for staying is a cornerstone of tech exceptionalism: when our tech improves lives in fragile places, we will take the credit; when it doesn’t, the negative consequences would have been even worse had a national or Chinese firm stepped in, so we need to stay in the market as the vanguard of liberal democratic morals.

This claim moves the goalposts whenever a western tech firm encounters problems in the developing world, and it is not based on empirical evidence. Firms call these scenarios ‘ethical dilemmas’, but calling it a ‘dilemma’ rings false when companies nearly always maintain their operational presence. The decision to stay is typically coupled with an announcement that the firm will be a positive change agent on issues of interest to the company, such as local legislation or regulation. Or that it would wish to do more, but its hands are tied by having to follow discriminatory local laws or risk harming communities, as Facebook’s human rights head said of the firm’s Myanmar inaction. The only ‘ethical dilemma’ a firm truly faces is whether its salesmanship of such justifications is good enough to placate its consumers and investors.

While Google in China or Microsoft in Russia may bend their guidelines to maintain access, smaller tech startups, under much more pressure to grow rapidly, have a fundamentally weaker position with such regimes. Further, at no point do tech startups that work with vulnerable populations go through the mandatory ethical checks and balances required of corresponding peacebuilding or development aid initiatives. As one positive example, Oath (soon to be Verizon Media Group, formerly Yahoo and AOL) brings local stakeholders in to meet with its engineers to help with responsible design. But startups lack this capacity, and those that succeed may be afraid to adopt it once they can afford to, as it might challenge the very business model that made them successful.

Moreover, tech business models are structured so that it is almost impossible for firms to bear the direct consequences of any tail-risk negative societal impact in the developing world. During the scaling stage, products are rolled out to maximize growth, with little concern for impact. If negative impacts do manifest, they are rarely discovered until years later, by which time the firm has dissolved, been acquired by a larger firm, or grown so big that it can simply employ the ‘lessons learned’ model described above.

For firms of all types, until human rights – and more ethical business practice in general – become more important than just one item in a basket of concerns that companies weigh when making decisions about problematic countries of operation (alongside profitability, corruption, supply chains, and regulation, among others), ethical action will continue to be compartmentalized as a ‘challenge’ to work through rather than a guiding operational principle that leads decision-making.

Fixing Tech Exceptionalism in Fragile and Conflict Settings

How do we fix the flawed ideas of tech exceptionalism in conflict contexts? It requires rethinking the range of social impacts that technology actors have and incorporating lessons from other industries.

Social media, it bears repeating, is social. Understanding the social order that users inhabit helps firms anticipate how local societies will actually use their product. Social problems between communities do not disappear with the arrival of technology; on the contrary, most evidence shows they are exacerbated. Failing to understand the pre-existing social order of a new market undermines the very purpose of the product.

Managing pre-existing social problems starts with a more holistic due diligence model designed to anticipate social flashpoints. Human moderators with in-depth local knowledge are necessary from the outset to ensure that human rights are upheld in content decisions and in choices about tracking users. This stands in stark contrast to using more technology to automate adherence to standards and guidelines, a current proposal of tech companies. The risk of using more tech over humans? We refer you to our introduction.

For their part, investors must demand the same level of action from tech firms as they do from the extractive sector, private security companies, and others who operate in conflict zones. Otherwise, firms will continue to be incentivized to use the firefighting model of ‘public concern’, ‘independent assessments’, and ‘action plans’ rather than becoming more compliant and honest long-term actors.

Scholars and practitioners also must develop a more critical eye towards the tech industry and cease to assume that its actions are de facto a force for good. Developing a coherent and empirically sound theory of change for what social media firms in particular can and should do to mitigate their negative impacts is an essential first step. Otherwise the industry will continue to stumble as conflict instigators use tech platforms for devious aims. All the companies with these conflict impacts are, after all, private – they can ban whomever they want, whenever they want, for whatever reasons they want.

In conflict zones, the ‘dilemma’ is clear: a product or service can either aid human rights or not. If a company decides to knowingly let its platform be used for human rights abuses, it must be punished just as oil companies, diamond miners, and private military contractors are. Problems in conflict zones in these industries are solved by doing something the tech industry seems allergic to: hiring human experts with local knowledge of political and social processes. Some problems cannot be solved by automation or communicating with the cloud.

Social technologies can be immensely powerful tools, but ultimately are tools like any other. They do not offer shortcuts to peace and development, nor can they substitute for good project design with clear impact frameworks. With matters of life and death at stake for communities in conflict, firms should build in seasoned analysis of social impact at every stage of their operations if technology is going to deliver on its promises of societal improvement.


(1) - Both statements were made at a roundtable discussion of Facebook's role in Myanmar at the panel "Human rights due diligence in practice in the ICT sector," United Nations Business and Human Rights Forum, 27 November 2018, Geneva.


John E. Katsos, JD, is Associate Professor of Business Law and Ethics at the American University of Sharjah (UAE). John researches business operations in fragile and conflict-affected states, including in Syria, Iraq, Cyprus, and Sri Lanka. He is currently conducting surveys and interviews on how businesses mitigate political risk, bolster rule of law, and enhance peace.

Jason Miklian, Ph.D., is a fellow at the Centre for Development and the Environment at the University of Oslo. He studies the role of businesses as peacebuilders and agents of sustainable development in fragile states, and corporate engagement within the 'Business for Peace' paradigm and for UN Sustainable Development Goal 16.

John and Jason, together with Rina Alluri, are editors of the upcoming volume Business, Peacebuilding and Sustainable Development, out February 2019 (Routledge).

Image credit: AMISOM Public Information via Flickr (CC0 1.0)
