Navigating a sociotechnical age

From navigating our interactions with digital systems such as artificial intelligence and algorithms to shaping a socially beneficial trajectory for technology, Alix Dunn walks us through the sociotechnical challenges we currently face

Alix Dunn wears many hats. As a facilitator, she holds inclusive space for complex debates around data and technology, and works to build conversation cultures in teams and communities. As an advisor, she works with foundations, researchers and non-profits to shape a socially beneficial trajectory for emerging technology. She has served on the boards of the Technology Advisory Council at Amnesty International, the Ada Lovelace Institute and the Human Rights Initiative at the Open Society Foundations, among many others. She was also a Fellow at the Harvard Kennedy School and at the Digital Impact Lab at Stanford University.

The more complex technology gets, the more there is a need for diverse skill sets and backgrounds, and the more important it becomes to know how to facilitate meaningful conversation. Dunn is working on that problem, with a particular interest in the unique challenges that sociotechnical questions pose for organisational design.

Technical intuition

How do we navigate a progressively digital age? Dunn thinks the first step is for people to get rid of the notion that they have to be technologists to have technological conversations. “We don’t expect fellow humans to get a psychology degree to understand others, so why should we expect them to become engineers to advocate for their digital rights?” she says. The swift embedding of digital layers into our physical lives has huge implications for how we relate to our world, from not understanding how employment rights may be affected by the datafication of our working days to our digital rights around being monitored and targeted.

One remedy might be “technical intuition”, a term she coined for the layer of knowledge and instinct we should be working to build, so that our navigation of digital spaces and engagement with digital possibility is as mature as it is in physical spaces.

In her interview for Tabitha Goldstaub’s book How To Talk To Robots, Dunn explains how similar types of intuition have developed over millennia and through our lifetimes, based on direct experience, education and social communication. When we cross the street, for example, we know that the cars moving toward us are heavy, fast and a danger to our physical safety. But because our interactions with digital systems are so new and so rapid, we have not had time to develop an equivalent intuition.

The source of the problem

So are digital systems the problem? Not entirely. Concerns are often aimed specifically at artificial intelligence (AI) or algorithmic systems. Dunn believes people react particularly badly to these systems partly because they create the possibility of delegating responsibility and accountability. “We don’t trust the underlying institution, so when that underlying institution is accelerating or strengthening its capacity to process in an even less than human-centred way, it is scary,” she says.

However, if we are at a moment when the unspoken ways institutions operate are being hardcoded through technologies like AI and algorithms, the opportunity may lie in those technologies showing us who the institutions really are. When, and if, they do, then “we should be able to fight against injustice more clearly. The problem is structural and political. The problem is the people. The problem is the positions and the policies and the rules.”

Diversity and ethics in organisations

Another problem is that building socially beneficial technology requires collaboration across many disciplines, understandings and ideologies. Dunn defines diversity as a competency rather than a performance: “Not just because it signifies something such as fair hiring practices or anti-racist efforts to rebalance power in industry, but something that unlocks a capability to do big, meaningful things.”

She has been reflecting a lot on the Google AI ethics scandal: “the one involving the firing of Dr Timnit Gebru, and how there is a crescendo of debate about independent research, corporate accountability, and the trajectory of it all.” Gebru is an AI ethics researcher who says she was asked by a senior manager to remove her name from a research paper she co-authored, which discussed ethical issues raised by recent advances in AI technology that works with language, an area crucial to the future of Google’s business.

“Unfortunately, many companies and teams have conflated diversity with tokenisation, and tokenisation with equity. In other words, the optics-focused, harmful, and ultimately failed attempt to make organisations more diverse appears to be limited to recruiting — or saying you will recruit — a more diverse workforce rather than building a pluralistic, high-functioning, and equitable team,” says Dunn.

Part of the answer is recognising that we are in a particularly challenging time for organisational design. The first step to making diversity of discipline, approach and background a real capability is to actually commit to making technology socially beneficial. “If you’re optimising purely for profit, then a lot of these other ideas are moot,” says Dunn. There is no point building an ethics team, for instance, if you are going to remain orientated towards pure profit. But if you are optimising for socially beneficial technology, you can then hold space for dynamic uncertainty and values-aligned decision-making.

Could AI solve our problems?

When asked whether digital systems such as algorithms and AI might eventually be the solution for making systems function in fair and unbiased ways as we alter and improve them, for instance in employment or judicial processes, it is a firm no from Dunn. “It’s a very engineering way of thinking that is kind of infectious. It is so exciting to think that it could be possible to strip a system as politically charged and problematic as, for example, an immigration system and turn it into something that is fair,” she says.

The caveat is that digital systems are not born from nothing. The ones that have predictive power are built atop data generated by systems that are biased. Thus, according to Dunn, the idea that “you would be able to construct something that would teach a system to be able to act fairly, within a system that is fundamentally, structurally, systemically unfair is just not possible. The problem is not the execution, so you can’t fix the flaw by executing the flawed system more efficiently or effectively.”

Another idea she refutes is that AI will generate a lot of wealth and benefit everybody because technology will help us create many more resources, which we would then just need to distribute. “The distribution is the hard part. The political power or representation necessary to distribute equitably is the challenge. The challenge isn’t ‘Where are the resources?’ There are plenty of resources in the world, we just choose to divide them inequitably,” she says.

Ultimately, what she views as most problematic is the idea that technology can help us bypass some of the core questions about power and equity.

Defne Saricetin

Defne Saricetin is a writer, creative consultant and freelance journalist based in London. She has worked on projects for Vogue, Semaine and the New York International Screenplay Awards, and is interested in arts, storytelling, social psychology and the cultural and creative industries.
