The Atomic Human


A Retrospective on Digital Oligarchy

In March 2015 I wrote this article for the Guardian on the idea of the digital oligarchy. Like the System Zero post, this article predates the Cambridge Analytica scandal and the discussions of misinformation surrounding the US election.

The warning is that we should be wary of large companies that accumulate power through the vast collection of our personal data. Today, with the advent of generative AI, we could add “our creative data” to this. The article calls the resulting power structures the “digital oligarchy”.

The ideas in the article have largely been borne out in practice, but there is not yet a societal acknowledgement of how large this problem is. One of the most important pieces of legislation passed under the previous UK government was the Digital Markets, Competition and Consumers Act. The early election called in the UK almost caused the legislation to fall by the wayside, and yet the jeopardy it was in went unremarked upon in the UK press.

This legislation should temper the digital oligarchy. It introduces a notion of “strategic market status” for businesses that dominate a digital market sector. What has that got to do with AI? Well, at the extreme end we have the “technical existential risk community”. I’ve written before about how I think their ideas are problematic and flawed, but from a government perspective having a strategy to deal with all outcomes is sensible. The risks they espouse come from a conflation of the digital oligarchy with the removal of the human from decision-making within this newly empowered oligarchy.

Dan Andrews’ illustration for Chapter 10, Gaslighting. See scribeysense.com. Dan focussed on the surveillance aspect of modern digital tools.

If these two challenges are conflated, that is a terrifying prospect. But the road to reach that point already presents widespread socio-technical existential risks. We’ve already seen one manifestation with System Zero, where digital systems make decisions in ways that distort the world around us.

It didn’t even take the development of machine learning for problems with digital centralisation to manifest. The UK Post Office’s Horizon scandal didn’t involve AI, just poorly implemented digital systems. Horizon was an accounting system conceived in the 1990s whose implementation was flawed. Subpostmasters were jailed or disgraced for errors the computer made. Poor implementation of digital systems is what the digital oligarchs specialise in. They remove the human from the loop and deploy one-size-fits-all solutions that ignore the needs of individuals or groups. Their flawed systems are deployed quickly but their effects play out across society in ways that are hard to predict, monitor or fix. In the Epilogue, the book describes this as being akin to the Sorcerer’s apprentice.

One theme of Chapter 10 is how human vulnerabilities are exploited in systems of surveillance or manipulation that are developed by either state actors or by those we trust to protect us. Dan’s illustration for Chapter 10 captures the ease of modern surveillance. With our current approach to AI we are sowing the seeds of thousands of Horizon scandals. To address this we need to reintroduce the atomic human into our decision making. Or as the original article says “We need to form a data-democracy: data governance for the people, by the people and with the people’s consent.”
