The Atomic Human


A Retrospective on Digital Oligarchy

In March 2015 I wrote this article for the Guardian on the idea of the digital oligarchy. Like the System Zero post, it predates the Cambridge Analytica scandal and the discussions around misinformation in the US election.

The warning suggests that we should be wary of large companies that accumulate power through the vast accumulation of our personal data. Today, with the advent of generative AI, we could add “our creative data” to this. The article calls the resulting power structures the “digital oligarchy”.

The ideas in the article were largely borne out in practice, but there is not yet a societal acknowledgment of how large this problem is. One of the most important pieces of legislation passed under the previous UK government was the Digital Markets, Competition and Consumers Act. The early election called in the UK almost caused the legislation to fall by the wayside, yet the jeopardy it was in went unremarked upon in the UK press.

This legislation should temper the digital oligarchy. It introduces a notion of “strategic market status” for businesses that dominate a digital market sector. What has that got to do with AI? Well, at the extreme end we have the “technical existential risk community”. I’ve written before about how I think their ideas are problematic and flawed, but from a government perspective having a strategy to deal with all outcomes is sensible. The risks they espouse come from a conflation of the digital oligarchy and the removal of the human from decision-making within this newly empowered oligarchy.

Dan Andrews’ illustration for Chapter 10, Gaslighting. See scribeysense.com. Dan focussed on the surveillance aspect of modern digital tools.

If these two challenges are conflated, that is a terrifying prospect. But the road to reach that point already presents widespread socio-technical existential risks. We’ve already seen one manifestation with System Zero, where digital systems make decisions in ways that distort the world around us.

It didn’t even take the development of machine learning for problems with digital centralisation to manifest. The UK Post Office’s Horizon scandal didn’t involve AI, just poorly implemented digital systems: an accounting system conceived in the 1990s and flawed in its implementation. Subpostmasters were jailed or disgraced for errors the computer made. Poor implementation of digital systems is what the digital oligarchs specialise in. They remove the human from the loop and deploy one-size-fits-all solutions that ignore the needs of individuals or groups. Their flawed systems are deployed quickly but their effects play out across society in ways that are hard to predict, monitor or fix. In the Epilogue, the book describes this as akin to the Sorcerer’s Apprentice.

One theme of Chapter 10 is how human vulnerabilities are exploited in systems of surveillance or manipulation that are developed by either state actors or by those we trust to protect us. Dan’s illustration for Chapter 10 captures the ease of modern surveillance. With our current approach to AI we are sowing the seeds of thousands of Horizon scandals. To address this we need to reintroduce the atomic human into our decision making. Or as the original article says “We need to form a data-democracy: data governance for the people, by the people and with the people’s consent.”

Machine Commentary

Claude 3.5’s review of the article, written without access to the book, prompted by “How well does this article from 2015 capture how events panned out in practice?”

This 2015 article was remarkably prescient in identifying several key trends and concerns about data and AI that have become major issues today:

Key accurate predictions:

  1. Data concentration in few hands: The article warned about a “digital oligarchy” concentrating data power among a few companies. This has largely come true with companies like Google, Meta, Amazon, and Apple controlling vast amounts of personal data.

  2. Personal data value and ownership: The article advocated for people to have more control over their data and potentially be paid for its use. This foreshadowed regulations like GDPR in Europe and growing calls for data privacy rights and compensation.

  3. Algorithm power: The article correctly identified that algorithms would become increasingly powerful tools for understanding and influencing individuals, which we’ve seen with recommendation systems, targeted advertising, and social media algorithms.

  4. Immediate vs long-term AI concerns: The author was correct in suggesting that near-term concerns about data concentration and algorithmic power were more pressing than fears about sentient AI, which remain largely theoretical.

Areas where developments went further:

  1. Scale of data collection: The breadth and depth of personal data collection has likely exceeded what was envisioned in 2015, especially through smartphones, IoT devices, and digital services.

  2. AI capabilities: While the article focused on data analytics, the rapid advancement of large language models and other AI systems has introduced new capabilities and concerns not fully anticipated.

  3. Business impact: The economic value and competitive advantage of data has proven even more significant than suggested, with data-driven companies becoming among the world’s most valuable.

The article’s core warning about the need for better data governance and democratic control over personal information remains highly relevant, though progress on these issues has been limited despite growing awareness of the problems identified.

Can you provide machine commentary that relates the human written part of the post with the themes and chapters in the book?

This retrospective piece connects several key themes from “The Atomic Human”, particularly from Chapters 3, 8, 10 and the Epilogue:

Digital Oligarchy and System Zero

The post expands on the book’s concept of “digital oligarchy” (introduced in Chapter 3) and its relationship with “System Zero” (detailed in Chapter 8). While the book presents System Zero as an AI system that manipulates through data by bypassing both System 1 and System 2 thinking, the post shows how this concern predated modern AI through the Post Office Horizon scandal example. This illustrates the book’s argument that the problems of automated decision-making aren’t unique to AI but are exacerbated by it.

Gaslighting and Surveillance

The post directly references Chapter 10 (Gaslighting) through Dan Andrews’ illustration, connecting the chapter’s historical examples of manipulation by trusted actors (like the Stasi) to modern digital surveillance. The post reinforces the book’s warning about how systems of surveillance and manipulation exploit human vulnerabilities, whether deployed by state actors or trusted institutions.

The Atomic Human

The post’s conclusion about reintroducing “the atomic human into our decision making” ties directly to the book’s central thesis about human vulnerability defining the “atomic human” that cannot be replaced by machines. This connects to the book’s recurring theme of human intelligence being fundamentally different from machine intelligence due to our reliance on trust and cultural context.

Sorcerer’s Apprentice

The reference to the Sorcerer’s Apprentice links directly to the Epilogue’s use of this metaphor to describe how tech companies, like the apprentice, cannot control their creations. The post extends this metaphor to modern contexts like the Horizon scandal and AI deployment.

Evolution of Themes

The post shows how themes introduced in the 2015 Guardian article evolved into key concepts in the book:

  • The “digital oligarchy” warning expanded into the book’s broader critique of automated decision-making
  • Early concerns about data concentration developed into the book’s concept of System Zero
  • The call for “data-democracy” aligns with the book’s emphasis on maintaining human agency and control

Contemporary Context

The post updates these themes for late 2024, showing how issues the book warned about (like flawed implementation of digital systems and removal of humans from decision loops) are manifesting in real-world examples like the Horizon scandal. This reinforces the book’s argument that we need to maintain human control over technological development while being aware of our vulnerabilities to manipulation.

The post effectively demonstrates how the book’s theoretical framework helps explain real-world developments in technology and society, while showing how early warnings about digital oligarchy evolved into broader concerns about human agency in automated systems.
