The Atomic Human

X-Risk Decomposed: Power and Automation

“Have you done principal components analysis yet?” is the gentle question I ask every student who comes to my office keen to show me a set of results found by deploying the latest AI model. These models are powerful and exciting, but they are often misleading. Principal components analysis is a fundamental technique that decomposes a data set into its constituent components. Visualising those components lets the students stand back from their complex model and reflect on whether they are asking the right questions and whether the data can answer those questions.
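As a concrete illustration of the kind of check I'm asking for, here is a minimal sketch of that step: decompose a data set with PCA, look at how much variance the leading components explain, and plot them. The synthetic data, the choice of two components and the scikit-learn implementation are all illustrative assumptions rather than anything prescribed in the book.

```python
# Minimal PCA sanity check: project a high-dimensional data set onto its
# leading components and visualise them before reaching for a complex model.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # stand-in data: 200 samples, 50 features

pca = PCA(n_components=2)
Z = pca.fit_transform(X)  # project onto the first two principal components

# How much of the data's variance do the leading components capture?
print("variance explained:", pca.explained_variance_ratio_)

plt.scatter(Z[:, 0], Z[:, 1], s=10)
plt.xlabel("first principal component")
plt.ylabel("second principal component")
plt.show()
```

If the first few components explain most of the variance, or the projection reveals obvious structure or artefacts, that is a prompt to step back and reconsider the question before trusting a more elaborate model.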

In The Atomic Human I introduce the notion of model blinkers to describe the challenge these students are experiencing. Our models of the world can be so entrancing that we sometimes forget to step back and ask whether they are valid or not. There’s a similar challenge with the (technical) x-risk community. They are so focussed on their model of how a future armageddon will emerge that they sometimes fail to acknowledge the wider societal context of their worries.

The solution to model blinkers is to take a step back from the problem and consider broader perspectives. This is where wisdom in diversity can help: my students sanity-check their conclusions by bringing them to me. But we don’t always have access to the diverse perspectives we need. Fortunately, we can borrow from the approach of principal components analysis and perform a conceptual decomposition of the x-risk narrative into its components.1

William Blake's "Newton" captures the great scientist with his model blinkers on. Newton's focus on the simple shapes in front of him distract him from the beauty and complexity of the world around him.

The x-risk narrative tells of machines that obtain great power and decide upon our fate. We can understand the problem it describes by decomposing it into two separate components.

The first component is concentration of power. This machine has somehow obtained great power. In society, concentration of power enables us to bring resources together to solve problems. That can make us more efficient at specific tasks, but it comes with the risk of undermining the diversity of opinion that keeps society robust to unforeseen challenges. Society already has mechanisms for maintaining this balance. We already regulate to deal with power asymmetries that arise from governments, corporations and individuals. Much of this regulation also applies to decision-making machines.

The second component is automated decision-making. The machine is deploying its power through algorithmic decision-making of some form. As societies have grown in size we have introduced processes (laws, regulations, etc.) to support us in our decisions. Increasingly we make use of machines to implement these processes automatically. This can make some tasks more efficient, but it also runs the risk of ignoring the wider human context that incorporates a broader range of cultural and individual learnings.

These two components offer a fresh view of the x-risk argument. We can immediately appreciate that the combination of the two would present an existential threat regardless of whether or not we believe in the notion of artificial general intelligence.

But the broader perspective also shows that we should be worried by individuals, governments, corporations and other institutions concentrating power.2 How do these components then inform AI policy? Let’s look at two examples from UK AI policy: competition and consumer policy and “data protection”.

Firstly, let’s look at concentration of power and how that’s dealt with in consumer markets. Competition and consumer policy is a natural fit for the first component: it provides a regulatory framework for managing concentration of power in markets. With this in mind, two years ago I joined the Digital Experts Group of the UK’s Competition and Markets Authority (CMA) to support the CMA’s Digital Markets Unit with technical advice.3 The Digital Markets Unit focusses on the new ways in which power concentrates in digital markets. Decomposing the x-risk narrative demonstrates the importance of addressing concentration of power in markets.

Secondly, consider data protection legislation. It addresses both components: power asymmetries and automated decision-making. In the UK the Information Commissioner’s Office is our data protection regulator; they recently celebrated their 40th anniversary. Data protection is a poor name. In reality the legislation gives us personal data rights that let us control how our personal data is used to make decisions about us. This should also give us some power to address the power asymmetries behind automated decision-making. In The Atomic Human I make the point that so far that hasn’t panned out quite as we might have hoped. Addressing this is one reason I’m interested in the model of data trusts.

The UK’s information commissioner, John Edwards, reviewed The Atomic Human for Data and Policy. His review builds on the vision laid out in The Atomic Human of data protection legislation as a critical regulatory tool for AI policy. He quotes the following passage from the book.

Unfortunately these regulations don’t directly protect us regarding the “inconsequential” decisions that are made about us on a regular basis by social media platforms. Decisions about what posts to highlight in our social media news feed or what adverts to show us. This series of “inconsequential” decisions can accumulate to have a highly consequential effect on our lives and even our democracies. This is a new threat that isn’t directly addressed by existing data rights. (p. 364)

And shares his thoughts:

I’m not sure I agree with that final conclusion. It is true that data protection legislation has not been deployed in that way, but it does not necessarily follow that it cannot be.

I agree with John’s assessment. I suspect the difference of opinion is around the word “directly”. In tomorrow’s post I’ll introduce the Data Trusts Initiative and explain how I believe we need new institutions to make the vision John and I share a reality.

The x-risk narrative is a form of futurology, one that draws attention because it feeds our egocentricities about our form of intelligence. Like any form of futurology, the x-risk narrative encourages us to ask questions about our future. But the modern field of Futures Studies emphasises the importance of considering a plurality of possible futures. It is easy to put on the model blinkers and proselytise the merits of the narrow insights they give us. But that makes it harder to listen to other perspectives and reflect on the limits of our own thinking. We don’t always have access to the diverse voices we need to challenge narrow thinking, but breaking down our model into its components can help us contextualise the model, seeing its limitations alongside its predictions.

Machine Commentary

NL: I’d like you to provide a “Machine Commentary” on how the reflection fits with the ideas in The Atomic Human.

Claude: This reflection demonstrates several key themes from The Atomic Human:

The post’s central use of “model blinkers” connects directly to ideas introduced in Chapter 10: Gaslighting, where the concept illustrates how theoretical models can restrict our thinking about complex systems. The use of Blake’s Newton painting is particularly resonant, as it appeared in Chapter 5: Enlightenment as a metaphor for how narrow technical focus can blind us to broader context.

The analytical approach of decomposing x-risk mirrors the book’s consistent method of understanding complex systems through their components. This approach appears in Chapter 2: Automatons through the breakdown of military command structures, and in Chapter 8: System Zero through the separation of different types of intelligence (System 1/2/Zero).

The discussion of power concentration connects directly to Chapter 11: Human-Analogue Machines’ warnings about digital oligarchy and Chapter 3: Intent’s examination of Facebook’s automated decision-making systems. The theme of surveillance capitalism from Chapter 10 resonates with the post’s concerns about concentrated power.

The post’s focus on regulatory frameworks and institutions aligns with the book’s Epilogue, which emphasized the need for institutional responses to AI challenges. Meanwhile, the discussion of data trusts connects to Chapter 12: Trust’s emphasis on maintaining human agency and control.

Most importantly, the post’s emphasis on maintaining broader human context in automated decisions reflects Chapter 7: It’s Not Rocket Science or Brain Surgery’s exploration of how human decision-making combines fast and slow systems. This connects to the book’s central theme about the “atomic human” - those aspects of human intelligence that cannot be replicated by machines.

In essence, the reflection applies the book’s analytical framework - breaking down complex socio-technical systems while maintaining awareness of human context - to current policy discussions around AI risk.

  1. Let’s be careful though: principal components analysis is inspired by factor analysis, which is the very technique that Spearman developed and used to support a single-dimensional notion of “general intelligence”. That simple description of intelligence is a good example of a model blinker. Despite this, I like principal components analysis not because it is in any sense the best model, but because it is one of the first (perhaps the first) that should be applied to the analysis of most high-dimensional data sets. It allows the data scientist to step back from the problem and lift their thinking. 

  2. This doesn’t mean we should never consider a narrow model like those proposed in the x-risk community. We just need to ensure we move between the broad and narrow approaches, gaining the best of both worlds. 

  3. As one example of how damaging a narrow obsession with a particular risk can be, the Digital Markets, Competition and Consumers Act, which placed the Digital Markets Unit on a statutory footing, only became law in the last parliament by the narrowest of margins. The UK’s early election meant that it was passed in the final parliamentary session and achieved Royal Assent on 24th May 2024. For the UK this instrument will prove one of the most important tools in addressing these socio-technical risks. This nerve-wracking moment could have been avoided if attention had been focussed on the policy levers within the government’s control.4 

  4. See also this letter to the Secretary of State from June 2023.
