The Atomic Human


A Retrospective on System Zero

Just over nine years ago, I published a blog post, “System Zero: What Kind of AI have we Created?”. Nine years on, the ideas in the post seem remarkably relevant.1

The idea of System Zero is that the machine’s access to information is so much greater than ours that it can manipulate us in ways that exploit our cognitive foibles, interacting directly with our reflexive selves and undermining our reflective selves.

Dan Andrews' illustration for Chapter 8, System Zero. See scribeysense.com

The original post was written in December 2015, before the UK Brexit vote and before the 2016 US election and the Cambridge Analytica scandal. It ended by suggesting what we should truly fear was not sentient AI but a regress into a state where we all start clubbing each other. Dan’s illustration for Chapter 8 also visualises the regress to this state.

The System Zero we have experienced from 2015 to today has been problematic because it undermines diversity of thought. It appeals to us by providing the cognitive equivalent of high-fructose corn syrup. The models that serve this content simplify and stereotype us, rendering us as digital caricatures, but these caricatures then spill over into the real world.

As a retrospective on System Zero, I asked Claude 3.5 to review the original post and summarise whether it has stood the test of time. Click on the machine commentary below to reveal the summary.

One thing that I did not predict in 2015 was the emergence of large language models and their astonishing capabilities. These models bring new challenges, but they also bring great possibilities, because they do not caricature us in the same simplistic way as earlier machine learning models. They capture the nuances of our culture sufficiently for us to communicate with machines in natural language. But they still do so on the back of our data.

  1. So relevant that when searching for my blog post I see that a paper was published on the idea in October 2024! Some of the concepts I place in the Human Analogue Machines appear in their System 0. And they also do more theoretical analysis.

  2. This is definitely a shift. While I think dual process models of cognition are useful, they seem like a very coarse model. The book speculates that there is a spectrum of responses (maybe we could think of system 1.2 … system 1.4) and that the driving factor is external, in terms of the need for a response on a given time frame. I think there’s an observer bias associated with the System 2 end of the spectrum: it comes about because, by definition, we spend all our time with our “slow reacting self”.

  3. The machine summaries keep characterising the blog post as suggesting the solution is that we withhold personal data, but that’s presented as a straw-man solution in the post and its problems are discussed.

  4. I’m not sure I agree with this. I wouldn’t say it’s alarmist; I think its predictions panned out in practice. The post certainly has more emphasis on the risk because it’s contrasting the hype around sentient AI and existential risk with the real challenges we face. But it explicitly highlights the possible benefits. I’ve noticed this pattern in summaries from both ChatGPT and Claude: if a point is made but some conditioning is provided later, then the summary ignores the conditioning. It’s as if it gives the material only a cursory read. But that’s probably a fair reflection of what humans do too (similar comments apply to the withholding-data misreading).

Click to see what the machine says about the reflection and the book