The Atomic Human


A Retrospective on System Zero

Just over nine years ago, I published a blog post, “System Zero: What Kind of AI have we Created?”. The ideas in that post seem remarkably relevant today.1

The idea of System Zero is that the machine’s access to information is so much greater than ours that it can manipulate us in ways that exploit our cognitive foibles, interacting directly with our reflexive selves and undermining our reflective selves.

Dan Andrews’ illustration for Chapter 8, System Zero. See scribeysense.com

The original post was written in December 2015, before the UK Brexit vote, the 2016 US election and the Cambridge Analytica scandal. It ended by suggesting that what we should truly fear was not sentient AI but a regress into a state where we all start clubbing each other. Dan’s illustration for Chapter 8 also visualises the regress to this state.

The System Zero we have experienced from 2015 to today has been problematic because it undermines diversity of thought. It appeals to us by providing the cognitive equivalent of high-fructose corn syrup. The models that serve this content simplify and stereotype us, rendering us as digital caricatures, but these caricatures then spill over into the real world.

As a retrospective on System Zero I asked Claude 3.5 to review the original post and summarise whether it’s stood the test of time. Click on the machine commentary below to reveal the summary.

One thing that I did not predict in 2015 was the emergence of large language models and their astonishing capabilities. These models bring new challenges, but they also bring great possibilities because they do not caricature us in the same simplistic way as earlier machine learning models. They capture the nuances of our culture sufficiently for us to communicate with machines in natural language. But they still do so on the back of our data.

Commentary by Machine

NL: This post was published on 2015/12/04

How does it stand the test of time?

Claude: This 2015 article by Neil Lawrence shows remarkable foresight about AI development and its societal implications. Let me analyze how its key points have held up:

  1. Focus on “System Zero” Rather Than AGI: The article’s main insight - that we should worry less about sentient AI and more about non-sentient systems that can manipulate human behavior - has proven prescient. While debates about AGI and consciousness continue today, the immediate challenges have indeed come from recommendation systems, targeted advertising, and other algorithms that interface with our “System One” thinking.

  2. Data Privacy Concerns: Lawrence’s warnings about personal data and privacy have become even more relevant. The article predicted that AI systems would understand us better than our friends through data collection - something that has been demonstrated through social media algorithms, recommendation systems, and targeted advertising.

  3. Implicit Bias in AI: The article’s discussion of how AI systems can perpetuate societal biases has become a central concern in AI ethics. The example of racial bias in decision-making systems was ahead of its time - this has become a major focus in AI development and deployment, with numerous cases of algorithmic bias being discovered in various domains.

  4. Trust and Data Control: The proposed solution - giving individuals control over their personal data and building systems that respect privacy - remains relevant but largely unimplemented. Despite regulations like GDPR, we’re still grappling with questions of data ownership and control.

  5. Manipulation Concerns: The article’s warning about subtle manipulation of human behavior through AI has proven accurate, as seen in:

    • Social media addiction mechanisms
    • Targeted political advertising
    • Recommendation algorithms that shape opinions and behaviors
    • The rise of concerns about “dark patterns” in user interfaces

Looking back, the article was remarkably accurate in identifying key challenges we face today. Rather than focusing on sci-fi scenarios of superintelligent AI, it correctly identified the more immediate risks of systems that exploit human psychology through data collection and targeted manipulation. However, while many of these problems have become widely recognized, we’re still struggling to implement effective solutions to protect individual privacy and agency.

The concept of “System Zero” - AI systems that bypass our rational thinking to influence our behavior - remains a useful framework for understanding current challenges in AI ethics and governance.

Would you like me to elaborate on any of these points or discuss specific predictions that have or haven’t come to pass?

NL: Yes please, can you discuss predictions which have or haven’t come to pass?

Claude: Let me break down the specific predictions and their outcomes:

Predictions That Have Come True:

  1. AI Understanding Us Through Data
    • The article cited a 2015 study showing AI could understand people better than friends through Facebook likes
    • This has been dramatically validated - social media algorithms now predict behaviors, preferences, and even mental health states with remarkable accuracy
    • TikTok’s algorithm in particular has become famous for its ability to quickly understand user preferences
  2. Unconscious Manipulation
    • The prediction that AI would communicate “directly with our System One” has manifested in:
      • Infinite scroll features
      • Push notifications designed to trigger dopamine responses
      • Personalized content timing to maximize engagement
    • The article’s concern about people thinking they made choices independently while being subtly influenced has proven accurate
  3. Bias in AI Systems
    • The article’s warning about AI systems perpetuating societal biases has been demonstrated repeatedly:
      • Facial recognition systems showing racial bias
      • Language models exhibiting gender and cultural biases
      • Hiring algorithms discriminating against certain groups

Predictions That Haven’t Fully Materialized:

  1. Data Control
    • The article predicted people might withdraw their data if trust wasn’t maintained
    • In reality, despite privacy scandals, most people continue sharing data freely
    • The proposed solution of users having complete control over their personal data hasn’t materialized, though regulations like GDPR have made some progress
  2. Trust-Based Systems
    • The article called for systems that “allow us to share our data with the appropriate amount of trust”
    • While there’s more transparency about data collection, we haven’t developed the comprehensive trust-based frameworks the article envisioned
    • Users still largely operate in an all-or-nothing paradigm with services
  3. Individual Data Withdrawal
    • The prediction that people could “withdraw our personal data” from entities they no longer trust hasn’t been realized
    • Once data is shared, it remains difficult or impossible to fully reclaim it
    • The “right to be forgotten” exists in some jurisdictions but is limited in scope

Mixed Results:

  1. Public Awareness
    • The article suggested people would become more aware of manipulation
    • While there is increased awareness of data collection and algorithmic influence, it hasn’t necessarily led to changed behaviors
    • Many people exhibit a “privacy paradox” - expressing concern about privacy while continuing to share data freely
  2. Regulation
    • While the article didn’t explicitly predict regulation, it implied the need for it
    • We’ve seen significant regulation (GDPR, CCPA) but it’s debatable whether these have achieved the kind of protection the article envisioned
  3. Corporate Power
    • The article implied that unregulated “System Zero” could lead to concentrated power
    • This has largely come true with big tech companies, but not quite in the dramatic way suggested
    • There’s more public and regulatory pushback than the article might have anticipated

The article’s core insight - that the immediate danger of AI lies not in consciousness but in its ability to influence human behavior through data - has proven remarkably accurate. However, its implied solutions around individual data control and trust-based systems remain more aspirational than realized. Would you like me to elaborate on any of these points or explore other aspects of the article’s predictions?

Second conversation, with a version of Claude that has access to the chapter summaries and index from the book, as well as the discussion above and the original post.

NL: Here’s a discussion of a blog post from Neil Lawrence from 2015-12-04.

How does the book go beyond the blog post? How has Lawrence’s thinking evolved? Does he cover the gaps identified in the discussion?

Claude: Based on comparing the 2015 blog post to “The Atomic Human”, here’s how Lawrence’s thinking has evolved and expanded:

Key Expansions:

  1. Historical Context & Evolution:
    • The book provides much deeper historical grounding, tracing ideas from ancient Mesopotamia through the Industrial Revolution to modern AI
    • It connects current AI developments to broader patterns of technological change and social transformation
    • The narrative is enriched with detailed historical examples like Bletchley Park, the Apollo program, and early computing
  2. Richer Understanding of Intelligence:
    • While the blog post focuses mainly on dual-process theory (System 1/2), the book develops a more nuanced view of different forms of intelligence2
    • Introduces concepts like “locked-in intelligence”, “embodied intelligence”, and “directed intelligence”
    • Provides detailed exploration of biological intelligence through examples like immune systems, ants, and neural networks
  3. Human-Analogue Machines (HAMs):
    • This is a major new concept not present in the blog post
    • Represents a more sophisticated understanding of how AI might interface with humans
    • Acknowledges both the potential benefits and risks of systems that mirror human thinking
  4. Solutions and Governance:
    • The book moves beyond the blog’s simple prescription of “withholding data”3
    • Proposes institutional solutions like data trusts and university involvement
    • Offers more nuanced discussion of how to balance innovation with human agency
  5. Technical Understanding:
    • Provides much deeper technical explanation of how AI systems work
    • Includes detailed discussion of neural networks, feature landscapes, and machine learning
    • Better connects technical capabilities to social implications

Gaps Addressed:

  1. Implementation Details:
    • The blog post identified the need for “trust-based systems” but didn’t explain how to build them
    • The book provides concrete proposals through data trusts and institutional frameworks
  2. Historical Context:
    • The blog’s concerns about AI manipulation lacked historical grounding
    • The book places these concerns within broader patterns of technological change
  3. Balance:
    • The blog post was more alarmist about AI risks4
    • The book offers a more balanced view, acknowledging both benefits and risks while proposing practical solutions
  4. Agency:
    • The blog focused on individual data control
    • The book expands to discuss collective action and institutional responses

The book represents a more mature and nuanced understanding of AI’s relationship with society, moving from warning about risks to proposing constructive solutions while maintaining the core insight about AI’s ability to influence human behavior through “System Zero”.

  1. So relevant that when searching for my blog post I see a paper was published on the idea in October 2024! Some of the concepts I place in the Human Analogue Machines appear in their System 0. And they also do more theoretical analysis.

  2. This is definitely a shift. While I think dual-process models of cognition are useful, they seem like a very coarse model. The book speculates that there is a spectrum of responses (maybe we could think of System 1.2 … System 1.4) and that the driving factor is external, in terms of the need for a response on a given time frame. I think there’s an observer bias associated with the System 2 end of the spectrum: it comes because, by definition, we spend all our time with our “slow reacting self”.

  3. The machine summaries keep characterising the blog post as suggesting the solution is that we withhold personal data, but that’s presented as a straw-man solution in the post and its problems are discussed.

  4. I’m not sure I agree with this. I wouldn’t say it’s alarmist; I think its predictions panned out in practice. The post certainly has more emphasis on the risk because it’s contrasting the hype around sentient AI and existential risk with the real challenges we face. But it explicitly highlights the possible benefits. I’ve noticed this pattern in summaries from both ChatGPT and Claude. If a point is made but some conditioning is provided later, then it ignores the conditioning. It’s as if it gives the material only a cursory read. But that’s probably a fair reflection of what humans do too (similar comments apply to the withholding-data misreading).
