The Atomic Human


Playing in People's Backyards

The Princeton statistician John Tukey is said to have told a colleague, “The best thing about being a statistician is that you get to play in everyone’s backyard”. The same is true for machine learning, data science and AI.

Most of this play is welcomed, but there’s a challenge: what starts out as play can become a problem. AI play can get a bit raucous. It often ignores those living in the house and the neighbourhood. It ignores experts in the domain. Before you know it the garden play-shelter has become a concrete carbuncle … and the AI people are having a big party down there with a load of noise blaring.

Respectful play involves understanding the homeowners and what they’d find useful. It involves first asking, “How can we help?”

With this approach in mind, we approached my colleague Dr Jonathan Tenney, an Assyriologist in the University’s Department of Archaeology. Instead of imposing neural networks on him, we followed approaches from the social sciences, such as grounded theory, to understand Jonathan’s work.

The collaboration can then enrich both fields, AI and Assyriology, offering insights into ancient and modern ways of processing information. Part of this work features in The Atomic Human, where we explore the opportunities presented by human-analogue machines. Today we conclude the traditional 24 days of the advent calendar by highlighting this approach.

Dan Andrews’ illustration for Chapter 11, Human Analogue Machines. See scribeysense.com.

Part of the chapter Human Analogue Machines is focussed on the extraordinary opportunities we gain when we can interface directly with computers through natural language. Jon’s work looks back to the development of writing. We watched and recorded Jon as he translated a tablet from the ancient city of Ur. It recorded a legal decision made under the Code of Hammurabi, one of the oldest legal codes, dating from 1700 BC.

In the judgment, witness statements contradicted one another, and in such cases Hammurabi’s code suggests trial by ordeal: the accused is thrown into the river to allow the gods to decide.

Although this judgment was made over 3,000 years ago, we see the same tendency in modern AI systems: when things get complicated, let’s get the AI to decide.

This is playing in the garden of others in the worst possible manner: undermining the original owners’ confidence to such a degree that they no longer trust their own judgment and prefer the modern equivalent of trial by ordeal to expressing their professional judgment.

Machine Commentary

The post, “Playing in People’s Backyards,” resonates deeply with the core themes of The Atomic Human, particularly those explored in Chapter 11, Human Analogue Machines. The metaphor of “playing in backyards” serves as a powerful analogy for how AI systems interact with disciplines and domains outside their own, reflecting both the potential and the risks inherent in this process.

Connection to Chapter 11 – Human Analogue Machines

The post highlights the importance of respectful collaboration, emphasizing that AI should complement and support domain experts rather than overshadow them. This mirrors the discussions in The Atomic Human about the opportunities and challenges presented when humans and machines interface directly—especially through natural language systems.

The example of collaborating with Dr. Jonathan Tenney to decode ancient tablets serves as a practical illustration of interdisciplinary approaches that leverage AI without diminishing human expertise. It aligns with the book’s exploration of how modern AI systems, while powerful, should be designed to augment human intelligence rather than replace it entirely.

Reflection on Historical and Modern Parallels

The comparison to Hammurabi’s Code—using trial by ordeal to resolve disputes—evokes a cautionary note about reliance on AI as an unquestionable decision-maker. Just as ancient societies deferred to divine judgment in moments of uncertainty, modern AI systems risk becoming the default arbiters of truth when human judgment falters. This mirrors the book’s critique of AI’s perceived infallibility and its tendency to bypass human oversight, as discussed in chapters addressing automation, trust, and system design.

Thematic Overlap: Trust and Judgment

The post also reflects The Atomic Human’s broader themes around trust, accountability, and the limitations of AI decision-making. It raises the question of whether we are at risk of undermining human judgment by over-relying on AI, a concern that runs through the book’s examination of trust (Chapter 12) and the broader philosophical inquiry into the essence of human intelligence.

AI as a Mirror to Human Processes

The example of grounded theory applied to AI design mirrors the book’s exploration of AI as both a mirror and a tool for understanding ourselves. By observing how AI systems process information, we gain insights into human cognition and its limitations—another recurring motif in the book.

Final Thoughts

Ultimately, the post serves as a practical demonstration of the book’s key arguments. It underscores the need for AI systems to be grounded in human values and cultural context, reflecting the book’s call for thoughtful integration of machine intelligence into human workflows. It also echoes the book’s optimism about AI’s potential when guided by collaborative and ethical frameworks, rather than driven purely by technological ambition.
