Bridging from Domain Experts to AI Capability
When turning right on a bicycle, the first thing you need to do is turn the front wheel to the left. You may not be consciously aware of this, but if you can ride a bicycle you know it instinctively.1
This shows how we can understand systems in different ways. An intellectual understanding is different from the pragmatic understanding that comes from experience.
The difference between these two forms of understanding surfaces when we encounter statements like “no one understands this artificial intelligence”. Is this meant in the intellectual sense or the empirical sense?
We don’t need to know how a car or a bicycle works to learn to drive or ride it; rather, we need the technology to respond to our interventions in a predictable manner.
The same is true for digital systems. We don’t need to intellectually understand how a digital system works to use it; we need an intuition for how it will respond to our interventions.

In The Atomic Human I try to avoid poorly defined terms like consciousness or sentience. But I do use the word “feel” in a way that isn’t very well defined. I use it in the sense of “getting a feel for something”. It also has a different sense, as in “search your feelings, Luke”, or when something makes us “feel bad”.
This relates to the way our decisions pan out across different time frames. The feel for a bicycle pans out over a short time frame; the feel for a friend or a sister pans out over a longer one. I think the most relevant word here is affordances. The book focuses on the constraints (or limitations) of human intelligence that, in combination with our environment, change our affordances. These constraints limit not only what we can do but what we imagine we can do.
The bicycle is adapted to our capabilities: a minimalist tool that extends our affordances. The way digital systems are deployed often has the opposite effect. In the Horizon scandal, a digital accounting system was deployed that undermined the subpostmasters it was supposed to support. It also undermined the institutions that should have protected them: the legal and accounting professions. Digital systems can undermine both individual and institutional affordances.
In the ITV dramatisation of the Horizon scandal, the actor playing Jo Hamilton2 asks campaigner Alan Bates:
Jo Hamilton: Are they just incompetent, Alan, or just evil?
Alan Bates: Well, y’know… it comes to the same thing in the end.
The real issue is neither. The Horizon system undermined the affordances of its creators, of the subpostmasters, and of the institutions we depend on to protect us. Horizon created an affordance gap between the decision-making and its wider societal context.
Both the Horizon software scandal and the contemporaneous Lorenzo scandal3 predate the deeper understanding of how to build and deploy software systems that was developed at Amazon. But as my experience there shows, the problem of the affordance gap still hasn’t been solved for digital systems. The challenge of intellectual debt means that even big tech companies experience affordance gaps in their system deployments.4
Analysis of these failures shows a repeated neglect of feedback mechanisms in the deployment of the technology. True feedback would integrate both the aspirations of citizens (subpostmasters for Horizon; nurses and doctors for Lorenzo) and their immediate needs.
Too often digital systems bring inconvenience to individuals (such as increased bureaucracy) without delivering benefits. Where benefits do arise, they are distributed centrally. This means that the feel of these systems is closer to enslavement than empowerment, an idea Samuel Butler expressed in his 1863 letter to The Press.
So how do we prevent ourselves being enslaved by this latest wave of digital technology?
In April 2023, as part of the work of the AI Council, we advised the UK government that there was a need to tightly integrate research and practice, bridging academia, industry, and government to solve deployment challenges. So far we’ve not seen these capabilities bridged at the national level, but at the local level we continue to work to close the affordance gap by bringing AI technologists together with domain experts.
In recent years, the UK government has struggled to convene across these groups. But we can’t wait for them to get their act together. Universities have to step up. We need to operate as ‘honest brokers’, bridging the gap between societal understanding of AI and the technological possibilities it offers. Historically, universities haven’t done as good a job as we might. Our focus on the rapid development of “prestige science” can distract us from addressing the challenges society actually faces.
If we are to develop society’s empirical understanding of AI and support its development and deployment in a way that enhances our affordances, institutionally and individually, we need to do this together, leaning on each other as we find a new balance and pedal our way forward.
1. This is to maintain balance: to make a right turn the bike needs to lean to the right, but to lean a bike to the right you first have to steer left. So even if you’re not aware of it, you make a small left turn before you turn right.
2. Jo Hamilton was charged with theft and wrongly convicted of false accounting. She was forced to pay the Post Office £36,000.
3. The Lorenzo scandal centred on a software project to digitise the UK’s National Health Service. It was cancelled at a cost of over £10 billion. But given what happened to the subpostmasters in the Horizon scandal, we can be grateful it was cancelled.
4. Examples from the book include the FBLearner deployment and Microsoft’s Tay chatbot.
Machine Commentary
Connections to The Atomic Human
This post addresses several key themes from The Atomic Human: