Intellectual Debt: With Great Power Comes Great Ignorance
Author: Jonathan Zittrain
Published in: The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, 2022
Abstract
In this chapter, law and technology scholar Jonathan Zittrain warns of the danger of relying on answers for which we have no explanations. There are benefits to utilising solutions discovered through trial and error rather than rigorous proof: though aspirin was discovered in the late 19th century, it was not until the late 20th century that scientists were able to explain how it worked. But doing so accrues ‘intellectual debt’. This intellectual debt is compounding quickly in the realm of AI, especially in the subfield of machine learning. While we know that ML models can produce efficient, effective answers, we don’t always know why the models reach the conclusions they do. This makes it difficult to detect when they are malfunctioning, being manipulated, or producing unreliable results. When several such systems interact, the ledger moves further into the red. Society’s movement from basic science towards applied technology that bypasses rigorous investigative research inches us closer to a world in which we are reliant on an oracle AI, one we trust regardless of our ability to audit its trustworthiness. Zittrain concludes that we must create an intellectual debt ‘balance sheet’ by allowing academics to scrutinise these systems.
Summary
Zittrain introduces the concept of ‘intellectual debt’: the gap between our ability to use AI systems and our understanding of how they work. He argues that as AI systems become more powerful and ubiquitous, this knowledge gap creates significant societal risks and responsibilities.
In The Atomic Human, Zittrain’s concept of intellectual debt is referenced in discussions of how organisations and societies delegate decision-making to automated systems they don’t fully understand. This relates particularly to the book’s themes of the risks of ceding human agency to black-box AI systems and the importance of maintaining meaningful human control over consequential decisions.