The imperative of understanding knowledge as a commons for a flourishing future of AI
An essay in response to the open question of how artificial intelligence will affect our lives, work, and society at large
The One Hundred Year Study on Artificial Intelligence (AI100) is a longitudinal study of progress in AI and its impacts on society. In the early 2010s Stanford University invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play. Every five years a new study is released; the first appeared in 2016, and the most recent, published in 2021, included a commentary on what had changed since 2016.
As a way of laying the groundwork for the next report, planned for 2026, the AI100 Standing Committee invited original essay submissions from early career researchers that react directly to one or both of the AI100 reports. What follows is the essay I prepared for that call.
Abstract
In the race between technology and our wisdom to wield it, wisdom is falling dangerously behind. Our obligation and challenge today with the philosophy, development, and governance of AI is to balance these rates. Because the resource AI deals in is knowledge itself, AI's role is to keep the pathways to future discovery open. This sentiment echoes out of common-pool resource (commons) research, which has more recently turned its attention to knowledge as a commons. Three components are consistent across all commons problems, natural resource or knowledge: strong collective action, self-governing mechanisms, and social capital. I suggest that the watershed moment surrounding the AI100 2021 report is this: only through the lens of knowledge as a commons might we find timely and wise answers to the development, deployment, and governance of AI.
Essay
What is the analog of the tragedy of the commons for the knowledge commons?
This might be the silent question confronting AI.
In 1968 Garrett Hardin wrote an article that has reverberated across the six decades since. The commons is a resource that is owned, used, and managed by a community; a social regime for managing a collectively owned resource. In his challenging and challenged work Hardin seared into our cultural consciousness the idea that when individuals attempt to share a scarce resource in common, the resource and the environment around it will unravel:
The tragedy of the commons is a situation in which individual users, who have open access to a resource unhampered by shared social structures or formal rules that govern access and use, act independently according to their own self-interest and, contrary to the common good of all users, cause depletion of the resource through their uncoordinated action. [Hardin, 1968]
Now we must contend with a new commons, one of ideas and knowledge. Following Charlotte Hess and Elinor Ostrom's definition, knowledge is any kind of understanding gained through experience or study. What AI has access to, namely knowledge that has been digitally instantiated and that which we inadvertently provide to it through interaction, reveals how important this capacious understanding of knowledge is. Our collective knowledge is the full complement of scientific, scholarly, nonacademic, indigenous, and creative artifacts and experiences. How AI has affected and will affect human knowledge creation processes and capacities remains an open question.
Michael Polanyi wrote that knowledge acquisition and discovery is both a social and deeply personal process. Coupled with the fact that knowledge is also cumulative, we must understand knowledge, our idea storehouse, as a public good. Our obligation and challenge today with the philosophy, development, and governance of AI is to keep the pathways to discovery open.
So, I ask again, what is the analog of the tragedy of the commons for the knowledge commons? It is the depletion of our ideascape by merely recycling existing ideas ad infinitum without ever replenishing them or tending to their evolution. It is an endless recycling of existing intellectual property. It is pure exploitation without the balance of exploration. Cognitive neuroscience, behavioral ecology, and evolutionary science have for years reiterated that healthy resilient systems strike a careful and deliberate balance between exploration and exploitation [Addicott et al., 2017; Gopnik, 2020]. Without exploration, without play, we have a transactional relationship to knowledge. Just like natural resource commons, the result is depletion and barrenness. Here, though, the barrenness is in our ideas, our innovation, and the victim is creativity.
At what cost is this banishing of the virtue of mystery? F. Scott Fitzgerald wrote that the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function. Creativity and continuous change are perhaps the core elements of vitality. Nature is endlessly creative, not only reinventing what exists but sometimes breaking from it entirely in a step change that cuts a thread clean. The 'search' of evolution is both explorative and exploitative. A mind, life, or system without exploration is an impoverished one.
In 1990 Elinor Ostrom exposed Hardin's tragedy as a misnomer: it is only a tragedy of the unregulated, unmanaged commons. Her genius was in recognizing a pattern of self-governance and collective management at the heart of her examples of flourishing commons. Her eight design principles suggest an entirely new axis in the possible solution space for governing commons.
The Nobel laureate herself turned her attention to knowledge as a commons later in her career, an oracular move made apparent by the distance and wisdom that history always provides. The problem she recognized is perhaps now abundantly and existentially clear with the arrival of AI tools like ChatGPT. All of a sudden we find ourselves in a cultural moment with AI. So many seem uninterested in a rigorous, informed use of these tools, foretelling the manner of usage we might expect of any technology of this ilk. This unprincipled and perhaps even lazy use is a hurry toward the depletion of our idea storehouses, a tragedy of our knowledge commons.
Just like we found solutions to the depletion of natural resource commons, might we also find solutions for the knowledge commons? I suggest that this is the threshold question, the watershed realization, for our philosophy of AI. The AI community should be on fire with Ostrom's principles.
Consistent in all commons problems (natural resource or knowledge) are three components: strong collective action, self-governing mechanisms, and social capital. Look at the AI100 report again. These components are implicit in all of the study questions. They grow beyond the bounds of the report, too, silent agents in the societal conversations around AI: Are tools like ChatGPT harming humanity's ability to create novelty? How might humanity accelerate our adaptation to these technologies alongside creating a collective, enforceable decision to slow their development, moving against the grain of the multipolar trap of AI development? What is the impact of AI on our social fabric, understanding that there is unequal access and the costs and benefits will inevitably be unequally distributed? These are commons questions and they are not benign.
The inimitable transdisciplinarian E. O. Wilson described in a few words the imperative we face: "The real problem of humanity is [that we] have Paleolithic emotions, medieval institutions and godlike technology...and it is now approaching a point of crisis." In short, in the race between technology and our wisdom to wield it, wisdom is falling dangerously behind.
The hurry is not merely in our minds. Geoffrey West, a theoretical physicist, in a sweeping work on scaling laws from cells to cities raises a troubling sustainability problem: that to support the rate of growth in our society, namely to match the growth in complexity, we require innovation at an untenable rate. West points to evidence for a super-exponential rate of innovation. We are already ill-equipped to think in exponentials, so it is difficult to fathom how quickly our societal needs are outstripping our ability to innovate. At the same moment a countervailing force is exacerbating the problem. Vannevar Bush wrote almost a century ago that progress in any domain 'depends upon a flow of new scientific knowledge...new products, new industries, and more jobs require continuous additions to knowledge of the laws of nature, and the application of that knowledge to practical purposes...This essential, new knowledge can be obtained only through basic scientific research.' Are the AI tools we have now supporting creativity and innovation or are they diminishing them?
How might we think of AI and human-computer interaction in a way that fosters exploration and replenishes our ideascape, rather than one that entrenches an exploitative, transactional relationship to knowledge and depletes our storehouse of ideas precisely when our planet demands more rapid and more profound innovation? Perhaps only through the lens of knowledge as a commons might we find timely and wise answers to this question.
References
Addicott, Merideth A., et al. "A Primer on Foraging and the Explore/Exploit Trade-Off for Psychiatry Research." Neuropsychopharmacology 42 (2017): 1931–1939.
Bush, Vannevar. Science, the Endless Frontier: A Report to the President. United States Government Printing Office, 1945.
Fitzgerald, F. Scott. "The Crack-Up." Esquire, 1936.
Gopnik, Alison. "Childhood as a solution to explore–exploit tensions." Philosophical Transactions of the Royal Society B: Biological Sciences 375 (2020).
Hardin, Garrett. "The Tragedy of the Commons." Science 162.3859 (1968): 1243–1248.
Hess, Charlotte, and Elinor Ostrom. "A Framework for Analyzing the Knowledge Commons." In Understanding Knowledge as a Commons: From Theory to Practice. 2005.
Littman, Michael L., et al. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." arXiv:2210.15767 (2022).
Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge: Cambridge University Press, 1990.
Polanyi, Michael. Personal Knowledge: Towards a Post-Critical Philosophy. 1959.
West, Geoffrey B. Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies. 2017.