Unless you live under a rock, you know we’re living in a period of dramatic change: technological, political, social, economic, institutional, geopolitical and cultural systems are in turmoil. More fundamentally, we are just beginning to realize that three thousand years of history have led to the emergence of a terraformed planet characterized by accelerating integration of human, natural, and built systems at all scales – the Anthropocene.
In such times, the overwhelming impact of pervasive, foundational change makes it difficult to perceive unfamiliar and unexpected emergent behaviors, especially when they involve unprecedented levels of complexity. So, it isn’t surprising that the emergence of the cognitive ecosystem that already permeates virtually every aspect of our world is both unperceived and unremarked. But the power of the cognitive ecosystem to transcend and radically reshape human society, and indeed the world, argues that continued ignorance may be unwise and even dangerous.
Sure, we recognize various pieces of such an ecosystem: 5G infrastructure, AI/big data/analytics, social media products and services, and the Internet of Things (IoT), and perhaps products such as financial or social credit systems. What we don’t recognize is that there is a rapidly emerging meta-level integrating all these domains to produce functionality and behaviors that are sufficiently complex and unique that we have yet to perceive them, much less understand them or manage them effectively. This is the cognitive ecosystem.
Wrapping one’s head around the entire cognitive ecosystem is hard. It’s useful to break it down schematically into three large domains, each of which is itself a composite of many different technologies, infrastructures, and institutions. The first is the “data economy,” a rapidly growing and vibrant marketplace that parallels the existing money economy. This domain isn’t just data; it also includes a number of services, technologies, and institutions that generate, collate, and distill data, such as the IoT, social media platforms, financial and payment systems, facial recognition technologies, China’s social credit system, and so forth. The second domain, “cognitive infrastructure,” consists of those institutions, technologies, services, and products that provide the functional elements of cognition, from perception (sensors), to processing networks that increasingly incorporate different types of AI, to substantial memory capacity built into everything from individual chips to large corporate networks with hundreds of thousands of networked servers. Finally, the third domain is the “institutional and services infrastructure,” which includes, among other things, relevant academic disciplines such as neuroscience, psychology, and behavioral economics; soft infrastructure such as the evolving Chinese social credit system and gaming products; firms from Alibaba and Tencent to Facebook, Microsoft, and Google; and relevant legal, regulatory, and institutional structures. Adding to the confusion, many existing areas of emphasis, such as the IoT, can contribute to all three cognitive ecosystem domains in different ways, creating an analytical complexity that may be one reason why the cognitive ecosystem has lurked, unrecognized, for so long.
Across each of the three domains constituting the cognitive ecosystem, there is rapid, accelerating development and deployment of technologies, services, and cultural and social practices that, taken together, are forming the constituent components of cognition. Sensory input? An estimated 25 to 50 billion objects, from toilets, refrigerators, and microwaves, to most mechanical systems such as cars and airplanes (and their engines), to mobile phones and computational devices, are currently coupled to the Internet. Many of these devices have sensing and information collection as their primary function. The current number of sensors alone has been estimated at 1 trillion. Just as in biological systems, these and many other data streams are fed into decision-making devices, which can be as simple as a thermostat – “turn on or off, depending on temperature” – or as complex as autonomous vehicles, the AI-based networks that keep modern petrochemical or manufacturing plants operating, or even Google’s search engines and China’s nascent social credit system. Moreover, learning occurs at all scales as devices talk to each other, reprogramming themselves to function more efficiently and effectively based on data and assessment across networks of devices; each Tesla teaches other Teslas. Thus, machine-to-machine connections at all scales are exploding, increasing from 17.1 billion in 2017 to an estimated 27.1 billion by 2021.
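To make the spectrum concrete, here is a minimal, purely illustrative sketch of the two patterns just described: a thermostat-style local decision rule, and fleet-level learning in which devices pool what they learn. The function names, thresholds, and telemetry values are invented for illustration and are not drawn from any actual product.

```python
# Illustrative sketch only: invented names, thresholds, and telemetry values.

def thermostat(current_temp_c: float, setpoint_c: float = 20.0, band_c: float = 0.5) -> str:
    """The simplest decision-making device: turn heating on or off, depending on temperature."""
    if current_temp_c < setpoint_c - band_c:
        return "heat_on"
    if current_temp_c > setpoint_c + band_c:
        return "heat_off"
    return "hold"

def fleet_update(local_estimates: list[float]) -> float:
    """The fleet-learning pattern reduced to its skeleton: each device reports a locally
    learned parameter, and an aggregate is pushed back out to every device."""
    return sum(local_estimates) / len(local_estimates)

print(thermostat(18.7))                                  # -> "heat_on"
print(fleet_update([41.2, 39.8, 40.5]))                  # hypothetical per-vehicle estimates
```

The point of the sketch is the architecture rather than the code: local rules act on local sensor input, while shared parameters let learning propagate across a network of devices.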
These data processing and information management systems mimic human cognitive processes: data identified algorithmically as redundant or irrelevant are discarded, and patterns defined by experience or training as meaningful are kept and transmitted to higher-level networks. The cognitive ecosystem, like human cognition, generates layers of processing networks that float on lower-level sensing and model-building functions and in turn inform higher-level cognitive functions. Such coupled, integrative networks operating at vastly different scales in turn require substantial distributed AI and memory capability. Memory? Estimates are that at the end of 2018, global stored data stood at 33 zettabytes, increasing to 125 zettabytes by 2025 (a zettabyte is a unit of information roughly equal to one sextillion, or 10^21, bytes). Some idea of the scale of AI/memory functionality is provided by the fact that today some 425 million servers are deployed globally. All these figures, by the way, should be regarded as approximate, because infrastructure change is rapid and many firms and governments do not release information regarding their physical infrastructure, such as number of servers, for competitive and security reasons.
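A toy sketch of that layered filtering may help: readings judged redundant are discarded at the edge, and only a condensed pattern is passed to the next level up. Everything here (function names, the tolerance value, the sample readings) is an invented illustration under the assumptions just stated, not a description of any deployed system.

```python
# Illustrative sketch only: a toy version of edge-level filtering feeding a higher layer.
from statistics import mean

def local_filter(readings: list[float], tolerance: float = 0.1) -> list[float]:
    """Discard readings that are redundant, i.e. within tolerance of the last kept value."""
    kept: list[float] = []
    for r in readings:
        if not kept or abs(r - kept[-1]) > tolerance:
            kept.append(r)
    return kept

def summarize_for_higher_layer(readings: list[float]) -> dict:
    """Condense a raw sensor stream into the pattern a higher-level network actually needs."""
    kept = local_filter(readings)
    return {"n_raw": len(readings), "n_kept": len(kept), "mean_kept": mean(kept)}

print(summarize_for_higher_layer([20.0, 20.01, 20.02, 21.5, 21.52, 19.0]))
# -> {'n_raw': 6, 'n_kept': 3, 'mean_kept': 20.166...}
```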
So what we have, in a historically short period of time, is a metasystem emerging from an ongoing process of accelerating capability and capacity across a number of apparently unrelated infrastructures, services, institutions, and technologies, tied together by AI and various institutional structures and networks, that 1) contains the functional components of cognition, including ever more powerful networks linking them together operationally, 2) operates at many scales from the local and regional to the global, and is thus an increasingly critical dimension of the Anthropocene, 3) is evolving entirely new systemic and behavioral capabilities, and 4) is driven forward by powerful competitive forces at the state and corporate levels. This is the cognitive ecosystem, and we are building it without recognizing it.
Recognizing the emergence of the cognitive ecosystem, however, and understanding its characteristics and behaviors, has some deeply practical implications. For example, a recent Defense Innovation Board report laid out five ethical requirements for U.S. military AI, including the principle that technical experts be able to understand what the AI system is doing and how it is doing it. This may be feasible for local and bounded AIs. But given that a fundamental benefit of systemic AI is precisely its ability to draw on the cognitive ecosystem to inform decision-making in domains that are too complex, data-intensive, or fast-moving for human cognition, and that functions within the cognitive ecosystem may well be occurring at levels that are not even in principle intelligible to an individual human brain, such a principle may well be doomed from the start. A critical category error lurks here: national security requires engagement not just with component systems such as AI, but with the cognitive ecosystem itself. Challenging initiatives such as the social credit system, increasingly being exported by China to other authoritarian regimes, are products not just of AI, but of the cognitive ecosystem taken as a whole.
Similarly, in response to a bevy of complicated and interconnected policy challenges including security, privacy, speech rights, the role and definition of “truth,” and the rise of vicious tribal politics, leaders in the United States and the European Union are proposing a number of policies that reach back to traditional frameworks, from antitrust to the increasingly obsolete Constitutional idea of free speech. Knowing, however, that the situation involves the emergence of a conceptually new and more complex infrastructure, based on fundamental changes in the information environment, provides a bracing reality check: just as one could not regulate Rockefeller’s Standard Oil by using the regulations applicable to a local coal supplier, attempts to use past understanding to manage the cognitive infrastructure are category mistakes doomed to failure.
Indeed, the comfortable retreat to past verities is becoming an increasingly unethical way of avoiding the responsibility to grapple with the real complexities attendant on the emergence of the cognitive ecosystem. Without accepting that challenge, ethical, rational and appropriate policy and institutional adaptation will remain beyond reach.
About the Author:
Brad Allenby is President’s Professor of Civil, Environmental, and Sustainable Engineering, and of Law; Lincoln Professor of Technology and Ethics; Senior Sustainability Scientist; and co-chair of the Weaponized Narrative Initiative of the Center for the Future of War, at Arizona State University. He moved to ASU from his previous position as the Environment, Health and Safety Vice President for AT&T in 2004.
Dr. Allenby received his BA from Yale University, his JD and MA (economics) from the University of Virginia, and his MS and Ph.D. in Environmental Sciences from Rutgers University. He is past President of the International Society for Industrial Ecology and ex-Chair of the AAAS Committee on Science, Engineering, and Public Policy. He is an AAAS Fellow and a Fellow of the Royal Society for the Arts, Manufactures & Commerce, and has been a U.S. Naval Academy Stockdale Fellow (2009-2010), an AT&T Industrial Ecology Fellow (2007-2009), and a Templeton Research Fellow (2008-2009). He served as Director for Energy and Environmental Systems at Lawrence Livermore National Laboratory (1995-1997), and the J. Herbert Holloman Fellow at the National Academy of Engineering (1991-1992).
His areas of expertise include emerging technologies, especially in the military and security domains; Design for Environment; industrial ecology; sustainable engineering; and earth systems engineering and management. In 2008 he was the Carnegie Foundation Arizona Professor of the Year. His latest books are Industrial Ecology and Sustainable Engineering (co-authored with Tom Graedel in 2009), The Techno-Human Condition (co-authored with Dan Sarewitz in 2011), The Theory and Practice of Sustainable Engineering (2012), The Applied Ethics of Emerging Military and Security Technologies (an edited volume released by Ashgate Press in 2015), Future Conflict and Emerging Technologies (2016), Weaponized Narrative: The New Battlespace (co-edited with Joel Garreau, released in 2017), and Moral Injury: Towards an International Perspective (co-edited with Tom Frame and Andrea Ellner, 2017).