Rodrick Wallace Editor
Essays on Strategy and Public Health The Systematic Reconfiguration of Power Relations
Editor: Rodrick Wallace, The New York State Psychiatric Institute, Bronx, NY, USA
ISBN 978-3-030-83577-4    ISBN 978-3-030-83578-1 (eBook)
https://doi.org/10.1007/978-3-030-83578-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
Tactical and even operational excellence are quite meaningless save with respect to their political and strategic contextual significance. —Colin S. Gray (2006a).
The subtitle of Colin Gray’s monograph on military theory from which the epigraph has been taken is “The Sovereignty of Context.” Those tasked with the protection of the state against diseases of mass mortality confront similar magnitudes of catastrophe as those who must ensure the state’s external security and would do well to consider Gray’s words. Indeed, matters of “national security” are now confronted by, and convoluted with, challenges of pandemic disease as dire as those expected of war. COVID-19, with its mere 2% fatality rate, is only a harbinger of the looming Justinian Plagues of our own imperial decline.

Wars of the twentieth century caused many tens of millions of premature mortalities, as did both chronic and infectious disease. The avoidance of nuclear exchange in the latter half of the century, sometimes only by chance, was paralleled, in most industrialized polities, by significant decline in rates of infectious and chronic disease mortality through improvements in living and working conditions. At present, however, neoliberal land exploitation and agribusiness protein factory farming policies have made public health increasingly precarious (e.g., Wallace et al. 2016, 2018; R.G. Wallace 2016).

Public health and national defense are distressingly similar, not only in the existential threats consequent on their strategic failures but also in the basic nature of the challenges. Armed state security faces similarly equipped and similarly skilled adversaries able to match any technological “revolution in military affairs” (RMA) tit-for-tat, either directly or by Lamarckian or Darwinian evolutionary adaptation under the dynamics of active confrontation. Examples range from the demise of the Wehrmacht on the Eastern Front to US experiences in Korea after Inchon, Vietnam, Iraq, and Afghanistan.
Public health confronts pathogens similarly able to undergo evolutionary adaptation to contextual opportunities by means ranging from simple genetic variability to the more active modes associated with antibiotic resistance, and so on. Multiple drug resistant HIV, TB, and malaria compete with recombinant cross-species influenzas and “emergent infections” as potential sources of highly infectious outbreaks having mortality rates of 20–50%. As a consequence, potential loss of life to pandemic disease fully rivals that of the initial stages of a nuclear exchange between major actors. Subsequent trajectories of such an exchange, of course, would be marked by substantial infectious disease outbreaks among survivors immunocompromised by radiation, combustion product contamination, stress, and famine. Gray (2006a) indirectly addresses the failures of Wehrmacht and US tactical superiority under protracted strategic challenge. He finds that effecting a revolutionary change in the way one fights must be done adaptably and flexibly. Failing the adaptability test, he goes on, makes one beg to be caught out by the diversity and complexity of future warfare. He argues that locking oneself into a way of war that is highly potent only across a narrow range of strategic and military contexts, and hence operational taskings, wounds the ability to recognize and understand other varieties of radical change in warfare, causing a lag in the ability to respond effectively to them. Revolutionary change in warfare, in his view, always triggers a search for antidotes, and eventually the antidotes triumph, taking any or all of tactical, operational, strategic, or political forms. His solution, in principle if not always in practice, is to carry through an RMA that is adaptable, flexible, and dynamic. 
“Flexibility”—in this particular context—is an essential strategic characteristic, where strategy, we will argue at length in subsequent chapters, can be seen as a kind of continuously adaptive meaningful “statement” constructed from an alphabet of tactical and operational actions in an extended dialog with an adversary able to make its own such statements from its own—and often markedly different—tactical and operational alphabet. Angstrom and Widen (2015) describe the direct military circumstance, finding it misleading to think that only the weak party learns to adapt its strategy in relation to the strong. It is, they state, a new language that both parties ascertain from one another and create (or construct) in their interaction. In this way, the strong will learn as much from the weak as the other way around. Moreover, considering the use of force as a language, a shared language is necessary to reach a politically stable post-war peace. Gradually, they continue, it will not only be that the parties learn about each other’s preferences and thus are able to credibly commit to courses of action, but emulation will also be key to identifying a shared meaning of a particular strategic behavior. Through progressively learning and by emulating their respective strategies, they find, the adversaries learn each other’s norms and discover how the opponent understands the manner in which force leads to political results. And, they conclude, when the adversaries agree upon a certain set of norms, they will be in a position where force influences the outcome of the war.
There is a recent case history that weaves war and public health together. The New Dealers, tasked with shoring up the US political system in the aftermath of the Great Depression, were thrown headfirst and wholesale into both the European and Pacific wars, courtesy of the profound Japanese strategic stupidity of the (tactically flawed) Pearl Harbor attack. Sufficient Allied strategic skill directed the overwhelming industrial capacity of the USA and the steadfast dedication of the Russian and Chinese people, as expressed in sufficient, if not particularly outstanding, tactical skills, to a successful conclusion of two distinct wars against strategically incompetent polities. Social dynamics within the USA, arising from both the New Deal efforts and the subsequent full-scale war mobilization, persisted for some time afterwards, most particularly in the context of a 1947 smallpox outbreak in New York City. As Weinstein (1947) describes, it was, with some inevitable operational difficulty, nonetheless possible to vaccinate over 6 million children and adults between April 26 and May 3 of that year, suppressing the outbreak. Fortunately, smallpox has not evolved around the vaccine, and worldwide extirpation of the disease has been possible.

Since the 1950 landing at Inchon, however, the USA has not, in spite of overwhelming technical and operational superiority over adversaries, won a strategically significant tactical military victory, and the domestic outfalls of US “victory” in the Cold War, including massive deindustrialization, have served to politically destabilize the country and open it further to emerging infection (e.g., Wallace et al. 1999 and references therein). Indeed, current plans for massive immunization against COVID-19 may well founder on organized and “passive resistance” antivaccination campaigns, a far cry from the country that powered victory in a truly massive two-front war, the largest in human history.
According to the Centers for Medicare and Medicaid Services (CMS 2019), in 2018, the US spent $3.6 trillion, or $11,172 per person, on health care, some 18% of the US gross domestic product, to little avail. In comparison with other industrialized nations (Commonwealth Fund 2020):

• The USA spends more on health care as a share of the economy—nearly twice as much as the average OECD country—yet has the lowest life expectancy and highest suicide rates among the 11 nations.
• The USA has the highest chronic disease burden and an obesity rate that is two times higher than the OECD average.
• Americans had fewer physician visits than peers in most countries, which may be related to a low supply of physicians in the USA.
• Americans use some expensive technologies, such as MRIs, and specialized procedures, such as hip replacements, more often than peers in other nations.
• Compared to peer nations, the USA has among the highest number of hospitalizations from preventable causes and the highest rate of avoidable deaths.

It is difficult to escape the inference that US healthcare expenditures represent an attempt to mitigate the health impacts of discriminatory power relations without
significant address of the underlying cause. D. Wallace and R. Wallace (2019) put it thus:

If critical masses of working people in a nation feel insecure, have to work long hours for low wages and under unsafe conditions, and have no protection by culture or regulation, the whole disease guild of obesity, diabetes, HIV, TB, substance abuse, unsafe sex, violence, and other outcomes of extreme disempowerment blossoms.
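As a brief sanity check, the three CMS (2019) spending figures quoted above are internally consistent. The sketch below simply restates those quoted numbers; the derived population and GDP values are arithmetic implications of them, not independent data:

```python
# Back-of-envelope check of the CMS (2019) figures cited above.
# Derived values are implications of the quoted numbers, not independent data.
total_spending = 3.6e12  # $3.6 trillion, 2018 US health spending
per_capita = 11_172      # dollars per person
gdp_share = 0.18         # "some 18%" of GDP

implied_population = total_spending / per_capita  # roughly 322 million
implied_gdp = total_spending / gdp_share          # roughly $20 trillion

print(f"implied population: {implied_population / 1e6:.0f} million")
print(f"implied GDP: ${implied_gdp / 1e12:.1f} trillion")
```

Both implied values sit close to the rough 2018 US benchmarks (a population near 330 million and a GDP near $20 trillion), so the quoted spending total, per-capita figure, and GDP share cohere.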
A study by Dubos and Dubos (1953), of an earlier era, described the kernel of the matter this way:

Tuberculosis was, in effect, the social disease of the nineteenth century, perhaps the first penalty that capitalistic society had to pay for the ruthless exploitation of labor.
The impending failure of “directly observed therapy” for TB control—massive drug treatment of infection—consequent on both the evolution of microbe resistance and the fragility of such programs in the face of “perturbations” like COVID-19, economic catastrophe, armed conflict, and the like, challenges current medical attempts to preserve the power relations driving patterns of infection. Carter et al. (2018), in a study for the World Health Organization, suggest, in effect, “directly observed monies” as a substitute strategy for preservation of those power relations (D. Wallace and R. Wallace 2019).

At present, the US military budget approaches a “mere” $750 billion annually, again focusing on tactical/technological superiority over potential adversaries that has had—and will likely continue to have—little effect in real-world confrontations requiring strategic competence. A sterling current example is the trillion-dollar F-35 Joint Strike Fighter debacle (Insinna 2019). Such projects come at a profound opportunity cost, squandering scarce engineering and scientific talent and resources needed to rejuvenate the deindustrialized heartlands that are the epicenters of much of the nation’s current and growing political instability (Wallace et al. 1999 and references therein). Internet memes, “information economy,” and “working-at-home” aside, the USA imports more heavy-duty industrial goods than it manufactures (US Census 2020).

The parallels between US strategic failures in public health and national security are not random. Gray (2006b) puts the matter in military terms, arguing that analysis supports and affirms the view that culture in its several guises—public, strategic, and military-organizational—is vitally important. The “cultural thoughtways” of friends, foes, and oneself can have a directive or a shaping effect upon decisions and behavior.
He argues that, since we are all necessarily encultured, everything that we think strategically, and that subsequently we seek to do for strategic reasons, may be influenced by the cultural dimension. There are, in his view, many reasons why policy and strategy can succeed or fail, and cultural empathy or blindness is only one of them. He finds serious reasons, rooted in local perceptions of historical experience and in a community’s geopolitical context, why a country’s strategic culture is what it is, concluding that, recognizing the need for change does not necessarily ensure that
the needed change will occur, since change may meet with too much resistance. Gray (2006b) concludes:

Practical people, a category that should include strategists, will ask that most brutally direct of questions, “so what?” So what do we do with greater self-, and other-, cultural understanding? Culture matters greatly, but so do the other dimensions of war, peace, and strategy.
And public health. It is the intention of this work, essays from a number of perspectives, to more fully examine the role of strategic thinking in public health, and the role of culture in that thinking, with the intent of providing tools that might make needed change more likely. The context for what we study is well illustrated by the faceplate of the annual Summary of Vital Statistics released by the New York City Department of Health and Mental Hygiene (NYCDOH 2017), showing the precipitate decline in the city’s death rate that was consequent on improvements in living and working conditions driven by the political power of the Great Reform and the Labor Movement. This decline occurred by 1920, well before development of “scientific medicine” after World War II.
Fig. 1 Adapted from NYC DOH 2017. New York City deaths per 1000 population, 1800– 2017. By 1920, absent significant medical technology aimed at individual treatment of morbidity, improvements in living and working conditions—the Great Reform and Labor Movements—had brought death rates to historically low levels. We know how to do this
We know how to do this, but the essential matter, in public health as in armed conflict, remains alteration of the power relations between social groupings, communities, and institutions. This book intertwines theoretical and statistical models that might be useful for empirical studies and policy planning with qualitative case histories in which power relations drive population patterns of health and illness. Both approaches are viewed through the lens of strategies—extended tactical statements—that could alter those relations and their outcomes. Although there is a unifying theme across them, the chapters are largely independent and can be read individually. Bronx, NY, USA
Rodrick Wallace
References

Angstrom, J., & Widen, J. (2015). Contemporary military theory: The dynamics of war. Routledge.

Carter, D., Glaziou, P., Lonnroth, K., Siroka, A., Floyd, K., Weil, D., Raviglione, M., Houben, R., & Boccia, D. (2018). The impact of social protection and poverty elimination on global tuberculosis incidence: A statistical modeling analysis of Sustainable Development Goal 1. Lancet Global Health, 6. https://doi.org/10.1016/S2214-109X(18)30195-5

CMS. (2019). https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical

Commonwealth Fund. (2020). https://www.commonwealthfund.org/publications/issue-briefs/2020/jan/us-health-care-global-perspective-2019

Dubos, R., & Dubos, J. (1953). The white plague: Tuberculosis, man, and society. Little, Brown, and Company.

Gray, C. S. (2006a). Recognizing and understanding revolutionary change in warfare: The sovereignty of context. US Army Strategic Studies Institute. ISBN 1-58487-232-2.

Gray, C. S. (2006b). Out of the wilderness: Prime-time for strategic culture. Defense Threat Reduction Agency, Advanced Systems and Concepts Office, Comparative Strategic Cultures Curriculum, Contract No. DTRA01-03-D-0017, Technical Instruction 18-06-02.

Insinna, V. (2019). Inside America’s dysfunctional trillion-dollar fighter-jet program. New York Times. https://www.nytimes.com/2019/08/21/magazine/f35-joint-strike-fighter-program.html

NYC DOH. (2017). Summary of vital statistics 2017: The City of New York. https://www1.nyc.gov/assets/doh/downloads/pdf/vs/2017sum.pdf

US Census. (2020). https://www.census.gov/foreign-trade/statistics/historical/index.html
Wallace, D., & Wallace, R. (2019). Problems with the WHO TB model. Mathematical Biosciences, 313, 71–80.

Wallace, R. G. (2016). Big farms make big flu: Dispatches on infectious disease, agribusiness, and the nature of science. Monthly Review Press.

Wallace, R., Wallace, D., Ullmann, J., & Andrews, H. (1999). Deindustrialization, inner-city decay, and the hierarchical diffusion of AIDS in the USA: How neoliberal and cold war policies magnified the ecological niche for emerging infections and created a national security crisis. Environment and Planning A, 31, 113–139.

Wallace, R. G., & Wallace, R. (Eds.). (2016). Neoliberal Ebola: Modeling disease emergence from finance to forest and farm. Springer.

Wallace, R., Chaves, L. F., Bergmann, L., Ayres, C., Hogerwerf, L., Kock, R., & Wallace, R. G. (2018). Clear-cutting disease control: Capital-led deforestation, public health austerity, and vector-borne infection. Springer.

Weinstein, I. (1947). An outbreak of smallpox in New York City. American Journal of Public Health, 37, 1376–1384.
Contents

1. Wicked Strategic Problems (Rodrick Wallace)
2. Punctuated Institutional Problem Recognition (Rodrick Wallace)
3. Fog and Friction as Resources (Rodrick Wallace)
4. Strategic Culture (Rodrick Wallace)
5. Agribusiness vs. Public Health: Disease Control in Resource-Asymmetric Conflict (Rodrick Wallace, Alex Liebman, Luke Bergmann, and Robert G. Wallace)
6. Power Relations and COVID-19 in New York City (Deborah Wallace)
7. Tuberculosis, the Marker of Abusive Power Relations (Deborah Wallace)
8. Literacy and Public Health (Deborah Wallace)
9. How Policy Failure and Power Relations Drive COVID-19 Pandemic Waves (Rodrick Wallace)
10. Concluding Remarks (Rodrick Wallace and Deborah Wallace)

Index
About the Authors
Luke Bergmann is an associate professor in the Department of Geography at the University of British Columbia, Canada. His research explores how globalization affects human-environment relations through several thematic focuses. These include changes in the evolution and spread of pathogens, as well as studies of shifting sociospatial relationships between carbon, land use, capital, and consumption. He is a co-author of Clear-Cutting Disease Control: Capital-Led Deforestation, Public Health Austerity, and Vector-Borne Infection (Springer) as well as numerous peer-reviewed academic publications in geography.

Alex Liebman is a PhD student in geography at Rutgers University interested in the turn towards big data science in agricultural research and its implications for climate change, peasant livelihoods, and agroecological landscapes. He explores how agricultural data science reproduces particular forms of standardization and homogeneity that constitute racialized and exclusionary forms of international development and environmental management across the global South, focused on agrarian frontier zones in Colombia. In Colombia, he works with the La Via Campesina popular education school, IALA Maria Cano, as well as several agroecological research teams and is deeply guided and inspired by how these groups combine ecological practice, radical pedagogy, and emancipatory politics in their daily practices and theoretical work.

Deborah Wallace received her PhD in symbiosis ecology from Columbia University in 1971. In 1972, she became an environmental studies manager at Consolidated Edison Co. and participated in pioneering environmental impact assessment. She became a manager of biological and public health studies at NYS Power Authority in 1974 and remained there until early 1982. In 1980, she completed a mini residency in epidemiology at Mt. Sinai Medical Center. In the mid-1970s, she also founded Public Interest Scientific Consulting Service, which produced impact assessments of massive cuts in fire service in New York City. She also probed the health threats that plastics in fires posed to firefighters and became an expert witness in litigation for plaintiffs in large fires fueled by plastics. During 1985–1991, she worked for Barry Commoner at the Center for the Biology of Natural Systems at Queens College. During 1991–2010, she tested consumer products and services for their environmental and health impacts at Consumers Union. She retired in 2010 but continues data analysis, research, and scientific publications. Her first paper was published in 1975, and her latest publication, a book on COVID-19 in New York, in 2021.

Robert G. Wallace is an evolutionary epidemiologist at the Agroecology and Rural Economics Research Corps. His research has addressed the evolution and spread of influenza, the agroeconomics of Ebola and COVID-19, the social geography of HIV/AIDS in New York City, the emergence of Kaposi’s sarcoma herpesvirus out of Ugandan prehistory, and the evolution of infection life history in response to retrovirals. Wallace is coauthor of Neoliberal Ebola: Modeling Disease Emergence from Finance to Forest and Farm (Springer) and Clear-Cutting Disease Control: Capital-Led Deforestation, Public Health Austerity, and Vector-Borne Infection (Springer). He has been consulted by the Food and Agriculture Organization of the United Nations and the Centers for Disease Control and Prevention, USA.

Rodrick Wallace is a research scientist in the Division of Epidemiology at the New York State Psychiatric Institute, affiliated with Columbia University’s Department of Psychiatry. He has an undergraduate degree in mathematics and a PhD in physics from Columbia, and completed postdoctoral training in the epidemiology of mental disorders at Rutgers. He worked as a public interest lobbyist, including two decades conducting empirical studies of fire service deployment, and subsequently received an Investigator Award in Health Policy Research from the Robert Wood Johnson Foundation.
In addition to material on public health and public policy, he has published peer-reviewed studies modeling evolutionary process and heterodox economics, as well as many quantitative analyses of institutional and machine cognition. He publishes in the military science literature, and in 2019 received one of the UK MoD RUSI Trench Gascoigne Essay Awards.
Chapter 1
Wicked Strategic Problems
Rodrick Wallace
Strategy is exceptionally difficult to do well, and as a consequence many, possibly most, people who attempt to do it do not perform well. —Colin S. Gray (2018).

. . . [T]he U.S. government not only has lost the ability to do strategy well, but . . . many senior officials do not understand what strategy is. —Krepinevich and Watts (2009).
1.1 Introduction

Institutions engaged in real-time contention—with similar agents or under highly structured environmental burdens—are embodied cognitive entities forced to function within “Clausewitz landscapes” of imprecision, uncertainty, delay, resource constraint, and, most often, adversarial intent and/or Lamarckian or Darwinian evolutionary pushback. Watts (2004), from a narrow national security and military perspective, describes strategic problems associated with prolonged conflict as “wickedly hard” in the sense of Rittel and Webber (1973): Under such challenge, there are limited means to address exponentially exploding possibilities. Tactical problems—the “alphabet” from which the statements of the ongoing “strategic dialog” with an adversarial entity or condition are constructed—by contrast, routinely yield to straightforward, if not simple or easy, “engineering” solutions. These are, classically, clever, flexible assault combined with overwhelming force. Indeed, most Western-styled armies attempt to replicate the structure and operations of the 1940 Wehrmacht. And most, like the Wehrmacht, have also abandoned Clausewitz,
replacing strategy with the persistent tactical thrashing Krepinevich and Watts (2009), Gray (2015, 2018), Strachan (2019), and many others so decry.

Rittel and Webber (1973), whose insights Watts (2004) adopts, describe wicked problems in a bone-chilling decalogue of classic terms:

• There is no definitive formulation of a wicked problem.
• Wicked problems have no stopping rule.
• Solutions to wicked problems are not true-or-false, but good-or-bad.
• There is no immediate and no ultimate test of a solution to a wicked problem.
• Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
• Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
• Every wicked problem is essentially unique.
• Every wicked problem can be considered to be a symptom of another problem.
• The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.
• The planner has no right to be wrong.
The military perspective diverges significantly from the predominant (i.e., institutionally funded) academic derivatives of this work. The seminal analysis of Rittel and Webber has spawned a civilian policy science consensus viewing “wickedness” as most essentially driven by the need to actively reconcile competing interests (and their associated power relations). Western military theory, by contrast, takes the resolution of such competition by prolonged, active, conflict as a given. It is that perspective—driven by the reality of power relations—we adopt here, although other—broadly evolutionary—approaches are indeed possible (e.g., Wallace 2020b, Ch. 5). The real world, after all, is not a kind place. Such divergence in the Western representation of wicked problems has indeed been explicitly recognized in the scholarly literature. For example, Head (2019) explores how several critical perspectives have challenged the salience of positivist certainties in social policy and questioned the alluring vision of “evidence-based” policymaking, for example, focusing on the power-shaping realities of political and organizational systems. Head describes the political-economy critique, i.e., that, despite extensive evidence of poverty, public decision-making in practice reflects the interests of power elites, regardless of formal democratic processes such as legislative elections, so that the issues chosen for policy attention and the way those issues are defined will generally reflect the structural power of business, their lobbyists, and their political representatives. Head remarks on how other critics of positivist social science emphasize the cognitive and sociocultural differences that hinder agreement in a society characterized by divergent values. Head notes that this cognitive and sociocultural argument is that policy-relevant knowledge about social issues is always plural, not unitary, and therefore, consideration of social issues is ultimately about how
divergent perspectives are expressed, mobilized, and sometimes reconciled. Head concludes that the mainstream “wicked problems” literature is mainly located in this second camp, focusing on how actors in various situations articulate their diverse perspectives about public issues. Our central interest here is more in line with current military thinking, exploring the dynamic aspects and impacts of power relations, in contrast to academic mainstream wicked problems research that, perhaps to receive funding from the powerful, has focused on “reconciliation” of divergence.

Varieties of strategic wickedness have also been recognized. Strachan (2019) calls for a distinction between strategy as using battle for the purposes of war and strategy using war for the purposes of policy. The distinction, he claims, fuels Napoleon’s critics, who denounce him for failing to see that the purpose of war was not to create the conditions for the next battle but the capacity to convert war into a lasting peace. Strachan (2019) continues with a significant caution regarding the characteristic hubris of the strategist, arguing that strategy needs to be modest about itself and about what it can deliver, being more of an art than a science. He warns that those who think about it and those who practice it should not be too brazen about its status. Its principles may be guidelines, in his view, but, as strategic theorists who are worth their salt have stressed, they are not rules. Precisely because strategy is a pragmatic business it lacks the clarity and purity that strategic theory so often seeks. Strategy, Strachan asserts, has to be reactive as well as predictive, and it must be open to new evidence, whether presented as intelligence or acquired as experience.

The canonical absence of clarity and purity—wickedness—is front-and-center in other matters as well. The US intelligence scholar and practitioner David T.
Moore (2011) characterizes a spectrum of American intelligence failures from a wicked problem perspective. These failures, of both intelligence and policy, include:

• Japan’s attack on Pearl Harbor (intelligence failure, policy failure).
• North Korea’s invasion of the South and China’s involvement in the subsequent war (policy failure, intelligence failure).
• The Soviet Union’s deployment of IRBM and MRBM nuclear missiles in Cuba (intelligence failure).
• The Vietnamese Tet Offensive (policy failure, intelligence failure).
• The fall of the Shah of Iran (intelligence failure).
• The Soviet Union’s invasion of Afghanistan (intelligence failure).
• The collapse of the Soviet Union (intelligence failure, policy failure).
• Iraq’s 1990 invasion of Kuwait (intelligence failure, policy failure).

Most presciently, Moore (2011) described pandemics as inherently wicked problems, finding that one of the threats faced by intelligence organizations and their professionals is that of an emergent global pandemic. Moore asks, what kind of a threat is a pandemic? Is it a tame or wicked problem, or something in between? Such considerations matter, in his view, because they define what approaches are suitable for alleviating or mitigating the threats to national security that pandemics pose. Pandemics, he states, are by their nature adaptive and possibly recurring. Seen in
4
R. Wallace
hindsight, he states, pandemics may appear to be tame problems, seemingly clearly defined and understood. But the requirement to deal with pandemics (and other wicked problems) is not to address them merely in hindsight—rather our well-being depends on foresight. This is, Moore asserts, after all, how intelligence enterprises and their professionals work their issues. In his view, Rittel and Webber’s criteria for a wicked problem provide a means of characterizing pandemics. Recent history would seem to bear out such a perspective. The progression of the COVID-19 pandemic has been marked by the interplay of power relations, historical trajectory, and the need for different perspectives to rapidly reach consensus (D. Wallace and R. Wallace 2021). Colin S. Gray (2015) explores in some detail a further fundamental dichotomy that must always be explicitly addressed, that between tactics and a strategy made up of “statements” constructed from the “engineering alphabet” of those tactics. Gray argues that the great challenge is that inherent in the differences in nature between tactics and strategy, two inclusive concepts that are fundamentally distinctive and radically different in meaning. Gray argues that any and all inherently tactical military action requires conversion into the different and higher currency of strategy. Gray finds tactics and strategy to be thoroughly mutually interdependent. He concludes that tactics should refer only to military action while strategy regards the consequences of military action. We will, ultimately, view strategy as representing the output of a “strategic information source” whose statements, constructed from the tactical alphabet, must certainly have meaning in the context of Clausewitzian fog and friction, but most particularly, also in the context of ongoing dialog with a skilled and dedicated adversary. 
Formal treatment of such matters, however, remains very much a minefield, as the abject failure of “game theory” during the Cold War shows: Very high level researchers are reported to have been lobbying long and hard for a thermonuclear first strike against the USSR, justified by the most elementary two-person conflict matrix. A psychopathic Merkwürdigliebe indeed. Field (2014) describes these circumstances, focusing on how John von Neumann, co-inventor of game theory, believed that inasmuch as the US-Soviet standoff was a Prisoner’s Dilemma, and inasmuch as both actors were rational and self-regarding, the only defensible policy was immediate attack. Von Neumann’s argument was that, since there was some chance of destroying an adversary’s offensive capability and/or will to retaliate by attacking, the best course of action was to launch now. Field shows how many others argued in a similar fashion, quoting the Joint Chiefs of Staff 1947 statement that “Offense, recognized in the past as the best means of defense, in atomic warfare will be the only general means of defense.” Field describes how one reason the Cold War remained cold was that proponents of preventive or preemptive nuclear war lost arguments in the late 1940s, throughout the 1950s, and again at the time of the Berlin Crisis in 1961 and the Cuban Missile Crisis in 1962.
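The Prisoner's Dilemma logic Field attributes to von Neumann can be sketched with a toy payoff matrix (the payoff numbers below are illustrative assumptions, not historical estimates):

```python
# Stylized two-player Prisoner's Dilemma illustrating the dominant-strategy
# argument. Payoffs are (row player, column player) utilities and are purely
# illustrative, not drawn from the source.
PAYOFFS = {
    ("hold", "hold"):     (3, 3),   # mutual restraint
    ("hold", "strike"):   (0, 5),   # struck first
    ("strike", "hold"):   (5, 0),   # strike first
    ("strike", "strike"): (1, 1),   # mutual devastation
}
ACTIONS = ("hold", "strike")

def best_response(opponent_action):
    """Row player's best reply to a fixed opponent action."""
    return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

# "strike" is the best reply to either opponent action, so mutual striking
# is the unique Nash equilibrium, even though mutual restraint is better
# for both players. This is exactly the pathology the text describes.
equilibrium = (best_response("hold"), best_response("strike"))
print(equilibrium)  # -> ('strike', 'strike')
```

The point of the toy model is that "rational and self-regarding" play selects the mutually worst stable outcome; treating the standoff as anything other than a one-shot matrix game was what the preventive-war argument omitted.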
1 Wicked Strategic Problems
5
Fig. 1.1 Adapted from Diamond et al. (2007). The two forms of the Yerkes–Dodson law, contrasting a simple and a difficult task: S-shaped and inverted-U
Field asserts that the heirs to von Neumann’s way of thinking have continued periodically to occupy prominent positions in the executive branch of the US Government. Given the Strangelovian implosion of game theory, how does one restart formal exploration of the deadly minefields of policy and its strategic tools? Somewhat surprisingly, the empirical Yerkes–Dodson law (Diamond et al. 2007; Yerkes and Dodson 1908), relating complex task performance to arousal for individual animals under experimental conditions, provides entry into mathematical models of institutional conflict. The YD law has been, and in some measure remains, the subject of some controversy. Diamond et al. (2007) remark that a five-decade-long misrepresentation of Yerkes and Dodson’s findings has occurred despite the unambiguous statement by these authors that “an easily acquired habit may be readily formed under strong stimulation, whereas a difficult habit may be acquired only under relatively weak stimulation.” Figure 1.1, following Diamond et al. (2007), reinterprets the original Yerkes–Dodson results. Rather than simply an “inverted-U” dose–response pattern, a more complete analysis is that the form of the performance/arousal relation depends on the difficulty of the task, and this will prove crucial to our analysis. Fauquet-Alekhine et al. (2014) show two examples, normalized performance coefficients against normalized stress for a short mental occupational stress and a strong physical stress. Experimental confirmations of the “classic” YD law appear to be relatively rare. However, Northoff and Tumati (2019) describe in detail the vast literature regarding “inverted-U” responses under biological signal transduction. For these circumstances, for example, low doses of biomimetic toxins first activate biological mechanisms, and then, at higher dosage, cease to have meaning, triggering
decreasing response. Some case histories include Bigsby et al. (1999), McClintock and Luchinsky (1999), D. Wallace et al. (2003), Wilken et al. (2000), Wilson et al. (2000), and so on. Maturana and Varela (1980) hold that life is necessarily cognitive at every scale and level of organization. A consequent inference is that analogs to the Yerkes– Dodson relation(s) will appear very widely, from cellular processes, individual animal cognition, social function, institutional dynamics, to machine cognition, and all possible composites, including armies or other institutions “enabled” by artificial intelligence (e.g., Wallace 2021). In the real world, then, all cognition is embodied, purposed, and interacts with an embedding environment that ranges from other cellular components to vast ecological, socioeconomic, and institutional constructs. Human institutions, real-time machine control systems, and their composites are embodied. This circumstance most particularly includes the modalities of governance tasked with addressing threats of various natures at and across the many convoluted scales and levels of organization of human endeavor. Indeed, this overall approach is not particularly new, as McGann et al. (2013) describe in some detail. An “enactive” perspective to understanding the mind has emerged from the works of Varela et al. (1991). This is a naturalistic approach using a range of conceptual and empirical tools to study psychological processes as dynamic and embedded interactions between autonomous agents and their environment, so that the mind is not seen as inhering in the individual, but as emerging, existing dynamically in the relationship between organisms and their surroundings, including other agents. The perspective can, it seems, be fully generalized across cognitive components ranging from the cellular to social and institutional. 
Atlan and Cohen (1998) assert that cognition, most basically, involves receipt of sensory information from the embedding environment, comparison of that information with an internal, learned, or inherited picture of the world, and upon such comparison, choice of a small set of actions from a much larger repertoire of what is available. Choice decreases uncertainty. Reduction of uncertainty necessarily implies existence of an information source (Cover and Thomas 2006) “dual” to that cognitive process. The argument is direct and, in a sense, quite brutal (Wallace 2005, 2022, 2017, 2020a,b). Any such “internal picture of the world” as Atlan and Cohen invoke encounters a central, indeed, foundational problem. Manheim (2019) describes how any intelligence, whether machine learning-based, human, or AI, requires implicit simplification, since the branching complexity of even a relatively simple game such as Go dwarfs the number of atoms in the universe. He goes on to argue that, because even moderately complex systems cannot be fully represented, optimization failures are inevitable. He concludes the contrapositive to Conant and Ashby’s (1970) theorem, i.e., that if a system is more complex than the model, any attempt to control the system will be imperfect.
Imperfect control is perhaps the most fundamental norm for any and all embodied cognition. That is, the real world, with which an embodied cognitive entity interacts, will seldom conform to its wishes without resistance. The formal characterization of this conundrum is called the Data Rate Theorem.
1.2 The Data Rate Theorem

All embodied cognition acting in real time on real-world environments is inherently unstable under Manheim’s contrapositive to the Conant/Ashby Theorem. Think of a vehicle driven at night on a twisting, pot-holed roadway. Such an entity requires, in addition to a competent driver, good headlights, a stable motor, and reliable, responsive brakes and steering. The Data Rate Theorem (DRT), which extends the Bode Integral Theorem, establishes the minimum rate at which externally supplied control information must be provided for such an inherently unstable control system to remain stable. For the standard argument, see Nair et al. (2007). The usual analysis makes a first-order linear expansion near a nonequilibrium steady state (nss). An n-dimensional parameter vector at time t, say xt, determines the state at time t + 1 according to the expression

xt+1 = Axt + But + Wt    (1.1)
A and B are fixed n × n matrices. ut is the vector of control information, and Wt is an n-dimensional vector usually taken as Brownian “white noise.”

Fig. 1.2 A control system under the Data Rate Theorem. The state of the system X is compared with what is wanted, and a corrective control signal U is sent at an appropriate rate, under conditions of noise represented as W. The rate of transmission of the control signal must exceed the rate at which the inherently unstable system generates its own “topological information”

Figure 1.2 projects down onto an irreducible minimum the centrality of an institutional (or other) command-and-control structure in the presence of “noise,”
which might include unanticipated difficulties and occurrences that are most often modeled as an undifferentiated Brownian white noise spectrum. Such “noise,” however, can be “colored,” i.e., have a shaped power spectrum, leading to more complicated models. The DRT holds that the minimum rate at which control information H must be provided for stability of the inherently unstable system satisfies

H > log[|det[Am]|] ≡ H0    (1.2)
det is the determinant of the matrix Am . Assuming m ≤ n, Am is the subcomponent of A having eigenvalues ≥ 1. The right hand side of Eq. (1.2) is characterized in the literature as the rate at which the system generates its own “topological information.” Again, Nair et al. (2007) provide details of the standard derivation. An approach to the DRT based on the Rate Distortion Theorem, a related Black–Scholes approximation, and a first-order Onsager model, is given in the Mathematical Appendix. If the inequality of Eq. (1.2) is violated, stability is lost. For night-driving, if the headlights fail, or steering becomes unreliable, twisting roadways—that generate their own “topological information” at a rate dependent on vehicle speed—cannot be navigated. French et al. (1999) constructed and explored the dynamics of a neural network model of the Yerkes–Dodson mechanism, finding an interpretation of the Yerkes– Dodson law in terms of data compression, that is, the capacity of a network to learn difficult tasks breaks down at high levels of arousal because the computation of network weights associated with the arousal and other inputs strains computational resources. French et al. conclude that easy tasks require less computation and hence the network can perform well at higher levels of arousal. Data compression inherently involves matters central to information theory, in addition to those of control theory. Again, there is a formal context: the Rate Distortion Theorem (Cover and Thomas 2006). We examine a model below that incorporates both constraints, but must first impose a critical simplification.
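A minimal numerical sketch of the DRT bound of Eq. (1.2), under the usual reading that Am collects the eigenvalues of magnitude at least 1 (the example matrices are illustrative):

```python
import math

def drt_bound(a, b, c, d):
    """Topological-information rate H0 = log|det Am| for a 2x2 system
    matrix A = [[a, b], [c, d]], where Am collects the unstable
    eigenvalues (|lambda| >= 1). Eigenvalues via the quadratic formula
    on the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        root = math.sqrt(disc)
        mags = [abs((tr + root) / 2.0), abs((tr - root) / 2.0)]
    else:
        # Complex-conjugate pair: both eigenvalues have magnitude sqrt(det).
        mags = [math.sqrt(det)] * 2
    return sum(math.log(m) for m in mags if m >= 1.0)

# The night-driving metaphor: doubling the unstable mode (driving "faster")
# raises the minimum control-information rate needed for stability.
h_slow = drt_bound(2.0, 0.0, 0.0, 0.5)  # one unstable eigenvalue, 2
h_fast = drt_bound(4.0, 0.0, 0.0, 0.5)  # unstable eigenvalue now 4
```

Here `h_fast` is exactly `2 * h_slow`: the roadway generates "topological information" faster, so the control channel must carry more.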
1.3 Scalarizing Driving Parameters

We are interested in system dynamics when driven by restrictions on fundamental resources, including, but not limited to, information. Most particularly, the characterization of internal and external information, and their relations with material resources, in a large sense, requires attention before exploring system dynamics under stress. We are assuming a multicomponent, embodied, cognitive agent acting in-and-on a real-world landscape of imprecision in action, perception, and result.
There are, on such landscapes, we further assume, three essential resources for that agent. The first is the rate at which information can be transmitted between subcomponent elements of the agent itself, measured by an information channel capacity C. The second resource is the rate at which “sensory/intelligence” information is available to the agent from the “outside world,” say at the rate H. The third resource is the rate at which “material essentials” are available, taken as M. For neural cells, this is the rate at which metabolic free energy can be supplied. For armed combat, M is the rate of supply of personnel, ammunition, equipment, and fuel. For driverless cars on intelligent roads, M can be taken as the inverse linear vehicle density, in appropriate combination with a measure of road quality, vehicle responsiveness, and so on. The three rates necessarily interact, creating a 3×3 analog to a correlation matrix, say Z. An n × n matrix has n scalar matrix invariants rj that are robust to similarity transformations. These are constructed according to the standard polynomial relation

p(γ) = det[Z − γI] = γⁿ − r1γⁿ⁻¹ + r2γⁿ⁻² − · · · + (−1)ⁿrn    (1.3)
I is the n × n identity matrix, det the determinant, and γ a real-valued parameter. The first invariant is the matrix trace, and the last is ± the matrix determinant. The n matrix invariants may then be used in construction of a projective scalar index Z = Z(r1 , . . . , rn ) of total resource rate. The simplest might be Z = C × H × M . Such scalarization must be appropriate, and there may well be important cross-interactions between the three rates, so that “cross terms” are important. This is a subtle point: Scalarization permits projection down onto a one dimensional system. Expansion of Z into vector or full matrix form invokes multidimensional dynamic equations that are far less tractable and may require sophisticated Lie group methods for address. The Mathematical Appendix provides an outline of the methodology.
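For the n = 3 case of Eq. (1.3), the invariants can be computed directly; a sketch (the matrix entries are illustrative stand-ins for an actual C/H/M interaction matrix, and the check below carries the overall (−1)ⁿ sign explicitly for odd n):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def invariants3(m):
    """The three scalar invariants of Eq. (1.3) for n = 3:
    r1 = trace, r2 = sum of principal 2x2 minors, r3 = determinant."""
    r1 = m[0][0] + m[1][1] + m[2][2]
    r2 = (m[0][0] * m[1][1] - m[0][1] * m[1][0]
          + m[0][0] * m[2][2] - m[0][2] * m[2][0]
          + m[1][1] * m[2][2] - m[1][2] * m[2][1])
    return r1, r2, det3(m)

# Illustrative correlation-like resource matrix Z (not from the source).
Z = [[1.0, 0.2, 0.1],
     [0.2, 1.0, 0.3],
     [0.1, 0.3, 1.0]]
r1, r2, r3 = invariants3(Z)

# Characteristic-polynomial identity checked at a sample point; note the
# (-1)^n overall sign for n = 3: det[Z - g*I] = -(g^3 - r1 g^2 + r2 g - r3).
gamma = 2.0
lhs = det3([[Z[i][j] - (gamma if i == j else 0.0) for j in range(3)]
            for i in range(3)])
rhs = -(gamma**3 - r1 * gamma**2 + r2 * gamma - r3)
```

The simplest scalarization Z = C × H × M corresponds to using only the determinant invariant r3; richer indices Z(r1, r2, r3) keep the cross-interaction terms.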
1.4 Tactical/Operational Engineering

1.4.1 First Thoughts

At tactical and operational levels and scales of both routine operation and active conflict, it is sometimes possible to relatively directly measure—or at least estimate—the gap between what has been ordered and what has been performed. Gains, losses, delays, and suchlike, can be measured as “big data” to estimate the difference between what the hand reaches for and what the eye measures as the gap. Such measurement provides entry into military tactical and operational dynamics as an
engineering discipline, viewing tactics as the alphabet of a strategic message sent along a de facto operational information channel. We assert that information theory’s Rate Distortion Theorem (RDT) provides a deep and useful perspective on cognition and its failure that directly complements insights from the Data Rate Theorem. The RDT focus is on the difference between what is mandated and what is observed. That is, cognition-and-control is seen to involve an operational “channel” along which a “message” is transmitted. The difference between intent and action is characterized by a scalar distortion measure D. For any such (operational or other) channel, there is a Rate Distortion Function R(D), always convex in D (Cover and Thomas 2006), which determines the smallest channel capacity—a free energy measure from Feynman’s (2000) perspective representing an information transmission rate—needed to ensure that the average distortion between intent and execution is less than or equal to a scalar measure D. Taking R(D) itself as a free energy measure in the sense of Feynman (2000), and following the standard view from chemical reaction theory (Laidler 1987), the rate of cognition-and-control above the punctuated threshold H0 arising from the DRT can be characterized in terms of a Boltzmann pseudoprobability:

L ≡ Pr[R > H0] = (∫_{H0}^∞ exp[−R/g(Z)]dR) / (∫_0^∞ exp[−R/g(Z)]dR) = exp[−H0/g(Z)]    (1.4)
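The closed form of Eq. (1.4) is easy to confirm numerically; a sketch with arbitrary illustrative values g(Z) = 1 and H0 = 0.7, truncating the infinite upper limit:

```python
import math

def midpoint_integral(f, lo, hi, n=200000):
    """Simple midpoint-rule quadrature."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

# Illustrative values, not from the source; the cutoff truncates the
# infinite upper limit (the tail beyond it is ~exp(-40), negligible).
g, h0, cutoff = 1.0, 0.7, 40.0
dens = lambda r: math.exp(-r / g)

tail = midpoint_integral(dens, h0, cutoff)   # numerator of Eq. (1.4)
full = midpoint_integral(dens, 0.0, cutoff)  # denominator
ratio = tail / full                          # should equal exp(-h0/g)
```

The ratio agrees with exp[−H0/g(Z)] to numerical precision, which is just the exponential-distribution tail property underlying the pseudoprobability.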
Again, H0 is the limit from Eq. (1.2), and g(Z) is a scalar temperature analog depending on the scalar resource rate index Z that must be calculated from first principles. This is not a physical system in which temperature is an independent parameter, and the form of g(Z) must be determined from first principles for each system. The argument will be by abduction from Onsager’s approximate—here, linear— treatment of nonequilibrium thermodynamics (e.g., de Groot and Mazur 1984). Taking the denominator of Eq. (1.4) to be a statistical mechanical partition function generates an expression for an iterated free energy Morse Function F (Pettini 2007):
exp[−F(Z)/g(Z)] = ∫_0^∞ exp[−R/g(Z)]dR = g(Z)

F(Z) = −log[g(Z)]g(Z)

g(Z) = −F(Z)/W(n, −F(Z))    (1.5)

W(n, x) is the Lambert W-function of order n, which solves the relation W(n, X)exp[W(n, X)] = X.
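The defining relation of the Lambert W-function can be checked with a few lines of Newton iteration; a self-contained sketch of the principal branch only (production code would use a library routine such as scipy.special.lambertw):

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch W(0, x) via Newton iteration on w*exp(w) - x = 0,
    real-valued exactly for x >= -exp(-1), as the text notes."""
    if x < -math.exp(-1.0):
        raise ValueError("W(0, x) is real only for x >= -exp(-1)")
    w = math.log(1.0 + x) if x > 0 else x  # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

omega = lambert_w0(1.0)  # the "omega constant", ~0.567143
```

The real-value boundary x = −exp(−1), where the n = 0 and n = −1 branches meet, is exactly the branch point that generates the phase transitions discussed below.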
For n = 0, −1, W(n, x) is real-valued, respectively, over the ranges x > −exp[−1] and 0 > x > −exp[−1]. These real-value conditions are important, explicitly generating the central phase transitions in cognitive dynamics. We can now define an associated iterated entropy-analog as the Legendre transform of F:

S ≡ −F + ZdF/dZ    (1.6)

We will characterize system dynamics in terms of the first-order Onsager approximation from nonequilibrium thermodynamics (de Groot and Mazur 1984):

dZ/dt ≈ μdS/dZ = f(Z)    (1.7)
f(Z) is the adaptation function of the cognitive system, determining the rate at which it can respond to incoming signals, in a large sense. Here, we will, for the most part, treat it as an exponential, taking dZ/dt = β − αZ, so that Z(t) → β/α, approached at the rate α. Other forms are possible and will constrain g(Z), and hence also the cognition rate from Eq. (1.4). Taking F(g(Z)) as a general expression, and expanding in terms of Eqs. (1.6) and (1.7), produces a second-order ordinary differential equation with a solution in terms of the relation

X(Z) ≡ −C1 Z − Z ∫ (f(Z)/Z) dZ + C2 + ∫ f(Z) dZ    (1.8)

With F = −log[g(Z)]g(Z), explicit solutions are

g(Z) = −X(Z)/W(n, X(Z))

f(Z) = Zd²F(Z)/dZ² = −Z (ln(g(Z)) g″(Z)g(Z) + g″(Z)g(Z) + (g′(Z))²) / g(Z)    (1.9)
X(Z) is from Eq. (1.8), and W is, again, the Lambert W-function of order n. On landscapes of institutional conflict—“battlefields” of one sort or another— friction/adaptation, characterized as dZ/dt = f (Z), is a central driving force, setting the stage for g(Z), subject, however, to the important boundary conditions C1 and C2 from Eq. (1.8). Recall that the zero-order branch of the Lambert W-function W (n, x) is real only for − exp[−1] < x < ∞, and the n = −1 branch is real only for − exp[−1] < x < 0, since these relations drive the phase transition dynamics of institutional cognition under stress.
Fig. 1.3 (a) Signal transduction for cognition rate L(Z) = exp(−1/g(Z)) and (b) efficiency measures L(Z)/Z, taking f (Z) = β − αZ, α = 0.1, 1.0. Z → β/α ≡ Znss . We are thus letting Z vary as the “arousal” index. C1 = −3 for both plots, but C2 = −5, −1, respectively, for α = 1, 0.1. Only for Z in a limited range are the g(Z) functions real-valued, giving the inverted-U for the difficult task over the arousal range. The lighter curve represents a “simple” task. However, sufficient “arousal” will crash any cognitive process. Different expressions for f (Z) produce similar results. Figure 1.3b provides a rationale for breaking a “hard” problem into a set of easier ones
Figure 1.3a examines the cognition rate L = exp[−1/g(Z)] and Fig. 1.3b the efficiency measure L(Z)/Z, with ∂Z/∂t = f (Z) ≡ β − αZ: an “exponential” model. The independent variate, representing “arousal,” is taken as Z = β/α. The dark and light plots have different rate constants, α = 1, 0.1, with C1 = −3 for both and, respectively, C2 = −5, −1. We have chosen parameters to make the inverted-U cognition rate replicate the pattern of Fig. 1.1. Thus a “difficult” problem displays a direct inverted-U dynamic with increasing Z, while a comparatively “easy” problem tops out, or at least begins to decline at a higher level of “arousal.” It is of some interest in this model realization that the “easier” problem, with α = 0.1, shows, in Fig. 1.3b, much higher efficiency than the hard problem. This suggests a foundation for the common observation that it is often wise to segment a difficult task into a series (or, under time constraint, parallel) set of easier ones. Parameter variations across the model produce a considerable variety of patterns. Figure 2d of Trofimova et al. (2017) examines something similar. Other choices for f (Z) produce analogous results. Possible examples include the “Arrhenius” model (Z/α)(log[Z/β])2 , Znss = β, or β − αZ q , Znss = (β/α)1/q , and so on, again for appropriate parameter values. The third line of Eq. (1.9) seems to preclude assigning a simple S-shaped or Gaussian relation to the cognition rate relation L(Z) since the friction/adaptation relation dZ/dt = f (Z) is usually imposed by embedding circumstances and inherent structures mostly beyond the explicit control of a particular cognitive entity, although some tuning/focus capability is likely. Clausewitzian fog and friction—
here, “delay in adaptation”—make all cognitive processes subject to inverted-U behaviors. However, biological systems having undergone a billion or more years of Darwinian and other selection pressures may be more subtle than armies, corporations, or government agencies facing many fewer years of, primarily (but not exclusively), Lamarckian selection.
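The real-value window behind the inverted-U of Fig. 1.3a can be located directly from Eq. (1.8); a sketch using the exponential model f(Z) = β − αZ with the Fig. 1.3a "difficult task" boundary conditions, sweeping the arousal axis β = αZ:

```python
import math

# X(Z) from Eq. (1.8) with f(Z) = beta - alpha*Z: carrying out the two
# integrals gives -Z*(beta*ln Z - alpha*Z) and beta*Z - alpha*Z^2/2.
# We sweep the "arousal" axis by setting beta = alpha*Z, as in Fig. 1.3a;
# C1, C2 are the "difficult task" boundary conditions from the figure.
ALPHA, C1, C2 = 1.0, -3.0, -5.0

def X(z):
    beta = ALPHA * z
    return (-C1 * z
            - z * (beta * math.log(z) - ALPHA * z)   # -Z * integral of f(Z)/Z
            + C2
            + beta * z - 0.5 * ALPHA * z * z)        # + integral of f(Z)

# g(Z) = -X/W(0, X) is real only where X(Z) >= -exp(-1). Scanning the
# arousal axis shows a bounded real-valued window: the inverted-U.
# Outside it g(Z) goes complex and the cognition rate "crashes".
threshold = -math.exp(-1.0)
window = [z / 10.0 for z in range(5, 80) if X(z / 10.0) >= threshold]
```

With these parameters the window runs from roughly Z ≈ 1.1 to Z ≈ 6.3, matching the arousal range over which Fig. 1.4 shows a real-valued temperature-analog.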
1.4.2 Phase Transitions

From the properties of the Lambert W-function, the third line of Eq. (1.5) raises an issue regarding complex numbers for the temperature-analog g(Z), which is, according to the first line of Eq. (1.5), really the partition function. Physical theory links complex number temperatures in the statistical mechanics of quantum systems with both their phase transitions and their time evolution (Wei et al. 2014). Ruelle (1964, Sec. 5) provides another perspective. Fisher (1965) provides a more comprehensive, indeed, the classic, analysis. We wish to examine the “Fisher zeros” of the partition function, focusing on the characteristics of the Lambert W-function of order 0. We next reexamine the sharp inverted-U system of Fig. 1.3a from that perspective. Figure 1.4 looks at both the purely real and purely imaginary values of the temperature-analog/partition function g(Z), β = Zα, α = 1, C1 = −3, C2 = −5. The dark trace is the real-valued mode, the lighter the imaginary. As with physical models, onset of an “imaginary
Fig. 1.4 We reexamine the sharp inverted-U system of Fig. 1.3a, again taking β = Zα, α = 1, C1 = −3, C2 = −5. The figure shows both the real—darker trace—and imaginary values of the temperature-analog g(Z), using the zero-order W-function. The zeros of the imaginary mode map phase change
temperature” indicates onset of phase transition. Compare with figure 6c of Wei et al. (2014). The conclusion, then, is that zeros of the imaginary part of g(Z) can and do represent phase transitions, of a particular sort. Thus, cognitive dynamics should be taken as perhaps strangely similar to, but essentially different from, those of the physical systems that build them.
1.4.3 Distraction: The Effects of “Noise”

Figure 1.3a examines the “inverted-U” pattern of increasing arousal on a difficult problem. Here we study the effect of distraction—noise—on a difficult problem when the cognition rate is initially fixed at a maximum, for Fig. 1.3a, β = 4. In context, a number of researchers, e.g., Van den Broeck et al. (1994, 1997), Horsthemke and Lefever (2006), and others, have explored how “noise” can drive phase transitions in physical systems. It is possible to apply their methods, based on the stability criteria of stochastic differential equations, in the study of “noise”-driven phase change associated with the embodied interaction of a cognitive agent with a real-world landscape of fog, friction, and skilled adversarial intent. Equation (1.9) can be extended to calculation of the nonequilibrium steady state average of the cognition rate L(Z) under stochastic variation in terms of Z by using the Ito Chain Rule (Protter 2005). The “exponential” equation dZ/dt = f(Z) = β − αZ leads to the stochastic differential equation (SDE)

dZt = f(Zt)dt + σZt dWt

dZt = (β − αZt)dt + σZt dWt    (1.10)
The second term imposes a probabilistic “volatility” proportional to Z, as driven at the rate σ. Here, dWt represents Brownian white noise, the standard starting point for such analyses, which can be extended to “colored” noise (Protter 2005). Applying the Ito Chain Rule to Z², the variance of Z is easily calculated from the second form of Eq. (1.10) as

⟨Z²⟩ − ⟨Z⟩² = (β/(α − σ²/2))² − (β/α)²    (1.11)
The variance, then, literally explodes as σ²/2 → α, a result surprisingly independent of β. The rate constant α thus counts in and of itself. Taking Eq. (1.10) as the base SDE, it is possible to use the Ito Chain Rule on various forms of the cognition rate L(Z). The resulting outputs give the mean values of L at nonequilibrium steady state, that is, taking ⟨dLt⟩ = 0. This procedure convolutes the Lambert W-function with σ, α, β, Z = β/α, and the boundary
conditions C1 and C2, and represents a synergism between the effects of fog, delay, and bifurcation phase transitions. The computer algebra result, although at base rather simple, is too long for the LaTeX compiler producing this manuscript, and even for reproduction as an image file. The Ito Chain Rule calculations for L(Z) are actually routine and can easily be automated using a good computer algebra program, here Maple 2020. We focus on the hard problem of Fig. 1.3a, at its peak cognition rate, so that α = 1, β = 4, C1 = −3, C2 = −5. Figure 1.5 applies the implicitplot numerical procedure of Maple 2020 to the relation ⟨dLt⟩ = 0 for the variates Z and σ. The gap near zero in the lower branch of the graph is an artifact of the numerical approximation. These procedures generate an equivalence class in {Z, σ}. Sufficient noise/distraction σ smoothly reduces the “effective” Z-value, but shows the possibility of a bifurcation instability at σ > 0, permitting punctuated transition to a very low effective value of Z. For the cognition rate L, the likelihood of such a transition appears to rise well before the onset of variance instability in the adaptation function f(Z) = β − αZ, here expected only at σ > √2, since α = 1. Insight emerges from the real-value limits on the Lambert W-function in the ⟨dLt⟩ = 0 equation. Some manipulation gives

−C1 Z − Z(−αZ + β ln(Z)) + C2 − (1/2)αZ² + βZ > −exp(−1)    (1.12)

Fig. 1.5 Equivalence classes in {Z, σ} defined by the relation ⟨dLt⟩ = 0 for cognition rate, calculated using the Ito Chain Rule for the difficult problem of Fig. 1.3a. Here, β = 4, α = 1.0. As the noise amplitude σ increases, the effective value of Z first declines smoothly, but the likelihood of a bifurcation instability appears to increase for any σ > 0, well below the critical value for the adaptation function at σ > √2. The gap near zero for the lower branch is an artifact of the numerical approximation procedure
For cognitive systems, phase transitions depend critically on the nature of noise influence. “Noise” and time delay “friction” become synergistic with cognitive system phase transitions, resulting in tortuously complex and critically unstable patterns of function and failure. These underlying dynamics become far more complicated when “noise” has a “colored” spectral amplitude. Replacing the exponential Z(t) = (β/α)(1 − exp[−αt]) used above with the logistic Z(t) = β/(1 + exp[−αt]) produces dynamics similar to Figs. 1.4 and 1.5. Analysis of Yerkes–Dodson relations in the presence of noise suggests that all cognitive processes—at any scale or level of organization, aided by machine cognition or not—can be driven to failure by sufficient “noise,” often—indeed, usually—in unexpected and highly punctuated ways. These results have severe implications for embodied cognitive enterprises burdened with uncertainty, delay, imprecision-in-action, rapid environmental change, and adversarial intent. They suggest why amateur strategists, typically political figures closely burdened with crony sycophants, are so prone to failure. Professional public health departments, like professional military command staffs, are less subject to levels of distraction and noise from problems tangential to their central duties. That is, they operate on the initial smooth degradation section of Fig. 1.5, before onset of instability. Even Stalin finally learned to listen to the generals.
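The noise-driven dynamics of Eqs. (1.10) and (1.11) can be illustrated with a minimal Euler–Maruyama simulation; a sketch with illustrative noise levels, using the Fig. 1.5 parameters β = 4, α = 1:

```python
import math
import random

def simulate(beta=4.0, alpha=1.0, sigma=0.3, z0=4.0,
             dt=0.01, steps=20000, seed=42):
    """Euler-Maruyama integration of dZ = (beta - alpha*Z)dt + sigma*Z*dW.
    Parameter values follow Fig. 1.5; the noise level is illustrative."""
    rng = random.Random(seed)
    z, path = z0, []
    sqdt = math.sqrt(dt)
    for _ in range(steps):
        dw = rng.gauss(0.0, 1.0) * sqdt
        z += (beta - alpha * z) * dt + sigma * z * dw
        path.append(z)
    return path

path = simulate()
mean_z = sum(path) / len(path)  # hovers near beta/alpha = 4

def nss_variance(beta, alpha, sigma):
    """Closed-form nss variance as printed in Eq. (1.11); it blows up as
    sigma^2/2 -> alpha, independent of beta."""
    return (beta / (alpha - sigma**2 / 2.0))**2 - (beta / alpha)**2
```

At small σ the trajectory fluctuates tightly around the deterministic steady state; pushing σ toward √2 (for α = 1) sends the closed-form variance toward infinity, the "rate constant α counts in and of itself" effect noted above.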
1.5 Strategic Wickedness and Institutional Cognition

The world changes when matters move beyond the tactical/operational realm, which often yields to engineering solutions, and into the realm of the strategic information source, i.e., the “meaning quality,” in some sense, of the message sent by the sequence of tactical exercises. There, it is usually not possible to use “big data” to estimate the gap between intent and performance, if only because an adversary or disordered environment may mask both responses and intentions, in the further context of billowing fog and friction. More specifically, “statements” constructed from the engineering alphabet of tactics, and sent along the operational channel, must have meaning in the context of dialog with a skilled and dedicated adversary, or with a rapidly evolving environmental challenge having its own regularities of grammar and syntax, representing an interacting information source. This is, as Colin S. Gray (2015, 2018) argues at length, a different world. How is it possible to even begin modeling such a scenario, in which path-dependent historical trajectory, culture, cognition, fog, friction, adversarial action, and environmental characteristics become inextricably intertwined? In short, we are driven into realms recognizably similar to the conundrums that confront consciousness studies (e.g., Wallace 2012). Such matters can be addressed, but the complexities of “solutions,” in the sense of Rittel and Webber (1973), reflect the wickedness of the problems to be solved.
1.5.1 The Basic Idea

Institutions in conflict are cognitive hierarchical structures composed of cross-talking cognitive submodules ranging across individual human and/or machine entities, and through larger “workgroups,” in the most general sense. At every scale and level of organization, both individual entities and compounded workgroups are constrained by their own experience and training and by the larger culture in which they are embedded and with which they interact. They are further constrained by the “vote” of the environment in which they are embedded, including—but not limited to—the cognitive intent of adversaries and the regularities of grammar and syntax imposed by the embedding environment. There are further “structured uncertainties” imposed by the “large deviations” possible within the overall embedding environment, including the behaviors of adversaries who may be constrained by quite different historical trajectories and cultural imperatives. We assume four primary interacting factors:

• Cognition requires choice that reduces uncertainty and implies the existence of an information source “dual” to that cognition at each scale and level of organization (Atlan and Cohen 1998; Wallace 2005, 2012, 2017, 2020a,b). The argument is—again—quite direct.
• Embedding culture is itself an information source, having recognizable analogs to grammar and syntax. That is, within a culture, under specified circumstances, certain sequences of behavior are highly probable, and others have vanishingly small probability. This is a sufficient condition for our formalism (Khinchin 1957).
• Geographies, in a large sense that may include the social, on which contention takes place, are similarly structured so as to have sequences of very high and very low probability: night follows day, summer’s dirt roads are followed by October’s impassable mud streams, as the Wehrmacht found outside of Moscow, and so on.
• “Large deviations,” in the sense of Champagnat et al.
(2006) and Dembo and Zeitouni (1998), are not entirely random, but follow high probability developmental pathways governed by “entropy” laws that we take to imply the existence of another information source. The dynamics of conflict between institutions, or between a single institution and an embedding environment, can then be characterized by a joint information source uncertainty H ({Xi }, V , LD )
(1.13)
The set {Xi } includes the cognitive and embedding cultural information sources of hierarchical system of interest, V is the information source of the embedding environment, that may include the actions and intents of adversaries, as well as
R. Wallace
"weather." LD is the information source of the associated large deviations possible to the system.

As before, we project the spectrum of resources, in a large sense that includes internal bandwidth, rates of intelligence information, and of material/personnel supply, down to a scalar rate Z. As above, we build an iterated "free energy" Morse Function from a Boltzmann pseudoprobability expression, enumerating the high probability developmental pathways available to the system, j = 1, 2, . . ., as

Pj = exp[−Hj/g(Z)] / Σ_k exp[−Hk/g(Z)]   (1.14)
Again, a temperature-analog g(Z) must be calculated from first-order Onsager system dynamics built from the partition function, the denominator of Eq. (1.14). The cognition rate of the system of interest can be calculated in terms of the probabilities such that Hj > H0, where H0 is the Data Rate Theorem limit. We provide an explicit, if simple, example below. The iterated free energy Morse Function becomes

exp[−F/g(Z)] ≡ Σ_k exp[−Hk/g(Z)]   (1.15)
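As a purely illustrative sketch (the pathway uncertainties H and the temperature-analog value g below are arbitrary assumptions, not values from the text), the Boltzmann pseudoprobabilities of Eq. (1.14) and the free energy of Eq. (1.15) can be computed directly:

```python
import math

def pseudoprobabilities(H, g):
    """Eq. (1.14): P_j = exp(-H_j/g) / sum_k exp(-H_k/g)."""
    weights = [math.exp(-Hj / g) for Hj in H]
    partition = sum(weights)          # denominator of Eq. (1.14)
    return [w / partition for w in weights], partition

def free_energy(H, g):
    """Eq. (1.15): exp(-F/g) = sum_k exp(-H_k/g), so F = -g log(partition)."""
    _, partition = pseudoprobabilities(H, g)
    return -g * math.log(partition)

# arbitrary source-uncertainty rates for three high probability pathways
H = [1.0, 1.5, 3.0]
P, _ = pseudoprobabilities(H, g=0.5)
assert abs(sum(P) - 1.0) < 1e-12      # pseudoprobabilities normalize
assert P[0] > P[1] > P[2]             # lower H_j means higher weight
assert free_energy(H, 0.5) < min(H)   # F lies below the smallest H_j
```

Lower-uncertainty pathways dominate as g(Z) falls, exactly as in a physical Boltzmann distribution with falling temperature.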
F, as a free energy analog, becomes subject to symmetry-breaking transitions as g(Z) varies (Pettini 2007). But these symmetry changes are quite unlike those associated with simple physical phase transitions represented by standard group structures. By contrast, cognitive phase changes involve shifts between equivalence classes of high probability developmental pathways that must be represented as groupoids, a generalization of groups for which a product is not necessarily defined for every possible element pair (Brown 1992; Cayron 2006; Weinstein 1996). A brief summary of groupoid formalism is given in the chapter Mathematical Appendix. Complicated cognitive interactions, however, may require invoking even more general models, i.e., "small categories" and/or "semigroupoids," for symmetry-breaking dynamics.

Following previous sections, dynamic equations, invoking a first-order Onsager approximation in the gradient of an entropy measure constructed from F, are as follows:

exp[−F/g(Z)] = Σ_k exp[−Hk/g(Z)] ≡ h(g(Z))
F(Z) = −log(h(g(Z))) g(Z)
S(Z) ≡ −F(Z) + Z dF(Z)/dZ
∂Z/∂t ≈ μ dS/dZ = f(Z)   (1.16)
leading to

f(Z) = Z d²F/dZ²
−Z ∫ (f(Z)/Z) dZ − log(h(g(Z))) g(Z) − C1 Z + ∫ f(Z) dZ + C2 = 0   (1.17)
Again, we set f(Z(t)) = β − αZ(t) as above. Other relations are, again, possible for the adaptation function. Here, in contrast with the sections above, F = −log(h(g(Z))) g(Z), rather than F = −log(g(Z)) g(Z). Examining Eq. (1.16), it is clear that specification of any two of the functions f, g, h will permit calculation of the third. However, h is fixed by the internal structure of the larger system, and f is imposed by fog, friction, "enemy intent," and the regularities of the embedding environment. In the same fashion, the "boundary conditions" C1, C2 are externally imposed, further sculpting the dynamic properties of the "temperature" g(Z).
1.5.2 Two Examples

We begin by examining a two-state system under Data Rate Theorem constraints, that is, requiring some minimum source uncertainty rate H0 for stability. The high probability developmental pathways are assumed to be broken into two sets, each of size N, having, respectively, H± = H0 ± δ. Some calculation produces

exp[−F/g(Z)] = N exp[−(H0 + δ)/g(Z)] + N exp[−(H0 − δ)/g(Z)]
             = N exp[−H0/g(Z)] (exp[−δ/g(Z)] + exp[δ/g(Z)])
             = 2N exp[−H0/g(Z)] cosh[δ/g(Z)]

L(Z) = N exp[−(H0 + δ)/g(Z)] / (N exp[−(H0 + δ)/g(Z)] + N exp[−(H0 − δ)/g(Z)])
     = 1/(exp[2δ/g(Z)] + 1)   (1.18)
Here, L(Z) is the cognition rate, using the argument leading to Fig. 1.4. Again, S ≡ −F + Z dF/dZ, and ∂Z/∂t ≈ μ dS/dZ = f(Z). Approximating the resulting equation to third order in δ gives

−Z ln(2N) d²g(Z)/dZ² + Z ( (1/2)(d²g(Z)/dZ²)/g(Z)² − (dg(Z)/dZ)²/g(Z)³ ) δ² ≈ f(Z)   (1.19)
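The two-level algebra above is easy to verify numerically. A minimal check (N, H0, δ, and the g values are arbitrary illustrative choices; in the model proper, g(Z) comes from solving Eq. (1.19)):

```python
import math

def two_level_partition(N, H0, delta, g):
    """Explicit two-set sum of Eq. (1.18) and its cosh collapse."""
    s = N * math.exp(-(H0 + delta) / g) + N * math.exp(-(H0 - delta) / g)
    s_cosh = 2.0 * N * math.exp(-H0 / g) * math.cosh(delta / g)
    return s, s_cosh

def cognition_rate(N, H0, delta, g):
    """Probability weight of the pathways with H = H0 + delta > H0."""
    s, _ = two_level_partition(N, H0, delta, g)
    return N * math.exp(-(H0 + delta) / g) / s

N, H0, delta = 100, 1.0, 0.1
for g in (0.2, 0.5, 1.0):
    s, s_cosh = two_level_partition(N, H0, delta, g)
    assert abs(s - s_cosh) < 1e-9 * s                    # cosh identity holds
    closed = 1.0 / (math.exp(2.0 * delta / g) + 1.0)     # closed form of L(Z)
    assert abs(cognition_rate(N, H0, delta, g) - closed) < 1e-12
```

Note that N cancels out of the cognition rate, which depends only on the ratio 2δ/g(Z), a logistic form.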
Fig. 1.6 Cognition rate L vs. Z for the two-level cognitive model using the second solution form. The relevant boundary conditions are C1 = 2, C2 = 1, with δ = 0.1. While this does not replicate the "inverted-U" of previous work, it does suggest a reasonable cognitive dynamic under increasing arousal, i.e., a kind of punctuated stochastic resonance in which rising noise first improves the rate of cognition, and then triggers full collapse
having two approximate solutions differing by an essential sign. See the Mathematical Appendix for details. Again, with f(Z) = β − αZ, this expression can be explicitly solved for g(Z).

We use the second solution form for Eq. (1.19) from the Mathematical Appendix. Figure 1.6 shows the cognition rate vs. "arousal" Z, taking α = 1, β = αZ, N = 100, δ = 0.1, with the boundary conditions C1 = 2, C2 = 1. While this does not replicate the "inverted-U," it nonetheless suggests a reasonable cognitive dynamic under increasing arousal.

In Fig. 1.7 we carry out a stochastic analysis, fixing the system at Z = 5. Here, α = 1, β = αZ = 5, N = 100, δ = 0.1, C1 = 2, C2 = 1. We apply the Ito Chain Rule to the cognition rate, calculating the equivalence class {Z, σ} for the relation ⟨dL⟩ = 0. Very low levels of σ, in this example, drive the system to punctuated failure, i.e., Z less than ≈ 3.3, representing collapse of the cognition rate to zero. The initial state of Fig. 1.7, set at Z = 5, as σ increases, first climbs up the curve of Fig. 1.6, so that the rate of cognition increases, in a form of stochastic resonance, until, in a highly punctuated manner, cognition collapses entirely.

A fundamentally different example recovers something of the previous section. Suppose there is, rather than the on-off situation involving 2N denumerable states as above, a continuum of states having H-values below and above the DRT limit H0. The partition function relation leading to a "free energy" Morse Function F becomes

exp[−F/g(Z)] = ∫_{−H0}^{∞} exp[−(H0 + x)/g(Z)] dx = g(Z)   (1.20)
Fig. 1.7 Stochastic stability analysis for Fig. 1.6, using the Ito Chain Rule. We find the equivalence classes {Z, σ} corresponding to the relation ⟨dL⟩ = 0. Here, α = 1, β = αZ = 5, N = 100, δ = 0.1, C1 = 2, C2 = 1. For this example, low levels of "noise" drive the system from full function to highly punctuated cognitive collapse
The cognition rate becomes

L(Z) = ∫_{H0}^{∞} exp[−x/g(Z)] dx / ∫_{−H0}^{∞} exp[−(H0 + x)/g(Z)] dx = exp[−H0/g(Z)]   (1.21)
leading back to Eqs. (1.4)–(1.9) and parallel results. Much depends, then, on the exact form of the underlying cognitive structure. The appendix to Wallace (2021) explores other examples.
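Both closed forms can be confirmed by direct quadrature. In this sketch, H0, g, the upper cutoff, and the midpoint rule are all illustrative assumptions standing in for the exact integrals:

```python
import math

def midpoint(f, a, b, n=100000):
    """Midpoint-rule quadrature, standing in for the closed-form integrals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

H0, g = 1.0, 0.7            # arbitrary illustrative values
B = 60.0                    # large cutoff approximating the upper limit of infinity

# Eq. (1.20): the partition integral collapses to g(Z)
Zpart = midpoint(lambda x: math.exp(-(H0 + x) / g), -H0, B)
assert abs(Zpart - g) < 1e-3

# Eq. (1.21): the cognition rate is exp(-H0/g(Z))
num = midpoint(lambda x: math.exp(-x / g), H0, B)
assert abs(num / Zpart - math.exp(-H0 / g)) < 1e-3
```

The continuum model thus hands back exactly the exponential cognition-rate form of the earlier sections, as the text states.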
1.6 The Tyranny of Time and Resources

Colin S. Gray's magnum opus, Theory of Strategy (2018), devotes some 15 of its 147 text pages to the theme of time. He observes that a general theory of strategy can and should advise on the prudent use of time: even more than physical geography, time is a source of discipline over which those charged with the strategic well-being of the state can exercise no control, and it is for them a permanent reality. Gray argues that, although time is always a player in strategic affairs, its relevance differs markedly among the distinctive characters of military activity of different kinds and intensity.

In point of fact, agents of conflict do indeed attempt to ride the tiger of time by imposing the functional equivalent of "anytime algorithms" on their patterns of response. Zilberstein (1996) describes how anytime algorithms give intelligent
systems the capability to trade deliberation time for quality of results. This capability, he argues, is essential for successful operation in domains such as signal interpretation, real-time diagnosis and repair, and mobile robot control, since what characterizes these domains is that it is not feasible (computationally) or desirable (economically) to compute the optimal answer. The essential point is to find a "good enough" solution in the time available, given inherent limitations on available resources, the Z(t) above.

Here, as a simple example, we explore the dynamics of a network constructed of "tactical," "operational," and "command" agents j = 1 . . . n whose collective "quality of response at time t and at resource rate Zj(t)" we write as Q(Z1(T1), . . . , Zn(Tn)), where the Tj are the time periods permitted to each level for "good enough" response. We assume a linear chain system, operating under time and resource constraints such that T = Σ_j Tj and Z = Σ_j Zj. We assume, further, that it is possible to express the rate of change of the Zj as functions of the Zj, i.e., that it is possible to write expressions of the form dZj/dt = fj(Zj(t)). We then seek to conduct a Lagrangian optimization across the quality measure as

L = Q(Z1(T1), . . . , Zn(Tn)) + λ(T − Σ_j Tj) + μ(Z − Σ_j Zj)
∂L/∂Tj = ∂Q/∂Tj − λ = (∂Q/∂Zj) × dZj/dTj − λ = 0
∂L/∂Zj = ∂Q/∂Zj − μ = 0   (1.22)

so that, since dZj/dTj = fj(Zj),

λ/μ = fj(Zj(Tj))
Zj(Tj) = fj⁻¹(λ/μ)   (1.23)
which may be solved for the Zj(Tj) as inverse functions of the ratio λ/μ, recalling that the "undetermined multipliers" λ and μ are interpreted in economic theory (e.g., Jin et al. 2008) as shadow prices imposed on a system by "environmental externalities." For economic systems, these might be material or financial scarcities, or tax and regulatory strictures. In war, the mud of autumn rains immobilizes the Panzer divisions outside of Moscow, compounded by enemy-driven and other relentless attritions. For the last three large-scale invasions of Russia, too many of the horses pulling supply carts died: Charles XII, Napoleon, and the Wehrmacht were not as competent at strategy, intelligence, and logistics as, perhaps, the situation demanded.

Typically, the Zj(T) will be strictly increasing monotonic functions (S-shaped or inverse-J shaped) having a strictly decreasing derivative f(Z(T)), so that a limit is approached. By a standard result, if f(Z) = q is strictly decreasing
Fig. 1.8 Tanh, Exponential, and Arrhenius models for the adaptation function Z(T ). Detailed calculation shows significantly different shadow price ratio dependence for these apparently similar dynamic patterns
monotonic, then f⁻¹(q) is also, meaning that an increasing shadow price ratio λ/μ causes monotonic decline in the constrained functions Zj(Tj) defined in Eq. (1.23). There are, however, possible subtleties. Three examples.

Figure 1.8 shows Tanh, Exponential, and Arrhenius models for Z(T), so that Z(T) = √(β/α) tanh(√(αβ) T), Z(T) = (β/α)(1 − exp[−αT]), and Z(T) = β exp[−α/T]. These are monotonic increasing functions with monotonic decreasing derivatives: dZ/dT = β − αZ(T)², dZ/dT = β − αZ(T), and dZ/dT = (Z(T)/α)(log[Z(T)/β])². Carrying through the calculation of Eq. (1.23) gives

Zj(Tj) = √((βj − λ/μ)/αj)
Zj(Tj) = βj/αj − (1/αj)(λ/μ)   (1.24)

for the Tanh and Exponential models. As expected, both expressions decline monotonically with increasing λ/μ, as would any system for which dZ/dT = β − αZ(T)^γ, β, α, γ > 0. For the Arrhenius model, however,

Zj(Tj) = (αλ/4μ) W(0, −(1/2)√(αλ/βμ))⁻²   (1.25)
where W(0, x) is the Lambert W-function of order 0. Recall that this transcendental function is real-valued only for orders n = 0, −1, and only over limited ranges in x. This solution (one of three possible having real-valued ranges) initially behaves like Eq. (1.24), first declining monotonically but nonlinearly with increasing λ/μ, then undergoing a shift from real to complex values at sufficiently large shadow price ratio, representing the onset of instability.

With all three models, if the shadow price ratio is sufficiently large, one or more of the scalar essential resource rates will fall below its critical level, or, for the Arrhenius model, become unstable, triggering full system collapse. Again, similar "S" or "inverted-J" expressions for the adaptation function Z(T), having dZ/dT monotonic decreasing, like the Logistic, show analogous patterns of monotonic decline with increasing shadow price ratio under optimization constraint.
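The shadow-price inversions of Eqs. (1.23)–(1.25) can be checked for self-consistency. In the sketch below, all parameter values are arbitrary, and the Lambert W-function is evaluated by a simple Newton iteration rather than a library routine; the real-valuedness test x ≥ −1/e reproduces the onset of instability noted above:

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch W(0, x) by Newton iteration; real only for x >= -1/e."""
    if x < -1.0 / math.e:
        raise ValueError("argument below -1/e: W is complex (onset of instability)")
    w = 0.0
    for _ in range(200):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

alpha, beta = 1.0, 5.0                     # arbitrary adaptation parameters

def Z_exp(ratio):
    """Exponential model, Eq. (1.24): dZ/dT = beta - alpha*Z."""
    return beta / alpha - ratio / alpha

def Z_arrhenius(ratio):
    """Arrhenius model: invert (Z/alpha) * log(Z/beta)**2 = lambda/mu via W(0, .)."""
    v = lambert_w0(-0.5 * math.sqrt(alpha * ratio / beta))
    return beta * math.exp(2.0 * v)

# rising shadow price ratio lambda/mu depresses every constrained Z_j
assert Z_exp(2.0) < Z_exp(0.5)
assert Z_arrhenius(1.0) < Z_arrhenius(0.5)
# the inversion really solves the Arrhenius first-order condition
Z = Z_arrhenius(0.5)
assert abs((Z / alpha) * math.log(Z / beta) ** 2 - 0.5) < 1e-9
```

The Arrhenius inversion is written here as Z = β exp[2 W(0, −(1/2)√(αλ/βμ))], an algebraically equivalent form of Eq. (1.25) obtained from the defining identity W e^W = x.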
1.7 Discussion

Cognitive entities at and across all scales and levels of organization, from the cellular to the institutional, that act in, and are acted on by, the real world in real time, are inherently embodied and inherently unstable. We argue that they are characterized by complex temperature analogs driving dynamics far more subtle than those familiar from physical theory. "Temperature" may itself become an order parameter.

Embodied cognition necessarily embraces two worlds, the cognitive and the physical. We experience it from our own behaviors, but its dynamics are much different from the "simple" physical world of rain, snow, and steam, solid rock and flowing lava, night and day, and so on. Embodied cognition, to survive, must navigate both within and between the worlds of cognition and physical existence, as modulated, among humans, by powerful embedding cultures and the burdens of history.

Any such navigation, beyond the most elementary see-and-grasp, is often, perhaps usually, wicked in the sense of Rittel and Webber (1973), both in terms of the "simple" necessity of balancing multiple competing interests, and in terms of active conflict/conversation with similarly cognitive adversaries or with unfriendly environments or evolutionary phenomena having their own patterns of "grammar" and "syntax." In essence, a contending entity's strategic cognitive information source must compose a sufficiently relevant message to be sent along operational channels using a basic, and usually rigid, tactical alphabet. The art is in the writing of a message that will have a desired effect.

Surprisingly, there almost always emerges some set of generalized Yerkes–Dodson relations, necessarily marking cognitive process under real-world uncertainties, delays, and imprecisions: what Western military science calls Clausewitzian fog and friction (e.g., Watts 2004).
Difficult tactical/operational problems, where one has available measures of optimization (the hand reaches, the eye measures the gap), seem addressable by engineering methods and variants of control theory. However, the most complex wicked problems confronting institutional
strategic decision-making are clearly mired in conundrums similar to those that confront current theories of individual consciousness (e.g., Wallace 2005, 2012, 2022). Indeed, Wallace (2022) argues at some length that individual consciousness is a grossly simplified, stripped-down version of such "multiple global workspace" phenomena as gene expression, immune function, and institutional cognition. That is, trading down almost everything for a 100 ms time constant, individual consciousness can maintain at most a single "tunable spotlight" entraining multiple cognitive submodules, while these other phenomena, acting over much longer characteristic time spans, can entertain several such multicomponent spotlights simultaneously. Think of an experienced, highly trained, well-equipped squad during combat.

Such ruminations are, perhaps, not entirely unexpected to those who have grappled with such matters. After all, organized conflict is an intensely human enterprise, at all scales and levels of organization, expressing the full spectrum of culturally driven and constrained expressions of human consciousness. Contention with an unfriendly environment, having its own regularities of time, space, and manner, may be no less difficult. While institutions are not conscious, a term restricted to 100 ms time constants in neural systems, many of the "tunable global workspace" dynamics described in consciousness theory (Wallace 2005, 2012, 2022) appear to generalize through the crosstalk linking both institutional work groups and "voting" adversaries whose actions may include explicit responses to evolutionary selection pressures.
Straightforward analysis suggests that some extended version of the Yerkes–Dodson relations will affect and afflict all embodied cognitive phenomena, from cellular signal transduction through institutional conflict and collapse, and the forthcoming, manifold bizarre misfortunes that will be inflicted upon us by the successful marketing of culturally embedded artificial intelligence for the control of real-time critical systems.

At this writing, the failure of public health in the face of the COVID-19 pandemic provides a canonical example of confrontation with an "unfriendly environment" responding to selection pressures, policy-driven affordances, and "population susceptibilities" (e.g., Telenti et al. 2021) instantiated by marginal or marginalized communities. The physical and the embodied cognitive environments mesh and clash in a dynamic syncretism embedded within the powerful determinants of culture and path-dependent historical trajectory. Given sufficient "arousal," we conclude that something like Figs. 1.3 through 1.7 must emerge for any and all embodied cognitive processes, "enhanced" by artificial intelligence or not. The formal descriptions of such syncretisms developed here may provide novel insights into these phenomena, and possibly suggest tools for remediation and amelioration of pathologies.

The modeling exercises of this chapter suggest (but fail to prove, as must all such mathematical models of complex social and biological phenomena) that embodied cognition will collapse under sufficient burdens of fog, friction, adversarial intent, and like environmental burdens or shadow prices. This is an empirical question subject to experimental and observational exploration. While the inevitability of such failure can be viewed as a painfully "obvious" conclusion, it
has been reached here by formal methods that can be converted to useful statistical instruments for the analysis of real-time data. That work remains to be done, but might both extend our understanding of wicked strategic problems, and permit added, if necessarily limited, prediction of, and control over, them.
Appendix

The Data Rate Theorem

We take control information as supplied to an inherently unstable system at a rate H. Let R(D) be the Rate Distortion Function from Fig. 1.1 characterizing the relation between system intent and system operational effect. The distortion measure D is a scalar measure of the disjunction between intent and impact of the regulator. We take Rt as the Rate Distortion Function at time t, imposing conditions of noise and volatility so that system dynamics are described by the stochastic differential equation

dRt = f(Rt, t) dt + b Rt dWt   (1.26)

where, as usual, dWt represents Brownian noise, f is an appropriate function, and b a parameter. We take H(Rt, t) to be the incoming rate of control information needed to impose control, and apply the Black–Scholes argument (Wallace 2020b, Sect. 14.4), expanding H in terms of R by using the Ito Chain Rule on Eq. (1.26). At nonequilibrium steady state, "it is not difficult to show" that

Hnss ≈ κ1 R + κ2   (1.27)
for appropriate constants κi. For a Gaussian channel, R = (1/2) log[σ²/D], and, recalling Feynman's (2000) characterization of information as a form of free energy, we can define an entropy as the Legendre transform of R, i.e., S = −R(D) + D dR(D)/dD. This leads to the usual Onsager approximation for the dynamics of the distortion scalar D:

dD/dt ≈ μ dS/dD = μ/(2D(t))
D(t) ≈ √(μt)   (1.28)

In the absence of control, D undergoes a classic diffusion to catastrophe.
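A quick numeric sanity check (μ, the step size, and the horizon are arbitrary illustrative choices): Euler integration of dD/dt = μ/(2D) reproduces the square-root diffusion to catastrophe of Eq. (1.28):

```python
import math

mu, dt, T = 0.8, 1e-4, 5.0
D = 1e-3                                  # small initial distortion
t = 0.0
while t < T:
    D += mu / (2.0 * D) * dt              # Euler step of dD/dt = mu/(2D)
    t += dt
assert abs(D - math.sqrt(mu * T)) < 1e-2  # D(t) tracks sqrt(mu * t)
```

Without the control term M(H) introduced next, nothing arrests this growth of the distortion between regulator intent and regulator impact.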
This correspondence reduction to ordinary diffusion suggests an iterative approximation leading to the stochastic differential equation

dDt = [μ/(2Dt) − M(H)] dt + β Dt dWt   (1.29)

where M(H) is an undetermined function of the rate at which control information is provided, and the last term is a volatility in Brownian noise. The nonequilibrium steady state expectation of Eq. (1.29) is simply

Dnss = μ/(2M(H))   (1.30)

Application of the Ito Chain Rule to Dt² via Eq. (1.29) implies the necessary condition for stability in variance is

M(H) ≥ β√μ   (1.31)

From Eqs. (1.27) and (1.30), however,

M(H) ≥ (μ/(2σ²)) exp[2(H − κ2)/κ1] ≥ β√μ   (1.32)

giving an explicit necessary condition for stability in second order as

H ≥ (κ1/2) log(2βσ²/√μ) + κ2 ≡ H0   (1.33)
Other channel configurations with different explicit algebraic expressions for the RDF will nonetheless have similar results as a consequence of the inherent convexity of the RDF (Cover and Thomas 2006).
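The threshold algebra can be verified directly. In this sketch the constants κ1, κ2, β, σ², and μ are arbitrary illustrative values, and Eq. (1.32) is treated as an equality at the nonequilibrium steady state:

```python
import math

def M_of_H(H, kappa1, kappa2, sigma2, mu):
    """Eq. (1.32) read as an equality: M(H) = (mu/(2 sigma^2)) exp[2(H - kappa2)/kappa1]."""
    return (mu / (2.0 * sigma2)) * math.exp(2.0 * (H - kappa2) / kappa1)

def H0_threshold(kappa1, kappa2, beta, sigma2, mu):
    """Eq. (1.33): H0 = (kappa1/2) log(2 beta sigma^2 / sqrt(mu)) + kappa2."""
    return (kappa1 / 2.0) * math.log(2.0 * beta * sigma2 / math.sqrt(mu)) + kappa2

k1, k2, beta, sigma2, mu = 1.0, 0.3, 2.0, 1.5, 0.8   # arbitrary illustrative constants
H0 = H0_threshold(k1, k2, beta, sigma2, mu)

# at H = H0 the stability condition M(H) >= beta * sqrt(mu) is exactly tight,
# and any H above H0 satisfies it strictly
assert abs(M_of_H(H0, k1, k2, sigma2, mu) - beta * math.sqrt(mu)) < 1e-9
assert M_of_H(H0 + 0.1, k1, k2, sigma2, mu) > beta * math.sqrt(mu)
```

That is, control information supplied at any rate H below H0 cannot stabilize the second moment of the distortion, which is the Data Rate Theorem limit invoked throughout the chapter.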
Groupoids

Following Brown (1992), consider a directed line segment in one component, written as the source on the left and the target on the right:

•  −→  •

Two such arrows can be composed to give a product ab if and only if the target of a is the same as the source of b:

•  −a→  •  −b→  •
As Brown (1992) puts it, One imposes the geometrically obvious notions of associativity, left and right identities, and inverses. Thus a groupoid is often thought of as a group with many identities, and the reason why this is possible is that the product ab is not always defined. We now know that this apparently anodyne relaxation of the rules has profound consequences. . . [since] the algebraic structure of product is here linked to a geometric structure, namely that of arrows with source and target, which mathematicians call a directed graph.
Cayron (2006) describes the underlying conceit as follows: A group defines a structure of actions without explicitly presenting the objects on which these actions are applied. Indeed, the actions of the group G applied to the identity element e implicitly define the objects of the set G by ge = g; in other terms, in a group, actions and objects are two isomorphic entities. A groupoid enlarges the notion of group by explicitly introducing, in addition to the actions, the objects on which the actions are applied. By this approach, many identities may exist (they correspond to the actions that leave an object invariant).
An example of the underlying mechanics by which symmetry changes in general can be expressed is described by Stewart (2017) as follows: Spontaneous symmetry-breaking is a common mechanism for pattern formation in many areas of science. It occurs in a symmetric dynamical system when a solution of the equations has a smaller symmetry group than the equations themselves. . . This typically happens when a fully symmetric solution becomes unstable and branches with less symmetry bifurcate.
It is of particular importance that equivalence class decompositions permit construction of groupoids in a highly natural manner. Indeed, Figs. 1.5 and 1.7 can be seen as representing temperature-induced groupoid symmetry breakings, with the equivalence classes defined in terms of the values of the relations ⟨dLt⟩ = 0. Weinstein (1996) provides more details on groupoids.
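The defining property, that a product ab exists only when the target of a matches the source of b, with an identity at every object, is small enough to state as executable code. The sketch below is illustrative only, not any standard library's groupoid API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Arrow:
    source: str
    target: str
    name: str

def compose(a: Arrow, b: Arrow) -> Optional[Arrow]:
    """Groupoid product ab: defined only when target(a) == source(b)."""
    if a.target != b.source:
        return None                      # the product is simply undefined
    return Arrow(a.source, b.target, a.name + b.name)

a = Arrow("x", "y", "a")
b = Arrow("y", "z", "b")
assert compose(a, b) == Arrow("x", "z", "ab")
assert compose(b, a) is None             # not every pair has a product
# each object carries its own identity arrow, e.g. on x:
id_x = Arrow("x", "x", "")
assert compose(id_x, a) == a
```

The many identities (one per object) are exactly what distinguishes this structure from a group, which has a single identity and a total product.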
Higher Dimensional Systems

We have viewed systems as fully characterized by the single scalar parameter Z, mixing material resource/energy supply with internal and external flows of information. The real world, however, will likely be more complicated, and, invoking techniques akin to Principal Component Analysis, there may be more than one independent composite entity irreducibly driving system dynamics. Thus, it may be
necessary to replace the scalar Z with an n-dimensional vector Z with orthogonal components together accounting for a good portion of the total variance in the rate of supply of essential resources. The dynamic equations are then expressed in vector form as

F(Z) = −log(h(g(Z))) g(Z)
S = −F + Z · ∇Z F
∂Z/∂t ≈ μ̂ · ∇Z S = f(Z)
−∇Z F + ∇Z (Z · ∇Z F) = μ̂⁻¹ · f(Z) ≡ f*(Z)
((∂²F/∂zi∂zj)) · Z = f*(Z)
((∂²F/∂zi∂zj))|Znss · Znss = 0   (1.34)
F, g, h, and S are scalar functions, and μ̂ is an n-dimensional square matrix of diffusion coefficients. The matrix ((∂²F/∂zi∂zj)) is the obvious n-dimensional square matrix of second partial derivatives, and f(Z) is a vector function. The last relation imposes a nonequilibrium steady state condition, i.e., f*(Znss) = 0. The next-to-last line of Eq. (1.34) is the multidimensional generalization of the second line of Eq. (1.9). For the (relatively) simple Rate Distortion approach, h(g(Z)) → g(Z), while, again, we assume Z(t) → Znss.

For n ≥ 2, this is an overdetermined system of partial differential equations (Spencer 1969). For a general f*(Z), the system is indeed inconsistent, resulting in as many as n different expressions for F(Z), and hence the same number of "temperature" measures as determined by the relation F = −log(g)g. This inference should not be surprising. The fifth expression in Eq. (1.34), where f*(Z) = 0, represents, most generally, a multicomponent, cross-interacting, crosstalking, cognitive system that, if acting in real time, will almost always be far indeed from any steady state, equilibrium or nonequilibrium. Such systems will almost always be inherently unstable, requiring constant input of control information that will necessarily lag perturbation. Such a structure should not be characterizable by a single cognitive temperature-analog g. Each cognitive component, if the system is far from a steady state, should be expected to have its own g-value, in addition to structure imposed by the multidimensional nature of Z.
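A finite-difference check of the Hessian identity used in Eq. (1.34), ∇Z(Z · ∇Z F) − ∇Z F = ((∂²F/∂zi∂zj)) · Z, with an arbitrary illustrative scalar F (the function and the evaluation point are assumptions, not from the text):

```python
def F(z):
    """Arbitrary smooth test function of a two-component resource vector."""
    x, y = z
    return x * x * y + 3.0 * y * y

def grad(f, z, h=1e-5):
    """Central-difference gradient."""
    g = []
    for i in range(len(z)):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        g.append((f(zp) - f(zm)) / (2.0 * h))
    return g

def lhs(z):
    """Components of grad(Z . grad F) - grad F."""
    s = lambda w: sum(wi * gi for wi, gi in zip(w, grad(F, w)))
    return [a - b for a, b in zip(grad(s, z), grad(F, z))]

def hessian_dot_z(z, h=1e-4):
    """Components of ((d^2 F / dz_i dz_j)) . Z by nested central differences."""
    rows = []
    for i in range(len(z)):
        gi = lambda w, i=i: grad(F, w)[i]
        rows.append(grad(gi, z, h))
    return [sum(rij * zj for rij, zj in zip(row, z)) for row in rows]

z = [1.0, 2.0]
left, right = lhs(z), hessian_dot_z(z)
assert all(abs(a - b) < 1e-3 for a, b in zip(left, right))
```

The identity is what turns the vector Onsager relation into the linear-in-Z form whose nonequilibrium steady state condition closes Eq. (1.34).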
Solutions to Eq. (1.19)
References

Atlan, H., & Cohen, I. (1998). Immune information, self-organization, and meaning. International Immunology, 10, 711–717.
Bigsby, R., Chapin, R., Daston, G., Davis, B., Gorski, J., Gray, L., Howdeshell, K., Zoeller, R., & Vom Saal, F. (1999). Evaluating the effects of endocrine disruptors on endocrine function during development. Environmental Health Perspectives, 107(Suppl. 4), 613–618.
Brown, R. (1992). Out of line. Royal Institute Proceedings, 64, 207–243.
Cayron, C. (2006). Groupoid of orientational variants. Acta Crystallographica Section A, A62, 21–40.
Champagnat, N., Ferriere, R., & Meleard, S. (2006). Unifying evolutionary dynamics: From individual stochastic process to macroscopic models. Theoretical Population Biology, 69, 297–321.
Conant, R., & Ross Ashby, W. (1970). Every good regulator of a system must be a model of that system. International Journal of Systems Science, 1, 89–97.
Cover, T., & Thomas, J. (2006). Elements of information theory (2nd ed.). Wiley.
de Groot, S., & Mazur, P. (1984). Nonequilibrium thermodynamics. Dover.
Dembo, A., & Zeitouni, O. (1998). Large deviations and applications (2nd ed.). Springer.
Diamond, D., Campbell, A., Park, C., Halonen, J., & Zoladz, P. (2007). The temporal dynamics model of emotional memory processing: a synthesis on the neurobiological basis of stress-induced amnesia, flashbulb and traumatic memories, and the Yerkes-Dodson law. Neural Plasticity. https://doi.org/10.1155/2007/60803
Fauquet-Alekhine, P., Geeraerts, T., & Rouillac, L. (2014). Characterization of anesthetists' behavior during simulation training: performance versus stress achieving medical tasks with or without physical effort. Psychology and Social Behavior Research, 2, 20–28.
Feynman, R. (2000). Lectures on computation. Westview Press.
Field, A. (2014). Schelling, von Neumann, and the event that didn't occur. Games, 5, 53–89. https://doi.org/10.3390/g5010053
Fisher, M. E. (1965). In: Brittin, W. E. (Ed.), Lectures in theoretical physics. University of Colorado Press.
French, V., Anderson, E., Putman, G., & Alvager, T. (1999). The Yerkes-Dodson law simulated with an artificial neural network. Cognitive Systems, 5, 136–147.
Gray, C. S. (2015). Tactical operations for strategic effect: The challenge of currency conversion. JSOU Press.
Gray, C. S. (2018). Theory of strategy. Oxford University Press.
Head, B. (2019). Forty years of wicked problems literature: forging closer links to policy studies. Policy and Society, 38, 180–197.
Horsthemke, W., & Lefever, R. (2006). Noise-induced transitions: Theory and applications in physics, chemistry, and biology (Vol. 15). Springer.
Jin, H., Hu, Z., & Zhou, X. (2008). A convex stochastic optimization problem arising from portfolio selection. Mathematical Finance, 18, 171–183.
Khinchin, A. (1957). Mathematical foundations of information theory. Dover.
Krepinevich, A., & Watts, B. (2009). Retaining strategic competence. Center for Strategic and Budgetary Assessment.
Laidler, K. (1987). Chemical kinetics (3rd ed.). Harper and Row.
Manheim, D. (2019). Multiparty dynamics and failure modes for machine learning and artificial intelligence. Big Data and Cognitive Computing, 3. https://doi.org/10.3390/bdcc3020021
Maturana, H., & Varela, F. (1980). Autopoiesis and cognition. Reidel Publishing Company.
McClintock, P., & Luchinsky, D. (1999). Glorious noise. The New Scientist, 161(2168), 36–39.
McGann, M., De Jaegher, H., & Di Paolo, E. (2013). Enaction and psychology. Review of General Psychology, 17, 203–209.
Moore, D. T. (2011). Sensemaking: A structure for an intelligence revolution. Clift Series on the Intelligence Profession. National Defense Intelligence College.
Nair, G., Fagnani, F., Zampieri, S., & Evans, R. (2007). Feedback control under data rate constraints: an overview. Proceedings of the IEEE, 95, 108–137.
Northoff, G., & Tumati, S. (2019). 'Average is good, extremes are bad': Non-linear inverted U-shaped relationship between neural mechanisms and functionality of mental features. Neuroscience and Biobehavioral Reviews, 104, 11–25.
Pettini, M. (2007). Geometry and topology in Hamiltonian dynamics and statistical mechanics. Springer.
Protter, P. (2005). Stochastic integration and differential equations: A new approach (2nd ed.). Springer.
Rittel, H., & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155–169.
Ruelle, D. (1964). Cluster property of the correlation functions of classical gases. Reviews of Modern Physics, 36, 580–584.
Spencer, D. (1969). Overdetermined systems of linear partial differential equations. Bulletin of the American Mathematical Society, 75, 179–239.
Stewart, I. (2017). Spontaneous symmetry-breaking in a network model for quadruped locomotion. International Journal of Bifurcation and Chaos, 14, 1730049 (online).
Strachan, H. (2019). Strategy in theory; strategy in practice. Journal of Strategic Studies, 42, 171–190.
Telenti, A., et al. (2021). After the pandemic: perspectives on the future trajectory of COVID-19. Nature, 596, 495–504.
Trofimova, I., Robbins, T., Sulis, W., & Uher, J. (2017). Taxonomies of psychological individual differences: biological perspectives on millennia-long challenges. Philosophical Transactions B, 373, 20170152.
Van den Broeck, C., Parrondo, J., & Toral, R. (1994). Noise-induced nonequilibrium phase transition. Physical Review Letters, 73, 3395–3398.
Van den Broeck, C., Parrondo, J., Toral, R., & Kawai, R. (1997). Nonequilibrium phase transitions induced by multiplicative noise. Physical Review E, 55, 4084–4094.
Varela, F., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience (6th ed.). MIT Press.
Wallace, D., & Wallace, R. (2021). COVID-19 in New York City: An ecology of race and class oppression. Springer Briefs in Public Health.
Wallace, D., Wallace, R., & Rauh, V. (2003). Community stress, demoralization, and body mass index: evidence for social signal transduction. Social Science and Medicine, 56, 2467–2478.
Wallace, R. (2005). Consciousness: A mathematical treatment of the global neuronal workspace model. Springer.
Wallace, R. (2012). Consciousness, crosstalk, and the mereological fallacy: an evolutionary perspective. Physics of Life Reviews, 9, 426–453.
Wallace, R. (2017). Computational psychiatry: A systems biology approach to the epigenetics of mental disorders. Springer.
Wallace, R. (2020a). On the variety of cognitive temperatures and their symmetry-breaking dynamics. Acta Biotheoretica. https://doi.org/10.1007/s10441-019-09375-7
Wallace, R. (2020b). Cognitive dynamics on Clausewitz landscapes: The control and directed evolution of organized conflict. Springer.
Wallace, R. (2021). How AI founders on adversarial landscapes of fog and friction. Journal of Defense Modeling and Simulation. https://doi.org/10.1177/1548512920962227
Wallace, R. (2022). Consciousness, cognition, and crosstalk: The evolutionary exaptation of nonergodic groupoid symmetry-breaking. Springer Briefs.
Watts, B. (2004). Clausewitzian friction and future war (Revised Edition). Institute for National Strategic Studies, National Defense University.
Wei, B., Chen, S., Po, H., & Liu, R. (2014). Phase transitions in the complex plane of physical parameters. Scientific Reports, 4, 5202. https://doi.org/10.1038/srep05202
Weinstein, A. (1996). Groupoids: unifying internal and external symmetry. Notices of the American Mathematical Society, 43, 744–752.
Wilken, J., Smith, B., Tola, K., & Mann, M. (2000). Trait anxiety and prior exposure to nonstressful stimuli: Effects on psychophysiological arousal and anxiety. International Journal of Psychophysiology, 37, 233–242.
Wilson, J., Nugent, N., Baltes, J., Tokunaga, S., Canic, T., Yourn, B., Bellinger, E., Delac, D., Golston, G., & Henderson, D. (2000). Effects of low doses of caffeine on aggressive behavior of male rats. Psychological Reports, 86, 941–946.
Yerkes, R., & Dodson, J. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18, 459–482.
Zilberstein, S. (1996). Using anytime algorithms in intelligent systems. AI Magazine, 17, 73–83.
Chapter 2
Punctuated Institutional Problem Recognition

Rodrick Wallace
2.1 Introduction

This chapter adapts recent studies on cognition and consciousness in higher animals—restricted by the necessity of rapid action to a single, high-speed, tunable, global workspace "spotlight" (Wallace 2022)—to the analogous problem of punctuated accession to recognition-and-action in multiple, interacting tunable-spotlight systems. These are, perhaps most notably, institutions under conditions of organized conflict on "Clausewitz landscapes" marked by synergisms between resource limits, imprecision, uncertainty, environmental burden, and skilled, deadly, adversarial intent. Public health institutions in the USA currently fit the paradigm, under siege both by an evolutionarily clever pathogen and by political entities at home and abroad prepared to make use of that pathogen for their own purposes.
It is a truism that institutions able to act decisively and rapidly in the face of dynamic patterns of threat and opportunity have selective advantage over individuals facing analogous challenges. From squads having characteristic decision times of seconds to staff organizations planning for decades, the simultaneous, multiple, interacting analogs to individual consciousness characteristic of institutions—provided time and other essential resources are available and the system actually functions—give notable long-term advantage over any individual. To put it another way, in the real world, Rambo is almost always captured or killed, and well-armed teams on elephants usually get the tiger. There are Egyptian frescoes of Pharaoh's hunting party on chariots killing the lion—the iconography of power-in-organization. Only in neoliberal cultural and political fantasies and propaganda does selection operate solely at the individual level and
R. Wallace () The New York State Psychiatric Institute, Bronx, NY, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 R. Wallace (ed.), Essays on Strategy and Public Health, https://doi.org/10.1007/978-3-030-83578-1_2
scale of organization: As our religious traditions repeatedly attempt to instruct us, Greed is not Good.
We first construct some formal scaffolding before proceeding to the dynamics of institutional cognition itself, involving information-theoretic groupoid symmetry-breaking and its control theory analogs.
2.2 Basic Variables

We take an embodied cognitive agent—here an institution—as embedded in, acting on, and acted on by, the kind of adversarial landscapes described by Clausewitz and others (e.g., Gray 2018). Institutions require at least three essential resource streams, indexed here as

• The rate at which information can be transmitted between parts of itself, characterized by an information channel capacity C.
• The rate at which "sensory information"—reliable intelligence—about the embedding environment and the intentions of opponents is available, H.
• The rate at which personnel and material resources can be delivered, M.

These rates, and time, interact, generating a kind of correlation matrix Z of dimension ≥ 3. Any n-dimensional matrix has n scalar invariants—characteristic numbers that remain the same under certain transformations. These invariants can be found from the standard polynomial relation

p(γ) = det[Z − γI] = γ^n − r_1 γ^{n−1} + r_2 γ^{n−2} − ... + (−1)^n r_n   (2.1)

I is the n-dimensional identity matrix, det the determinant, and γ is an arbitrary, real-valued parameter. The first invariant is most often taken to be the matrix trace, and the last as ± the matrix determinant. These invariants can be used to build a single scalar index Z = Z(r_1, ..., r_n). The simplest such would be Z = C × H × M. Scalarization, however, must be appropriate to the system under study at the time of study, and there will almost always be cross-interactions between these rates.
The important point for this analysis is that scalarization permits analysis of a one-dimensional system. Expansion of Z into vector form leads to sometimes difficult multidimensional dynamic equations (Wallace 2020a, Section 7.1). The chapter Mathematical Appendix provides an outline.
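As a concrete numerical sketch—not from the text—the invariants r_1, ..., r_n of Eq. (2.1) are the sums of the k × k principal minors of Z, so that r_1 is the trace and r_n the determinant. The matrix values below are hypothetical:

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion (adequate for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def char_poly_invariants(Z):
    """Invariants r_1, ..., r_n of Eq. (2.1): r_k is the sum of the
    k x k principal minors of Z."""
    n = len(Z)
    def principal_minor(idx):
        return det([[Z[i][j] for j in idx] for i in idx])
    return [sum(principal_minor(c) for c in combinations(range(n), k))
            for k in range(1, n + 1)]

# Hypothetical diagonal resource matrix with rates C = 2, H = 3, M = 4 and no
# cross-interaction: the last invariant is then exactly C * H * M.
r1, r2, r3 = char_poly_invariants([[2, 0, 0], [0, 3, 0], [0, 0, 4]])
```

In this interaction-free special case the simplest scalar index Z = C × H × M coincides with the determinant invariant r_3; off-diagonal crosstalk would change all three invariants together.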
2.3 Control, Information, and Cognition

Institutions, as embodied entities, are constructed from interacting cognitive submodules. These range from individuals through workgroups, and larger hierarchies, within particular cultures and constraining historical trajectories. They are likewise
constrained by the environment in which they operate, including actions and intents of competing and cooperating entities. Further, there is always structured uncertainty imposed by the large deviations possible within the overall embedding environment.
Thus, several factors interact to build a composite information source (Cover and Thomas 2006) representing institutional cognition:

• Institutional cognition—like all cognition—requires choice that reduces uncertainty and implies the existence of an information source formally "dual" to that cognition at each scale and level of organization (Atlan and Cohen 1998). The argument is direct and agnostic regarding representation.
• All cognition requires regulation. As Wallace (2017, Ch. 3) puts it,

Cognition and its regulation... must be viewed as an interacting gestalt, involving not just an atomized individual, but the individual in a rich context... There can be no cognition without regulation, just as there can be no heartbeat without control of blood pressure, and no multicellularity without control of rogue cell cancers. Cognitive streams must be constrained within regulatory riverbanks.
It is here that the Data Rate Theorem (DRT), or a generalization, manifests itself. That is, there must be an embedding regulatory information source imposing control information at a rate greater than the rate at which an inherently unstable cognitive process generates its own "topological information." See the chapter Mathematical Appendix for details regarding the DRT.
• For human institutions, embedding culture is also an information source, with analogs to grammar and syntax. Thus, within a culture, under particular circumstances, some sequences of behavior are highly probable, and others have vanishingly small probability (Khinchin 1957), a sufficient condition for the development of an equivalence-class groupoid symmetry-breaking formalism.
• Spatial and social geographies are similarly structured so as to have incident sequences of very high and very low probability: night follows day, summer's dirt roads are followed by October's impassable mud streams, as the Wehrmacht found outside of Moscow.
• Large deviations, as described by Champagnat et al. (2006) and Dembo and Zeitouni (1998), follow high probability developmental pathways governed by entropy-like laws that imply the existence of another information source.

Embedded and embodied institutional (and other) cognition is then characterized by a joint information source uncertainty (Cover and Thomas 2006) as

H({X_i}, X_V, X_Δ)   (2.2)

The set {X_i} includes the cognitive, regulatory, and embedding cultural information sources of the institution; X_V is the information source of the embedding environment, which will include the actions and intents of adversaries or collaborators, as well as "weather." Finally, X_Δ is the information source of the associated large deviations possible to the system.
Crosstalk between coresident information channels and sources is almost inevitable, a consequence of the information chain rule (Cover and Thomas 2006). For a set of interacting stationary ergodic information sources X_i, i = 1, 2, ..., the joint uncertainty of interacting sources and channels is always less than or equal to the sum of the independent uncertainties (Cover and Thomas 2006):

H(X_1, X_2, ...) ≤ Σ_i H(X_i)   (2.3)
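The subadditivity of Eq. (2.3) can be checked directly for a toy pair of crosstalk-coupled binary sources; the joint distribution below is hypothetical, chosen so that the two sources tend to agree:

```python
from math import log2

def H(dist):
    """Shannon uncertainty (bits) of a probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Hypothetical joint distribution of two coupled binary sources: agreement
# ((0,0) and (1,1)) is far more likely than independence would allow.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

H_joint = H(joint.values())
H_X1 = H([joint[0, 0] + joint[0, 1], joint[1, 0] + joint[1, 1]])  # marginal of X1
H_X2 = H([joint[0, 0] + joint[1, 0], joint[0, 1] + joint[1, 1]])  # marginal of X2
```

Here H_joint ≈ 1.72 bits while H(X_1) + H(X_2) = 2 bits: the roughly 0.28-bit shortfall is exactly the crosstalk (mutual information) that, per the text, it would cost free energy to suppress.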
Each information source Xi is powered by a corresponding free energy source Mi , and it takes more free energy to isolate information sources and channels than to allow their interaction. Such a “second law” conundrum confounds much of electrical engineering, particularly in the design and construction of microchips. Such “second law” problems extend to all scales and levels of organization. At the level of the biology of organisms, evolutionary process has taken this “spandrel,” in the sense of Gould and Lewontin (1979), and built whole new cathedrals from it (Wallace 2012). By contrast, at the level of institutions engaged in organized conflict, the internal crosstalk spandrel is pretty much the whole cathedral right from the beginning, so long as you can keep the logistical machine working. One is reminded of the films of Operation Barbarossa, showing Panzer divisions racing forward, followed by long lines of horse-drawn supply carts and marching infantry. Some things are written in stone. The next steps are somewhat subtle. According to the Ergodic Decomposition Theorem (e.g., Gray 1988, Ch. 7)— which states that it is possible to factor any nonergodic process into a sufficiently large sum (or generalized integral) of ergodic processes—there is little to be learned from nonergodic information sources. This is not, really, a “simple” result for dynamic systems that can suffer “absorbing states.” Individual paths—and small, closely related, equivalence classes of them—are particularly important for institutions engaged in conflict because each path may have a unique consequence: Each battle, each campaign, each pattern of “contact” initiated by an adversary can be part of a “meaningful sequence” associated with strategic stalemate or collapse, as the USA learned in Korea, Vietnam, Iraq, and Afghanistan. And as the Wehrmacht learned on the Eastern Front, in spite of truly impressive and persistent tactical superiority. 
It is possible to approximate any reasonably well-behaved real-valued function over a fixed interval in terms of a Fourier series. It was, in the geocentric Ptolemaic system of the heavens, via a sufficient number of nested epicycles, possible to predict planetary positions to any desired accuracy using such a de facto Fourier Decomposition. The underlying astronomical problem was both considerably simplified and greatly enhanced by the non-geocentric empirical observations of Kepler, as explained by Newton and elaborated by Einstein. Institutional cognition is considerably more complex than the motion of the planets around the sun. In this work, we expand the development of Wallace (2018), deriving from first principles—rather than imposing—a temperature analog for nonergodic cognitive systems.
We reduced the spectrum of resources and their interactions—including internal bandwidth, rates of sensory information, and material/energy supply—to a scalar rate variable Z. To explore dynamic process, we introduce a first-order linear Onsager approximation analogous to that of nonequilibrium thermodynamics (de Groot and Mazur 1984). Ultimately, we construct an iterated free energy Morse Function (Pettini 2007) via a formalized Boltzmann probability expression, in the sense of Feynman (2000). See the Mathematical Appendix for an outline of Morse Theory.
The construction is via enumeration of high probability developmental pathways available to the system as j = 1, 2, .... This permits definition of a path probability P_j as

P_j = exp[−H_j/g(Z)] / Σ_k exp[−H_k/g(Z)]   (2.4)
This formulation, following Khinchin (1957) and Wallace (2018), applies to any stationary process, ergodic or not. In particular, it can be used for systems in which each developmental pathway x_j has its own source uncertainty measure H_{x_j}. H is only expressed as a Shannon "entropy" for an ergodic system (Khinchin 1957). Here, we will be concerned with its scalar value, not its functional form. This is a significant reorientation from conventional information theory, and, via a Morse Function, the key to the extension of theory. That is, we do not need the functional form of an H_k, only its scalar magnitude.
Further development, however, involves imposition of an "adiabatic approximation" that assumes a "sufficiently stationary" convergence about slowly varying underlying system dynamics, much as is done in the study of molecular quantum mechanics, where rapidly changing electron dynamics are approximated around slowly changing nuclear configurations. The temperature g(Z), however, is quite another matter, and must be calculated from Onsager-like system dynamics built from the partition function, i.e., from the denominator of Eq. (2.4).
The system's "rate of cognition" can then be expressed, as in chemical reaction theory (Laidler 1987), as the probability that H_j > H_0, where H_0 is the lower limit for detection of a signal under embodiment in a varying and noisy environment, or for stability via the Data Rate Theorem, as described in the Mathematical Appendix.
There is an important matter hidden here. This is the "prime groupoid phase transition": We are implicitly imposing a symmetry-breaking version of the Source Coding Theorem, the Shannon–McMillan Theorem, on the system, dividing all possible paths into a small set—an equivalence class—of "meaningful" sequences consonant with some underlying grammar and syntax—in a large sense—and a very large set of vanishingly small probability paths that are not consonant, and are "of measure zero."
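A minimal numerical sketch of Eq. (2.4) and the associated "rate of cognition" follows; the path uncertainties H_j, the threshold H_0, and the temperature values are all hypothetical illustrations:

```python
from math import exp

def path_probabilities(H_vals, g):
    """Eq. (2.4): P_j = exp(-H_j/g) / sum_k exp(-H_k/g),
    with g = g(Z) the temperature analog."""
    w = [exp(-Hj / g) for Hj in H_vals]
    return [wi / sum(w) for wi in w]

def cognition_rate(H_vals, g, H0):
    """'Rate of cognition': total probability mass on paths with H_j > H_0."""
    return sum(p for p, Hj in zip(path_probabilities(H_vals, g), H_vals) if Hj > H0)

# Hypothetical source-uncertainty values for five developmental pathways,
# and an arbitrary detection threshold H_0:
H_vals, H0 = [0.5, 1.0, 1.5, 2.0, 2.5], 1.2
```

Raising g(Z) shifts probability mass toward high-H_j pathways, so the rate of cognition rises with the temperature analog, which is the qualitative behavior the later sections exploit.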
Such an equivalence class partition imposes the first of many groupoid symmetry-breaking phase transitions. We will discuss groupoid symmetries in more detail below, but, in essence, we have used a groupoid symmetry-breaking phase change to derive—or to impose—a fundamental information theory asymptotic limit theorem. Other such theorems may emerge "naturally" from associated groupoid symmetry-breaking phenomena, all related to equivalence classes of system developmental pathways. This result can be seen as analogous to a much earlier phase transition, the sudden transmission of light across the primordial universe after the first 370,000 years.
In sum, groupoid symmetry-breaking extends the Shannon–McMillan Source Coding Theorem to nonergodic—and possibly non-stationary—information sources. This is a major—if "trivially obvious"—result to which we will return below.
The Mathematical Appendix provides a brief introduction to groupoids (Brown 1992; Cayron 2006; Weinstein 1996). For groupoids, unlike algebraic groups, products are not necessarily defined for all possible pairs of elements, leading to disjoint orbit partition.
The iterated free energy Morse Function F is defined by the relation

exp[−F/g(Z)] ≡ Σ_k exp[−H_k/g(Z)] = h(g(Z))   (2.5)
F is subject to symmetry-breaking transitions as g(Z) varies (Pettini 2007; Matsumoto 1997). Again, see the chapter Mathematical Appendix for an outline of Morse Function formalism.
Note that these symmetries are not those associated with simple physical phase transitions represented by standard group structures. Cognitive phase change involves punctuated transitions between equivalence classes of high probability signal sequences, represented as groupoids. As Tateishi et al. (2013) put it, if experimental data can be grouped into equivalence classes compatible with an algebraic structure, a groupoid approach can capture the symmetries of the system in a way not possible with group theory, for example in the analysis of neural network dynamics. Similar matters can be found in Schreiber and Skoda (2010).
Here, groupoid symmetries are driven by the directed homotopy induced by failure of local time reversibility for information systems. This is because palindromes have vanishingly small probability. In English, "the" has meaning in context while "eht" has vanishingly low probability. Thus the standard statement "except on a set of measure zero" implies a fundamental symmetry-breaking. Nonstationary cognitive processes may require even more general structures, for example, small categories and/or semigroupoids, for analogs to the standard symmetry-breaking dynamics of physical systems. The analysis remains to be done.
General symmetry-breaking phase changes represent extension of the Data Rate Theorem (DRT) of the chapter Mathematical Appendix to cognitive systems. Recall the central point of the DRT: the rate at which externally supplied control information must be imposed on an inherently unstable system to stabilize it must exceed the rate at which that system generates its own "topological information."
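The defining feature of a groupoid—products defined only within equivalence classes, yielding a disjoint orbit partition—can be made concrete with a toy "pair groupoid" over an explicit partition. This is a standard small example (cf. Weinstein 1996), purely illustrative rather than the chapter's formalism:

```python
def make_pair_groupoid(classes):
    """Pair groupoid over a partition: morphisms are ordered pairs (x, y) with
    x and y in the same equivalence class; composition is only partially
    defined, giving the disjoint orbit structure described in the text."""
    morphisms = [(x, y) for c in classes for x in c for y in c]

    def compose(f, g):
        # (x, y) o (y, z) -> (x, z); None when the product is undefined
        (x, y1), (y2, z) = f, g
        if y1 != y2 or f not in morphisms or g not in morphisms:
            return None
        return (x, z)

    return morphisms, compose

# Two equivalence classes of "meaningful" paths; products never cross classes:
morphisms, compose = make_pair_groupoid([{"a", "b"}, {"c", "d"}])
```

Every morphism (x, y) has an inverse (y, x) within its own class, but no product joins the two classes: the "group-like" structure lives entirely inside each orbit, which is what breaks when a groupoid symmetry breaks.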
One standard model is of steering a vehicle on a rough, twisting roadway at night. The headlight/steering/driver complex must impose control information at a rate greater than the rate at which the "twistiness/roughness" of the road imposes its own information on the vehicle. There may be many possible phase analogs confronting a cognitive system as g(Z) varies, rather than just the "on/off" for stability implied by the DRT itself. We will make something of this in the following sections that generalize "renormalization" analysis of phase transition.
Dynamic equations can be derived from Eq. (2.5) by imposing a first-order Onsager approximation in the gradient of an entropy measure constructed from the iterated free energy Morse Function F via the Legendre transform

S(Z) ≡ −F(Z) + Z dF/dZ   (2.6)

leading to

exp[−F/g(Z)] = Σ_k exp[−H_k/g(Z)] ≡ h(g(Z))
F(Z) = −log(h(g(Z))) g(Z)
g(Z) = −F(Z) / RootOf(e^X − h(−F(Z)/X))
∂Z/∂t ≈ μ ∂S/∂Z = f(Z)   (2.7)
The RootOf construct defines a generalized Lambert W-function in the sense of Maignan and Scott (2016), Mezo and Keady (2015), and Scott et al. (2006). The last expression in Eq. (2.7) represents imposition of the entropy gradient formalism of Onsager nonequilibrium thermodynamics (de Groot and Mazur 1984). f(Z) is defined here as the "adaptation function," the rate at which the system adjusts to changes in Z. Again, for information transmission there is no "temporal microreversibility," so that there can be no Onsager Reciprocal Relations in these models. Further development leads to
f(Z) = Z d^2F/dZ^2

F(Z) = ∫∫ (f(Z)/Z) dZ dZ + C_1 Z + C_2

−Z ∫ (f(Z)/Z) dZ − log(h(g(Z))) g(Z) − C_1 Z + ∫ f(Z) dZ + C_2 = 0   (2.8)
Taking F = −log(h(g(Z))) g(Z), with h determined by underlying internal structure, leads to expressing g in terms of a generalized Lambert W-function, suggesting an underlying formal network structure (Newman 2010). Specification of any two of f, g, h allows calculation of the third. However, h is fixed by the internal structure of the larger system, and the "adaptation rate" f is imposed by externalities. Further, the boundary conditions C_1, C_2 are similarly imposed, also structuring the temperature analog g(Z). Some thought suggests that the temperature analog g(Z) might also be viewed as an order parameter.
If, in Eq. (2.5), the sum is replaced by the integral ∫_0^∞ exp[−H/g(Z)] dH, then h(g(Z)) ≡ g(Z) and the RootOf construction becomes W(n, −F(Z)), where W(n, x) is the n-th branch of the Lambert W-function that satisfies the relation W(n, x) exp[W(n, x)] = x. Only for n = 0, −1 are there regions of x for which the Lambert W-function is real-valued. This will prove to be important in what follows.
It may be necessary to replace the scalar Z with an m ≤ n-dimensional vector having a number of independent, perhaps orthogonal, components accounting for considerable portions of the total variance in the rate of supply of essential resources. The dynamic equations can then be reexpressed in a more complicated vector form (Wallace 2020a, Section 7.1). See the chapter Mathematical Appendix for an outline. In a similar way, it may be necessary to introduce nonlinear or higher order Onsager models (e.g., Wallace 2021a), leading to formal algebraic power series treatments (Jackson et al. 2017). These developments would take us beyond the simplest "Y = mX + b" regression model analogs explored here. The dynamics are driven at rates determined by the adaptation function f(Z).
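For the integral special case just described, h(g) ≡ g reduces the RootOf construction to g(Z) = −F(Z)/W(0, −F(Z)), which can be verified numerically. The sketch below implements the two real branches by Newton iteration (the starting guesses and the sample value F = −0.5 are assumptions for illustration):

```python
from math import exp, log

def lambert_w(x, branch=0, tol=1e-12):
    """Newton's method for the real Lambert W, solving w*exp(w) = x.
    Branch 0 is real for x >= -1/e; branch -1 for -1/e <= x < 0."""
    if branch == 0:
        w = log(1.0 + x) if x > -0.3 else -0.5   # rough starting guesses
    else:
        w = log(-x) - log(-log(-x))              # asymptotic start, branch -1
    for _ in range(100):
        ew = exp(w)
        step = (w * ew - x) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

# With h(g) = g, Eq. (2.7) gives g(Z) = -F(Z)/W(0, -F(Z)); the defining
# relation F = -log(g) * g can then be checked directly.
F = -0.5                       # hypothetical iterated free energy value
g = -F / lambert_w(-F)
```

The branch structure matters for the text's later argument: only n = 0, −1 give real values, and the branch point at x = −1/e marks where real-valued solutions—and hence this class of "temperature" solutions—disappear.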
We can ask more detailed questions regarding what happens at critical points defined in terms of the “temperature” variate g(Z) through the abduction of another basic approach from physical theory, involving symmetry-breaking, phase transition, and “renormalization.” We start with some results from computation theory.
2.4 No Free Lunch

We begin by examining the implications of "no free lunch" and "tuning theorem" arguments for the punctuated address of rapidly changing patterns of threat and affordance facing a cognitive entity.
Given a set of cognitive constituent modules that become linked to solve a problem, the "no free lunch" argument of Wolpert and Macready (1997) becomes significant. The central point (e.g., English 1996) is that, if an optimizer has been tuned to the most effective possible structure for a particular kind of problem or problem set, it will necessarily be worst for some other problem set, which must then have a different function optimizer for optimality.
Wallace (2012) describes these matters in another way, showing that a computed solution is simply the product of the information processing of a problem, and, by a
Fig. 2.1 Two different problems facing an organism will be optimally solved by two different linkages of available lower level cognitive modules (LLCM), characterized here by their dual information sources Xj, into different temporary networks of working structures. These are linked by crosstalk among the sources rather than by the LLCM themselves. The embedding information source Z represents the influence of external signals
very famous argument, information can never be gained simply by processing. Thus a problem X is transmitted as a message by an information processing channel, Y, a computing device, and recoded as an answer. By the “tuning theorem” argument of Chap. 4 below—Sect. 4.2—there will be a channel coding of Y which, when properly tuned, is itself most efficiently “transmitted,” in a sense, by the problem— the “message” X. In general, then, the most efficient coding of the transmission channel, that is, the best algorithm turning a problem into a solution, will necessarily be highly problem-specific. Thus there can be no best algorithm for all sets of problems, although there will likely be an optimal algorithm for any given set. Thus, different challenges facing a cognitive entity must be met by different arrangements of cooperating lower level cognitive modules. We make an abstract picture of this based on the network of linkages between the information sources dual to the lower level cognitive modules (LLCM) that can become entrained into address of those challenges. The network of lower level cognitive modules is reexpressed in terms of the information sources dual to them. Given two distinct problem classes, there must be two markedly different wirings of the information sources dual to the available LLCM, as in Fig. 2.1. The network graph edges are measured by the level of information crosstalk between sets of nodes representing the dual information sources. The possible expansion of a closely linked set of information sources dual to the LLCM into a global broadcast depends, for this model, on the underlying network topology of the dual information sources and on the strength of the couplings between the individual components of that network.
42
R. Wallace
2.5 A First Model Class

The fraction of nodes within the "giant component" of a random network of N nodes—here, taken as interacting information sources dual to lower level cognitive processes at and across various scales and levels of organization—can be described in terms of the probability of contact between nodes, p—indexed by the intensity of crosstalk—as (Newman 2010)

[W(0, −Np exp[−Np]) + Np] / Np   (2.9)

This is shown in Fig. 2.2.

Fig. 2.2 Proportion of N interacting information sources dual to lower level cognitive processes that are entrained into a "giant component" institutional global broadcast as a function of the probability of contact p for random and stars-of-stars-of-stars topologies. Tuning topologies determines the threshold for "ignition" to a single, large-scale, global broadcast

An important matter is the topological tunability of the threshold dynamics implied by the two limiting cases, the star-of-stars-of-stars vs. the random network. Lambert W-functions appear to suggest existence of some kind of underlying formal network structure.
Above, we abducted results from nonequilibrium thermodynamics to consciousness theory, applicable to nonergodic, as well as ergodic, models of cognition. Here, we abduct the Kadanoff renormalization treatment of physical phase transitions (e.g., Wilson 1971; Wallace 2005, 2012), applying it to a reduced version of the iterated "free energy" Morse Function of Eq. (2.5), expanding the approach of Wallace (2021a).
For the sake of familiarity and brevity, we project down on to the subsystem dominated by C, the internal system bandwidth, envisioning a number of internal cognitive submodules as connected into a topologically identifiable network having
a variable average number of fixed-strength crosstalk linkages between components. The mutual information measure of crosstalk can continuously change, and it then becomes possible to conduct a parameterized renormalization in a now-standard manner (Wilson 1971; Wallace 2005).
The internal modular network linked by information exchange has a topology depending on the magnitude of interaction. We define an interaction parameter, a real number ω > 0, and examine structures characterized in terms of linkages set to zero if crosstalk is less than ω, and renormalized to 1 if greater than or equal to ω. Each ω defines, in turn, a network "giant component" (Spencer 2010), linked by information exchange greater than or equal to it.
Invert the argument. That is, a given topology of interacting submodules making up a giant component will, in turn, define some critical value ω_C such that network elements interacting by information exchange at a rate less than that value will be excluded from that component, will be locked out, and not "consciously" perceived. ω is a tunable, syntactically dependent, detection limit depending on the instantaneous topology of the giant component of linked cognitive submodules defining, by that linkage, a "global broadcast."
For "slow" systems (Wallace 2012)—immune response, gene expression, wound healing, institutional process—as opposed to the 100 ms time constant of higher animal consciousness, there can be many such "global workspace" spotlights acting simultaneously. Such multiple global broadcasts, indexed by the set Ω = {ω_1, ω_2, ...}, lessen the likelihood of inattentional blindness to critical signals, both internal and external. The immune system, for example, engages simultaneously in pathogen and malignancy attack, neuro-immune dialog, and routine tissue maintenance (Cohen 2000).
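The giant-component threshold of Eq. (2.9) can be sketched numerically. Writing c = Np, the standard substitution w = c(S − 1) shows that (W(0, −c e^{−c}) + c)/c is exactly the fixed point S of S = 1 − exp(−cS), which the hypothetical sketch below solves by direct iteration rather than through W itself:

```python
from math import exp

def giant_component_fraction(c):
    """Eq. (2.9) with c = N*p: (W(0, -c*exp(-c)) + c) / c.  Via the standard
    substitution w = c*(S - 1) this equals the fixed point of
    S = 1 - exp(-c*S), solved here by direct iteration."""
    S = 0.5
    for _ in range(500):
        S = 1.0 - exp(-c * S)
    return S
```

Below the critical mean contact rate c = 1 the fraction collapses to zero; above it, a macroscopic giant component "ignites"—the random-network limiting case of the tunable threshold behavior sketched in Fig. 2.2.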
Institutions engaged in organized conflict must maintain their individual Z-components under often dire circumstances, while engaging in both "local" and "grand" strategies to bring the conflict to some useful conclusion.
Assuming it possible to scalarize the set Ω in something of the manner of Z above, we again impose a single, real-valued ω, and can model the dynamics of a multiple tunable workspace system. Recall the definition of the iterated free energy F from Eq. (2.5), now focused within and characterized by ω.
The essential idea is to invoke a "length" r on the network of internal interacting information sources. r will be more fully defined below. We follow the renormalization methodology of Wilson (1971) as described in Wallace (2005), although other approaches are clearly possible. That is, there is no unique renormalization symmetry. The central matter is a "clumping" transformation under an "external field strength" that can be, in the limit, set to zero. For clumps of size R, given a field of strength J,

F[ω(R), J(R)] = F(R) F[ω(1), J(1)]
χ[ω(R), J(R)] = χ[ω(1), J(1)] / R   (2.10)
χ represents a correlation length across the linked information sources. F(R) is a "biological" renormalization relation that can take such forms as R^δ, m log(R) + 1, exp[m(R − 1)/R], and so on, so long as F(1) = 1 and F is otherwise monotonic increasing. Physical theory is restricted to F(R) = R^3. Surprisingly, after some tedious algebra, the standard Wilson (1971) renormalization phase transition calculation drops right out for the extended relations, described first in Wallace (2005) and summarized in the chapter Mathematical Appendix.
There remains a problem. Just what is the metric r? First, impose a topology on the system of interacting information sources such that, near a particular "language" A associated with some source uncertainty measure H, there is an open set U of closely similar languages Â such that A, Â ∈ U. Since the information sources are sufficiently similar, for all pairs of languages A, Â in U it is possible to

• Create an embedding alphabet that includes all symbols allowed to both.
• Define an information-theoretic distortion measure in the extended joint alphabet between any high probability (i.e., properly grammatical and syntactical) paths in A and Â, written as d(Ax, Âx̂). The different languages do not interact in this approximation.
• Define the metric on U as

r(A, Â) ≡ | Σ_{A,Â} d(Ax, Âx̂) − Σ_{A,A} d(Ax, Ax̂) |   (2.11)

where Ax and Âx̂ are paths in the languages A, Â respectively, d is the distortion measure, and the second term is a "self-distance" for the language A, such that r(A, A) = 0 and r(A, Â) > 0 for A ≠ Â.
Some thought shows this version of r is sufficient, if somewhat counterintuitive (Glazebrook and Wallace 2009). Extension of the Wilson technique to a fully embodied model of institutional cognition seems possible. However, since the dynamics of the embedded condition are so highly variable, there will be no unique solution, although there may well be equivalence classes of solutions, defining yet more groupoids in the sense of Tateishi et al. (2013).
Groupoids seem to emerge in a more fundamental way. Wilson's renormalization semigroup, in the cognitive circumstance of discrete equivalence classes of developmental pathways, might well require generalization as a renormalization semigroupoid, e.g., a disjoint union of different renormalization semigroups across a nested or otherwise linked set of information sources and/or iterated free energy constructs dual to cognitive modules.
Following Wallace (2018), we have studied equivalence classes of directed homotopy developmental paths—individually, the {x_1, ..., x_n, ...}—associated with nonergodic cognitive systems defined in terms of single-path source uncertainties.
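A toy numerical version of the metric in Eq. (2.11) may help fix ideas. Here "languages" are short hypothetical lists of high-probability paths, the distortion measure is Hamming distance, and averages stand in for the raw sums so the path lists need not match in size—an illustrative liberty, not the text's definition:

```python
def hamming(u, v):
    """Distortion between two equal-length paths: mismatched symbol count."""
    return sum(a != b for a, b in zip(u, v))

def r_metric(A, B):
    """Toy version of Eq. (2.11): |average cross-distortion between the two
    path sets minus the average 'self-distance' within A|."""
    cross = sum(hamming(x, y) for x in A for y in B) / (len(A) * len(B))
    self_d = sum(hamming(x, y) for x in A for y in A) / (len(A) ** 2)
    return abs(cross - self_d)

# Hypothetical high-probability paths of two nearby "languages"; note the
# reversed paths echo the text's point that "eht" is not a meaningful path.
A = ["the", "tha"]
B = ["eht", "aht"]
```

As required, r(A, A) = 0 while distinct languages sit at positive distance, which is all the renormalization construction needs from r.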
2 Punctuated Institutional Problem Recognition
These require imposition of structure in terms of the metric r of Eq. (2.11), leading to groupoid symmetry-breaking transitions driven by changes in the temperature analog g(Z).

There can be an intermediate case under circumstances in which the standard ergodic decomposition of a stationary process is both reasonable and computable—no small constraint. Then there is an obvious natural directed homotopy partition in terms of the transitive components of the path-equivalence class groupoid. It appears that this decomposition is equivalent to, and maps onto, the ergodic decomposition of the overall stationary cognitive process. It then becomes possible to define a constant source uncertainty on each transitive subcomponent, fully indexed by the embedding groupoid. That is, each ergodic/transitive groupoid component of the ergodic decomposition recovers a constant value of the source uncertainty dual to cognition, presumably given by the standard "Shannon entropy" expression. Since it is possible to view the components themselves as constituting single paths in an appropriate quotient space, this leads to the previous "nonergodic" developments.

A complication emerges through imposition of a double symmetry involving metric r-defined equivalence classes on this quotient space. That is, there are different possible strategies for any two teams playing the same game. In sum, however, groupoid symmetry-breaking in the iterated free energy construct of Eq. (2.5) or (2.7) will still be driven by changes in g(Z) and/or ω.
2.6 A Second Model Class

The Data Rate Theorem (DRT), as characterized in Eq. (2.27) of the chapter Mathematical Appendix, provides another perspective on the punctuated accession to recognition/action of an environmental or other signal. The essential idea of the DRT is that a system that generates its own "topological information" at some rate H0 cannot be stabilized—here, made to respond to an external control signal—unless the control signal information source uncertainty H is greater than H0, the rate at which the system of interest is taking care of its own business. That is, the "control" signal is characterized in terms of the Z variate, representing a signal that must break through ongoing business-as-usual to achieve accession to widespread institutional recognition, the analog to "consciousness" in an organism with an elaborate nervous system. The most obvious extension of the DRT is the inequality

H(Z) > h(Z) H0   (2.12)
where, again, Z represents the influence of the intrusive signal that is attempting accession to consciousness.
R. Wallace
Fig. 2.3 The horizontal line is the Data Rate Theorem limit H0 . If T (Z) < H0 , the signal will not exert sufficient control to reach general institutional recognition/global broadcast
Following a classic Black–Scholes argument (e.g., Wallace 2020, Section 14.4), an exactly solvable first approximation leads to

H(Z) ≈ κ1 Z + κ2   (2.13)
Expanding h(Z) to first order, so that h(Z) ≈ κ3 Z + κ4, leads to a new limit condition in terms of Z where, for reasons that will become clear, we set κ2 = 0:

T ≡ κ1 Z / (κ3 Z + κ4) > H0   (2.14)
We now characterize T as a new "control temperature" variate. Again, H0 is the rate at which the system generates its own topological information and represents ongoing business as usual. H(Z) is the "control signal" information rate that must punch through for notice. For appropriate values of the κ, the relation between T and Z is shown in Fig. 2.3, expected to be monotonic increasing in Z from zero. The horizontal line in Fig. 2.3 is the DRT limit H0, the rate at which the system generates topological information by "doing its own business." If T(Z) < H0, the signal Z will not have a direct effect on behavior. That is, there will be no accession to the rapid mechanisms of consciousness.

We can explore the punctuated accession to institutional recognition/response for this model in the presence of "distraction." Typically, Z itself will be time dependent, rising from zero to some final value according to some adaptation function, which we again take as dZ/dt = β − αZ(t), so that Z → β/α. What happens to the dynamics of Fig. 2.3 in a "distraction" environment? Then, as above, the dynamics
Fig. 2.4 The dark line is the solution equivalence class {Z, σ } and the lighter line the value of Z corresponding to the Data Rate Theorem limit H0 from Fig. 2.3. For σ = 0, Z is at the nonequilibrium steady state β/α. Rising σ drives the effective value of signal below punctuated detectability
are approximated by the stochastic differential equation (Protter 2005)

dZt = (β − αZt) dt + σ Zt dBt   (2.15)
where the second term represents the volatility of the system, with dBt taken as Brownian noise. Our central interest, however, is not in Z itself but in the stochastic behavior of T(Z) = κ1 Z/(κ3 Z + κ4), which can be studied using the Ito Chain Rule (Protter 2005). This produces a general relation at nonequilibrium steady state:
(κ1/(Zκ3 + κ4) − κ1 Zκ3/(Zκ3 + κ4)²) (β − αZ) + (−2 κ1 κ3/(Zκ3 + κ4)² + 2 κ1 Zκ3²/(Zκ3 + κ4)³) σ²Z²/2 = 0   (2.16)
The σ -term represents the Ito correction factor indexing the influence of “distraction” noise. A typical solution to Eq. (2.16) is shown in Fig. 2.4. This represents a solution equivalence class {Z, σ }, along with the value of Z corresponding to the DRT limit H0 from Fig. 2.3. If σ = 0, then Z is at the nonequilibrium steady state β/α. As σ increases, the effective value of Z declines until it falls below the punctuated signal entry value. The appearance of an equivalence class again suggests groupoid symmetry-breaking at a critical input strength.
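The nonequilibrium steady state condition of Eq. (2.16) can be solved in closed form: clearing denominators reduces it to the quadratic (β − αZ)(κ3 Z + κ4) − κ3 σ²Z² = 0. A minimal Python sketch, with parameter values chosen purely for illustration, recovers the behavior of Fig. 2.4: Z = β/α at σ = 0, declining as σ grows:

```python
import math

# <dT> = 0 for T(Z) = k1*Z/(k3*Z + k4) under dZ = (beta - alpha*Z)dt + sigma*Z dB
# reduces, after clearing denominators in Eq. (2.16), to the quadratic
#   (beta - alpha*Z)*(k3*Z + k4) - k3*sigma**2*Z**2 = 0.

def z_nss(beta, alpha, k3, k4, sigma):
    """Positive root of the steady-state quadratic in Z."""
    a = (alpha + sigma**2) * k3      # coefficient of Z**2, sign reversed
    b = alpha * k4 - beta * k3       # coefficient of Z, sign reversed
    c = -beta * k4
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def T(Z, k1, k3, k4):
    """Cognition temperature of Eq. (2.14)."""
    return k1 * Z / (k3 * Z + k4)

# Illustrative parameters (not from the text): beta = alpha = kappa_j = 1
print(z_nss(1, 1, 1, 1, 0.0))   # nonequilibrium steady state beta/alpha = 1.0
print(z_nss(1, 1, 1, 1, 1.0))   # rising sigma drives Z below beta/alpha
```

Comparing T(z_nss(...)) with a chosen H0 then locates the critical σ at which the effective signal drops below punctuated detectability.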
2.7 A Yerkes/Dodson Law for Institutional Cognition

The identification of g(Z) as a temperature analog defined via generalized Lambert W-functions suggests possible extension of Eqs. (2.12)–(2.14) through an iteration in g(Z) rather than directly in terms of Z itself, so that

H(g) > h(g(Z)) H0
H(g) ≈ κ1 g(Z) + κ2
Tg ≡ κ1 g(Z) / (κ3 g(Z) + κ4) > H0   (2.17)
Again assuming an exponential model for the adaptation function, and using Eqs. (2.13) and (2.14), with h(g(Z)) = g(Z) and f(Z) = β − αZ, we set α = 1, C1 = −3, C2 = 5, κj = 1, and plot an inverted-U arousal graph for Tg vs. β in Fig. 2.5. Note here that, if Tg < H0, accession to recognition/global broadcast fails. That is, at sufficient "arousal" β, the institution ceases being able to respond to patterns of threat or opportunity.

Fig. 2.5 A Yerkes–Dodson inverted-U for punctuated accession to institutional response. Too great a level of arousal causes desensitization and failure of the ability to respond to changing patterns of threat and opportunity

It is again possible to conduct an Ito Chain Rule calculation, finding conditions for which ⟨dTg⟩ = 0. This is given in Fig. 2.6, again taking dZ/dt = f(Z) = β − αZ(t) at the peak of response, which turns out to be β = 4. For this particular model realization, the system fails at very low levels of σ, compared to the expected
Fig. 2.6 Numerical solution to the Ito Chain Rule calculation for ⟨dTg⟩ = 0 under the conditions of Fig. 2.5 for the peak β = 4. The critical value Z0(H0) is indicated. Increasing probability of unstable bifurcation becomes manifest at a very low level of σ
failure of the simple SDE model for Z, i.e., dZt = f(Zt)dt + σ Zt dBt, which, some calculation shows, fails in second moment at σ = √2. Again, the critical value Z0(H0) is indicated.
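The σ = √2 second-moment failure can be checked from the exact moment equations of the SDE: E[Zt] and E[Zt²] obey closed ODEs, and the second moment is stable only while σ² < 2α. A minimal sketch with illustrative parameters:

```python
def moments(beta, alpha, sigma, t_end=50.0, dt=1e-3):
    """Integrate the exact first- and second-moment ODEs of
    dZ = (beta - alpha*Z)dt + sigma*Z dB:
        dm1/dt = beta - alpha*m1
        dm2/dt = 2*beta*m1 - (2*alpha - sigma**2)*m2
    The second moment is stable only while sigma**2 < 2*alpha."""
    m1, m2 = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        m1 += (beta - alpha * m1) * dt
        m2 += (2 * beta * m1 - (2 * alpha - sigma**2) * m2) * dt
    return m1, m2

# alpha = 1: the model fails in second moment at sigma = sqrt(2) ~ 1.414
print(moments(1, 1, 1.0))   # m2 settles to a finite value
print(moments(1, 1, 1.6))   # m2 diverges
```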
2.8 In the Dark: When Your Opponent Controls Shadow Prices

Accession to institutional recognition, in terms of the "cognitive temperatures" T and Tg defined in Eqs. (2.14) and (2.17), requires T-values greater than the DRT limit H0, the rate at which the inherently unstable Clausewitz landscape on which the institution operates—including the actions of the adversary—generates its own topological information. The contending institution faces overall constraints of time t, and of a resource rate Z that will always be delivered with inherent delay. A simple Lagrangian optimization model introduces "shadow prices" imposed by the embedding landscape that includes the actions of the opponent. We suppose the institution has multiple interacting subsystems, and seek to optimize the cognition temperature T(Z1(t1), . . . , Zn(tn)) across the system subject to the time and resource constraints t = Σj tj and Z = Σj Zj.
We assume, in addition, that delivery of each module's essential resource rate Zj is itself delayed, following an exponential model:

dZj/dtj = βj − αj Zj(tj)   (2.18)
so that lim_{tj→∞} Zj(tj) = βj/αj. This leads to a Lagrangian formulation as

L = T(Z1(t1), . . . , Zn(tn)) + λ (t − Σj tj) + μ (Z − Σj Zj)

∂L/∂Zj = ∂T/∂Zj − μ = 0
∂L/∂tj = (∂T/∂Zj) dZj/dtj − λ = μ dZj/dtj − λ = 0   (2.19)

Some manipulation of the last of these expressions, remembering Eq. (2.18), finds

λ/μ = dZj/dtj = f(Zj(tj)) ≡ βj − αj Zj(tj)

Zj(tj) = βj/αj − (1/αj)(λ/μ)   (2.20)
The so-called undetermined multipliers λ and μ are, in economic parlance, "shadow prices" imposed by environmental and other externalities (e.g., Jin et al. 2008). Consider, for example, Operation Barbarossa. The Wehrmacht had to get its Panzer Divisions to Moscow before the Autumn rains made roads impassable, and, when that effort failed, following the November freeze, needed to do so before the Soviet Union's Siberian Divisions arrived for counterattack. Varus, in Germania, felt constrained to march in extended line formation through heavy forest. Other examples may come to mind.

If enough of the αj are sufficiently small, and the ratio λ/μ sufficiently large, then enough of the Zj will be too small for cognitive accession-to-recognition, according to our model, and the contending institution will be blind to changing patterns of threat or opportunity, analogous to inattentional blindness in a higher organism. A skilled adversary will take control of shadow prices, while impeding the rate at which the institutional opponent receives resources, i.e., constraining the αj. Again, examples may come to mind.

The same expression holds true even if delays are themselves distributed exponentially, so that

dZ/dt = β − α ∫_0^t Z(t − τ) m exp[−mτ] dτ   (2.21)
Recognizably similar results follow from imposing fixed delays, i.e., replacing tj with tj − δj in Eq. (2.18). As described in Sect. 1.6 above, essentially the same result follows for any relation dZ/dt = f(Z(t)) in which f is monotonic decreasing with increasing Z. That is, by a standard argument, under constraint, Zj(tj) = f⁻¹(λ/μ) will be monotonic decreasing in the shadow price ratio.
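The stationarity conditions ∂T/∂Zj = μ can be made concrete in a toy allocation. The concave temperature surrogate below is purely hypothetical (chosen only so the optimum has a closed form, not taken from the text), but it shows how the shadow price μ emerges from the resource constraint and how every allocation falls as μ rises:

```python
# Hypothetical concave cognition-temperature surrogate (illustrative only):
#   T(Z) = sum_j c_j * ln(1 + Z_j)
# Maximizing T subject to sum_j Z_j = z_total gives, from dT/dZ_j = mu,
#   Z_j = c_j/mu - 1,  with mu fixed by the resource constraint.

def allocate(c, z_total):
    """Closed-form Lagrangian optimum; assumes all resulting Z_j >= 0."""
    mu = sum(c) / (z_total + len(c))   # shadow price of the resource constraint
    return mu, [cj / mu - 1 for cj in c]

mu, Z = allocate([1.0, 2.0, 3.0], 3.0)
print(mu)  # -> 1.0, the shadow price
print(Z)   # -> [0.0, 1.0, 2.0]: modules with larger c_j receive more resource
```

Tightening the resource constraint (smaller z_total) raises μ, and every Zj falls, the allocation-side analog of the monotone decrease in the shadow price ratio noted above.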
2.9 The Dynamics of Institutional Cognitive Fragmentation

An expansion of the network dynamics explored in Sect. 2.5 follows from generalizing the institutional fragmentation model of Wallace (2015, Section 7.4), moving from a stationary, ergodic information source H to the iterated free energy of Eq. (2.5). Development involves

• From Eq. (2.10), setting F(R) = R^δ.
• Assuming a generalized temperature measure T = 1/K that will be further characterized below.
• Defining μ ≡ (K − KC)/KC.
• Assuming KC represents the onset value for a critical phase transition.

This leads to relations

F = μ^(δ/y) F0
χ = μ^(−1/y) χ0   (2.22)
where y is a positive constant. Following Zurek (1985, 1996), it is possible to compute fragment size in phase transition provided the rate of change of μ remains constant, |dμ/dt| = 1/τμ. The analogy with physical theory suggests there is a characteristic time constant for the phase transition, τ ≡ τ0/μ, such that if changes in μ take place on a timescale longer than τ for any given μ, the correlation length χ = χ0 μ^(−s), s = 1/y, will be in equilibrium with internal changes and result in very large fragments in R-space. Zurek finds the critical freeze-out time, t̂, occurs at system time t̂ = χ/|dχ/dt| such that t̂ = τ. Taking the derivative dχ/dt, recalling that dμ/dt = 1/τμ, gives

χ/|dχ/dt| = μτμ/s = τ0/μ   (2.23)

so that

μ = √(sτ0/τμ)   (2.24)
Substituting this into the correlation length expression gives the expected size of fragments in R-space, say d(t̂), as

d = χ0 (τμ/(sτ0))^(s/2)   (2.25)
where s = 1/y > 0. The more rapidly K approaches KC , the smaller is τμ and the smaller and more numerous are the resulting R-space fragments. Rapid change produces small fragments more likely to risk thrashing, conflict, or other failures leading to extinction when unified economies of scale are needed. The subtle but central point of the analysis, however, lies in the definition of K = 1/T . We have three possibilities. The first—perhaps most natural—is that T = g(Z) from Eq. (2.7). It is possible, however, that different classes of systems may require definitions of T explicitly in terms of Data Rate Theorem limitations, i.e., Eqs. (2.14) and (2.17). Application of Eqs. (2.7) or (2.17) leads, of course, to complicated inverted-U dynamics within already complicated phase transitions. That is, the dynamics of fragmentation in institutional cognitive function are likely to be far more complicated than for physical processes, a matter already at the cutting edge of applied mathematics and physical theory.
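A quick numerical reading of Eqs. (2.24) and (2.25), with illustrative parameter values, confirms the qualitative claim that a faster quench (smaller τμ) yields smaller R-space fragments:

```python
import math

def freeze_out_mu(s, tau0, tau_mu):
    """Eq. (2.24): mu at the Zurek freeze-out time."""
    return math.sqrt(s * tau0 / tau_mu)

def fragment_size(chi0, s, tau0, tau_mu):
    """Eq. (2.25): expected R-space fragment size d = chi0*(tau_mu/(s*tau0))**(s/2)."""
    return chi0 * (tau_mu / (s * tau0)) ** (s / 2)

# Illustrative values (not from the text): chi0 = 1, s = 0.5, tau0 = 1
print(fragment_size(1.0, 0.5, 1.0, 10.0))  # slow change of mu: large fragments
print(fragment_size(1.0, 0.5, 1.0, 0.1))   # rapid change: small fragments
```

As a consistency check, d equals χ0 μ^(−s) evaluated at the freeze-out μ of Eq. (2.24).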
2.10 Discussion and Conclusions

Institutions engaged in organized conflict, from families, neighborhood, and community groups, through environmental organizations, labor unions, businesses, cartels and other crime syndicates, political parties and their de facto dictatorships, armies, and empires, must operate on inherently unstable landscapes of uncertainty, imprecision, environmental stress, and adversarial intent. Institutions—including public health departments—are cognitive, in the sense of Atlan and Cohen (1998), given that they must, under challenge or opportunity—and most often under time and resource constraint—choose and carry out one or a few of the actions possible to them. Such choice reduces uncertainty and directly implies the existence of an information source dual to that choice, acting under the further need to impose control information at a rate greater than the environment—in a large sense—generates its own, often deliberately contrary, topological information.

Here, we have explored the inherently punctuated nature of effective institutional cognitive response to patterns of threat and opportunity. This involves entrainment of "workgroups" and their associated resources in a coordinated manner into tactical, operational, and strategic responses sufficient to resolve or exploit those patterns at their critical scales and levels of organization. The mathematical models we have used are sufficient to illustrate something regarding the underlying dynamics, often those of symmetry-breaking on surprisingly subtle algebraic landscapes. Although, as they say, "insight is not cure," the mathematical and probability models explored here may serve for the construction of statistical tools useful in the analysis of
observational and experimental data, the only sources of new knowledge, as opposed to new speculation (Pielou 1977). What should be clear, at this point, is that institutional cognition, like gene expression, the immune system, and similar processes, is far more complicated than even individual human consciousness, at present socially constructed as a difficult and unsolved scientific problem (Wallace 2022). This is because consciousness, relegated by evolutionary selection pressure to about a 100 ms time constant, is unable to support more than a single tunable spotlight involving punctuated accession to attention, while institutions and other slower “global workspace” cognitive processes may entertain many such tunable spotlights. As said, in the real world, Rambo is always killed or captured, and the hunted tiger usually gets turned into a rug. There is, of course, a grave limitation to what we have done here. This is a scientific and engineering exploration of institutional cognition under conditions of conflict. While it should indeed be possible to extend this work to build “regression model” analogs as tools for real-time management, such things are often weak reeds in the presence of skilled adversarial intent during the “conversation” between combatants. The best tactical and engineering tools—including those that might be constructed from the work done here—are ultimately useless in the absence of strategic competence (Gray 2018). Canonical examples are perhaps those of US military enterprise since the 1950 landings at Inchon, or of the persistent failures of the US public health establishment in the face of emerging and reemerging infection.
2.11 Mathematical Appendix

2.11.1 Groupoids

We follow Brown (1992). Consider a directed line segment in one component, written with the source on the left and the target on the right:

• −→ •

Two such arrows can be composed to give a product ab if and only if the target of a is the same as the source of b:

• −a→ • −b→ •

Brown puts it this way:

One imposes the geometrically obvious notions of associativity, left and right identities, and inverses. Thus a groupoid is often thought of as a group with many identities, and the reason why this is possible is that the product ab is not always defined.
We now know that this apparently anodyne relaxation of the rules has profound consequences. . . [since] the algebraic structure of product is here linked to a geometric structure, namely that of arrows with source and target, which mathematicians call a directed graph.
Cayron (2006) elaborates this: A group defines a structure of actions without explicitly presenting the objects on which these actions are applied. Indeed, the actions of the group G applied to the identity element e implicitly define the objects of the set G by ge = g; in other terms, in a group, actions and objects are two isomorphic entities. A groupoid enlarges the notion of group by explicitly introducing, in addition to the actions, the objects on which the actions are applied. By this approach, many identities may exist (they correspond to the actions that leave an object invariant).
Stewart (2017) describes something of the underlying mechanics by which symmetry changes in general may be expressed: Spontaneous symmetry-breaking is a common mechanism for pattern formation in many areas of science. It occurs in a symmetric dynamical system when a solution of the equations has a smaller symmetry group than the equations themselves. . . This typically happens when a fully symmetric solution becomes unstable and branches with less symmetry bifurcate.
It is of particular importance that equivalence class decompositions permit construction of groupoids in a highly natural manner. Weinstein (1996) and Golubitsky and Stewart (2006) provide more details on groupoids and on the relation between groupoids and bifurcations. An essential point is that, since there are no necessary products between groupoid elements, “orbits,” in the usual sense, disjointly partition groupoids into “transitive” subcomponents. Application of groupoid formalism to the homochirality problem, with a more detailed overview of “biological” groupoid algebra, can be found in Wallace (2011).
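The defining condition, that the product ab exists only when the target of a matches the source of b, is easy to make concrete. A minimal Python sketch (the Arrow class and its method names are illustrative, not from the text):

```python
class Arrow:
    """A groupoid element: an arrow with explicit source and target objects."""
    def __init__(self, source, target, name):
        self.source, self.target, self.name = source, target, name

    def compose(self, other):
        """Product ab: defined iff target(a) == source(b)."""
        if self.target != other.source:
            return None  # the product is simply not defined
        return Arrow(self.source, other.target, self.name + other.name)

    def inverse(self):
        """Reverse the arrow; a composed with its inverse is an identity at source(a)."""
        return Arrow(self.target, self.source, self.name + "^-1")
```

Each object carries its own identity arrow, giving the "many identities" of the quotes above, and orbits of such arrows disjointly partition the groupoid into the transitive subcomponents used repeatedly in the main text.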
2.11.2 The Data Rate Theorem

Real-world environments are inherently unstable. Organisms (and organizations), to survive, must exert a considerable measure of control over them. These control efforts range from immediate responses to changing patterns of threat and affordance, through niche construction, and, in higher animals, elaborate, highly persistent, social, and sociocultural structures. Such necessity of control can, in some measure, be represented by a powerful asymptotic limit theorem of probability theory different from, but as fundamental as, the Central Limit Theorem. It is called the Data Rate Theorem, first derived as an extension of the Bode Integral Theorem of signal theory.

Consider a reduced model of a control system as follows: For the inherently unstable system of Fig. 2.7, assume an initial n-dimensional vector of system parameters at time t, as xt. The system state at time t + 1 is then—
Fig. 2.7 The reduced model of an inherently unstable system stabilized by a control signal Ut
near a presumed nonequilibrium steady state—determined by the first-order relation

xt+1 = A xt + B ut + Wt   (2.26)
In this approximation, A and B are taken as fixed n-dimensional square matrices, ut is a vector of control information, and Wt is an n-dimensional vector of Brownian white noise (Fig. 2.7). According to the DRT, if H is a rate of control information sufficient to stabilize an inherently unstable control system, then it must be greater than a minimum, H0:

H > H0 ≡ log[| det[Am]|]   (2.27)
where det is the determinant of the subcomponent Am —with m ≤ n—of the matrix A having eigenvalues ≥ 1. H0 is defined as the rate at which the unstable system generates “topological information” on its own. If this inequality is violated, stability fails.
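For intuition, H0 can be computed directly from the unstable spectrum. A small sketch, assuming a diagonalizable A specified by a made-up list of eigenvalues:

```python
import math

def drt_minimum_rate(eigenvalues):
    """H0 = log|det(Am)|, where Am collects the eigenvalues of A with |lambda| >= 1.
    For a diagonalizable A this is the sum of log|lambda| over the unstable modes."""
    unstable = [ev for ev in eigenvalues if abs(ev) >= 1.0]
    if not unstable:
        return 0.0  # no unstable modes: no minimum control rate required
    return sum(math.log(abs(ev)) for ev in unstable)

# Illustrative spectrum: two unstable modes (2.0, 1.5), one stable (0.5)
H0 = drt_minimum_rate([2.0, 1.5, 0.5])
print(H0)  # log(2.0) + log(1.5) = log(3.0)
```

Any control signal delivering information below this H0 cannot stabilize the system, which is the inequality the main text iterates on.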
2.11.3 Morse Theory

Morse theory examines relations between the analytic behavior of a function—the location and character of its critical points—and the underlying topology of the manifold on which the function is defined. We are interested in a number of such functions, for example a "free energy" constructed from information source uncertainties on a parameter space, and "second order" iterations involving parameter manifolds determining critical behavior. These can be reformulated from a Morse theory perspective. Here we follow closely Pettini (2007).
The essential idea of Morse theory is to examine an n-dimensional manifold M as decomposed into level sets of some function f : M → R, where R is the set of real numbers. The a-level set of f is defined as f⁻¹(a) = {x ∈ M : f(x) = a}, the set of all points in M with f(x) = a. If M is compact, then the whole manifold can be decomposed into such slices in a canonical fashion between two limits, defined by the minimum and maximum of f on M. Let the part of M below a be defined as Ma = f⁻¹(−∞, a] = {x ∈ M : f(x) ≤ a}. These sets describe the whole manifold as a varies between the minimum and maximum of f.

Morse functions are defined as a particular set of smooth functions f : M → R as follows. Suppose a function f has a critical point xc, so that the derivative df(xc) = 0, with critical value f(xc). Then f is a Morse function if its critical points are nondegenerate in the sense that the Hessian matrix of second derivatives at xc, whose elements in terms of local coordinates are Hi,j = ∂²f/∂xi∂xj, has rank n, which means that it has only nonzero eigenvalues, so that there are no lines or surfaces of critical points and, ultimately, critical points are isolated. The index of the critical point is the number of negative eigenvalues of H at xc.

A level set f⁻¹(a) of f is called a critical level if a is a critical value of f, that is, if there is at least one critical point xc ∈ f⁻¹(a). Again following Pettini (2007), the essential results of Morse theory are:

1. If an interval [a, b] contains no critical values of f, then the topology of f⁻¹[a, v] does not change for any v ∈ (a, b]. Importantly, the result is valid even if f is not a Morse function, but only a smooth function.
2. If the interval [a, b] contains critical values, the topology of f⁻¹[a, v] changes in a manner determined by the properties of the matrix H at the critical points.
3. If f : M → R is a Morse function, the set of all the critical points of f is a discrete subset of M, i.e., critical points are isolated. This is Sard's Theorem.
4. If f : M → R is a Morse function, with M compact, then on a finite interval [a, b] ⊂ R, there is only a finite number of critical points p of f such that f(p) ∈ [a, b]. The set of critical values of f is a discrete set of R.
5. For any differentiable manifold M, the set of Morse functions on M is an open dense set in the set of real functions of M of differentiability class r for 0 ≤ r ≤ ∞.
6. Some topological invariants of M, that is, quantities that are the same for all the manifolds that have the same topology as M, can be estimated and sometimes
computed exactly once all the critical points of f are known: Let the Morse numbers μi (i = 0, . . . , m) of a function f on M be the number of critical points of f of index i (the number of negative eigenvalues of H). The Euler characteristic of the complicated manifold M can be expressed as the alternating sum of the Morse numbers of any Morse function on M,

χ = Σ_{i=0}^{m} (−1)^i μi

The Euler characteristic reduces, in the case of a simple polyhedron, to

χ = V − E + F

where V, E, and F are the numbers of vertices, edges, and faces in the polyhedron.
7. Another important theorem states that, if the interval [a, b] contains a critical value of f with a single critical point xc, then the topology of the set Mb defined above differs from that of Ma in a way that is determined by the index i of the critical point. Then Mb is homeomorphic to the manifold obtained from attaching to Ma an i-handle, i.e., the direct product of an i-disk and an (m − i)-disk.

Again, Pettini (2007) contains both mathematical details and further references. See, for example, Matsumoto (1997).
2.11.4 Higher Dimensional Resource Systems

We assumed that resource delivery is sufficiently characterized by a single scalar parameter Z, mixing material resource/energy supply with internal and external flows of information. Real-world conditions will likely be far more complicated. Invoking a perspective analogous to Principal Component Analysis, there may be several independent pure or composite entities irreducibly driving system dynamics. It may then be necessary to replace the scalar Z by an n-dimensional vector Z having orthogonal components that together account for much of the total variance—in a sense—of the rate of supply of essential resources. The dynamic equations (2.7) and (2.8) must then be represented in vector form:

F(Z) = − log(h(g(Z))) g(Z)
S = −F + Z · ∇Z F
∂Z/∂t ≈ μ̂ · ∇Z S = f(Z)
−∇Z F + ∇Z (Z · ∇Z F) = μ̂⁻¹ · f(Z) ≡ f*(Z)
((∂²F/∂zi∂zj)) · Z = f*(Z)
((∂²F/∂zi∂zj))|Znss · Znss = 0   (2.28)
F, g, h, and S are, again, scalar functions, but μ̂ is an n-dimensional square matrix of diffusion coefficients. The matrix ((∂²F/∂zi∂zj)) is the obvious n-dimensional square matrix of second partial derivatives, and f(Z) is a vector function. The last relation imposes a nonequilibrium steady state condition, i.e., f*(Znss) = 0. For a detailed two-dimensional example—requiring Lie group symmetry methods—see Wallace (2020a, Sec. 7).
2.11.5 Cognitive Renormalizations

Here, we adapt the renormalization scheme of Wallace (2005), focused on a stationary, ergodic information source H, to the generalized free energy associated with nonergodic cognition. Equation (2.10) states that the information source and the correlation length, the degree of coherence on the underlying network, scale under renormalization clustering in chunks of size R as

F[ω(R), J(R)] = F(R) F[ω(1), J(1)]
χ[ω(R), J(R)] R = χ[ω(1), J(1)]

with F(1) = 1. Differentiating these two equations with respect to R, so that the right hand sides are zero, and solving for dω(R)/dR and dJ(R)/dR gives, after some manipulation,

dωR/dR = u1 d log(F)/dR + u2/R
dJR/dR = v1 JR d log(F)/dR + (v2/R) JR   (2.29)
The ui, vi, i = 1, 2 are functions of ω(R), J(R), but not explicitly of R itself. We expand these equations about the critical value ωR = ωC and about JR = 0, obtaining

dωR/dR = (ωR − ωC) y d log(F)/dR + (ωR − ωC) z/R
dJR/dR = w JR d log(F)/dR + x JR/R   (2.30)
The terms y = du1/dωR|ωR=ωC, z = du2/dωR|ωR=ωC, w = v1(ωC, 0), x = v2(ωC, 0) are constants. Solving the first of these equations gives

ωR = ωC + (ω − ωC) R^z F(R)^y   (2.31)
again remembering that ω1 = ω, J1 = J, F(1) = 1. Wilson's (1971) essential trick is to iterate on this relation, which is supposed to converge rapidly near the critical point, assuming that for ωR near ωC, we have

ωC/2 ≈ ωC + (ω − ωC) R^z F(R)^y   (2.32)
We iterate in two steps, first solving this for F(R) in terms of known values, and then solving for R, finding a value RC that we then substitute into the first of equations (2.10) to obtain an expression for F[ω, 0] in terms of known functions and parameter values. The first step gives the general result

F(RC) ≈ [ωC/(ωC − ω)]^(1/y) / (2^(1/y) RC^(z/y))   (2.33)
Solving this for RC and substituting into the first expression of equation (2.10) gives, as a first iteration of a far more general procedure (Shirkov and Kovalev 2001), the result

F[ω, 0] ≈ F[ωC/2, 0]/F(RC) = F0/F(RC)
χ(ω, 0) ≈ χ(ωC/2, 0) RC = χ0 RC   (2.34)
giving the essential relationships.

Note that a power law of the form F(R) = R^m, m = 3, which is the direct physical analog, may not be biologically reasonable, since it says that "language richness," in a general sense, can grow very rapidly as a function of increased network size. Such rapid growth is simply not observed in cognitive process. Taking the biologically realistic example of non-integral "fractal" exponential growth,

F(R) = R^δ   (2.35)
where δ > 0 is a real number that may be quite small, we can solve for RC, obtaining

RC = [ωC/(ωC − ω)]^(1/(δy+z)) / 2^(1/(δy+z))   (2.36)
for ω near ωC. Note that, for a given value of y, one might characterize the relation α ≡ δy + z = constant as a "tunable universality class relation" in the sense of Albert and Barabasi (2002). Substituting this value for RC back gives a complex expression for F, having three parameters: δ, y, z.

A more biologically interesting choice for F(R) is a logarithmic curve that "tops out," for example

F(R) = m log(R) + 1   (2.37)
Again F(1) = 1. Using a computer algebra program to solve for RC gives

RC = [Q / W[0, Q exp(z/my)]]^(y/z)   (2.38)

where

Q ≡ (z/my) 2^(−1/y) [ωC/(ωC − ω)]^(1/y)

Again, W(n, x) is the Lambert W-function of order n.

An asymptotic relation for F(R) would be of particular biological interest, implying that "language richness" increases to a limiting value with population growth. Taking

F(R) = exp[m(R − 1)/R]   (2.39)
gives a system that begins at 1 when R = 1, and approaches the asymptotic limit exp(m) as R → ∞. Computer algebra finds

RC = (my/z) / W[0, A]   (2.40)

where

A ≡ (my/z) exp(my/z) [2^(1/y) [ωC/(ωC − ω)]^(−1/y)]^(y/z)

These developments suggest the possibility of taking the theory significantly beyond arguments by abduction from simple physical models.
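All three closed forms solve the same fixed-point condition: Eq. (2.32) is equivalent to F(RC)^y RC^z = ωC/(2(ωC − ω)). A short sketch with illustrative parameter values verifies the power-law result of Eq. (2.36) against a direct bisection solve of that condition:

```python
def rc_power_law(omega, omega_c, delta, y, z):
    """Eq. (2.36): R_C for F(R) = R**delta."""
    e = 1.0 / (delta * y + z)
    return (omega_c / (omega_c - omega)) ** e / 2.0 ** e

def rc_bisect(F, y, z, omega, omega_c, lo=1e-6, hi=1e6, iters=200):
    """Solve F(R)**y * R**z = omega_c/(2*(omega_c - omega)) by bisection,
    assuming the left-hand side is increasing in R."""
    target = omega_c / (2.0 * (omega_c - omega))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) ** y * mid ** z < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: omega = 0.9, omega_c = 1, delta = 0.5, y = z = 1
closed = rc_power_law(0.9, 1.0, 0.5, 1.0, 1.0)
numeric = rc_bisect(lambda R: R ** 0.5, 1.0, 1.0, 0.9, 1.0)
print(closed, numeric)  # the two agree
```

The logarithmic and asymptotic forms of Eqs. (2.38) and (2.40) can be checked the same way, with the Lambert W evaluated numerically.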
2.11.6 Some Topological Remarks on Symmetry-Breaking

The fundamental dogma of algebraic topology (e.g., Hatcher, 2001) provides instruction regarding the forming of algebraic images of topological spaces. The most basic of these images is the fundamental group, leading to van Kampen's Theorem, which allows the construction of the fundamental group of spaces that can be decomposed into simpler spaces whose fundamental group is already known. As Hatcher (2001, p. 40) puts it,

By systematic use of this theorem one can compute the fundamental groups of a very large number of spaces. . . [F]or every group G there is a space XG whose fundamental group is isomorphic to G.
Golubitsky and Stewart (2006) argue that network structures and dynamics are imaged by fundamental groupoids, for which there also exists a version of the Seifert/Van Kampen theorem (Brown et al. 2011). Yeung's (2008) results suggest information theory-based "cognitive" generalizations that may include essential dynamics of cognition and its regulation. Extending this work to the relations between groupoid structures and rather subtle information theory properties may provide one route toward the development of useful statistical models.
References

Albert, R., & Barabasi, A. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 47–97.
Atlan, H., & Cohen, I. (1998). Immune information, self-organization, and meaning. International Immunology, 10, 711–717.
Brown, R. (1992). Out of line. Royal Institute Proceedings, 64, 207–243.
Brown, R., Higgins, P., & Sivera, R. (2011). Nonabelian algebraic topology: Filtered spaces, crossed complexes, cubical homotopy groupoids. EMS Tracts in Mathematics (Vol. 15).
Cayron, C. (2006). Groupoid of orientational variants. Acta Crystallographica Section A, A62, 21–40.
Champagnat, N., Ferriere, R., & Meleard, S. (2006). Unifying evolutionary dynamics: From individual stochastic process to macroscopic models. Theoretical Population Biology, 69, 297–321.
Cohen, I. (2000). Tending Adam's Garden: Evolving the cognitive immune self. Academic Press.
Cover, T., & Thomas, J. (2006). Elements of information theory (2nd ed.). Wiley.
de Groot, S., & Mazur, P. (1984). Nonequilibrium thermodynamics. Dover.
Dembo, A., & Zeitouni, O. (1998). Large deviations techniques and applications (2nd ed.). Springer.
English, T. (1996). Evaluation of evolutionary and genetic optimizers: No free lunch. In L. Fogel, P. Angeline, & T. Back (Eds.), Evolutionary programming V: Proceedings of the fifth annual conference on evolutionary programming (pp. 163–169). MIT Press.
Feynman, R. (2000). Lectures on computation. Westview Press.
Glazebrook, J. F., & Wallace, R. (2009). Rate distortion manifolds as model spaces for cognitive information. Informatica, 33, 309–346.
Golubitsky, M., & Stewart, I. (2006). Nonlinear dynamics and networks: The groupoid formalism. Bulletin of the American Mathematical Society, 43, 305–364.
Gould, S., & Lewontin, R. (1979). The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society B, 205, 581–598.
Gray, R. (1988). Probability, random processes, and ergodic properties. Springer.
Gray, C. (2018). Theory of strategy. Oxford University Press.
Hahn, P. (1978). The regular representations of measure groupoids. Transactions of the American Mathematical Society, 242, 35–53.
Hatcher, A. (2001). Algebraic topology. Cambridge University Press.
Horsthemke, W., & Lefever, R. (2006). Noise-induced transitions: Theory and applications in physics, chemistry, and biology (Vol. 15). Springer.
Jackson, D., Kempf, A., & Morales, A. (2017). A robust generalization of the Legendre transform for QFT. Journal of Physics A, 50, 225201.
Jin, H., Hu, Z., & Zhou, X. (2008). A convex stochastic optimization problem arising from portfolio selection. Mathematical Finance, 18, 171–183.
Khinchin, A. (1957). Mathematical foundations of information theory. Dover.
Laidler, K. (1987). Chemical kinetics (3rd ed.). Harper and Row.
Leon-Garcia, A., Davisson, L., & Neuhoff, D. (1979). New results on coding of stationary nonergodic sources. IEEE Transactions on Information Theory, IT-25, 137–144.
Mackey, G. W. (1963). Ergodic theory, group theory, and differential geometry. Proceedings of the National Academy of Sciences of the United States of America, 50, 1184–1191.
Maignan, A., & Scott, T. (2016). Fleshing out the generalized Lambert W function. ACM Communications in Computer Algebra, 50, 45–60.
Matsumoto, Y. (1997). An introduction to Morse theory. American Mathematical Society.
Mezo, I., & Keady, G. (2015). Some physical applications of generalized Lambert functions. arXiv:1505.01555v2 [math.CA].
Newman, M. (2010). Networks: An introduction. Oxford University Press.
Pettini, M. (2007). Geometry and topology in Hamiltonian dynamics and statistical mechanics. Springer.
Pielou, E. (1977). Mathematical ecology. Wiley.
Protter, P. (2005). Stochastic integration and differential equations: A new approach (2nd ed.). Springer.
Schreiber, U., & Skoda, Z. (2010). Categorified symmetries. arXiv:1004.2472v1.
Scott, T., Mann, R., & Martinez, R. E. (2006). General relativity and quantum mechanics: Towards a generalization of the Lambert W function. Applicable Algebra in Engineering, Communication and Computing, 17, 41–47.
Shirkov, D., & Kovalev, V. (2001). The Bogoliubov renormalization group and solution symmetry in mathematical physics. Physics Reports, 352, 219–249.
Spencer, J. (2010). The giant component: The golden anniversary. Notices of the American Mathematical Society, 57, 720–724.
Stewart, I. (2017). Spontaneous symmetry-breaking in a network model for quadruped locomotion. International Journal of Bifurcation and Chaos, 27, 1730049.
Tateishi, A., Hanel, R., & Thurner, S. (2013). The transformation groupoid structure of the q-Gaussian family. Physics Letters A, 377, 1804–1809.
Wallace, R. (2005). Consciousness: A mathematical treatment of the global neuronal workspace model. Springer.
Wallace, R. (2011). On the evolution of homochirality. Comptes Rendus Biologies, 334, 263–268.
Wallace, R. (2012). Consciousness, crosstalk, and the mereological fallacy: An evolutionary perspective. Physics of Life Reviews, 9, 426–453.
Wallace, R. (2015). An ecosystem approach to economic stabilization: Escaping the neoliberal wilderness. Routledge Advances in Heterodox Economics.
Wallace, R. (2017). Computational psychiatry: A systems biology approach to the epigenetics of mental disorders. Springer.
Wallace, R. (2018). New statistical models of nonergodic cognitive systems and their pathologies. Journal of Theoretical Biology, 436, 72–78.
Wallace, R. (2020). Cognitive dynamics on Clausewitz landscapes: The control and directed evolution of organized conflict. Springer.
Wallace, R. (2020a). How AI founders on adversarial landscapes of fog and friction. Journal of Defense Modeling and Simulation. https://doi.org/10.1177/1548512920962227
Wallace, R. (2021a). Toward a formal theory of embodied cognition. Biosystems. https://doi.org/10.1016/j.biosystems.2021.104356
Wallace, R. (2022). Consciousness, cognition, and crosstalk: The evolutionary exaptation of nonergodic groupoid symmetry-breaking. Springer.
Weinstein, A. (1996). Groupoids: Unifying internal and external symmetry. Notices of the American Mathematical Society, 43, 744–752.
Wilson, K. (1971). Renormalization group and critical phenomena. I. Renormalization group and the Kadanoff scaling picture. Physical Review B, 4, 3174–3183.
Wolpert, D., & MacReady, W. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1, 67–82.
Yeung, R. (2008). Information theory and network coding. Springer.
Zurek, W. (1985). Cosmological experiments in superfluid helium? Nature, 317, 505–508.
Zurek, W. (1996). The shards of broken symmetry. Nature, 382, 296–298.
Chapter 3
Fog and Friction as Resources Rodrick Wallace
Your adversary has been taught a thousand tactical lessons—by you. If he has been paying attention, he has been also taught a seminal lesson in strategy. When you leave, who do you think is best placed to seize power in the ecosystem you have so profoundly shaped? —Betz and Stanford-Tuck (2019)
3.1 Introduction

The "Cambrian explosion" of some half-billion years ago saw a singularly rapid emergence of diverse organism Bauplan and niche structure that continues to attract the attention of evolutionary biologists (Whittington 1985; Erwin and Valentine 2013). The Eldredge and Gould (1972) treatment of "punctuated equilibrium" (Gould 2002) provides something of a recurring model of such phenomena across broad Darwinian realms. The pattern is of relatively long periods of apparent stasis in the paleontological record, interrupted by brief "explosions" of extinction and speciation that, in fact, take tens of millions of years. At tactical and operational scales of organized conflict, and particularly at the strategic scale, similar patterns often prevail under conditions of Lamarckian evolutionary process, in which competing cognitive systems learn and transmit what they have learned by various means. Armies win or fail in single battles, multiple campaigns, and full-out wars. Indeed, it is possible to "lose" most of the tactical confrontations but still win a war. But during contention, armies learn, and incorporate that learning into their operating doctrines, which are, in effect, the genetic material they transmit to successor versions of themselves, or to succeeding organizations.
R. Wallace () The New York State Psychiatric Institute, Bronx, NY, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 R. Wallace (ed.), Essays on Strategy and Public Health, https://doi.org/10.1007/978-3-030-83578-1_3
Here, we will examine the role of classic Western Clausewitzian fog and friction in Lamarckian evolutionary process, both in terms of "ordinary" punctuated equilibrium transition, and in the outbreak of Cambrian explosions, as occurred in Iraq after 2003 and in Northern Mexico after poorly thought-out law enforcement efforts fragmented previously established large-scale criminal enterprises (Wallace and Fullilove 2014). The approach is based on, but extends into Lamarckian realms, the more conventional evolutionary studies of Wallace (2010, 2011, 2012a,b, 2013, 2014) and Wallace and Wallace (2008). The essential insight is to recognize uncertainty as a kind of temperature analog that, at larger values, and quite counterintuitively, permits higher-order symmetries. The structures underlying these symmetries are information sources, and the symmetries themselves follow from equivalence classes of high probability developmental pathways, characterized by groupoid rather than group algebras. Spontaneous symmetry-breaking across the groupoid algebras is driven by rising levels of uncertainty, particularly in the context of frictional delay and unstructured "noise." The symmetry-breaking then generates Lamarckian macroscopic punctuated equilibrium transitions and, in the extreme, Cambrian explosions. The greater the uncertainty, the more inherent possibilities: conflict becomes unconstrained by the intent of any single combatant. Indeed, East Asian military doctrine seeks to mitigate uncertainty by ensuring that clever "upstream" maneuver has, by the time conflict becomes overt, already pinioned an opponent so that nothing is possible except surrender (Jullien 2004). This cannot always be accomplished, particularly if one's cultural blinders and hubris ensure that one does not really understand the opposition (Gray 1999). We begin by characterizing institutional conflict in terms of a multiplicity of crosstalking "languages," in a large sense.
3.2 Interacting Information Sources • Doctrine can be viewed as an institutional genome, involving a set of possible behaviors having “grammar” and “syntax” in the sense that, for a sequence of actions or responses, only certain patterns have high, as opposed to vanishingly small, probabilities. This implies existence of a “genetic” doctrinal information source we will call X. The argument is direct (Cover and Thomas 2006; Khinchin 1957). Each contending agent is assumed to have such a doctrinal “genome.” • Each agent has apparatus for real-time “gene expression” of the underlying doctrine in the Atlan and Cohen (1998) sense that it takes in “sensory” signals from the embedding environment, compares those signals with a learned or inherited picture of the world, and on that comparison, chooses a smaller set of actions from a much larger set of those available to it that are within the domain allowed by doctrine. Such choice reduces uncertainty, and reduction of
uncertainty implies the existence of an information source "dual" to the cognitive "gene expression" process, say X̂. Again, the argument is quite direct.
• Embedding environments have temporal, spatial, and other regularities so that real-time sequences of environmental observations—temperature, rainfall, mud or snow depth, surface slope, suitability for tracked vehicles, etc.—also have "grammar" and "syntax," permitting identification of another information source, say V. Night follows day, temperatures vary in a roughly predictable manner, there is usually a rainy season and, in temperate zones, a freezing winter; mountains are not friendly to tanks, while open plains are; thunderstorms are not friendly to aircraft; and so on.
• Systems undergoing dynamics under these conditions are also subject to "large deviations" in the sense of Champagnat et al. (2006) and Dembo and Zeitouni (1998). These, sometimes sudden, excursions, via sequential state paths, are governed by path probabilities obeying entropy-like necessary conditions—Sanov's Theorem, Cramer's Theorem, the Gartner-Ellis Theorem—that, in themselves, imply existence of yet another information source we will call L_D. That is, only certain classes of "sudden surprises" have significant probabilities.

In sum, there will be a joint source uncertainty (Cover and Thomas 2006)

H(X, X̂, Y, Ŷ, V, L_D)    (3.1)
where we identify two contending agents as characterized by the X and Y variates. Feynman (2000)—and others—describe information, not as an entropy, but as a free energy. Indeed, it is surprisingly easy to imagine an (ideal) machine that turns the information within any message into useful work, i.e., free energy. This is a fundamental observation that permits the next stage in the argument, imposing regularities analogous to those of statistical mechanics, but on information processes for which palindromes—microreversibility—have vanishingly small probabilities. That is, there are very few statements that can be read both forwards and backwards. The failure of microreversibility breaks Noether’s Theorem, and limits symmetry arguments to equivalence classes of high probability developmental pathways, implying groupoid, rather than group, symmetries. Groupoids allow products and inverses between selected set pairs, but not between all possible pairs. The simplest example is a disjoint union of algebraic groups. The next simplest is a set of equivalence classes. Again, the argument is direct (Weinstein 1996; Cayron 2006; Wallace 2012a). We next construct a Morse Function, in the sense of Pettini (2007), combining the joint source uncertainty along system developmental paths from Eq. (3.1), with a scalar index characterizing the density of the fog-of-war that we will call σ .
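The "disjoint union of algebraic groups" example admits a very small computational illustration (my own, not from the text): elements carry a component label, and the partial product is defined only for pairs within the same component.

```python
# Toy groupoid as a disjoint union of the cyclic groups Z_3 and Z_4.
# Elements are (label, value) pairs; the partial product is defined
# only when both elements belong to the same component group, which
# is exactly the groupoid restriction on products described in the text.
ORDERS = {"a": 3, "b": 4}  # component "a" is Z_3, component "b" is Z_4

def compose(x, y):
    (lx, vx), (ly, vy) = x, y
    if lx != ly:
        return None  # product undefined across components
    return (lx, (vx + vy) % ORDERS[lx])

def inverse(x):
    l, v = x
    return (l, (-v) % ORDERS[l])

print(compose(("a", 2), ("a", 2)))           # ("a", 1): defined within Z_3
print(compose(("a", 1), ("b", 1)))           # None: undefined across components
print(compose(("b", 3), inverse(("b", 3))))  # ("b", 0): identity of Z_4
```

Within each component every element has an inverse, but there is no product between components; this is the simplest instance of the "products and inverses between selected set pairs" structure invoked above.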
3.3 The Basic Model

We begin with a Boltzmann-like pseudoprobability across the set of high probability developmental paths of the joint source uncertainty of Eq. (3.1), focusing on a particular path indexed as j,

P_j = exp[−H_j/g(σ)] / Σ_i exp[−H_i/g(σ)]    (3.2)
where g(σ) is a function of a scalarized uncertainty index σ that must be determined from system dynamics. Determination of the functional form of g is somewhat subtle. The first step is to construct a free energy Morse Function from the denominator of Eq. (3.2) in the standard manner, treating the denominator as a statistical mechanical partition function, defining the free energy analog F as

exp[−F/g(σ)] = Σ_i exp[−H_i/g(σ)]    (3.3)
where the sum is over the set of high probability developmental trajectories. Modulo the definition of g(σ)—assuming only that it is a positive, increasing function—F is subject to classic symmetry-breaking arguments (Pettini 2007), but these involve equivalence classes of high probability developmental pathways that are not microreversible, leading to the groupoid algebra. We have extended the classic symmetry-breaking arguments of physical theory to Lamarckian evolutionary process on Clausewitz landscapes dominated by fog-of-war, as indexed by a scalar measure σ. Further symmetry restrictions, to semigroups and semigroupoids, seem likely, but will not be pursued here. In a sense, we have already recovered Lamarckian punctuated equilibrium transitions, as driven by an overall fog-of-war/uncertainty measure. The higher the "temperature" σ, the more scenarios become operationally possible, and this will happen in a punctuated manner representing transitions from less to more complicated groupoid symmetries. In other words, thickening smokescreens can have their uses, or their burdens, depending on your perspective.
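The mechanism of Eqs. (3.2) and (3.3) can be sketched numerically. The path uncertainties H_i and the placeholder choice g(σ) = σ below are illustrative assumptions only; in the text, g must be determined from system dynamics.

```python
import numpy as np

# Sketch of Eqs. (3.2)-(3.3): Boltzmann-like pseudoprobabilities over an
# assumed ensemble of high-probability developmental paths, plus the
# free-energy analog F built from the partition-function denominator.
# The path uncertainties H and the form g(sigma) = sigma are illustrative
# assumptions, not values or functional forms taken from the text.
def path_probs(H, sigma, g=lambda s: s):
    H = np.asarray(H, dtype=float)
    weights = np.exp(-H / g(sigma))
    Z = weights.sum()              # partition function analog
    F = -g(sigma) * np.log(Z)      # free energy Morse Function analog
    return weights / Z, F

H = [1.0, 2.0, 4.0]                # assumed path uncertainties
P_low, _ = path_probs(H, sigma=0.5)
P_high, _ = path_probs(H, sigma=10.0)
print(P_low)    # sharply peaked on the lowest-uncertainty path
print(P_high)   # nearly uniform across paths
```

Raising σ flattens the distribution across paths, which is the sense in which a higher "temperature" makes more scenarios operationally possible.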
3.4 Determining g(σ)

We solve Eq. (3.3) for g(σ) using an approximation much like that of Onsager's nonequilibrium thermodynamics (de Groot and Mazur 1984). We rewrite Eq. (3.3) as

exp[−F/g(σ)] = h(g(σ)), so that F(σ) = −log[h(g(σ))] g(σ)    (3.4)
The next step is to define an entropy-analog as the Legendre transform of the free energy Morse Function F:

S ≡ −F(σ) + σ dF(σ)/dσ    (3.5)
The first-order Onsager approximation (de Groot and Mazur 1984) is that the rate of change of σ is, in first order, proportional to the gradient of the entropy-analog in σ:

dσ/dt ≈ μ dS/dσ = f(σ)    (3.6)
Expanding Eq. (3.5) using this expression leads to the implicit relation

−σ ∫ (f(σ)/σ) dσ − log[h(g(σ))] g(σ) − C1 σ + ∫ f(σ) dσ + C2 = 0    (3.7)
Equation (3.7) can sometimes be solved explicitly for g(σ), and often treated numerically, for example using the implicitplot function of the computer algebra program Maple 2020 and similar modules from other computer algebra programs. Restricting the argument to a nonequilibrium steady state sets f(σ) = 0. In the context of friction, where dσ/dt = f(σ(t)), it is possible to study the influence of "noise," which is not the same thing as uncertainty. This can be done by making σ itself a stochastic variate, via the stochastic differential equation (Protter 2005)

dσ_t = f(σ_t) dt + Σ σ_t dW_t    (3.8)
where dW_t represents Brownian noise and the second term is a volatility measure having amplitude Σ. Using the results of Eq. (3.7), a stochastic differential equation can then be written for dg(σ)_t using the Ito Chain Rule, based on Eq. (3.8) (Protter 2005), and the resulting nonequilibrium steady state (nss) average relation <dg(σ_t)> = 0 solved in the driving parameters. We will do this explicitly for a simple case below. The procedure can be done numerically for more complicated examples. First, however, Eq. (3.8) can be used to directly examine necessary conditions for stability in variance via an Ito Chain Rule expansion for σ². Taking, for example, the "exponential" case of dσ/dt = f(σ(t)) = β − ασ(t), so that σ(t) → β/α, a simple calculation finds that the variance is unstable unless Σ²/2 < α. A somewhat lengthy calculation gives three nonequilibrium steady state solution sets for σ, from which g(σ) can then be calculated. These are

σ = β/α

σ = (β/(Σ²/2 − α)) W(n, (Σ²/2 − α) N_{C1/β}/β), n = 0, −1    (3.13)
where W(n, Z) is the Lambert W-function of order n = 0, −1. The W-function solves the relation W(n, Z) exp[W(n, Z)] = Z. The zero-order W-function is real-valued for Z > −exp[−1], and the order −1 function is real only for −exp[−1] < Z < 0. For the same parameter values as in Fig. 3.1, the nonequilibrium steady state temperature variate relations of g(σ) are shown in Fig. 3.2 as a function of Σ, the stochastic noise, as opposed to σ, the composite index of uncertainty and delay. Noise itself counts, in this model.
Fig. 3.2 Absent reasonable intelligence, fog and friction, i.e., uncertainty and delay, become compounded with stochastic variability to create conditions for "Cambrian explosions." Parameter values are the same as in Fig. 3.1, under the stochastic limitation that Σ²/2 < α for stability. The lowest line represents the nonequilibrium steady state in the absence of "noise." The middle line expresses the Lambert W-function of order −1, expressed as a bifurcation after the critical value of Σ, and the uppermost is the W-function of order zero. Here, a "Cambrian explosion" can take place at values of Σ well below onset of variance instability.
The lowest line again represents g(β/α), the simple nss condition. As Σ rises, however, a critical value is reached, and a bifurcated phase transition takes place. The uppermost trace represents the zero-order W-function—instantiating an "explosion"—and the middle line is the −1 branch of the W-function. Here, the "Cambrian explosion" is driven by the synergism between fog and friction, i.e., uncertainty compounded by delay, and pure chance. The necessary condition for real values of the W-functions leads to a sufficient condition for onset of a Lamarckian Cambrian explosion in this model as

Σ²/2 > α − β/(exp[1] N_{C1/β})    (3.14)
This is short of the condition for second-order instability in σ itself for the exponential model. Apparently, then, there can be distinct stages to, and forms of, “Cambrian events.” More complicated models, in which the ensemble of high probability paths can be broken into more than one equivalence class, can be explored using numerical methods. The results are not significantly dissimilar.
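The exponential case of Eq. (3.8) can also be explored directly by simulation. The Euler-Maruyama sketch below is an illustration under assumed parameter values; it is not the calculation used in the text, but it exhibits the mean reversion to β/α and respects the stability restriction Σ²/2 < α.

```python
import numpy as np

# Euler-Maruyama simulation of Eq. (3.8) for the "exponential" case
# f(sigma) = beta - alpha*sigma, i.e., d sigma = (beta - alpha*sigma) dt
# + Sigma * sigma dW. The mean reverts to beta/alpha; second-moment
# stability requires Sigma**2/2 < alpha. All parameter values are
# illustrative assumptions, not values from the text.
def simulate(beta, alpha, Sigma, sigma0=1.0, dt=1e-3, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    s = sigma0
    path = np.empty(steps)
    for i in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        s += (beta - alpha * s) * dt + Sigma * s * dW
        path[i] = s
    return path

path = simulate(beta=1.0, alpha=1.0, Sigma=0.5)  # Sigma**2/2 = 0.125 < alpha
print(path[100_000:].mean())  # time-average near the nss value beta/alpha = 1
```

Pushing Σ toward the instability boundary Σ² = 2α produces increasingly heavy excursions of the path, the simulation-level counterpart of the noise-driven bifurcation discussed above.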
3.6 Discussion

Groupoids and nss solutions to stochastic differential equations aside, we have actually concealed the really hard parts of this analysis deep within the formalism. The determination of a scalar index σ is not straightforward. The most direct treatment might be a principal component analysis across different empirical measures of uncertainty—quantifying the "known unknowns" using standard estimates of enemy strength, difficulty and delay in transport, estimates of morale, and the like. Another approach might be to construct a detailed nonlinear model of one's own system and estimate cross-interactions from perturbations of essential variates within the model. These variates might include measures of internal crosstalk between units, measures of intelligence information, and of the supply of essential resources. One determines an n-dimensional cross-interaction matrix, say Z = ||z_{i,j}||, from the perturbation analysis, and then constructs n matrix invariants using the relation

p(γ) = det(Z − γI) = γ^n − r_1 γ^(n−1) + ... + (−1)^n r_n    (3.15)
where det is the determinant, I is the n × n identity matrix, and γ is a real-valued parameter. The first matrix invariant is the trace, and the last is ± the determinant. One then chooses σ = σ(r_1, ..., r_n) as some appropriate scalar function of the matrix invariants. In the real world, of course, the enemy gets a vote, and quantification of σ requires knowledge of enemy crosstalk and similar dynamics.
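The matrix-invariant construction of Eq. (3.15) is mechanical once Z has been estimated. A minimal sketch follows, with an arbitrary illustrative Z and the (assumed) simplest possible choice σ = r_1:

```python
import numpy as np

# Sketch of Eq. (3.15): extract the n matrix invariants r_1..r_n of a
# cross-interaction matrix Z from its characteristic polynomial, then
# form a scalar index sigma. The matrix Z and the choice sigma = r_1
# (the trace) are illustrative assumptions only.
def invariants(Z):
    c = np.poly(Z)  # coefficients of det(gamma*I - Z), leading coefficient 1
    n = Z.shape[0]
    # r_k = (-1)**k * c_k recovers r_1 = trace, ..., r_n = determinant
    return [((-1) ** k) * c[k] for k in range(1, n + 1)]

Z = np.array([[2.0, 1.0],
              [0.5, 3.0]])
r = invariants(Z)
print(r)          # r_1 = trace(Z) = 5.0, r_2 = det(Z) = 5.5
sigma = r[0]      # one (assumed) scalar function of the invariants
```

In practice the interesting step is the choice of the scalar function σ(r_1, ..., r_n), which the text deliberately leaves open.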
3 Fog and Friction as Resources
73
Higher dimensional expansion may, in fact, be required, following the pattern described in the Mathematical Appendix to Chap. 1, in particular Eq. (1.34). For the most part, this simply cannot be done, not even for one's own side. The dilemma has certain implications, which a number of observers have noted. Gray (1999) describes how the fog-of-war and frictions that harass and damage strategic performance do not comprise a static set of finite challenges that can be attrited by study, let alone by machines. Gray argues that every new device and mode of war carries the virus of its own technical, tactical, operational, strategic, or political negation, concluding that the map of fog and friction is a living, dynamic one that reorganizes itself to frustrate the intrepid explorer. Watts (2004) argues that general friction arises from structural aspects of combat interactions so deeply and irretrievably embedded in violent interactions between humans-in-the-loop systems that technological advances cannot eliminate friction, although they can certainly alter its manifestations. He concludes that the persistence of Clausewitzian friction in future war follows almost by inspection of its sources. Bousquet (2009) finds the nature of combat entailing an irreducible uncertainty and unpredictability in its pursuit and hence the permanent threat of chaos erupting among even the most ordered of arrangements, so that the imposition of chaos on the adversary is generally the requisite for victory in war. Gray, Watts, Bousquet—like Clausewitz and many others before and after them—have quite a good feel for this. A command apparatus might look at the percent of tactical episodes initiated by an opponent and make certain conclusions regarding Clausewitzian fog and friction. During the Tet offensive a small contingent of US forces—a few hundred men—was ordered to "clear Hue" of about 10,000 entrenched and dedicated Vietcong and NVA fighters.
A competent command staff will have a fairly good intuitive feel for the magnitude of “σ .” What we have done here is show how Lamarckian evolutionary process might respond to changes in that variate. On longer timescales, punctuated equilibrium driven by rising σ may be associated with other dynamics. In particular, imposing rising σ on an opponent may make coordination between different units bound by disjunctive “strong ties” in the sense of Granovetter (1973) impossible. Then the nondisjunctive “weak ties” holding the opponent’s polity together may fail in a punctuated manner, leaving one facing the Hydra—a “Cambrian explosion” of niche and bauplan—as was the experience of sectarian hyperviolence in Iraq after 2003. A similar dynamic became established in Northern Mexico after “drug war” law enforcement operations that shattered a relatively small number of large-scale criminal enterprises into a very large number of smaller ones that began competing hyperviolently for market share (Wallace and Fullilove 2014). New York City was subjected to an informal counterinsurgency program aimed at dispersing minority voting blocs in the aftermath of the successes of the Civil Rights Movement across the US South. That program, sometimes characterized as “planned shrinkage,” caused similar fragmentation of social structures and their
regulatory systems, ensuring that hyperviolence became a useful tool for both expressions of self and group worth, and for control and coercion of groups and individuals (Wallace et al. 1996; D. Wallace and R. Wallace 1998; Wallace and Fullilove 2014). All three examples have become the stuff of legend. Lamarckian evolutionary theory provides deep but severely limited qualitative insight into the dynamics of institutional conflict under conditions of fog and friction. A different approach focuses on cognition-and-control models having Morse Functions in which the temperature analog g is a function of a scalar measure of available resources in the context of a Brownian (or colored) volatility driven by the Σ parameter. Such cognition/control-based probability models appear more likely to be converted to statistical tools useful for empirical analysis and (limited) policy purposes, e.g., Wallace (2020), where the same quotes by Gray, Watts and Bousquet take on a slightly different aspect.
References

Atlan, H., & Cohen, I. (1998). Immune information, self-organization, and meaning. International Immunology, 10, 711–717.
Betz, D., & Stanford-Tuck, H. (2019). Teaching your enemy to win. Military Strategy Magazine, 6(3), 16–22.
Bousquet, A. (2009). The scientific way of warfare: Order and chaos on the battlefields of modernity. Columbia University Press.
Cayron, C. (2006). Groupoid of orientational variants. Acta Crystallographica Section A, A62, 21–40.
Champagnat, N., Ferriere, R., & Meleard, S. (2006). Unifying evolutionary dynamics: From individual stochastic process to macroscopic models. Theoretical Population Biology, 69, 297–321.
Cover, T., & Thomas, J. (2006). Elements of information theory (2nd ed.). Wiley.
de Groot, S., & Mazur, P. (1984). Non-equilibrium thermodynamics. Dover.
Dembo, A., & Zeitouni, O. (1998). Large deviations: Techniques and applications. Springer.
Eldredge, N., & Gould, S. (1972). Punctuated equilibria: An alternative to phyletic gradualism. In T. Schopf (Ed.), Models in paleobiology (pp. 82–115). Cooper and Co.
Erwin, D., & Valentine, J. (2013). The Cambrian explosion: The construction of animal biodiversity. Roberts and Company.
Feynman, R. (2000). Feynman lectures on computation. Westview Press.
Gould, S. (2002). The structure of evolutionary theory. Harvard University Press.
Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78, 1360–1380.
Gray, C. S. (1999). Why strategy is difficult. Joint Force Quarterly, Summer, 7–12.
Jullien, F. (2004). A treatise on efficacy: Between Western and Chinese thinking. University of Hawaii Press.
Khinchin, A. (1957). Mathematical foundations of information theory. Dover.
Pettini, M. (2007). Geometry and topology in Hamiltonian dynamics and statistical mechanics. Springer.
Protter, P. (2005). Stochastic integration and differential equations (2nd ed.). Springer.
Wallace, D., & Wallace, R. (1998). A plague on your houses. Verso.
Wallace, D., & Wallace, R. (2008). Punctuated equilibrium in statistical models of generalized coevolutionary resilience: How sudden ecosystem transitions can entrain both phenotype expression and Darwinian selection. In Transactions on computational systems biology IX, LNBI 5121 (pp. 23–85).
Wallace, R. (2010). Expanding the modern synthesis. Comptes Rendus Biologies, 334, 263–268.
Wallace, R. (2011). A formal approach to evolution as self-referential language. BioSystems, 106, 36–44.
Wallace, R. (2012a). Consciousness, crosstalk, and the mereological fallacy: An evolutionary perspective. Physics of Life Reviews, 9, 426–453.
Wallace, R. (2012b). Metabolic constraints on the evolution of genetic codes: Did multiple 'preaerobic' ecosystem transitions entrain richer dialects via serial endosymbiosis? In Transactions on computational systems biology XIV, LNBI 7625 (pp. 204–232).
Wallace, R. (2013). A new formal approach to evolutionary processes in socioeconomic systems. Journal of Evolutionary Economics, 23, 1–15.
Wallace, R. (2014). A new formal perspective on 'Cambrian explosions'. Comptes Rendus Biologies, 337, 1–5.
Wallace, R. (2020). Punctuated collapse of institutional cognition under contention, fog, and friction: Exploring the Western model of organized conflict. To appear.
Wallace, R., Fullilove, M., & Flisher, A. (1996). AIDS, violence, and behavioral coding: Information theory, risk behavior, and dynamic process on core-group sociogeographic networks. Social Science and Medicine, 43, 339–352.
Wallace, R., & Fullilove, R. (2014). State policy and the political economy of criminal enterprise: Mass incarceration and persistent organized hyperviolence in the USA. Structural Change and Economic Dynamics, 31, 17–31.
Watts, B. (2004). Clausewitzian friction and future war (Revised ed.). Institute for National Strategic Studies, National Defense University.
Weinstein, A. (1996). Groupoids: Unifying internal and external symmetry. Notices of the American Mathematical Society, 43, 744–752.
Whittington, H. (1985). The Burgess shale. Yale University Press.
Chapter 4
Strategic Culture Rodrick Wallace
Culture eats strategy for breakfast. . . —Peter Drucker
. . . [C]ulture is everything. . . —Louis V. Gerstner Jr.
4.1 Introduction

The cognitive and behavioral dynamics of individual humans, their small and large social groupings, and their encompassing formal institutions, are constrained by riverbanks of culture and path-dependent historical trajectory. Power relations between groups that determine both patterns of armed conflict and the health status of populations are sculpted by these same riverbanks. This phenomenon has been formally recognized in the military literature as "strategic culture." Colin S. Gray (2006) comments at length on these matters, finding that strategic culture has come of age, at last. As he puts it, after years in the wilderness, the defense community has adopted it officially as an important concept with significant implications. There are some difficulties, in his view, in finding a methodology to study it and in understanding just how it "works." In the spirit of Sun-tzu and Jomini, according to Gray, there is a danger that culture is in the process of being identified as the Philosopher's Stone for policy and strategy, the magical element that will transform ignorance into knowledge. Also, there is, Gray warns, some likelihood that culture is becoming fashionable, so that it must also become unfashionable, after a period of prime-time prominence. Gray argues, however, that culture is of the utmost importance, since it functions at, indeed as, the engine of thought and behavior. He describes how Clausewitz sees war as a contest between two wills, and the will of a belligerent is the product of
R. Wallace () The New York State Psychiatric Institute, Bronx, NY, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 R. Wallace (ed.), Essays on Strategy and Public Health, https://doi.org/10.1007/978-3-030-83578-1_4
moral factors that can be summarized as culture. Gray concludes that Sun-tzu was right in insisting on the importance of self-knowledge and of knowledge of one's enemies. Cultural comprehension meets that insistence.

Gray (2007) continues with the US case history, finding that serious trouble begins when Americans have to interact with alien societies in unfamiliar terrain. He finds that certainty of material superiority can breed overconfidence and minimize incentives to outthink the enemy, so that a practical consequence of the messianic, crusading dimension of American strategic behavior is a self-righteousness that is not friendly, or even receptive, to unwelcome cultural and political facts. In sum, the country's culture simply did not register the unwanted Vietnam experience. Cultures, including strategic cultures, Gray argues, are capable of ignoring what they wish to ignore.

Clausewitz, however, is not the only source of wisdom on these matters, and thereupon will hang much of our tale. Mao Tse-Tung (1963, pp. 227–228) writes on similar matters in a different manner. He argues that, from the particular characteristics of war, there arises a particular set of organizations, a particular series of methods, and a process of a particular kind. The organizations, he continues, are the armed forces and everything that goes with them, and the methods are the strategy and tactics for directing war. The process, he continues, is the particular form of social activity in which the opposing armed forces attack each other or defend themselves against one another, employing strategy and tactics favorable to themselves and unfavorable to the enemy. He concludes that, since experience is particular, all who take part in war must rid themselves of their customary ways and accustom themselves to war before they can win victory.
The Western evolutionary anthropologist Robert Boyd provides a contrasting perspective: "Culture is as much a part of human biology as the enamel on our teeth." Our "customary ways" are deeply burned into our individual and collective patterns of behavior.

Mao, in fact, describes something recognizably similar (1963, p. 195), exploring failure in war. He finds that the source of all erroneous views on war lies in idealist and mechanistic tendencies on the question of war, based on a subjective and one-sided approach to problems. He argues that they either indulge in groundless and purely subjective talk, or, basing themselves upon a single aspect or a temporary manifestation, magnify it with similar subjectivity into the whole of the problem. But there are, he continues, two categories of erroneous views: one comprising fundamental and therefore consistent errors that are hard to correct, and the other comprising accidental and therefore temporary errors that are easy to correct. Since both are wrong, he continues, both need to be corrected. Therefore, in his view, only by opposing idealist and mechanistic tendencies and taking an objective and all-sided view in making a study of war can we draw correct conclusions on the question of war.

These remarks are something of an implicit criticism of Western doctrine, of the "analytic" perspective that often bases itself on single aspects or temporary manifestations, magnifying them into the whole of the problem. This is the famous
Mereological Fallacy, imputing the whole from only a part, and characterizes a fundamental difference between "Western" and "East Asian" modes of perception.

At the tactical level, Mao (1963, pp. 79–80) asks why it is necessary for the commander of a campaign or a tactical operation to understand the laws of strategy to some degree. He argues this is because an understanding of the whole facilitates the handling of the part, and because the part is subordinate to the whole. The view that strategic victory is determined by tactical successes alone, he states, is wrong because it overlooks the fact that victory or defeat in a war is first and foremost a question of whether the situation as a whole and its various stages are properly taken into account. He concludes that, if there are serious defects or mistakes in taking the situation as a whole and its various stages into account, the war is sure to be lost.

The Chinese military strategist Qiao Jie (2012) describes the incorporation of "information" perspectives into contemporary PLA doctrine, arguing that today the technological state of human society is undergoing a transition from an industrialized to an informationized age, entraining the practice of war from the mechanized to the informationized. He argues that, with the transformation from an industrial society to an information society, informationized war is beginning to replace mechanized war, a long-term change that will lead to qualitative changes in war centering on the structure of the campaign.

Jie continues, stating that the so-called informationized war is, as human society entered the Information Age, a new type of war carried out on an informationized battlefield, with information as the leading factor, with informationized armed forces as the main strengths, with informationized weapons and equipment as the main operational tools, and with informationized operations as the main operational form.
He argues that information not only has become the leading factor in social development, but also has become the leading factor in gaining victory in war. He further contends that the battlefields for informationized operations not only include the tangible battlefields composed of the land, sea, air, space, and network EM physical fields, but also include the intangible battlefields composed of the information field and cognitive field. Informationized operations not only have covered the military field, but also involve the political, economic, diplomatic, legal, and cultural fields. This, he finds, requires conducting operations in the land, sea, air, space, network EM, and cognitive multidimensional space; conducting integrated-whole operations in all dimensions of the battlefield; and thus bringing into play the maximum operational functions of entire systems.

Jullien (2004) describes Chinese views on strategy as follows:

For something to be realized in an effective fashion [in the Chinese view], it must come about as an effect. It is always through a process (which transforms the situation), not through a goal that leads (directly) to action, that one achieves an effect, a result... Any strategy thus seems, in the end, to come down to simply knowing how to implicate an effect, knowing how to tackle a situation upstream in such a way that the effect flows "naturally" from it...
The fundamental difference between East Asian and Western perception and reasoning has been the subject of considerable study. For example, Nisbett et al. (2001), following in a long line of research (e.g., Markus and Kitayama 1991; Heine
2001), review many empirical studies regarding a basic cognitive difference between individuals raised in East Asian and Western cultural heritages, which they describe as "holistic" and "analytic," finding that

• Social organization directs attention to some aspects of the perceptual field at the expense of others.
• What is attended to influences metaphysics.
• Metaphysics guides tacit epistemology, that is, beliefs about the nature of the world and causality.
• Epistemology dictates the development and application of some cognitive processes at the expense of others.
• Social organization can directly affect the plausibility of metaphysical assumptions, such as whether causality should be regarded as residing in the field vs. in the object.
• Social organization and social practice can directly influence the development and use of cognitive processes such as dialectical vs. logical ones.

They conclude that tools of thought embody a culture's intellectual history, that tools have theories built into them, and that users accept these theories, albeit unknowingly, when they use these tools.

Masuda and Nisbett (2006), in a similar way, find that research on perception and cognition suggests that whereas East Asians view the world holistically, attending to the entire field and relations among objects, Westerners view the world analytically, focusing on the attributes of salient objects. Compared to Americans, East Asians were more sensitive to contextual changes than to focal object changes. These results suggest that there can be cultural variation in what may seem to be basic perceptual processes. Nisbett and Miyamoto (2005) argue that fundamental perceptual processes are influenced by culture. These findings establish a dynamic relationship between the cultural context and perceptual processes. They suggest that perception can no longer be regarded as consisting of processes that are universal across all people at all times.
Wallace (2007) explores analogous dynamics involving inattentional blindness and culture. We argue that such basic biological differences must likewise be expressed at the institutional level under conditions of conflict between power groups, although perhaps via different modalities at and across different scales and levels of organization. In fact, two such distinct aspects of perception can be derived relatively easily from the asymptotic limit theorems of information theory, as explored in the next section. But this is not really the point: we already know this. The point is that there may well be a third—intermediate—stream possible in the Lamarckian evolution of strategic conflict between power groups. Such a third stream might serve to blindside both Western “analytic” and Eastern “holistic” modalities of conflict doctrine.
4.2 Analytic and Holistic Streams of Strategy

Extended conflict between power groups (the back-and-forth exchange of "meaningful statements" constructed from an "alphabet" of tactics available to each side) can be viewed as the transmission of a particular kind of "message" by one party to another, using a spectrum of means that may include, but is not limited to, armed combat, economic embargo, moral challenge, and the like. The message is often "encoded" physically in ordinary time and space, but also, as appropriate, in "cognitive space" and, more recently, "cyberspace" as well. Conflict between power groups is, then, "communicated" to an opponent through a literally compelling series of multimedia events.

In getting the central "message" through to an opponent, it is important that the structure of that message remain fixed and be "received" in the form transmitted. This is, in a sense, an inverse of the typical information theory problem, where channels are viewed as fixed and messages are tuned around the channel properties to achieve the maximum possible transmission rate (Cover and Thomas 2006). Here, we focus on making sure the message is transmitted intact. This can, somewhat surprisingly, at least in theory, actually be done. An East Asian metaphor might be "Cultivate the channel of the stream so water flows how and where you want."

Some development is required, as adapted from Wallace (2017). Messages from an information source, seen as symbols x_j from some alphabet, each having probability P_j associated with a random variable X, are "encoded" into the language of a "transmission channel," a random variable Y with symbols y_k having probabilities P_k, possibly with error. Someone receiving the symbol y_k then retranslates it (without error) into some x_k, which may or may not be the same as the x_j that was sent.

More formally, the message sent along the channel is characterized by a random variable X having the distribution

P(X = x_j) = P_j, j = 1, ..., M.

The channel through which the message is sent is characterized by a second random variable Y having the distribution

P(Y = y_k) = P_k, k = 1, ..., L.

Let the joint probability distribution of X and Y be defined as

P(X = x_j, Y = y_k) = P(x_j, y_k) = P_{j,k},

and the conditional probability of Y given X as

P(Y = y_k | X = x_j) = P(y_k | x_j).
Then the Shannon uncertainty of X and Y independently, and the joint uncertainty of X and Y together, are defined respectively as

H(X) = -\sum_{j=1}^{M} P_j \log(P_j),

H(Y) = -\sum_{k=1}^{L} P_k \log(P_k),

H(X, Y) = -\sum_{j=1}^{M} \sum_{k=1}^{L} P_{j,k} \log(P_{j,k}).    (4.1)

The conditional uncertainty of Y given X is defined as

H(Y|X) = -\sum_{j=1}^{M} \sum_{k=1}^{L} P_{j,k} \log[P(y_k|x_j)].    (4.2)
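These definitions can be checked numerically. The following sketch (Python, with an arbitrary illustrative joint distribution; none of these numbers come from the text) computes the marginal, joint, and conditional uncertainties:

```python
import numpy as np

# Hypothetical joint distribution P(X = x_j, Y = y_k); rows index x_j, columns y_k.
P_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])

def H(p):
    """Shannon uncertainty -sum p log2(p), ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P_x = P_xy.sum(axis=1)            # marginal distribution of X
P_y = P_xy.sum(axis=0)            # marginal distribution of Y
H_X, H_Y = H(P_x), H(P_y)
H_XY = H(P_xy.flatten())          # joint uncertainty H(X, Y), Eq. (4.1)
H_Y_given_X = H_XY - H_X          # chain-rule form of the conditional uncertainty (4.2)

# Knowledge of X cannot increase the uncertainty of Y.
assert H_Y >= H_Y_given_X
```

The final assertion is exactly the inequality H(Y) ≥ H(Y|X) discussed next, with equality only under stochastic independence.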
For any two stochastic variates X and Y, H(Y) ≥ H(Y|X), as knowledge of X generally gives some knowledge of Y. Equality occurs only in the case of stochastic independence. Since P(x_j, y_k) = P(x_j)P(y_k|x_j), then H(X|Y) = H(X, Y) - H(Y). The information transmitted by translating the variable X into the channel transmission variable Y (possibly with error), and then retranslating the transmitted Y back into X without error, is defined as

I(X|Y) \equiv H(X) - H(X|Y) = H(X) + H(Y) - H(X, Y).    (4.3)

See Cover and Thomas (2006) for details. If there is no uncertainty in X given the channel Y, then there is no loss of information through transmission. In general this will not be true, and herein lies the central matter. Given a fixed "tactical alphabet," in a large sense, for the transmitted variable X, and a similarly fixed alphabet and probability distribution for the channel Y, we may vary the probability distribution of X in such a way as to maximize the information sent. The capacity of the channel is defined as

C \equiv \max_{P(X)} I(X|Y),    (4.4)

subject to the subsidiary condition that \sum P(X) = 1.

The critical trick of the Shannon Coding Theorem for sending a message with arbitrarily small error along the channel Y at any rate R < C is to encode it in longer and longer "typical" sequences of the variable X, that is, those sequences whose distribution of symbols approximates the probability distribution P(X) above which maximizes C. If S(n) is the number of such "typical" sequences of length n, then

\log[S(n)] \approx nH(X),    (4.5)

where H(X) is the uncertainty of the stochastic variable defined above. Some consideration shows that S(n) is much less than the total number of possible messages of length n. Thus, as n → ∞, only a vanishingly small fraction of all possible messages is meaningful in this sense. This observation, after some considerable development, is what allows the Coding Theorem to work so well. The prescription is to encode messages in typical sequences, which are sent at very nearly the capacity of the channel. As the encoded messages become longer and longer, their maximum possible rate of transmission without error approaches channel capacity as a limit. Again, standard references on information theory provide details.

This approach can be conceptually inverted to give a "tuning theorem" variant of the coding theorem. Telephone lines, optical wave guides, and the thin plasma through which a planetary probe transmits data to earth may all be viewed in traditional information-theoretic terms as a noisy channel around which we must structure a message so as to attain an optimal error-free transmission rate. Telephone lines, wave guides, and interplanetary plasmas are, relatively speaking, fixed on the timescale of most messages, as are most sociogeographic networks. Indeed, the capacity of a channel is defined by varying the probability distribution of the "message" process X so as to maximize I(X|Y).

Suppose there is some message X so critical that its probability distribution must remain fixed. The trick is to fix the distribution P(X) but modify the channel (i.e., tune it) so as to maximize I(X|Y). The dual channel capacity C* can be defined as

C^* \equiv \max_{P(Y),\, P(Y|X)} I(X|Y).    (4.6)

But

C^* = \max_{P(Y),\, P(Y|X)} I(Y|X),    (4.7)

since I(X|Y) = H(X) + H(Y) - H(X, Y) = I(Y|X).

In a formal mathematical sense, then, the message transmits the channel, and there will indeed be, according to the Shannon Coding Theorem, a channel distribution P(Y) that maximizes the dual channel capacity C*. One may do somewhat better than this by modifying the channel matrix P(Y|X).
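The standard, coding-theorem direction of Eq. (4.4) can be illustrated numerically. The sketch below (Python; the binary symmetric channel and the grid search are illustrative choices, not taken from the text) fixes a noisy channel and searches over message distributions P(X), recovering the textbook capacity C = 1 - H(e):

```python
import numpy as np

def mutual_information(p_x, channel):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for message P(X) and channel matrix P(Y|X)."""
    P_xy = p_x[:, None] * channel          # joint distribution P(x, y)
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return H(p_x) + H(P_xy.sum(axis=0)) - H(P_xy.flatten())

# Binary symmetric channel with crossover probability e = 0.1.
e = 0.1
bsc = np.array([[1 - e, e],
                [e, 1 - e]])

# Coding-theorem direction: the channel is fixed; vary P(X) to maximize I(X;Y).
grid = np.linspace(0.001, 0.999, 999)
C = max(mutual_information(np.array([p, 1 - p]), bsc) for p in grid)

# For the BSC, the maximizing P(X) is uniform and C = 1 - H(e).
H_e = -(e * np.log2(e) + (1 - e) * np.log2(1 - e))
assert abs(C - (1 - H_e)) < 1e-4
```

The closed-form answer C = 1 - H(e) is standard (Cover and Thomas 2006); the brute-force search is used here only to make the "vary P(X), channel fixed" direction concrete.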
Since

P(y_j) = \sum_{i=1}^{M} P(x_i) P(y_j|x_i),    (4.8)

P(Y) is fully defined by the channel matrix P(Y|X) for fixed P(X), and

C^* = \max_{P(Y),\, P(Y|X)} I(Y|X) = \max_{P(Y|X)} I(Y|X).    (4.9)
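The inverted, "tuning" direction can be sketched the same way: hold the message distribution P(X) fixed and search over channel matrices P(Y|X). A brute-force scan over 2×2 channel matrices (illustrative Python, not from the text) finds the maximum at the identity matrix, where the transmitted information equals H(X):

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def I_fixed_message(p_x, channel):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) with the message distribution held fixed."""
    P_xy = p_x[:, None] * channel
    return H(p_x) + H(P_xy.sum(axis=0)) - H(P_xy.flatten())

# Fixed, "critical" message distribution P(X) that must not be altered.
p_x = np.array([0.7, 0.3])

# Grid search over 2x2 channel matrices P(Y|X); rows sum to 1.
grid = np.linspace(0.0, 1.0, 101)
C_star = max(I_fixed_message(p_x, np.array([[a, 1 - a], [b, 1 - b]]))
             for a in grid for b in grid)

# The maximum is attained at the identity channel (a = 1, b = 0), giving H(X).
assert abs(C_star - H(p_x)) < 1e-9
```

This is a numerical version of the inspection argument for the M = L case: since I(X;Y) can never exceed H(X), tuning the channel to the identity saturates the bound.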
Calculating C^* requires maximizing I(X|Y) = H(X) + H(Y) - H(X, Y). This expression contains products of terms and their logarithms, subject to the constraints that the sums of probabilities equal 1 and that each probability lies between 0 and 1. Maximization is done by varying the channel matrix terms P(y_j|x_i) within the constraints: a difficult problem in nonlinear optimization. However, for the special case M = L, C^* can be found by inspection. If M = L, choose

P(y_j|x_i) = \delta_{j,i},

where \delta_{j,i} is 1 if i = j and 0 otherwise. For this special case,

C^* \equiv H(X),    (4.10)

with P(y_k) = P(x_k) for all k. Information is thus transmitted without error when the channel is constructed to become "typical" with respect to the fixed message distribution P(X). If M < L, matters reduce to this case, but for L < M, information must be lost, leading to Rate Distortion limitations.

Thus modifying the channel becomes the canonical means of ensuring transmission of an important message in the manner desired, rather than encoding that message in a "natural" language that maximizes the rate of transmission of information on a fixed channel. We have examined the two limits in which either the distribution P(Y) or the distribution P(X) is kept fixed. The first provides the usual Shannon Coding Theorem, and the second a tuning theorem variant, i.e., a tunable, retina-like Rate Distortion Manifold in the sense of Glazebrook and Wallace (2009). This result is essentially similar to Shannon's (1959) observation that evaluating the rate distortion function corresponds to finding a channel that is just right for the source and allowed distortion level.

But again, this is not the point. We already, at least qualitatively, understand much of this. What, then, is the point?
4.3 Third Streams of Strategy

Recall Jullien's remarks above. One can, apparently, give the East Asian metaphor something of a heuristic foundation in information theory: tune the channel. The Western metaphor emerges as "adjust the statements of the tactical alphabet to the constraints of the strategic landscape," and Westerners, from Scipio Africanus to Napoleon, Erwin Rommel, and John Boyd, have often been master tacticians and, far less often, successful strategists as well, from this perspective. Strategy, however, may not necessarily be either Western or East Asian, and matters are greatly confounded by Clausewitzian fog and friction. From an East Asian perspective, in both cognitive and cyber spaces, as described, an increasingly important tool for redirecting the stream/channel of conflict is seen to be information (Jie 2012).

A report by the Center for Strategic and Budgetary Assessments (Anonymous 2013, redacted, p. 41) put the overall problem in terms of the Clausewitzian friction that almost inevitably intrudes at the tactical level of war. But, the author finds, its intrusions tend to be even more consequential at the operational and strategic levels, because operational and strategic problems are usually "wicked" ones, meaning that they are ill-structured, open-ended, and not amenable to closed, engineering solutions. As an example, he describes the frictions the Allies and the Germans encountered during the Normandy invasion in June 1944. The problem with friction, he continues, is that no conceivable advances in weaponry or technology are capable of eliminating it, despite recurring hopes to the contrary, because general friction arises from (1) human physical and cognitive limitations, (2) the inherent uncertainties in the information on which actions in war are based, and (3) the structural nonlinearity of combat processes and interactions.
The anonymous author finds that the deepest uncertainty affecting US-PRC military competition in the information aspects of war is whether the PLA will be able to do a better job of coping with the frictions of high-tech local wars under informationized conditions than US forces, so that it may be that the PLA's prescriptive, top-down planning approach and quest for "trump-card" stratagems will prove to be impediments to the PLA's capacity to deal with friction. But, he finds, there is little evidence that the American military is inclined to embrace as holistic and comprehensive an approach to the growing role of information in modern warfare as the Chinese, so that, insofar as information's future role in war is concerned, it is difficult to avoid the conclusion that the Chinese and American militaries are operating on very different frequencies.

More recently, Allen (2019) has explored the Chinese transition from "informationized" to "intelligentized" military enterprise as follows:

...[M]ost of China's leadership sees increased military usage of AI as inevitable and is aggressively pursuing it... Beyond using AI for autonomous military robotics, China is also interested in AI capabilities for military command decision-making... Zeng Yi [a senior executive in a large Chinese defense company] has said that today "mechanized equipment is just like the hand of the human body. In future, AI systems will be just like the brain of the human body."
86
R. Wallace
Allen goes on, stating that, several months after AlphaGo's momentous March 2016 victory over Lee Sedol, a publication by China's Central Military Commission Joint Operations Command Center argued that AlphaGo's victory "demonstrated the enormous potential of artificial intelligence in combat command, program deduction, and decision-making."

The Chinese, we argue above, have some considerable foundation for their basic approach: the "tuning theorem" formulation. Western military practice, on the other hand, has much foundation in the "coding theorem." However, and of considerable importance, in marked contrast to the assertions of Jie (2012), and to the successes of AlphaGo, a metaholistic perspective regarding conflict on Clausewitz landscapes does not find "information" or machine intelligence to be the universal solvents of current Chinese military thinking. "Information" is not a local stream of the "Flowing Water" of historical Chinese strategy (Jullien 2004). Machine intelligence is not immune to the perturbations inherent to Clausewitz landscapes of uncertainty, imprecision, and overreach (Wallace 2018, 2020b). The perspective is a kind of inverse of the typically Western Mereological Fallacy, i.e., assuming that some few singular features constitute the whole.

The relations between information and control under conditions of uncertainty and the frictional distortion of intent are not at all straightforward. Analysis strongly suggests (Wallace 2020a, Ch. 1) that mastery of information and rapidity of cognition do not imply mastery of control in conflict, just as tactical competence (the ability to expeditiously carry through the individual "alphabet" elements of conflict) does not imply competence on operational or strategic scales and levels of organization, where "meaningful statements" made from the tactical alphabet must hang together and make sense in terms of extended dialog with a skilled and resourceful adversary.
US and PRC military enterprises have, it appears, devoted considerable effort to creating "Blue" vs. "Red" team exercises, attempting to understand and counter each other's strong points, which are culturally built into high-level command dynamics. But, again in the words of Colin Gray, "So what!" Who could tell MacArthur not to march toward the Yalu after Inchon? Who could ever tell Westmoreland anything about Vietnam? Who can tell the US leadership at this writing anything at all? Indeed, the relentless sequence of German strategic blunders on the Eastern Front comes to mind, in spite of the overwhelming tactical superiority of the Wehrmacht over the Red Army. The PRC and the PLA, like Japan before them, must operate under their own considerable cultural constraints and systematic patterns of incompetence, as exemplified, perhaps, by their 1968 "Cultural Revolution," 1979 Vietnam, 1989 Tiananmen Square, and current Hong Kong debacles.

We assert that there should be an "intermediate theorem" involving simultaneous tuning of both the "message" and the "channel" for maximal strategic impact under specific time constraints. The relative effectiveness of this intermediate approach would depend on setting those time constraints, but might often be effective beyond what would be possible from either extreme. This would require the singularly
4 Strategic Culture
87
difficult task of learning to act outside both one's own cultural constraints and those defined by an opponent. The benefits, however, might be similarly singular. And that is the central point of this analysis.

What we suggest, then, goes somewhat beyond current "deception" doctrine, which attempts to use an opponent's inherent thought processes as a strategic tool. For example, US Army (2019) describes "ambiguity-decreasing deception" in these terms:

Ambiguity-decreasing deceptions manipulate and exploit an enemy decision maker's preexisting beliefs and bias through the intentional display of observables that reinforce and convince that decision maker that such pre-held beliefs are true. Ambiguity-decreasing deceptions cause the enemy decision maker to be especially certain and very wrong. Ambiguity-decreasing deceptions aim to direct the enemy to be at the wrong place, at the wrong time, with the wrong equipment, and with fewer capabilities. Ambiguity-decreasing deceptions are more challenging to plan because they require comprehensive information on the enemy's processes and intelligence systems. Planners often have success using these deceptions with strong-minded decision makers who are willing to accept a higher level of risk.

...[I]t is generally easier to induce the deception target to maintain a preexisting belief than to deceive the deception target for the purpose of changing that belief... exploit[ing] target biases and the human tendency to confirm existing beliefs... Any bias is potentially exploitable. Most targets are unaware of how deeply their biases influence their perceptions and decisions.
Such operations are not without risk (US Army 2019): Deceptions may produce unintended, often unwanted consequences. Believing that a threat is real, an enemy can act unpredictably. Proper planning and coordination and knowing the enemy can reduce the chance that deceptions will result in unfavorable action. Successful planners consider second- and third-order effects of the deception plan to mitigate unintended consequences. . .
Barton Whaley (2016, p. 190) remarks on cultural matters, arguing that, whenever the target of your deception belongs to a different culture than yours, problems of cross-cultural communication of the intended deception will arise. These communication problems, he continues, can range from the trivial to the decisive, that is, from minor glitches to complete failure. But wise deceivers, he concludes, will appreciate this problem, try to discover how it is apt to work in specific situations, and plan accordingly, and, similarly, the opposing deception analysts will benefit from doing the same. It may be possible, via “third stream thinking,” to act well beyond the culturally and historically driven expectations of an adversary that will and must be based, largely, on that agent’s inherently constrained understanding of its own culture and history. Gray (2006, p. 6), however, raises a central caution: Strategic cultural understanding is difficult to achieve and even more difficult to operationalize. The fact that it is an important concept, robust in its essentials against challenge, is irrelevant. The practical implications of the promotion of culture to intellectual and doctrinal leading edge status may well, indeed probably will, prove to be unduly demanding.
88
R. Wallace
These are deep waters, and even if the analytic tools developed in these essays evolve into statistical tools for data analysis and policy purposes, proper use will remain a highly skilled enterprise indeed.
References

Allen, G. C. (2019). Understanding China's AI strategy: Clues to Chinese strategic thinking on artificial intelligence and national security. Center for New American Security.
Anonymous. (2013). Countering enemy 'informationized operations' in war and peace. Center for Strategic and Budgetary Assessments, HQ0034-09-D-3007-0014 (Redacted litigation release). https://www.esd.whs.mil/Portals/54/Documents/FOID/Reading/%20Room/Other/Litigation%20Release%20-%20Countering%20Enemy%20
Cover, T., & Thomas, J. (2006). Elements of information theory (2nd ed.). Wiley.
Glazebrook, J., & Wallace, R. (2009). Rate distortion manifolds as model spaces for cognitive information. Informatica, 33, 309–346.
Gray, C. S. (2006). Out of the wilderness: Prime time for strategic culture. Prepared for the Defense Threat Reduction Agency Advanced Systems and Concepts Office, Contract Nos. DTRA01-03D-0017, SP0600-04-C-5982.
Gray, C. S. (2007). British and American strategic cultures. In Jamestown Symposium 'Democracies in Partnership: 400 Years of Transatlantic Engagement'. Available online.
Heine, S. (2001). Self as cultural product: An examination of East Asian and North American selves. Journal of Personality, 69, 881–906.
Jie, Q. (2012). Lecture 10 from Lectures on the science of campaigns. Military Science Press.
Jullien, F. (2004). A treatise on efficacy: Between Western and Chinese thinking. University of Hawaii Press.
Markus, H., & Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review, 98, 224–253.
Masuda, T., & Nisbett, R. (2006). Culture and change blindness. Cognitive Science: A Multidisciplinary Journal, 30, 381–399.
Nisbett, R., & Miyamoto, Y. (2005). The influence of culture: Holistic vs. analytic perception. Trends in Cognitive Science, 9, 467–473.
Nisbett, R., Peng, K., Incheol, C., & Norenzayan, A. (2001). Culture and systems of thought: Holistic vs. analytic cognition.
Psychological Review, 108, 291–310.
Shannon, C. (1959). Coding theorems for a discrete source with a fidelity criterion. Institute of Radio Engineers International Convention Record, 4, 142–163.
Tse-Tung, M. (1963). Selected military writings of Mao Tse-Tung. Foreign Languages Press.
US Army. (2019, February). Army support to military deception. FM 3-13.4. https://atiam.train.army.mil/catalog/dashboard
Wallace, R. (2007). Culture and inattentional blindness: A global workspace perspective. Journal of Theoretical Biology, 245, 378–390.
Wallace, R. (2017). Computational psychiatry: A systems biology approach to the epigenetics of mental disorders. Springer.
Wallace, R. (2018). Carl von Clausewitz, the fog-of-war and the AI revolution: The real world is not a game of Go. Springer.
Wallace, R. (2020a). Cognitive dynamics on Clausewitz landscapes: The control and directed evolution of armed conflict. Springer.
Wallace, R. (2020b). How AI founders on adversarial landscapes of fog and friction. JDMS. https://doi.org/10.1177/1548512920962227
Whaley, B. (2016). Practise to deceive: Learning curves of military deception planners. Naval Institute Press.
Chapter 5
Agribusiness vs. Public Health: Disease Control in Resource-Asymmetric Conflict Rodrick Wallace, Alex Liebman, Luke Bergmann, and Robert G. Wallace
5.1 Introduction

A disease outbreak represents more than a convergence of susceptibles, the infected, and those who have recovered from infection. A recent series of models of Ebola and vector-borne diseases, for instance, focused on the role systemic environmental stochasticity plays in driving outbreaks to extirpation or amplifying their propagation (R. G. Wallace et al. 2016, R. Wallace et al. 2018). The landscapes through which pathogens circulate have definitional impact on the outcomes of outbreaks whatever the evolutionary state of the specific disease agent. Epidemiological causality is found as much in the etiological field as in the object of the pathogen or patient. A related focus of this line of research applied the Data Rate Theorem linking control and information theories to characterizing how public health efforts control such outbreaks. The warp and woof of pathogen and patient populations are as much woven together by broader political contestation as by population dynamics. The models hypothesized that the effects of anthropogenic clashes on disease outcomes are empirically discoverable. The approach produced a broad class of statistical models that can be fitted to data across pathogen species.
R. Wallace () The New York State Psychiatric Institute, Bronx, NY, USA e-mail: [email protected] A. Liebman Department of Geography, Rutgers University, New Brunswick, NJ, USA L. Bergmann Department of Geography, University of British Columbia, Vancouver, BC, Canada R. G. Wallace Institute for Global Studies, University of Minnesota, Minneapolis, MN, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 R. Wallace (ed.), Essays on Strategy and Public Health, https://doi.org/10.1007/978-3-030-83578-1_5
Here we rephrase that work in terms of different "temperature" measures applicable to disease ecosystems, measures that are, in fact, close analogs to sterilizing immunity in confined systems. We include index measures associated with asymmetric conflict between contending agents with massively different access to conventional material resources. We will outline how a "weak" institutional cognitive entity—here, governmental and civil society public health interests—can overcome a conventionally stronger opponent—entrenched agribusiness practice—to limit morbidity, mortality, and economic burden. As has been explored at considerable depth elsewhere, intensive agribusiness as currently practiced is unsustainable on even its own terms (e.g., Jones et al. 2013; Leonard 2014; R. Wallace and R.G. Wallace 2015; R.G. Wallace 2016; Cooper 2017). To survive, the agricultural sector must externalize the most damaging consequences of its production model by "privatizing the profits and socializing the costs." Consumers, governments, farmers, agricultural labor, livestock, wildlife, and local fields and waterways have long borne the material and fiscal fallout of declines in nutrition, declines in animal and landscape diversities, occupational hazards, pollution, xeno-specific pathogens, and restrictions in farmer autonomy. Even the largest food conglomerates would not survive if such costs were returned to company balance sheets. The imbalance in political power has long imposed definitional impacts upon disease ecologies (Watts 1997; R.G. Wallace 2016). Haalboom (2017) observed that, across four major Dutch outbreaks over the 20th century, agricultural interests repeatedly dominated the public health sector, a relationship that continued into the present century:

Historically, agriculture and agriculture-related export have been very important to the Dutch economy.
Hence, the specific and material economic interests of the Dutch agricultural sector got priority over more abstract, general public health interests (along with professional interests to secure tasks in public health protection), which were considered politically relevant only later in time. When these interests asked for more or less the same measures, this resulted in the Dutch ‘success stories’ of effective control and public health protection, as in the cases of bovine TB in the last phase of its control and BSE. Nevertheless, trade incentives were also of overriding importance in those cases. But when the interests of the agricultural and public health domains clashed, as in the case of salmonellosis, the agricultural power to delay and influence legislation and control was large.
To characterize the impacts of such an imbalance, we begin with a conventional model of disease outbreak, exploring the role environmental stochasticity plays on disease dynamics in time, space, and genetic diversity. We rephrase the model in terms of different “temperature” measures applicable to disease ecosystems, measures that are, in fact, close analogs to sterilizing immunity in confined systems. The results suggest that over large parts of the parameter space, the “economies of scale” characteristic of modern agribusiness practices produce diseconomies in public health, amplifying disease risk as farms and food production expand in geographic extent and consolidate in ownership.
We next model patterns of “asymmetric conflict” between agents rich in material resources, as would be applied to agribusiness, and those rich in time and information, a different set of resources in this case historically associated with peasant and smallholder communities. We outline how a “weak” institutional cognitive entity— here, governmental agencies and civil society pursuing public health interests—can overcome a conventionally stronger opponent—entrenched agribusiness practice— to limit disease morbidity, mortality, and economic burden. The results suggest potential strategies by which the position of public health practitioners as ecosocial actors can be significantly improved.
5.2 Stochastic Sterilization

5.2.1 Variation in Time

We begin by following an established literature in disease modeling summarized by R. Wallace et al. (2018). Most simply, for the earliest stage of a disease outbreak in a population of susceptible individuals, one can write a deterministic "exploding" equation as

dN/dt = αN(t),
(5.1)
at time t, where N is the infected population, α is a positive real number representing the rate of growth of the infection, so that, early on, N(t) increases exponentially in time as N(t) = N0 exp[αt].
(5.2)
The stochastic version of this is an Itô differential equation having the form dNt = αNt dt + σ Nt dWt ,
(5.3)
where the second term represents volatility under Brownian white noise dWt. This is taken as a random noise signal with equal power within a fixed bandwidth at any center frequency. The effect of the noise enters through the parameter σ. Applying the Itô chain rule, a stochastic version of the rule for differentiating a composition of two or more functions, to log[N] produces the relation

d log[Nt] = (α − σ²/2)dt + σ dWt,
(5.4)
where −σ²/2 is the "Itô correction factor." Heuristically, given enough environmental noise, i.e., σ²/2 > α, Jensen's inequality for a concave function (Cover and Thomas 2006) gives log[E(Nt)] ≥ E(log[Nt]), and E(log[Nt]) = log[N0] + (α − σ²/2)t → −∞. Via stochasticity, that limit will be attained, and any outbreak must eventually be driven to extinction.

For "colored" noise, with skewed spectral densities of a variety of distributions, if Eq. (5.3) can be expressed as

dNt = Nt dYt,
(5.5)
where Yt is a stochastic process driven by some complicated noise process dBt, then the Doléans-Dade exponential (Protter 1990) can be defined by a semimartingale of bounded variation and is written as

E(Y)t ∝ exp(Yt − (1/2)[Yt, Yt]).
(5.6)
[Yt, Yt] is the quadratic variation of Yt that, since Bt is not Brownian white noise, need not be simply proportional to time (Protter 1990). Heuristically, by the Mean Value Theorem, if

(1/2) d[Yt, Yt]/dt > dYt/dt,
(5.7)
E converges in probability to zero. R. Wallace et al. (2018) expands this analysis to vector-borne diseases, inherently having dimensionality greater than one, i.e., there are both host and vector population dynamics to be addressed. Despite some notable exceptions in the parameter space, sufficient variability still drives the infection to extinction. A large enough “stochasticity temperature” Tt ≡ d[Yt , Yt ]/dt “sterilizes” disease outbreaks, in this model.
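These extinction dynamics are easy to check numerically. The sketch below is our own construction (function names and parameter values are illustrative, not from the text): it integrates log Nt with the Itô-corrected drift α − σ²/2 of Eq. (5.4) and compares a weak-noise regime (σ²/2 < α) against a "sterilizing" one (σ²/2 > α).

```python
import math
import random

def simulate_log_outbreak(alpha, sigma, T=50.0, dt=0.02, n0=100.0, rng=None):
    """One Euler-Maruyama path of Eq. (5.3), tracked in log coordinates:
    d log N_t = (alpha - sigma^2/2) dt + sigma dW_t, per Eq. (5.4)."""
    rng = rng or random.Random(0)
    log_n = math.log(n0)
    for _ in range(int(T / dt)):
        # Ito-corrected drift plus Brownian white-noise increment
        log_n += (alpha - 0.5 * sigma ** 2) * dt
        log_n += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return log_n

def mean_log_n(alpha, sigma, n_paths=200, **kw):
    """Average terminal log-infection level over independent paths."""
    rng = random.Random(42)
    total = sum(simulate_log_outbreak(alpha, sigma, rng=rng, **kw)
                for _ in range(n_paths))
    return total / n_paths
```

With α = 0.1, the mean terminal log Nt rises for σ = 0.2 but falls for σ = 0.6: a large enough "stochasticity temperature" drives the outbreak toward extinction, exactly as the Jensen's-inequality argument predicts.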
5.2.2 Variation in Space

The simplest model of "spatial stochasticity" follows Murray (1989, Sec. 14.8). In one spatial dimension x, again taking N(x, t) as the number of infected individuals at location x and time t, we write a "diffusion equation" for the initial stage of an epidemic outbreak as

∂N(x, t)/∂t = μ ∂²N(x, t)/∂x² + αN(x, t).
(5.8)
Expansion of the solution to this equation as a spatial Fourier series leads to a time dependence proportional to

exp[(α − C²μ/L²)t].
(5.9)
α is the growth rate, μ is the coefficient of spatial diffusion, and C is a constant ≈ 1 depending on the dimension of the diffusion. Infection dies out if the patch size L is less than the critical length Lc = C√(μ/α). This leads to a second "spatial temperature" analog TL ≡ 1/Lc = √(α/μ)/C. For a version of the result that can be applied to diffusion of infection on a "commuting field" defined by rapid, systematic patterns of travel, see Eqs. (2.11)–(2.14) of R. Wallace et al. (2018).
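The critical patch size and the associated "spatial temperature" can be sketched directly (a minimal illustration; function names are ours and parameter values are arbitrary):

```python
import math

def critical_patch_length(alpha, mu, C=1.0):
    """Critical length L_c = C * sqrt(mu / alpha): patches smaller than
    L_c cannot sustain the outbreak of Eq. (5.8)."""
    return C * math.sqrt(mu / alpha)

def spatial_temperature(alpha, mu, C=1.0):
    """The 'spatial temperature' analog T_L = 1 / L_c = sqrt(alpha/mu) / C."""
    return math.sqrt(alpha / mu) / C

def outbreak_grows(alpha, mu, L, C=1.0):
    """Sign of the exponent alpha - C^2 * mu / L^2 from Eq. (5.9)."""
    return alpha - (C ** 2) * mu / L ** 2 > 0
```

For α = 0.1 and μ = 1, Lc = √10 ≈ 3.16: a patch of extent L = 10 sustains growth, while L = 2 extinguishes it.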
5.2.3 Variation in Genetic Structure

The role host genetic structure plays in epidemic propagation has long been explored (e.g., O'Brien and Evermann 1988; King and Lively 2012). Explicitly correcting the deterministic treatment of Anderson and May (1991), Lively (2010) provides a direct but sophisticated stochastic model, assuming a fixed reproductive cycle, as in annual birthing or, in an animal factory farm setting, market-driven turnover. Lively takes the number of infected hosts having the ith genotype at time t + 1 as

Ii(t+1) = gi(t+1) Nt+1 Pi(t+1).
(5.10)
gi(t+1) is the frequency of the ith host genotype at time t + 1. Nt+1 is the total number of hosts at time t + 1. Pi(t+1) is the probability of infection for the ith host genotype at time t + 1. Assuming a Poisson distribution, P becomes 1 minus the zero class exp[−λ], where λ is the mean number of matching spores that contact each host. The probability of infection for the ith host genotype at t + 1 is then Pi(t+1) = 1 − exp[−λ] = 1 − exp[−BIi(t) /Nt+1 ].
(5.11)
Ii(t) is the number of infected hosts having the ith genotype at time t and B is the number of infectious propagules produced by each infection that make contact with different hosts. B thus imposes an upper limit on the number of secondary infections. Lively next assumes that a single infected individual, with genotype i, is introduced into the population of hosts at time t, so that Ii(t) = 1. The number of secondary infections, i.e., the infamous R0 , is then R0i = gi(t+1) Nt+1 (1 − exp[−B/Nt+1 ])
(5.12)
leading to the condition for propagation

gi(t+1) Nt+1 > 1/(1 − exp[−B/Nt+1]).
(5.13)
For large populations, as N → ∞, Eq. (5.12) gives

R0i = gi B > 1, i.e., B > 1/gi.
(5.14)
For a large population, if the single-strain infection reduces the fitness and the frequency of the susceptible host genotype over time, the infection will die out when gi becomes less than 1/B. Lively (2010) further argues that if the pathogen is introduced by migration at a high rate, there can be multiple coexisting strains of the infective agent, giving a mean value for R0 as

<R0> = Σi R0i/G = Nt+1(1 − exp[−B/Nt+1])/G → B/G (5.15)

as N becomes large, where G is the number of genotypes in the host population. As Lively puts it,

Thus, all else being equal, the spread and persistence of infection should more easily occur in genetically homogeneous populations.
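Lively's thresholds are straightforward to compute. A minimal sketch (our function names; parameter values illustrative):

```python
import math

def r0_strain(g_i, n_hosts, B):
    """Eq. (5.12): expected secondary infections from one index case of a
    pathogen matching host genotype i, present at frequency g_i."""
    return g_i * n_hosts * (1.0 - math.exp(-B / n_hosts))

def mean_r0(n_hosts, B, G):
    """Eq. (5.15): mean R0 over G equally frequent host genotypes,
    tending to B / G as the host population grows."""
    return n_hosts * (1.0 - math.exp(-B / n_hosts)) / G
```

For B = 4 infectious propagules, a genetically homogeneous population (G = 1) gives <R0> ≈ 4, while G = 8 host genotypes pushes <R0> = B/G well below the epidemic threshold of 1, the "monoculture effect" in miniature.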
In the Mathematical Appendix, we elaborate on the result, further correcting Anderson and May (1991). The potential applications in time, space, and genetics are foundational. Other "stochasticity temperatures" more conducive to disease control can be regionally planned. Crop rotation, agrobiodiversity, and ecological pest control can be incentivized by top-down market regulation and trade/tax structures. Belo Horizonte, a Brazilian city of 2.5 million, implemented such a program (Chappell 2018). In conjunction with a state extension service, the city's Municipal Under-Secretariat of Food and Nutritional Security helped establish agroecological practices among outlying rural smallholders and protect the mega-biodiverse Mata Atlântica-Cerrado, by guaranteeing a market and setting prices subsidized for low-income urban consumers. Mosaic agroecologies at large spatial and population scales, by their rich environmental stochasticities—able to preempt most large-scale diseases before they emerge—appear feasible with appropriate government support.
5.3 Disease Control Failure

Following closely the arguments of R. G. Wallace et al. (2016), interacting polities, including the modern nation state, are not merely natural populations or communities. The spread of infectious disease within such a complex organization cannot be treated as simply a problem in population dynamics.
Introducing stochasticity and nonlinear dynamics to vital rates or behavior states, however advanced the developments post-Anderson and May, only recapitulates the core reductive fallacy (e.g., Heesterbeek et al. 2015; Funk et al. 2015; Eksin et al. 2017). Spatial updates (e.g., Klepac et al. 2011) obviate geography's manifest inequities (Gould 1993; D. Wallace and R. Wallace 1998; Sheppard 2011). Modeling microeconomic interventions can suggest exits out of strange attractors in which local economic incentives amplify infectious disease (e.g., Boni et al. 2013). But integrating disease control and economic optima (e.g., Klepac et al. 2011, 2013, 2015) instantiates the economism at the heart of the epidemiologist's entry into capital's professional–managerial class (Pimbert 2017; R. Wallace et al. 2018; Chaufan and Saliba 2019). Cost-effectiveness modeling, even its game theory equivalents unpacking bad institutional behavior (e.g., Barrett 2013), aims to minimize expenditures for these institutions first. In the course of controlling outbreaks at an "acceptable" level under such a constraint, the structural expropriation that produces the artificial scarcities underlying the outbreaks to begin with is left unaddressed (Farmer 2008; Sparke 2009; Chiriboga et al. 2015; Sparke 2017; R.G. Wallace et al. 2016; R. Wallace et al. 2018). Modeling certainly has its place and can be pursued under definitively different premises. Here we explicitly treat polities as cognitive entities. Confronted by any dynamic threat, the State, whatever its class character, must choose a relatively small set of responses from a much larger domain of possible policies and resources available to it. Choice reduces uncertainty and implies the existence of an information source generating successive messages. The Data Rate Theorem can be deployed to link control and information theories, tracking the effects of control information on control system effectiveness (Nair et al. 2007).
Here, epizootic spillover or some other spreading epidemic is constrained, or, in the case of systemic failure, released, by the adequacy or inadequacy of the control information imposed by the broader social system via public health interventions or, in the longer term, by the quality of broader socioeconomic reform. Figure 5.1 shows a schematic of a State’s public health system, in the context of a challenge by some growing disease outbreak. We assume that an outbreak has begun to propagate and that the aim is to contain it. The system at time t receives a multidimensional state vector Xt and produces a new vector at time t + 1, written as Xt+1 . At time t, the system is likewise affected by a “noise” vector Wt representing uncontrolled inputs and by a “control signal” vector Ut from cognitive entities within the state. The basic first-order “linear plant” dynamics near some nonequilibrium steady state (nss) are then written as Xt+1 = AXt + BUt + Wt
(5.16)
with feedbacks indicated by Fig. 5.1. A and B are fixed matrices. In the first stages of an explosive disease outbreak, the system must be seen as inherently unstable, in the sense that the matrix A can be factored by a similarity transformation into one having two diagonal submatrices AU, AS and two zero off-diagonal matrices, such that AU has eigenvalues ≥ 1 and AS has eigenvalues < 1. Thus an epidemic or pandemic contagious disease with "reproduction rate of infection" > 1 is clearly an inherently unstable system that must be brought under control by a State's public health institutions via control signals Ut.

Fig. 5.1 Schematic of a linear plant control system near nonequilibrium steady state. Xt+1 is the "plant" response to the control signal Ut and the earlier state Xt. Ut is the output of an information source. Wt is uncontrolled "noise"

The Data Rate Theorem (Nair et al. 2007) states that the rate of control information provided by the public health system, H, must be greater than the rate at which the unstable system generates "topological information," a relation written as

H > log[|det(AU)|] ≡ H0,
(5.17)
where det is the determinant of a matrix. The rate at which a disease outbreak generates “topological information” depends on the flow of infection along paths of contact between central cities, cities and their suburbs via the daily commute, and between individuals along social contact nets. We envision this as determined by some function of a composite scalar ρ akin to a density in a traffic model (e.g., R. Wallace 2018a). For pathogens such as influenza, Ebola, or one more intimately connected to social structure in its transmission such as HIV, ρ would be calculated as the length of a principal component from some multivariate analysis of data involving rates of deforestation, plantation or factory farming, confiscation of artisanal farmlands, housing overcrowding, percent of population in poverty, rates of deindustrialization, deurbanization, and violent crime, and so on (e.g., Wallace et al. 1999). Equation (5.17) then becomes H (ρ) > f (ρ)H0 ,
(5.18)
where H0 now represents the inherent topology of the underlying transportation and contact networks at the scale of interest. How might we characterize the functions H(ρ) and f(ρ)? Can we explicitly connect disease to the underlying socioeconomics?

Fig. 5.2 The horizontal line represents the critical limit H0. If κ2/κ4 ≫ κ1/κ3, at some intermediate value of accumulated environmental insult ρ, resulting in a low value of the "control temperature" TC, the relation (κ1ρ + κ2)/(κ3ρ + κ4) falls below that limit, and the growth of infection becomes uncontrollable. We will argue, however, that similar dynamics may affect a powerful agribusiness entity when ρ is replaced by "shadow price" and/or other burdens imposed by a coalition of highly adaptive public health opponents

A complicated calculation, using an exactly solvable Black–Scholes approximation (R. G. Wallace et al. 2016, Section 4.3), finds in first order

H = κ1ρ + κ2,
(5.19)
where the κi are nonnegative constants. Taking the same level of approximation, we assume f(ρ) in Eq. (5.18) can be similarly expressed as κ3ρ + κ4, so that the stability relation is

TC ≡ (κ1ρ + κ2)/(κ3ρ + κ4) > H0,
(5.20)
where we now define TC as the control temperature of the system. At low ρ, the stability condition is κ2/κ4 > H0, and at high ρ, it becomes κ1/κ3 > H0. If κ2/κ4 ≫ κ1/κ3, then at some intermediate value of ρ, the essential inequality may be violated, leading to uncontrolled growth of infection (Fig. 5.2). This formalism describes a general phenomenon, imposed on all inherently unstable control systems by a theorem as powerful as the Central Limit Theorem that underlies the normal distribution. The ability of a political entity—for instance, an entrenched agribusiness system—to control its own resources and contain an opposition can be similarly compromised. As we will discuss below, under the challenge of a public health opponent, ρ is then replaced by the "shadow price" that an opposition imposes on the conduct of agribusiness-as-usual.
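The stability machinery of Eqs. (5.17)–(5.20) can be sketched numerically. In the toy construction below (the plant matrix and κ values are invented for illustration), H0 is computed as log|det(AU)| from the unstable eigenvalues of a 2 × 2 plant matrix, and the control temperature TC is scanned in ρ to locate the point at which control is lost.

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Real eigenvalues of [[a, b], [c, d]] via the characteristic quadratic."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc < 0:
        raise ValueError("complex eigenvalues: handle separately")
    s = math.sqrt(disc)
    return (tr + s) / 2.0, (tr - s) / 2.0

def h0(eigs):
    """Data Rate Theorem threshold of Eq. (5.17): H0 = log|det(A_U)|,
    the sum of log|lambda| over unstable eigenvalues (|lambda| >= 1)."""
    return sum(math.log(abs(l)) for l in eigs if abs(l) >= 1.0)

def control_temperature(rho, k1, k2, k3, k4):
    """T_C = (k1*rho + k2) / (k3*rho + k4), Eq. (5.20)."""
    return (k1 * rho + k2) / (k3 * rho + k4)

def control_fails_at(k, H0, rho_max=1000.0, steps=100000):
    """Smallest scanned rho at which T_C <= H0, or None if control holds."""
    for i in range(steps + 1):
        rho = rho_max * i / steps
        if control_temperature(rho, *k) <= H0:
            return rho
    return None
```

With A = [[1.5, 0.1], [0.0, 0.5]], only the eigenvalue 1.5 is unstable, so H0 = log 1.5 ≈ 0.405. For κ = (1, 20, 2, 4), TC falls from κ2/κ4 = 5 toward κ1/κ3 = 0.5 as environmental insult ρ accumulates; a demanding threshold (H0 = 1) is crossed at finite ρ, while the milder H0 ≈ 0.405 is never violated, illustrating the two limiting stability conditions.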
5.4 Asymmetric Conflict

In addition to producing large-scale environmental and health damage, multinational agribusiness embodies large-scale political, economic, and social power that extends beyond biophysical impact (Mann 1990; McMichael 2013; Howard 2016; Bauerly 2016; Patel and Moore 2017; Wise 2019). An economic sector of such power imposes an institutional pathology that is normalized in governance and research as the guiding spirit of an age, in this case organized around a metabolic rift that divides the bounded regenerative ecologies upon which all life depends from expectations of infinite economic growth (Lefebvre 1974 [1992]; Foster et al. 2010; R. G. Wallace and Kock 2012; Malm 2016; Michael 2018; R. G. Wallace et al. 2020b). History shows that escape from the consequences of such an imposition often demands a severe price. Courtesy of (mostly) WWII's Eastern Front—typically grouped as the battles of Moscow, Stalingrad, Kursk, and Operation Bagration—the world was, at great cost, liberated from Nazi ideology and state (Glantz and House 1995 [2015]). There are other modes of liberation, arguably as difficult in their endeavor. The Vietnamese expelled both French and American colonial armies, as did the Algerians the French (Harbi 1980; Goscha 2003; Clayton 1994 [2013]). The Afghans expelled the British and Russians and seem intent on the same with the Americans (Sinno 2008; Chandrasekaran 2012; Petras 2019). Western colonial systems after WWII suffered expulsions across Africa, India, and Indonesia (Luttikhuis and Moses 2014). In most cases, the means to liberation—or, depending on one's perspective, of subversion—involved conflicts between entities of vastly different material resources. These contests are described by military strategists (e.g., Farrell et al. 2013) as "asymmetric conflict." War is only one of many forms of such conflict.
There are structures underlying successful asymmetric conflict that may be harnessed by public health, environmental groups, and farming communities, within both the State and civil society, against what is now a multiscale and spatially discontinuous “colonial” system that multinational agribusiness has constructed across national borders as a series of parastatal “Soybean Republics” operating outside or in parallel to nation states (Turzi 2011; Haesbaert 2011; Meyfroidt et al. 2014; Craviotti 2016; Oliveira 2016). Examples in asymmetric advantage abound. During the American war in Vietnam, up to 94% of the combat incidents were initiated by local Viet Cong and North Vietnamese forces (Hunt 2013). Such initiative eventually decided the war. The current Afghanistan stalemate reflects a roughly similar ratio for ground combat, in spite of the U.S.’s drone war dominance and efforts to positively spin such metrics reminiscent of the Vietnam War, the latter exporting the information gap back to the home front (Cordesman 2019). Under other circumstances, such an imbalance in information and initiative would be treated as unfair and grounds for punishment. In stock market trading, grossly different information rates are called “insider trading” and subject to prosecution (Chowdhury et al. 2018). We can model the conditions under which an asymmetry of field intelligence can overcome asymmetry of power, in the context of constraints imposed by Clausewitzian fog, friction, and “shadow price” (Wallace 2020a).
We begin by considering a target institution made up of a number of cooperating and interacting cognitive entities. This composite entity operates on a landscape of conflict in which there are, for it, three essential resources. The model can be extended to greater numbers of resources, at the expense of more algebra. We take these resources as information passing between subsystems of the cognitive entity, provided at an overall internal channel capacity C; "sensory" information about the embedding environment, provided from the "outside" at a channel capacity H; and material, energy, and/or finances provided at an overall rate M. We envision a 3 × 3 matrix Z with the diagonal {C, H, M}, but having off-diagonal elements representing interactions between the three basic rates, analogous to, but different from, a correlation matrix. An n × n matrix has n invariants rj that remain fixed when the matrix undergoes certain symmetry transformations, and these can be used to construct a scalar measure, much as in a principal component analysis, via the standard polynomial relation

p(γ) = det(Z − γI) = γ^n − r1γ^(n−1) + · · · + (−1)^n rn.
(5.21)
det is the determinant, γ a parameter, and I the n × n identity matrix. The invariants of Z are the coefficients of γ in p(γ), normalized so that the coefficient of γ^n is 1. Typically, the first invariant is the trace of Z, and the last is ± the determinant. We define a scalar index Z as an appropriate monotonic increasing scalar function of the n matrix invariants r1, r2, . . . , rn. This is not a trivial matter and will hold much of the art of the science. The simplest such index would be Z = C × H × M, but interaction cross-terms in Z are likely to be important in the real world.

The Rate Distortion Theorem provides a different, and deep, perspective on cognition and its failure in the particular context of the Data Rate Theorem of the previous section. Central focus is on the difference between what is initially ordered by a powerful institution and what is actually observed, taking cognitive control to involve a "channel" along which a "message" is transmitted. The difference between what the institution wants and what happens is defined in terms of a scalar distortion measure D. For any such channel, there is a Rate Distortion Function R(D), always convex in D (Cover and Thomas 2006), that determines the minimum channel capacity—a free energy measure from Feynman's (2000) perspective, representing an information transmission rate—needed to ensure that the average distortion between intent and execution is less than or equal to D. Taking R(D) as a free energy measure, and following the standard view from chemical reaction theory (Laidler 1987), the rate of cognitive control above the punctuated threshold arising from the Data Rate Theorem, which we called H0 above, can be characterized as a Boltzmann pseudoprobability:

Pr[R > H0] = (∫_{H0}^∞ exp[−R/g(Z)] dR) / (∫_0^∞ exp[−R/g(Z)] dR) = exp[−H0/g(Z)],
(5.22)
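Stepping back to Eq. (5.21), the invariants of the 3 × 3 case are the trace, the sum of the principal 2 × 2 minors, and the determinant; for a diagonal Z they reduce to the elementary symmetric functions of the three resource rates. A sketch (our construction; the numerical diagonal is an arbitrary placeholder):

```python
def invariants_3x3(Z):
    """Invariants r1, r2, r3 of a 3x3 matrix Z: the coefficients in
    p(gamma) = gamma^3 - r1*gamma^2 + r2*gamma - r3 (Eq. 5.21 with n = 3)."""
    (a, b, c), (d, e, f), (g, h, i) = Z
    r1 = a + e + i                                             # trace
    r2 = (e * i - f * h) + (a * i - c * g) + (a * e - b * d)   # principal 2x2 minors
    r3 = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)  # determinant
    return r1, r2, r3

def scalar_index(Z):
    """The text's simplest scalar index: for diagonal Z = diag(C, H, M),
    the determinant r3 reduces to C * H * M."""
    return invariants_3x3(Z)[2]
```

For Z = diag(2, 3, 4) this gives r1 = 9, r2 = 26, r3 = 24 = 2 × 3 × 4; off-diagonal interaction terms would perturb all three invariants, which is what motivates building the index from all of them rather than the determinant alone.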
where, again, H0 is the limit from Eq. (5.17), and g(Z) is a scalar temperature analog, depending on the scalar resource rate index Z, that must be calculated from first principles. This is not a physical system in which temperature is an independent parameter. Treating the denominator of Eq. (5.22) as a statistical mechanical partition function produces an expression for a free energy Morse Function (Pettini 2007) F as

exp[−F/g(Z)] = ∫_0^∞ exp[−R/g(Z)] dR = g(Z)

F = −log[g(Z)] g(Z)

g(Z) = −F(Z)/W(n, −F(Z)).
(5.23)
Again, W is the Lambert W-function of order n that solves W(n, X) exp[W(n, X)] = X. This permits the definition of an associated entropy analog as the Legendre transform

S = −F + Z dF/dZ.
(5.24)
The next step is to define system dynamics in terms of the Onsager approximation from nonequilibrium thermodynamics (de Groot and Mazur 1984). Since S = −F + Z dF/dZ gives dS/dZ = Z d²F/dZ², in first order, absorbing the rate constant μ into f,

dZ/dt ≈ μ dS/dZ = f(Z)

Z d²F/dZ² = f(Z).
(5.25)
Taking F(Z) as a general expression, and expanding in terms of Eqs. (5.24) and (5.25), leads to a second-order ordinary differential equation having the solution

F(Z) = ∫∫ (f(Z)/Z) dZ dZ + C1 Z + C2.
(5.26)
In view of Eq. (5.23), some manipulation gives the necessary and sufficient condition for concavity in g over the region of interest, that d²g/dZ² < 0. Writing W ≡ W(n, −F(Z)), so that dW/dZ = W (dF/dZ)/(F(Z)(1 + W)), the condition can be written compactly as

(dF/dZ)² W / (F(Z)(1 + W)³) − (d²F/dZ²)/(1 + W) < 0.

(5.27)
Concavity in g(Z) determines when "overload" begins to cripple an institution's rate of cognitive function. On Clausewitz landscapes (Wallace 2020a), as opposed to "more normal" cognitive systems, friction—here characterized by the relation dZ/dt = f(Z)—is one of the most central driving forces and will set the stage for g(Z), subject to the separate boundary conditions C1 and C2. Equation (5.27) essentially precludes assigning a simple S-shaped relation to g(Z), since the frictional relation dZ/dt = f(Z) is imposed by embedding realities mostly beyond the explicit control of a particular cognitive entity. Recall that the zero-order branch of the Lambert W-function is real only for −exp[−1] < X < ∞, and the n = −1 branch is real only for −exp[−1] < X < 0, leading, in Fig. 5.3, after some thought on the boundary values C1 and C2, to inverted-U signal transduction forms having punctuated dynamics. Here, the Clausewitzian friction is taken as f(Z) = β − αZ, with α = 1, so that Z → β, having boundary conditions C1 = −1, 1 and C2 = 3, 1. If −F(Z) < −exp[−1], a challenged system's rate of cognition collapses.

Fig. 5.3 Signal transduction form for g(Z) from Eq. (5.23), taking f(Z) = β − αZ, α = 1, Z → β/α, with β varying. The condition for a real-valued "temperature" is −F(Z) > −exp[−1]. C1 = −1, 1 and C2 = 3, 1. Only for Z in a limited range is g(Z) real-valued, as represented by the inverted-U, in a manner highly dependent on system parameters
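The inverted-U of Fig. 5.3 can be reproduced numerically. The sketch below assumes the "exponential" friction model f(Z) = β − αZ, one explicit placement of the integration constants in Eq. (5.26), F(Z) = β(Z ln Z − Z) − αZ²/2 + C1·Z + C2, the principal branch n = 0, and a home-rolled Newton iteration for the Lambert W-function; all of these choices are ours and illustrative.

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch W(0, x) by Newton iteration; real only for x >= -1/e."""
    if x < -math.exp(-1.0):
        raise ValueError("W(0, x) is real only for x >= -1/e")
    w = math.log(x) if x >= 1.0 else 0.0  # crude but adequate starting point
    for _ in range(200):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def F_of_Z(Z, alpha=1.0, beta=3.0, C1=-1.0, C2=3.0):
    """F(Z) = double integral of f(Z)/Z for f(Z) = beta - alpha*Z,
    with the integration constants placed as C1*Z + C2 (our assumption)."""
    return beta * (Z * math.log(Z) - Z) - alpha * Z ** 2 / 2.0 + C1 * Z + C2

def g_of_Z(Z, **kw):
    """Temperature analog g(Z) = -F / W(0, -F) from Eq. (5.23); returns
    None where -F(Z) < -1/e, i.e. where the rate of cognition collapses."""
    F = F_of_Z(Z, **kw)
    if -F < -math.exp(-1.0):
        return None
    return -F / lambert_w0(-F)
```

With β = 3, α = 1, C1 = −1, C2 = 3, g(Z) is real-valued at Z = 1 (and satisfies the defining identity exp[−F/g] = g) but collapses at Z = 0.2, where −F < −exp[−1]: the "temperature" exists only on a limited range of Z.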
With regard to optimization, a multicomponent cognitive system is confronted with the necessity of high rates of cognition at all scales and levels of organization, but under the overall resource rate constraint. The dynamics of such a system can again be crudely addressed using a standard Lagrangian optimization across cognition rates. While "real world" optimization is likely to be more in the realm of "the hand reaches, the eye measures the gap," any optimization procedure will be confronted by problems similar to those limiting Lagrangian optimization. The Lagrangian optimization problem for the overall probability/reaction rate relation of Eq. (5.22) is

L ≡ Σi exp[−H0i/g(Zi)] + λ(Z − Σi Zi)

∂L/∂Zi = 0

∂L/∂Z = λ.
(5.28)
L is the Lagrangian, and λ is the economic shadow price (Jin et al. 2008) imposed by the resource constraints. We can then solve the second expression of Eq. (5.28) for H0i, deriving a complicated synergistic relation between λ and the F(Zi) in terms of the Lambert W-function. These relations link λ, the environmentally imposed shadow price—another stress measure, in a large sense—to the Zi, the scalarized rates at which essential resources are delivered to each subcomponent under an overall limit. Figure 5.4 shows an example of the second part of Eq. (5.28) for a two-component, equally weighted system. The expected division of Z is, of course, Zmax/2. The adaptation/delay function f(Z) is assumed to be the "exponential" model, so that f(Z(t)) = β − αZ(t). For Fig. 5.4, H0 = 1, α = 1, C1 = −1, C2 = 3, Z = β/α, and β varies, taking maximum values 2, 3, and 10. For the smallest and largest values, optimization is not possible, a consequence of the inverted-U form. For β = 3, optimization is possible only if the shadow price λ is in the proper range. That is, if the environmentally imposed shadow price λ is beyond the range of possible system response, optimization is not possible. The essential inference is that shadow price burdens will interact with resource rate to impose punctuated cognitive failure across a hierarchically organized system. Again, more sophisticated optimization methods—"the hand reaches, the eye measures the gap"—can be expected to display recognizably similar patterns of punctuated failure. Van den Broeck et al. (1994, 1997) and Horsthemke and Lefever (2006), among others, have explored the role of "noise" in driving phase transitions in physical systems. We adapt their methods, based on the stability criteria of stochastic differential equations, to examine "noise"-driven phase change associated with contention by cognitive agents on a Clausewitz landscape.
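The punctuated character of this optimization can be illustrated with a toy inverted-U stand-in for g(Z) (not the Lambert-W form of the text) and a brute-force grid search over the two-component resource split; everything here is schematic.

```python
import math

def g_toy(z):
    """Toy inverted-U temperature analog (peak g = 1 at z = 1), standing in
    for the g(Z) of Eq. (5.23)."""
    return z * math.exp(1.0 - z) if z > 0.0 else 0.0

def cognition_rate(z, H0=1.0):
    """Component reaction rate exp(-H0 / g(Z)) from Eq. (5.22)."""
    g = g_toy(z)
    return math.exp(-H0 / g) if g > 0.0 else 0.0

def best_split(z_total, H0=1.0, steps=1000):
    """Grid search of the constrained two-component problem of Eq. (5.28):
    maximize the summed cognition rate subject to Z1 + Z2 = z_total."""
    best_z1, best_val = None, -1.0
    for i in range(1, steps):
        z1 = z_total * i / steps
        val = cognition_rate(z1, H0) + cognition_rate(z_total - z1, H0)
        if val > best_val:
            best_z1, best_val = z1, val
    return best_z1
```

For a moderate total (Z = 2) the optimum is the symmetric split Z1 = Z2 = 1, matching the "expected division" above; under overload (Z = 10, where equal sharing drives both components far past the inverted-U peak) the "optimum" collapses onto feeding one component near its peak while starving the other, a crude analog of punctuated cognitive failure.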
5 Agribusiness vs. Public Health: Disease Control in Resource-Asymmetric Conflict
103
Fig. 5.4 We examine a two-component equally weighted “exponential delay” system for which the optimum resource division is equal sharing. For the third expression of Eq. (5.28), f (Z) = β − αZ. The Lambert W-function is of order zero. Here, α = H0 = 1, C1 = −1, C2 = 3, Z = β/α, and the maximum β varies: (a) β = 2, (b) β = 3, (c) β = 10. Here, even where optimization is possible, it may be precluded by shadow price λ
Equation (5.25) allows calculation of the nonequilibrium steady-state average of g(Z) under stochastic variation in terms of Z, via the Ito Chain Rule (Protter 2006). The relation dZ/dt = f(Z) = β − αZ leads to the stochastic differential equation

dZt = f(Zt)dt + σ Zt dWt = (β − αZt)dt + σ Zt dWt,    (5.29)

where the second term imposes stochastic volatility proportional to Z, driven at the rate σ, and dWt represents Brownian white noise, the standard starting point for such analyses, which can be extended to "colored" noise (Protter 2006). Applying the Ito Chain Rule to Z², the variance of Z can be calculated from the second form of Eq. (5.29) as

⟨Z²⟩ − ⟨Z⟩² = (β/(α − σ²/2))² − (β/α)².    (5.30)
Variance explodes as σ²/2 → α, independent of β and Z∞ = β/α: the rate constant α acts on its own. Using Eq. (5.29) as the base SDE, it is possible to apply the Ito Chain Rule to g(Z) itself. The result gives the mean value of g at nonequilibrium steady state, i.e., when ⟨dgt⟩ = 0. This convolutes the Lambert W-function with σ, α, and β, and the boundary conditions C1 and C2, representing a synergism between the effects of fog, delay, and bifurcation phase transitions. Unfortunately, the computer algebra result is too long for the LaTeX compiler and is given in the Mathematical Appendix as an image from the computer algebra program Maple 2020. The Ito Chain Rule calculation for g(Z) is quite routine, if tedious.
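The variance blow-up can be checked by direct simulation. The sketch below integrates Eq. (5.29) by the Euler-Maruyama method (a standard discretization, not one used in the chapter) and compares a mild σ against one for which σ²/2 approaches α; all parameter values are illustrative.

```python
import math
import random

# Euler-Maruyama sketch of Eq. (5.29), dZ_t = (beta - alpha*Z_t)dt + sigma*Z_t dW_t,
# illustrating the point of Eq. (5.30): steady-state variance blows up as
# sigma^2/2 approaches alpha, while the mean stays near beta/alpha.
# Parameter values are illustrative only.

def simulate(beta, alpha, sigma, T=200.0, dt=0.001, burn=50.0, seed=42):
    rng = random.Random(seed)
    z = beta / alpha                      # start at the deterministic fixed point
    s1 = s2 = 0.0
    n = 0
    for k in range(int(T / dt)):
        dW = rng.gauss(0.0, math.sqrt(dt))
        z += (beta - alpha * z) * dt + sigma * z * dW
        if k * dt > burn:                 # discard the transient
            s1 += z
            s2 += z * z
            n += 1
    mean = s1 / n
    return mean, s2 / n - mean * mean

beta, alpha = 1.0, 1.0                    # Z_inf = beta/alpha = 1
m_lo, v_lo = simulate(beta, alpha, sigma=0.3)
m_hi, v_hi = simulate(beta, alpha, sigma=1.2)   # sigma^2/2 = 0.72, nearing alpha
print(m_lo, v_lo)    # mean near 1, small variance
print(m_hi, v_hi)    # mean still near 1, variance far larger
```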
Fig. 5.5 Equivalence classes in β, Z, and σ defined by the relation ⟨dgt⟩ = 0. α = 1, C1 = 1, C2 = 3. Recall that the condition for variance stability in Z is σ²/2 < α, in this case σ < √2. Here, however, the probability of an inflammatory explosion rises significantly well before that ultimate limit. The bottom plane represents a grinding down of a nonequilibrium steady state under noise
R. Wallace et al.
Setting α = 1, C1 = 1, C2 = 3, Fig. 5.5 applies the implicitplot3d numerical procedure of Maple 2020 to the pattern in ⟨dgt⟩ = 0 for the variates β, Z, and σ. The Lambert W-function in the expression for ⟨dgt⟩ = 0 imposes necessary conditions driving bifurcation phase transitions on the dynamics—the behavior—of cognitive process under fog and frictional stochasticity. This procedure generates distinct equivalence classes in {Z, β, σ}. The lowest plane in Fig. 5.5 appears to be a kind of grinding down under the burden of noise, with a nonequilibrium steady state possible until σ becomes sufficiently large, here displaying a raised probability of an inflammatory explosion well before the ultimate limit of σ = √2. Transition between equivalence classes, representing groupoid symmetry-breaking phase transitions as discussed in more detail below, remains to be more fully explored but appears to involve yet another "temperature" parameter, as studied in Wallace (2020b) for biological systems. The dynamics of bifurcation phase transitions depend heavily on the nature of noise influence. "Noise" and time delay "friction" become synergistic with cognitive system phase transitions, generating tortuously complex and critically unstable patterns of function and failure. These instabilities can be exploited by a sufficiently astute adversary of the primary institution under siege. The U.S. command's focus on Khe Sanh on the eve of the Tet offensive, in spite of considerable intelligence suggesting the offensive (e.g., Bowden 2017), offers one example. A similar cognitive failure can be found in Douglas MacArthur's blind advance into North Korea against growing evidence of Chinese military buildup in Manchuria. The U.S. occupation of Iraq in 2003 and its sequelae also come to mind, as does the long-term impact of the current drone war in Afghanistan and across other Middle East and African states.
Indeed, unrestrained conflict often becomes a tit-for-tat “language that speaks itself” (e.g., R. Wallace 2018b, Ch. 6).
What happens if the embedding conflict ecosystem shadow price defined by λ is outside the range for which some value of Zi ≤ Z can be found? Again, as Jin et al. (2008) explain, under pathological circumstances, no optimization can exist, and the system-of-interest fails, possibly triggering a spreading collapse as connected subcomponents are also overloaded and fail. These insights have been explicitly discussed in the military science literature.
5.5 John Boyd’s OODA Loop Military theorist John Boyd was the principal architect of the famous “left hook” in the 1991 Gulf War that so devastated the conventional Iraqi army. Generalizing from his experiences of the “thin, warm” fog-of-war characterizing mid-20th century air combat, Boyd explored the intimate relation between cognition and control that is inherent to virtually all contention, including arguably the conflict between agribusiness and public health we address here. His work developed from a simple sequential dynamic model of Observation, Orientation, Decision, and Action, to the elaborate feedback system of his later presentations. Commodore Prof. Frans Osinga (2007) of the Netherlands Defense Academy describes Boyd’s approach as follows: The OODA loop model as [later] presented by Boyd. . . represents his view on the process of individual and organizational adaptation in general, rather than only the military-specific command and control decision-making process that it is generally understood to depict. It refers to [a] conceptual spiral. . . to the process of learning, to doctrine development, to command and control processes and to the Popperian/Kuhnian ideas of scientific advance. The (neo-)Darwinists have their place, as do Piaget, Conant, Monod, Polanyi and Hall, while Prigogine and Goodwin are incorporated through Boyd’s concluding statement in the final slide that follows [his presentation of] the OODA loop picture: “The key statements of this presentation, the OODA Loop Sketch and related insights represent an evolving, open-ended, far from equilibrium process of self-organization, emergence and natural selection.” This relates the OODA loop clearly to Complex Adaptive Systems, the role of schemata and to the process of evolution and adaptation. 
Once again it shows that where the aim is ‘to survive and prosper’ in a nonlinear world dominated by change, novelty, and uncertainty, adaptation is the important overarching theme in Boyd’s strategic theory.
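Boyd's central claim, that the faster-adapting agent "gets inside" the slower opponent's loop, can be caricatured in a few lines. The model below is our own toy construction, not Boyd's or Osinga's: each agent re-orients to a drifting environment only once per OODA cycle, and the cycle lengths, drift rate, and horizon are all hypothetical.

```python
import random

# Toy sketch (not Boyd's own formalism): two agents track a drifting
# environment state. Each agent only re-orients -- updates its internal
# estimate from observation -- once per OODA cycle; between decisions it
# acts on a stale picture. All parameters are hypothetical.

def run(cycle_length, steps=5000, drift=0.1, seed=7):
    rng = random.Random(seed)
    env = 0.0          # environment state ("unfolding circumstances")
    belief = 0.0       # agent's orientation: last observed state
    total_error = 0.0
    for t in range(steps):
        env += rng.gauss(0.0, drift)        # environment drifts (novelty, fog)
        if t % cycle_length == 0:
            belief = env                    # observe/orient/decide: resync
        total_error += abs(env - belief)    # action taken on a stale picture
    return total_error / steps

fast = run(cycle_length=2)    # agent "inside" the opponent's loop
slow = run(cycle_length=20)   # same environment path (same seed)
print(fast < slow)            # the faster loop tracks change better
```

With the same seed, both agents face the identical environment path; the shorter cycle simply refreshes its picture more often, accruing less error between decisions.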
Figure 5.6, from Osinga (2007), is adapted from Boyd’s presentation and shows his fully developed adaptive cognitive/control process model, which should be compared to the inherently unstable control system model of Fig. 5.1. We have, in the previous sections, provided something of a roadmap for formal exploration of Boyd’s approach to conflict. On the other hand, Boyd’s analysis suggests that in the conflict between agribusiness and public health, in which agribusiness holds considerable material advantage, it will be necessary for public health advocates to focus on adaptation, changing the “rules of the game,” and associated strategy and tactics, faster than the opposition can respond. One objective should be to force “shadow prices” on conventional agribusiness practices that can
Fig. 5.6 Adapted from Osinga (2007). John Boyd's fully developed OODA loop ("the real OODA loop") convolutes cognition with control, under the conditions of uncertainty inherent to conflict on a Clausewitz landscape strongly dominated by friction and the fog-of-war. The key to survival for a "weak" agent is greater adaptability than its "stronger" opponent. The diagram links Observation (unfolding circumstances, outside information, unfolding interaction with environment), Orientation (genetic heritage, cultural traditions, previous experiences, new information, analysis and synthesis), Decision (hypothesis), and Action (test) through feed-forward, feedback, and implicit guidance-and-control channels
be met only by changing those practices in a manner that protects, rather than threatens, the health and safety of large human populations. That is, asymmetry in material resources can be met by a synergism of "richness" in intelligence and of flexibility. Conversely, a materially weak opponent can successfully challenge the materially superior agency by placing unbearable shadow prices on that agency's own material, communication, intelligence, and flexibility resources. One military example stands out, if only because it illuminates the conditional complexity of such an "adaptation" surface. As described above, German Bewegungskrieg tactics in WWII were eventually countered on the Eastern Front (Glantz and House 1995 [2015]). Nonetheless, in spite of holding overwhelming material superiority before and after Kursk, Allied forces were unable to break the German army until January 1945. The Germans retained superior—or at least adequate—internal communication, intelligence, and tactical flexibility until that late date. The problem of the "weak," then, is to develop an adaptable strategy that places a carefully crafted synergism of unbearable shadow prices on the various essential gearings of current agribusiness practice and its relations to the machineries of governance, a strategy that adapts more rapidly than can or will be met by entrenched power. Such campaigns are by definition difficult and may demand more than merely more of the same: among possibilities, careful but flexible strategic planning, greater intelligence on agribusiness strategy and operations, decentralized execution, and the element of frequent acute surprise.
5.6 Discussion

5.6.1 Placing Public Health First

In spite of their ill repute among scientists, normative dichotomies can serve as first-order criteria. In an age of climate debates, scientists have effectively lost by depending on data alone (Goeminne 2010), and Hume's guillotine, distinguishing what is from what we researchers wish, appears circumstantially false. For the case presented here, it is useful, and arguably necessary as a matter of scientific practice, to distinguish the language of business—the language of capital's imaginarium (Lacan 1972; Böhm and Batta 2010; Carley and Molina 2011; Holmes 2013; Tomšič 2005; Vanheule 2016; Fracchia 2017)—from the language of public health. Failures in public health's conflict with agribusiness have produced collateral damage in the millions of people and animals hit by deadly pathogens and other environmental spillovers propagating far beyond the farm gate (Tscharntke et al. 2005; Gilchrist et al. 2007; Leibler et al. 2010; Atkins et al. 2010; Ercsey-Ravasz et al. 2012; Smit et al. 2012; Allen and Lavai 2014; Rotz and Fraser 2015; Martin-Prével et al. 2016; Allen et al. 2017; Jones et al. 2017; R. G. Wallace 2018a). The clash can be resolved in the broader population's favor, permitting social
reproduction generation to generation in the face of what are presently, by all accounts, a series of protopandemics. To that aim, in the series of models presented here, we examined strategies for the "sterilization" of infectious disease at the population level via stochasticities that conflict with current agribusiness "economies of scale" privatizing profits and socializing costs. We have used, in addition, the perspectives of control theory and of the asymmetric conflict between entrenched agribusiness interests rich in resources and public health entities constrained by the many pressures exerted by those very resources on the structures of governance. Asymmetric conflict has been surprisingly successful in military realms. Abduction of appropriate strategies from those examples may also prove successful for the prevention and control of mass-fatal, agribusiness-driven pandemics. Indeed, we argue that the ability to rapidly and systematically adapt strategies and tactics is as central to success in public health as it is in armed conflict. We are not alone in thinking so. In light of Q fever's devastating effects on health and economy alike in 2007–2010, the Dutch public health establishment made a conscious effort, albeit one tightly circumscribed in the context of the goat industry from which the pathogen sprang, to impose new regulation (Smit et al. 2012; van der Hoek et al. 2012; Quammen 2012; Hogerwerf et al. 2013; Hogerwerf 2014). An evaluation commission explicitly recounted the means by which, during the outbreak, the Ministry of Agriculture prevailed over the Ministry of Health, with adequate responses to the outbreak delayed for ulterior reasons, specifically undue appeals to farmer privacy and the withholding of locale data of infected farms (van Dijk et al. 2010).
The commission recommended that the Ministry of Health take the lead in any outbreaks that might follow and that incidence data be shared immediately with public health stakeholders—e.g., the National Institute for Public Health and the Environment and municipal health services—to permit the promptest response. Such a "zoonosen structuur" has since been devised and implemented. Devastating public health outcomes, however, are no guarantee that changes in the public's favor will be pursued. A pandemic strain that emerges out of agriculture, an increasingly predominant mode, can always be blamed on a virus, as proved the case for swine flu H1N1 and Ebola Makona (R. G. Wallace 2016; R. G. Wallace et al. 2016). Indeed, to that end, agribusiness has expanded lobbying efforts and political contributions aimed at governance across jurisdictions, from local governments to UN agencies (Davis 2003; McCue 2012; Clapp 2014). Changing the "rules of the game," as the modeling here implies, is a necessary but often insufficient step along any path to public health victory. In many fora, power is not just handed over. Conflicts can play out in more than bureaucratic tussles between ministries.
5.6.2 Agribusiness and Counterinsurgency Agricultural policy regularly bleeds into paramilitary and military action, the latter illustrated—in examples more recent than the touchstones of Chiquita in Guatemala
or Operation Produce in Indonesia—by the U.S. Military Agribusiness Development Teams in Afghanistan and by Orders 24, 27, and 81 under Coalition Provisional Authority rule opening Iraq to foreign seed breeders and prohibiting farmers "from re-using seeds of protected varieties" (Grajales 2011; Johnson et al. 2011; Johnson et al. 2012; Ballvé 2013; Baker 2014; Woods 2018). For agribusiness in contested territory, the comparison here between military science and the struggle over land use and its public health impacts is no mere abduction. Age-old counterinsurgency appears to be evolving inside agribusiness's new multiterritoriality and along its spatially discontinuous networks of fluctuating territorial embeddedness across borders and funding sources (Patel 2013; R. Wallace et al. 2018). A team of international legal experts implicated the Netherlands Development Finance Institution (FMO), among other financial institutions, in the 2016 murder of the internationally known Honduran environmental activist Berta Cáceres. Without the capital needed to build the Agua Zarca Hydroelectric Project proposed for the Río Blanco between the departments of Intibucá and Santa Bárbara in western Honduras, Desarrollos Energéticos Sociedad Anónima (DESA), founded just for the project, secured financing from international sources. Altholz et al. (2017) charged FMO, the Central American Bank for Economic Integration (CABEI), and the Finnfund, among other groups that supplied project capital, with "willful negligence" in Cáceres's murder.
The funds pursued a strategy with shareholders, executives, managers, and employees of DESA, private security companies working for DESA, public officials, and state security agencies "to control, neutralize and eliminate any opposition." Among the tactics DESA and its security contractors pursued were setting community groups against each other, smear campaigns, infiltration, surveillance, threats, contract killings, sabotage of communication equipment, and co-optation of justice officials and security forces. Altholz et al. report: These [international foundations], through repeated complaints and reports by international consultants, had prior knowledge of the strategies undertaken by DESA. Nevertheless, they failed to implement appropriate, effective and timely measures to guarantee respect for the human rights of indigenous communities affected by the Agua Zarca dam, much less to protect the life and integrity of Berta Isabel Cáceres Flores. Nor did they make sufficient efforts to ensure the appropriate criminal investigations.
In July 2017, FMO and Finnfund confirmed exiting the Agua Zarca project, but Berta Cáceres's murder represents only one of what appears to be a surge in targeted assassinations around the world aimed at countermanding community efforts at blocking corporate land grabbing and environmental destruction. Global Witness (2014) reported that 1,000 environmental activists had been killed since 2002: On average two people are killed every week defending their land, forests, and waterways against the expansion of large-scale agriculture, dams, mining, logging, and other threats. Often they have been forced from their homes or seen their livelihoods harmed by environmental devastation. Some victims were environmental protesters killed in crackdowns, others murdered by hired assassins because they lived on a desirable plot of land.
These murders, many associated loosely enough to offer international private– public development partnerships plausible deniability, increased to four a week by 2017 (Global Witness 2018). For the first time since these deaths began to be tabulated, agribusiness moved past mining as the most dangerous sector, with 46 activists killed and a spike in massacres. The Business and Human Rights Resource Centre also documented 388 attacks upon campaigners in 2017, more than a quarter associated with conflicts over agribusiness projects (Bacchi 2018; Kelly 2018; Global Witness 2018).
5.6.3 Peasant Innovations in Asymmetric Conflict

Peasants and smallholders resist such campaigns, militarized or not. The recent literature on "repeasantization" can be recast in terms of peasants, agricultural laborers, and less-capitalized farms regaining power and autonomy against agribusiness hegemony (e.g., Schneider and Niederle 2010; Vía 2012; Rosset and Martínez-Torres 2014; Pahnke 2015; McMichael 2016). van der Ploeg (2009) theorizes the peasant condition as a suite of interrelating conditions that permit persistence and resilience in the face of class hostility. Among these are "a 'self-controlled resource base', 'co-production' or interaction between humans and nature, cooperative relations that allow peasants to distance themselves from monetary relations and market exchange, and an ongoing 'struggle for autonomy' or 'room for maneuver' that reduces dependence and aligns farming 'with the interests and prospects of the producers. . . ' " (Edelman 2011). Some of these advantages arise from exactly the adaptations on which the modeling here converges. Peasants lose economies of scale but during downturns can survive as subsistence operations or, conversely, outcompete suffering industrial wage-labor farms by selling their products at lower prices, a history of market adaptation that predated industrial production (Banaji 1990; R. G. Wallace 2018b). Contrary to their repute, the successes of economies of scale are context-dependent, allowing for epistemic alternatives (Reinhardt and Barlett 1989). The efficiencies of monoculture consolidation must race against diseconomies of production, from bad produce to pollution and disease outbreak. The natural economy of agriculture imposes multiple limits, including upon the speed and simultaneity of production.
Agribusiness must rely on its political power to survive, importing near-slave labor, globalizing production and distribution, and cornering subsidies and regulatory regimens in its favor, all products of massive State intervention (R. G. Wallace 2018b). Agribusiness finds its advantages not in the microeconomics of production, as often presented in moralist terms, but in commanding market access. Peasants across the globe have innovated asymmetric strategies opposed to such faits accomplis, from the very first enclosures of the common field (Brenner 1976; Huizer 1976; Scott 2008; La Vía Campesina 2010; R. G. Wallace 2018b,c). In some contexts, these strategies transitioned into military action or were undertaken simultaneously (Taber 1965/2002; Hawes 1990). During the Guinean war on
Portuguese colonialism, the PAIGC under Amílcar Cabral, trained as an agronomist, supported the return to autonomous resource management severed from the colonial export economy (McCulloch 1981; Chabal 1983; Neves 2017). Models that explicitly connect disease dynamics to these political economies, as we have presented here, can contribute toward strategizing advances in public health prevention and toward the farmer autonomy that a growing literature indicates is necessary for scaling down infectious disease outbreaks of agricultural origin. Against the sovereign epistemic individualism at the core of a dated notion of natural science (Code 2013), and a related responsibility skepticism opposed to collective action (Doan 2016), researchers need to openly articulate and act on the accumulating evidence that successful public health and disease control broadly require the commons in land and community. The specifics in form and content represent a cutting-edge science of the twenty-first century.
Mathematical Appendix

Correcting Anderson and May

The basic deterministic statement of the Anderson and May (1991) treatment of genetic diversity and epidemic spread is that the rate of change of infected individuals of genetic strain i is dIi/dt = βIiSi − vIi,
(5.31)
where, letting Si = gi N and taking v as the “removal rate,” we get dIi /dt = (βgi N − v)Ii ,
(5.32)
which, for fixed β, explodes as N → ∞. The essential point, however, is to recognize that the argument leading to Eq. (5.12) applies to the infectivity parameter β. That is, the corrected deterministic equation is dIi /dt = Ii (β0 (1 − exp[−B/N])gi N − v) → Ii (β0 Bgi − v)
(5.33)
for which R0 ≡ β0Bgi/v declines below 1 with increasing genetic diversity, i.e., other things being equal, with sufficient decline in gi. We can, however, do even better by introducing more stochasticity via a volatility term in I. We then obtain the SDE
(5.34)
Carrying out an Ito Chain Rule expansion on log(I) gives a Doléans-Dade exponential

E(I) ∝ exp([β0Bg − (σ²/2 + v)]t),    (5.35)
which, because of the added σ²/2 term (further stochasticity), collapses even more rapidly than the solution to the corrected deterministic model.
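A minimal numerical sketch of the appendix argument, with all parameter values illustrative: the saturating contact correction removes the N → ∞ explosion of Eq. (5.32), the R0 of Eq. (5.33) falls below 1 as diversity lowers gi, and the volatility term of Eq. (5.35) shifts the growth exponent negative.

```python
import math

# Numerical sketch of the corrections to Anderson and May. With beta fixed,
# Eq. (5.32) explodes as N grows; replacing beta with the saturating form
# beta0*(1 - exp(-B/N)) of Eq. (5.33) gives a finite limit and reproduction
# number R0 = beta0*B*g_i/v, which declines below 1 as genetic diversity
# reduces g_i. All parameter values here are illustrative only.

beta0, B, v = 0.5, 1.0, 0.2

def effective_contact(N):
    # beta0*(1 - exp(-B/N))*N -> beta0*B as N -> infinity: no explosion.
    return beta0 * (1.0 - math.exp(-B / N)) * N

def R0(g_i):
    return beta0 * B * g_i / v

for n in (1, 2, 3, 5):
    # n equally common strains: g_i = 1/n, so here R0 = 2.5/n.
    print(n, R0(1.0 / n))   # falls below 1 between n = 2 and n = 3

# Eq. (5.35): volatility sigma shifts the Doleans-Dade growth exponent to
# beta0*B*g - (sigma**2/2 + v); added stochasticity alone can force collapse.
def growth_exponent(g, sigma):
    return beta0 * B * g - (sigma**2 / 2.0 + v)

print(growth_exponent(0.5, 0.0))   # positive: marginal growth without noise
print(growth_exponent(0.5, 0.5))   # negative: noise drives extinction
print(effective_contact(1e6))      # approaches beta0*B = 0.5
```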
Ito Chain Rule Calculation

The nonequilibrium steady-state relation ⟨dgt⟩ = 0 follows from Eq. (5.23), using the base SDE in Z of Eq. (5.29).
Acknowledgments The authors gratefully acknowledge Dr. Lenny Hogerwerf for her perspicacious comments. The Strategic Programme (SPR) of the Netherlands National Institute for Public Health and the Environment (RIVM) partially funded the work presented here.
References Allen, J., & Lavai, S. (2014). 'Just-in-time' disease: Biosecurity, poultry and power. Journal of Cultural Economy, 8(3), 342–360. Allen T., Murray, K. A., Zambrana-Torrelio, C., Morse, S. S., Rondinini, C., Di Marco, M., Breit, N., Olival, K. J., & Daszak, P. (2017). Global hotspots and correlates of emerging zoonotic diseases. Nature Communications, 8, 1124. Altholz, R., Rodríguez, J. E. M., Saxon, D., Martínez, M. A. U., & Tirado, L. M. U. (2017). Represa de Violencia: El Plan que Asesinó a Berta Cáceres (in Spanish). Grupo Asesor Internacional de Personas Expertas. Anderson, R., & May, R. (1991). Infectious diseases of humans: Dynamics and control. Oxford University Press. Atkins, K. E., Wallace, R. G., Hogerwerf, L., Gilbert, M., Slingenbergh, J., Otte, J., & Galvani, A. (2010). Livestock landscapes and the evolution of influenza virulence, Virulence Team Working Paper No. 1. Animal Health and Production Division, Food and Agriculture Organization of the United Nations. Bacchi, U. (2018). Killings and threats against land rights defenders soar in 2017: rights group. Reuters, 6 February. https://www.reuters.com/article/us-global-rights-attacks/killingsand-threats-against-land-rights-defenders-soar-in-2017-rights-group-idUSKBN1FQ24D Baker, Y. K. (2014). Global capitalism and Iraq: The making of a neoliberal state. International Review of Modern Sociology, 40(2), 121–148. Ballvé, T. (2013). Grassroots masquerades: Development, paramilitaries, and land laundering in Colombia. Geoforum, 50, 62–75. Barrett, S. (2013). Economic considerations for the eradication endgame. Philosophical Transactions of the Royal Society B, 368, 2012014. Banaji, J. (1990). Illusions about the peasantry: Karl Kautsky and the agrarian question. The Journal of Peasant Studies, 17(2), 288–307. Bauerly, B. (2016 [2018]). The Agrarian seeds of empire: The political economy of agriculture in US State Building. Haymarket Books. Böhm, S., & Batta, A. (2010).
Just doing it: Enjoying commodity fetishism with Lacan. Organization, 17(3), 345–361. Boni, M. F., Galvani, A. P., Wickelgren, A. L., & Malani, A. (2013). Economic epidemiology of avian influenza on smallholder poultry farms. Theoretical population biology, 90, 135-144. Bowden, M. (2017). Hue 1968: A turning point of the American War in Vietnam. Atlantic Monthly Press. Brenner, R. (1976). Agrarian class structure and economic development in pre-industrial Europe. Past & Present, 70, 30–75. Carley, R., & Molina, H. (2011). How women work: The symbolic and material reproduction of migrant labor in United States agribusiness. Journal of Identity and Migration Studies, 5, 37– 62. Chabal, P. (1983). Amilcar Cabral: Revolutionary leadership and People’s War. Cambridge University Press. Chandrasekaran, R. (2012). Little America: The war within the war for Afghanistan. Alfred A. Knopf. Chappell, M. J. (2018). Beginning to End Hunger: Food and the environment in Belo Horizonte, Brazil, and beyond. University of California Press.
Chaufan, C., & Saliba, D. (2019). The global diabetes epidemic and the nonprofit state corporate complex: Equity implications of discourses, research agendas, and policy recommendations of diabetes nonprofit organizations. Social Science Medicine, 223, 77–88. Chiriboga, D., Buss, P., Birn, A. E., Garay, J., Muntaner, C., & Nervi, L. (2015). Investing in health. The Lancet, 383(9921), 949. Chowdhury, A., Mollah, S., & Al Farooque, O. (2018). Insider-trading, discretionary accruals and information asymmetry. The British Accounting Review, 50(4), 341–363. Clapp, J. (2014). Financialization, distance and global food politics. The Journal of Peasant Studies, 41(5), 797–814. Clayton, A. (1994 [2013]). The wars of French decolonization. Routledge. Code, L. (2013). Doubt and denial: Epistemic responsibility meets climate change scepticism. Onati Socio-Legal Series, 3(5), 838–853. Cooper, M. H. (2017). Open up and say “baa”: Examining the stomachs of ruminant livestock and the real subsumption of nature. Society and Natural Resources, 30(7), 812–828. Cordesman, A. H. (2019). The state of the fighting in the Afghan war in mid-2019. Center for Strategic International Studies. https://csis-prod.s3.amazonaws.com/s3fspublic/publication/ 190813AfghanMilitaryProgressfinal.pdf Cover, T., & Thomas, J. (2006). Elements of information theory (2nd ed.). Wiley. Craviotti, C. (2016). Which territorial embeddedness? Territorial relationships of recently internationalized firms of the soybean chain. The Journal of Peasant Studies, 43(2), 331–347. Davis, C. L. (2003). Food fights over free trade: How international institutions promote agricultural trade liberalization. Princeton University Press. de Groot, S., & Mazur, P. (1984). Nonequilibrium thermodynamics. Dover. Doan, M. D. (2016). Responsibility for collective inaction and the knowledge condition. Social Epistemology, 30, 532–554. Edelman, M. (2011). 
Van der Ploeg, Jan Douwe: The New Peasantries: struggles for autonomy and sustainability in an era of empire and globalization. Human Ecology, 39(1), 111–113. Eksin, C., Shamma, J. S., & Weitz, J. S. (2017). Disease dynamics in a stochastic network game: a little empathy goes a long way in averting outbreaks. Scientific Reports, 7, 44122. Ercsey-Ravasz, M., Toroczkai, Z., Lakner, Z., & Baranyi, J. (2012). Complexity of the international agro-food trade network and its impact on food safety. PLoS One, 7(5), e37810. Farmer, P. (2008). Challenging orthodoxies: the road ahead for health and human rights. Health and Human Rights, 10(1), 519. Farrell, T., Osinga, F., & Russell, J. (2013). Military adaptation in Afghanistan. Stanford University Press. Feynman, R. (2000). Lectures on computation. Westview. Foster J. B., York, R., & Clark, B. (2010). The ecological rift: Capitalism’s war on the Earth. Monthly Review Press. Fracchia, J. (2017). Organisms and objectifications: A historical-materialist inquiry into the Human and the animal. Monthly Review, 68, 1–17. Funk, S., Bansal, S., Bauch, C. T., Eames, K. T. D., Edmunds, W. J., Galvani, A. P., & Klepac, P. (2015). Nine challenges in incorporating the dynamics of behaviour in infectious diseases models. Epidemics, 10, 21–25. Gilchrist, M. J., Greko, C., Wallinga, D. B., Beran, G. W., Riley, D. G., & Thorne, P. S. (2007). The potential role of concentrated animal feeding operations in infectious disease epidemics and antibiotic resistance. Environmental Health Perspectives, 115(2), 313–316. Glantz, D. M., & House, J. M. (1995 [2015]). When Titans clashed: How the Red Army stopped Hitler. University Press of Kansas. Global Witness. (2014). Deadly Environment: The dramatic rise in killings of environmental and land defenders. https://www.globalwitness.org/en/campaigns/environmental-activists/deadlyenvironment/ Global Witness. (2018). At what cost? 
Irresponsible business and the murder of land and environmental defenders in 2017. https://www.globalwitness.org/en-gb/campaigns/environmentalactivists/at-what-cost/
Goeminne, G. (2010). Climate policy is dead, long live climate politics! Ethics, Place and Environment, 13(2), 207–214. Goscha, C. E. (2003). Building force: Asian origins of twentieth-century military science in Vietnam (1905–54). Journal of Southeast Asian Studies, 34(3), 535. Gould, P. (1993). The slow plague: A geography of the AIDS pandemic. Blackwell Publishers. Grajales, J. (2011). The rifle and the title: paramilitary violence, land grab and land control in Colombia. The Journal of Peasant Studies, 38(4), 771–792. Haalboom, A. F. (2017). Negotiating Zoonoses: Dealings with infectious diseases shared by humans and livestock in the Netherlands (1898–2001), Dissertation, Faculty of Veterinary Medicine of Utrecht University. Haesbaert, R. (2011). El mito de la desterritorialización. Del fin de los territorios a la multiterritorialidad. Siglo XXI. Harbi, M. (1980). Le FLN, Mirage et Realite des origines a la prise du pouvoir (1945–1962). Editions Jeune Afrique. Hawes, G. (1990). Theories of peasant revolution: A critique and contribution from the Philippines. World Politics, 42(2), 261–298. Heesterbeek, H., Anderson, R. M., Andreasen, V., Bansal, S., De Angelis, D., et al. (2015). Modeling infectious disease dynamics in the complex landscape of global health. Science, 347(6227), aaa4339. Hogerwerf, L. (2014). Epidemiology of Q fever in dairy goat herds in the Netherlands, Ph.D. dissertation, Utrecht University. Hogerwerf, L., Courcoul, A., Klinkenberg, D., Beaudeau, F., Vergu, E., & Nielen, M. (2013). Dairy goat demography and Q fever infection dynamics. Veterinary Research, 22, 28. Holmes, T. (2013). Farmer's market: Agribusiness and the agrarian imaginary in California and the Far West. California History, 90(2), 24–41. Horsthemke, W., & Lefever, R. (2006). Noise-induced transitions: Theory and applications in physics, chemistry, and biology (Vol. 15). Springer. Howard, P. (2016). Concentration and power in the food system: Who controls what we eat?
Bloomsbury Academic. Huizer, G. (1976). The strategy of peasant mobilization: Some cases from Latin America and Southeast Asia. In J. Nash, J. Dandler, & N. S. Hopkins (Eds.), Popular Participation in social change: Cooperatives, collectives, and nationalized industry. Mouton Publishers. Hunt, I. A. (2013). Losing Vietnam: How America abandoned Southeast Asia. University Press of Kentucky. Jin, H., Xu, Z., & Zhou, X. (2008). A convex stochastic optimization problem arising from portfolio selection. Mathematical Finance, 18, 171–183. Johnson, G., Ramachandran, V., & Walz, J. (2011). The Commanders Emergency Response Program in Afghanistan: Refining U.S. military capabilities in stability and in-conflict development activities. In Center for Global Development Working Paper No. 265. https://papers.ssrn.com/ sol3/papers.cfm?abstract_id=1931023 Johnson, G., Ramachandran, V., & Walz, J. (2012). CERP in Afghanistan: Refining military capabilities in development activities. PRISM, 3(2), 81–98. http://www.dtic.mil/dtic/tr/fulltext/ u2/a557341.pdf Jones, B. A., Grace, D., Kock, R., Alonso, S., Rushton, J., Said, M. Y., et al. (2013). Zoonosis emergence linked to agricultural intensification and environmental change. PNAS, 110, 8399– 8404. Jones, B. A., Betson, M., & Pfeiffer, D. U. (2017). Eco-social processes influencing infectious disease emergence and spread. Parasitology, 144(1), 26–36. Kelly, A. (2018). ‘Attacks and killings’: human rights activists at growing risk, study claims. The Guardian, 9 March. https://www.theguardian.com/global-development/2018/mar/09/humanrights-activists-growing-risk-attacks-and-killings-study-claims King, K., & Lively, C. (2012). Does genetic diversity limit disease spread in natural host populations? Heredity, 109, 199–203.
116
R. Wallace et al.
Klepac, P., Laxminarayan, R., & Grenfell, B. T. (2011). Synthesizing epidemiological and economic optima for control of immunizing infections. PNAS, 108(34), 14366–14370. Klepac, P., Metclaf, C. J. E., McLean, A. R., & Hampson, K. (2013). Towards the endgame and beyond: complexities and challenges for the elimination of infectious diseases. Philosophical Transactions of the Royal Society B, 368, 201201. Klepac, P., Funk, S., Hollingsworth, T. D., Metcalf, C. J. E., & Hampson, K. (2015). Six challenges in the eradication of infectious diseases, Epidemics, 10, 97–101. La Vía Campesina. (2010). Platform of Via Campesina for Agriculture, Movimento dos Trabalhadores Rurais Sem Terra. http://www.mstbrazil.org/content/platform-campesina-agriculture0 Lacan, J. (1972). Du discours psychanalytique. In G. B. Contri (Ed.), Lacan in Italia 1953–1978 (pp. 32–55). La Salamandra. Laidler, K. (1987). Chemical kinetics (3rd ed.). Harper and Row. Lefebvre, H. (1974 [1992]). The production of space. Wiley-Blackwell. Leibler, J. H., Carone, M., & Silbergeld, E. K. (2010). Contribution of company affiliation and social contacts to risk estimates of between-farm transmission of avian influenza. PLoS One, 5(3), e9888. Leonard, C. (2014). The meat racket: The secret takeover of America’s food business. Simon & Schuster. Lively, C. (2010). The effect of host genetic diversity on disease spread. The American Naturalist, 175, E149–E152 (E-Note). Luttikhuis, B., & Moses, A. (2014). Colonial counterinsurgency and mass violence: The dutch empire in Indonesia. Routledge. Malm, A. (2016). Fossil Capital: The rise of steam power and the roots of global warming. Verso. Mann, S. A. (1990). Agrarian capitalism in theory and practice. University of North Carolina Press. Martin-Prével, A., Mousseau, F., & Mittal, A. (2016). The Unholy Alliance: Five western donor shape a pro-corporate agenda for African agriculture. Oakland Institute. https://www. 
oaklandinstitute.org/sites/oaklandinstitute.org/files/unholy_alliance_web.pdf McCue, M. (2012). Follow the money: Insulating agribusiness through lobbying and suppression of individual free speech. Journal of Environmental and Public Health Law, 6(2), 215–239. McCulloch, J. (1981). Amilcar Cabral: a theory of imperialism. The Journal of Modern African Studies, 19(3), 503–511. McMichael, P. (2013). Food regimes and Agrarian questions. Fernwood Publishing. McMichael, P. (2016). Commentary: Food regime for thought. Journal of Peasant Studies, 46(3), 648–670. Meyfroidt, P., Carlson, K. M., Fagan, M. F., Gutiérrez-Vlez, V. H., Macedo, M. N., et al. (2014). Multiple pathways of commodity crop expansion in tropical forest landscapes. Environmental Research Letters, 9, 074012. Michael, M. A. (2018). Biofinance: Biological foundations of capital imaginaries. Public Seminar, 13 March. http://www.publicseminar.org/2018/03/biofinance/ Murray, J. (1989). Mathematical biology. Springer. Nair, G., Fagnani, F., Zampieri, S., & Evans, R. (2007). Feedback control under data rate constraints: An overview. Proceedings of the IEEE, 95, 108–137. Neves, J. (2017). Ideology, science, and people in Amílcar Cabral. Hist/’oria, Ciências, Sa/’ude – Manguinhos, 24(2), 1–14. O’brien, S., & Evermann, J. (1988). Interactive influence of infectious disease and genetic diversity in natural populations. Trends in Ecology and Evolution, 3, 254–259. Oliveira, G. L. T. (2016). The geopolitics of Brazilian soybeans. The Journal of Peasant Studies, 43(2), 348–372. Osinga, F. (2007). Science, strategy and war: The strategic theory of John Boyd. Routledge. Pahnke, A. (2015). Institutionalizing economies of opposition: explaining and evaluating the success of the MST’s cooperatives and agroecological repeasantization. Journal of Peasant Studies, 42(6), 1087–1107.
5 Agribusiness vs. Public Health: Disease Control in Resource-Asymmetric Conflict
117
Patel, R. (2013). The Long Green revolution. The Journal of Peasant Studies, 40(1), 1–63. Patel, R., & Moore, J. W. (2017). History of the world in seven cheap things: A guide to capitalism, nature, and the future of the planet. University of California Press. Petras, J. (2019). US imperialism: The changing dynamics of global power. Routledge. Pettini, M. (2007). Geometry and topology in hamiltonian dynamics and statistical mechanics. Springer. Pimbert, M. P. (2017). Democratizing knowledge and ways of knowing for food sovereignty, agroecology, and biocultural diversity, In M. P. Pimbert (Ed.), Food sovereignty, agroecology and biocultural diversity: constructing and contesting knowledge. Routledge. Protter, P. (2006). Stochastic integration and differential equations (2nd ed.). Springer. Quammen, D. (2012). Spillover: Animal infections and the next human pandemic. W. W. Norton. Reinhardt, N., & Barlett, P. (1989). The persistence of family farms in United States agriculture. Sociologia Ruralis, 29, 203–225. Rosset, P. M., & Martinez-Torres, M. E. (2014). Agroecology and social movements, In W. D. Schanbacher (Ed.), The global food system: Issues and solutions (pp. 191–210). Praeger. Rotz, S., & Fraser, E. (2015). Resilience and the industrial food system: analyzing the impacts of agricultural industrialization on food system vulnerability. Journal of Environmental Studies and Sciences, 5, 459–473. Schneider, S., & Niederle, P. A. (2010). Resistance strategies and diversification of rural livelihoods: the construction of autonomy among Brazilian family farmers. The Journal of Peasant Studies, 37(2), 379–405. Scott, J. C. (2008). Weapons of the weak: Everyday forms of peasant resistance. Yale University Press. Sheppard, E. (2011). Geographical political economy. Journal of Economic Geography, 11(2), 319–331. Sinno, A. H. (2008). Organization at war in Afghanistan and beyond. Cornell University Press. Smit, L. A. M., van der Sman-de Beer, F., Opstal-van Winden, A. W. 
J., Hooiveld, M., Beekhuizen, J., et al. (2012). Q Fever and pneumonia in an area with a high livestock density: A large population-based study. PLoS One, 7, e38843. Sparke, M. (2009). Unpacking economism and remapping the terrain of global health. In A. Kay & O. D.Williams (Eds.), Global health governance: Crisis, institutions and political economy. Springer. Sparke, M. (2017). Austerity and the embodiment of neoliberalism as ill-health: Towards a theory of biological sub-citizenship. Social Science Medicine, 187, 287295. Taber, R. (1965/2002). War of the flea: The classic study of Guerilla warfare. Brassey’s. Tomšiˇc, S. (2015). The capitalist unconscious: Marx and Lacan. Verso. Tscharntke, T., Klein, A. M., Kruess, A., Steffan-Dewenter, I., & Thies, C. (2005). Landscape perspectives on agricultural intensification and biodiversity - ecosystem service management. Ecology Letters, 8, 857–874. Turzi, M. (2011). The soybean republic. Yale Journal of International Affairs, 6(2), 5968. Van den Broeck, C., Parrondo, J., & Toral, R. (1994). Noise-induced nonequilibrium phase transition. Physical Review Letters, 73, 3395–3398. Van den Broeck, C., Parrondo, J., Toral, R., & Kawai, R. (1997). Nonequilibrium phase transitions induced by multiplicative noise. Physical Review E, 55, 4084–4094. van der Hoek, W., Morroy, G., Renders, N. H., Wever, P. C., Hermans, M. H., Leenders, A. C., & Schneeberger, P. M. (2012). Epidemic Q fever in humans in the Netherlands. Advances in Experimental Medicine and Biology, 984, 329–364. van der Ploeg, J. D. (2009). The New Peasantries: Struggles for autonomy and sustainability in an era of empire and globalization. Earthscan. van Dijk, G., van Dissel, J. T., Speelman, P., Stegeman, J. A., Vanthemsche, P., de Vries, J., & van Woerkum, C. M. J. (2010). Van Verwerping tot Verheffing: Q-koortsbeleid in Nederland 2005– 2010. Evaluatiecommissie Q-koorts. https://www.burgemeesters.nl/files/File/Crisisbeheersing/ docs/20101122.pdf
118
R. Wallace et al.
Vanheule, S. (2016). Capitalist discourse, subjectivity and Lacanian psychoanalysis. Frontiers in Psychology, 7, 1948. Viá, E. D. (2012). Seed diversity, farmers’ rights, and the politics of repeasantization. International Journal of Sociology of Agriculture & Food, 19(2), 229–242. Wallace, D., & Wallace, R. (1998). A plague on your houses: How New York was burned down and National Public Health crumbled. Verso. Wallace, R., Wallace, D., Ullmann, J., & Andrews, H. (1999). Deindustrialization, inner-city decay, and the hierarchical diffucion of AIDS in the USA: How neoliberal and cold war policies magnified the ecological niche for emerging infections and created a national security crisis. Environment and Planning A, 31, 113–139. Wallace, R., & Wallace, R. G. (2015). Blowback: New formal perspectives on agriculturally driven pathogen evolution and spread. Epidemiology and Infection, 143, 2068–2080. Wallace, R. (2018a). Canonical instabilities of autonomous vehicle systems: The unsettling reality behind the dreams of greed. Springer. Wallace, R. (2018b). Carl von Clausewitz, the Fog-of-War, and the AI revolution: The real world is not a game of go. Springer. Wallace, R. (2020a). Cognitive dynamics on Clausewitz landscapes: The control and directed evolution of organized conflict. Springer. Wallace, R. (2020b). On the variety of cognitive temperatures and their symmetry-breaking dynamics. Acta Biotheoretica. https://doi.org/10.1007/s10441-019-09375-7 Wallace, R. G. (2016). Big farms make big flu: Dispatches on infectious disease, agribusiness, and the nature of science. Monthly Review Press. Wallace, R. G. (2018a). Black mirror: Did neoliberal epidemiology impose its image upon Ebola’s epicenter? Antipode, 25 January. https://antipodefoundation.org/2018/01/25/wallacebook-review-ebola/ Wallace, R. G. (2018b). Vladimir Iowa Lenin, Part 1: A Bolshevik studies American agriculture. Capitalism Nature Socialism, 29(2), 92–107. Wallace, R. G. (2018c). 
Vladimir Iowa Lenin, Part 2: On rural proletarianization and an alternate food future. Capitalism Nature Socialism, 29(3), 21–35. Wallace, R. G., & Kock, R. (2012). Whose food footprint? Capitalism, agriculture and the environment. Human Geography, 5(1), 63–83. Wallace, R. G., Kock, R., Bergmann, L., Gilbert, M., Hogerwerf, L., Pittiglio, C., Mattioli, R., & Wallace, R. (2016). Did neoliberalizing West African Forests produce a new niche for Ebola? International Journal of Health Services, 46(1), 149–165. Wallace, R. G., Okamoto, K., & Liebman, A. (2018). Gated epidemiologies. In D. Monk & M. Sorkin (Eds.), Between catastrophe and redemption: Essays in honor of Mike Davis. UR. Watts, S. (1997). Epidemics and history: Disease, power, and imperialism. Yale University Press. Wise, T. A. (2019). Eating tomorrow: Agribusiness, family farmers, and the battle for the future of food. The New Press. Woods, K. (2018). Rubber out of the ashes: locating Chinese agribusiness investments in ‘armed sovereignties’ in the Myanmar–China borderlands. Territory, Politics, Governance. Published online 07 May. https://www.tandfonline.com/doi/abs/10.1080/21622671.2018.1460276
Chapter 6
Power Relations and COVID-19 in New York City Deborah Wallace
6.1 Introduction
Early in the American COVID-19 pandemic, New York City pumped out disease as the national super-epicenter among metropolitan regions (NY Times 2020a), much as it had during the 1980s–1990s AIDS epidemic (Wallace et al. 1997b). The NYC epidemic crested in early April (Department of Health 2020). In contrast to the AIDS epidemic, for which Manhattan provided the epicenter of the NYC metropolitan region, the Bronx served that role for COVID-19 (Wallace and Wallace 2021). The percent of positive swab tests declined rapidly, and the increases in cumulative case rates (cases/100,000) and death rates (deaths/100,000) slowed, taking New York City out of the role of national epidemic epicenter. The six weeks between May 31 and Bastille Day (July 14) were weeks of rapid decline. In ecology, refugia serve as eventual re-seeders of declining populations. In disease ecology, stubborn small areas of high prevalence and incidence (disease "refugia") likewise serve as re-seeders of future spread when conditions become ripe. The geography of COVID markers and of their decline thus becomes relevant to control, especially in light of the likely introduction of large numbers of infections from states of high incidence such as California, Florida, Texas, Georgia, and Iowa. R.G. Wallace (2003) documented how anti-retroviral treatment reached poor neighborhoods of color in New York City much later than it reached wealthier white neighborhoods. Did control of COVID, as indicated by its markers (percent swab positive, cumulative cases/100,000, percent rise in cumulative cases, and new cases per square mile), affect the ZIP Code areas simultaneously, or are there large differences among them in these changes?
D. Wallace () Independent Disease Ecologist, New York, NY, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 R. Wallace (ed.), Essays on Strategy and Public Health, https://doi.org/10.1007/978-3-030-83578-1_6
The answer to this question rests on subtle, difficult-to-interpret data. Testing practice differed as the epidemic progressed. Only very ill people were tested in March–April 2020 (Department of Health 2020). Thus, the percent positive was very large but the confirmed cases per unit population very small. Even testing of corpses was limited, so that confirmed deaths per unit population were small. Presumptive deaths eventually were added to the confirmed deaths to get a picture of the COVID burden. After supplies of test kits were secured and the dangerous properties of the virus were better understood (namely, that asymptomatic and presymptomatic people could be infectious), the state and city tried to test widely to understand the true dimensions of the epidemic and its trends. Residents were encouraged to get tested both to catch undetected cases for quarantine and to clarify the trends. The City Health Department reports cumulative data on its COVID webpage (Department of Health 2020). Thus, the earlier data reflect the strangled testing of March–April. As time elapsed, this effect diminished but did not disappear. When we examine changes over time in COVID markers, we have to interpret them in the light of the changes in testing. To mitigate this problem, we shall look at changes in COVID markers between May 31 and July 14, between July 14 and July 28, and between July 28 and August 17. The latter two intervals compare data taken under similar testing intensity and extent; the first includes data heavily influenced by the early testing regime (May 31) and data less influenced by it (July 14). Results from these analyses provide a basis for developing public health strategy to prevent or minimize future waves of new potential pandemics, which are sure to arrive if the upstream drivers (deforestation, uncontrolled mineral extraction, and factory farming) continue their growth.
The previous study (Wallace and Wallace 2021) confirmed our earlier work on AIDS, tuberculosis, low-weight births, and violence (Wallace et al. 1997b; Wallace 2011; Wallace et al. 1997a), which showed that policy-based segregation, discrimination, and oppression turn poor New York City neighborhoods of color into potent foci of epidemics of microbially caused diseases, contagious risk behaviors, and disempowerment-based premature aging. This study adds weight to the necessity of crafting public health strategy that realizes the idealistic motto "E pluribus unum." The motto opposes the current delusion that Prospero's castle protects the wealthy, white elite. It also opposes the voluntary self-isolation of ethnic and religious communities from public health and safety standards, whether they be Chasidic, Latinx, Eastern European, or African, if community practices promote disease propagation and threats to public safety.
6.2 Methods
The following data were acquired from the COVID webpage of the NYC Department of Health (DoH): percent positive swab tests, cumulative case rate, and cumulative death rate. These data, from May 31, July 14, July 28, and August 17, were by ZIP Code area. The Excel spreadsheet of these data also included
the following ZIP Code area socioeconomic data from the 2015–2018 American Community Survey, acquired from the database InfoShare NYC: percent Asian, percent Black, percent Latinx, percent white, percent foreign-born, number of college or higher degrees per 100 adults, percent extremely overcrowded housing units, individual median income, percent over age 65, poverty rate, rent stress (percent of renters paying over 35% of their income on rent), and unemployment rate. All data and analyses are limited to the four central boroughs, to the exclusion of Staten Island. All ZIP Code areas with populations under 10,000 were also excluded. The boroughs have the following numbers of included ZIP Code areas: Brooklyn 37, Queens 56, Manhattan 35, and Bronx 24.
Percent changes in COVID markers May 31–July 14 were calculated as follows:

% change in percent positive = (% positive May 31 − % positive July 14) / % positive May 31

% change in case rate = (case rate July 14 − case rate May 31) / case rate May 31

Percent changes July 14–July 28 and July 28–August 17 were similarly calculated. An index of socioeconomic (SE) risk was developed as follows. Nine SE factors were selected for inclusion: % Black, % Latinx, % white, number of college or higher degrees per 100 adults, % extremely overcrowded housing units (EOC), individual median income, poverty rate, rent stress %, and unemployment rate. For each ZIP Code area, each SE factor was divided by the median of all 152 areas. Because of the long literature on public policies and private practices that stress communities of color, normalized % Black and % Latinx were added to the index and % white subtracted. Also subtracted were normalized higher education degrees per 100 adults and individual median income. The normalized values of the other SE factors were added: EOC %, poverty rate, rent stress %, and unemployment rate.
Cumulative risk index = normalized % Black + % Latinx + EOC % + poverty rate + rent stress % + unemployment rate − % white − higher degrees per 100 adults − individual median income.

Three sets of criteria selected ZIP Code areas of potential future outbreaks: (1) Over 3000 cumulative cases per 100,000, above-median percent positive, and above-median percent increase in case rate between the two dates of interest (May 31–July 14 and July 14–July 28). (2) Over 3000 cumulative cases per 100,000, above-median percent positive, and above-median increase in new cases per square mile; this set applied only to July 28–August 17. (3) Over 50 new cases per square mile between July 28 and August 17.
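The index construction described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' actual workflow (which used an Excel spreadsheet); the factor names and the example values in the test are hypothetical.

```python
# Sketch of the cumulative socioeconomic (SE) risk index described above:
# each factor is normalized by its median across all ZIP Code areas, then
# added to or subtracted from the index per the definition in the text.
from statistics import median

# Factors added to the index vs. subtracted from it (names are illustrative).
ADDED = ["pct_black", "pct_latinx", "pct_eoc", "poverty_rate",
         "rent_stress_pct", "unemployment_rate"]
SUBTRACTED = ["pct_white", "degrees_per_100_adults", "median_income"]

def risk_index(areas):
    """areas: list of dicts, one per ZIP Code area, keyed by SE factor.
    Returns the cumulative risk index for each area, in input order."""
    meds = {f: median(a[f] for a in areas) for f in ADDED + SUBTRACTED}
    out = []
    for a in areas:
        idx = sum(a[f] / meds[f] for f in ADDED)
        idx -= sum(a[f] / meds[f] for f in SUBTRACTED)
        out.append(idx)
    return out
```

Note that an area exactly at the median on every factor scores 6 − 3 = 3, so the index is interpretable only relative to other areas, not as an absolute risk.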
The latter two sets of criteria were used because, for ZIP Code areas with high cumulative case rates, an extremely large percent increase in case rate would have been necessary to qualify after late July. The COVID markers, their changes over time, and the ZIP Code area socioeconomic factors were analyzed with simple, standard statistical procedures: t-test, Mann–Whitney test, bivariate regression, and multivariate backwards stepwise regression. Simple mapping illuminated the geography of potential future outbreaks.
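As a minimal sketch, the three selection criteria can be expressed as a filter over ZIP Code area records. The field names here are illustrative, not taken from the Health Department files, and the strict-inequality reading of "above median" is an assumption.

```python
# Sketch of the three future-outbreak selection criteria described above.
from statistics import median

def flag_outbreak_areas(areas, period):
    """areas: list of dicts with keys 'cum_cases_per_100k', 'pct_positive',
    'pct_case_increase', 'new_cases_per_sq_mile' (all hypothetical names).
    period: 'early' applies criterion set 1 (May 31-Jul 14, Jul 14-Jul 28);
    'late' applies sets 2 and 3 together (Jul 28-Aug 17)."""
    med_pos = median(a["pct_positive"] for a in areas)
    med_rise = median(a["pct_case_increase"] for a in areas)
    med_density = median(a["new_cases_per_sq_mile"] for a in areas)
    flagged = []
    for a in areas:
        # Common core of sets 1 and 2: high cumulative burden + above-median positivity.
        heavy = a["cum_cases_per_100k"] > 3000 and a["pct_positive"] > med_pos
        if period == "early":
            hit = heavy and a["pct_case_increase"] > med_rise
        else:
            hit = (heavy and a["new_cases_per_sq_mile"] > med_density) \
                  or a["new_cases_per_sq_mile"] > 50
        if hit:
            flagged.append(a)
    return flagged
```

The late-period branch unions criteria 2 and 3, so an area can qualify on new-case density alone even with a modest cumulative case rate.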
6.3 Results
Percent change in percent positive swab tests measures the decline in positive tests between May 31 and July 14, whereas the percent changes in case rates measure the increase in cumulative cases, likewise for death rates. On average and median, Brooklyn and Manhattan enjoyed the largest declines in percent positive swab tests and the Bronx the smallest (Table 6.1). However, Manhattan showed the largest percent increases in case rate and the Bronx the smallest. The percent increase in death rate of Queens was by far the largest of all four boroughs. It was significantly different on average and median from all the others.

Table 6.1 Percent changes in COVID markers: four NYC boroughs (change between May 31 and July 14, 2020)

                                      Brooklyn   Queens   Manhattan   Bronx
Decline in percent positive swabs
  Mean                                  50.4%     40.7%     48.6%     35.1%
  Median                                50.0%     39.3%     50.0%     35.9%
  SD                                     9.3%      6.8%      7.0%      3.6%
  Min                                   29.2%     30.0%     26.9%     26.9%
  Max                                   70.0%     58.1%     60.0%     41.2%
Increase in cumulative case rates
  Mean                                   9.3%      8.4%     10.0%      7.5%
  Median                                 8.7%      7.3%      9.5%      7.4%
  SD                                     2.5%      6.5%      3.2%      2.1%
  Min                                    5.2%      1.3%      2.4%      3.3%
  Max                                   14.2%     52.0%     20.0%     11.5%
Increase in cumulative death rates
  Mean                                  12.0%     25.4%     10.3%     15.9%
  Median                                 9.6%     17.6%      9.4%     10.8%
  SD                                    12.5%     25.9%      7.6%     18.8%
  Min                                    2.5%      0.2%      0% (a)    1.5%
  Max                                   78.0%    140.5%     35.1%     90.1%

(a) Possibly unreliable data: the Battery Park City ZIP code was reported as having no COVID deaths ever.
Wallace and Wallace (2021) compared Brooklyn with Queens and Manhattan with the Bronx with respect to May 31 cumulative COVID markers because of their similarities of population size. When the May 31–July 14 changes are compared, Brooklyn and Manhattan showed significantly greater declines in percent positive tests on average and median than their comparison partners Queens and the Bronx, respectively. These differences were large; the probabilities ranged from E-6 to E-12. Case rate changes also showed significant differences. However, death rate changes did not differ significantly between Manhattan and the Bronx, though they did between Brooklyn and Queens. Table 6.2 displays the numbers of ZIP Code areas by borough within four ranges of percent positive swabs on May 31, July 14, July 28, and August 17, 2020: below 10%, 10–19%, 20–29%, and 30% and above. No ZIP Code area on May 31 registered less than 10% positive swabs. Manhattan had 60% of its areas in the 10–19% range on May 31. In contrast, the Bronx had over 70% of its areas in the 30%-and-above range. By July 14, percent positive swabs had declined greatly. Manhattan had 60% of its areas in the below-10% range, and Brooklyn had five areas in that low range. No ZIP Code area registered 30% or above positive swabs. However, the majority of areas in Queens and the Bronx registered percents positive of 20–29%. On July 28, only the
Table 6.2 Number of ZIP code areas within designated ranges of %positive swabs
Range 31-May