
COLLABORATION AND COMPETITION IN BUSINESS ECOSYSTEMS

ADVANCES IN STRATEGIC MANAGEMENT
Series Editor: Brian S. Silverman

Recent Volumes:

Volume 21: Business Strategy over the Industry Lifecycle. Edited by: Joel A. C. Baum and Anita M. McGahan
Volume 22: Strategy Process. Edited by: Gabriel Szulanski, Joe Porac and Yves Doz
Volume 23: Ecology and Strategy. Edited by: Joel A. C. Baum, Stanislav D. Dobrev and Arjen van Witteloostuijn
Volume 24: Real Options Theory. Edited by: Jeffrey J. Reuer and Tony W. Tong
Volume 25: Network Strategy. Edited by: Joel A. C. Baum and Tim J. Rowley
Volume 26: Economic Institutions of Strategy. Edited by: Jackson A. Nickerson and Brian S. Silverman
Volume 27: Globalization of Strategy Research. Edited by: Joel A. C. Baum and Joseph Lampel
Volume 28: Project-Based Organizing and Strategic Management. Edited by: Gino Cattani, Simone Ferriani, Lars Frederiksen and Florian Täube
Volume 29: History and Strategy. Edited by: Steven J. Kahl, Brian S. Silverman and Michael A. Cusumano

ADVANCES IN STRATEGIC MANAGEMENT
VOLUME 30

COLLABORATION AND COMPETITION IN BUSINESS ECOSYSTEMS

EDITED BY

RON ADNER
Tuck School of Business, Dartmouth College, Hanover, NH, USA

JOANNE E. OXLEY
Rotman School of Management, University of Toronto, Canada

BRIAN S. SILVERMAN
Rotman School of Management, University of Toronto, Canada

United Kingdom – North America – Japan – India – Malaysia – China

Emerald Group Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK

First edition 2013

Copyright © 2013 Emerald Group Publishing Limited

Reprints and permission service. Contact: [email protected]

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation implied or otherwise, as to the chapters' suitability and application and disclaims any warranties, express or implied, to their use.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-78190-826-6
ISSN: 0742-3322 (Series)

ISOQAR certified Management System, awarded to Emerald for adherence to Environmental standard ISO 14001:2004. Certificate Number 1985.

CONTENTS

LIST OF CONTRIBUTORS

INTRODUCTION: COLLABORATION AND COMPETITION IN BUSINESS ECOSYSTEMS

PART I: THE ECOSYSTEM PHENOMENON

COLLABORATING WITH COMPLEMENTORS: WHAT DO FIRMS DO?
Rahul Kapoor

EVOLVING AN OPEN ECOSYSTEM: THE RISE AND FALL OF THE SYMBIAN PLATFORM
Joel West and David Wood

BUILDING JOINT VALUE: ECOSYSTEM SUPPORT FOR GLOBAL HEALTH INNOVATIONS
Julia Fan Li and Elizabeth Garnsey

PART II: ANALYTICAL PERSPECTIVES

BUSINESS ECOSYSTEMS' EVOLUTION – AN ECOSYSTEM CLOCKSPEED PERSPECTIVE
Saku J. Mäkinen and Ozgur Dedehayir

DO PRODUCT ARCHITECTURES AFFECT INNOVATION PRODUCTIVITY IN COMPLEX PRODUCT ECOSYSTEMS?
Sendil K. Ethiraj and Hart E. Posen

THE ORGANIZATION OF INNOVATION IN ECOSYSTEMS: PROBLEM FRAMING, PROBLEM SOLVING, AND PATTERNS OF COUPLING
Stefano Brusoni and Andrea Prencipe

PART III: INTERDISCIPLINARY LINKAGES AND ANTECEDENTS

THE EMERGENCE AND COORDINATION OF SYNCHRONY IN ORGANIZATIONAL ECOSYSTEMS
Jason P. Davis

OPEN INNOVATION NORMS AND KNOWLEDGE TRANSFER IN INTERFIRM TECHNOLOGY ALLIANCES: EVIDENCE FROM INFORMATION TECHNOLOGY, 1980–1999
Hans T. W. Frankort

THE ORIGINS AND DYNAMICS OF PRODUCTION NETWORKS IN SILICON VALLEY
AnnaLee Saxenian

NETWORKS AND KNOWLEDGE: THE BEGINNING AND END OF THE PORT COMMODITY CHAIN, 1703–1860
Paul Duguid

TOWARDS A NETWORK PERSPECTIVE ON ORGANIZATIONAL DECLINE
Brian Uzzi

EXPLAINING THE ATTACKER'S ADVANTAGE: TECHNOLOGICAL PARADIGMS, ORGANIZATIONAL DYNAMICS, AND THE VALUE NETWORK
Clayton M. Christensen and Richard S. Rosenbloom

LIST OF CONTRIBUTORS

Stefano Brusoni
Department of Management, Technology and Economics D-MTEC, ETH Zürich, Zürich, Switzerland

Clayton M. Christensen
Graduate School of Business Administration, Harvard University, Boston, MA, USA

Jason P. Davis
Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA

Ozgur Dedehayir
CITER Center for Innovation and Technology Research, Department of Industrial Management, Tampere University of Technology, Tampere, Finland

Paul Duguid
School of Information & Management Systems, University of California, Berkeley, CA, USA; School of Management and Business, Queen Mary, University of London, London, UK

Sendil K. Ethiraj
London Business School, London, UK

Hans T. W. Frankort
Cass Business School, City University London, London, UK

Elizabeth Garnsey
Centre for Technology Management, University of Cambridge, Cambridge, UK

Rahul Kapoor
The Wharton School, University of Pennsylvania, Philadelphia, PA, USA

Julia Fan Li
Centre for Technology Management, University of Cambridge, Cambridge, UK

Saku J. Mäkinen
Department of Industrial Management, Tampere University of Technology, Tampere, Finland

Hart E. Posen
University of Wisconsin-Madison, Madison, WI, USA

Andrea Prencipe
Department of Business and Management, LUISS Guido Carli University, Rome, Italy

Richard S. Rosenbloom
Graduate School of Business Administration, Harvard University, Boston, MA, USA

AnnaLee Saxenian
Department of City and Regional Planning, University of California, Berkeley, CA, USA

Brian Uzzi
Department of Organization Behavior, J.L. Kellogg Graduate School of Management, Evanston, IL, USA

Joel West
KGI – Keck Graduate Institute, Claremont, CA, USA

David Wood
Delta Wisdom Ltd., Surbiton, UK

INTRODUCTION: COLLABORATION AND COMPETITION IN BUSINESS ECOSYSTEMS

Rapid technological change, globalization, and the recent period of financial turbulence have brought us to a point in history where managers are painfully aware that "no man [or firm] is an island." Success, in both the profit and nonprofit sectors, increasingly relies on collaboration with a broad set of stakeholders no less than it does on the firm's own actions, or those of its traditional rivals.

Scholars have long recognized the embeddedness of firms and activities within their broader environment (e.g., Thompson, 1967). Indeed, there is a well-established tradition of conceptualizing the environment along different dimensions to identify systemic relationships between firm and context, such as social systems (e.g., Lawrence & Lorsch, 1967), institutional systems (e.g., Scott, 1981), and technological systems (e.g., Hughes, 1983; Rosenberg, 1976). But over the past thirty years we have witnessed a marked acceleration in strategic business interconnectedness as firms have embraced the choice to not go it alone. An important corollary to the revolution in information and communication technologies has been a matching increase in reliance on – and expectation of – coordination across firms, as witnessed in a vast range of settings, from personal computers (Baldwin & Clark, 2000; Moore, 1996) and semiconductor manufacturing (Adner & Kapoor, 2010) to health care (Iansiti & Levien, 2004), telecommunications (Gawer & Cusumano, 2002), and banking (Jacobides, 2005).

WHY ECOSYSTEMS?

Early models of business ecosystems (e.g., Iansiti & Levien, 2004; Intel, 2004; Moore, 1993; SAP, 2006) embraced the biological analogy of interdependent species coexisting in a productive and sustainable arrangement to conceptualize value creation in a world of new technological possibility. Informed by strategic applications of cooperative game theory (e.g., Brandenburger & Stuart, 1996) and module complementarity (e.g., Milgrom & Roberts, 1990), this work emphasized the importance of coordination among a community of organizations and activities where combined effort is required for the creation of value, expanding the concern with complementary assets (e.g., Teece, 1986) to include strategic interactions with independent complementors (e.g., Brandenburger & Nalebuff, 1996).

As the work on ecosystems has progressed, increasing attention has been paid to the nature of the value proposition that is being created and its implications for ecosystem structure and strategy. Thus, Adner (2006, 2012) defines the ecosystem as comprising the set of partners that need to be brought into alignment in order for a firm's value proposition to materialize in the marketplace. Inherent in this definition is the realization that the boundary of the ecosystem is intimately related both to the nature of the value proposition and to the structure of interdependence. Strategy in ecosystems must account for creating a differentiated value proposition that attracts not only the end consumer but also the required partners. Thus, a key distinction between competitive strategy and ecosystem strategy lies in the explicit consideration of actors who lie off the critical path to the end consumer: participation (who needs to be included), structure (who hands off to whom), and governance (who sets the rules).

Just as the transition from vertically integrated firms to supply chains highlighted new issues for the strategy field – expanding the make-versus-buy decision to incorporate the varied spectrum of supplier relationship strategies and associated capability dynamics (e.g., Alcacer & Oxley, 2013; Chesbrough & Teece, 1996; Macher & Mowery, 2004) – so too does the transition from supply chains to ecosystems, in which value creation depends on successfully navigating dependencies across a set of multilateral stakeholders. It changes the focus of decision and analysis from the boundaries of the firm to the boundaries of the value proposition. With this transition, consideration shifts from dyads to coalitions; from bilateral governance relationships between a well-defined buyer and a well-defined supplier to multilateral coordination in which the positioning of partners along the value chain is itself a subject of negotiation; from hierarchical relationships to mutual dependence and induced cooperation.

It is becoming clear that effective analysis of relationships between parties in an ecosystem takes us beyond the critical questions of bargaining power and transaction costs to include the very structure of interdependence and leadership contests among collaborating (i.e., nonrival) firms (e.g., Ansari & Garud, 2009; Gawer & Cusumano, 2002; Jacobides, Knudsen & Augier, 2006). Thus, analysis conducted at the level of the ecosystem expands competitive dynamics beyond traditional notions of rivalry (i.e., Samsung vs. Apple competing along industry boundaries to sell similar offers to consumers) to encompass competition for ecosystem leadership (i.e., Apple vs. AT&T competing across industry boundaries to specify the structure, rules, and membership of the collaboration that will deliver value to a shared set of consumers). Moreover, explicitly accounting for the multilateral character of value creation presents additional dimensions of choice to be considered in the construction and the study of strategy.

AN EMERGING RESEARCH FIELD

In business ecosystem research – as in any emerging research field or novel phenomenon identified in the literature – there is still significant uncertainty and debate about the nature and boundaries of the object of study. This debate is heightened by the apparent overlaps and strong linkages with other established and emergent strategy research streams focusing on alliances and networks, supply chain management, coopetition, open innovation, platform competition, etc.

The chapters in this volume aim to deepen our understanding of business ecosystems by attacking these issues on several fronts. Some of the chapters in the volume (notably those by Kapoor, West & Wood, and Li & Garnsey) serve to clarify and illuminate the phenomenon itself by providing rich structured descriptions of interactions among ecosystem participants in a variety of contexts; others take a more analytical approach, developing empirically grounded theoretical models of interaction among ecosystem participants (Davis, Mäkinen & Dedehayir, and Brusoni & Prencipe). In addition, several of the chapters provide insights into how different aspects of the business ecosystems perspective can inform (or be informed by) our understanding of related phenomena, such as interfirm alliances and open innovation (Frankort), product architectures and modularity (Ethiraj & Posen), networks (Davis), and platform competition (West & Wood).

We complement this new research with reprints of four "classic" chapters from related fields that constitute important intellectual antecedents of business ecosystem research, and foreshadow some of the important issues and patterns of interaction among ecosystem participants: from economic geography (Saxenian), business history (Duguid), organizational networks (Uzzi), and technology transitions (Christensen & Rosenbloom).

THIS VOLUME – STRUCTURE AND CHAPTER SYNOPSES

Part I: The Ecosystem Phenomenon

In the first chapter of the volume, "Collaborating with Complementors: What Do Firms Do?," Rahul Kapoor presents new fieldwork and survey-based evidence that exposes fascinating patterns of collaboration among ecosystem participants in the semiconductor industry. Among the key findings emerging from this research is that the extent of collaboration among complementors depends on the external environment in which an ecosystem is embedded, the competitive interactions among ecosystem participants, and the organizational choices of individual firms. Kapoor's study thus highlights the duality of value creation and value appropriation that is a central feature of business ecosystems, and provides important insights into how ecosystem competition influences (and is influenced by) organization design.

The duality of value creation and value appropriation is also evident in West and Wood's case study of the Symbian smartphone ecosystem, described in their chapter "Evolving an Open Ecosystem: The Rise and Fall of the Symbian Platform." This case study also highlights the challenges associated with ecosystem or platform leadership (Gawer, 2010) and ecosystem dynamics, particularly in the face of strong competition from rival platforms (in this case Apple's iPhone and Google's Android).

The extended application of ecosystem thinking is exemplified by Li and Garnsey's chapter, "Building Joint Value: Ecosystem Support for Global Health Innovations." The focus of this case study is on an international public health initiative, led by an entrepreneurial medical devices company and implemented through an ecosystem comprising for-profit, governmental, and nongovernmental organizations. The case highlights the benefits of collaboration in the public-private context, as well as the considerable risks and challenges associated with ecosystem leadership throughout the phases of discovery, development, and delivery of an innovation.

Taken together, these three studies describe important elements of ecosystem dynamics. By combining in-depth exploration of the evolution of ecosystems embedded within three distinct environmental contexts, these chapters push the boundaries of ecosystem research by elaborating new contingencies that influence ecosystem development.

Part II: Analytical Perspectives

Analysis of ecosystem dynamics and evolution can effectively draw on a variety of disciplinary perspectives, but also benefits from novel perspectives and frameworks. Mäkinen and Dedehayir's chapter, "Business Ecosystems' Evolution: An Ecosystem Clockspeed Perspective," offers one such framework that draws on a technology systems approach. By analyzing the "clockspeeds" (time lags between successively higher levels of technology performance) of interrelated subindustries or components within a product ecosystem, it becomes possible to identify bottlenecks or technology imbalances that must be addressed in order to maximize end-user value. Mäkinen and Dedehayir describe the "ecosystem clockspeed" measure, derive implications for the evolutionary trajectory of a systemic technology, and provide an illustrative application in the PC gaming ecosystem.
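To fix the clockspeed intuition, the sketch below computes a crude "clockspeed" for each of three PC-gaming subsystems as the average number of years between successive doublings of a performance index, and flags the slowest-moving subsystem as the candidate bottleneck. This is a stylized simplification under our own assumptions, not Mäkinen and Dedehayir's actual operationalization; the subsystem names and performance figures are invented.

```python
# Hypothetical yearly peak-performance records for three PC-gaming
# subsystems (arbitrary units); illustrative data, not from the chapter.
performance = {
    "cpu": {2000: 1.0, 2002: 2.1, 2004: 4.3, 2006: 8.8},
    "gpu": {2000: 1.0, 2001: 2.2, 2002: 4.5, 2003: 9.1},
    "game_engine": {2000: 1.0, 2003: 2.0, 2006: 4.1},
}

def mean_doubling_time(series):
    """Average years between successive doublings of performance:
    a crude stand-in for a subsystem 'clockspeed' (lower = faster)."""
    years = sorted(series)
    base_year, base_perf = years[0], series[years[0]]
    doublings = []
    for y in years[1:]:
        if series[y] >= 2 * base_perf:
            doublings.append(y - base_year)
            base_year, base_perf = y, series[y]
    return sum(doublings) / len(doublings) if doublings else float("inf")

clockspeeds = {k: mean_doubling_time(v) for k, v in performance.items()}
bottleneck = max(clockspeeds, key=clockspeeds.get)  # slowest subsystem
print(clockspeeds, "-> bottleneck:", bottleneck)
```

On this toy data the game engine doubles only every three years while the GPU doubles yearly, so the engine is the imbalance holding back end-user value.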

Ethiraj and Posen's chapter, "Do Product Architectures Affect Innovation Productivity in Complex Product Ecosystems?," also concerns itself with interdependencies among components in complex product systems, focusing here on how different levels of technological interdependencies, or design information flows, impact innovation performance. Ethiraj and Posen develop an analytical framework based on a simplified form of dependence structure matrices and use historical patent citation data to map interdependencies in the PC product ecosystem. The findings that emerge from this study reinforce the notion that the returns to a firm's component innovation efforts can be affected by efforts in other parts of the ecosystem and – most notably – are affected in a way that is susceptible to systematic analysis.

Brusoni and Prencipe dig deeper into the nature of interdependence among innovation ecosystem participants in their chapter, "The Organization of Innovation in Ecosystems: Problem Framing, Problem Solving, and Patterns of Coupling." They highlight joint problem framing and solving as key ecosystem activities, and argue that the features of the problem to be solved entail different knowledge requirements and thus require different patterns of coupling among the ecosystem's actors. Drawing on the literature on modularity, Brusoni and Prencipe's analytical framework predicts under what circumstances we should observe greater distinctiveness (loose coupling) among actors in an ecosystem and when more coordinated responsiveness (tight coupling) becomes a more salient property. Problem uncertainty, complexity, and ambiguity are highlighted as key determinants, and illustrative examples from a variety of contexts point to the broad application of the framework.

Overall, these studies elaborate a range of new, systematic empirical metrics and theoretical constructs for measuring ecosystem activity. We are confident that these metrics and constructs will prove useful and influential in facilitating future research on ecosystems.

Part III: Interdisciplinary Linkages and Antecedents

Jason P. Davis's chapter, "The Emergence and Coordination of Synchrony in Organizational Ecosystems," illustrates the strong contribution that different disciplines can make to ecosystem analysis. By introducing organizational elements from network theory – particularly network sparseness and intentional coordination – into a theoretical model taken from mathematical biology, Davis develops a simulation-based model of the emergence and coordination of synchronous action in networked groups such as business ecosystems. Among the interesting findings that emerge from this exercise is that, while synchrony is more easily achieved in small, dense networks, network clustering (arguably a common feature of many innovation ecosystems) impedes synchronization.
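For readers unfamiliar with the class of models Davis draws on, the sketch below simulates a minimal Kuramoto-style system of coupled oscillators on a sparse random network and reports the standard synchrony order parameter r (r = 1 means perfect synchrony). It is an illustrative stand-in for that model class under our own assumptions; the network, parameters, and coupling rule are not Davis's actual model.

```python
import numpy as np

# Minimal Kuramoto-style simulation: n organizations as coupled
# oscillators; K is coupling strength (loosely, intentional coordination).
rng = np.random.default_rng(0)
n, K, dt, steps = 20, 1.5, 0.05, 2000
omega = rng.normal(0.0, 1.0, n)               # heterogeneous natural "tempos"
theta = rng.uniform(0, 2 * np.pi, n)          # initial activity phases
A = (rng.random((n, n)) < 0.3).astype(float)  # sparse random network
np.fill_diagonal(A, 0)
deg = A.sum(axis=1).clip(min=1)

for _ in range(steps):
    # each node adjusts toward the phases of its network partners
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + (K / deg) * coupling)

# order parameter r in [0, 1]: magnitude of the mean phase vector
r = abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```

Rerunning with a denser network (raising the 0.3 threshold) or larger K typically raises r, which is the qualitative pattern Davis's small-dense-network result reflects.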

The complementarities between ecosystem and alliance research are nicely illustrated in Hans T. W. Frankort's chapter, "Open Innovation Norms and Knowledge Transfer in Interfirm Technology Alliances: Evidence from Information Technology, 1980–1999." Here, Frankort applies Williamson's (1991) "shift parameter" framework to examine how alliance governance may evolve in an increasingly open innovation environment. Frankort's empirical study reveals that a move toward more open innovation systems in the IT industry disproportionately increased knowledge sharing in nonequity arrangements. He interprets these findings to suggest that open innovation environments foster the emergence of reputation and monitoring systems that reinforce cooperative behavior and reduce appropriation and other opportunism-related concerns. As Frankort notes, his findings resonate quite strongly with Brusoni and Prencipe's discussion of institutional contingencies associated with ecosystem arrangements – as well as with Ethiraj and Posen's discussion of coordination challenges in the presence of complex product architectures – suggesting that the shift parameter framework may also be a useful tool for examining ecosystem governance.

Saxenian's classic chapter, "The Origins and Dynamics of Production Networks in Silicon Valley," focusing on computer systems firms, represents one of the first and most well-known accounts of complex supply networks within an industrial cluster. Relative to prior related work in economic geography, Saxenian identifies an important role for managed relationships among suppliers, complementors, and integrators in shaping innovation and success in the cluster. In particular, Saxenian highlights the fact that through the 1980s some Silicon Valley firms – most notably Apple and Tandem Computers – "explicitly recognize[d] their reliance on supplier networks and foster[ed] their development" (p. 425). For these firms, fostering supplier development involved actively promoting rich reciprocal exchange among network participants, and sometimes extended as far as making minority investments in promising firms offering complementary technology.

Paul Duguid's chapter, "Networks and Knowledge: The Beginning and End of the Port Commodity Chain, 1703–1860," reminds us that the relevance of the ecosystem perspective is not limited to recent decades: although rarely recognized as such, many supply chains throughout the modern era have relied on coordinated interaction among heterogeneous interdependent participants. Duguid's study also highlights the importance of specialized investments linking traders focused on activities clustered around the production and delivery of a single commodity, and provides a fascinating portrait of the power dynamics within the commodity chain that foreshadows some key themes from current discussions of ecosystem leadership.

The above chapters highlight the positive influence of ecosystem development on innovation and market success and, as such, prefigure the positive findings of much of the current research on business ecosystems. In contrast, the final two classic chapters featured in the volume speak to a potential "dark side" of ecosystems.

Brian Uzzi's (1997) chapter, "Towards a Network Perspective on Organizational Decline," offers an important reminder about the potential pathologies of poorly managed ecosystems, particularly when a sector faces external threats such as increased foreign competition. Network theorists draw our attention to the role of "embeddedness" in coordinating production networks – an idea that resonates strongly with the discussions of network synchrony and ecosystem coupling in the Davis and Brusoni and Prencipe chapters. That said, network theorists have tended to focus on the general structures of productive networks, and the contrasting roles of core versus peripheral firms or network nodes; Uzzi's depiction of a typical interfirm network in the apparel industry's "better dress" sector is particularly interesting as it brings to the fore the nature of interaction among network nodes. The central message that emerges from Uzzi's analysis is that production networks characterized by "embedded" ties (e.g., coordinated ecosystems) may act as an effective buffer against external threats; on the flip side, negative shocks may ripple through highly embedded or interconnected networks more readily than through more loosely linked networks. Uzzi also explores the implications of dependence asymmetries among firms for network dynamics and performance, particularly during periods of decline.1 Marrying insights from Uzzi's analysis with the more differentiated view of network participants in the business ecosystem perspective thus raises the possibility of intriguing new research exploring how ecosystems respond to negative demand shocks and other external threats.

Finally, Christensen and Rosenbloom's (1993) chapter, "Explaining the Attacker's Advantage: Technological Paradigms, Organizational Dynamics, and the Value Network," though rooted in a very different research tradition, also speaks to the role that participants in an innovation ecosystem – in their terms, the "value network" – play in shaping outcomes during times of external threat or discontinuity. Building on research findings that incumbents are often unable to respond effectively to "architectural" innovations, Christensen and Rosenbloom argue that these challenges may be traced to resistance from other participants in the ecosystem. In particular, if the value of new innovations is not recognized by key participants – established customers or established partners – in terms of technological or economic performance, then the ecosystem may act as a source of rigidity.

Together, these studies demonstrate the value and promise of a more thorough realization of potential linkages between ecosystem research and a wide range of related disciplinary and phenomenological lenses. By harnessing the insights from these related literatures, scholars will be able to push the boundaries of ecosystem research fruitfully and rapidly in the future.

CONCLUSION

We believe that the works in this volume provide examples and guideposts for the next generation of ecosystem research. While each chapter stands on its own merits, we anticipate that collectively they will encourage further exploration – methodological, empirical, and theoretical – into the exciting world of business ecosystems.

Ron Adner
Joanne E. Oxley
Brian S. Silverman
Editors

NOTE

1. See also Saavedra, Reed-Tsochas, and Uzzi (2008) for interesting empirical analysis of these issues.

REFERENCES

Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, 84(4), 98.
Adner, R. (2012). The wide lens: A new strategy for innovation. New York, NY: Portfolio.
Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.
Alcacer, J., & Oxley, J. E. (2013). Learning by supplying. Strategic Management Journal (forthcoming).
Ansari, S., & Garud, R. (2009). Inter-generational transitions in socio-technical systems: The case of mobile communications. Research Policy, 38(2), 382–392.
Baldwin, C. Y., & Clark, K. B. (2000). Design rules: The power of modularity (Vol. 1). Cambridge, MA: MIT Press.
Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition: A revolutionary mindset that combines competition and cooperation; the game theory strategy that's changing the game of business. New York, NY: Currency.
Brandenburger, A. M., & Stuart, H. W. (1996). Value-based business strategy. Journal of Economics & Management Strategy, 5, 5–24.
Chesbrough, H. W., & Teece, D. J. (1996). Organizing for innovation: When is virtual virtuous? Harvard Business Review, 74(1), 65–73.
Gawer, A., & Cusumano, M. A. (2002). Platform leadership: How Intel, Microsoft, and Cisco drive industry innovation. Boston, MA: Harvard Business School Press.
Hughes, T. P. (1983). Networks of power: Electrification in Western society, 1880–1930. Baltimore, MD: Johns Hopkins University Press.
Iansiti, M., & Levien, R. (2004). The keystone advantage: What the new dynamics of business ecosystems mean for strategy, innovation, and sustainability. Boston, MA: Harvard Business School Press.
Intel Corporation. (2004). Intel sees unified platform and ecosystem as key to enabling the digital home. [Press release], February 17.
Jacobides, M. G. (2005). Industry change through vertical disintegration: How and why markets emerged in mortgage banking. Academy of Management Journal, 48(3), 465–498.
Jacobides, M. G., Knudsen, T., & Augier, M. (2006). Benefiting from innovation: Value creation, value appropriation and the role of industry architectures. Research Policy, 35(8), 1200–1221.
Lawrence, P. R., & Lorsch, J. W. (1967). Organization and environment: Managing differentiation and integration. Boston, MA: Division of Research, Graduate School of Business Administration, Harvard University.
Macher, J., & Mowery, D. (2004). Vertical specialization and industry structure in high technology industries. Advances in Strategic Management, 21, 317–355.
Milgrom, P., & Roberts, J. (1990). The economics of modern manufacturing: Technology, strategy, and organization. The American Economic Review, 80(3), 511–528.
Moore, J. F. (1996). The death of competition: Leadership and strategy in the age of business ecosystems. New York, NY: Harper Business.
Rosenberg, N. (1976). Perspectives on technology. Cambridge: Cambridge University Press.
Saavedra, S., Reed-Tsochas, F., & Uzzi, B. (2008). Asymmetric disassembly and robustness in declining networks. Proceedings of the National Academy of Sciences, 105(43), 16466–16471.
SAP Corporation. (2006). SAP ecosystem unites to bring innovation to chemicals industry. [Press release], May 8.
Scott, W. R. (1981). Organizations: Rational, natural, and open systems. Englewood Cliffs, NJ: Prentice Hall.
Teece, D. J. (1986). Profiting from technological innovation: Implications for integration, collaboration, licensing and public policy. Research Policy, 15(6), 285–305.
Thompson, J. D. (1967). Organizations in action: Social science bases of administrative theory. New Brunswick, NJ: Transaction Publishers.

PART I
THE ECOSYSTEM PHENOMENON

COLLABORATING WITH COMPLEMENTORS: WHAT DO FIRMS DO?

Rahul Kapoor

ABSTRACT

The study considers the interdependencies between complementors in the business ecosystem and explores the nature of collaborative interactions between them. It sheds light on the organizational and the strategic contexts in which such interactions take place, and shows how they may influence the pattern and the benefits of collaboration. The evidence presented is based on fieldwork followed by a detailed survey instrument administered to firms in the semiconductor industry. The findings, while reinforcing the shift in the locus of value creation from focal firms to collaborative business ecosystems characterized by information sharing and joint action among complementors, illustrate the organizational and the competitive challenges that firms face in their pursuit of joint value creation.

Collaboration and Competition in Business Ecosystems
Advances in Strategic Management, Volume 30, 3–25
Copyright © 2013 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0742-3322/doi:10.1108/S0742-3322(2013)0000030004


INTRODUCTION

Over the last two decades, strategy scholars have increasingly viewed firms' ability to create value as critically dependent on complementors in the business ecosystem (Adner, 2012; Brandenburger & Nalebuff, 1996; Iansiti & Levien, 2004). The emerging literature stream has begun to systematically examine how firms manage their interdependence with complementors (e.g., Ethiraj, 2007; Gawer & Henderson, 2007; Kapoor & Lee, 2013) and how complementors shape firms' value creation (Adner & Kapoor, 2010). The emphasis so far has been on recognizing the coordination and technological challenges associated with complements, and linking them to firm boundary choices and technology investment decisions. While interorganizational collaboration is an important driver of firms' value creation (Dyer & Singh, 1998; Powell & Grodal, 2005), the literature has yet to offer an account of the collaborative interactions that exist between firms and their complementors and the challenges that accompany such interactions.

The study attempts to address this gap by shedding light on the different ways in which firms collaborate with complementors, and by exploring how the nature and the benefits of collaboration are influenced by the organizational and strategic contexts underlying firm-complementor relationships. Specifically, it considers the organizational context through the choice of the organizational unit that firms may use to manage their relationships with complementors (Dyer, Kale, & Singh, 2001; Kale, Dyer, & Singh, 2002). It considers the strategic context through the nature of the co-opetitive interactions that characterize firm-complementor relationships (Brandenburger & Nalebuff, 1996; Casadesus-Masanell & Yoffie, 2007).

The evidence presented in this study is based on fieldwork followed by a survey of firms in the semiconductor industry. The importance of complementors to firms' value creation has been well documented in the semiconductor industry (e.g., Ethiraj, 2007; Gawer & Henderson, 2007). Key complementors to these firms include software firms and other semiconductor firms whose products are used with the focal firm's product in a given end-user application such as a cell phone or a personal computer. The findings suggest that firms in the semiconductor industry interact with their complementors most extensively by sharing information on R&D and markets, jointly developing products, and customizing their products to the complementor's offering. Hence, exchanging knowledge and combining complementary resources and capabilities seem to be key drivers of joint value creation among complementors (Dyer & Singh, 1998).


Firms pursue a variety of organizational designs to manage relationships with their most important complementors. The majority of relationships were managed through engineering or marketing departments, whereas some were managed through a dedicated organizational unit. While the survey data are somewhat limited for drawing causal inferences, the extent of collaborative interactions was found to be highest when the relationship was managed through a dedicated organizational unit, lowest when managed through the engineering department, and moderate when managed through the marketing department.

The empirical context also enabled an exploration of how the extent of collaboration with complementors is shaped by the duality of value creation and value appropriation (Brandenburger & Nalebuff, 1996). The fieldwork allowed for mapping a given complementor according to the relative opportunity for value creation and the relative threat of value appropriation that is shaped by the complementor's business model and capabilities. The extent of collaboration was highest with software complementors, which were characterized by a high opportunity for value creation and a low competitive threat of the complementor moving into the focal firm's product market; lowest with general purpose semiconductor complementors, which entailed a constrained opportunity for value creation and a high threat of the complementor expanding into the focal firm's product market; and intermediate with application-specific semiconductor complementors, which represented a high opportunity for value creation as well as a high threat of competition.

Finally, the study also attempted to reveal the different types of benefits that firms derive from their relationships with complementors. Based on managers' evaluations, it was found that collaboration with complementors was most beneficial in improving the performance of focal firms' products, moderately beneficial in increasing sales or gaining customers in existing market segments, and least beneficial in gaining customers in new market segments. The data also confirmed the high value creation potential of software and application-specific semiconductor complementors. However, the superiority of the dedicated organizational unit in facilitating collaboration did not seem to correspond to high value creation. This mixed finding points to the organizational design challenges that firms may face in pursuing collaborative innovation with complementors. Complementors are neither buyers nor suppliers to the firm. While a dedicated organizational interface may facilitate interorganizational collaboration with complementors, extracting benefits from such interactions requires intra-organizational collaboration among upstream and downstream units. Hence, beyond creating a dedicated organizational entity, firms may need to redesign their internal organization to facilitate both inter- and intra-organizational collaboration.

The findings reinforce the shift in the locus of value creation from focal firms to collaborative business ecosystems characterized by information sharing and joint action among complementors. However, the results also illustrate the organizational and the co-opetitive challenges that firms may face in leveraging complementarities and pursuing joint value creation.

RESEARCH SETTING AND METHODOLOGY

The findings reported in this study are based on fieldwork, followed by a detailed survey of senior managers at firms in the semiconductor industry. The systemic nature of the semiconductor industry, with strong interdependencies between focal firms and their complementors, makes it an ideal setting in which to explore the nature and extent of collaboration among these actors. Several scholars have studied ecosystem-level interactions among firms in the semiconductor industry. For example, Casadesus-Masanell and Yoffie (2007) use the case of Intel and its complementor, Microsoft, to develop a formal model that captures the tension between value creation and appropriation. They show how Microsoft's dependence on the existing installed base of PCs and Intel's dependence on the sale of new PCs create conflict over their pricing and incentives to invest in new product generations. Gawer and Henderson (2007) provide a rich case study of how Intel selectively enters and subsidizes complementary markets so as to balance control over the key complements with incentives for new entrants to push the Intel microprocessor platform forward. Ethiraj (2007) explores how firms in different segments of the semiconductor industry (microprocessors, memory, etc.) pursued R&D investments in complements so as to manage technology bottlenecks in the PC ecosystem. None of these studies has, however, explored the nature of collaborative interactions between semiconductor firms and their complementors.

In order to identify survey participants and facilitate their participation, the survey was conducted in partnership with two industry organizations – an industry trade association (Global Semiconductor Alliance, GSA) and an industry consulting firm (ATREG, Inc.) – which expressed strong interest in the research. These partners were also chosen because of their relationships with distinct segments of the semiconductor industry. The semiconductor industry comprises integrated device manufacturing (IDM) firms that design, manufacture, and sell semiconductor chips, and fabless firms that design and sell semiconductor chips but rely on external suppliers for manufacturing (Kapoor, 2013). GSA is a leading industry trade organization that has focused on the needs of fabless firms since 1994. ATREG is a leading global advisory firm that has specialized in semiconductor manufacturing since 2000, with strong links to the IDM segment of the industry.

Prior to designing the survey, I conducted exploratory interviews with executives at GSA and ATREG as well as with 11 managers at fabless and IDM semiconductor firms. The interviews were semi-structured and lasted an hour on average. These interviews helped me identify the different types of complementors to the semiconductor firms, the different types of collaborative interactions among them, and the nature of benefits that firms derive from collaborating with complementors. Based on the understanding developed from the interviews and a review of the academic literature on interorganizational relationships, I designed the initial survey, paying particular attention to the vocabulary with which managers in the industry are familiar. I then sought feedback from academic colleagues and pretested the survey with managers from both fabless and IDM firms. After the final revisions were made, the survey was administered to all fabless and IDM firms that were either publicly traded or considered by GSA or ATREG to be firms with established product lines (as opposed to the many privately held startups that have yet to achieve successful commercialization). This was done to ensure that the responses were based on a somewhat stable set of interdependencies and relationships between firms and their complementors.

I used the key informant approach for the survey, which has been commonly used in the literature on buyer-supplier relationships, alliances, and outsourcing (e.g., Heide & John, 1990; Kale et al., 2002; Parmigiani & Mitchell, 2009). The informants were corporate executives, business-unit heads, or senior marketing executives who have detailed knowledge of their firms' relationships with complementors. These informants were identified by GSA and ATREG or by their contacts within the semiconductor firms. Each informant was asked to provide information for two of the key complementors that their firm or business unit was dependent on. For almost every firm, the initial survey request was followed up with emails and phone calls to clarify the objectives of the research and to encourage survey participation.

In order to ensure that survey participants did not mistake complementors for other actors (i.e., suppliers or customers), the introduction to the survey included the following text: "Complementors are companies that provide complementary products to your customers such that your company's products and complementors' products are used together in the customer's application. For example, hardware and software firms are complementors to each other. In the semiconductor industry, complementors could be other semiconductor firms providing integrated circuits (ICs) that are used by your customers together with your company's products. They could also include firms that develop software or provide other products or services that are used by your customers together with your company's products."

Completed surveys were received from senior managers at 36 fabless and 15 IDM firms, for an overall response rate of 37%. This is consistent with the typical response rate for surveys of top managers (Anseel, Lievens, Schollaert, & Choragwicka, 2010; Baruch & Holtom, 2008). Nonresponse bias was evaluated by comparing firms' sales and number of employees, and no significant difference was found. The median firm in the sample had 679 employees and sales of US$340 million in 2011. The average firm in the sample had 9,072 employees and sales of US$2,236 million in 2011. Of the respondents, 20 firms are headquartered in North America, 15 in Asia, and the rest in Europe. The final sample comprised detailed information on 99 firm-complementor dyadic relationships from 51 firms.
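A nonresponse check of this kind is straightforward to script. The sketch below is a hypothetical reconstruction (the chapter does not report its exact test, data file, or variable names): it compares responding and non-responding firms on sales and employee counts using Welch's t-test.

```python
import pandas as pd
from scipy import stats

# Assumed sampling-frame file with columns: sales, employees,
# responded (1 = returned a completed survey, 0 = did not).
firms = pd.read_csv("sample_frame.csv")

for col in ["sales", "employees"]:
    resp = firms.loc[firms["responded"] == 1, col]
    nonresp = firms.loc[firms["responded"] == 0, col]
    # Welch's t-test: does not assume equal variances across groups
    t, p = stats.ttest_ind(resp, nonresp, equal_var=False)
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")
```

A large p-value on both observables would be consistent with the chapter's conclusion that respondents do not differ systematically from nonrespondents on size.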

NATURE OF COLLABORATIVE INTERACTIONS BETWEEN FIRMS AND THEIR COMPLEMENTORS

To measure the nature of collaborative relationships between firms and their complementors, respondents were asked to rate, on a scale of 1 (not at all) to 7 (very great extent), the extent to which their firm interacts with complementors in different ways. The types of interaction included (1) sharing information on R&D plans and technology roadmaps; (2) sharing information on a specific market or application; (3) joint product development; (4) joint marketing; (5) setting standards; (6) licensing; (7) customizing products to the complementor; and (8) investing in the complementor.

Firms in the semiconductor industry seem to interact most extensively with their complementors by sharing information on a specific market or application (mean = 4.45), and on R&D plans and technology roadmaps (mean = 4.12). Given that such "horizontal" information sharing helps complementors to coordinate their activities and products with the least amount of strategic commitment, this preponderance is expected. Customizing products to the complementor (mean = 3.73) and joint product development (mean = 3.71), both of which require a greater commitment on behalf of the complementors, represent the next most intense set of interactions. As argued by Dyer and Singh (1998), interorganizational collaborative interactions that entail knowledge exchange and the combination of complementary resources and capabilities are key drivers of joint value creation. These collaborative interactions are followed by joint marketing (mean = 3.32), setting standards (mean = 3.29), licensing (mean = 2.91), and investing in complementors (mean = 1.95), which are more likely to be a function of firm-specific interdependencies and opportunities.1
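Descriptive statistics of this kind reduce to per-item means over the dyad-level ratings. As a minimal illustration (with invented ratings and hypothetical column names, not the study's data):

```python
import pandas as pd

# One row per firm-complementor dyad; items rated 1 (not at all)
# to 7 (very great extent). Illustrative values only.
dyads = pd.DataFrame({
    "share_market_info": [5, 4, 6, 3],
    "share_rd_roadmaps": [4, 5, 3, 4],
    "joint_product_dev": [3, 4, 5, 2],
    "customize_products": [4, 3, 4, 4],
})

# Mean rating per interaction type, from most to least intense
print(dyads.mean().sort_values(ascending=False).round(2))
```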

ORGANIZATIONAL AND STRATEGIC INFLUENCES ON COLLABORATIVE INTERACTIONS

Organizational Influence

To learn how these interactions were affected by the organizational context in which firms managed their complementor relationships, survey respondents were asked to identify the department that was primarily responsible for coordinating activities with a given complementor. Available responses included the following: (1) dedicated department or corporate executive; (2) engineering department; (3) marketing department; (4) no specific department or executive. Fourteen percent of the relationships in the sample were managed through a dedicated department or executive, 41% through the engineering department, 39% through the marketing department, and 6% did not seem to be managed by any specific department or executive.

While the organizational interfaces are well specified with respect to vertical relationships – procurement departments manage suppliers and marketing departments manage customers – the simultaneous existence of technology-level and market-level interdependencies seems to create ambiguity within firms regarding how best to manage their relationships with complementors. Some firms have created a dedicated organizational unit, while most others are leveraging their existing organizational structures to manage interdependence with complementors.

Fig. 1 presents a summary of the results for the different types of firm-complementor interactions by the organizational interface that firms use to manage these relationships.

[Fig. 1 is a grouped bar chart in the original; the per-group bar values cannot be reliably recovered from this extraction and are omitted.]

Fig. 1. Mean Values for the Different Types of Collaborative Interaction between Firms and Complementors, by Organizational Unit That Is Primarily Responsible for Coordinating Activities with the Complementor (Scale: 1 – Not At All, 7 – Very Great Extent). Groups: Dedicated Group, Marketing Group, Engineering Group, No Specific Group.

Complementor relationships that are managed through a dedicated department or executive exhibited the highest degree of collaboration for the most intensely occurring interactions (information sharing, joint product development, and product customization). This was followed by the marketing department, then by the engineering department, and finally by the case in which no specific department was identified as the primary organizational interface. Not surprisingly, as compared to complementor relationships managed through engineering departments, those managed through marketing departments were characterized by a higher degree of joint marketing-based interactions and a lower degree of licensing-based interactions. Relationships managed through dedicated departments also outranked other departments in the level of interaction with respect to setting standards and investing in complementors.2

The higher level of collaboration associated with the dedicated organizational unit is consistent with the evidence from the alliance literature. For example, Kale et al. (2002) found that firms that created a dedicated alliance function were able to realize greater success with strategic alliances, as measured through stock market returns following alliance announcements and managers' evaluations of alliance performance. Given that the interdependence between complementors entails cooperation and coordination across both R&D and marketing functions, a dedicated organizational unit is likely to be more effective in accessing information, coordinating tasks across the different functions, and ensuring that firms and their complementors pursue joint value creation.

Strategic Influence

Scholars have argued that relationships between complementors can be characterized by both cooperation for value creation and competition for value appropriation (Brandenburger & Nalebuff, 1996). Complementors may differ in the extent of complementarity as well as in the threat of competition. These differences may shape incentives to collaborate. For example, Casadesus-Masanell and Yoffie (2007) show how complementors' incentives to cooperate can be impacted by the degree of complementarity.

The fieldwork facilitated developing an understanding of the different types of complementors to semiconductor firms. A typical end-user application (e.g., television, networking equipment, cell phone, and computer) comprises many complementary semiconductor and software products. Hence, key complementors to semiconductor firms could include software firms or other semiconductor firms focusing on a different electronic function within the same end-user application. In order to learn how firm-complementor collaborative interactions were shaped by the industry context surrounding their relationships, survey respondents were asked to identify whether the complementor's product was software, a general purpose semiconductor, or an application-specific semiconductor. General purpose semiconductor products include analog ICs, memory ICs, microprocessors, microcontrollers, and discrete devices, which can be used in a variety of end-user applications such as communications and computing. An application-specific semiconductor product is designed for a specific end-user application. Semiconductor industry analysts typically use this categorization to document industry sales and trends (e.g., Olsson, 2003).

Fig. 2 presents the mean values for the different types of collaborative interactions by the nature of the complement. Firms tend to exhibit the greatest degree of collaboration with software firms, followed by application-specific semiconductor firms, and finally with general purpose semiconductor firms.

[Fig. 2 is a grouped bar chart in the original; the per-group bar values cannot be reliably recovered from this extraction and are omitted.]

Fig. 2. Mean Values for the Different Types of Collaborative Interaction between Firms and Complementors, by Type of Complement (Scale: 1 – Not At All, 7 – Very Great Extent). Groups: Software, Application Specific Semiconductor, General Purpose Semiconductor.

Over the last two decades, the semiconductor industry has gradually shifted away from the PC-dominated application toward a variety of consumer- and communication-based applications. This shift has resulted not only in an increase in the share of application-specific products but also in an increase in the importance of software to semiconductor firms' value creation (Grimblatt, 2002; Linden, Brown, & Appleyard, 2004). In the interviews, many industry executives reinforced their semiconductor firm's dependence on software for its ability to differentiate from rivals and offer a superior "integrated system" to the customer. Collaborating with other semiconductor firms was also deemed useful, as partners could better manage and coordinate their technical and marketing activities. However, the interviewees discussed how these relationships are characterized by appropriability hazards, as partners with similar capabilities could encroach on each other's markets relatively easily. Given that application-specific semiconductor complements tend to be more tightly coupled with the end product than are general purpose semiconductor products, there is a greater benefit to collaborating with application-specific semiconductor firms than with general purpose semiconductor firms.

Hence, the three categories of complements in the industry present an important contrast with regard to focal firms' opportunities for value creation and threats of value appropriation. The opportunities for value creation are greater with complementors who are either software or application-specific semiconductor firms than with those who are general purpose semiconductor firms. The challenges for value appropriation are greater with semiconductor complementors, who are more likely than software complementors to expand into the focal firm's product market. Jointly, these findings provide initial evidence regarding how differences in the opportunities for value creation and in the threats of value appropriation between complementors may shape the pattern of collaboration.

Regression Analysis

Table 1 presents the results of the regression analysis on the different types of collaborative interactions. In addition to the type of organizational interface and the type of complement, the model includes controls for firm size as measured by the log of the number of employees in 2011, whether the firm is an IDM firm, and whether it is headquartered in North America. The baseline category for the organizational interface is the engineering department, and for the type of complement it is the general purpose semiconductor product.

The findings from the regression analysis are consistent with the descriptive evidence. As compared to complementor relationships managed through the engineering department, those managed through a dedicated department or corporate executive are characterized by greater levels of information sharing on R&D (Model 1) and market applications (Model 2), joint product development (Model 3), and product customization (Model 4). The difference between the coefficients for the dedicated department and the marketing department for these most common forms of collaborative interactions was found to be statistically significant using the Wald test. Hence, managing complementor relationships through a dedicated organizational entity seems to be correlated with higher levels of collaboration than managing them through the engineering or marketing functions. As expected, compared to relationships managed through the engineering department, those managed through the marketing department are characterized by greater collaborative interactions through information sharing on specific applications or market segments (Model 2) as well as through joint marketing activities (Model 5). Relationships managed through dedicated departments also have greater interactions with respect to setting standards (Model 7) and firms making investments in their complementors (Model 8).

Table 1. Regression Estimates for the Different Types of Collaborative Interactions between Firms and Their Complementors.

Dependent variables (one model per column): (1) Information Sharing (R&D); (2) Information Sharing (Marketing); (3) Joint Product Development; (4) Product Customization; (5) Joint Marketing; (6) Licensing; (7) Setting Standards; (8) Investing.

Predictors (rows): type of complementor (Software; Application-specific semiconductor); type of organizational interface (Dedicated department; Marketing department; No specific department); controls (North American firm; Integrated firm; Firm size (log employees)); Constant. Observations range from 89 to 90 per model; R2 ranges from 0.13 to 0.31.

[The individual coefficient estimates and standard errors cannot be reliably reconstructed from this extraction and are not reproduced here.]

Notes: Baseline categories are General Purpose Semiconductor Complementor and Engineering Department. Robust standard errors in parentheses, clustered by firm. Missing data on the types of complements and for some types of collaborative interactions resulted in the exclusion of some observations. Significance levels: *10%; **5%; ***1%.

Collaborating with Complementors: What Do Firms Do?

15

The higher intensity of collaboration with software complementors and application-specific semiconductor complementors, as compared to general purpose semiconductor complementors, is also confirmed in the regression models. The coefficient for software is positive and significant for information sharing (Models 1, 2), joint product development (Model 3), joint marketing (Model 5), and licensing (Model 6). The coefficient for application-specific semiconductor is positive and significant for information sharing (Models 1, 2), joint product development (Model 3), product customization (Model 4), and joint marketing (Model 5). The coefficient estimates for application-specific semiconductor complementors are lower than those for software complementors for information sharing, joint product development, and licensing interactions. However, the difference between the two coefficients is only statistically significant for licensing (F=3.49, p<0.10) and marginally insignificant for information sharing on market applications (F=2.43, p=0.12).

NATURE OF VALUE CREATION

Finally, the survey instrument evaluated the nature and the extent of firms' value creation from collaborating with complementors. Respondents indicated, on a scale of 1 (not at all) to 7 (very great extent), the extent to which their firm's relationship with the complementor has helped their firm to (1) gain new customers in existing market segments; (2) gain customers in new market segments; (3) increase their firm's sales to existing customers; and (4) improve the performance of their products. The results indicate that firms in the semiconductor industry benefit most from their collaborative interactions with complementors through improving the performance of their products (mean=4.44) and the least through gaining customers in new market segments (mean=3.64). Increasing sales within existing market segments represented an intermediate level of benefits (mean=3.90 for increasing sales to existing customers and mean=3.81 for gaining new customers).3 Thus, collaboration with complementors seems to be paying the most dividends in managing technological interdependencies to improve product performance, which likely also has the effect of increasing sales to existing and new customers. These results also reaffirm that firm-complementor interactions in the semiconductor industry tend to be much more targeted at a specific application or market segment.

In exploring how organizational choices and types of complements are correlated with performance outcomes, Table 2 reports the regression estimates.


Table 2. Regression Estimates for the Different Types of Benefits that Firms Derive from Their Relationships with Complementors.

                                    (1) Product    (2) Sales to   (3) New Customers  (4) Customers in
                                    Performance    Existing       in Existing        New Market
                                                   Customers      Market Segments    Segments
Type of complementor
  Software                          0.629 (0.340)  0.952 (0.385)  1.604 (0.361)      1.342 (0.415)
  Application-specific semicond.    0.100 (0.435)  0.808 (0.342)  1.282 (0.409)      0.711 (0.409)
Type of organizational interface
  Dedicated department              1.479 (0.621)  0.666 (0.612)  0.334 (0.731)      0.957 (0.607)
  Marketing department              0.315 (0.524)  1.035 (0.364)  0.466 (0.423)      0.915 (0.448)
  No specific department            1.476 (0.686)  0.678 (0.679)  0.604 (0.554)      0.248 (0.714)
Controls
  North American firm               0.080 (0.332)  0.849 (0.347)  0.843 (0.453)      0.830 (0.472)
  Integrated firm                   0.395 (0.471)  0.756 (0.547)  1.118 (0.678)      0.851 (0.661)
  Firm size (log employees)         0.036 (0.124)  0.088 (0.109)  0.077 (0.124)      0.023 (0.119)
Constant                            3.869 (0.480)  2.138 (0.474)  1.975 (0.547)      1.726 (0.574)
Observations                        89             90             89                 90
R2                                  0.22           0.25           0.23               0.19

Notes: Baseline categories are General Purpose Semiconductor Complementor and Engineering Department. Robust standard errors in parentheses, clustered by firm.
*Significant at 10%. **Significant at 5%. ***Significant at 1%.

Among the different types of complementors, collaboration with software complementors is associated with the greatest value creation, followed by collaboration with application-specific semiconductor complementors, and finally by collaboration with general purpose semiconductor complementors. As compared to general purpose semiconductor complementors, collaboration with software complementors has a significant impact on semiconductor firms' improving their products (Model 1) and increasing sales in both new (Model 4) and existing market segments (Models 2-3). Collaborating with other application-specific semiconductor firms also facilitated value creation, primarily through increasing sales in existing market segments (Models 2-3), suggesting that the value appropriation threat from these complementors is likely to be lower than that from general purpose semiconductor firms. These results reinforce the understanding, developed through the fieldwork, that the potential for value creation is greater with software and application-specific semiconductor complementors than with general purpose semiconductor complementors.

The results with regard to the effect of the organizational units managing firm-complementor relationships are somewhat puzzling. On the one hand, while dedicated groups facilitated extensive collaborative interaction (information sharing, product customization, joint product development), their effect on firms' value creation, as compared to the engineering group, is only significant for improving product performance (Model 1). On the other hand, as compared to complementor relationships managed through the engineering group, those managed through the marketing group seem to be more advantageous in increasing sales to existing customers (Model 2) and gaining customers in new market segments (Model 4). Why did relationships managed through dedicated groups, which were characterized by the highest levels of collaboration between firms and their complementors (just as was the case with software complementors), not seem to result in greater value creation? Although the current analysis cannot provide any definitive answers, it does point to the organizational design complexity of both managing external relationships with complementors for joint value creation and leveraging internal resources and functions to realize that value. While marketing and engineering departments may not be as effective as dedicated departments in facilitating collaboration with complementors, they are critical to realizing the gains from these collaborations. It is possible that a more externally oriented organizational interface may be constrained in its ability to leverage internal resources and capabilities for firms to benefit from their collaboration with complementors.

DISCUSSION

The study considers the interdependencies between complementors in the business ecosystem and explores the nature of collaborative interactions between them. It sheds light on the organizational and strategic contexts in which such interactions take place, and shows how they may influence the pattern and the benefits of collaboration. Just as the shift from integrated enterprises toward collaborative supply chains in the 1980s and 1990s presented new opportunities for scholars to understand value creation in buyer-supplier relationships (e.g., Cusumano & Takeishi, 1991; Dyer, 1997; Helper, MacDuffie, & Sabel, 2000), so too does the recent shift from supply chains to business ecosystems present new opportunities for scholars to expand beyond traditional buyer-supplier relationships to also consider relationships between complementors. A primary objective of this study is to initiate that research trajectory by providing some evidence regarding the different ways in which firms collaborate with their complementors, and to identify how organizational and strategic factors may shape joint value creation.

The evidence presented is based on fieldwork followed by a detailed survey instrument administered to large established firms in the semiconductor industry. Key complementors to these firms include software and other semiconductor firms whose products are used in the same end-user application as the focal firms' products. Firms in the semiconductor industry seem to interact with their complementors most extensively through sharing information on R&D plans and market applications, joint product development, and customizing their products to complementors' products. As discussed in the literature on interorganizational collaboration, these types of interactions, which entail knowledge exchange and the combination of complementary resources and capabilities, are key drivers of joint value creation (Dyer & Singh, 1998). While firms also interact with their complementors through joint marketing, standards setting, licensing, and making financial investments, these interactions tend to be much more specific to a given firm-complementor dyad.

An important consideration for the relationship with complementors is the choice of the organizational interface that is used to manage the relationship. Given that interdependencies between complementors entail both technological and commercialization elements, there was no clear consensus within the survey sample as to how these relationships should be managed. Although some firms have a dedicated organizational interface (a department or a corporate executive), the majority of firms in the sample seem to manage them through existing engineering and marketing departments. The complementor relationships that were managed through a dedicated organizational interface were characterized by the highest levels of collaboration. The level of collaboration was lowest for relationships managed through engineering departments or when no specific department had the primary responsibility to manage the relationship.


Another important consideration for collaborative interactions is the duality of value creation and appropriation between firms and their complementors. Scholars have argued that differences in the nature of complementors may shape firms' incentives to collaborate (Brandenburger & Nalebuff, 1996; Casadesus-Masanell & Yoffie, 2007). The greater the benefits from complements (higher degree of complementarity) and the lower the appropriability hazards, the greater the firms' incentives to collaborate. The three different types of complements identified in the semiconductor industry (software, application-specific semiconductor, and general purpose semiconductor) presented a unique opportunity to tease out the effects of complementarity and appropriability. The benefits from collaborating with software and application-specific semiconductor firms are greater than those from collaborating with general purpose semiconductor firms. The competitive threats are greatest from other semiconductor complementors with similar capabilities, who would find it much easier than software complementors to expand into focal firms' product markets. As a result, the intensity of collaboration was found to be greatest when the complementor was a software firm, lowest when the complementor was a general purpose semiconductor product firm, and moderate when the complementor was an application-specific semiconductor product firm.

Finally, the survey explored the nature of benefits that firms derive from their relationship with complementors. These relationships seem to be most beneficial in improving the performance of focal firms' products, moderately beneficial in increasing sales or gaining customers in existing market segments, and least beneficial in gaining customers in new market segments. These findings are consistent with the view that value creation with complementors in the semiconductor industry is increasingly pursued in the context of a given market segment or application, and that managing technological interdependencies to improve product performance and sales to existing customers are important motivations underlying these relationships. This analysis also reinforced the high value creation potential of software complementors, followed by that of application-specific semiconductor complementors. Somewhat surprisingly, while complementor relationships managed through dedicated organizational units consistently exhibited a high degree of collaboration, this did not seem to match the managers' evaluation of the benefits from these relationships. The mixed finding regarding the impact of dedicated organizational units on the extent of collaboration and value creation points to the organizational challenges that underlie firm-complementor relationships. Although cultivating collaborative linkages with complementors may require a dedicated organizational interface, extracting benefits from such linkages requires cooperation and coordination among internal organizational units. Hence, simply creating a new organizational entity without redesigning the organization to support that entity may not allow firms to realize the potential value from such relationships.

These findings, while reinforcing the relational view of the firm characterized by collaboration between the firm and partners in its ecosystem, shed light on the important differences between the management of buyer-supplier and that of firm-complementor relationships. Traditional organizational designs within firms have been created to manage buyer-supplier relationships through either procurement or marketing functions. Complementors are neither buyers nor suppliers to the firm. This "indirect" interdependence, which entails both supply-side and demand-side interactions, raises the organizational design complexity required to manage complementor relationships. An organizational design for managing complementors needs to account not only for the interorganizational interdependence between firms and complementors but also for the intra-organizational interdependence between upstream (i.e., R&D) and downstream (i.e., marketing) tasks that underlie firms' value creation.

Also, while firms' collaborative interactions with suppliers and complementors are both characterized by information sharing, joint action, and specialized investments, there are significant differences in the nature of the challenges across those relationships. Often, an important concern with a given supplier relationship is whether the firm may be held up by the supplier due to high transaction costs, and what may be an appropriate governance mechanism to manage such a relationship (e.g., Poppo & Zenger, 2002; Williamson, 1985). In contrast, an important concern with a given complementor relationship is the somewhat inevitable conflict over who appropriates more value, and whether the complementor may intrude into the focal firm's product market, becoming its direct competitor (Gawer & Henderson, 2007; Yoffie & Kwak, 2006). The strategic interaction between Apple Inc. and Google Inc. is a case in point, in which once highly collaborative complementors turned into direct competitors. Different types of complementors may vary both in the degree of complementarity (based on the nature of technological interdependence and the value creation potential) and in the extent of appropriability hazards (based on the differences in firms' business models and capabilities). As the findings illustrate, these differences have an important impact on the nature of firm-complementor interactions. The study not only asserts that complementors present an important opportunity for management scholars to look beyond supply-chain interactions in the business ecosystem but also underscores that such an opportunity entails an explicit consideration of the different types of challenges that firms may face between managing suppliers and managing complementors.

By focusing on interorganizational relationships between firms and their complementors, the study also contributes to the literature on alliances, which has traditionally tended to characterize such relationships based on the alliance function (i.e., R&D and marketing) rather than the role played by the alliance partners in the business ecosystem (e.g., Gulati & Singh, 1998; Lavie & Rosenkopf, 2006; Mowery, Oxley, & Silverman, 1996). Future research focusing on interorganizational alliances and firms' alliance portfolios could build on these findings by being explicit about the different roles that partners play in a collaborative business ecosystem (e.g., suppliers, complementors, and customers), and how these differences interact with firms' capabilities, alliance strategies, and performance outcomes. In many industries, there has been a significant increase in the norm of interorganizational collaboration and open innovation, providing an institutional monitoring and reputation system and making it possible for firms to benefit from less-hierarchical forms of collaboration (Frankort, 2013). It would be interesting to see how such institutional drivers shape collaboration and coordination in business ecosystems.

The methodology employed in this study, while allowing for a rich description of the nature of collaborative interactions between firms and their complementors, has several limitations. First, the research is carried out in the context of a single industry, and the generalizability of these findings would need to be established through explorations in other empirical contexts. For example, in software-based industries, firms typically depend on a large number and variety of complementors. Beyond managing dyadic relationships, this would also require building and orchestrating an extensive network of complementors, potentially increasing the organizational design complexity and the intensity of coopetitive interactions in the ecosystem (e.g., West & Wood, 2013). Similarly, in newly emerging contexts such as the one studied by Li and Garnsey (2013), entrepreneurial firms face the additional challenges of identifying complementors and offering them a joint value proposition so as to mitigate the different types of risks in the ecosystem. Second, the observed relationship between the choice of the organizational unit that is used to coordinate activities with complementors and the extent of the collaborative interactions is best treated as correlation rather than causation. It is possible that firms may assign a corporate executive or create a dedicated organizational unit to manage more collaborative relationships. Whether the choice of the organizational unit is a result of firms' sorting of partners into different organizational interfaces based on the scope and the intensity of collaboration, or whether this choice impacts the extent of interorganizational collaborative interaction, remains an important avenue for future research. Finally, the degree and the benefits of collaboration with complementors were evaluated based on the focal firm's perspective. Although this approach is somewhat typical of the literature on interorganizational relationships, it is possible that focal firms and complementors may have different perceptions of the collaboration, and the measures may be subject to informant bias. Scholars could build on these findings by observing both sides of the interorganizational relationships and how they evolve over time.

CONCLUSION

The study has attempted to shed light on the collaborative linkages that exist between firms and complementors within the business ecosystem, and how the nature and extent of their collaboration are shaped by the organizational and strategic contexts underlying these linkages. While scholars have considered the criticality of complementors to firms' value creation (e.g., Adner & Kapoor, 2010; Brandenburger & Nalebuff, 1996; Gawer & Henderson, 2007) and the existence of collaborative linkages between these actors (Kapoor & Lee, 2013; Mitchell & Singh, 1996), our understanding of what goes on within these interorganizational relationships and how firms benefit from them is relatively limited. The analyses presented in this paper offer new insights into how firms collaborate with complementors and into the organizational and competitive challenges that underlie joint value creation.

NOTES

1. Sharing information on a specific market or application is significantly greater than sharing information on R&D plans and technology roadmaps (t=3.32, p<0.01). Sharing information on R&D plans and technology roadmaps is significantly greater than customizing products to the complementor (t=2.67, p<0.01). Customizing products to the complementor and joint product development are significantly greater than joint marketing (t=2.02, p<0.05; t=2.58, p<0.01). Joint marketing and setting standards are significantly greater than licensing (t=1.90, p<0.05; t=1.98, p<0.05). Finally, licensing is significantly greater than investing (t=5.44, p<0.01).
2. It is possible that the firms' choice of the organizational interface is driven by the nature and the scope of the collaboration with complementors. For example, firms may be more likely to choose a dedicated organizational interface to manage complementor relationships with greater collaboration requirements. It is also possible that the choice between marketing and engineering departments may be influenced by whether the relationship focuses on marketing or on R&D activities. Note, however, that respondents were asked to provide inputs on two of the key complementors on which their firms or business units were dependent. Hence, given the strong dependence, the scope of collaboration is unlikely to be confined to either marketing or R&D tasks.
3. Improving product performance is significantly greater than increasing sales to existing customers (t=3.29, p<0.01), as well as gaining customers in existing market segments (t=4.71, p<0.01). Increasing sales to existing customers and gaining customers in existing market segments are significantly greater than gaining customers in new market segments (t=1.66, p<0.05; t=1.27, p<0.10).
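The pairwise comparisons reported in notes 1 and 3 are paired t-tests across relationships; a minimal sketch of such a comparison follows (the file and column names are hypothetical, not the study's actual variables).

    # Paired t-test comparing two 7-point collaboration ratings within dyads.
    # "survey.csv" and the column names are hypothetical.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey.csv")  # one row per firm-complementor relationship
    t, p_two_sided = stats.ttest_rel(df["share_market_info"], df["share_rd_info"])
    # Halve the two-sided p-value for a directional (one-tailed) test when t > 0.
    print(f"t = {t:.2f}, one-tailed p = {p_two_sided / 2:.3f}")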

ACKNOWLEDGMENTS

I would like to express my sincere gratitude to the Global Semiconductor Alliance (GSA) and ATREG, Inc., for their excellent partnership, and to the many industry executives who provided me with valuable insights over the course of this project. Many colleagues provided feedback on the survey design. In particular, I would like to acknowledge the help of Matthew Bidwell, Katherine Klein, John Paul MacDuffie, and Anne Parmigiani. I am thankful to the Global Initiatives Research Program and the Mack Center for Technological Innovation at the Wharton School for the financial support. Justin Mardjuki and Yingnan Xu provided excellent research assistance. All errors are mine.

REFERENCES

Adner, R. (2012). The wide lens: A new strategy for innovation. New York: Penguin Group.
Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.
Anseel, F., Lievens, F., Schollaert, E., & Choragwicka, B. (2010). Response rates in organizational science, 1995–2008: A meta-analytic review and guidelines for survey researchers. Journal of Business & Psychology, 25(3), 335–349.
Baruch, Y., & Holtom, B. C. (2008). Survey response rate levels and trends in organizational research. Human Relations, 61(8), 1139–1160.


Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition: A revolutionary mindset that combines competition and cooperation. New York: Doubleday.
Casadesus-Masanell, R., & Yoffie, D. B. (2007). Wintel: Cooperation and conflict. Management Science, 53(4), 584–598.
Cusumano, M. A., & Takeishi, A. (1991). Supplier relations and management: A survey of Japanese, Japanese-transplant, and US auto plants. Strategic Management Journal, 12(8), 563–588.
Dyer, J. H. (1997). Effective interfirm collaboration: How firms minimize transaction costs and maximize transaction value. Strategic Management Journal, 18(7), 535–556.
Dyer, J. H., Kale, P., & Singh, H. (2001). How to make strategic alliances work. MIT Sloan Management Review, 42(4), 37–43.
Dyer, J. H., & Singh, H. (1998). The relational view: Cooperative strategy and sources of interorganizational competitive advantage. Academy of Management Review, 23(4), 660–679.
Ethiraj, S. K. (2007). Allocation of inventive effort in complex product systems. Strategic Management Journal, 28(6), 563–584.
Frankort, H. T. W. (2013). Open innovation norms and knowledge transfer in interfirm technology alliances: Evidence from information technology, 1980–1999. In R. Adner, J. E. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 239–282). Bingley, UK: Emerald Group Publishing Limited.
Gawer, A., & Henderson, R. (2007). Platform owner entry and innovation in complementary markets: Evidence from Intel. Journal of Economics & Management Strategy, 16(1), 1–34.
Grimblatt, V. (2002). Software in the semiconductor industry. Proceedings of the Fourth IEEE International Caracas Conference on Devices, Circuits and Systems, Oranjestad, Aruba, Dutch Caribbean.
Gulati, R., & Singh, H. (1998). The architecture of cooperation: Managing coordination costs and appropriation concerns in strategic alliances. Administrative Science Quarterly, 43(4), 781–814.
Heide, J. B., & John, G. (1990). Alliances in industrial purchasing: The determinants of joint action in buyer-supplier relationships. Journal of Marketing Research, 27(1), 24–36.
Helper, S., MacDuffie, J., & Sabel, C. (2000). Pragmatic collaborations: Advancing knowledge while controlling opportunism. Industrial and Corporate Change, 9(3), 443–488.
Iansiti, M., & Levien, R. (2004). The keystone advantage: What the new dynamics of business ecosystems mean for strategy, innovation, and sustainability. Boston, MA: Harvard Business Press.
Kale, P., Dyer, J. H., & Singh, H. (2002). Alliance capability, stock market response, and long-term alliance success: The role of the alliance function. Strategic Management Journal, 23(8), 747–767.
Kapoor, R. (2013). Persistence of integration in the face of specialization: How firms navigated the winds of disintegration and shaped the architecture of the semiconductor industry. Organization Science. Available at http://orgsci.journal.informs.org/content/early/2013/02/15/orsc.1120.0802.abstract
Kapoor, R., & Lee, J. M. (2013). Coordinating and competing in ecosystems: How organizational forms shape new technology investments. Strategic Management Journal, 34(3), 274–296.


Lavie, D., & Rosenkopf, L. (2006). Balancing exploration and exploitation in alliance formation. Academy of Management Journal, 49(4), 797–818.
Li, J. F., & Garnsey, E. (2013). Building joint value: Ecosystem support for global health innovations. In R. Adner, J. E. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 69–96). Bingley, UK: Emerald Group Publishing Limited.
Linden, G., Brown, C., & Appleyard, M. (2004). The net world order's influence on global leadership in the semiconductor industry. In M. Kenney & R. Florida (Eds.), Locating global advantage (pp. 232–257). Stanford, CA: Stanford University Press.
Mitchell, W., & Singh, K. (1996). Survival of businesses using collaborative relationships to commercialize complex goods. Strategic Management Journal, 17(3), 169–195.
Mowery, D. C., Oxley, J. E., & Silverman, B. S. (1996). Strategic alliances and interfirm knowledge transfer. Strategic Management Journal, 17(Winter Special Issue), 77–91.
Olsson, M. (2003). Worldwide semiconductor device definitions guide, 2003. Stamford, CT: Gartner.
Parmigiani, A., & Mitchell, W. (2009). Complementarity, capabilities, and the boundaries of the firm: The impact of within-firm and interfirm expertise on concurrent sourcing of complementary components. Strategic Management Journal, 30(10), 1065–1091.
Poppo, L., & Zenger, T. (2002). Do formal contracts and relational governance function as substitutes or complements? Strategic Management Journal, 23(8), 707–725.
Powell, W., & Grodal, S. (2005). Networks of innovators. In J. Fagerberg, D. C. Mowery, & R. R. Nelson (Eds.), The Oxford handbook of innovation (pp. 56–85). Oxford, UK: Oxford University Press.
West, J., & Wood, D. (2013). Evolving an open ecosystem: The rise and fall of the Symbian platform. In R. Adner, J. E. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 27–67). Bingley, UK: Emerald Group Publishing Limited.
Williamson, O. E. (1985). The economic institutions of capitalism: Firms, markets, relational contracting. New York, London: Free Press, Collier Macmillan.
Yoffie, D. B., & Kwak, M. (2006). With friends like these: The art of managing complementors. Harvard Business Review, 84(9), 88–97.

EVOLVING AN OPEN ECOSYSTEM: THE RISE AND FALL OF THE SYMBIAN PLATFORM

Joel West and David Wood

ABSTRACT

Two key factors in the success of general-purpose computing platforms are the creation of a technical standards architecture and managing an ecosystem of third-party suppliers of complementary products. Here, we examine Symbian Ltd., a startup firm that developed a strong technical architecture and broad range of third-party complements with its Symbian OS for smartphones. Symbian was shipped in nearly 450 million mobile phones from 2000 to 2010, making it the most popular smartphone platform during that period. However, its technical and market control of the platform were limited by its customers, particularly Nokia. From 2007 onward, Symbian lost market share and developer loyalty to the new iPhone and Android platforms, leading to the extinction of the company and eventually its platform. Together, this suggests lessons for the evolution of a complex ecosystem, and the impact of asymmetric dependencies and divided leadership upon ecosystem success.

Keywords: Platform control; ecosystems; mobile phones; complementary assets

Collaboration and Competition in Business Ecosystems
Advances in Strategic Management, Volume 30, 27–67
Copyright © 2013 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0742-3322/doi:10.1108/S0742-3322(2013)0000030005


INTRODUCTION

Dynamics of Platform Competition

For nearly 30 years, researchers have been interested in the sort of de facto standards battles that are common in consumer electronics, computing, and communications. The early research by Katz and Shapiro (1985) and others established a positive-feedback network effects model mediated by the supply of specialized complementary assets (see Gallagher & West, 2009, for a recent summary). Meanwhile, investments in such assets created switching costs that, together with network effects, often made an early lead gained in a standards contest insurmountable (Arthur, 1996; Farrell & Klemperer, 2007).

From this, researchers have identified the dynamics of complex architectures of standardized components termed platforms (Bresnahan & Greenstein, 1999; Eisenmann, 2007; Gawer & Cusumano, 2002; Morris & Ferguson, 1993). Here, we focus on computing platforms as defined by Bresnahan and Greenstein (1999, p. 4) "as a bundle of standard components around which buyers and sellers coordinate efforts," rather than definitions that might include as a platform the world wide web, or specific applications such as a web browser or Microsoft Office.

One key to platform success is a technical architecture of standards that both facilitates complementary assets and allows re-use between vendors and product generations (Bresnahan & Greenstein, 1999; Gabel, 1987; West & Dedrick, 2000). A successful technical architecture allows modular innovation by both the platform sponsor and third-party complementors (Baldwin & Woodard, 2010). Firms that control the interfaces of such an architecture – usually through application programming interfaces (APIs) – can control the supply of complements and thus the allocation of profits that accrue to the platform (West & Dedrick, 2000).

Another key antecedent of platform success is courting and maintaining a vibrant supply of third-party complements ("software") that makes a product ("hardware") more valuable (Katz & Shapiro, 1985). While early research argued that a popular standard with a large installed base would automatically attract such a supply of complements, moderators of the positive-feedback process mean that standards sponsors make technical, product, and economic choices that make a standard more or less attractive to complementors (Gallagher & West, 2009). The platform sponsor must share returns of platform success with complementors to assure an ongoing supply of complements (Gawer & Cusumano, 2002). The interdependence of the sponsor with its complementors creates an ecosystem (Adner & Kapoor, 2010). Although a sponsor should be able to capture "outsized returns" once an ecosystem is established (Adner, 2012, p. 117), excessive value capture by the sponsor threatens not only the survival of the complementors but also the entire ecosystem (Iansiti & Levien, 2004; Simcoe, 2006).
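The positive-feedback logic can be made concrete with a toy simulation (ours, not from the cited studies): each arriving buyer mixes an intrinsic coin-flip preference with the current installed-base share, so an early random lead attracts complements and buyers that compound it into lock-in.

    # Toy Arthur/Katz-Shapiro-style simulation of standards-contest lock-in.
    # All parameters are illustrative.
    import random

    def simulate(n_buyers=10_000, network_weight=0.6, seed=1):
        random.seed(seed)
        base = [1, 1]  # installed bases of rival platforms A and B
        for _ in range(n_buyers):
            share_a = base[0] / (base[0] + base[1])
            # Choice mixes intrinsic preference (0.5) with network effects:
            # a larger base implies more complements and hence more value.
            p_a = (1 - network_weight) * 0.5 + network_weight * share_a
            base[0 if random.random() < p_a else 1] += 1
        return base

    print(simulate())  # an early random lead typically snowballs into dominance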

Smartphone Platform Competition

The personal computer (PC) platform of Microsoft Windows and Intel processors ("Wintel") is perhaps the most often cited example of platform success. Here, we examine the efforts by mobile phone producers to replicate the Wintel adoption success while avoiding the economic rents captured by Microsoft and Intel. With its partners, Symbian Ltd. created the smartphone category and enjoyed rapid success as it sponsored the most popular smartphone platform from 2002 to 2010, accounting for nearly 450 million smartphones sold worldwide during that period.

Symbian's initial strategy followed many of the key principles of platform leadership defined by Gawer (2010): technology design, strong relations with complementors, internal organization, and firm scope. Symbian built a technical architecture that was the first optimized for smartphones, that is, cellular phones that were also programmable mobile computing devices. Symbian also built a successful ecosystem that enabled a wide range of devices from multiple manufacturers, and had the largest supply of third-party application software. Its internal organization was focused on developing and distributing an advanced smartphone operating system (OS). Finally, Symbian worked with its shareholder-customers – the world's five largest handset makers – to provide a scope that included firms representing 80% of the industry.

However, Symbian's success did not prevent its own extinction, nor that of its platform. The company faced successive competition from two rival mobile computing platforms – Apple's iPhone and Google's Android – that more closely emulated PC capabilities, created a new dominant design, and by late 2010 had captured a majority of the market (Kenney & Pon, 2011; West & Mace, 2010). This led to a series of desperate attempts at retrenchment as Symbian licensees abandoned its platform for Android. Finally, the sole remaining customer (Nokia) orphaned Symbian in favor of a smartphone derivative of the same Microsoft Windows quasi-monopoly it had long sought to avoid.


While the story of the iPhone and Android success may be familiar to contemporary readers, less well known is that the Symbian platform had developed elements of the dominant design years before the iPhone or Android. The first Symbian smartphone from Ericsson in 1999 had a point-and-click interface with a (for its day) spacious LCD screen, while starting in 2006, Nokia phones on the Symbian platform used the same WebKit desktop browser technology as later shipped with the iPhone and Android. Even less well known is that Symbian had discussed creating its own application store in 2005 – three years before the iPhone App Store – but abandoned the project due (in part) to lack of resources.

Here, we document the rise and fall of Symbian Ltd. and its Symbian OS platform. We use this to describe how the firm built a complex ecosystem of stakeholders, how it evolved this ecosystem over its 10-year lifespan, and how limitations in its conception and leadership of this ecosystem limited its ability to respond to the iPhone and Android threats.

Research Design

Our study uses a case study research design, a widely accepted way to understand and explain complex interorganizational relationships and to develop theoretical insights (e.g., Eisenhardt, 1989). Throughout our study period, we compiled data regarding the firm's platform strategies from a wide range of primary and secondary sources. Primary data from Symbian included current information on its website, archived press releases dating back to 1998 that were published on the website, and previous information from the company website stored on the Internet Archive (Archive.org). We also referenced unpublished company memos and presentations, particularly around the evolution of the company's formal ecosystem program during each of its phases. We utilized shareholder reports listing full audited financial statements during those years (2003–2006) when they were made available to employee-shareholders. We conducted interviews with current and former Symbian employees who managed aspects of its ecosystem strategy spanning the company's entire existence from June 1998 to November 2008.1 We were also guided by participant observation by the second author, who was the only senior executive to span the company's entire lifespan and was directly involved in the second phase of the ecosystem program. Finally, we supplemented our data with secondary data on the company and its ecosystem. We drew upon news coverage, particularly in The Register, a UK-based IT news site, and summaries of the company's history and strategy in books by Symbian authors.

From this, we develop insights regarding the trade-offs in managing a complex ecosystem, including the cognitive limits to ecosystem design. We focus on the unique form of divided leadership between Symbian and its partners, and the resulting ambiguity in both perceived and actual leadership in the platform that contributed to its eventual difficulties.

CREATING THE SMARTPHONE INDUSTRY

Symbian Ltd. was founded as a spinoff of another London-based company, Psion PLC, but was co-owned and funded by the world's largest handset makers (see Table 1 for key dates).2 Psion was created in 1980 to develop PC application software, but soon shifted to developing a family of keyboard-based pocket computer systems: Organiser I (1984), Organiser II (1986), Series 3 (1991), and Series 5 (1997). These products became part of a product category called "personal digital assistants" (PDAs), which also included the Sharp Zaurus (1993), the Palm Pilot (1996), and various "Handheld PC" models using Microsoft's Windows CE (1996).3

In 1996, Nokia announced the Nokia Communicator, the first PDA-type phone, built upon software licensed from Geoworks Inc. A series of PDA makers followed by licensing their software for use in similar phones. The Palm OS was incorporated in the Qualcomm pdQ (1998), followed by a series of Treo phones from Handspring and later Palm. Microsoft licensed its Windows CE to phone makers – including Samsung (1998), HP (2001), and Sagem (2001) – but could not gain distribution by US network operators until 2002.

To create its own PDA-hybrid phone, Psion Software held licensing discussions with the world's largest handset makers from 1996 to early 1998. In June 1998, Psion, Nokia, Ericsson, and Motorola announced that they would be joint owners of a new company, Symbian Ltd.4 A key goal shared by Symbian and its owners was preventing Microsoft from extracting proprietary rents from mobile devices as it had in PCs, where it commoditized the systems vendors. As an executive of one of the initial vendors told the Financial Times, "We knew what had happened in the PC market and were determined not to let that happen in the mobile phone market" (Price, 1999). By aligning Symbian with the three (later five) largest handset makers, they also hoped to limit Microsoft's eventual market share. In response, a few months later, Microsoft's CEO Bill Gates termed the Psion spinoff "serious competition" in a memo leaked to the New York Times (Markoff, 1998).


Table 1. Key Dates for Symbian Ecosystem.

Date             Event
1994             Psion begins developing a 32-bit PDA operating system, later known as EPOC
1996             Nokia ships Series 9000 Communicator, its first PDA phone, using software licensed from Geoworks
June 1997        Psion ships Series 5 PDA based on EPOC operating system
June 1998        Symbian Ltd. founded in London by Psion PLC, Nokia Oy, and Ericsson AB
October 1998     Motorola, Inc. becomes Symbian Ltd. investor, acquiring same 23.1% stake as Nokia and Ericsson
March 1999       NTT agrees to license Symbian OS to support its FOMA 3G network
April 1999       Symbian acquires Ericsson's Mobile Application Lab, which later becomes UIQ Technology AB
June 1999        Symbian holds first conference in London for developers and other ecosystem members
February 2000    Symbian's second developer conference is held in Silicon Valley
August 2000      Psion announces plans (later cancelled) to spin off Symbian shares in public offering
September 2000   Ericsson ships R380, the first Symbian OS phone
June 2001        Nokia ships its first Symbian OS phone, Nokia Communicator 9210, with Series 80 UI
October 2001     Sony and Ericsson combine mobile phone divisions into UK-based joint venture
November 2001    Nokia announces plans to license Series 60 user interface to other firms
April 2002       Symbian launches Symbian Platinum Partner Program, a revised ecosystem relations program
June 2002        Nokia 9290 Communicator is first Symbian phone sold in the United States
June 2002        Nokia ships 7650, the first Series 60 phone
December 2002    Sony Ericsson ships P800 camera phone, first Symbian handset with UIQ interface
January 2003     Fujitsu ships FOMA F2051 to NTT DoCoMo customers, first Symbian-enabled MOAP(S) phone
October 2003     Motorola sells Symbian shares to Nokia and Psion
November 2003    NTT DoCoMo licenses Symbian OS for distribution by its phone suppliers
July 2004        Psion sells shares to Nokia, Sony Ericsson, Matsushita, and Siemens for £138 million
February 2005    Nokia ships the Nokia 7710, the only Series 90 handset ever commercially available
February 2005    Panasonic ships FOMA P901i, NTT DoCoMo's first MOAP(L) phone utilizing Linux OS
April 2006       Nokia N91 is the first phone to ship with Symbian 9 operating system (v9.1)
November 2006    Symbian handset shipments reach 100 million
November 2006    Symbian agrees to sell UIQ Technology AB to Sony Ericsson
October 2007     Motorola agrees to buy 50% of UIQ holdings from Sony Ericsson
March 2008       Symbian handset shipments reach 200 million
June 2008        Nokia announces plans to buy out remaining 52.1% of shares, and to create a single unified platform
July 2008        Symbian launches 3rd ecosystem program, the Symbian Partner Network
August 2008      Symbian Platinum Partner program discontinued
November 2008    Nokia completes acquisition of Symbian Ltd., which ceases to exist as an independent entity
February 2010    Symbian Foundation releases 40 million lines of Symbian code as open source
October 2010     Samsung and Sony Ericsson cancel Symbian plans, leaving only Nokia and DoCoMo-licensed handset makers
February 2011    Nokia selects Windows Phone as its future smartphone platform, announcing plans to phase out use of the Symbian platform
June 2012        Nokia releases Nokia 808 PureView, which becomes its last Symbian handset

Source: Symbian (2008a), news accounts.

Symbian Ltd. was a software company whose primary focus was to license the Symbian OS to the world's leading handset makers to produce what it termed "smartphones." Beyond the ability to make voice calls on GSM mobile phone networks, the phones inherited the capabilities of Psion's organizers (such as calendar and address book), to which Symbian and its partners added features suitable for a mobile Internet device (such as e-mail and web browsing). The firm was launched with approximately 160 employees transferred from Psion Software. By the time it had grown to 1,000 employees in 2004, technical employees – both R&D and technical consultants – comprised 77% of that total. Because it shipped no products directly to end users, it had a relatively small sales operation that worked with handset makers, while the marketing organization focused on generating industry visibility to attract end users and third-party developers.

Symbian's shareholders were its spinoff parent (Psion) and mobile phone makers that were also its customers. Psion and Symbian hoped for an IPO of the company, but it was blocked by the handset makers. Instead, shares were bought by handset makers, with Nokia acquiring the largest stake (47.9%) in 2004 (West, 2014).

Like other OS companies, Symbian sought to maximize the supply of third-party software and thus the value created by that software. At the same time, as with any platform, it was forced to trade off advancing the OS capabilities against providing continuity of interfaces for such software. Symbian's ecosystem had key differences compared to the PC archetype. While the Windows ecosystem gradually emerged during the period 1981–1991, Symbian created an ecosystem strategy even before shipping its first product, a strategy that evolved across three distinct phases in its first decade. Without Microsoft's independent funding and control of key applications, Symbian had less supplier power and platform control. The management of the Symbian ecosystem was also constrained by the complexities of complements, systems architecture, distribution, and ownership relations not present in better-known computing architectures.

ARCHITECTING A SMARTPHONE OS

Modular Architecture

The Symbian platform consisted of the Symbian OS, a user interface framework, and an ARM-compatible central processing unit (CPU). ARM did not make smartphone CPUs, but licensed its reference designs to a wide range of semiconductor makers and worked closely with Symbian and the CPU makers to deploy each generation of its architecture (Chambers, 2006, pp. 100–103). ARM licensees Texas Instruments, ST Microelectronics, Renesas Technology (a Hitachi-Mitsubishi joint venture), and Ericsson Mobile Phones were leading CPU suppliers for Symbian-enabled phones.

The Symbian OS architecture was a hybrid between a general purpose computing architecture and that of a smartphone (Fig. 1). It included standard kernel services such as memory management and multitasking. General purpose middleware included networking, graphics, Internet, and printing packages, as well as the user interface. Other middleware provided smartphone functionality, combining PDA features such as personal information manager services (for address book and calendar) with telephony services that enabled call management and logging. The platform allowed for Java-based applications and (except for NTT DoCoMo customers) native C++ applications.


[Fig. 1 depicts the layered Symbian OS architecture: licensee platforms (S60 with AVKON, UIQ with QIKON) and Java J2ME at the top; the UI framework (UI application framework and UI toolkit: Uikon, UI LAF, CONE, FEP Base); application services (PIM, messaging and other application support, office engines, application framework, data sync support, Internet and web application support, printing support); OS services (serial communications and short-link services, telephony services, graphics services, networking services, generic services, connectivity services); and base services (kernel services and hardware interface) at the bottom.]

Fig. 1. Symbian Architecture. Source: Symbian.

Each handset maker licensed a Java interpreter as well as an engine for editing word processing and other office documents. Makers of CPU, graphics, and other chips customized the OS for maximal compatibility with their hardware.

Sub-platforms

While the first phone (Ericsson R380) had a custom user interface that was used only once, Symbian had designed its OS to make it easy to change the user interface "look and feel." Unlike the Windows (or later Android) mobile phone platform, these custom user interfaces (UIs) allowed each handset vendor to offer distinctive products. In the end, five different user interfaces were developed (Table 2), but only three shipped more than 5 million units.

Series 60 (later S60) was created by Nokia and licensed to other handset makers. This was the most popular user interface, both in terms of distinct models (145 designs5 from nine vendors) and unit sales (more than 350 million).

Table 2. Handset Model Production and Unit Sales for Symbian-Based Sub-platforms.

User          Code     Originator        Developer        Retail        Number of     Best-Selling         Unit Salesc
Interface     Name                                        Availability  Modelsb       Models               (2000–2010)
                                                                        (2000–2012)
Series 60a    –        Nokia             Nokia            2002–2013     145           Nokia Nseries        370 million
MOAP (S)      –        NTT DoCoMo        Fujitsu          2003–2013     129           n/a                  65 million
Series 90     Hildon   Nokia             Nokia            2005–2006     1 (Nokia)     Nokia 7710           <100,000
Series 80     Crystal  Symbian           Nokia            2001–2007     6 (Nokia)     Nokia 9000 series    <1 million
UIQ           Quartz   Symbian/Ericsson  UIQ Technology   2002–2009     22            Sony Ericsson P800,  10 million
                                                                                      P900, P910
(Emerald)     Emerald  Ericsson          Ericsson         2000–2001     1 (Ericsson)  Ericsson R380        <100,000

a Includes open source successors (Symbian^1, Symbian^2, Symbian^3, Anna, Belle) released in 2009–2012.
b For 304 models commercially released from September 2000 to June 2012; for sources, see end notes.
c Authors' sales estimates based on Symbian press releases and analyst reports.


It was character-and-icon based, with a cursor key and numeric keypad (later also a QWERTY keypad) as primary input devices. By virtue of its customer power, Nokia forced Symbian to accept Series 60 as a replacement for the "Pearl" interface, which was partially developed by Symbian but never used in any shipping product (Orlowski, 2010).

Series 80 was a UI developed by Symbian and maintained by Nokia, optimized for Nokia's 9000 series phones and exclusive to Nokia due to its patents (Orlowski, 2010). Like the original 1996 Nokia Communicator, these were among the heaviest and most expensive mobile phones, providing a larger screen, QWERTY keyboard, and folding "clamshell" design to substitute for a laptop computer. Because of the cost of maintaining a separate UI code base, Nokia dropped Series 80 in 2006 in favor of Series 90.

Series 90 was Nokia's most innovative interface, with stylus input and designed to support a larger color screen. However, it was used only by a handful of phones, only one of which (the Nokia 7710) ever reached the market. The user interface design was later re-used by Nokia in the Maemo Linux-based tablet computers that it released starting in 2005 (cf. Stuermer, Spaeth, & von Krogh, 2009).

UIQ was developed by Symbian using the former Ericsson Mobile Applications Lab in Ronneby, Sweden, and was used primarily in Sony Ericsson phones. The lab was sold off as UIQ Technology AB to Sony Ericsson in 2006; in 2007 Motorola bought half. With a stylus-based interface and a large supply of third-party software, some considered the UIQ interface to be the most modern smartphone design prior to the 2007 introduction of the iPhone.

MOAP (S) was developed by NTT DoCoMo and Fujitsu as the initial smartphone platform to support DoCoMo's "FOMA" 3G service (Yoshizawa, Ichikawa, & Kogetsu, 2006). (DoCoMo later contracted with Panasonic and NEC to create a rival Linux-based interface called MOAP(L).) Fujitsu shipped the first MOAP handset in 2003 and half of all MOAP (S) models, while Sharp, Mitsubishi, and Sony Ericsson began shipping handsets in 2005. Unlike other Symbian UIs, MOAP (S) did not allow downloadable native applications, and the application ecosystem was managed by DoCoMo, not Symbian.

Each user interface was in effect a sub-platform of the Symbian platform, each with its own UI-specific APIs. Because the UI makers had source code to the OS, they (particularly Nokia) added their own UI-specific APIs to the Symbian OS; most (but not all) APIs were eventually migrated back to the shared Symbian code. Each user interface also had its own preferred web browser. At the same time, the proliferation of UIs increased Symbian's coordination costs and fragmented the application market. (In addition to controlling the user interfaces, handset makers also controlled the lowest-level interfaces for the platform, the hardware adaptation layer.)

Programming Interfaces

Building upon Psion's EPOC, the Symbian OS APIs used a customized version of the C++ programming language to develop native applications, and attracted experienced Psion software developers, especially from the United Kingdom. Application developers faced two challenges in writing software for Symbian devices, both of which increased developer learning time and thus specialization costs. First, the Symbian programming model (particularly its memory management) was unlike that of any other device – unlike Windows Mobile, iPhone OS, or Android, which included smartphone versions of popular PC APIs. Second, the divided platform control – between Symbian and its UI companies – meant that developers often had trouble finding the right information about APIs or other development questions.

Two major efforts lowered learning costs by adding new APIs. In 2007, Symbian released POSIX-compatible C libraries that made it easier to port established Unix or Linux software – enabling versions of Quake (a multiplayer game), VNC (a desktop control client application), and Qt (a user interface library). In 2004, Nokia also released a Symbian S60 implementation of the Python programming language. As with Java on desktop computers in the 1990s (cf. West & Dedrick, 2000), adding a new shared API layer abstracted differences between mutually incompatible platforms, reducing switching and specialization costs. Although popular among Unix hackers for ease of rapid prototyping, the Python implementation offered only a subset of the S60 APIs (Scheible & Tuulos, 2007).

Finally, in 2004, at the behest of network operators (who distributed more than 90% of the world's mobile phones), Symbian took steps that ended up making software development more difficult. The Symbian Signed initiative was intended to prevent viruses and other malware from taking over a handset and causing damage to a handset or the network (Morris, 2008). While security was widely seen as necessary, software developers voiced frustration over the resulting technical difficulty and bureaucratic approval delays.
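As an illustration of the higher-level development the S60 Python runtime enabled, here is a minimal sketch in the PyS60 style (the appuifw and e32 modules shipped with Nokia's runtime; the example is our own, not taken from Symbian documentation, and runs on the handset runtime rather than desktop Python):

    # Minimal PyS60-style program: show a note, then wait until the exit key.
    import appuifw
    import e32

    appuifw.app.title = u"Hello"
    appuifw.note(u"Hello, Symbian!", "info")    # platform pop-up notification

    lock = e32.Ao_lock()                        # wraps a Symbian active object
    appuifw.app.exit_key_handler = lock.signal  # exit key releases the wait
    lock.wait()                                 # keep the app alive until exit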


SYMBIAN'S EVOLVING ECOSYSTEM STRATEGY

When Symbian was founded, it faced crucial challenges in building an ecosystem to support its technological innovation. Despite its inheritance from Psion, the new firm would need a new ecosystem. Its managers did not know what sort of ecosystem would be required: like its competitors, it assumed that smartphone ecosystems would be similar to those for PDAs. As its technology grew more popular, it attracted a growing number of potential ecosystem members, each wanting attention to solve their particular problems. At the same time, it was a small and (until 2005) money-losing company with limited resources. Thus, a crucial challenge was prioritizing its scarce resources to build an ecosystem of unknown characteristics. Finally, it faced two immediate competitors in Microsoft and Palm, and at least three major future competitors largely unknown in 1998 (as well as indirect competition from in-house handset software development by potential customers).

As with most software companies, the intellectual property in its software – copyrights and trade secrets – constituted its major assets, and it worried about leakage of that IP to existing or potential competitors (cf. Cusumano, 2004). Ultimately, these IP concerns colored (and hindered) its willingness to transfer knowledge to ecosystem members, and thus its ability to attract new members and to help them create value.

SYMBIAN'S ECOSYSTEM: OVERVIEW

Symbian OS was available to phone users only pre-installed in a newly purchased Symbian-enabled phone. This meant that, unlike a PC vendor, Symbian could not sell end-user software upgrades and had effectively no direct relationship with customers. Instead, adoption of its latest technology – and revenues – depended on new adoption of smartphones and replacement purchases by existing owners.

Symbian described its network of customers and complementors as an "ecosystem"6 (e.g., Northam, 2006). Different categories of licenses and partner relationships included the following:

- System integrators or "Licensees" (handset manufacturers) that integrated externally sourced and internally developed hardware and software to create new devices (i.e., handsets) for sale to end users.7
- CPU vendors, which worked to assure Symbian OS compatibility with their latest processors.
- Other hardware suppliers, which provided drivers for their respective hardware components.
- User interface companies, which were divisions of mobile phone companies or (in the case of UIQ Technology) a separate company.
- Other software developers, sometimes referred to as independent software vendors (ISVs). This category included developers of user applications and also of middleware components such as databases.
- Consultancies and training centers. Symbian provided licensees a list of certified contract software development companies it called Symbian Competence Centers, whether mobile phone-specific consultants or the Symbian-oriented departments within large outsource software suppliers such as Satyam and Wipro.
- Network operators, which in most countries were the dominant distribution channel for phones and also decided what software components were preloaded on phones.
- Enterprise software developers, for cases where a company developed Symbian-compatible software for its own employees who used Symbian phones.

In many cases, members of Symbian's ecosystem were also members of competing mobile phone ecosystems, such as those surrounding Palm OS, Windows Mobile, and later Linux-based platforms such as the LiMo Foundation and Google's Open Handset Alliance (Android). Such divided loyalties were found not only in chipmakers and operators but (unlike with PCs) also in the system vendors who made phones that incorporated the Symbian OS.

Knowledge transferred to partners came in three forms: codified documentation, personalized technical support, and Symbian's source code. Most partners had access to only a subset of source code, while both UI companies and mobile phone operators asked Symbian to limit access to sensitive interfaces (such as those that might allow a wayward application to make expensive telephone calls). For employees who had full source code access, it came at a price: Symbian demanded a "refrigeration period" (typically six months) – during which the engineers were blocked from working on complements for competing platforms – for fear that they would unintentionally apply concepts from the Symbian code in a way that would improve the capabilities of Symbian's competitors.

Symbian's strategy for managing its ecosystem can be divided into three phases: "ad hoc" (1998–2002), "Platinum Program" (2002–2008), and "Symbian Partner Network" (2008).


Phase 1: An ad hoc Ecosystem Strategy

The initial structure and conception of the Symbian ecosystem was heavily influenced by its PDA forebears and by the well-known exemplars of PC and other computer ecosystems. At the time of Symbian's founding, all three major PDA makers – Psion, Microsoft, and Palm – had active programs for attracting third-party application software. In fact, the initial Symbian OS carried over the APIs, technical documentation, ecosystem support staff, and supply of third-party software suppliers that had worked with Psion. Psion organized a series of developer conferences, starting in November 1992 with 10 talks to an audience of around 30 developers (Symbian, 2008a) and growing to around 200 developers at events in 1997. "One of the attractions of Symbian OS for Nokia and Ericsson was the reasonably big set of developers we had built up as Psion," recalled Simon East, first VP of technology for Symbian.

Symbian's initial ecosystem strategy thus focused on working with third-party software developers. This strategy is illustrated in Fig. 2, from a 2002 presentation made to a meeting of Symbian's Supervisory Board (Wood, 2002).

Fig. 2. Symbian's Ecosystem Concept as of 1998. OCK, OEM Customization Kit (for Handset Makers); SDK, Software Development Kit (for Software Developers); ISV, Independent Software Vendor. Source: Wood (2002).

Initially, Symbian used two different forms of formalized knowledge transfer to ecosystem members. The OEM Customization Kit (OCK) provided the Symbian OS, tools, and associated documentation for handset makers, while the "SDK" was the software development kit, with documentation used by ISVs to create applications and other add-on software. However, the company soon realized that other potential ecosystem members needed their own specialized support. For this reason, a
number of separate partnering programs emerged ad hoc during the period 1998–2002:

- A program for Symbian Competence Centers, announced at the opening of Symbian's February 2000 Developer Conference in Silicon Valley;
- A program for Symbian training partners;
- A Symbian technology partner program, for companies providing technology (such as multimedia engines or compression modules) to run alongside Symbian OS;
- A semiconductor partner program, for companies providing hardware components to phone vendors;
- A tools partner program, for providers of compilers, integrated development environments, and automated test facilities;
- A development partner program, for firms supplying technology to Symbian itself;
- A connectivity partner program, for companies providing solutions for synchronizing and backing up data between mobile devices and desktop computers.8

Each program emerged from a separate motivation, and each tended to be managed by separate Symbian employees, often in different departments. It gradually became clear that each program had to meet complex needs, but that there were considerable commonalities among them.

Over the next few years, Symbian's actual ecosystem and the pattern of ecosystem coordination evolved from the original Psion-inspired model (shown in Fig. 2) to a new model (Fig. 3) that differed from the earlier conception in two key ways:

- Many partners were supplying software to phone manufacturers but needed a greater amount of technical information and software than was contained in the SDKs designed for ISVs;
- These same partners (termed "Licensee Suppliers") needed two-way exchange of software with phone manufacturers, in ways that neither the OCK nor the SDK had envisioned or provided for.

Although Symbian had serious and well-managed programs for both phone manufacturers and ISVs from 1998 to 2002, in retrospect it underemphasized helping those companies that supported the phone manufacturers in creating devices – delaying the availability of new phones, and thus of new Symbian customers.

Fig. 3. Symbian's Ecosystem Concept ca. 2000. Source: Wood (2002).

Phase 2: Symbian Platinum Partners

Around July 2001, a proposal was created to unify many aspects of the previously separate partnering programs into a new "Platinum Partner" program. The main differences were as follows:

- A deliberate preference for firms providing technology supplied in devices, rather than software added on afterwards. The "device creation" related partners included not only phone manufacturers (e.g., Nokia) but also providers of hardware components (e.g., Texas Instruments and Intel) and those that provided bundled middleware (e.g., Sun and Real Networks) or development tools (e.g., Borland and Metrowerks).
- A desire to systematize the efforts of running many different partner programs, and to obtain benefits of scale through common development kits, event management, billing systems, and communications systems.

A key aspect of the new program was a new package of software provided to technology suppliers, known as the Development Kit ("DevKit"). This contained considerably more software than the SDKs provided for ISVs, as well as additional licensing rights, but stopped short of the software and rights available to phone manufacturers. Meanwhile, the software package previously known as the OCK was re-designed as the "CustKit" (Customization Kit).


The program was discussed internally for nine months before being announced in April 2002. Reasons for the delay in launching the program included:

- Internal discussions over the appropriate membership fee for the program. Initially, annual fees of $15–25k were proposed. After some time, the concept emerged of a lower rate ($5k) for program membership, coupled with a surcharge if a partner wished to license the more extensive DevKit;
- A change in Symbian CEOs, when its first CEO, Colly Myers, resigned in February 2002 and was replaced two months later by David Levin.

Several key differences between the first and second phases of the ecosystem strategy are highlighted by comparing the previous diagram with Fig. 4:

- "Licensee Suppliers" were renamed "Partners" (sometimes referred to as "Device Creation Partners"), and the emphasis on supporting them increased because of their important role in helping create new phones;
- Previously ad hoc support mechanisms from Symbian to different Partners were reorganized around the new DevKit;
- Previously ad hoc exchange of information and software between partners and licensees became governed by contractual terms in the new DevKit License (DKL);
- Symbian put less priority on direct support of ISVs, on the assumption that the task of supporting application developers would shift to the phone makers and those companies' UI systems, which after 2001 were located outside Symbian. Instead of directly supporting ISVs, Symbian's Developer Network program would concentrate on being a hub of support for the developer networks in partner companies, who would in turn support ISVs.

Fig. 4. Symbian's Ecosystem Concept as of 2002. DevKit, Development Kit (for Component Providers); CustKit, OS Customization Kit (for Handset Makers). Source: Wood (2002).

Once the Platinum program structure was in place, it grew rapidly: by the end of 2002, it had attracted 100 companies, and nearly 300 by early 2006. Even as the program grew in size, Symbian management felt constant conflict between the "quantity" and "quality" of partners:

- The "quality" approach involved a preference for the larger companies that seemed most likely to become winners in the Symbian space, or which had special endorsements from Symbian's customers.
- The "quantity" approach followed the principle of a "level playing field" – avoiding picking winners and giving an equal opportunity to small and unknown companies. The idea was that even though a given company might have the best technology of its type at one moment in time, this should not become a reason to assume that company would remain indefinitely the leader in its space.

Efforts to provide openness with a "level playing field" required more resources to administer a larger program – including keeping track of contacts, preparing and chasing invoices, providing technical support, and running larger partner events.

Other difficulties in running a large partner program were anticipated at the time the Platinum program was created. An April 2002 analysis of the partner program (Wood, 2002) noted two potential problems. First, many firms were trying to become partners, but they varied widely in their ability to deliver meaningful products. Second, Symbian did not have a large enough technical staff to provide the desired level of support for all possible partners. For these reasons, a prioritization scheme was viewed as inevitable, and partners (including potential partners) were internally allocated to different tiers of importance: AA, A, B, and C. The AA partners were 15 companies deemed most critical to Symbian's success; the A level were 50 companies of high significance; the B level were those with at least one internal champion; and the C level comprised the remainder (Wood, 2003).


Finally, ecosystem members differed significantly in their rights to use Symbian's IP. Phone manufacturers received all source code to Symbian OS,9 whereas partners did not receive so-called Category A source code that was deemed particularly sensitive. Based on an assumed "hub-and-spoke" model, partners could distribute their changes to selected Symbian OS software only to phone manufacturers (at the "hub"), not to another partner. Over time, it became clear to Symbian management that both restrictions hindered the free flow of valuable information and innovation within the ecosystem; both restrictions were eventually removed.

Symbian benchmarked the Platinum program on an ongoing basis: almost every year between 2004 and 2007, there were one or more internal review projects to consider major improvements in the partnering programs. These projects usually started optimistically: people would say things like "It should be easy to stop wasting effort on the low-value partner engagements and to put more effort onto the high-value partner engagements." But each time, the optimism changed to acceptance that the easy optimizations of the program had already been made, and that partnering activity which initially looked low value was often highly valued by important Symbian stakeholders (key customers, internal strategists, etc.).

Phase 3: Symbian Partner Network

In November 2007, Symbian began developing a new partner program to meet two key objectives. The first was to increase the efficiency of the program through the enhanced use of IT, particularly increased web-based automation of common activities and the creation of an improved extranet (called "SDN++") to communicate with ecosystem members. The second was to utilize that increased efficiency to lower the price and broaden the reach of the program, particularly with ISVs. The most visible change was a reduction in the annual fee from $5,000 to $1,500. Part of the cost savings came from eliminating assigned partner managers for each registered partner, with ecosystem members instead supported via extranet-based standard information and (paid) technical support. Symbian also lowered its financial expectations for the program to break-even; the Platinum program (designed when Symbian was losing money) had eventually generated a net profit.

Symbian unveiled the revised program on April 29, 2008, to ecosystem members at its semi-annual partner event, and encouraged its existing partners to migrate to the new program. It was not announced publicly until
July 7, with a concurrent announcement that the Platinum program would be discontinued six weeks later. However, the impact of the revised partner program was diminished by the surprise announcement two weeks earlier that Symbian would become a Nokia subsidiary and license its source code as open source software.

FIRM AND PLATFORM SUCCESS (2002–2007)

Platform Success

Within a period of 30 months, three manufacturers shipped four Symbian phones with four different UIs: the Ericsson R380 (September 2000), the Nokia 9210 (June 2001), the Nokia 7650 (June 2002), and Fujitsu's FOMA F2051 (January 2003). The Symbian platform then enjoyed uninterrupted exponential growth from 2002 to 2007; after three flat years, sales set a new record in 2010 with nearly 112 million units sold (Fig. 5).

Fig. 5. Global Unit Sales of Symbian OS Phones (Millions), 2001–2011, for All Vendors and for Nokia.

Symbian's initial smartphone competitors – Palm and Microsoft – were also PDA based, but led by Nokia, the Symbian platform passed both to achieve a majority of global mobile device sales (including PDAs) by mid-2004 (Canalys, 2004). Over time, the competitive threat from Palm faded, but phones licensing Windows Mobile and vertically integrated smartphones from Research in Motion (the BlackBerry) and Apple (the iPhone) continued to grow in sales, particularly in North America. Two nonprofit consortia were formed in 2007 to standardize and promote new Linux-derived handset platforms: the LiMo Foundation (led by British operator Vodafone and Japanese operator NTT DoCoMo) and the Open Handset Alliance (created by Google to promote Android).

By the end of 2007, smartphones had grown to about 10% of all handsets sold worldwide, and Symbian OS was estimated to account for 63% of all smartphones – well ahead of Windows (12%), the Research in Motion BlackBerry series (10%), Apple's iPhone (3%), and Linux (10%) (West, 2014; West & Mace, 2010). Nokia sold about half of all smartphones and nearly 80% of all Symbian-enabled handsets. Symbian accounted for a majority of global smartphone sales through 2008, and a clear plurality through 2010, with 37.6% of the market.

Symbian's major challenge was in North America. For example, in the summer of 2004, the Symbian platform had a 6% share of the US mobile device (smartphone and PDA) market, after 43% for Palm OS and 25% for
Windows (Canalys, 2004). One major problem was that Symbian developed a version of Symbian OS for CDMA networks – which accounted for a majority of US subscribers – but Nokia cancelled its CDMA phones before they could be released, and the CDMA modifications sat on a shelf, unused.10 Another obstacle was winning distribution from the three (later two) nationwide US GSM operators. In particular, the largest – Cingular (later AT&T) – wanted weak suppliers and so rarely carried any phones from Nokia, the global mobile phone leader. As a consequence, Symbian was dependent on the relatively weak T-Mobile. When Nokia brought its first Symbian phone to the United States, the $600 Nokia 9290 Communicator, it lacked operator support and distributed the phone via computer dealers, IT consultants, and its website. Nokia even opened retail stores in New York and Chicago in 2006, but closed them in 2010.

Firm Success

Despite rapid organizational growth and market share success, Symbian faced severe resource constraints, suffering years of losses developing its
platform prior to achieving economies of scale sufficient to support its R&D efforts. Building on the Psion code base, Symbian spent (by our estimate) more than £200 million on R&D from 1999 to 2004 to develop three major Symbian OS releases. It achieved its first operating profit in 2005, a year in which its revenues and unit sales more than doubled, and the year Nokia launched its high-margin Nseries phones (Table 3).

Symbian suffered from a lack of pricing power, particularly from 2004 onward, when Nokia accounted for more than 75% of unit sales. In early 2006, Symbian was pressed by shareholder-customers to adopt a reduced royalty schedule. The company would no longer receive a $2.25 surcharge on the first two million handsets of each major OS release. More significantly, the ordinary royalty shifted from a flat $5 fee to a graduated scale from $5 down to $2.50 (Symbian, 2006). The latter provision benefited only Nokia – the only company to ship more than 5 million Symbian phones in a single year – and was in fact adopted at its behest.

Finally, of Symbian's handset customers, only Nokia was able to achieve economies of scale for its product and UI platform development costs. While Nokia averaged sales of more than 3 million units per smartphone model, Sony Ericsson averaged less than 1 million, and also had to support the UIQ sub-platform development with less than 10% of Nokia's smartphone revenues.

Table 3. Symbian Ltd. Financial Performance, 2002–2008.

                            2008 Q1–Q2  2007 Q1–Q2  2007      2006      2005      2004      2003      2002
Royalties                   £70.8m      £78.2m      £179.1m   £151.8m   £96.8m    £45.2m    £25.5m    £7.7m
Other income                £10.5m      £7.2m       £15.2m    £14.4m    £18.0m    £21.3m    £19.9m    £21.8m
Total revenue               £81.3m      £85.4m      £194.3m   £166.2m   £114.8m   £66.5m    £45.5m    £29.5m
R&D                         –           –           –         £70.0m    £54.5m    £43.7m    £40.3m    £35.8m
SG&A                        –           –           –         £34.1m    £27.0m    £23.5m    £19.9m    £16.8m
Net income                  –           –           –         £55.1m    £15.3m    (£23.0m)  (£26.5m)  (£37.2m)
Gross margin                –           –           –         84.4%     77.5%     70.7%     63.9%     43.8%
Net margin                  –           –           –         33.1%     13.3%     (34.6%)   (58.3%)   (126.1%)
R&D intensity               –           –           –         42.1%     47.5%     65.8%     88.7%     121.3%
R&D (% of operating exp.)   –           –           –         67.4%     66.4%     60.3%     68.2%     67.8%
Total employees             –           –           –         1,191     1,047     835       734       653
R&D employees               –           –           –         824       693       525       464       405
Average unit royalty ($)    $3.7        $4.4        $4.5      $5.30     $5.14     $5.7      –         –
Average unit royalty (£)    £1.86       £2.26       £2.32     £2.94     £2.85     £3.14     £3.81     £7.70
Total unit sales            38.1m       34.6m       77.3m     51.7m     34.0m     14.4m     6.7m      1.0m
Nokia units(a)              –           –           60.5m     40.1m     28.5m     12.0m     5.5m      0.5m

Source: Symbian.com press releases via Archive.org and 2003–2006 Symbian annual reports to shareholders.
Notes: Authors' calculations shown in italics. (a) Estimates based on Symbian press releases, Nokia annual report, and analyst reports.

Assessing Symbian's Platform Success

At the beginning of 2008, Symbian's platform and ecosystem strategies had achieved great success. It had attracted 9,282 third-party software applications, and in 7½ years its OS had shipped in 200 million phones, the most in the industry (Symbian, 2008b).

Throughout its ecosystem strategy, Symbian had ongoing debates over the balance between competing goals such as quantity versus quality, fairness versus focus, and personal attention versus economies of scale. In making such decisions, Symbian executives and ecosystem managers faced three major limitations.

The first was a cognitive blind spot toward the nature of the ecosystem. As part of the Psion PDA (and PC) legacy, Symbian's founders initially took for granted that its ecosystem would be like Psion's PDA ecosystem; the implicit assumption was that the major focus of ecosystem management was working with independent software vendors. In this regard, application software as the most important complement to general purpose computers
was the dominant logic (as defined by Prahalad & Bettis, 1986) of the computer industry of that era. A related assumption was that add-on applications were crucial to the value of a smartphone – true for PCs and game consoles, arguably false for PDAs, and demonstrably false for conventional mobile phones.

Second, new handset models were delayed because Symbian did not anticipate how hard it would be to create devices of unmatched complexity for consumer electronics. Wood (2005) identifies a number of potential pitfalls of mobile phone production, including changes in OS (or UI) APIs across new releases, problems with third-party software reliability and integration, and contractual delays in obtaining rights to distribute such software.

Finally, Symbian's ecosystem management had limited resources and had to be self-supporting – particularly until Symbian earned its first profit in 2005. Rather than maximizing partner access, the partner program was limited to providing services to those partners willing to pay enough money to support the cost of providing those services. These restrictions were gradually reduced through IT-enabled efficiencies, including shifting from paper to "click-through" agreements and distributing information via an extranet rather than CD-ROM.

Symbian's shifting treatment of application software was also problematic. Under its initial ecosystem strategy, the company focused on applications at the expense of helping handset makers and those providing pre-installed software that had to be ready to ship with the handset. These early priorities delayed the availability and sale of smartphone handsets that would attract buyers away from conventional handsets, create an installed base for application developers, and provide revenues to Symbian that would reduce its severe resource constraints. During the second (Platinum Partner) phase, applications were deemphasized and ISVs received less attention. Only limited progress was made on improving tools and broadening developer support, and the cost to ISVs remained relatively high. It was only after the release of the iPhone that Symbian began to develop the Symbian Partner Network to broaden its reach and lower its cost – a development rendered moot by Nokia's acquisition of Symbian.

Finally, while the size of the software ecosystem continued to grow with the number of applications, Symbian made little effort to ascertain the health of its ecosystem, or to question why there were no great successes akin to Lotus, Borland, Ashton-Tate, and others of the early PC era. As it turns out, after-market software sales for Symbian smartphones remained
low, as did the software unit price – both more similar to PDAs than PCs. Unlike a platform leader who squeezes complementors for profits in a zero-sum game (Gawer & Henderson, 2007), Symbian did not intend to starve its complementors, but it focused more on its own problems than theirs.

These difficulties suggest two modifications to the positive-feedback network effects model (cf. Gallagher & West, 2009). First, while theory asserts that more software increases hardware sales, this assumes the ceteris paribus condition that attracting software does not delay the development or sale of hardware. If the hardware has direct utility without add-on software complements – and if the hardware must compete with an established substitute to attract buyers – this suggests the early priority must be on creating an installed base of hardware. Second, the attractiveness of a platform to complement providers is not merely the size of its installed base, but the installed base size times its propensity to buy complements. If a given platform (or product category) has a higher propensity to install complements – whether PCs versus smartphones or between competing videogame platforms – that creates a larger addressable market. Similarly, a lower unit price for complements is attractive only with a large installed base, high purchase propensity, or low specialization cost (cf. Teece, 1986) – exactly the conditions later created by Apple with its iPhone App Store.
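Stated more formally – a stylized sketch in our own notation, not drawn from the cited models – the addressable market M for complement providers is roughly

    M = B × p × s

where B is the installed base, p the share of owners who ever buy complements, and s the average complement spending per buying user. With hypothetical numbers: a platform with B = 100 million but p = 5% and s = $20 yields a $100 million market, while one with B = 20 million, p = 60%, and s = $50 yields $600 million – the smaller installed base is the more attractive target.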


Another problem came with Symbian's trade-offs of selective openness in technical disclosure to complementors. As the first mover, Symbian erred on the side of secretiveness to protect its technology (particularly source code) as a trade secret – through disclosure policies that required significant contractual, technical, and administrative overhead to evaluate and then meet information requests. In doing so, it left itself vulnerable to an open source platform challenger where (by definition) the source code provides nearly complete documentation at zero transaction cost (West, 2003). The industry discussions since 2001 of Linux-based mobile phone platforms became a threat to Symbian's existence with the debut of Android-based products in late 2008.

Finally, Symbian's entire "open" platform strategy arguably depended on it being an independent supplier not beholden to any one customer – which was plainly no longer true after 2004. The Wintel platform succeeded precisely because Microsoft and Intel aggressively courted and developed competitors to their initial customer IBM, which gave them new customers, grew the market, and provided incomparable economies of scale. Although Nokia lacked a controlling interest in Symbian, its de facto control of Symbian's revenues allowed it to discourage (if not block) efforts by Symbian to create competitors to Nokia (West, 2014).

SUDDEN AND UNEXPECTED DECLINE

The year 2007 marked the high-water mark for the Symbian platform – for its share of smartphone sales and for its influence on buyers, complementors, handset makers, and public perceptions. A little more than three years later, Symbian Ltd. had ceased to exist as a legal entity and its technology was officially orphaned by Nokia, its only remaining customer.

While Symbian achieved strong market share in Europe and much of the world, it had very low penetration in North America, enabling entry by three local platform sponsors: Research in Motion with its proprietary BlackBerry platform (2002), Apple with its iPhone platform (2007), and Google with its Android open source OS (2008). The original 2007 iPhone found immediate success in North America. Its success was attributed to a technical architecture that sought to replicate the Internet experience of a PC in a handheld device, through a large screen with finger-touch input, a desktop-capable web browser, and then (in 2008) a direct distribution mechanism for third-party software applications (West & Mace, 2010). These characteristics were copied by a series of Android phones (Kenney & Pon, 2011), thus cementing the dominant design for a consumer-oriented smartphone.

Anticipating but not Meeting Architectural Challenges

With its clear focus on creating the smartphone category, Symbian and its partners had anticipated key elements of the dominant design before Apple, but failed to bring them to market or to combine them into a single product. The UIQ interface offered stylus-based input, and a full-sized display (albeit at lower resolution) was characteristic of the earliest Sony Ericsson phones – the P800 (2002), P900 (2003), and P910 (2004) – as well as the Motorola A1000 (2004) and A1010 (2005). However, these phones were far less popular than Nokia's competing models, and Motorola abandoned UIQ (twice) while Sony Ericsson shifted to promoting a Walkman family of music-oriented phones.


Symbian had difficulty with its browser strategy, due both to underestimating the strategic importance of the browser and to sheer bad luck. Starting in 1995, Psion and later Symbian worked to source a web browser from STNC, a small British company located less than 80 miles from Symbian's London headquarters. However, Microsoft purchased STNC and its Hitchhiker browser in July 1999 to create its own first mobile web browser. Beginning in 2002, Symbian handset makers licensed a browser from Oslo-based Opera, which never provided website compatibility comparable to a desktop browser. Instead, the most successful and compatible smartphone browsers were based on the open source WebKit, created by Apple for its Macintosh PCs and later used by both the iPhone (2007) and Android (2008) platforms (West & Mace, 2010). Nokia announced its own WebKit-enabled browser as a research project in 2004, and in 2006 bundled it with S60 phones, but its implementation lagged both Apple's and Google's. Due to fragmentation, a WebKit browser was never released for UIQ phones.

Finally, Symbian was hindered by its legacy code and installed base in meeting the challenge of the more modern APIs and development tools provided for the iPhone and Android. The two new platforms offered more modern programming languages and widely disseminated tools. (Apple had a free online course that was viewed by 100,000 potential developers in its first year.) To address this, in 2008 Nokia bought Trolltech, maker of the widely used Qt cross-platform user interface library. In 2009, it announced plans to provide a common set of Qt-based APIs for programming Symbian phones and MeeGo tablets. However, Nokia was still implementing this transition in early 2011 when it announced it would abandon Symbian for Windows Phone.

Adapting to a New Ecosystem Paradigm

Symbian also faced a challenge to its fundamental ecosystem strategy – first from the iPhone on openness to complementors, and then from Android on openness to handset vendors. Both posed challenges that Symbian was unable to meet.

In July 2008, Apple launched the iPhone App Store, providing an application distribution mechanism that bypassed both third-party distributors and the operators' own application stores. The new store offered an unprecedented feature for a computing platform: a built-in way to directly sell and install all third-party applications. It also provided Apple with 30% of all download revenues, although a large proportion of the applications were provided free.


The new App Store grew dramatically: while Symbian had taken 7½ years to acquire nearly 10,000 applications, the iPhone App Store reached 15,000 apps after six months and 100,000 after 16 months (West & Mace, 2010). The success of the App Store attracted customers and complementors, bringing tremendous favorable publicity for Apple. In response, sponsors of the Android, Windows Mobile, and BlackBerry platforms all announced their own app stores. Symbian took 15 months to launch its own app store and – constrained by both Nokia and its operator partners – was not allowed to sell directly to users. Instead, Symbian provided wholesale distribution via Nokia's Ovi store and the operators' stores, neither of which proved as well implemented or popular as Apple's or Google's app stores. By one estimate, total 2010 app store revenues reached $2.2 billion; Apple's store accounted for 83% of the total, with software developers receiving $1.25 billion and Apple's commission revenues reaching $535 million (Whitney, 2011) – more than Symbian's entire 2007 revenues.
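Those figures are roughly consistent with the store's 70/30 revenue split (a back-of-the-envelope check, allowing for rounding): $2.2 billion × 83% ≈ $1.83 billion through Apple's store, of which 70% ≈ $1.28 billion would flow to developers (reported: $1.25 billion) and 30% ≈ $550 million to Apple (reported: $535 million).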

As it turns out, Symbian had considered creating its own application store back in 2005. The proposal failed to attract support within Symbian and eventually died for several reasons: Symbian felt it had been successful in attracting third-party complements; lacking a comparison, it was unaware of the financial pressures its developers faced; and it saw the store as a distraction and an expense, rather than as a source of significant revenues to relieve its own financial pressures. Because Symbian lacked direct access to customers, a store would also require the cooperation of its handset customer-shareholders and carriers, both of which later resisted its efforts to create an app store even after Apple's success with the iPhone App Store.

The other challenge came from the Android platform, which shipped its first smartphone in 2008. While Symbian had bragged that it was an "open platform," its source code was developed by employees of Symbian and its license holders, and was available only under nondisclosure and a royalty-bearing license. Meanwhile, Android offered a royalty-free license and full source code to any external partner.11 At the 2007 launch of the Open Handset Alliance, Android's nominal sponsoring organization, founding members included two Symbian shareholders and Licensees – Motorola and Samsung – as well as NTT DoCoMo, Symbian's main sponsor in Japan; shareholders Ericsson and Sony Ericsson joined 13 months later (Table 4).

Table 4. Handset Models Developed by Major Symbian Licensees.

                                                   Symbian                                          Android
Handset maker              Equity investment  UI        First model(a)  Last model(a)  No. of models  First model(a)
Nokia: 125 phones          1998–2012          S60(b)    2002            2012           117            None
                                              S80       2001            2005           6
                                              S90       2004            2004           1
                                              UIQ       2005            2005           1
Ericsson: 1                1998–2008          Emerald   2000            2000           1              –
Sony Ericsson(c): 26       2004–2008          UIQ       2002            2008           12             2010
                                              MOAP      2006            2008           11
                                              S60       2009            2010           3
Motorola: 7                1998–2003          UIQ       2003            2008           6              2009
                                              MOAP      2005            2005           1
Fujitsu: 61                –                  MOAP      2003            2012           61             2012
Sharp: 37                  –                  MOAP      2005            2012           37             2010
Mitsubishi: 19             –                  MOAP      2005            2008           19             –
Samsung: 15                2003–2008          S60       2004            2009           15             2009
Matsushita (Panasonic): 3  1999–2008          S60       2004            2005           3              –
Siemens: 1                 2002–2008          S60       2004            2004           1              –
LG: 3                      –                  S60       2007            2009           3              2010
Other: 9                   –                  UIQ       2004            2006           4              –
                                              S60       2004            2011           5              –

Sources: For phone models: same as Table 2. For equity: West (2014).
(a) First shipment date for a handset model (announcement date where shipment date unavailable). (b) Includes S60-compatible releases after the 2008 Symbian acquisition. (c) Joint venture of Sony and Ericsson (2001–2012); includes one Sony-branded MOAP handset.

Symbian was founded with the intention of providing a platform shared by all handset licensees. However, by 2004, Nokia held nearly 48% of Symbian's equity and more than 75% of its annual handset sales – and
increasing degrees of de facto platform control. While Sony Ericsson once placed all its smartphone bets on the Symbian platform, other licensees such as Samsung and Motorola placed only tentative bets that they later abandoned. By 2009, Android had achieved what the Symbian platform ultimately failed to do: provide an open platform shared by a wide range of handset makers and controlled by none of them.

Platform Extinction

Challenged by Apple and Google, Nokia made a series of increasingly desperate moves to preserve its smartphone market share, leading to the
phased elimination of Symbian Ltd. and its platform. In June 2008, it announced it would acquire the remaining 52% of Symbian Ltd. for €264 million, with most Symbian engineers becoming part of Nokia and integrated into the existing S60 user interface team. Nokia hoped that combining the Symbian and S60 engineers would simplify control and speed development of the platform. The plan would also merge the UIQ and MOAP user interfaces into S60, to create a single platform. The acquisition killed Symbian's business model, because the Symbian and S60 source code were donated to a new foundation that would manage them as royalty-free open source software.12 By openly disclosing the Symbian source code and other technology, Nokia hoped to grow the Symbian ecosystem and stem smartphone market share losses to its two leading proprietary platform rivals, as well as blunt the enthusiasm that openness had brought to Android.

Nokia completed its acquisition of Symbian Ltd. in November 2008, created the Symbian Foundation in early 2009, and transferred control of the Symbian OS source to it. In February 2010, the foundation released all of the Symbian OS source code: the estimated 40 million lines of code were said to be the largest open source release of formerly proprietary code.

However, the open source experiment soon proved a failure. With the rise of Android, other potential handset sponsors stopped funding the Symbian Foundation, leaving Nokia's contributions (and its internal R&D group) providing nearly all the resources to support the platform. In October 2010, both Samsung and Sony Ericsson announced they would no longer develop Symbian phones. While Nokia, Fujitsu, and Sharp continued to release new phones, both Nokia and Symbian continued to lose smartphone market share. In February 2011, Nokia announced that it would phase out Symbian in favor of Microsoft's Windows Phone as its smartphone platform, although its Symbian sales exceeded Windows Phone sales until mid-2012.

DISCUSSION

By some measures, Symbian was a tremendous success. It created the smartphone category and built a complex ecosystem through a series of alliances with key stakeholders playing very distinct value-creation roles. It evolved that ecosystem strategy over time, in response to changes in its conception of the ecosystem, in the expectations of its complementors, and in the availability of enabling technology (notably dissemination via the
Internet). Together, Symbian and its partners created the most popular smartphone platform, growing volume from 1.0 million units in 2002 to 77.3 million in 2007 – a compound annual growth rate of nearly 140% – to capture two-thirds of the global smartphone market. However, unlike ecosystems limited by technological challenges (e.g., Adner & Kapoor, 2010), the downfall of Symbian's ecosystem and platform can be traced to three (largely organizational) limitations of its ecosystem. First, Symbian created a computing ecosystem of unprecedented organizational and technical complexity. Second, the asymmetric dependencies of the various ecosystem members meant some stakeholders flourished while others struggled. Third, the divided leadership of the ecosystem limited the ability of Symbian and its ecosystem to respond to the new dominant design created by the iPhone.
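As a check on that growth figure, using the unit sales in Table 3 (1.0 million units in 2002, 77.3 million in 2007, five compounding years): (77.3/1.0)^(1/5) ≈ 2.39, i.e., volume multiplied by roughly 2.4× each year, for a compound annual growth rate of about 139%.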

Evolving a Complex Ecosystem

Prior research on ecosystem strategies has suggested how sponsors can control and manage their ecosystem to best advantage, orchestrating value creation and capturing for themselves the largest share of that value (Gawer & Cusumano, 2002; Iansiti & Levien, 2004; Maula, Keil, & Salmenkaita, 2006). While this may be a desirable steady-state goal, our data suggest that finding the path to that state is far from trivial – due both to the complexity of the task and to the information available to the ecosystem leader.

At best, optimizing the performance of an ecosystem built around a complex assembled product requires aligning the interests of a heterogeneous population of ecosystem members and partitioning (or self-assigning) the technical and business responsibilities among those members. Any actor can choose whether or not to participate in the ecosystem; in many industries, this choice is influenced by the decision of whether to participate (non-exclusively) in one or more competing ecosystems. Finally, the actual (or prospective) failure of any ecosystem member may cause it to withdraw from an ecosystem, leaving a gap that may be filled only after a considerable delay.

To this complexity, we add the inherent uncertainties (and unknowability) that come with a new ecosystem around the firm's new platform. These uncertainties will be greater for a new firm, without prior firm-level reputation, products, or ecosystem experience. They will be even worse for a new-to-the-world technology or product category, where there is no direct precedent (known to any party) for partitioning the business and technical
responsibilities across ecosystem partners. A firm without products or an ecosystem will have to make assumptions about what technologies and business relationships it will need to create value. As Alan Roderick, one-time head of the Symbian Platinum Partners, summarized it: "In the early days, nobody knew where smartphones were going to go, what they were going to be capable of, or what it would take to make them sell."

To use the Mintzberg (1978) formulation, any ecosystem strategy has its intentional and emergent aspects, with the former manifest in the firm's activities, structures, and programs to create and nurture an ecosystem, and the latter arising from the firms that choose to join the ecosystem, pressures from competing ecosystems, and broader changes in the environment. If a firm enters a market without an existing ecosystem, where does the firm's initial ecosystem strategy come from? Our data – and the industry standard practice that influenced our subjects – suggest two possible cognitive heuristics that shape ecosystem formation. First, the firm and its managers will build upon the firm's (or their individual) prior ecosystem experience – as when Symbian's founders learned from their Psion experience. Second, lacking a large body of formal knowledge on ecosystem management, firms adapt strategies from similar ecosystems: in this case, the Windows ecosystem was extremely influential.

Both Symbian and Nokia lacked the experience of an Apple or Microsoft in managing a successful general purpose computing platform across multiple generations. Meanwhile, Symbian seriously underestimated the complexity required to transform an electronic pocket organizer into a general purpose, Internet-connected mobile computing device. And unlike in Mäkinen and Dedehayir (2013) – where platform progress was limited by third-party software – here the limiting factor was the ability of handset makers to integrate software. Even after Symbian had shipped its first complete OS, the weaker software development capabilities of its handset makers (and their UI companies) meant they had difficulty keeping up with Symbian (and eventually, rival platforms) in implementing new platform features. As with other examples of loose coupling identified by Brusoni and Prencipe (2013), the entire ecosystem suffered when there was poor execution by one key party.

Asymmetric Dependencies within the Ecosystem

Symbian depended fully on the success of its platform, as did many of its application suppliers. However, this was not true of other members of its
ecosystem, such as semiconductor makers, handset makers, and network operators. Unlike Symbian, this second group generated revenue from other mobile phones, not just Symbian phones. Nokia emphasized premium prices (of up to €1,000 for its best phones), maximizing its gross margins while limiting the number of customers available to Symbian and application providers. When other manufacturers (Motorola, Sony Ericsson) had less success selling Symbian phones, they sold non-smartphones or phones using Windows or (later on) Android.

Additionally, competition between handset makers within the ecosystem undercut efforts to build a common platform and to align the interests of the entire ecosystem around its shared success. Nokia, Ericsson, and NTT DoCoMo each built separate sub-platforms to support their respective aims, and Ericsson's sub-platform never attained the scale necessary to support its R&D costs. Outsiders such as Samsung, Panasonic, LG, and Siemens had difficulty developing for Nokia's S60 and grew wary of depending on the Nokia-controlled platform – much as IBM's rivals were wary of OS/2 (cf. Grove, 1996). Compared to Frankort's (2013) optimistic example of intra-industry cooperation and knowledge transfer, here the knowledge transfer was much less effective.

Finally, Symbian's success in attracting third-party applications masked the difficulty its partners had in profiting from those applications. Relatively weak application sales were not a priority for Symbian and were ignored by the rest of the ecosystem – until Apple's iPhone created a new distribution paradigm that dramatically increased developer unit sales and proceeds.

Challenges of Divided Ecosystem Leadership

Normally, identifying the ecosystem leader is clear-cut. When there are rival claims, Adner (2012, p. 116) argues that leadership can be inferred from the actions of others: "The leader is not the one who says, 'I'm the leader.' He's the one about whom everyone else says, 'He's the leader.' This is the litmus test of leadership." From the date of its public unveiling, Symbian Ltd. was proclaimed the leader of a new ecosystem by investor-customers who sought to transfer their legitimacy to the Symbian platform. However, over its 10-year lifespan, the actions of these customers served to undermine that leadership as manifest in technical, market, and financial control of the platform. At least four factors contributed to Symbian's declining de facto leadership.


First, its technical leadership – through control of application-facing APIs (cf. West & Dedrick, 2000) – was intentionally preempted by handset makers who asserted API control by creating custom UI layers. As Simon East recalled: "It became clear to us that Nokia had woken up to the fact that if these guys own the UI and the developer model – then that's where the value is going to migrate to" (Orlowski, 2010). Having relinquished leadership both at the top (UI) and bottom (hardware) layers, Symbian and its platform suffered both from coordination problems with its licensees and from their generally weaker software development capabilities.

Second, because it did not sell to consumers, Symbian lacked both a direct source of revenues and a marketing relationship through which to assert its leadership with the intended beneficiaries of the ecosystem, that is, smartphone buyers. The top mobile computing analyst for one market research firm was stark in his 2001 warning:

[T]he main obstacle Symbian faces, [Ken] Dulaney said, is brand awareness. "They've really done a poor job of really raising (the image of) their company," he said, adding that Palm and Microsoft have been much better at branding their names. Symbian has generally let the device maker do the talking while staying hidden in the background, Dulaney said – a strategy that isn't in the company's best interest. "I think they really need to reverse that strategy," he said (Dano, 2001).

Third, because its shareholders (other than Psion) had inherent conflicts between their roles as investors and customers, the investors controlled Symbian for their benefit as customers rather than to maximize the value of the company and their investment (West, 2014). By vetoing Symbian initiatives, Nokia and Ericsson weakened Symbian's financial health, its control of the platform, and the overall vibrancy of the ecosystem.

The final challenge to Symbian's leadership came with the end of the multilateral balance of power among the handset makers. Compared to its co-founders and ecosystem partners – notably Ericsson and Motorola13 – Nokia proved the most consistent in its ability to release new Symbian smartphones (Orlowski, 2010). Its global cellphone market share doubled from 1997 to a peak of 38.6% in 2008, while that of Ericsson and Motorola fell by half. As Symbian's dominant investor and customer after 2004, Nokia increasingly asserted leadership of the ecosystem – and others followed its lead.

According to Adner (2012, p. 117), "Successful ecosystem leaders capture their outsized returns in the end, after the ecosystem is established and
running. But in the beginning they build, sacrifice, and invest to ensure everyone else's participation." Who realized the financial gains of leadership? Nokia did, but Symbian did not, having lost leadership at a time when it should have been harvesting profits.

We believe that Symbian's eventual failure suggests key difficulties of divided leadership of an ecosystem and a platform. The platform literature has largely ignored the potential tensions of shared (or divided) platforms: while Gawer and Henderson (2007) defined both Microsoft and Intel as Wintel platform "owners," they focused on their cooperative rather than competitive platform efforts.14 Comparing the Symbian and Wintel platforms, it was clear that individual CPU suppliers played much less of a role for Symbian, whereas the system integrators (handset makers) played a greater – at times crucial – role. Symbian's relationship with these integrators demonstrated a greater degree of rivalry than Microsoft faced either with its integrators or with Intel. Although Microsoft competed with its application vendors, it enjoyed far more technical and market power than Symbian did in negotiating with handset makers.

This divided leadership ultimately hurt the Symbian platform and its ecosystem members, delaying the response to the iPhone, Android, and App Store challenges. As market leaders, Symbian and its partners (especially Nokia) initially discounted the iPhone threat. Symbian was quicker than its partners to react, but lacked both the resources and the technical control to react unilaterally. A key vulnerability – the browser – was controlled not by Symbian but by its sub-platform partners, and the market-leading Nokia smartphones in particular were vulnerable to direct comparison. Other aspects of the user experience (e.g., preloaded applications) were left to the handset maker or even the operator – a model rejected by Apple although later adopted with Android.

Future Research

There are inherent limitations to generalizing from a single case – in this case, a single business ecosystem. The Symbian ecosystem differs from that for Windows Mobile, and differs significantly from other mobile phone and computing ecosystems. Ecosystem relationships are considerably simpler in an industry where there is only one major class of complement, such as videogames – an industry where the degree of variation between firms and successive console generations cries out for a systematic study of ecosystem management.


In the tradition of Carliss Baldwin (Baldwin & Clark, 2000; Baldwin & Woodard, 2010), this work suggests further research on the interdependence of the technical and economic relationships within an ecosystem. The Symbian ecosystem suggests that the technical structure is more enduring than the business structure. A piece of add-on software may initially be created as a complementary product sold independently to users by its developer, but in mobile phones such complementary software often became a component that was later integrated into the platform's capabilities. What influence does an ecosystem leader have in developing and cultivating such potential value-adding components? How can we predict the difference in value creation when a complement (adopted by a few) becomes an integrated component (provided to many)? And does this generalize to ecosystems beyond mobile phones where users face search or usage difficulties in acquiring complements – or to ecosystems where a small number of parties (here, manufacturers and operators) play a disproportionate role in product distribution?

Since Symbian was launched, the practice of ecosystem management appears to have become better understood and more mature, with mobile phones and videogame consoles now serving as additional exemplars beyond the PC. Will this reduce the problem of a dominant logic for a new ecosystem that forces strategies to fit a single well-known exemplar? Or will it merely shift the definition of the dominant logic for ecosystem management?

NOTES

1. The first author interviewed Simon East, VP of Technology from 1998 to 2001; David Wood, head of Symbian Platinum Partners from 2002 to 2004; Alan Roderick, head of the Symbian Competency Centers from 2001 to 2004 and the Platinum program from 2004 to 2007; Patricia Correa, who succeeded Roderick, developing and launching the Symbian Partner Network program in 2008; as well as Isaac Dela Pena, who was a senior manager (among other positions) at Nokia from 2000 to 2010.
2. The history of Symbian and its relationship to Psion can be found in Tasker (2000) and Northam (2006).
3. Here, we focus on the pocket-sized PDAs that eventually proved to be the dominant design for the product category, rather than the unsuccessful, tablet-sized PDAs such as the Apple Newton (1992–1997) and AT&T EO (1993–1994) that first gave the category its name.
4. Motorola did not actually finalize its investment in the company until October 1998 (West, 2014).

64

JOEL WEST AND DAVID WOOD

5. We created a database of 304 Symbian handsets shipped 2000–2013 using the official public list of handsets on Symbian’s website (both in 2008 and as stored on the Internet Archive), databases of phones on GSMArena.com and Japanese phones on Wikipedia.org, and press releases and news stories about handset releases. 6. The word ‘‘community’’ was sometimes used as an alternative, but ‘‘ecosystem’’ was generally preferred since ‘‘ecosystem’’ recognizes the reality that companies have competitive relationships, not just the ‘‘friendly’’ relations implied by the word ‘‘community.’’ 7. Within Symbian, handset manufacturers were handled by the sales division rather than ecosystem management, but manufacturers also had full access to all partner information and events. 8. The state of the partner program in early 2000 can be found on the Internet Archive (Archive.org) using the March 6, 2000, backup of www.symbian.com/partners. 9. This source code excluded a very small portion that had been licensed in by Symbian from third-party suppliers under a contract preventing any other company from seeing the source. 10. Nokia’s decision to cancel its CDMA handsets was seen as tied to its patent disputes with Qualcomm (originator of the CDMA mobile standard) that continued until the two firms settled in July 2008. 11. Despite its (largely) open source code, Android scored lower than Symbian, Linux, Mozilla, and five other mobile-related open source projects in an independent 2011 study of openness in open source communities (Laffan, 2011). 12. Such open source foundations were originally created to strengthen the negotiating position of volunteer individual contributors, but Nokia’s intended use was deliberately modeled on the Eclipse Foundation, in which the foundation increases legitimacy by providing limited autonomy from its main corporate sponsor (see O’Mahony, 2003; West & O’Mahony, 2008). 13. Two other investors – Panasonic and Siemens – exited handsets, while licensee Sendo went bankrupt. For Samsung and LG, Symbian was always one of several smartphone platforms, and in 2010 the Korean firms later became the first and second largest makers of Android smartphones. 14. West and Dedrick (2001) were one of the first to identify the potential divergence of Microsoft and Intel’s interests, when Intel supported Linux as a server operating system competing with Windows. Since then, Microsoft has offered Windows implementations for mobile phones and (with Windows 8) even PCs that do not require Intel-compatible chips.

ACKNOWLEDGMENTS

Earlier versions of this paper were presented at the 2008 User and Open Innovation Conference, the 1st Tilburg Conference on Innovation, and the Stanford Social Science and Technology Seminar. We thank the conference participants and the editors for their many helpful suggestions.


REFERENCES

Adner, R. (2012). The wide lens: A new strategy for innovation. New York: Penguin.

Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.

Arthur, W. B. (1996). Increasing returns and the new world of business. Harvard Business Review, 74(4), 100–109.

Baldwin, C. Y., & Clark, K. B. (2000). Design rules, Vol. 1: The power of modularity. Cambridge, MA: MIT Press.

Baldwin, C. Y., & Woodard, C. J. (2010). The architecture of platforms: A unified view. In A. Gawer (Ed.), Platforms, markets and innovation. Cheltenham, UK: Elgar.

Bresnahan, T. F., & Greenstein, S. (1999). Technological competition and the structure of the computer industry. Journal of Industrial Economics, 47(1), 1–40.

Brusoni, S., & Prencipe, A. (2013). The organization of innovation in ecosystems: Problem framing, problem solving, and patterns of coupling. In R. Adner, J. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 167–194). Bingley, UK: Emerald Group Publishing Limited.

Canalys. (2004). Global smart phone shipments treble in Q3 – Worldwide handheld market falls, but growth continues outside US. Press release, Canalys Ltd., October 27.

Chambers, T. (2006). How smartphones work: Symbian and the mobile phone industry. Chichester, UK: Wiley.

Cusumano, M. A. (2004). The business of software. New York: Free Press.

Dano, M. (2001). Is Symbian OK? RCR Wireless News, April 16, p. 1.

Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550.

Eisenmann, T. R. (2007). Managing proprietary and shared platforms: A life-cycle view. Working Paper No. 07-105, Harvard Business School. Retrieved from http://ssrn.com/abstract=996919

Farrell, J., & Klemperer, P. (2007). Coordination and lock-in: Competition with switching costs and network effects. In M. Armstrong & R. K. Porter (Eds.), Handbook of industrial organization (Vol. 3, pp. 1967–2072). Amsterdam: North Holland.

Frankort, H. T. W. (2013). Open innovation norms and knowledge transfer in interfirm technology alliances: Evidence from information technology, 1980–1999. In R. Adner, J. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 239–282). Bingley, UK: Emerald Group Publishing Limited.

Gabel, H. L. (1987). Open standards in computers: The case of X/OPEN. In H. Landis Gabel (Ed.), Product standardization and competitive strategy. Amsterdam, The Netherlands: North-Holland.

Gallagher, S., & West, J. (2009). Reconceptualizing and expanding the positive feedback network effects model: A case study. Journal of Engineering and Technology Management, 26(3), 131–147.

Gawer, A. (2010). The organization of technological platforms. In N. Phillips, G. Sewell & D. Griffiths (Eds.), Research in the Sociology of Organizations (Vol. 29, pp. 287–296).

Gawer, A., & Cusumano, M. A. (2002). Platform leadership: How Intel, Microsoft, and Cisco drive industry innovation. Boston, MA: Harvard Business School Press.

Gawer, A., & Henderson, R. (2007). Platform owner entry and innovation in complementary markets: Evidence from Intel. Journal of Economics and Management Strategy, 16(1), 1–34.

Grove, A. S. (1996). Only the paranoid survive: How to exploit the crisis points that challenge every company and career. New York: Doubleday.

Iansiti, M., & Levien, R. (2004). The keystone advantage: What the new dynamics of business ecosystems mean for strategy, innovation, and sustainability. Boston, MA: Harvard Business School Press.

Katz, M. L., & Shapiro, C. (1985). Network externalities, competition, and compatibility. American Economic Review, 75(3), 424–440.

Kenney, M., & Pon, B. (2011). Structuring the smartphone industry: Is the mobile Internet OS platform the key? Journal of Industry, Competition and Trade, 11(3), 239–261.

Laffan, L. (2011). Open Governance Index: Measuring the true openness of open source projects from Android to Webkit. London: Vision Mobile Ltd.

Mäkinen, S. J., & Dedehayir, O. (2013). Business ecosystems' evolution – An ecosystem clockspeed perspective. In R. Adner, J. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 99–125). Bingley, UK: Emerald Group Publishing Limited.

Markoff, J. (1998). Microsoft memo offers a glimpse of Gates 2.0. New York Times, October 12, p. C1.

Maula, M., Keil, T., & Salmenkaita, J.-P. (2006). Open innovation in systemic innovation contexts. In H. Chesbrough, W. Vanhaverbeke & J. West (Eds.), Open innovation: Researching a new paradigm (pp. 241–257). Oxford: Oxford University Press.

Mintzberg, H. (1978). Patterns in strategy formation. Management Science, 24(9), 934–948.

Morris, B. (2008). Platform security and Symbian Signed: Foundation for a secure platform. White paper, Version 1.4, Symbian Developer Network, January.

Morris, C. R., & Ferguson, C. H. (1993). How architecture wins technology wars. Harvard Business Review, 71(2), 86–96.

Nathani, P. (2008). Happy birthday to PyS60! forums.nokia.com. Retrieved from http://blogs.forum.nokia.com/blog/pankaj-nathanis-forum-nokia-blog/2008/12/26/happybirthday-pys60. Accessed on December 26.

Northam, P. (Ed.). (2006). How smartphones work: Symbian and the mobile phone industry. Chichester, UK: Wiley.

O'Mahony, S. (2003). Guarding the commons: How community managed projects protect their work. Research Policy, 32(7), 1179–1198.

Orlowski, A. (2010). Symbian's secret history: The battle for the company's soul; How Nokia took charge, and never let go. The Register. Retrieved from http://www.theregister.co.uk/2010/11/29/symbian_history_part_two_ui_wars/. Accessed on November 29.

Prahalad, C. K., & Bettis, R. A. (1986). The dominant logic: A new linkage between diversity and performance. Strategic Management Journal, 7(6), 485–501.

Price, C. (1999). Symbian partners exploit Epoc to transform phones. Financial Times, June 2, p. 1.

Scheible, J., & Tuulos, V. (2007). Mobile Python: Rapid prototyping of applications on the mobile platform. Chichester, UK: Wiley.

Simcoe, T. (2006). Open standards and intellectual property rights. In H. Chesbrough, W. Vanhaverbeke & J. West (Eds.), Open innovation: Researching a new paradigm (pp. 161–183). Oxford: Oxford University Press.

Stuermer, M., Spaeth, S., & von Krogh, G. (2009). Extending private-collective innovation: A case study. R&D Management, 39(2), 170–191.

Symbian. (2008a). Symbian @ 10: History of Symbian. Retrieved from http://tenyears.symbian.com/timeline.php. Accessed on June 23, 2008.

Symbian. (2008b). Symbian reports first quarter results for 2008. Press release, May 20. Retrieved from http://www.symbian.com/news/pr/2008/pr20089950.html. Accessed on July 9, 2008.

Tasker, M. (2000). A new EPOC. In Professional Symbian programming: Mobile solutions on the EPOC platform (Chapter 1). Birmingham, UK: Wrox Press.

Teece, D. (1986). Profiting from technological innovation: Implications for integration, collaboration, licensing and public policy. Research Policy, 15(6), 285–305.

West, J. (2003). How open is open enough? Melding proprietary and open source platform strategies. Research Policy, 32(7), 1259–1285.

West, J. (2014). Challenges of funding open innovation platforms: Lessons from Symbian Ltd. In H. Chesbrough, W. Vanhaverbeke & J. West (Eds.), Open innovation: New research and developments. Oxford: Oxford University Press.

West, J., & Dedrick, J. (2000). Innovation and control in standards architectures: The rise and fall of Japan's PC-98. Information Systems Research, 11(2), 197–216.

West, J., & Dedrick, J. (2001). Open source standardization: The rise of Linux in the network era. Knowledge, Technology & Policy, 14(2), 88–112.

West, J., & Mace, M. (2010). Browsing as the killer app: Explaining the rapid success of Apple's iPhone. Telecommunications Policy, 34(5–6), 270–286.

West, J., & O'Mahony, S. (2008). The role of participation architecture in growing sponsored open source communities. Industry & Innovation, 15(2), 145–168.

Whitney, L. (2011). Report: Apple remains king of app-store market. CNET News.com. Retrieved from http://news.cnet.com/8301-13579_3-20032012-37.html. Accessed on February 11.

Wood, D. (2002). Symbian Developer Expo 2002 – in context. Internal presentation, Symbian Ltd., London, April.

Wood, D. (2003). Partnering goals, 1H 2003. Internal document, Symbian Ltd., London, January.

Wood, D. (2005). Symbian for smartphone leaders: Principles of successful smartphone development projects. Chichester, UK: Wiley.

Yoshizawa, M., Ichikawa, Y., & Kogetsu, Y. (2006). Expansion of "MOAP" software platform for mobile terminals. NTT DoCoMo Technical Journal, 8, 15–18.

BUILDING JOINT VALUE: ECOSYSTEM SUPPORT FOR GLOBAL HEALTH INNOVATIONS

Julia Fan Li and Elizabeth Garnsey

ABSTRACT

Healthcare innovations for bottom-of-pyramid populations face considerable risks and few economic incentives. Can entrepreneurial innovators provide new solutions for global health? This chapter examines how a technology enterprise built a collaborative network and supportive ecosystem, making it possible to steer an innovation for TB patients through discovery, development, and delivery. Ecosystem resources were mobilized, and upstream and downstream co-innovation risks were mitigated, to commercialize a new diagnostic test. Detailed evidence on this innovation for TB care uses ecosystem analysis to clarify core issues in the context of joint value creation. The case study shows how resources from private and public partners can be leveraged and combined by the focal firm to build joint value and to lower execution, co-innovation, and adoption risks in healthcare ecosystems combating diseases of poverty.

Keywords: Ecosystem; joint value; entrepreneurship; innovation value chain; collaboration



INTRODUCTION

Incentives to offer goods and services usually arise through market exchange. For the poor, who lack purchasing power, there is chronic under-provision of goods and services, including critical healthcare services. Approximately 4 billion people, living on less than $2,000 USD per year, make up the collectively termed "bottom-of-the-pyramid" (BOP) market (Prahalad & Hammond, 2002). The latest biomedical innovations often do not reach BOP populations. Private sector activity and innovation seldom develop new solutions to diseases that predominantly affect the poor in low-income countries (Trouiller et al., 2002). However, in view of the economic and moral imperative to innovate for all, new private-sector activity is being stimulated in fighting diseases that affect BOP populations (Harper, 2007).

Whether a firm's innovations eventuate and are translated into use depends on complementary innovations and may require the involvement of an ecosystem of interdependent participants (Adner, 2006). Innovator firms face a multitude of challenges, including execution, co-innovation, and adoption chain risks (Adner, 2012). How can they overcome these challenges and leverage both internal and external resources to create value for patients, while capturing enough value themselves to sustain further research and development (R&D)? This chapter examines the innovator's relationships with other ecosystem participants and how co-innovation affects value creation and capture by the entrepreneurial firm.

Technological innovations based on biomedical discovery and development work are expensive and require upstream funders. They also require secure downstream delivery routes to market. In resource-constrained environments, such routes often rely on partnerships to help open distribution channels. We inform our analysis with evidence from the case study of Cepheid, a medical devices company combating tuberculosis (TB), and draw conclusions applicable to other innovators seeking to create shared value.

ENTREPRENEURIAL INNOVATION

Throughout history, innovations by entrepreneurs have met needs that had long gone unaddressed (Nairn, 2002). Few scholars have set the issue of entrepreneurial innovation in the context of innovation ecosystems. But wherever the aim is to create value in innovative ways, a supportive ecosystem may be needed (Adner & Kapoor, 2010). Working with established organizations in an ecosystem allows a new entrepreneurial firm to gain legitimacy and to reduce risks (Eisenhardt & Schoonhoven, 1996). Collaboration with customers and suppliers helps to define the new value proposition around a seed innovation (Helfat & Peteraf, 2003). Once admitted to an ongoing ecosystem, the innovator can mobilize and leverage the resources of other partners and ultimately contribute back to the innovation ecosystem. In some cases, firms need to create a new ecosystem that will support their innovations (Garnsey & Leong, 2008). Entrepreneurs excel at coupling activities that provide them with resource access: because they are resource constrained, they have the incentive to orchestrate the dynamics of interactions among an ecosystem's complementary and supporting actors (Brusoni & Prencipe, 2013).

The role of entrepreneurial innovators has not been a focus of the BOP literature, but the need for supportive institutions for BOP markets has been recognized (Mair & Marti, 2009). There is also increasing interest in how partnerships and alliances can make products and services available to the BOP (Karamchandani, Kubzansky, & Frandano, 2009). To this end, a number of recent studies have focused on the strategies of large multinational corporations and the provision of existing goods and services (Hart & Christensen, 2002; London & Hart, 2004). But little attention has been paid to the innovation value chain of discovery, development, and delivery of new technological and social solutions specifically designed for BOP populations (Karnani, 2007).

The existing innovation literature provides rich examples of networks of knowledge and collaborative learning, and of the importance of anchor firms in business networks (Iansiti & Levien, 2004). The strategic alliance literature offers insights into interconnections between firms and the transaction costs of mobilizing resources (Mowery, Oxley, & Silverman, 1996). Studies of innovators in global health can draw on and extend the analysis offered by this research.

Biomedical innovations move from invention and discovery to development, involving clinical testing, and through to delivery to the end user (Hansen & Birkinshaw, 2007). In healthcare innovation, downstream and upstream partners often share technical knowledge and share an objective to ensure that the innovation reaches the end patient. Moreover, regulatory bodies, advocacy groups, and political mindsets affect the rate of adoption of health innovations. The ecosystem perspective is used in this chapter to examine the innovator firm's interactions with other participants in a dynamic environment where each participant aims to balance interdependence risks and benefits.

We consider two related questions. First, how can firms leverage their internal resources and combine these with resources from their ecosystem to provide for unmet medical needs in conditions of poverty? Second, we address the issue of value capture: can enough value be secured by the value-creating firm to sustain its activities and contribute to its ecosystem?

CREATING JOINT VALUE IN A BOP INNOVATION ECOSYSTEM

New technologies are slow to reach the poor (Prahalad & Hammond, 2002). This is especially the case in healthcare (Hotez et al., 2007). Many definitions of innovation and the commercialization of invention assume market introduction and use effective demand as a proxy for wants or needs, assuming the customer's ability to pay (Afuah, 2003). In contrast, the BOP literature makes the business case for widening access to innovations to include the 4 billion people who live on less than $2 per day (Prahalad, 2006). To make healthcare innovations more accessible, business models are turning to volume and impact rather than margins alone (Munir, Ansari, & Gregg, 2010). Partnership models can be used to mitigate risks of failure and to achieve shared benefits. We show that business models that involve the pooling of resources enable the process of co-innovation even in distinctive conditions of abject poverty.

INNOVATION VALUE CHAIN AND THE ECOSYSTEM

Along the industry value chain from primary producers to final customers, inputs are transformed into more valuable outputs by individual productive units (Porter, 1980). The ecosystem concept includes not only participants in the conventional industry value chain but also the funders, resource providers, standard setters, and complementary innovators who make it possible to generate joint value. The ecosystem lens allows elements of joint value creation to be included in strategic analysis and builds upon the classic literature on partnerships and alliances. Partnerships and alliances are seen to help the innovator obtain necessary resources in various forms of open innovation that compensate for the absence of vertical integration (Chesbrough, 2003). The ecosystem view enriches the open innovation construct, allowing analysis to include external challenges and benefits experienced by partners in these networked relationships (Adner & Kapoor, 2010). The creation of shared economic and social value, advocated by Porter, can be made possible by ecosystem participants (Porter & Kramer, 2011).

FIGHT AGAINST TUBERCULOSIS

Tuberculosis is an age-old bacterial disease that still claims the lives of approximately 1.8 million people a year (Lönnroth et al., 2010). TB primarily affects lower socioeconomic groups, often in environments with poor sanitation. There are few economic incentives to engage in R&D to find treatments for diseases that disproportionately affect the poor (Hotez et al., 2007; Moran et al., 2009). Just over $2.5 billion was invested in R&D for new neglected disease products in 2007, about 1.6% of the estimated $160 billion global R&D spend on healthcare (Burke & Matlin, 2008). No new TB vaccine has been brought to market since 1921, and no new antibiotic drug class has been proven effective for TB since 1963.

Infection by the TB infectious agent, Mycobacterium tuberculosis, may remain latent within the human body, may develop into active disease, or may be eradicated by the host's immunological response. TB diagnostic tests are required to assign a patient to one of these categories. However, detection remains difficult because of inaccurate diagnostic methods and confounding factors (Wallis et al., 2010). Modern TB diagnostic tools are neither appropriate nor affordable for resource-poor environments. National TB programs in disease-endemic countries still rely on older and inaccurate methods for diagnosing TB. Without correct diagnosis of a patient's condition, appropriate medical treatment cannot be provided. Where conventional public health efforts have done little to resolve persistent problems, could entrepreneurial innovation provide a way forward? We sought a case where new solutions were forthcoming through entrepreneurial innovation and aimed to identify how this was achieved.

TUBERCULOSIS DIAGNOSTICS IN CONTEXT

The innovator presented here addressed pressing issues. One-third of the world's population is infected with latent TB, and an estimated 9 million people develop active TB every year; however, most do not receive a confirmed diagnosis. A market research report found that worldwide, $1 billion USD is spent per annum on TB tests and evaluations, which screen 100 million people (FIND/TDR, 2006).

The slow pace of innovation for TB diagnostics is related to the underdeveloped science of biomarker development and a lack of clarity as to the required product profile. Table 1 outlines diagnostic requirements for resource-poor settings.

Table 1. Diagnostic Requirements for Resource-Poor Settings.

Barrier: Shortage of trained medical professionals. Solution: Simple, easy to use and interpret.
Barrier: Lack of reliable electricity and clean water. Solution: Resource efficient.
Barrier: Extreme environmental conditions. Solution: Robust.
Barrier: Cost prohibitions. Solution: Low cost.
Barrier: Limited transportation options. Solution: Point of care.

Source: BIO Ventures for Global Health.

The absence of incentives to develop products targeted at diseases of the poor slows down the diffusion and uptake of best practices and the most advanced diagnostics (Wallis et al., 2010). But over the past 10 years, progress has been made in expanding the TB diagnostic pipeline. There is increased funding from the private sector, from foundations, and from government programs that see combating TB as among the Millennium Development Goals. Table 2 lists entrepreneurial innovators from institutions around the world working toward improved TB diagnostics.

Table 2. Innovator Efforts on New TB Diagnostics.

Public Institutions: University of Medicine and Dentistry of New Jersey; London School of Hygiene and Tropical Medicine.

Private Enterprises: BD Diagnostic Systems; Tauns Laboratories Inc; Standard Diagnostics Inc; Innogenetics; HAIN Lifescience; Chembio; Bigtec Labs/Molbio; Cepheid Inc; LW Scientific; Fraen; Carl Zeiss Inc; Eiken Chemical; TREK Diagnostic Systems; Alere.

Not-for-Profit Organizations: Foundation for Innovative New Diagnostics; World Health Organization Special Program on Research and Training for Tropical Diseases.

Source: Treatment Action Group, World Health Organization.

Table 2 shows that private sector entrepreneurial efforts on new TB diagnostics outnumber academic and not-for-profit entrepreneurial efforts. These firms usually have a platform technology relevant to a variety of diseases; no firm listed works only on TB diagnostics. These firms also often engage in strategic alliances with each other and cross-invest in each other's ventures.

RESEARCH METHOD AND DATA COLLECTION

The research design uses a single case study, selected from a wider dataset, as an exemplar. It is not claimed that the unique case is representative, but rather that it can offer special insight and a basis for analytical generalization (Siggelkow, 2007, p. 20). Single case studies have provided the research base for influential studies in management science, for example, those of Penrose (1960) and Schein (2010). A qualitative case study presents and interprets rich evidence, serving as an exemplar to inform understanding of a phenomenon that is still relatively unknown (Brown & Eisenhardt, 1997; Eisenhardt, 1989; Yin, 2003). Appendix I includes more detail on the data collection and data analysis underlying this account.

INTRODUCING CEPHEID: AN ENTREPRENEURIAL INNOVATOR

Cepheid is an on-demand molecular diagnostics company based in California. Its platform technology ("GeneXpert") makes it possible to conduct rapid genetic testing for organisms and genetic-based diseases by automating laboratory procedures. GeneXpert incorporates a fully integrated laboratory in a cartridge that performs all sample processing and eliminates the constraint of having to operate in a central reference laboratory. The platform brings diagnostics closer to the point of care between clinician and patient.

Three partners, Thomas L. Gutshall, M. Allen Northrup, and Kurt Petersen, founded Cepheid in California in August 1996. Petersen was the visionary scientist and a serial technology entrepreneur.1 The three partners licensed a key component of their core technology from the US Lawrence Livermore National Laboratory,2 which allowed them to build an integrated test. For the first five years of Cepheid's existence, it was able to attract funding from private investors, founders, and grants from government agencies. To raise additional funds, Cepheid went public in an initial public offering on NASDAQ in 2000. Consistent with the firm's strategy of offering an open system supportive of lab-developed tests, Cepheid targeted the life sciences research market and made its first sales in 2000.

However, its clinical and laboratory strategy was interrupted by external events in September 2001. When terrorists attacked New York City and the Pentagon in Washington, DC, and anthrax-spore attacks followed that same month, the threat of bioterrorism escalated. Unexpected changes in user needs spiked interest in small-niche biodetection companies (Christensen, 1992). Cepheid scientists made no significant component or technological advances in September 2001, but new orders poured in from federal agencies. The US Army and the Centers for Disease Control (CDC) wanted to use Cepheid's DNA detection systems for both laboratory and field testing (Henderson, 2002a). In May 2002, Cepheid joined a partnership led by Northrop Grumman Corp, a defense contractor that had won a bid to install anthrax detectors at postal centers nationwide in the United States.

Recognizing an Opportunity

While the biodetection contract would ultimately peak at $58 million, Cepheid's CEO John Bishop recognized that the future of the firm and its integrated testing platform held greater potential in the clinical medical market. The evolution of a technology platform from an industrial setting to a clinical setting posed significant challenges, requiring that the firm gain an understanding of a new operating environment. New technologies need to prove their superiority over current alternatives to be adopted by users. The firm also needed to establish new distribution channels and influence physician/user adoption.

It was recognized at Cepheid that its GeneXpert testing platform could be configured to test for a variety of infectious diseases, including TB. However, it had to consider the initiation risks of investment and return, adoption risks, and incentives to commence the TB project over other projects in its portfolio. Cepheid had neither the in-house technical knowledge to develop the TB test alone nor a justification to its shareholders for the investment in the absence of a large commercial market.

Mobilizing Competencies and Partnering for Resources

To overcome technical barriers and economic constraints, Cepheid began to investigate partnership possibilities. As a publicly listed company, Cepheid was on a commercial mission to develop and deliver technologies on a profitable basis. Cepheid examined its own internal competences and looked externally to partners with similar interests and complementary knowledge. Each partnership had to be funded so that the collaborations met the dual criteria of business viability and technical development.

In August 2002, Cepheid received its first US National Institutes of Health (NIH) grant to co-develop a rapid diagnostic test for TB (Newswire, 2002). Cepheid received $376,000 for its collaboration with Dr. David Alland at the University of Medicine and Dentistry of New Jersey (UMDNJ). Dr. Alland's university laboratory had been focused on molecular beacons, infectious diseases, and assay development, and welcomed Cepheid's development knowledge. From the beginning, the goal of the collaboration was, in Dr. Alland's words, to "develop a diagnostic system for TB and drug resistant TB that can be used anywhere in the world by individuals with minimal training." Shortly thereafter, another collaboration was set up with Children's Medical Center of Dallas and the University of Texas Southwestern Medical Center at Dallas, using Cepheid's platform technologies (Henderson, 2002b). On the basis of these clinical contacts, Cepheid started to gain the attention of clinicians, competitors, and collaborators.

In 2003, the Foundation for Innovative New Diagnostics (FIND), a not-for-profit foundation, was established in Geneva, Switzerland, with a mission to drive the development and implementation of accurate and affordable diagnostic tests in low-resource settings. FIND was supported by a founding grant from the Bill & Melinda Gates Foundation. The initial focus of the FIND program was TB, and an exhaustive international search was undertaken to identify potential partners in meeting its objectives. FIND identified Cepheid as having the most technically advanced platform available. FIND's objective to develop an accurate, affordable, and rapid TB diagnostic created a specific clinical opportunity for Cepheid's simplifying technology.

Value Creation in a Supportive Ecosystem

Cepheid created value through the commercialization of GeneXpert MTB/RIF, an accurate, rapid, affordable diagnostic test for TB that also detects potential drug-resistant TB strains. The World Health Organization approved Cepheid's platform diagnostic on December 8, 2010.3 Proof-of-concept studies of GeneXpert MTB/RIF for TB were published as evidence prior to approval, and post-approval effectiveness studies in different geographic regions expanded. This increase in evidence of effectiveness in scientific publications by third parties independent of Cepheid marks a transition from a period of high interdependence risks during the development phase to decreasing adoption risks as technology diffusion occurred.

Value for patients resulted from correct diagnosis of TB alongside the appropriate course of treatment. More widely, accurate and rapid diagnosis can help prevent further infection and transmission of both drug-responsive and drug-resistant TB. Given the difficulty of diagnosing TB rapidly and accurately (avoiding false positives), this diagnostic test impacts the entire value chain of TB treatment and control.

Overcoming Co-innovation Risks

To fulfill its mission to develop new diagnostics, FIND searched for private sector partners who could help the not-for-profit foundation work toward affordable diagnostics for diseases disproportionately burdening low-income countries. Program executives from both Cepheid and FIND closely managed the GeneXpert development and commercialization agreement. A steering committee and governing body was set up to oversee the collaboration, with formal monthly, quarterly, and annual in-person meetings. On a practical basis, the scientists and program staff communicated daily and interacted frequently to solve technical obstacles. To ensure an effective development partnership and to mitigate natural tensions of accountability, success metrics for the collaboration were defined early.

A strong governance structure also allowed for appropriate partitioning of project responsibilities. The target product profile and technical specifications of the point-of-care TB diagnostic test were negotiated and screened by experts from FIND and Cepheid. The upstream development partnership with FIND also specified key market and delivery objectives from the beginning. The agreement stated that the pricing of the end product for developing countries must be as affordable as possible to end customers. The contract required that Cepheid offer special pricing for developing countries with a heavy TB burden in exchange for FIND project funding. Cepheid was to price on the basis of manufacturing cost plus a small percentage margin. It was predicted that the high volume of orders from disease-endemic countries would provide returns for Cepheid to cover its R&D investment in test development. Cepheid's long-term business model also relies on customers buying the recurrent supplies required by its product offering: each downstream testing location or clinician's office may have to purchase the testing machine only once, but demand is sustained by recurring orders for replacement test cartridges.

A similar R&D steering committee, with a similar governance structure, was formed with the academic laboratories at UMDNJ. The collaboration with academia was more focused on research, with general objectives of knowledge discovery and exploring potential applicability. There were fewer specifications of output metrics, which are more characteristic of development partnerships further downstream in the innovation value chain.

Creating Joint Value Provides Incentives that Inform Business Strategy

Cepheid views the upstream and downstream network for TB innovation as providing financial and technical value capture for the firm and as congruent with its business strategy. The company, existing customers, and shareholders all benefit from shared investments in the TB program. Cepheid has expanded the scale and technical capabilities of its GeneXpert platform, and the results have been positive. In addition, Cepheid benefits from volume, impact, and propagation of its technology on a global basis. The TB test provides a new entry point for customers: customers can start using GeneXpert for TB and, in the future, easily purchase additional services on the GeneXpert platform menu (for additional diseases and indications). Using the ecosystem schema, an overview of joint value creation in Cepheid's operating environment can be seen in Fig. 1. Cepheid's technology platform was a resource that helped Cepheid attract further resources for development of the TB point-of-care diagnostic.

Fig. 1. Cepheid's Innovation Ecosystem for TB Diagnostics.

The delivery of the innovation to end customers was initially dependent on downstream ecosystem partners. For many of the low-income countries, Cepheid had no prior sales relationships or direct distribution channels, so downstream partners were required to deliver this innovation. Cepheid relied on in-country distributors and authorized service providers in these new markets. Education and awareness trips were organized by FIND, government officials, UN agencies, and civil society organizations to inform health practitioners of the newly available technology. These delivery partners engaged in early demonstrations of the GeneXpert platform and supported adoption of the technology through post-sales training and support. Over time, as demand for GeneXpert has increased in new markets, Cepheid has set up direct corporate distribution channels and, in some instances, purchased distributors in developing countries to establish a direct presence. As sales volume increases for Cepheid's products, direct distribution can provide long-term service maintenance, quality control, and support for Cepheid's customer base.

How an Ecosystem Was Mobilized to Support Cepheid's Innovation

Cepheid commercialized its GeneXpert platform for TB by building on a strong platform and harnessing resources from its scientific partners and downstream collaborators. Financial resource providers in the ecosystem – biomedical grant agencies, foundations, and universities – enabled knowledge sharing upstream. This was undertaken by UMDNJ, FIND, and Cepheid, facilitated by financial support that linked the three collaborators on the same project code. Collaboration between Cepheid and UMDNJ on the application of Cepheid's technology to TB and the underlying research had started in 2002. However, there was no internal program at Cepheid for TB work, and allocation of both staff and resources was uncertain and slow. The pace of development increased rapidly in 2006 when FIND joined Cepheid as a co-development partner for GeneXpert for TB.4 The collaboration included UMDNJ, Cepheid, and FIND and was catalyzed by a $3.7 million USD grant from the Bill & Melinda Gates Foundation over three years (Newswire, 2006). Subsequently, Cepheid was awarded a phase II $3.3 million grant from the US NIH for TB test development (News, 2006). By supporting a consortium of R&D collaborators in the ecosystem, resource providers also reduced the risks faced by their investments and grants; access to the internal resources of additional participants increased the chances of product success.

External funding allowed Cepheid to expand its technology platform and extend it into the area of TB diagnosis. "Cepheid was pursuing its own clinical development activities and the TB market was in an unfamiliar area outside of its domestic (US) market boundaries," stated Steven Nelson, Cepheid Vice President of business development.5 The Cepheid system was the only closed, self-contained, fully integrated, and automated system for molecular testing available on the commercial market in 2006. "We saved up to three years development time than if we funded a university lab or concept idea," remarked Dr. Mark Perkins, chief scientific officer of FIND, on the search for a TB product development partner.6

Finding Partners in Technological Diffusion to Reduce Adoption Risks

A Joint Development Committee was created between FIND and Cepheid to manage the project, with a TB project manager on each side. FIND acted both as a supplier of upstream technical knowledge and as a downstream complementor helping Cepheid deliver the end product to customers. For example, Cepheid and FIND worked to develop internal technology for molecular diagnostics so as to avoid upstream royalty payments to additional patent holders (other firms) and keep the overall cost of production as low as possible. FIND was also a helpful downstream partner in delivery, and it reduced adoption risks for other ecosystem participants by undertaking essential demonstration trials of the product. FIND published clinical trial results illustrating the effectiveness and superiority of GeneXpert over other alternatives on the market. FIND also helped Cepheid navigate the World Health Organization approval process, which is required before United Nations agencies are allowed to procure medical devices. UN agencies often help low-income countries procure essential biomedical products and deliver them to end customers who could not otherwise afford them.

The public-private partnership between Cepheid and FIND helped to mitigate science and technology risk, regulatory risk, reputational risk, and implementation risk. By working with another ecosystem participant as both an upstream supplier of knowledge and a downstream complementor, Cepheid was better able to control its co-innovation and adoption risks. Adoption of the innovation across the ecosystem value chain is facilitated because partners are thereby assisted to achieve their own missions.

Cepheid's TB project operates in the development phase of the innovation value chain, with partners collaborating on discovery and delivery. Cepheid's core technology of rapid nucleic acid testing was originally verified in the biothreat market, with confirmation of the technology through anthrax detection machines. Cepheid reoriented its technology to the clinical setting, which included a specific co-funded application for TB. Cepheid built upon the original technology findings with FIND and worked on development of a commercial product. However, adoption risk remained present in the ecosystem in view of competitive forces from existing technologies. The GeneXpert MTB/RIF system had to prove itself against incumbent diagnostic technologies by achieving superior performance on effectiveness and cost. Thus, the demonstration trials undertaken by FIND and other academic downstream ecosystem partners helped to decrease adoption risk by customers. For example, the GeneXpert MTB/RIF diagnostic has been used in many scientific studies in both developed and developing country clinical settings to test for effectiveness. The results and subsequent publications in the scientific literature after the 2010 device approval have increased the pace of technological diffusion and use of the test.

As shown in Fig. 1, policymakers, governments, and nongovernmental organizations (NGOs) also impact the shared global health ecosystem. Once the GeneXpert platform had been proven effective in clinical trials, the innovation moved down the innovation value chain toward delivery to customers. The broad nature of downstream adoption risks differs from the precise and specific technical co-innovation risks characterizing the development stage. Cepheid started to engage with new intermediary partners such as UN agencies and large NGOs focused on healthcare delivery, and with implementation partners including the Clinton Health Access Initiative. These delivery partners provided a bridging relationship between Cepheid and the respective country governments and Ministries of Health. They demonstrated the GeneXpert test and provided funding for training, support, and country roll-out programs. "Long-term support of the political environment is key," commented Steve Nelson, Cepheid Vice President of business development, on in-country delivery programs; "our experience with different countries is often reflective of the quality of the intermediary partners."

In the production and delivery phase of an innovation, product pricing plays a key role, especially in resource-constrained environments. Cepheid's original development contract with FIND specified that low-income countries would obtain the platform and cartridges at a 75% price discount compared with high-income countries. Further to the high demand and proven effectiveness of the GeneXpert MTB/RIF test upon commercial launch, an additional price reduction was negotiated in 2012 for high disease burden developing countries, to lower diffusion barriers and increase access to the test.7 The 2012 price reduction was negotiated with the Bill & Melinda Gates Foundation and several international funders, and the commercial price points for low-income countries were reduced by a further 40%. It was acknowledged that an upfront subsidy from the international consortium of funders would enable a long-term sustainable flow of product to high-burden developing countries. The investment allowed Cepheid to make capital expenditures to strengthen its manufacturing and supply chain to keep pace with customer demand. The surge in volume orders allowed Cepheid to realize economies of scale much sooner from its production capabilities.
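To make the cumulative effect of the two negotiated price reductions concrete, a minimal worked example follows, on the reading that the 2012 reduction of 40% applied to the already-discounted low-income price; the list price used is purely illustrative, since actual contract prices are not reported here. Writing $p_{\mathrm{HIC}}$ for the high-income country price:

\[
p_{\mathrm{LIC}}^{\mathrm{launch}} = (1 - 0.75)\,p_{\mathrm{HIC}} = 0.25\,p_{\mathrm{HIC}},
\qquad
p_{\mathrm{LIC}}^{2012} = (1 - 0.40)\,p_{\mathrm{LIC}}^{\mathrm{launch}} = 0.15\,p_{\mathrm{HIC}},
\]

a cumulative discount of 85% relative to the high-income price. On an illustrative high-income price of $100 per test, the low-income price would thus fall from $25 at launch to $15 after the 2012 renegotiation.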

DISCUSSION

We considered a number of interrelated questions in the context of co-innovation by ecosystem participants. How can innovative enterprises create value for users by leveraging their internal resources and attracting external resources from their ecosystem? Can this provide for unmet medical needs in conditions of poverty? Can enough value be secured by the entrepreneurial innovator to make it possible to maintain its activities, including new product development? In this section, we reflect on the evidence and on why we present and interpret these questions in terms of an innovation ecosystem.

Mobilizing Partnerships to Form a Supportive Ecosystem and to Reduce Risks

Cepheid exemplified the entrepreneurial skill of opportunity recognition by seeing that its resource base was well suited to creating value in the arena of TB care. Cepheid also demonstrated how the entrepreneurial innovator may have to move into proactive opportunity creation by mobilizing a supportive ecosystem to this end. In doing so, like all innovators engaged in forms of open innovation, the enterprise must attract partners and manage co-innovation risks in working with them (Adner, 2012). We summarize below the key partnerships that together gave rise to a supportive ecosystem for Cepheid's innovation, and consider how the mitigation of risks for partners helped create incentives for them to take part in this ecosystem.

Risks are compounded in severely resource-constrained BOP environments. The development of the GeneXpert MTB/RIF test for TB by Cepheid faced technical and financial development risks, as well as delivery risks exacerbated by limited distribution channels. Cepheid could not develop and deliver an improved TB diagnostic without the support of participants in an ecosystem mobilized for innovation. In return, Cepheid was able to contribute value to ecosystem partners who shared the mission of better healthcare for BOP populations.

To overcome the initiation risks involved in starting a new and uncertain R&D project, Cepheid developed relationships with knowledgeable technical experts to mitigate risks of failure and collaborated with established academic experts in TB and diagnostics to gain legitimacy in a new technological area (Suchman, 1995). The firm further reduced initiation risks by partnering with FIND, both as an upstream knowledge supplier and as a downstream complementor in delivery of the innovation (Eisenhardt & Schoonhoven, 1996; Helfat & Peteraf, 2003). Discovery and development partnerships with UMDNJ and FIND signaled credibility and accelerated funding from resource providers. With combined knowledge and experience, all three ecosystem participants were better able to harness advocacy and political support as well as navigate regulatory hurdles. Each of the collaborators used their internal technical capabilities to advance the innovation, thereby attracting further funding.

Co-innovation risks inevitably emerge when a firm forms a collaborative network. Risks of delay from upstream and downstream partners were reduced by attracting partners who shared Cepheid's output goals. Detailed development goals were agreed from the outset, with outcome-oriented product specifications and participant responsibilities. Funding grants were received collectively for the project (awarded to Cepheid, FIND, and UMDNJ), aligning incentives for the primary collaborators, who shared overlapping technical knowledge (Mowery, Oxley, & Silverman, 1998). In a shared ecosystem, the innovation value chain can operate in such a way that no one participant can monopolize value capture. Value created at upstream phases of the value chain can increase if downstream innovators work to improve delivery. For example, there was greater value in the research agreement between Cepheid and UMDNJ once their discoveries were to be used in a development process endorsed by FIND and aimed at patients around the world. Participants can harness value added by others in the ecosystem to provide joint benefits and shared value.

Cepheid reduced adoption risks by taking into consideration the needs of downstream complementors and end users. An understanding of the BOP customer context was embedded in the technical and pricing specifications of the diagnostic test and addressed factors such as size, mobility, robustness, and cost. Adoption barriers were considered in the development phase so that each downstream user would enjoy benefits enhanced by co-innovation (Adner, 2012). Cepheid drew upon the support of secondary ecosystem participants, including health education groups, NGOs, and governments, to inform clinicians, hospitals, and politicians about the new testing platform and its eventual use for patients. Table 3 provides a summary of how Cepheid leveraged internal resources and attracted resources from its ecosystem to overcome execution, co-innovation, and adoption risks to commercialize GeneXpert MTB/RIF.

Incentives for Collaboration

Two key quotations explain the incentives to collaborate. We heard from Cepheid that "Without the presence of the two main resource providers FIND and NIH, this program would not have been possible on Cepheid's internal efforts alone."8 According to FIND, "We partnered with Cepheid in the private sector because they had an existing platform and the platform could be adapted for TB."9 Numerous challenges involving risk assessment and resource building lie behind these statements.

Table 3. Innovation Ecosystem Risks and Mitigations.

Execution risks – including the technical, market, social, and financial risks involved with any project.
Ecosystem-based mitigation: Cepheid's working relationships with discovery-stage partners helped mitigate technical risks by building on existing platforms, thus providing innovation continuity. Product attributes were matched to requirements for TB diagnosis in resource-poor settings. Financial risks were mitigated by grants and by persuading technical collaborators to use their own resources for the project. Market and social risks in product delivery were mitigated by partnerships with complementors with knowledge of government policies, regulations, and distribution channels in developing countries.

Co-innovation risks – uncertainties in partnerships with complementors in the ecosystem.
Ecosystem-based mitigation: Funding for the project was partially provided by the discovery suppliers, the Foundation for Innovative New Diagnostics (FIND) and the University of Medicine and Dentistry of New Jersey. Technical collaboration was complemented by mutual timelines, goals, and shared funding sources. FIND was both an upstream supplier and a downstream complementor.

Adoption risks – will all participants in the ecosystem value chain see value in the innovation and adopt it?
Ecosystem-based mitigation: Cepheid and complementor partners worked together on educating the diverse customer base for the diagnostic test. They also worked on affordable manufacturing. The aim was to achieve a product superior not only in technical and diagnostic reliability but also in cost-effectiveness and ease of use, such as to make it a preferred substitute for competitor products.

To signal joint value creation, Cepheid developed metrics that marked the progress of the project in terms that reflect value provided to other ecosystem partners. Developmental milestones included technical achievements, data collected from demonstration trials, and regulatory approval by the World Health Organization. To decrease investment risks, funding bodies often fund technical consortia whose partners must demonstrate shared values, overlapping aims, technologies, and objectives. Partnering with the public sector and civic organizations increases emphasis on the delivery of innovations that are affordable and accessible to BOP populations.

Finding ways to balance the risks and benefits of operating in an innovation ecosystem forms a core part of the innovator's business strategy. Even a firm with a social mission cannot afford to offer all the value created to other members of the ecosystem. Enough of this value must be secured and reinvested by the innovating firm in order to sustain its future and maintain its R&D capabilities – by means of which it can also continue to innovate. The business models that operationalize this strategy must reduce the risk of failing to ensure the innovator's own survival and growth. This was reflected in Cepheid's corporate strategy by ensuring that the commercialization of the TB diagnostic test was complementary to the remainder of the firm's portfolio and would provide economies of scope and enhance the firm's reputation. Technical advances stemming from the UMDNJ collaboration and the FIND co-development included remote calibration for point-of-care tests, free of the requirement for on-site qualified technicians; these advances are also applied to other Cepheid products.

Delivering healthcare innovations to BOP end patients often requires partnerships with public sector Ministries of Health and/or not-for-profit delivery organizations. This contrasts with pure business-to-business vendor relationships in high-income markets, where commercial terms are negotiated first. The ecosystem dynamics in the public-private arena differ, as shown by the case of GeneXpert MTB/RIF, where essential funding depended on demonstrating that successful innovation would fulfill a key mission of public sector funders. Cepheid acknowledges that the public-private partnership with FIND was a risk for a publicly listed company such as itself. However, the risks were mitigated by a strong bilateral governance structure that weighed reputation effects and accountability equally on both sides of the alliance (Frankort, 2013). In addition, the contractual partnership attracted co-funding and accrued to Cepheid the benefit of developing important technical knowledge that it initially lacked.

Cepheid recognizes that the TB route is a high-value entry point for many developing countries and that, if customers are satisfied with the initial product, this serves as an avenue for expansion into emerging markets. The reinvestment of the technical and financial value achieved by Cepheid through the TB-program partnership can be used to expand the menu of testing options on the Cepheid platform. For example, in February 2011, FIND and Cepheid announced a new collaboration to accelerate the development of a rapid molecular test for the measurement of HIV viral load. The HIV test will complement the TB test and will run on the same GeneXpert platform.10 Consolidation of the technology platform is recognized as a key competitive advantage and a source of lock-in for a strong customer base in clinical molecular diagnostics.11

Implications for Research and Practice

Our findings have implications for participants in global health ecosystems who are addressing diseases of poverty. The recognition and pursuit of opportunity is conventionally seen as the defining feature of entrepreneurship (Kirzner, 1973). Those providing innovations for BOP markets need to engage in proactive partnering and ecosystem building as part of the pursuit of opportunities. Thus, partnerships with downstream delivery agents are required to reduce the risks surrounding adoption of new medical interventions in BOP settings. Experience gained through prior partnerships can be harnessed in subsequent partnerships to reduce risks and transaction costs. Entrepreneurial innovators must make their strengths known to potential collaborators if they are to attract partners. Developing biomedical innovations for BOP populations is expensive and requires additional funding for which market incentives are unavailable. Public-private partnerships can reduce risks and so create incentives for governments to supply grants toward BOP innovations. The prospect of social value can increase the chances of obtaining public and philanthropic funding if perceived risk can be decreased through ecosystem-wide partnerships.

Limitations of the Study

A single case study has inevitable limitations. However, an exemplar serves its purpose if it informs understanding of a relatively unknown new phenomenon, as we hope to have done here. Another limitation follows from the focus on a specific technology and disease area. Future work could address some of these limitations. For example, ecosystem mapping of firms addressing other diseases of poverty could be conducted and compared with Cepheid's experience to illuminate patterns and differences. Innovation ecosystem strategy could also be analyzed from the funder's point of view instead of that of the focal firm. The resource provider's perspective would illuminate alternative ways of achieving healthcare improvements, potentially in the same ecosystem.

Benefits of Ecosystem Analysis

Ecosystem analysis and models have previously been applied mainly to corporate innovators in high-income countries (Adner, 2012; Moore, 1996). This study has shown that the approach can apply to entrepreneurial innovation and, further, can illuminate the complex processes involved in building technological resources and attracting partners to participate in the development of healthcare innovations for BOP populations. For patients in BOP environments, access, availability, and affordability considerations are no less important than the safety and efficacy of biomedical innovations. In addressing these issues, we found that ecosystem analysis provided greater analytic focus than an open innovation perspective could offer. This case extends ecosystem analysis to show how value creation depends on successful transitions through the phases of discovery, development, and delivery. The innovator had to mobilize a healthcare ecosystem by engaging with partners at each phase of innovation. Ecosystem analysis helped provide focus on key strategic issues for the entrepreneurial innovator. The ecosystem perspective can be applied to diseases of poverty beyond TB. It helps bring the core concerns of innovation research into sharper focus when multiple participants, private and public, contribute to the innovation value chain. We have shown how internal and external resources can be leveraged and combined by the focal firm to build joint value in a health ecosystem and to lower execution, co-innovation, and adoption risks in fighting diseases of poverty.

NOTES

1. Dr. Kurt Petersen has cofounded five Silicon Valley companies in MEMS technology: Transensory Devices Inc in 1982, NovaSensor in 1985 (now owned by General Electric), Cepheid in 1996, SiTime in 2004, and Verreon in 2009 (now owned by Qualcomm MEMS). Today, Dr. Petersen consults for a number of MEMS-based companies and research organizations as founder and president of the KP-MEMS consulting company. History of events obtained from Dr. Petersen's LinkedIn profile: http://www.linkedin.com/pub/kurt-petersen/b/32a/865 – accessed January 21, 2011.
2. Prior to founding Cepheid, Northrup was the principal engineer at the Microtechnology Centre of Lawrence Livermore National Laboratory.
3. WHO endorses Xpert MTB/RIF test for tuberculosis, press release: http://www.finddiagnostics.org/media/press/101208.html – accessed January 13, 2011.
4. FIND and Cepheid to develop rapid TB test for developing nations, press release: http://www.finddiagnostics.org/media/press/060523.html – accessed January 26, 2011.
5. Per discussion with Mr. Steven Nelson, vice president, business development, Cepheid, on January 13, 2011.
6. Per discussion with Dr. Mark Perkins, chief scientific officer, FIND, on November 19, 2010.
7. Press release: Cepheid announces first phase of Xpert MTB/RIF buy-down for high-burden developing countries – August 2012: http://ir.cepheid.com/releasedetail.cfm?ReleaseID=698417
8. Per discussion with Mr. Steven Nelson, vice president, business development, Cepheid, on January 13, 2011.
9. Per discussion with Dr. Mark Perkins, chief scientific officer, FIND, on November 19, 2010.
10. Press release: http://www.cepheid.com/company/news-events/press-releases/?releaseID=1523723 and internal Cepheid corroboration, April 12, 2012.
11. Heavy competition in platform diagnostics was emphasized at the Global Health Commercialization and Funding Roundtable, April 2012 (Li et al., 2012); one method for existing firms, including Cepheid, to defend against competition is to develop a suite of tests as quickly as possible.
12. Per the World Health Organization definition of neglected diseases of poverty: The people who are most affected by these diseases are often the poorest populations, living in remote, rural areas, urban slums, or conflict zones. Neglected tropical diseases persist under conditions of poverty and are concentrated almost exclusively in impoverished populations in the developing world: http://www.who.int/features/qa/58/en/ – accessed September 21, 2012.

REFERENCES

Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, 84(4), 98–107.
Adner, R. (2012). The wide lens: A new strategy for innovation. New York, NY: Penguin Books.
Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.
Afuah, A. (2003). Innovation management. Oxford: Oxford University Press.
Brown, S., & Eisenhardt, K. (1997). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42(1), 1–34.
Brusoni, S., & Prencipe, A. (2013). The organization of innovation in ecosystems: Problem framing, problem solving and patterns of coupling. In R. Adner, J. E. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 167–194). Bingley, UK: Emerald Group Publishing Limited.
Bryman, A. (2008). Social research methods (3rd ed.). Oxford: Oxford University Press.
Burke, M., & Matlin, S. (2008). Monitoring financial flows for health research 2008: Prioritizing health research for health equity. Geneva, Switzerland: Global Forum for Health Research.
Checkland, P. (2000). Soft systems methodology: A thirty year retrospective. Systems Research and Behavioural Science, 17(Suppl.), S11–S58.
Chesbrough, H. (2003). Open innovation: The new imperative for creating and profiting from technology. Boston, MA: Harvard Business School Press.
Christensen, C. (1992). Exploring the limits of the S-curve: Part I. Component technologies; Part II. Architectural technologies. Production and Operations Management, 1(4), 334–366.
Eisenhardt, K. (1989). Building theories from case study research. The Academy of Management Review, 14(4), 532–550.
Eisenhardt, K., & Schoonhoven, C. (1996). Resource-based view of strategic alliance formation: Strategic and social effects in entrepreneurial firms. Organization Science, 7(2), 136–150.
FIND/TDR. (2006). Diagnostics for tuberculosis: Global demand and market potential. Geneva, Switzerland: World Health Organization.
Frankort, H. T. W. (2013). Open innovation norms and knowledge transfer in interfirm technology alliances: Evidence from information technology, 1980–1999. In R. Adner, J. E. Oxley & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 239–282). Bingley, UK: Emerald Group Publishing Limited.
Garnsey, E., & Leong, Y. (2008). Combining resource-based and evolutionary theory to explain the genesis of bio-networks. Industry and Innovation, 15(6), 669–686.
Garud, R., Jain, S., & Kumaraswamy, A. (2002). Institutional entrepreneurship in the sponsorship of common technological standards: The case of Sun Microsystems and Java. The Academy of Management Journal, 45(1), 196–214.
Hansen, M., & Birkinshaw, J. (2007). The innovation value chain. Harvard Business Review, 85(6), 121–130.
Harper, C. (2007). Tuberculosis, a neglected opportunity? Nature Medicine, 13(3), 309–312.
Hart, S., & Christensen, C. (2002). The great leap: Driving innovation from the base of the pyramid. MIT Sloan Management Review, 44(1), 51.
Helfat, C. E., & Peteraf, M. A. (2003). The dynamic resource-based view: Capability lifecycles. Strategic Management Journal, 24(10), 997–1010.
Henderson, C. (2002a, January 8). CDC conducts biothreat tests. TB & Outbreaks Week.
Henderson, C. (2002b, October 22). Cepheid announces partnership in infectious diseases. TB & Outbreaks Week.
Hotez, P., Molyneux, D., Fenwick, A., Kumaresan, J., Sachs, S., Sachs, J., et al. (2007). Control of neglected tropical diseases. New England Journal of Medicine, 357(10), 1018–1027.
Iansiti, M., & Levien, R. (2004). The keystone advantage: What the new dynamics of business ecosystems mean for strategy, innovation and sustainability. Boston, MA: Harvard Business School Press.
Jick, T. (1979). Mixing qualitative and quantitative methods: Triangulation. Administrative Science Quarterly, 24, 603–611.
Karamchandani, A., Kubzansky, M., & Frandano, P. (2009). Emerging markets, emerging models: Market-based solutions to the challenges of global poverty. Mumbai, India: Monitor Institute Publication.
Karnani, A. (2007). The mirage of marketing to the bottom of the pyramid: How the private sector can help alleviate poverty. California Management Review, 49(4), 90–111.
Kirzner, I. (1973). Competition and entrepreneurship. Chicago, IL: The University of Chicago Press.
Li, J., Khetarpal, P., Niklitschek, T., Person-Rennell, N., Pirjol, D., & Radl, A. (2012). Global health commercialization & funding roundtable 2012. Cambridge, UK: Institute for Manufacturing, University of Cambridge.
London, T., & Hart, S. (2004). Reinventing strategies for emerging markets: Beyond the transnational model. Journal of International Business Studies, 35(5), 350–370.
Lönnroth, K., Castro, K., Chakaya, J., Chauhan, L., Floyd, K., Glaziou, P., et al. (2010). Tuberculosis control and elimination 2010–50: Cure, care, and social development. The Lancet, 375(9728), 1814–1829.
Mair, J., & Marti, I. (2009). Entrepreneurship in and around institutional voids: A case study from Bangladesh. Journal of Business Venturing, 24(5), 419–435.
Miles, M., & Huberman, A. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publications.
Moore, J. (1996). The death of competition: Leadership and strategy in the age of business ecosystems. Chichester, UK: Wiley.
Moran, M., Guzman, J., Ropars, A., McDonald, A., Jameson, N., Omune, B., … Wu, L. (2009). Neglected disease research and development: How much are we really spending? PLoS Medicine, 6(2), 137–146.
Mowery, D., Oxley, J., & Silverman, B. (1996). Strategic alliances and interfirm knowledge transfer. Strategic Management Journal, 17(S), 77–91.
Mowery, D., Oxley, J., & Silverman, B. (1998). Technological overlap and interfirm cooperation: Implications for the resource-based view of the firm. Research Policy, 27(5), 507–523.
Munir, K., Ansari, A., & Gregg, T. (2010). Beyond the hype: Taking business strategy to the "bottom of the pyramid". In J. Baum & J. Lampel (Eds.), The globalization of strategy research. Advances in Strategic Management (Vol. 27, pp. 247–276). Bingley, UK: Emerald Group Publishing Limited.
Nairn, A. (2002). Engines that move markets: Technology investing from railroads to the internet and beyond. New York: Wiley.
Dow Jones News Service. (2006, August 30). Cepheid awarded $3.3M STTR NIH/NIAID grant for TB test development.
PR Newswire. (2002, August 22). Cepheid awarded NIH research grant for rapid diagnostic testing of tuberculosis.
PR Newswire. (2006, May 23). FIND and Cepheid to develop rapid TB test for developing nations.
Penrose, E. (1960). The growth of the firm: A case study: The Hercules Powder Company. Business History Review, 34, 1–23.
Porter, M. (1980). Competitive strategy: Techniques for analyzing industries and competitors. New York: Free Press.
Porter, M., & Kramer, M. (2011). Creating shared value: How to reinvent capitalism and unleash a wave of innovation and growth. Harvard Business Review, 89(1/2), 62–77.
Prahalad, C. (2006). The fortune at the bottom of the pyramid: Eradicating poverty through profits. New Jersey: Wharton School Publishing.
Prahalad, C., & Hammond, A. (2002). Serving the world's poor, profitably. Harvard Business Review, 26, 54–67.
Schein, E. (2010). Organizational culture and leadership (4th ed.). San Francisco, CA: Jossey-Bass.
Siggelkow, N. (2007). Persuasion with case studies. Academy of Management Journal, 50(1), 20–24.
Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. The Academy of Management Review, 20(3), 571–610.
Trouiller, P., Torreele, E., Olliaro, P., Orbinski, J., Laing, R., & Ford, N. (2002). Drug development for neglected diseases: A deficient market and a public-health policy failure. The Lancet, 359(9324), 2188–2194.
Wallis, R., Pai, M., Menzies, D., Doherty, T., Walzl, G., Perkins, M., et al. (2010). Biomarkers and diagnostics for tuberculosis: Progress, needs, and translation into practice. The Lancet, 375(9729), 1920–1937.
Yin, R. (2003). Case study research: Design and methods. Thousand Oaks, CA: Sage Publications.


APPENDIX: RESEARCH METHODS

Data Collection

We used both primary and secondary data sources for this study. The logic of case selection was to identify an innovator firm using a technological innovation to improve healthcare for a neglected disease of poverty.12 Our preliminary search centered on the 1,000+ members of the Stop TB Partnership of organizations aiming to eliminate TB. To check that we were identifying all relevant firms, we consulted relevant industry directories, advocacy reports such as those of the Treatment Action Group, and reports on improving healthcare in developing countries. We used this information to construct a database of relevant firms. This process yielded a dataset of 18 organizations that were engaged in R&D to improve diagnostics for TB and had a product in the development pipeline or ready for patient use. Cepheid was one of the 14 entrepreneurial firms identified and was selected as a case study for several reasons. First, its product was the TB diagnostic most recently approved by the World Health Organization and eligible for use in developing and developed countries alike. Its GeneXpert platform is a point-of-care device that can test for both drug-sensitive and drug-resistant strains of TB, allowing physicians to make quicker and more accurate diagnoses. Second, Cepheid illustrates partnership creation in ways that might be relevant to illuminating open innovation and the ecosystem as analytic constructs. Thus, Cepheid's experience provides empirical evidence that can inform future thinking about an important generic issue (Yin, 2003) – the way a firm can mobilize internal and external resources in an innovation ecosystem to create joint value.

We collected data through interviews, surveys, secondary sources, and a focus group of experts (a Delphi method). An extensive review of archival sources was completed, and secondary data including company documents, grant proposals, press reports, and third-party databases were reviewed, as recommended by Jick (1979) and others. An interview protocol was devised to find out from interviewees how Cepheid operates and how it obtains resources to enable operations. All interviews and data gathering took place between October 2010 and January 2013. Interviews varied from thirty minutes to two hours. Semistructured interviews were conducted with the senior vice president of business development, the senior program manager of the TB program, and the chief scientific officer of the NGO in the public-private development partnership in which Cepheid was taking part. Evidence from both sides of the partnership between Cepheid and its alliance NGO (the Foundation for Innovative New Diagnostics) was thus obtained.

An international meeting was organized to bring together and facilitate networking between the subjects of the wider research program of which this study formed part (Li et al., 2012). Participants included innovator firms and other global health ecosystem participants – funders involved in public and private partnerships, technical experts from the World Health Organization, and member state/government representatives. These participants provided further evidence, making possible Delphi-style research (Bryman, 2008) that drew on global expertise in the field of TB treatment. Soft systems methodology and the use of instant feedback through live rich pictures were employed by the Delphi focus group to iterate between facts, discussion, and new ideas (Checkland, 2000). At this Global Health Commercialization and Funding Roundtable, held in April 2012, presenters shared their business strategies with academics, funders, experts, and participants in their own ecosystem. Cepheid gave a detailed presentation at the conference, and its ecosystem partners corroborated the evidence in that presentation. This provided a further check on retrospective bias and facilitated triangulation.

Data Analysis

The data analysis took place in three stages. First, the testimonies collected in the semistructured interviews were recorded and transcribed for textual analysis. Second, we coded the primary themes that emerged from the transcripts and the case study. The emerging themes were cross-compared and analyzed in relation to the relevant literature on strategy, innovation, and entrepreneurship, with a focus on BOP markets. This allowed us to identify a set of primary themes around value creation and the interdependence between the focal firm and its innovation ecosystem. Early results showed how Cepheid built its strategy by gaining leverage from the technical knowledge shared with its university collaborators and foundation partners, which supported its collaborative model. We used an inductive approach to take into account Cepheid's position in the ecosystem and to identify the nature of its relations with other ecosystem participants (Garud, Jain, & Kumaraswamy, 2002).

The final step involved triangulation of the case study and analysis. At the global health conference, two senior representatives of Cepheid gave a public presentation and an organizational update 12 months after the initial World Health Organization approval of the firm's testing platform for TB. Cepheid's business model and case study were presented to approximately 30 experts, followed by discussion; the business model was examined and cross-compared with other known innovator firms for external validity (Miles & Huberman, 1994). The public presentations were recorded (for publicly available podcasts) and further analyzed in an iterative manner. This subsequent analysis of the data following the roundtable conference allowed us to refine our findings.

PART II: ANALYTICAL PERSPECTIVES

BUSINESS ECOSYSTEMS' EVOLUTION – AN ECOSYSTEM CLOCKSPEED PERSPECTIVE

Saku J. Mäkinen and Ozgur Dedehayir

ABSTRACT

There is a growing need for measures assessing technological changes in systemic contexts as business ecosystems replace standalone products. In these ecosystem contexts, organizations are required to manage their innovation processes in increasingly networked and complex environments. In this paper, we introduce the technology and ecosystem clockspeed measures that can be used to assess the temporal nature of technological changes in a business ecosystem. We analyze systemic changes in the personal computer (PC) ecosystem, explicitly focusing on subindustries central to the delivery of PC gaming value to the end user. Our results show that the time-based intensity of technological competition in intertwined subindustries of a business ecosystem may follow various trajectories during the evolution of the ecosystem. Hence, the technology and ecosystem clockspeed measures are able to pinpoint alternating dynamics in technological changes among the subindustries in the business ecosystem. We subsequently discuss organizational considerations and theoretical implications of the proposed measures.

Keywords: Business ecosystem; technological system; reverse salient; industry clockspeed

INTRODUCTION

The evolution of industries is marked by changing competitive environments and patterns of technological variation (Audretsch, 1995; Malerba, Nelson, Orsenigo, & Winter, 1999; Wezel, 2005). During the course of this progression, organizations make technological and competitive choices along the time path of industry evolution. These activities determine the rates of product and process innovation (e.g., Abernathy & Utterback, 1978), the entry and exit of organizations (e.g., Klepper, 1996), and the changing basis of product competition (e.g., Christensen, 1997) that are witnessed throughout the industry's development. Timing of technological choices is especially important for attaining and retaining competitive advantage in dynamic environments (e.g., Brown & Eisenhardt, 1997). Additional layers of challenge arise because contemporary global business is increasingly based on the activities of business ecosystems spanning multiple industries, rather than on activities contained in a traditional single-industry value chain.

There is a growing need to measure the rate of change of an industry in its systemic context as business ecosystems increasingly replace standalone products (Adner, 2012; Adner & Kapoor, 2010). In these systemic contexts, organizations face the need to manage their innovation processes in an intertwined environment, where actors form business ecosystems and influence one another's decision making (Adner, 2006; Tiwana, Konsynski, & Bush, 2010). Traditionally, industry change has been measured using different parameters, such as the number of organizations in the industry (e.g., Chesbrough, 2003), the number of industry patents (e.g., Brockhoff, Ernst, & Hundhausen, 1999), and the industry's annual turnover (e.g., Gooroochurn & Hanley, 2007). In this paper, we focus on the clockspeed measure of industry change, which compares the rates of change in the industry's product technology, process technology, and organizational capability (Fine, 1998). While the clockspeed measure has traditionally been used to evaluate the rate of change of specific industries with an intra-industry focus, we extend this measure to include multiple industries forming a business ecosystem.

Here, we view the business ecosystem as a network of subindustries that specialize in producing the interdependent technical subsystems of a hierarchically structured technological system (Clark, 1985; Murmann & Frenken, 2006; Tushman & Murmann, 1998). In this manner, subindustries are interdependently connected to one another due to the interrelatedness of their output technologies. Systemic industries may therefore produce complex product systems such as aircraft engines, flight simulators, and offshore oil platforms (Hobday, 1998; Miller, Hobday, Leroux-Demers, & Olleros, 1995), as well as modular systems such as personal computers (PCs) and automobiles (Baldwin & Clark, 1997; Langlois & Robertson, 1992). All these business ecosystems integrate functionally interdependent subsystems, produced by their own specialized subindustries, into holistic systemic products.

In business ecosystems, a clockspeed measure that compares the pace of change of a particular subindustry with the pace of change of other interdependent subindustries can greatly aid organizations in coping with their dynamic environments. This is because the innovation processes of organizations in ecosystems are intertwined with performance improvements in the subsystem technologies that are central to their own subindustry, as well as with the interfaces between other interdependent subindustries that produce interdependent technological subsystems (Ethiraj & Puranam, 2004). Moreover, an organization inside a subindustry may remain more competitive by utilizing information about the rate of change of interdependent subindustries, since performance deficiencies in component and complementary technologies can jeopardize the organization's ability to create value for end users (Adner & Kapoor, 2010).

In this paper, we aim to enhance current understanding of business ecosystem dynamics by extending the clockspeed measures developed by Dedehayir and Mäkinen (2011) to the business ecosystem context. We first apply the "technological industry clockspeed," which we refer to as the technology clockspeed: a measure that evaluates the time lag between successively higher levels of technological product performance in a given subindustry. Second, we apply the "systemic technological industry clockspeed," which we refer to as the ecosystem clockspeed: a measure that captures the temporal nature of the performance discrepancies that emerge between interrelated subsystems over time. In these states of technological disequilibrium (Rosenberg, 1976), we focus on the subsystem that, due to its performance deficiency, hinders the value delivery of the ecosystem's holistic product. The ecosystem clockspeed therefore measures the duration of time required for the performance-deficient subsystem to attain the performance levels of more advanced subsystems. We demonstrate the use of these clockspeed measures in a longitudinal study of the PC ecosystem, focusing on key subindustries that produce the subsystems central to computer gaming.

The evolutionary trajectories of technology clockspeed facilitate informed views on monitoring, outlining, and planning competitive activity within a given subindustry. This information can be used, for example, in seeking synchrony between interlinked subindustries' output to maximize end-user value and in designing competitive actions centered on the timing of technological performance improvements. The ecosystem clockspeed, in turn, provides information on the business ecosystem's holistic performance delivery over time. The ability to assess the evolution of the business ecosystem in its entirety allows actors to implement strategic, competitive actions. Our empirical results, for instance, suggest that the ecosystem clockspeed may progress through distinctly differing time paths, with increasing temporal latencies expected as the technological paradigm matures in interdependent subindustries. Hence, more than the dynamics of change in a firm's own subindustry, such stylized patterns can help these actors identify the shifting competitive priorities in the business ecosystem in which they are positioned.

THEORETICAL BACKGROUND

Industry Dynamics and Clockspeed

Industries not only change due to competitive dynamics, but they do so at different rates, as a result of the differing pace of introduction and utilization of technological innovations in the industry (Audretsch, 1995; Klepper & Graddy, 1990; Suarez & Lanzolla, 2005). The changing pace of industry development thus requires the timely modification of an organization's activities and assets in order to remain competitive in the industry (McGahan, 2000, 2004). Eisenhardt and Martin (2000) underline organizational responses to varying industry conditions by comparing the dynamic capabilities of firms that negotiate moderately dynamic markets (i.e., where change is frequent, though largely predictable) with those of firms competing in high-velocity markets (i.e., where change is nonlinear and less predictable). Studies of the computer industry indicate the need to adapt strategy-making and product development efforts in such high-velocity market contexts (Brown & Eisenhardt, 1997; Eisenhardt & Tabrizi, 1995; Eisenhardt, 1989). Romanelli and Tushman (1994) further demonstrate that punctuated changes in the US computer industry acted as external stimuli that necessitated organizational adaptation through similarly revolutionary transformations.

Different metrics have been developed to evaluate and explain the rates of change of industries. Of these, the clockspeed measure of industry change introduced by Fine (1998) remains prominent, allowing the differentiation of high clockspeed industries from low clockspeed industries (Guimaraes, Cook, & Natarajan, 2002; Mendelson & Pillai, 1999; Nadkarni & Narayanan, 2007; Souza, Bayus, & Wagner, 2004). However, organizations positioned in dynamically changing competitive environments additionally require a temporal clockspeed measure of industries that can detect possible life cycle effects (Fine, 1998), thereby providing a guideline for the implementation of innovation processes. To address this issue, Dedehayir and Mäkinen (2011) have recently expanded Fine's industry clockspeed framework by introducing a measure that evaluates the time between successively higher levels of technological performance in the industry's product technology.1 We define this temporal latency as technology clockspeed. To align with Fine's definition of product clockspeed, which measures the frequency of new product introductions in a given industry, the technology clockspeed measures the rate at which new technological performance levels are introduced in the industry, without taking into account the magnitudes of these performance increases.

Clockspeed Measure in Business Ecosystems

Just as biological ecosystems consist of a variety of interdependent species, business ecosystems analogously depict interdependent networks of organizations, which collectively produce a holistic, integrated technological system that creates value for customers (Agerfalk & Fitzgerald, 2008; Bahrami & Evans, 1995; Basole, 2009; Lusch, 2010; Teece, 2007). In this network, each member contributes to the ecosystem's overall well-being and is dependent on other members for its survival. These organizations coevolve by working cooperatively as well as competitively in the creation of products and services (Moore, 1993). The business ecosystem may comprise a variety of actors, including suppliers, complementors, system integrators, distributors, advertisers, finance providers, universities and research institutions, regulatory authorities and standard-setting bodies, and the judiciary, as well as customers (Iyer & Davenport, 2008; Li, 2009; Meyer, Gaba, & Colwell, 2005; Pierce, 2009; Whitley & Darking, 2006).

The technological systems produced by business ecosystems are hierarchically structured networks of interdependent technical subsystems (Clark, 1985; Murmann & Frenken, 2006; Tushman & Murmann, 1998). The modularization of these product and process systems often leads to the emergence of specialized subindustries that focus on manufacturing a particular module or subsystem (Brusoni & Prencipe, 2001; Ethiraj & Puranam, 2004; Ulrich, 1995). In this manner, the business ecosystem provides value to the end user by integrating functionally interdependent subsystems that are produced by their own specialized subindustries. In the aircraft ecosystem, for example, engines and airframes are distinct subsystems integrated by aircraft builders such as Airbus and Boeing (Bonaccorsi & Giuri, 2000; Schilling, 2000), while microprocessors, graphics processors, and software are subsystems integrated by computer makers in the PC ecosystem (Ethiraj & Puranam, 2004; Macher & Mowery, 2004).

The systemic relationship between these subindustries suggests that changes in the technological output of one subindustry may affect the technological output of other, interdependent subindustries and also the output of the business ecosystem as a whole. Consequently, technological evolution in business ecosystems is curbed by the emergence of bottlenecks when a particular subsystem (e.g., a complement or component technology) lacks development and does not deliver the level of performance demanded by the focal firm (Adner & Kapoor, 2007; Kapoor & Adner, 2007). The centrality of such blockades to performance and value creation has been recognized in the literature for some time. For instance, Rosenberg (1969) proposes that when the main impulse for technological evolution is economic, managers of firms should, to derive favorable economic conditions, approach the most economically restrictive constraints first. These constraints therefore emerge as economic needs that require technological solutions, and the development of these solutions subsequently drives technological evolution. Rosenberg labels these constraints "focusing devices," which importantly act as sources of technological change. According to Rosenberg, three focusing devices have historically triggered technological development: (i) the unpredictability of labor, (ii) the unavailability of resources and other overarching constraints, and (iii) technological imbalances.

In this paper, our aim is to understand the evolution of business ecosystems through the focusing device of technological imbalances, which initiate "compulsive sequences" that drive technological change (Rosenberg, 1969). More specifically, we address the performance imbalances that appear, say, between the components of complex machines or operations, which compulsively necessitate the improvement of the component that lacks the performance capacity of the other components. Innovations that overcome the insufficient performance capacity, in turn, reveal the insufficient performance of another component, which is then resolved through further innovations.2 In the context of business ecosystems, these constraints, or reverse salients,3 have been shown to hinder the attainment of a higher level of technological performance by either limiting the focal firm's "ability to create value with its product" or by "constraining the customer's ability to derive full benefit from consuming" the firm's product (Adner & Kapoor, 2010). For example, Philips, Sony, and Thomson incurred heavy financial losses by developing and introducing high-definition television in the 1990s, concurrently failing to deliver value to the end user due to performance deficiencies in complementary technologies such as studio production equipment, signal compression technologies, and broadcasting standards (Adner, 2012).

With respect to the generic depiction of business ecosystems, the reverse salient can reside in two locations (see Fig. 1). A state of technological imbalance emerges when the performance of the reverse salient subsystem (e.g., the technological subsystem produced by Supplier 1 or Complementor 1) is lower than the performance of the interdependent (i.e., salient) subsystem (e.g., the technological subsystem of the focal firm in Fig. 1).4 The length of time required for the reverse salient to attain the salient subsystem's performance level informs the reverse salient subsystem's rate of development in the technological system context.5 This duration of time concurrently informs the rate of development of the reverse salient subindustry (i.e., the subindustry producing the reverse salient subsystem) in the business ecosystem context.

[Figure: Supplier 1 and Supplier 2 feed the Focal Firm, whose output reaches the Customer through Complementor 1 and Complementor 2.]
Fig. 1. A Generic Schema of a Business Ecosystem and Location of Reverse Salient (Adapted from Adner & Kapoor, 2010).


We subsequently use this delay to measure the ecosystem clockspeed. As with the technology clockspeed, shorter spans of time needed for the reverse salient to attain the technological performance levels of the salient subsystem indicate faster ecosystem clockspeeds. Following Dedehayir and Mäkinen's framework, the ecosystem clockspeed extends Fine's notion of industry clockspeed to the business ecosystem arena. This measure is founded on earlier depictions of a basic unit of the business ecosystem, which constitutes a focal firm that is directly connected to its suppliers, customers, and complementors (Adner, 2012; Adner & Kapoor, 2010). Using the rate of technological change in the focal firm's subindustry as reference, the ecosystem clockspeed measures the delays in the subindustries' technologies attaining the same performance levels across time. The ecosystem clockspeed is thus a quantitative tool intended to be used specifically as a basic unit measure of the temporal latencies attached to technological discrepancies in business ecosystems.
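Stated formally – in notation we introduce here for exposition, rather than notation taken from Dedehayir and Mäkinen (2011) – let performance-raising launches in a given subindustry occur at dates t1 < t2 < …, and let PS(t) and PRS(t) denote the highest performance levels attained by the salient and the reverse salient subsystems up to time t. The two measures can then be written as:

    % Technology clockspeed: temporal separation of successive
    % performance-raising launches within one subindustry.
    \Delta t_{TC} = t_{k+1} - t_{k}

    % Ecosystem clockspeed: delay until the reverse salient attains the
    % performance level the salient holds at launch date t_k.
    \Delta t_{EC}(t_k) = \min\{\, t \ge t_k : P_{RS}(t) \ge P_{S}(t_k) \,\} - t_k

Faster clockspeeds correspond to smaller values of both quantities.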

METHODOLOGY

In our empirical study, we first employ the technology clockspeed measure to analyze the evolution of a given subindustry in the ecosystem through the rate of performance accumulation in its technology over time. Second, we apply the ecosystem clockspeed measure to compare the rates of technological performance development among the ecosystem's subindustries. We measure the clockspeeds from the technological evolution curves of the salient and the reverse salient subsystems, superimposed on a common set of axes (see Fig. 2). Our schematic representation in Fig. 2 compares the evolution of the reverse salient's technological performance parameter, represented by the dashed line, with the technological performance evolution of the salient subsystem, represented by the solid line. Fig. 2 illustrates the technology clockspeed measurements for both the salient and the reverse salient subindustries, denoted by ΔtTC,S and ΔtTC,RS, respectively. The technology clockspeed therefore informs us of the temporal separation of the steps of technological progression in the subindustry. In turn, the magnitude of the ecosystem clockspeed at any point along the time axis is measured by calculating the temporal separation of the reverse salient from the salient trajectory at that point, denoted by ΔtEC. This separation reflects the length of time required for the reverse salient subsystem to attain the salient subsystem's level of technological performance.

[Figure: superimposed S-curves of technological performance against time, showing the trajectory of the salient (solid) and the trajectory of the reverse salient (dashed), with performance levels p1 and p2 and times t1–t4 marking the measurement intervals.]
Fig. 2. A Schema of Clockspeed Measurements from Superimposed S-Curves (ΔtTC and ΔtEC Refer to the Technology Clockspeed and the Ecosystem Clockspeed, Respectively).
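To make the measurement in Fig. 2 concrete, the following sketch computes the technology clockspeed from a subindustry's launch history. The code is our illustration rather than part of the original study, and the function name, launch dates, and processor speeds are hypothetical:

    from datetime import date

    def technology_clockspeed(launches):
        # `launches`: (launch_date, performance) tuples for one subindustry;
        # higher performance values are better.
        record = None                # best performance seen so far
        record_dates = []            # dates of performance-raising launches
        for launch_date, performance in sorted(launches):
            if record is None or performance > record:
                record = performance
                record_dates.append(launch_date)
        # Days separating successive steps of technological progression
        # (the quantity labeled Delta-t_TC in Fig. 2).
        return [(later - earlier).days
                for earlier, later in zip(record_dates, record_dates[1:])]

    # Hypothetical CPU launch history: (launch date, processor speed in MHz)
    cpu_launches = [
        (date(1995, 6, 1), 133),
        (date(1996, 1, 4), 166),
        (date(1997, 5, 7), 233),
        (date(1997, 5, 7), 266),   # same-day launch at a higher speed
        (date(1997, 8, 26), 300),
    ]
    print(technology_clockspeed(cpu_launches))   # -> [217, 489, 0, 111]

Only launches that raise the best performance achieved so far count as steps of technological progression, mirroring the notion of successively higher performance levels; a same-day record-raising launch simply yields a zero-day gap.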

PC Gaming Ecosystem

In our empirical illustration, we have studied the PC ecosystem, which creates value for the end user by integrating functionally interdependent subsystems (e.g., hardware and software subsystems) that are produced by their own specialized subindustries. Our analysis focuses specifically on the value of the computer gaming function that the ecosystem delivers to the end user, directing our analytical lens to the subsystems central to computer gaming and to the subindustries that produce these subsystems. The PC ecosystem provided a suitable empirical setting for our illustration due to the significance of the PC as a platform for computer gaming since the 1990s (Hayes & Dinsey, 1995; Poole, 2004), due to its systemic nature (Baldwin & Clark, 1997; Langlois & Robertson, 1992), and also due to the highly dynamic nature of technological evolution observed in this context (Eisenhardt & Tabrizi, 1995; Rosas-Vega & Vokurka, 2000).

The video game industry, leading up to the 1990s, had been dominated by the console platform (Kent, 2001). The reasons for this domination were rather apparent. For the market, the console represented a relatively affordable and easy-to-operate machine that was solely dedicated to gaming. For the supply side, the console was a dedicated hardware platform that experienced generational leaps in technological performance from time to time. Hardware manufacturing was dominated by a few players, such as Nintendo and Sega, who also produced their own software. Nevertheless, companies that specialized in software development were also integrated into the industry to provide the resources to quench the gamer market's thirst for more games. The complete system of hardware and software offered the market an unequivocal and, at the time, insurmountable gaming experience.

In this manner, gaming on the console platform paved the way for the birth of the PC ecosystem delivering gaming value. The PC had long lacked the hardware sophistication that would allow good-quality games to be played on it. For the most part, PCs were used for word processing and similar applications. Technological developments in PC hardware, leading up to the mid-1990s, were therefore seen to be pushed by game software that, in order to deliver satisfactory performance on the PC, needed higher hardware capacity. Subsequently, the PC began penetrating households, owing to technological progress as well as more affordable prices, despite remaining significantly behind the game consoles. Key subsystems such as the central processing unit (CPU) had not only jumped to new technological platforms (e.g., Intel's 486 processor), but prices had also dropped substantially. Added to this, the components of the PC were upgradeable to higher technological levels, eliminating the need to purchase a complete system, which was inherent to console systems. This made a better gaming experience available to the computer owner as long as the necessary hardware could be upgraded to the required level.

During the same period, a new data storage medium, the CD (compact disc), entered the fray. The industry (in particular, console makers) saw this as a significant threat and a possible point of departure from the existing regime of game software supplied on cartridges and played on hardware suited to the cartridge format. Indeed, game developers such as Electronic Arts began refocusing their game software development on the CD format, and hardware developers began collaborating with CD manufacturers to incorporate the new format into their consoles (Hayes & Dinsey, 1995). The integration of the CD player into the PC platform enhanced the PC's competitiveness with console machines for a greater share of the video game market. The refocusing of game software developers from cartridges toward the CD medium acted as a catalyst in the creation of this potential. Moreover, game developers were glad to have an alternative to the dominant consoles. Notwithstanding these developments, the complexity of the PC's operation still remained an Achilles' heel. The launch of Microsoft's Windows 95 operating system (OS), however, became a significant turning point for the rise of the PC in computer gaming applications, as it significantly simplified game play, along with other functions, on the PC (Hayes & Dinsey, 1995).

In the resulting PC system, video gaming was made possible by the integration of two interdependent components: software and hardware. While the software provides the intelligence for the virtual worlds, characters, and plots of the games, the hardware provides the technical basis that enables the creation of the interactive environment. The resulting technical qualities, such as movement, graphics, and sound, make video games seemingly addictive and contagious (Hayes & Dinsey, 1995). The relentless pace of technological development in video games is a salient point of departure from other forms of entertainment such as television, movies, and music (Bethke, 2003). To a great extent, this pace is driven by the technological enhancement of game software, through which game developers can increase the scope of existing plots and storylines, integrate more detail into the game environment, and deliver a greater video and audio experience. It is the pursuit of realism that drives the development of these features in game software design. However, the enhancement of software increases its complexity, thus requiring greater hardware capacity to enable the game's designed function. For computer games played on the PC, this means that the game software needs to utilize a greater amount of the available hardware performance.

Accordingly, the PC ecosystem analyzed in this paper comprises the main subindustries that produce the hardware and software subsystems required for delivering gaming performance value to the end user. We identify the PC game software subindustry as the focal subindustry in our PC ecosystem schema and consider its linkages to four other subindustries that produce important and interdependent subsystems (see Fig. 3). On the supply side, we consider the CPU and the GPU (graphics processing unit) that are integrated into the PC as crucial hardware components upon which the PC game software functions. Additionally, we analyze two software subsystems, namely, DirectX6 and the OS, which are complementors of the PC game software and similarly necessary for the game to function.


[Figure: the hardware subsystems (CPU, GPU) and software subsystems (DirectX, operating system OS) connect to the PC game, which delivers PC gaming value to the customer.]
Fig. 3. A Schema of the PC Business Ecosystem Delivering Gaming Performance.

In the typical development of game software, there is an inherent need for the PC game developer to utilize the available levels of technological performance in interdependent subsystems when commencing game software development. This necessarily imposes a certain minimum temporal latency in using interdependent technologies. The delay is further exacerbated by the game developer's need to test for compatibility with the connected software and hardware subsystems, for example, by testing each new game's capability to function with individual video cards (Ripolles & Chover, 2008). Furthermore, the search for market opportunities, the realization of game concepts, the availability of support from connected subsystems, and waiting for the installed base to increase all magnify this latency. The desire to avoid the risks of project delays and cost overruns in a highly dynamic ecosystem also motivates game developers to delay their technology selection, at least to some extent (MacCormack, Verganti, & Iansiti, 2001). Notwithstanding the technological advancements in the interconnected subsystems, these delays necessarily define the PC game software as the reverse salient from the point of view of the end user, who is interested in maximizing gaming performance on the holistic PC system. Our empirical study therefore illustrates the ecosystem clockspeed of the PC game subindustry, which, in producing game software, requires some duration of time to attain the technological performance levels embedded in the hardware and software products of the supplier and complementor subindustries.

Method and Data

For the PC game software subsystem, we have considered that the software is designed to utilize at least some minimum level of performance provisioned by the interdependent subsystems, such that the intended game qualities can materialize on the holistic PC system.7 In our study, we have used the minimum hardware as well as the minimum software requirements stipulated by the PC game developer (i.e., the focal firm) as the technological performance indicator. In this manner, we identify a PC game as having a higher level of technological performance than another when its stipulated minimum requirements are higher than those of the latter. For the supplied hardware subsystems, we have selected as performance indicators the processing speed (measured in hertz) for the CPU and the graphics memory (measured in megabytes, MB) for the GPU, since these are the parameters most important to PC game developers for game functionality. For the complementary software subsystems, DirectX and the Windows OS, we denote the launches of successive versions of these technologies as performance increases that provide additional and better functionality.

Our empirical analysis proceeded as follows. First, we measured the technology clockspeeds of the subindustries that constitute the PC ecosystem by studying the technological evolution of the corresponding hardware and software subsystems. For the technology clockspeed, we measured the number of days between the technological performance increases of CPU and GPU product launches, in terms of processor speed and graphics memory, respectively. Similarly, we traced the number of days between the launch dates of successive versions of DirectX and the Windows OS. We also measured the technology clockspeeds of the focal PC game subindustry as the number of days between successively increasing minimum requirements pertaining to these interdependent component and complement technologies. Second, we evaluated the ecosystem clockspeed of the PC game subindustry by superimposing the technology evolution curves of the PC game subsystem and the interdependent subsystems onto respective sets of axes, and by calculating the time delays in the reverse salient's attainment of the salient subsystem's technological performance.

We collected data on CPU processor speeds from the processor performance databases found on the corporate Web sites of Intel and Advanced Micro Devices (AMD), the two primary manufacturers of CPUs, and we accessed GPU graphics memory data from the corporate Web sites of NVIDIA and ATI,8 the two dominant players in the graphics processor industry. The launch dates of the DirectX and Windows OS versions were accessed from Microsoft's corporate Web site. The data on PC game minimum requirements were, in turn, collected from the Web sites of game publishers and game developers, as well as from the gaming community GameSpot.com and the online vendor Amazon.com.
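The second step of the analysis, the delay calculation behind the ecosystem clockspeed, can be sketched in the same spirit; the code below is our illustration rather than the study's own tooling, and the CPU launch dates and game minimum requirements shown are hypothetical:

    from datetime import date

    def ecosystem_clockspeed(salient_launches, game_min_requirements):
        # For each performance-raising launch in the salient subindustry,
        # return (launch_date, delay_in_days), where the delay runs until
        # the first PC game whose stipulated minimum requirement reaches
        # that performance level (the quantity labeled Delta-t_EC in Fig. 2).
        games = sorted(game_min_requirements)   # (release_date, minimum requirement)
        delays = []
        record = None
        for launch_date, performance in sorted(salient_launches):
            if record is not None and performance <= record:
                continue                        # not a performance-raising launch
            record = performance
            for release_date, min_req in games:
                if min_req >= performance and release_date >= launch_date:
                    delays.append((launch_date, (release_date - launch_date).days))
                    break                       # earliest attaining game found
        return delays

    # Hypothetical CPU launches and game minimum requirements (date, MHz)
    cpu_launches = [(date(1997, 5, 7), 300), (date(1999, 2, 26), 500)]
    game_minimums = [(date(1998, 6, 30), 300), (date(2000, 11, 15), 500)]
    print(ecosystem_clockspeed(cpu_launches, game_minimums))
    # -> delays of 419 and 628 days, respectively

One design choice worth noting: a delay is recorded once per performance-raising salient launch, at the first game that attains the corresponding performance level, which matches reading the horizontal separation between the two trajectories in Fig. 2.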

112

SAKU J. MA¨KINEN AND OZGUR DEDEHAYIR

We limited the list of PC games used in our empirical analysis to those that had been launched in the United States and had been reviewed and rated by one reputable online source, GameSpot.com, or one reputable printed source, PC Gamer magazine. We selected an observation interval stretching from August 1995 until the end of 2008. The beginning of the observation interval was selected to correspond to the launch of the Windows 95 OS, which established a new technological platform upon which hardware and PC game software could be developed, and which therefore produced a significant change in the gaming industry (Hayes & Dinsey, 1995). These restrictions may limit our results, for example, through the exclusion of many indie games that are not rated by our sources and are therefore omitted from our data set. At the same time, however, they allow us to analyze a data set that is representative of the industry from the mainstream business and market perspectives.

RESULTS AND DISCUSSION

In Fig. 4, we present the technology clockspeeds of the salient subindustries, namely, the CPU, GPU, DirectX, and Windows OS, which connect to the reverse salient PC game subindustry in the analyzed ecosystem. In our conceptualization, shorter spans of time between successive product launches in these subindustries indicate faster clockspeeds; that is, the lower the number of days between launches, the faster the clockspeed. Similarly, a decrease in the number of days between launches signifies an acceleration, or quickening, of the clockspeed. Panel (a) indicates that a quickening of the clockspeed is witnessed after an early period of low technology clockspeed in the CPU subindustry. However, recent years (from 2003 onward) show a slowing down in the technology clockspeed. Interestingly, this period coincides with the technical limitations that materialized in CPU development, leading to difficulties in increasing processor speeds, which may explain the increasing duration of time between successively higher performing CPUs. Visible in panel (a), for example, is the rapid launch of the Pentium II at 300 MHz right after the initial introduction of the 233 and 266 MHz versions, signifying Intel's platform approach and its influence on the CPU subindustry's technological development. Panels (b) and (c) show significant variation in the GPU and DirectX technology clockspeeds, respectively, while the technology clockspeed of the Windows OS in panel (d) displays a period of slowing down following an early period of quickening. We also observe that DirectX stays at very fast clockspeed levels, which suggests a rapid succession of game launches with higher requirements and signifies Microsoft's commitment to satisfying market requirements related to DirectX for game software performance.

[Figure: four panels plotting the number of days between technology launches, January 1995 to January 2009, one panel per subindustry.]
Fig. 4. The Technology Clockspeed of the Salient Subindustries Measured as the Number of Days between Technology Launches with Successively Higher Performance. (a) CPU Subindustry. (b) GPU Subindustry. (c) DirectX Subindustry. (d) Windows OS Subindustry.

Fig. 5, in turn, presents the technology clockspeed measures of the PC game subindustry, measured through the stated minimum technological performance requirements of the games pertaining to processor speed, graphics memory, DirectX version, and Windows OS version, respectively. In panel (a), we observe that the technology clockspeed of the PC game software subindustry displays a period of quickening until the end of 2004, after which a period of high variation in the clockspeed is witnessed. This oscillation markedly started with the introduction of UberSoldier, a game with "massive system requirements," according to GameSpot; the game nevertheless did not receive positive feedback in general, although at its best it presented good game play in its first-person shooter genre. Our observation suggests that designing games with a focus on CPU performance was for some period of time central to the competitiveness within the PC game subindustry, although lately this focus has diminished.

Fig. 5. The Technology Clockspeed of the Reverse Salient PC Game Subindustry Measured as the Number of Days between Game Launches with Successively Higher Minimum Performance Requirements of CPU, GPU, DirectX, and Windows OS. (a) The technology clockspeed (CPU). (b) The technology clockspeed (GPU). (c) The technology clockspeed (DirectX). (d) The technology clockspeed (Windows OS).

Our observation suggests that designing games with a focus on CPU performance was central to competitiveness within the PC game subindustry for some period of time, although this focus has lately diminished. By contrast, the technology clockspeed of the PC game subindustry indicates a slowing down in panel (b), high oscillation in panel (c), and rapid quickening followed by stabilization in panel (d). We interpret these clockspeed trends to suggest, first, that the PC game subindustry has a diminishing focus on developing games that rapidly utilize the highest possible GPU performance, given the rather slow and decelerating clockspeed shown in panel (b). This trend is illustrated by the launch of the highly successful game RoboBlitz in late 2006, when the technology clockspeed of the PC game subindustry in relation to the GPU minimum requirement was very slow. RoboBlitz was noted for its high graphics memory requirements, for being the first game based on Unreal Engine 3, and for gaining several honors, such as a nomination for the Excellence in Visual Art award at the 2007 Independent Games Festival.


The strategic choice to incorporate higher graphics performance despite a delayed release therefore appears to have been more attractive to the software-developing firm than introducing a game earlier with lower graphics requirements, and it is representative of the slow technology clockspeed observed in the panel. Further, despite the witnessed oscillation, the fast clockspeeds shown in panel (c) suggest that the PC game subindustry considers it important to integrate the latest DirectX versions when launching new products. Finally, the PC game subindustry appears to take a stabilizing view both of the integration of Windows OS versions into newly launched games and of the expected market adoption of new Windows OS versions. While the technology clockspeed measures temporal latency internal to a given subindustry, the ecosystem clockspeed measures this latency between subindustries. To derive the ecosystem clockspeed, we next evaluate the temporal latency for the reverse salient PC game software subindustry's attainment of salient subindustry performance levels. Fig. 6 displays the evolution of the ecosystem clockspeeds across the time frame of our analysis.
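A minimal sketch of this measure, under the assumption (ours, for illustration) that the latency is the number of days between the date a salient subindustry first delivered a given performance level and the launch date of a game stating that level as its minimum requirement; all titles, speeds, and dates are hypothetical:

```python
from datetime import date

# Hypothetical illustration (not the study's data): the date each CPU
# speed first became available, and each game's launch date together
# with its stated minimum CPU requirement (MHz).
cpu_available = {233: date(1997, 5, 7), 300: date(1997, 8, 26)}
games = [
    ("Game A", 233, date(1998, 3, 1)),
    ("Game B", 300, date(2000, 6, 15)),
]

def ecosystem_clockspeed(games, availability):
    """Days from the salient subindustry reaching a performance level
    to a game stating that level as its minimum requirement."""
    return {
        title: (launched - availability[min_mhz]).days
        for title, min_mhz, launched in games
    }

print(ecosystem_clockspeed(games, cpu_available))
```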

Fig. 6. The Ecosystem Clockspeed of the Reverse Salient PC Game Subindustry, in Relation to the Four Connected Subindustries. (a) The ecosystem clockspeed (CPU_PCGAME). (b) The ecosystem clockspeed (GPU_PCGAME). (c) The ecosystem clockspeed (DX_PCGAME). (d) The ecosystem clockspeed (OS_PCGAME).

The ecosystem clockspeed derived from the co-evolution of the PC game software and CPU subindustries in panel (a) shows a continuously increasing time delay of technological performance attainment. This suggests that the higher levels of computer gaming performance that the end user receives from the interdependence of the CPU and PC game software subsystems require longer lengths of time to materialize as the ecosystem evolves. This outcome contrasts with the PC game subindustry's quickening technology clockspeed presented earlier (Fig. 5a). The combination of these results suggests that while PC game developers do see value in focusing on processor performance in the context of their own subindustry, they do not necessarily derive competitive advantage by pursuing the latest CPU hardware performance in the business ecosystem context. An example of long utilization times leading to the development of a successful game is the slowing down of the ecosystem clockspeed from 2002 to 2004 in Fig. 6a, culminating in the launch of the game Painkiller in early 2004. This game followed the traditional first-person shooter style in the footsteps of Quake and Doom and received acclaim in the industry despite (or due to) the slow ecosystem clockspeed. This industry acclaim, including the "PC Game of the Month" award from GameSpot, was due to the high performance of the game, for example, in its overall design and gameplay. In contrast, the longitudinal analysis of the PC game subindustry's ecosystem clockspeed with respect to the GPU subindustry in panel (b) shows three sequential eras marked by a quickening, then constant fast, and finally slowing clockspeed. This finding indicates the diminishing competitive advantage that PC game software developers derive, in the ecosystem context, from incorporating the latest GPU advances into their minimum graphics performance requirements. With respect to the complement technologies (panels (c) and (d)), we generally observe the PC game subindustry's growing focus on utilizing the latest DirectX and Windows OS versions. The end user consequently derives greater value in the form of gaming performance on the PC platform, as the potential of the complementary technologies is realized more readily. For instance, game developers are able to integrate the latest versions of DirectX more rapidly because this complement can be freely downloaded or supplied by the game developers themselves. A noteworthy turning point here is the utilization of Windows ME as a minimum requirement in PC games in late 2000. Windows ME was the successor to Windows 98, targeted specifically at home PC users, and released after Windows 2000, which was a business-oriented OS. In contrast to Windows 2000, Windows ME did not achieve substantial penetration in the marketplace, which may be one of the reasons why Windows 2000 was stated as a minimum requirement much sooner than Windows ME.


After the launch of Windows XP, which penetrated the market quickly, game developers included this version as a minimum requirement much more rapidly, and subsequent Service Packs increased the pace of this development. Finally, to explore the temporal dynamics of the ecosystem clockspeed and provide a more holistic picture of the evolution of the subindustries, we plotted all the ecosystem clockspeed measures on the same figure, with fitted polynomial trend lines, in Fig. 7. The polynomial fitting results underline the differing trends in our ecosystem clockspeed measures. First, the reverse salient PC game subindustry shows a linear-like slowing trend (R² = 0.9448) in relation to the salient CPU subindustry. This trend signifies the diminishing gains from increasing CPU processing performance as a minimum requirement in PC games, as processor speeds stabilize and multicore processors replace single-core processors.
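For illustration, a minimal sketch of fitting a polynomial trend line and computing its R², assuming an invented yearly clockspeed series (the actual data and polynomial orders used in Fig. 7 are not reproduced here):

```python
import numpy as np

# Invented ecosystem clockspeed series (days), for illustration only.
years = np.array([1996, 1998, 2000, 2002, 2004, 2006, 2008], dtype=float)
days = np.array([200.0, 450.0, 700.0, 950.0, 1250.0, 1500.0, 1800.0])

# Fit a second-order polynomial trend line, as in Fig. 7.
coeffs = np.polyfit(years, days, deg=2)
fitted = np.polyval(coeffs, years)

# Coefficient of determination for the fitted trend.
ss_res = float(np.sum((days - fitted) ** 2))
ss_tot = float(np.sum((days - days.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
```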

Fig. 7. The Representation of Trends in the Ecosystem Clockspeeds of the PC Game Subindustry, in Relation to the Four Connected Subindustries (CPU_PCGAME, GPU_PCGAME, OS_PCGAME, and DX_PCGAME, each with a fitted polynomial trend line).

Second, the PC game subindustry's ecosystem clockspeed in relation to the GPU subindustry displays a trend line with a trough around 2000 to 2003 (R² = 0.5843), signifying a turning point in the importance of graphics memory to the PC game subindustry. Beyond the trough, it appears that the graphics memory available to end users has reached a level that is sufficient for game developers, who no longer engage aggressively in introducing increasingly higher memory requirements into their products. In relation to the Windows OS, the PC game subindustry shows a mirror image of the GPU trend line (R² = 0.6709), with a peak around 2000 to 2003. The use of the Windows OS, and the capabilities it brings to the end user, has therefore become an important means of technological competition in game design. Similarly, the reverse salient PC game subindustry has remained on a fast-paced evolutionary track, with the DirectX trend line being nearly linear at low levels of temporal latency (R² = 0.1131). Fig. 7 reveals a few notable features. First, the trends of the different ecosystem clockspeed measures are rather heterogeneous. While the PC game subindustry has a rather stable ecosystem clockspeed in relation to DirectX, the ecosystem clockspeed is dramatically changing and dynamic in relation to the Windows OS, despite both of these representing software subsystems in the analyzed business ecosystem. This may partly be because DirectX is free, readily distributable and downloadable, and important to game design. Second, the temporal stability of the clockspeed measures also varies substantially. The ecosystem clockspeed of the reverse salient PC game subindustry in relation to the salient GPU subindustry remains at the same level for much longer than, say, it does in relation to the CPU subindustry. Third, in general, the ecosystem clockspeed measures pertaining to the hardware components are much slower than those pertaining to the software subsystems. It is also evident from Fig. 7 that the ecosystem clockspeed of the PC game subindustry in relation to the CPU subindustry has evolved steadily from a fast ecosystem clockspeed, signifying the importance of time-based competition, toward a slower ecosystem clockspeed as the technology around single-core processors has matured.

CONCLUSION AND IMPLICATIONS

This paper applies clockspeed measures to assess the rate of change of a subindustry that is situated in the business ecosystem context. We first applied the measure of a subindustry's own technology clockspeed by evaluating the durations of time between the launches of products with successively higher technological performance.


Second, we applied the measure of ecosystem clockspeed to evaluate the length of time required for the ecosystem's reverse salient subindustry to attain the level of technological performance of its interdependent subindustries. As a subindustry's internal measure of innovative productivity, the evolutionary trajectories of the technology clockspeed show remarkable variation, reflecting the evolving competitive rivalry within subindustries. In the ecosystem context, the focal subindustry's technology clockspeed allows an understanding of the temporal nature of technological competition within this subindustry. This information can be used, for example, in seeking synchrony between interlinked subindustries' output to maximize end-user value (Davis, 2013) or in designing competitive actions (Ferrier, Smith, & Grimm, 1999). This measure may thereby facilitate informed views on monitoring, outlining, and planning competitive activity that centers on improving the technological performance of a subindustry. From the observed patterns of technology clockspeeds, we propose that different eras of technology clockspeed, in a manner similar to traditional measures such as the number of organizations, products, or innovations (Abernathy & Utterback, 1978; Gort & Klepper, 1982; McGahan, 2000), could be linked to different phases of industry evolution in future research. In addition to the technology clockspeed, the ecosystem clockspeed informs us about the business ecosystem's holistic performance delivery as the technological performance of the reverse salient subindustry improves over time. Our empirical results suggest that the ecosystem clockspeed may progress along distinctly differing time paths, with increasing temporal latencies expected as the technological paradigm matures in interdependent subindustries. Noteworthy is the maturation of the CPU subindustry in terms of the processor speed parameter. As the focus of CPU performance enhancement shifts from processor speed to adding cores to the processor, the ecosystem clockspeed of the reverse salient PC game subindustry utilizing this parameter slows down steadily. The slowing ecosystem clockspeed reflects the dynamics of interplay inside the business ecosystem as the product architecture changes and innovative effort is redirected to new areas. This finding is in line with Ethiraj and Posen (2013) in that the PC game developer receives design information from complementor and supplier firms, and this information consequently governs its product development efforts. This may reflect the shift in the basis of competition that is expected to take place as the technological paradigm changes (Christensen, 1997). Therefore, we propose that the ecosystem clockspeed may be used to detect and analyze phases of industry evolution.


As demonstrated in our empirical illustration, the business ecosystem's capacity to deliver holistic value to the end user can be evaluated by comparing the clockspeed trajectories in different locations within the ecosystem. This comparison enables ecosystem members to identify the bottlenecks to value creation (Adner, 2012; Adner & Kapoor, 2010) and to prioritize alleviating these blockages. Additionally, our empirical results suggest that the ecosystem clockspeed of the reverse salient subindustry may progress along distinctly differing time paths. These differences in trajectory, which depend on the salient subindustry under scrutiny, signify the influence of contingency factors and present ample opportunities for future studies of their possible causes. We may propose, for instance, that differences in market and demand conditions heavily influence the nature of profitable innovative effort depending on the role of the subindustry in the business ecosystem, thus guiding the selection of the component technologies utilized. Similarly, intra- and inter-industry competition inside the business ecosystem naturally influences technological choices and their timing. Future studies could also extend both the limited observation interval and the industry setting of our present study to identify different industry evolution phases and the contingency factors at play. Future investigations may also find fruitful ground in assessing technological performance levels in relation to temporal measures during the industry life cycle. In conclusion, our study attempts to design metrics for investigating the evolution of business ecosystems, following the suggestion of prior scholars (e.g., McGahan, Argyres, & Baum, 2004). Our investigation is an attempt to build part of the set of analytical measures that could be used to further explore business ecosystem dynamics and the evolution of industries. The results of our work point to the significance of considering temporal aspects of this evolution from the holistic ecosystem perspective, while emphasizing the dynamic nature of the evolution and its contingence on supply- and demand-side factors.

NOTES

1. This evolution is traditionally observed as S-curves (e.g., Foster, 1986).
2. The machining operation in the manufacturing of bicycle hubs in the 19th century exemplifies such technological imbalance. At this time, bicycle hubs were produced by machining the outside and the inside of the hub. Despite process improvements that increased the speed of machining the outside of the bicycle hub, the overall pace of hub production remained unchanged.


This was because the machining technology used to form the inside of the hub was inadequate to keep up with the speed of forming the outside of the hub. Subsequently, the economic benefits from the improvements in the outside forming process were not realized until the inside drilling process was quickened. This was achieved through the implementation of oil-tube drilling, which had previously been used in the drilling of gun barrels (Rosenberg, 1969, 1976).
3. The evolution of technological systems is marked by differences in the performance levels delivered by their subsystems. In this state of imbalance, the subsystem that delivers the lowest level of performance, and hence curbs the performance of the holistic system, is referred to as the "reverse salient" (Hughes, 1983), and the state of imbalance manifest in the system due to the appearance of a reverse salient is referred to as "reverse salience."
4. We have shown the component and complement technologies to be the reverse salients in the figure to align with the previous works of Adner and Kapoor. It should be noted, however, that in a state of technological imbalance the reverse salient may also be the focal firm itself, when its technological performance is lower than the performances of the component or complement technologies.
5. In their extension of Fine's (1998) notion of the industry clockspeed, Dedehayir and Mäkinen (2011) use this duration to measure the industry's clockspeed in the systemic context (referred to as "systemic technological industry clockspeed").
6. DirectX, a Microsoft product, is designed to handle tasks such as multimedia, in particular for video game applications on Windows platforms. The DirectX interface concept was developed and employed soon after the launch of the Windows 95 OS and onward. Such interfaces include Direct3D, DirectDraw, and DirectMusic, such that the "X" in "DirectX" represents a particular interface. The DirectX software development kit (SDK) is made available to PC game developers to assist them in designing their products.
7. The game software stipulates a set of minimum performance requirements corresponding to the CPU, GPU, DirectX, and OS, with which the software will function as designed.
8. ATI was acquired by AMD in October 2006.

ACKNOWLEDGMENTS

We thank Ron Adner, Brian Silverman, and Joanne Oxley for insightful comments on earlier versions of this paper.

REFERENCES

Abernathy, W. J., & Utterback, J. M. (1978). Patterns of industrial innovation. Technology Review, 80(7), 40–47.
Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, April, 98–107.
Adner, R. (2012). The wide lens: A new strategy for innovation. New York, NY: Portfolio/Penguin.


Adner, R., & Kapoor, R. (2007). Managing transitions in the semiconductor lithography ecosystem. Solid State Technology, November 20.
Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31, 306–333.
Agerfalk, P. J., & Fitzgerald, B. (2008). Outsourcing to an unknown workforce: Exploring open sourcing as a global sourcing strategy. MIS Quarterly, 32(2), 385–409.
Audretsch, D. B. (1995). Innovation and industry evolution. Cambridge, MA: The MIT Press.
Bahrami, H., & Evans, S. (1995). Flexible re-cycling and high-technology entrepreneurship. California Management Review, 37(3), 62–89.
Baldwin, C. Y., & Clark, K. B. (1997). Managing in an age of modularity. Harvard Business Review, 75(5), 84–93.
Basole, R. C. (2009). Visualization of interfirm relations in a converging mobile ecosystem. Journal of Information Technology, 24, 144–159.
Bethke, E. (2003). Game development and production. Plano, TX: Wordware Publishing, Inc.
Bonaccorsi, A., & Giuri, P. (2000). When shakeout doesn't occur: The evolution of the turboprop engine industry. Research Policy, 29, 847–870.
Brockhoff, K. K., Ernst, H., & Hundhausen, E. (1999). Gains and pains from licensing patent portfolios as strategic weapons in the cardiac rhythm management industry. Technovation, 19, 605–614.
Brown, S. L., & Eisenhardt, K. M. (1997). The art of continuous change: Linking complexity theory and time-based evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42, 1–34.
Brusoni, S., & Prencipe, A. (2001). Unpacking the black box of modularity: Technologies, products and organizations. Industrial and Corporate Change, 10(1), 179–205.
Chesbrough, H. (2003). Environmental influences upon firm entry into new sub-markets: Evidence from the worldwide hard disk drive industry conditionally. Research Policy, 32, 659–678.
Christensen, C. M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business School Press.
Clark, K. B. (1985). The interaction of design hierarchies and market concepts in technological evolution. Research Policy, 14(5), 235–251.
Davis, J. P. (2013). The emergence and coordination of synchrony in interorganizational networks. Advances in Strategic Management, 30, 197–238.
Dedehayir, O., & Mäkinen, S. J. (2011). Measuring industry clockspeed in the systemic industry context. Technovation, 31(12), 627–637.
Eisenhardt, K. M. (1989). Making fast strategic decisions in high-velocity environments. Academy of Management Journal, 32(3), 543–576.
Eisenhardt, K. M., & Martin, J. A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21, 1105–1121.
Eisenhardt, K. M., & Tabrizi, B. N. (1995). Accelerating adaptive processes: Product innovation in the global computer industry. Administrative Science Quarterly, 40(1), 84–110.
Ethiraj, S. K., & Posen, H. E. (2013). Do product architectures affect innovation productivity in complex product ecosystems? Advances in Strategic Management, 30, 127–166.


Ethiraj, S., & Puranam, P. (2004). The distribution of R&D effort in systemic industries: Implications for competitive advantage. In J. A. C. Baum & A. M. McGahan (Eds.), Business strategy over the industry life cycle (pp. 225–253). Oxford: Elsevier Ltd.
Ferrier, W. J., Smith, K. G., & Grimm, C. M. (1999). The role of competitive action in market share erosion and industry dethronement: A study of industry leaders and challengers. The Academy of Management Journal, 42(4), 372–388.
Fine, C. H. (1998). Clockspeed: Winning industry control in the age of temporary advantage. Reading, MA: Perseus Books.
Foster, R. N. (1986). Innovation: The attacker's advantage. New York: Summit Books.
Gooroochurn, N., & Hanley, A. (2007). A tale of two literatures: Transaction costs and property rights in innovation outsourcing. Research Policy, 36, 1483–1495.
Gort, M., & Klepper, S. (1982). Time paths in the diffusion of product innovations. The Economic Journal, 92(367), 630–653.
Guimaraes, T., Cook, D., & Natarajan, N. (2002). Exploring the importance of business clockspeed as a moderator for determinants of supplier network performance. Decision Sciences, 33(4), 629–644.
Hayes, M., & Dinsey, S. (1995). Games war. London: Bowerdean Publishing Company Ltd.
Hobday, M. (1998). Product complexity, innovation and industrial organisation. Research Policy, 26, 689–710.
Hughes, T. P. (1983). Networks of power: Electrification in western society, 1880–1930. Baltimore, MD: The Johns Hopkins University Press.
Iyer, B., & Davenport, T. H. (2008). Reverse engineering Google's innovation machine. Harvard Business Review, April, 1–11.
Kapoor, R., & Adner, R. (2007). Technology interdependence and the evolution of semiconductor lithography. Solid State Technology, November, 51–54.
Kent, S. L. (2001). The ultimate history of video games. New York: Three Rivers Press.
Klepper, S. (1996). Entry, exit, growth, and innovation over the product life cycle. The American Economic Review, 86(3), 562–583.
Klepper, S., & Graddy, E. (1990). The evolution of new industries and the determinants of market structure. The RAND Journal of Economics, 21(1), 27–44.
Langlois, R. N., & Robertson, P. L. (1992). Networks and innovation in a modular system: Lessons from the microcomputer and stereo component industries. Research Policy, 21, 297–313.
Li, Y. (2009). The technological roadmap of Cisco's business ecosystem. Technovation, 29, 379–386.
Lusch, R. F. (2010). Reframing supply chain management: A service-dominant logic perspective. Journal of Supply Chain Management, 47(1), 14–18.
MacCormack, A., Verganti, R., & Iansiti, M. (2001). Developing products on internet time: The anatomy of a flexible development process. Management Science, 47(1), 133–150.
Macher, J. T., & Mowery, D. C. (2004). Vertical specialization and industry structure in high technology industries. In J. A. C. Baum & A. M. McGahan (Eds.), Business strategy over the industry life cycle (pp. 317–355). Oxford: Elsevier Ltd.
Malerba, F., Nelson, R. R., Orsenigo, L., & Winter, S. G. (1999). 'History friendly' models of industry evolution: The computer industry. Industrial and Corporate Change, 8(1), 3–40.
McGahan, A. M. (2000). How industries evolve. Business Strategy Review, 11(3), 1–16.


McGahan, A. M. (2004). How industries change. Harvard Business Review, 82(10), 87–94.
McGahan, A. M., Argyres, N., & Baum, J. A. C. (2004). Context, technology and strategy: Forging new perspectives on the industry life cycle. In J. A. C. Baum & A. M. McGahan (Eds.), Business strategy over the industry life cycle (pp. 1–21). Oxford: Elsevier Ltd.
Mendelson, H., & Pillai, R. R. (1999). Industry clockspeed: Measurement and operational implications. Manufacturing & Service Operations Management, 1(1), 1–20.
Meyer, A. D., Gaba, V., & Colwell, K. A. (2005). Organizing far from equilibrium: Nonlinear change in organizational fields. Organization Science, 16(5), 456–473.
Miller, R., Hobday, M., Leroux-Demers, T., & Olleros, X. (1995). Innovation in complex system industries: The case of flight simulators. Industrial and Corporate Change, 4, 363–400.
Moore, J. F. (1993). Predators and prey: A new ecology of competition. Harvard Business Review, May-June, 75–86.
Murmann, J. P., & Frenken, K. (2006). Toward a systematic framework for research on dominant designs, technological innovations, and industrial change. Research Policy, 35, 925–952.
Nadkarni, S., & Narayanan, V. K. (2007). Strategic schemas, strategic flexibility, and firm performance: The moderating role of industry clockspeed. Strategic Management Journal, 28, 243–270.
Pierce, L. (2009). Big losses in ecosystem niches: How core firm decisions drive complementary product shakeouts. Strategic Management Journal, 30, 323–347.
Poole, S. (2004). Trigger happy: Video games and the entertainment revolution. New York: Arcade Publishing.
Ripolles, O., & Chover, M. (2008). Optimizing the management of continuous level of detail on GPU. Computers & Graphics, 32, 307–319.
Romanelli, E., & Tushman, M. L. (1994). Organizational transformation as punctuated equilibrium: An empirical test. The Academy of Management Journal, 37(5), 1141–1166.
Rosas-Vega, R., & Vokurka, R. J. (2000). New product introduction delays in the computer industry. Industrial Management and Data Systems, 100(4), 157–163.
Rosenberg, N. (1969). The direction of technological change: Inducement mechanisms and focusing devices. Economic Development and Cultural Change, 18, 1–24.
Rosenberg, N. (1976). Perspectives on technology. Cambridge: Cambridge University Press.
Schilling, M. A. (2000). Toward a general modular systems theory and its application to interfirm product modularity. The Academy of Management Review, 25(2), 312–334.
Souza, G. C., Bayus, B. L., & Wagner, H. M. (2004). New-product strategy and industry clockspeed. Management Science, 50(4), 537–549.
Suarez, F. F., & Lanzolla, G. (2005). The half-truth of first-mover advantage. Harvard Business Review, 83(4), 121–127.
Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28, 1319–1350.
Tiwana, A., Konsynski, B., & Bush, A. A. (2010). Platform evolution: Coevolution of platform architecture, governance, and environmental dynamics. Information Systems Research, 21(4), 675–687.
Tushman, M. L., & Murmann, J. P. (1998). Dominant designs, technology cycles, and organizational outcomes. Research in Organizational Behavior, 20, 231–266.


Ulrich, K. (1995). The role of product architecture in the manufacturing firm. Research Policy, 24, 419–440.
Wezel, F. C. (2005). Location dependence and industry evolution: Founding rates in the United Kingdom motorcycle industry, 1895–1993. Organization Science, 16(5), 729–754.
Whitley, E. A., & Darking, M. (2006). Object lessons and invisible technologies. Journal of Information Technology, 21, 176–184.

DO PRODUCT ARCHITECTURES AFFECT INNOVATION PRODUCTIVITY IN COMPLEX PRODUCT ECOSYSTEMS?

Sendil K. Ethiraj and Hart E. Posen

ABSTRACT

In this paper, we seek to understand how changes in product architecture affect the innovation performance of firms in a complex product ecosystem. The canonical view in the literature is that changes in the technological dependencies between components, which define a product's architecture, undermine the innovation efforts of incumbent firms because their product development efforts are built around existing architectures. We extend this prevailing view in arguing that component dependencies and changes in them affect firm innovation efforts via two principal mechanisms. First, component dependencies expand or constrain the choice set of firm component innovation efforts. From the perspective of any one component in a complex product (which we label the focal component), an increase in the flow of design information to the focal component from other (non-focal) components simultaneously increases the constraint on focal component firms in their choice of profitable R&D projects while decreasing the constraint on non-focal component firms.


Second, asymmetries in component dependencies can confer disproportionate influence on some component firms in setting and dictating the trajectory of progress in the overall system. Increases in such asymmetric influence allow component firms to expand their innovation output. Using historical patenting data in the personal computer ecosystem, we develop fine-grained measures of interdependence between component technologies and changes in them over time. We find strong support for the empirical implications of our theory.

Keywords: Product architectures; innovation; productivity; product ecosystems

INTRODUCTION

This paper investigates the relationship between the architecture of complex product systems and firm innovation productivity. Understanding the sources of technological innovation and its consequences has a long and rich research history that cuts across multiple literatures (Abernathy & Utterback, 1978; Clark, 1985; Nelson, 1959b; Rosenberg, 1974; Schmookler, 1966). In the strategic management literature, research on innovation (see Ahuja, Lampert, & Tandon, 2008, for a recent survey) has highlighted two distinct themes related to complex product systems. One theme, focused on the product architecture, has sought to understand how interdependencies among components of the product system affect the rate and direction of innovation (Baldwin & Clark, 2000; Ethiraj, 2007; Ethiraj & Levinthal, 2004; Henderson & Clark, 1990). A second theme, focused on the product ecosystem, casts attention on the specific roles of firms in an ecosystem and the asymmetries among them in the benefits derived from innovation (Adner & Kapoor, 2010; Christensen, 1997). In this paper, we follow recent attempts to bridge these two themes. We focus on technological interdependencies between components in a product ecosystem and examine the impact on the innovation performance of firms (Boudreau, 2010; Jacobides, Knudsen, & Augier, 2006). It is generally recognized that product architectures affect the innovation performance of firms (Henderson & Clark, 1990). The canonical view in the literature was that changes in product architectures often undermine the innovation efforts of incumbents because their product development efforts are built around existing architectures.


Changes in architectures conflict with rigidities in organization structures and communication patterns that then disadvantage incumbent firms. Increasingly, however, several notable papers have uncovered exceptions to this expectation. On the one hand, some have questioned the premise that firm organization structures mirror existing product architectures and thus challenged the foundation of the inertia hypothesis (Cabigiosu & Camuffo, 2012; Hoetker, 2006). On the other hand, others have recognized notable real-world examples of architectural change that reinforced rather than undermined the innovation performance of firms (see, e.g., Jacobides et al., 2006). We continue in this recent tradition and seek to develop a discriminating understanding of how one aspect of product architectures – interdependence between components and changes in them – affects firm innovation productivity. We focus on innovation in complex product ecosystems. A product ecosystem includes the components that work together to provide value for the end user. The architecture of a product ecosystem includes the grouping of functions into components and the mapping of dependencies between them (Ulrich, 1995; Ulrich & Eppinger, 1999). Research and development (R&D) effort is fundamentally an investment made under uncertainty (Dixit & Pindyck, 1994). In complex product systems, the uncertainty accompanying component R&D investments is amplified in the presence of significant interdependencies between components. In a world where firm capabilities are heterogeneous and firms are making autonomous R&D choices, the success or usefulness of one firm's component innovation effort is intimately tied to the nature of the underlying component dependencies. In other words, the innovative efforts of firms are subject to externalities (positive and negative) that alter the rate and success of innovation and its appropriability. We argue that component dependencies and changes in them affect firm innovation efforts via two principal mechanisms. First, component dependencies expand or constrain the choice set of firm component innovation efforts. From the perspective of any one component in a complex product (which we label the focal component), an increase in the flow of design information to the focal component from other (non-focal) components simultaneously increases the constraint on focal component firms in their choice of profitable R&D projects while decreasing the constraint on non-focal component firms. Second, asymmetries in component dependencies can confer disproportionate influence on some component firms in setting and dictating the trajectory of progress in the overall system. Increases in such asymmetric influence allow component firms to expand their innovation output.


We label the former constraint-enhancing design dependencies and the latter influence-extending design dependencies. We map the flow of design information dependencies between components in a complex product, and changes in them, to understand how product architectures affect firm innovation productivity. In order to systematically identify and separate the impact of product architectures, we represent the micro-foundations of product architectures using a simplified form of dependency structure matrices (DSMs) (see, e.g., Browning, 2001; Eppinger, Whitney, Smith, & Gebala, 1994; Steward, 1981). We treat the grouping of functions into components as exogenous and focus on between-component dependencies. Using historical patent citation data in the personal computer (PC) product ecosystem, we map interdependencies between four central components of the PC (microprocessor, memory, graphics adapter, and disk drive) for each year from 1979 to 1998. Using the DSMs, we construct measures of constraint-enhancing and influence-extending design dependencies between components. We then estimate the innovation productivity of firms in the PC product system and show that constraint-enhancing design dependencies are negatively related to innovation productivity, whereas influence-extending design dependencies are positively related to innovation productivity. The rest of the paper is organized as follows. The following section briefly reviews the literature on innovation productivity and the impact of product architectures. Section three develops the theory relating product architectures to firm innovation productivity and sets up the key hypotheses that we seek to test. Section four describes the PC product ecosystem that comprises the empirical context for the study. Section five outlines the model specification, data, and methods. Sections six and seven present the results and a discussion of their implications and broader contributions to the literature.

LITERATURE REVIEW

Innovation Productivity

Traditionally, social scientists interested in modeling innovation productivity have adopted one of two approaches: demand pull and supply push (Thirtle & Ruttan, 1987). The demand-pull view contends that R&D investments respond to market demand (Schmookler, 1966). The basic argument suggests that market demand is a function of consumers' preferences.


As consumers' tastes, income levels, and budget constraints change, so do their preference or utility functions. These changes in the utility functions, mirrored by shifts in the demand curve, act as an impetus for R&D investment and innovation. Schmookler (1966) concluded that innovative effort, rather than being random or exogenous to the economic system, was in fact driven primarily by demand-side considerations, through a conditioning of market size expectations for the products arising out of inventive activity. The supply-push approach argues that autonomous developments in scientific and technological knowledge tend to shift the supply curve for technical change to the right (Nelson, 1959a, p. 106). The supply-push argument has three variants that seek to explain the rate and direction of innovative activity: induced innovation, learning by doing, and industry structure. Rosenberg (1969) argued for a theory of induced innovation based on the "obvious and compelling need" to overcome constraints on the growth of production or of factor supplies. He argued that ultimately, "the allocation of inventive effort at the level of the firm is strongly influenced by the perception of technical possibilities or needs which are thrown up by the technology itself" (Rosenberg, 1973, p. 357). The main contribution of Rosenberg (1969) was to show that the underlying architecture of a complex product or technology can itself shape incentives for R&D effort. Ethiraj (2007) extends the application of the induced innovation hypothesis to explore asymmetric incentives for induced innovation among firms that make up the product ecosystem. The second explanation for innovation productivity is rooted in learning-by-doing or experience effects. The learning-by-doing hypothesis suggests that, other things held constant, learning from prior R&D experience will improve the productivity of future R&D investments (Arrow, 1962b; David, 1975). Even though learning by doing is not by itself an incentive-based explanation for innovation productivity, others have argued that learning by doing can shape and direct innovative effort (Nelson & Winter, 1982). The incentive to engage in innovative efforts in the neighborhood of prior R&D is increased to the extent that the R&D cost function declines as a function of learning by doing. Finally, several studies have attempted to document the relationship between market structure (e.g., industry structure, appropriability conditions, technological opportunity) and R&D investments and innovation (see Cohen, 1995, for the most recent survey).


Empirical studies have documented the influence of market structure on R&D and innovation, examining industry characteristics such as demand conditions (Kamien & Schwartz, 1970), industry concentration or monopoly power (Mansfield, 1968; Williamson, 1965), appropriability conditions (Levin, Klevorick, Nelson, & Winter, 1987), and technological opportunity (Dasgupta & Stiglitz, 1980; Knott & Posen, 2009). The general conclusion from these studies is that endogeneity between R&D investments and elements of market structure shackles much of the prior empirical work (Cohen, 1995). In sum, the literature on innovation productivity emphasizes the importance of economic incentives. While incentives are no doubt important in any economic decision, the broader contexts within which such decisions occur often amplify or dampen the ultimate outcome of the decision. A variety of exogenous factors, such as the actions of competitors, changing technologies, product architectures, or unanticipated government policy changes, can affect incentive intensity. In this paper, we focus on a complex product ecosystem, and we argue that one such exogenous factor that affects innovation productivity is product architecture. We elaborate below how and why product architectures affect innovation productivity.

Why Are Product Architectures Important?

A complex product may be usefully viewed as a set of components that together provide utility to customers (Garud & Kumaraswamy, 1995, p. 94). The complexity of the product system stems primarily from the often-unknown nature and magnitude of interactions between different components of the product system and their implications for product performance. For instance, the interaction between two components may be positive (increasing in one another), negative (decreasing in one another), or absent (unrelated). Furthermore, the nature of the interaction may alternate between positive, negative, and unrelated over different ranges of interaction strength. As a result, overall product performance can exhibit highly nonlinear and/or non-monotonic behavior in response to changes in one or more components. Thus, complexity is a function of the number and composition of components that make up the product system and the nature of interdependence between them. A "product architecture is the arrangement of functional elements of a product into physical chunks that become the building blocks for a product" (Ulrich & Eppinger, 1999). While there can be several ways of designing a product's architecture, complex products are often designed with modular design logics (Baldwin & Clark, 2000; Ulrich & Eppinger, 1999).


The design of modular architectures is based on the grouping of functional elements into components or chunks such that interdependence within components is maximized and interdependence across components is minimized (Alexander, 1964; Simon, 1962). However, the interdependencies between components can never be completely eliminated (Baldwin & Clark, 2000). For instance, the electronic circuitry for cruise control in an automobile is interdependent with the engine that provides acceleration and with the braking system. Each of these components is based on distinct technological knowledge and can perhaps be designed and manufactured in isolation. Nonetheless, the components need to communicate with each other in order to function effectively. This creates functional interdependencies between the components that make up a product or system. Therefore, product architecture includes both (1) the grouping of elements or functions into components and (2) the mapping of interdependence between them. The broader literature on product architectures may be usefully partitioned into two distinct themes: one that treats changes in product architectures as exogenous to the firm and explores their implications for firm performance and survival (e.g., Christensen, 1997; Henderson & Clark, 1990; Tushman & Anderson, 1986), and a second that treats product architecture as endogenous to the firm and explores the intermediate performance implications of alternative architectures (Boudreau, 2010; Clark & Fujimoto, 1991; Jacobides et al., 2006; Ulrich, 1995). In the literature treating product architectures as exogenous to the firm, the general consensus has been that organization designs or architectures often mirror product architectures (Cusumano & Selby, 1998; Sanchez & Mahoney, 1996), and thus any change in the product architecture may disrupt organizational effectiveness by rendering less useful the communication channels, information filters, problem-solving strategies, skills, and routines of the organization that were tailored to the previous product architecture. An inappropriate organizational architecture will guide the identification of the wrong problems and the application of inappropriate solutions, resulting in lowered innovation productivity. At the extreme, if the organization-product architecture misalignment continues, the firm's survival may be hampered (Henderson & Clark, 1990). In recent years, however, some scholars have challenged the assumption that firm structures mirror product architectures (Hoetker, 2006). In contrast, the literature treating product architecture as endogenous to the firm views it as a managerial instrument of design that helps firms attain intermediate performance goals such as minimizing development cost and time or maximizing product flexibility (Ulrich & Eppinger, 1999).


Put differently, endogenous changes in product architectures can actually help improve the performance of firms. As Ulrich (1995, p. 419) observes, "product architecture is particularly relevant to the research and development (R&D) function of a company, because architectural decisions are made during the early phase of the innovation process where the R&D function often plays a lead role. While these architectural decisions are linked to the overall performance of the firm, they are also linked to specific R&D issues, including the ease of product change, the division between internal and external development resources, the ability to achieve certain types of technical product performance, and the way development is managed and organized." The research in this tradition tends to treat product architecture as a design decision, that is, endogenous to the firm, and explores the intermediate performance trade-offs of alternative architectures (see Baldwin & Clark, 2000; Garud, Kumaraswamy, & Langlois, 2001). All of these intermediate performance measures are ultimately related to firm innovation productivity. Moreover, the autonomous product architecture decisions of individual firms introduce an important element of aggregate uncertainty into the product system. One important manifestation of this uncertainty is the changing nature of component dependencies in the product system. Though the two themes in the literature on product architectures differ in the focus of their explanatory efforts, they agree that product architectures have important implications for firm performance by affecting the usefulness of R&D efforts. In particular, we draw two conclusions from the review of the extant literature. On the one hand, the innovation productivity literature has largely ignored the role of product architectures in shaping economic incentives and, thereby, the returns from R&D activity. On the other hand, the product development literature underlines the role of product architectures in determining firm innovation productivity. These two observations constitute the point of departure for this paper. We advance a simple theory of how product architectures affect the returns from firm R&D and empirically test the predictions that follow. The following section elaborates our theory.

ARCHITECTURE OF COMPLEX PRODUCT SYSTEMS AND INNOVATION PRODUCTIVITY

This section develops our theory and principal hypotheses.


We first outline the nature and content of component dependencies in the context of firm R&D decisions in complex products. We then briefly describe the firm R&D decision problem, focusing particularly on the micro-mechanisms by which component dependencies affect R&D choice and returns. Finally, we outline the relationship between component dependencies and innovation productivity and set up empirically testable hypotheses.

Component Dependencies

Recall from section "Literature Review" that product architecture represents the grouping of elements or functions into components and the nature and magnitude of interdependence between them. Product architecture, therefore, comprises the structural arrangement of functional elements that can increase or decrease design complexity and, consequently, the return on R&D effort. For instance, modular architectures reduce design complexity, while the converse is true for integral architectures (Ulrich, 1995). In this paper, we are primarily interested in one aspect of product architectures – the design dependencies between components of a complex product – and how such dependencies affect firm innovation productivity. The notion of design dependencies requires explanation on two aspects: the nature and the content of dependence. The earliest elaboration on the nature of dependence and its relevance to design activity has its roots in Thompson (1967, pp. 54–55). He suggested that dependence between two activities, A and B, and its consequence for realized performance, P, can assume three stylized forms: pooled (P = f[A, B]), sequential (P = f[A → B]), or reciprocal (P = f[A ↔ B]). In the terminology of product design, pooled interdependence corresponds to fully modular architectures, sequential interdependence to nearly modular, and reciprocal to non-modular. As is evident, a transition from pooled interdependence to reciprocal interdependence entails a concomitant increase in design complexity. In pooled interdependence, each activity can proceed in isolation, such that maximizing the performance of A and B in isolation results in global maximization. In contrast, when there is reciprocal interdependence, maximizing the performance of A and B in isolation will not necessarily increase global performance. The shared interdependence between A and B demands coordination to achieve global performance goals. Thompson (1967) did not explicitly address the content of interdependence, since his interest was primarily in organization design.


The engineering design literature has devoted extensive attention to the content of dependence relationships (see, e.g., Pahl & Beitz, 1991; Pimmler & Eppinger, 1994). Sosa, Eppinger, and Rowles (2003), in the context of commercial aircraft engines, identify five types of design dependencies: spatial, structural, energy, material, and information. These dependencies refer to the content of operational interaction between components, that is, to flows of energy or materials from one component to another. Our interest in this paper lies in explaining the effect of product architectures on firm innovation productivity. In other words, we are interested in the content of the interdependencies between components of a complex product that affect firm R&D effort and the returns to that effort. Since R&D is primarily an intangible or informational asset (Arrow, 1962a), the content of interdependence that we study is closest to what Sosa et al. (2003) call informational dependencies. Note, however, that Sosa et al. (2003) are referring to information dependencies in the operation of a complex product. We focus instead on informational dependencies in the design process itself. That is, in the context of R&D, it is the flow of design information or constraints that is relevant. Such dependencies set the broad parameters or boundaries within which component R&D allocations are made. Thus, the main premise of this paper is that firm-level R&D effort and the returns that accrue to that effort depend on the informational dependencies between components of a complex product and the changes in these dependencies over time. In explicating the importance of design dependencies and how they affect firm innovation productivity, we found it useful to draw on the conceptual apparatus and language of DSMs (see Steward, 1981). Consider a hypothetical DSM for four components of a PC (microprocessor, memory, display adaptor, and hard disk drive) as represented in Fig. 1 (see Eppinger, 2001, for an excellent lay introduction to DSMs). Fig. 1, which is a binary square matrix, illustrates four hypothetical functions for each component. The "x" in each row-column intersection connotes the presence of dependence between functions. Within each component, all functions are strongly interdependent with each other, as shown by the saturated "x"s in all the cells. The "x"s above and below the principal diagonal indicate dependencies between different functions of different components. For instance, the input-output function of the microprocessor (microprocessor column 1) shares interdependencies with the data transfer function within the memory component (memory row 2) of the PC. The off-diagonal "x"s are to be read as "column affects row" or "row is dependent on column."

Fig. 1. Illustrative Dependency Structure Matrix.

Note that a symmetric "x" in corresponding cells above and below the diagonal reflects interdependence between functions, whereas a single "x" either above or below the diagonal reflects one-way dependence. Thus, changes in the magnitude of above- and below-diagonal dependencies reflect changes in component dependencies. Such DSMs have been found useful for representation and analysis in a variety of applications, including product design and development (Eppinger, 2001; Eppinger et al., 1994), organization design (Browning, 1998, 1999), and information flow (Clark & Wheelwright, 1993).
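To make the constraint-enhancing and influence-extending notions concrete, here is a minimal sketch with an invented dependency pattern, aggregated to the component level (the paper's actual measures are built from patent citation data, not from this toy matrix):

```python
import numpy as np

# Invented binary DSM over four PC components. Entry [i, j] = 1 is read
# "column j affects row i": design information flows from component j
# to component i.
components = ["microprocessor", "memory", "display_adapter", "hard_drive"]
dsm = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])

focal = components.index("microprocessor")

# Incoming design information (row sum): constrains the focal
# component's choice of profitable R&D projects.
constraint = int(dsm[focal, :].sum())

# Outgoing design information (column sum): the focal component's
# influence over other components' design choices.
influence = int(dsm[:, focal].sum())

print(f"constraint = {constraint}, influence = {influence}")  # 2 and 3
```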

R&D in Complex Product Systems

We adopt a simple conception of firm R&D in complex product systems. In line with our description above, we assume that a complex product comprises several components or modules that share design dependencies. We also assume that firms in complex product industries are specialized in the development of one or more components.


component R&D choices every period. Component dependencies affect R&D projects and their returns via two principal mechanisms. Assume for a moment a world where there are no component dependencies. Firms can choose from a range of component R&D projects based simply on a cost-benefit calculus. Consider circle A in Fig. 2 as representing the portfolio of feasible R&D projects that yield a positive expected return. Into this ideal world, introduce component dependencies. The introduction of component dependencies will alter the cost function associated with each R&D project. For instance, Recaro Aircraft Seating, a seat supplier to Airbus' A380, is facing pressure to reduce the weight of a seat by about 30 percent as compared with its regular seats. The only way to achieve such a reduction is by using carbon-based composites in the frame. This, however, adds more than $1,500 to the cost of a seat (Matlack, 2004). Thus, the introduction of a dependence between the weight of a seat and the weight of the aircraft reduces the range of feasible design options for Recaro by altering the cost-benefit calculus of its seat options. Projects that would have been economically feasible without the dependence will no longer be profitable. Thus, in complex products, component dependencies always constrain and reduce the range of feasible component R&D projects for firms1 into a smaller set, as in circle B in Fig. 2.

Component dependencies, in addition to constraining the menu of feasible R&D projects, also affect the returns to R&D, that is, the useful R&D output. Other things held constant, if there were no changes in component dependencies between the time the R&D allocation was made and the time the R&D output was realized, then the useful R&D output, C, in Fig. 3 would be identical to the actual R&D projects undertaken, B. However, if component dependencies change in the interim, the extent of overlap between useful R&D output and the R&D projects undertaken will no longer be perfectly

[Fig. 2. Component Dependencies and R&D Project Choice. The original figure shows circle A, the economically viable portfolio of R&D projects, reduced by component dependencies to the smaller circle B, the actual portfolio of R&D projects.]

[Fig. 3. Component Dependencies and R&D Output. The original figure shows that with no change in dependencies, the actual portfolio of R&D projects (B) equals the useful R&D output; with a change in dependencies, the useful R&D output (C) overlaps only partly with B.]

congruent. The extent of the reduction in useful output will be related to the nature of the change in component dependencies. For instance, in the late 1980s, R&D in the rigid disk drive industry was progressing along a trajectory of increasing storage capacity. At this time, the emergence of portable computers altered the dependency between the disk drive and the other components such that the size of the drive, or its form factor, began dominating capacity concerns (Christensen, 1997). Firms that invested R&D in improving capacity while ignoring form factor found some portion of their R&D investments to be less useful. Thus, as represented in Fig. 3, the extent of overlap between R&D projects undertaken and their useful output will vary depending on the nature of change in component dependencies.

In sum, we posit two micro-mechanisms that characterize the relationship between product architectures and firm innovation productivity. Component dependencies and changes in them (1) determine and/or alter constraints on the choice of R&D projects and (2) affect the useful output of R&D efforts. The following section outlines the principal hypotheses.

Component Dependencies and Firm Innovation Productivity

We focus primarily on interdependencies between components of a complex product and how they affect firm innovation productivity. If design and functional dependencies between components were uniform, then the contribution of each component to overall product performance would be identical. In such a case, the impact of product architectures would be identical across components and firms. The more interesting (and also


representative) case is that of asymmetric dependencies between components. First, asymmetries in dependencies create imbalances in the power distribution of components in a complex product; that is, some components are more important than others, and this can change over time as component dependencies shift. Second, asymmetries also place limits on the choice of component R&D projects. In seeking to understand the impact of such asymmetries in component dependencies, we sought to quantify the intensity of dependencies below and above the diagonal for each component. The intensity or magnitude of dependencies is mapped as the number of off-diagonal "x"s in the DSM for a given component. A higher number of "x"s denotes greater intensity of dependence. The presence of design dependence means that the focal component is the recipient of design information from one or more other components, as reflected in the "x"s above the diagonal. Similarly, a focal component is the sender of design information to one or more other components to the extent that there are "x"s below the diagonal.

Consider the case of a focal component, say the microprocessor, in Fig. 1. In this case, there are several "x"s below the diagonal, suggesting that the functionality of other components, such as memory and the hard drive, depends on the microprocessor. The greater the intensity of below-diagonal dependencies, the greater is the flow of design information from the focal component to the other components. More generally, a high intensity of below-diagonal dependencies gives the focal component firm one important advantage that in turn has significant implications for firm innovation productivity. Focal component firms with a high intensity of below-diagonal dependencies have, by virtue of the asymmetric dependence structure, enhanced leverage to dictate the nature, magnitude, and direction of design change and innovation in the product system. For instance, Ford and General Motors, as auto assemblers/designers, have the relative freedom to introduce unilateral design changes in their cars. Their component suppliers, due to the unidirectional dependence, have no choice but to adapt to and follow any design changes. The greater the flow of design information from a focal component to other components, the greater is the influence of the focal component on the rate and direction of R&D effort in the complex product, and this in turn extends the useful output of R&D effort for the focal component firms. In understanding the intuition here, it is useful to draw a contrast between consensus and authority: whereas consensus requires agreement and coordination, authority relies on fiat in implementing decisions. Thus, a focal component firm with a bundle of R&D assets will be able to usefully deploy a larger proportion of those assets when it has the clout to force


the broader system to adopt the design changes associated with its R&D output. This enlarges the proportion of useful R&D output for the focal component firms, as shown in Fig. 3. Therefore, more generally, we expect any increase in the intensity of below-diagonal dependencies (defined as the design influence of the focal component on the rest of the product ecosystem) to concomitantly extend the influence of focal component firms on the overall product system. An increase in control will increase the useful output of R&D. Thus, we expect to observe that:

H1. An increase in the intensity of below-diagonal dependencies will be positively related to changes in firm innovation productivity.

We also argued that firm innovation productivity is partly a function of the choice of R&D projects and that component dependencies can increase or decrease such choice. Specifically, the greater the flow of design information from other components to the focal component, the greater is the need for coordination of the focal component firm's R&D efforts. This affects the autonomy to choose component R&D projects. The extent of autonomy here is a direct function of the intensity of interdependencies between components of a complex product and of the direction in which those interdependencies flow. The intensity of above-diagonal dependencies reflects the dependence of the focal component on other components. The greater the intensity of above-diagonal dependencies (defined as the design influence of the product ecosystem on the focal component), the greater is the constraint on the choice of component R&D projects. For instance, in Fig. 1, the greater the dependence of the microprocessor's functionality on memory, the less able microprocessor firms will be to pursue autonomous component R&D agendas. The adoption of any microprocessor design changes will first require coordination with memory-producing firms. The net effect will be to constrain the range of R&D choices of the microprocessor component firms. Thus, we expect that:

H2. Increases in the intensity of above-diagonal dependencies will be negatively related to changes in firm innovation productivity.

To summarize, the theory and hypotheses advanced above suggest that changes in the dependencies between components reflect changes in the micro-foundations of product architectures. A firm's innovation productivity is determined to some extent by changes in the intensity of the dependencies between the component in which it operates (focal


component) and other components in the product system. The functional or dysfunctional effects of changes in component dependencies, from a focal component firm's standpoint, are driven in part by its ability to generate returns from R&D (i.e., to increase useful R&D output) and its ability to autonomously choose component R&D projects. The following section describes the context in which we test the hypotheses.

RESEARCH CONTEXT

In the sections above, we argued that in complex product systems, changes in the dependencies between components – in particular, changes that affect information flows in the design process – affect firm innovation productivity. Moreover, the impact of such changes in design dependencies differs across components due to asymmetries in dependencies. In testing the hypotheses that flow from this assertion, at least two criteria for the research site must be met. First, the research site should resemble a complex product system in that it includes several components that share design interdependencies. Second, since our theory is silent about the grouping of functions into components, in order to make use of panel data, there must be little change over time in the grouping of components.

Perhaps the most well-known example of a complex product system that fits these requirements is the PC. The PC product system consists of a number of discrete component industries such as microprocessor, memory, display adaptor, and hard disk drive. These component industries are distinct in that firms are typically highly specialized in a particular component industry and rarely participate in the product markets of other components (i.e., firms typically operate within a single four-digit SIC industry).2 Nevertheless, the PC's functioning depends on how the components fit together as a system. Thus, component R&D cannot happen in isolation from other components, due to the flow of design information from other components in the PC. In addition, there has been little change in the component grouping of the PC over the time period of our observation, which provides us a unique context in which to examine the effects of changes in interdependencies between components.

In addition to the four components above, the PC, depending on changing user needs over time, has included other components such as the 5.25-inch disk drive, 3.5-inch disk drive, CD drive, DVD drive, tape drive, telephone modem (of varying standards), cable modem, DSL modem, keyboard, and printer (to name but the most prominent). In order to


determine which components to include in this study, we adopted two criteria: (a) the component has to be a part of the PC over the life of the industry that we study, and (b) the interface between the component of interest and other components must be primarily governed by hardware. With regard to the latter criterion, whereas a hardware interface necessitates design coordination between components, a software interface affords greater flexibility in that the component designer enjoys greater design autonomy, since communication between components occurs via software device drivers. For example, one may substitute a laser printer with an inkjet printer (a different technology), and no significant hardware reconfiguration is necessary, as the interface is managed via printer driver software. The application of these two criteria resulted in the identification of five main components: microprocessor, memory, display adaptor, hard disk drive, and mainboard. Ultimately, we had to drop the mainboard from the dataset, since there were only two publicly listed firms operating in this component industry (the results are robust to the inclusion of the mainboard component firms).

A brief description of the four components follows. The microprocessor performs the data processing functions of the PC. It is an integrated circuit containing the arithmetic, logic, and control circuitry required to interpret and execute instructions from a computer program. The microprocessor is linked, via its input/output unit, to the other components. Second, random access memory (RAM) serves as the primary interface between the user and the microprocessor, both passing programs and data from the hard drive to the microprocessor and passing output to the hard drive and the display adaptor. Third, the display adaptor is the interface between the PC and the monitor that allows data or graphics to be displayed. Finally, the hard disk drive is used for long-term storage of data on a PC in a format that is accessible to the PC's memory.

RESEARCH DESIGN

Basic Model

In developing our hypotheses, we argued that changes in the dependencies between components affect firm innovation productivity. The empirical investigation of these hypotheses requires the estimation of an innovation productivity equation as a function of changes in the dependencies between components.


There is a long tradition of research in the economics of technical change that has sought to estimate firm R&D production functions (Hausman, Hall, & Griliches, 1984; Pakes & Schankerman, 1984). Following in this tradition, we estimate an innovation production function of the form Y=f(X, W, Z), where (a) Y is the annual count of patents granted to the firm in its core component; (b) X is a vector of measures of dependencies between components; (c) W is a vector of inputs to the production function, including R&D, labor, and capital; and (d) Z is a vector of controls. In the next three subsections, we discuss the research design in detail.
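As a reading aid (our addition, with our own notation, not the authors'), the count-data models estimated below imply an exponential conditional mean of roughly the following form:

```latex
% Sketch of the conditional-mean specification implied by the count models
% estimated below; i indexes firms and t indexes years (our notation).
\[
  \mathbb{E}\left[ Y_{it} \mid X_{it}, W_{it}, Z_{it} \right]
  = \exp\!\left( X_{it}'\beta + W_{it}'\gamma + Z_{it}'\delta \right)
\]
% Y_it: annual patent count in the firm's core component;
% X_it: interdependence measures; W_it: logged inputs; Z_it: controls.
```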

Sample and Data

The sample and data for the study were gathered from four sources: the Corptech Directory, Compustat, the Cassis patent database, and the Micropatents LLC patent database. These sources, and the respective data gathered, are discussed below.

We compiled a sample of firms in the four PC component industries using the annual Corptech Directory of Technology Companies. Corptech categorizes firms by both industry and product line. We used this information to identify a total of 106 publicly traded US firms in the four component industries (microprocessor, memory, display adaptor, and hard drive).3 A fortunate coincidence is that almost all the firms in our dataset (with the exception of nine firms) generate at least 80 percent of their sales revenues from a single component product market.4 Employing the prevailing definitions of firm diversification, nearly all firms in our sample would be classified as single-business firms (Rumelt, 1986). Thus, we were able to establish a one-to-one mapping between firms and their component product markets.

Financial data on capital, labor, and R&D were taken from the COMPUSTAT industrial annual file, which contains annual operating data on companies listed on the major US stock exchanges. We gathered data for the 106 publicly traded firms for the years 1979 through 1998.

We used patent data to construct measures of interdependence between PC components. We extracted a census of all patents that may be relevant to the PC from the Cassis product distributed by the US Patent Office. The data available include patent number, assignee name, application date, issue date, and original and current patent classes. We sought to assign all patents granted during the period 1979 to 1998 to their relevant component domain (microprocessor, memory, display adaptor, and hard drive). This


assignment task proved to be difficult, as there is no established mapping between patent classes and components. We conducted the assignment process in three stages. First, using the descriptions of patent classes, we identified the patent classes associated with each component. This provided a first-pass mapping of patent classes to components, which was generally consistent with a concordance map produced by the United States Patent and Trademark Office (USPTO) (Shane, 2001). Second, we excluded from the dataset any patent subclasses that did not fall within one of the four specified components. Third, we confirmed the classification with industry experts and employees at the Office of Technology Assessment and Forecasting (OTAF). This methodology identified 145,646 patents with application dates between 1979 and 1998, mapped to the four components.5

We gathered data on patent citations for these 145,646 patents from the Micropatents LLC database. We identified all patents cited (backward in time) in these 145,646 patents and went back to the Cassis database to identify their patent classes. Using the mapping of patent classes to component domains explained above, we assigned the relevant pre-1979 patents to one of the four component domains. For instance, we obtained all patents in the four components applied for in the year 1979 and tracked their citations to all previous patents in the same four components granted from 1969 onward. We did the same for the remaining years from 1980 to 1998. In doing so, we identified 600,358 cited patents across the four component technologies.
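To illustrate this data-construction step, a minimal sketch follows (ours, not the authors' code; the `citations` triples and the component numbering are hypothetical stand-ins for the mapped citation records described above). It accumulates the cross-component citation counts, one 4 x 4 matrix per year, that the measures below are built from:

```python
from collections import defaultdict

# Hypothetical input: (citing_component, cited_component, citing_year)
# triples derived from the patent-to-component mapping described above.
citations = [
    (2, 1, 1990),  # a memory patent citing a microprocessor patent
    (1, 1, 1990),  # a microprocessor self-citation
    # ... one triple per citing-cited patent pair
]

# P[t][i][j]: count of citations from component i patents applied for in
# year t to component j patents from any earlier year.
P = defaultdict(lambda: [[0] * 4 for _ in range(4)])
for citing_comp, cited_comp, year in citations:
    P[year][citing_comp - 1][cited_comp - 1] += 1
```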

Measures

Dependent Variable
Following the R&D production function literature (Hall & Ziedonis, 2000; Hausman et al., 1984; Pakes & Schankerman, 1984), we use as our dependent variable the annual count of patents granted in the firm's primary component product market. Thus, we were able to derive an annual estimate of each firm's patent counts (based on patent application dates) in its primary component product market. For robustness, we also constructed a citation-weighted measure of patent counts to control for quality heterogeneity among patents.

An important question for our purposes is the extent to which patent counts are a good measure of innovation productivity. In general, the consensus in the literature is that, in the absence of firm-specific data on innovative output (e.g., new products), patents are a useful, albeit noisy,


measure of innovation productivity (Berndt, Griliches, & Rappaport, 1995). The main concern with using patents as an indicator of innovation productivity is the wide variation in their quality. One way to account for such variation is to weight patent counts using forward citations (citations to the focal patent) (Trajtenberg, 1990). More recently, Lanjouw and Schankerman (2004) refined the quality adjustment using a multi-indicator weighting system in which forward citations play a central role. Weighted patent counts are usually employed as right-hand-side variables in models measuring the value of patents. However, in models where innovation productivity is the dependent variable, extant research suggests that weighted counts are not appropriate, and the use of unweighted patent counts is the norm (Ahuja & Katila, 2001; Henderson & Cockburn, 1996).

In addition, there is the question of patenting in the PC industry specifically, and here two issues are relevant. First, how prominent is patenting in the PC industry? Second, to what extent are patent counts a valid measure of innovation productivity in the PC industry? On the first point, the PC industry (for the most part a subset of the semiconductor industry) is one of the most heavily patented industries across sectors of the economy (Cohen, Nelson, & Walsh, 2000), and its patenting rate continues to grow (Hall, Jaffe, & Trajtenberg, 2005). On the second point, there seems to be a consensus in the literature that patents in the semiconductor industry are indeed a good measure of innovation productivity. Papers supporting this conclusion are numerous (see Narin & Breitzman, 1995; Stuart, 2000). Of final note is the suggestion in the literature that patents may be used more as a defensive tool in the semiconductor industry than to protect intellectual property (IP) (Hall et al., 2005). Results from the Yale (Levin, Klevorick, Nelson, & Winter, 1987) and Carnegie (Cohen, Nelson, & Walsh, 2000) surveys of R&D managers suggest that in the semiconductor industry, patents are weak instruments for enforcing IP protection. This is primarily because technology changes faster than firms can litigate to enforce IP rights. In other words, the useful life of patents is far less than their period of validity (i.e., 20 years). While our study does not distinguish between these alternative motives for patenting, we believe that using patent counts as a measure of innovation productivity is valid as long as the relationship between R&D spending and patent counts is consistent across firms within a component industry.

Explanatory Variables
We employed patent citation data to construct our measures of changes in the dependence between components. Each patent filed in the United States


is required to reference the "prior art" upon which the patent builds. It is possible to categorize all the patents cited in a given patent into two broad groups. The first is the set of all citations to patents that belong to the same component, that is, a patent in the microprocessor component citing prior patents in the microprocessor component. The second group comprises all citations in the given patent to patents in other components, that is, the patent in the microprocessor component citing patent(s) in the memory or disk drive component. We employ the latter set of citations to capture the flow of design information and constraints from one component to another.

The key question is the extent to which patent citations are a reliable indicator of design information flows. A scan of the prior literature indicates that backward citations (citations by the focal patent) are widely used as an indicator of knowledge or information flows. These uses include, among others, (a) knowledge spillovers across geographically co-located firms (Almeida & Kogut, 1999; Jaffe, Trajtenberg, & Henderson, 1993); (b) knowledge flows across countries (Hu & Jaffe, 2003); (c) knowledge flows from public R&D to private firms (Jaffe & Lerner, 2001); and (d) knowledge similarity between firms (Mowery, Oxley, & Silverman, 1998). The closest counterpart to our usage of patent citations to capture design dependencies is Fleming and Sorenson (2001), who construct measures of technological interdependence using patent data.

Nevertheless, there are two significant criticisms regarding the use of backward citations as a measure of information flows. First, backward citations are a noisy measure of information flows in that citations may be added to a patent by either the inventor or the patent examiner. Examiner-added citations may not reflect information flows but, rather, the scope or importance of the cited patent (Hall et al., 2005). Second, backward citations capture only a subset of knowledge flows: those that result in a patentable invention. Recent work has sought to enhance our understanding of these limitations. In a survey of inventors, Jaffe, Trajtenberg, and Fogarty (2000) asked questions about the firms' patents and their relationship to a cited patent as well as to a placebo (non-cited but similar) patent. While almost half of the cited patents were judged to have had no knowledge flow impact, there was a statistically significant difference in impact between the cited and placebo patents. Thus, patent citations, while noisy, seem to contain systematic information about knowledge flows (Hall et al., 2005).

Mapping the aggregate pattern of citations from patents belonging to one component to patents belonging to other components allows us to create a version of a DSM, as shown in Fig. 1. Nevertheless, we recognize that the DSM we generate is limited in scope. Our mapping of


dependencies using patent citations is intended to capture the flow of information in the design process only. Without further empirical validation, it is unlikely to be useful in understanding functional dependencies between components of the PC.

As noted previously, we identified a subset of all patents having application in the PC industry and assigned them to one of the four components. We then used citation data to map the interdependence between the component class of the citing patent and the component classes of all cited patents. The raw data used to construct the interdependence measures are drawn from the pattern of patent citations across the components. The data are structured as follows. We construct 20 4 x 4 matrices (as there are four component technologies and 20 years of firm data). We let i index rows, j index columns, and t index time (one 4 x 4 matrix per year t). Each cell, in each of the t matrices, indicates the count of citations from row-component patents issued in year t (based on patent application date) to column-component patents (applied for, and issued, in all previous years). We denote this count of cross-component citations with the variable Pijt. This cross-citation count can be read as the number of citations from component technology i to component technology j in year t. We disaggregate the effect of intensity of dependence into two dimensions: above-diagonal dependence and below-diagonal dependence. An example calculation of each of the measures, corresponding to focal component 1 (microprocessors) in the year 1990, is presented in Table 1. Each entry in the table is a count of citations from the row component to the column component. The table is read as follows: in 1990, there were 1,138 citations from component 2 (memory) to component 1 (microprocessor), thus enabling us to quantify the dependence of component 2 on component 1. Each of these elements is discussed below.

Table 1. Illustration of IB and IA Variables Construction.

                    1 Microproc.   2 Memory   3 Display Adp.   4 Hard Drive
  1 Microproc.          11,466        1,259           108             139
  2 Memory               1,138        3,390            56              86
  3 Display Adp.           129           75         2,027              21
  4 Hard Drive              46           81             1           5,684

  Interdependence below (IB):
  IB(1, 1990) = (1,138 + 129 + 46) / (1,138 + 129 + 46 + 3,390 + 75 + 81 + 56 + 2,027 + 1 + 86 + 21 + 5,684) = 0.1031
  Interdependence above (IA):
  IA(1, 1990) = (1,259 + 108 + 139) / (11,466 + 1,259 + 108 + 139) = 0.1161

  Note: Example for focal component, k = 1, year = 1990.


Intensity Below Diagonal. The intensity of below-diagonal interdependence variable seeks to capture the degree to which changes in the focal component affect other components. In order to capture this effect, we measure the percentage of citations from other components to the focal component in each year from 1979 to 1998. We construct the variable IBikt. This variable is the intensity of below-diagonal interdependence, where i indexes the citing component, k indexes the focal component (which in this case is the component being cited), and t indexes the year. The numerator is the total number of citations by the non-focal components to the focal component k. The denominator is the total number of citations by the non-focal components i to all components j. This variable is calculated for all focal components k (1 ≤ k ≤ 4) in all years t. Thus, this variable measures the fraction of citations by components other than the focal component k to the focal component k's patents in year t:

$$IB_{kt} = \frac{\sum_{i=1,\; i \neq k}^{4} P_{ikt}}{\sum_{i=1,\; i \neq k}^{4} \sum_{j=1}^{4} P_{ijt}} \qquad (1)$$

Looking at the example in Table 1 (the focal component is the microprocessor), we summed the count of citations from components 2–4 to component 1 (rows 2–4 in column 1) and divided by the total number of citations by components 2–4 (rows 2–4 in columns 1–4).

Intensity Above Diagonal. Similarly, the intensity of above-diagonal interdependence variable seeks to capture the degree to which changes in other components affect the focal component. We construct the variable IAkjt. The numerator is the number of focal component k citations to the other components j (i.e., excluding self-citations). The denominator is the total number of citations by the focal component (i.e., including self-citations). Thus, this variable measures the fraction of focal component (k) citations to the three other components in year t:

$$IA_{kt} = \frac{\sum_{j=1,\; j \neq k}^{4} P_{kjt}}{\sum_{j=1}^{4} P_{kjt}} \qquad (2)$$
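As a concreteness check (our sketch, not part of the original chapter), Eqs. (1) and (2) can be computed directly from the citation-count matrix of Table 1; the values reproduce the 0.1031 and 0.1161 reported there:

```python
# Cross-component citation counts P[i][j] for 1990, from Table 1:
# rows = citing component, columns = cited component
# (1 microprocessor, 2 memory, 3 display adaptor, 4 hard drive).
P = [
    [11466, 1259, 108, 139],
    [1138, 3390, 56, 86],
    [129, 75, 2027, 21],
    [46, 81, 1, 5684],
]

def ib(P, k):
    """Eq. (1): share of non-focal components' citations going to focal k."""
    to_focal = sum(P[i][k] for i in range(4) if i != k)
    by_nonfocal = sum(P[i][j] for i in range(4) if i != k for j in range(4))
    return to_focal / by_nonfocal

def ia(P, k):
    """Eq. (2): share of focal component k's citations going to other components."""
    to_others = sum(P[k][j] for j in range(4) if j != k)
    return to_others / sum(P[k])

k = 0  # focal component 1 (microprocessor), zero-indexed
print(round(ib(P, k), 4))  # 0.1031, matching Table 1
print(round(ia(P, k), 4))  # 0.1161, matching Table 1
```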


Controls
Our model includes three innovation production function inputs, which are measured in constant 1998 dollars and entered in logs (Hausman et al., 1984; Pakes & Schankerman, 1984). First, labor is an important driver of productivity differences between firms and also serves to capture size differences. We would ideally like to measure labor as the annual payroll of the firms in the sample. However, payroll data are not well recorded in Compustat. Following convention (Griliches, 1979), we use the number of employees instead. Second, R&D capital is an important input to the innovation equation. We follow Hall and Ziedonis (2000) in treating R&D as a lagged flow variable, normalized by the number of employees to mitigate the confounding effects of size and R&D. Third, capital is the contribution of physical assets to innovation productivity and is measured as net assets per unit of labor.

Theory suggests a number of additional controls. First, there is a large literature that examines the role of heterogeneous capabilities in the invention process (e.g., Henderson & Cockburn, 1994; Iansiti, 1995). One important source of heterogeneity is differential innovation productivity resulting from learning by doing (Arrow, 1962b). We include in the model a variable labeled experience, measured as each firm's cumulative patent stock (lagged one year). We also include an indicator variable, entrant, coded 1 for firms under six years old, to account for unobserved differences between entrants and incumbents. Second, the literature suggests that diversification may affect firm R&D inputs by increasing appropriability (Arrow, 1962a; Nelson, 1959a), although the empirical evidence is mixed (see Cohen, 1995, for a review). We created two variables to control for diversification. We construct a variable, breadth, as the Herfindahl index of patents (sum of squared shares) across the firm's non-core component classes. This captures the dispersion in a firm's R&D efforts across component industries. In addition, a small subset of firms participates in multiple components of the computer industry. We identified these firms and created the indicator variable diversified. Third, there is a significant literature building on Arrow (1962a) suggesting that capital constraints limit the ability of firms to conduct R&D (see Goodacre & Tonks, 1995, for a recent review). We account for capital constraints with a measure of lagged net income. Fourth, we control for systematic differences in component industry patenting propensity, computed as the number of patents per dollar of R&D in year t in component k. Finally, prior research suggests that productivity may vary over time and react to exogenous temporal shocks that affect all firms uniformly. We include a set of year dummies to control for these effects.
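To illustrate one of these constructions (a sketch under our own assumptions about the data layout; the `patents` table and its column names are hypothetical), the breadth control, the Herfindahl index of a firm's patent shares across its non-core component classes, might be computed as follows:

```python
import pandas as pd

# Hypothetical patent-level data: one row per granted patent.
patents = pd.DataFrame({
    "firm": ["A", "A", "A", "A", "B", "B"],
    "component": ["memory", "display", "display", "hard_drive",
                  "memory", "memory"],
})
core = {"A": "memory", "B": "memory"}  # each firm's core component market

def breadth(firm_patents: pd.DataFrame, core_component: str) -> float:
    """Herfindahl index (sum of squared shares) of a firm's patents
    across its non-core component classes."""
    non_core = firm_patents[firm_patents["component"] != core_component]
    if non_core.empty:
        return 0.0
    shares = non_core["component"].value_counts(normalize=True)
    return float((shares ** 2).sum())

for firm, grp in patents.groupby("firm"):
    print(firm, round(breadth(grp, core[firm]), 3))
# A: non-core shares are display 2/3 and hard_drive 1/3 -> 0.556
# B: no non-core patents -> 0.0
```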


Model Specification and Estimation
The dependent variable in the innovation production function is a non-negative integer count. Estimating the equation using OLS would violate the assumptions of homoscedasticity and normality of the error structure (Greene, 1997). The usual approach to estimating such models is to assume that the process generating the count data follows a Poisson distribution; that is, the expected count of patents, conditional on the independent variables, can be approximated by a Poisson distribution. However, our data exhibit over-dispersion (Cameron & Trivedi, 1998). In general, over-dispersion results from either unobserved heterogeneity in the patent count data or excess zero counts. The negative binomial model, which allows the parameterization of between-firm unobserved heterogeneity, is useful if the over-dispersion arises from unobserved heterogeneity (Cameron & Trivedi, 1998). An examination of our data suggests that they contain more zero observations than would be expected under a negative binomial specification. An alternative is the zero-inflated negative binomial (ZINB) model, which allows for over-dispersion while modeling the process generating zero counts with a logit model. We estimated the probability of a zero count using ln(lagged firm total patents) as the predictor. The main motivation for using past patenting to discriminate between zero and non-zero counts is the presumption that firms are likely to be consistent with their past behavior. This is a reasonable conjecture given that the semiconductor industry, of which the PC is a constituent, is characterized by significant cumulativeness and path dependence in R&D activity (Cohen et al., 2000). We conducted the Vuong test of the ZINB model versus the negative binomial model, which rejected the negative binomial model in favor of the ZINB model. Thus, in the analyses presented in the remainder of the paper, we estimate the innovation production function assuming that the patent count follows the ZINB process. All models are estimated by maximum likelihood and employ the Huber-White sandwich estimator, yielding robust standard errors for the regression coefficients.
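For readers who want to see the shape of such a specification, a minimal sketch using the statsmodels zero-inflated negative binomial estimator follows (our illustration on synthetic data, not the authors' code or data; all variable names are hypothetical, and the paper's full control set and robust standard errors are omitted for brevity):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Synthetic stand-in for the firm-year panel (all names hypothetical).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ib": rng.uniform(0, 0.3, n),        # below-diagonal interdependence
    "ia": rng.uniform(0, 0.3, n),        # above-diagonal interdependence
    "ln_labor": rng.normal(0, 1, n),     # lagged log employees
    "ln_lag_total_patents": rng.normal(0, 1, n),
})
mu = np.exp(0.5 + 3 * df["ib"] - 3 * df["ia"] + 0.3 * df["ln_labor"])
df["patents"] = rng.poisson(mu) * rng.binomial(1, 0.8, n)  # excess zeros

y = df["patents"]
X = sm.add_constant(df[["ib", "ia", "ln_labor"]])
# Zero-inflation (logit) equation uses ln(lagged total patents), as in the paper.
X_infl = sm.add_constant(df[["ln_lag_total_patents"]])

zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X_infl, inflation="logit")
res = zinb.fit(maxiter=200, disp=0)
print(res.summary())
```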

RESULTS

The descriptive statistics and correlation matrix of the variables used in the regression estimation are presented in Table 2. Table 3 presents the results of the regression estimation. Model (1) presents a baseline model that includes

[Table 2. Descriptive Statistics and Correlation Table, 1979–1998, N = 893. The original table reports means, standard deviations, and pairwise correlations for: (1) patents; (2) citations; (3) ln(assets PP) (t-1); (4) ln(labor) (t-1); (5) ln(R&D exp PP) (t-1); (6) interdependence below (IB); (7) interdependence above (IA); (8) experience; (9) diversified; (10) patent propensity; (11) breadth (t-1); (12) net income (t-1); (13) entrant. Note: Correlations greater than 0.10 are significant at the 0.05 level or less. The full table is not reproduced here.]

[Table 3. Model Estimation, 1979–1998. The original table reports six models of the innovation production function (N = 893): zero-inflated negative binomial estimates of patent counts (Models 1–4), zero-inflated Poisson estimates (Model 5), and ZINB estimates of citation-weighted counts (Model 6). Covariates include ln(assets PP) (t-1), ln(labor) (t-1), ln(R&D exp PP) (t-1), interdependence below (IB), interdependence above (IA), IA/IB, experience, diversified, patent propensity, breadth (t-1), net income (t-1), entrant, year dummies, and a constant; log-likelihoods and Wald chi-squared statistics are reported. Note: Robust standard errors in parentheses; significance marked at the 10%, 5%, and 1% levels. The full coefficient table is not reproduced here.]


the standard innovation production function variables and the year dummies. In line with prior research, capital and labor are positive and significant (Hall & Ziedonis, 2000). R&D is also positive, though non-significant.6

Model (2) includes additional controls. First, we controlled for time-varying firm heterogeneity using two variables. Firm R&D experience is positive and significant, while the indicator variable for new entrants is, as expected, negative and significant. Second, we controlled for diversification using two variables. Breadth, a measure of supply-side diversification within the PC industry, is positive and significant, while the indicator variable diversified, which controls for product-market diversification, is negative and significant. The contrast between these two diversification results is consistent with the literature's conclusion that the effect of diversification on innovation varies widely; in this case, the type of diversification seems to drive the differing results. Third, we controlled for the availability of internal finances. While positive as expected, this variable was non-significant. Fourth, we controlled for component industry differences in patenting propensity. This variable was positive and significant.

Model (3) includes the two main variables of interest: interdependence below the diagonal (IB) and interdependence above the diagonal (IA). An examination of the likelihood function and the Wald test statistic indicates that the model including IB and IA offers significantly better fit than Model (2). The coefficient on the below-diagonal intensity of interdependence, IB, was positive and significant (t = 6.8). This is consistent with hypothesis 1, that an increase in the intensity of below-diagonal interdependence extends the influence of focal component firms on the other component firms in the industry. The increased influence increases the useful output of R&D, which in turn positively affects firm innovation productivity. The coefficient on the intensity of above-diagonal interdependence, IA, was negative and significant (t = 4.3). This is consistent with hypothesis 2, that an increase in the intensity of above-diagonal interdependence reduces the choice or latitude of focal component firms to pursue autonomous component R&D agendas. The joint test of IB and IA was statistically significant (Wald χ2 = 52.76; p < 0.00001). In addition, the Vuong test (z = 7.85; p < 0.0001) rejects the negative binomial model in favor of the zero-inflated model. Thus, these results provide strong support for both hypotheses advanced in the paper.


Robustness
Our hypotheses reflect the main effect of changes in below- and above-diagonal dependencies on firm innovation productivity. However, the relative symmetry or asymmetry of below- and above-diagonal interdependencies can also alter the constraint on firms and thus their innovation productivity. Ignoring such effects can result in biased estimates of the main variables due to an omitted variable problem. To account for this, we included the ratio of IA to IB (IA/IB).7 Model (4) presents the results. IA/IB was not statistically significant, though all three variables (IB, IA, and IA/IB) were jointly significant. The coefficients on the main variables, IB and IA, were unchanged. This provides additional confidence that the main effects of changes in IB and IA on firm innovation productivity are robust and independent.

We conducted other robustness checks using a variety of alternative model specifications. The results are broadly consistent. First, we estimated the zero-inflated Poisson (ZIP) model. Model (5) in Table 3 presents the results of this analysis. The strongly significant coefficients with the predicted signs support our conclusion that below-diagonal interdependence (IB) has a positive effect on innovation productivity, while above-diagonal interdependence (IA) has a negative effect. As noted in the specification discussion, the Vuong test of ZIP versus Poisson rejects the Poisson model (z = 7.96; Pr > z = 0.0000), and the likelihood-ratio test of ZIP versus ZINB rejects the ZIP model (χ̄2 = 3,304.28; Pr ≥ χ̄2 = 0.0000). We also estimated alternative models using citation-weighted patent counts as the dependent variable, to account for heterogeneity in patent quality across firms. The results, presented in Model (6), are qualitatively similar.

In addition, we conducted several specification tests, the results of which are not reported due to space constraints. First, we employed an alternative method of accounting for firm heterogeneity: a generalized negative binomial model that allows us to fit a shape parameter to the unobserved firm heterogeneity. The results were robust in this alternative estimation as well. Second, we examined robustness to alternative sampling criteria. For example, in order to ensure that the emergence of the new optical disk component in the late 1990s did not affect the results, we estimated our model on various temporal subsets of the data, in particular a model excluding all data post-1995 (when the optical disk first emerged). The results are robust to all such temporal resampling. Third, we tested alternative functional forms of the control variables, including the use of levels rather than


logs, and alternative lag structures. The results were qualitatively similar. In sum, the analyses with the alternative specifications taken together provide confidence that the main conclusions of the paper are largely robust to a variety of specification and estimation concerns.

DISCUSSION

Innovation is challenging, not only because of the inherent variability in innovation outcomes but also because the quality of the final product is only one of many factors driving the returns to innovating. Early research on this latter issue focused on industry structure (see Cohen & Levin, 1998, for a review) and buyer-supplier relationships (Teece, 1986). Recently, research has begun to consider the entire ecosystem of interdependent firms that deliver value to end users (Adner & Kapoor, 2010; Jacobides et al., 2006) and, in some settings, the implications of alternative platform strategies that shape the development of the ecosystem (Boudreau, 2010; Gawer & Henderson, 2007; Zhu & Iansiti, 2012).

Against this backdrop, we examined one dimension of a product ecosystem: the product architecture that underlies the technical relationships between product components (Ulrich & Eppinger, 1999). In a product ecosystem, these components may be dispersed across a set of interdependent firms, as is indeed the case in the PC industry. We focused on technological interdependencies in the form of design information flows between components and examined their impact on innovation performance. We argued that the existence of dependencies between components of a complex product alters the uncertainty associated with firms' component innovation efforts. An increase in below-diagonal dependencies, that is, the flow of design information from the focal component to other components, extends the influence of the focal component firms on the rest of the ecosystem and enhances their ability to capitalize on their component innovation efforts. In contrast, an increase in above-diagonal dependencies, that is, the flow of design information and constraints from other components to the focal component, reduces the range of profitable innovation choices of the focal component firms. An empirical test of the two hypotheses provides strong support.

Our analysis focuses attention on the dynamics of innovation in product ecosystems. The innovation productivity literature has tended to treat technological interdependencies and product architectures as


somewhat exogenous to the innovation decisions of individual firms. This is entirely plausible given that it is difficult for any single firm to unilaterally alter product architectures. However, when component interdependencies are non-trivial and firms are specialized in component industries, the returns to component innovation are not independent of the rate and direction of change in product architectures. The payoffs to the uncertain innovation bets that firms make are contingent on the changing nature of the interdependencies between components in the ecosystem. Product architectures, and changes in them, limit the choice of profitable component innovation projects and also affect the ex post useful output of innovation efforts. Thus, our contribution to the innovation productivity literature is to assert that, consistent with emerging work on product ecosystems (Adner, 2012), other component industries within the complex product system systematically affect the returns to firm component innovation efforts.

A common theme in the literature on the impact of product architectures on firm performance is the survival threat of architectural change (Christensen, 1997; Henderson & Clark, 1990). Broadly speaking, the literature points to two mechanisms that explain enhanced failure risk. One strain has emphasized the intra-firm coordination problem that follows architectural change (Henderson & Clark, 1990). Firms develop routines and communication patterns that resemble the flow of design information in the product architecture. Thus, any change in the architecture disrupts the usefulness of the coordination routines, and the process of realigning routines to the new architecture increases the survival risk to firms. A second strain has emphasized the incentive problem that accompanies architectural change (Christensen, 1997). Firms usually ignore new architectural improvements that do not address the needs of their existing market or that address a small and relatively undeveloped market. Firms do not have the incentive to pursue and accommodate the new architecture, since their focus is on the existing ecosystem and meeting its needs. This impedes attention to the potential enlargement of, and addition of new members to, the ecosystem.

The results of our study complement the above research in that we point to a third mechanism by which architectures affect firm performance. Firms make innovation investments based on beliefs about future states of the world. In a stylized sense, from the standpoint of a focal component firm, the future states of the world can preserve the status quo of component interdependencies, increase or decrease below-diagonal dependencies, or increase or decrease above-diagonal dependencies. If firms assume the status quo in making their innovation decisions, then changes in component dependencies can alter the ex post payoffs to their R&D efforts. Thus, the


uncertainty associated with changing component dependencies introduces a contingency that affects firm returns from component innovation efforts. A potentially useful avenue for future work is to examine the relative importance of the three mechanisms to firm survival in the context of product ecosystems.

Recent research has begun to challenge this pessimistic perspective on the implications of architectural change in product ecosystems. Some have questioned the premise that firm organization structures mirror existing product architectures, and have thus challenged the foundation of the inertia hypothesis (e.g., Cabigiosu & Camuffo, 2012; Hoetker, 2006). Others have pointed to examples of architectural change that reinforced rather than undermined performance (see, e.g., Jacobides et al., 2006). We shed light on this more optimistic view of architectural change from a different perspective and argue that architectural change is not a unitary phenomenon. We distinguish between two types of architectural change: changes in component boundaries and changes in the dependencies between components. We focus on the latter (by examining an empirical context where the component boundaries did not change over the course of our study). Our results suggest that the performance implications of architectural change depend crucially on the component within which a firm resides and the nature of its shared dependencies with other components. Thus, strategic action on the part of firms to alter the nature of technological dependencies between firms in a complex product system may serve not only to mitigate the negative implications of changes in architecture but indeed to engender changes designed to shift the architecture in a way that may significantly enhance own-firm innovation performance.

Several chapters in this volume usefully complement our own study. Although we treat component dependencies as exogenous to firm choices, Brusoni and Prencipe (2013) offer a framework to help understand the menu of firm strategies to alter component boundaries and the dependencies between components. West and Wood (2013) highlight the role of asymmetric interdependencies in the rise and subsequent decline of the Symbian operating system in mobile phones; their chapter provides a rich case study of our abstract theory in operation and extends the implications to market performance. Finally, Mäkinen and Dedehayir (2013) explore the dynamics of component changes in the PC gaming ecosystem and show how differential rates of change in one component of the ecosystem affect other components (see also Ethiraj, 2007, for a closely related study).

This study, while making some useful contributions, is not without its limitations. First, while product ecosystems are defined by the entire set of


relevant firms, we have focused only on the set of rivals and complementors at one stage of the value chain. Among the actors in the ecosystem that we ignore, perhaps the most prominent is software. In particular, we were unable to include the operating system as a component, since patenting in software is only a recent phenomenon. To the extent that the operating system was an important driver of design dependencies in the PC, our analysis suffers from an omitted variable problem. Moreover, the PC industry is quite unique in that most firms are largely focused within one component product market. This limits the power of any one firm to enforce changes in design dependence on other firms. Thus, because of the character of the empirical context of this study, it is not clear that our results will generalize to other complex product systems in broader ecosystems where component manufacturers are under the oversight of assemblers (e.g., automobiles, aircraft).

Second, our work is predicated on the relationship between patent citations and knowledge flows. While patent citations have been used broadly in the literature, there are still residual concerns (Alcacer, Gittelman, & Sampat, 2009). For instance, to what extent do examiner-added citations introduce noise in our measures versus bias in the measures? To the extent that the distribution of examiner-added citations does not reflect actual knowledge flows, noise predominates and our empirical tests will underestimate the significance of our variables. In addition, work explicating issues of design structure has argued that not all dependencies are equally critical (Browning, 2001). To the extent that our interdependence measures implicitly include a weighting by the number of citations (and we control for heterogeneity in patenting propensity across components), we accommodate such concerns. However, we do not account for heterogeneity in the value of each citation.

Finally, to our knowledge, we are the first to suggest a relationship between patent citations and design dependencies. Our contribution is a potential advance in the effort toward understanding dependencies between components in complex products. However, the difficult process of empirical validation remains to be done. Recognizing this limitation, we have explicitly limited the scope of our claim regarding patent citations and design dependencies. We argue that our patent citation measures capture only design information flows, and we thus eschew any claim that we are capturing the myriad other types of design dependencies (Sosa et al., 2003). Moreover, to the extent that the type of design dependency we capture is distinct in its impact from other design dependencies, our measure may indeed enhance the understanding of interdependence in complex products.


In sum, research has just begun to examine the potentially broad implications of product ecosystems for the efficacy of firms' innovation efforts. Our research in this domain points to the importance of product architectures, highlighting the possibility that architectural change is not always performance-decreasing and, indeed, that strategic action by firms to direct changes in architecture may be an important source of performance gains.

NOTES

1. Note that component dependencies can affect both the technical feasibility and the economic feasibility of R&D projects. Component dependencies that affect technical feasibility can constrain R&D project choice without affecting economic feasibility. For simplicity, we subsume technical feasibility within economic feasibility. Theoretical models of R&D accomplish this by parameterizing costs at infinity for technically infeasible projects.
2. There are, of course, exceptions to this generalization, and we account for this in the empirical specification.
3. We limited the sample that we collect along two dimensions: (a) temporally – the PC industry got its start around 1981, but the Corptech directory is only available starting in 1985; left censoring would occur for firms that entered and exited the PC industry pre-1985; (b) geographically – since we needed financial data, our sample comprises US firms only. To the extent that the unobserved firms have different innovation productivity than observed firms, the generalizability of the results is limited, but this does not affect the integrity of the estimation itself.
4. We ultimately exclude both IBM and Texas Instruments from the analysis, as they participate significantly in many different component markets. However, our results do not change even if we include them.
5. This includes all patents in the four components, not just those granted to the firms in our sample.
6. Modest collinearity between R&D and assets causes the coefficient on R&D to be non-significant. A model without capital results in a positive and significant coefficient on R&D.
7. We did not introduce a multiplicative term, since IA and IB are both proportions bounded in the [0, 1] interval.

ACKNOWLEDGMENTS

We thank Gautam Ahuja and Anita McGahan for useful comments on an earlier version of this paper. We also thank executives at IBM, Intel, Texas Instruments, and the staff at CHI Research and the Office of Technology and Forecasting for help with mapping patent classes to component industries. Responsibility for all remaining errors and omissions is our own.

REFERENCES

Abernathy, W. J., & Utterback, J. (1978). Patterns of industrial innovation. Technology Review, 80, 40–47.
Adner, R. (2012). The wide lens: A new strategy for innovation. New York: Portfolio Penguin.
Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.
Ahuja, G., & Katila, R. (2001). Technological acquisitions and the innovation performance of acquiring firms: A longitudinal study. Strategic Management Journal, 22(3), 197–220.
Ahuja, G., Lampert, C. M., & Tandon, V. (2008). Moving beyond Schumpeter: Management research on the determinants of technological innovation. Academy of Management Annals, 2, 1–98.
Alcacer, J., Gittelman, M., & Sampat, B. (2009). Applicant and examiner citations in US patents: An overview and analysis. Research Policy, 38(2), 415–427.
Alexander, C. (1964). Notes on the synthesis of form. Cambridge: Harvard University Press.
Almeida, P., & Kogut, B. (1999). Localization of knowledge and the mobility of engineers in regional networks. Management Science, 45(7), 905–917.
Arrow, K. (1962a). Economic welfare and the allocation of resources for invention. In R. Nelson (Ed.), The rate and direction of inventive activity (pp. 609–625). Princeton, NJ: Princeton University Press.
Arrow, K. J. (1962b). The economic implications of learning by doing. The Review of Economic Studies, 29(3), 155–173.
Baldwin, C. Y., & Clark, K. B. (2000). Design rules: The power of modularity. Cambridge, MA: The MIT Press.
Berndt, E. R., Griliches, Z., & Rappaport, N. J. (1995). Econometric estimates of price indexes for personal computers in the 1990s. Journal of Econometrics, 68(1), 243–268.
Boudreau, K. (2010). Open platform strategies and innovation: Granting access vs. devolving control. Management Science, 56(10), 1849–1872.
Browning, T. R. (1998). Integrative mechanisms for multiteam integration: Findings from five case studies. Systems Engineering, 1, 95–112.
Browning, T. R. (1999). Designing system development projects for organizational integration. Systems Engineering, 2, 217–225.
Browning, T. R. (2001). Applying the design structure matrix to system decomposition and integration problems: A review and new directions. IEEE Transactions on Engineering Management, 48(3), 292–306.
Brusoni, S., & Prencipe, A. (2013). The organization of innovation in ecosystems: Problem framing, problem solving, and patterns of coupling. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems (Vol. 30). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing.
Cabigiosu, A., & Camuffo, A. (2012). Beyond the "mirroring" hypothesis: Product modularity and interorganizational relations in the air conditioning industry. Organization Science, 23(3), 686–703.
Cameron, A. C., & Trivedi, P. K. (1998). Regression analysis of count data. Cambridge, UK: Cambridge University Press.
Christensen, C. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business School Press.
Clark, K. B. (1985). The interaction of design hierarchies and market concepts in technological evolution. Research Policy, 14(5), 235–251.
Clark, K. B., & Fujimoto, T. (1991). Product development performance. Boston, MA: Harvard Business School Press.
Clark, K. B., & Wheelwright, S. C. (1993). Managing new product and process development. New York: Free Press.
Cohen, W. M. (1995). Empirical studies of innovative activity. In P. Stoneman (Ed.), Handbook of the economics of innovation and technological change (pp. 182–264). Cambridge, MA: Blackwell.
Cohen, W. M., & Levin, R. C. (1998). Empirical studies of innovation and market structure. In R. Schmalensee & R. Willig (Eds.), Handbook of industrial organization (pp. 1060–1107). North Holland, the Netherlands: Elsevier Science.
Cohen, W. M., Nelson, R. R., & Walsh, J. (2000). Protecting their intellectual assets: Appropriability conditions and why U.S. manufacturing firms patent (or not). Working Paper. National Bureau of Economic Research, Boston, MA.
Cusumano, M. A., & Selby, R. W. (1998). Microsoft secrets: How the world's most powerful software company creates technology, shapes markets, & manages people. New York, NY: Simon & Schuster.
Dasgupta, P., & Stiglitz, J. E. (1980). Industrial structure and the nature of innovative activity. Economic Journal, 90(358), 266–293.
David, P. A. (1975). Learning by doing and tariff protection: A reconsideration of the case of the ante-bellum United States cotton textile industry. In P. A. David (Ed.), Technical choice, innovation & economic growth (pp. 95–173). Cambridge: Cambridge University Press.
Dixit, A. K., & Pindyck, R. S. (1994). Investment under uncertainty. Princeton, NJ: Princeton University Press.
Eppinger, S. D. (2001). Innovation at the speed of information. Harvard Business Review, 79, 149–158.
Eppinger, S. D., Whitney, D. E., Smith, R. P., & Gebala, D. A. (1994). A model-based method for organizing tasks in product development. Research in Engineering Design, 6, 1–13.
Ethiraj, S. K. (2007). Allocation of inventive effort in complex product systems. Strategic Management Journal, 28(6), 563–584.
Ethiraj, S. K., & Levinthal, D. A. (2004). Modularity and innovation in complex systems. Management Science, 50(2), 159–173.
Fleming, L., & Sorenson, O. (2001). Technology as a complex adaptive system: Evidence from patent data. Research Policy, 30, 1019–1039.
Garud, R., & Kumaraswamy, A. (1995). Technological and organizational designs for realizing economies of substitution. Strategic Management Journal, 16(Special Issue), 93–109.
Garud, R., Kumaraswamy, A., & Langlois, R. (2001). Managing in the modular age: Architectures, networks and organizations. London: Blackwell.
Gawer, A., & Henderson, R. (2007). Platform owner entry and innovation in complementary markets: Evidence from Intel. Journal of Economics & Management Strategy, 16(1), 1–34.
Goodacre, A., & Tonks, I. (1995). Finance and technological change. In P. Stoneman (Ed.), Handbook of the economics of innovation and technological change (pp. 298–341). Cambridge, MA: Blackwell.
Greene, W. H. (1997). Econometric analysis (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Griliches, Z. (1979). Issues in assessing the contribution of research and development to productivity growth. Bell Journal of Economics, 10(1), 92–116.
Hall, B., Jaffe, A. B., & Trajtenberg, M. (2005). Market value of patent citations. RAND Journal of Economics, 36(1), 16–38.
Hall, B., & Ziedonis, R. (2000). The patent paradox revisited: An empirical study of patenting in the U.S. semiconductor industry, 1979–1995. RAND Journal of Economics, 32(1), 101–128.
Hausman, J., Hall, B., & Griliches, Z. (1984). Econometric models for count data with an application to the patents-R&D relationship. Econometrica, 52(4), 909–938.
Henderson, R., & Cockburn, I. (1994). Measuring competence? Exploring firm effects in pharmaceutical research. Strategic Management Journal, 15(Special Issue), 63–84.
Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly, 35(1), 9–30.
Henderson, R. M., & Cockburn, I. (1996). Scale, scope, and spillovers: The determinants of research productivity in drug discovery. RAND Journal of Economics, 27(1), 32–59.
Hoetker, G. (2006). Do modular products lead to modular organizations? Strategic Management Journal, 27(6), 501–518.
Hu, A. G. Z., & Jaffe, A. B. (2003). Patent citations and international knowledge flows: The cases of Korea and Taiwan. International Journal of Industrial Organization, 21(6), 849–880.
Iansiti, M. (1995). Technology integration: Managing technological evolution in a complex environment. Research Policy, 24(4), 521–542.
Jacobides, M. G., Knudsen, T., & Augier, M. (2006). Benefiting from innovation: Value creation, value appropriation and the role of industry architectures. Research Policy, 35(8), 1200–1221.
Jaffe, A. B., & Lerner, J. (2001). Reinventing public R&D: Patent policy and the commercialization of national laboratory technologies. RAND Journal of Economics, 32(1), 167–198.
Jaffe, A. B., Trajtenberg, M., & Fogarty, M. S. (2000). Knowledge spillovers and patent citations: Evidence from a survey of inventors. American Economic Review, Papers and Proceedings, 90(5), 215–218.
Jaffe, A. B., Trajtenberg, M., & Henderson, R. (1993). Geographic localization of knowledge spillovers as evidenced by patent citations. The Quarterly Journal of Economics, 108(3), 577–598.
Kamien, M. I., & Schwartz, N. L. (1970). Market structure, elasticity of demand, and incentive to invent. Journal of Law & Economics, 13, 241–252.
Knott, A. M., & Posen, H. E. (2009). Firm R&D behavior and evolving technology in established industries. Organization Science, 20(2), 352–367.
Lanjouw, J. O., & Schankerman, M. (2004). Patent quality and research productivity: Measuring innovation with multiple indicators. Economic Journal, 114(495), 441–465.
Levin, R. C., Klevorick, A. K., Nelson, R. R., & Winter, S. G. (1987). Appropriating the returns from industrial research and development. Brookings Papers on Economic Activity, 3, 783–833.
Mäkinen, S. J., & Dedehayir, O. (2013). Business ecosystems evolution: An ecosystem clockspeed perspective. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems (Vol. 30). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing.
Mansfield, E. (1968). Industrial research and technological innovation: An econometric analysis. New York: Norton.
Matlack, C. (2004). Airbus' megaplane has a weight problem. Business Week, p. 63.
Mowery, D. C., Oxley, J. E., & Silverman, B. S. (1998). Technological overlap and interfirm cooperation: Implications for the resource-based view of the firm. Research Policy, 27(5), 507–523.
Narin, F., & Breitzman, A. (1995). Inventive productivity. Research Policy, 24(4), 507–519.
Nelson, R. R. (1959a). The economics of invention: A survey of the literature. Journal of Business, 32(2), 101–127.
Nelson, R. R. (1959b). The simple economics of basic scientific research. Journal of Political Economy, 67(3), 297–306.
Nelson, R. R., & Winter, S. (1982). An evolutionary theory of economic change. Cambridge, MA: Belknap Press.
Pahl, G., & Beitz, W. (1991). Engineering design: A systematic approach. New York: Springer-Verlag.
Pakes, A., & Schankerman, M. (1984). The rate of obsolescence of patents, research gestation lags, and the private rate of return to research resources. In Z. Griliches (Ed.), R&D, patents, and productivity (pp. 73–88). Chicago, IL: University of Chicago Press.
Pimmler, T. U., & Eppinger, S. D. (1994). Integration analysis of product decompositions. Proceedings of the ASME design theory and methodology conference (DE-Vol. 68, pp. 343–351). Minneapolis, MN.
Rosenberg, N. (1969). The direction of technological change: Inducement mechanisms and focusing devices. Economic Development and Cultural Change, 18, 1–24.
Rosenberg, N. (1973). The direction of technological change: A reply. Economic Development and Cultural Change, 21, 356–357.
Rosenberg, N. (1974). Science, invention, and economic growth. Economic Journal, 84(333), 90–108.
Rumelt, R. P. (1986). Strategy, structure, and economic performance. Boston, MA: Harvard Business School Press.
Sanchez, R., & Mahoney, J. T. (1996). Modularity, flexibility, and knowledge management in product and organization design. Strategic Management Journal, 17(Winter), 63–76.
Schmookler, J. (1966). Invention and economic growth. Cambridge, MA: Harvard University Press.
Shane, S. (2001). Technological opportunities and new firm creation. Management Science, 47(2), 205–220.
Simon, H. A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106, 467–482.
Sosa, M. E., Eppinger, S. D., & Rowles, C. M. (2003). Identifying modular and integrative systems and their impact on design team interactions. ASME Journal of Mechanical Design, 125, 240–252.
Steward, D. V. (1981). The design structure system: A method for managing the design of complex systems. IEEE Transactions on Engineering Management, 28, 71–74.
Stuart, T. (2000). Interorganizational alliances and the performance of firms: A study of growth and innovation rates in a high-technology industry. Strategic Management Journal, 21, 791–811.
Teece, D. J. (1986). Profiting from technological innovation: Implications for integration, collaboration, licensing and public policy. Research Policy, 15(6), 285–305.
Thirtle, C. G., & Ruttan, V. W. (1987). The role of demand and supply in the generation and diffusion of technical change. New York: Harwood Academic Publishers.
Thompson, J. D. (1967). Organizations in action. New York: McGraw-Hill.
Trajtenberg, M. (1990). A penny for your quotes: Patent citations and the value of innovations. RAND Journal of Economics, 21(1), 172–187.
Tushman, M. L., & Anderson, P. (1986). Technological discontinuities and organizational environments. Administrative Science Quarterly, 31(3), 439–465.
Ulrich, K. T. (1995). The role of product architecture in the manufacturing firm. Research Policy, 24, 419–440.
Ulrich, K. T., & Eppinger, S. D. (1999). Product design and development. New York: McGraw-Hill.
West, J., & Wood, D. (2013). Evolving an open ecosystem: The rise and fall of the Symbian platform. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems (Vol. 30). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing.
Williamson, O. E. (1965). Innovation and market structure. Journal of Political Economy, 73, 67–73.
Zhu, F., & Iansiti, M. (2012). Entry into platform-based markets. Strategic Management Journal, 33(1), 88–106.

THE ORGANIZATION OF INNOVATION IN ECOSYSTEMS: PROBLEM FRAMING, PROBLEM SOLVING, AND PATTERNS OF COUPLING

Stefano Brusoni and Andrea Prencipe

ABSTRACT

This chapter adopts a problem-solving perspective to analyze the competitive dynamics of innovation ecosystems. We argue that features such as uncertainty, complexity, and ambiguity entail different knowledge requirements, which explain the varying abilities of focal firms to coordinate the ecosystem and benefit from the activities of their suppliers, complementors, and users. We develop an analytical framework to interpret various instances of coupling patterns and identify four archetypical types of innovation ecosystems.

Keywords: Innovation ecosystems; patterns of coupling; innovation


INTRODUCTION

Innovation is an increasingly distributed and collective process involving a variety of actors such as users, suppliers, universities, and competitors (Freeman & Soete, 1997). Its distributed and collective features stem from the information and knowledge requirements related to new products and services – or their integration into solutions (Davies, 2004) – which require the combination of scientific, engineering, and service knowledge (Hobday, Davies, & Prencipe, 2005; Patel & Pavitt, 1997; Pavitt, 1998; Rosenberg, 1982). Scholars have proposed the construct of the innovation ecosystem to capture the cross-industry and cross-country complexity of the innovation process (see, e.g., Adner, 2006; Iansiti & Levien, 2004; Moore, 1993). Similar to biological ecosystems, innovation ecosystems are inhabited by a variety of different species of actors who share their fate (Moore, 1993). Species operate cooperatively and competitively to create value – that is, to develop and deliver new products – and to capture value – that is, to satisfy customer needs (Adner & Kapoor, 2010). Innovation constitutes the locus around which species coevolve and acts as a catalyst for the ecosystem's evolution (Moore, 1993). Borrowing from biology, Iansiti and Levien (2004) identify specific business ecosystem features (productivity, robustness, and niche creation) that characterize an ecosystem's status and the roles of its actors (e.g., keystone species). Productivity is the innovation ecosystem's ability to consistently transform new technologies into improved and new products. Robustness measures the ability of an innovation ecosystem to survive disruptions caused by unforeseen technological or socioeconomic change; an ecosystem should also exhibit variety to support a diversity of species. Niche creation relates to the ecosystem's capacity to apply emerging technologies across new product domains. Keystone organizations lead the creation of new niches, enhancing the performance of niche organizations and eventually increasing overall ecosystem robustness.

Although extant research provides some valuable insights into the specific features of innovation ecosystems, we know relatively little about the microprocesses underpinning the functioning of different ecosystems. Focusing on the effects of the external challenge of innovation on the ecosystem's focal organization, Adner and Kapoor (2010) discuss how the location of innovation (i.e., upstream vs. downstream) and its content (i.e., component innovation vs. complementary innovation) affect the focal firm's competitive position in the ecosystem. They find that the upstream location of these challenges has a positive effect on the focal organization's performance, but a downstream location does not. Adner and Kapoor (2010) emphasize the interplay between technical and behavioral uncertainty in balancing value creation and value appropriation.

In this chapter, we build on this intuition and focus on the nature and strength of the relationships between the focal firm and other actors within the ecosystem. We draw on the concept of coupling (Orton & Weick, 1990) to explain the microprocesses that enable (or not) firms to solve problems of varying degrees of difficulty in innovation ecosystems populated by functionally related though often loosely coordinated actors. The main argument is that the features of the problem to be framed and solved entail different knowledge requirements and specific patterns of coupling among the ecosystem's actors. We analyze the focal organization, which orchestrates the dynamics of the interactions among the complementary and supporting actors in the ecosystem.

We make two assumptions. First, we conceive of focal organizations as problem-framing as well as problem-solving institutions, which arrange and generate knowledge to make sense of and solve problems in order to promote innovation and capture its value (e.g., Brown & Eisenhardt, 1997; Dosi & Marengo, 1994; Nickerson & Zenger, 2004; Wolter & Veloso, 2008). Innovation ecosystems provide the context for the focal organization's decisions about which problems to solve, how to select and deploy the relevant capabilities, and how to implement the solution(s) identified. Second, we conceive of innovation ecosystems as complex entities comprised of a group of related actors whose combined knowledge and capabilities evolve (e.g., Christensen & Rosenbloom, 1995; Christensen, 1997; Christensen, Verlinden, & Westerman, 2002). Innovation ecosystems differ in the coupling among their actors, that is, in the strength and intensity of the linkages among them. Different types of coupling entail different approaches to knowledge generation.

The contribution of this chapter is twofold. First, we argue that the emergence of different types of ecosystems is related to the knowledge requirements imposed on the focal organization by the type of problem requiring resolution. We identify ambiguity, complexity, and uncertainty as key features of the problems that the focal organization must solve in order to carry out its strategic and operational tasks. Second, we show that uncertainty, complexity, and ambiguity give rise to increasingly difficult problems, which affect the degree of coupling among the ecosystem's actors. The degree of coupling, in turn, shapes the structure of the ecosystem that develops around the focal organization.

The chapter is organized as follows. The next section discusses the concept of coupling, followed by a section analyzing the effects of ambiguity, complexity, and uncertainty on the manifestation of different types of coupling in ecosystems. The fourth section discusses the notion of coupling as process and structure in innovation ecosystems. The final section provides some conclusions.

Distinctiveness, Responsiveness, and Organizational Coupling

Organizational Coupling: Distinctiveness and Responsiveness

Orton and Weick (1990) define coupling, that is, the strength and intensity of the linkages among the nodes in a network, in terms of the two constructs of responsiveness and distinctiveness. Distinctiveness relates to the property of the components of a system to retain their identities. For instance, if we understand a symphony orchestra as a system, the violinist retains her identity, and her distinctiveness within the system allows her to play her musical part. Responsiveness relates to the property of the system components to maintain a degree of consistency with each other, even in a changing environment. The musicians in a symphony orchestra are responsive: they play together to translate the written music into a sound performance. The interaction between distinctiveness and responsiveness determines the type of coupling. If responsiveness prevails, the system is described as "tightly coupled"; if distinctiveness prevails, the system is described as "decoupled." If distinctiveness and responsiveness are quite well balanced, the system is described as "loosely coupled" (Orton & Weick, 1990, p. 205) (Table 1).

Table 1. The Definition of Loose Coupling.

                            Distinctiveness
                            No                       Yes
Responsiveness    No        Non-coupled system       Decoupled system
                  Yes       Tightly coupled system   Loosely coupled system
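The 2x2 typology in Table 1 reduces to a simple classification over the two constructs. A minimal sketch (our own illustration; the function name is hypothetical):

```python
def coupling_type(distinctiveness: bool, responsiveness: bool) -> str:
    """Classify a system following Orton and Weick's (1990) 2x2 in Table 1."""
    if responsiveness and distinctiveness:
        return "loosely coupled system"   # the two constructs are balanced
    if responsiveness:
        return "tightly coupled system"   # responsiveness prevails
    if distinctiveness:
        return "decoupled system"         # distinctiveness prevails
    return "non-coupled system"

# The four cells of Table 1:
assert coupling_type(distinctiveness=True, responsiveness=True) == "loosely coupled system"
assert coupling_type(distinctiveness=False, responsiveness=True) == "tightly coupled system"
assert coupling_type(distinctiveness=True, responsiveness=False) == "decoupled system"
assert coupling_type(distinctiveness=False, responsiveness=False) == "non-coupled system"
```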

We assume that systems must be responsive in order to react to environmental stimuli while simultaneously promoting changes in the environment. Responsiveness may be automatic or enacted. Responsiveness is achieved automatically when the interfaces across organizational units and subunits are given, and changes are permitted only within a predefined range. Building on Henderson and Clark's (1990) seminal paper on architectural innovation, research on modularity proposed that product-level interfaces also determine organizational-level interfaces (Baldwin & Clark, 2000; Jacobides, 2005; Jacobides & Winter, 2005; Langlois & Robertson, 1992; Sanchez & Mahoney, 1996). Responsiveness is achieved simply by respecting the design rules that define which functions are performed within each module, the performance range of the module, and what information each module should receive in order to function and transmit to enable other modules to function. This type of responsiveness assumes that the external environment varies within a predictable range, which enables the focal organization to retain a degree of control and coordination with minimal direct intervention (Brusoni, Jacobides, & Prencipe, 2009; Brusoni & Prencipe, 2006, 2009, 2011; Ceci & Prencipe, 2008).

However, too much reliance on the automatic organizational responsiveness enabled by standardized interfaces can be dangerous for incumbents (Brusoni, Prencipe, & Pavitt, 2001; Henderson & Clark, 1990; Wolter & Veloso, 2008). When the extent of technological change becomes difficult to control or to predict, reliance on the coordination enabled by standardized interfaces becomes a source of problems (e.g., Brusoni et al., 2001; Nickerson & Zenger, 2004). Within innovation ecosystems, firms must be able to develop heuristics, routines, and procedures to change their internal and external patterns of communication, command, and information filters. We describe this as enacted responsiveness.

Responsiveness is enacted when the interfaces among units are not precisely defined ex ante and, therefore, require active management to ensure consistency. Such interfaces are characterized by a dynamics of change in which variations in one unit's interface entail unpredictable and/or ambiguous variations in one or more other units' interfaces. These dynamics, therefore, require active direction to maintain some degree of consistency across organizational units and with the external environment. Enacted responsiveness requires managerial attention (Ocasio, 1997) and explicit managerial authority (Radner, 1992). To return to the example of the symphony orchestra, the violinist can be responsive to the other violinists by playing the notes written on the musical score, which in some cases is enough, or can adapt the performance by following the directions of the conductor or soloist, which is especially necessary for tricky musical passages or interpretations. The former is automatic responsiveness; the latter is enacted responsiveness, which is the focus of this chapter.
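The contrast between automatic and enacted responsiveness can be made concrete. In the sketch below, a fixed interface plays the role of a design rule in Baldwin and Clark's (2000) sense: as long as every module respects it, the system stays consistent with no managerial intervention. The interface and component names are our own hypothetical illustration, not drawn from the chapter.

```python
from abc import ABC, abstractmethod

class StorageModule(ABC):
    """A 'design rule': a fixed interface every storage component must
    respect. (Hypothetical interface, for illustration only.)"""

    @abstractmethod
    def read(self, block: int) -> bytes: ...

    @abstractmethod
    def write(self, block: int, data: bytes) -> None: ...

class DiskDrive(StorageModule):
    def __init__(self):
        self.blocks = {}
    def read(self, block):
        return self.blocks.get(block, b"")
    def write(self, block, data):
        self.blocks[block] = data

class SolidStateDrive(StorageModule):
    # Internally very different design choices are permitted; the system
    # remains consistent (automatic responsiveness) as long as the
    # interface contract holds.
    def __init__(self):
        self.pages = {}
    def read(self, block):
        return self.pages.get(block, b"")
    def write(self, block, data):
        self.pages[block] = data

def assemble_pc(storage: StorageModule):
    storage.write(0, b"boot record")
    return storage.read(0)

# Either module can be swapped in without coordinating with the assembler;
# enacted responsiveness is needed only when this contract breaks down.
print(assemble_pc(DiskDrive()))
print(assemble_pc(SolidStateDrive()))
```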

Enacted responsiveness is necessary whenever units have to cope with interdependent and unpredictable (as opposed to modular and predictable) interfaces (Christensen et al., 2002, p. 959). We argue that achieving this type of responsiveness is particularly difficult in ecosystems where the focal firms cannot directly control or influence the activities of their complementors (e.g., Adner & Kapoor, 2010).

The Determinants of Distinctiveness and Responsiveness and their Impact on Ecosystems

This section discusses why and how the interplay between distinctiveness and responsiveness determines the emergence of specific types of coupling in innovation ecosystems. We focus on focal organizations that rely on innovation ecosystem units to frame and solve problems. We discuss ambiguity, complexity, and uncertainty, and how they define different and decreasingly difficult problems for the focal organization.

Ambiguity. Ambiguity refers to a lack of meaning in a situation and the resulting inability to interpret or make sense of it (Weick, 1969). Zack (2001) argues that ambiguity relates to the state in which the decision maker is not able to formulate questions. Ambiguity is a central concept in the literature on organizational decision-making. According to March (1978, 1987), ambiguity is a pervasive feature of organizations. March (1987) lamented that in decision theory, management science, and microeconomics, the study of organizational choice focused on optimization over given preferences and alternatives, while little effort was devoted to generating alternatives or developing choices. Nevertheless, situations characterized by ambiguity are common in the literature. Ambiguous situations are unclear and vague in the sense that the decision maker does not have the interpretive knowledge required to frame and define them. The knowledge requirements in ambiguous situations relate to sense-making efforts, the generation of alternatives, and the discovery of possible solution paths, rather than to demonstration, exploitation, and problem solving. The challenges stem from the absence of an accepted meaning or the presence of multiple contradictory meanings. Research has identified the difficulty inherent in situations characterized by an absence of criteria against which to rank options, or even to define the options available. This has resulted in a large body of evidence on the processes used by top and middle management to define the available options rather than to make a decision.

For example, Henderson and Clark (1990) highlight the danger that radical or architectural innovations pose for incumbents, which try to accommodate new problems within old problem-solving patterns embodied in tacit and rigid organizational structures and communication channels. Garud and Rappa (1994) document the slow emergence of new technological frames in the case of cochlear implants. Tripsas and Gavetti (2000) analyze the problems faced by Polaroid in trying to adapt its business model to the digital imaging paradigm: the technological capabilities existed, but top managers failed to reframe the problem and to move away from established ways of generating revenue (by selling film) to a model where cameras did not require film. Prencipe (2001) discusses the slow and path-dependent process that enabled assembling firms to nurture in-house technological capabilities in order to be prepared for unexpected changes in the value chain. Acha (2004) suggests that technology framing directly influences senior managers' interpretive systems, which, in turn, are fundamental for directing the organization's current position and future strategic opportunities. Kaplan (2008) investigates how the interaction among different frames generates the context for competition within organizations struggling to make sense of a new technology and the business opportunities it affords.

Ambiguous problems have implications for the interplay between distinctiveness and responsiveness and, therefore, affect organizational coupling. The examples presented above suggest that in order to solve ambiguous problems, organizations must create contexts that allow individuals to talk, interact, disagree, and try out their ideas. The main organizational challenge is to provide a socially cohesive yet intellectually diverse set of individuals with an interactive, face-to-face, information-rich environment. Hence, ambiguity is at odds with the idea of specialized, compartmentalized systems. Kogut and Zander (1992, 1996) argue that organizations, as opposed to markets, provide a "context of discourse and coordination among individuals with disparate expertise" (1996, p. 503), which, in turn, constitutes the basis for the development of a distinct organizational identity necessary for the creation of socially shared knowledge. This is a major responsibility for focal organizations embedded in broad ecosystems, and it is highlighted by Adner and Kapoor's (2010) discussion of complementors. Adner and Kapoor (2010) suggest that the location of innovation along the value chain can have different effects on the focal organization. Upstream innovations, introduced by or with the involvement of suppliers, are less problematic for the focal organization than downstream innovations introduced by complementors. The latter are not linked directly to the focal firm, may not share the same knowledge base, and have no supplier relationship. However, their innovative activities are critical to the viability of the focal organization, which has no control over, clear understanding of, or responsiveness to them.

The problem is rendered ambiguous because the focal organization lacks the cognitive overlap required to understand the nature of the problem and to frame it. Diversity may be a source of novel ideas, but problem framing requires some reconciliation among the distinctive elements of different strategies and ideas. Garud and Rappa (1994) and Kaplan (2008) discuss the time-consuming efforts involved in the emergence of a common frame of reference required to solve a complex strategic problem. Cacciatori and Jacobides (2005) and Cacciatori (2012) argue that the non-emergence of common cost accounting categories can prevent an integrated organization from developing functional routines for the design, building, and maintenance of complex civil engineering structures. We thus propose:

Proposition 1. Increased ambiguity will require decreased distinctiveness and increased responsiveness.

In our view, ambiguity requires tightly coupled innovation ecosystems comprised of strong focal firms able to coordinate the activities of suppliers and complementors. For example, Rolls-Royce Aircraft Engines, one of the world's largest aircraft engine manufacturers, relied on a tightly coupled structure to develop a path-breaking technology (the wide chord fan blade). Rolls-Royce's in-house capabilities, accumulated over time, provided a base for its learning processes and promoted the framing and enactment of a radical breakthrough. The firm's frame of reference, and especially the underlying learning processes that led to changes in it, were crucial for the continuous introduction of innovative technological solutions (Lazonick & Prencipe, 2005; Prencipe, 2001). Pisano (2006) argues that in the biopharmaceutical industry, the seeming lack of progress in pushing new biotechnology-based drugs through the pipeline can to some extent be explained by a lack of knowledge integration processes in the industry. The concentration on specialization and the division of labor does not provide the right context for the integration of specialized knowledge to produce novel products; thus biopharmaceuticals, as an organizational system, is characterized by excessive distinctiveness. In the tire manufacturing sector, Brusoni and Prencipe (2006, 2011) show that the successful introduction of radical innovation is facilitated by the presence of a strong, tight, and cohesive group of specialists working together to redefine the technology and its associated business model.

In vaccinology, the recent evolution of anti-HIV vaccines demonstrates increasing integration in response to ambiguity. The treatment or prevention of HIV-AIDS is surrounded by huge scientific ambiguity.

The basic science is still weak and, although a number of therapies are available, scientists trying to develop a vaccine face a very difficult problem: there is no animal model available, nobody has survived the disease,1 and the virus mutates rapidly. In response to these challenges, the International AIDS Vaccine Initiative (IAVI) was established in 1997 to advance the search for a vaccine. IAVI was originally organized according to the traditional nongovernmental organization (NGO) model: raise money, redistribute it to researchers, evaluate results, and raise more money. According to IAVI sources, this model was not successful, and there was a rapid shift toward a more proactive, integrating role to shift the research agenda, define the boundaries of the field, fill the gaps in ongoing research, and channel funds toward neglected subfields (e.g., Chataway, Brusoni, Cacciatori, Hanlin, & Orsenigo, 2007). The shift from a brokering to a proactive "integrating" role was a response to the need to coordinate developments from specialized subfields in a context of extreme scientific and policy ambiguity.

Complexity. Simon (1969, p. 195) defines a complex system as:

one made up of a large number of parts that interact in a non-deterministic way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts, and the laws of their interaction, it is not a trivial matter to infer the properties of the whole.

Simon (1976) proposed methods to characterize system complexity: cardinality, that is, the number of components comprising the system; interdependence among components; decidability; and information content in relation to the variety of the system components.2 Compared to ambiguous problems, complex problems demand specific and different knowledge. In the case of a complex problem, the firm knows the problem's overall structure but is unable to compute an "optimal" solution. The knowledge requirements of complexity arise from the limits of computational ability to deal with a combination of: (a) the plurality and variety of elements; (b) the interactions among system elements at various levels (i.e., among subsystems, and between the whole system and the subsystems); and (c) the unpredictability of the interactions among components. Work on modularity discusses the implementation of an organizational and product design strategy aimed at reducing complexity by introducing standard interfaces. Monteverde (1995), in the context of the semiconductor industry, discusses unstructured technical dialogue and its impact on vertical integration decisions in the design of digital memory and analogue products.

The unpredictability of the interactions between product and process design led to an unstructured technical dialogue that called for vertical integration. Sanchez and Mahoney (1996) discuss the implications of modularity in various industries to illustrate a product design strategy aimed at simplifying interaction patterns across ranges of known components. Hoetker, Swaminathan, and Mitchell (2007) establish a link between product modularity and the ability of firms to reconfigure their structures, and Tiwana (2008) argues that technological modularity can substitute for explicit managerial control. Cabigiosu and Camuffo (2012) claim that a stable product architecture reduces information-sharing needs. The relationship between product modularity and organizational structure is difficult to disentangle.

Complexity is a multifaceted concept that involves several dimensions. Cardinality, interdependence, and unpredictability have distinct effects on organizational coupling: an increase in cardinality induces increased distinctiveness, while higher levels of interdependence and unpredictability require increased responsiveness. An increased number of components in a product requires dedicated design and production resources. This applies also when the bodies of knowledge underlying the components multiply, which requires organizations to continuously monitor, absorb, and eventually integrate ever more technologies (e.g., Brusoni et al., 2001). We suggest that an increase in distinctiveness is the better response to this situation. Other things being equal, a decentralized organizational system allows a broader and deeper understanding of the individual scientific and technological disciplines. For example, when the breadth of the knowledge base underpinning a certain product market increases, parallel design becomes more viable and more efficient than sequential design (Clark & Fujimoto, 1991). Reliance on a wide network of specialized suppliers allows organizations in complex environments to reconfigure more easily by exploiting the advantages provided by technological modularity (Lorenzoni & Baden-Fuller, 1995; Lorenzoni & Lipparini, 1999; Hoetker, 2006). Individual organizational units can work independently of, and concurrently with, other units, and new organizations can be added to the overall system when new and potentially useful components and functionalities are introduced. Tiwana (2008) argues that increasing interfirm modularity (i.e., distinctiveness) lowers the need for interfirm knowledge sharing. Thus, with reference to complexity as cardinality (i.e., the number of elements of the problem), we propose that:

Proposition 2a. Increased (decreased) problem complexity will require increased (decreased) distinctiveness.
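To see how the two facets of complexity come apart, the following sketch computes cardinality and interdependence density from a component dependency matrix of the kind used in design structure matrix (DSM) research (e.g., Steward, 1981; Browning, 2001). The binary representation and the toy system are our own illustrative assumptions, not drawn from the chapter.

```python
def complexity_facets(dsm):
    """dsm[i][j] = 1 if component i depends on component j, else 0.

    Returns (cardinality, interdependence density): cardinality is the
    number of components; density is the share of possible directed
    dependencies that are actually present.
    """
    n = len(dsm)  # cardinality: number of parts in the system
    links = sum(dsm[i][j] for i in range(n) for j in range(n) if i != j)
    density = links / (n * (n - 1)) if n > 1 else 0.0
    return n, density

# Toy three-component system: CPU and memory depend on each other,
# storage depends on nothing.
dsm = [
    [0, 1, 0],  # cpu -> memory
    [1, 0, 0],  # memory -> cpu
    [0, 0, 0],  # storage
]
print(complexity_facets(dsm))  # (3, 0.333...)
```

Under Propositions 2a and 2b, the first value pulls the ecosystem toward distinctiveness, while a rising second value, especially when the dependencies are unpredictable, pulls it toward enacted responsiveness.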


If interdependence and unpredictability increase, coordination becomes more difficult. High levels of interdependence mean that a change to the design of one component will require changes in the design of other components in the system. Unpredictability may add to the complexity of the problem. In modular products such as those Baldwin and Clark (2000) and Langlois and Robertson (1992) describe, the components are numerous and highly interdependent; however, their interdependencies are predictable, and disintegration is a viable organizational strategy. Unpredictable interdependencies render system behavior similarly unpredictable. Thus, the activities of each organizational unit dedicated to component design require active coordination. Responsiveness must increase in order to accomplish objectives, and "unstructured technical dialogue" (Monteverde, 1995) becomes necessary.

The findings from micro-level studies of modular technologies show no consensus that modularity leads to a sharp decrease in the reliance on hierarchies. For example, Tiwana (2008) notes that, in software projects, modularity leads to a reduction in process control but an increase in outcome control. Hoetker's (2006) study of the evolution of the notebook industry finds no evidence of a positive relationship between technological modularity and decreasing hierarchical control or coupling. Staudenmayer, Tripsas, and Tucci (2005) find that firms involved in new software development create specific organizational processes to manage product interdependencies. For instance, members of software development teams interact more frequently – on an ad hoc basis – to achieve effective adjustment to ongoing changes. In the case of Lucent, Staudenmayer et al. (2005, p. 314) describe how:

If two different teams or individuals caused a system "crash" upon reintegration they communicated directly with each other to negotiate a solution – all dependencies were to be pursued person to person and not via proxy or third party … Interestingly, Lucent found that most interdependencies were only sporadic and not consistent.

Staudenmayer et al. (2005) also find that the software house Red Hat set up longer-term development projects not connected with a specific product release in order to deal with the problem of coordinating systemic product upgrades, and that the firm's partners developed ex ante solutions to deal with major product changes:

The Red Labs teams typically had 5–10 internal developers, 10–20 active external developers representing different firms' (and even individuals') interests, and 100–200 less active external members whose primary roles were tracking what was happening, providing limited input, and developing small pieces of the products/features. (Staudenmayer et al., 2005, p. 314)


Similarly, the case discussed by Li and Garnsey (2013) in this volume describes complexity related to broad patterns of interdependencies among organizations engaged in developing tools to treat tuberculosis. Li and Garnsey (2013) describe an ecosystem-level innovation that relies on a mature technology (the drug) but required the development of a new set of relationships among innovators, distributors, and complementors to facilitate diagnosis and guarantee affordability in base-of-the-pyramid markets. They focus on the progressive development of a new supporting infrastructure – partly technological (i.e., the GeneXpert platform), partly institutional (i.e., the Foundation for Innovative New Diagnostics (FIND)) – that enabled the focal organization to reorganize the value chain around its core technology through a series of interventions (or enacted responsiveness), to realign incentives and capabilities, and to enable the distribution of an innovative medical technology in some of the world's poorest countries. Thus, with reference to complexity as interdependencies, we propose:

Proposition 2b. Increased (decreased) complexity requires increased (decreased) enacted responsiveness.

Taken together, Propositions 2a and 2b point to innovation ecosystems characterized by both distinctiveness and responsiveness, that is, loosely coupled systems. This conceptualization of loose coupling is consistent with Orton and Weick's (1990) definition, but differs from the understanding in the modularity literature. Sanchez and Mahoney (1996) describe modular networks as loosely coupled systems; in our view, these would better be described as decoupled systems characterized by automatic (i.e., embedded in the technical interface) responsiveness. We need a finer-grained definition of loose coupling to allow analytical clarity and to explain some apparent paradoxes in analyses of the relations between different types of innovation and vertical (dis)integration (e.g., Wolter & Veloso, 2008).

Uncertainty. Uncertainty refers to the lack of information required to predict the state of a context (Kahneman, Slovic, & Tversky, 1982; Leblebici & Salancik, 1981; Tosi, Aldag, & Storey, 1973). Uncertain problems are well-framed problems whose solutions require extensive searches for information across alternative sources. Uncertain situations can be identified in a variety of empirical settings. For example, computational chemistry firms working on behalf of pharmaceutical companies identify compounds whose properties have been predefined by the pharmaceutical company's research team. The advantage of small, specialized providers of computational chemistry services is their ability to search extensive databases of basic components and their building blocks, while the "target" is defined by the client based on the strengths of its research team (Orsenigo, Pammolli, & Riccaboni, 2001). Similarly, in Open Source Software (OSS) environments, crowds of independent programmers contribute continuous improvements to source code. Nevertheless, and despite Raymond's (1999) famous claim that "given enough eyeballs, all bugs are shallow," successful OSS environments, such as Linux, rely on well-established frames defined by well-guarded kernels that are unlikely to be modified. Thus, not all bugs are "equal": the environment is uncertain because complex operating systems require continuous maintenance and updating, but the overall frame is unambiguous and is defined and guarded by a restricted group of programmers to prevent forking (Narduzzo & Rossi, 2005).

The knowledge requirements entailed by uncertainty derive from the lack of information about future states of the environment. According to information theory, information and uncertainty are inversely related (Newell & Simon, 1972; Shannon & Weaver, 1949). Information theorists argued that data, such as cues and message units, have the potential to reduce the level of uncertainty. However, relevant cues must be sifted out of noise in order to become information that can be used to reduce uncertainty. Decision makers need to identify organizational solutions for acquiring data and transforming them into information that improves the accuracy of their predictions (Gifford, Bobbit, & Slocum, 1979). The absence of complexity and ambiguity allows decision makers to rely on a well-established and familiar model in defining the problem to be solved. Although information gathering may be difficult and time-consuming, it does not require the development of a novel problem frame.

In uncertain situations, reliance on a wide and diverse network of information providers may be advantageous. Hansen (1999, p. 82), studying a sample of new product development projects in microelectronics, shows that weak interunit ties helped project teams search for useful knowledge in other subunits but impeded the transfer of complex knowledge. Granovetter (1973) highlights that the strength of weak ties lies in the ability to search for and acquire diverse information. Similarly, Sturgeon (2002, p. 455) argues that the emergence of modular networks in the international personal computer (PC) and microelectronics industry is related to the possibility of exploiting "distinct breaks in the value chain [since they] tend to form at points where information regarding product specifications can be highly formal … linkages are achieved by the transfer of codified information."
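The inverse relation between information and uncertainty invoked here can be stated precisely in Shannon's terms: uncertainty is the entropy of the decision maker's probability distribution over future states, and informative cues reduce it. A minimal sketch (our own numerical illustration, not from the chapter):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the decision maker's uncertainty about
    which future state of the environment will occur."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely market states: maximum uncertainty (2 bits).
prior = [0.25, 0.25, 0.25, 0.25]
# After gathering cues, two states are ruled out: uncertainty falls to 1 bit.
posterior = [0.5, 0.5, 0.0, 0.0]

print(entropy(prior), entropy(posterior))       # 2.0 1.0
print(entropy(prior) - entropy(posterior))      # information gained: 1.0 bit
```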


On this basis, we argue that uncertain problems can be managed more appropriately by systems that retain a high level of distinctiveness. Responsiveness (at least the enacted type we discuss here) is not required, because problem solving builds on an accepted problem frame. Coordination can be achieved by respecting the interfaces established among the known subproblems. According to Orton and Weick (1990), a decoupled system is a system in which distinctiveness prevails. We argue that decoupled systems are more appropriate for gathering new data as long as the data can be transferred efficiently along established channels. In decoupled systems, individual units become independent sensors that monitor different (in terms of their nature), distant (in terms of disciplines, e.g., electronics and fluid dynamics), and dispersed (i.e., geographically) sources with the aim of acquiring new data to be transformed into information. Therefore, uncertainty paves the way for the emergence of decoupled organizational systems, since resolving uncertain situations requires the collection of more, and newer, data.

A good example is the recent shift from closed to open innovation in response to increasing uncertainty in consumer goods markets (Chesbrough, 2003). For instance, Procter & Gamble shifted from a tightly coupled approach to R&D – characterized by global in-house research facilities that hired and retained the world's best talents in scientific and technological disciplines – to a decoupled approach it called "Connect and Develop." Connect and Develop relies on external connections with individuals, suppliers, and private and public research laboratories that experiment with new products and technologies in new markets (Houston & Sakkab, 2006). Procter & Gamble has thus overcome the not-invented-here syndrome and revised its research aim, which is now to identify promising new product and process ideas throughout the world and to exploit its development, manufacturing, and marketing capabilities.

Decoupling is important not only for the acquisition of information but also for decision making. Decoupled systems are likely better suited to increasing variety through the bottom-up process of information gathering and to promoting top-down decisions. Bourgeois and Eisenhardt (1988) found that in the high-velocity environment of the microcomputer industry in the late 1970s and early 1980s, the firms that achieved better performance were characterized by greater delegation of decision-making; in the best-performing firms, the power distribution within top management teams leaned toward functional executives. Similarly, Nickerson and Zenger (2004) argue that markets, and decentralized forms of organizing more generally, are better able than authority-based organizations to solve decomposable problems through incremental (directional) search processes.


On this basis, we propose:

Proposition 3. Increased uncertainty requires increased distinctiveness and decreased responsiveness.

Uncertainty, therefore, leads to decoupled innovation ecosystems. Although the emergence of decoupling is related to the presence of uncertain problems with known structures, there may be other factors, with organizational implications, that affect the structure of the innovation ecosystem. There are at least three main barriers to the emergence of a decoupled, decentralized ecosystem: the appropriability regime, the munificence of the ecosystem, and safety.

Appropriability refers to the ability of the organization to capture the financial returns from the introduction of innovation (Teece, 1986). Appropriability regimes vary in strength depending on the effectiveness of the prevailing intellectual property rights protection system, for example, the patent and trademark system. A strong appropriability regime enables distinctiveness to prevail, allowing some relaxation of the symbiotic relationships among firms in the same ecosystem. For example, Arora and Gambardella (1994) argue that a strong intellectual property rights regime based on patents, copyright, or the embodiment of knowledge in saleable artifacts or services can lead to the emergence of efficient technology markets in which specialized firms compete with incumbents on the basis of superior skill in specific steps of the innovation process. Arora and Gambardella argue that the growing numbers of contract research organizations in biopharmaceuticals, independent software vendors, specialized engineering firms, and "fabless" semiconductor companies are all examples of the increasing specialization and division of labor enabled by knowledge codification and by abstract categories, which make it easier to define and enforce ownership of the intellectual property in knowledge modules. This reduces the benefits of vertical integration even in innovative contexts. In this context, specialization, and thus distinctiveness, is prioritized over responsiveness.

On the other hand, a weak appropriability regime calls for more responsiveness in the ecosystem. Issues of access and control require management through secrecy and close interaction, that is, a higher degree of coupling, in order to capture the financial revenues from innovation. A higher degree of coupling along the supply chain, for instance, enables organizations to control each step of the innovation process. In particular, weak appropriability increases ambiguity: it renders, for example, the connection between investments in innovation and profitability unclear and fuzzy. This discussion relates, of course, to the very important distinction between technological and behavioral uncertainty (Adner & Kapoor, 2010): the former is informed by the characteristics of the problem, the latter by the no less important appropriability considerations. Hence, a weak appropriability regime is likely to push the ecosystem toward increased coupling (irrespective of the nature of the problem) and to increase the need for responsiveness and tighter forms of coupling.

Tushman and Anderson (1986, p. 445) define munificence as the "extent to which an environment can support growth" and argue that "environments with greater munificence impose fewer constraints on organization" to grow. Following Tushman and Anderson, we argue that munificence, or the generosity of the ecosystem, is conducive to less enacted responsiveness and to decoupled organizational systems. Organizations in environments characterized by greater munificence have more freedom to choose the patterns of growth identified by their distinctive units, and the need for enacted responsiveness is reduced. The resource slack associated with a munificent environment allows greater tolerance of mistakes by the organization. A munificent environment offers the organization higher and safer growth prospects, enabling the exploitation of the advantages of specialization and reducing the risks (e.g., poor appropriability, coexistence of different standards, barriers to entry, etc.).

For example, in the early 1980s, the PC market was growing rapidly. IBM, which entered the race late, relied on a wide network of suppliers for its entry: most components were bought from specialized suppliers (Microsoft and Intel, which provided the operating system and the microprocessors, respectively). This strategy backfired, and IBM realized that it had overestimated the continuing advantage of the IBM brand. Its customers moved away from its OS/2, preferring to buy cheaper and more modular PCs produced by IBM's competitors. This unintentionally created a completely different ecosystem. More recently, the Internet bubble led to massive entry into various segments of the information and communication technology industry: buoyed by skyrocketing stock markets, thousands of small, specialized, Internet-based firms entered the market, and something similar happened in biotechnology. Now, both sectors are experiencing a process of rapid, and brutal, reorganization: there are too many firms that are too small and incapable of delivering new products or services to the market, which does not represent a viable industry structure. Most revenue-generating biotech firms have achieved their success by developing software to support the drug development activities of the old pharmaceutical giants.


Very few biotechnology firms have grown to become fully fledged pharmaceutical companies (Orsenigo et al., 2001; Pisano, 2006). While specialization (and thus distinctiveness) led to a phase of very rapid exploration and learning sustained by an over-optimistic stock market, the development of products and services requires tightly coupled organizations; hence the current trend toward reintegration and enacted responsiveness.

Another source of constraint for decoupled ecosystems is safety. Who will be responsible in the case of a malfunction? If our computers do not work, we can hardly blame Intel. If a jet engine fails, it is often the airline (not the engine manufacturer) that attracts the criticism. Similarly, in the automotive sector, failures become the responsibility of the assembler, irrespective of which supplier manufactured the faulty part. In relation to the nature of the problem, the automotive industry remains tightly coupled, with relatively few entrants able to challenge the big players, in particular because of safety issues (Sako, 2003). This also applies to the aeronautics industry: although characterized by increasingly global supply chains (e.g., Kotha & Srikanth, 2013), it remains tightly coupled in relation to the allocation of responsibility in the case of a malfunction – as exemplified by the recent case of the Dreamliner.

DISCUSSION: MODELS OF COUPLING IN BUSINESS ECOSYSTEMS

Because ambiguity, complexity, and uncertainty represent problems with different knowledge requirements, we suggest that they affect the choices that focal organizations have to make about distinctiveness and responsiveness in the framing and solving of problems within an ecosystem. Distinguishing the effects of ambiguity, complexity, and uncertainty on distinctiveness and responsiveness suggests the need to discuss the emergence of different coupling patterns in innovation ecosystems: ambiguous problems call for tightly coupled ecosystems; complex problems call for loosely coupled ecosystems; and uncertain problems call for decoupled business ecosystems. This distinction complements and extends research on strategy and organizational theory by addressing some empirical and theoretical gaps (e.g., Wolter & Veloso, 2008).

Coupling as Process

The proposed framework also explains the process of evolution of ecosystems from decoupled, through tightly coupled, to loosely coupled structures.


Orton and Weick's (1990) dialectical definition of coupling is inherently processual. In considering the introduction of radical innovations, it is important to distinguish between two phases. The first is a phase of exploration and information gathering, in which the focal organizations rely on a broad network of external, specialized organizations to search and map the territory. The second is directed to the integration of new knowledge into new products to provide a solution to the problem. This phase calls for tight coupling, that is, structures characterized by close cooperation and cohesion that facilitate the development of the common organizational code necessary to perform efficiently the unstructured technical dialogue (Monteverde, 1995) and, therefore, to frame the problem and enact its solution. However, tight coupling is a temporary feature of ecosystems. Once a code is in place and ambiguity is reduced, innovation ecosystems are likely to evolve as loosely coupled structures, and focal firms can reduce their direct interventions in the activities of their suppliers and complementors.

Research on the emergence of technological platforms (Boudreau & Hagiu, 2009; Gawer, 2009) provides new insights into the evolutionary processes of ecosystems and into how focal firms struggle to establish themselves as major bottlenecks in these systems. A platform is meant to provide a context for suppliers, customers, and complementors to interact with no need for explicit coordination by the focal firm, so long as these actors obey certain rules. Platforms enable automatic rather than enacted responsiveness, which reduces the scope of the focal firms' coordination efforts. Once a platform is in place, focal firms can enjoy the benefits of distinctiveness without the need for direct involvement in developing products and coordinating day-to-day activities. Ultimately, innovation activities can be delegated to complementors and users.

Most platforms are rooted in the recent evolution of the microelectronics industry and the emergence of the Internet as a platform enabling online transactions. For example, in the mobile phone sector, focal firms competed to establish their own platforms (West & Mace, 2010). Platforms include a variety of components, such as hardware, software, network infrastructure and, more recently, content, that is, prerecorded entertainment, information such as news and sport, and applications such as games (West & Mace, 2010). Hence, platforms are a coordination tool that enables other ecosystem actors to make choices within a given range. Firms can conceive platforms in different ways. While Nokia, Sony Ericsson, and others relied on disintegrated platforms, that is, they separated the supply of key components from hardware sales, Apple pursued an integrated platform that included the operating system, hardware, built-in applications, and online services, while still enabling suppliers to develop apps and components (West & Mace, 2010).


The chapter by West and Wood (2013) in this volume discusses the performance implications of different design decisions (in relation to the openness vs. closedness of the underlying platform) in the case of operating systems for mobile telephony.

Instances of these processes occur also in sectors characterized by the absence of open or proprietary platforms. Brusoni et al. (2001) describe the technological evolution of aircraft engine control systems and its organizational implications. Since the 1970s, the engine control system has been characterized by a radical shift in its underlying technologies: from hydromechanics to digital electronics. In the 1950s, aircraft engine control systems were based on hydromechanical technologies and were application-specific, that is, a change in the design of the engine required a change in the design of the control system. Ecosystem structures were tightly coupled. When hydromechanical technology reached maturity, engineers modularized the interfaces between the engine and the control system, so that the ecosystem structures evolved toward decoupling. The higher-thrust engines, such as the turbofan, which were being developed during the 1970s, involved a large number of parameters that required accurate measurement and computation: the hydromechanical control systems could not cope. The complexity of the thrust calculations, the high level of accuracy and fast response time required, and the fact that most parameters (turbine temperature, fan speed, altitude) were available in electronic form rendered digital electronics more suitable for the control systems (Prencipe, 2000). The introduction of digital electronics progressively enlarged the role, importance, and functions of the engine control system: the digital control system became the "brain" of the engine. Although digital control systems controlled a higher number of engine components, these interdependencies were governed by the interface software. The software component meant that the digital control systems were not application-specific, and hardware and software modules could be reused in different applications. Following the introduction of digital technologies, the ecosystems of firms engaged in the engine development process evolved into loosely coupled structures with limited direct involvement of the large engine manufacturers (Brusoni et al., 2001).

Considering coupling as a technology-enabled process helps our understanding of how focal firms can overcome the challenges posed by the lack of control over their complementors (Adner & Kapoor, 2010). Platforms, such as iTunes, and architectural rules, such as those embodied in the digital control system, may impose precise boundaries on what complementors can do, without becoming exceedingly normative and without imposing on the focal firm the need for complete control of the system.


An important question remains about the competitive processes underpinning the emergence of platforms and architectures, and about how they challenge the role and capabilities of the focal organizations. The model of "organizational level synchrony" discussed in this volume by Davis (2013) is closely related to the emergence of systems integration capabilities (Brusoni & Prencipe, 2001; Prencipe, 1997), and adds analytical clarity and rigor to a discussion based so far on detailed, but often difficult to compare, case studies.

Coupling as Structure

Wolter and Veloso (2008) developed a model to analyze the evolution of industry scope in response to incremental, modular, architectural, and radical technological change (Henderson & Clark, 1990). Their model predicts that the direction of industry integration will be straightforward for incremental innovation (no change) and for architectural innovation (increased integration). It also predicts that modular and radical innovation will have indeterminate effects on the direction of industry integration, because they reinforce the incentives for both disintegration and integration. Our framework helps to explain such cases. We argue that what appears to be an indeterminate effect (Wolter & Veloso, 2008, p. 597) is the outcome of the dialectical tension between distinctiveness and responsiveness that leads to loosely coupled ecosystem structures. This is consistent with Orton and Weick's (1990, pp. 204–205) quest for a theory that considers "a system that is simultaneously open and closed, indeterminate and rational, spontaneous and deliberate."

Prior studies show that, in order to successfully introduce and implement changes, focal organizations must be equipped with a capability for systems integration that allows organizational and technological leadership of the ecosystem units (Brusoni et al., 2001). To paraphrase Orton and Weick (1990), leadership is essential to identify and govern dispersed sources of change. On the one hand, ecosystems must have the ability to offer timely, cost-effective, and increasingly complex products, which also need to be highly customized (e.g., mass customization in the automotive sector). This contrasts with calls for high levels of specialization, localized adaptation, and efficiency. On the other hand, ecosystems must be able to monitor the technological and competitive landscape to identify possible breakthroughs, develop new architectures, and reorganize supply chains. This requires persistence, knowledge integration capabilities, effectiveness, and adaptability.


Loose coupling emerges as a distinctive ecosystem structure to manage the indeterminacy related to modular and architectural innovations (Wolter & Veloso, 2008). Modular innovations imply complex problems whose resolution requires organizational structures able to generate and exchange knowledge. One of the key advantages of modularity is that tasks can be allocated to distinct organizational units and, as long as the interfaces remain in place, require little or no managerial coordination (Sanchez & Mahoney, 1996; Schilling, 2000; Sturgeon, 2002). However, empirical studies illustrate that outsourcing decisions related to knowledge are more conservative than those related to manufacturing (e.g., Brusoni et al., 2001; Chesbrough & Kusunoki, 2001; Fleming & Sorenson, 2001; Gambardella & Torrisi, 1998; Takeishi, 2002). Batteries and battery technologies are a case in point. Batteries are modular components with well-defined and standardized interfaces with the products they power, but they are also critical to the functioning of those products. For instance, the life and weight of a mobile phone battery have a major impact on portability, a critical feature of the mobile phone. The technological evolution of mobile phones thus depends on the technological development of batteries, making the battery one of the most critical – yet fully modular – components. Nokia has established strategic partnerships with battery producers to enact responsiveness in defining battery specifications, and has created a dedicated in-house laboratory to research the underlying technologies and promote developments in new battery technologies.3 In 2013, the aviation authorities grounded the Boeing 787 Dreamliners belonging to various airlines because of battery malfunctions. Apparently, Boeing's approach to designing, developing, and manufacturing the 787 did not include back-up technological expertise in battery technologies. The modular interface related to batteries led to a well-defined pattern of division of labor between Boeing and its battery suppliers, with the result that their learning trajectories became ossified around the design rules that informed the existing, modular pattern of division of labor – which resulted in the malfunction. There were similar reasons for the aircraft engine gearbox problems described in Brusoni and Prencipe (2011).

Loose coupling also provides an appropriate structure to manage the indeterminacy related to architectural innovations. As architectural innovations do not affect a component's underlying technologies, working with internal suppliers is beneficial because it saves on transaction costs (Wolter & Veloso, 2008).


However, using internal suppliers may result in cognitive and behavioral inertia that reverses the integration benefits, as organizational processes, for example, information channels and filters, become set around old architectures (Henderson & Clark, 1990). In addition, as architectural innovations are infrequent, the additional costs of full vertical integration render it economically unviable. Loose coupling enables established organizations to circumvent suppliers' weak incentives to invest in new technologies. In many cases, the problem is not that suppliers fail to propose new ideas, but rather that systems-integrating organizations fail to understand the advantages of new technical solutions (e.g., Chesbrough & Kusunoki, 2001). Loosely coupled ecosystems, composed of focal organizations equipped with strong R&D functions linked to a dense web of specialized suppliers, are better able to explore the technological space, act upon new ideas, and reduce bureaucracy. In the case of the pharmaceutical industry, Nesta and Saviotti (2005) demonstrate the strategic importance of integrative capabilities in explaining firms' innovativeness and, ultimately, their profitability and survival. In this context, being effective means being able to respond to, or even introduce, changes that match changes in the architecture of artifacts and in the intra- and interorganizational division of labor.

We would also suggest that loose coupling contributes to the discussion of the nature and essential features of hybrid ecosystems, that is, the forms existing between markets and hierarchies. From a loose coupling perspective, hybrid ecosystems are not residuals; they represent a distinctive way of organizing economic activities. Loosely coupled ecosystems exploit the advantages of both distinctiveness and responsiveness to develop and maintain over time the capability to sense and implement changes. Elements of loosely coupled systems are simultaneously subject to spontaneous and independent changes, so that they maintain some degree of indeterminacy. The open-ended dynamics of loose coupling, which may originate in local changes at the organizational unit level, may create opportunities for system-level changes and, therefore, increase the adaptability of the entire ecosystem. On the other hand, this degree of indeterminacy may require order to govern dispersed elements and implement changes (Orton & Weick, 1990).

CONCLUSIONS

This chapter set out to develop an analytical framework for analyzing coupling in ecosystems. The framework proposed is based on the two determinants of coupling: distinctiveness and responsiveness.


Using these features as the foundation of our framework draws on the idea of tight coupling, loose coupling, or decoupling among actors, determined by the interplay between distinctiveness and responsiveness. The manifestations of distinctiveness and responsiveness and, therefore, the types of coupling among actors are influenced by uncertainty, complexity, and ambiguity. Uncertainty, complexity, and ambiguity pose different degrees of difficulty in terms of knowledge requirements, which, in turn, can be matched to and resolved by different types of coupling.

In our view, it is important to adopt a problem-solving perspective in the discussion of ecosystems in order to understand when this concept is useful. For example, under conditions of mere computational uncertainty, the extent of interaction and coordination required of the ecosystem is minimal. Such problems can be usefully approached using the traditional tools of, say, transaction cost economics. Ambiguous problems are relatively rare, but more interesting from an ecosystem viewpoint. Indeed, firms struggle to make sense of problems whose dimensions are unclear, and where it is not clear who holds the relevant capabilities, who needs help, or who should interact. This makes it difficult for firms to achieve competitive advantage. In complex problem situations, there may be a basic problem structure or a basic infrastructure, such as the Internet, but firms may not know how to deal with them. What may be required is a combination of in-house technical development to integrate existing technologies (e.g., the first smartphone), coupled with openness that enables users and complementors to complete the new technology with content and functionalities (e.g., the open interfaces that enabled the development of numerous apps for the smartphone). It is in this context that the notion of ecosystems might be most helpful.

NOTES

1. However, in July 2012, a team of physician-researchers at the Division of Infectious Diseases of Brigham and Women's Hospital, after performing bone marrow transplants in two men with longstanding HIV infections, announced that these two men no longer had detectable HIV in their blood cells (Science Daily, July 26, 2012).
2. As in the case of uncertainty, complexity has a subjective dimension. Simon (1976, p. 508) argues that, alongside the system's structure, this complexity "may also lie in the eye of a beholder of that system."


3. Based on the framework proposed in this chapter, we would argue that Nokia lost its industry leadership to Apple owing to a lack of responsiveness rooted in an overreliance on specific product features – a large product portfolio catering to all market niches, as opposed to Apple's solo product strategy, customizable with apps – which also led Nokia to overlook the critical importance of a complementor's service-based offering à la iTunes (Adner & Kapoor, 2010).

REFERENCES

Acha, V. A. (2004). Technology frames: The art of perspective and interpretation in strategy. SPRU Electronic Working Paper No. 109, University of Sussex, Brighton, UK.
Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, 84(4), 98–107.
Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31, 306–333.
Arora, A., & Gambardella, A. (1994). The changing technology of technical change: General and abstract knowledge and the division of innovative labor. Research Policy, 23, 523–532.
Baldwin, C., & Clark, K. (2000). Design rules: The power of modularity. Cambridge, MA: MIT Press.
Boudreau, K., & Hagiu, A. (2009). Platform rules: Multi-sided platforms as regulators. In A. Gawer (Ed.), Platforms, markets, and innovation (pp. 163–191). Cheltenham: Edward Elgar.
Bourgeois, L. J., & Eisenhardt, K. M. (1988). Strategic decision processes in high velocity environments: Four cases in the microcomputer industry. Management Science, 34(7), 816–835.
Brown, S. L., & Eisenhardt, K. M. (1997). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42(1), 1–34.
Brusoni, S., Jacobides, M., & Prencipe, A. (2009). Strategic dynamics in industry architectures: The challenges of knowledge integration. European Management Review, 6(4), 209–216.
Brusoni, S., & Prencipe, A. (2001). Unpacking the black box of modularity: Technologies, products, organizations. Industrial and Corporate Change, 10, 179–205.
Brusoni, S., & Prencipe, A. (2006). Making design rules. Organization Science, 17(2), 179–189.
Brusoni, S., & Prencipe, A. (2009). Design rules for platform leaders. In A. Gawer (Ed.), Platforms, markets and innovation (pp. 306–322). Cheltenham: Edward Elgar.
Brusoni, S., & Prencipe, A. (2011). Patterns of modularization: The dynamics of product architecture in complex systems. European Management Review, 8(2), 67–80.
Brusoni, S., Prencipe, A., & Pavitt, K. (2001). Knowledge specialization, organizational coupling, and the boundaries of the firm: Why do firms know more than they make? Administrative Science Quarterly, 46(4), 597–621.
Cabigiosu, A., & Camuffo, A. (2012). Beyond the "mirroring" hypothesis: Product modularity and interorganizational relations in the air conditioning industry. Organization Science, 23(3), 686–703.


Cacciatori, E. (2012). Resolving conflict in problem solving: Systems of artefacts in the development of new routines. Journal of Management Studies, 49(8), 1559–1585.
Cacciatori, E., & Jacobides, M. (2005). The dynamic limits of specialization: Vertical integration reconsidered. Organization Studies, 26(12), 1851–1883.
Ceci, F., & Prencipe, A. (2008). Configuring capabilities for integrated solutions. Industry & Innovation, 15(3), 277–296.
Chataway, J., Brusoni, S., Cacciatori, E., Hanlin, R., & Orsenigo, L. (2007). The International AIDS Vaccine Initiative (IAVI) in a changing landscape of vaccine development: A public private partnership as knowledge broker and integrator. European Journal of Development Research, 19(1), 100–117.
Chesbrough, H. (2003). Open innovation. Cambridge, MA: Harvard Business Press.
Chesbrough, H., & Kusunoki, K. (2001). The modularity trap: Innovation, technology phase shifts, and the resulting limits of virtual organizations. In I. Nonaka & D. Teece (Eds.), Managing industrial knowledge: Creation, transfer and utilization (pp. 202–230). Thousand Oaks, CA: Sage.
Christensen, C. M. (1997). The innovator's dilemma. Cambridge, MA: Harvard Business School Press.
Christensen, C. M., & Rosenbloom, R. (1995). Explaining the attacker's advantage: Technological paradigms, organisational dynamics and the value network. Research Policy, 24, 233–257.
Christensen, C. M., Verlinden, M., & Westerman, G. (2002). Disruption, disintegration and the dissipation of differentiability. Industrial and Corporate Change, 11(5), 955–993.
Clark, K., & Fujimoto, T. (1991). Product development performance. Cambridge, MA: Harvard Business School Press.
Davies, A. (2004). Moving base into high-value integrated solutions: A value stream approach. Industrial and Corporate Change, 13, 727–756.
Davis, J. P. (2013). The emergence and coordination of synchrony in organizational ecosystems. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 197–237). Bingley, UK: Emerald Group Publishing Limited.
Dosi, G., & Marengo, L. (1994). Some elements of an evolutionary theory of organizational competencies. In R. W. England (Ed.), Evolutionary concepts in contemporary economics (pp. 234–274). Ann Arbor, MI: University of Michigan Press.
Fleming, L., & Sorenson, O. (2001). Technology as a complex adaptive system: Evidence from patent data. Research Policy, 30(7), 1019–1039.
Freeman, C., & Soete, L. (1997). The economics of industrial innovation. London: Pinter.
Gambardella, A., & Torrisi, S. (1998). Does technological convergence imply convergence in markets? Evidence from the electronics industry. Research Policy, 27(5), 445–464.
Garud, R., & Rappa, M. (1994). A socio-cognitive model of technology evolution: The case of cochlear implants. Organization Science, 5(3), 344–362.
Gawer, A. (Ed.). (2009). Platforms, markets, and innovation. Cheltenham: Edward Elgar.
Gifford, W. E., Bobbit, H. R., & Slocum, J. W. (1979). Message characteristics and perceptions of uncertainty by organizational decision makers. Academy of Management Journal, 22(3), 458–481.
Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.


Hansen, M. (1999). The search-transfer problem: The role of weak ties in sharing knowledge across organization subunits. Administrative Science Quarterly, 44(1), 82–111.
Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly, 35, 9–30.
Hobday, M., Davies, A., & Prencipe, A. (2005). Systems integration: A core capability of the modern corporation. Industrial and Corporate Change, 14(6), 1109–1143.
Hoetker, G. (2006). Do modular products lead to modular organizations? Strategic Management Journal, 27(6), 501–518.
Hoetker, G., Swaminathan, A., & Mitchell, W. (2007). Modularity and the impact of buyer-supplier relationships on the survival of suppliers. Management Science, 53(2), 171–191.
Huston, L., & Sakkab, N. (2006). Connect and develop: Inside Procter & Gamble's new model for innovation. Harvard Business Review, 84(3), 58–66.
Iansiti, M., & Levien, R. (2004). The keystone advantage: What the new dynamics of business ecosystems mean for strategy, innovation, and sustainability. Boston, MA: Harvard Business School Press.
Jacobides, M. (2005). Industry change through vertical dis-integration: How and why markets emerged in mortgage banking. Academy of Management Journal, 48(3), 465–498.
Jacobides, M., & Winter, S. (2005). The co-evolution of capabilities and transaction costs: Explaining the institutional structure of production. Strategic Management Journal, 26(5), 395–413.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Kaplan, S. (2008). Framing contests: Making strategy under uncertainty. Organization Science, 19(5), 729–752.
Kogut, B., & Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology. Organization Science, 3(3), 383–397.
Kogut, B., & Zander, U. (1996). What do firms do? Coordination, identity, and learning. Organization Science, 7, 502–514.
Kotha, S., & Srikanth, K. (2013). Managing a global partnership model: Lessons from the Boeing 787 'Dreamliner' program. Global Strategy Journal, 3(1), 41–66.
Langlois, R. N., & Robertson, P. L. (1992). Networks and innovation in a modular system: Lessons from the microcomputer and stereo component industries. Research Policy, 21, 297–313.
Lazonick, W., & Prencipe, A. (2005). Dynamic capabilities and sustained innovation: Strategic control and financial commitment at Rolls-Royce plc. Industrial and Corporate Change, 14(3).
Leblebici, H., & Salancik, G. R. (1981). Effects of environmental uncertainty on information and decision processes in banks. Administrative Science Quarterly, 26, 578–596.
Li, J. F., & Garnsey, E. (2013). Building joint value: Ecosystem support for global health innovations. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 69–96). Bingley, UK: Emerald Group Publishing Limited.
Lorenzoni, G., & Baden-Fuller, C. (1995). Creating a strategic center to manage a web of partners. California Management Review, 37(3), 147–162.
Lorenzoni, G., & Lipparini, A. (1999). The leveraging of inter-firm relationships as a distinctive organizational capability: A longitudinal study. Strategic Management Journal, 20, 317–338.


March, J. (1978). Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics, 9(2), 587–608.
March, J. (1987). Ambiguity and accounting: The elusive link between information and decision making. Accounting, Organizations and Society, 12, 153–168.
Monteverde, K. (1995). Technical dialog as an incentive for vertical integration in the semiconductor industry. Management Science, 41(10), 1624–1638.
Moore, J. (1993). Predators and prey: A new ecology of competition. Harvard Business Review, 71(3), 75–83.
Narduzzo, A., & Rossi, A. (2005). The role of modularity in free/open source software development. In S. Koch (Ed.), Free/open source software development (pp. 84–102). Hershey, PA: Idea Group.
Nesta, L., & Saviotti, P. P. (2005). Coherence of the knowledge base and the firm's innovative performance: Evidence from the US pharmaceutical industry. Journal of Industrial Economics, 53(1), 123–142.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Nickerson, J. A., & Zenger, T. R. (2004). A knowledge-based theory of the firm: The problem-solving perspective. Organization Science, 15, 617–632.
Ocasio, W. (1997). Towards an attention-based view of the firm. Strategic Management Journal, 18, 187–206.
Orsenigo, L., Pammolli, F., & Riccaboni, M. (2001). Technological change and network dynamics: The case of the bio-pharmaceutical industry. Research Policy, 30, 485–508.
Orton, J. D., & Weick, K. E. (1990). Loosely coupled systems: A reconceptualization. Academy of Management Review, 15, 203–223.
Patel, P., & Pavitt, K. (1997). The technological competencies of the world's largest firms: Complex and path-dependent, but not much variety. Research Policy, 26, 141–156.
Pavitt, K. (1998). Technologies, products and organizations in the innovating firm: What Adam Smith tells us and Joseph Schumpeter doesn't. Industrial and Corporate Change, 7, 433–452.
Pisano, G. P. (2006). Can science be a business? Harvard Business Review, 84(10), 114–125.
Prencipe, A. (1997). Technological competencies and product's evolutionary dynamics: A case study from the aero-engine industry. Research Policy, 25, 1261–1276.
Prencipe, A. (2000). Breadth and depth of technological capabilities in complex product systems: The case of the aircraft engine control system. Research Policy, 29, 895–911.
Prencipe, A. (2001). Exploiting and nurturing in-house technological capabilities: Lessons from the aerospace industry. International Journal of Innovation Management, 3(3), 299–322.
Radner, R. (1992). Hierarchy: The economics of management. Journal of Economic Literature, 30, 1382–1415.
Raymond, E. (1999). The cathedral and the bazaar: Musings on Linux and open source from an accidental revolutionary. Sebastopol, CA: O'Reilly and Associates.
Rosenberg, N. (1982). Inside the black box: Technology and economics. Cambridge, UK: Cambridge University Press.
Sako, M. (2003). Modules in design, production and use: Implications for the global automotive industry. In A. Prencipe, A. Davies & M. Hobday (Eds.), The business of systems integration (pp. 229–254). Oxford: Oxford University Press.
Sanchez, R., & Mahoney, J. T. (1996). Modularity, flexibility, and knowledge management in product and organization design. Strategic Management Journal, 17(Special Issue), 63–76.


Schilling, M. A. (2000). Toward a general modular systems theory and its application to interfirm product modularity. Academy of Management Review, 25, 312–334.
Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press.
Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: The MIT Press.
Simon, H. (1976). How complex are complex systems? Proceedings of the Biennial Meeting of the Philosophy of Science Association, 2, 507–522.
Staudenmayer, N., Tripsas, M., & Tucci, C. L. (2005). Interfirm modularity and its implications for product development. Journal of Product Innovation Management, 22, 303–321.
Sturgeon, T. (2002). Modular production networks: A new model of industrial organization. Industrial and Corporate Change, 11(3), 451–496.
Takeishi, A. (2002). Knowledge partitioning in the interfirm division of labor: The case of automotive product development. Organization Science, 13(3), 321–338.
Teece, D. (1986). Profiting from technological innovation: Implications for integration, collaboration, licensing, and public policy. Research Policy, 15, 285–306.
Tiwana, A. (2008). Does technological modularity substitute for control? A study of alliance performance in software outsourcing. Strategic Management Journal, 29(7), 769–780.
Tosi, H., Aldag, R., & Storey, R. (1973). On the measurement of the environment: An assessment of the Lawrence and Lorsch environmental uncertainty subscale. Administrative Science Quarterly, 18, 27–36.
Tripsas, M., & Gavetti, G. (2000). Capabilities, cognition and inertia: Evidence from digital imaging. Strategic Management Journal, 21, 1147–1161.
Tushman, M. L., & Anderson, P. (1986). Technological discontinuities and dominant designs: A cyclical model of technological change. Administrative Science Quarterly, 31, 439–465.
Weick, K. (1969). The social psychology of organizing. Reading, MA: Addison-Wesley.
West, J., & Mace, M. (2010). Browsing as the killer app: Explaining the rapid success of Apple's iPhone. Telecommunications Policy, 34, 270–286.
West, J., & Wood, D. (2013). Evolving an open ecosystem: The rise and fall of the Symbian platform. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 27–67). Bingley, UK: Emerald Group Publishing Limited.
Wolter, C., & Veloso, F. M. (2008). The effects of innovation on vertical structure: Perspectives on transaction costs and competences. Academy of Management Review, 33, 586–605.
Zack, M. H. (2001). If managing knowledge is the solution: What is the problem? In Y. Malhotra (Ed.), Knowledge management and business model innovation. Hershey, PA: Idea Group Publishing.

PART III: INTERDISCIPLINARY LINKAGES AND ANTECEDENTS

THE EMERGENCE AND COORDINATION OF SYNCHRONY IN ORGANIZATIONAL ECOSYSTEMS

Jason P. Davis

ABSTRACT

This paper explores the emergence and coordination of synchrony in networked groups like those that develop integrated product platforms in collaborative ecosystems. While synchronized actions are an important objective for many groups, interorganizational network theory has yet to explore synchrony in depth, perhaps because it does not fit the typical diffusion models this research relies upon. By adding organizationally realistic features – sparse network structure and intentional coordination – to the firefly model from theoretical biology, I take some first steps in understanding synchrony in organizational groups. Like diffusion, synchrony is more effective in denser networks, but unlike diffusion, clustering decelerates synchrony's emergence. Coordination by a few group members accelerates group-wide synchrony and benefits the coordinating organizations with a higher likelihood that group synchrony converges to the coordinating organization's preferred rhythm. This likelihood of convergence to an organization's preferred rhythm – what I term synchrony performance – increases in denser networks, but is not dependent on tie strength and clustering.


Keywords: Synchrony; networked groups; industry ecosystems

INTRODUCTION

A signature achievement of organizational theory has been to demonstrate that single organizations are embedded in broader networks of interorganizational relationships. Relationships between two organizations are the focus of most interorganizational research (Gulati & Gargiulo, 1999; Powell, Koput, & Smith-Doerr, 1996; Stuart, 1998). Yet many organizations participate in groups of three or more (Granovetter, 2005). Geographically clustered business groups (Ghemawat & Khanna, 1998; Guillén, 2000), peer groups among suppliers in manufacturing (Whitford, 2005; Zuckerman & Sgourev, 2006), and firms in collaborative ecosystems that produce integrated product platforms are prominent examples (Adner & Kapoor, 2009; Davis & Eisenhardt, 2011). A growing body of empirical work finds that group membership has a largely beneficial effect on organizational performance (Chang & Choi, 1998; Fisman, 2001; Khanna & Rivkin, 2001). Other research suggests that the privileges of membership are due to the interorganizational linkages that groups leverage to achieve common objectives (Browning, Beyer, & Shetler, 1995; Keister, 2000; Khanna & Rivkin, 2006; Saavedra, Hagerty, & Uzzi, 2011). Yet despite its importance, we know very little about how interorganizational relationships are actually mobilized inside groups and to what ends.

One reason why organizational group research may lag dyadic alliance research is that theory is less developed about the interactions and outcomes that are salient in networked groups than about those salient in dyads. In contrast to dyadic relationships, where the key interaction choice is whether to form or dissolve a specific alliance, group interactions entail greater complexity because multiple organizations are interacting at different times. Unique group outcomes can range from complex sequences of seemingly random behavior to synchronized behavior in which multiple organizations act simultaneously. Synchrony is an important outcome because it signals to outside stakeholders that organizational action is unified in time.

To illustrate, consider the important case of networked industry ecosystems that develop integrated technology platforms such as personal computers (Bresnahan & Greenstein, 1999; Gawer & Henderson, 2007; Iansiti, 1995).


It is often observed in the practitioner literature that computing companies leveraging similar platform technologies tend to release their products simultaneously (Gawer & Cusumano, 2002; Yoffie & Kwak, 2006). Research about platform ecosystems and R&D racing finds that complementary products which are available simultaneously enhance value creation for consumers and the surplus captured by producers (Adner & Kapoor, 2009; Boudreau, 2012; Milgrom, Qian, & Roberts, 1991). Collaboration between complementors can range from intensive joint development efforts to reactive influence based on other firms' moves (Kapoor, 2013; West & Wood, 2013). For example, the success of one videogame platform over its competitors is often credited to multiple games that are simultaneously released with each new generation of gaming console (Takahashi, 2006). Yet despite its apparent value, how organizations achieve synchrony is less clear. Because it has been relatively unexplored by researchers, I spoke to multiple computer industry executives about synchrony. Many indicated that achieving synchrony was an essential element of their technology strategy.1 For example:

"We coordinated our roll-out of M-tech with our partner Falstaff's F-tech … they were both available for customers at the end of August. We also tried to synchronize with our partner Lear, but they weren't ready this time around, so we'll hope to hit them in the next generation. We'd also like to synchronize with Ariel and Cleopatra – they're not our partners, but we hoped they would decide to release in August too. As it turns out, Ariel did and Cleopatra didn't. So our customers were happy to buy a system with new components from Macbeth, Falstaff, and Ariel in April." (CTO of Macbeth, a major networking equipment company)

As he indicates, when simultaneous product releases are valued by customers, managers may desire and even plan for synchrony. Yet this executive also indicates that synchrony is not a guaranteed outcome, and that the number of synchronized organizations in the group can change over time. Given the reported importance of synchrony, it is reasonable to ask, first, "why is it easier to achieve and maintain in some groups than others?" Second, "what role does intentional coordination versus random synchronization play in this process?" The executive indicated that interorganizational linkages played some role in both intentional and unintended synchrony. What this executive lacked is a broader view of network structure beyond his own company's relationships. This implies a final question: "what role do different features of network structure play in synchronization?" Answering these questions will bring us a significant step forward in our understanding of organizational synchrony.


A central organizational puzzle is how sparsely connected groups become synchronized.2 Consider two organizations A and C that are not directly connected, but are linked through another n organizations, B1, …, Bn. If we assume that influence across network ties takes time, then it is unclear how organizations communicate influence to become fully synchronized. In this stylized example, the influence from A may not reach C in time to synchronize; the influence of other nodes such as B1, …, Bn could mis-time alignment; and the countercyclical influence of C on A could desynchronize further actions. The impact of intentional coordination by a few partners on group synchronization is also not clear. In the example, if B1 and B2 intentionally align their actions, does this magnify or mitigate the influence that A has on C? In these networks, influence accumulates across all network ties in multiple directions over time, and the endogenous influence dynamics may not guarantee synchronization even if this is the preferred outcome of all organizations.

The purpose of this paper is to develop a better understanding of how synchrony emerges in organizational groups like the networked industry ecosystems that produce complex and complementary products. Despite consistent evidence of synchrony and its value, our understanding of synchrony in organizational groups has consisted primarily of practitioner accounts and some field research. While a rigorous theory of organizational synchrony does not yet exist, aspects of the phenomenon allow us to modify an existing model from mathematical biology to develop predictions about organizational group synchrony, as I detail below.

The main theoretical contributions are insights into the role of network structure in generating synchrony. The first is a better understanding of the effect of density and clustering on synchrony in organizational groups: unlike diffusion processes, in which denser and more clustered networks both increase the speed with which all organizations are infected by the object of contagion, density has a positive but clustering a negative effect on the speed with which all organizations in a group synchronize. A second contribution concerns how intentionally coordinated synchrony by a few partners affects the emergence of group-wide synchrony: coordinating organizations benefit from their coordination efforts with a higher likelihood that group synchrony converges to their preferred rhythm. These effects of intentional coordination depend upon density but not clustering.


BACKGROUND

Diffusion and Influence Models in Organization Studies

To date, organizational scholars have mostly drawn upon diffusion models to explain the behavior and outcomes of interorganizational networks (Greve, 2009; Strang & Soule, 1998; Westphal & Zajac, 1997). For example, one important line of research indicates that diffusion of resources is more effective in denser networks because more ties provide more opportunities for individuals to transfer knowledge to other organizations in the network (Davis & Greve, 1997; Greve, 2009). Dense networks have more ties relative to nodes, so they enable resources to spread faster than in less dense networks. Other modeling and evidence suggest that, in addition to density, more clustered networks diffuse knowledge faster (Fleming, King, & Juda, 2007; Owen-Smith & Powell, 2003). Defined as the degree to which friends of friends tend to be friends, clustered groups have many different links by which subgroup members can be exposed (Schilling & Phelps, 2007; Uzzi & Spiro, 2005). The effect of clustering on diffusion speed is particularly powerful if the diffusing knowledge is complex and so requires multiple exposures to be adopted by recipients (Centola & Macy, 2007). Taken together, diffusion models have effectively explained the adoption of practices and information by organizations.

Yet organizational diffusion models are inadequate to explain synchrony, for two major reasons. First, while diffusion typically involves a fixed state change (e.g., A transfers the knowledge to B and B stays "infected" with the knowledge), a set of actions from two or more agents can be synchronized in one time period but unsynchronized in others if the source of continued influence disappears. For example, Macbeth and Falstaff may synchronize one generation of products, but not the next. Thus, synchrony must be constantly reproduced to continue. Second, while diffusion models are generally time-invariant (e.g., infection depends upon reaching a certain number of exposures, but not on when they occur), the likelihood of synchrony depends on the time of influence. In the executive's example, Macbeth's influence may have caused Lear to accelerate, but it did not produce a synchronous product release in this cycle – yet this acceleration may have produced synchrony had Lear been nearing a product release anyway. More broadly, synchronization appears to be a special case of a larger class of social influence models which relax the fixed-state and time-invariance assumptions of diffusion.
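Since density and clustering recur throughout the argument, a minimal sketch may help fix the two measures. The definitions are standard; the example graph, and the use of the networkx library, are illustrative assumptions rather than anything drawn from the studies cited above.

    import networkx as nx

    # Density: the number of ties relative to the number of possible ties.
    # Clustering: the degree to which friends of friends tend to be friends.
    G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])  # arbitrary 5-node example

    print(nx.density(G))             # |E| / [n(n-1)/2] = 5/10 = 0.5
    print(nx.average_clustering(G))  # mean of each node's local clustering coefficient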


Fortunately, a mathematical model does exist that uses the timing of accumulated influence to model synchrony. Mathematicians Renato Mirollo and Steven Strogatz developed what is often called the "firefly" model because of its original application to the problem of how fireflies synchronize their flashes (Mirollo & Strogatz, 1990). Using their simple model – interconnected agents that generate a sequence of actions according to their own internal schedules, but are influenced by the flashes of other flies to flash a little sooner – they proved that complete synchrony will emerge in groups provided that accumulated outside influence has a decreasing marginal effect on the internal generation of actions. This model has been used to explain a wide range of synchronous biological phenomena ranging from cardiac pacemakers to the wake/sleep cycle to the flashing of fireflies (Strogatz, 2003).

Organizational Groups and the Firefly Model

The firefly model fits some features of the organizational case. The core problem the model explains is how unsynchronized agents become synchronized without some pre-existing shared knowledge about when to act in unison. Mirollo and Strogatz (1990) showed that agents can become synchronized without the need for shared knowledge. Imperfectly shared knowledge is a feature of many organizational examples (Hoopes & Postrel, 1999; Kogut & Zander, 1996; Puranam, Singh, & Chaudhuri, 2009). Organizations in ambiguous and nascent markets, industries with many new entrants, or markets with emerging technologies are all cases where organizations lack shared knowledge of when to act to produce synchrony.

Second, agents in the model also lack common knowledge of each other's internal processes (Mirollo & Strogatz, 1990). Common knowledge – defined as a state in which all agents know that all other agents know some key facts (e.g., their internal resources and incentive structure) about themselves – is often an effective solution to coordination problems because it enables agents to reach a preferred equilibrium provided other agents also prefer it (Becker & Murphy, 1992; Chwe, 2001; Milgrom & Stokey, 1982). Even without preexisting shared knowledge of when to act, common knowledge of each other's internal resource states might enable agents to predict each other's behavior and pre-calculate the time to act in perfect synchrony. In the firefly model, synchrony is a real problem for agents because, without common knowledge of each other's internal processes, agents must rely on external influence signals (e.g., the flashes of other flies) that may not be intended as signals


to accelerate development. This resembles product development in that organizations lack complete knowledge of all their partners' internal processes and resource states (Dougherty, 1992). Thus, organizations are primarily influenced by external signals from their partners, such as their pattern of product releases (Henderson & Clark, 1990).

Yet unfortunately the firefly model doesn't fit the organizational case perfectly – organizations differ from flies in two important ways. First, managers in two or more organizations may intentionally align their actions in time, a form of interorganizational coordination (Camerer & Knez, 2006; Gulati, Lawrence, & Puranam, 2005; Heath & Staudenmayer, 2000). In the example above, Falstaff and Macbeth intentionally aligned their product release dates. Presumably, this involved a costly coordination process of aligning resource allocation and development in a way that is infeasible for large groups of organizations (Davis, 2007). In industry ecosystems, a few firms may take the lead in coordinating alignment among a few other partners (Kapoor, 2013; West & Wood, 2013). Thus, the organizational case differs from the firefly model in that organizational groups rely on both social influence and intentional temporal coordination among a few agents to produce synchrony.3

Second, the organizations in a group are typically linked by a sparsely connected network structure (Owen-Smith & Powell, 2003; Schilling & Phelps, 2007). In contrast to the firefly model, which assumes that every organization is connected to every other organization4 – an "all-to-all" network – sparse network connections imply that synchronization can spread unequally throughout the network. A simple linkage does not guarantee immediate synchrony. In the example, Macbeth is connected to Falstaff and Lear, but only synchronizes with Falstaff at the time of the quote because of intentional coordination. This example is consistent with prior research on socially embedded alliances, in which passive influence and intentional coordination are shown to be more likely and more effective between organizations who share a prior relationship than between those who do not (Gulati & Sytch, 2007; Stuart, 2000; Uzzi, 1997, 1999).

This example also illustrates the implications of combining these two departures from the firefly model. On the one hand, a network tie was not necessary to synchronize the unlinked pair Falstaff and Ariel. This is presumably a random occurrence or the result of other distant network interactions. On the other hand, intentional coordination is not the only mechanism by which network linkages shape synchrony: Macbeth's CTO implies that their partner Lear may become synchronized in the future. Presumably, this need not require intentional coordination – simply


observing Macbeth’s (and Falstaff’s) product releases may be enough to influence Lear to accelerate development and become synchronized in the future. Taken together, it appears that a more realistic exploration of organizational synchrony should incorporate the effects of both intentional coordination and social influence, as well as various sparse network structures, into the firefly model.

METHODS

The prior discussion suggests that understanding the emergence of synchrony in interorganizational networks has important implications for understanding how organizations collaborate in groups. To explore this issue, I employ an inductive approach using simulation methods (Davis, Eisenhardt, & Bingham, 2007). Specifically, I seek to develop a simple computational model, grounded in existing research on interorganizational network dynamics, which can be used to explore the emergence of synchrony in a controlled, virtual environment (Burton & Obel, 1995). Simulation is a particularly effective method for research such as this, where some of the basic elements of the theory are understood but its underlying theoretical logic is limited (Davis et al., 2007). As Rudolph and Repenning (2002, p. 4) note, simulation "facilitates the identification of structures common to different narratives." Given its computational precision, simulation is useful for internal validation of theoretical logic as well as for the elaboration of theory through experimentation (March, 1991; Zott, 2003). Simulation is also an especially useful method when the phenomenon is non-linear (Davis et al., 2007; Lenox, Rockart, & Lewin, 2006). While case and statistical methods may indicate non-linearities, they are less precise than simulation in elucidating complex temporal effects such as tipping points, entrainment, and synchrony.

This research builds upon a trend toward utilizing endogenous and/or network models to understand social and organizational phenomena (Centola & Macy, 2007; Lenox, Rockart, & Lewin, 2007; Repenning & Sterman, 2002; Strang & Macy, 2001; Zott, 2003). Such models are often more realistic and can reveal potentially surprising behaviors that are difficult to discern in exogenous or cross-sectional models that do not involve interdependencies (Davis et al., 2007). Many researchers are considering the impact of endogenous dynamics that are generated by the simultaneous interactions of multiple variables or agents over time (Repenning & Sterman, 2002; Sastry, 1997; Strang & Macy, 2001; Zott,


2003). For instance, Strang and Macy (2001) model the abandonment and adoption of innovations based on the perception of other organizations' similar decisions, generating faddish cycles of innovation. DiMaggio and Garip (2008) model stratification as depending on the adoption of services with network externalities, where the value of adoption depends upon the extent of adoption by other agents. Rudolph and Repenning (2002) use an interrelated model of stress and interruptions to model the emergence of tipping points leading to organizational collapse. Like these precedents, the problem of networked synchrony requires an endogenous model – in this case, the timing of any agent's behaviors will be determined by the internal processes of each agent and the influence of other agents' actions in the network.

Modeling Oscillating Organizations: Time-Varying Resources and Organizational Actions

To explore synchrony in organizational networks, the analysis here develops a simple analytical structure to model the oscillation of an organization's resources, the occasional generation of actions by these organizations, and the influence of these actions on other organizations in a network. In doing so, it builds upon the work of Mirollo and Strogatz (1990) and Peskin (1975). These researchers developed a simple but powerful analytic structure to represent a network of oscillating agents that they call the pulse-coupled oscillator model.⁵ This model is adapted to the organizational context as follows. Each agent i, an organization for our purposes, is characterized by Xit, a state variable representing the amount of resources at a given time. These resources oscillate between minimum and maximum values, which are normalized to 0 and 1. The oscillation dynamics are described by a simple differential equation of the form below, where S is the constant growth rate and b generates diminishing marginal growth, resulting in a slowing upward curve of resources over time:

dXi/dt = S − bXit    (1)
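For intuition, Eq. (1) can be solved in closed form. The following short derivation is my addition, but it follows directly from Eq. (1) with initial condition Xi(0) = 0:

Xi(t) = (S/b)(1 − exp(−bt)),

which is monotonically increasing and concave down, approaching the ceiling S/b. Provided S > b (so that the ceiling exceeds the threshold of 1), resources reach the threshold at

t* = (1/b) ln(S/(S − b)).

With the standard settings used later (S=2, b=1), t* = ln 2 ≈ 0.69 units of continuous time – the organization's natural, uninfluenced period between actions.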

This representation is consistent with scholarship on repeated product development (Brown & Eisenhardt, 1995; Ulrich & Eppinger, 1999), where resources may increase to a threshold, after which they are used up during product development. Research indicates that resources like available cash and even engineering talent fluctuate with the retail seasons or internal product development cycles (Clark & Fujimoto, 1991; Nickerson & Zenger, 2002). The key assumption is that new resources must be acquired to produce each new action. This is consistent with the new product development phenomenon: when enough technological resources have been developed, organizations can release new products, and each new generation of products requires additional, different resources (Brown & Eisenhardt, 1997; Katila & Ahuja, 2002).⁶

In the model, organizational actions are generated in what biologists call an "integrate-and-fire" fashion: resources rise steadily until they reach the threshold of 1, when an action is generated. In practice, these dynamics are instantiated in a discrete-time simulation and the resource state is updated every time period according to ΔXi = (S − bXi)Δt, as is standard in stochastic process modeling (Law & Kelton, 1991). Actions are discontinuous pulses lasting a single time period. If an organization's resources reach the threshold of 1 in time t, it produces an action in time t. Resources are then reset to zero in t+1, unless the organization experiences influence in t+1, as I will describe in the next section.

Fig. 1 depicts the resources and actions of one such organization: left alone (without influence by other organizations), a single organization's resource stock will increase at the diminishing rate until it reaches the threshold of 1. At the threshold, an action is generated, resources are reset to zero in the next time period, and the organization begins the cycle again. Sometimes, I will refer to an organization's "original rhythm," which is the initial pattern of steadily repeating actions that are produced by an organization without external influence.⁷

[Fig. 1. Temporal Dynamics of One Organization. Top panel: resources vs. time, rising to the threshold; bottom panel: actions vs. time.]

Overall, this model captures the important insight that an organization's resources (like free cash flow or engineering talent) can oscillate over time and, thus, influence the timing of actions in the environment (like product releases). It makes the critical assumption that managers prefer to increase their resources, but that these resources are utilized or occupied with each new action.
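To make these single-organization dynamics concrete, here is a minimal Python sketch of the integrate-and-fire cycle just described. The step size, horizon, and immediate reset are simplifying assumptions of mine (the model resets resources in the following period):

S, b, dt = 2.0, 1.0, 0.1     # standard growth/dissipation; dt is an assumed step size
x = 0.0                      # resource state, normalized to [0, 1]
actions = []                 # periods in which an action is generated

for t in range(200):
    x += (S - b * x) * dt    # Eq. (1): resources grow at a diminishing rate
    if x >= 1.0:             # integrate-and-fire: threshold reached
        actions.append(t)
        x = 0.0              # reset (immediate here for simplicity)

print(actions)               # evenly spaced: the organization's "original rhythm"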

Pulse-Coupled Interorganizational Networks

In a network of multiple organizations, each organization is assumed to influence its partners through its actions alone. In product development, the typical actions are product releases – influence occurs when a partner accelerates its product development process by applying additional resources (Clark & Fujimoto, 1991). In the model, an action by any organization, i, influences all other organizations to which it is linked; specifically, each organization j that is linked to i will increase its resources Xj by an amount equal to the tie strength, e.⁸ By convention, this influence is modeled as occurring in the next time period, before the state changes. That is, if organization i generates an action in time t, then for all organizations j that are linked to organization i:

Xj(t+1) = Xj(t) + e    (2)

In this way, organizations are repeatedly influencing each other's resource states, Xj, and their subsequent distance to the threshold, so that the time of action generation for any organization in the network is endogenous to the overall system dynamics. In the simulation experiments that follow, I treat the tie strength, e, as homogeneous across organizations. The model can be depicted with a simple system dynamics diagram, as in Fig. 2, which is described below.
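As a sketch of how the influence rule in Eq. (2) might be implemented, assuming a vector of resource states, a 0/1 adjacency matrix, and a boolean vector of the previous period's actions (all names are illustrative, not the author's):

import numpy as np

def influence_step(x, fired, A, e=0.3):
    # Eq. (2): every organization linked to one that fired last period
    # gains e per firing neighbor, so simultaneous pulses add up.
    return x + e * (A @ fired.astype(float))

The matrix product counts each organization's firing neighbors, so the combined influence of coalitions that act in unison accumulates – the mechanism behind the temporal cooptation discussed below.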

System Dynamics and Initial Conditions

[Fig. 2. Model Structure and System Dynamics: resource growth dXi/dt = S − bXi; the firing rule (if Xi(t) ≥ 1, an action is generated and resources reset); and network influence (Xj(t+1) = Xj(t) + e).]

The pulse-coupled oscillator model has been used to successfully model biological systems such as cardiac pacemakers, the wake/sleep cycle, and the rhythmic flashing of fireflies. The model has become prominent in mathematical biology because of the emergent property of synchrony. Synchrony is often observed in nature, as in the case of fireflies that congregate in the mangrove trees of Southeast Asia. Fireflies begin their flashing in chaotic patterns that are out-of-sync, but over time their flashes become synchronized. The dramatic result is a bright, synchronous flashing of the entire population that can be seen for miles.

As Peskin (1975) first showed for the two-oscillator case, and Mirollo and Strogatz (1990) showed for arbitrarily many oscillators, under most conditions a network of pulse-coupled oscillators will eventually synchronize its actions even if each started with different resource states. What is remarkable about this model is that influence occurs only through the pulse-like interactions. There is no central "clock" that coordinates synchrony – synchrony emerges from the interactions in the system. Central to their proof is the notion of temporal cooptation – what Mirollo and Strogatz (1990) term "absorption" – that is, the idea that over time the influence of some oscillators on others through the discrete jumps, e, would cause them to share the same frequency. Mirollo and Strogatz (1990) showed that once temporal cooptation occurs, these oscillators share the same rhythm indefinitely. Because they fire in unison, they cause each other's resource states to be reset simultaneously to zero, and then begin their resource climb again. Their synchronized actions may create a combined influence that coopts a third organization to act simultaneously. In this manner, all oscillators eventually become coopted and remain synchronized. The proof assumes that the resources of each oscillator are monotonically increasing and concave down, as in dXi/dt above, and that each oscillator is linked to every other.⁹

The emergence of synchrony in these systems is surprising from an organizational perspective because it is not necessarily the intended outcome of any single agent. That is, it may or may not be a deliberate strategy. Instead, systems come to be synchronized through a series of cooptation events. In the organizations literature, the notion of cooptation begins with Selznick (1949), who described how allowing local leaders to participate in the TVA program in exchange for agreement with its objectives accelerated support for the program among the local population. In general, cooptation is a process whereby external elements are incorporated into the processes of a broader coalition (Scott, 2003, p. 71), whether a single organization or a group of organizations. From the perspective of temporal dynamics and synchrony, it will be instructive to examine temporal cooptation events, defined as occurring when an action by one organization influences another organization to increase its resources to threshold and, therefore, become synchronized with the influencing organization.

What is unexplored in this model is the impact of network structure on synchrony, the impact of interorganizational coordination on synchrony, and the differential performance of organizations in a temporal sense as the network approaches synchrony. How does intentional coordination by two organizations of the sort found in the field study by Davis (2007) affect broader network synchronization? Furthermore, it is unclear how long it takes to synchronize and engage in temporal cooptation in networks with different structures. Moreover, it is particularly important to explore synchrony in population sizes that reflect the reality of organizational groups, which often number in the hundreds or less. By contrast, many modeling studies of synchrony in networks use very large sizes, including thousands of nodes (cf. Barahona & Pecora, 2002; Sakaguchi, Shinomoto, & Kuramoto, 1987). To explore these questions, the analysis below adapts this model to the organizational context and systematically explores them using simulation experiments. By manipulating the initial conditions and experimental parameters of the model, we can better understand synchrony in cooperative interorganizational networks.

The system dynamics of a network of oscillating organizations is depicted in Fig. 2; it can be summarized as follows. Each organization begins with resources, Xit. In each time period, t, each organization increases its resources Xit by an amount given by dXit/dt. If an organization's resources reach a threshold of 1, the organization generates an action and resources are reset to 0. This action influences all other organizations to which the focal organization is linked, causing them to increase their resources by an amount equal to the tie strength, e, in the next time period, t+1. This system generates a time series of continuous resource states Xi(t) and a time series of discrete action events Ai(t) for each organization i, like those depicted in Fig. 1.
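Combining growth, firing, and influence, here is a compact sketch of one simulation run under a straightforward reading of this summary. The discrete time step and the handling of the same-period reset-plus-influence are my simplifications, not the author's specification:

import numpy as np

def run(A, x0, S=2.0, b=1.0, e=0.3, dt=0.1, periods=500):
    """One simulation run: returns a periods x N boolean action log."""
    x = np.array(x0, dtype=float)
    pulse = np.zeros(len(x))                    # influence arriving from last period
    log = []
    for _ in range(periods):
        x += (S - b * x) * dt + pulse           # growth plus carried-over influence
        fired = x >= 1.0                        # integrate-and-fire threshold
        log.append(fired.copy())
        pulse = e * (A @ fired.astype(float))   # Eq. (2), delivered next period
        x[fired] = 0.0                          # firing organizations reset
    return np.array(log)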

Assumptions and Model Boundaries

Like all research, this model involves a few important assumptions. Focusing on influence and resource dynamics, the model operationalizes these temporal processes with oscillating resources and discontinuous action pulses. While organizations no doubt have multiple rhythms and types of actions, this model presents one such combination for the sake of simplicity and tractability, although future research could relax these assumptions and explore multiple, heterogeneous features of organizations. Moreover, the social influence and resource processes here operate conspicuously at the macro-organizational level, although future research could detail the individual demographics and networks that no doubt underpin these organizational mechanisms.

Like all models, this one is a simplified picture of the world that represents "some but not all features of that world" in order to address a focused set of research questions (e.g., the impact of various network structures on the amount of sync and time to sync) (Lave & March, 1975). The research strategy investigates the emergence of synchrony as an important, but certainly not exclusive, outcome of networked groups. In this research, I make the critical assumption that organizations wish to synchronize, have adequate incentives to develop new products, allow themselves to be influenced by their partners' actions, and can gain resources to accelerate development when so influenced. Making these assumptions allows me to focus on less well-explored issues related to network structure and coordination. While these core assumptions are reasonable, future research could explore them directly.

Simulating the Model

In the simulation experiments that follow, the ties between organizations have equal tie strength, e, and all organizations have the same S and b parameters, although different parameter values for e, S, and b are explored in sensitivity analyses. (All results are robust to a large range of parameter combinations.) The only differences across organizations are the initial resource states, Xj(1), which are randomized in each simulation run, and the experimental constructs described below (network structure and intentional coordination). Since the focus of this research is on synchrony under various group conditions, these homogeneous parameter choices are justified, although future research could relax these assumptions and explore heterogeneous e, S, and b.

ANALYSIS

I use this analytical structure of a network of oscillating organizations, with influence through action pulses, to engage in two sets of analyses. The first examines the emergence of synchrony – the time to reach full group synchrony and the evolution of cooptation across time – and its dependence on density and clustering, two important features of network structure. I begin by examining the emergence of synchrony in networks where no organization necessarily intends to synchronize, confirming that unintentional influence dynamics alone can generate synchronized actions in a networked group. I then analyze the impact of intentional coordination across dyads or triads on group-wide synchrony, investigating coordination's effect on the time to synchronize and the performance of coordinating organizations.

But before conducting any experiments it is important to validate the model, since all further experiments (e.g., various network structures and coordination) depend on it (Davis et al., 2007, p. 491). Thus, it is helpful to examine the outputs of a single representative run of the simulation, depicted in Fig. 3, to see how synchrony emerges. Ten fully connected organizations begin with randomly determined resource states between 0 and 1. Some organizations begin with resources closer to others, while others are farther apart. As the simulation progresses, some organizations reach threshold, produce an action, and thereby influence all other linked organizations. This influence increases the resources of other organizations, causing some of them to reach threshold and come into synchrony in the next time period. Over time, groups of organizations quickly form coalitions that act in unison, as can be observed in the lower graph in Fig. 3. By t=13, five organizations are acting in unison; by t=22, eight organizations are acting in unison. The resources of all 10 organizations are synchronized by t=24, causing them to act in unison forever. That is, the network is fully synchronized. This pattern was repeated across all individual runs in all experiments – with enough time, all organizations in a networked group that begin with unaligned resource states will reach full synchrony.

[Fig. 3. Evolution of Resources and Actions until Synchrony. Top panel: resources of the 10 organizations over 50 time periods; bottom panel: number of simultaneous actions over time.]
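Full synchrony can be read off a simulated action log: from some period onward, every organization acts in unison. A sketch of a time-to-sync measure under that reading (the helper name is mine):

def time_to_sync(log):
    """First period from which all organizations always act in unison
    (every later row of the action log is all-True or all-False)."""
    for t in range(len(log)):
        if all(row.all() or not row.any() for row in log[t:]):
            return t
    return None   # not synchronized within the simulated horizon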

Emergence of Synchrony from Network Influence

To ensure that the results reflect the underlying synchronization process and not merely particular outputs of stochastically generated initial conditions, all the results that follow are based on the average behavior of at least 1000 independent runs of the simulation, unless otherwise noted. For each of these runs, a distinct set of initial resource conditions is generated using draws from uniform random variables between 0 and 1, one for each organization. Thus, to explore the impact of increasing one parameter – for example, network size N – on system behavior, the simulation is run 1000 times at multiple values of this parameter and the outputs are averaged, while all other parameters (e.g., tie strength e, oscillation frequency 1/T, and resource growth rate S) are held constant.
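The experimental protocol just described might be sketched as follows, reusing the hypothetical run and time_to_sync helpers from the earlier sketches:

import numpy as np

def avg_time_to_sync(A, n_runs=1000, seed=0, **params):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    times = []
    for _ in range(n_runs):
        x0 = rng.uniform(0.0, 1.0, size=n)      # fresh random initial resources
        times.append(time_to_sync(run(A, x0, **params)))
    synced = [t for t in times if t is not None]
    return sum(synced) / len(synced)            # average over runs that synchronized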


In this manner, the impact of varying multiple parameters on model behavior can be systematically explored. Unless stated otherwise, the parameters conform to the standard parameter settings used by Mirollo and Strogatz (1990) in their simulations: resource growth rate S=2, resource growth dissipation rate b=1, frequency of oscillation 1/T=1/10, tie strength e=.3, and time=50, with resources Xit normalized to a range of 0 to 1. Further sensitivity analyses in which multiple parameters are simultaneously varied are conducted to confirm the robustness of the simulation results.

In the first set of experiments, I explore the effect of density and clustering on synchrony. To quickly establish the basic network structure results, I first plot the amount of time it takes to reach synchrony – the "time-to-sync" – under these different network conditions. Each of the four graphs in Fig. 4 plots the results of multiple experiments to explore these dependencies and better understand the synchronization process. The four experiments test the impact of varying a key parameter (N, K, e, or Beta) on the average time-to-sync.

[Fig. 4. Time-to-Sync for Varying Size, Degree, Strength, and Clustering. Panels: time-to-sync vs. N (ER network; K=6/e=.3), vs. K (ER network; N=200/e=.3), vs. e (ER network; N=20/K=14), and vs. CC (WS network; N=100/K=60/e=.3).]

All experiments use the Erdős–Rényi (ER) random network except the lower right graph exploring clustering (CC), which uses the Watts–Strogatz (WS) small-world network model. Each point in these graphs is the time-to-sync averaged over 1000 simulation runs.

Density Dependence of Synchrony

Network density – the ratio of ties to the number of possible ties in a network – can be decomposed into two components: the number of ties of each actor, their degree, di(n), and the number of nodes, N. As I described in the methods, the ER random network model allows us to manipulate the mean degree of the actors in a network, K, and the number of actors, N, to guarantee that Density = K/(N−1). I explore N and K independently because of their different effects on the emergence of synchrony.

The upper left graph demonstrates the strong dependence of time-to-sync on network size, N. Time-to-sync increases steadily with network size because each additional node dilutes the relative impact of cooptation, since more organizations must be coopted for network-wide synchrony to emerge. By contrast, the upper right graph plots the strong negative dependence of time-to-sync on mean degree (K). As the number of ties increases, time-to-sync declines because of the increasing likelihood of combined influence by multiple organizations coopting other organizations into synchrony.¹⁰ When combined, the positive and negative effects of N and K on Density = K/(N−1) demonstrate a strong negative effect of density on the time-to-sync – that is, synchrony accelerates in denser networks.

Proposition 1. The time to synchronize decreases as network density increases.

To understand the density dependence of synchrony, I conduct an event history analysis of cooptation events. A cooptation event occurs when an organization comes to be synchronized to another organization's rhythm. These event analyses for all parameters are depicted in Fig. 5. To examine the relationship between cooptation and size, I plot the instantaneous cooptation rate, λc, in sparsely connected (K=6) networks with N=8, N=12, and N=20 organizations for 50 time periods in the upper left graph.¹¹ Each point in the graph is the average over 1000 simulation runs. These results provide insight into why synchrony is slower in bigger networks. While the rate of cooptation peaks around t=8 for all network sizes, this rate is actually lower in larger networks. To understand this, recall that the cooptation rate is normalized by network size in order to make appropriate inferences about the likelihood of cooptation for each organization across variations.
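For readers wishing to reproduce the network manipulation, the ER networks used here can be drawn with a tie probability chosen to yield mean degree K, so that density equals K/(N−1). A sketch using the networkx library (the library choice and helper name are my assumptions):

import networkx as nx

def er_adjacency(N, K, seed=None):
    # ER random graph with tie probability K/(N-1), giving expected
    # mean degree K and density K/(N-1) by construction.
    G = nx.erdos_renyi_graph(N, p=K / (N - 1), seed=seed)
    return nx.to_numpy_array(G)

A = er_adjacency(N=20, K=6, seed=1)
print(A.sum() / (20 * 19))   # realized density, close to 6/19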

[Fig. 5. Cooptation Rate for Varying Size, Degree, Strength, and Clustering. Panels: cooptation rate vs. N (ER network; K=6/e=.3; N=8, 12, 20), vs. K (ER network; N=20/e=.3; K=2, 10, 14), vs. e (ER network; N=50/K=30; e=.05, .1, .25), and vs. CC (WS network; N=100/K=60/e=.3; Beta=.001/.01/.1/1 with CC=.736/.728/.667/.611).]

In larger networks, there are more organizations to be coopted and, thus, a lower cooptation rate. Indeed, multiple competing rhythms may coexist in large networks for a long time before one dominant rhythm emerges, which is reflected in the slower time-to-sync.

Next, I plot the cooptation rate in a moderately sized network (N=20) with low, medium, and high mean degree – that is, K=2, K=10, and K=14, respectively. The event history analysis indicates that the rate of cooptation is higher in networks with more ties, since these organizations possess more ties across which influence can occur. While all networks eventually reach synchrony, networks with fewer ties per organization take much longer to synchronize than those with more ties per organization because of these diminished opportunities for immediate cooptation, although influence can eventually accumulate to produce cooptation.

I also explored the dependence of the time to synchronize on tie strength, e, to perform a manipulation check on the effect of more powerful influence. As expected, time-to-sync's dependence on tie strength, e, in the bottom left graph of Fig. 4 is also negative, in that greater influence accelerates synchrony.


The curve declines sharply from 0 to .5 and more gradually at values greater than .5.¹² To gain more insight, the bottom left graph of Fig. 5 plots the cooptation rate for networks with tie strength equal to e=.05, e=.1, and e=.25. While the peak cooptation rate is the same for all three variations, these curves are offset across time. Higher tie strength accelerates cooptation because fewer attempts at influence are needed to push an organization's resources over threshold and, thus, generate a cooptation event. Thus, the cooptation analysis indicates a fundamental difference between structural (density) and relational (strength) effects on synchrony: density mainly increases the magnitude of cooptation, through more combined influence across multiple ties, whereas tie strength has its main effect through accelerating cooptation, typically through single influence events with greater influence strength.

Clustering Dependence of Synchrony

Next I explore the effect of clustering on time-to-sync by, again, holding all other parameters constant across simulation runs. In the lower right-hand corner of Fig. 4, I plot the time-to-sync versus an exponential range of Beta values, which correspond to linear values of CC, the clustering coefficient. While weaker than the main effect of density dependence, the negative impact of increasing Beta is observed in the range .01 to 1, the range in which increasing Beta strongly decreases clustering (Watts & Strogatz, 1998). At the limit of Beta=0, clustering is maximized and the WS model generates perfectly regular ring lattice networks with time-to-sync values that plateau. Thus, the effect of increasing clustering is to increase the time-to-sync – that is, synchrony decelerates in more clustered networks.

Proposition 2. The time to synchronize increases as network clustering increases.

To understand this result better, I plot the cooptation rate for moderate structural values of CC, in moderately sized networks (N=100) with a high mean degree (K=60), where the path lengths between pairs of organizations are relatively short. Prior modeling research indicated that increasing Beta may generally improve synchronization because more shortcuts enable shorter path lengths by which synchrony can be reinforced (Barahona & Pecora, 2002) – by choosing organizationally realistic network sizes with a higher mean degree, we can better isolate the effect of clustering as opposed to path length. This analysis indicates that cooptation is very fast in these moderately clustered networks, such that most of the graphs are almost perfectly superimposed over each other.


This led me to examine the outputs of many experiments in these early periods. I found that while cooptation within clustered subgroups is quick, cooptation across clustered subgroups is slower. That is, the rhythms that emerge across clusters are highly variable, leading to longer times for subgroups to coopt each other and generate network-wide synchrony. By contrast, less clustered networks have more evenly distributed ties, which enable organizations to coopt each other more uniformly, such that network-wide synchrony can emerge more quickly.

This logic is easiest to understand in the special case of cliques, which are perfectly clustered subgroups of organizations that are all connected to each other. For example, the simplest multi-clique structure is two triads connected to each other through one tie: (A,B,C)–(I,J,K). Cliques of different sizes can emerge in a highly clustered network. In these networks, clique members are likely to synchronize quickly because they are the most densely connected part of the network. Yet because different cliques may be connected by only a few ties in a sparsely connected network, the synchronous rhythms that initially emerge in different cliques may differ. That is, (A,B,C) may act synchronously every third time period, while (I,J,K) acts synchronously every fifth time period. Thus, the deceleration due to clustering occurs because the different rhythms in cliques are strongly maintained by the reinforcing influence of dense interconnections, compared to the relatively weaker influence – working through only a few ties – that must ultimately synchronize the rhythms across cliques. In the two-triad example, the combined influence of three ties that maintains synchrony in one triad-clique (A,B,C) slowly gives way to the influence from the other triad-clique (I,J,K) that works through one tie (C–I). A similar logic holds for larger cliques connected by a few more bridging ties, for imperfect cliques (clustered subgroups that lack a few interconnections), and for larger networks with many clusters of heterogeneous sizes and densities, which are prevalent in small-world models as long as the network is somewhat clustered. Taken together, clustered networks synchronize more slowly because organizations within clustered subgroups are slower to synchronize with organizations across clustered subgroups than is the case in less clustered networks.
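The Beta/CC correspondence can be illustrated directly with the Watts–Strogatz generator. A small sketch (networkx again assumed) showing that a higher rewiring probability Beta yields lower average clustering, consistent with the Beta/CC pairs reported for Fig. 5:

import networkx as nx

for beta in (0.001, 0.01, 0.1, 1.0):
    G = nx.watts_strogatz_graph(n=100, k=60, p=beta)   # ring lattice, rewired
    print(beta, round(nx.average_clustering(G), 3))    # CC falls as Beta rises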

Impact of Dyadic and Triadic Coordination on Time-to-Sync and Performance

To this point, we have explored the temporal consequences of network structure on the synchrony that emerges from social influence. Denser networks are found to accelerate synchrony, while more clustered networks decelerate synchrony. The next set of experiments focuses on the effect of intentional coordination by a few organizations on both the time to synchronize and the likelihood that synchrony will converge on the coordinator's preferred rhythm, what I term the synchronous performance of an organization.

Coordination and Time-to-Sync

The upper graph in Fig. 6 compares the results of two variations – dyadic and triadic coordination – on convergence to synchrony. To explore the impact of intentional coordination on synchronization, simulations in the "dyadic" variation operationalize coordination by choosing two organizations at random and setting their initial resource states equal to the same randomly generated value. That is, I operationalize coordination as the intentional pre-synchronization of resources, an intuitive and easy-to-implement form of coordination. I also experiment with triadic coordination in order to see the effect of additional coordination.

[Fig. 6. Impact of Coordination on Time-to-Sync and Performance (N=10/K=8); number of simulations = 5000. Upper panel: time-to-sync improvement ratio under dyadic and triadic coordination; lower panel: sync performance under dyadic and triadic coordination.]


While less prevalent than dyads, some prominent triads do exist in platform groups, such as the triadic alliances between Intel, Microsoft, and Cisco (Gawer & Cusumano, 2002; Yoffie & Kwak, 2006). Triadic coordination is operationalized by choosing three organizations at random and setting their initial resource conditions equal to the same randomly generated value.

The upper graph plots the time-to-sync "improvement ratio" for the dyadic and triadic coordination cases. The dyadic improvement ratio is defined as the difference between the time-to-sync without dyadic coordination (TtSwithout) and the time-to-sync with dyadic coordination (TtSdyadic), over the time-to-sync without dyadic coordination: (TtSwithout − TtSdyadic)/TtSwithout. The time-to-sync improvement ratio due to triadic coordination is computed similarly: (TtSwithout − TtStriadic)/TtSwithout. To compute these ratios, each variation – (1) without coordination, (2) dyadic coordination, and (3) triadic coordination – is run separately, and metrics are averaged over 5000 simulation runs using small ER random networks (N=10) that are highly connected (K=8) to provide a conservative illustration of coordination's effects in almost fully connected networks. Sensitivity analyses with various other network topologies, including less connected and fully connected cases, show that these effects are quite robust, as will be illustrated in the next set of experiments.
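A minimal sketch of this pre-synchronization manipulation (the helper and variable names are mine):

import numpy as np

def coordinated_initial_states(N, n_coord, rng):
    """Random initial resources, with n_coord randomly chosen organizations
    (2 = dyadic, 3 = triadic) sharing one random starting value."""
    x0 = rng.uniform(0.0, 1.0, size=N)
    members = rng.choice(N, size=n_coord, replace=False)
    x0[members] = rng.uniform(0.0, 1.0)   # the pre-synchronized coalition
    return x0, members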

220

JASON P. DAVIS

These results are modest (1% and 4% faster synchrony), although we should expect them to be so. In these networks with coordination, only 2 of 10 (dyadic) organizations or 3 of 10 (triadic) organizations are presynchronized: the system must coopt 80% (dyadic) or 70% (triadic) of the organizations to synchronize fully. Indeed, it is not guaranteed that presynchronization efforts always help network synchronization in every configuration of agent’s resources. It is possible that the pre-synchronized organizations begin with initial conditions very different from the other organizations, thus impeding synchrony, as I observed in a few simulation runs. On average, though, the upside of cooptation with intentionally coordinated coalitions outweighs this possible downside. Other organizations benefit from these temporal spillovers by reaching synchrony sooner than they otherwise would, and synchrony accelerates with additional coordination in terms of more organizations in the coordinating subgroup. Proposition 3. The time to synchronize decreases as interorganizational coordination increases. Coordination and Sync Performance Anecdotal evidence and some speculation by platform researchers suggested that partners benefit from coordination not only by accelerating synchrony for their group but also by way of a leadership-oriented benefit – namely, they increase the likelihood that group-wide synchrony tips to their preferred rhythm (Davis, 2007; Gawer & Cusumano, 2002). If this is true, then the additional costs of coordination may be somewhat offset by lower cost of resource adjustment that occurs when organizations shift their schedules as a result of influence and adopt another organization’s rhythm. The experiments depicted in the lower graph of Fig. 6 explore the average synchronization performance of organizations using dyadic and triadic coordination. As described in the Methods section, the synchronization performance, Psync(i), measures the likelihood that an organization coopts all other organizations to its preferred rhythm in some treatment condition (either dyadic or triadic coordination) relative to the likelihood of doing so in a control condition (without coordination). In this case, the baseline likelihood that random organization in the control condition (no coordination) will win is 1/10 because N=10. The performance calculations are as follows. For dyadic coordination, the performance calculation is this: Psync(i)=oWsync(i)WdyadicoWsync(i)Wcontrol=.22.1=.12. For triadic coordination this is the performance calculation: Psync(i)=oWsync(i)Wtriadic oWsync(i)Wcontrol=.37.1=.27.13 Therefore, the results indicate that

The Emergence and Coordination of Synchrony in Organizational Ecosystems

221

dyadic and triadic coordination do generate performance advantages for focal organizations undertaking these strategies in terms of a higher likelihood that group-wide synchrony will converge to their preferred rhythm. Proposition 4. Synchronous performance increases as interorganizational coordination increases. This occurs because coordination with partners amplifies an organization’s influence relative to non-coordinating organizations: the two (or three) organizations work as pair (or trio) to coopt others to their preferred rhythm. Does this performance advantage depend on features of network structure? Is synchrony a better strategy in some networks than others?

Network Structure, Coordination, and Performance The final analysis explores whether the performance advantage that coordination provides is dependent on the two major features of network structure explored in this paper – density and clustering. Similar to the timeto-sync analysis depicted in Fig. 4, in this analysis I systematically vary network structure parameters while holding all others constant. I plot the performance of dyadic coordination in these graphs; triadic coordination has higher overall values, but similarly shaped dependencies. Each point in the graphs in Fig. 7 is the averaged over 1000 simulation runs. All experiments use the ER random network except the lower right graph exploring clustering (CC), which uses the WS small world network model. To explore the density dependence of performance, I again analyze N and K separately. The upper left graph illustrates the strong negative dependency of performance on network size (N) with N ranging from 2 to 100,14 although the following results are robust for all KoN. When N is low, performance is high because the pairs of organizations engaged in dyadic coordination have a higher likelihood of sequentially coopting all the organizations in the population. Performance is particularly high when N is between 2 and 10 because the network is fully connected in this range (see footnote 14 for treatment of networks where NoK). As N increases, the likelihood that pairs of organizations engaged in dyadic coordination coopt all organizations declines. Instead, other synchronized subgroups may emerge, some of which are larger than the coordinated coalition. These other coalitions decrease the likelihood that the coordinating organizations will win. Performance is notably lower in the region of NWK, but nonetheless declines as N increases.

222

[Fig. 7. Impact of Network Structure on Dyadic Coordination Performance. Panels: sync performance vs. N (K=10; e=.3), vs. K (N=50; e=.3), vs. e (N=50; K=10), and vs. Beta (N=50; K=10; e=.3).]

This suggests an important interaction between N and K. Recall that the total number of ties in these networks is well estimated by NK. Thus, in cases where N and K are close together, the number of ties is significantly greater than N alone – that is, NK ≫ N when N and K are not small. For example, if N=K=10, then NK = 100 ≫ N. Thus, the magnitude of any dependencies that rely upon the coordinating organizations reaching other organizations in the broader network is expected to increase greatly as N and K become more similar, due to the greatly increasing number of ties relative to N. Indeed, analyzing the dependency of coordinated performance on mean degree (K) bears this out. The upper right graph illustrates that performance increases as the mean number of ties per node (K) increases. Two interesting things bear mentioning: first, the dependency on N is an order of magnitude greater (ranging from 0 to .5) than the dependency on K (ranging from 0 to .05). This difference in ranges accounts for the more jagged curve in the upper right graph, which "zooms in" and magnifies the stochastic output at this level.


Nonetheless, this graph illustrates an increasing performance advantage to coordination, which becomes particularly high as K nears N. This is the converse of the prior finding: as K nears N, the total number of ties is large relative to the number of nodes. Thus, coordinating organizations are more likely to be connected to other organizations that they can quickly coopt. This builds a growing coalition over time, which increases their likelihood of winning. Thus, this coordination advantage is particularly acute in very dense networks (with high K relative to N). When combined, the positive and negative effects of N and K in Density = K/(N−1) imply a strong positive effect of density on the performance of coordinators. Taken together, coordinators should prefer denser networks, particularly as K nears N.

Proposition 5. Synchronous performance increases as network density increases.

The next two experiments illustrate non-dependencies that nonetheless solidify our intuition about coordination and sync performance. The lower left graph varies tie strength, e, from 0 to 1, generating a flat performance curve. Yet this flat curve is greater than zero, reflecting the positive effect of coordination versus the baseline, our measure of performance. Overall, performance does not vary with tie strength. While increasing tie strength accelerates influence, acceleration's effect on cooptation is similar for coordinating and non-coordinating organizations, generating no variation in performance. The lower right graph varies Beta between .0001 and 1. Again, performance is flat and greater than zero. Thus, performance does not vary with clustering. Recall that clustering creates subgroups (and sometimes fully connected groups) that initially produce widely varying rhythms. While network-wide measures like the time-to-sync have a clear dependence on clustering, performance does not, because the coordinating organization is no more or less likely to be in or out of any given cluster than any other organization. Thus, the performance advantage due to coordination does not vary with clustering.

DISCUSSION

I began by noting that despite important examples of organizational groups, and some evidence that synchrony is a valuable outcome for those groups, our understanding of organizational synchrony was lacking.


This is perhaps the case because synchrony is an outcome of social influence processes that are relatively unexplored by organizational researchers, compared to the more limited diffusion models that have been repeatedly applied in interorganizational network research. While a rigorous theory of organizational synchrony does not yet exist, the firefly model from mathematical biology fit some features of the organizational case well – many agents begin unsynchronized and lack shared and common knowledge – but was lacking in other areas – sparsely connected networks and the possibility of intentional coordination. I took some steps toward understanding organizational group synchrony by adapting the firefly model to the purpose of understanding organizational groups, with the addition of sparse network structure and coordination.

The main results are insights into the emergence and coordination of synchrony in networked organizational groups. I find that, as in diffusion, synchrony is more effective in denser networks, but unlike diffusion, clustering decelerates synchrony's emergence. The density dependence of synchrony operates through its dual effects on network size (N) and degree (K). Groups converge to synchrony faster in smaller networks, and faster where more ties per organization enable more influence to accumulate and accelerate synchrony. Yet in contrast to research linking clustering (CC) with faster diffusion, groups converge to synchrony more slowly in more clustered networks because variable rhythms emerge in cliques and other clustered subgroups that take longer to synchronize. This highlights what may be a general difference between influence and diffusion models – because the production of actions in the synchrony influence model depends on the exact timing of accumulated influence for each agent, competing rhythms can emerge that decelerate the achievement of group-wide outcomes in clustered networks.

I also found that coordination by a few group members to align their internal resource processes accelerates group-wide synchrony and benefits the coordinating organizations with a higher likelihood of convergence to the coordinating organizations' preferred rhythm. This likelihood of convergence to an organization's preferred rhythm – what I term synchrony performance – increases in denser networks, but is not affected by tie strength and clustering. The performance of coordination depends on density because more ties magnify the combined influence of two or three coordinating organizations; tie strength and clustering do not convey performance advantages because stronger ties and clustered subgroups do not magnify the combined influence of coordination.


Theoretical Contributions to Organizational Theory

This study makes a number of theoretical contributions to specific streams of research about interorganizational networks (Podolny & Page, 1998; Powell, White, Koput, & Owen-Smith, 2005; Schilling & Phelps, 2007). For instance, research on business groups repeatedly suggests that their network structure could influence how effectively they achieve group outcomes (Granovetter, 2005; Khanna & Yafeh, 2007; Yiu, Lu, Bruton, & Hoskisson, 2007), although this has not been well explored. This literature also suggests that central coordination is an essential aspect of business groups (Yiu et al., 2007). The study at hand sheds light on both phenomena through the lens of synchrony. While not strictly necessary, intentional coordination could accelerate group outcomes based on social influence. In fact, coordination by a small subset of firms – a few banks or leading diversified firms – could effectively influence a larger business group if the network is densely connected. Moreover, if generating the rhythm to which firms synchronize is valuable, coordinating firms could enjoy more benefits when groups are more densely connected. If leading firms withdraw overt coordination, synchrony may still emerge if the network is not too big, although clustering may lead to delay if it is allowed to persist.

Finally, synchrony has perhaps been most explored in technology platform ecosystems because of their multiple product releases (Adner & Kapoor, 2009; Henderson & Clark, 1990). Because of this extensive and publicly available data, platform groups may be the "fruit fly" of research on organizational group behavior (Bresnahan & Greenstein, 1999; Gawer & Henderson, 2007). New research on product ecosystems finds a range of behavior, from intensive coordination to arms-length interactions between weakly linked firms (Kapoor, 2013; West & Wood, 2013). Still other research finds that the degree of interaction and outcomes depend on the cycle times in these industry ecosystems (Mäkinen & Dedehayir, 2013). Yet despite extensive documentation of simultaneous product releases, and the cooperative incentives that underlie them, the role of network structure in this context has been unclear. This study provides some understanding of how network structure shapes a platform's capacity to synchronize product releases, an important practical problem for which there is little theory. These and other ideas could form a foundation for empirical investigation of synchrony in industry ecosystems.

In conclusion, while synchrony is an important outcome for many groups, how it is produced has not been well explored by strategy and organization scholars. Using a modified firefly model, one can generate a number of important predictions that relate to this important group outcome.


I found that synchrony emerges faster in denser networks that are less clustered, and that coordination accelerates synchrony and benefits the coordinator with increased synchrony performance, particularly in dense networks. If the ideas developed here survive empirical test, they could extend our understanding of organizational groups beyond simple diffusion to a richer view of the influence dynamics that are unfolding in many organizational groups.

NOTES

1. I asked these executives about synchrony as part of a broader study about technological collaboration in the computer industry. I disguise the company names with Shakespearean character names like Macbeth and Falstaff.

2. Most research on the network structure of business groups indicates that they are not fully connected – that is, they are sparse. For example, platform groups are typically organized around a few key relationships – Intel, Microsoft, and Cisco in computing, for example – with other players linked in through the periphery. Other examples include the "pyramidal" network structures of business groups in Latin America and other emerging economies, which include aspects of sparse hierarchical structure. While the network measures of density and clustering explored here have the advantage of generality and may subsume many of the effects across organizational groups, future research could explore particular structures like stars and pyramids in more depth through simple modifications of the computational model developed here.

3. I thank a reviewer for the advice to clarify that the specific coordination involved is temporal alignment, and that social influence acts through discrete pulses. Strictly speaking, social influence may or may not be intended by managers, so I do not necessarily term it "unintentional." The key difference in the model is between social influence through organizational actions, which causes partners to accelerate resource acquisition, and intentional coordination by two or three partners, who align their internal resource cycles so that their actions will be aligned.

4. Mirollo and Strogatz (1990) use an all-to-all network because it fits their case – fireflies influence each other in the neighborhood of the mangrove trees where they swarm. An all-to-all network also simplifies the model so that analytical solutions are easier to derive.

5. These scholars, in turn, built upon a modeling tradition that begins with Winfree (1967) and other biological models of oscillatory behavior.

6. I am especially grateful to a reviewer who suggested I clarify this and ground it in known product development research.

7. I also refer to an organization's "preferred rhythm," which is the pattern of actions that represents the minimal departure from its original rhythm necessary to synchronize with others.

8. Given the lack of common knowledge about one's partners, it seems reasonable to analyze a similar amount of influence for each organization's action, since it is hard for organizations to distinguish between the importance of actions without some common ground. Nonetheless, this assumption of fixed e could be relaxed in future research.


9. Mirollo and Strogatz's (1990) proof assumes an all-to-all connected network, which enables analytical progress with "mean field approximations" that require fully connected networks. Simulation studies, however, can produce consistent empirical, if not analytical, solutions to various problems in networks (cf. Watts & Strogatz, 1998).

10. It should be noted that the apparent asymptotes in the N and K analyses at time-to-sync=1000 are artifacts of setting the maximum number of iterations to 1000; the time-to-sync values for high N and low K are very large but finite numbers.

11. I allowed these simulations to run 100 time periods, but zoom in on 50 time units here because most cooptation occurs early in the life cycle. In addition, I also performed this experiment in fully connected and less connected networks. Size results are robust to these alternatives, but because the paper is motivated by the sparse network synchrony problem I keep this as the primary illustration. I thank an anonymous reviewer for reminding me to clarify this important sensitivity analysis.

12. This is easily explained by recalling that resource states are normalized between 0 and 1, with an intrinsic expected value of .5. As e increases, accelerated influence reduces the time to synchrony by reducing the number of time periods required to coopt another organization. In networks with e>.5, some organizations that remain uncoopted are most likely experiencing influence when their resources are below the mean, perhaps by multiple coalitions that are gradually pulling these organizations toward cooptation over many cycles. While increasing e above .5 does accelerate this process, it is slower than for those organizations whose resource states happen to be high when influence occurs.

13. It is important to remember when interpreting these results that the baseline (the likelihood that a random organization in the control condition will win) is 1/N, or 1/10=.1 when N=10. That is why .1 is deducted from each raw likelihood of the treatment conditions in the performance metric.

14. It is important to note that K=10 except when N≤10, where this is impossible. In these cases, K is reset to Knew=N−1 so that all pairs are linked. Doing so enables illustration of the inflection point of this dependency at N=K.

15. Since the cooptation rate is based on the possible organizations that could be coopted, it is important to make an adjustment for network size in order to make inferences about the likely magnitude of cooptation affecting each organization in different sized networks. To do so, I simply normalize by dividing λc by N/⟨N⟩variations, the number of organizations in a given variation over the mean number across variations. This convenient rescaling allows us to make inferences about cooptation in variations where N is varied, but does not affect our estimate of λc where N is not varied, since N/⟨N⟩variations=1 in those cases.

16. For instance, in a later analysis, I compare organizations that coordinate versus those that do not. However, the baseline likelihood of winning for a non-coordinating organization will depend upon the number of organizations, N – specifically, the likelihood of winning declines as 1/N, due to chance. Thus, to make appropriate inferences, we need to compare against this baseline.


ACKNOWLEDGMENTS

I am especially grateful for the insightful comments and generous support of Rebecca Henderson, Ezra Zuckerman, Roberto Fernandez, Deborah Ancona, Wanda Orlikowski, Jan Rivkin, Chris Wheat, Pai-Ling Yin, Kathleen Eisenhardt, Mark Granovetter, Riitta Katila, Christoph Zott, Ron Adner, and seminar participants in the MIT Distributed Leadership Research Group, the BYU/Utah Strategy Conference, and the Academy of Management Meeting. Support for this research was provided by MIT's Sloan School of Management.

REFERENCES

Adner, R., & Kapoor, R. (2009). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.

Barabási, A.-L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286, 509–512.

Barahona, M., & Pecora, L. (2002). Synchronization in small-world systems. Physical Review Letters, 89(5), 054101.

Becker, G. S., & Murphy, K. M. (1992). The division of labor, coordination costs, and knowledge. Quarterly Journal of Economics, 107(4), 1137–1160.

Boudreau, K. (2012). Let a thousand flowers bloom? An early look at large numbers of software app developers and patterns of innovation. Organization Science. doi:10.1287/orsc.1110.0678

Bresnahan, T., & Greenstein, S. (1999). Technological competition and the structure of the computer industry. Journal of Industrial Economics, 47(1), 1–40.

Brown, S. L., & Eisenhardt, K. M. (1995). Product development: Past research, present findings, and future directions. Academy of Management Review, 20(2), 343–378.

Brown, S. L., & Eisenhardt, K. M. (1997). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42, 1–34.

Browning, L., Beyer, J. M., & Shetler, J. C. (1995). Building cooperation in a competitive industry: SEMATECH and the semiconductor industry. Academy of Management Journal, 38(1), 113–151.

Burton, R. M., & Obel, B. (1995). The validity of computational models in organization science: From model realism to purpose of the model. Computational and Mathematical Organization Theory, 1(1), 57–72.

Camerer, C., & Knez, M. (1996). Coordination, organizational boundaries and fads in business practices. Industrial and Corporate Change, 5, 89–112.

Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702–734.

Chang, S. J., & Choi, U. (1988). Strategy, structure, and performance of Korean business groups: A transactions cost approach. Journal of Industrial Economics, 37, 141–158.


Chwe, M. (2001). Rational ritual: Culture, coordination, and common knowledge. Princeton, NJ: Princeton University Press.

Clark, K., & Fujimoto, T. (1991). Product development performance. Boston, MA: Harvard Business School Press.

Davis, G. F., & Greve, H. R. (1997). Corporate elite networks and governance changes in the 1980s. American Journal of Sociology, 103(1), 1–37.

Davis, J. P. (2007). Collaborative innovation, organizational symbiosis, and the embeddedness of strategy. Unpublished dissertation, Stanford University, Stanford, CA.

Davis, J. P., & Eisenhardt, K. M. (2011). Rotating leadership and collaborative innovation: Recombination processes in symbiotic relationships. Administrative Science Quarterly, 56, 159–201.

Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32(2), 480–499.

DiMaggio, P., & Garip, F. (2008). Intergroup inequality as a product of diffusion of practices with network externalities under conditions of homophily: Applications to the digital divide in the U.S. and rural/urban migration in Thailand. Working Paper.

Dougherty, D. (1992). Interpretive barriers to successful product innovation in large firms. Organization Science, 3(2), 179–202.

Fisman, R. (2001). Estimating the value of political connections. American Economic Review, 91, 1095–1102.

Fleming, L., King, C., III, & Juda, A. I. (2007). Small worlds and regional innovation. Organization Science, 18, 938–954.

Gawer, A., & Cusumano, M. (2002). Platform leadership: How Intel, Microsoft, and Cisco drive industry innovation. Boston, MA: Harvard Business School Press.

Gawer, A., & Henderson, R. (2007). Platform owner entry and innovation in complementary markets: Evidence from Intel. Journal of Economics and Management Strategy, 16(1), 1–34.

Ghemawat, P., & Khanna, T. (1998). The nature of diversified business groups: A research design and two case studies. Journal of Industrial Economics, 46, 35–62.

Granovetter, M. (2005). Business groups and social organization. In N. J. Smelser & R. Swedberg (Eds.), The handbook of economic sociology (2nd ed., pp. 429–450). Princeton, NJ: Princeton University Press.

Greve, H. R. (2009). Bigger and safer: The diffusion of competitive advantage. Strategic Management Journal, 30, 1–23.

Guillén, M. F. (2000). Business groups in emerging economies: A resource-based view. Academy of Management Journal, 43, 362–380.

Gulati, R., & Gargiulo, M. (1999). Where do interorganizational networks come from? American Journal of Sociology, 104(5), 1439–1493.

Gulati, R., Lawrence, P. R., & Puranam, P. (2005). Adaptation in vertical relationships: Beyond incentive conflict. Strategic Management Journal, 26, 415–440.

Gulati, R., & Sytch, M. (2007). Dependence asymmetry and joint dependence in interorganizational relationships: Effects of embeddedness on a manufacturer's performance in procurement relationships. Administrative Science Quarterly, 52, 32–69.

Heath, C., & Staudenmayer, N. (2000). Coordination neglect: How lay theories of organizing complicate coordination in organizations. Research in Organizational Behavior, 22, 153–191.

Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly, 35, 9–30.


Hoopes, D. G., & Postrel, S. (1999). Shared knowledge, "glitches," and product development performance. Strategic Management Journal, 20(9), 837–865.
Iansiti, M. (1995). Technology integration: Managing technological evolution in a complex environment. Research Policy, 24, 521–542.
Kapoor, R. (2013). Collaborating with complementors: What do firms do? In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 3–25). Bingley, UK: Emerald Group Publishing Limited.
Katila, R., & Ahuja, G. (2002). Something old, something new: A longitudinal study of search behavior and new product introduction. Academy of Management Journal, 45(6), 1183–1194.
Keister, L. (2000). Chinese business groups: The structure and impact of interfirm relations during economic development. New York, NY: Oxford University Press.
Khanna, T., & Rivkin, J. (2001). Estimating the performance effects of business groups in emerging markets. Strategic Management Journal, 22, 45–74.
Khanna, T., & Rivkin, J. (2006). Interorganizational ties and business group boundaries: Evidence from an emerging economy. Organization Science, 17(3), 333–352.
Khanna, T., & Yafeh, Y. (2007). Business groups in emerging markets: Paragons or parasites? Journal of Economic Literature, XLV, 331–372.
Kogut, B., & Zander, U. (1996). What firms do? Coordination, identity, and learning. Organization Science, 7(5), 502–523.
Lave, C., & March, J. G. (1975). An introduction to models in the social sciences. New York, NY: Harper & Row.
Law, A. M., & Kelton, D. W. (1991). Simulation modeling and analysis (2nd ed.). New York, NY: McGraw-Hill.
Lenox, M. J., Rockart, S. F., & Lewin, A. Y. (2007). Interdependency, competition, and industry dynamics. Management Science, 53(4), 599–615.
Mäkinen, S. J., & Dedehayir, O. (2013). Business ecosystems' evolution – An ecosystem clockspeed perspective. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 99–125). Bingley, UK: Emerald Group Publishing Limited.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.
Milgrom, P., Qian, Y., & Roberts, J. (1991). Complementarities, momentum, and the evolution of modern manufacturing. American Economic Review, 80(3), 511–528.
Milgrom, P., & Stokey, N. (1982). Information, trade and common knowledge. Journal of Economic Theory, 26(1), 17–27.
Mirollo, R. E., & Strogatz, S. H. (1990). Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics, 50(6), 1645–1662.
Nickerson, J. A., & Zenger, T. R. (2002). Being efficiently fickle: A dynamic theory of organizational choice. Organization Science, 13(5), 547–566.
Owen-Smith, J., & Powell, W. W. (2004). Knowledge networks as channels and conduits: The effects of spillovers in the Boston biotechnology community. Organization Science, 15(1), 5–21.


Peskin, C. S. (1975). Mathematical aspects of heart physiology (pp. 268–278). New York, NY: Courant Institute of Mathematical Sciences.
Podolny, J. M., & Page, K. L. (1998). Network forms of organization. Annual Review of Sociology, 24, 57–76.
Powell, W. W., Koput, K. W., & Smith-Doerr, L. (1996). Interorganizational collaboration and the locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly, 41, 116–145.
Powell, W. W., White, D. R., Koput, K. W., & Owen-Smith, J. (2005). Network dynamics and field evolution: The growth of interorganizational collaboration in the life sciences. American Journal of Sociology, 110(4), 1132–1205.
Puranam, P., Singh, H., & Chaudhuri, S. (2009). Integrating acquired capabilities: When structural integration is (un)necessary. Organization Science, 20(2), 313–328.
Repenning, N., & Sterman, J. (2002). Capability traps and self-confirming attribution errors in the dynamics of process improvement. Administrative Science Quarterly, 47, 265–295.
Rudolph, J., & Repenning, N. (2002). Disaster dynamics: Understanding the role of quantity in organizational collapse. Administrative Science Quarterly, 47, 1–30.
Saavedra, S., Hagerty, K., & Uzzi, B. (2011). Synchronicity, instant messaging, and performance among financial traders. Proceedings of the National Academy of Sciences, 108(13), 5296–5301.
Sakaguchi, H., Shinomoto, S., & Kuramoto, Y. (1987). Local and global self-entrainments in oscillator lattices. Progress of Theoretical Physics, 77(5), 1005–1010.
Sastry, M. A. (1997). Problems and paradoxes in a model of punctuated organizational change. Administrative Science Quarterly, 42(2), 237–275.
Schilling, M. A., & Phelps, C. (2007). Interfirm collaboration networks: The impact of small world connectivity on firm innovation. Management Science, 53, 1113–1126.
Scott, W. R. (2003). Organizations: Rational, natural and open systems (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Selznick, P. (1949). TVA and the grass roots: A study in the sociology of formal organization. New York, NY: Harper & Row.
Strang, D., & Macy, M. W. (2001). In search of excellence: Fads, success stories, and adaptive emulation. American Journal of Sociology, 107(1), 147–182.
Strang, D., & Soule, S. A. (1998). Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24, 265–290.
Strogatz, S. H. (2003). Sync: The emerging science of spontaneous order. New York, NY: Hyperion Books.
Stuart, T. E. (1998). Network positions and propensities to collaborate: An investigation of strategic alliance formation in a high-technology industry. Administrative Science Quarterly, 43(3), 668–698.
Stuart, T. E. (2000). Interorganizational alliances and the performance of firms: A study of growth and innovation rates in a high-technology industry. Strategic Management Journal, 21, 791–811.
Takahashi, D. (2006). The Xbox 360 uncloaked: The real story behind Microsoft's next-generation video game console. Raleigh, NC: Lulu Press.
Ulrich, K., & Eppinger, S. (1999). Product design and development. New York, NY: McGraw-Hill/Irwin.


Uzzi, B. (1997). Social structure and competition in interfirm networks: The paradox of embeddedness. Administrative Science Quarterly, 42, 36–67.
Uzzi, B. (1999). Embeddedness in the making of financial capital: How social relations and networks benefit firms seeking financing. American Sociological Review, 64, 481–505.
Uzzi, B., & Spiro, J. (2005). Collaboration and creativity: The small world problem. American Journal of Sociology, 111(2), 447–504.
Watts, D., & Strogatz, S. (1998). Collective dynamics of 'small world' networks. Nature, 393, 440–442.
West, J., & Wood, D. (2013). Evolving an open ecosystem: The rise and fall of the Symbian platform. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems. Advances in Strategic Management (Vol. 30, pp. 27–67). Bingley, UK: Emerald Group Publishing Limited.
Westphal, J. D., & Zajac, E. J. (1997). Defections from the inner circle: Social exchange, reciprocity, and the diffusion of board independence in U.S. corporations. Administrative Science Quarterly, 42, 161–183.
Whitford, J. (2005). The new old economy: Networks, institutions, and the organizational transformation of American manufacturing. New York, NY: Oxford University Press.
Winfree, A. T. (1967). Biological rhythms and the behavior of populations of coupled oscillators. Journal of Theoretical Biology, 16(1), 15–42.
Yiu, D. W., Lu, Y., Bruton, G. D., & Hoskisson, R. E. (2007). Business groups: An integrated model to focus future research. Journal of Management Studies, 44(8), 1551–1579.
Yoffie, D., & Kwak, M. (2006). With friends like these: The art of managing complementors. Harvard Business Review, 84(9), 88–98.
Zott, C. (2003). Dynamic capabilities and the emergence of intra-industry differential firm performance: Insights from a simulation study. Strategic Management Journal, 24, 97–125.
Zuckerman, E. W., & Sgourev, S. (2006). Peer capitalism: Parallel relationships in the U.S. economy. American Journal of Sociology, 111(5), 1327–1366.


APPENDIX: OPERATIONALIZING REGULAR AND RANDOM NETWORKS

Each simulation run used in the experiments presented below uses a newly generated network. Thus, each experiment may require 1,000 or more networks to be generated. To quickly generate these networks, I rely on three standard models in the literature on network dynamics and computation (Barabási & Albert, 1999; Watts & Strogatz, 1998).

As a baseline model, I sometimes generate regular ring lattice networks defined by parameters N and K. These networks are simply N nodes connected in a ring to each of their closest K neighbors. Ring lattice networks are said to be "regular" because they repeat a pattern for all nodes and ties – that is, a ring. These networks provide an easy manipulation check on the role that N and K play in network models since they are deterministic; consequently, known network statistics (density, centralization, etc.) are analytically computable for any choices of N and K. However, it is well known that the regularity of ring lattices can produce artifactual results that do not reflect the full range of dynamical behaviors. To explore the fuller parameter space of network dynamics and make appropriate inferences, random network models are needed. As a result, I only report the results of random network models. However, I should note that I ran both major analyses below – the time-to-sync and performance analyses – on ring lattices as a manipulation check and found that they displayed similar behavior to those of the second network generating model (see below).

The second network generating model is the Erdős–Rényi (ER) random network model. This model is very simple to operationalize. The network begins with N unconnected nodes. The parameter P_ER is the uniform probability ranging between 0 and 1 that a tie exists between any two nodes. In simulations, a random number can be compared to P_ER in order to determine whether any two nodes i and j have a tie. Since the expected number of ties is the same for each node, the P_ER that generates ER random networks with mean degree equal to K can be determined. (Note: For convenience, I use K as the label for mean degree in the analyses that follow, and d(n_i) for the exact degree of node i – that is, the actual number of ties emanating from node i.) Thus, network size (N) and mean degree (K) can be independently varied with this model. It should be noted, though, that ER networks sometimes produce disconnected networks where no paths exist between some stranded "islands" of nodes. The synchronization process cannot work across islands. Thus, I make one important modification to


the ER random network model. To ensure that networks are connected, I seed the ER network generator with a ring lattice with K=2 where every node is connected to at least two others, and make the appropriate correction to P_ER that ensures mean degree (K) is correct. This guarantees that all networks are connected – every node has at least two ties to its neighbors. I use this ER random network model to conduct all experiments involving N and K, which are parameters in the model, as well as tie strength, e.

Finally, to explore clustering (CC), I utilize the Watts–Strogatz (WS) small world model, a random network model of growing popularity. The algorithm to generate a WS network is also simple (Watts & Strogatz, 1998). Let N be the number of nodes, K be the desired mean degree, and Beta be the probability of rewiring. Then the model begins with a regular ring lattice of N nodes connected to K neighbors. For every focal node i, the probability Beta determines if each of i's ties will be rewired to a different node. Each other possible node is equally probable within the set of nodes that wouldn't generate self-ties or duplicate ties. Similar to the ER network algorithm, whether a tie is rewired can be determined by comparing Beta to a random number generated by the computer. After the algorithm has examined each node's ties, the program is finished generating the new network. N and K can be varied by varying the N and K in the original ring lattice since the algorithm eliminates no nodes or ties.

More significantly, Watts and Strogatz (1998) showed empirically that Beta in an intermediate region between .01 and .1 generates high clustering coefficients (CCs) but relatively short path lengths. Figure 2 in their paper shows that clustering increases dramatically in this region, but that path lengths remain almost as short as they were with Beta=1 (Watts & Strogatz, 1998). As a result, this is a key region in which to explore clustering since path length changes much less than clustering in this region. Prior modeling research indicated that increasing Beta may generally improve synchronization because more shortcuts enable shorter path lengths by which synchrony can be reinforced (Barahona & Pecora, 2002) – by choosing organizationally realistic network sizes with a higher mean degree, we can better isolate the effect of clustering as opposed to path length.

Finally, it is important to note two network statistics that will be used in the analysis. Consistent with graph theory, I define network density as the ratio of the total number of ties to the number of possible ties. In the ER and WS networks defined above, the total number of ties is given by NK/2; the possible number of ties is N(N-1)/2. Thus, density is given by


$$\text{Density} = \frac{K}{N-1} \tag{A.1}$$
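To make the two generating procedures concrete, the sketch below implements the seeded ER model and the WS rewiring algorithm in Python with NumPy. It is a minimal illustration, not the chapter's original simulation code; the function names, the boolean adjacency representation, and the seeding convention are mine.

```python
import numpy as np

def connected_er_network(n, k, rng):
    """Erdos-Renyi random network seeded with a K=2 ring lattice.

    Every node starts with ties to its two ring neighbors, so the network
    is connected; remaining ties are added uniformly at random with a
    probability corrected so that expected mean degree equals k (k >= 2).
    """
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):  # K=2 ring lattice seed: n ties in total
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = True
    remaining_pairs = n * (n - 1) / 2 - n       # pairs not tied by the seed
    p_er = max(0.0, (n * k / 2 - n) / remaining_pairs)
    for i in range(n):
        for j in range(i + 1, n):
            if not adj[i, j] and rng.random() < p_er:
                adj[i, j] = adj[j, i] = True
    return adj

def watts_strogatz_network(n, k, beta, rng):
    """Watts-Strogatz small world: ring lattice plus probabilistic rewiring.

    Assumes k is even. Each lattice tie (i, j) is rewired with probability
    beta to a uniformly chosen node that creates no self-tie or duplicate.
    """
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):  # ring lattice: ties to the k nearest neighbors
        for step in range(1, k // 2 + 1):
            adj[i, (i + step) % n] = adj[(i + step) % n, i] = True
    for i in range(n):
        for step in range(1, k // 2 + 1):
            j = (i + step) % n
            if adj[i, j] and rng.random() < beta:
                candidates = np.flatnonzero(~adj[i])   # nodes not tied to i
                candidates = candidates[candidates != i]
                if candidates.size:
                    new_j = rng.choice(candidates)
                    adj[i, j] = adj[j, i] = False
                    adj[i, new_j] = adj[new_j, i] = True
    return adj

rng = np.random.default_rng(42)
net = connected_er_network(100, 6, rng)
print(net.sum() / (100 * 99))  # density; cf. Eq. (A.1): K/(N-1) = 6/99
```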

The clustering coefficient is a network-level measure of the extent to which individual friends of friends are also friends. The network clustering coefficient is found by averaging the individual local clustering coefficients. An individual's local clustering coefficient, C_i, is given by

$$C_i = \frac{2E_{jk}}{k_i (k_i - 1)} \tag{A.2}$$

where E_jk is the total number of ties between i's partners, and k_i is the number of i's partners. After computing each C_i, the network's clustering coefficient, CC, is then given by

$$CC = \frac{1}{n} \sum_{i=1}^{n} C_i \tag{A.3}$$
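The two clustering equations translate directly into code; the sketch below follows Eqs. (A.2)–(A.3). Assigning C_i = 0 to nodes with fewer than two partners is my assumption (the seeded networks above guarantee at least two ties per node, so the case does not arise there).

```python
import numpy as np

def clustering_coefficient(adj):
    """Network clustering coefficient CC, per Eqs. (A.2)-(A.3)."""
    n = adj.shape[0]
    local = np.zeros(n)
    for i in range(n):
        partners = np.flatnonzero(adj[i])       # i's partners
        k_i = partners.size                     # k_i, i's degree
        if k_i < 2:
            continue                            # C_i = 0 by convention
        # E_jk: ties among i's partners; the submatrix sum counts each twice.
        e_jk = adj[np.ix_(partners, partners)].sum() / 2
        local[i] = 2 * e_jk / (k_i * (k_i - 1))
    return local.mean()                         # CC = (1/n) * sum of C_i
```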

POPULATION AND ORGANIZATION-LEVEL MEASURES: COOPTATION RATE AND SYNCHRONOUS PERFORMANCE

To analyze the behavior of the system, it is helpful to define a number of measures. First, the analysis by Mirollo and Strogatz (1990) indicated that temporal cooptation is an important window into synchrony. Therefore, to examine the synchronization process more directly and reveal the causal mechanisms at work, it is helpful to define a measure of the cooptation rate, λ_c, as the time-varying instantaneous rate of cooptation. A cooptation event occurs when an action generated by organization i causes the resources of another organization, j, that is not synchronized in time period t to become synchronized in time period t+1. Formally, a cooptation event in t+1, K_j(t+1), requires that X_i(t) ≠ X_j(t) and X_i(t+1) = X_j(t+1). K_j(t) takes only the values 1 (cooptation of j in time t) or 0 (no cooptation of j in time t). Doing so allows us to perform event history analyses and plot hazard rates of these events, a common technique in population-level organizational analysis. The hazard rate of cooptation, λ_c, is defined as

$$\lambda_c = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid t \le T)}{\Delta t} \tag{A.4}$$


where T is a positive and continuous random variable denoting the time of event transition from "non-cooptation of j" to "cooptation of j" and P(·) is simply the probability of cooptation between time t and t+Δt. The cooptation rate, λ_c, will be helpful in understanding how network synchronization unfolds over time.15
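To illustrate how these event-history quantities can be extracted from simulation output, the sketch below recovers first-cooptation times from a run's trajectory of resource states and forms a discrete-time life-table estimate of λ_c. The (T, N) `states` array, the numerical tolerance, and the treatment of never-coopted organizations as censored are all assumptions; the synchronization model that produces `states` is not reproduced here.

```python
import numpy as np

def first_cooptation_times(states, atol=1e-9):
    """First cooptation time for each organization j: the earliest t+1 at
    which some i with X_i(t) != X_j(t) satisfies X_i(t+1) == X_j(t+1).
    `states` is a (T, N) array of resource states from one simulation run;
    organizations never coopted are returned as NaN (censored)."""
    T, N = states.shape
    times = np.full(N, np.nan)
    for t in range(T - 1):
        for j in range(N):
            if not np.isnan(times[j]):
                continue
            others = np.delete(np.arange(N), j)
            hit = (~np.isclose(states[t, others], states[t, j], atol=atol)
                   & np.isclose(states[t + 1, others], states[t + 1, j], atol=atol))
            if hit.any():
                times[j] = t + 1
    return times

def empirical_hazard(times, horizon):
    """Life-table estimate of the cooptation rate lambda_c: events at t
    divided by organizations still at risk entering t."""
    t_evt = np.where(np.isnan(times), np.inf, times)  # censored cases
    hazard = np.zeros(horizon)
    for t in range(1, horizon + 1):
        at_risk = np.sum(t_evt >= t)
        hazard[t - 1] = np.sum(t_evt == t) / at_risk if at_risk else 0.0
    return hazard
```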

Next, it is useful to measure the performance of individual organizations in the context of synchronization. As described above, our intuition is that organizations might prefer to have the network synchrony tip to their own, original rhythm as defined by their initial condition resource state, X_i(1). Thus, a high-performing organization in a temporal sense is one that seeds the emergent synchronous cycle with its own underlying rhythm; conversely, low-performing organizations are those that are more likely to synchronize to other organizations' rhythmic impulses through multiple adjustments to their own rhythm. This measure will be used to determine if coordination shapes the network's ultimate convergent rhythm.

While many such measures are possible, in this context it is natural to contrast organizations that are coopted with those that do the coopting. Synchronization occurs in a step-by-step fashion in this model: early on, one or more organizations are coopted to the rhythm of another focal organization, w. As the model progresses, more organizations may be coopted to the coalition that contains this original organization w until some time when all organizations are synchronized to this rhythm. Of course, the actions of other organizations may change the exact rhythm of organization w and its growing coalition of synchronized organizations but, nonetheless, it is possible to find this original organization w – the temporal "winner" – which ultimately coopts all other organizations. For ease of discussion, I define "preferred rhythm" to be the rhythm that the winner achieves that involves the fewest changes possible to achieve synchrony – of course, the winner's preferred rhythm may differ from its original rhythm, but nonetheless this preferred rhythm represents the least costly adjustments necessary to synchronize. Simulation analysis enables detailed tracking of the exact time course of cooptation so that the winning organization w can be found by backtracking through the simulation output. For each simulation run, I record W_sync(i) for each organization i. It is defined by the following piecewise equation:

$$W_{sync}(i) = \begin{cases} 1 & \text{if } i = w \\ 0 & \text{if } i \neq w \end{cases} \tag{A.5}$$


That is, W_sync(i) is 1 if i is the winning organization w and 0 otherwise. When averaged over multiple simulation runs, the mean of wins and losses, ⟨W_sync(i)⟩, represents the likelihood of winning for organization i, and ranges from 0 to 1. While informative, ⟨W_sync(i)⟩ does not conform to our notion of performance. We desire a performance metric that compares organization i's likelihood of winning with a given characteristic to its likelihood of winning without that characteristic – that is, in treatment vs. control experimental conditions holding all other variables constant.16 That is, to draw inferences about the effect of the treatment conditions on the likelihood of winning, we need to adjust for the baseline likelihood of winning in the environment by subtracting this baseline likelihood from the likelihood of winning under the treatment condition, giving us the performance advantage of those conditions, labeled P_sync(i):

$$P_{sync}(i) = \langle W_{sync}(i) \rangle_{treatment} - \langle W_{sync}(i) \rangle_{control} \tag{A.6}$$

The ith organization's likelihood of winning for treatment and control must be calculated from separate simulation runs. This measure of performance captures the intuition that an organization's performance in the context of synchrony depends upon its relative capacity to coopt the network of other organizations to its own preferred rhythm. It provides an objective metric with which to compare the efficacy of different strategies (e.g., dyadic vs. triadic coordination) in different structures (e.g., less vs. more clustered) at the organization level. The sync performance of an organization ranges between −1 and 1.
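In code, Eqs. (A.5)–(A.6) reduce to a difference in mean win rates. A minimal sketch, assuming that 0/1 win indicators have been recorded across runs under treatment and control conditions (the array names are illustrative):

```python
import numpy as np

def sync_performance(wins_treatment, wins_control):
    """P_sync(i) per Eq. (A.6).

    Each argument is a (runs, N) array of W_sync(i) indicators: 1 when
    organization i was the winner w in a run, 0 otherwise (Eq. (A.5)).
    Averaging over runs gives <W_sync(i)>; differencing the treatment and
    control means gives a value in [-1, 1] for each organization.
    """
    return wins_treatment.mean(axis=0) - wins_control.mean(axis=0)
```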

OPEN INNOVATION NORMS AND KNOWLEDGE TRANSFER IN INTERFIRM TECHNOLOGY ALLIANCES: EVIDENCE FROM INFORMATION TECHNOLOGY, 1980–1999

Hans T. W. Frankort

ABSTRACT

Firms tend to transfer more knowledge in technology joint ventures compared to contractual technology agreements. Using insights from new institutional economics, this chapter explores to what extent the alliance governance association with interfirm knowledge transfer is sensitive to an evolving industry norm of collaboration connected to the logic of open innovation. The chapter examines 1,888 dyad-year observations on firms engaged in technology alliances in the U.S. information technology industry during 1980–1999. Using fixed effects linear models, it analyzes longitudinal changes in the alliance governance association with interfirm knowledge transfer, and how such changes vary in magnitude across bilateral versus multipartner alliances, and across computers, telecommunications equipment, software, and microelectronics


subsectors. Increases in industry-level alliance activity during 1980–1999 improved the knowledge transfer performance of contractual technology agreements relative to more hierarchical equity joint ventures. This effect was concentrated in bilateral rather than multipartner alliances, and in the software and microelectronics rather than computers and telecommunications equipment subsectors. Therefore, an evolving industry norm of collaboration may sometimes make more arms-length governance of a technology alliance a credible substitute for equity ownership, which can reduce the costs of interfirm R&D. Overall, the chapter shows that the performance of material practices that constitute innovation ecosystems, such as interfirm technology alliances, may differ over time subject to prevailing institutional norms of open innovation. This finding generates novel implications for the literatures on alliances, open innovation, and innovation ecosystems.

Keywords: Open innovation; industry norm of collaboration; technology alliance governance; multipartner alliances; interfirm knowledge transfer; information technology

Since the early 1980s, firms in knowledge-intensive industries such as biopharmaceuticals and information technology have widely increased their activities in the area of what Chesbrough (2003a) calls "open innovation." As opposed to closed innovation, open innovation signifies an innovation logic in which firms progressively encourage and engage in research and development (R&D) activities with a variety of external parties (Laursen & Salter, 2006), for example, through collaboration with universities and end users (Adams, Chiang, & Starkey, 2001; von Hippel, 2005), corporate venture capital investments (Dushnitsky, 2006), open-source projects (Kogut & Metiu, 2001), spinoff ventures (Parhankangas & Arenius, 2003), and technology alliances with competitors (Hagedoorn, 2002). Therefore, the success of firms increasingly rests on their ability to manage interactions with a range of different competitors, complementors, and distributors within their innovation ecosystem (Adner, 2006; Kapoor, 2013a; West & Wood, 2013).

With few exceptions, largely limited to descriptive historical accounts at the individual firm or industry levels of analysis (Chesbrough, 2003a; Powell, 1996; Powell & Giannella, 2010), research has typically focused on the different material practices that constitute open innovation (Dahlander & Gann, 2010). But beyond material practices, open innovation also reflects the progressive institutionalization of collaborative norms at the industry level

Since the early 1980s, firms in knowledge-intensive industries such as biopharmaceuticals and information technology have widely increased their activities in the area of what Chesbrough (2003a) calls ‘‘open innovation.’’ As opposed to closed innovation, open innovation signifies an innovation logic in which firms progressively encourage and engage in research and development (R&D) activities with a variety of external parties (Laursen & Salter, 2006), for example, through collaboration with universities and end users (Adams, Chiang, & Starkey, 2001; von Hippel, 2005), corporate venture capital investments (Dushnitsky, 2006), open-source projects (Kogut & Metiu, 2001), spinoff ventures (Parhankangas & Arenius, 2003), and technology alliances with competition (Hagedoorn, 2002). Therefore, the success of firms increasingly rests on their ability to manage interactions with a range of different competitors, complementors, and distributors within their innovation ecosystem (Adner, 2006; Kapoor, 2013a; West & Wood, 2013). With few exceptions, largely limited to descriptive historical accounts at the individual firm or industry levels of analysis (Chesbrough, 2003a; Powell, 1996; Powell & Giannella, 2010), research has typically focused on the different material practices that constitute open innovation (Dahlander & Gann, 2010). But beyond material practices, open innovation also reflects the progressive institutionalization of collaborative norms at the industry level

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

241

(Pattit, Raj, & Wilemon, 2012). We know comparatively less about the ways in which such broader industry norms condition the optimal organization of the different material practices that constitute innovation ecosystems. Consequently, an important question is how the performance of open innovation practices changes when an industry norm of collaboration evolves. Firms embedded in innovation ecosystems need to manage their interactions with competitors, complementors, and distributors and so they face critical managerial challenges along a number of performance dimensions (Adner, 2006): How can knowledge transfer to and from competitors be encouraged? How can interdependence with various complementors be managed? How can coordination across stages of the value chain be created and sustained? While answers to all three questions are important in order to understand the performance implications of firms’ innovation ecosystems, here, I focus on the first by studying interfirm knowledge transfer in one among a broader possible set of open innovation practices – technology alliances among competitors (Hagedoorn, 2002; Mowery & Teece, 1996). Therefore, the research question I set out to answer is this: how does knowledge transfer in interfirm technology alliances change when an industry norm of collaboration evolves? I address this question in three steps. I begin with the observation that in technology alliances that firms establish to perform joint research and development related to new technologies, products, and processes, interfirm knowledge transfer is likely to be greater when the alliance is governed by a more hierarchical equity joint venture rather than a more arms-length contractual agreement (Oxley & Wada, 2009). Second, I propose that in the U.S. information technology (IT) industry, the empirical context of my study, an industry norm of collaboration evolved during 1980–1999, which progressively acted as an institutional reputation and monitoring system. Finally, drawing on Williamson’s (1991) ‘‘shift parameter’’ framework, I develop and test the argument that this industry norm of collaboration represented an institutional shift parameter that disproportionally augmented knowledge transfer in more arms-length contractual agreements relative to more hierarchical equity joint ventures. Longitudinal analysis of 1,888 dyad-year observations on firms engaged in technology alliances in the U.S. IT industry during 1980–1999 broadly suggests support for the proposition that an industry norm of collaboration moderated the alliance governance association with interfirm knowledge transfer: over time, contractual agreements became significantly more effective as knowledge transfer conduits compared to joint ventures. Motivated both by conceptual differences between bilateral and multipartner

242

HANS T. W. FRANKORT

alliances and by differences in the proliferation of collaborative norms across several IT subsectors during the study period, the empirical analysis also speaks to two plausible contingencies that add nuance to the aggregated shift parameter effect. First, the benefits of an industry norm of collaboration appear concentrated disproportionally in bilateral rather than multipartner alliances and so particularly bilateral contractual agreements seem to have benefited from an industry norm of collaboration. Second, consistent with the idea that some IT subsectors may have seen a more significant increase in the prevalence and importance of collaborative norms during 1980–1999, the shift parameter effect appears concentrated in the software and microelectronics rather than computers and telecommunications equipment subsectors. These results generate conceptual implications for the literatures on alliances, open innovation, and innovation ecosystems. One managerial implication is that an evolving industry norm of collaboration may sometimes make more arms-length governance a credible substitute for equity ownership, thus reducing the costs of interfirm R&D.

CONCEPTUAL BACKGROUND Technology Alliance Governance, Appropriability Hazards, and Interfirm Knowledge Transfer Technology alliances constitute an important knowledge transfer mechanism because they channel the exchange of technological knowledge between partnered firms (Gomes-Casseres, Hagedoorn, & Jaffe, 2006; Mowery, Oxley, & Silverman, 1996; Oxley & Wada, 2009; Rosenkopf & Almeida, 2003; Stuart & Podolny, 1999). In this study, I define interfirm knowledge transfer as the process through which the technological knowledge of one firm is learned and applied by another firm, as reflected in changes within the latter’s knowledge stock (Argote, McEvily, & Reagans, 2003; Darr & Kurtzberg, 2000). Several studies suggest that the governance structure of technology alliances may influence interfirm knowledge transfer (Mowery et al., 1996; Oxley & Wada, 2009). The governance structure of alliances is important because it sets the conditions within which the partner firms can manage their relationship effectively (Oxley, 1997). Governance structures range from more arms-length contractual agreements between independent firms to more hierarchical equity-based joint ventures in which the partner firms ‘‘share ownership of the assets and derived revenues and, thus, share

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

243

monitoring and control rights’’ (Kogut, 1988, p. 175). From a transaction cost economics perspective, the optimal governance structure is the one that most competently addresses the contracting hazards of a given transaction (Oxley, 1997; Pisano, 1989). Absent notable contracting hazards, the default governance structure for an interfirm alliance is a contractual agreement, but when contracting hazards increase, an alliance may be more effectively governed by an equity joint venture (Oxley, 1997; Sampson, 2004). The primary contracting hazards in technology alliances are appropriability hazards, the hazards associated with the leakage of valuable intellectual property (Oxley, 1997; Teece, 1986). The use of a contractual agreement for the development and transfer of technological knowledge requires specification of the relevant property rights, and the monitoring and control mechanisms that support partners’ cooperation ex post. This may be problematic for at least two reasons. First, ex ante specification of a firm’s knowledge and know-how would effectively allow an alliance partner to acquire it without cost (Arrow, 1962, p. 615). Second, a technology alliance is formed to create new technologies, products, and processes – any of which by definition do not exist at the time of contracting – and so a better understanding of the contracted assets will only develop during the collaboration. Therefore, the inherent uncertainty and ambiguity in technology alliances will complicate adequate specification of contracts. Transaction cost economists thus argue that compared to a contractual agreement, a joint venture promotes greater cooperation and knowledge transfer between partnered firms because shared ownership helps align their incentives (Oxley & Wada, 2009). Moreover, because a joint venture is a separate legal and physical entity with a joint management board and administrative controls, and because of enhanced disclosure requirements (Pisano, 1989), it allows the partner firms to monitor and control the appropriation of technological knowledge (Kogut, 1988). Therefore, knowledge transfer should be greater in joint ventures compared to contractual agreements (Mowery et al., 1996; Oxley & Wada, 2009).

The Importance of Industry Norms The baseline expectation that equity joint ventures are associated with greater interfirm knowledge transfer than contractual agreements is agnostic about the industry environment within which partnered firms are embedded. However, firms perform their technology alliance activities

244

HANS T. W. FRANKORT

against the backdrop of an evolving institutional environment that provides ‘‘a set of fundamental y ground rules’’ (Davis & North, 1971, p. 6). Institutional theories argue that industry norms represent one key set of ground rules that prescribe expectations about firms’ patterns of behavior (Scott, 2001). As Scott notes, ‘‘normative elements involve the creation of expectations that introduce a prescriptive, evaluative, and obligatory dimension into social life’’ (2003, p. 880). A prevailing norm expresses the ultimate value attitudes held by individual actors, which then form the basis for ‘‘positive or negative sanctions that reinforce obedience to the institutional norm’’ (Coleman, 1990, p. 334). Therefore, normative conformity may be an important source of reputation and legitimacy when firms are subject to the sanctioning mechanisms associated with industry norms (Fauchart & von Hippel, 2008). Beyond arguing that macro-level industry norms may motivate firms to portray certain desirable behaviors, theoretical work in new institutional economics has suggested that an industry norm may represent a ‘‘shift parameter’’ that can interact with micro-level institutional arrangements, such as governance structures in technology alliances, in shaping the benefits firms will reap from their transactions (Williamson, 1991). By this logic, it is reasonable to imagine that the alliance governance association with interfirm knowledge transfer may differ subject to prevailing industry norms. This observation is critical to the extent that such industry norms change over time, in which case the optimal organization of otherwise identical technology alliances may differ depending on the period in which they occur. Building on this insight, in what follows I will situate my focus on knowledge transfer in technology alliances within the empirical context of my study – the U.S. information technology industry during 1980–1999 – to develop the argument that an evolving industry norm of collaboration has shifted knowledge transfer performance from more toward less hierarchical alliance governance structures.

An Industry Norm of Collaboration in IT, 1980–1999 An industry norm of collaboration is one industry norm that has emerged during the past several decades, especially within knowledge-intensive industries such as biopharmaceuticals and information technology (Pattit et al., 2012, pp. 314–315). It represents an element of the broader logic that Chesbrough (2003a) has labeled ‘‘open innovation,’’ an emergent innovation model in which firms systematically encourage and engage in R&D

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

245

activities with a range of external actors (Dahlander & Gann, 2010; Laursen & Salter, 2006). In the United States, after several decades following World War II characterized by the inward orientation of industrial R&D (Mowery & Teece, 1996), the emergence of an industry norm of collaboration, and more open innovation practices in general, was reflected in a number of developments. For example, the Bayh–Dole Act of 1980 began to permit universities and small businesses to claim ownership of intellectual property associated with federally funded research (Mowery, Nelson, Sampat, & Ziedonis, 2004). It was one of the factors increasing the number and size of industry–university cooperative research centers that stimulated technology transfer between academic institutions and industry (Adams et al., 2001). Not surprisingly, perhaps, the percentage of university research funded by industry rose from about 4% in the 1980s to about 20% during the early 1990s (Cohen, Florida, & Goe, 1994; Rosenberg & Nelson, 1996). U.S. universities also began creating spinouts at an increasing rate, which progressively created linkages between academia and industry (Mowery et al., 2004). As a consequence, the percentage of new products and processes based on academic research increased during the 1980s and 1990s. Mansfield (1998) presents survey data suggesting that in the information processing industry, the percentage of new products and processes that could not have been developed in the absence of recent academic research rose from 11% (for both products and processes) during 1975–1985 to, respectively, 19% for products and 16% for processes during 1986–1994. Moreover, in the same industry, the average time interval between an academic finding and the commercial introduction of a product or process developed with very substantial aid from such a finding decreased from 6.2 years during 1975–1985 to 2.4 years during 1986–1994. Beyond a growth in the number and extent of university–industry linkages, a second factor reflecting an emerging industry norm of collaboration is a growth in the level of corporate venture capital (CVC) investments during 1980–1999, especially in sectors such as computers, telecommunications, and semiconductors (Dushnitsky, 2006). The trend in CVC investments paralleled a steep increase in the availability of venture capital more broadly (Gompers & Lerner, 2001). CVC investments are interfirm relationships that allow established firms to tap into emerging technology fields and major IT companies such as Intel, Cisco, and Microsoft were among the largest venturing firms driving this growth. Underlying the explosive growth in CVC investment during the 1980s and

246

HANS T. W. FRANKORT

1990s is the fact that among the largest venturing firms during 1969–1999, all IT-related firms except Xerox and Motorola began investing only well after 1980 (Dushnitsky, 2006, p. 395). A third development indicative of a mounting trend toward collaboration is firms’ increasing engagement with customers (von Hippel, 2005). For example, John Armstrong, former vice president for science and technology at IBM, describes the proliferation of joint projects between IBM researchers and customers (Armstrong, 1996). Involvement of customers, in general, and lead users, in particular, has increased in scale and importance in IT sectors such as microelectronics (custom-integrated circuits, see von Hippel, 2005, pp. 127–128) and software, where user communities have increasingly contributed to the development of open-source software (Kogut & Metiu, 2001; Lerner & Tirole, 2002). A fourth reflection of firms’ increasing commitment to open innovation, following a broader wave of corporate refocusing during the 1980s (Davis, Diekmann, & Tinsley, 1994), was their growing propensity to generate spinoff ventures (Chesbrough, 2003b; Parhankangas & Arenius, 2003). Such spinoffs began to generate networks of knowledge transfer and resource sharing between established and new firms. Spinoffs are consistent with firms’ refocusing on their core activities and reducing the scale and scope of internal R&D. During the 1980s the IT industry went through a gradual process of vertical de-integration and toward horizontal organization, which increased the specialization of firms’ technological knowledge (Bresnahan & Greenstein, 1999; Langlois, 1990; Macher & Mowery, 2004). Growing specialization was reflected in the increasing distribution of technological knowledge and intellectual property rights across firms. For example, the number of firms performing R&D in the United States information technology industry more than doubled between 1986 and 1999, while employment in firms with over 10,000 employees dropped by roughly 30% during 1980–1999 (National Science Foundation, 2010). Moreover, the number of corporate assignees successfully filing IT patents increased from less than 1,000 in 1980 to more than 4,000 in 1999 (Hall, Jaffe, & Trajtenberg, 2002). Therefore, firms became gradually more dependent on others to implement their technological knowledge in integrated solutions. To this point, examples reflecting an evolving industry norm of collaboration have focused on university–industry collaboration, investment into and creation of new firms, and interaction with users. In all those categories, the IT industry has seen a steady increase in activity during 1980–1999 and so it appears reasonable to imagine that open innovation norms may have become increasingly institutionalized within IT.

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

247

A final factor consistent with an evolving industry norm of collaboration, and one particularly central to the thesis of this study, is a steep increase in the number of newly formed technology alliances (Hagedoorn, 2002). Based on the Cooperative Agreements and Technology Indicators database (CATI, see Hagedoorn, 2002), Fig. 1 describes patterns of technology alliance formation in the population of IT technology alliances during 1980–1999. The aggregate number of newly formed technology alliances increased considerably during this time window, from a few dozen in the early 1980s to several hundred in the 1990s. This trend reflects, first, the increasingly widespread distribution of technological assets (Hall et al., 2002) as well as changes in the antitrust regime in the United States through the National Cooperative Research Act of 1984, which reduced potential antitrust liabilities on research joint ventures and standards development organizations. Second, it is consistent with an increase in the costs of R&D (Hagedoorn, 1993; Mowery & Teece, 1996). Finally, an increase in international competition motivated especially U.S. firms to join their efforts to be able to face a growth in the number of technologically sophisticated competitors internationally (Nelson, 1990). Overall, the result was a steady increase in the connectedness of firms in IT through technology 350 300 250 200 # 150 100 50 0 1980 1982 1984 1986 1988 1990 1992 1994 1996 1998 Calendar year All technology alliances Joint ventures

Fig. 1.

Contractual agreements Multisector alliances

Number of Newly Formed Technology Alliances in IT, 1980–1999. Source: CATI.

248

HANS T. W. FRANKORT

alliances (Cloodt, Hagedoorn, & Roijakkers, 2006, 2010), and the progressive centrality of prominent firms such as IBM, Hewlett-Packard, and Microsoft that began to function as ‘‘important mediators in the information flows among different partners’’ (Cloodt et al., 2006, p. 738). Fig. 1 also shows that the aggregate alliance formation pattern was characterized by a growth in the prevalence of contractual agreements relative to joint ventures. Moreover, IT technology alliances became increasingly multisectoral – that is, focused on multiple IT subsectors, such as computers, telecommunications equipment, microelectronics, and software. For example, large manufacturers like IBM began to adapt their technology alliance portfolios toward subsectors such as software to be able to become integrated service providers (Dittrich, Duysters, & de Man, 2007; Hagedoorn & Frankort, 2008). The increase in multisectoral technology alliances reflects the progressive convergence of individual subsectors, sparked in part by evolving interconnections across technologies such as computers, telecom, software, networking, and the Internet (Cloodt et al., 2006; Graham & Mowery, 2003; Mowery & Teece, 1996). The proliferation of technology alliances within IT was also characterized by an increase in the formation of multipartner alliances. Fig. 2 shows the number of newly formed multipartner alliances in IT during 1980–1999. Multipartner alliances became increasingly popular during the 1980s, even though their growth stagnated somewhat during the 1990s. In subsectors 45 40 35 30 #

25 20 15 10 5 0 1980 1982 1984 1986 1988 1990 1992 1994 1996 1998 Calendar year

Fig. 2.

Number of Newly Formed Multipartner Technology Alliances in IT, 1980–1999. Source: CATI.

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

249

such as microelectronics, the early growth of multipartner alliances was in part associated with the formation of the Semiconductor Industry Association (established in 1977) and consortia like SEMATECH (established in 1987), both of which stimulated and legitimized multiparty R&D collaboration (Browning, Beyer, & Shetler, 1995). Overall, descriptive evidence shows that during 1980–1999, the IT industry witnessed increases in university–industry collaboration, investment into and creation of new firms, interaction with users, and technology alliances among competitors. Paired with the possibility that several open innovation practices may feed on each other – for example, in the software subsector, Dushnitsky and Lavie (2010) show that alliances may function as an antecedent to their CVC investments and Van de Vrande and Vanhaverbeke (2013) show how the reverse may be true in pharmaceuticals – it is reasonable to suggest that an industry norm of collaboration became progressively institutionalized in IT during 1980–1999. Using Williamson’s (1991) shift parameter framework, the following section begins to connect an industry norm of collaboration to the alliance governance association with interfirm knowledge transfer.

An Industry Norm of Collaboration as a Shift Parameter Central to Williamson’s (1991) shift parameter framework is the assumption that parameters of the institutional environment, such as an industry norm of collaboration, can interact with the institutions of governance, such as governance structures in technology alliances. An industry norm of collaboration is associated with value attitudes of cooperation as the basis for positive and negative sanctioning. I propose that variance in reputational concerns supplies the mechanism connecting an industry norm of collaboration and concomitant value attitudes of cooperation to the knowledge transfer implications of alliance governance. These reputational concerns arise both from the growing importance of a reputation for cooperation and an increasing likelihood that information about firms’ reputations spreads in the industry. First, linkages across firms in IT increased in prevalence and importance during 1980–1999, which reflected both the progressive dependency of firms’ businesses on external partnering as well as the growing need to attract new partners in the future. Therefore, relational legitimacy – that is, the ‘‘perceived worthiness as an attractive alliance partner’’ (Dacin, Oliver, & Roy, 2007, p. 174) – became more important to partnered firms. Because

250

HANS T. W. FRANKORT

opportunistic behavior puts firms at risk of compromising their ability to form new alliances in the future, an increasing dependency on collaboration with external parties makes firms more likely to conform actively to behavioral norms associated with appropriate cooperative behavior (Suchman, 1995). Second, an increase in linkages across firms, in general, and technology alliances among competition, in particular, also generates the possibility that information about a firm’s reputation spreads more quickly within the industry. In a relatively disconnected setting, alliance partners operate more or less in isolation, while growing interconnectedness exposes firms to a setting where the amount of information available about potential partners is progressively larger. Because firms draw on their external network to gather information about potential exchange partners (Gulati, 1999), both cooperative as well as opportunistic behaviors are more quickly and accurately communicated when firms are connected to each other through a larger number of direct and indirect linkages (Williamson, 1991, pp. 290– 291). Consistent with the idea that greater connectedness in turn reduces the probability of opportunism, Robinson and Stuart (2007) show that equity participation in a strategic alliance between two firms diminishes, and pledged funding increases, when the firms are more proximate in an industry’s alliance network. Overall, an industry norm of collaboration acts as an institutional reputation and monitoring system that establishes the importance of relational legitimacy and shapes firms’ ability to find out about others’ reputations. Therefore, when a reputation for cooperation becomes more important, the probability of opportunistic behavior is likely to decrease, which is reinforced by the likelihood that reputational information reaches a greater number of actors more quickly (Provan, 1993). A crucial question remains: does the reputation mechanism associated with an industry norm of collaboration operate differently in contractual agreements compared to joint ventures? Reputational concerns attenuate the probability of opportunistic behavior by alliance partners and should therefore have greater significance in situations where appropriation concerns are more prevalent (Oxley, 1999; Williamson, 1991). All else equal, the transactional hazards associated with the possibility that partners appropriate each other’s technological knowledge are greatest in alliances with a limited capacity to monitor and align the incentives of alliance partners – that is, contractual agreements. In joint ventures, alternatively, the joint management board, administrative controls, and enhanced disclosure requirements allow partner firms to monitor and control the

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

251

appropriation of knowledge, while shared ownership helps align partners’ incentives. Therefore, broader reputational concerns should play a more limited role in joint ventures, where appropriation concerns are less pronounced. Overall, these arguments suggest that an industry norm of collaboration represents an institutional shift parameter whose benefits are concentrated disproportionally in contractual agreements rather than equity joint ventures. Hypothesis 1. An evolving industry norm of collaboration increases knowledge transfer in technology alliances governed by contractual agreement relative to those governed by equity joint venture. An industry norm of collaboration should act on the probability of opportunistic behavior depending on the monitoring and incentive alignment capacity of an alliance. Thus, if the number of partners in an alliance affects levels of monitoring and incentive alignment, then the shift parameter effect of an industry norm of collaboration (as summarized in Hypothesis 1) will not be neutral between bilateral and multipartner alliances. Multipartner alliances embed a collaborative dyad in a cohesive group of interacting firms because they connect partners through multiple reciprocal linkages (Li, Eden, Hitt, Ireland, & Garrett, 2012). By acting as an echo chamber for both positive and negative behaviors of partnered firms, cohesion in a multipartner alliance may act as a monitoring, reputationinducing, structure that can align the incentives of partner firms. Two firms affiliated to one or more third parties are subject to stronger reputational concerns and the possibility that they will be sanctioned for noncooperative behavior (Burt & Knez, 1995). Multipartner alliances may thus contain a self-enforcing governance mechanism that lowers the value of noncooperative behavior within a dyad. Instead, others have suggested that multipartner alliances may be prone to strategic behaviors such as free riding and coalition building and so the probability of opportunism within a dyad may actually be greater, rather than smaller, when it is embedded in a multipartner alliance (Lavie, Lechner, & Singh, 2007). Overall, cohesion and opportunism perspectives on multipartner alliances hold opposing views on the incentive properties of multipartner alliances compared to bilateral alliances, with the former arguing for stronger, and the latter for weaker, incentive alignment and monitoring in multipartner compared to bilateral alliances. All else equal, therefore, application of the shift parameter logic to the cohesion perspective suggests that an industry norm of collaboration increases knowledge transfer in bilateral relative to multipartner technology alliances, while the opportunism perspective on

252

HANS T. W. FRANKORT

multipartner alliances instead suggests that an industry norm of collaboration increases knowledge transfer in multipartner relative to bilateral technology alliances. Integration of these views with the arguments motivating Hypothesis 1 thus generates two rival predictions. Given that monitoring and incentive alignment is weaker in contractual agreements compared to joint ventures, the cohesion perspective suggests that an industry norm of collaboration will have the greatest positive effect on knowledge transfer in bilateral contractual agreements, while the effect will be weakest in multipartner joint ventures. Hypothesis 2a. An evolving industry norm of collaboration increases knowledge transfer in bilateral technology alliances governed by contractual agreement relative to other technology alliances. Instead, the opportunism perspective on multipartner alliances suggests that an industry norm of collaboration will have the greatest positive effect on knowledge transfer in multipartner contractual agreements, while the effect will be weakest in bilateral joint ventures. Hypothesis 2b. An evolving industry norm of collaboration increases knowledge transfer in multipartner technology alliances governed by contractual agreement relative to other technology alliances. Differences across IT Subsectors Computers, telecommunications equipment, microelectronics, and software are among the prominent IT subsectors to which firms directed their technology alliance activities during 1980–1999. Differentiation in the sectoral emphasis of individual alliances naturally raises the question to what extent alliances focusing on different subsectors were subjected more strongly to an evolving industry norm of collaboration. In response to the growth of the PC and networking markets, vertically integrated computer and telecommunication equipment manufacturers became more open, but often involving alternative subsectors (Cloodt et al., 2010). For example, when IBM entered the PC market, it did so through a software alliance with Microsoft and a microprocessor alliance with Intel (Malerba, Nelson, Orsenigo, & Winter, 1999). The latter in fact stimulated openness in microelectronics, and not computers, by forcing Intel to share intellectual property related to its microprocessors with secondsource suppliers, such as AMD and Fujitsu (Hagedoorn & Schakenraad, 1992; Henkel, Baldwin, & Shih, 2012). Entry by new firms affected both computers and microelectronics subsectors during the 1980s, but established

Open Innovation Norms and Interfirm Knowledge Transfer, 1980–1999

253

firms in microelectronics were much more active in encouraging such subsector entry through licensing and alliances (Macher & Mowery, 2004). In semiconductors, for example, integrated incumbents began to transact extensively with a large number of specialized, ‘‘fabless,’’ entrants (Kapoor, 2013b). Moreover, in response to intensifying competition from the Japanese semiconductor industry, the Semiconductor Industry Association and consortia such as SEMATECH stimulated progressive collaboration among U.S. semiconductor firms (Browning et al., 1995), and such collaboration became more important with the advent of deep ultraviolet manufacturing technologies in the late 1980s (Iansiti, 1998). Vertical de-integration in computers was associated with large-scale entry by specialized software producers and, additionally, software radically increased in importance as a general purpose technology (Graham & Mowery, 2003). Because of the gradual commoditization of hardware, and given the growth of mass-markets for ‘‘packaged’’ software, networked computing, and the Internet, dynamism in the industry amplified as its focus shifted from computer hardware in the early 1980s toward a multitude of software applications in the 1990s. Moreover, as a result of an increasing demand for integrated systems, software also steeply increased in importance to semiconductor firms since the beginning of the 1990s (Grimblatt, 2002).1 Underlining the increasing liability of disconnectedness in such a setting, Cloodt et al. note that ‘‘to increase their ability to respond very quickly to the changes surrounding them y companies had to remain open innovators’’ (2010, p. 125). This discussion suggests that microelectronics and software subsectors may have seen a more significant increase in the prevalence and importance of open innovation norms during 1980–1999 than computers and telecommunications equipment subsectors. Therefore, the effects as predicted in Hypotheses 1 and 2a/2b are likely to be stronger in these two subsectors, though I leave this as an open empirical question. I next examine the shift parameter logic in the context of the U.S. information technology industry, both across and within individual industry subsectors.

METHOD

Data

I use data on technology alliance governance and patenting by firms engaged in technology alliances in the U.S. information technology industry during 1980–1999. Part of the data I use was matched for analyses reported in Gomes-Casseres et al. (2006).


The alliance data come from CATI (Hagedoorn, 2002), the patent data come from the NBER patent data file (Hall et al., 2002), and several control variables come from COMPUSTAT. For this study, I added further data from CATI, which contains information about technology alliances formed since 1960, as well as from the USPTO, Osiris, Datastream, SEC and 10-K filings, the U.S. Census Bureau, Eurostat, firms' annual reports, and numerous press releases. Three rules combined to determine firms' inclusion in the estimation sample. First, firms needed at least one patent in the IT patent classes in 1980–1999. Second, firms needed at least one technology alliance in IT during 1980–1999. Third, in each dyad, at least one firm needed to be headquartered in the United States.

I generated a dyad-year panel to test the refutable implications of the shift parameter logic. Of a total of 3,545 dyad-year records, owing to a lag specification and missing data on some of the control variables, I have complete data for 1,888 dyad-years, which form the basis for the statistical analyses. The panel is unbalanced, reflecting firms' increasing proclivity to enter into technology alliances during 1980–1999 (Hagedoorn, 2002).

All dyad- and firm-level measures are based on yearly adjacency matrices reflecting firms' technology alliance activities within a three-year window. For example, the 1993 matrix contains the alliances for 1991–1993, the 1994 matrix those for 1992–1994, and so on. Specifying an alliance window is important because termination dates cannot be traced for many alliances (roughly 90%). Further, including alliances only in the formation year would severely underestimate their impact on knowledge transfer between partnered firms. I based the three-year window on the approximately 10% of alliances with traceable duration, as documented in CATI. Left censoring may be a concern for sample firms that were already in business prior to the sampling window. I therefore include the technology alliances formed by the sample firms between 1978 and 1980 in the 1980 adjacency matrix, and those formed in 1979 in the 1981 adjacency matrix.
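To make the window construction concrete, the following is a minimal sketch, not the author's code; the record layout and column names are illustrative assumptions. It shows how a trailing three-year window translates alliance formation years into yearly dyad sets, with pre-1980 formations folded in exactly as described above:

```python
import pandas as pd

# Toy alliance records; column names (firm_i, firm_j, year_formed) are assumed.
alliances = pd.DataFrame({
    "firm_i": ["A", "A", "B"],
    "firm_j": ["B", "C", "C"],
    "year_formed": [1979, 1992, 1993],
})

WINDOW = 3  # an alliance formed in year t is counted as active in t, t+1, and t+2

def dyads_active_in(year: int) -> set:
    """Dyads with at least one alliance formed within the trailing window."""
    mask = alliances["year_formed"].between(year - WINDOW + 1, year)
    pairs = alliances.loc[mask, ["firm_i", "firm_j"]].itertuples(index=False)
    return {tuple(sorted(p)) for p in pairs}

# The 1980 set picks up 1978-1980 formations, the 1993 set 1991-1993, and so on.
panels = {t: dyads_active_in(t) for t in range(1980, 2000)}
print(panels[1980])  # {('A', 'B')}, from the 1979 formation
print(panels[1993])  # {('A', 'C'), ('B', 'C')}
```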

Dependent Variable and Analytic Strategy

The unit of analysis is the dyad-year, and so the empirical models focus on how interfirm knowledge transfer varies over time with the governance of the alliance(s) within a dyadic relationship. Prior research suggests that an aggregated (here: dyad-level) count of the number of patent cross-citations may be a valid indicator of knowledge transfer in general (Jaffe, Trajtenberg, & Fogarty, 2000), and within the context of interfirm technology alliances in particular (Frankort, Hagedoorn, & Letterie, 2012, pp. 517–518).


While citation-based approaches to interfirm knowledge transfer acknowledge that noise in patent citation measures is unavoidable, they assume that such noise mainly adds measurement error, inflating standard errors, pushing the estimates toward insignificance, and hence producing conservative estimates. However, some evidence suggests that beyond imprecision, patent citation measures may introduce bias as a consequence of citations inserted by patent examiners (Alcácer & Gittelman, 2006), and such bias may actually lead to overstated results. Therefore, I follow Alcácer and Oxley (2013) in using as the dependent variable the overlap in the distribution of firms' patenting activities across technology domains in a given year (Jaffe, 1986). While this measure is based on patents rather than patent citations, consistent with my knowledge transfer definition, it nevertheless captures the notion that convergence in firms' technological activities can be viewed as evidence of interfirm knowledge transfer (Mowery et al., 1996).

I begin with firm-level patent class distribution vectors $F_{i,t+1} = (F_{i,1,t+1}, \ldots, F_{i,K,t+1})$ describing firm $i$'s position in technology space in year $t+1$, where $F_{i,k,t+1}$ is firm $i$'s number of patents successfully applied for in patent class $k$ in year $t+1$. This generates two yearly vectors for each dyad, describing the distribution of partners' patenting activity across the USPTO's primary patent classes in existence during the sampling period (Hall et al., 2002, pp. 452–453). The technological overlap of partner firms $i$ and $j$ is then calculated as

$$ KT_{ij,t+1} = \frac{F_{i,t+1} F_{j,t+1}'}{\sqrt{\left(F_{i,t+1} F_{i,t+1}'\right)\left(F_{j,t+1} F_{j,t+1}'\right)}} $$

which is the uncentered correlation of the firms' patenting vectors. This measure is bounded by 0 and 1, and values closer to 1 indicate a greater overlap between the patenting activities of firms $i$ and $j$ across technology domains in a given year.
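As an illustration, here is a small sketch of the overlap computation, an assumed implementation with toy vectors rather than the actual USPTO class counts:

```python
import numpy as np

def technological_overlap(f_i: np.ndarray, f_j: np.ndarray) -> float:
    """Uncentered correlation (Jaffe, 1986) of two patent-class count vectors.

    Each vector holds one firm's patent counts per primary patent class in a
    given year; the result lies in [0, 1], with 1 meaning identical
    distributions up to scale.
    """
    num = float(f_i @ f_j)
    den = np.sqrt(float(f_i @ f_i) * float(f_j @ f_j))
    return num / den if den > 0 else 0.0  # treat an empty vector as zero overlap

# Example: firm i patents mostly in class 1, firm j evenly in classes 1 and 2.
f_i = np.array([8.0, 1.0, 0.0])
f_j = np.array([5.0, 5.0, 0.0])
print(technological_overlap(f_i, f_j))  # ~0.79
```

Because the correlation is uncentered and scale-free, two firms patenting in the same classes in the same proportions score 1 regardless of how many patents each holds.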

The econometric models focus on the association between the technological overlap measure of interfirm knowledge transfer $KT_{ij,t+1}$, describing a dyadic relationship between two firms $i$ and $j$ in year $t+1$, and measures of alliance governance, multipartner collaboration, and an industry norm of collaboration in year $t$ – vectors $X_{ij,t}$ and $X_t$. Specifically:

$$ E(KT_{ij,t+1}) \propto \pi_{ij} X_{ij,t} + \pi X_t + \kappa_{ij} X_{ij,t} X_t + \tau_{ij} R_{ij,t} + \gamma_i C_{i,t} + \gamma_j C_{j,t} + \phi Y_t + \nu_{ij} \lambda_{ij,t} + \rho_{ij} W y_{ij,t} + \delta_{ij} $$


where $R_{ij,t}$ is a vector of dyad-level control variables; $C_{i,t}$ and $C_{j,t}$ are vectors of firm-level controls; $Y_t$ is a vector of time-period effects; $\lambda_{ij,t}$ is an inverse Mills ratio; $W y_{ij,t}$ is a dyad autocorrelation term; and $\delta_{ij}$ is a dyad-specific fixed effect. All independent and control variables are lagged by one year to avoid simultaneity. I estimate all models using fixed effects linear specifications.

Alliance governance may be based on firm, dyad, or industry characteristics, and so similar factors may determine both alliance governance and interfirm knowledge transfer (Shaver, 1998). From an omitted variables perspective (Heckman, 1979), alliance governance is therefore endogenous in the knowledge transfer equation if key determinants of alliance governance that correlate with interfirm knowledge transfer remain uncontrolled. To capture several such factors, I use a number of control variables, and I capture unobserved heterogeneity by including dyadic and time fixed effects and a dyad autocorrelation variable. Additionally, I use two-stage specifications that account for a selection hazard (Heckman, 1979). To absorb any effects on interfirm knowledge transfer that would otherwise be spurious treatment effects of alliance governance, I include an inverse Mills ratio (i.e., $\lambda_{ij,t}$) constructed from robust probit estimates in the knowledge transfer models. The appendix details estimation of the probit model.
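The following compresses this two-stage logic into a runnable sketch on synthetic data, using statsmodels and linearmodels. All variable names are illustrative assumptions, the first stage here uses only two regressors for brevity, and the inverse Mills ratio construction is simplified relative to the probit model detailed in the appendix:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm
from linearmodels.panel import PanelOLS

# Synthetic dyad-year data standing in for the estimation sample; all
# column names are illustrative assumptions, not the author's naming.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "dyad": rng.integers(0, 120, n),
    "year": rng.integers(1980, 2000, n),
    "joint_venture": rng.integers(0, 2, n),
    "industry_alliances": rng.uniform(20, 250, n),
    "partner_specific_alliances": rng.integers(1, 6, n),
    "alliance_experience_sum": rng.integers(0, 100, n),
}).drop_duplicates(["dyad", "year"])
df["overlap_t1"] = rng.uniform(0, 1, len(df))  # stand-in for KT in t+1

# Stage 1: probit for the governance choice, then the inverse Mills ratio
# lambda = phi(Xb) / Phi(Xb) appended as a selection control.
W = sm.add_constant(
    df[["partner_specific_alliances", "alliance_experience_sum"]]
).astype(float)
probit = sm.Probit(df["joint_venture"], W).fit(disp=0)
xb = probit.fittedvalues  # linear predictor X'beta
df["inv_mills"] = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: dyad fixed effects plus two-year-period dummies; yearly dummies
# would absorb the industry-level alliance measure (see below).
df["jv_x_industry"] = df["joint_venture"] * df["industry_alliances"]
periods = pd.get_dummies(df["year"] // 2, prefix="p", drop_first=True).astype(float)
panel = pd.concat([df, periods], axis=1).set_index(["dyad", "year"])
rhs = ["joint_venture", "industry_alliances", "jv_x_industry", "inv_mills"]
rhs += list(periods.columns)
fe = PanelOLS(panel["overlap_t1"], panel[rhs],
              entity_effects=True, drop_absorbed=True).fit()
print(fe.params["jv_x_industry"])  # Hypothesis 1 implies a negative estimate
```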

Independent Variables

Technology Alliance Governance and Multipartner Collaboration

I use the alliance classification data available in CATI to distinguish non-equity contractual agreements (CATI categories: Joint Research Pacts and Joint Development Agreements) from equity joint ventures (CATI categories: Joint Ventures and Research Corporations). The variable Joint venture represents the share of joint ventures in all technology alliances within a dyad in a given year. Multipartner collaboration takes the value of "1" if at least one active technology alliance between the two firms has three or more partners.

Industry Norm of Collaboration

I proxy an industry norm of collaboration using Industry alliances, the average number of newly formed IT alliances within the five-year window prior to the observation year. This measure reflects that the norm of collaboration evolved gradually, as captured by a moving average of year-by-year changes in the IT alliance formation series (see Fig. 1).
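A one-line sketch of this proxy, with assumed series values and names:

```python
import pandas as pd

# Yearly counts of newly formed IT alliances (toy numbers): for observation
# year t, the proxy is the average count over the five years t-5 .. t-1.
formations = pd.Series([30, 34, 41, 52, 60, 75, 90], index=range(1975, 1982))
industry_alliances = formations.rolling(window=5).mean().shift(1)
print(industry_alliances[1980])  # mean of the 1975-1979 counts = 43.4
```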


While admittedly a moving average of industry alliances is an imperfect proxy for an evolving industry norm of collaboration, estimation of models across subsectors that may have differed in the prevalence and importance of open innovation norms should assuage this concern.

I test Hypothesis 1 using the interaction between Joint venture and Industry alliances in the full sample. In subsamples distinguishing contractual agreements (Joint venture = 0) from joint ventures (Joint venture > 0), I assess the extent to which Industry alliances, the shift parameter, has a greater effect on knowledge transfer in contractual agreements than in joint ventures. I test Hypotheses 2a/2b using the interaction between Joint venture and Industry alliances in subsamples distinguishing bilateral alliances (Multipartner collaboration = 0) from multipartner alliances (Multipartner collaboration = 1). Moreover, I examine Hypothesis 1 across IT subsectors by comparing coefficients on the interaction between Joint venture and Industry alliances in subsector samples for computers and telecommunications equipment, microelectronics, and software. Finally, I examine Hypotheses 2a/2b across IT subsectors by comparing coefficients on the same interaction in these subsector samples after splitting them into bilateral alliances (Multipartner collaboration = 0) and multipartner alliances (Multipartner collaboration = 1).

To capture other macro factors homogeneously shaping the behavior and performance of sampled firms, I include dummies for each two-year period as time fixed effects. Inclusion of yearly dummies would preclude identification of the measure for industry alliances. In subsector analyses split by bilateral/multipartner alliances, I instead include a dummy for the 1990s to avoid the collinearity issues associated with two-year dummies, which would otherwise undermine the stability of the estimates in several smaller subsamples.

Control Variables

Dyad-Level Control Variables

Partner-specific alliances controls for the number of active technology alliances between two firms, while Partner-specific alliance experience captures the number of technology alliances between the two partner firms prior to the current three-year window. Moreover, each pair of firms collaborated in one or several IT subsectors. Therefore, I include dummy variables for Computers/telecom (as a result of the Internet, technologies in the computers and telecommunications equipment subsectors virtually merged during the 1990s; see Mowery & Teece, 1996), Microelectronics, and Software, each taking the value of "1" if the alliance(s) between two firms concerned activities within the respective subsector.


The dummies are not mutually exclusive, and so the counterfactual to a dyad's activity within a particular subsector is activity in zero or more other subsectors.

Firm-Level Control Variables

Firm attributes may influence interfirm knowledge transfer in two distinct ways (Lincoln, 1984, pp. 49–52). First, they index firms' dispositional tendencies, potentially affecting any dyad that firms are part of, irrespective of who the alliance partner is. Second, partner firms' attributes may combine to determine knowledge transfer, engendering interaction effects that are dyad specific. To capture dispositional effects, I include the sum of the two firms' scores on the respective controls.² Collinearity concerns preclude inclusion of the product terms for the firm-level controls in the knowledge transfer models. Because I have no substantive interest in these product terms as such, sole inclusion of the sum terms appears the most reasonable option.³

To account for the general alliance experience of the partner firms, I include Alliance experience, capturing the partners' total historical count of technology alliances formed outside the focal dyad until the observation year. I also include Time in network, measuring the number of years across which the partners' alliance experience had accumulated. I capture firm Age as the partners' logged age in years since incorporation. Also, because firms differ in size and R&D intensity, I control for firm Size by measuring the partners' asset value, and for R&D intensity as the ratio of R&D spending to sales, in a given year. The asset-based firm size control is particularly important because firms that differ in asset intensity may have responded differently to the strengthening of patent rights, especially in microelectronics and software, following the Diamond v. Diehr case in 1981 and Texas Instruments' successful court challenges to several Japanese and U.S. semiconductor firms during 1985–1986 (Hall, 2005; Hall & Ziedonis, 2001).

Firms may be active in multiple dyads simultaneously and, when unaccounted for, this generates dyadic autocorrelation, potentially leading to systematically underestimated standard errors for firm attributes that are constant across multiple dyads within years (Lincoln, 1984). More importantly, such autocorrelation may lead to the misattribution of partners' general proclivity to generate knowledge transfer within their alliances to characteristics of the focal dyad (such as alliance governance).


Therefore, I control for Dyad autocorrelation (i.e., $W y_{ij,t}$) as the mean of the dependent variable (technological overlap) for all dyads the partner firms maintained in a given year, excluding the focal dyad (Lincoln, 1984, pp. 56–61). Because any unobserved, time-varying firm characteristics driving interfirm knowledge transfer within the focal dyad would also be manifest in the knowledge transfer within the partners' broader set of alliances, this autocorrelation control additionally helps rule out a number of time-varying sources of unobserved heterogeneity at the partner firm level.
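A sketch of this leave-one-out construction, with column names assumed for illustration:

```python
import pandas as pd

def dyad_autocorrelation(df: pd.DataFrame) -> pd.Series:
    """For each focal dyad-year, average the technological overlap of every
    other dyad in which either partner is active that year; NaN if the
    partners maintain no other dyads in that year."""
    values = []
    for row in df.itertuples():
        same_year = df[(df["year"] == row.year) & (df.index != row.Index)]
        touches_partner = (
            same_year[["firm_i", "firm_j"]]
            .isin([row.firm_i, row.firm_j])
            .any(axis=1)
        )
        values.append(same_year.loc[touches_partner, "overlap"].mean())
    return pd.Series(values, index=df.index)

df = pd.DataFrame({
    "firm_i": ["A", "A", "B"],
    "firm_j": ["B", "C", "D"],
    "year": [1990, 1990, 1990],
    "overlap": [0.2, 0.4, 0.6],
})
print(dyad_autocorrelation(df).tolist())  # [0.5, 0.2, 0.2]
```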

RESULTS

Main Results

Table 1 shows descriptive statistics for all variables included in the analysis. Table 2 shows estimates for the knowledge transfer models. Models 1 and 2 in Table 2 present specifications based on the full sample. Model 1 shows the main effect for joint venture.

Table 1. Summary Statistics (n=1,888).

Variable                               Mean        SD          Min.     Max.
Technological overlap                  0.167       0.192       0        0.863
Industry alliances                     185.567     49.438      22.6     246.6
Joint venture                          0.426       0.495       0        1
Multipartner collaboration             0.261       0.439       0        1
Partner-specific alliances             1.396       1.056       1        14
Partner-specific alliance experience   0.777       1.492       0        17
Computers/telecom                      0.490       0.500       0        1
Microelectronics                       0.540       0.499       0        1
Software                               0.297       0.457       0        1
Alliance experience (sum)              83.788      70.306      3        383
Time in network (sum)                  19.204      8.229       2        41
Age (sum)                              7.339       1.166       3.296    9.915
Size (sum, in millions of U.S. $)      46,180.680  49,734.450  339.000  321,256
R&D intensity (sum)                    0.177       0.068       0.022    0.547
Dyad autocorrelation                   0.179       0.136       0        0.748
Inverse Mills ratio                    0.029       0.602       −3.071   2.488

Table 2. Fixed Effects Linear Models of Technology Alliance Governance and Interfirm Knowledge Transfer in IT, Full Sample and Split by Governance Structure and Number of Partners, 1980–1999.

                                       Full Sample                       Subsamples
                                       (1)              (2)              (3) Contractual  (4) Joint        (5) Bilateral    (6) Multipartner
                                                                         agreements       ventures         alliances        alliances
Industry alliances                     0.002 (0.001)    0.002 (0.001)    0.0027 (0.001)   0.001 (0.001)    0.000 (0.000)    0.001+ (0.001)
Joint venture                          0.126 (0.058)    0.235 (0.064)    –                –                0.380 (0.043)    0.386 (0.087)
Joint venture × Industry alliances     –                −0.001 (0.000)   –                –                −0.0006 (0.000)  0.0007 (0.000)
Multipartner collaboration             0.008 (0.009)    0.003 (0.009)    0.000 (0.028)    0.001 (0.010)    –                –
Partner-specific alliances             0.003 (0.002)    0.002 (0.002)    0.004 (0.004)    0.001 (0.003)    0.001 (0.003)    0.019 (0.005)
Partner-specific alliance experience   0.004 (0.003)    0.004 (0.003)    0.012 (0.008)    0.002 (0.004)    0.015 (0.004)    0.013 (0.005)
Computers/telecom                      0.008 (0.009)    0.007 (0.009)    0.002 (0.017)    0.010 (0.012)    0.002 (0.010)    0.019 (0.018)
Microelectronics                       0.021 (0.011)    0.020+ (0.010)   0.021 (0.020)    0.022 (0.014)    0.004 (0.011)    0.014 (0.021)
Software                               0.012 (0.010)    0.014 (0.010)    0.013 (0.018)    0.026 (0.012)    0.009 (0.011)    0.043 (0.020)
Alliance experience (sum)              0.000 (0.000)    0.000 (0.000)    0.000 (0.000)    0.000 (0.000)    0.000 (0.000)    0.000 (0.000)
Time in network (sum)                  0.017 (0.004)    0.016 (0.004)    0.020 (0.005)    0.003 (0.006)    0.002+ (0.001)   0.003 (0.003)
Age (sum)                              0.094 (0.027)    0.068 (0.028)    0.068 (0.047)    0.025 (0.039)    0.034 (0.006)    0.033 (0.014)
Size (sum)                             0.000 (0.000)    0.000+ (0.000)   0.000 (0.000)    0.000 (0.000)    0.000 (0.000)    0.000 (0.000)
R&D intensity (sum)                    0.023 (0.086)    0.041 (0.087)    0.134 (0.132)    0.000 (0.117)    0.115 (0.081)    0.073 (0.152)
Dyad autocorrelation                   0.014 (0.030)    0.016 (0.030)    0.031 (0.040)    0.024 (0.045)    0.018 (0.028)    0.110 (0.091)
Inverse Mills ratio                    0.018 (0.012)    0.018 (0.011)    0.036 (0.028)    0.010 (0.014)    0.095 (0.014)    0.087 (0.030)
Constant                               0.679 (0.206)    0.568 (0.206)    0.448 (0.339)    0.224 (0.314)    0.144 (0.114)    0.113 (0.184)
Time fixed effects                     Y                Y                Y                Y                Y                Y
Dyad-years                             1,888            1,888            1,083            805              1,396            492
Unique dyads                           581              581              458              126              518              171
R2                                     0.131            0.142            0.128            0.194            0.240            0.281

Standard errors are in parentheses. ***p<0.001; **p<0.01; *p<0.05; +p<0.1; all tests are two-tailed.


Consistent with prior research, the model suggests that joint venture governance is associated with greater interfirm knowledge transfer than governance by contractual agreement, even after controlling for the endogeneity of alliance governance. All else equal, knowledge transfer – that is, the subsequent overlap in the distribution of firms' patenting activities across technology domains – is 0.126 units greater in a joint venture than in a contractual agreement. The coefficient on the multipartner collaboration variable is insignificant.

Model 2 turns to an assessment of the shift parameter logic. Consistent with Hypothesis 1, the negative and significant interaction between Joint venture and Industry alliances suggests that the comparative knowledge transfer performance of joint ventures versus contractual agreements decreases in magnitude at higher levels of industry alliance activity. Because these estimates are within-dyad, and because in the cross-section an individual alliance cannot at once be a contractual agreement and a joint venture, the interaction term in Model 2 artificially collapses separate longitudinal changes in the different governance structures. For example, the interaction is consistent with an industry norm of collaboration solely increasing knowledge transfer in contractual agreements. It is also consistent with an industry norm of collaboration increasing knowledge transfer in both governance structures, but more so in contractual agreements. To assess these alternatives, Models 3 and 4 present estimates split by individual governance structure. A split-sample approach generates more conservative estimates because it allows residual variance to differ across subsamples (Greene, 2003).

In Models 3 and 4, the coefficient for industry alliances is positive and significant for contractual agreements but insignificant for joint ventures. Therefore, it appears reasonable to suggest that an industry norm of collaboration has generated shifts in the knowledge transfer performance of alliances through a disproportionate complementary effect on alliances governed by contractual agreement.

The results in Models 3 and 4 are based on fixed effects specifications that impose a within-dyad correlation structure on the data. Therefore, we can interpret the effect of an industry norm of collaboration in Model 3 as acting on knowledge transfer in a contractual agreement moving through 1980–1999. Based on Model 3, Fig. 3 shows estimates of knowledge transfer in a contractual agreement during the sampling period, assuming all else equal. The figure shows that knowledge transfer in contractual agreements increased considerably over time. The conditional effect of industry alliances is 0.065 in 1980, while it is 0.711 in 1999 (see Fig. 3). Because the sample standard deviation of technological overlap is 0.192, these estimates suggest that an industry norm of collaboration generated an increase in the knowledge transfer performance of contractual agreements of around 3.4 standard deviations during 1980–1999.
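As a quick arithmetic check on that magnitude, using the rounded conditional effects reported above:

$$ \frac{0.711 - 0.065}{0.192} \approx 3.4 $$

standard deviations of technological overlap over the sampling period.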


[Fig. 3 appears here: a line chart of technological overlap (0–1) by calendar year, 1980–1999.]

Fig. 3. Estimates of Knowledge Transfer (Technological Overlap) in Contractual Agreements, All Else Equal, 1980–1999. Note: The line shows the conditional effect of industry alliances on technological overlap in contractual agreements across 1980–1999, based on Table 2, Model 3.

Models 5 and 6 in Table 2 show estimates split by bilateral versus multipartner alliances. Consistent with Hypothesis 2a, the interaction between Joint venture and Industry alliances is negative and significant in bilateral alliances (Model 5), while it is insignificant in multipartner alliances (Model 6). Therefore, the benefits of an industry norm of collaboration appear concentrated disproportionately in bilateral contractual agreements. Because the reputation mechanism associated with an industry norm of collaboration should act on the probability of opportunistic behavior depending on the monitoring and incentive alignment capacity of an alliance, this result suggests that the baseline probability of opportunism is lower in multipartner alliances than in bilateral alliances.

It is important to note that the results in Table 2 are unlikely to reflect firms or dyads simply learning to collaborate and coordinate, because they are drawn from models that hold constant a large number of learning correlates (e.g., partner-specific alliance experience, firms' alliance experience, and their network tenure, age, and R&D intensity) and additionally control for both stable and time-variant dyadic and firm heterogeneity, unobserved temporal effects, and the endogeneity of alliance governance.


Subsector Results

The industry alliances measure is an imperfect proxy for an industry norm of collaboration, and so I generated a number of additional models split by IT subsector. These models exploit the intuition that different subsectors within IT embraced open innovation at different rates during the sampling window, with the microelectronics and software subsectors expected to have stronger open innovation norms than the computers and telecom subsectors. Reputational concerns associated with an industry norm of collaboration should thus be weaker in computers and telecom, and stronger in the microelectronics and software subsectors.

Table 3 shows the subsector estimates for the shift parameter effect as predicted in Hypothesis 1. Because firms within a dyad regularly collaborated in multiple subsectors simultaneously (and increasingly so over time; see Fig. 1), the total number of dyad-years across the three subsectors exceeds the sample size of Models 1 and 2 in Table 2. To isolate the effects of open innovation norms within alliances in individual subsectors, dummies for subsectors capture variance in knowledge transfer associated with dyads' simultaneous activities in multiple subsectors. Note that the (marginally) significant computers/telecom dummy captures variance in knowledge transfer associated with a broad sectoral scope in software (Model 9), while the software dummy captures such scope-related variance in computers/telecom (Model 7) and microelectronics (Model 8).

In Models 7–9, the interaction between Joint venture and Industry alliances is insignificant in computers/telecom (Model 7), negative and marginally significant in microelectronics (Model 8; p=0.065), and negative and significant in software (Model 9). Moreover, the coefficient on the interaction term is more than twice as large in software as in microelectronics (−0.0011 vs. −0.0005), suggesting that the combined effects of alliance governance and an industry norm of collaboration play a much larger role in shaping knowledge transfer in software than in microelectronics. These findings are consistent with the suggestion that open innovation norms within these three IT subsectors differed in prevalence and importance during 1980–1999, with an industry norm of collaboration having a stronger effect in microelectronics than in computers/telecom, and the strongest effect in software.

Finally, to examine the idea that an industry norm of collaboration had the most substantive effect in bilateral contractual agreements (see Models 5 and 6 in Table 2), Table 4 shows estimates of the interaction between Joint venture and Industry alliances in subsector samples split by alliance type (bilateral vs. multipartner alliances).

Table 3. Fixed Effects Linear Models of Technology Alliance Governance and Interfirm Knowledge Transfer in IT, Split by Subsector, 1980–1999.

                                       Subsamples
                                       Computers/telecom  Microelectronics   Software
                                       (7)                (8)                (9)
Industry alliances                     0.001 (0.001)      0.001 (0.001)      0.004 (0.001)
Joint venture                          0.018 (0.124)      0.116 (0.085)      0.555 (0.144)
Joint venture × Industry alliances     0.0004 (0.000)     −0.0005+ (0.000)   −0.0011 (0.000)
Multipartner collaboration             0.009 (0.015)      0.010 (0.012)      0.007 (0.020)
Partner-specific alliances             0.000 (0.003)      0.000 (0.003)      0.005 (0.004)
Partner-specific alliance experience   0.008 (0.004)      0.010+ (0.006)     0.008+ (0.005)
Computers/telecom                      –                  0.008 (0.017)      0.031+ (0.018)
Microelectronics                       0.005 (0.017)      –                  0.020 (0.025)
Software                               0.026 (0.013)      0.031+ (0.018)     –
Alliance experience (sum)              0.001 (0.000)      0.000 (0.000)      0.001 (0.000)
Time in network (sum)                  0.000 (0.005)      0.010+ (0.005)     0.016+ (0.009)
Age (sum)                              0.008 (0.047)      0.000 (0.049)      0.094 (0.062)
Size (sum)                             0.000 (0.000)      0.000 (0.000)      0.000 (0.000)
R&D intensity (sum)                    0.000 (0.116)      0.060 (0.122)      0.636 (0.289)
Dyad autocorrelation                   0.043 (0.041)      0.050 (0.042)      0.060 (0.082)
Inverse Mills ratio                    0.006 (0.018)      0.032 (0.020)      0.009 (0.023)
Constant                               0.026 (0.332)      0.046 (0.349)      0.934 (0.444)
Time fixed effects                     Y                  Y                  Y
Dyad-years                             925                1,019              560
Unique dyads                           366                342                179
R2                                     0.171              0.168              0.283

Standard errors are in parentheses. ***p<0.001; *p<0.05; +p<0.1; all tests are two-tailed.


Table 4. Fixed Effects Linear Models of Technology Alliance Governance and Interfirm Knowledge Transfer in IT, Split by Number of Partners Within Subsectors, 1980–1999.

                                       Subsamples
                                       Computers/telecom                   Microelectronics                    Software
                                       Bilateral (10)   Multipartner (11)  Bilateral (12)   Multipartner (13)  Bilateral (14)   Multipartner (15)
Industry alliances                     0.001 (0.001)    0.000 (0.002)      0.003 (0.001)    0.018 (0.013)      0.004 (0.001)    0.004+ (0.002)
Joint venture                          0.025 (0.131)    –                  0.041 (0.094)    –                  0.436 (0.158)    –
Joint venture × Industry alliances     0.000 (0.000)    0.002 (0.002)      0.000 (0.000)    0.021 (0.013)      −0.001 (0.000)   0.000 (0.002)
Partner-specific alliances             0.002 (0.004)    0.005 (0.007)      0.000 (0.004)    0.019+ (0.011)     0.002 (0.006)    0.030 (0.010)
Partner-specific alliance experience   0.007 (0.007)    0.019 (0.007)      0.008 (0.008)    0.005 (0.007)      0.003 (0.010)    0.014+ (0.008)
Computers/telecom                      –                –                  0.021 (0.032)    0.003 (0.026)      0.028 (0.022)    0.051 (0.041)
Microelectronics                       0.021 (0.025)    0.119 (0.047)      –                –                  0.013 (0.034)    0.106 (0.053)
Software                               0.013 (0.016)    0.080 (0.049)      0.040 (0.031)    0.035 (0.024)      –                –
Alliance experience (sum)              0.001+ (0.000)   0.001 (0.002)      0.001 (0.000)    0.000 (0.001)      0.001 (0.001)    0.001 (0.002)
Time in network (sum)                  0.006 (0.005)    0.009 (0.019)      0.023 (0.006)    0.002 (0.011)      0.019 (0.010)    0.019 (0.025)
Age (sum)                              0.077+ (0.047)   0.171 (0.355)      0.117 (0.054)    0.630 (0.162)      0.138 (0.066)    0.730+ (0.428)
Size (sum)                             0.000+ (0.000)   0.000 (0.000)      0.000 (0.000)    0.000+ (0.000)     0.000 (0.000)    0.000 (0.000)
R&D intensity (sum)                    0.026 (0.117)    1.571 (0.608)      0.445 (0.141)    0.769 (0.248)      0.619+ (0.321)   0.493 (0.735)
Dyad autocorrelation                   0.065 (0.041)    0.110 (0.214)      0.048 (0.046)    0.111 (0.133)      0.058 (0.086)    0.292 (0.326)
Inverse Mills ratio                    0.003 (0.029)    0.062 (0.062)      0.023 (0.023)    0.011 (0.074)      0.041 (0.033)    0.025 (0.056)
1990s                                  0.043 (0.021)    0.000 (0.000)      0.043 (0.020)    0.065 (0.025)      0.017 (0.037)    0.037 (0.122)
Constant                               0.557+ (0.299)   1.780 (2.605)      0.749 (0.375)    4.376 (1.179)      1.103 (0.428)    5.685+ (3.134)
Dyad-years                             808              117                733              286                384              176
Unique dyads                           354              57                 308              93                 135              76
R2                                     0.133            0.401              0.192            0.171              0.233            0.396

Standard errors are in parentheses. ***p<0.001; **p<0.01; *p<0.05; +p<0.1; all tests are two-tailed.
Note: In the multipartner collaboration subsector samples, there is insufficient variance on Joint venture and so its main effects are absorbed by dyadic fixed effects. Results of random effects models are similar in signs and significance.

Models 10–13 show no significant interaction effect across models of bilateral and multipartner alliances focusing on computers/telecom and microelectronics. In Models 14 and 15, the interaction between Joint venture and Industry alliances is negative and significant in bilateral software alliances, while it is insignificant in multipartner software alliances.

Based on Model 14, Fig. 4 shows estimates of the comparative knowledge transfer efficacy of joint ventures relative to contractual agreements in bilateral alliances focused on the software subsector during the sampling period, assuming all else equal. The figure suggests that in software, an emerging industry norm of collaboration dramatically augmented the knowledge transfer benefits of bilateral contractual agreements. The conditional estimates show that in software, bilateral joint ventures generated more than 5.5 times as much knowledge transfer as bilateral contractual agreements in 1980. This comparative efficacy decreased sharply over time: an otherwise identical joint venture generated only about 1.2 times as much knowledge transfer as an otherwise identical contractual agreement in 1999. Overall, consistent with the results aggregated across subsectors (Models 2–6 in Table 2), the benefits of an industry norm of collaboration appear substantive and concentrated disproportionately in bilateral contractual agreements. Moreover, consistent with the subsector results in Table 3, these effects occur especially in technology alliances within the software subsector.

[Fig. 4 appears here: a line chart of the comparative knowledge transfer efficacy (joint venture/contractual agreement) by calendar year, 1980–1999.]

Fig. 4. Estimates of Knowledge Transfer (Technological Overlap) in Bilateral Alliances Focusing on Software, Joint Ventures Relative to Contractual Agreements, All Else Equal, 1980–1999. Note: The line shows the ratio of estimated technological overlap in joint ventures to that in contractual agreements across 1980–1999, based on Table 4, Model 14.

DISCUSSION AND CONCLUSION

How does knowledge transfer in interfirm technology alliances change when an industry norm of collaboration evolves? I examined this question in some detail within the context of the U.S. information technology (IT) industry during 1980–1999, with a focus on historical changes in the association between alliance governance and interfirm knowledge transfer. Descriptive evidence suggests that during the study period, an industry norm of collaboration became progressively institutionalized in IT. I argued that such an industry norm began to act as an institutional reputation and monitoring system that produced incentives for, and reinforced, cooperative rather than opportunistic behavior. Because an industry norm of collaboration should have greater significance in situations where appropriation concerns are more prevalent, application of Williamson's (1991) shift parameter logic suggests that over time, an evolving industry norm of collaboration has disproportionately increased knowledge transfer in technology alliances governed by contractual agreement relative to those governed by equity joint venture. The empirical analysis broadly corroborates this proposition.


Moreover, the shift parameter effect appears particularly concentrated in bilateral rather than multipartner contractual agreements, and in the software and microelectronics subsectors.

First, the assessment of longitudinal change in the association between alliance governance and interfirm knowledge transfer complements prior alliance research. Some studies have suggested that the performance effects of alliance governance differ across alliance types – for example, technology, marketing, or production – and across industries (e.g., Oxley, 1997, 1999; Pisano, 1989; Sampson, 2004). The results of this study additionally suggest that even within one type of alliance and within one industry, the performance and optimal governance of otherwise identical alliances may differ depending on the historical period in which they occur. Therefore, following calls both to exploit the unique insights that longitudinal data can offer (Bergh & Holbein, 1997; Isaac & Griffin, 1989) and to consider the historical context in which firm behavior transpires (Kahl, Silverman, & Cusumano, 2012), this study introduces a historical contingency into prior findings that have often shown effects averaged across time, even in designs spanning several decades (e.g., Gomes-Casseres et al., 2006).

Oxley's (1999) seminal study was among the first to implement Williamson's (1991) shift parameter logic, connecting firms' institutional environment theoretically and empirically to the governance structure of their alliances. However, as noted by Nickerson and Bigelow, "for inter-firm R&D relationships … the shift parameter framework has yet to be applied to investigate exchange performance" (2008, p. 192). The current study offers such an application, using the shift parameter framework to evaluate the performance rather than the choice of alliance governance structures (cf. Gulati & Nickerson, 2008). The findings offer fertile ground for further exploration of the longitudinal implications that institutional shift parameters may have for the performance and optimal governance of interfirm alliances. Indeed, governance structures that may appear misaligned with underlying transactional attributes could in fact represent the boundedly rational optimum when viewed through a historical lens that considers relevant changes in the institutional environment. For example, Hagedoorn's (2002; see also Fig. 1) observation that technology alliances became progressively governed by contractual agreement, especially in uncertain environments, appears consistent with an industry norm of collaboration acting on the relative benefits of contractual agreements versus joint ventures.

Second, the introduction of broader norms associated with open innovation into an assessment of the material practices that constitute innovation ecosystems contributes new insight to the open innovation literature.


The diffusion of the open innovation logic is reflected not just in the proliferation of a range of material practices; it is also evident in an evolving system of norms that may help govern such practices. This generates the possibility that as open innovation practices become more prevalent, they at once become less risky and perhaps less costly, by enabling firms to substitute more arm's-length governance arrangements for more hierarchical ones. Prior research has devoted considerable attention to exploring institutional differences across nations and industries, and how they shape firm behavior and performance in the area of innovation (e.g., Alexy, Criscuolo, & Salter, 2009; Chesbrough, 1999). Among the implications of such work is the notion that the optimal organization for innovation differs across nations and industries. Complementing these findings, and assuming all else equal, the evidence presented here suggests that the optimal organization of innovation activities also varies longitudinally, as open innovation norms evolve at the industry level.

Third, the study shows that an industry-level reputation mechanism acts differentially across different types of alliances. The finding that the shift parameter effect is concentrated in bilateral contractual agreements suggests that appropriability hazards are less pronounced in multilateral joint ventures. This is consistent with the idea that more tightly coupled partners – for example, those coupled through equity joint ventures, common third parties, or both – have greater control over each other's behavior and are better able to respond to ex post behavioral contingencies (Williamson, 1991). Moreover, partner control through tighter coupling appears more important in the absence of institutional mechanisms that bound appropriation concerns, and less so in their presence. These findings may extend to the ecosystem level of analysis. For example, Brusoni and Prencipe (2013) suggest that the need for responsiveness and tighter coupling among the members of an ecosystem will be greater in an institutional regime with weak appropriability. This conceptual proposition on the institutional contingency of organizational coupling in ecosystems resonates closely with my empirical findings at the dyadic level of analysis. It thus opens up opportunities for applying the shift parameter logic to the empirical analysis of interactions between the institutional environment and the prevalence and effectiveness of different types of coupling in innovation ecosystems. Similar to Williamson's (1991) focus on both transactional hazards and the available governance solutions to address such hazards at the transaction level of analysis, Brusoni and Prencipe (2013) discuss both the cooperation and coordination problems that firms in ecosystems may face, as well as the solutions that different patterns of organizational coupling can offer to such problems.


Thus, their discussion offers a good starting point for thinking about the theoretical mechanisms through which the institutional environment may affect the difficulty of problems faced by firms in ecosystems, the effectiveness of coupling patterns in addressing such problems, or both.

Fourth, technology alliances are one among a broader set of practices constituting innovation ecosystems, and the transfer of technological knowledge is only one of the relevant performance metrics. The ideas presented and tested here may extend to the study of other aspects of innovation ecosystems. Both the management of interdependence with complementors and coordination with a range of downstream distribution partners generate considerable risks, while dependencies between various complementors may be asymmetric (Adner, 2006; Kapoor, 2013a). For example, Wood and West (2013) illustrate that though Symbian depended fully on its smartphone platform, this was not the case for a large number of complementors in its ecosystem. Therefore, the commitment of individual complementors to the shared success of the ecosystem was unbalanced. In these and other cases, perhaps broader norms can offer an informal source of channel incentives that stimulate multilateral cooperation by aligning the interests of the players in an ecosystem. Moreover, in a study of the global semiconductor manufacturing industry, Kapoor and McGrath (2012) document how different types of partners – that is, suppliers, research organizations, and users – may be more or less prevalent in firms' collaboration portfolios across the technology life cycle. It is conceivable that open innovation norms act on all such collaboration types. However, because the prevalence and importance of these partner types may differ over time, it is plausible that at different points in time, broader industry norms may have stronger or weaker effects in collaborations with different types of partners. Also, different norms may act on different stages of the value chain, and interesting questions arise as to how firms' reputations within one stage of the value chain spill over upstream or downstream.

Additional opportunities exist to extend this research as well as to address several of its limitations. First, the current analysis is truncated because it disregards the set-up costs of the governance alternatives. A more integrative analysis would assess both the benefits and the costs of governance solutions, with the potential to generate a greater understanding of when firms substitute one governance structure for another. Second, the focus here has been on contractual agreements and joint ventures, two common modes of governance in technology alliances; consideration of a broader set of governance structures appears useful.


Third, though the study period here reflects one in which open innovation began to evolve, important questions remain about the extent to which there may be limits to the development of collaborative norms associated with open innovation. Finally, Alexy and Reitzig (2013) show how, in the mid-2000s, private-collective innovators in infrastructure software coordinated with one another by waiving their exclusion rights so as to establish a broader norm of non-enforcement. This began to expose even proprietary innovators to an unfavorable view of enforcing exclusion rights, generating both reputational benefits for innovators looking for cooperative solutions to intellectual property disputes and reputational penalties for those straightforwardly enforcing their property rights. These findings suggest that both the development of norms and their application may be confined to specific open innovation practices, which opens up avenues for a more granular assessment of the enabling and constraining functions of open innovation norms. The arguments and evidence presented in this study hopefully offer an impetus for further exploration of the interaction between parameters of the institutional environment and the costs and benefits of innovation ecosystems and their constituent open innovation practices.

NOTES

1. Consistent with the comparative importance of software collaborations, Kapoor (2013a) finds that the collaboration of semiconductor firms with software complementors is more strongly associated with information sharing in R&D and joint product development than collaboration with other complementors.
2. Rather than including firm-level variables for the partner firms separately, for parsimony I include their sums as covariates (i.e., $[C_{i,t} + C_{j,t}]$). While doing so constrains the coefficients on the firm-level variables to equality, results of models in which they are entered separately are largely identical.
3. Jim Lincoln, personal communication.

ACKNOWLEDGMENTS

I thank volume editors Ron Adner, Brian Silverman, and especially Joanne Oxley for providing detailed feedback and guidance, which has significantly improved the chapter.


I am indebted to John Hagedoorn for providing data, and for generously sharing his deep expertise concerning technology alliances in information technology. I also thank Charles Baden-Fuller, Igor Filatotchev, Santi Furnari, Cliff Oswick, Vangelis Souitaris, Andre Spicer, and participants in research and practitioner workshops at Cass Business School for helpful comments and discussion. All errors are mine.

REFERENCES

Adams, J. D., Chiang, E. P., & Starkey, K. (2001). Industry-university cooperative research centers. Journal of Technology Transfer, 26(1–2), 73–86.
Adner, R. (2006). Match your innovation strategy to your innovation ecosystem. Harvard Business Review, 84(April), 98–107.
Alcácer, J., & Gittelman, M. (2006). Patent citations as a measure of knowledge flows: The influence of examiner citations. Review of Economics and Statistics, 88(4), 774–779.
Alcácer, J., & Oxley, J. (2013). Learning by supplying. Strategic Management Journal. doi:10.1002/smj.2134
Alexy, O., Criscuolo, P., & Salter, A. (2009). Does IP strategy have to cripple open innovation? MIT Sloan Management Review, 51(1), 71–77.
Alexy, O., & Reitzig, M. (2013). Private-collective innovation, competition, and firms' counterintuitive appropriation strategies. Research Policy, 42(4), 895–913.
Argote, L., McEvily, B., & Reagans, R. (2003). Managing knowledge in organizations: An integrative framework and review of emerging themes. Management Science, 49(4), 571–582.
Armstrong, J. (1996). Reinventing research at IBM. In R. S. Rosenbloom & W. J. Spencer (Eds.), Engines of innovation (pp. 151–154). Boston, MA: Harvard University Press.
Arrow, K. (1962). Economic welfare and the allocation of resources for invention. In NBER, The rate and direction of inventive activity: Economic and social factors (pp. 609–626). Princeton, NJ: Princeton University Press.
Bergh, D. D., & Holbein, G. F. (1997). Assessment and redirection of longitudinal analysis: Demonstration with a study of the diversification and divestiture relationship. Strategic Management Journal, 18(7), 557–571.
Bresnahan, T. F., & Greenstein, S. (1999). Technological competition and the structure of the computer industry. Journal of Industrial Economics, 47(1), 1–40.
Browning, L. D., Beyer, J. M., & Shetler, J. C. (1995). Building cooperation in a competitive industry: SEMATECH and the semiconductor industry. Academy of Management Journal, 38(1), 113–151.
Brusoni, S., & Prencipe, A. (2013). The organization of innovation in ecosystems: Problem framing, problem solving, and patterns of coupling. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems (Vol. 30, pp. 167–194). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing Ltd.
Burt, R. S., & Knez, M. (1995). Kinds of third-party effects on trust. Rationality and Society, 7(3), 255–292.


Chesbrough, H. (1999). The organizational impact of technological change: A comparative theory of national institutional factors. Industrial and Corporate Change, 8(3), 447–485.
Chesbrough, H. (2003a). Open innovation. Boston, MA: Harvard University Press.
Chesbrough, H. (2003b). The governance and performance of Xerox's technology spin-off companies. Research Policy, 32(3), 403–421.
Cloodt, M., Hagedoorn, J., & Roijakkers, N. (2006). Trends and patterns in inter-firm R&D networks in the global computer industry: An analysis of major developments, 1970–1999. Business History Review, 80(4), 725–746.
Cloodt, M., Hagedoorn, J., & Roijakkers, N. (2010). Inter-firm R&D networks in the global software industry: An overview of major trends and patterns. Business History, 52(1), 120–149.
Cohen, W., Florida, R., & Goe, R. (1994). University industry research centers in the United States. Pittsburgh, PA: Carnegie Mellon University.
Coleman, J. S. (1990). Commentary: Social institutions and social theory. American Sociological Review, 55(3), 333–339.
Dacin, M. T., Oliver, C., & Roy, J.-P. (2007). The legitimacy of strategic alliances: An institutional perspective. Strategic Management Journal, 28(2), 169–187.
Dahlander, L., & Gann, D. M. (2010). How open is innovation? Research Policy, 39(6), 699–709.
Darr, E. D., & Kurtzberg, T. R. (2000). An investigation of partner similarity dimensions on knowledge transfer. Organizational Behavior and Human Decision Processes, 82(1), 28–44.
Davis, J. F., Diekmann, K. A., & Tinsley, C. H. (1994). The decline and fall of the conglomerate firm in the 1980s: The deinstitutionalization of an organizational form. American Sociological Review, 59(4), 547–570.
Davis, L. E., & North, D. C. (1971). Institutional change and American economic growth. London: Cambridge University Press.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Dittrich, K., Duysters, G., & de Man, A.-P. (2007). Strategic repositioning by means of alliance networks: The case of IBM. Research Policy, 36(10), 1496–1511.
Dushnitsky, G. (2006). Corporate venture capital: Past evidence and future directions. In M. Casson, B. Yeung, A. Basu & N. Wadeson (Eds.), Oxford handbook of entrepreneurship (pp. 387–431). Oxford, UK: Oxford University Press.
Dushnitsky, G., & Lavie, D. (2010). How alliance formation shapes corporate venture capital investment in the software industry: A resource-based perspective. Strategic Entrepreneurship Journal, 4(1), 22–48.
Fauchart, E., & von Hippel, E. (2008). Norms-based intellectual property systems: The case of French chefs. Organization Science, 19(2), 187–201.
Frankort, H. T. W., Hagedoorn, J., & Letterie, W. (2012). R&D partnership portfolios and the inflow of technological knowledge. Industrial and Corporate Change, 21(2), 507–537.
Gomes-Casseres, B., Hagedoorn, J., & Jaffe, A. B. (2006). Do alliances promote knowledge flows? Journal of Financial Economics, 80(1), 5–33.
Gompers, P., & Lerner, J. (2001). The venture capital revolution. Journal of Economic Perspectives, 15(2), 145–168.
Graham, S. J. H., & Mowery, D. C. (2003). Intellectual property protection in the software industry. In W. Cohen & S. Merrill (Eds.), Patents in the knowledge-based economy: Proceedings of the Science, Technology and Economic Policy Board (pp. 219–258). Washington, DC: National Academies Press.


Greene, W. H. (2003). Econometric analysis (5th ed.). Upper Saddle River, NJ: Pearson Education.
Grimblatt, V. (2002). Software in the semiconductor industry. Proceedings of the fourth IEEE international Caracas conference on devices, circuits, and systems (pp. I032-1–I032-4). Oranjestad, Aruba, the Netherlands.
Gulati, R. (1999). Network location and learning: The influence of network resources and firm capabilities on alliance formation. Strategic Management Journal, 20(5), 397–420.
Gulati, R., & Nickerson, J. (2008). Interorganizational trust, governance choice, and exchange performance. Organization Science, 19(5), 688–708.
Gulati, R., & Singh, H. (1998). The architecture of cooperation: Managing coordination costs and appropriation concerns in strategic alliances. Administrative Science Quarterly, 43(4), 781–814.
Hagedoorn, J. (1993). Understanding the rationale of strategic technology partnering: Interorganizational modes of cooperation and sectoral differences. Strategic Management Journal, 14(5), 371–385.
Hagedoorn, J. (2002). Interfirm R&D partnerships: An overview of major trends and patterns since 1960. Research Policy, 31(4), 477–492.
Hagedoorn, J., & Frankort, H. T. W. (2008). The gloomy side of embeddedness: The effects of overembeddedness on inter-firm partnership formation. In J. A. C. Baum & T. J. Rowley (Eds.), Network strategy (Vol. 25, pp. 503–530). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing Ltd.
Hagedoorn, J., & Schakenraad, J. (1992). Leading companies and networks of strategic alliances in information technologies. Research Policy, 21(2), 163–190.
Hall, B. H. (2005). Exploring the patent explosion. Journal of Technology Transfer, 30(1–2), 35–48.
Hall, B. H., Jaffe, A. B., & Trajtenberg, M. (2002). The NBER patent-citations data file: Lessons, insights, and methodological tools. In A. B. Jaffe & M. Trajtenberg (Eds.), Patents, citations & innovations (pp. 403–459). Cambridge, MA: The MIT Press.
Hall, B. H., & Ziedonis, R. H. (2001). The patent paradox revisited: An empirical study of patenting in the U.S. semiconductor industry. RAND Journal of Economics, 32(1), 101–128.
Heckman, J. (1979). Sample selection bias as a specification error. Econometrica, 47(1), 153–161.
Henkel, J., Baldwin, C. Y., & Shih, W. C. (2012). IP modularity: Profiting from innovation by aligning product architecture with intellectual property. Working Paper No. 13-012. Cambridge, MA: Harvard Business School.
Iansiti, M. (1998). Technology integration. Cambridge, MA: Harvard University Press.
Isaac, L. W., & Griffin, L. J. (1989). Ahistoricism in time-series analyses of historical process: Critique, redirection, and illustrations from U.S. labor history. American Sociological Review, 54(6), 873–890.
Jaffe, A. B. (1986). Technological opportunity and spillovers of R&D: Evidence from firms' patents, profits, and market value. American Economic Review, 76(5), 984–1001.
Jaffe, A. B., Trajtenberg, M., & Fogarty, M. S. (2000). Knowledge spillovers and patent citations: Evidence from a survey of inventors. American Economic Review, 90(2), 215–218.
Kahl, S. J., Silverman, B. S., & Cusumano, M. A. (2012). History and strategy (Vol. 29). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing Ltd.


Kapoor, R. (2013a). Collaborating with complementors: What do firms do? In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems (Vol. 30, pp. 3–25). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing Ltd.
Kapoor, R. (2013b). Persistence of integration in the face of specialization: How firms navigated the winds of disintegration and shaped the architecture of the semiconductor industry. Organization Science, forthcoming. doi:10.1287/orsc.1120.0802
Kapoor, R., & McGrath, P. J. (2012). Unmasking the interplay between technology evolution and R&D collaboration: Evidence from the global semiconductor manufacturing industry, 1990–2010. Working Paper. Philadelphia, PA: University of Pennsylvania.
Kogut, B. (1988). A study of the life cycle of joint ventures. In F. J. Contractor & P. Lorange (Eds.), Cooperative strategies in international business (pp. 169–186). Lexington, MA: Lexington Books.
Kogut, B., & Metiu, A. (2001). Open-source software development and distributed innovation. Oxford Review of Economic Policy, 17(2), 248–264.
Lampel, J., & Shamsie, J. (2000). Probing the unobtrusive link: Dominant logic and the design of joint ventures at General Electric. Strategic Management Journal, 21(5), 593–602.
Langlois, R. N. (1990). Creating external capabilities: Innovation and vertical disintegration in the microcomputer industry. Business and Economic History, 19, 93–102.
Laursen, K., & Salter, A. (2006). Open for innovation: The role of openness in explaining innovation performance among U.K. manufacturing firms. Strategic Management Journal, 27(2), 131–150.
Lavie, D., Lechner, C., & Singh, H. (2007). The performance implications of timing of entry and involvement in multipartner alliances. Academy of Management Journal, 50(3), 578–604.
Lerner, J., & Tirole, J. (2002). Some simple economics of open source. Journal of Industrial Economics, 50(2), 197–234.
Levinthal, D. A., & March, J. G. (1993). The myopia of learning. Strategic Management Journal, 14(S2), 95–112.
Li, D., Eden, L., Hitt, M. A., Ireland, R. D., & Garrett, R. P. (2012). Governance in multilateral R&D alliances. Organization Science, 23(4), 1191–1210.
Lincoln, J. R. (1984). Analyzing relations in dyads: Problems, models, and an application to interorganizational research. Sociological Methods & Research, 13(1), 45–76.
Macher, J. T., & Mowery, D. C. (2004). Vertical specialization and industry structure in high technology industries. In J. A. C. Baum & A. M. McGahan (Eds.), Business strategy over the industry lifecycle (Vol. 21, pp. 317–356). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing Ltd.
Malerba, F., Nelson, R., Orsenigo, L., & Winter, S. (1999). 'History-friendly' models of industry evolution: The computer industry. Industrial and Corporate Change, 8(1), 3–40.
Mansfield, E. (1998). Academic research and industrial innovation: An update of empirical findings. Research Policy, 26(7–8), 773–776.
March, J. G. (1994). A primer on decision making: How decisions happen. New York, NY: The Free Press.
Mowery, D. C., Nelson, R. R., Sampat, B. N., & Ziedonis, A. A. (2004). Ivory tower and industrial innovation: University-industry technology transfer before and after the Bayh–Dole Act. Stanford, CA: Stanford University Press.


Mowery, D. C., Oxley, J. E., & Silverman, B. S. (1996). Strategic alliances and interfirm knowledge transfer. Strategic Management Journal, 17(Winter Special Issue), 77–91.
Mowery, D. C., & Teece, D. J. (1996). Strategic alliances and industrial research. In R. S. Rosenbloom & W. J. Spencer (Eds.), Engines of innovation (pp. 111–129). Boston, MA: Harvard University Press.
Murray, M. P. (2006). Avoiding invalid instruments and coping with weak instruments. Journal of Economic Perspectives, 20(4), 111–132.
National Science Foundation (2010). Number of R&D-performing companies, by industry, by size of company, and by size of R&D program. Retrieved from http://www.nsf.gov/statistics/
Nelson, R. R. (1990). Capitalism as an engine of progress. Research Policy, 19(3), 193–214.
Nickerson, J., & Bigelow, L. (2008). New institutional economics, organization, and strategy. In E. Brousseau & J.-M. Glachant (Eds.), New institutional economics: A guidebook (pp. 183–208). Cambridge, UK: Cambridge University Press.
Oxley, J., & Wada, T. (2009). Alliance structure and the scope of knowledge transfer: Evidence from US-Japan agreements. Management Science, 55(4), 635–649.
Oxley, J. E. (1997). Appropriability hazards and governance in strategic alliances: A transaction cost approach. Journal of Law, Economics, & Organization, 13(2), 387–409.
Oxley, J. E. (1999). Institutional environment and the mechanisms of governance: The impact of intellectual property protection on the structure of inter-firm alliances. Journal of Economic Behavior & Organization, 38(3), 283–309.
Oxley, J. E., & Sampson, R. C. (2004). The scope of international R&D alliances. Strategic Management Journal, 25(8–9), 723–749.
Parhankangas, A., & Arenius, P. (2003). From a corporate venture to an independent company: A base for a taxonomy for corporate spin-off firms. Research Policy, 32(3), 463–481.
Pattit, J. M., Raj, S. P., & Wilemon, D. (2012). An institutional theory investigation of U.S. technology development trends since the mid-19th century. Research Policy, 41(2), 306–318.
Pisano, G. P. (1989). Using equity participation to support exchange: Evidence from the biotechnology industry. Journal of Law, Economics, & Organization, 5(1), 109–126.
Powell, W. W. (1996). Inter-organizational collaboration in the biotechnology industry. Journal of Institutional and Theoretical Economics, 152(1), 197–215.
Powell, W. W., & Giannella, E. (2010). Collective invention and inventor networks. In B. H. Hall & N. Rosenberg (Eds.), Handbook of the economics of innovation (Vol. 2, pp. 575–605). Oxford, UK: Elsevier.
Provan, K. G. (1993). Embeddedness, interdependence, and opportunism in organizational supplier-buyer networks. Journal of Management, 19(4), 841–856.
Robinson, D. T., & Stuart, T. E. (2007). Network effects in the governance of strategic alliances. Journal of Law, Economics, & Organization, 23(1), 242–273.
Rosenberg, N., & Nelson, R. R. (1996). The roles of universities in the advance of industrial technology. In R. S. Rosenbloom & W. J. Spencer (Eds.), Engines of innovation (pp. 87–109). Boston, MA: Harvard University Press.
Rosenkopf, L., & Almeida, P. (2003). Overcoming local search through alliances and mobility. Management Science, 49(6), 751–766.
Sampson, R. C. (2004). The cost of misaligned governance in R&D alliances. Journal of Law, Economics, & Organization, 20(2), 484–526.
Scott, W. R. (2001). Institutions and organizations (2nd ed.). Thousand Oaks, CA: Sage Publications, Inc.


Scott, W. R. (2003). Institutional carriers: Reviewing modes of transporting ideas over time and space and considering their consequences. Industrial and Corporate Change, 12(4), 879–894.
Shaver, J. M. (1998). Accounting for endogeneity when assessing strategy performance: Does entry mode choice affect FDI survival? Management Science, 44(4), 571–585.
Stuart, T. E., & Podolny, J. M. (1999). Positional consequences of strategic alliances in the semiconductor industry. In S. Andrews & D. Knoke (Eds.), Research in the sociology of organizations (Vol. 16, pp. 161–182). Greenwich, CT: JAI Press.
Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. Academy of Management Review, 20(3), 571–610.
Teece, D. J. (1986). Profiting from technological innovation: Implications for integration, collaboration, licensing and public policy. Research Policy, 15(6), 285–305.
Van de Vrande, V., & Vanhaverbeke, W. (2013). How prior corporate venture capital investments shape technological alliances: A real options approach. Entrepreneurship Theory and Practice, forthcoming. doi:10.1111/j.1540-6520.2012.00526.x
von Hippel, E. (2005). Democratizing innovation. Cambridge, MA: The MIT Press.
West, J., & Wood, D. (2013). Evolving an open ecosystem: The rise and fall of the Symbian platform. In R. Adner, J. E. Oxley, & B. S. Silverman (Eds.), Collaboration and competition in business ecosystems (Vol. 30, pp. 27–67). Advances in Strategic Management. Bingley, UK: Emerald Group Publishing Ltd.
Williamson, O. E. (1991). Comparative economic organization: The analysis of discrete structural alternatives. Administrative Science Quarterly, 36(2), 269–296.


APPENDIX: FIRST-STAGE PROBIT MODEL

Table A.1 shows robust probit estimates for the choice of alliance governance structure in a firm-partner dyad. I use these estimates to construct an inverse Mills ratio λij,t for inclusion in the second-stage knowledge transfer models (Heckman, 1979). I model a binary choice in the first stage (i.e., 1 if a dyad contained at least one joint venture, 0 otherwise), even though about 20% of the dyad-years contain more than one alliance. Thus, implicit in my modeling approach is the assumption that most of the endogeneity comes from the decision to use joint ventures at all. Nevertheless, two-stage models estimated on dyads containing only one alliance yield identical results.

The first instrument is governance autocorrelation, which is the mean of the dependent variable (alliance governance structure) for all dyads the partner firms maintained in a given year, excluding the focal dyad. This instrument takes the same form as the dyad autocorrelation measure in the second stage. Hence, it also addresses autocorrelation in the first stage (Lincoln, 1984, pp. 56–61). Further, since it controls for partner firms’ baseline proclivities to favor joint ventures over contractual agreements, it also captures otherwise uncontrolled firm heterogeneity. The second instrument is industry joint ventures, the share of joint ventures in all newly formed technology alliances within IT in a given year.

Firms’ preference for joint ventures across their technology alliances should correlate positively with the focal dyad containing one or more joint ventures. This may reflect a dominant logic (Lampel & Shamsie, 2000) and perhaps perceptions of competence in managing a particular type of alliance (Levinthal & March, 1993). Such normative organizing principles affect firms’ rate of adopting a certain governance structure, while the adoption itself is the more proximate source of any performance consequences. Similarly, the aggregate industry-level preference for joint ventures should correlate positively with the focal dyad containing one or more joint ventures. According to institutional theory, firms tend to conform to externally constructed conceptions of legitimate organization (DiMaggio & Powell, 1983), and organizational theorists similarly suggest that a desire to keep pace with competition motivates firms to form ideas about appropriate action based on the actions of others in the industry (March, 1994). Thus, firms are more likely to adopt joint ventures if others in the industry also prefer joint ventures over contractual agreements.
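To make the first-stage construction concrete, the selection correction can be written in the standard Heckman two-step form. The notation below is a generic sketch under my own labeling (Z denotes the first-stage covariate vector, including the two instruments), not a reproduction of the chapter’s exact specification:

Pr(JV_{ij,t} = 1 | Z_{ij,t}) = \Phi(Z_{ij,t}' \gamma)

\lambda_{ij,t} = \phi(Z_{ij,t}' \hat{\gamma}) / \Phi(Z_{ij,t}' \hat{\gamma})

Here \Phi and \phi denote the standard normal distribution and density functions, and the expression for \lambda_{ij,t} applies to the joint-venture dyads; the complementary term -\phi / (1 - \Phi) would apply to contractual dyads if both governance regimes are retained in the second stage. The fitted \lambda_{ij,t} then enters the second-stage knowledge transfer models as an additional regressor.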


Table A.1. Robust Probit Estimates of Alliance Governance Structure (1, Joint Venture; 0, Contractual Agreement), 1980–1999.

Variable                                Coefficient (SE)
Multipartner collaboration              0.795 (0.119)
Partner-specific alliance experience    0.302 (0.078)
Computers/telecom                       0.480 (0.123)
Microelectronics                        0.382 (0.137)
Software                                0.093 (0.170)
Prior patent cross-citations            0.005 (0.001)
Alliance experience (sum)               0.002 (0.002)
Alliance experience (product)           0.000 (0.000)
Size (sum)                              0.000 (0.000)
Size (product)                          0.000 (0.000)
R&D intensity (sum)                     0.620 (1.647)
R&D intensity (product)                 35.125+ (18.407)
Governance autocorrelation              1.636 (0.253)
Industry joint ventures                 3.634 (0.528)
Constant                                2.862 (0.343)
Dyad-years                              2,540
Log pseudolikelihood                    −883.16
Pseudo R2                               0.482

Robust (dyad-clustered) standard errors are in parentheses. ***p < 0.001; **p < 0.01; *p < 0.05; +p < 0.1; all tests are two-tailed.

It is plausible that governance preferences in partner firms’ technology alliance portfolios, and within the industry, will be related to interfirm knowledge transfer only through their impact on dyadic alliance governance, making the exclusion of governance autocorrelation and industry joint ventures from the knowledge transfer equations valid (Murray, 2006).


In addition to several variables included in the second-stage models (Tables 2–4), I followed Lincoln (1984, pp. 49–52) in including product terms for the firm-level controls (alliance experience, size, and R&D intensity), and I include prior patent cross-citations, a dyad-specific variable for the number of times two firms had cited each other’s patents by the observation year. This additional control captures the depth of the collaboration between partnered firms, and their propensity to draw extensively on each other’s technological knowledge base, and should be positively related to equity sharing in a dyadic relationship, all else equal.

The odds that an alliance is governed by a joint venture increase by a multiplicative factor of 5.135 (i.e., exp[1.636]) as my measure of governance autocorrelation changes from zero to one. And the odds that an alliance is governed by a joint venture increase by a multiplicative factor of 37.864 (i.e., exp[3.634]) as my measure of industry joint ventures changes from zero to one. The likelihood of joint venture governance is thus highly sensitive to both instruments. Moreover, the combined t-value of these two variables is 13.349 (i.e., 6.466+6.883), suggesting they are jointly relevant as instrumental variables.

The coefficient on multipartner collaboration is positive and significant. In a sample of 169 technology alliances commencing in 1996 in the electronics and telecommunications equipment industries, Oxley and Sampson (2004, p. 741) found similar coefficients on their measure of multipartner collaboration (i.e., Multilateral) in models predicting alliance governance structure (see also Sampson, 2004, p. 510). Such a finding may appear inconsistent with the argument that cohesion in a multipartner alliance acts as a reputation-inducing structure, while it appears consistent with the idea that free riding and coalition building are justified concerns in such alliances. Note that both these arguments focus on the probability of opportunistic behavior. However, even holding constant the probability of opportunism, bilateral and multipartner alliances may be governed by different governance structures because of differences in the interdependence of tasks within each type of alliance (Gulati & Singh, 1998). It is reasonable to imagine that interdependence may be greater in multipartner alliances, which creates coordination challenges that perhaps require a more hierarchical governance structure. Importantly, this may be the case regardless of whether the probability of opportunism is low or high. For this reason, we cannot use the governance selection model to infer whether bilateral or instead multipartner alliances are subject to a greater probability of opportunistic behavior.
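The instrument-relevance arithmetic reported above can be verified directly from the Table A.1 point estimates. A minimal Python sketch (the dictionary and variable names are mine; the exp[·] transformation simply follows the chapter’s odds interpretation of the coefficients):

import math

# Coefficients and standard errors as reported in Table A.1
estimates = {
    "governance autocorrelation": (1.636, 0.253),
    "industry joint ventures": (3.634, 0.528),
}

for name, (beta, se) in estimates.items():
    # exp(beta): multiplicative change in the odds of joint-venture
    # governance as the instrument moves from zero to one;
    # beta / se: the t-value used in the joint-relevance claim
    print(f"{name}: odds multiplier = {math.exp(beta):.3f}, t = {beta / se:.3f}")

# Expected output:
# governance autocorrelation: odds multiplier = 5.135, t = 6.466
# industry joint ventures: odds multiplier = 37.864, t = 6.883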


In the knowledge transfer models, comparison of interactions between joint venture governance and an industry norm of collaboration across subsamples containing either bilateral or multipartner alliances (Models 5 and 6 in Table 2) offers a more compelling alternative, as an industry norm of collaboration will act narrowly on the probability of opportunistic behavior, and solely as a function of the monitoring and incentive alignment capacity of an alliance rather than its capacity to facilitate coordination.

THE ORIGINS AND DYNAMICS OF PRODUCTION NETWORKS IN SILICON VALLEY$

AnnaLee Saxenian$$

ABSTRACT

Computer systems firms in Silicon Valley are responding to rising costs of product development, shorter product cycles and rapid technological change by focusing and building partnerships with suppliers, both within and outside of the region. Well-known firms like Hewlett-Packard and Apple Computers and lesser known ones like Silicon Graphics and Pyramid Technology are organized to combine the components and subsystems made by specialist suppliers into new computer systems. As these firms collaborate to both define and manufacture new systems, they are institutionalizing their capacity to learn from one another. Three cases - a contract manufacturer, a silicon foundry, and the joint development of a microprocessor - illustrate how inter-firm networks help account for the sustained technological dynamism of the regional economy.

$ This chapter is a reprint of the article ‘‘The origins and dynamics of production networks in Silicon Valley’’ first published in Research Policy, Vol. 20, Iss. 5 (1991).

$$ Special thanks to Christopher Freeman, David Teece, Chris DeBresson and two anonymous reviewers for helpful comments and encouragement on earlier drafts of this paper.

Collaboration and Competition in Business Ecosystems
Advances in Strategic Management, Volume 30, 283–309
Copyright © 2013 Elsevier B.V. All rights of reproduction in any form reserved
ISSN: 0742-3322/doi:10.1108/S0742-3322(2013)0000030012


Keywords: production networks, Silicon Valley, product development, computer systems

This essay analyzes the origins and dynamics of production networks in Silicon Valley from the perspective of the region’s computer systems firms. Students of Silicon Valley have focused almost exclusively on the evolution of the semiconductor industry; when that industry fell into crisis in the mid-1980s, most assumed that the region itself would decline. Yet by the end of the decade, the regional economy had rebounded, as hundreds of new computer producers and suppliers of microprocessors, specialty chips, software, disk drives, networking hardware and other components generated a renewed wave of growth.

This revitalization is evident in regional output and employment figures. In spite of the worst recession in the region’s history during 1985-86, the shipments of Silicon Valley high technology manufacturers and software enterprises grew 60 percent between 1982 and 1987 (from $15 billion to $24 billion), and employment in these firms expanded more than 45 percent during the decade. While there were only 69 establishments in the region producing computers in 1975, by 1980 there were 113, and by 1985 the number had more than doubled to 246.1

These new computer systems firms are at the hub of Silicon Valley’s expanding production networks. Well-known companies such as Tandem and Apple Computers, and lesser known ones such as Silicon Graphics and Pyramid Technology are organized to recombine components and subsystems made by specialist suppliers - both within and outside of the region - into new computer systems. As they collaborate with key suppliers to define and manufacture new systems, they are reducing product development times and institutionalizing their capacity to learn from one another.

1. High technology manufacturing and services include: computers and office equipment (SIC 357), communications equipment (SIC 366), electronic components (SIC 367), guided missiles (SIC 376), instruments (SIC 38), and data processing (SIC 737). Data on the value of shipments is from the U.S. Census of Manufactures and Census of Service Industries; employment figures are from the California Employment Development Department, ES202 Series; and the number of establishments is from U.S. Bureau of the Census, County Business Patterns.


These production networks help account for the sustained technological dynamism of the Silicon Valley economy.

Geographers and other social scientists have documented the emergence of flexible systems of production in regions such as Silicon Valley [22,28,32,2,31]. Most of the research on these regions, however, overlooks the changing nature of inter-firm and inter-industry relationships. In their detailed study of the location of the U.S. semiconductor industry, for example, Scott and Angel document the vertical disaggregation of production and the dense concentration of inter-firm transactions in Silicon Valley, but do not explore the nature of the relations between semiconductor firms and their customers and suppliers. When Florida and Kenney argue that Silicon Valley’s flexibility derives from arms-length exchanges and atomistic fragmentation - and thus provides no match for Japan’s highly structured, large-firm dominated linkages - they, too, overlook growing evidence of the redefinition of supplier relations among U.S. technology firms [8,17]. Moreover, it is difficult to reconcile their bleak predictions with the continued dynamism of the Silicon Valley economy.

Students of business organization, by contrast, have focused on the emergence of network forms of industrial organization - intermediate forms which fall between Williamson’s ideal types of market exchange and corporate hierarchy. In the two decades since Richardson [26] observed the pervasive role of cooperation in economic relations, the literature on interfirm networks and alliances has burgeoned [23,20,16,10,14,15,29]. Nonetheless, there has been little attention to the emergence of inter-firm networks in America’s high technology regions. The case of the computer systems business in Silicon Valley demonstrates how inter-firm networks spread the costs and risks of developing new technologies and foster reciprocal innovation among specialist firms.

This paper begins by describing how the region’s systems firms are responding to the rising costs of product development, shorter product cycles and rapid technological change by remaining highly focused and relying on networks of suppliers. In so doing, they are rejecting the vertically integrated model of computer production which dominated in the postwar period, in which a firm manufactured most of its technically sophisticated components and sub-systems internally.


The paper’s second section analyzes the redefinition of supplier relations among Silicon Valley computer firms and their vendors. The creation of long-term, trust-based partnerships is blurring the boundaries between interdependent but autonomous firms in the region. While this formalization of inter-firm collaboration is recent, it builds on the longstanding traditions of informal information exchange, inter-firm mobility and networking which distinguish Silicon Valley [1,3,27].

The final section of the paper presents three cases which illustrate how interfirm collaboration fosters joint problem-solving between Silicon Valley systems firms and their specialist suppliers. These cases - of a contract manufacturer, a silicon foundry and the joint development of a microprocessor - demonstrate how the process of complementary innovation helps to account for Silicon Valley’s technological dynamism.

This paper draws on the findings of more than 50 in-depth interviews with executives and managers in Silicon Valley-based computer systems firms and suppliers during 1988, 1989 and 1990. The sample includes the region’s leading computer systems firms, many computer firms started during the 1980s, and a wide range of producers of semiconductors, disk drives, and other components.

Creating production networks

Competitive conditions in the computer systems business changed dramatically during the 1970s and 1980s. The cost of bringing new products to market increased at the same time that the pace of new product introductions and technological change accelerated. Hewlett-Packard’s Vice-President of Corporate Manufacturing, Harold Edmondson, claims that half of the firm’s orders in any year now come from products introduced in the preceding three years, and notes that:

In the past, we had a ten year lead in technology. We could put out a product that was not perfectly worked out, but by the time the competition had caught up, we’d have our product in shape. Today we still have competitive technology, but the margin for catch up is much shorter - often under a year.2

Computer makers like HP must now bring products to market faster than ever before, often in a matter of months.

2. Interview, Harold Edmondson, Vice-President of Corporate Manufacturing, Hewlett-Packard Corporation, 5 February 1988.


The cost of developing new products has in turn increased along with growing technological complexity. A computer system today consists of the central processing unit (CPU) which includes a microprocessor and logic chips, the operating system and applications software, information storage products (disk drives and memory chips), ways of putting in and getting out information (input-output devices), power supplies, and communications devices or networks to link computers together. Although customers seek to increase performance along each of these dimensions, it is virtually impossible for one firm to produce all of these components, let alone stay at the forefront of each of these diverse and fast changing technologies.

Systems firms in Silicon Valley are thus focusing on what they do best, and acquiring the rest of their inputs from the dense infrastructure of suppliers in the region as well as outside. This represents a fundamental shift from the vertically integrated approach to computer production characterized by IBM, DEC and other established U.S. computer firms.3 In this model, which survived in an era of slower changing products and technologies, the firm designed and produced virtually all of the technologically sophisticated components and sub-systems of the computer in-house. Subcontractors were used as surge capacity in times of boom demand, and suppliers were treated as subordinate producers of standard inputs.

When Sun Microsystems was established in 1982, by contrast, its founders chose to focus on designing hardware and software for workstations and to limit manufacturing to prototypes, final assembly and testing. Sun purchases application specific integrated circuits (ASICs), disk drives, and power supplies as well as standard memory chips, boxes, keyboards, mice, cables, printers and monitors from suppliers. Even the printed circuit board at the heart of its workstations is assembled by contract manufacturers. Why, asks Sun’s Vice-President of Manufacturing Jim Bean, should Sun vertically integrate when hundreds of specialty shops in Silicon Valley invest heavily in staying at the leading edge in the design and manufacture of microprocessors, disk drives, printed circuit boards (PCBs), and most other computer components and sub-systems? Relying on outside suppliers reduces Sun’s overhead and insures that the firm’s workstations use state-of-the-art technology.

3. IBM was forced to rely on outside vendors to an unprecedented extent in the early 1980s in order to bring a personal computer to market rapidly enough to compete with Apple.


This unbundling also provides the flexibility to introduce new products and rapidly alter the product mix. According to Sun’s Bean: ‘‘If we were making a stable set of products, I could make a solid case for vertical integration.’’4 He argues, however, that product cycles are too short and technology is changing too fast to move more manufacturing in-house. Relying on external suppliers allowed Sun to introduce four major new product generations in its first five years of operation, doubling the price-performance ratio each successive year. Sun eludes clone-makers by the sheer pace of new product introduction.

The guiding principle for Sun, like most new Silicon Valley systems firms, is to concentrate its expertise and resources on coordinating the design and assembly of a final system, to advance critical technologies which represent the firm’s core capabilities [24], and to spread the costs and risks of new product development through partnerships with suppliers. Tandem Computers manufactures its own PCBs, but purchases all other components externally. Mips Computer Systems set out to manufacture the microprocessors and PCBs for its workstations, but quickly sold its chipmaking and board assembly operations in order to focus on system design and development.

Some of the region’s firms explicitly recognize their reliance on supplier networks and foster their development. Apple Computers’ venture capital arm makes minority investments in promising firms which offer complementary technology. In 1984, for example, Apple invested $2.5 million in Adobe Systems, which produces the laser printer software critical to desktop publishing applications. Tandem Computers similarly invested in a small local telecommunications company, Integrated Technology Inc, and the two firms have jointly developed networking products to link together Tandem non-stop systems.

Companies like Sun, Tandem, and Mips recognize that the design and production of computers can no longer be accomplished by a single firm: it requires the collaboration of a variety of specialist firms, none of which could complete the task on its own. This reliance on outsourcing is reflected in the high level of sales per employee of Silicon Valley firms: compare Apple’s $369,593 and Silicon Graphics’ $230,000 per employee to IBM’s $139,250 and DEC’s $84,972 [25].

4. Cited in ‘‘For flexible, quality manufacturing, don’t do it yourself,’’ Electronic Business, 15 March 1987. In 1990, Sun introduced limited printed circuit board assembly operations; however, the firm remains committed to a highly focused strategy.


These highly focused producers depend on the unparalleled agglomeration of engineers and specialist suppliers of materials, equipment and services in Silicon Valley, and on the region’s culture of open information exchange and interfirm mobility, which foster continual recombination and new firm formation [1,3,27]. This infrastructure supports the continued emergence of new producers, while allowing them to remain specialized, and helps explain the proliferation of new computer systems producers in the region during the 1980s - even as the costs of developing and producing systems skyrocketed.

The decentralization of production and reliance on networks is not limited to small or new firms seeking to avoid fixed investments. Even Hewlett-Packard, which designs and manufactures chips, printed circuit boards, disk drives, printers, tape drives, and many other peripherals and components for its computer systems, has restructured internally to gain flexibility and technical advantage. During the 1980s, HP consolidated the management of over 50 disparate circuit technology units into two autonomous divisions, Integrated Circuit Fabrication and Printed Circuit Board Fabrication. These cross-cutting divisions now function as internal subcontractors to the company’s computer systems and instrument products groups. They have gained focus and autonomy which they lacked as separate units tied directly to product lines. Moreover, they must now compete with external subcontractors for firm business, which has forced them to improve service, technology, and quality. These units are even being encouraged to sell to outside customers in some instances. In short, HP appears to be creating a network within the framework of a large firm.

The networks extend beyond the system firms and their immediate suppliers. Silicon Valley’s suppliers of electronic components and sub-systems are themselves vertically disaggregated – for the same reasons as their systems customers. Producers of specialty and semi-custom integrated circuits, for example, have focused production to spread the costs and risks of chipmaking. Some specialize in design, others in process technology, and still others provide fabrication capacity for both chip and systems firms [30]. The same is true in disk drives. Innovative producers like Conner Peripherals and Quantum have explicitly avoided vertical integration, relying on outside suppliers not only for semiconductors but also for the thin-film disks, heads and motors which go into hard drives. Facing the pressures of rapidly changing product designs and technologies, they rely heavily on third party sources for most components and perform only the initial design, the final assembly, and testing themselves.


The costs and risks of developing new computer systems products are thus spread across networks of autonomous but interdependent firms in Silicon Valley. In an environment which demands rapid new product introductions and continual technological change, no one firm can complete the design and production of an entire computer system on its own. By relying on networks of suppliers - both within the region and more distant - Silicon Valley systems firms gain the flexibility to introduce increasingly sophisticated products faster than ever before.

The new supplier relations

Silicon Valley’s systems makers are increasingly dependent upon their suppliers for the success of their own products. Sun founder Scott McNealy acknowledges that ‘‘the quality of our products is embedded in the quality of the products we purchase’’ - which is no understatement, since so much of a Sun workstation is designed by its suppliers.5 The highly focused systems producer relies on suppliers not only to deliver reliable products on time but also to continue designing and producing high quality, state-of-the-art products.

While many systems firms begin as Sun did, integrating standard components from different suppliers and distinguishing their products with proprietary software, virtually all now seek specialized inputs to differentiate their products further. These computer makers are replacing commodity semiconductors with ASICs and designing customized disk drives, power supplies, keyboards and communication devices into their systems.6 As specialist suppliers continue to advance technologies critical to their own products, they reproduce the technological instabilities that allow this decentralized system to flourish. And there is little evidence that the pace of innovation in computers will stabilize in the near future.

5. Cited in W. Bluestein, ‘‘How Sun Microsystems buys for quality,’’ Electronics Purchasing, March 1988.

6. On the trend to customize inputs such as disk drives and power supplies, see R. Faletra and M. Elliot, ‘‘Buying in the Microcomputer Market,’’ Electronics Purchasing, October 1988.


Competition in computers is thus increasingly based on the identification of new applications and improvements in performance rather than simply lower cost. Silicon Valley firms are well known for creating new product niches such as Tandem’s fail-safe computers for on-line transaction processing, Silicon Graphics’ high performance super workstations with 3-D graphics capabilities, and Pyramid Technology’s mini-mainframe computer systems. Nonetheless, even the producers of general purpose commodity products such as IBM-compatible personal computers (‘‘clones’’) are being driven to source differentiated components in order to reduce costs or improve the performance of their systems. Everex Systems, for example, designs custom chip sets to improve the performance of its PC clones. The more specialized these computers and their components become, the more the systems firms are drawn into partnerships with their suppliers. And as they are increasingly treated as equals in a joint process of designing, developing and manufacturing innovative systems, the suppliers themselves become innovative and capital-intensive producers of differentiated products.

This marks a radical break with the arms-length relations of a mass production system, in which suppliers manufactured parts according to standard specifications and competed against one another to lower price, and in which portions of production were subcontracted as a buffer against fluctuations in market demand, output and labor supply [12]. In this model, suppliers remained subordinate and often dependent on a single big customer. IBM was notorious for managing its suppliers in this fashion during the early 1980s, and Silicon Valley systems firms today explicitly contrast their supplier relations with those of IBM [19].7

Silicon Valley systems firms now view relationships with suppliers more as long-term investments than short-term procurement relationships.8 They recognize collaboration with suppliers as a way to speed the pace of new product introductions and improve product quality and performance.

7. See, for example, E. Richards, ‘‘IBM pulls the strings’’, San Jose Mercury News, 31 December 1984.

8. D. Davis, ‘‘Making the Most of Your Vendor Relationships’’, Electronic Business, 10 July 1989. Collaborative supplier relations have been documented in a wide range of industries, including the US and German auto industries [11,28], the French machine tool industry [18], and the Japanese electronics and auto industries [7,21].


Most firms designate a group of ‘‘privileged’’ suppliers with whom to build these close relationships. This group normally includes the 20 percent of a firm’s suppliers that account for 75-80 percent of the value of its components: typically between 15 and 30 producers of integrated circuits, printed circuit boards, disk drives, power supplies, and other components which are critical to product quality and performance.

These relationships are based on shared recognition of the need to insure the success of a final product. Traditional supplier relations are typically transformed by a decision to exchange long-term business plans and share confidential sales forecasts and cost information. Sales forecasts allow suppliers to plan investment levels, while cost information encourages negotiation of prices that guarantee a fair return to the supplier while keeping the systems firm competitive. In some cases these relationships originate with adoption of Japanese just-in-time (JIT) inventory control systems, as JIT focuses joint attention on improving product delivery times and quality. It often requires a reduction in the number of suppliers and the creation of long-term supplier relations as well as the sharing of business plans and technical information.9

Reciprocity guides relations between Silicon Valley’s systems firms and their suppliers. Most of these relationships have moved beyond the inventory control objectives of JIT to encompass a mutual commitment to sustaining a long-term relationship. This requires a commitment not to take advantage of one another when market conditions change and can entail supporting suppliers through tough times - by extending credit, providing technical assistance or manpower, or helping them find new customers. Businesses commonly acknowledge this mutual dependence. Statements like ‘‘our success is their success’’ or ‘‘we want them to feel like part of an extended family’’ are repeated regularly by purchasing managers in Silicon Valley systems firms, whose roles have changed during the past decade from short-term market intermediaries to long-term relationship-builders.

9. When HP introduced JIT in the early 1980s, for example, the firm’s cost reductions and improvements in manufacturing efficiency were widely publicized in Silicon Valley. JIT has since been widely adopted in the region. See ‘‘Hewlett Packard swears by ‘Just-in-Time’ System’’, San Jose Business Journal, 10 June 1985.


Managers describe their relationships with suppliers as involving personal and moral commitments which transcend the expectations of simple business relationships. In the words of one CEO:

In these partnerships, the relationship transcends handling an order. There is more than a business relationship involved. In addition to the company’s commitment, there are personal commitments by people to make sure things happen. Furthermore, there are moral commitments: not to mislead the other party, to do everything possible to support the other party, and to be understanding.10

Suppliers are being drawn into the design and development of new systems and components at a very early stage, and they are typically more closely integrated into the customer’s organization in this process. A key supplier is often consulted during the initial phases of a new computer system’s conception - between two and five years prior to actual production - and involved throughout the design and development process. Some Silicon Valley firms even include suppliers in their design review meetings. This early cooperation allows a supplier to adapt its products to anticipated market changes and exposes the systems engineers to changing component technologies. In the words of HP Manufacturing VP Harold Edmondson:

We share our new product aspirations with them and they tell us the technological direction in which they are heading... We would never have done it this way 10 years ago.11

Tandem’s Materials Director, John Sims, similarly describes how information is shared early in the firm’s product development process:

There is a lot of give and take in all aspects of these relationships... We have a mutual interest in each other’s survival. We share proprietary product information, and we work jointly to improve designs and develop the latest technologies. We continually push each other to do better.12

According to an executive at Silicon Valley contract manufacturer Flextronics:

In the early stages of any project, we live with our customers and they live with us. Excellent communication is needed between design engineers, marketing people, and the production people, which is Flextronics.13

10. Cited in M. Cohodas, ‘‘What makes JIT work’’, Electronics Purchasing, January 1987.

11. Cited in S. Tierston, ‘‘The Changing Face of Purchasing,’’ Electronic Business, 20 March 1989.

12. Interview, John Sims, Director of Materials, Tandem Computers, 13 April 1988.

13. Cited in The San Jose Mercury News, 25 July 1988.


Once production begins, the relationship between the two firms continues at many different levels. Not only does the customer firm’s purchasing staff work with the supplier, but managers, engineers, and production staff at all levels of both firms meet to redefine specifications or to solve technical or manufacturing problems. In many cases, the flow of information between the two firms is continuous.

These relationships represent a major departure from the old practice of sending out precise design specifications to multiple sources for competitive bids. In fact, price is rarely considered as important as product quality and reliability in selecting a key supplier. Most firms choose a reliable, high quality supplier for a long-term relationship, recognizing that the price will be lower over the long term because unpredictable cost fluctuations will be reduced.

As these relationships mature, it is increasingly difficult to speak of these firms as bounded by their immediate employees and facilities. This blurring of firm boundaries is well illustrated by the case of Adaptec, Inc., a Silicon Valley-based maker of input-output controller devices. When it was formed in 1981, Adaptec management chose to focus on product design and development and to rely on subcontractors for both semiconductor fabrication and board assembly. The key to this strategy is the investment Adaptec has made in building long-term partnerships with its core suppliers, including Silicon Valley start-up International Microelectronic Products (IMP), Texas Instruments (TI), and the local division of the large contract manufacturer SCI. Adaptec’s Vice-President of Manufacturing, Jeffrey Miller, describes the high degree of trust which has evolved through continuing interaction between engineers in both organizations, claiming:

Our relations with our vendors is not much different than my relationship was at Intel with our corporate foundry - except now I get treated as a customer, not as corporate overhead... It really is very hard to define where we end and where our subcontractors begin: Adaptec includes a portion of IMP, of TI, and of SCI.14

In the words of HP’s Edmondson, the partners in these relationships cooperate in order to ‘‘pull one another up relative to the rest of the industry.’’15

14. Interview, Jeffrey Miller, Vice-President of Marketing, Adaptec Corporation, 10 May 1988.

15. Interview, Harold Edmondson, Vice-President of Corporate Manufacturing, HP, 5 February 1988.


This blurring of the boundaries of the firm transcends distinctions of corporate size or age. While many Silicon Valley start-ups have allied with one another and ‘‘grown up’’ together, others have benefitted from relationships with large established firms, both in and outside of the region. Moreover, while non-disclosure agreements and contracts are normally signed in these alliances, few believe that they really matter (especially in an environment of high employee turnover like Silicon Valley). Rather, the firms accept that they share a mutual interest in one another’s success and that their relationship defies legal enforcement. According to Apple Computers’ Manager of Purchasing, Tom McGeorge:

We have found you don’t always need a formal contract... If you develop trust with your suppliers, you don’t need armies of attorneys... In order for us to be successful in the future, we have to develop better working relationships, better trusting relationships, than just hounding vendors for price decreases on an annual basis.16

Of course, truly collaborative relationships do not emerge overnight or function flawlessly. There is a constant tension between cooperation and control. It may take years before trust develops or a supplier is given more responsibility. As with any close relationship, misunderstandings arise. Some relationships are terminated - in industry lingo, they result in ‘‘divorce’’ - while others languish temporarily and are revitalized with joint work. What is striking is how many of these relationships appear to not only survive but to flourish.

Although these relationships are often remarkably close, both parties are careful to preserve their own autonomy. Most Silicon Valley firms will not allow their business to account for more than 20 percent of a supplier’s product and prefer that no customer occupy such a position. Suppliers are thus forced to find outside customers, which insures that the loss of a single account will not put them out of business. This avoidance of dependence protects both supplier and customer, and it promotes the diffusion of technology across firms and industries. One local executive suggests the ideal situation is to hold a preferred position with suppliers, but not an exclusive relationship. ‘‘Dependence,’’ he notes, ‘‘makes both firms vulnerable.’’17

16. Cited in M. Cohodas, ‘‘How Apple buys electronics,’’ Electronics Purchasing, November 1986.

17. Interview, Henri Jarrat, President and Chief Operating Officer, VLSI Technology, 10 May 1988.


Regional proximity facilitates collaborative supplier relations. Materials Director at Apple Computers, Jim Bilodeau, describes the firm’s preference for local suppliers:

Our purchasing strategy is that our vendor base is close to where we’re doing business... We like them to be next door. If they can’t, they need to be able to project an image like they are next door.18

Sun’s Materials Director Scott Metcalf similarly claims that:

In the ideal world, we’d draw a 100 mile radius and have all our suppliers locate plants, or at least supply depots, into the area.19

These managers agree that long-distance communication is often inadequate for the continuous and detailed engineering adjustments required in making technically complex electronics products. Face-to-face interaction allows firms to address the unexpected complications in a supplier relationship that could never be covered by a contract. The president of a firm which manufactures power supplies for computers and peripherals explains:

I don’t care how well the specifications are written on paper, they are always subject to misinterpretation. The only way to solve this is to have a customer’s engineers right here. There is no good way to do it if you are more than fifty miles away.20

Nor is this desire for geographic proximity reducible to cost considerations alone. The trust, information exchange and teamwork which are the basis of collaborative supplier relations require continued interaction which is difficult to achieve over long distances. This is not to suggest that all Silicon Valley systems firms are tightly integrated into cooperative relationships with all of their suppliers. Traditional arms-length relations persist, for example, with suppliers of such commodity inputs as raw materials, process materials, sheet metal, and cables.

18. Cited in M. Cohodas, ‘‘How Apple Buys Electronics,’’ Electronics Purchasing, November 1986.

19. Interview, Scott Metcalf, Director of Materials, Sun Microsystems, 30 March 1988.

20. In order to improve its responsiveness, the firm recently moved part of its manufacturing from Hong Kong to Silicon Valley. Sun Microsystems, which is its neighbor, in turn increased its purchases of the firm’s power supplies from $500,000 to more than $8 million a year. Interview, Robert Smith, President, Computer Products/Boschert, 1 September 1988.


Nor is it to imply that all of a firm’s key suppliers are located in the same region. Many Silicon Valley firms purchase components such as commodity chips or disk drives from Japanese vendors. Systems firms in Silicon Valley are, however, redefining their relationships with their most important suppliers. A network of long-term, trust-based alliances with innovative suppliers represents a source of advantage for a systems producer which is very difficult for a competitor to replicate. Such a network provides both flexibility and a framework for joint learning and technological exchange.

Production networks and innovation

Silicon Valley today is far more than an agglomeration of individual technology firms. Its networks of interdependent yet autonomous producers are increasingly organized to grow and innovate reciprocally. These networks promote new product development by encouraging specialization and allowing firms to spread the costs and risks associated with developing technology-intensive products. They spur the diffusion of new technologies by facilitating information exchange and joint problem solving between firms and even industries. Finally, the networks foster the application of new technologies because they encourage new firm entry and product experimentation.

Three cases demonstrate how these production networks promote technological advance in Silicon Valley. The first is the relationship of systems firms to their contract manufacturers, which are changing from sweatshops into technologically sophisticated, capital-intensive businesses as they assume more responsibility for product design and process innovation. The second case involves a foundry relationship between a large systems firm and a small design specialist in which each contributes distinctive, state-of-the-art expertise to a process of complementary innovation. In the third case, a systems firm spreads the costs of perfecting a state-of-the-art microprocessor through joint product development with a semiconductor producer. Taken together, these cases demonstrate how collaboration fosters joint problem solving and how Silicon Valley’s firms are learning to respond collectively to fast changing markets and technology.

Contract manufacturers

Printed circuit board assembly has historically been among the most labor-intensive and least technically sophisticated phases of electronics manufacturing. Contract assembly was traditionally used by systems firms in Silicon Valley to augment in-house manufacturing capacity during periods


of peak demand. Commonly referred to as ‘‘board stuffing’’, it was the province of small, undercapitalized and marginal firms which paid unskilled workers low wages to work at home or in sweatshops. Many of these assemblers moved to low-wage regions of Asia and Latin America during the 1960s and early 1970s.

This profile changed fundamentally during the 1980s. Systems firms like IBM, HP and Apple expanded their business with local contract manufacturers in order to lower their fixed costs and respond to shorter product cycles. This enabled the region’s PCB assemblers to expand and upgrade their technology. As small shops received contracts and assistance from larger systems firms, they invested in state-of-the-art manufacturing automation and assumed more and more responsibility for the design and development of new products.

Flextronics Inc. was one of Silicon Valley’s earliest board stuffing firms. During the 1970s it was a small, low value-added, ‘‘rent-a-body’’ operation which provided quick turnaround board assembly for local merchant semiconductor firms. By the late 1980s it was the largest contract manufacturer in the region and offered state-of-the-art engineering services and automated manufacturing. This transformation began in 1980 when Flextronics was purchased by new management. The company expanded rapidly in the subsequent years, shifting the bulk of its services from consignment manufacturing, in which the customer provides components which the contract manufacturer assembles according to the customer’s designs, to ‘‘turnkey’’ manufacturing, in which the contract manufacturer selects and procures electronic components as well as assembling and testing the boards.

The shift from consignment to turnkey manufacturing is a shift from a low risk, low value-added, low loyalty subcontracting strategy to a high risk, high value-added, high trust approach because the contract manufacturer takes responsibility for the quality and functioning of a complete subassembly. This shift greatly increases the systems firm’s dependence on its contract manufacturer’s process and components. Flextronics’ CEO Robert Todd describes the change:

With turnkey they’re putting their product on the line, and it requires a great deal of trust. This kind of relationship takes years to develop and a major investment of people time.21

21. Interview, Robert Todd, CEO, Flextronics Inc., 2 February 1988.


Todd claims that whereas a consignment relationship can be replicated in weeks, it can take years to build the trust required for a mature turnkey relationship in which the design details of a new product are shared. These relationships demand extensive organizational interaction and a surprising amount of integration.22 As a result, firms which consign their manufacturing typically have six or seven suppliers which compete on the basis of cost, while those relying on turnkey contractors build close relations with only one or two firms, selected primarily for quality and responsiveness.

The shift to turnkey manufacturing has clear implications for a firm’s location. Flextronics CEO Robert Todd claims:

We’ve never been successful for any length of time outside of a local area. We might get a contract initially, but the relationship erodes without constant interaction. Sophisticated customers know that you must be close because these relationships can’t be built over long distances.23

This explains why the US contract manufacturing business is highly regionalized. During the 1980s, Flextronics established production facilities in Massachusetts, South Carolina, Southern California, Hong Kong, Taiwan, Singapore, and the People’s Republic of China.24 SCI Systems, the largest US contract manufacturer, is based in Alabama where costs are very low, but has a major facility in high-cost Silicon Valley in order to build the relationships needed to serve the local market.

By 1988 over 85 percent of Flextronics’ business was turnkey; in 1980 it had been entirely consignment. This growth was initially due to a close relationship with rapidly expanding Sun Microsystems, which by 1988 accounted for 24 percent of Flextronics’ business.

This explains why the US contract manufacturing business is highly regionalized. During the 1980s, flextronics established production facilities in Massachusetts, South Carolina, Southern California, Hong Kong, Taiwan, Singapore, and the People’s Republic of China.24 SC1 Systems, the largest US contract manufacturer, is based in Alabama where costs are very low, but has a major facility in high-cost Silicon Valley in order to build the relationships needed to serve the local market. By 1988 over 85 percent of Flextronics’ business was turnkey; in 1980 it had been entirely consignment. This growth was initially due to a close relationship with rapidly expanding Sun Microsystems which by 1988 accounted for 24 percent of Flextronics business. The two firms have explicitly sought to 22

Flextronics’ CEO meets with Sun’s Senior Vice-President of Operations for breakfast once a month to insure that trust is maintained at the top and high-level problems are addressed. Meanwhile planning, engineering, purchasing and marketing personnel from the two firms meet still more frequently - often weekly, and in some cases daily – to solve problems and plan for the future. This involves an immense amount of sharing and typically results in highly personalized relationships between the two firms. Interview, Dennis Stradford, Vice-President of Marketing, Flextronics, 3 March 1988. 23

Interview, Robert Todd, CEO, Flextronics, Inc., 2 February 1988.

24. This expansion was too rapid. In 1989, Flextronics was forced to restructure its worldwide business because of significant excess manufacturing capacity and operating losses which began with a downturn in the disk drive business. To eliminate excess capacity, the production facilities in Massachusetts, South Carolina, Southern California, and Taiwan were sold or closed.


The two firms have explicitly sought to limit this share in order to avoid dependency. Flextronics has diversified its customer base by developing customers in a wide range of different industries. The firm now also serves firms in the disk drive, tape drive, printer, and medical instruments industries.

Two recent trends in contract manufacturing illustrate how specialization breeds technological advance and increasing interdependence. On one hand, Silicon Valley systems companies are relying on contract manufacturers for the earliest phases of board design. Flextronics now offers engineering services and takes responsibility for the initial design and layout of Sun’s circuit boards as well as the pre-screening of electronic components. The use of contract manufacturers for board design implies a radical extension of inter-firm collaboration because systems firms must trust subcontractors with the proprietary designs which are the essence of their products. When successful, such a relationship increases the agility of the systems firm while enhancing the capabilities of the contract manufacturers. In fact, Flextronics is now capable of manufacturing complete systems, although this accounts for only 5 percent of their business.

The second trend, the increasing use of surface mount technology (SMT), is transforming printed circuit board assembly into a capital-intensive business. While the traditional through-hole assembly technique involves soldering individual leads from an integrated circuit through the holes in circuit boards, SMT uses epoxy to glue electronic components onto the board. The new process is attractive because it produces smaller boards (components can be mounted on both sides of the board) and because it is cheaper in volume than through-hole. SMT is, however, far more complex and expensive than through-hole assembly. It requires tight design rules, high densities, and a soldering process which demands expertise in applied physics and chemistry and takes years of experience to perfect. Industry analysts describe SMT as 5 to 10 times more difficult a process than through-hole. Moreover, a single high speed SMT production line costs more than $1 million.

Contract manufacturer Solectron Corporation has led Silicon Valley in the adoption of SMT, investing more than $18 million in SMT equipment since 1984.25

25 G. Lasnier, ‘‘Solectron to acquire 10 advanced Surface Mount Systems,’’ San Jose Business Journal, 8 February 1988.


It has captured the business of IBM, Apple, and HP (as well as many smaller Silicon Valley firms) by automating and by emphasizing customer service, high quality, and fast turnaround. According to one venture capitalist and industry veteran, Solectron's manufacturing quality is superior to that found in any systems firm in Silicon Valley.26 This manufacturing excellence is due in part to Solectron's investment in state-of-the-art equipment. It is also the result of the expertise accumulated by applying lessons learned from one customer to the next. All of Solectron's customers thus benefit from learning that would formerly have been captured by individual firms. Moreover, lessons learned in manufacturing for firms in one sector are spread to customers in other sectors, stimulating the inter-industry diffusion of innovations. The use of contract manufacturers, initially an attempt to spread risks, focus resources, and reduce fixed costs in an era of accelerating new product introductions, is thus producing mutually beneficial technological advance. While many of Silicon Valley's contract assemblers remain small and labor intensive, some, such as Flextronics and Solectron, are no longer subordinate or peripheral units in a hierarchical production system. Rather, they have transformed themselves into sophisticated specialists which contribute as equals to the vitality of the region's production networks.

Silicon foundries

Silicon foundries are the manufacturing facilities used for the fabrication of silicon chips, or semiconductors. The use of external foundries grew rapidly in the 1980s as semiconductor and systems firms began designing integrated circuits themselves but sought to avoid the cost of the capital-intensive fabrication process [30]. Like contract manufacturers, foundries offer their customers the cumulative experience and expertise of specialists. Unlike contract assemblers, however, silicon foundries have always been technologically sophisticated and highly capital intensive – and they have thus interacted with customers as relative equals offering complementary strengths. This relationship can be an exchange of services with limited technical interchange, or it can offer significant opportunities for reciprocal innovation. The collaboration between Hewlett-Packard and semiconductor design specialist Weitek illustrates the potential for complementary innovation in a foundry relationship.

26 Interview, William Davidow, Partner, Mohr Davidow Ventures, 21 April 1988.


Weitek, which has no manufacturing capacity of its own, is the leading designer of ultra-high-speed ''number crunching'' chips for complex engineering problems. In order to improve the performance of the Weitek chips, HP opened up its state-of-the-art 1.2 micron wafer fabrication facility, historically closed to outside firms, to Weitek for use as a foundry.

This alliance grew out of a problem that HP engineers were having with the development of a new model workstation. They wanted to use Weitek designs for this new product, but Weitek (which had supplied chip-sets to HP for several years) could not produce chips which were fast enough to meet HP's needs. Realizing that the manufacturing process at the foundry Weitek used slowed the chips down, the HP engineers suggested fully optimizing the Weitek designs by manufacturing them with HP's more advanced fabrication process. This culminated in a three-year agreement which allows the two firms to benefit directly from each other's technical expertise. The agreement guarantees that HP will manufacture and purchase at least $10 million worth of the Weitek chip-sets in its foundry, and it provides Weitek the option to purchase an additional $20 million of the chip-sets from the foundry to sell to outside customers. This arrangement assures HP a steady supply of Weitek's sophisticated chips and allows it to introduce its new workstation faster than if it had designed a chip in-house; it provides Weitek with a market and the legitimacy of a close association with HP, as well as guaranteed space in a state-of-the-art foundry. Moreover, the final product itself represents a significant advance over what either firm could have produced independently.

Both firms see the real payoff from this alliance in expected future technology exchanges. According to an HP program manager who helped negotiate the deal: ''We wanted to form a long-term contact (sic) with Weitek – to set a framework in place for a succession of business opportunities.''27 By building a long-term relationship, the firms are creating an alliance which allows each to draw on the other's distinctive and complementary expertise to devise novel solutions to common problems.

27 Cited in S. Jones, ‘‘Hewlett-Packard inks Major Chip Deal,’’ San Jose Business Journal, 18 May 1987.


HP now has greater access to Weitek's design talent and can influence the direction of these designs, while Weitek has first-hand access to the needs and future plans of a key customer, as well as assured access to HP's manufacturing capabilities. Both are now better positioned to respond to an unpredictable and fast-changing market.

In spite of this increased interdependence, HP and Weitek have preserved their autonomy. Weitek sells the chip-sets it produces on HP's fab to third parties, including many HP competitors, and continues to build partnerships and collect input from its many other customers (in fact, Weitek deliberately limits each of its customers to less than 10 percent of its business). Meanwhile, HP is considering opening its foundry to other outside chip design firms, and it still maintains its own in-house design team. The openness of such a partnership ensures that design and manufacturing innovations that grow out of collaboration diffuse rapidly. Both firms see this relationship as a model for the future. While HP does not intend to become a dedicated foundry, it is looking for other partnerships that allow it to leverage its manufacturing technology using external design expertise. Weitek, in turn, depends upon a strategy of alliances with firms which can provide manufacturing capacity as well as insights into fast-evolving systems markets.

Collaborative product development

Joint product development represents the ultimate extension of interdependence in a networked system. The collaboration between Sun Microsystems and Cypress Semiconductor to develop a sophisticated version of Sun's RISC (reduced instruction set computing) microprocessor is a classic example. A RISC chip uses a simplified circuit design that increases the computing speed and performance of a microprocessor.

Sun's first workstations were based entirely on standard parts and components. The firm's advantage lay in proprietary software and its ability to introduce new products quickly. Over time, the firm began to distinguish its products by adding new capabilities, enhancing its software, and purchasing semi-custom components. Sun's most significant innovation was to design its own microprocessor to replace the standard Motorola microprocessors used in its early workstations. This RISC-based microprocessor, called Sparc, radically improved the speed and performance of Sun's products – and simultaneously destabilized the microprocessor market.


Sun further broke with industry tradition by freely licensing the Sparc design, in contrast with Intel's and Motorola's proprietary approach to their microprocessors. The firm established partnerships with five semiconductor manufacturers, each of which uses its own process technology to produce specialized versions of Sparc.28 The resulting chips share a common design and software, but differ in speed and price. After supplying Sun, these suppliers are free to manufacture and market their versions of Sparc to other systems producers. As a result, Sun has extended acceptance of its architecture while recovering some of its development costs and avoiding the expense of producing and marketing the new chip. Its suppliers, in turn, gain a guaranteed customer in Sun and a new and promising product – which they are jointly promoting.

Collaboration allowed Sun to reduce significantly the cost of producing a new microprocessor. The firm spent only $25 million developing the Sparc chip, compared to Intel's $100 million investment in its 80386 microprocessor. In the words of one computer executive:

The real significance of Sparc and of RISC technology is that you no longer have to be a huge semiconductor company, with $100 million to spare for research and development, to come up with a state-of-the-art microprocessor.29

Mips Computer Systems has similarly designed its own RISC chip and licensed it to three Silicon Valley semiconductor vendors. Sun's partnership with Cypress Semiconductor extends such collaboration the furthest. In 1986, the two firms agreed to develop jointly a high-speed, high-performance version of Sparc. A team of approximately 30 engineers from both companies worked at a common site for a year – thus combining Sun's Sparc architecture and knowledge of systems design and software with Cypress' integrated circuit design expertise and advanced CMOS fabrication process. This core team was supported through constant feedback from the product development, marketing, and testing specialists in each firm.

28 The firms are Fujitsu Ltd. (the first to manufacture the Sparc chip, because the leading US semiconductor firms refused to accept external designs), LSI Logic, Bipolar Integrated Technologies, Cypress Semiconductor, and Texas Instruments.

29 B. Schlender, ''Computer Maker aims to transform Industry and become a Giant,'' The Wall Street Journal, 18 March 1988.


Cypress Vice-President of Marketing Lowell Turriff describes the collaboration as an ''ideal marriage'' characterized by ''an amazing environment of cooperation.''30 The two firms benefit from complementary expertise: Sun gained access to Cypress' advanced design capabilities and its state-of-the-art CMOS manufacturing facility to produce a very high-speed microprocessor; Cypress gained an alliance with a rapidly growing systems firm, insights into the direction of workstation technology, and a new, high-performance product. Cypress executives envision similar partnerships with customers in other industrial markets, including telecommunications and automobiles.

By building a network of collaborative relationships with suppliers like Cypress, Sun has not only reduced the cost and spread the risks of developing its workstations, but has been able to bring new products with innovative features and architectures to market rapidly. These relationships prevent competitors from simply imitating Sun's products, and represent a formidable competitive barrier. This explains Sun's championing of systems which rely on readily available components and industry-standard technologies (or ''open systems''). Under this approach, computers made by different firms adhere to standards which allow them to use the same software and exchange information with one another. This marks a radical break from the proprietary systems approach of industry leaders IBM, DEC, and Apple. Open standards encourage new firm entry and promote experimentation because they force firms to differentiate their products while remaining within a common industry standard. Proprietary systems, by contrast, exclude new entrants and promote closed networks and stable competitive arrangements.

As Silicon Valley producers introduce specialized systems for a growing diversity of applications and users, they are fragmenting computer markets. The market no longer consists simply of mainframes, minicomputers, and personal computers: it is segmented into distinct markets for supercomputers, super minicomputers, engineering workstations, networked minicomputers, personal computers, parallel and multiprocessor computers, and specialized educational computers [19]. As long as this process of product differentiation continues to undermine homogeneous mass markets for computers, Silicon Valley's specialist systems producers and their networks of suppliers will flourish.

30 Interview, Lowell Turriff, Vice-President Marketing, Cypress Semiconductor, 7 March 1988.


Conclusions

Technical expertise in Silicon Valley today is spread across hundreds of specialist enterprises, enterprises which continue to develop independent capabilities while simultaneously learning from one another. As computer systems firms and their suppliers build collaborative relationships, they spread the costs and risks of developing new products while enhancing their ability to adapt rapidly to changing markets and technologies.

This is not to suggest that inter-firm networks are universally diffused or understood in Silicon Valley. The crisis of the region's commodity semiconductor producers in the mid-1980s is attributable in part to distant, even antagonistic, relations between the chipmakers and their equipment suppliers [30,33]. Other examples of arms-length relationships and distrust among local producers can no doubt be identified [6]. However, these failures of coordination do not signal inherent weaknesses in network forms of organization, but rather the need for the institutionalization of inter-firm collaboration in the U.S. Proposals to replace Silicon Valley's decentralized system of production with an ''American keiretsu'' – by constructing tight alliances among the nation's largest electronics producers and suppliers [5] – would sacrifice the flexibility which is critical in the current competitive environment. Such proposals also misread the changing organization of production in Japan, where large firms increasingly collaborate with small and medium-sized suppliers and encourage them to expand their technological capabilities and organizational autonomy [13,21,7]. In Japan, as in Silicon Valley, a loosely integrated network form of organization has emerged in response to the market volatility of the 1970s and 1980s.

The proliferation of inter-firm networks helps to account for the continued dynamism of Silicon Valley. While the region's firms rely heavily on global markets and distant suppliers, there is a clear trend for computer systems producers to prefer local suppliers and to build the sort of trust-based relationships which flourish with proximity. The region's vitality is thus enhanced as inter-firm collaboration breeds complementary innovation and cross-fertilization among networks of autonomous but interdependent producers.

References

[1] D. Angel, The Labor Market for Engineers in the US Semiconductor Industry, Economic Geography 65 (2) (1989) 99–112.


[2] M. Best, The New Competition: Institutions of Industrial Restructuring (Harvard University Press, Cambridge, MA, 1990).
[3] A. Delbecq and J. Weiss, The Business Culture of Silicon Valley: Is it a Model for the Future? in: J. Weiss (ed.), Regional Cultures, Managerial Behavior and Entrepreneurship (Quorum Books, New York, 1988).
[4] R. Dore, Goodwill and the Spirit of Market Capitalism, The British Journal of Sociology XXXIV (4) (1983) 459–482.
[5] C. Ferguson, The Coming of the U.S. Keiretsu, Harvard Business Review (July-August 1990).
[6] R. Florida and M. Kenney, Why Silicon Valley and Route 128 Won't Save Us, California Management Review 33 (1) (1990) 68–88.
[7] M. Fruin, Cooperation and Competition: Interfirm Networks and the Nature of Supply in the Japanese Electronics Industry, Euro-Asia Center, INSEAD (1988).
[8] R. Gordon, Growth and the Relations of Production in High Technology Industry, Paper presented at Conference on New Technologies and New Intermediaries, Stanford University (1987).
[9] M. Granovetter, Economic Action and Social Structure: The Problem of Embeddedness, American Journal of Sociology 91 (1985) 481–510.
[10] H. Hakansson, Industrial Technological Development: A Network Approach (Croom Helm, Beckenham, 1987).
[11] S. Helper, Supplier Relations at a Crossroads: Results of Survey Research in the US Auto Industry, Boston University, Department of Operations Management (1990).
[12] J. Holmes, The Organization and Locational Structure of Production Subcontracting, in: A. Scott and M. Storper (eds.), Production, Work and Territory (Allen and Unwin, Boston, 1986).
[13] K. Imai, Evolution of Japan's Corporate and Industrial Networks, in: B. Carlsson (ed.), Industrial Dynamics (Kluwer, Dordrecht, 1988).
[14] J. Jarillo, On Strategic Networks, Strategic Management Journal 9 (1988) 31–41.


[15] J. Johanson and L. Mattson, Interorganizational Relations in Industrial Systems: A Network Approach Compared with the Transactions Cost Approach, International Studies of Management and Organization XVII (1) (1987) 34–48.
[16] R. Johnston and P. Lawrence, Beyond Vertical Integration – the Rise of the Value-Adding Partnership, Harvard Business Review (July-August 1988) 94–101.
[17] A. Larson, Cooperative Alliances: A Study of Entrepreneurship, PhD dissertation, Harvard University, Sociology and Business Administration (1988).
[18] E. Lorenz, Neither Friends nor Strangers: Informal Networks of Subcontracting in French Industry, in: D. Gambetta (ed.), Trust (Basil Blackwell, New York, 1988).
[19] R. McKenna, Who's Afraid of Big Blue? (Addison Wesley, New York, 1989).
[20] R. Miles and C. Snow, Organizations: New Concepts for New Forms, California Management Review XXVIII (3) (1986) 62–73.
[21] T. Nishiguchi, Strategic Dualism: An Alternative in Industrial Societies, Ph.D. dissertation, Oxford University, Nuffield College (1989).
[22] M. Piore and C. Sabel, The Second Industrial Divide (Basic Books, New York, 1984).
[23] W. Powell, Neither Market nor Hierarchy: Network Forms of Organization, Research in Organizational Behavior 12 (1990) 295–336.
[24] C. Prahalad and G. Hamel, The Core Competence of the Corporation, Harvard Business Review (May-June 1990).
[25] J. Quinn, T. Doorley and P. Paquette, Technology in Services: Rethinking Strategic Focus, Sloan Management Review (Winter 1990) 79–87.
[26] G. Richardson, The Organisation of Industry, The Economic Journal 82 (Sept. 1972) 883–896.
[27] E. Rogers, Information Exchange and Technological Innovation, in: D. Sahal (ed.), The Transfer and Utilization of Technical Knowledge (Lexington Books, Lexington, MA, 1982).
[28] C. Sabel, Flexible Specialization and the Reemergence of Regional Economies, in: P. Hirst and J. Zeitlin (eds.), Reversing Industrial Decline (Berg, Oxford, 1988).


[29] C. Sabel, H. Kern and G. Herrigel, Collaborative Manufacturing: New Supplier Relations in the Automobile Industry and the Redefinition of the Industrial Corporation, Massachusetts Institute of Technology (1989).
[30] A. Saxenian, Regional Networks and the Resurgence of Silicon Valley, California Management Review 33 (1) (1990) 89–112.
[31] A. Scott and D. Angel, The U.S. Semiconductor Industry: A Locational Analysis, Environment and Planning A 19 (1987) 875–912.
[32] M. Storper and A. Scott, The Geographical Foundations and Social Regulation of Flexible Production Complexes, in: J. Wolch and M. Dear (eds.), Territory and Social Reproduction (Allen & Unwin, London, 1988).
[33] J. Stowsky, The Weakest Link: Semiconductor Equipment, Linkages, and the Limits to International Trade, Working Paper No. 27, Berkeley Roundtable on the International Economy, University of California, Berkeley, 1988.

NETWORKS AND KNOWLEDGE: THE BEGINNING AND END OF THE PORT COMMODITY CHAIN, 1703-1860$

Paul Duguid$$

ABSTRACT

Diversified trading networks have recently drawn a great deal of attention. In the process, the importance of diversity has perhaps been overemphasized.

$ This chapter is a reprint of the article ''Networks and Knowledge: The Beginning and End of the Port Commodity Chain, 1703-1860'' published in Business History Review Volume 79 Issue 3 (2005).

$$ PAUL DUGUID is adjunct professor at the School of Information & Management Systems at the University of California, Berkeley, and professorial research fellow at the School of Management and Business, Queen Mary, University of London. The research on which this paper is based was funded in part by grants from the National Endowment for the Humanities and the Fundação Luso-Americana para o Desenvolvimento. I am grateful for very helpful comments from Regina Grafe, David Hancock, Teresa Silva Lopes, Gaspar Martins Pereira, and James Simpson.

Collaboration and Competition in Business Ecosystems
Advances in Strategic Management, Volume 30, 311–349
Copyright © 2013 Cambridge University Press
All rights of reproduction in any form reserved
ISSN: 0742-3322/doi:10.1108/S0742-3322(2013)0000030013


Using the trade in port wine from Portugal to Britain as an example, this essay attempts to show how a market once dominated by general, diversified traders was taken over by dedicated specialists whose success might almost be measured by the degree to which they rejected diversification to form a dedicated ''commodity chain.'' The essay suggests that this strategy was better able to handle matters of quality and the specialized knowledge that port wine required. The essay also highlights the question of power in such a chain. Endemic commodity-chain struggles are clearest in the vertical brand war that broke out in the nineteenth century, which, by concentrating power, marked the final stage in the transformation of the trade from network to vertical integration.

Keywords: networked society, Port commodity chain, vertical integration

Emphasizing the recent emergence of a ''networked society,'' modern commentators too easily make networks seem particularly new.1 Avner Greif's account of early medieval Maghribi traders, John Padgett and Christopher Ansell's description of the Medicis' trade and banking in the Renaissance, and Fernand Braudel's study of commercial society in early modern Europe all remind us, however, that business networks are in fact remarkably old. In Perry Gauci's words, the ''modern obsession with 'networking' has a long heritage.''2 Another problem with many current discussions of networks is that they too easily embrace almost every social arrangement beyond (and perhaps including) the nuclear family.

1 See, for example, Manuel Castells, The Information Age: Economy, Society and Culture, 3 vols. (Oxford, 1996-98). For further discussion of the question of ''networks,'' see the introduction to this issue.

2 Avner Greif, ''Reputation and Coalitions in Medieval Trade: Evidence on the Maghribi Traders,'' Journal of Economic History 49 (1989): 857-82; John F. Padgett and Christopher K. Ansell, ''Robust Action and the Rise of the Medici, 1400-1434,'' American Journal of Sociology 98 (1993): 1259-1319; Fernand Braudel, Civilization and Capitalism, 15th-18th Century, 3 vols., trans. Sian Reynolds (New York, 1981); Perry Gauci, The Politics of Trade: The Overseas Merchant in State and Society, 1660-1720 (Oxford, 2000), 63. For recent analysis of business networks in history, see David Hancock, Citizens of the World: London Merchants and the Integration of the British Atlantic Community (Cambridge, Mass., 1995); Mary Rose, Firms, Networks, and Business Values: The British and American Cotton Industries since 1750 (Cambridge, Mass., 2000); Naomi R. Lamoreaux, Daniel M. G. Raff, and Peter Temin, ''Beyond Markets and Hierarchies: Towards a New Synthesis of American Business History,'' American Historical Review 108 (Apr. 2003): 404-33.


In any discussion, it is important to clarify the kind of network that is being discussed.3 In this essay, I will question some widely accepted assumptions about networks in general by looking at one particular kind of network, the ''commodity chain.'' Terrence Hopkins and Immanuel Wallerstein define such a chain as ''a network of labor and production processes whose end result is a finished commodity,'' indicating that it is indeed a kind of network, but a highly specific one, organized around a particular commodity and often a particular market, toward which it will stretch in a linear fashion from production to consumption.4

3 Podolny and Page are among the few brave enough to try to define a network organization: ''any collection of actors (N ≥ 2) that pursue repeated, enduring exchange relations with one another and, at the same time, lack a legitimate organizational authority to arbitrate and resolve disputes that may arise during the exchange.'' The stipulation about authority might rule out the family. See Joel M. Podolny and Karen L. Page, ''Network Forms of Organization,'' Annual Review of Sociology 24 (1998): 59; Ezra W. Zuckerman, ''On Networks and Markets by Rauch and Casella,'' Journal of Economic Literature 41 (2003): 545-65.

4 Terence K. Hopkins and Immanuel Wallerstein, ''Commodity Chains in the World-Economy Prior to 1800,'' Review 10 (1986): 159. A variety of concepts—supply chain, value chain, even filière—offer themselves for this discussion. The literature of the commodity chain seems most apt because Hopkins and Wallerstein developed it to analyze international trade and deployed it historically. Further, as developed by Gary Gereffi, the concept focuses on issues of power, knowledge, and coordination over geographic distances—all issues of interest to the argument offered here. In supply- and value-chain literature, by comparison, such matters tend to be peripheral if considered at all. See Paul Duguid, ''Brands and Supply Chains: Governance before and after Chandler,'' in Gouverner les Organisations, ed. Hervé Dumez (Paris, 2004), 329-69; Gary Gereffi, ''International Trade and Industrial Upgrading in the Apparel Commodity Chain,'' Journal of International Economics 48 (June 1999): 37-70; Gary Gereffi and Miguel Korzeniewicz, eds., Commodity Chains and Global Capitalism (Westport, Conn., 1994). See also Peter Dicken et al., ''Chains and Networks, Territories and Scales: Towards a Relational Framework for Analysing the Global Economy,'' Global Networks 1 (2001): 89-112; Philip Raikes, Michael Friis Jensen, and Stefano Ponte, ''Global Commodity Chain Analysis and the French Filière Approach: Comparison and Critique,'' Economy and Society 3 (2000): 390-417. For an interesting discussion of the filière in the context of the wine trade, see J. Barker, N. Lewis, and W. Moran, ''Reregulation and the Development of the New Zealand Wine Industry,'' Journal of Wine Research 12 (2001): 199-221. I am grateful to Regina Grafe for her helpful comments that pushed me toward the commodity chain. Neither Hopkins and Wallerstein, nor Gereffi, nor this paper restricts the use of commodity to raw materials. The restricted use, the Oxford English Dictionary suggests, is fairly recent. See ''commodity (6a),'' Oxford English Dictionary, 2nd ed. (Oxford, 1989). Historically, the wine trade has viewed wine as a raw foodstuff and a manufactured commodity to suit its convenience. See Edouard Calmels, Des Noms et Marques de Fabrique et de Commerce de la Concurrence Déloyale (Paris, 1858), especially pp. 80ff. Because of the fortification process by which it is made, port falls into the category of processed wines.


Business-history literature, while expansive on networks in general, is remarkably reticent about commodity chains—despite their importance in the history of business. To address this reticence, I will explore the formation of the port commodity chain over the course of 160 years, primarily by looking at the activities of three port-wine exporters, Hunt & Co., Offley & Co., and Sandeman & Co., that originated in the eighteenth century and have survived into the present day.5 This particular commodity chain deserves attention because, while accounts of merchant capitalism and trading companies generally show that success came to those that expanded their networks and diversified their activities, interests, and outlets, this sector shows an alternative progression.6 In the port-wine trade, the firms that rose to prominence did so largely by extracting themselves from ''infinite networks of trade,'' developing, instead, a more singular chain.7 The most successful port firms of the period narrowed their extensive trading connections in order to focus on a relatively linear set of links. By contrast, port firms that continued to operate in diverse networks lost competitive ground. The contrast is distinct enough to suggest that, in certain circumstances, expanded networks are not the preferred structure for commercial success.

5 All three firms now belong to the Portuguese holding company, Sogrape Vinhos. Over two hundred or more years of existence, the names of these firms changed as their partners changed. For simplicity's sake, here they will be referred to by the first name of the most enduring partnership, Hunt, Offley, and Sandeman. The records of the first two firms are in the archives of A. A. Ferreira & Ca., Vila Nova de Gaia, Portugal (hereafter AAF), except for one Offley letterbook, which anomalously is with the records of Sandeman & Co., in the archive of the House of Sandeman, Vila Nova de Gaia, Portugal (hereafter HoS). Since 2002, all three firms have been owned by Sogrape & Ca. I am grateful to Sogrape & Ca, George Sandeman, António Oliveira Bessa, Luisa Olazábel, and Paula Montes Leal for permission to use and their help in using these records.

6 Chapman, for example, echoes Braudel in seeing ''diversification'' as a critical characteristic of merchant capitalists. Jones, likewise, sees the ''tendency to diversify'' as a ''distinguishing feature'' of trading companies until the mid-twentieth century. Stanley Chapman, Merchant Enterprise in Britain: From the Industrial Revolution to World War I (Cambridge, 1992), 35; Geoffrey Jones, Merchants to Multinationals: British Trading Companies in the Nineteenth and Twentieth Centuries (New York, 2000), 1. In the context of this study, it is worth noting that Jones gives port firms as an example of firms that, historically, specialized rather than diversified (ibid., 25). This paper attempts to show why.

7 H. D. Traill, ''The Ant's Nest,'' Recaptured Rhymes (London, 1882), 156-60.


In presenting the example of this chain, I will argue that one such circumstance may involve networks built around complex goods whose quality, while critical, is highly variable; for which price, as James E. Rauch puts it, is ''uninformative''; and in which the progress to market requires concatenating diverse participants from distinct geographic locations.8 Such networks face the challenge of binding the disparate knowledge of their diverse participants into a coherent continuum—a difficult task that demands (and rewards) increasing commitment to the network and its specialized knowledge, and, for the same reason, requires members to extract themselves from other networks and their generalized capabilities. Port, as I will attempt to show, was such a good and made such demands during the period covered in this essay. After discussing the beginnings of the port chain and describing the peculiarities of the commodity, I will explore the chain's critical links—some highly visible, others less so—and their evolution over time. I will then examine the threat posed by competing products to port's hold on the British market and the strategies adopted by the port firms in response. In conclusion, I point out that the principal response redistributed power in the commodity chain and, in so doing, changed the way knowledge was reticulated. In the process, cooperation, as G. B. Richardson describes it, steadily turned into direction, until increasingly integrated networks of the late nineteenth century emerged as vertically organized firms by the twentieth.9

Forging the Chain

The port trade's chain stretched primarily from the north of Portugal to Britain. Its politics had more dimensions, however, as they involved Anglo-French as much as Anglo-Portuguese relations. Indeed, the port trade flourished between the period of Anglo-French tensions at the end of the seventeenth century and the Anglo-French rapprochement toward the end of the nineteenth.

8 James E. Rauch, ''Networks versus Markets in International Trade,'' Journal of International Economics 48 (1999): 7-35.

9 G. B. Richardson, ''The Organisation of Industry,'' The Economic Journal 82 (1972): 883-96.


From the earlier date, repeated embargoes of French wines opened the British market to other wines, and the Anglo-Portuguese Methuen Commercial Treaty of 1703 cemented Portugal's advantage by guaranteeing that Britain would place lower duties on Portuguese than on French wines. Lisbon, to the south, heavily favored Portuguese colonial trade, so northern Portuguese harbors (principally Porto) and northern vineyards (principally in the Douro River valley) met the British demand for Portuguese wine in the early eighteenth century. Port wine became familiar and cheap, and, because it was not French, drinking it was regarded as a patriotic act. Over the course of the eighteenth century, port rose to dominate both the Portuguese economy and the British wine market. In 1798, port wine alone accounted for almost 26 percent of Portugal's trade; a couple of years later, Portuguese wine (which was mostly port) accounted for almost 76 percent of British wine imports.10

Port's fortunes, which rose during the eighteenth century, declined in the nineteenth, as first Spanish and then French wines surpassed Portuguese wine imports. French dominance was guaranteed with the signing of the Cobden-Chevalier Treaty and the enactment of Gladstone's budget of 1860. The latter effectively reversed the Methuen advantage by assessing higher taxes on wines fortified with brandy, such as port, than on unfortified wines, such as claret, burgundy, and champagne. Two other factors heavily influenced—and were in turn influenced by—the shape of this chain: regulation and the nature of the commodity.

Regulation. A decline in port's reputation in Britain in the 1750s inevitably produced a crisis in northern Portugal, which had depended heavily on the wine trade, prompting the mercantilist Portuguese state to intervene. In the following decade, the government set up a monopoly company, the Companhia Geral da Agricultura das Vinhas do Alto Douro, to oversee production. The Companhia demarcated the region in which port could be grown. It judged each year's crop, deciding how much and whose wine could be exported. It set the price at which the exporters could buy the wine, dictated the terms of payment, and supervised transportation downriver and warehousing in Porto. Though focused on viticulture, the Companhia, with the help of the Porto customs house, also exerted tight control over shipments and, through its sales activities, even over the British market.11

10 Conceição Andrade Martins, A Memória do Vinho do Porto (Lisbon, 1990), quadro 74; T. G. Shaw, Wine, the Vine, and the Cellar, 2nd ed. (London, 1864), table 10.

11 For the fall in reputation and the recriminations that followed, see English Factory, Novas Instrucoens da Feitoria Ingleza a Respeito dos Vinhos do Douro, Setembro de 1754 (Lisbon, 1754). The Companhia was founded in 1756. See Instituição da Companhia Geral da Agricultura das Vinhas do Alto Douro (Lisbon, 1756). The port region was demarcated between 1758 and 1761. Through its control of exports, the Companhia also had a near monopoly over port sales in Brazil, but the Brazilian market lies outside the scope of this paper.


Draconian and capricious, the Companhia's heavy, invasive hand profoundly shaped the trade until its dissolution in the aftermath of the Portuguese civil war, which lasted from 1828 to 1834.12 Nevertheless, from the perspective of the trade (though not of those who were executed or exiled for transgressing), it was successful. Exports rose fairly steadily up to Napoleon's ''continental blockade,'' after which they began a long decline. Although the volume of exports recovered somewhat in the second half of the nineteenth century, the high point of the Companhia's achievement in the last decade of the eighteenth century was not equaled until the twentieth.13

Peculiarities of Port. Wine not only was subject to the stern hand of regulation but also was vulnerable to the whims of consumer fashions, which may, on the one hand, demand consistent quality while, on the other, reflect changing notions of what constitutes quality. Thus, producers faced a moving target, equipped, moreover, with a highly unpredictable weapon. As Adam Smith noted, wine is an awkward commodity. The same grapes grown in different regions—or even different parts of the same vineyard—will produce wines that are quite distinct. Moreover, the same vines will produce different wines from year to year, depending on unpredictable variables like the weather. Equally, once made, wine continues to change. Consequently, filling repeat orders for the ''same'' wine, or one of ''similar quality,'' is a Heraclitan, if not a Herculean, task.14

12 The Companhia was resurrected in response to another crisis of the late 1830s but was never quite the force it had been in the late eighteenth century.

13 In 1801, 78,606 pipes of port were exported, a figure not surpassed until 1918, when, in a postwar boom, 82,914 pipes were exported. See Martins, Memória, quadro 66. A pipe of wine contained 128 imperial gallons.

14 Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (New York, 1937; first published 1776), 156. Despite the variability, customers asked for ''similar style,'' ''same as last,'' and even ''identically the same as before''—stipulations which, one letter writer protests, ''we cannot after this length of time answer for.'' Sandeman & Co. to Sandeman (London), 10 Aug. 1824, letterbook, HoS. For the challenge of quality in wine, see Alessandro Stanziani, ''La Construction de la Qualité du Vin, 1880-1914,'' in La Qualité des Produits en France (XVIIIe-XXe Siècles), ed. Alessandro Stanziani (Paris, 2003).


[Illustration: Quinta das Bellas. This quinta, or farm, is near Peso da Regoa, the principal town in the port wine region. When Vizetelly visited, it was owned by Antonio da Costa, the Sandeman comissário in the Alto Douro.15]

Wines produced in extremely hot regions like the Douro can be particularly temperamental. Grapes grown in such conditions develop a high sugar content, making the wine unstable and likely, if provoked, to turn into vinegar. The addition of brandy—the process of ''fortification''—helped stabilize the wine for transport, it was claimed.16

15 The four illustrations for this article come from the archives of the House of Sandeman, Vila Nova de Gaia, Portugal, and are used with the permission of George Sandeman. They are from the original illustrations made for the book by Henry Vizetelly, Facts about Port and Madeira with Notices of the Wines Vintaged around Lisbon, and the Wines of Tenerife (London, 1880). The titles of the illustrations come from Vizetelly's book. The artist is Ernest Vizetelly, Henry's son (see Ernest Vizetelly, In Seven Lands: Germany, Austria, Hungary, Bohemia, Spain, Portugal, Italy [London, 1916], although the illustrations have on occasion been attributed to William Pratter). I am obliged to Ligia Marques, public relations manager of Sogrape Vinhos SA, for providing me with copies.

16 Nineteenth-century oenologists made the claim that brandy stabilized the wine; nineteenth-century port traders contested it. See, for example, J. L. W. Thudichum and August Dupré, A Treatise on the Origin, Nature, and Varieties of Wine: Being a Complete Manual of Viticulture and Oenology (London, 1872); Joseph James Forrester, Observations on the Attempts Lately Made to Reform the Abuses Practised in Portugal, in the Making and Treatment of Port Wine (Edinburgh, 1845).


(The convention of fortifying port with large amounts of brandy probably began in the first quarter of the eighteenth century, although the exact date is unknown. Certainly, the practice was well established by the middle of that century.)17

Brandy may have simplified transportation, but it also added complications. It takes time for the brandy to be absorbed by the wine, and until that point the wine can be harsh and crude. Thus, the addition of brandy, on the one hand, made it harder to sell young wines, thereby raising inventories and increasing the cost of doing business. On the other hand, by enabling transportation, brandy allowed links at the top of the chain to pass the wine down the chain to exporters in Porto and importers in Britain. In so doing, the upstream links were also passing to different points in the chain the considerable costs and risks of care and storage, as the wine, though ready to move, was not ready to market. Fortification further complicated negotiations between links, because it took skilled palates to assess the potential of young wines. The following note from a representative of Offley & Co. in the late eighteenth century, in response to a retailer's complaint about wine, captures the problem encountered by both distributors and retailers:

It is almost needless for us to observe to you that port wines owing to the increased demand for these two or three years past have come over newer than formerly and of course it required longer keeping. The wines you mentioned could only at present be fit to put in the bottle not to use. We therefore persuade ourselves that the judgement found was premature… We therefore request of you to suspend your opinion until they have had more time to mature.18


17 Small amounts of brandy had been added to the wine since the sixteenth century. See Silvestre Gomes de Morais, Agricultura das Vinhas (Lisbon, 1818; first published 1723), 149. (I am grateful to Gaspar Martins Pereira for this reference.) In a note to the Duke of Newcastle in 1725, an agent in Lisbon reported the addition of significant amounts of brandy to white port, as if the practice was new enough to be unfamiliar to the English. SP 389/31 105, National Archives, Kew, England. By 1754, however, the practice of adding brandy during vinification was well established. In a celebrated exchange of letters, exporters and their agents in the Douro argued over how much brandy to add. That brandy would be added was taken as given. See English Factory, Novas Instrucoens.

18 Offley & Co. to Thomas Harridge, 6 May 1793, letterbook of Offley & Co., HoS. The question of how long to wait could lead to complex legal cases about implied warrants. See the case of Rumens v. Offley reported in the Times, 2 Apr. 1850, 8e.


The changing character of the wine and the knowledge required to assess future quality and set current prices made negotiations hard for honest merchants with inexperienced clients. Equally, these features made it easy for unscrupulous merchants to cheat unsuspecting clients. In both cases, the nature of the wine put a premium on both experience and knowledge in the chain.19

Looking at Links

The port commodity chain developed in response to a number of factors: the politics of the eighteenth century; the complexity of the product; the geographic, social, and cultural distance between producers and consumers; and the transformation effected by skill and time on the wine as it crossed that distance. As the journey from grape to glass became increasingly regulated and standardized, the chain it traveled also became standardized, taking, more or less, the following form. The wine was made in the upper Douro River valley by predominantly Portuguese winegrowers, or lavradores. In the spring, it changed hands at an annual wine fair and passed downriver to exporters in the entrepôts of Porto (which gave its name to the wine) and Vila Nova de Gaia, on the north and south banks of the Douro's Atlantic estuary. Here the exporters (predominantly British) stored and aged the wine for a couple of years before preparing it for export (by adding brandy and blending different lots) to their wholesale clients (predominantly in Britain). British importers—wine merchants, innkeepers, and hoteliers—would then sell port in barrels, bottles, or glasses to consumers, often after performing a little blending of their own. Each of these groups—lavradores, exporters, importers, and retailers—came to represent enduring links, chained to one another and to specific places—the Douro, Porto, and Britain—in the port commodity chain.

19 In noting that the commodity played a significant part in shaping the chain, this paper might seem to be reflecting themes made prominent by Actor Network Theory. There is little more than coincidence in this. Indeed, by pointing to the role of international diplomacy and state fiscal regulation in the chain’s development, the argument, pace some of that school, suggests that such networks cannot usefully be explained in terms of the microactors in the chain alone.


To understand the chain as a whole, it is necessary to look at each link in turn. (While examining them separately, as I will attempt to do in this section, we need to remember that the challenge for the chain was to articulate distinctly separate units of knowledge and skills and straighten them into a workable continuum and a viable product.) As we shall see, besides these canonical points and the tensions they exerted on one another, some less visible intermediaries played important stabilizing roles, bridging gaps in social, human, and financial capital between the points.

The Douro. The Douro wines that were shipped from Porto were initially the product of monasteries and absentee aristocratic landowners, whose vineyards clustered close to the cities in what was otherwise a rugged, roadless region of the upper Douro valley.20 As the market for port developed in Britain, viticulture spread, both geographically and demographically. Simultaneously the region, whose rocky land was inhospitable to most crops other than vines, became increasingly monocultural: the wine and the knowledge that came from making it developed in response to a single, highly distinct market. The effect was self-reinforcing: participants hung onto the port commodity chain ever more tightly; strong ties became stronger, and weak ones grew only weaker. To meet growing demand in the eighteenth century, lavradores were drawn to the Douro and more land was put under vines and brought within the demarcation. Between 1770 and 1821, the number of lavradores grew from just under two thousand to almost four thousand.21

20 Conceição Andrade Martins, ''Vinha, Vinho e Politica Vinicola em Portugal: Do Pombalismo à Regeneração'' (Ph.D. diss., University of Évora, 1998).

21 For 1770, Gaspar Martins Pereira gives the figure 1,977, Norman Bennett the similar but slightly lower figure of 1,970. See Gaspar Martins Pereira, ''Aspectos Sociais da Viticultura Duriense nos Fins do Século XVIII,'' Actas das Primeiras Jornadas do CENPA (Porto, 1986); Norman R. Bennett, ''The Golden Age of the Port Wine System, 1781-1807,'' International History Review 12 (1990): 221-48. The 1821 figure is from the ''Arrolomento Mestre do Distrito do Embarque'' for 1821, from the archives of the Companhia in possession of the Real Companhia Velha [hereafter RCV], Vila Nova de Gaia. The demarcated area, initially fixed in 1761, was extended in 1791; hence the larger figure reflects in part a larger catchment. Both figures exclude the more numerous wine growers who did not formally fall within the export region but whose wine often seeped in.


Few of them exported, however, and the Companhia, constantly suspicious of engrossing, forestalling, and futures contracting by exporters, worked hard to inhibit any kind of enduring relations between lavradores and the exporters. To this end, it created a spot market in the Douro. Before the annual fair, which took place in early spring, no one was allowed to buy or sell, and while the fair was going on the lavradores had to accept the first offer made at the set price, or taxa. Both buyers and sellers regularly maneuvered around these restrictions. In particular, powerful lavradores and those with high-quality wines (the two classes did not necessarily overlap) built long-term relations with particular exporters, made contracts in advance of the harvest, and received payments under the table that exceeded the taxa.22 Consequently, the chain developed enduring links to an elite, relatively small ''port-wine aristocracy,'' who sold all their wine annually on terms they could more or less dictate and remained relatively free from Companhia interference but tightly linked to exporters.23 The composition of this aristocracy was, however, not static.

One of the most enduring relations to emerge during this period was formed between Hunt & Co. and the Oratório of Porto, a religious order with extensive, high-quality vineyards. The Oratório, which could hardly describe wine as its central concern, sold almost exclusively to Hunt & Co. from the 1780s to the 1830s. By contrast, the longest-lasting connection established by Sandeman & Co. was with a secular viticulturalist, Braz Gonçalves Pereira, whose reputation did not rely on his extensive landholding, his political influence, or his social status, but rather depended on the quality of his wine and his commitment to the port trade. Between 1770 and 1814 the trade became increasingly the domain of such dedicated practitioners.

22 See Gaspar Martins Pereira, ''As Quintas do Oratório do Porto no Alto Douro,'' Revista de História Económica e Social 13 (1984): 13-50; wine accounts, Hunt & Co., AAF; ''Livro da Receita e Despeza desta Casa da Congregação do Oratório do Porto,'' Livro de Mosteiros, 2116, Arquivo Distrital do Porto; wine accounts and Livros de Lavradores, HoS.

23 Susan Cora Schneider, ''The General Company of the Cultivation of the Vine of the Upper Douro, 1756-1777: A Case Study of the Marquis of Pombal's Economic Reform'' (Ph.D. diss., University of Texas, 1970), quotation at 53.


[Illustration: The Quintilla Pass, Serra do Marão. This difficult pass stands between the coast and the Alto Douro. It not only kept the cool sea breezes out of the inland wine-growing area but also deterred many of the wine exporters from going to the region until a rail route was opened.]

Life for most lavradores outside this elite, whether sacred or secular, was significantly tougher. As the region's economy became more specialized, the population became increasingly dependent on the port-wine system, but its grasp on the chain itself was tenuous. Table 1 illustrates the transient nature of the relationships between lavradores and major exporters in different periods under the Companhia. So, for example, of the 313 lavradores who supplied Sandeman with wine from 1814 to 1832, 207, or 66 percent, did so in one year only. Only 6 percent supplied the firm for more than five years.24 In Hunt's case, of 268 lavradores, only 2 supplied that firm for more than five years.

24

324

Table 1.

PAUL DUGUID

Exporters and Their Suppliers in the Douro and the Number of Years the Latter Supplied the Former.

Exporter Period Total Suppliers Number supplying for One year only Two years Three years Four years Five years More than five

Offley

Hunt

Sandeman

1779-95a

1813-32

1808-32

1814-32

444

292

268

313

350 68 13 10 3 n.a.

183 50 20 8 8 23

228 25 7 4 2 2

207 54 21 8 4 19

Source: AAF: wine accounts of Hunt & Co. and Offley & Co. for the years indicated; HoS: wine accounts of Sandeman & Co. n.a. = not applicable. a For Offley & Co., it was not possible to get data for every year in the period 1779 to 1795. Consequently, the figures may underestimate the number of suppliers and the number of longterm relations between Offley and its suppliers in this period.

lavradores of the parish of Covas, one of the best wine-growing areas in the Douro, 74 of 148 lavradores were left with surplus wine, 15 were forced to sell to the Companhia (and so received no premium over the set taxa), and 5 sold no wine at all. Twenty-two percent of the wine produced found no buyer. The year 1821 was exceptional. Stocks were low, and the wine was particularly good. In 1816, when market conditions were poor, 138 lavradores offered wine, 108 were left with surplus, 34 had to sell to the Companhia, 29 sold no wine at all. Twenty-nine percent of the wine found no buyer.25 For the unattached suppliers, there were few alternative legal markets—or even illegal ones, as roads in Portugal developed very late and the river was closely controlled by the Companhia. Moreover, even had they been able to overcome these geographic obstacles, their wine priced itself out of most tablewine markets. Thus the predominantly small growers, whose numbers grew in good times and whose immiseration increased in bad, formed a ‘‘reserve army,’’ whose members might or might not be called up in any particular year (and whose overproduction could, from the exporters’ perspective,

25

Livros do Arrolomento, 1816 and 1821, RCV.

Networks and Knowledge

325

usefully encumber the Companhia).26 The strengthening chain demanded commitment but did not necessarily reward it. Many of the lavradores were more involuntarily welded than willingly wedded to the system. Broker Links. A reliably large pool of surplus wine provided little incentive for backward integration; consequently very few exporters were involved in production.27 The result was a division not only of labor but also of knowledge between the Douro and Porto. The British exporters—

Lagareiros reposing. Wine was made in large stone troughs known as lagares. Hence the people who trod the wine—mostly, but not exclusively, men—were known as lagareiros. The task was long and arduous, often continuing through the night. The lagareiros worked in shifts. The men in this picture, with wine stains on their legs, were probably resting between spells in a !agar.

26 As James Simpson (personal communication, 28 Apr. 2005), points out, the very smallest growers can be thought of as the very first link in the chain. They supplied either baskets of grapes (cestos de ouvas) or small amounts of wine (vinhos a bica), and their access to the chain was controlled by the lavradores and comissa´rios (see below). 27

For a similar situation, see Field’s account of the beef supply chain in the United States, in which meatpackers, forming something of a monopsony, had little incentive to integrate back into production. Gary Fields, Territories of Profit: Communications, Capitalist Development, and the Innovative Enterprises of G. W. Swift and Dell Computer (Stanford, 2004), 241n16.

326

PAUL DUGUID

expatriates, few of whom ever felt wholly at home in Portugal—seldom went to the Upper Douro, except to attend the wine fair. That event was too short and too hectic to allow serious assessment of the wines on offer and of the lavradores’ offerings. For the chain to work, buyers at the wine fair had to have access to local knowledge, so they could assess availability and reliability before making irrevocable purchases.28 Consequently, there was a place in the chain for intermediaries, which was filled by the comissa´rios, or ‘‘brokers.’’ These Douro-based Portuguese moved with agility between town and country, supervising the flow of wine down the Douro, the stream of cash back up, and the copious exchange of communications in both directions. They helped circumvent the strict spot market sought by the Companhia, but in so doing they sustained the same interest as the one pursued by the Companhia: the long-term survival of the port chain. Brokers are fairly commonplace in wine regions, yet the Douro brokers are distinct in interesting ways that reflect the linear chain in which they participated, the social relations of the Douro, and the competitive knowledge required to operate in the complex system of production that they represented.29 Where brokers usually work in networks of multiple suppliers and clients, often buying on speculation and selling when a suitable buyer can be found, Douro brokers tied themselves to the fortunes of one exporter and had to be pushed to look for new suppliers. Further, rather than buying on their own account, they bought on behalf of their particular exporter. For this, they received a commission on each pipe of wine, but as far as the firms’ accounts reveal, no salary. Annual purchases (and hence commissions) could fluctuate dramatically, so rewards for loyalty were uncertain.30 Nonetheless, comissa´rios seem to have been remarkably loyal. While there was a high turnover of suppliers for any particular firm, comissa´rios, who possessed important proprietary knowledge but remarkably little security, worked with particular exporters for

28 As Jones suggests in Merchants, international trading companies survived as much on their access to local knowledge as on their access to local commodities.
29 For brokers in other regions, see, for example, Thomas Brennan, Burgundy to Champagne: The Wine Trade in Early Modern France (Baltimore, 1997).
30 In the period 1808 to 1832, purchases by Hunt & Co. reached a high of 1,033 pipes (in 1814) and sank to a low of 78 (1817). Their broker's income would have fluctuated accordingly. Hunt & Co. wine accounts, AAF.


long periods. Some died while in office, and the job often passed between siblings or from father to son.31 The comissários' loyalty was important to the exporters, not only on account of these brokers' knowledge of the Douro but also owing to their familiarity with the workings of the firms. Firms carefully guarded what, when, where, and on what terms they bought their wines. Some of this information firms needed to keep from their rivals, and some—particularly illicit dealing—from the Companhia's prying eyes and aggressive punishments. After the harvest of 1787, for example, the Companhia conducted a major investigation into contracting and payments above the taxa. Hunt & Co. was clearly unsure where its comissários' loyalties lay and was worried that they might reveal the firm's excess payments. Having piously cautioned the comissários to have care for their souls when they gave testimony, the exporter was clearly relieved when the comissários incriminated the lavradores but kept quiet about the firm's activities.32 It was probably not only the ability to broker wine that tied comissários to exporters but also the entrée this connection gave them to broker power in the Douro. Comissários determined whose samples were tasted, who found a buyer for their wine, who was paid beyond the taxa, whose wine received

31 For example, Hunt & Co. worked with João Manoel Martins Teixeira from around 1780 until his death in 1835, at which point his son petitioned to take his place. Offley & Co. worked with José Jacinto Henriques da Silva Pereira from before 1780 to 1830. In 1819 he was joined by Lourenço Henrique da Silva Pereira, who may have been his son or grandson. When Sandeman & Co. set up in Porto after the Napoleonic wars, they acquired the services and knowledge of Carlos António Pereira da Silva, whose brother, another José Jacinto, had previously worked with the recently defunct firm of Bartholomew Casey (and was probably preceded by his father in that position). The two brothers worked for Sandeman & Co. until Carlos António's death in the 1830s. Finally, Quarles Harris & Co., another old port firm, worked with what were probably three generations of a family called Borges from before 1790 to after 1825. Data from Hunt & Co. and Offley & Co. wine accounts and letterbooks, AAF; Sandeman & Co. wine accounts and letterbooks, HoS; Governo Civil PRT, Registo de Privilégios, especially vols. 9, 10, 11, 18, and 21, Administração Central, Arquivo Distrital do Porto.
32 See Hunt & Co. to the comissários Manuel Ferreira Romano and José Manuel Martins Teixeira from Nov. 1787 to May 1788, Hunt & Co. letterbook, AAF. The specific injunction is from a letter to Romano, 4 Nov. 1787. Following the testimony, the firm's correspondence soon swelled with angry letters from the lavradores who had been turned in for seeking excess payments, but, as we have seen, the loyalty of most lavradores was relatively unimportant to the exporters.


preferential treatment on the voyage down the Douro, and who might be pursued by the Companhia. They were able to favor their neighbors (and thus assure the prosperity of their district) and to impoverish those they disliked. In a region of endemic surpluses, few contracts, and quick, steep drops from plenty to want, such a position, bridging diverse geographic and social locations, was undoubtedly powerful. This advantage was supported by the exporters, who had much more financial but far less social capital than the comissários. By delegating power to the brokers, the exporters gained access to essential local knowledge and connections that were denied to the firms of their rivals. These reciprocal interests and complementary activities helped forge less visible links in the chain between Douro and Porto while curbing the appetite of these key participants, the comissários, to develop alternative networks.33 From a distance, the supply end of the port commodity chain can assume the pastoral hue of market-driven relations between town and country, negotiated in an archetypal fair populated by buyers and sellers haggling over price. In fact, relations were far more complex. In some cases, strong bonds developed between town and country, giving exporters confident access to good wines and providing some lavradores with a lasting entrée to the lucrative British market. In many other cases, suppliers had to depend on luck and local patronage. In both circumstances, comissários played an easily overlooked yet critical part, undermining the purity of the market while increasing the viability of the chain. If the Companhia, with its abstract regulatory powers, helped forge the formal links in the chain around the spot market, the comissários, working their local knowledge outside, and to a certain degree despite, the formal regulation, provided stability.34 Porto. At the time of the Methuen Treaty, exports were primarily in the hands of supercargoes and factors, who traded in wine sporadically. As the trade became more lucrative, it inevitably became more organized. Firms

33 The two lavradores who lasted beyond five years with Hunt & Co. were the powerful Oratório (see note 21 above) and Hunt's comissário, José Manuel Martins Teixeira. The phrase "complementary activities" comes from Richardson and strikes me as more apt than Teece's more often used "complementary assets." Richardson, "The Organisation of Industry," 889; David J. Teece, "Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing, and Public Policy," Research Policy 15 (1986): 285-305.
34 The comissários were subject to formal regulation.


began to trade regularly in wine.35 Formal organization appeared in Porto, as in the Douro, with the creation of the Companhia. It divided Porto's exporters into three groups: portugueses (the Portuguese); estrangeiros (foreigners, but in this case all foreigners except the British); and ingleses (the English—though many in this category were actually Scottish or Irish). During the reign of the Companhia, even though portugueses predominated numerically and there were several estrangeiros, the ingleses effectively controlled the trade to Britain, drawing serious competition only from the Companhia itself.36 As Table 2 shows, in 1777, forty-two ingleses out of a total of seventy-three exporters controlled 77 percent of the exports (by volume). In 1786, a smaller group of twenty-three ingleses out of a larger total of eighty-nine exporters still managed to command 67 percent of exports. More significant, however, is that in both years, the ingleses held about 85 percent of the high-

Table 2. Number of Exporters by Nationality and Share of Port Exports to All Markets and to Britain, Selected Years in the Eighteenth Century.

                          1777                            1786                            1796
               No.   All Markets %   Britain %   No.   All Markets %   Britain %   No.   All Markets %   Britain %
Portugueses     24       22.45         13.05      57       31.47         13.28      34       29.63         29.52
Estrangeiros     7        0.64          0.67       9        1.07          1.34       8        6.26          2.82
Ingleses        42       76.91         86.28      23       67.44         85.38      21       64.11         65.09

Source: Arquivo da Torre do Tombo, Lisbon: Mesa do Consulado e Fragatas, Alfândega do Porto, Sahidas for the years indicated.

35 H. E. S. Fisher, The Portugal Trade: A Study of Anglo-Portuguese Commerce, 1700-1770 (London, 1971), 77-86. Among these would have been Daniel Defoe, who claimed to have taken 700 pipes to England in one year. See Daniel Defoe, Review of the State of the British Nation 1 (1704), 193, quoted in Frank Bastian, Defoe's Early Life (London, 1981), 91.
36 Export lists published by the Alfândega do Porto (customs house) for 1809-40 offer totals of 341 portugueses, 44 estrangeiros, and 96 ingleses. Deciding nationality from the names in the export lists is, inevitably, a highly subjective practice. Assumptions made here differ in some details from those used by the Companhia elsewhere.


value British market, while the Companhia dominated the remaining 15 percent. From the first to the second period, the cohort of ingleses might have been shrinking in size, but its members were growing in power, and a small oligarchy was forming. In 1796, twenty-one British still controlled 64 percent of exports. Their share of the British market had fallen to 65 percent, however, for Portuguese exporters had made remarkable inroads into Ireland and accounted for more than 53 percent of the port sold there.37 For the next fifteen years, Anglo-French wars and the continental blockade disrupted the steady trade.38 During that time, Portuguese exporters increased their share. But when normal trading resumed and the British firms returned to Porto in 1811, the ingleses slowly reasserted their power. As Table 3 reveals, with the return of the British the Portuguese share of the market steadily shrank and the British share rose, until by 1840 the ingleses

Table 3. Number of Exporters by Nationality and Share of Premium Wine Exported to All Markets, Selected Years, Nineteenth Century.a

                    1811               1820               1830               1840
               No.   Exports %    No.   Exports %    No.   Exports %    No.   Exports %
Portugueses     23     74.42       54     41.54       58     32.61       22     23.01
Estrangeiros     6     18.12        9     11.97       10     14.65        9     11.10
Ingleses         6      7.46       24     46.49       34     59.07       31     65.89

Source: Alfândega do Porto, "Vinho d'Embarque despachado na Alfândega do Porto," 1811, 1820, 1830; Folha Mercantil da Cidade do Porto (Jan. 1840), 191.
a Tables 2 and 3 differ slightly. Detailed data on who sent to what markets, used for Table 2, has not been compiled from export manifests for the nineteenth century. In this period, however, the annual figures published by the Alfândega separate premium export wine (embarque), which went primarily to Britain, from secondary wine, which went to other markets. The figures given in Table 3 indicate the share of the embarque exports that the British held. As almost all of this went to Britain, the figures for "Britain" in Table 2 and "exports" in Table 3 can be compared with reasonable confidence.

37 How the Portuguese managed to capture this large share of a growing market is not clear and needs further investigation. They did not succeed in holding on to it.
38 Jorge Borges de Macedo, O Bloqueio Continental: Economia e Guerra Peninsular (Lisbon, 1990).


had once again achieved the level of domination they held at the end of the previous century. Together, Tables 2 and 3 suggest that, during its reign, the Companhia acquiesced in, if it did not actually seek, an international division of labor that joined Portuguese knowledge of production with British knowledge of distribution and consumption, and the relationship continued even after the Companhia was abolished in 1834.

Changing Generations, Changing Strategies

In the years after the final retreat of the French from Portugal in 1811, British firms, some old and some new, reappeared in Porto. The more successful firms adopted the strategy of moving away from general, diversified trading networks and committing themselves to the port commodity chain. Four companies, three old and one new, illustrate this trend. The three old—Hunt & Co., Offley & Co., and another long-established trader, Warre & Co.—had been near the top of the list of exporters before the war. In 1796, for example, among sixty-three exporters, Offley & Co. was first, Hunt & Co. fourth, and Warre & Co. eighth. The new company, Sandeman & Co., began trading from the city in 1814. Table 4, showing the average number of pipes exported annually over ten-year periods by each of these firms, indicates the relative strengths of each before the blockade and afterward. Three different types of wine exporter are represented in this group. Warre & Co., an example of the first, had roots in Porto that extended back to the seventeenth century, but in 1823 a partner aptly described the firm as still a "general Merchant" principally involved in "the Portugal and Brazil

Table 4. Average Annual Exports (in Pipes of Wine) of the Four Leading Firms over Ten-Year Intervals, 1792-1840.

                               Annual Average Exports
Firm         1792-1801   1802-1811   1812-1821   1822-1831   1832-1840
Warre          2,423         638           0          89         370
Hunt           2,160         578         341         221         595
Offley         3,800       1,543         844       1,512       1,377
Sandeman        n.a.        n.a.         660       1,082       2,881

Source: Shaw, Wine, the Vine, table 13; annual lists of exports published by the Alfândega do Porto. n.a. = not applicable.


Trade."39 Nonetheless, it had played a dominant role in the port-wine trade in the eighteenth century (see Table 4). Indeed, one of Hunt's letters describes Warre & Co. as "one of the principal houses" of the trade.40 However, its business was sufficiently general that, in the difficult years between 1814 and 1832, Warre appears only twice in the export lists for port, and the amounts were paltry (see Table 4). Clearly, its networks were extensive and malleable enough that it could easily survive without the wine business. As a general merchant, it appears to have felt no great commitment to wine. When Warre tried to return to the trade, however, it was unable to regain its prewar position. The second type of trader handled both imports and exports, dealing in certain incoming goods on commission while exporting wine on its own account. Hunt & Co. received consignments of salted cod, a Portuguese staple, and exported wine. The relationship was fairly evenly balanced. In 1777, wine provided 2,829 milreis to Hunt's profit-and-loss account, while commission on fish provided 3,564 milreis.41 A couple of years later, these proportions were more or less reversed. Despite extensive dealings in wine, Hunt & Co.'s partners appear not to have thought of the firm as a wine house. Letters refer to "regular traders" and "wine exporters" as groups in which the writer did not include his own firm. It is consequently unsurprising to see that Hunt & Co., like Warre & Co., was able to draw down its wine exports and concentrate on its other business when it chose, which, as Table 4 suggests, it did during the uncertain 1820s. In so doing, it was not, perhaps, entirely its own master. Indeed, Hunt & Co. was tied into an extensive network of interlocking firms in England, Newfoundland, and Latin America that dealt in fishing, shipping, salt, and other goods besides wine. On behalf of this network, it had to deal with ruffled wine customers who felt that Hunt & Co. was favoring its partners' interests over their orders. Further, it had to fight to uphold its own interest in wine against the inclinations of the network. In 1821, in a notable departure from business

39 James Warre, The Past, Present & Probably the Future State of the Wine Trade: Proving that an Increase of Duty Caused a Decrease of Revenue (London, 1823), 1-2.
40 Hunt & Co. to J. Cutler, 26 Jan. 1788, Hunt & Co. letterbook, AAF.
41 The exchange rate fluctuated around 5s6d, or 66 pence, to the milreis, so four milreis (264 pence) came to a little more than one pound (240 pence). See Fisher, The Portugal Trade, appendix 6. All the Porto houses dealt in bills of exchange to some degree, but this was necessary to deal with fluctuations in exchange rates and shortages in coin and gold, as well as the demands of long-term credit. It can hardly be thought of as diversification.


decorum, a junior partner in the London house dismissed the profits the house made from the wine business (which it handled on commission) as "piddling" and barely able to cover an "extra clerk's salary."42 The small rewards the network derived from wine were then measured against the major value that the fish business contributed. Following this exchange, Hunt & Co. more or less put the wine business on hold for a few years, living off what the network deemed profitable and barely exporting 100 pipes a year, where once it had regularly shipped 2,000. Like Warre, when it tried to return, Hunt & Co. found the market less favorable to its networked strategy. Offley & Co., the dominant British exporter in the late eighteenth century, also dealt in imports as well as wine exports. In line with the Methuen Treaty, it had long accepted consignments of wool and exported wine. As the wool market declined, the company turned to cotton goods. Though these imports came on commission, they could involve a trader in cumbersome obligations. Not only did Offley & Co. have to pursue bad debts and deal with innumerable Portuguese complaints about shoddy goods; it also had to wrestle with the customs house over goods held on bond. At the end of the siege of Porto during the civil war, when Offley had to attend fully to the newly reopened wine supply line to England, its limited personnel were forced to deal with protracted and distracting claims for cotton goods damaged in a bombardment. Furthermore, like Hunt & Co., Offley & Co. owned shares in ships, which occasionally prejudiced its wine business, as customers complained that their wine was traveling on slower or more expensive ships to favor Offley's shipping interests. Nevertheless, Offley & Co. was much closer than Hunt & Co. to being a dedicated wine trader. In 1779, for example, the wine account delivered 55,246 milreis to profit and loss, and the commission account merely showed 2,555 milreis. For the period from 1779 to 1807, commission paid only 37,613 milreis to profit and loss, whereas wine paid 1,525,790 milreis.43 There are undoubtedly several reasons for the success of Offley & Co. in the port trade, but its relative specialization, compared to competitors like Warre, is certainly one. Indeed, Offley dominated the trade initially until its

42 Letter from Newman & Co., 13 Mar. 1821, incoming letters, Hunt & Co., AAF.
43 Profit-and-loss account, ledger, 1777, Hunt & Co.; profit-and-loss account, ledger, 1779, Offley & Co., AAF.


commitment to the trade wavered and it diversified more; subsequently it was challenged by a firm that was more committed to the wine commodity chain. Its wavering can be seen in the changing proportions of commission to wine in the post-Napoleonic War period. Between 1812 and 1840, the proportion rose from the prewar 2.5 percent to almost 9.5 percent, indicating that Offley & Co. was more involved in importing than it had been at the height of its power. By 1832, Offley had lost its position as the largest private exporter, to be replaced by a third kind of trader, which had little interest in extended networks and no distractions from imports. Sandeman & Co. is a good example not only of this type of dedicated trader but also of the new generation of firms that came to Porto in the nineteenth century, when the chain was changing in the wake of the Napoleonic Wars. After opening for business in Porto in 1814, Sandeman & Co. dealt almost exclusively in wine and stayed away from imports. This commitment seems to have played a significant part in its success. Within twenty years of arriving in Porto, Sandeman & Co. had passed Offley & Co. to become the largest private exporter, a position it held through most of the nineteenth century. By the second half of the century, Sandeman & Co. was joined at the top of the list of exporters by another similarly dedicated firm, Cockburn & Co., which had begun operations in Porto a year after Sandeman, in 1815. By 1851, Cockburn & Co. was second on the export list to Sandeman's first. For the rest of the century, these two generally held first and second place and were responsible for between 15 percent and 20 percent of exports. Unlike the dominant eighteenth-century exporters, these firms devoted themselves to the port trade rather than to general or diversified trade. They mostly refused to be distracted by larger networks that could be spun out from Porto, or even by the relatively light demands of the import trade. As will become clear, they were instead linked to equally dedicated agents in Britain, who were able to order extensive amounts of wine, and suppliers in Porto and the Douro, who were capable of providing the volume to meet these orders.44 This development suggests that increasing commitment to the chain was rewarded by increasing success, whereas the diversified network practices that had dominated in the past were not.

44 Whereas in the late eighteenth century, suppliers of Hunt & Co. provided 16 pipes of wine on average—and the median amount was probably much smaller—in 1834, Sandeman & Co. bought 5,000 pipes from a single supplier, Ferreira & Co. (see below). Hunt & Co. wine accounts, AAF; Sandeman & Co. wine accounts, HoS.


City Brokers. Apart from the Companhia and other than during the period of the continental blockade, there were notably few Portuguese among the leading exporters. Ferreira & Co., one of the best-funded Portuguese houses and a major producer, tried to set up its own agent in England after the Napoleonic Wars to boost its exports but withdrew from London within five years and concentrated on other points in the chain.45 During its reign, the Companhia itself was a central obstacle to Portuguese endeavors.46 H. E. S. Fisher, the historian of Anglo-Portuguese trade in the eighteenth century, gives another explanation, suggesting that the Portuguese simply lacked capital.47 This claim might be true of the period of Fisher's study (1700-70), but it is a less satisfactory explanation for the situation in the early nineteenth century. Even the ingleses, who had used capital to shore up their oligarchy in the earlier period, got by on much less in the latter. In 1790, Offley & Co. was working on capital of about 424,000 milreis (c. £113,000) provided by its partners. In 1815, it had only 50,000 milreis. Moreover, although this fact may only be gauged indirectly, several Portuguese merchants seem to have had capital. In 1827 and 1828, Offley & Co. surveyed the stocks of premium wine held in the entrepôt.48 In these lists, only half of the top dozen were ingleses. Moreover, four of the portugueses on the list in 1827 were replaced by four others in 1828. Entries for one year alone suggest that some Portuguese had sufficient capital for stock holding, the capital-intensive part of the trade, while entries for the two years combined suggest that several Portuguese firms had a sufficient supply of stocks to be counted among the top ten. This assumption gets support from testimony before a parliamentary committee in the 1850s, in which a partner in a British firm reported that individual Portuguese stockholders had between 10,000 and 15,000 pipes of wine on hand for ready supply.49 Indeed, British firms entering the trade relied both on these

45 Gonçalves Guimarães, Um Português em Londres (Vila Nova de Gaia, 1988).
46 See Paul Duguid and Teresa da Silva Lopes, "Divide and Rule: Regulation and Response in the Port Wine Trade, 1812-1840," in European Yearbook of Business History, ed. Terence Gourvish (Aldershot, 2000). Portugueses suffered under other government regulations. In particular, it was difficult for them to find shipping insurance in the controlled Portuguese market. British companies insured themselves in London, and some even sold insurance in Portugal, often at cost.
47 Fisher, The Portugal Trade, 79.
48 Offley & Co. to Offley Brothers, Forrester, 7 Feb. 1827 and 22 Jan. 1828, Offley & Co. letterbooks, AAF.
49 Shaw, Wine, the Vine, 174, reporting the testimony of a partner of Martinez, Gassiot & Co.


stocks and on the long-term credit that often went with them to overcome the limits of their own meager capital. In sum, it seems that the Portuguese did not lack capital so much as outlets for capital—links in an established chain—that were available to their British counterparts. Principally, the Portuguese faced considerable difficulties at the consumption end of the trade, in Britain. Navigation acts and the imposition of an extra duty put foreign importers at a financial disadvantage. Moreover, unlike the British in Portugal, the Portuguese in England had no special privileges.50 Furthermore, supplementary duties and fierce competition in London, as well as expanding regional markets, drew increasing amounts of wine toward provincial outposts in the nineteenth century. There, the disadvantages of foreignness can only have intensified the difficulties of competing in open markets or forging new chains. At the local level, "free-trade Britain" was not quite such an open market. Consequently, it is not surprising that, as we shall see, Portuguese merchants formed several explicit or implicit joint ventures with the ingleses in order to reach the British market along established chains. This lock on the British part of the chain led to the formation of a cadre of Portuguese brokers in Porto. Unlike the Douro brokers, members of this group traded on their financial capital as much as their local knowledge: it is they who are represented in the lists of stock holding mentioned above. Again, unlike the Douro brokers, the city brokers traded on their own account. However, they bought not to export but to sell to exporters whose stocks were running low. By providing a reserve supply of wine to cushion the trade, such brokering overcame the difficulties for the British of the Companhia's spot market, the limitations of British capital, and the scarcity of outlets available to the Portuguese.51 By selling wine that had been aged and was ready for drinking, these brokers enabled exporters to overcome their inexperience in judging the potential of young wines. The brokers also provided another means to discipline both the lavradores and the

50 The English had been granted privileges since Cromwell's time. H. V. Livermore, "The Privileges of an Englishman in the Kingdoms of Portugal," Atlante 2 (1954): 55-77. For the difficulties of Portuguese agents, see Alfredo Ayres de Gouvea, "Apontamentos sobre a Família de João Allen (1698-1948)," Boletim Cultural, Câmara Municipal do Porto 21 (1958): 390-532; and 22 (1959): 235-320. (I am grateful to an anonymous reviewer for this reference.)
51 Several of these brokers registered as exporters to gain the preferential access granted to exporters at the wine fair, but then only exported nominal amounts of wine—enough to satisfy the registration requirement.


comissários of the Douro. In 1825, for example, when the wine on offer at the fair proved unsuitable, the exporters bought little in the Douro but returned to Porto, where, one merchant reported, eight thousand pipes changed hands.52 For many Portuguese merchants, then, it was no doubt wiser to work in the lucrative secondary market in Porto than to set up as a primary exporter. They took upon themselves much of the risk and skill of aging wine—and they charged for it. These broker firms were owned by some of the principal Portuguese merchants of the day, and they grew immensely wealthy from this business. Like the brokers in the Douro, this group of people was not visible in conventional views of the chain leading from production to consumption, but their participation was critical to maintaining both the strength and suppleness of that chain.53 Again, like the Douro brokers, the city brokers challenged the Companhia's regulations and undermined its power. Nonetheless, they too played an essential part in holding the chain together. Like the Companhia, the ingleses viewed these brokers, who could drive up prices in the Douro, with ambivalence. They disdained them as "speculators," perhaps the ultimate insult from a steady trader.54 And yet the ingleses depended heavily on these brokers, just as they relied on the comissários, to overcome their own limitations.55 Britain. Like the Douro, the British part of the port supply line can look a little more like the frayed end of a cord than the final link in a chain. To

52 Offley & Co. to Offley Brothers Forrester, London, 29 Mar. 1825, Offley & Co. letterbook, AAF.
53 To some extent, these city brokers—more than their Douro counterparts—resemble Burt's entrepreneurs who exploit "structural holes" in the network. Burt's work suggests that brokers are relatively independent of the network and, by metaphorically rising above the chain, see and enact opportunities to transform it. Circumscribed by regulation in Portugal and informal (xenophobic) institutions in Britain, the city brokers were not so free and acted more to maintain than to transcend the chain. If, however, the brokers were not as free, the chain was not as rigid as Burt's work might suggest. See Ronald S. Burt, Structural Holes (Cambridge, Mass., 1992).
54 If we recall Smith's account of the "speculative trader," he was one who "exercises no one regular established or well-known branch of business. He is a corn merchant this year, a wine merchant the next" (Smith, Wealth of Nations, 114). Most Porto brokers were not "speculators" in this sense. They were, indeed, wine merchants from one year to the next. The ingleses were evidently using the term primarily as an insult.
55 The Companhia discouraged the Porto brokers, probably because it too sold wine to understocked ingleses.


transfer liability for the heavy import duties directly to the buyers, exporters in Porto sent most British-bound wine directly to the innumerable wine merchants and retail outlets spread throughout the country, rather than to corresponding houses. Toward the end of the eighteenth century, Offley & Co. sent its annual circulars to about 250 names, which is probably a reasonable approximation of its number of outlets.56 Though these links were diffuse, they were not formless. Both internal and external forces were shaping them. From the beginning of port's reign, outlets were increasingly tied to the port commodity chain by its growing popularity among consumers. Toward the end, by contrast, the chain had to fight to hold onto outlets that were starting to stray. Since the wine was not sent to them directly, the corresponding houses may hardly seem to be critical to the commodity chain. Yet, like the Douro brokers, they were essential, principally as communication gateways. Working as commission agents, they gathered orders from the retail outlets, which they relayed on to Portugal. Through these correspondents and their agents around the country, the exporters received a constant flow of communication, both about the needs of particular clients and about the state of the market as a whole, as well as insights on new taxes and other legislative actions that could affect the chain. (They also formed an occasionally formidable political lobby.) Some of these corresponding houses also worked as wholesale and retail wine merchants themselves, taking wine from Porto for their own accounts as well as ordering it for their clients. Here again we see a significant difference between the firms that rose and fell in the trade in the early nineteenth century. The principal corresponding partners of Hunt & Co. were not themselves in the wine business and so were less committed to the chain. Hence, as already noted, Hunt, Newman, & Christopher, the London correspondent of Hunt & Co., could drop the wine business with indifference. It seems reasonable to assume, furthermore, that the knowledge of the trade that these correspondents conveyed to Porto was limited by the extent of their engagement in the trade. By contrast, the correspondents of both Sandeman & Co. and Cockburn & Co. were involved in wine. Indeed, both these Porto houses had been set up by firms that were already established in the wine business in Britain and had sought a close connection in Portugal.

56 Records of these scattered outlets have been hard to find, and what follows comes primarily from the records of those who shipped wine to them or from alternative sources, such as newspapers, advertisements, and novels.


The London house of Sandeman & Co., whose senior partner established the Porto house on his own, is a useful example of these tightly tied corresponding houses.57 Established in 1790, Sandeman (London) at first looked rather like a general trader, albeit one with a strong interest in wine. It also dealt in cotton from Brazil, and it wrote shipping insurance. Its wine income came from two sources. Initially, it solicited orders for wine around the country, at first on behalf of Offley & Co. and Warre & Co. For this work it charged commission. Over time, the firm also began to order wine for its own account and to establish itself as a London wine merchant.58 In 1805, Sandeman (London) entered a corresponding relationship with a Portuguese merchant, Thomas da Rocha Pinto. The alliance was a fortunate one: within a year, all the ingleses had left Porto, and Portuguese alliances proved invaluable. Like Sandeman (London), Rocha Pinto offers another example of a firm bringing its interests into alignment with the port commodity chain. Hence, it is perhaps not surprising that the two firms came to a mutually beneficial arrangement. Rocha Pinto had been a general trader from Porto since at least 1777. The firm exported linen, cotton, hams, sumac, and other miscellaneous items to Brazil. In the 1790s, perhaps to take advantage of Portugal's status as a neutral party during Britain's war with France, which disrupted British shipping, Rocha Pinto turned to the English trade and started dealing in wines. In 1805, with modest exports of 354 pipes of wine, it was the seventh-ranking Portuguese exporter by volume. (Offley & Co. in the same year exported 4,280 pipes.) In alliance with Sandeman (London), however, its fortunes changed dramatically, and by 1809 it had risen to third place in the list of all exporters.59

57 Surviving records of Sandeman (London) are in the Guildhall Library, mss. 8642-8652. These, along with HoS records in Vila Nova de Gaia, provide evidence for the claims made here. The London house was, for most of the period under discussion, a partnership called Sandeman, Gooden, and Forster. George Sandeman, the founder and senior partner, was initially sole owner of the Porto house, though he divided the ownership between himself and his nephews in 1835. See Norman R. Bennett, "Port Wine Merchants: Sandeman in Porto, 1813-1831," Journal of European Economic History 24 (Fall 1995): 239-69.
58 Offley & Co. (London) to George Sandeman & Co., 22 Feb. 1793; Offley & Co. (London) to Sandeman & Archer, 10 Oct. 1793, Offley & Co. letterbook, HoS.
59 Details of Thomas da Rocha Pinto come primarily from Sandeman's books and from the Arquivo da Torre do Tombo, Lisbon, Mesa do Consulado e Fragatas, Alfândega do Porto, which lists the consigner, contents, and destination of exports. Other details come from the records of Sandeman (London) noted above.


By 1809, Sandeman (London) was evidently benefiting sufficiently from its wine alliances, of which the port trade was the largest part, that it abandoned the network of Brazilian cotton dealers and the insurance business and started dealing almost exclusively in alcoholic beverages. It now worked with houses in Bordeaux (for claret and cognac), Jerez (for sherry), Lisbon (for lighter Portuguese wines), and Madeira and Porto for their eponymous wines. As part of its commitment to these particular commodities, it made several attempts to set up its own corresponding houses, with varying degrees of success. It established a house in Jerez but within a couple of years had to withdraw. It also established a partnership with the cognac firm Hennessy, but that did not endure either.60 In 1814, however, George Sandeman, founder of what had become the partnership Sandeman, Gooden, & Forster in London, cut out Rocha Pinto, which had struggled after the British firms returned to Porto. George Sandeman instead established his own firm in Porto, which was run by his nephew, who had been working for the London house for several years and had gained detailed knowledge of the British market. In alliance with Sandeman (London), the house in Porto rose to dominate its part of the chain. Illustrating the dangers as well as the rewards of commitment to the port commodity chain, Rocha Pinto, after almost fifty years of trading, was bankrupt by 1817. Clearly it was important not only to be in the trade but also to have a position and good connections in a particular chain. The Sandeman houses, by contrast, exemplify the virtues of interdependence within the chain. By 1832, Sandeman (London) had transformed itself from a general trader and agency house into a well-established merchant at the end of not one, but several, wine chains, each linking major producing regions to the international market of consumption. With port, it had eschewed the conventional networks of its rivals and had forged close links to the Douro, strong ties between London and Porto, and firm connections to the retail trade. In the process, it had risen with dramatic speed to overtake much longer-established but more network-diversified rivals.

60 Paul Butel and Alain Huetz de Lemps, Histoire de la Société et de la Famille Hennessy (1765-1990) (Cognac, 1999).


The Quinta dos Arregadas. At the time of Vizetelly's visit, this quinta, on the south bank of the river Douro, was rented by Sandeman & Co. as the headquarters from which the firm's representatives oversaw the vintage.

Controlling the Chain

By the end of the first third of the nineteenth century, then, port wine came to Britain along an established and relatively stable chain, increasingly dominated by dedicated traders of the sort described above.61 Part of the chain's stability came from treaties and fiscal and regulatory arrangements stretching back over a century and external to the chain itself. Another part came from the participants' growing commitment to the chain, which brought them greater returns but also increased their dependence on the chain. For the "general traders," loss of the chain would affect only a part of their business and their expertise. For the dedicated port traders,

61 While valuably historicizing the commodity chain, Hopkins and Wallerstein suggest that the level of vertical integration reflects a pattern of long economic waves, in which chains become more integrated during phases of expansion, and less so during contractions. As the period under discussion here is longer than the phases of the Kondratieff cycle that they had in mind, this does not seem a particularly apposite explanation.


however, loss of the chain would threaten their financial capital but, more significantly, would cause them to lose their less liquid, long-term investments in social and human capital: the hard-won social relations and painfully acquired specialized knowledge embedded in the practices of this particular trade. Consequently, if the external supports were to slip away, the committed members would have a strong incentive to maintain the chain by other means. And indeed the favorable external arrangements did fall away in the nineteenth century. The Anglo-Portuguese Treaty of 1810 and Brazil's declaration of independence in 1822 made Portugal commercially less important to Britain. Cessation of Anglo-French hostilities after 1815 reduced Portugal's strategic importance. The triumph of liberalism in the Portuguese civil war led to the dissolution of the Companhia in 1834.62 Finally, with the Cobden-Chevalier Treaty and Gladstone's budget of 1860, the long-cosseted trade suffered what one perceptive journalist called its ultimate "disestablishment."63 Traders working along this chain also faced a consumer crisis. As port lost its status as a political and patriotic symbol and came to represent unimaginative tradition, a new generation of sophisticated consumers was becoming familiar with the premium wines of France.64 At the same time, the expanding British wine market resulted in naive drinkers being led astray by aptly named "sophisticated wines"—wines adulterated and falsified in one way or another. Falsification and fabrication are inherent to the wine trade. Indeed, the Oxford English Dictionary's first citation for port is an account of Bordeaux wine that was taken to England via Porto in 1692 in order that it might be entered on the ship's manifest as port.65 The Companhia tried, with limited success, to ensure that only wine grown in the Douro (or wines close enough in character that they could masquerade as such) could leave Portugal as port, while, given the wine's preferential duty status, customs officers in England tried to control what could enter under that name. As

62 As note 12 indicates, the Portuguese government tried to revive the Companhia, but the old regulatory powers that had been so effective were never fully restored.
63 Matthew Freke Turner, "Wine and Wine Merchants," New Quarterly Magazine (1874): 598.

64 For the attitudes of young men to port wine, see the distaste of Disraeli's Vivian Grey and Trollope's George Vavasor. Benjamin Disraeli, Vivian Grey (London, 1827); Anthony Trollope, Can You Forgive Her? (Oxford, 1991; first published 1864). See also the fierce animosity of Cyrus Redding, A History and Description of Modern Wines, with Considerable Additions and a New Preface Developing the System of the Port Wine Trade (London, 1836).
65 Oxford English Dictionary, "port" (7a).


controls fell away and duties were equalized, it became easier to falsify and fabricate this premium wine. From the south coast of France to back alleys around the London docks, all sorts of concoctions were labeled "port," not only by people outside the chain but also by people within it, causing its reputation inevitably to fall. For the trade to survive, some means had to be found to discipline the attenuated chain stretching from the Douro to Britain, and with the loss of external supports, those means had to come from within. As they emerged, they revealed previously hidden tensions in the chain. Participants at the different canonical points struggled over who could best guarantee quality to consumers within their own particular chain. As quality was signaled principally through names and the reputation that accrued to them, the struggle became increasingly about trade names. Traditionally wine had been identified in the English market by its country or region of origin—Shakespeare's "Canary," or (more indirectly) "champagne," "claret," and "sack."66 If it carried another name, it was the name of the wine merchant or retailer who provided the wine.67 Wine merchants were thus theoretically in a good position to stamp their names on the faltering port trade. Only theoretically, for wine merchants were aptly described in a letter to Hunt & Co. in 1815 as "the most rotten set in London." This notoriety was not a particularly new aspect of the trade. Half a century before, wine merchants were, as Henry Fielding's Joseph Andrews suggests, a byword for corrupt practice.68 The letter also noted that

66 In 1653, for example, Izaak Walton talks generically of drinking "French-wine." Izaak Walton and Charles Cotton, The Compleat Angler (Oxford, 1983; first published 1653), 249. A decade later, however, Samuel Pepys names the wine of a particular dinner as "Ho Bryon" [Haut Brion]. Samuel Pepys, The Diary of Samuel Pepys (first published London, 1825), 10 Apr. 1663.
67 For example, in Smollett's Humphry Clinker, in a rare breach of his domestic self-sufficiency, Matthew Bramble gets his wine from "a correspondent on whose integrity I can depend." Thackeray shows his historical accuracy in Vanity Fair when George Osborne asks whether the claret is "Adamson's or Carbonell's," using, in this scene set before Waterloo, the retailer's name alone, as would have been typical at the time. By contrast, in Trollope's The Prime Minister, set some sixty years later, a character says of his champagne, "It came out of Madame Cliquot's cellars before the war, and I gave Sprott and Burlinghammer 110s. for it." The French négociant as well as the English wine merchant (and the price) are called on to endorse the quality of the wine. See Tobias Smollett, The Expedition of Humphry Clinker (London, 1985; first published 1771), 152; William Thackeray, Vanity Fair (Harmondsworth, Middlesex, 1968; first published 1848), 62; Anthony Trollope, The Prime Minister (Oxford, 1975; first published 1876), 103.
68 Henry Fielding, Joseph Andrews (London, 1962; first published 1742).


"nearly one fifth of the Bankrupts are either wine merchants or vintners."69 Again, this was not entirely new. Brooke and Hellier, celebrated by Richard Steele in the Spectator in 1711 for its probity as a wine merchant, was listed in the London Gazette in 1712 as a bankrupt.70 Changes in advertising practices for alcohol during the nineteenth century suggest that wine merchants, no doubt aware of their own fallibility, tried to find proxies to bolster their probity. After soliciting testimony from customers and scientists, they turned to their suppliers. Again, there is a precedent for this. Since the early eighteenth century, wine had been advertised as "neat as imported," to indicate that the importer had not blended the wine when it was received.71 Of course, blending could be legitimate, but the distinction between blending and adulteration was often a fine one.72 And the claim "neat as imported" suggests that customers preferred wine merchants to leave what they received alone, implying in turn that the consumer had more faith in the honesty of the exporter than in the probity of the importer. Such suspicions still obtained in the nineteenth century, and suspect wine merchants of that period understandably turned to reputable suppliers for validation. As one London wine merchant stated abjectly in testimony about the genuineness of his wine before a parliamentary committee, "We rely upon the respectability of the house that ships them [i.e., the firm's port-wine consignments]; we have no right to argue anything else."73 As their own reputation fell, merchants increasingly invoked these more reliable names in their advertisements. Before too long, even reputable merchants—those that served the carriage trade and were reluctant at first to advertise at all—started using the names of their suppliers.

69 T. H. Hunt, Maisonette, to Hunt & Co., 25 Oct. 1815, Incoming Letters, AAF.
70 For the celebration, see the Spectator 362, Friday, 25 Apr. 1712, 2; for the bankruptcy, see the London Gazette 5054, Tuesday, 23 Sept. to Saturday, 27 Sept. 1712, 4.
71 For example, in 1711, Defoe noted in his Review, "Infinite Frauds and Cheats of the Wine-Trade will be discover'd, and I hope for the future, prevented; for if once we can come to a usage of drinking our Wines neat as they come from the Country where they grow, all the vile Practices of Brewing and Mixing Wines, either by the Vintners or Merchants, will die of Course." A Review of the State of the British Nation 8 (18 Sept. 1711), 207.
72 Adding brandy to French wine was generally denounced as falsification; adding it to Portuguese wine, acclaimed as fortification.
73 Testimony of D. Hart, House of Commons, Minutes of Evidence Taken Before the Select Committee on Import Duties in Wine (London, 1852), 440.


Hedges & Butler, a venerable merchant, began to note, for example, that the port it sold was "Sandeman's shipping."74 Names from Porto and the Douro, previously known only to connoisseurs, began to make headway in the British market. Not only Sandeman, Offley, and Hunt, but also Seixas, Roriz, and Bom Retiro, famous quintas (vineyards) in the Douro, emerged in advertisements as signals of quality. If the strategy bolstered sales (and the trend in advertising suggests it did), it also ceded authority to earlier points in the chain. By highlighting their suppliers' names, the wine merchants were subordinating their own. With Sandeman's name as a warranty, consumers, rather than shopping at Hedges & Butler's, might now shop for Sandeman's, wherever it could be found. Rising from obscurity to prominence, these new names disturbed the established balance of power of the old chain and set link against link. As they saw their names subordinate the formerly dominant British retailers downstream, the exporters also discovered that the way port was made helped them to resist subordination by all but the most powerful producers upstream. Most port that reached Britain was a blend of the output from several lavradores. The process of blending led to the disappearance of the supplier's name while giving authority and distinction to the name of the exporter who did the blending.75 Sandeman's 1834 was distinct from Offley's 1834, because Sandeman had blended it. Those who liked Sandeman's blend could not get it from other houses or lavradores. The source of the wine in Sandeman's blend was generally unknown. Occasionally, however, Sandeman would find it expedient to acknowledge a supplier and to ship their wine without blending. In so doing, Sandeman was ceding power to those named lavradores.76 But Sandeman rarely ceded such power. Rather, it used the power ceded to it by wine merchants to become one of the most aggressive branders in the wine trade, fighting publicly against anyone who challenged not only the Sandeman brand

74 Times (London), 5 Nov. 1856, 14e.
75 Négociants in the champagne trade similarly overpowered individual vignerons; see Kolleen Guy, When Champagne Became French: Wine and the Making of a National Identity (Baltimore, 2003), 27.
76 For example, in 1821, when some pipes of wine were rejected, Sandeman & Co. defended itself by noting that these were "the genuine wines of Braz Gl'z." The reference is to Braz Gonçalvez Pereira, Sandeman's long-term supplier mentioned above. Sandeman & Co. to Sandeman, Gooden, and Forster, 2 Jan. 1821, Sandeman & Co. letterbook, HoS.


but also the integrity of the Porto marque.77 Against such power, only very well-established wine producers in the Douro or wine merchants in Britain could hold up their own name.78 In sum, as the institutions that helped construct the port-wine supply chain crumbled, actors in the chain itself—faced not only with their "disestablishment" but also with aggressive competition from, principally, champagne, burgundy, bordeaux, and sherry—struggled for their collective survival. Collective danger did not produce a cooperative response, however. Rather it revealed internal tensions over names and trademarks, which the courts in Britain, and eventually Parliament, were increasingly willing to protect.79 Brands became a means for one firm in the chain to subordinate, in a quasi-hierarchical fashion, others that it did not own. The rise to prominence of the Porto names represents a critical shift in signifying power, which had previously rested almost entirely with wine merchants. Gradually, this power was transferred down the chain to their historically more reliable suppliers, the exporters, who in turn subordinated their suppliers, the winemakers in the Douro. The geographic distribution here indicates how the power inherent in a trademark extended over geographic distances along commodity chains, allowing some to dictate terms to others over whom they had no formal control and from whom they were separated by geography. If you wanted to sell Bom Retiro or Sandeman port, the owner of Quinta Bom Retiro or Sandeman could tell you what you could do to it, how you should bottle it, what label you could use for it, and what you would pay.80 By the last quarter of the century, it was these increasingly powerful names and the names of the new branding outlets such as Victoria Wines, with which the retail trade finally struck back, that came to dominate new wine chains and extract most of the profit from them.

77 That both the Porto and the London house shared the name "Sandeman" clearly helped avoid a fight between the two. Where the names differed—the firm of Croft, for instance, was handled in Britain by the firm of Lucas, Gonne, & Gribble—the name of the Porto house usually became the brand.
78 See Paul Duguid, "Developing the Brand: The Case of Alcohol, 1800-1880," Enterprise & Society 4 (Sept. 2003): 405-41.
79 The Merchandise Marks Act was passed in 1862; equity and common-law courts had shown increasing willingness to acknowledge such a right from some forty years before.
80 The development over the century of bottles and labels, of advertisements and marketing campaigns, of crus and vintages, and other rituals of classification was all part of this struggle to accumulate—or to resist—power in wine commodity chains.


Conclusion: The End of the Port Commodity Chain

Major port traders saw their old, extended commodity chain collapse following the profound shift of power to those who had projected their trade names into the British market during the period described here. The effects of this relocation of power were far reaching. In the eighteenth and early nineteenth centuries, the lavradores made small batches of wine that port exporters blended for individual retailers. As they gained skill in branding and blending, exporters abandoned the practice of blending for individual retailers. Rather than divide the wine finely according to individual clients' tastes, they apportioned it into broad classifications to suit the volume of their business. In the process, the wine inevitably became more of a generic commodity than a carefully crafted, individualized product, as it had once been.81 A second effect followed from the first. The chain had to some extent been saved by the deployment of brands. But as brands also led to more standardized processes and products, the specific, highly local, divided knowledge, which the chain had served to link, became less essential to the trade. To produce a standardized product in high volume, exporters replaced local, specialized knowledge with more generic, manageable skills. The effects of this change were felt along the chain. In the aftermath of the Portuguese civil war, exporters began to lease properties in the Douro, driven in some part by the desire to grow grapes but motivated to a larger degree by the wish to make wine to their own specifications. Douro lavradores, once winemakers of distinction, were reduced to grape growers. Small farmers were increasingly marginalized, and, in the hard times of the 1850s and 1860s, many of their properties were absorbed into larger estates that produced grapes under contract for exporters and Porto wine brokers. In the process, comissários lost both their power and their independence. Their role was taken over by salaried managers of Douro operations, who were in turn managed from Porto.82 Soon after, houses in Porto began to suffer a similar loss of independence.

81 Because it was individually blended, port was not divided into types of wine as late as the 1850s. See testimony of Joseph James Forrester in House of Commons, Minutes of Evidence. Only by the 1870s do the modern classifications of "vintage" and "tawny" port appear.
82 In the 1830s, Offley & Co. took a lease on the Quinta da Boa Vista, one of the most famous vineyards in the Douro, and managed its operations from Porto. Rental accounts, Offley & Co., AAF. About the same time, Sandeman & Co. took over the lease of the Quinta das Figueiras.


Once relatively autonomous, they were integrated into London houses, which were able to centralize operations with the aid of the telegraph while they clung to and strategically wielded the Porto brand.83 Thus, the means that had been found to stabilize the threatened commodity chains of major export houses ultimately became the means that ended them. By concentrating in London power that was once distributed along the chain and simultaneously devaluing the local, specific knowledge that the chain had articulated, brands facilitated the vertical integration of the commodity chain in a period of intensifying competition.

Socially embedded, enduring trade networks provide an important alternative analytical tool to the neoclassicists' idealized, atomistic markets. But, in their turn, networks too have perhaps become overly idealized. In this essay, I have tried to show how the trade in port wine to Britain, once dominated by general traders who worked in broad networks, was transformed by dedicated specialists, whose success in the port business can almost be measured by the degree to which they extracted themselves from diverse trading networks.84 They focused, instead, on a set of activities ''clustered around one commodity or product, linking households, enterprises, and states to one another,'' a process that matches Gary Gereffi's definition of a ''commodity chain'' as a highly specialized kind of network.85 In the essay, I have tried to analyze why this more linear path rose to prominence and have suggested that the critical factors involved questions of quality—a central and contentious issue for the wine business—and the specialized knowledge required to produce quality in unstable, yet highly regulated, circumstances.86 The development of the port trade also highlights the question of power in commodity chains. Over time, different links sought either to dominate the chain or, at least, to prevent themselves from becoming dominated by others in the chain. Even the nature of the commodity—a fortified wine—reflects this struggle, but the struggle emerges most clearly in the nineteenth-century brand war that broke out along chains as much as between them, which, by concentrating power, marked the final stage in the progress through networks to chains and then to vertical integration.

83 This contribution by the telegraph to international trade was well documented by Harold Innis, The Bias of Communication (Toronto, 1951). One of the early symptoms in the port trade is probably the disappearance of family members at the head of Porto houses.

84 As this argument suggests, commodity chains may be more capable of transformation than some discussions of network ''reproduction'' indicate, and participants may be more committed to the survival of the networks than to stability. See Gordon Walker, Bruce Kogut, and Weijian Shan, ''Social Capital, Structural Holes, and the Formation of an Industry Network,'' Organization Science 8 (1997): 109-25.

85 Quoted in Dicken et al., ''Chains and Networks,'' 98.

86 Neoclassical economics, as Stigler pointed out, has difficulties with questions of quality. George J. Stigler, ''The Economics of Information,'' Journal of Political Economy 69 (1961): 213-25. It also has trouble with questions of knowledge. See Patrick Cohendet, Francis Kern, Babak Mehmanpazir, and Francis Munier, ''Knowledge Coordination, Competence Creation, and Integrated Networks in Globalized Firms,'' Cambridge Journal of Economics 23 (Mar. 1999): 225-41. For more on this, see my introduction to this issue.

TOWARDS A NETWORK PERSPECTIVE ON ORGANIZATIONAL DECLINE$

Brian Uzzi

ABSTRACT

Analysis of organizational decline has become central to the study of economy and society. Further advances in this area may falter, however, because the two major literatures on the topic remain unintegrated and because both lack a sophisticated account of how social structure and interdependencies among organizations affect decline. This paper develops a perspective which tries to overcome these problems. The perspective explains decline through an understanding of how social ties and resource dependencies among firms affect market structure and the resulting behavior of firms within it. Evidence is furnished that supports the assumptions of the perspective and provides a basis for specifying propositions about the effect of network structure on organizational survival. I conclude by discussing the perspective's implications for organizational theory and economic sociology.

Keywords: Organizational decline, organizational theory, network structure, firm behavior

$ This chapter is a reprint of the article ''Towards a Network Perspective on Organizational Decline'' published in the International Journal of Sociology and Social Policy, Volume 17, Issue 7/8 (1997).

Collaboration and Competition in Business Ecosystems
Advances in Strategic Management, Volume 30, 351-387
Copyright © 1997 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0742-3322/doi:10.1108/S0742-3322(2013)0000030014

The US economy has dramatically transformed over the last three decades, raising new questions about economic growth potential, labor market stratification, and industrial and political independence (Harrison, 1994). The most debated change has concerned the loss of manufacturing because of its central role in the development of the US economy and its key role in raising living standards. Before 1975 the loss of manufacturing, often termed deindustrialization, was a regional phenomenon. The loss of manufacturing in one region of the country was offset by the birth of new manufacturing plants and industries in other regions. After 1975, however, real decline in manufacturing became a national phenomenon, reaching its peak rate during the 1980 and 1982 recessions and continuing at a slower but steady rate of decline thereafter. By 1982, manufacturing jobs had dropped to just under 18 million from their all-time high of 21 million in 1979, breaking the prior record for plant failure and job loss set during the Great Depression (Romo and Schwartz, 1995). From an international perspective, these changes are not strictly American. Much research shows that other advanced nations, Great Britain in particular, have experienced similar changes and encountered similar economic and social consequences (Elbaum and Lazonick, 1986; Best, 1990). These changes are wrenching to local communities and particularly corrosive of the gains won by immigrants, women, and minorities, since these groups benefit most from a strong manufacturing sector even though they are the first to be let go during downsizings and closings. Even though some arguments suggest that the changes are a natural part of a necessary readjustment process whereby economic resources are repositioned to their most productive uses (Romo et al., 1989), there is nonetheless much reason for concern. New industries have not provided ample re-growth opportunities, nor have they reversed the ever-widening income gap between the well-paid and stably employed and the growing ranks of temporary workers who are restricted to unstable and low-paying positions (Harrison and Bluestone, 1988; Davis-Blake and Uzzi, 1993). There is also a growing recognition that the growth of new industries, particularly those marked to carry the US economy into the 21st century, is based on the presence of a strong


manufacturing base that can supply the inputs and absorb the outputs of new industries, and that without such a base, new industries will neither arise nor flourish. For example, the growth and success of the biotechnology industry in the US, Japan, and Germany has been tied to the strong presence of three support industries - pharmaceuticals, chemicals, and brewing - and the social and resource dependence networks that exist between and among organizations in these fields (Saxenian, 1994). Present explanations of organizational failure fall into two major views: the comparative cost perspective and the threat-rigidity perspective. The comparative cost perspective has been developed primarily by economists and population ecologists who study decline and tends to use traditional economic arguments that stress efficiency and rational calculation as determinants of decline1 (Lawrence, 1984; McKenzie, 1982, 1984; Hannan and Freeman, 1989). The threat-rigidity perspective has been developed mainly by organization theorists and social psychologists and relies predominantly on cognitive and decision-making theories to explain the processes by which organizational decline takes place and can be strategically managed (Sutton, 1990). Though these perspectives are formulated using different disciplinary logics and focus on somewhat different aspects of organizational decline, they tend to make similar assumptions about the role of social structure in the process and outcomes of failure: both theoretical conceptions overlook the fact that organizations are embedded in concrete, ongoing systems of social relations and resource dependencies that influence the process of organizational decline (Granovetter, 1985). For example, in the case of comparative cost theory, the action of firms and decision makers is assumed to be directed by rational, self-interested behavior that is only minimally affected by social ties and resource dependence relations. In the case of the threat-rigidity perspective, managers succumb to ''pathological and maladaptive'' dominant psychological responses that minimize the effect of social structure on action because microbehavioral decision-making processes unfold without attention to the decision maker's linkages

1 To date, there is no universally agreed upon definition of decline (see Sutton, 1990, p. 208), and definitions tend to vary according to what types of firms are under investigation, i.e. not-for-profit, manufacturing, or educational organizations. In this paper, decline refers to three organizational changes: workforce reductions, plant closings, or organizational death. This definition is consistent with most deindustrialization and organizational decline research, as well as the paper's focus on manufacturing firms.


to other actors or institutional connectedness (Staw, Sandelands and Dutton, 1981; Ocasio, 1995). Thus, in both cases, a sophisticated account of how social structure directs attention to particular stimuli, shapes actors' expectations and preferences, or channels resources between interdependent actors is given short shrift. In this paper, I argue that further understanding of organizational failure requires that these perspectives be synthesized to provide a sophisticated account of how social structure and interfirm resource dependence affect decline. In developing this argument I build upon the majority perspective among sociologists that argues that economic behavior is embedded in ongoing systems of social and resource exchange networks (Granovetter, 1985; Burt, 1992; Smelser and Swedberg, 1994). This perspective proposes that decline is best understood by focusing on the characteristics of relations among organizations and their social context (Romo and Schwartz, 1995; Uzzi, 1996). The objective is to show how an understanding of social structure can furnish a unique perspective on organizational decline while also resolving empirical anomalies from both comparative cost and threat-rigidity perspectives. I begin with a review of the literature on comparative cost and threat-rigidity perspectives, focusing on how the logic of each argument neglects a treatment of social structure and on the empirical anomalies that have been identified with each approach. I then describe the structural embeddedness framework and present data which support its assumptions and help resolve in part some of the anomalies of comparative cost and threat-rigidity perspectives. Finally, I formulate hypotheses that follow logically from the structural embeddedness approach to show how its predictions are unique, testable, and capable of directing future empirical study on organizational decline. I close with a discussion of the implications of the structural embeddedness approach for public policy, the sociology of organizations, and organization decline research.

Organizational Decline

The Comparative Cost Perspective

The study of organizational decline has long been a topic of economic research. While several theoretical perspectives have been put forward, most combine a focus on cost efficiency and product life-cycle dynamics (Romo


and Schwartz, 1995).2 The cost argument component of this perspective is the most frequently used decline explanation. Simply put, it argues that organizations survive or die based on their ability to cost-effectively compete with other organizations. The basic proposition is that those firms that most efficiently organize their internal structures and market relationships will have the lowest failure rates. ''[The] ... theory assumes that firms seek to maximize profits ... [and that] a firm will cease production if market price (average revenue) fails to cover average variable costs ... where competitive performance has been relatively poor, average profitability is likely to be low and a high closure rate is to be expected'' (Henderson, 1980: 153). On a microbehavioral decision-making level, this perspective embraces the neoclassical assumptions about human information processing and actor motivation, namely, that decision makers are rational, self-interested profit maximizers. Decision makers, in this view, whether individuals or firms, scan the environment, rank alternatives, and direct their investments into those activities with the highest expected value of return, where returns are most efficiently calculated by attention to price data, which provides all the information needed to make rational decisions (Simon, 1991). Consequently, successful firms are unlikely to form long-term attachments to exchange partners or a particular place, or to cultivate close personal ties, because long-term attachment suggests that the firm is suboptimally exploiting new low cost entrants into the industry. Also, social relationships supposedly offer no economic value over price data or are predicted to lower the price system's efficiency by causing actors to become more enamored with the relationship than with the economic imperatives of the transaction (Harrison, 1992: 476; Peterson and Rajan, 1994). In perhaps the best articulation of this view, McKenzie (1979, 1982, 1984) argued that the large scale decline of manufacturing in the Northeast could be attributed to the relatively high cost of wages and union rigidities, property and business taxes, infrastructure renewal and maintenance costs, utilities, and other regional factors of production in the Northeastern states.

2 A line of thought within economics which is not explored here, because of my focus on organizations and their interrelationships, is the purely macro argument of deindustrialization. In this view, decline is caused by the government's mismanagement of the money supply, tax and tariff policies, and exchange rates (Best, 1990).


As a result, the systematic closure of plants in the Northeast, and the attendant organizational births in Southern states, was attributed to the lower regional production and transaction costs in the South. In the 1980s, the shift of production to off-shore locations caused some factors of the comparative cost logic to be refuted, while at the same time focusing attention on labor costs as the primary driver of decline and managerial attention (Romo and Schwartz, 1995). This shift in thinking occurred because the infrastructure, utility, and tax costs in target relocations in Japan, Germany, and Italy were the same as or higher than those in the South. At the same time, these factors were increasingly recognized as accounting for only a small portion of the average overall cost structure of production within US regions. This meant that the only cost factor that was measurably different was labor costs, which became seen as the core factor behind decline decisions and processes (Romo and Schwartz, 1995). The idea of comparative costs as grounded fundamentally in labor costs was assimilated into product life-cycle arguments, which combine the assumption of comparative costs with the notion that industries follow hypothetical stages of growth, maturity, and decline over their ''product life-cycle.'' In the early stages, the high level of skill needed to develop new products means that firms must pay high wages to attract skilled workers, while the absence of strong competition in a new product's market enables firms to pay high wages without intense competitive pressure to lower wage costs. During the maturity stage, however, there is a shift from nonstandardized product development to standardized production. This change permits firms to shift from using costly high-skilled labor to cheap low-skilled labor. During this stage new entrants can also flood the market as products are easily copied and production is automated, raising labor cost containment pressures. Thus, mature firms, no longer dependent on costly skilled labor, should close old plants and relocate to locales that are replete with low-wage, semi- and unskilled labor (Norton, 1986). In this way, comparative cost arguments permitted a straightforward explanation of organizational decline. Most of the US industrial base was constructed during WWII, when a high priced labor force was needed to create products and markets. But, as newly industrialized nations developed their industries and adopted advanced technology, the inexorable pressure on US companies to take advantage of lower wages in the production of


mature products made plant closures and migration to low wage regions inevitable.3 Most tests of comparative cost theory are based on case studies of particular industries that richly describe specific cases that fit comparative cost logic, such as the Lowell textile industry (Heckman, 1980), yet lack generalizability or a basis on which to explain non-confirming findings. For example, apparel firms located in New York have experienced severe decline despite the fact that their production costs are less than those of their most important competitors. In several regions associated with the Third Italy, the average Italian producer's labor costs exceed the labor costs of New York producers by $3.00 an hour and those of Far-East producers by more than $11.00 an hour (Werner, 1989). Yet, firms located in Italy are growing in number and remain the world's largest clothing exporters in value (Werner, 1989; Lazerson, 1995). Other evidence for comparative cost perspectives is based on analysis that finds a statistical association between industrial decline in the Northeast and its rise in other regions with lower labor rates, particularly the Sun-belt (Flicks, 1980; Norton, 1986). Although these studies offer findings consistent with the broad expectations of comparative cost views, they obscure the fact that firms in the same region or industry have different cost structures. Thus, a more precise test of the theory would examine whether firms with high wages, as opposed to those with low wage structures, are more likely to close plants or move to areas with relatively lower labor costs. In one of the few studies to make this decisive comparison, Romo and Schwartz (1995) detected little support for comparative cost arguments. They found that of the 2,907 plants of greater than 25 employees or 10,000 square feet of workspace that migrated/closed in New York State from 1960 to 1985, 49.9 percent had no cost savings, while a full 24 percent moved to locations with higher wage costs - suggesting that nearly 75 percent of the activity in New York State runs counter to comparative cost arguments. A more descriptive summary of these patterns is presented in Table 1 (adapted from Romo and Schwartz, 1995: 879). Table 1 presents the potential average labor cost savings of permanently closing an establishment in New York State or migrating to a new location, by 2-digit manufacturing sector, 1960 to 1985.

3 Capital accumulation arguments also emphasize rational disinvestment behavior as a source of decline, but view it as a product of managerial philosophy, not competitive forces. In this view, quick-fix profit making strategies direct investment into speculative, short-term activities such as junk bonds rather than into investments that improve plant efficiency. Over time, disinvestment in once profitable plants reaches a level that makes production too expensive, justifying plant closure (Bluestone and Harrison, 1982).


Table 1. Potential Savings in Wage Costs and Percent Migrating: Manufacturing Establishments in New York State, 1960-1985.

Sector                          Potential Savings    Migrating Plants as a
                                in Wage Costs (a)    Percentage of 1963 Base
Apparel                               24.3                  0.9
Chemical                              16.1                 12.2
Electrical Machines                   36.9                 12.8
Fabricated Metals                     28.7                  3.4
Food and Kindred                      33.2                  3.4
Furniture                             23.6                  2.8
Instruments                           45.9                  7.7
Leather                               20.9                  1.6
Lumber                                 3.1                  1.5
Machinery                             30.4                  5.0
Miscellaneous                         41.1                  2.5
Paper and Allied Industries           19.1                  9.0
Petroleum and Coal                    13.6                  7.0
Primary Metals                        37.8                  9.0
Print and Publishing                  29.8                  1.9
Rubber                                11.2                  9.4
Stone and Clay                        25.8                  2.8
Textile Mills                         26.7                  4.1
Transport Equipment                   31.1                  7.1

(a) Defined as the largest percentage savings in wage costs achieved by any migrating plant in the sector between 1960 and 1985 (adapted from Romo and Schwartz, 1995: 879).

Consistent with comparative cost theory, Table 1 reveals that there are considerable potential wage cost savings across industries. For example, column one shows that while lumber has a potential wage cost savings of only 3.1 percent through closure, transportation equipment has a potential wage cost savings of 31.1 percent through closure or migration. Contrary to comparative cost logic, however, the results suggest that there is little to no relationship between labor cost savings and actual migration or closure. For instance, apparel, instrument manufacturing, and printing and publishing had estimated cost savings of 24.3, 45.9, and 29.8 percent respectively, yet migrated/closed at rates of only 0.9, 7.7, and 1.9 percent. In contrast, the chemical sector had a potential wage cost savings rate of just 16.1 percent but migrated at a rate of 12.2 percent, nearly twice that of the other three sectors. Across all sectors, the product moment correlation between potential savings in labor costs and the migration rate is only .06, suggesting an almost random relationship; a quick recomputation from the Table 1 figures is sketched below. Romo and Schwartz (1995: 879) concluded, ''While these data do not exclude comparative costs as a factor in the relocation of industry, they refute the widely held assumption that such costs are the fundamental determinant of migration destinations [and closure].''
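The following minimal sketch (in Python; not part of the original article) recomputes the product moment correlation from the nineteen sector pairs transcribed in Table 1. The result lands within rounding distance of the published .06.

    # A hedged cross-check, using only the values transcribed from Table 1
    # (sectors in table order); any gap from the published .06 reflects
    # rounding in the printed table.
    from math import sqrt

    savings = [24.3, 16.1, 36.9, 28.7, 33.2, 23.6, 45.9, 20.9, 3.1, 30.4,
               41.1, 19.1, 13.6, 37.8, 29.8, 11.2, 25.8, 26.7, 31.1]
    migration = [0.9, 12.2, 12.8, 3.4, 3.4, 2.8, 7.7, 1.6, 1.5, 5.0,
                 2.5, 9.0, 7.0, 9.0, 1.9, 9.4, 2.8, 4.1, 7.1]

    def pearson(x, y):
        # Pearson product moment correlation: cov(x, y) / (sd(x) * sd(y)).
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / sqrt(var_x * var_y)

    print(round(pearson(savings, migration), 2))  # ~0.06-0.07: essentially random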


The Threat-Rigidity Perspective

Whereas comparative cost theory focuses on objective cost measures and rational decision-making processes, threat-rigidity arguments concentrate on how subjective measures of performance and ''irrational'' decision-making processes affect decline. In this way, comparative cost and threat-rigidity perspectives present different but complementary views of decline. The threat-rigidity perspective contends that individuals, groups, or organizations in threatening situations have a general tendency to follow ''... maladaptive or pathological cycles of behavior'' that restrict information processing and tighten social control over organizational members in the face of economic change or adversity (defined as a failure to meet a desired level of aspiration, e.g. an inability to reduce labor costs to the level of a competitor) (Staw, Sandelands, and Dutton, 1981: 501). These responses are manifested in ''mechanistic'' shifts in organizational structures that simplify information codes, narrow fields of attention, centralize decision making, and ultimately reduce the organization's ability to change. According to the threat-rigidity perspective, the factors determining organizational decline are rooted in psychological processes that are activated in response to threatening stimuli, such as new lower cost competitors or the inability to innovate. Because decision makers are socialized to apply well-learned and previously successful strategies, threats fail to elicit rational decisions and instead provoke ''pathological,'' ''emotional,'' or habitual responses that worked well in the past but are inappropriate for the current crisis. Past successes may also cause decision makers to deny problems and defend ''business-as-usual'' (Weitzel and Jonsson, 1989). Ultimately, these behaviors limit the firm's ability to innovate and change at a time when it needs flexibility and invention most in order to adapt to new environmental conditions. Research on threat-rigidity effects has found mixed results (Ocasio, 1995). Cameron, Whetton and Kim (1987) found no evidence of decision making


centralization, non-prioritized cost-cutting, or a lack of long-range planning in declining universities. These findings are similar to the results of other studies that show that some organizations become more innovative rather than less during decline. Etzkowitz (1990) found that colleges experiencing financial cutbacks support faculty entrepreneurship and new types of industry-university alliances. Evidence from studies of large manufacturing firm failures is also mixed. In a sample of matched bankrupt and surviving firms, D'Aveni (1989) argued that the increased levels of centralization, heightened attention to costs, and ''strategic paralysis'' found in bankrupt firms reflected threat-rigidity responses. In a follow-up study, D'Aveni and MacMillan (1990) found that manufacturing organizations facing performance problems relied on well-learned, rather than adaptive, responses. Hambrick and D'Aveni (1988, p. 15), however, using the identical sample of firms, were unable to determine whether the strategic vacillation and paralysis4 they identified in bankrupt firms was due to ''a perceived lack of resources, a tendency for decision makers to 'withdraw', or to paralysis brought on by anxiety.'' Hence, some declining firms appear to exhibit patterns consistent with the threat-rigidity thesis, but others do not.

4 Hambrick and D'Aveni (1988, p. 11) defined vacillation and paralysis as a firm's attempt at ''either doing too much or too little'' entry or exit into new product domains as a means of reversing decline. It is measured as an organization's entries into, and exits from, different lines of business measured at the 4-digit SIC level and uses the coefficient of variation to compare firms.

Ocasio (1995) argues that this contrary evidence stems from the fact that theories of threat-rigidity have not been adequately reconciled with theories of rational action and failure-induced change such as comparative cost perspectives. He concluded that threat-rigidity arguments are predictive, but lack a specification of the institutional and organizational field-level contingencies under which they are operative. In this regard, Sutton (1990) has begun to touch on the importance of interfirm relations in determining how the threat-rigidity process unfolds. He theorizes that a deterioration in a firm's resource base may harm its reputation with its network of exchange partners - leading exchange partners to end relations or to bargain for more favorable contract terms. Consistent with this argument, D'Aveni (1989) found that the social capital of the top management of bankrupt firms affects the firm's ability to acquire resources during periods of threat and decline. He showed that firms with senior executives who attended prestigious schools or who have extensive


contact networks are better able to acquire financial resources than other firms that lack these kinds of ties. Thus, the threat-rigidity literature, like the comparative cost literature, has begun to recognize the need to pay greater attention to the structural relations within which firms operate and decline processes unfold. Interestingly, this gap in our understanding of decline processes arises out of both perspectives even though they approach decline from different microbehavioral and organizational assumptions. In comparative cost logic, behavior follows almost automatically from rational calculations and self-interested organizational behavior. Social and interfirm ties have no real effect on behavior; instead, decline occurs in an economic outer space in which ''... relations of production and distribution are (and must be) essentially untouched by such sociological, cultural, anthropological and political considerations as the size, location and history of one's community, family and ethnic ties, the presence or legacy of attachment to guilds, or commitment to place, or else they ought to be'' (Harrison, 1992, p. 476). On the other hand, organizational decline logic holds that failure results from pathologically flawed decision-making. Organizational relations and interdependencies are at best ''triggers'' of behavior because decision makers follow nearly deterministic scripts of behavior. From this perspective organizational behavior is also viewed as automatic - one merely needs to know the conditions under which decision-making is taking place to predict the mechanistic changes in individual and intraorganizational behavior that promote decline. Thus, although each perspective posits opposing mechanisms, they paradoxically reach similar conclusions about decline. In short, once we know either that an organization is acting rationally or is under threat, organizational decline behavior follows automatically and ongoing relations become unimportant in determining decline.5 The structural embeddedness perspective does not refute either comparative cost logic or irrational decision-making processes, but attempts instead

5 It is not surprising that decline research has focused on individual actor explanations. Attribution theory has shown that observers seek an explanation of phenomena that provides a sense of control over situations (Kelley, 1971). Thus, by attributing decline to individual, atomized actors, observers have a powerful theory of how to control organizations - change managers or their decision making processes. Attributing decline to social structural relations shifts attention away from the individual and makes situational control more difficult. Moreover, because economics and psychology focus on traits (e.g. human capital, personality) as causes of behavior, it has been difficult for scholars to attribute causality externally.


to specify the conditions under which costs and faulty decision making are likely to be the primary determinants of decline, while also making unique predictions that are not conceived of in either of the other perspectives. In the next section, I attempt to integrate the insights and advances found in both comparative cost and threat-rigidity perspectives with recent work in economic sociology on the embeddedness of economic action. I build on these results and other arguments to develop a framework which tries to explain how the structure and content of network ties among firms affect firm and network-level performance. This work takes an explicit network focus and attempts to explain economic behavior and microbehavioral decision-making processes as a function of the social structure within which organizations are embedded.

The Structural Embeddedness Perspective

Resource dependence and institutional theories have shown that organizational behavior is constrained by resource interdependence and uncertainty (Pfeffer and Salancik, 1978; DiMaggio and Powell, 1983). The embeddedness approach argues that economic behavior cannot be explained by individual motives and cognition alone because it is influenced by networks of relations between and among organizations and individuals (Granovetter, 1985; Portes and Sensenbrenner, 1993; Romo and Schwartz, 1995; Uzzi, 1996). ''Embeddedness refers to the fact that exchanges within a group ... have an ongoing social structure [that] ... by constraining the set of actions available to the individual actors and by changing the dispositions of those actors toward the actions they may take'' affects economic performance in ways that economic and psychological schemes of organizational behavior do not centrally address (Marsden, 1981: 1210). First, interfirm networks are important mechanisms by which resources are allocated and valued by actors, and second, networks influence behavior by directing decision makers' attention to specific stimuli and by changing their expectations and motives. A distinct feature of the approach is that embeddedness characterizes a unique logic of exchange. In this exchange logic, explanatory power is attributed to ongoing networks of interfirm ties that shape actors' expectations and opportunities in ways that differ from self-interested economic logic or the logic of mechanistic, hard-wired psychological processes. The key implication is that the level of embeddedness in an exchange system creates opportunities, constraints, and outcomes that are not predicted by standard explanations. In the next section, I focus on how


content and structure of interfirm network ties shape decline and present data which support key assumptions of the model.

Network Structure and Formation

In both comparative cost and threat-rigidity perspectives on decline, the image of the social structure within which the firm operates is one of an atomistic market (Sutton, 1990; Harrison, 1994). Organizations primarily form loose, temporary connections to other firms, and the quality of the relationship itself remains cool. ''To this day, orthodox economists ... assume that relations of production and distribution are (and must be) essentially untouched by such sociological, cultural, anthropological and political considerations as the size, location and history of one's community, family and ethnic ties, the presence or legacy of attachment to guilds, or commitment to place, or else they ought to be'' (Harrison, 1992). In contrast, the structural embeddedness approach makes the assumption that organizations are embedded in interfirm networks that shape individual firms' performance, as well as the performance of the network as a whole, by determining how resources are allocated and what range of action is considered feasible by organizations. Romo and Schwartz (1995) showed that interfirm networks develop around core establishments that provide the resources on which subcontractor, supplier, trader, and service organizations thrive. Core organizations have several characteristics that distinguish them from periphery firms (Romo and Schwartz, 1995). First, they tend to be large manufacturers of diversified finished products, rather than specialized producers. Second, core organizations tend to be sophisticated assemblers who have broad knowledge of the design, fabrication, and marketing process, rather than extensive primary expertise in the product and process innovations related to particular component parts. Instead, they tend to purchase components and expertise from specialized local contractors and suppliers. Lastly, core firms are likely to be part of multiregional or multinational companies, since their finished products are shippable and marketable in markets that demand similar products but may be located in different places around the globe. In the US auto industry, for example, GM is an example of a core organization. It contracts with over 5,000 suppliers yearly for component parts that it then assembles into final goods (Helper and Levine, 1992). More generally, the use of suppliers is evidenced by the fact that the typical US firm contracts out about 60% of all manufactured parts and there are 5-7 customer-supplier links on the value-added


manufacturing chain from raw material to the end user (Kelley and Harrison, 1990). In contrast, periphery firms tend to be small or medium sized producers of a narrow range of components or products. Periphery firms provide core firms with a pool of specialized skills and expertise that may be uneconomical for core organizations to develop on their own, because of the uneven demand for a product or because the unique know-how that is required to undertake the design and manufacture of the product is too costly to develop in-house (Davis-Blake and Uzzi, 1993). In this way, periphery firms provide more than just products and services to the network of organizations with whom they transact. Their small size and specialized production facilities increase the flexibility of the network for dealing with changing market conditions by enabling core firms to adjust production levels in quick response to changes in demand, while also providing a source of skilled and specialized talent that can be tapped when creating new products or moving into new markets (Piore and Sabel, 1984; Harrison, 1994). Since core organizations provide the principal resources on which periphery firms survive, interfirm networks develop around core organizations that provide a market for the specialized components and products made by contracting, supplier, and other periphery organizations. Moreover, because the viability of networks of organizations becomes tied to the welfare of the core, core firms acquire the power to shape the social structure of the local economy and the embeddedness of firms within it. They gain power in local politics, professional associations, and on boards of major institutions (Mintz and Schwartz, 1985). At the same time, these resource constraints lead periphery organizations to mimic the practices of core firms, establish plants near core establishments, or form corporate ties to core firms (Florida and Kenney, 1991). Taken as a whole, these processes create pressures that transform atomized markets into networks of organizations. The creation of networks around Japanese transplant firms is a contemporary example of the network formation process. ''Around any major Japanese corporation one can find a cluster of intermediate-sized firms that make quality products ... Most of the parts used by auto plants in Japan, for example, are manufactured by sub-contractors. All the automaker is responsible for are the assembly and painting operations'' (Trevor and Christie, 1988: 20). Moreover, Florida and Kenney (1991) report that the formation of networks of suppliers and contractors around the Japanese transplant automakers in the Midwest was due only in small part to comparative


costs. Over 90 percent of the periphery firms chose sites near core firms to take advantage of network benefits, rather than cost efficiencies. Fig. 1 illustrates the core and periphery structure of a typical interfirm network in the apparel industry.6 Circles represent the different firms that make up a network. Arrows indicate the flow of goods and information. The core firm is at the center of the network. Apparel manufacturers are really sophisticated assemblers (hence the industry-specific nomenclature of ''jobber'') that often make no part of the garment. Instead, they usually design a product line, in-house or with freelance designers, organize the work of the specialized firms that manufacture the individual components of the garment, and provide a channel between finished goods and retailers. Core firms also link backward to textile mills and converters, which take raw materials and transform them into fabric. The primary periphery firms in this network are design, grading and marking, and sewing contractors. Other contractors are used for specialized production purposes - pleating, button holes, ruffling, and bias trim. These firms have deep knowledge of one specialized aspect of the production process. As can be seen from the diagram, the welfare of the network is tied to the viability of the core firm, while the success of the core firm is itself dependent on the performance of its contracting network. This creates a reciprocal dependence between the firms in the network and the network as a whole. Moreover, this example points out that the large size of a core organization is not measured in absolute terms but in relative terms vis-a-vis periphery firms. In this industry, apparel manufacturers are small in absolute size, as measured by number of employees, and may have fewer direct employees than the periphery firms in their network, but they almost always have much greater sales volume and more connections to other firms in and outside the immediate locale.

6 Original data on this economy came from two sources. Intensive field interviews were conducted with the CEOs of 20 apparel firms with gross sales varying from $500,000 to $1,000,000. Interviews focused on women's better dress apparel firms - a design driven and trendy segment of the market. Data on the level of transactions between manufacturers and contractors were obtained from the International Ladies Garment Workers Union and the American Fashion Council, which maintain records on the volume of transactions between contractors and manufacturers. The data describe the full network of relations each firm possesses and the level of that relationship in terms of resource dependencies. A firm's main product lines, number of employees, ownership ties to other firms, and location are also included in these data. Because the union relies on these records for dues collecting, it maintains data on the number of workers in each firm and how much work they do for other firms to accurately track the manufacturer or contractor responsible for paying workers' dues. Data are kept only on unionized shops; nearly three-quarters of all firms in the New York regional economy are union controlled, and non-unionized shops tend to be small. For a full description of the data see Uzzi (1996a).


Fig. 1. Typical Interfirm Network in the Apparel Industry's Better Dress Sector. [Figure: a flow diagram. The core manufacturer sits at the center, linked backward to a textile mill (which sells greige goods) and a converter (which dyes and textures greige goods into fabric), forward to a retailer (which places orders and receives garments) via the manufacturer's showroom and warehouse, and outward to first-tier periphery contractors - a design studio (creates designs and samples), grading contractor (sizes patterns), cutting contractor (cuts fabric), and sewing contractor (sews the garment and delivers it to the manufacturer) - and second-tier periphery contractors for trimming, ruffling, buttons, and pleating.]

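For concreteness, the structure in Fig. 1 can also be written down as a small adjacency map. The sketch below is a hypothetical rendering (in Python, with node names taken from the figure's labels, not from the original article); it records which firm sends goods or information to which, and shows that the core manufacturer's centrality can be read off mechanically.

    # A minimal sketch, assuming the firm labels shown in Fig. 1: each firm maps
    # to the firms it sends goods or information to.
    network = {
        "textile_mill":       ["converter"],            # sells greige goods
        "converter":          ["core_manufacturer"],    # dyes/textures fabric
        "core_manufacturer":  ["design_studio", "grading_contractor",
                               "cutting_contractor", "sewing_contractor",
                               "retailer"],             # organizes the whole chain
        "grading_contractor": ["cutting_contractor"],   # sizes the pattern
        "cutting_contractor": ["sewing_contractor"],    # cuts the fabric
        "sewing_contractor":  ["core_manufacturer"],    # assembles and delivers
    }

    # Total degree (ties in plus ties out) makes the core's position visible:
    # it touches far more firms than any periphery contractor does.
    firms = set(network) | {f for out in network.values() for f in out}
    degree = {f: len(network.get(f, ())) +
                 sum(f in out for out in network.values()) for f in firms}
    print(max(degree, key=degree.get))  # -> core_manufacturer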


Table 2. Indicators of Structural Embeddedness in the New York Better Dress Apparel Industry, 1992.

Variable                                          Contractors        Manufacturers
                                                  (N = 508 firms)    (N = 91 firms)
Mean number of firms per network                  4.33 (3.26)        12 (10.28)
% of focal firm's work due to one firm            61.7% (.285)       27.8% (.211)
% of focal firm's work due to two firms           76.7% (.271)       39.7% (.236)
% of focal firm's work due to three firms         82.5% (.268)       45.5% (.230)
Market boundary index (# ties/# possible ties)    4.9%               2.5%

Note: Values are means with standard deviations in parentheses. The sample represents all unionized firms, about 85% of the industry.

Table 2 further describes the typical size and nature of interfirm network ties in the apparel industry. The data show that the average number of ties that a firm maintains in the ''market'' is relatively small, given the number of possible relations in the regional economy. Limiting the analysis to women's better dress firms, the average contractor maintains ties with 4.33 manufacturers out of the possible 91 manufacturers with whom they can do business. The average manufacturer maintains ties to 12 contractors out of a possible 508 subcontractors with whom they can do business. Relationships in networks also tend to be more concentrated than expected by comparative cost theories, which focus on the atomization of exchange relationships (McLean and Padgett, 1996). Contractors tend to do a high proportion of their work for just one or at most two firms. On average, 61.7% of a subcontractor's total work is due to a single manufacturer, and 76.7% is due to just two manufacturers. In the case of manufacturers, 27.8% of their total work goes to a single subcontractor. It also appears that while manufacturers maintain networks with a mean of 12 contractors, three contractors account for 45.5% of the average manufacturer's total business. Another measure, the market boundary index, measures the proportion of buyers and sellers, out of all possible buyers and sellers, that an average firm transacts with. The lower the index, the more firms rely on a small portion of the total market for the volume of their transactions. The market boundary values for contractors and manufacturers are, respectively, 4.8% of the market (4.33 ties/91 possible ties) and 2.5% of the market (12 ties/508 possible ties). Thus, contrary to the atomistic logic of market exchange that underlies current views of decline, but consistent with a structural embeddedness perspective, firms appear to form highly interdependent resource dependence ties with a network of trading partners - underscoring the importance of network structure in the functioning and survival of the organization.7

7 It is possible that the concentration and relative exclusivity of network ties reflected in these data are an artifact of an unrepresentative, one-year cross-section of data. However, a research project carried out by Emil Schlesinger (1951) and funded and published by the International Ladies Garment Workers Union (now called UNITE, NYC, NY) demonstrates clearly that these network characteristics are typical of on-going interfirm relationships in the apparel industry dating back as far as the 1930s and up to the end of his study in 1951. His data also show that principal relationships between manufacturers and contractors were long-term and enduring, while relationships that comprised a small percentage of total business were fleeting and ''turned over'' frequently on a year-to-year basis (see also Waldinger, 1986).
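The market boundary index itself is a simple ratio, and the figures above can be reproduced directly. The sketch below is illustrative Python, not from the original article; the small gap between its output for manufacturers and the published 2.5% reflects rounding in the source.

    # A minimal sketch: the market boundary index is the (mean) number of ties a
    # firm maintains divided by the number of possible trading partners; lower
    # values mean a firm transacts with a smaller slice of the total market.
    def market_boundary_index(mean_ties, possible_ties):
        return mean_ties / possible_ties

    # Average contractor: 4.33 ties out of the 91 manufacturers it could serve.
    print(f"contractors:   {market_boundary_index(4.33, 91):.1%}")   # ~4.8%
    # Average manufacturer: 12 ties out of the 508 contractors it could use.
    print(f"manufacturers: {market_boundary_index(12, 508):.1%}")    # ~2.4%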


The above patterns are particularly illustrative because the apparel industry is precisely the kind of industry in which one would expect comparative cost theory's predictions about the atomization of relations to be an accurate description of economic relations: there are many buyers and sellers, low barriers to entry, low start-up costs, and many substitutable shops. Hence, while these data do not refute current decline arguments, they highlight the need to develop an understanding of how embeddedness and network structure affect economic action.

Social Structure and Interfirm Network Relationships

An understanding of how embeddedness and network structure produce particular economic outcomes demands a closer examination of the interactions among firms in organizational networks. Research shows that exchange relationships among firms in a network can be either arm's-length, as in an atomistic market, or closely knit and collaborative, as the organizational network literature emphasizes (Powell, 1990). I view arm's-length and embedded relationships as opposite ends of an exchange continuum, so that a network can have varying levels of structural embeddedness, from low (arm's-length ties) to high (embedded ties), depending on the type of ties used by firms in the network. This view allows for the possibility that networks can be mechanisms for both cooperation and exploitation (Granovetter, 1985; Burt, 1992).

Arm's-length Relationships. In arm's-length ties only price and quantity data need to be exchanged between buyer and seller because, according to neoclassical logic, these data contain all the information necessary to make efficient business decisions currently and in the future (Hirschman, 1970). Each party to the transaction pursues its own interest and exchanges with the highest bidder or lowest cost producer. If ongoing relations form between parties, they are believed to be more a matter of coincidence than deliberate commitment (Lazonick, 1991), or simply epiphenomena of economic interchange (Granovetter, 1985). The identity of parties is immaterial to the extent that it does not influence the use-value of the commodity (Sen, 1985), while sociological factors such as family, shared ends or values, history, commitment, or trust have a minimal effect on economic action (Harrison, 1992). Thus, when arm's-length ties are used among firms in a network, the network structure resembles the competitive model of microeconomic theory (Baker, 1990; Powell, 1990). Arm's-length relationships are argued to produce efficiencies and promote survivability in several ways. First, they provide wide access to market information. Because there is a low level of exchange between any two parties, firms can spread their business out in small parcels to many firms, which in turn gives the focal organization a large number of competitors from which to sample price and quantity information. Second, arm's-length ties reduce the risk of opportunism because firms, in the process of spreading their business out among many trading partners, avoid small-numbers bargaining situations and increase their ability to withdraw from problematic partners (Hirschman, 1970; Williamson, 1985). Finally, low interfirm dependencies mean that firms can unilaterally adapt to changes in the market (Williamson, 1985).

Embedded Relationships. At the other pole of the exchange continuum is the embedded relationship. In these relationships, social factors such as personal relations, identity, and trust influence the nature of the economic association and the behavior of the economic actors themselves (Granovetter, 1985). Uzzi (1996), integrating the literature on interfirm relationships (see Powell, 1990) with his ethnographic field research in the New York apparel industry, has demonstrated that these relationships are composed of three primary components - trust, fine-grained information transfer, and joint problem-solving arrangements - and that these factors influence the logic of exchange and decline processes. In contrast to the contractual relations of arm's-length ties, trust has been found to be the governance mechanism in network exchange relationships (Smitka, 1991; Larson, 1992; Portes and Sensenbrenner, 1993). Trust is defined as the expectation that an exchange partner will not act in self-interest at another's expense. On a microbehavioral decision-making level, trust operates like a heuristic - a predilection to interpret someone's


behavior in a favorable light, rather than with the intensive calculation typical of economic models of decision making. The presence of trust in an interfirm relationship is important because it facilitates the giving of voluntary contributions to the relationship that go beyond an exchange partner's expectations. These voluntary exchanges might include putting an exchange partner's goods at the front of a queue, working overtime, or giving business to an exchange partner to help them through a slow period even when that business is not immediately needed. The distinctive characteristic of these assets is that they are difficult to value with prices and are monitored with few or no formal mechanisms of enforcement (e.g. no fines or contracts). Over time, these exchanges create relationship-specific opportunities that can be drawn upon in times of need (Uzzi, 1996). This means that firms with embedded ties in networks gain access to privileged resources that help them adapt, while also lowering transaction costs. The information exchanged in embedded relationships includes much more than just price and quantity information. Important technical know-how and tacit information is shared among exchange partners so that each comes to understand the products and production processes of their trading partner well. As a result, firms linked through embedded contacts can jointly contribute to design and process innovations, rather than simply transmitting prices or following the plans of the purchasing firm (Helper, 1990; Powell, 1990). Importantly, Smitka (1991) found that this information is not a form of, or result of, asset-specificity. Rather, this type of know-how may be quite general and transferable to a wide range of exchange partners without a loss of value. In a study of apparel firm network relationships, Uzzi (1996) found that the information transferred in embedded relationships was more tacit and proprietary than that exchanged in arm's-length ties. Unlike price data, which presumably distills all pertinent features of a good into a single dimension, the tacit and proprietary data of embedded exchanges had a holistic rather than a distilled structure. This information structure comprises a configuration of densely packed patterns of data that fuse together components of fashion, materials, nomenclatures, and production techniques into a ''style''. This style tends to be difficult and time-consuming even for experts to articulate and separate into component parts, and as a result it is difficult to codify in discrete terms or communicate with prices through arm's-length ties. His ethnographic data demonstrated that


information links in embedded ties comprise a composite of non-distillable ''chunks'' of information that are not only highly detailed (relative to price data), but quickly processed in a manner consistent with Herbert Simon's notions of ''chunking'' and expert rationality (Prietula and Simon, 1991). The value of fine-grained information exchange has been shown to be more than a matter of asset-specificity, because the social tie between exchange partners imbues the information with value beyond what it has at hand. For example, Uzzi (1996) reports a case of a manufacturer who uses his network of arm's-length ties to shop the market of retail buyers to learn what styles they are converging on. Once this forecast has been determined, he passes the information on to his embedded ties. This feedback, in turn, gives his embedded ties an advantage in predicting the market. Moreover, the manufacturer in this case reported that his embedded ties believe him only because of their close relationship; if they learned the same information from an arm's-length tie, they would have little faith in the veracity of the data. From the point of view of organizational competitiveness, what this means is that these chunks of information not only allow individuals to handle more detailed data at one time, but that decision-making is sped up because exchange partners are processing chunks of packed data rather than individual pieces of that data. Lastly, embedded ties include joint problem-solving arrangements that enable firms to coordinate functions and resolve problems (Larson, 1992). In an arm's-length tie, problems in scheduling, production, or quality control are resolved independently by the parties responsible for that function. Deviations from contractual standards are resolved through exit from the relationship (Helper, 1991). In embedded relationships, exchange partners send personnel or tightly link their separate operations across firm boundaries in an effort to solve production or design problems on-the-fly (Smitka, 1991; Larson, 1992). Thus, joint problem-solving arrangements are key mechanisms of ''voice'' (Hirschman, 1970) that allow organizations to promote resource pooling and to learn from their mistakes through direct feedback from exchange partners. It is worth noting that although the dimensions of embeddedness have been presented as individual components, they interact as a gestalt. Trust promotes information exchange between two firms, while increased information exchange improves one's trust in another's claims about performance (Moorman et al., 1992). Joint problem-solving increases information exchange during learning. Fine-grained information exchange


in turn improves problem-solving performance. Trust is fostered in joint problem-solving situations that reduce another party's risk. Finally, joint problem-solving allows one to go beyond the letter of the contract - to create indebtedness on the other's part and an opportunity to show one's ability to do more than is required or expected beforehand. Broadly speaking, the supplier relationship policies of US and Japanese automakers offer contrasting examples of the use of arm's-length and embedded-tie interfirm arrangements, respectively. Each Big Three automaker typically spreads its work out in small parcels to thousands of suppliers annually. Moreover, contracts are usually short-term and based on distrust of the other party (Cusumano and Takeishi, 1991). Among the Big Three automakers, it is still common practice to switch suppliers yearly in order to save pennies per component. Also, US firms typically specify all the details of the performance contract and the part to be produced. Contractors are not normally asked to participate in design or product development or encouraged to exchange tacit information on production or product development processes (Helper and Levine, 1992). Japanese automakers focus their ties on 200-300 contractors, rather than thousands (Smitka, 1991). Interfirm relationships are characterized by fine-grained information exchange on production techniques and new product innovation, and contractors typically co-design the products they supply to the automaker. Contractors trust that core firms will forego opportunities to switch to new low cost and presumably more efficient subcontractors if the opportunity to successfully form an embedded tie with the subcontractor is promising. Lastly, trust ensures that problems will be worked out fairly whether they arise on-the-fly or in the future (Dore, 1983). Finally, the dyadic focus of this discussion does not give the full picture, because dyads are themselves embedded in larger networks of relationships. Since an exchange between dyads has repercussions for the other network members through transitivity, embedded ties assemble into extended networks of such relations. The ties of each firm, as well as the ties of their ties, generate a network of organizations that becomes a repository for the accumulated benefits of embedded exchanges. Moreover, the longer an actor has made embedded contacts within their present and past networks, the more the benefits of embedded ties can be ''stockpiled'' for future needs. Thus, the level of embeddedness in a network increases with the density and duration of embedded ties. Conversely, networks with a high density of arm's-length ties have low embeddedness and resemble an atomistic market.
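One crude way to make the preceding claim concrete is to score a network by the duration-weighted share of embedded ties it contains. The sketch below is an illustrative toy measure, not the article's published operationalization.

    # A minimal sketch of a toy embeddedness score: the duration-weighted share
    # of embedded (vs. arm's-length) ties in a network. Illustrative assumption,
    # not the article's measure.
    from dataclasses import dataclass

    @dataclass
    class Tie:
        embedded: bool   # True for an embedded tie, False for arm's-length
        years: float     # duration of the relationship

    def network_embeddedness(ties):
        total = sum(t.years for t in ties)
        return sum(t.years for t in ties if t.embedded) / total if total else 0.0

    # Dense, long-lived embedded ties score high ...
    print(network_embeddedness([Tie(True, 8), Tie(True, 5), Tie(False, 1)]))   # ~0.93
    # ... while fleeting arm's-length ties score low, approximating an
    # atomistic market.
    print(network_embeddedness([Tie(False, 1), Tie(False, 1), Tie(True, 1)]))  # ~0.33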

This process underscores the primary effect the network of ties has on a firm's performance. For example, Baum and Oliver (1991) found that social service organizations in Toronto had higher survival chances when they maintained close cost- and information-sharing ties to other social service organizations. Petersen and Rajan (1994) found that small businesses significantly increase their availability of financing when they build close ties to an institutional lender.

Within the general parameters of the structural embeddedness approach, it has also been shown that the structure of the network and its level of embeddedness are important for understanding adaptation and failure. Uzzi (1996) found that apparel contractors that operate in embedded networks have significantly higher chances of survival than do comparable firms that operate in markets - underscoring the importance of network effects, as well as the problems associated with thinking of arm's-length market ties as the most efficient form of transacting. The analysis revealed that a contractor's survival chances are optimal when it uses embedded ties to link to its network of manufacturers and those manufacturers have networks fashioned of both arm's-length and embedded relationships. The crucial implication is that a focal firm's survival chances are critically affected by its network - a web of ties, some of which lie beyond the actor's direct influence. Other studies report similar findings.

These arguments and findings suggest that social structure influences organizational behavior in important ways that are not well articulated in either the comparative cost or threat-rigidity perspectives. Whereas comparative cost and threat-rigidity views focus on atomistic relationships between firms, the structural embeddedness approach focuses on how network linkages inhibit or facilitate economic action, depending on the type of relationships an actor possesses and the structural position of the actor in the core or the periphery of the network. Moreover, whereas comparative cost and threat-rigidity perspectives view individual action as hard-wired, psychological tendencies, the structural embeddedness approach argues that preferences and motives emerge from the social relationships that make up an actor's network. Thus, whether a firm views changes in its environment as threats or opportunities, or chooses to close its business in the face of comparably better alternatives, turns on the way in which its network of ties enables or blocks access to resources needed to survive. In the next section, I build on these arguments and describe how key decline processes and their intensity vary for individual firms depending on the level of embeddedness in their network.
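Uzzi's (1996) finding, discussed above, that survival chances peak when a firm's network mixes embedded and arm's-length ties implies an inverted-U relationship. The toy curve below sketches only that shape; the quadratic form and its coefficients are assumptions made for illustration, not estimates from the study.

```python
# A minimal sketch, assuming an inverted-U relation between the share of
# embedded ties in a firm's network and its relative survival chances.

def relative_survival(embedded_share: float) -> float:
    """Toy inverted-U: highest at an intermediate mix of tie types."""
    if not 0.0 <= embedded_share <= 1.0:
        raise ValueError("embedded_share must lie in [0, 1]")
    return 4.0 * embedded_share * (1.0 - embedded_share)  # peaks at 0.5

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"embedded share {share:.2f} -> relative survival {relative_survival(share):.2f}")
```

Consistent with Propositions 3 and 4 below, the curve falls to its minima at the all-arm's-length and all-embedded extremes.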

Network Dynamics and Organizational Decline

It follows from my discussion on the structure of organizational networks that unique predictions about decline can be derived. While empirical tests are beyond this paper's scope, the ultimate value of this perspective lies in its predictive utility. The propositions discussed below are not meant to exhaust the universe of predictions but to accent those ceteris paribus predictions that are testable and central to a network perspective. The first set of propositions specifies the relationship between the content of interfirm relations and decline. The second set of propositions specifies the relationship between network structure and the decline of individual firms, as well as the network as a whole.

Interfirm Network Ties and Decline

The logic of embeddedness suggests that firms are likely to look to their network exchange partners for interpretation of environmental changes. Under these conditions, embedded ties are likely to help moderate threatening situations because these ties become conduits to privileged resources, information, and on-the-fly problem-solving arrangements that help firms adapt to problems that would otherwise exceed their individual resource base. Ocasio (1995:299) has made this link between social structure and threat-rigidity processes explicit. He concludes that "…the mental models used by individuals to enact economic adversity in organizations are socially constructed, … and will depend not just on the problem-solving search and narrowing of information processing triggered by loss aversion and threat-rigidity, but on the … supra-organizational factors that influence … decision makers."

Research on Japanese interfirm networks exemplifies some of these processes (Smitka, 1991; Gerlach, 1992). Studies have shown that when firms in a keiretsu experience adversity they are likely to be highly adaptive (e.g., move line managers into sales, institute job rotation programs, intensify training), in part because of their ties to other network partners with whom they have had long relationships and on whom they can rely to help them ride out the period of adversity.

Organizations in a network may also find it difficult to make changes in technology, production methods, or strategy if their ability to implement change depends on complementary changes among organizations they are tied to. This appears to have been a factor in the decline of the auto industry. Large firms in this industry were slow to adopt new technologies and work arrangements because the periphery organizations they depended
on were unable to coordinate adjustments quickly enough. Specifically, periphery firms were unable to change practices until core establishments shifted from arm's-length contracting modes to relational contracting modes that facilitated interfirm coordination (Helper, 1991).

Proposition 1: Firms in networks characterized by embedded ties will have lower rates of decline than firms that maintain arm's-length relations, and this effect will increase with increases in the rate of environmental change.

Network Structure and Decline

The structural embeddedness approach emphasizes the point that socioeconomic ties pressure organizations to become isomorphic and tighten relations within regional networks. However, organizations in a network may paradoxically become less able to adapt, as organizational isomorphism decreases diversity (Weick, 1979) and asset specificity among firms makes subsequent change costly and disruptive (Singh, House and Tucker, 1986). Ironically then, if core establishments migrate, downsize, or close, the embedded ties used by periphery establishments to increase transactional effectiveness between themselves and core firms may actually put them at higher risk of decline. This is because their competencies may not be transferable to new organizations or locales, and because they may not have the resources needed to ride out the period between the loss of a core establishment and the founding of a new core establishment and new interorganizational relations.

Proposition 2: The decline of an organization in a network will negatively affect the performance of other organizations in the network in direct proportion to its resource interdependence with the other organizations in the network and in inverse proportion to the other organization's ties to organizations in other networks.
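Proposition 2 has an almost mechanical structure, which the toy sketch below makes explicit. It is an illustration rather than a test: the firm names, the interdependence weights, and the linear propagation rule are all assumptions.

```python
# A minimal sketch of Proposition 2, assuming a declining core firm's shock
# is transmitted to each periphery firm in direct proportion to resource
# interdependence and in inverse proportion to the firm's ties outside the
# network. All values below are hypothetical.

def transmitted_decline(core_shock: float, interdependence: float, outside_ties: int) -> float:
    """Performance loss a periphery firm absorbs from a declining core firm."""
    return core_shock * interdependence / (1 + outside_ties)

peripheries = {
    # name: (resource interdependence with the core in [0, 1], ties outside the network)
    "supplierA": (0.9, 0),
    "supplierB": (0.6, 2),
    "supplierC": (0.3, 5),
}

core_shock = 1.0  # normalized magnitude of the core establishment's decline
for name, (dep, outside) in peripheries.items():
    print(f"{name}: transmitted decline = {transmitted_decline(core_shock, dep, outside):.2f}")
```

Summing such transmitted losses across a network is one way to picture the multiplier effect discussed later in the paper.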
These relationships speak to the debate on whether adaptation precludes adaptability, i.e. whether "Organizations that acquire an exquisite fit with their current surroundings may be unable to adapt when those surroundings change" (Weick, 1979, p. 135). They also suggest that organizations can obtain a balance between maximizing adaptation and adaptability via their mix of network ties. Firms with contacts that span networks may be able to access novel information and resources that permit adaptation to new environments even though they may be tied at the same time to firms in their regional network. Thus, organizations that are simultaneously "tightly" and "loosely" coupled to their networks maximize their adaptive capacity. Tight coupling improves the transactional efficiency of exchanges and sociopolitical cohesion within the network, while loose coupling prevents the complete insulation of the network and improves its ability to anticipate and react to opportunities. Granovetter (1973) showed that change in exchange systems depends on strong ties (direct friendships) that bind systems and weak ties (friends of friends) that make systems permeable (e.g., via innovation diffusion) to adaptive changes. Uzzi (1996) showed, in a study of apparel firms in New York, that there was a positive association between organizational survival and being connected to a network composed of a mix of embedded and arm's-length ties. Similarly, organizations tied to a network composed of either predominantly arm's-length ties or predominantly embedded ties had high death rates.

What these data suggest is that threat-rigidity effects are likely to be most problematic when all ties in a firm's network are embedded. This occurs because firms in the network become sealed off from new and alternative mental models of change, and thus are likely to apply an inappropriate mental model as the variance in environmental change increases. On the other hand, in networks composed only of arm's-length ties, firms lack privileged access to unique network resources and therefore have a narrower competitive base and fewer resources to draw on relative to firms that have embedded ties. In an integrated network structure, the variety of mental models represented by arm's-length linkages permits a greater range of models to choose from, and therefore a greater probability of implementing an appropriate model, while the embedded ties support coordinated and cooperative action.

Proposition 3: Threat-rigidity effects are most likely to occur in networks composed of only embedded ties, while comparative cost logic is most likely to occur in networks composed of only arm's-length ties.

Proposition 4: Decline rates will be lowest in networks composed of an integrated mix of both embedded and arm's-length ties.

Friedland and Palmer (1984) have argued that core organizations make decisions that promote the success of the whole organization, even if branch establishments in the organization are injured. Executives in core organizations are active in national business networks, which expose them to information about distant industrial sites, profitable acquisitions, and new investment opportunities (Mintz and Schwartz, 1985). Also, core firms tend to have multidivisional forms, diversified products, and geographically dispersed establishments.

Haunschild (1993) found that interlock networks among large firms affect their propensity to divest, as well as the price they will pay for acquisitions, after controlling for typical determinants of these activities. In both cases, Haunschild argued that the networks of large firms systematically channel investment and disinvestment information to their members while excluding non-network members. Not only do network ties shape the information available to firms, they can also make investment decisions more or less difficult to implement. Davis (1991) found that the interlock network within which a firm is embedded is an important determinant of the adoption of counter-strategies to disinvestment - specifically, the use of poison pills to ward off hostile takeovers. McPherson, Popielarz, and Drobnic (1992, p. 166) found in their study of organizational memberships that "Weak ties outside a group shorten our stay in the group by introducing new information, new commitments, and contradictory pressures."

Under these conditions, aspects of comparative cost logic complement the structural embeddedness approach. Core firms are likely to adopt business strategies that transcend the interests of a single establishment or regional economy in ways unlike those of single-site or family-owned firms (Romo et al., 1988).

Proposition 6: Organizations with ties to firms outside the region are more likely to reduce operations or close plants in a regional network than organizations with only regional network ties.

Proposition 7: The core organizations around which regional business networks are founded are paradoxically those most likely to abandon the network.

When core establishments decline or depart, there is a multiplier effect that causes the decline of interdependent organizations. Periphery organizations tied to the core lose access to critical and perhaps nonsubstitutable resources. A structural embeddedness approach argues that an organization's response to these events and its likelihood of decline are contingent upon its structural position in the interfirm network. For example, I propose that "strategic vacillation," a key decline response found in threat-rigidity research, is conditioned by the organization's network characteristics, which, in turn, trigger psychological tendencies. Baker's (1984) study of stock option trading showed that as the size of the stock trading group increased, the variance of group members' trading behavior increased - resulting in high levels of stock price variation. This occurred because information about stock prices and trades was more difficult for each stock trader to
learn, interpret, and react to. As a consequence, group members had a more difficult time deciding on and converging to an equilibrium price in their trading behavior. Extrapolating from the structural causes of variation in trader behavior to the structural causes of strategic vacillation, I argue that strategic vacillation is associated with the number of connections organizations maintain in their networks.

Proposition 8: Large networks will have high levels of equivocality - resulting in high levels of strategic vacillation, i.e. too much strategic change during decline.

Proposition 9: Small networks will have low levels of equivocality - resulting in low levels of vacillation, i.e. too little strategic change during decline.

These points mark another area of integration between decline literatures. Comparative cost theory argues that organizations will survive in those regions promising the greatest return. The structural embeddedness perspective extends this logic by arguing that it is the financial interdependence and social context within which establishments are embedded that affect the pattern of decline. To illustrate, the structural embeddedness approach suggests an explanation of why financial losses lead to "mechanistic rigidities" in some declining firms but not in others (Sutton, 1990). D'Aveni and MacMillan (1990) found that bankrupt firms focus on their organization's "input/internal environment" (i.e. creditors, suppliers, owners, employees, and top managers), whereas matched surviving firms pay more attention to their organization's "output environment" (i.e. customers and economic conditions). The structural embeddedness approach suggests that declining firms turn their attention to "input/internal" factors because their ties to other organizations limit their ability to acquire financial resources. As a consequence, they turn to improving the internal aspects of the organization because these are mutable and may be the best way to reduce the hazard of failure, even in a permanently changed environment. In contrast, surviving organizations may focus on the external environment because their interfirm ties permit access to capital or control of market constraints. Survivors attempt to change external organizational factors because they are mutable and provide a way to overcome financial losses. Thus, a firm can use its network ties to strategically shape the environment within which it operates, even though the environment may be simultaneously placing constraints on the firm's behavior. This argument is similar to Burt's (1992) theory of structural holes. In Burt's foundational work, an
actor that spans a structural hole in a network (a position qualitatively similar to that of a broker) has high bargaining power because it is less substitutable vis-à-vis the firms it transacts with. In this way, severe threats to the organization's survival can be reversed via network linkages.

Proposition 10: Organizations in broker positions will have lower decline rates than organizations in client positions.

Proposition 11: Core establishments will have lower rates of decline than periphery organizations.

Thus, a structural embeddedness approach argues that both environmental constraints and individual organizational autonomy interact through social structure to determine a firm's probability of decline. External changes such as financial cutbacks or dwindling market share affect a firm's survival ability through direct interfirm ties, not through indirect market forces or psychological determinism.
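Propositions 10 and 11 can be explored on network data with standard structural measures. Burt's constraint index, implemented in the networkx library, is low for broker positions that span structural holes and high for client positions locked into a single exchange partner. The toy graph below is an assumption made for illustration: "broker" bridges two otherwise separate clusters, while "client1" depends entirely on a single hub.

```python
# A minimal sketch using networkx's implementation of Burt's constraint;
# the graph itself is hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("broker", "a1"), ("broker", "b1"),        # broker spans two clusters
    ("a1", "a2"), ("a2", "a3"), ("a3", "a1"),  # closed cluster A
    ("b1", "b2"), ("b2", "b3"), ("b3", "b1"),  # closed cluster B
    ("hub", "client1"), ("hub", "client2"), ("hub", "client3"),
])

constraint = nx.constraint(G)  # lower constraint = more brokerage opportunity
for node in ("broker", "a1", "client1"):
    print(f"{node}: constraint = {constraint[node]:.2f}")
# broker (0.50) < a1 (0.61) < client1 (1.00), matching the intuition in
# Proposition 10 that brokers are less substitutable than clients.
```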

Discussion and Policy Implications

This paper introduced a sociological perspective on organizational decline that is based on principles from embeddedness, institutional, and resource dependence theories. I reviewed current perspectives on organization decline and showed how they lacked a sociological account of social structure and organizational interdependence, and how this neglect led to a limited understanding of decline. My main argument is that economies are composed of networks of interdependent organizations. When core establishments decline, close down, or migrate, periphery organizations dependent on the core lose access to critical or nonsubstitutable resources. The embeddedness of periphery firms in local networks increases their likelihood of decline. These processes cause multiplier effects - the negative economic effects of the declining core firm spread throughout the network, causing the decline of interdependent firms. It is this result that connects the decline of a single firm with the decline of networks and industries.

From the perspective of public policy on organizational growth and decline, this research has two main implications that follow the guidelines offered by Hall and Quinn (1983). First, since organizations are a means by which public policy is implemented, they must be understood from the perspective of embeddedness if policy directives are to succeed. A point made
throughout this paper has been that current perspectives on organization decline have neglected an understanding of how social structure affects economic action. Comparative cost perspectives and threat-rigidity perspectives both treat organizations and their decision makers as unitary actors that operate in either an undersocialized world of "hard-wired" economic self-interests or an oversocialized world of well-learned, but unreflective, pathological responses. In either case, social structure plays a minor role in shaping expectations, motivations, or channeling resources. Thus, if public policy is to be implemented within an organizational context, an understanding that organizations are embedded in on-going relations with other organizations is fundamental to the success of policy. Policies of economic development that continue to view organizations as atomized actors regulated by the invisible hand of market processes can only produce limited results.

Future research should begin to consider how policy must change to enhance the competitiveness of industries and firms that are composed of interconnected networks of organizations. In highly successful economic sectors in Japan, Germany, and the Third Italy, economic policy has already developed to a high degree to capitalize on these structural considerations. In Italy and Japan, for example, there are many associations and crosscutting networks of formal and informal relationships at the contractor-to-contractor level, the contractor-to-manufacturer level, and the level between manufacturers and trading companies that enhance information flow among firms, technology transfer, resource sharing, international marketing, research and development projects, and labor management training, development, and redeployment strategies (Lazerson, 1988; Gerlach, 1992; Harrison, 1994). These relationships are multiplex (information ties, resource ties, and rule-making ties) and help coordinate the activities, interests, and competencies of the firms in a region or industry in ways that appear to out-perform invisible-hand allocation and alignment systems (Best, 1990). One conclusion from this research is that public policy on economic growth and decline should specifically address the processes by which policy is implemented between and among organizations, rather than treating policy as a function of a unitary organization.

A second implication of the structural embeddedness approach for public policy is that greater attention needs to be placed on the role of periphery organizations in the policy formulation process. This research suggests that more policy needs to be directed to the smaller firms that support the effectiveness and adaptability of the large core organizations. When the base
of periphery firms is underdeveloped or under-supported, the human and technical resources needed by core firms to innovate, bring products to market quickly, or shift production are limited - increasing the likelihood that the core will close or migrate to locales with these features (e.g., the electronics industry to Japan) or lower labor costs in order to sustain a competitive advantage. This suggests that there needs to be a stronger connection between policy at the federal and local levels as a way of coordinating the global interests of large firms and the local interests of periphery organizations. Both Etzkowitz (1993) and Harrison (1994) have argued that there are numerous methods for accomplishing these outcomes - high-tech parks and research triangles such as Stanford Industrial Park and the Research Triangle in North Carolina are two of the most visible. Nonetheless, there has been little effort on the part of federal or local policy formulators to study or institute these arrangements systematically, even as the firm, as the basic unit of economic activity, is increasingly supplemented by networks of organizations as generators of competitive advantage (Etzkowitz, 1990; 1995). Thus, policy should not only address the network as the level of analysis, as opposed to the firm or unitary decision maker, but should develop strategies for linking the competencies and competitive advantages of periphery and core organizations.

Acknowledgement

I thank Bruce Carruthers, Mark Granovetter, Frank Romo, Michael Sacks, and Michael Schwartz for helpful comments on an earlier draft of this paper. Grants from the National Science Foundation (SES-9200960), Sigma Xi, and the Institute for Social Analysis (ISA) at the State University of New York at Stony Brook supported this research. An earlier version of this paper won the 1993 Society for Socio-economics Best Paper Prize.

References

Baker, Wayne E. 1984. "The Social Structure of a National Securities Market." American Journal of Sociology, 89:775-811.
Baker, Wayne E. 1990. "Market Networks and Corporate Behavior." American Journal of Sociology, 96:589-625.

Barnett, William P. 1990. "The Organizational Ecology of a Technological System." Administrative Science Quarterly, 35:31-60.
Best, Michael. 1990. The New Competition: Institutions of Industrial Restructuring. MA: Harvard.
Bluestone, Barry and Harrison, Bennett. 1982. The Deindustrialization of America: Plant Closings, Community Abandonment, and the Dismantling of Basic Industry. New York: Basic Books.
Burt, Ronald S. 1992. Structural Holes: The Social Structure of Competition. Cambridge, MA: Harvard University Press.
Cameron, Kim S., Whetten, David A., and Kim, Myung U. 1987. "Organizational Dysfunctions of Decline." Academy of Management Journal, 30:126-138.
Cusumano, Michael A. and Takeishi, Akira. 1991. "Supplier Relations and Management: A Survey of Japanese, Japanese-Transplant, and US Auto Plants." Strategic Management Journal, 12:563-588.
D'Aveni, Richard A. 1989. "The Aftermath of Organizational Decline: A Longitudinal Study of the Strategic and Managerial Characteristics of Declining Firms." Academy of Management Journal, 32:577-605.
D'Aveni, Richard A. and MacMillan, Ian. 1990. "Crisis and the Content of Managerial Communications: A Study of the Focus of Top Managers in Surviving and Failing Firms." Administrative Science Quarterly, 35:634-657.
Davis, Gerald F. 1991. "Agents Without Principles? The Spread of the Poison Pill Through the Intercorporate Network." Administrative Science Quarterly, 36:583-613.
Davis-Blake, Alison and Uzzi, Brian. 1993. "Employment Externalization: The Case of Temporary Workers and Independent Contractors." Administrative Science Quarterly, 38:195-223.
DiMaggio, Paul and Powell, Walter W. 1983. "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields." American Sociological Review, 48:147-160.
Dore, Ronald. 1983. "Goodwill and the Spirit of Market Capitalism." British Journal of Sociology, 34:459-482.

Elbaum, Bernard and Lazonick, William. 1986. The Decline of the British Economy. NY: Oxford.
Etzkowitz, Henry. 1990. "The Capitalization of Knowledge: The Decentralization of United States Industrial and Science Policy from Washington to the States." Theory and Society, 19:107-121.
Etzkowitz, Henry. 1993. "Enterprises from Science: The Origins of Science-Based Regional Economic Development." Minerva, Fall.
Etzkowitz, Henry. 1995. "The Triple Helix: Academy-Industry-Government Relations: Implications for the New York Regional Innovation Environment." New York Academy of Sciences and Federal Reserve Bank of New York. November.
Florida, Richard and Kenney, Martin. 1991. "Transplanted Organizations: The Transfer of Japanese Organization to the US." American Sociological Review, 56:381-398.
Friedland, Roger and Palmer, Donald. 1984. "Park Place and Main Street: Business and the Urban Power Structure." Pp. 393-416 in Annual Review of Sociology, vol. 10.
Gerlach, Michael L. 1992. Alliance Capitalism: The Social Organization of Japanese Business. Berkeley: UC Press.
Granovetter, Mark S. 1973. "The Strength of Weak Ties." American Journal of Sociology, 78:1360-1380.
Granovetter, Mark S. 1985. "Economic Action and Social Structure: The Problem of Embeddedness." American Journal of Sociology, 91:481-510.
Hall, Richard H. and Quinn, Robert E. 1983. "Is There a Connection Between Organization Theory and Public Policy?" Pp. 7-20 in Organizational Theory and Public Policy, edited by Richard H. Hall and Robert E. Quinn. CA: Sage.
Hambrick, Donald C. and D'Aveni, Richard A. 1988. "Large Corporate Failures as Downward Spirals." Administrative Science Quarterly, 33:1-23.
Hannan, Michael T. and Freeman, John. 1989. Organizational Ecology. MA: Harvard.
Harrison, Bennett. 1992. "Industrial Districts: Old Wine in New Bottles?" Regional Studies, 26.5:469-483.
Harrison, Bennett. 1994. Lean and Mean: The Changing Landscape of Corporate Power in the Age of Flexibility. NY: Basic.

Harrison, Bennett and Bluestone, Barry. 1988. The Great U-Turn: Corporate Restructuring and the Polarization of America. NY: Basic Books.
Haunschild, Pamela R. 1993. "Interorganizational Imitation: The Impact of Interlocks on Corporate Acquisition Activity." Administrative Science Quarterly, 38:564-592.
Heckman, John S. 1980. "The Product Life Cycle of New England Textiles." Quarterly Journal of Economics, 94:697-717.
Helper, Susan. 1991. "Strategy and Irreversibility in Supplier Relations: The Case of the US Automobile Industry." Business History Review, 65:781-824.
Helper, Susan and Levine, David I. 1992. "Long-term Supplier Relations and Product-Market Structure." Journal of Law, Economics, and Organization, 8:561-581.
Henderson, Robert A. 1980. "An Analysis of Closures Amongst Scottish Manufacturing Plants Between 1966 and 1975." Scottish Journal of Political Economy, 27:152-174.
Hicks, D. 1980. "Urban America in the Eighties, Perspectives and Prospects. Report of the Panel on Policies and Prospects for Metropolitan and Non-metropolitan America." President's Commission for a National Agenda for the Eighties. New Brunswick, NJ: Transaction.
Hirschman, Albert O. 1970. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA: Harvard University Press.
Kelley, Maryellen and Harrison, Bennett. 1990. "The Subcontracting Behavior of Single vs. Multi-plant Enterprises in US Manufacturing: Implications for Economic Development." World Development, 18:1273-1294.
Larson, Andrea. 1992. "Network Dyads in Entrepreneurial Settings: A Study of the Governance of Exchange Processes." Administrative Science Quarterly, 37:76-104.
Lazerson, Mark. 1988. "Organizational Growth of Small Firms: An Outcome of Markets and Hierarchies?" American Sociological Review, 53:330-342.
Lazerson, Mark. 1995. "A New Phoenix: Modern Putting-Out in the Modena Knitwear Industry." Administrative Science Quarterly, 40:34-59.

Lawrence, Robert Z. 1984. Can America Compete? Washington, DC: Brookings Institution.
Lazonick, William. 1991. Business Organization and the Myth of the Market Economy. New York: Cambridge.
Lincoln, James R., Gerlach, Michael L. and Takahashi, Peggy. 1992. "Keiretsu Networks in Japan: A Dyad Analysis of Intercorporate Ties." American Sociological Review, 57:561-585.
Marsden, Peter V. 1981. "Introducing Influence Processes into a System of Collective Decisions." American Journal of Sociology, 86:1203-1235.
McKenzie, Richard B. 1979. Restrictions on Business Mobility. Washington, DC: American Enterprise Institute.
McKenzie, Richard B. 1982. Plant Closings: Public or Private Choices. Washington, DC: Cato Institute.
McKenzie, Richard B. 1984. Fugitive Industry: The Economics and Politics of Deindustrialization. SF: Jossey-Bass.
McLean, Paul D. and Padgett, John F. 1996. "Was Florence a Perfectly Competitive Market?: Transactional Evidence from the Renaissance." Theory and Society.
McPherson, J. Miller, Popielarz, Pamela P. and Drobnic, Sonja. 1992. "Social Networks and Organizational Dynamics." American Sociological Review, 57:153-170.
Mintz, Beth and Schwartz, Michael. 1985. The Power Structure of American Business. Chicago: University of Chicago Press.
Moorman, Christine, Zaltman, Gerald and Deshpande, Rohit. 1992. "Relationships Between Providers and Users of Market Research: The Dynamics of Trust Within and Between Organizations." Journal of Marketing Research, XXIX:314-328.
Norton, R.D. 1986. "Industrial Policy and American Renewal." Journal of Economic Literature, 24:1-40.
Ocasio, William. 1995. "The Enactment of Economic Adversity: A Reconciliation of Theories of Failure-induced Change and Threat-rigidity." Research in Organizational Behavior, 17:287-331.

Pfeffer, Jeffrey and Salancik, Gerald R. 1978. The External Control of Organizations: A Resource Dependence Perspective. New York: Harper and Row.
Piore, Michael J. and Sabel, Charles F. 1984. The Second Industrial Divide: Possibilities for Prosperity. New York: Basic Books.
Portes, Alejandro and Sensenbrenner, Julia. 1993. "Embeddedness and Immigration: Notes on the Social Determinants of Economic Action." American Journal of Sociology, 98:1320-1350.
Powell, Walter W. 1990. "Neither Market Nor Hierarchy: Network Forms of Organization." Pp. 295-336 in Research in Organizational Behavior, edited by Barry M. Staw and L.L. Cummings. Greenwich, CT: JAI Press.
Prietula, Michael J. and Simon, Herbert. 1989. "The Experts in Your Midst." Harvard Business Review, January-February:120-124.
Romo, Frank P., Korman, Hyman, Brantley, Peter, and Schwartz, Michael. 1988. "The Rise and Fall of Regional Political Economies: A Theory of the Core." Pp. 37-64 in Research in Politics and Society: Deindustrialization and the Economic Restructuring of American Business, vol. 3, edited by Michael Wallace and J. Rothschild. Greenwich, CT: JAI Press.
Romo, Frank P. and Schwartz, Michael. 1995. "The Structural Embeddedness of Business Decisions to Migrate." American Sociological Review, 60:874-907.
Saxenian, Anna Lee. 1994. Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Cambridge: Harvard University Press.
Schlesinger, Emil. 1951. "The Outside System of Production in the Women's Garment Industry in the New York Market." Archives of the International Ladies Garment Workers' Union.
Sen, Amartya. 1985. "Goals, Commitment, and Identity." Journal of Law, Economics, and Organization, 1:341-355.
Singh, Jitendra V., House, Robert J. and Tucker, David. 1986. "Organizational Change and Organizational Mortality." Administrative Science Quarterly, 31:587-611.
Smelser, Neil and Swedberg, Richard. 1994. The Handbook of Economic Sociology. NJ: Princeton.

Smitka, Michael. 1991. Competitive Ties: Subcontracting in the Japanese Automotive Industry. New York: Columbia University Press.
Staw, Barry, Sandelands, Lance E., and Dutton, Jane E. 1981. "Threat-Rigidity Effects in Organizational Behavior: A Multilevel Analysis." Administrative Science Quarterly, 26:501-524.
Sutton, Robert I. 1990. "Organizational Decline Processes." Pp. 205-253 in Research in Organizational Behavior, edited by Barry Staw and L.L. Cummings. Greenwich, CT: JAI.
Trevor, Malcolm and Christie, Ian. 1988. Manufacturers and Suppliers in Britain and Japan: Competitiveness and the Growth of Small Firms. London: Policy Studies Institute.
Uzzi, Brian. 1996. "The Sources and Consequences of Embeddedness for the Economic Performance of Organizations: The Network Effect." American Sociological Review, 61:674-698.
Uzzi, Brian. 1997. "Comments on the Sociology of Strategy." In Jane Dutton and Joel Baum (eds.), Advances in Strategic Management. CT: JAI.
Waldinger, Roger D. 1986. Through the Eye of the Needle: Immigrants and Enterprise in New York's Garment Trades. New York: NYU Press.
Weick, Karl. 1979. The Social Psychology of Organizing. 2nd edition. New York: Random House.
Weitzel, William and Jonsson, Ellen. 1989. "Decline in Organizations: A Literature Integration and Extension." Administrative Science Quarterly, 34:91-109.
Werner International. 1989. "Commentary on the Hourly Labor Costs in the Primary Textile Industry." New York: Werner International Management Consultants.
Williamson, Oliver E. 1985. The Economic Institutions of Capitalism. NY: Free Press.

EXPLAINING THE ATTACKER'S ADVANTAGE: TECHNOLOGICAL PARADIGMS, ORGANIZATIONAL DYNAMICS, AND THE VALUE NETWORK$

Clayton M. Christensen and Richard S. Rosenbloom$$

$ This chapter is a reprint of the article "Explaining the attacker's advantage: technological paradigms, organizational dynamics, and the value network" published in Research Policy, Volume 24, Issue 2 (1995).
$$ The authors gratefully acknowledge the financial support of the Harvard Business School Division of Research.

Collaboration and Competition in Business Ecosystems
Advances in Strategic Management, Volume 30, 389–429
Copyright © 2013 Elsevier B.V. All rights of reproduction in any form reserved
ISSN: 0742-3322/doi:10.1108/S0742-3322(2013)0000030015

ABSTRACT

Understanding when entrants might have an advantage over an industry's incumbent firms in developing and adopting new technologies is a question that several scholars have sought to explain in terms of technological capabilities or organizational dynamics. This paper proposes that the value
network—the context within which a firm competes and solves customers' problems—is an important factor affecting whether incumbent or entrant firms will most successfully innovate. In a study of technology development in the disk drive industry, the authors found that incumbents led the industry in developing and adopting new technologies of every sort identified by earlier scholars—at component and architectural levels; competency-enhancing and competency-destroying; incremental and radical—as long as the technology addressed customers' needs within the value network in which the incumbents competed. Entrants led in developing and adopting technologies which addressed user needs in different, emerging value networks. It is in these innovations, which disrupted established trajectories of technological progress in established markets, that attackers proved to have an advantage. The rate of improvement in product performance which technologists provide may exceed the rate of improvement demanded in established markets. This mismatch between trajectories enables firms entering emerging value networks subsequently to attack the industry's established markets as well.

Keywords: attacker's advantage; value network; technological paradigms; organizational dynamics

Introduction

From the earliest studies of innovation, scholars exploring factors influencing the rate and direction of technological change have sought to distinguish between innovations launching new directions in technology—'radical' change—and those making progress along established paths—often called 'incremental' innovations. In an empirical study of a series of novel processes in petroleum refining, for example, John Enos [19, p. 299] found that half of the economic benefits of new technology came from process improvements introduced after a new technology was commercially established. Continuing this pattern, and borrowing Thomas Kuhn's [26] notion of scientific 'paradigms,' Giovanni Dosi [17, p. 147] distinguished between 'normal' modes of technological progress—which propel a product's progress along a defined, established path—and the introduction of new 'technological paradigms'. Dosi characterizes a technological paradigm as a 'pattern of solution of selected technological problems, based on selected principles derived from natural sciences and on selected material technologies' (p. 152). New paradigms represent discontinuities in trajectories of progress which were defined within earlier paradigms. They
tend to redefine the very meaning of 'progress', and point technologists toward new classes of problems as the targets of ensuing 'normal' technology development.

The question examined by Dosi—how new technologies are selected and retained—is closely related to the question of why firms succeed or fail as beneficiaries of such changes. Chandler [7, p. 79] has shown that, in a variety of industries, leading firms have prospered for extended periods by exploiting a series of incremental technological innovations built upon their established organizational and technical capabilities. When challenged by radically different technologies, however, dominant incumbents frequently lag behind aggressive entrants, sometimes with fatal consequences for their established businesses. Students of innovation have long sought to understand the circumstances that will determine the outcomes under such conditions [13]. Richard Foster [20] argues that there is an 'attacker's advantage' in bringing new technologies to market, which incumbents must act to offset. To explain that advantage, most studies have focused on two sets of factors: (1) the characteristics or magnitude of the technological change relative to the capabilities of incumbent and entrant firms, and (2) the managerial processes and organizational dynamics through which entrant and incumbent firms respond to such changes.

We undertake to expand those explanations by arguing that the success and failure of entrants and incumbents with respect to strategic technological innovations is largely shaped by three interlocking sets of forces. To the two identified above we add a third, which we call the value network—the context within which the firm identifies and responds to customers' needs, procures inputs and reacts to competitors. Building on Christensen's [8] notion of a nested system of product architectures, we argue that the firm's competitive strategy, and particularly its past choices of markets to serve, determines its perceptions of the economic value of a new technology, which in turn shape the rewards it will expect to obtain through innovation.

The first two sections of the paper review the two perspectives on innovation mentioned above, and the third presents the concept of nested systems and the value network. In the fourth section, we develop these ideas further by analyzing the series of technological innovations that have underpinned frequent and substantial changes in the market position of leading firms throughout the history of the disk drive industry. The final section summarizes the paper and describes the sorts of innovation in which incumbents or attackers might be expected to enjoy an advantage.

Studies focused on the characteristics of a technology in relation to technological capabilities

Upon the emergence of some new technological paradigms, the inability of incumbent practitioners of the prior technology to acquire the capabilities required to compete within the new paradigm is a clear cause of some incumbents' decline. For example, cotton spinners simply lacked the financial, human and technical resources required to compete in synthetic fibers when that radically different technology was brought into the apparel industry by DuPont. Tushman and Anderson [40, p. 439] label such innovations 'competence-destroying', because they destroy the value of the competencies an organization has developed.

The relationship between firms' capabilities and different types of technological change has been clarified by Henderson and Clark [24, p. 9]. They note that the core technologies upon which products are generally built are manifest in the components used in a product. Differences between analog and digital circuitry, optical and magnetic recording, and autos powered by electric motors instead of internal combustion engines are reflected in fundamentally different technological concepts embodied in componentry. A product's design architecture defines the patterns through which components interact with each other. For example, both front-wheel-drive cars and rear-wheel-drive cars employ similar component technologies, but the components interact within the two automobile architectures in quite different ways.

Henderson and Clark proposed a four-fold classification of innovations, shown in Fig. 1, according to the degree to which innovations reinforce or diminish the value of a firm's expertise in two respects—componentry and architecture.

                                  Impact on Core Technological Concepts in Componentry
                                  Reinforced                  Changed

Impact on Design Architecture
(the way components interact
within the design)
   Reinforced                     Incremental Innovation      Modular Innovation
   Changed                        Architectural Innovation    Radical Innovation

Fig. 1. Types of technological innovation identified by Henderson and Clark [24].
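As a reading aid, the 2x2 in Fig. 1 reduces to a lookup over two yes/no judgments. The sketch below is added here for illustration; the function name and encoding are choices of this rendering, not of the original paper.

```python
# A minimal sketch of the Henderson-Clark classification in Fig. 1.

def classify_innovation(core_concepts_changed: bool, architecture_changed: bool) -> str:
    """Map Henderson and Clark's two dimensions onto the four innovation types."""
    if core_concepts_changed and architecture_changed:
        return "radical"
    if core_concepts_changed:
        return "modular"        # new component technology, unchanged architecture
    if architecture_changed:
        return "architectural"  # familiar components, reconfigured linkages
    return "incremental"

# Examples from the text: an antilock braking system added to an existing car
# design is modular; optical-fiber communications cables are radical.
print(classify_innovation(core_concepts_changed=True, architecture_changed=False))  # modular
print(classify_innovation(core_concepts_changed=True, architecture_changed=True))   # radical
```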

Incremental changes build upon and reinforce the producer's expertise in both product architecture and component technologies. Modular innovation denotes the introduction of new component technology inserted within an essentially unchanged product architecture, as when an antilock braking system is added to an automobile design. Architectural innovation alters the ways that components work together. In the fourth category, radical innovation, a new core technology—for example, using optical fiber instead of metal for communications cables—leads to significant changes in both components and architecture.

The use of a given set of core technologies in a given architecture can be said to constitute the technological paradigm for the class of products. The cost and capabilities of products within a given technological paradigm evolve along a certain trajectory of improvement, generally building upon prior innovations. Within an established paradigm, innovation may alter either the particular materials and components employed, or the detailed design thereof. The higher in the design hierarchy these changes occur, the more trenchant the consequences [11, p. 235]. A shift to a new technological regime throughout the system (e.g. from electro-mechanical to electronic cash registers) is more profound than a similar shift in a single component (e.g. from LED to liquid crystal display) because new capabilities are required, and the value of established ones may be diminished or eliminated [40, p. 439].

These concepts suggest an ordering of the difficulty of technological change for individual incumbent and entrant firms. The early stages of the emergence of a new technological paradigm are characterized by diverse technical approaches and fluid designs, but once a dominant design has been established [1, p. 40], incumbent firms strengthen their positions by pursuing normal modes of innovation. Henderson and Clark [24, p. 9] predict that in this process, incumbent firms' abilities to develop and employ new component technologies and refine established product architectures will be strengthened and refined, but that their abilities to create novel product architectures will atrophy. Abernathy and Utterback [1] propose that the focus of incumbents' innovative efforts will shift from product to process innovations. Such evolution to normal modes of innovation leaves successful incumbent firms ill-equipped to succeed when newer technological paradigms emerge—thus giving certain attackers an advantage.

Foster [20] characterized technological change in relation to the trajectory over time of salient attributes (performance, cost, etc.) rather than in terms of technological hierarchies and architectures. He argued that performance,
mapped in relation to cumulative engineering effort, follows an S-shaped path as initial exponential improvement encounters diminishing returns. In the terms used above, this improvement path is driven by modular and incremental innovation—generally the result of learning by both producers [25] and users [34]. Foster notes that new paradigms, drawing on different core technologies or employing innovative architectures, may then emerge to challenge established techniques. When viewed in terms of the preferences of established markets, these challenging technologies often display inferior characteristics, and therefore find their earliest application in new or remote market segments where preferences are more closely aligned with the capabilities of the new technology. As normal advances are made in the new technology in its initial market, the new paradigm may return to overtake and surpass established paradigms in the original market as well [9]. (A stylized numerical sketch of such intersecting trajectories appears at the end of this section.)

The new competencies intrinsically required by new technological paradigms clearly provide part of the explanation for why once-successful firms may fail at such technological transitions. There are, however, innovative phenomena for which a technology-centered perspective cannot account—phenomena in which the experiences of leading incumbent firms facing the same technological transition have been shown to be very different. For example, the advent of radial tires brought opportunity to some tire-makers and disaster to others [15], and when electronics transformed office information equipment, Burroughs prospered, National Cash Register struggled and eventually triumphed, and Addressograph was destroyed [36]. In these and many other examples, some incumbent firms were able to muster the resources and skills to develop competitive capability in the new technology in question, while competing firms were unable to do so. Attempts to explain these phenomena have elicited a second line of research, in which another set of scholars have used the dynamics and culture of the organization as their root-cause explanatory variable. Rather than asking what types of technological changes are most difficult for incumbent firms to manage, these scholars take a given technology and examine the organizational dynamics which may explain why some incumbent firms successfully develop the capabilities required in the new technology, while other, similar organizations seem unable to do so.
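The intersecting trajectories referred to above can be made concrete with a stylized numerical sketch. The logistic form and every parameter below are assumptions chosen only to reproduce the qualitative pattern: a challenger that starts later and lower, but rides a curve with a higher ceiling and eventually overtakes the incumbent paradigm.

```python
# A minimal sketch of Foster's S-curves, assuming logistic improvement in
# performance as a function of cumulative engineering effort.
import math

def s_curve(effort: float, ceiling: float, midpoint: float, steepness: float) -> float:
    """Logistic trajectory: slow start, rapid rise, diminishing returns."""
    return ceiling / (1.0 + math.exp(-steepness * (effort - midpoint)))

for effort in range(0, 101, 20):
    incumbent = s_curve(effort, ceiling=100, midpoint=30, steepness=0.15)
    challenger = s_curve(effort, ceiling=250, midpoint=70, steepness=0.15)
    print(f"effort {effort:3d}: incumbent {incumbent:6.1f}  challenger {challenger:6.1f}")
# In this parameterization the challenger's performance crosses the
# incumbent's between effort levels of 60 and 80.
```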

Studies of the organizational dynamics of technological innovation

Clark [12] and Henderson and Clark [24, p. 9] have each postulated that once a new technological paradigm has become established, an organization's attention tends to shift to the sorts of incremental and modular innovations which drive performance and cost improvement within that paradigm. Groups within the engineering organization are chartered to focus on improvements to particular components, and the pattern of interactions amongst these groups tends to mirror the way the components themselves interact within the product's architecture. The organizational structure facilitates improvements at the component level and refinements in the interaction amongst components within the architectural paradigm. Conversely, such organizations can lose their capabilities to develop new architectural technologies because their positions in maturing markets do not require such capabilities to be exercised and honed. For example, RCA and Ampex had access to capabilities that would have made them contenders in VCR manufacture, but strongly held beliefs and inappropriate organizational structures frustrated their strategic commitments to do so [38, 22]. Other detailed case studies of paradigmatic industry transitions in photolithographic equipment [23], video recording [37] and medical imaging [29] demonstrate that the structure, culture and dynamics of the incumbent's organization can modulate its engagement with a new technological paradigm.

Schein [39] argues that a work group's success in problem-solving contributes to consensus about the best approach to problem-solving. Repeated success strengthens such beliefs until it is no longer necessary to explicitly decide on the approach; the decision is made by cultural fiat. The stronger and more sustained the firm's success, the stronger these culturally embedded, 'pre-determined' decisions will become. When key choices are made by culture rather than by explicit decision, an organization's ability to respond to new technologies becomes circumscribed: it becomes difficult for insiders to perceive that such decisions are even being made—and they therefore become very difficult to alter. For example, Henderson [23] found that engineers of photolithographic aligners steeped in one particular architectural technology perspective were not even able to see what was different about a superior competitive machine when they examined it. Also, in a study of development projects in a single firm, Maidique and Zirger [27] found that technical teams tended to apply historically successful approaches to new problems until they failed badly. Failure forced reappraisal and the development of new approaches which, when successful, became incorporated in the culture.

Another factor affecting an organization's perceptions of the returns obtainable through different sorts of innovations is its economic structure,
which becomes shaped and hardened through its competitive experience. This is often reflected in its patterns of integration [21, 33]. For example, for more than half a century, National Cash Register (NCR) produced a wide variety of electromechanical cash registers and accounting machines—some models containing as many as 10,000 parts—at its huge Dayton, Ohio, headquarters complex. To assure adequate, cost-effective and timely supply of necessary components, NCR created an extensive vertically integrated manufacturing organization. It took these products to market through a direct sales force, and supported its vast customer base with a highly effective field service organization. When emerging microelectronics technologies rendered NCR's mechanical calculating technologies obsolete, acquiring the requisite electrical engineering expertise was the simplest of the barriers to innovation NCR faced: engineers could be hired. Much more difficult were the tasks of dismantling large and powerful organizational units which had once been keys to NCR's competitive success, and of terminating old patterns of organizational interaction and communication which had once been effective and efficient, and forging new patterns in their stead [36].

These phenomena support the widespread observation that technical progress is largely path-dependent—that established firms are more likely to 'search in zones that are closely related to their existing skills and technologies' [30]. It is not just that a firm's technological expertise is shaped through its experience. The technological capabilities its engineers are able to perceive, pursue, and develop can be limited by its culture and its organization structure, even though the technological expertise per se may not be beyond the reach of the firm's human and financial resources.

So far, we have identified two broad classes of explanations for why attackers may hold the upper hand at points of paradigmatic technological change. Both relate to capabilities. At the first level, the nature or sheer magnitude of a new technology may make it impossible for incumbents to succeed. At a deeper level, we see that the structure and dynamics of an organization may facilitate or impede a firm's efforts to overcome the technological barriers which scholars using the first perspective cite as the driver of incumbents' fortunes.

Christensen's [8] research into the history of technological innovations in the rigid disk drive industry, however, indicates that just as organizational dynamics can affect an organization's ability to develop the requisite technological capabilities, its position in the marketplace may profoundly
affect its organizational dynamics—which in turn drive the sorts of technologies a firm can and cannot develop successfully. This research suggests that successful incumbents' engagements in the marketplace, and the influence of those engagements in creating informational asymmetries, may determine their relative willingness to make strategic commitments to the development and commercialization of new technology. This mechanism is described conceptually in the following section. The history of the disk drive industry, in which the following framework is grounded, is then summarized.

Nested hierarchies and value networks

The viewpoint that differences in firms' market positions drive differences in how they assess the economics of alternative technological investments is rooted in the notion that products are systems comprised of components which relate to each other in a designed architecture. This is an established concept in studies of innovation [28, 2]. It is important to note, however, that each component can also be viewed as a system, comprising subcomponents whose relationships to each other are also defined by a design architecture. Furthermore, the end-product may also be viewed as a component within a system-of-use, relating to other components within an architecture defined by the user. In other words, products which at one level can be viewed as complex architected systems act as components in systems at a higher level.

Viewed in these terms, a given system-of-use comprises a hierarchically nested set of constituent systems and components. This is illustrated in Fig. 2 by the example of a typical management information system (MIS) for a large organization. The architecture of the MIS ties together various 'components'—a mainframe computer; peripheral equipment such as line printers, tape and disk drives; software; a large, air-conditioned room with cables running under a raised floor; a staff of data processing professionals whose training and language are unique. At the next level, the mainframe computer is itself an architected system, comprising components such as a central processing unit, multi-chip packages and circuit boards, RAM circuits, terminals, controllers, disk drives and other peripherals. Telescoping down still further, the disk drive is a system whose components include a motor, actuator, spindle, disks, heads and controller. In turn, the disk itself can be analyzed as a system composed of an aluminum platter, magnetic material, adhesives, abrasives, lubricants and coatings.


Fig. 2. A nested hierarchy of product architectures. [The figure depicts the architecture of a corporate management information system—MIS report design, remote terminal configuration, the physical machine-room environment, network design, data collection systems, and the careers, training and unique language of the EDP staff—within which is nested the architecture of the mainframe computer (central processing unit, RAM, ROM, cache, IC packaging technology, terminals, controllers, line printers, card readers, back-up tape storage, cabling, cooling system, power dissipation, proprietary and commercially purchased software, operating system); within that, the architecture of the disk drive (motor, actuator, spindle design, read-write heads, disks, recording codes, servo system, interface technology, physical size and weight constraints, service and repair requirements); and within that, the architecture of the disk itself (platter material, platter lapping techniques, magnetic media, adhesives, application process, protective abrasives).]
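To make the nesting concrete, the sketch below models the idea in Fig. 2 as a recursive data structure: a product is a system of components, and any component may itself be an architected system. It is purely illustrative—the names and the depth measure are shorthand for exposition, not part of the original study.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class System:
        """A product architecture: a named system built from components,
        each of which may itself be an architected system."""
        name: str
        components: List["System"] = field(default_factory=list)

        def depth(self) -> int:
            # Levels of nesting beneath this system; a bare component is 0.
            return 1 + max((c.depth() for c in self.components), default=-1)

    disk = System("disk", [System("aluminum platter"), System("magnetic coating")])
    drive = System("disk drive",
                   [System("motor"), System("actuator"), System("spindle"),
                    System("read-write heads"), disk])
    mainframe = System("mainframe computer",
                       [System("central processing unit"), System("RAM circuits"), drive])
    mis = System("corporate MIS",
                 [mainframe, System("software"), System("EDP staff"),
                  System("air-conditioned machine room")])

    print(mis.depth())  # 4: MIS > mainframe > drive > disk > platter

Read bottom-up, the same structure also sketches the commercial side of the argument: at each level of nesting there is, in a mature market, a market in which specialist firms sell that component to integrators one level up.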

Although the goods and services which constitute the system of use illustrated in Fig. 2 may all be made or provided within a single, extensively integrated corporation such as AT&T or IBM, most of these goods and services are tradeable—especially in more mature markets. This means that, while Fig. 2 is drawn to describe the nested physical architecture of a product system, it also implies the existence of a nested network of producers and markets through which the tradeable architected components at each level are made and sold to integrators at the next higher level in the system. For example, firms which are the architects and assemblers of disk drives—such as Quantum, Conner Peripherals and Maxtor—procure read-write heads from a group of firms which specialize in the manufacture of those heads, disks from a different set of disk manufacturing firms, and spin motors, actuator motors and cache circuitry from different, unique sets of firms. Firms which design and assemble computers at the next higher level may buy their integrated circuits, terminals, disk drives, IC packaging and power supplies from unique sets of firms focused upon manufacturing and supplying those particular products. We call this nested commercial system
a value network. Three illustrative value networks for computing applications are shown in Fig. 3. The top network depicts the commercial infrastructure which creates the corporate MIS system-of-use depicted in Fig. 2. The middle network depicts a portable personal computing value network, while the bottom one represents a computer-automated design/computer-automated manufacturing (CAD/CAM) value network. These depictions are drawn only to convey the concept of how networks are bounded and may differ from each other, and are not meant to represent their complete structure.

Fig. 3. Examples of three value networks: a corporate MIS system, portable personal computing, and CAD/CAM (dated approximately 1989). [For each network the figure lists representative suppliers, components and rank-ordered performance attributes: for the corporate MIS network, mainframe computers (IBM, Amdahl, Unisys) using 14-inch-class drives (Storage Technology, Control Data, IBM) valued for capacity, speed and reliability, with captive read/write head supply and particulate oxide disks; for portable personal computing, notebook computers (Zenith, Toshiba, Dell) using 2.5-inch drives (Conner, Quantum, Western Digital) valued for ruggedness, low power consumption and low profile, with metal-in-gap ferrite heads (Applied Magnetics) and thin-film disks; for CAD/CAM, engineering workstations (Sun Microsystems, Hewlett-Packard) using 5.25-inch drives (Maxtor, Micropolis) valued for capacity, speed, size and areal density, with thin-film heads (Read-Rite).]

The scope and boundaries of a value network are defined by the dominant technological paradigm and the corresponding technological trajectory [17, p. 147] employed at the higher levels of the network. As Dosi suggests, the very definition of value is a function of the dominant technological paradigm in the ultimate system of use in the value network. The metrics by which value is assessed will therefore differ across networks. Specifically, associated with each network is a unique rank-ordering of the importance of various performance attributes, a rank-ordering which differs from that employed in other value networks. As illustrated in Fig. 3, this means that parallel value networks, each built around a different technological paradigm and trajectory, may exist within the same broadly defined industry.

Note how each value network exhibits a very different rank-ordering of important product attributes, as shown at the right of the center column of component boxes. In the top-most value network, performance of disk drives is measured in terms of capacity, speed and reliability. In the portable computing value network depicted beneath it, important performance attributes are ruggedness, power consumption and physical size. Although many of the constituent architected components in these different systems-of-use carry the same labels (each network involves read-write heads, disk drives, RAM circuits, printers, software, etc.), the nature of the components used in the three networks is quite different. Generally, there is a set of competing firms, each of which has its own value chain [32], associated with each box in the network diagram. Often, the firms which supply the products and services used in each network are different, as illustrated by the listings of firms to the left of the center column of component boxes in Fig. 3.

Finally, we note that the juxtaposed depiction of value networks in Fig. 3 represents their structure at a given moment. As will be shown below, the value network is not a static structure—it can be highly dynamic. The rates of performance improvement which manufacturers of the constituent components are able to achieve may exceed the rate of improvement in
performance demanded by downstream users within a value network. This enables technologies which may initially have been confined to one value network to migrate into other networks as well. In addition, the rank-orderings of performance attributes which define the boundaries of a network may change over time.

The position of a given established producer within a value network—the pathways it is supplying through downstream markets and producers to ultimate users, and its upstream supply network—therefore influences, and even defines to a considerable degree, the nature of the incentives associated with different opportunities for technological innovation which are perceived by the firm's managers. For example, the value placed on certain attributes of an automotive engine will differ depending upon whether the engine is destined for a delivery van, a family sedan, or an Indy 500 racing car. Since value and product performance are defined differently in these instances, we suggest that each of these vehicles is associated with a unique value network.

In another example, throughout the 1970s Xerox Corporation's product and customer base was dominated by large, high-speed plain paper copiers used in large, central copying centers. It sold products direct to its customers, and supported them through an extensive, capable field service organization. Speed (pages per minute), resolution and cost per copy were among the most important of the performance attributes in that market. Technological improvements which enhanced these aspects of performance were of great value to Xerox, because they helped defend an established, profitable business. In general, few infrastructural investments were required to realize value from such innovations. Products embodying these innovations could be sold and serviced through established capabilities without having to build a different customer base. Often they could be produced in existing facilities.

On the other hand, simplicity, low machine cost, small size and relative ease of self-service are attributes which, though of great potential value in other value networks dominated by alternative technological paradigms, were accorded less worth within Xerox's value network. From an engineering standpoint, they could generally be obtained only by sacrificing along other, more important performance dimensions like speed and per-copy costs, and therefore were not viewed as improvements by Xerox's most important customers. Furthermore, they did not enhance the value of the
company's downstream investments in direct sales, service and financing. Commercializing the sorts of component and architectural technologies associated with these attributes would have entailed for Xerox the expense in time and money of creating new market applications for photocopying, and new channels of distribution. In other words, Xerox's position in its value network skewed its perceptions of return and risk associated with a marginal dollar of investment toward those technologies which addressed downstream needs in its own value network.

The position of a potential entrant to the photocopying value network could bias its perception of risks and rewards in just the opposite fashion. The costs and risks of replicating parallel and competitive capabilities throughout Xerox's large/fast copier value network made development of technologies for small copiers, and the creation of a new small-business and office-based copying value network, a much more attractive option for entrants like Canon.

In this example of two photocopying value networks, note that the boundaries of such networks may not coincide with what marketers call 'market segments'. For example, one might well regard small businesses as a different market segment than large corporations, but some of the former will buy high-performance copiers for particular needs, while the latter may buy smaller copiers for distributed use in office areas throughout their facilities. The rank-ordering of preferred attributes (the definition of what constitutes improved product performance) will differ according to the application sought by each type of buyer, thus giving rise to two distinct systems of use, and hence two value networks.

We argue that both the perceived attractiveness of a technological opportunity and the degree of difficulty a producer will encounter in exploiting it are determined, among other factors, by the firm's position in the relevant value network. As firms gain experience within a given network, they are likely to develop their capabilities, structure and cultures to 'fit' that position better by meeting that network's distinctive requirements. Manufacturing volumes, the slope of ramps to volume production, product development cycle times and organizational consensus about who the customer is, and what the customer needs, may differ substantially from one value network to the next. Competitors may therefore become progressively less well suited to compete in other networks. Their abilities and incentives to create new market applications for their technology—giving rise to new value networks—may atrophy. While successful incumbents will
become more cognizant of relevant information pertaining to the networks in which they compete, they will have greater difficulty acquiring and assessing information about others. The longer the firm has been in a given position, and the more successful it has been, the stronger these effects are likely to be. Hence it faces significant barriers to mobility [6]—barriers to those innovations whose intrinsic value is greatest within networks other than those with which it is already engaged.

These considerations provide a third dimension for analyzing technological innovation. In addition to the capabilities required by the technology itself and by the innovating organization, we argue that one should examine the innovation's implications for the relevant value network. The key consideration is whether the performance attributes implicit in the innovation will be valued within networks already served by the innovator, or whether other networks must be addressed, or new ones created, in order to realize value from the innovation.

In the case of the disk drive industry to date, one observes that most architectural innovations have imparted attributes to products which appealed to intermediate buyers and end users different from the ones which established firms were already serving. In other words, they constituted new technological paradigms, which were initially rejected within established value networks, but enabled the emergence of new ones. Component innovations, on the other hand, have generally addressed the needs of existing customers and downstream users. Established firms have historically excelled at component-level innovations, while new entrants generally succeeded at architectural innovations. We contend that the manifest strength of established firms in component innovation and their weakness in architectural innovation—and the opposite manifest strengths and weaknesses of entrant firms—are consequences not of differences in technological or organizational capabilities between incumbent and entrant firms, but of their positions in the industry's different value networks. Indeed, established disk drive manufacturers were the industry leaders in every sort of innovation—incremental, modular and architectural; competency-enhancing and competency-destroying—which addressed the needs of downstream actors in their value network, and they lagged behind the industry in developing (or often failed to develop altogether) those technologies which addressed performance needs in other value networks. Details of this history of incumbent and entrant firms' successes and failures in the face of different types of technological changes in the disk drive industry are recounted in the following section.
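The implication can be illustrated with a stylized calculation. In the sketch below, each value network is represented by a different weighting (a rank-ordering, crudely expressed as weights) over the same performance attributes; the weights and attribute levels are invented for illustration, not data from the disk drive study. The same hypothetical small-form-factor drive then looks unattractive through mainframe-network eyes and attractive through portable-computing eyes—which is the sense in which the perceived attractiveness of an innovation depends on the evaluator's position.

    # Illustrative attribute weights per value network (invented numbers).
    weights = {
        "mainframe": {"capacity": 0.6, "speed": 0.3, "ruggedness": 0.1},
        "portable":  {"capacity": 0.1, "speed": 0.2, "ruggedness": 0.7},
    }

    # A hypothetical small-form-factor drive, attribute levels normalized to 0-1:
    # low capacity, moderate speed, high ruggedness.
    small_drive = {"capacity": 0.2, "speed": 0.4, "ruggedness": 0.9}

    def perceived_value(network: str, product: dict) -> float:
        """Weighted score of a product's attributes, as seen from one network."""
        return sum(w * product[attr] for attr, w in weights[network].items())

    print(perceived_value("mainframe", small_drive))  # 0.33 -- unattractive
    print(perceived_value("portable", small_drive))   # 0.73 -- attractive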


Technological history of the rigid disk drive industry¹

Disk drives are magnetic information storage and retrieval devices used with most types of computers and a range of other products, such as high-speed digital reprographic devices and medical imaging equipment. The principal components of most disk drives are: the disk, which is a substrate coated with magnetic material formatted to store information in concentric tracks; the read-write head, which is a tiny electromagnet positioned over the spinning disk which, when energized, orients the polarity of the magnetic material on the disk immediately beneath it; a motor which drives the rotation of the disk; an actuator mechanism which positions the head precisely over the track on which data is to be read or written; and electronic circuitry and software, which control the drive's operation and enable it to communicate with the computer. These components work together within a particular product architecture. From the industry's inception there have been significant technological changes both within each component and in the architecture.

Magnetic recording and storage of digital information was pioneered with the earliest commercial computer systems, which used reels of coated mylar tape. IBM introduced the use of rigid rotating disks in 1956 and flexible ('floppy') disks in 1971. The dominant design for what are now called 'hard' drives was provided by the IBM 'Winchester' project, introduced as the Model 3340 in 1973. While IBM pioneered disk drive technology and produced drives to meet its own needs, an independent disk drive industry grew to serve two distinct markets. A few firms developed the plug-compatible market (PCM) in the 1960s, selling to IBM customers. Although most of IBM's initial rivals in the computer market were vertically integrated, the emergence in the 1970s of smaller, non-integrated computer makers spawned an OEM market for disk drives as well. By 1976 the output of rigid disk drives was valued at about $1 billion, of which captive production accounted for 50% of unit production, and the PCM and OEM segments each accounted for about 25%.

1. The following section draws upon a recent study of the industry by one of the authors (Christensen, 1992, [8]). The database rests on field-based studies of six leading disk drive manufacturers, which historically have accounted for over 70% of industry revenues, and detailed technical specifications of every disk drive model announced in the world between 1975 and 1990. Technical data come from Disk/Trend Report, Electronic Business Magazine, and manufacturers' product specification sheets.


The next dozen years tell a remarkable story of rapid growth, market turbulence, and technology-driven 'creative destruction'. The value of drives produced rose to more than $13 billion by 1989. By the mid-1980s the PCM market had become insignificant, while OEM output grew to represent two-thirds of world production. Of the 17 firms which populated the industry in 1976—all of which were relatively large, diversified corporations—fourteen had failed and exited or had been acquired by 1989. During this period an additional 124 firms entered the industry, and exactly 100 of those also failed. Some 60% of the producers remaining by 1989 had entered the industry as de novo start-ups since 1976. All this took place within the context of the established core technology, i.e. magnetic digital recording.² However, components changed, and so did architectures, and therein lies the story.

2. An alternative core technology, digital optical recording, was widely perceived as a potential substitute through the 1980s, but by 1992 had made few substantial inroads.

Successive waves of technological change permitted dramatic improvements in performance at constantly decreasing cost. The impact on the industry of innovations in componentry and in architecture was very different. In general, component innovations, such as thin-film heads, embedded servo systems and run-length-limited (RLL) recording codes, were developed and introduced by well-established incumbents. Component innovations sustained established trajectories of performance improvement within each architecture—an annual increase in capacity per drive which often approached 50%. Several waves of architectural innovation swept through the industry in this period, usually introduced by entrant firms. In contrast to the role played by component innovation, these new architectures often disrupted established trajectories of performance improvement, and redefined the parameters along which performance was assessed. For example, in the architecture used for portable computers, size, weight, ruggedness and power consumption were all important attributes of performance. None of these attributes was critical in the architectures used in mainframes or minicomputers. These parallel streams of component and architectural innovation were symmetrical in one respect: new components were first introduced in the context of established architectures, while new architectures generally embodied proven componentry.

Both types of innovation were important to the growth and development of the industry in the 1980s, but it is only in architectural innovation—and a particular class of architectural innovation at that—that attackers proved to have any advantage. Entrants rarely tried to pioneer innovative components, and most that did failed. In contrast, the predominant pattern for new architectures was for the innovation—and subsequent market leadership in the next generation—to belong to an entrant firm.

Until the late 1970s, 14-inch drives were the only rigid drive architectures available, and nearly all were sold to mainframe computer manufacturers. The hard disk capacity provided in the median-priced, typically configured mainframe computer system in 1976 was about 170 megabytes (Mb) per computer. The hard disk capacity supplied with the typical mainframe increased at a 15% annual rate over the next 15 years. At the same time, the capacity of the average 14-inch drive introduced for sale each year increased at a faster 22% rate, reaching beyond the mainframe market to the large scientific and supercomputer markets. This is shown in Fig. 4.³

Fig. 4. A comparison of the trajectories of disk capacity demanded per computer, vs. capacity provided in each architecture. [The figure plots hard disk capacity (Mb, on a log scale) against the years 1976-1990. Solid lines emanating from points A through E trace the capacity demanded in mainframes, minicomputers, desktop PCs, portable PCs and notebook PCs; dotted lines trace the average capacity of the new 14-inch, 8-inch, 5.25-inch and 3.5-inch drives introduced each year.]

3. A summary of the data and procedures used to generate Fig. 4 is included in the Appendix.

Between 1978 and 1980, several entrant firms—Shugart Associates, Micropolis, Priam and Quantum—developed new architectural families of 8-inch drives with 10, 20, 30 and 40 Mb capacity. These drives were of no interest to mainframe computer manufacturers, who at that time were demanding drives with 300-400 Mb capacity. These 8-inch entrants therefore sold their small, low-capacity drives into a new application—minicomputers.⁴ The customers—Wang, DEC, Data General, Prime and Hewlett-Packard—were not the firms which manufactured mainframes, and their customers often used software which was substantially different from programs used by mainframe computer users. In other words, 8-inch drives found their way into a different value network, leading to a different system-of-use. Although initially the cost per megabyte of capacity of 8-inch drives was higher than that of 14-inch products, these new customers were willing to pay a premium for other attributes of the 8-inch drive which were important to them—especially its smaller size. This attribute had little value to mainframe users.

4. The minicomputer market was not new in 1978, but it was a new application for Winchester-technology disk drives.

Once the use of 8-inch drives became established in minicomputers, the hard disk capacity shipped with the median-priced minicomputer grew about 25% per year—a trajectory driven by the ways in which minicomputers came to be used in their value networks. At the same time, however, the 8-inch drivemakers found they could increase the capacity of their products at over 40% per year—nearly double the rate demanded by their original 'home' minicomputer market. In consequence, by the mid-1980s, 8-inch drive makers were able to provide the capacities required for lower-end mainframe computers, and by that point, unit volumes had grown significantly so that the cost per megabyte of 8-inch drives had declined below that of 14-inch products. Other advantages of 8-inch drives also became apparent. For example, the same percentage mechanical vibration
in an 8-inch drive caused the head to vary its absolute position over the disk much less than it would in a 14-inch product. Within a 3-4 year period, therefore, 8-inch drives began to invade an adjacent, established value network, substituting for 14-inch drives in the lower-end mainframe computer market.

When 8-inch products began to penetrate the mainframe computer market, most of the established manufacturers of 14-inch drives began to fail. Two-thirds of these manufacturers never introduced an 8-inch model. The one-third of the 14-inch drive manufacturers which did introduce 8-inch drives did so with about a 2-year lag behind the 8-inch entrant manufacturers. Interviews with industry participants and analysis of product data suggest that destruction of engineering capabilities by the new architecture, as posited by Henderson and Clark [24, p. 9] and Tushman and Anderson [40, p. 439], does not explain the failure of the established producers. Table 1, for example, shows that the population of 8-inch models introduced by the established firms in 1981 possessed performance attributes which, on average, were nearly identical to the average of those introduced that year by the entrant firms. In addition, the rates of improvement (measured between 1979 and 1983) in those attributes were stunningly similar between established and entrant firms.⁵ This evidence supports the view that the 14-inch drive manufacturers were fully capable of producing the new architecture; their failure resulted from delay in making the necessary strategic commitments. By 1981 the entrants had already created barriers to entry around their new value network, and were surmounting the barriers which had protected the old one.

Table 1. A comparison of the average attributes of 8-inch drives introduced in 1981 by established vs. entrant firms.

                                   Level of performance         Annual rate of performance improvement (%)
Attributes                         Established    Entrant       Established    Entrant
                                   firms          firms         firms          firms
Capacity (Mb)                      19.2           19.1          61.2           57.4
Areal density (Mb/sq. in.)         3.213          3.104         35.5           36.7
Access time (milliseconds)         46.1           51.6          8.1            9.1
Price per megabyte                 $143.03        $147.73       58.8           61.9

Source: Analysis of Disk/Trend Report data, from Christensen (1992, [8]).

5. This result is very different from that observed by Henderson (1988, [23]), where the new-architecture aligners produced by the established manufacturers were inferior on a performance basis to those produced by the entrant firms. One possible reason for these different results is that the successful entrants in the photolithographic aligner industry which Henderson studied brought with them a well-developed body of technological knowledge and experience which had been developed and refined in other markets. In the case studied here, none of the entrants brought such well-developed knowledge with them. Most, in fact, were de novo start-ups comprised of managers and engineers who had defected from established drive manufacturing firms.

What explains the incumbents' strategic lag? Interviews with marketing and engineering executives close to these companies suggest that the established 14-inch drive manufacturers were held captive by customers within their value network. Mainframe computer manufacturers did not need an 8-inch
drive. In fact, they explicitly did not want it—wanting instead drives with increased capacity at a lower cost per megabyte. The 14-inch drive manufacturers were listening and responding to their established customers, and their customers—in a way that was not apparent either to the disk drive manufacturers or their customers—were pulling them along a trajectory (22% capacity growth in a 14-inch platform) which would ultimately prove fatal.

This finding is similar to the phenomenon observed by Bower (1970, [3], p. 254), who saw that explicit customer demands have tremendous power as a source of impetus in the resource allocation process: 'When the discrepancy (the problem to be solved by a proposed investment) was defined in terms of cost and quality, the projects languished. In all four cases, the definition process moved toward completion when capacity to meet sales was perceived to be inadequate. ... In short, pressure from the market reduces both the probability and the cost of being wrong.' Although Bower's specific reference is to manufacturing capacity, we believe that we observed the same fundamental phenomenon which he saw: the power of the known needs of known customers in marshaling and directing the investments of a firm.

Figure 4 also maps the disparate trajectories of performance improvement demanded in the subsequent, sequentially emerging computer product categories which defined market segments for the disk drive suppliers, versus the performance made available within each successive architecture by changes in component technology and refinements in system design. Again, the solid lines emanating from points A, B, C, D and E measure the disk drive capacity provided with the median-priced computer in each
category, while the dotted lines emanating from the same points measure the average capacity of all disk drives introduced for sale in each architecture in each year. Brief accounts of these transitions are presented below.

The advent of 5.25-inch drives

In 1980, Seagate Technology introduced the next architectural generation, 5.25-inch drives, as shown in Fig. 4. Their capacities of 5 and 10 Mb were of no interest to minicomputer manufacturers, who were demanding drives of 40 and 60 Mb from their suppliers. Seagate and other firms which entered with 5.25-inch drives in the 1980-1983 period (such as Miniscribe, Computer Memories and International Memories) had to pioneer new applications for their products—primarily desktop personal computers. Once the use of hard drives was established in the desktop PC application, the disk capacity shipped with the median-priced desktop PC increased about 25% per year. Again, the technology improved at nearly twice the rate demanded in the new market—the capacity of new 5.25-inch drives increased about 50% per year between 1980 and 1990. As in the 8-inch-for-14-inch substitution, the first firms to produce 5.25-inch drives were entrants; on average, the established firms lagged the entrants by 2 years. By 1985, 50% of the firms which had produced 8-inch drives had introduced 5.25-inch models. The other 50% never made the transition.

Growth in 5.25-inch drives occurred in two waves. The first was in the establishment of a new application for rigid disk drives—desktop computing—where product attributes which had been relatively unimportant in established applications were highly valued. The second wave was in substituting for the use of larger drives in established minicomputer and mainframe computer markets, as the rapidly increasing capacity of 5.25-inch drives intersected the more slowly growing trajectories of capacity demanded in these markets. Of the four leading 8-inch drivemakers listed above, only Micropolis survived to become a significant manufacturer of 5.25-inch drives, and that was accomplished only with Herculean managerial effort.

The pattern is repeated: the emergence of the 3.5-inch drive

The 3.5-inch drive was first developed in 1984 by Rodime, a Scottish entrant. Sales of this architecture were not significant, however, until Conner Peripherals, a Seagate/Miniscribe spin-off, started shipping product in 1987. Conner had developed a small, lightweight drive architecture which was much more rugged than its 5.25-inch ancestors, by handling electronically functions which had previously been managed with mechanical parts, and by using microcode to replace functions which had previously been
addressed electronically. Nearly all of Conner's record first-year revenues of $113 million came from Compaq Computer, which had funded most of Conner's start-up with a $30 million investment. The Conner drives were used primarily in a new application—portable and laptop machines, in addition to 'small footprint' desktop models—where customers were willing to accept lower capacities and higher costs per megabyte in order to get the lighter weight, greater ruggedness and lower power consumption which 3.5-inch drives offered.

Seagate engineers were not oblivious to the coming of the 3.5-inch architecture.⁶ By early 1985, less than 1 year after the first 3.5-inch drive was introduced by Rodime and 2 years before Conner Peripherals started shipping its product, Seagate personnel had shown working 3.5-inch prototype drives to customers for evaluation. The initiative for the new drives came from Seagate's engineering organization. Opposition to the program came primarily from the marketing organization and Seagate's executive team, on the grounds that the market wanted higher-capacity drives at a lower cost per megabyte and that 3.5-inch drives could never be built at a lower cost per megabyte than 5.25-inch drives. The customers to whom the Seagate 3.5-inch drives were shown were firms within the value network already served by Seagate: they were manufacturers of full-sized desktop computer systems. Not surprisingly, they showed little interest in the smaller drive. They were looking for capacities of 40 and 60 megabytes for their next-generation machines, while the 3.5-inch architecture could only provide 20 Mb—and at higher costs.⁷

6. This information was provided by former employees of Seagate Technology.

7. This finding is consistent with what Robert Burgelman has observed. He noted that one of the greatest difficulties encountered by corporate entrepreneurs was finding the right 'beta test sites', where products could be interactively developed and refined with customers. Generally, the entrée to the customer was provided to the new venture by the salesman who sold the firm's established product lines. This helped the firm develop new products for established markets, but did not help it identify new applications for its new technology. [5, pp. 76-80]

In response to these lukewarm reviews from customers, Seagate's program manager lowered his 3.5-inch sales estimates, and the firm's executives canceled the 3.5-inch program. Their reasoning was that the markets for 5.25-inch products were larger, and that the sales generated by spending the engineering effort on new 5.25-inch products would generate greater
revenues for the company than would efforts targeted at new 3.5-inch products. In retrospect, it appears that Seagate executives read the market—at least their own market—very accurately. Their customers were manufacturers and value-added resellers of relatively large-footprint desktop personal computers such as the IBM XT and AT. With established applications and product architectures of their own, these customers saw no commercial value in the reduced size, weight and power consumption, and the improved ruggedness, of 3.5-inch products.

Seagate finally began shipping 3.5-inch drives in early 1988—the same year in which the performance trajectory of 3.5-inch drives shown in Fig. 4 intersected the trajectory of capacity demanded in desktop computers. By that time nearly $750 million in 3.5-inch products had been shipped cumulatively in the industry. Interestingly, according to industry observers, as of 1991 almost none of Seagate's 3.5-inch products had been sold to manufacturers of portable/laptop/notebook computers. Seagate's primary customers still were desktop computer manufacturers, and many of its 3.5-inch drives were shipped with frames which permitted them to be mounted in computers which had been designed to accommodate 5.25-inch drives.

The fear of cannibalizing sales of existing products is often cited as a reason why established firms delay the introduction of new technologies. As the Seagate-Conner experience illustrates, however, if new technologies are initially deployed in new market applications, the introduction of new technology may not be an inherently cannibalistic process. When established firms wait until a new technology has become commercially mature in its new applications, however, and launch their own version of the technology only in response to an attack on their home markets, the fear of cannibalization can become a self-fulfilling prophecy.

Although the preceding discussion focused on Seagate's response to the development of the 3.5-inch drive architecture, its behavior was not atypical; by 1988, only 35% of the drive manufacturers which had established themselves making 5.25-inch products for the desktop PC market had introduced 3.5-inch drives. As in earlier product architecture transitions, the barrier to development of a competitive 3.5-inch product does not appear to have been engineering-based. As illustrated in Table 1 above for the 14- to 8-inch transition, the new-architecture drives introduced by the incumbent, established firms in the 8- to 5.25-inch and 5.25- to 3.5-inch transitions were fully performance-competitive with the entrants' drives. Rather, it seems
that the 5.25-inch drive manufacturers were misled by their customers, who themselves seemed as oblivious as Seagate to the potential benefits and possibilities of the new architecture. These only became apparent in the desktop market after the 3.5-inch products had been proven in a new application.

Table 2 shows that in the 1984-1989 period, when the 3.5-inch form factor was becoming firmly established in portable and laptop applications, Seagate had in no way lost its ability to innovate. It was highly responsive to its own customers. The capacity of its drives increased at about 30% per year—a perfect match with the pace of market demand and a testament to the firm's focus on the desktop computing market, rather than the markets above or below it. Seagate also introduced new models of 5.25-inch drives at an accelerated rate during this period—models which employed many of the most advanced component technologies, such as thin-film disks, voice-coil actuators⁸, RLL codes and embedded SCSI interfaces.

8. These were not new to the market, but were new to Seagate.

Table 2. Indicators of the pace of Seagate engineering activity within the 5.25-inch architecture, 1984-1987.

        No. of new models    New models as % of models    % of new models equipped
        announced            offered in prior year        with thin-film disks
1984    3                    50                           0
1985    4                    57                           25
1986    7                    78                           71
1987    15                   115                          100

(The original table also marks, with an 'X', the years in which Seagate introduced the SCSI interface and RLL recording codes; the placement of those marks is not recoverable from this scan.)

Source: Analysis of Disk/Trend Report data, from Christensen (1992, [8]).

Seagate's experience was an archetype of the histories of many of the disk drive industry's leading firms. Its entry strategy employed an innovative architectural design with standard, commercially available components. Its appeal was in an emerging value network—desktop computing. Once it was established in that value network, Seagate's technological attention shifted
toward innovations in component technology, as the work of Henderson and Clark [24, p. 9] suggests that it would. This is because improvements in component technology and refinements in system design—that is, modular and incremental innovations (see Fig. 1)—were the primary drivers of performance improvement within its value network. They were the drivers behind each of the dotted-line technological trajectories plotted in Fig. 4, and were the means by which firms attentive to customers’ demands for improved performance addressed those needs. It is not surprising, therefore, that throughout the history of the industry, the leading innovators in the development and use of component technology were the industry’s established firms, as we describe in the following section.
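The timing of the intersections in Fig. 4—the points where a dotted supply trajectory crosses a solid demand trajectory—can be read as simple exponential arithmetic. As a stylized sketch (the growth rates are those reported in the history above; the initial capacity gaps are assumed round figures, not numbers from the study), let the capacity demanded in an established market and the capacity supplied by a new architecture grow at constant annual rates g_d and g_s:

    C_d(t) = C_d(0) (1 + g_d)^t,        C_s(t) = C_s(0) (1 + g_s)^t.

The new architecture's supplied capacity then overtakes the demanded capacity after

    t* = ln[ C_d(0) / C_s(0) ] / ln[ (1 + g_s) / (1 + g_d) ]  years.

With g_s ≈ 0.40 for 8-inch drives and g_d ≈ 0.15 for mainframe demand, an assumed initial gap of about 2.5x at the low end of the mainframe market gives t* ≈ ln 2.5 / ln(1.40/1.15) ≈ 4.7 years—broadly consistent with the 3-4 year invasion of the lower-end mainframe market recounted above—while the roughly 10x gap at the median mainframe implies on the order of a dozen years.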

Leadership in component technology development by incumbent firms

As in other data processing sub-systems, disk drive technology advanced rapidly through the 1970s and 1980s, increasing drive capacity and performance, and reducing size and cost, at rates that would have been astonishing in almost any other industry. One of the primary technical trends behind increasing capacity was the relentless increase in the recording density achieved—a trend which was largely driven by improvements in component technology. The earliest drives could hold only a few kilobytes of data per square inch of disk surface; by 1967 this had risen to 50 kilobytes; within 6 years, the first Winchester design held 1.7 megabytes per square inch; by 1981 the IBM 3380 boasted a density greater than 12 Mb per square inch. In 1990, densities of 50 Mb per square inch were common, marking a 3000-fold increase in 35 years. As in other applications of magnetic technology (e.g., video recording), greater density led to smaller, less expensive devices. Costs were also driven down by a constellation of incremental improvements in components and materials, by manufacturing experience, and by huge scale increases in demand.

In the 1970s, some manufacturers sensed that they were approaching the limits of recording density obtainable from conventional particulate iron oxide disk coating technology, and began studying the use of thin-film metal coatings to sustain improvements in recording density. While the use of thin-film coatings was then highly developed in the integrated circuit industry, its application to magnetic disks presented substantial challenges because of the disks' greater surface area and the need to make the relatively soft metal coatings as durable as the much harder iron oxide coatings. Industry participants interviewed by Christensen estimate that development of thin-film disk technology required approximately 8 years, and that the pioneers of thin-film disk technology—IBM, Control Data, Digital Equipment, Storage Technology and Ampex—each spent over $50 million in that effort. Between 1984 and 1986, a total of 34 firms—roughly two to three times the number of producers active in 1984—introduced drives with thin-film coatings. The overwhelming majority of these were established industry incumbents. Nearly all new entrants which used thin-film disks in their initial products failed to establish themselves in the industry.

The standard recording head design employed small coils of wire wound around gapped ferrite (iron oxide) cores. A primary factor limiting recording density was the size and precision of the gaps forming the electromagnets on the head. Ferrite heads had to be ground mechanically to achieve desired tolerances, and by 1981 many believed that the limits of precision would soon be reached. As early as 1965, researchers had posited that smaller and more precise electromagnets could be produced by sputtering thin films of metal on the recording head and then using photolithography to etch the electromagnets, thus enabling more precise orientation of smaller magnetic domains on the disk surface. Although thin-film photolithography was well established in the semiconductor industry, its application to recording heads proved extraordinarily difficult. Read-write heads required much thicker films than did integrated circuits, and the surfaces to be coated were often at different levels, and could be inclined. Burroughs in 1976, IBM in 1979, and other large, integrated, established firms were the first to incorporate thin-film heads successfully in disk drives. In the 1982-1986 period, when over 60 firms entered the rigid disk drive industry, only four of them (all commercial failures) attempted to do so using thin-film heads as a source of performance advantage in their initial products. All other entrant firms—even aggressively performance-oriented firms such as Maxtor and Conner Peripherals—found it preferable to learn their way with ferrite heads before tackling thin-film technology. As was the case with thin-film disks, the introduction of thin-film heads was a resource-intensive challenge which only established firms could handle. IBM spent over $100 million developing its thin-film heads, and competing pioneers Control Data and Digital Equipment spent amounts of a similar order of magnitude. The rate of adoption was slow; a decade after Burroughs first established the technology's commercial feasibility, only 15 producers employed thin-film heads. Thin-film technology was costly and difficult to master.
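As a back-of-the-envelope check on the pace described at the start of this section, the 3000-fold increase in areal density over 35 years corresponds to a compound annual growth rate of

    g = 3000^(1/35) − 1 = e^(ln 3000 / 35) − 1 ≈ e^0.229 − 1 ≈ 0.26,

i.e. roughly 26% per year sustained for three and a half decades—the cumulative effect of the stream of component-level improvements, largely pioneered by incumbents, described here.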


The established firms were the leading innovators not just in undertakings to develop risky, complex and expensive component technologies such as thin-film heads and disks, but in literally every component-level innovation. Even in relatively simple but important innovations—such as RLL recording codes (which took the industry from double- to triple-density disks), embedded servo systems, zone-specific recording densities and higher-RPM motors—established firms were the successful pioneers, while entrant firms were the technology followers.

Leadership in architectural technology innovation

As noted above, Henderson and Clark found that in the photolithographic aligner industry attackers consistently had the advantage in architectural innovation. In contrast to their unambiguous findings, and to the clear pattern of component technology leadership by established firms described above, entrant firms led the disk drive industry in the introduction of three of the five new architectural technologies, while established firms led in the other two.

The industry's first architectural technology transition was the switch from removable disk packs to the fixed-disk Winchester architecture between 1973 and 1980. The subsequent four architectural transitions involved the reduction in disk diameter from 14 to 8, 5.25, 3.5, and 2.5 inches between 1978 and 1990. These four new architectures reduced size by 'shrinking' individual components, by reducing part count, and by rearranging the way the components interacted with each other in the system design. For example, in the 8-inch drive, a 110-volt AC motor was typically positioned in the corner of the system, and drove the disks by pulleys and a belt. In reducing the size to 5.25 inches, the motor was changed to a 12-volt DC 'pancake' design and positioned beneath the spindle.

Table 3 shows that the 8-, 5.25- and 3.5-inch generations embodied the architectural technologies which entrant firms pioneered. For example, in 1978 an entrant offered the industry's first 8-inch drive. Within 2 years, six firms were offering 8-inch drives; four of them were entrants. At the end of the second year of the 5.25-inch generation, eight of the ten producers were entrants. A similar pattern characterized the early population of firms offering 3.5-inch drives. Note that these transitions correspond to the movements from point A to points B, C, and D in Fig. 4.

Table 3. Number of entrant vs. established firms offering new product architectures.

                            Number of firms offering one or more models of the new product architecture
                            First year       Second year      Third year       Fourth year
                            No.     %        No.     %        No.     %        No.     %
8-inch drives (1978)
  Entrants                  1       100      4       67       6       55       8       62
  Established               0       0        2       33       5       45       5       38
  Total                     1       100      6       100      11      100      13      100
5.25-inch drives (1980)
  Entrants                  1       100      8       80       8       50       13      54
  Established               0       0        2       20       8       50       11      46
  Total                     1       100      10      100      16      100      24      100
3.5-inch drives (1983)
  Entrants                  1       50       2       67       3       75       4       50
  Established               1       50       1       33       1       25       4       50
  Total                     2       100      3       100      4       100      8       100

Source: Analysis of Disk/Trend Report data, reported in Christensen (1992, [8]).

There were two significant architectural innovations in the industry's history in which the incumbent firms, and not entrants, were the leading innovators. The first was the substitution of sealed, fixed-disk Winchester-technology
14-inch drives for removable disk-pack drives, which was the first architectural transition after the emergence of a group of independent disk drive manufacturers in the 1960s. The first Winchester model was introduced by IBM, an established manufacturer of disk-pack drives, in 1973. The second and third firms to introduce a Winchester-architecture drive were Control Data and Microdata—also established firms—in 1975. Seven of the first eight firms to introduce 14-inch Winchester drives were established manufacturers of the prior architectural generation of disk-pack drives. Entrant firms were the followers in this transition.

An indication of why the incumbent leaders successfully maneuvered across this architectural transition is found in Fig. 5, which plots the average recording density for all disk-pack models introduced in the industry each year (the solid black points), and the average density of Winchester-technology drives introduced in the same years (the open circles). Note the contrast between this architectural change and the transitions to 8-, 5.25- and 3.5-inch drives charted in Fig. 4. Whereas those architectural approaches disrupted established trajectories of performance improvement, the 14-inch Winchester architecture sustained the trajectory of improvement which had been established within the disk-pack architecture. As a consequence, this new architecture was valued within the same value network as had used the prior-architecture products—in this case, mainframe computers. By listening to their customers, the leading incumbent manufacturers of disk-pack drives seem to have been led, rather than misled, in the development of the 14-inch Winchester technology.

Fig. 5. Impact of Winchester architecture on the average areal density of 14-inch disk drives. [The figure plots areal density (millions of bits per square inch, on a log scale) against the years 1965-1984; solid points mark the average density of disk-pack models introduced each year, and open circles the average density of Winchester-architecture models.]

Sixteen years later, in 1989, Prairietek Corporation, a spin-off of Miniscribe, introduced a 2.5-inch drive as its first product. Its customers were almost exclusively manufacturers of notebook computers. Conner Peripherals, the leader of the 3.5-inch generation, introduced its own 2.5-inch product in 1990, however, and by 1991 it controlled over 85% of the worldwide 2.5-inch market.⁹ Did Conner somehow manage its product development and deployment process differently from its predecessors to help it stay atop this industry? Again, our interpretation is that it did not. In the three preceding architectural transitions, the new, smaller drives were sold to new customers in new applications—in new value networks. In the transition from 3.5-inch drives sold for laptop computer applications to 2.5-inch drives sold for notebook computers, however, the leading customers were largely the same firms. Toshiba, Zenith, Compaq and Sharp, which were the leading laptop computer manufacturers, became the leading notebook PC makers. Their customers, and the spreadsheet and word processing software they used, were the same. In other words, the system of use was the same; hence, and most importantly, the way disk drive performance was assessed—capacity per cubic inch and per ounce, ruggedness, power consumption, etc.—was unchanged. Whereas attentiveness to established customers had led the leaders of earlier disk drive generations to attend belatedly to the deployment of new architectures, in this instance attentiveness to its established customers led Conner through a very smooth transition into 2.5-inch drives.

9. Miniscribe was in bankruptcy proceedings in 1989, and Prairietek declared bankruptcy and ceased operations in 1991.

Although the 2.5-inch drive represents a new engineering architecture, it was developed and deployed within the same value network as the 3.5-inch
product, and Conner seems to have negotiated this development with competitive agility.¹⁰

10. Burgelman's account (1991, [4]) of Intel Corporation's development of Reduced Instruction Set Computing (RISC) microprocessors can be understood even more clearly in the context of the value network framework. According to those whom Burgelman interviewed, chips made with the RISC architecture generally addressed the needs of "a customer base (which was) different than the companies who purchased (Intel's) 486 chips ... a lot of customers who before did not even talk to Intel." (p. 247). RISC's champion within the Intel organization, Les Kohn, had tried but failed to convince management to back the RISC technology. He had failed because "RISC was not an existing business and people were not convinced a market was there" (p. 246). Finally, Kohn decided to present the RISC chip as a coprocessor which enhanced the performance of Intel's 486 chip—as one which addressed the needs of the customers within Intel's primary value network, which was personal computing. Positioned as such, the RISC project got funding. Once it was funded, Kohn was able to begin selling the RISC chip to customers outside the 486 personal computing value network, to the customers who valued RISC's attributes most highly—engineering workstation manufacturers.

It does not seem from this evidence that the established disk drive manufacturers were constrained to prosper only within the value networks in which they were born. Firms such as Maxtor, Micropolis, Quantum and Conner proved themselves to have remarkable upward mobility, in terms of Fig. 4. Conner, for example, has moved from portable computing upwards to the desktop business computing market and the engineering workstation market. Micropolis and Maxtor are now the major suppliers of 5.25-inch drives to the mainframe market—even the large arrays employed by supercomputer manufacturers such as Thinking Machines. In other words, upward visibility and understanding towards other known, existing value networks seem not to have presented an insurmountable barrier to mobility. The differing slopes of the trajectories of technology supplied vs. performance demanded seem to be what facilitated the mobility of technologies, and of the firms practising them, across the boundaries of value networks. Firms which led in the introduction of the 5.25-inch drive in desktop computing applications, for example, have been able to ride that technological paradigm across network boundaries into minicomputer, and now mainframe and supercomputer, value networks.

In observing these firms' attacks across the boundaries of these networks, it is important to note that switching to new architectural paradigms per se was not the difficulty which the incumbent firms faced and which gave the attackers their advantage. When the new paradigms invaded the earlier established value networks, the leading incumbent firms at each transition quite rapidly
introduced architecturally novel products which were fully performance-competitive with those of the attackers. To repeat: as long as a technological innovation was valued within the incumbents' value network, they seemed perfectly competent and competitive in developing and introducing that technology—whether it was incremental, modular or architectural, competency-enhancing or competency-destroying, in character. The problem established firms seem to have been unable to confront successfully is that of downward vision and mobility, in terms of Fig. 4. Finding new applications for new architectures, and aggressively entering latent value networks which necessarily are not well understood, seem to be capabilities which each of these firms exhibited once, upon entry, and then seems to have lost.

One final point about mobility within and across value networks merits mention. It appears, from the history of disk drive manufacturers and their suppliers, that the farther removed a firm is from the ultimate system-of-use which defines the dominant performance paradigm in a value network, the greater is its mobility across networks. For example, the firms which manufacture the aluminum platters on which magnetic material is deposited seem to have been able to sell platters to disk manufacturers regardless of the ultimate value network in which the disks were destined to be used. The firms which coated platters with magnetic material, and sold those completed disks to disk drive manufacturers, were more closely aligned than their platter suppliers to specific value networks—but not nearly as captive within specific networks as the disk drive companies themselves seem to have been.

Conclusions and propositions

The history summarized in Fig. 4 seems to be a relatively clear empirical example of the emergence of a sequence of what Dosi [17, p. 147] calls 'technological paradigms' and their associated new trajectories. At points B, C and D, product performance came to be defined differently, new trajectories were established, and engineers began to focus, within each new paradigm, on new sets of problems. For example, power consumption, ruggedness and weight simply were not on the development agendas of any practitioners of the 14- and 8-inch generations, whereas they became the dominating issues on the technology agendas of every firm competing within the 3.5- and 2.5-inch generations. Interpreting Dosi further, in the light of Henderson and Clark's [24, p. 9] framework, it would appear that in the

422

CLAYTON M. CHRISTENSEN AND RICHARD S. ROSENBLOOM

history of disk drive technology, innovations on the left-hand side of the Fig. 1 matrix—incremental and modular technological changes—would constitute normal technological progress. Note that we would include discontinuous, competency-destroying modular innovations in component technology as elements of this normal progress—because they sustained, rather than redefined, the established technological trajectory. Some innovations on the right-hand side of the Henderson-Clark matrix would also fall within the ‘normal’ rubric—such as the 14- and 2.5-inch Winchester architectures. Yet in the instances of 8-, 5.25- and 3.5-inch generations, it seems that new architectural technologies alone were sufficient to herald the emergence of a new paradigm. Each of these new technological paradigms emerged within a new value network—mainframes, minicomputers, desktop PCs and portable computers. The forces which defined the trajectories of performance demanded in each value network tended to be at the broader, higher system-of-use levels in each network—the software used, the data processed, the training level of operators, the locations of use, etc. Dosi theorized that within an established technological paradigm, the pattern of technological change would become endogenous to the ‘normal’ economic mechanism. He anticipated, in other words, that there would be a strong fit between customers’ demands with respect to the rate and direction of improvement in cost and performance, and producers’ abilities to meet those needs. That harmony, governed by market forces, is the fundamental driver of innovation within an established paradigm, according to Dosi’s theory. Figure 4 suggests, however, that there seem to be two distinct, independent trajectories at work. The first is a trajectory of product performance improvement demanded in the ultimate system-of-use in each value network—the solid lines in Fig. 4. The second is a trajectory of performance improvement which the technology is able to supply— represented by the dotted lines in Fig. 4. There seems to be no reason why the slopes of these trajectories should be identical: the first is driven by factors at higher system-of-use levels in the value network, while the latter is driven by the inventiveness and ingenuity of scientists and engineers, and the willingness of marketers and managers to make product commitments targeted at existing markets above them. Likewise, Dosi theorized that the creation of new technological paradigms were events which occurred largely exogenously to the economic system— that institutional forces were largely responsible for their emergence. Our

Our research suggests that although the factors governing the selection of a new paradigm can be seen as 'outside' the 'normal' market mechanisms as perceived by established producers, they are nevertheless essentially economic in character.

Accordingly, in addition to classifying innovations according to their technological character and magnitude, and according to the requirements they place on an organization's culture and structure, as prior scholars have done, we propose that innovations also be categorized by the degree of mobility they enable or require across value networks. If no mobility or change in strategic direction is required—if the innovation is valuable within a firm's established value network—the character of the innovation can be considered straightforward, regardless of its intrinsic technological difficulty or riskiness. If realization of an innovation's inherent value requires the establishment of new systems of use—new value networks—the innovation is surely complex, even if it is technologically quite simple. This is because such innovation requires far more than technological activity: it involves creating markets, and focusing on commercial opportunities which are small and poorly defined rather than large and clear.

In summary, then, we argue that the context in which a firm competes has a profound influence on its ability to marshal and focus the resources and capabilities required to overcome the technological and organizational hurdles which other scholars have identified as impediments to firms' ability to innovate. An important part of this context is the value network in which the firm competes. As we stated earlier, the boundaries of a value network are determined by a unique definition of product performance—by a rank-ordering of the importance of various performance attributes which differs from that employed in other systems-of-use in a broadly defined industry. In other words, the importance of such product attributes as size, weight, power consumption, heat generation, speed, ruggedness and ease of repair will be ranked very differently by users in different networks, and hence in the markets those networks comprise.

This implies that a key determinant of the probability of commercial success of an innovative effort is the degree to which it addresses the well-understood needs of known actors within the value network in which an organization is positioned. Incumbent firms are likely to lead their industries in innovations of all sorts—in architecture and components—which address needs within their value network, regardless of their intrinsic technological character or difficulty. These are straightforward innovations, in that their value and application are clear. Conversely, incumbent firms are likely to lag in the development of technologies—even those where the technology involved is intrinsically simple—which address customers' needs as defined in emerging value networks. Such innovative processes are complex because their value and application are uncertain, according to the criteria used by incumbent firms.

Extending Dosi's [17, p. 147] notion of a 'technological trajectory' associated with each technological paradigm, we also suggest that two distinct trajectories can be identified—one which defines the performance demanded over time within a given value network, and one which traces the performance which technologists are able to provide within a given technological paradigm. In some cases, as in the disk drive industry, the trajectory of performance improvement which a technology is able to provide may have a distinctly different slope from the trajectory of performance improvement demanded in the system-of-use by downstream customers within any given value network. When the slopes of the two trajectories are similar, we expect the technology to remain relatively contained within the value network in which it is initially used; but when the slopes differ, new technologies which initially are performance-competitive only within emerging or commercially remote value networks may migrate into other networks, providing a vehicle for innovators in new networks to attack established ones.

When such an attack occurs, it is because technological progress has made differences in the rank-ordering of performance attributes across different value networks less relevant. For example, size and weight are attributes of disk drives which are far more important in the desktop computing value network than they are in the mainframe and minicomputer value networks. When technological progress in 5.25-inch drives enabled manufacturers of those products to satisfy the attribute prioritization of the mainframe and minicomputer networks (which prize total capacity and high speed) as well as the attribute prioritization of the desktop network, the boundaries between those value networks ceased to be barriers to entry by 5.25-inch drive makers.
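The migration logic in the preceding paragraphs can be made concrete with a stylized piece of algebra. What follows is an illustrative formalization of our own, not one offered in the original analysis: the exponential functional form and the symbols $D$, $S$, $g_D$ and $g_S$ are assumptions introduced purely for exposition. Suppose

\[
D(t) = D_0\, e^{g_D t}, \qquad S(t) = S_0\, e^{g_S t}, \qquad S_0 < D_0,
\]

where $D(t)$ is the performance demanded in an established value network (a solid line in Fig. 4), $S(t)$ is the performance the new technology can supply (a dotted line), and $g_D$, $g_S$ are their respective growth rates. If $g_S > g_D$, the supply trajectory overtakes the demand trajectory at

\[
t^{*} = \frac{\ln\left( D_0 / S_0 \right)}{g_S - g_D},
\]

the point at which the attacking technology becomes performance-competitive in the established network; if $g_S \le g_D$, the trajectories never cross and the value-network boundary persists. On this stylized reading, the 5.25-inch example above corresponds to a case in which the supplied-capacity growth rate exceeded the capacity growth demanded in the mainframe and minicomputer networks, so that $t^{*}$ arrived within the product generation's commercial life.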

A characteristic of almost all innovations in component technology is that they are an important engine of improvement within a given technological paradigm and its corresponding value network. Component innovation, although often 'competency-destroying', rarely changes the trajectory of performance improvement. As such, the risk of commercial error in component innovation is lower than in architectural innovation. Thus, we expect incumbent firms to be the leaders in component innovations.

No such general statement can be made about innovations in architectural technologies. Some may reinforce or sustain the trajectory of performance improvement as it is defined within an established value network, while others may disrupt or redefine that trajectory. We expect incumbent firms to lead their industries in the sorts of architectural technology changes which sustain or reinforce the trajectory of performance within an established value network. When architectural or radical innovations redefine the level, rate and direction of progress of an established technological trajectory, entrant firms have an advantage over incumbents. This is not because of any difficulty or unique skill requirements intrinsic to the new architectural technology; it is because the new paradigm addresses a differently ordered set of performance parameters, valued in a new or different value network. It is difficult for established firms to marshal resources behind innovations that do not address the needs of known, present and powerful customers. In these instances, although the 'attacker's advantage' is associated with an architectural technology change, the essence of that advantage lies in the attacker's differential ability to identify, and make strategic commitments to attack and develop, emerging market applications, or value networks. The issue, at its core, may be the relative abilities of successful incumbent firms vs. entrant firms to change strategies, not technologies.

References

[1] W.J. Abernathy and J.H. Utterback, Patterns of Innovation in Technology, Technology Review 80 (July, 1978) 40-47.
[2] C. Alexander, Notes on the Synthesis of Form (Harvard University Press, Cambridge, MA, 1964).
[3] J. Bower, Managing the Resource Allocation Process (Richard D. Irwin, Homewood, IL, 1970).
[4] R.A. Burgelman, Intraorganizational Ecology of Strategy Making and Organizational Adaptation: Theory and Field Research, Organization Science 2 (August, 1991) 239-262.
[5] R. Burgelman and L. Sayles, Inside Corporate Innovation (The Free Press, New York, 1986).
[6] R. Caves and M. Porter, From Entry Barriers to Mobility Barriers, Quarterly Journal of Economics 91 (May, 1977) 241-261.
[7] A.D. Chandler, Organizational Capabilities and the Economic History of the Industrial Enterprise, Journal of Economic Perspectives 6 (Summer, 1992) 79-100.
[8] C.M. Christensen, The Innovator's Challenge: Understanding the Influence of Market Environment on Processes of Technology Development in the Rigid Disk Drive Industry (Unpublished Doctoral Dissertation, Harvard University Graduate School of Business Administration, 1992).
[9] C.M. Christensen, Exploring the Limits of the Technology S-Curve, Production and Operations Management (Fall, 1993) 334-366.
[11] K.B. Clark, The Interaction of Design Hierarchies and Market Concepts in Technological Evolution, Research Policy 14 (1985) 235-251.
[12] K.B. Clark, Knowledge, Problem-Solving and Innovation in the Evolutionary Firm: Implications for Managerial Capability and Competitive Interaction (Working Paper, Harvard Business School Division of Research, Boston, MA, 1987).
[13] A.C. Cooper and D. Schendel, Strategic Response to Technological Threats, Business Horizons 19 (February, 1976) 61-69.
[15] D.L. Denouel, The Diffusion of Innovations: An Institution Approach (Unpublished Doctoral Dissertation, Harvard University Graduate School of Business Administration, 1980).
[17] G. Dosi, Technological Paradigms and Technological Trajectories, Research Policy 11 (1982) 147-162.
[19] J.L. Enos, Invention and Innovation in the Petroleum Refining Industry, in: The Rate and Direction of Inventive Activity: Economic and Social Factors (National Bureau of Economic Research report, Princeton University Press, Princeton, NJ, 1962) 299-321.
[20] R. Foster, Innovation: The Attacker's Advantage (Summit Books, New York, 1986).
[21] P. Ghemawat, Commitment: The Dynamic of Strategy (The Free Press, New York, 1991).
[22] M.B.W. Graham, RCA and the VideoDisc: The Business of Research (Cambridge University Press, Cambridge, 1986).
[23] R.M. Henderson, The Failure of Established Firms in the Face of Technological Change: A Study of the Semiconductor Photolithographic Alignment Industry (Unpublished Doctoral Dissertation, Harvard University, 1988).
[24] R.M. Henderson and K.B. Clark, Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms, Administrative Science Quarterly 35 (1990) 9-30.
[25] S. Hollander, The Sources of Increased Efficiency: A Study of DuPont Rayon Plants (MIT Press, Cambridge, MA, 1965).
[26] T. Kuhn, The Structure of Scientific Revolutions (University of Chicago Press, Chicago, IL, 1962).
[27] M. Maidique and B.J. Zirger, The New Product Learning Cycle, Research Policy 14 (1985) 299-313.
[28] D.L. Marples, The Decisions of Engineering Design, IEEE Transactions on Engineering Management EM-8 (1961) 55-71.
[29] J. Morone, Winning in High-Tech Markets: The Role of General Management (Harvard Business School Press, Boston, MA, 1993).
[30] K. Pavitt, Technology, Innovation and Strategic Management, in: J. McGee and H. Thomas (Editors), Strategic Management Research: A European Perspective (Wiley, Chichester, 1986).
[32] M. Porter, Competitive Advantage (The Free Press, New York, 1985).
[33] M. Porter, Towards a Dynamic Theory of Strategy, Strategic Management Journal 12 (1991) 95-117.
[34] N. Rosenberg, Inside the Black Box: Technology and Economics (Cambridge University Press, Cambridge, 1982).
[36] R.S. Rosenbloom, From Gears to Chips: The Transformation of NCR and Harris in the Digital Era (Working Paper, Harvard Business School Business History Seminar, 1988).
[37] R.S. Rosenbloom and M. Cusumano, Technological Pioneering and Competitive Advantage: The Birth of the VCR Industry, California Management Review 29 (Summer, 1987) 51-76.
[38] R.S. Rosenbloom and K. Freeze, Ampex Corporation and Video Innovation, Research on Technological Innovation, Management and Policy 2 (1985).
[39] E. Schein, Organizational Culture and Leadership: A Dynamic View (Jossey-Bass, San Francisco, CA, 1988).
[40] M.L. Tushman and P. Anderson, Technological Discontinuities and Organizational Environments, Administrative Science Quarterly 31 (1986) 439-465.
Appendix: A note on the data and methods used to generate Fig. 4

The trajectories mapped in Fig. 4 were calculated as follows. Data on the capacity provided with computers was obtained from Data Sources, an annual publication which lists the technical specifications of all computer models available from each computer manufacturer. Where particular models were available with different features and configurations, the manufacturer provided Data Sources with a 'typical' system configuration with defined RAM capacity, performance specifications of peripheral equipment (including disk drives), list price, and year of introduction. In instances where a given computer model was offered for sale over a sequence of years, the hard disk capacity provided in the typical configuration typically increased. Data Sources divides computers into mainframe, mini/midrange, desktop personal, portable and laptop, and notebook computers. For each class of computers, all models available for sale in each year were ranked by price, and the hard disk capacity provided with the median-priced model was identified for each year. The best-fit line through the resultant time series is plotted as the solid lines in Fig. 4. These single solid lines are drawn in Fig. 4 for expository simplification, to indicate the trend in typical machines. In reality, of course, there is a wide band around these lines. The frontier of performance—the highest capacity offered with the most expensive computers—was substantially higher than the typical values shown.

The dotted lines in Fig. 4 represent the best-fit line through the unweighted average capacity of all disk drives introduced for sale in each given architecture for each year. These data were taken from Disk/Trend Report. Again, for expository simplification, only this average line is shown. There was a wide band of capacities introduced for sale in each year, so that the frontier (the highest-capacity drive introduced in each year) was substantially above the average shown. Stated another way, a distinction must be made between the full range of products available for purchase and those in typical systems of use. The upper and lower bands around the median and average figures shown in Fig. 4 are generally parallel to the lines shown.

Because drives with higher capacities were available in the market than the capacities offered with the median-priced systems, we state in the text that the solid-line trajectories in Fig. 4 represent the capacities 'demanded' in each market. In other words, the capacity per machine was not constrained by technological availability. Rather, it represents a choice for hard disk capacity, made by computer users, given the prevailing cost.
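Because the construction just described is mechanical, it can be sketched in a few lines of code. The following Python sketch is ours, not part of the original study: the records are invented stand-ins for the Data Sources entries, all field and function names are hypothetical, and we assume (consistent with the exponential trajectories in Fig. 4) that the best-fit line is taken on a logarithmic capacity scale.

# Minimal sketch of the Fig. 4 solid-line ("demanded capacity") construction.
# Illustrative only: the records below are invented stand-ins for the Data
# Sources data, and all names are ours, not the original datasets'.
import math
from collections import defaultdict

# (year, computer class, list price in US$, hard disk capacity in MB)
# for hypothetical 'typical' system configurations
systems = [
    (1981, "desktop", 2500, 5),  (1981, "desktop", 4000, 10), (1981, "desktop", 6000, 10),
    (1982, "desktop", 2600, 10), (1982, "desktop", 4200, 10), (1982, "desktop", 6500, 20),
    (1983, "desktop", 2700, 10), (1983, "desktop", 4500, 20), (1983, "desktop", 7000, 30),
]

def demanded_capacity_by_year(records, computer_class):
    """Hard disk capacity shipped with the median-priced model, per year."""
    by_year = defaultdict(list)
    for year, cls, price, capacity in records:
        if cls == computer_class:
            by_year[year].append((price, capacity))
    trajectory = {}
    for year, models in by_year.items():
        models.sort()                                   # rank models by list price
        trajectory[year] = models[len(models) // 2][1]  # median-priced model's capacity
    return trajectory

def fit_loglinear(series):
    """Least-squares fit of log10(capacity) against year; capacity grows
    roughly exponentially, so the trend is linear on a log scale."""
    years = sorted(series)
    xs = [y - years[0] for y in years]
    ys = [math.log10(series[y]) for y in years]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar                 # slope in log10(MB) per year

series = demanded_capacity_by_year(systems, "desktop")
slope, _ = fit_loglinear(series)
print(f"demanded capacity grows by about {10 ** slope:.2f}x per year")

The dotted-line (supplied-capacity) trajectories would be computed analogously, substituting for the median-priced-model step the unweighted mean capacity of all drives of a given architecture introduced in each year, as reported in Disk/Trend Report.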