Accounting, Organizations and Society 32 (2007) 701–734 www.elsevier.com/locate/aos
Mediating instruments and making markets: Capital budgeting, science and the economy

Peter Miller a,*, Ted O'Leary b

a London School of Economics and Political Science, Department of Accounting and Finance, Houghton Street, London WC2A 2AE, United Kingdom
b University of Manchester, United Kingdom / University of Michigan (Ann Arbor), United States
Abstract

We examine in this paper how certain instruments link science and the economy through acting on capital budgeting decisions, and in doing so how they contribute to the process of making markets. We use the term ''mediating instruments'' to refer to those practices that frame the capital spending decisions of individual firms and agencies, and that help to align them with investments made by other firms and agencies in the same or related industries. Our substantive focus is on the microprocessor industry, and the roles of ''Moore's Law'' and ''technology roadmaps''. We examine the ways in which these instruments envision a future, and how they link a multitude of actors and domains in such a way that the making of future markets for microprocessors and related devices can continue. The paper begins with a discussion of existing literatures on capital budgeting, science studies, and recent economic sociology, together with the reasoning behind the notion of ''mediating instruments''. We then address the substantive issues in three stages. Firstly, we consider the role of ''Moore's Law'' in shaping the fundamental expectations of an entire set of industries about rates of increase in the power and complexity of semiconductor devices, and the timing of those increases. Secondly, we examine the roles of ''technology roadmaps'' in translating the simplified imperatives of Moore's Law into a framework that can guide and encourage the myriad of highly uncertain and confidential investment decisions of firms and other agencies. Thirdly, we explore one particular and recent example of major capital investment, that of post-optical lithography. The paper seeks to help remedy the empirical deficit in studies of capital budgeting practices, and to demonstrate that investment is much more than a matter of valuation techniques. We argue, through the case of the microprocessor industry, for greater attention to investment as an inter-firm and inter-agency process, thus lessening the fixation in studies of capital budgeting on the traditional hierarchical and bounded organization. In addition, we seek to extend and illustrate empirically the richness of the notion of ''mediating instruments'' for researchers in accounting, science studies, and economic sociology. © 2007 Elsevier Ltd. All rights reserved.
* Corresponding author. E-mail address: P.B.Miller@LSE.AC.UK (P. Miller).
0361-3682/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.aos.2007.02.003
In April 1965, Gordon E. Moore set out his predictions for what was going to happen in the semiconductor components industry during the next decade. At that point, Moore was R&D Director at Fairchild Semiconductor. Today, he is chairman emeritus of Intel Corp. In 1965, the integrated circuit was only a few years old, and not very widely accepted in the electronics industry. On the basis of three data points only, Moore predicted that during the next decade there would be a thousandfold increase in the power of the most complex integrated circuit or semiconductor device available commercially, as measured by the number of transistors, capacitors and other electronic elements crammed on the device. His expectation proved accurate. Updating and revising his prediction in 1975, Moore thought that the number of electronic elements on a semiconductor could continue to be doubled approximately every two years. That prediction, linked to a provision that the doubling in complexity should not give rise to any increase in cost per device, has become an industry norm known as ''Moore's Law''. It remains today the most fundamental proposition concerning the future of semiconductors.

In November 1992, a group of 179 scientists and engineers, including Gordon Moore, met in Irving, Texas. They represented major US producers of semiconductors and computing products, universities, government agencies and national laboratories. Their task was to help prevent the further erosion of US pre-eminence in microelectronics. Despite the optimistic predictions of Moore's Law, US dominance in the most advanced semiconductors had apparently been lost to other nations, and to Japan in particular. Such erosion was held to be a consequence of America's inferior practices for managing capital investments in science and R&D in the interests of the economy and of national defense. A result of the meeting was the first ''technology roadmap'' for semiconductors published by the industry association, SEMATECH. It set out key engineering and cost attributes that were expected to be required of semiconductor component technologies, at each of five coordination dates out to 2007. The intent was to guide rates of capital
spending by the American industry so that it might once again outperform its foreign rivals.

In July 2004, Intel Corp. announced that it had installed the first lithographic tool for fabricating semiconductors on silicon wafers using an extreme-ultraviolet light source. This tool was a result of almost two decades of research and development by government laboratories, suppliers, industry consortia, and semiconductor firms including Intel. Once fully commercialized and available for high-volume production, it was expected to be the only feasible means of patterning very small transistors and other electronic elements on silicon wafers. Elements as short as 10 nanometers (nm) in length could be patterned routinely. Intel executives argued that this novel lithography would serve as a crucial complement to their capital spending on product architectures and factory designs. Shorter element lengths provided by the lithography would enable more powerful semiconductors, containing larger numbers of circuits, to be fabricated on a given area of silicon. Such devices could thus be manufactured without costly reductions in product yield per silicon wafer. And additional cost savings might be realized by aligning the semiconductor introductions with enhanced factory designs offering higher throughput rates and fewer defects.

We explore in this paper the relays and linkages between these three distinct events dispersed across time. Our focus is on the instruments that mediate between arenas and actors. We are particularly interested in how certain instruments link science and the economy through acting on capital budgeting decisions, and in doing so how they contribute to the process of making markets.1 We use the term 'mediating instruments' to refer to those practices that frame the capital spending decisions of individual firms and agencies, and that help to align them with investments made by other firms and agencies in the same or related industries. The metrics that we are interested in here envision a future, they give substance and timing to that
1 We use the term ‘science’ here to include both science and technology.
vision, and they demonstrate to all the actors involved what is needed from each of them, so that they may contribute to the making of future markets for microprocessors and related devices.

We address these issues as follows. The following section offers a theoretical preamble that sets out the reasoning behind the notion of 'mediating instruments', and that locates it in relation to literatures concerning capital budgeting, science studies, and 'making markets'. We then proceed in three stages. Firstly, we consider the role of ''Moore's Law'' in shaping the fundamental expectations of an entire set of industries about increases in the power and complexity of semiconductor devices, and the timing of these increases. Secondly, we examine the role of ''technology roadmaps'' in translating the abstract and simplified imperatives of Moore's Law into a framework that can guide and encourage the multitude of highly uncertain and confidential investment decisions of firms and other agencies. We show how technology roadmaps operationalise Moore's Law, and facilitate the process of making markets. Technology roadmaps demonstrate to existing market participants as well as possible future entrants what is needed and when, and how this might ensure rates and timing of capital expenditure consistent with American pre-eminence in semiconductors. Thirdly, we explore one particular and recent instance of major capital investment, that of post-optical lithography. We show how Moore's Law, in conjunction with technology roadmaps, problematized the continued rates of increase in semiconductor functionality based on existing lithographic equipment, and framed an investment appraisal process that sought to ensure that the making of markets for advanced semiconductors could continue well into the 21st century.

In addressing these issues, we draw upon and aim to contribute to three relatively distinct yet overlapping sets of literature: firstly, the literature within accounting on capital budgeting and investment appraisal; secondly, the literature on the history and sociology of science and technology that has addressed instruments of mediation and the roles of economic calculation; and thirdly, the literature on neo-institutionalism and economic sociology, in so far as this has addressed the making
and constructing of markets. We consider each of these in turn in the following section, while recognizing that the distinction between these three literatures is primarily for purposes of exposition.
Capital budgeting, science and the economy

Capital budgeting as a managerial and coordination process

The literature within accounting on capital budgeting has developed surprisingly little in recent decades, particularly with respect to the study of the actual capital budgeting practices deployed by firms and other agencies. Even today, we know remarkably little about how capital budgeting practices are devised and made operable in particular contexts, whether hierarchically within firms, or laterally across firm, organizational and industry boundaries (Hopwood, 1996; Miller, Kurunmaki, & O'Leary, 2007). It is as if such issues have 'fallen between the gaps' that separate the distinct yet related literatures of accounting, finance and strategy (Miller & O'Leary, 2005a, 2005b).2

It is now over three decades since Bower (1972) called for explicit attention to the capital budgeting process. By this, he meant much more than the application of what he termed the ''theoretically correct approach'' involving, at the time, the use of the net present value rule to evaluate and rank investment opportunities.3 He argued that capital budgeting in large corporations has ''very little to do with that class of decisions'' (Bower, 1972, p. 7). Rather, the whole set of issues typically referred to as capital budgeting comprise
2 On the possible tensions between financial and strategic approaches to investment opportunities, see Barwise, Marsh, and Wensley (1989), Carr and Tomkins (1996, 1998), Myers (1984) and Tomkins (1991). See also Baldwin and Clark (1994), who argued that the formal financial investment evaluation systems (based on discounted cash flow) adopted by most large US companies after World War II tended to allow investments in organizational capabilities to 'fall through the cracks'.
3 More recently, Trigeorgis (1996) and Smit and Trigeorgis (2004), among others, have argued for a modified and expanded net present value criterion to incorporate option values.
‘‘general management’’ problems, he suggested, that fall within the purview of general managers rather than finance specialists.4 In calling for a broader definition of capital budgeting, he made it clear that his criticisms were not directed at the net present value theoretical model itself, but at its descriptive accuracy: ‘‘. . . the theoretical characterization of a project as a financial security, and the theoretical focus on a discrete and identifiable set of choices made by top management, make for a descriptively inaccurate conceptual framework.’’ (Bower, 1972, p. 12) Ackerman (1970) argued similarly, focusing on the impact of vertical integration and diversification on the investment process.5 His study examined the paper industry, and he hypothesized that influence in the investment process would be concentrated at higher levels in integrated firms than in diversified firms. He distinguished three stages in the resource allocation process: definition, impetus and authorization. His findings were that definition and impetus were more centralized in integrated firms, and that authorization was a corporate decision in all cases. But it is his findings with regard to the overall investment process that are of particular interest here. He argued that the investment process appeared to be more strongly influenced by factors other than the financial framework. ‘Definition’, he argued, was an outgrowth of product–market strategy; ‘impetus’ was guided by the measurement systems that related business decisions to managerial rewards; and ‘authorization’ was a function of the availability of funds and political power in the organization. These factors, he argued, appeared to be influential regardless of the organization structure, although perhaps more evident in diversified firms due to the investment process being clearly divided among organization levels.
4 See also Aharoni (1966) on the issue of foreign investment decisions.
5 Ackerman and Bower were, at the time, colleagues at Harvard Business School. Bower's (1972) book was first published in 1970 by the Division of Research of Harvard Business School.
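For reference, the ''theoretically correct approach'' that Bower criticized, namely ranking projects by their net present values, can be stated in a few lines. The sketch below is our own illustration, with purely hypothetical figures:

```python
# Minimal sketch of the net present value rule (all figures hypothetical).
def npv(rate, cashflows):
    """Discount a series of end-of-year cash flows; cashflows[0] falls at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-1000.0, 300.0, 400.0, 500.0, 200.0]  # initial outlay, then inflows
print(f"NPV at 10%: {npv(0.10, project):.1f}")   # ~115.6; the rule says invest if positive
```

Bower's point, precisely, was that such a calculation captures only a small fraction of what capital budgeting in large corporations actually involves.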
Elsewhere, and around the same time, others argued in similar terms. King (1974, 1975) described the capital budgeting literature as mostly accepting the 'scientific' or 'rational' approach. He described that literature as freely admitting that this is an 'idealization', that not all information is available, and that any analysis is necessarily partial rather than comprehensive. But, he suggested, the capital budgeting literature implies that effort expended in attempting to follow the ideal is worthwhile.6

King's approach was different. Relatively little attention, he stated, ''has been paid to the process of decision making as compared to techniques for financial evaluation'' (King, 1974, p. 18). Based on two case studies of large capital investment decisions in a major British company, he called for greater attention to the process of decision making from conception through to fruition. In both instances, he identified weaknesses in the 'scientific model'. The formal procedures of the company, he argued, addressed only a small proportion of the process of decision making. He proposed instead a seven-stage model, not dissimilar in principle from Ackerman's, which commenced with 'triggering', progressed through 'screening', 'definition', 'evaluation' and 'transmission', only then arriving at 'decision'. The latter, he argued, is ''normally a process of endorsement rather than judgment'' (King, 1975, p. 77). Existing capital budgeting theory, he suggested, offered no help to those who ''struggle to articulate a project and to gather relevant information'' (King, 1975, p. 78). It makes no contribution, he argued, to how screening and definition should be conducted, or how the search process should be conducted, except by holding up ''an ideal towards which those involved are impugned to aim'' (King, 1975, p. 78). In terms comparable to those of Bower, he proposed that capital investment decision making in a large organization should be viewed as ''a process of investigation not a single act of top management deliberation'' (King, 1975, p. 80). Researchers should seek to develop, he argued, ''an understanding of the relationships between organizational structure, decision-making
6 See for example Bromwich (1976).
procedures, creativity, commitment and bias, not on how to process and condense information which only God could provide'' (King, 1975, pp. 80–81). Capital investment decisions emerge, he suggested, ''as the result of a complex social process of which formal consideration by top management forms only a small part'' (King, 1974, p. 27).

These calls to examine the capital budgeting process as a whole, and to view it as an organizational rather than exclusively financial phenomenon, produced little response from accounting researchers for approximately two decades. It was not until the 1990s that researchers in accounting turned their attention again to the organizational dimensions of investment appraisal. No doubt this was prompted to some extent by the focus on Japanese manufacturers as both a competitive threat, and as offering a counterpoint to conventional approaches to investment appraisal as established in the US and UK from the 1950s onwards.7 A series of field studies conducted in Britain and Japan allowed Jones, Currie, and Dugdale (1993) to argue that the existing literature concerning accounting and manufacturing performance had focused too narrowly on accounting measurement and technique, with insufficient attempt to locate this in its organizational and societal contexts. Even if there may appear to be similarities in the accounting techniques used in Britain and Japan, they argued that the meaning of such practices may differ in different contexts. The integration of accounting information with other types of information may be a significant factor affecting the evaluation of investments in new technology, they suggested, as may the extent to which accountants constitute a distinct occupational grouping and occupy dominant positions in managerial hierarchies. The importance of locating investment appraisal in wider processes of decision making and reasoning was called for also by Jones and Dugdale (1994), while Jones and Lee (1998) emphasized the links between accounting and strategy in investment decision processes at different managerial levels.

7 Cf. Kaplan (1986) and Johnson and Kaplan (1987). See also Hayes and Abernathy (1980), Hayes and Garvin (1982), together with Myers (1984) and Pinches (1982).
Notwithstanding these recent contributions, the revival of interest in capital budgeting within the accounting literature has been limited. There still remain very few studies of capital budgeting processes within firms, and remarkably few that address the organizational dynamics of capital budgeting within firms, as called for by Bower and Ackerman over 30 years ago. As recently as 1993, it was possible for Jensen to bemoan, in a Presidential Address to the American Finance Association, the dominance of normative models, and the absence of empirical studies of investment decision-making processes in practice:

The finance profession has concentrated on how capital investment decisions should be made, with little systematic study of how they actually are made in practice. This narrowly normative view of investment decisions has led the profession to ignore what has become a major worldwide efficiency problem that will be with us for several decades to come (Jensen, 1993, p. 870).

This point has been echoed in the past decade or so by accounting scholars, who have argued that our knowledge and understanding of investment appraisal practices continue to remain seriously undeveloped (Miller, 1991; Miller & O'Leary, 1994a, 1997, 2005a, 2005b; Northcott, 1991). There is of course recent survey evidence (Graham & Harvey, 2002) that informs us of the usage of formal valuation techniques such as NPV, IRR, and real options.8 But such evidence is limited to charting the formal usage of known evaluation techniques. There is also the still growing literature on real options, which offers refinements in methods for valuing investment opportunities (Smit & Trigeorgis, 2004).
8 Almost 30 years earlier, Klammer (1972) documented the growing use of discounted cash flow methods, while Haka, Gordon, and Pinches (1985) examined the effect on firm performance of switching to capital budgeting techniques based on risk-adjusted discounted net cash flows. Survey evidence on the use of 'sophisticated' capital budgeting techniques is also provided in Pike (1983, 1988), while Carr and Tomkins (1996, 1998) provide useful comparative data for Britain, Germany, the US and Japan.
The focus here is on 'strategic' decisions, where ''the traditional discounted cash flow approach is often short-sighted'' (Smit & Trigeorgis, 2004). It is argued by the proponents of real options methods that ''traditional methods'' for appraising projects, such as the discounted cash flow approach, are appropriate when valuing investment opportunities where a stream of cash flows can be well specified. But:

These methods, however, have serious shortcomings in valuing investments when management has the ability to control future cash flows and revise future decisions, particularly when current investment may interact with future investment (growth options), may confer future strategic advantages, or may affect (and be affected by) actions and reactions of other parties external to the firm (such as competitors and suppliers) (Smit & Trigeorgis, 2004, pp. xxiii–xxiv).

According to this view, the gap between finance and corporate strategy needs to be narrowed, and increasing emphasis should be placed on 'strategic valuation'. The identification of this 'gap' is most welcome, but attention continues to be focused on techniques for valuing investment opportunities, rather than the processes for managing and coordinating them (Miller & O'Leary, 2005b). We know surprisingly little about how managers exercise operating flexibility when a change to one investment programme can affect a range of other programmes, including those outside the firm. Such interrelated investments are of particular importance to our increasingly 'connected' economy, where products and technologies are designed by networks of firms and other organizations. But these broader issues remain unaddressed in the real options literature, where a very constrained picture of capital budgeting continues to prevail.
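The flavour of the expanded net present value criterion advocated by Smit and Trigeorgis can be conveyed with a one-period example in which deferring an investment is worth more than committing immediately. This is our own minimal sketch, not their formulation, and every number in it is hypothetical:

```python
# One-step binomial sketch of an 'expanded NPV' with an option to defer.
# All figures are hypothetical.

investment = 100.0                    # cost of investing, now or next year
value_up, value_down = 180.0, 60.0    # possible project values next year
prob_up = 0.5                         # subjective probability of the up state
risk_free = 0.05                      # one-period risk-free rate
discount = 0.10                       # risk-adjusted rate for the project

# Static NPV: commit now and accept the expected project value.
expected_value = prob_up * value_up + (1 - prob_up) * value_down   # 120.0
v0 = expected_value / (1 + discount)                               # ~109.1
static_npv = v0 - investment                                       # ~9.1

# Option to defer: invest next year only if the up state occurs.
# Risk-neutral probability implied by the project-value lattice.
q = (v0 * (1 + risk_free) - value_down) / (value_up - value_down)  # ~0.45
payoff_up = max(value_up - investment, 0.0)                        # 80.0
payoff_down = max(value_down - investment, 0.0)                    # 0.0
deferral_value = (q * payoff_up + (1 - q) * payoff_down) / (1 + risk_free)

print(f"static NPV: {static_npv:.1f}")           # ~9.1
print(f"value with option to defer: {deferral_value:.1f}")  # ~34.6
```

The flexibility to wait is worth far more here than immediate commitment, because deferral avoids investing in the unfavourable state. Our point in the text, however, is that such refinements still concern valuation rather than the coordination of interrelated investments.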
To understand capital budgeting practices more fully, we argue firstly and most generally that we need to remedy the serious empirical deficit of studies of actual capital budgeting practices, by providing many more detailed empirical analyses of the investment appraisal practices that are actually used by firms and other agencies. Equally, we need to shift the focus so that capital budgeting is no longer viewed from the narrow perspective of valuation techniques, to which it has been consigned for several decades by studies in finance. We need to position it instead as a complex managerial and institutional process (cf. Miller, 1991). While survey evidence can provide an overall picture of the formal usage of known evaluation techniques, it is of limited use for documenting and analyzing in detail how such techniques are used, and the interrelations between such techniques and other managerial practices. It is also of limited use in identifying and describing novel or idiosyncratic evaluation practices or metrics, such as investment bundling and technology 'roadmapping', which may be an important part of the overall investment evaluation process.9

Secondly, we need to know much more about the ways in which managers evaluate parallel and complementary investment opportunities that necessarily operate at both intra-firm and inter-firm levels. Such interrelated investments have assumed increasing importance in the modern economy. However, as Hopwood (1996) has argued, accounting researchers have been very slow to incorporate such changes in their work. They remain largely fixated on the traditional hierarchical and bounded organization. Although this has been somewhat attenuated in the past decade for certain topics, it remains the case for investment evaluation and capital budgeting, where the 'vertical imperative' continues to dominate the literature. We need to know more about how investment proposal and evaluation processes are managed not only within organizations, but among groups or sets of organizations.10 Accounting researchers, we argue, should pay more attention to the interrelations among investment opportunities, and the coordination practices that facilitate investments that necessarily traverse firm and organizational boundaries. We need to know what instruments are used to mediate between the investment strategies of different entities, and
On ‘investment bundling’, see Miller and O’Leary (1994b, 1997). 10 Pettigrew et al. (2003) address these issues by combining an organizational perspective with the notion of complementarities.
how practices now regarded as traditional accounting metrics interrelate with 'hybrid' calculative and planning procedures (cf. Miller et al., 2007). This paper aims to redress a deficit in studies of capital budgeting that is both conceptual and empirical, and to extend the purview of what counts as capital budgeting by taking into consideration those mediating instruments that frame and encourage investment across the boundaries of firms and other agencies.

Thirdly, and more specifically, we need to describe and understand the linkages between technological or scientific innovation and investment evaluation. The interrelations between capital budgeting, science and the economy, we argue, merit significant attention by the accounting research community. The case of the microprocessor industry addressed in this paper, we suggest, offers a way of beginning to open up this important research area.
Mediating instruments and science studies

While the accounting literature has offered a rather constrained view of capital budgeting practices, historians, philosophers and sociologists of science and technology have opened up some very fruitful lines of enquiry. These are of particular help in understanding instruments of mediation that operate at inter-organizational levels, and that link formally or legally separate arenas or entities.11

Over two decades ago, Hacking (1983) drew attention to the importance of studying representing and intervening as conjoint processes. Whereas philosophers of science had traditionally been concerned primarily with theories and representations, they had said very little about instruments or the use of knowledge to alter the world. In contrast, Hacking argued that the study of representation needed to be paired with the study of instruments as modes of intervening. Subsequently, he returned to the importance of attending to the making and remaking of instruments, as well as the processes of making them work and rethinking how they work (Hacking, 1992). He proposed a disarmingly
simple and monosyllabic tripartite classification of the elements of laboratory science: ideas, things and marks.12 The point of this taxonomy was to emphasise the importance of examining the interplay of these different elements, rather than to focus unduly on how best to classify individual items. Others such as Pickering (1992) argued in similar terms, suggesting that scholars of science should focus on 'science-as-practice' rather than 'science-as-knowledge'. A concern with the intricacies of practice meant an emphasis on the richness of the doing of science, the dense work of building instruments, planning, running and interpreting experiments, in addition to theory building and elaboration.

These suggestions were driven in large part by relatively localised debates within science and technology studies, yet they had important implications for researchers working within accounting, and at the interface of accounting and sociology. The argument that researchers should study the interplay between representing and intervening was central to debates about 'governing economic life', even if the terminology and focus were somewhat distinct. It was argued that those concerned with the regulation and government of conduct should address the interrelations between programmes and technologies (Miller & Rose, 1990; Rose & Miller, 1992). Programmes referred to the discursive character of modes of governing, the imagining and conceptualising of an arena and its constituents, such that it might be made amenable to knowledge and calculation. Technologies referred to the possibility of intervening through a range of devices, instruments, calculations and inscriptions. The governing of conduct, it was argued, was achieved through the interplay between programmes and technologies, between the discursive and the instrumental. For it is through technologies and instruments that programmes, or 'ideas' in Hacking's schema, are articulated and made operable. The realm of the programmatic was extensive, and could include dreams and schemes for enhancing output,
11 On the notion of ‘arena’, see Burchell, Clubb, and Hopwood (1985).
12 These equate approximately to theories, instruments and inscriptions. Cf. Hacking (1992, p. 44).
analysing and encouraging modes of consumption, envisaging and designing audit, or inventing new forms of personal transport (Latour, 1996; Miller, 1997; Miller & O'Leary, 1987; Miller & Rose, 1995, 1997; Power, 1997). But this was not a simple matter of 'implementing' ideal schemes in reality. Rather, it was a question of assembling and adjusting diverse components and practices so that they might operate as a more or less stable and coherent working ensemble, even if the stability was always only ever transient. Central to the process of forming a working ensemble were the instruments that link or mediate between the various actors and agencies.

This mode of analysing the governing of economic life directed attention firmly towards a large and heterogeneous range of humble and mundane mechanisms and instruments, including techniques of notation, computation and calculation. Accounting practices were identified as central here, for they added a unique quality. Accounting practices made the person calculable and comparable, and allowed conduct to be measured and subjected to an economic or financial norm that effaced the substantive process or product (Miller, 1992; Miller & O'Leary, 1987). Accounting should, it was argued, be analysed as a social and institutional practice, one that is intrinsic to and constitutive of social relations, rather than derivative or secondary (Miller, 1994). Accounting should be viewed as a practice, an attempt to intervene, to act upon individuals, entities and processes to transform them and to achieve specific ends. Accounting could be viewed as an instrument for the governing of conduct, a type of 'action at a distance' (Latour, 1987). But the link between the instruments or technologies of accounting, and the broader rationales or representations of accounting should be preserved, it was argued. For it is through these representations that accounting practices are mobilized, articulated and endowed with a significance that extends beyond the immediate task to which they are addressed (Burchell, Clubb, Hopwood, & Hughes, 1980).

The links between accounting and science were made explicit by some writers. Power (1994) pointed to the intellectual intersections between
the history and sociology of science, and the history and sociology of accounting. This was more than the exchange of metaphors, as for instance between scientific and economic fields (cf. Mirowski, 1989). It involved also 'material' interactions between science and administrative practices (Power, 1994, p. 368). Robson (1994a) demonstrated substantively how accounting instruments can help connect science to the national economy, by making visible and quantitative the responsibility of individual managers for science and technology. Attempts to reconcile science and technology with the ideals of economic growth and prosperity helped establish a connection between R&D, as the measure of science and technology, and accounting, as a form of economic calculation located at the level of the individual firm or organization. Chains of calculation, he argued, increasingly surround the practices of science and technology, in attempts to make them calculable and ultimately programmable (Robson, 1994a).13

This dual concern with representing and intervening, with programmes and technologies, with ideas and instruments, is preserved and extended in recent writings that have focused attention on the ways in which instruments can 'mediate' between aspirations, actors and arenas. Wise (1988) analysed the modes of mediation involved in the development of the steam engine and the electric telegraph. In the case of the former, he identified conceptual mediation between political economy and engineering mechanics. In the case of the latter, he identified methodological mediation between the interests of engineering and industry on the one hand, and the interests of electromagnetic theory on the other. His general concern was with the ways in which social context contributes to the 'everyday doings' of practitioners (Wise, 1988, p. 78). He was, however, acutely conscious of the difficulties of analysing how social context can affect those working in areas involving mathematics or measurement. Wise proposed that any such influence must exist in a mediated form, and that analysis should focus on modes of
13 See also Chua (1995), together with Robson (1992, 1994b) on the notions of ‘translation’ and ‘action at a distance’.
mediation rather than on the direct relation of social factors to scientific problems. A machine functioning in a social context carries with it both a set of ideas that explain its physical operation, and a set of ideas that explain its social function. It is, he argued, through these dual sets of ideas that we interact with the machine. The simultaneous embedding of physical and social ideas requires a mutual adaptation of the one set to the other. In this process of adaptation exists the potential for the mediating role of machines. Moreover, the process of adaptation is not passive. In so far as it allows our ideas to be 'embedded' in the machine, the categories of a local scientific community come to be interdefined with political and economic categories (Miller & O'Leary, 1994a, 1994b).

This interest in modes of mediation has recently been developed further by Morrison and Morgan (1999), who have suggested that 'models' can function as 'mediating instruments'. Despite the slightly different terminology, and a focus on how models can be used as instruments of investigation, the concern remains with the ways in which two domains are connected. Models, it is argued by Morrison and Morgan, are one of the critical instruments of modern science. Neither just theory nor data, models can function as 'autonomous agents' because they are made up from a mixture of elements. By its nature, an instrument is independent of the thing it operates on, but it connects with it in some way. As a tool of investigation, a scientific model represents some aspect of the world or a particular theory about the world, or both. Echoing Hacking, they argue that models function as both means of intervention and means of representation. Morrison and Morgan argue that this holds both in the realms of natural science – in the form of models such as those of superconductivity or of quark confinement – and in the realm of social science – in the form of models such as the Leontief input–output model or multivariate structural time-series models. Moreover, models play an important role in measurement, not only structuring and displaying measurement practices but also acting directly as measuring instruments. To this extent, models understood as mediating instruments take their place alongside experiments,
theories and data as one of the essential ingredients in the practice of science.

Making the economy and making markets

This concern with the ways in which instruments mediate between domains and actors has been addressed recently by those sociologists of science who are turning their attention to the realms of the economy, of markets, and of their associated calculative practices. Here, the making of markets as a practical accomplishment achieved through particular types of instruments, rather than a methodological a priori, is receiving increasing attention. More than a decade ago, the notion of 'techno-economic networks' (Callon, 1991) suggested the importance of examining the multiple interactions between science and technology on the one hand, and the economy on the other. Techno-economic networks were defined as: ''a coordinated set of heterogeneous actors which interact more or less successfully to develop, produce, distribute and diffuse methods for generating goods and services'' (Callon, 1991, p. 133). In place of a mode of organizing based on the single firm was substituted a mode of organizing that depended on links being formed among university laboratories, technical research centres, companies and customers. Such 'polycentric' networks were held to be characterized by a significant degree of autonomy for the various actors and entities composing them, and by mechanisms for integration, coordination and stabilisation (Callon, 1991; Callon, Larédo, & Rabeharisoa, 1992).14 Most simply, techno-economic networks were depicted as being organized around three distinct poles: a scientific pole that produces certified knowledge; a technical pole that develops and transforms artefacts; and a market pole that refers to 'practitioners' who more or less explicitly generate and describe the identity of consumers (Callon, Larédo, & Mustar, 1997).15 A central feature of such networks was 'intermediaries', which
14 See also Callon et al. (1991, 1992).
15 On the forming of networks, and the view that economic action is deeply 'embedded' in networks of interpersonal relations, see Granovetter (1985). Portes (1998) offers a useful review of related issues concerning the notion of 'social capital'.
referred to anything passing between actors that defines the relationship between them. For a network is not limited to just the heterogeneous actors who make it up. A whole set of intermediaries circulates among them. These give material content to the links uniting the actors. They may be written documents, technical artefacts, human beings, or money. Taken statically, a techno-economic network may be defined by the actors that compose it. Taken dynamically, the network is defined by the emergence and evolution of the actors and the multiple configurations they enter into. The diversity of such networks arises not only from the variety of the actors, and the heterogeneity of the intermediaries, but also from the multiplicity of possible interactions that can arise.

This emphasis on the forming of networks among heterogeneous agents and artefacts linking science, technology and markets was extended in more recent writings.16 Callon (1998) drew attention to the making of markets, by focusing on the reciprocal relations between economics as a discipline and the economy as a thing. Economics, he argues, performs, shapes and formats the economy, rather than merely observing and analysing how it functions. That is to say, a knowledge of the economy is densely interwoven with the form and content of the economy itself, and the interventions that can be made in it. Central to these interventions are the interrelated issues of calculation, calculative agency, and the conditions under which the latter arise. For calculation, Callon argues, is a complex collective practice. Central to this practice is the material reality of calculation, the figures, mechanisms and inscriptions that are decisive in performing calculations. We should focus, he argues, on the culturally or socially constructed capacities that emerge within a calculative network.17 Agents are calculative because action in
16 See Callon (2002) for a more recent formulation of the notion of techno-economic networks.
17 Callon acknowledges explicitly his debt here to Granovetter, specifically the notion of the network as one that ''configures ontologies''. By this he means that the identities, interests and objectives of agents are variable outcomes that fluctuate according to the form and dynamics of relations among agents (Callon, 1998, p. 8).
certain types of networks can only be calculative. Calculation and agency are two sides of the same coin. The agent-network is by construction calculative, but calculativeness could not exist without calculating tools, most notably the lowly and often disclaimed tools of accounting. The existence of calculative agencies correlates closely, he argues, with that of calculative tools. These tools mutually define the nature and content of the calculations made by those agencies that calculate, and the tools themselves are permanently open, plastic and reconfigurable. As with the notion of a techno-economic network, the making of markets presupposes that the relations between actors and instruments are as important as the relations within firms and organizations.

The making of markets has also been addressed recently by those working in the area that is coming to be known as the 'social studies of finance'. Abolafia (1996) studied 'market makers' in the strict sense – those traders at the centre of financial markets who trade for their own accounts, while providing continuous bids and offers to a diffuse population of investors and speculators located throughout the world. Underlying this study of market making in the 'native' sense is a commitment also to a social constructionist perspective on the making of markets. This suggests that individual behaviours are enacted in the context of the social relationships, cultural idioms, and institutions they continuously create. According to this perspective, financial markets are socially constructed institutions that are produced and reproduced as a result of the purposeful action and interaction of interdependent powerful interests competing for control. Markets are not spontaneously generated by the exchange activity of buyers and sellers. Rather, skilled actors produce institutional arrangements, the rules, roles and relationships that make market exchange possible. The institutions define the market, rather than the reverse.

Others have addressed similar issues, albeit with a somewhat different focus. MacKenzie and Millo (2003) addressed the 'performativity' of economics in the context of the Chicago Board Options Exchange. Drawing on the work of Callon, while also linking their analysis to classic theories of
economic sociology, they proposed an historical understanding of markets as cultures, moral communities, and places of political action. Beunza and Stark (2004) argued somewhat differently that economic sociology should move from studying the institutions in which economic activity is embedded to analysing the actual calculative practices of actors at work. Economic sociology, in the form of social network analysis, should turn its attention to problems of calculation and valuation. But social network analysis should not be limited to studying ties among persons. Instead, it should focus on the links among persons and instruments. For if tools count, then instrumentation must be brought into the accounts of economic sociologists. Kalthoff (2005), in a study of risk management practices within banking, argued in similar terms. Defining practices of calculation as 'epistemic practices', he suggests that it is through technical devices of calculation that companies are constituted within risk management practices. It is through investments in forms that entities, events and objects are transformed.

More generally, MacKenzie (1996) has addressed the issue of 'technological trajectories', and the extent to which such trajectories are self-fulfilling prophecies. Persistent patterns of technological change can be explained, he argues, by the fact that technologists and others believe that they will be persistent. But, we argue, it is important to understand the mechanisms or practices that make this persistence operable. The term 'institution' is a convenient shorthand for designating the ways in which the beliefs of actors in persistent technological change become routine and taken for granted (MacKenzie, 1996, p. 58). But we need a fuller understanding of how this process of 'embedding' is achieved, what practices or instruments help link the actions and expectations of actors across formally separate and diverse domains. We argue that a focus on the roles of 'mediating instruments' in linking science and the economy helps extend our understanding of these issues.

From an institutionalist perspective, and in terms that are not dissimilar to those who have addressed the interrelations between science and the economy, Fligstein has also addressed the construction of markets, and more specifically the case
of European integration (Fligstein, 2001; Fligstein & Sweet, 2002). His central tenet is that economic development, as a process, is causally related to the emergence and consolidation of particular symbiotic relationships that form between rule structures, governmental organizations, and economic actors. When states provide the rules according to which firms trade, and the means for their enforcement, they give market actors new trading opportunities, which tend to expand economic activity and growth. Expressed synoptically, this is to postulate a reciprocal relationship between market making and rule making. In the case of the EU, this suggests that the activities of firms, together with transnational actors, produce a self-reinforcing system in which evolving rule structures and market integration become linked. Or, put differently, large-scale market-building projects rely heavily on the creation of formal rules and legal procedures. Societies, Fligstein (2001) suggests, have both formal and informal rules that provide the conditions for economic exchange and allow for the production of new markets. Formal rules define property rights, governance structures, and patents, as well as the parameters within which competition can occur. Informal rules define what organizations 'should' look like and how interactions should be structured. Together, formal and informal rules affect an organization's chances of survival.

We seek in this paper to build on these three relatively distinct literatures. The accounting literature, we argue, needs to pay much greater attention to the issue of capital budgeting, and to broaden its view of what counts as capital budgeting. Rather than focusing only on valuation methods, emphasis needs to be placed on modes of managing and coordinating investments, particularly when these are large-scale and involve diverse assets. Equally, inter-firm and inter-organizational investments need to be taken much more seriously than they have been to date. The literature on science and technology offers considerable scope for addressing these limitations. We argue that the notion of 'mediating instruments' is a fruitful way of examining the ways in which science and the economy come to be linked, offering both ways of representing and intervening. And the emerging
literature on social studies of finance, together with institutionalist writings, direct attention to the processes of making markets. The focus in this paper on Moore's Law and technology roadmaps builds on yet takes us beyond the notion of market making as heavily dependent on rule making. And, in so far as these instruments can be viewed as fundamental to both the intra-firm and inter-firm evaluation of investment opportunities, accounting practices can be seen to play a pivotal role in the production and transformation of market making.

The remainder of the paper addresses these issues in three stages. Firstly, we consider the role of ''Moore's Law'' as a 'mediating instrument' that links science and the economy. We offer a short account of the 'birth' of Moore's Law, as a prelude to discussing how his predictions became central to the mediation of strategies and programs of investment across the boundaries of an extensive set of firms, scientific agencies and industries. Secondly, we consider how technology roadmaps build on and extend Moore's simplified forecasts, to form and align expectations for the multiplicity of materials, components and systems needed to produce advanced semiconductors. Again, we find the notion of 'mediating instruments' a fruitful way of analyzing the role of technology roadmaps in linking science and the economy. Thirdly, we consider the instance of investments in post-optical lithography, and show how Moore's Law, together with technology roadmaps, helped frame investment evaluations such that the making of markets for semiconductors could continue for the foreseeable future.
From prediction to industry norm: The birth of Moore's Law

The 35th anniversary issue of Electronics magazine, published on 19th April 1965, contained a paper by Gordon E. Moore titled ''Cramming more components onto integrated circuits''. Moore had been asked to predict what would happen to the semiconductor components industry during the next ten years. He responded with a prediction based on a small number of data points, from which he extrapolated in a straight line. The density of transistors and other electronic elements on a semiconductor device had been doubling at roughly an annual rate, Moore claimed. He envisaged no fundamental barriers to a continuation of the trend for another decade at least. If the prediction were realized, it would be possible by 1975 to cram 65,000 electronic elements onto a single semiconductor device. This compared to only 50 or 60 elements per device in 1965.

At the time of his article, integrated circuits formed on a semiconductor substrate such as silicon were only a few years old.18 And they were not very well accepted among users, many of whom thought that the role of the semiconductor industry was to supply individual elements such as transistors, diodes and capacitors, rather than the integrated circuits themselves. As Moore (1995, p. 3) was to point out subsequently, much of his initial argument was intended to make the case that improvements in the science of integrated circuits could be crucial to the future of the semiconductor industry and to economic growth. Moore speculated on the future for integrated circuits in terms that must have sounded like science fiction to many of his contemporaries:

''Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles, and personal portable communications equipment.'' (Moore, 1965, p. 114)

But, he argued:

''. . . the biggest potential lies in the production of large systems. In telephone communications, integrated circuits in digital filters will separate channels on multiplex equipment. Integrated circuits will also switch telephone circuits and perform data processing.'' (Moore, 1965, p. 114)
18 The phrases ''integrated circuit'' and ''semiconductor device'' are used interchangeably in this section of the paper. The former term is that routinely used in Moore's earlier papers to refer to systems of electronic elements formed on a base or substrate, typically, silicon.
A sketch by Grant Compton that accompanied Moore’s article showed three stalls in a market. One was selling cosmetics, another ‘notions’, and a third sold ‘handy home computers’ so small that they could fit in the palm of the hand (Moore, 1965, p. 115). Although the use of integrated circuits was restricted primarily to the military at the time of Moore’s article, he charted a massive increase in their use in a range of personal, commercial and governmental applications. ‘‘Integrated electronics will make electronic techniques more generally available throughout all of society, performing many functions that presently are done inadequately by other techniques or not done at all.’’ (Moore, 1965, p. 115). The principal advantages, he argued, would be ‘‘lower costs and greatly simplified design’’ (Moore, 1965, p. 115). If Moore’s technological predictions were startling to contemporaries, equally astounding were his forecasts about the economic aspects of integrated circuits. His overall argument was that ‘‘economics may dictate’’ the continued development of more powerful semiconductors (Moore, 1965, p. 114). According to Moore: ‘‘Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate.’’ (Moore, 1965, p. 115). Advances in science would allow for continued miniaturization of electronic elements, Moore thought. Architects would be able to pack more of them on a given area of silicon, increasing the speed and power of semiconductors. But such miniaturization should also result in significant cost reductions, insofar as a greater number of semiconductor devices could be fabricated on a wafer of given diameter. More precisely, Moore argued, the costs of adding electronic elements to a device would decline rapidly until one reached the limits of manufacturing capabilities at any point in the development of semiconductor technology. Beyond some level of density, the benefit of adding more elements would be offset by the extra expense of manufacturing a defect-free product. Moore
(1965, p. 115) graphed the inter-temporal relations between the number of electronic elements on a semiconductor, which he termed ''device complexity'', and cost. The resultant series of u-shaped curves, shifting downward and to the right, is reproduced in Fig. 1. In 1959, the most complex semiconductor that could be manufactured cost effectively contained just 1 electronic element. A comparable device in 1962 contained 10 components, rising to between 50 and 60 components by 1965 when Moore examined a ''micro-logic'' device about to be produced by Fairchild Semiconductor, of which he was then the director of research. On a semi-log plot, the small number of data points available to him ''fell close to a straight line that doubled the complexity [for minimum cost] every year up until 1965'' (Moore, 1995, p. 3). From this, he extrapolated in a straight line for the following decade, and said that he saw no reason why this rate of increase should not continue. If it did, a commercially available semiconductor of 1975 would contain about 65,000 electronic elements (Fig. 2).
Fig. 1. Curves of device complexity for minimum cost. (Adapted from Moore (1965, p. 115).)
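The economics underlying these curves can be conveyed with a toy cost model: integrating more elements spreads fixed per-die costs over more components, until yield losses from defects dominate. The sketch below is our own illustration; the parameters are invented, not Moore's data:

```python
import math

# Toy model of cost per electronic element as a function of device complexity.
# A fixed per-die cost favours integration; a Poisson defect model penalizes
# large dies. All parameters are hypothetical.
def cost_per_element(n, wafer_cost=100.0, elements_per_cm2=1000.0,
                     wafer_area_cm2=30.0, defects_per_cm2=0.5,
                     package_test_cost=1.0):
    die_area = n / elements_per_cm2                         # cm^2 needed for n elements
    dies_per_wafer = wafer_area_cm2 / die_area
    yield_fraction = math.exp(-defects_per_cm2 * die_area)  # share of defect-free dies
    cost_per_good_die = wafer_cost / (dies_per_wafer * yield_fraction) + package_test_cost
    return cost_per_good_die / n

for n in (10, 100, 1000, 10000):
    print(n, round(cost_per_element(n), 4))   # falls, bottoms out, then rises
```

Improvements in lithography, materials and yields shift the minimum of such a curve downward and to the right over time, which is precisely the succession of curves that Moore plotted.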
Fig. 2. Number of electronic elements ''crammed'' on an integrated circuit. (Adapted from Moore (1965, p. 116).)
Fig. 3. Rate of increase in electronic elements per chip as predicted in 1975. (Adapted from Moore (1995, p. 9).)
In 1975, Moore revisited the subject in a paper given at the December meeting of the Institute of Electrical and Electronics Engineers (IEEE). His prediction of a decade earlier, that an integrated circuit containing 65,000 elements would be available by 1975, had proved accurate – a memory device with that density was in production at Intel, where he was then president and chief executive officer. In the IEEE paper, he identified the technical factors that he considered had enabled the industry to meet his predictions of a doubling of the number of electronic elements per semiconductor device every year for a decade. Continued reduction in the sizes of transistors and other elements had been made possible particularly by advances in lithography. Also, the industry had improved the properties of materials and manufacturing processes, so that denser semiconductor devices could be fabricated with fewer defects and without sacrificing yields. With respect to both factors, Moore claimed, there was no fundamental barrier in laws of physics or engineering practice to a continuation of historical rates of improvement. The third factor he identified as ''circuit and device cleverness'', by which he meant improvements in chip architecture and design (Moore, 1975, pp. 12–13). Less sure that this third factor would persist as an important contributor to device complexity, Moore revised his projections for the slope of the rate of increase. This revised prediction indicated that the new slope might approximate to a doubling of device complexity for minimum cost every two years, rather than every
year (Fig. 3). It is to this particular prediction that the term Moore's Law came to be applied during the later 1970s.19

19 See Hogan (1977) and Noyce (1977) for early uses of the phrase Moore's Law. Hogan was at the time vice-chairman of Fairchild Semiconductor and Noyce, along with Moore, was a co-founder of Intel. We are grateful to Schaller (2004) for these references.

Moore's prediction envisaged strong and beneficent links between science and the economy. For so long as research could result in denser circuits every two years, the economy could be expected to benefit. More powerful semiconductors would be available for an ever widening array of applications affecting private life, business, and government. And the pursuit of such applications would be made all the more likely and attractive by large cost reductions. Of course, Gordon Moore was not alone in envisaging such an attractive prospect for microelectronics. Other commentators had also argued that continued miniaturization and integration of electronic circuits provided a key to development and growth (Schaller, 2004, pp. 287, 395). Moore's distinct contribution was to express graphically a set of relations between increased functionality and reduced cost that, in the terms of an Intel executive, ''just turned out to be possible early on'' and then ''got built into the economics of things''. By the latter, he meant that the predictions had become a guide to capital budgeting decisions: ''Moore's Law is not a law of physics. On the other hand, it's a pretty strong economic law, because once the industry deviates from Moore's Law, then the rate of investment is going to change, and the whole structure [of the industry] will change.''20

20 Research Fellow, Intel Corp., interviewed by the authors, November 3, 1997.

In an effort to ensure that investment rates would continue to give effect to Moore's ambitious predictions, they were incorporated in what we have termed a mediating instrument. The intent was to mediate between the strategies and the capital spending programs of an extensive and changing set of firms and agencies, to align them with one another and with an imperative to preserve and extend American pre-eminence in semiconductor and related industries. We trace the development of that instrument in the next section of the paper.
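As a minimal sketch of the extrapolations just described (ours, using the stylized starting point of roughly 60 elements reported above), the two doubling rules translate into a few lines of Python:

```python
# Sketch: Moore's 1965 and 1975 extrapolations of device complexity.
# Starting values are stylized; the text reports 50-60 elements in 1965.

def project(elements: float, year_start: int, year_end: int,
            doubling_period: float) -> float:
    """Extrapolate device complexity under a fixed doubling period."""
    return elements * 2 ** ((year_end - year_start) / doubling_period)

# 1965 prediction: doubling every year for a decade.
print(project(64, 1965, 1975, 1))      # ~65,536 -- the "about 65,000" of Fig. 2

# 1975 revision: doubling every two years thereafter.
print(project(65_536, 1975, 1985, 2))  # ~2.1 million elements by 1985
```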
Technology roadmaps as mediating instruments

Restoring American pre-eminence in semiconductors

Moore's graphs connected an imperative for regular and recurrent technological innovation with an imperative that was financial. Only by constantly innovating technologically could one achieve apparently limitless reductions in cost per electronic element. By the mid 1980s, however, it was distinctly unclear whether such innovation would continue to be carried forward by American firms and research agencies. An advisory committee to Congress referred to a ''strategic industry at risk'', with grave consequences for the wealth and security of the US. The committee chairman, Ian Ross of AT&T, wrote as follows to President Bush in November 1989: ''The semiconductor industry in the United States is in serious trouble. If this vital industry is allowed to wither away, the Nation will
pay a price measured in millions of jobs across the entire electronics field, technological leadership in many allied industries such as telecommunications and computers, and the technical edge we depend on for national security.'' (Ross, 1989; introduction)

All of its advanced industries relied on products incorporating the latest integrated circuits. But the country had lost its leadership in semiconductor design and production, the committee argued. The revenues of American semiconductor firms had almost tripled between 1981 and 1988, while those of Japanese rivals increased sixfold. The situation was even more troubling when specific product areas were analysed, such as the advanced memories (or DRAM devices) that were deemed crucial at the time to industrial and defence applications. US firms had sold the first integrated memories in 1970. By 1988, 80% of a rapidly growing DRAM market had been ceded to Japan. While such precipitous declines had damaged integrated circuit manufacturers, they had all but eliminated US suppliers of the materials and equipment used for semiconductor production. By the end of the 1980s, the US was importing 97% of the most advanced silicon wafers and 68% of the highest specification lithography machines from foreign suppliers (National Advisory Committee on Semiconductors, 1989, pp. 8–14; 1990a, 1990b).

How to restore pre-eminence in semiconductors was the subject of an extensive set of policy proposals, some of them familiar from wider debates on the ''decline of American manufacturing'' (Cohen & Zysman, 1987; Dertouzos, Lester, & Solow, 1989; Miller & O'Leary, 1993, 1994a). Particular significance was attached to initiatives concerning capital investment. US semiconductor firms consistently invested less than their Japanese counterparts, it was argued, lagging the latter by some $12 billion during the period 1984–1989 (National Advisory Committee on Semiconductors, 1990b, p. 1). Gordon Moore's stunning forecasts of the benefits of aggressive investment in semiconductors had been based on US experience. But Japan, it seemed, would shape the future of the industry. America's inferior level of
commitment to technology development was held to have been a key factor in its loss of world market share, and Ross’s committee proposed ways to redress the problem. One measure that they advocated was to correct certain perceived deficiencies in the workings of the nation’s capital markets. Japanese firms could procure capital at lower cost than their US counterparts, it was argued, and investors there were more willing to await returns from long-term technology development. This had lowered the risks and increased the time-horizons for capital budgeting decisions, a key advantage in an industry of increasing capital intensity. The committee proposed that the US industry, government, and private and institutional investors, should contribute to a consumer electronics capital fund. It would provide low-cost, long-duration capital on terms at least equal to those believed to be available in Japan (National Advisory Committee on Semiconductors, 1990b, p. 22). This proposal was rendered irrelevant almost as soon as it was announced (Macher, Mowery, & Hodges, 1998). By the time Ian Ross’s committee presented its final report to Congress and the President in 1992, American semiconductor firms were regaining market share. It was a consequence not of acting on capital markets but, rather, of substituting one particular type of semiconductor product, the microprocessor, for another, the DRAM or memory device. As Macher et al. (1998, p. 112) argue, the substitution allowed American firms and research agencies to concentrate their skills on an area of relative advantage. Investment in complex microprocessor products offered significantly higher margins, and demand for them was to increase rapidly during the 1990s. By 1997, US firms held over 50% of the total world market for semiconductors. The share of Japanese producers had shrunk to just 29%, reflecting their apparent slowness in shifting to growth markets for microprocessors and increased competition in DRAM products from Taiwanese and South Korean firms (Macher et al., 1998, p. 112). A second proposal of Ross’s committee regarding investment practices was to have enduring consequences, however. Its concern was the lack of coordination by firms and research agencies
of their capital spending on ‘‘pre-competitive’’ R&D programs. These were understood as programs to develop common, underlying technologies for the US semiconductor industry as a whole, excluding specific and proprietary products: ‘‘Very large, pre-competitive investments in people, technology, and facilities must be made years before products are ready for market.’’ (National Advisory Committee on Semiconductors, 1989, p. 20). The total amount of such investments increased each product generation, it was noted. But, despite relaxations in anti-trust law, the nation lagged its Far East rivals in cooperative endeavours. Companies appraised and undertook research projects independently. A consequence was that of ‘‘limiting the financial resources applied to any single [R&D] effort to those that could be borne by a single firm’’ (National Advisory Committee on Semiconductors, 1989, p. 20). There was little sharing of information and expertise to manage and coordinate the development and instantiation of new technologies. Ross’s committee instanced the case of X-ray lithography, regarded at the time as a crucial, common technology that would be required for patterning all types of semiconductors in the future: ‘‘[Its] development is expected to be extremely expensive and could greatly benefit from the pooled resources of the semiconductor industry. The US effort in X-ray lacks the breadth, cooperation, and organization of the program in the Far East’’ (National Advisory Committee on Semiconductors, 1989, p. 20). Moore’s Law had modelled a strikingly beneficent relation between science and the economy at a highly abstract and simplified level. Continued enactment of that relation seemed now to require an instrument to coordinate and align capital investment decisions across the boundaries of firms and research agencies, to their advantage and that of the nation. The task of devising such an instrument was to fall to a group chaired by Gordon Moore himself.
The codification of a ''technology roadmap''

In November 1992, Moore, then chairman of Intel, convened a meeting in Irving, Texas, of almost 200 scientists, engineers and semiconductor technologists. He was acting as chairman of the Semiconductor Industry Association's technology committee. The attendees represented major US semiconductor firms, their suppliers and customers, as well as universities, government agencies and national laboratories. The aim in bringing them together, Moore observed, was to ''create a common vision of the course of semiconductor technology'' for a 15-year period. This was to be reflected in ''a set of charts'' available to all industry participants to help coordinate their investments in research and the development of advanced technologies (Moore, reported in Semiconductor Industry Association (1992, p. iii)).21

21 The formation of Moore's group followed from an earlier effort to devise such a set of charts or roadmaps by the National Advisory Committee on Semiconductors and the government's Office of Science and Technology Policy. That initiative was regarded by key industry participants as too focused on one type of semiconductor product, and as failing to afford a basis for continued monitoring of progress in technology development (Schaller, 2004).

Ian Ross's committee had argued that significant capital misallocations resulted from interdependent investment decisions being taken in isolated, un-coordinated ways by individual firms and research agencies. Moore's group sought an instrument that would mediate between those investment decisions, bringing them into alignment without breaching the confidentiality of individual companies' capital budgeting processes or seeking to determine their technology choices. The first ''set of charts'' was published in 1992 as a US National Technology Roadmap for Semiconductors, after which Moore stepped down as chair of the steering committee. Of the six revised versions published since then, the first two – of 1994 and 1997 – were developed by the American industry alone. Thereafter, the focus shifted to how national leadership in advanced semiconductors might be enacted through international collaborations. Intel and other US firms, including
AMD and Hewlett-Packard, had established the dominant designs for advanced logic devices during the 1990s. To continue to develop such designs and to exploit the benefits of them, alliances and joint ventures had been established with research agencies and firms in several regions of the world (cf. Semiconductor Industry Association, 1999; foreword). The aim was to ensure that American semiconductor firms could influence the development of the most advanced materials and technologies, wherever these were devised. Far from wanting an integrated national infrastructure for semiconductor production, as Ian Ross’s committee had advocated only a few years previously, US producers now declared the existence of a ‘‘global industry’’. Pre-competitive scientific and R&D programs were to be coordinated internationally. In 1998, the trade association of US semiconductor firms joined with its counterpart organizations in Europe, Taiwan, South Korea and Japan itself, to provide: ‘‘a forum for international discussion, cooperation, and agreement among the leading semiconductor manufacturers and the leading suppliers of equipment, materials, and software, as well as researchers from university, consortia, and government [laboratories].’’ (Semiconductor Industry Association, 2001, p. ii). An outcome was to be the publication of an International Technology Roadmap for Semiconductors, updated annually. Table 1 summarises top-level data from the 2001 edition. Each of the labelled columns – (TN0, TN1, . . ., TN5) – refers to a ‘‘technology node’’, that is, a year by which more advanced types of semiconductor products were expected to be available for sale. The timing of each node, and its attendant levels of product functionality and desired manufacturing cost, were derived from Moore’s Law. Ideally, a new semiconductor should provide ‘‘twice the functionality every two years at a constant cost .. per chip’’ (Semiconductor Industry Association, 2001, Executive Summary, p. 51). This meant that, as the number of electronic elements was increased by a factor of 2 every two years, total manufacturing cost per
element should be reduced by 29% per year.22 And if the rate of increase in functionality were to slow, because of economic downturn, for instance, or delay in commercializing some novel technologies, the industry should nevertheless seek to maintain that rate of cost reduction. Thus, the 2001 edition of the Roadmap envisaged a doubling in the number of electronic elements per microprocessor at slower, 3-year intervals out to 2016, but cost per element was to continue to reduce at its historic rate of 29% per annum (Table 1). Such frequent, steep reductions in cost per element had been crucial to the industry's historic rates of growth, it was argued, enabling semiconductor manufacturers to withstand ''market pressure to continue to deliver twice the functionality on-chip every [two years approximately] in an environment of constant or reducing prices'' (Semiconductor Industry Association, 2001, Executive Summary, p. 51). And it was ''a basic premise'' of the Roadmap that maintaining such a relation between functionality and price would promote increases in demand in line with an historical average, estimated at 15% to 17% annually since the early 1970s (Semiconductor Industry Association, 2001, Executive Summary, p. 1).

22 If the number of electronic elements on a chip doubles during a two-year period at a constant cost per chip, then cost per electronic element has fallen by 50% during the period. This amounts to an annual rate of cost reduction of 29%.

Table 1
International Technology Roadmap for Semiconductors, 2001 Edition, data adapted from top-level charts (a)

Year of first production:                      2001     2004     2007     2010     2013     2016
Technology node:                               (TN0)    (TN1)    (TN2)    (TN3)    (TN4)    (TN5)

Expected shifts in product functionality and manufacturing cost – high-volume microprocessors
Electronic elements per chip at introduction
  (billions of transistors):                   0.193    0.386    0.773    1.546    3.092    6.184
Multiple per 3-year technology cycle:          –        2        2        2        2        2
Affordable production cost per element at
  introduction (microcents):                   176      62       22       7.78     2.75     0.97
Rate of cost reduction per cycle:              –        65%      65%      65%      65%      65%
Annualized cost reduction per element:         –        29%      29%      29%      29%      29%
Chip size at introduction (mm2):               280      280      280      280      280      280

Innovations in lithographic equipment for microprocessor manufacture
Electronic feature size, 1/2 pitch (nm):       150      90       65       45       32       22
Printed gate length (nm):                      90       53       35       25       18       13
Physical gate length (nm):                     65       37       25       18       13       9
Type of lithographic technology:               Deep ultraviolet (DUV) through 2007; extreme-ultraviolet (EUV) or electron projection lithography (EPL) from 2010

Innovations in raw materials
Silicon wafers: wafer diameter (mm) in
  high-volume production:                      300      300      300      300      450      450

(a) Semiconductor Industry Association, International Technology Roadmap for Semiconductors (Austin, TX: SIA, 2001).

In the terms of Intel president and CEO, Craig Barrett, interviewed shortly after publication of the third edition of the Roadmap, reliance on industry history was important in justifying large capital commitments: ''You spend $2 billion, and you don't have a designated product or even a customer for the product. .. You do things on, I won't say 100 per cent hope and faith, but we absolutely do rely on past history. We rely on a sense of where the technology has been .. to make major capital bets either on manufacturing technology development or product design.''23

23 Intel president and CEO, Craig Barrett, interviewed by the authors, December 17th 1998 (see Miller & O'Leary, 2005a, 2005b).
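The annualization in footnote 22 and Table 1 can be checked directly (the arithmetic is ours, though the figures are the Roadmap's). Doubling the number of elements every two years at constant cost per chip halves the cost per element over two years:

\[ 1 - \left(\tfrac{1}{2}\right)^{1/2} \approx 0.293, \]

that is, roughly 29% per annum. Equivalently, a 65% reduction per 3-year technology cycle leaves 35% of the cost, and

\[ 1 - 0.35^{1/3} \approx 0.295, \]

again approximately 29% per annum.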
The ‘‘challenge’’ was to justify such investments by perpetuating the short and seemingly successful product introduction cycles of the past (Semiconductor Industry Association, 2001, Executive Summary, p. 1). The preparation and publication of an International Roadmap was to help meet the challenge by effecting a mediation between the strategies and the investment programs of legally separate entities, dispersed geographically and across an extensive range of industries. The Roadmap has been presented from its earliest editions as an instrument for effecting such mediation, by influencing the formation of markets for advanced semiconductor products and for the inputs needed to manufacture them. Take the 2001 edition as an instance. The starting point in its preparation was said to have been a ‘‘consensus building process’’ (Semiconductor Industry Association, 2001, Executive Summary, p. i) in which delegates from the world’s major semiconductor producers arrived at forecasts of device functionality and affordable manufacturing cost. Each firm would pursue its proprietary product development approach. But delegates had nevertheless reached a shared opinion regarding key, generic aspects of product development out to 2016. Producers aimed to at least double product functionality every three years, slower than the two year intervals that had been achieved during the 1990s. And they would seek manufacturing cost reductions per three year period of roughly 65% (Table 1). If that sequence of innovation were enacted, then, to take just one example, it should result in markets being made by (TN4), in 2013, for microprocessors containing 3.092 billion electronic elements per chip of about 280 mm2 (Table 1). But, high-volume manufacture of those products would be possible only if markets were also made for novel raw materials, production systems, and manufacturing equipment sets. The continued enactment of Moore’s Law would require the formation of an interlinked set or cascade of markets. Investments by one set of firms and agencies to commercialize more powerful microprocessors should align with those of others, to provide larger silicon wafers of 450 mm rather than 300 mm in diameter, for instance, and lithographic systems capable of patterning electronic elements of just
13 nm in physical gate length (Table 1). At the date of publication of the Roadmap, in 2001, coordinated programs of R&D aimed at growing and instantiating larger wafers, traditionally regarded as one of the most difficult and expensive kinds of transitions made by the industry, had not yet begun. And lithographic systems promising sufficiently small electronic elements existed only in the laboratory, with significant doubt as to whether it would be possible to devise the light sources, mirrors, masks, chemicals, and other components to make a technology fully operable (Semiconductor Industry Association, 2001, Lithography Section, pp. 1–5). The intent in preparing and publishing the Roadmap has been to guide and monitor the making of such sets of markets through acting on the investment decisions of incumbents and potential entrants to the semiconductor components industry. For if investments were not timed, staged, and sequenced in a coordinated way, it has been argued, the likelihood of systemic change would be reduced significantly: ‘‘In semiconductor manufacture, progress tends to occur in discrete generations where all the technology elements need to be in place before a transition can be made to the next generation’’ (Semiconductor Industry Association, 1994, p. 27). The lengthy process of seeking to develop and instantiate a ‘‘post-optical lithography’’ provides a particularly complex illustration.
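The top-level arithmetic by which such node targets cascade from the Roadmap's two rules – at least a doubling of functionality, and a roughly 65% cost reduction, per 3-year cycle – can be reproduced in a few lines. The following sketch is ours, for illustration only; the published figures in Table 1 reflect finer-grained modelling and rounding:

```python
# Sketch: regenerating the top-level node targets of Table 1 from the two
# Roadmap rules. Values are approximate; the published 2001 edition differs
# slightly in later nodes because of rounding and more detailed modelling.

ELEMENTS_TN0 = 0.193  # billions of transistors per chip at TN0 (2001)
COST_TN0 = 176.0      # affordable microcents per element at TN0 (2001)

for k, year in enumerate(range(2001, 2017, 3)):  # nodes TN0..TN5
    elements = ELEMENTS_TN0 * 2 ** k             # double per 3-year cycle
    cost = COST_TN0 * 0.35 ** k                  # 65% cost cut per cycle
    print(f"TN{k} ({year}): {elements:6.3f}B elements, "
          f"{cost:7.2f} microcents per element")
```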
Producing investments and making markets: The case of post-optical lithography

The ''twilight'' of optical lithography

A lithographic process that involves beaming light through lenses has served for several decades to pattern electronic features on silicon. Through persistently reducing the wavelength of the light source while increasing the numerical aperture of the lens, it has been possible to pattern ever smaller elements. By the 1990s, however, optical lithography was said to be entering its ''twilight phase''
(Geppert, 2004, p. 31). Continued miniaturization would require the instantiation of a ''post-optical'' approach.24

24 Various terms have been used to depict the technology that might replace so-called optical systems. These include ''next-generation'' and ''post-optical'' lithography (Silverman, 2005). We use the latter phrase throughout the manuscript to refer to fundamentally different sets of lithographic components and infrastructures deemed necessary for the effective patterning of very small electronic elements shorter than about 32 nm in length.

As the 1999 edition of the Roadmap reported: ''The introduction of [post-optical] lithography will be a major paradigm shift that may be necessary to meet the technical requirements and complexities driven by Moore's Law. This shift will drive major changes throughout the lithographic infrastructure and require significant resources to commercialize the system''.25

25 Semiconductor Industry Association, International Technology Roadmap for Semiconductors (San Jose, CA: SIA, 1999, pp. 152–3).

In the absence of such a shift, historic rates of increase in semiconductor functionality might come to an end for lack of an effective way to pattern elements shorter than 32 nanometres in length (Table 1). Our concern here is with how the Roadmap was used to mediate between the interests and strategies of organizations that have sought to develop and instantiate forms of post-optical lithography. We trace the contours of that mediation from the early 1990s, when the first edition of the Roadmap was published, to July 2004 when Intel Corp. installed the world's first commercial post-optical lithography tool. Three stages are identified and examined: the problematizing of investment in new technologies; the programmatic assessment of risk; and the demonstration and calculation of ''costs of ownership''.

Problematizing investment in the creation of new technologies

By problematizing, we mean a process of convincing various entities, including firms, industries
and government agencies, to invest in synchronized ways to create a complex technology. An assumption is that the requirements for investment and its purposes are not given and obvious, but are themselves to be constituted and made salient (cf. Miller, 1991; Miller & Rose, 1990). Problematization is aimed at ensuring the production of interrelated investments across different arenas. It comprises a key early stage in the process of mediation. In the case of post-optical lithography, an initial step in problematization was to provide credible indications of when various agencies should invest. The early incorporation of Moore's Law in the semiconductor Roadmap proved particularly significant in that regard. It made the lengths of technology cycles calculable. Estimates could be constructed as much as ten to fifteen years in advance of when existing components and materials used in the production of semiconductors would reach their limits. So, for instance, the drafters of the first Roadmap in 1992 could argue that there was a compelling need for immediate investment to devise post-optical lithography. Given a continuation of Moore's Law, the limits of existing approaches could be reached in a decade to a decade and a half. The continued sequence of miniaturization would result in electronic elements being too small by then to be formed by traditional, optical methods. Light of sufficiently short wavelength would be absorbed rather than refracted by lenses. Miniaturization, the ability to manufacture more powerful semiconductors, and the perceived economic benefits of Moore's Law, would all come to a halt. If experience were a guide, it could take a decade or more to devise a new lithographic system and to make markets for the many components comprising it. The Roadmap sought immediate investment on the part of suppliers, consortia, and research agencies. This was to be aimed at extending the life of optical lithography in the short-term, and in the longer-term at devising a viable post-optical solution for which ''a systems' capability [and] proof-of-concept [were] to be demonstrated early'' (Semiconductor Industry Association, 1992, p. 28). Later editions of the Roadmap also made public a key expectation on the part of the world's major semiconductor firms, the eventual buyers of advanced lithographic systems. These firms would almost certainly select and install only one system of post-optical lithography. The reward for devising that successful system would be to amortize the investment across world-wide demand for several decades. But, as the history of the industry seemed to demonstrate, mere calls for capital spending could go unheeded notwithstanding the prospects set out by Moore's Law.

A further step in problematizing was the provision of an infrastructure within which incumbent and new suppliers and research agencies could gauge the potential returns and attendant risks of investing in post-optical lithography. The 1997 edition of the Roadmap noted that capital spending had begun on several candidate technologies, and undertook an assessment of each one (Semiconductor Industry Association, 1997, pp. 82–98). Among the systems that had been proposed were those centred on X-ray, electron-beam projection and extreme-ultraviolet technologies. The merits of each one had been articulated by key firms or research agencies. The X-ray system had ''had the most resources applied to its development both in the United States and Japan'' (Semiconductor Industry Association, 1997, p. 88). IBM had long been a proponent of the approach and had sought to foster and coordinate innovation internationally in its unique components and infrastructure. With respect to the second and third alternatives, electron-beam projection and extreme-ultraviolet, the Roadmap concluded that both were at an earlier stage of development. These systems were ''being studied'' and, ''if successful in the laboratory'', would be candidates ''for manufacturing applications in the future'' (ibid., p. 88). Lucent Technologies (formerly part of Bell Laboratories) was a known proponent of the electron-beam projection system, but extreme-ultraviolet technology lacked a key corporate sponsor.26
26 Intel and the US Department of Energy (DOE) Laboratories had collaborated on extreme-ultraviolet research from about 1990, with each agency bearing the costs of its own scientific programs. However, cuts in Congressional funding for such collaborative arrangements brought the Intel – DOE scheme to a halt during 1995.
It was a conclusion of significant concern to executives from three semiconductor producers – Intel, AMD, and Motorola. Their worry was that the extreme-ultraviolet alternative would slip off the industry Roadmap prematurely. They sought to promote renewed investment in it by establishing an infrastructure that would enable suppliers and other agencies to better manage their risks and assess their potential returns: ''Today we're talking about [reinstating] a third approach, what has been called EUV or Extreme Ultraviolet. While it certainly isn't obvious at the present time, [and] all three choices are complicated, expensive and technically very difficult, ... we think EUV ... is really a very strong contender''.27

27 Intel chairman emeritus, Gordon Moore, speaking at the launch of the EUV LLC partnership, September 11th, 1997. Reported in an Intel Corp. press release of the same date.

The comments are those of Gordon Moore, speaking as chairman emeritus of Intel Corp at a press conference on September 11th 1997. He was addressing the relative merits of X-ray, electron-beam and extreme-ultraviolet as alternative systems of post-optical lithography. Sharing the podium with him was US Energy Secretary Federico Pena. In attendance were the directors of the Livermore, Sandia, and Berkeley National Laboratories which, collectively, were termed the US Virtual National Laboratory. What they announced was the formation of a partnership between the private and public sectors. A consortium formed by Intel, AMD and Motorola, and termed the EUV LLC, would fund the Virtual National Laboratory to serve as an expert systems integrator for the unique components comprising extreme-ultraviolet lithography. The components would be devised by an extensive set of supplier firms, research agencies, and industries, but tested and integrated as an operable system in collaboration with the Virtual National Laboratory.

Problematizing had begun by using Moore's Law to estimate when world-wide demand would exist for post-optical lithography. As the major semiconductor producers had hoped, several
potential and competing post-optical systems had been proposed. Developing any one of them to the point at which its viability could be assessed would require significant investment in an overall design or architecture for that system. And it would require investment by a diverse array of firms and agencies to create initial or test versions of the many unique and distinct components comprising the system. At this lower and more detailed level, problematizing involved the articulation of the merits and attributes of each system by powerful proponents, such as the EUV LLC. Also, and crucially, it required a means of demonstrating whether individual components would integrate as a system. For investment on the part of some suppliers and research agencies might be forthcoming only if they could be assured of corresponding capital commitments by others. Problematizing required means of establishing and monitoring the relations of complementarity, or mutual reinforcement, of investments made across diverse arenas and by many different firms and agencies (cf. Miller & O'Leary, 1997, 2005a, 2005b; Pettigrew et al., 2003). And it was this role that was to be filled in the case of extreme-ultraviolet lithography by the partnership formed between the EUV LLC and the US Virtual National Laboratory. As Secretary Pena explained, the Department of Energy would lend its authority and objectivity as the agency that had led basic research on extreme-ultraviolet lithography, itself an outgrowth of the ''Star Wars'' initiative of the Reagan era: ''The Department of Energy created this technology now known as Extreme-Ultraviolet Lithography or EUV at three of our national laboratories – Livermore, Sandia, and Berkeley. The technology behind EUV was developed as part of our efforts to ensure the safety, reliability and security of our nation's nuclear weapons stockpile.''28
28 US Energy Secretary, Federico Pena, speaking at the launch of the EUV LLC partnership, September 11th, 1997. Reported in an Intel Corp. press release of the same date. See also Fasca (1997) and Parker (1999).
Using funds provided by the private consortium, EUV LLC, the Virtual National Laboratory would continue with core areas of R&D. But it would act also to delineate an overall architecture for an extreme-ultraviolet system, and to integrate the many components of that system until an ''alpha'' or ''proof-of-concept'' tool had been produced and demonstrated. This integrative role was regarded as imperative for the continued production of investment in extreme-ultraviolet lithography. Its development was far beyond the capabilities of the assemblers of existing, optical systems. Capital spending would be required on the part of many specialist and geographically dispersed firms and research agencies, some not previously involved in the semiconductor industry.

Problematization was aimed at the production of investment in alternative systems of post-optical lithography. But as development of those systems progressed, a second phase in mediation became important. It was aimed at convincing some sets of firms and agencies to withdraw from approaches deemed likely to fail, and at encouraging increased investment to make markets for a successful system of post-optical lithography. It is a process to which we turn in the next two sections of the manuscript. We trace in those sections how extreme-ultraviolet came to be regarded as the system most likely to succeed, albeit with delays and surprise intrusions on the part of other and wholly unexpected forms of lithography. And we illustrate the continued roles of the US Virtual National Laboratory in integrating the products of many suppliers and research agencies as a system, and in demonstrating and calculating that system's expected costs-of-ownership.

The programmatic assessment of risk

Between October 1997 and December 2005, the consortium responsible for the Technology Roadmap convened at least eight conferences of the world-wide industry concerning post-optical lithography.29

29 Five conferences had been held by December 2001, and an additional three by December 2005. See Hand (2001); also International SEMATECH web-site (www.sematech.org).

The delegates were senior executives
from the world's major semiconductor producers, incumbent and potential suppliers, and research agencies. The conferences were integral to the updating of the Roadmap that occurs annually. We argue that they were central, also, to how the Roadmap operates as an instrument of mediation between capital spending on science and R&D on the part of individual firms and agencies, and a much wider, industry-level process of making markets. There were deep-seated disagreements between sets of firms and research agencies as to which system for post-optical lithography could be made operable and established on world markets. Thus, for instance, as Intel, AMD and Motorola formed the EUV LLC during 1997, convinced that the future lay with extreme-ultraviolet, another set of agencies was abandoning that approach. Lucent Technologies discontinued capital spending on extreme-ultraviolet and X-ray systems to focus exclusively on electron-beam lithography. It planned to assemble a set of corporate and scientific agencies that would help integrate the novel components of that particular type of lithography, rivalling the EUV LLC of Intel and its partners.30 Such divergence of views was seen by semiconductor producers as highly desirable insofar as it resulted in the investigation and prototyping of competing systems. But, as the Technology Roadmap argued, means were then required to appraise and compare prototypes to discourage continued investment in systems deemed likely to fail: ''Although many technology approaches exist, the industry is limited in its ability to fund the development of the full infrastructure (exposure tool, resist, mask, and metrology) for multiple [post-optical lithography] technologies. Closely coordinated global interactions among government, industry, and universities are absolutely necessary to narrow the options for these future generations.''31

30 See Fasca (1997).

31 Semiconductor Industry Association, International Technology Roadmap for Semiconductors (San Jose, CA: SIA, 1999, pp. 152–3).
The international conferences provided a forum for proponents of each system to demonstrate its capabilities and expected costs-of-ownership. The presentations were to be made before a sceptical audience including scientists and technical experts from firms and agencies that supported alternative systems, and representatives of the final customers, the world's major semiconductor producers. At the end of each conference, a survey of attendees gauged their rankings of the approaches that had been demonstrated. At issue was to provide industry-wide guidance for the Roadmap consortium to narrow the set of R&D approaches that it would list for substantive analysis in the document. The fourth of the conferences was held in Reston, Virginia, in September 2000. Its purpose, as the chairman of the event expressed it, was to enable executives and technical experts, representing 70 US, European and Asian organizations, to assess the development status of several potential forms of post-optical lithography. Among them were X-ray, extreme-ultraviolet and electron-beam projection technologies.32 The case for continued industry support and investment in each one was argued before the conference by a ''champion'', typically a senior technical executive from a lead company promoting that technology. At stake was to convince the audience of its comparative merits on a range of criteria. These included the likelihood that the technology would be available world-wide when needed, whether it would meet all core operational requirements for multiple product generations, and whether the capital outlay to purchase the technology would be justified by factors such as its throughput rate and life-cycle operational costs. When surveyed at the conclusion of the presentations, delegates responded in overwhelmingly negative terms to X-ray lithography, most answering ''never'' when asked to specify when the technology was likely to be available for high-volume production.33

32 Semiconductor Industry Association (2000). We restrict our attention to these three alternatives in the interests of a parsimonious analysis.

33 See Semiconductor Industry Association (2000).

Delegates' concerns with the X-ray system, which had been promoted by IBM and key
agencies in the Japanese industry, centred on doubt that a critical mask or ''reticle'' component of the technology could be perfected and commercialized. With little support from delegates, a position that reinforced views expressed less trenchantly at the conferences of 1998 and 1999, X-ray lithography was eliminated from consideration at future conferences and from listing in the International Roadmap as an alternative for extensive analysis and discussion.34

34 Ion projection technology, too, was eliminated from the Roadmap, and also because of concerns regarding the availability of suitable mask components.

The de-listing did not preclude a firm such as IBM or proponents in the Japanese industry from persisting in their attempts to make markets for X-ray lithography. They might have redoubled their efforts to overcome the difficulties associated with the mask component. But with the de-listing came a highly significant and unfavourable shift in the risk profile of X-ray technology. The idea that continued capital spending could result in its commercialization and instantiation on markets had been dealt a serious blow. In the absence of debate at the conferences, and recognition in the International Roadmap, the system was unlikely to attract the levels of interest and the coordinated investments in infrastructure and components needed to bring it to fruition. The international conferences and the resultant processes of updating the Roadmap had effected a crucial mediation between the strategies of individual firms and the making of markets for post-optical lithography. IBM announced during 2000 that it would join the EUV LLC, thus indicating a major shift in its investment priorities toward extreme-ultraviolet lithography.35

35 See Chappell (2001). IBM's announcement came after the unfavourable industry-wide reception of X-ray lithography at the 1999 international conference, and prior to the confirmation of that view of its prospects at the event of September 2000. This left the system with support from interests within the Japanese industry only.

At the conclusion of the September 2000 conference, two potential post-optical technologies were recommended by delegates for continued and extensive analysis in the International Roadmap
– extreme ultraviolet, associated with the EUV LLC, and electron-beam projection lithography as promoted by Lucent and its partners. The Roadmap consortium would continue to monitor them, and to provide funding to help test their feasibility. But the series of conferences did not result in a simple progression from less to more likely candidates, easily agreed upon. Neither was the process without its surprises. For instance, an ''immersion'' system was entered on the International Roadmap in 2003.36

36 See Hand (2004).

By adding a film of water between the lens and the silicon wafer, this technology held the promise to extend significantly the life of existing, optical forms of lithography. An immersive component would allow such ''systems to print wires and spaces once thought impossibly narrow'' (Geppert, 2004, p. 30). It was expected to defer the date by which a post-optical form of lithography would be required on international markets. At the time of writing (August 2006), the three primary assemblers of lithographic technology – ASML, Nikon and Canon – expected that the ''immersion'' approach would have run its course by about 2010 or 2011. It was anticipated that markets would have been made by then for an infrastructure and all components of an extreme-ultraviolet system of post-optical lithography. As we argue in the next section, practices of demonstrating and calculating costs-of-ownership were central to a process of building and maintaining support for the latter system.

Demonstrating and calculating costs-of-ownership

Engineers at Intel Corp. announced during July 2004 that the firm had installed a ''micro-exposure'' tool, the first commercially available device for fabricating test wafers using extreme-ultraviolet lithography. Its installation was a further stage in a process of instantiating and demonstrating test versions of the lithography that, if successful, would pave the way for the building of production tools by the world's major lithography assemblers (Table 2).
Table 2
Demonstrations of extreme-ultraviolet lithographic tools (a)

Tool type                      Supplier                           Start date   Location
Engineering test stand (ETS)   US Virtual National Laboratories   2001         Lawrence Livermore National Laboratory, Livermore, CA
Small-field research tool      Exitech (b)                        2004         Intel Corp., Santa Clara, CA
Full-field research tool       ASML (b)                           2007         EMEC Industry Consortium, Louvain, France & State University of New York, Albany, NY
Process development tool       Nikon (b)                          2006/7       Major semiconductor firms
Production tools               ASML, Nikon, Canon (b)             2009/10      Major semiconductor firms

(a) Adapted from Silverman (2005, p. 3).
(b) Commercial assemblers and suppliers of lithographic systems.
After some two decades of research, it was argued, extreme-ultraviolet technology finally showed signs of ''moving out of the research lab and toward the pilot-line environment''.37

37 Director of Components Research, ''Intel's EUV lithography process line'', Intel Corp. presentation, July 2004. The pace of development of a full-scale process based on extreme-ultraviolet lithography, and when it will be used in high-volume manufacture, remain uncertain. It is anticipated that improved forms of optical lithography may serve Intel Corp's manufacturing processes until at least 2010 or 2011.

The initial test tool was that of the US Virtual National Laboratory, devised on behalf of the EUV LLC of Intel and its partners and first demonstrated during 2001 (Table 2). Built as an Engineering Test Stand (or ETS), it operated at hourly throughput levels far lower than would be called for in commercial production. Its workings were subject to what one commentator described as a ''slew of problems''. Yet, its significance was in establishing that the components of an extreme-ultraviolet system could be made to work together as expected, if only under carefully contrived laboratory conditions. By 2001, the main alternative post-optical technology listed on the international Roadmap – electron-beam projection – was already weakened. A commitment to its eventual assembly came from only one firm (Nikon), whereas all three major assemblers (ASML, Nikon and Canon) would build extreme-ultraviolet machines. If the latter system thus had an edge, the heightened levels of investment to commercialize it for high-volume production were far from assured, however. In 2001, at an industry conference to update the international Technology Roadmap, delegates
concluded that extreme-ultraviolet still faced potential ''showstoppers''. These were defined as intractable design or engineering issues that could prevent the system from working effectively in the factory (Semiconductor Industry Association, 2001, p. 13). Mediating between the interests of various firms and agencies, to elicit continued capital commitments in the face of such difficulties, was a task undertaken primarily by the EUV LLC. It depended on a complex interplay of practices of demonstration and calculation of costs-of-ownership that centred on the Virtual National Laboratory's prototype tool (Fig. 4). The availability of that tool for industry-wide demonstrations was seen as highly significant for the production of investment, and for building sufficient confidence among suppliers and research agencies to make markets for extreme-ultraviolet lithography. The conformance of the system to relevant scientific models and expectations could be shown rather than just asserted. Also, it became possible to unpack and analyse physically the attributes of core components. At stake here was to identify and propose ways to overcome potential ''showstoppers''. So, for instance, delegates to the August 2001 Roadmap conference were satisfied that commercializing the aspheric mirrors posed low levels of technical risk.38

38 Reported in Semiconductor Industry Association (2001, p. 13).

But, insofar as these components were unique to the extreme-ultraviolet system, optics firms like Tinsley and Carl Zeiss might not make them production-ready if major doubts attended the perfectibility of other
components, such as the illuminator (Fig. 4). A cycle of mutually reinforcing investment in the various components of the system could be stalled as a consequence. And, as an EUV LLC program director acknowledged, improving the illuminator component was indeed one of the more intractable engineering issues.39 Unless the wattage of EUV radiation that it provided could be increased by a factor of 10, high-volume production of patterned wafers might be precluded.

Fig. 4. Schematic of an extreme-ultraviolet lithography system. (Adapted from C.W. Gwyn and Peter J. Silverman, ''EUV lithography: transition from research to commercialization'', Presentation at the Photo Mask Japan Conference, April 2003.) The Virtual National Laboratory tool comprised an ''illuminator'' that provided sufficient EUV radiation to pattern test wafers. At the ''reticle stage'', reflective masks tolerant of extreme-ultraviolet light contained the images of the integrated circuits. Sets of aspheric mirrors and ''reflective projection optics'' reflected the ultraviolet light and transferred the images of the integrated circuit to the wafer. EUV resist chemicals permitted the physical printing of the circuits onto silicon at the ''wafer stage''. The set of components operated within an environmental control system and in a vacuum. (C.W. Gwyn, Program Manager, EUV LLC, ''Extreme-ultraviolet lithography: technology review'', Presentation to International SEMATECH Next-Generation Lithography Workshop, August 29th, 2001.)
39 Perfecting suitable ''reticles'' or masks, and developing resist materials, were also regarded as severe technological problems affecting extreme-ultraviolet lithography. Our treatment here is restricted to the difficulties surrounding the illuminator component for brevity of analysis.
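The logic of complementarity at work here – markets for the system form only if every unique component attracts investment and is perfected – can be stated compactly. The sketch below is our own illustration, with invented readiness labels, not data from the Roadmap conferences:

```python
# Sketch: complementarity of component investments in a lithography system.
# Component readiness labels are invented, for illustration only.

components = {
    "illuminator (EUV power source)": "showstopper",  # wattage short by ~10x
    "aspheric mirrors and projection optics": "low risk",
    "reflective masks (reticles)": "high risk",
    "resist chemicals": "high risk",
}

# A transition to the next generation requires all technology elements to be
# in place (cf. Semiconductor Industry Association, 1994, p. 27): one
# unresolved component can stall investment in all of the others.
blocking = [name for name, status in components.items()
            if status == "showstopper"]

if blocking:
    print("Investment cycle at risk of stalling; showstoppers:", ", ".join(blocking))
else:
    print("All components in place; mutually reinforcing investment can proceed.")
```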
Seeking to unblock the investment process in the face of such difficulties involved two interlinked stages. One was to delineate a program and a set of timed interventions aimed at convincing the industry that the technical ''showstopper'' could be resolved. At the 2001 Roadmap conference, directors from the EUV LLC and the Virtual National Laboratory confirmed that parallel solutions were being pursued to the power-wattage problem. A variety of distinct power sources was being developed, and more than 10 specialist firms were engaged in the task. EUV LLC member companies had supported some of the smaller firms with venture capital. The contract between the EUV LLC and the Virtual National Laboratory had been extended in duration to provide overarching scientific support and integrative capabilities. A time-line was set out for improving the performance of the illuminator, on which progress would be reported at frequent intervals.40

40 G.D. Kubiak, Sandia National Laboratories, ''ETS scanned printing, environment and source'', Presentation to International SEMATECH Next-Generation Lithography Workshop, August 20th, 2001; also, C.W. Gwyn, Program Manager, EUV LLC, ''Extreme-ultraviolet lithography: technology review'', Presentation to International SEMATECH Next-Generation Lithography Workshop, August 29th, 2001.

The second stage was to establish that, if technical ''showstoppers'' were resolved, costs-of-ownership of an extreme-ultraviolet system would meet key benchmarks. Calculations of cost-of-ownership are utilized extensively throughout the semiconductor and related industries. They are intended to compare two or more systems or technologies by relating the capital costs and operating expenses associated with each one to measures of output and operational effectiveness. Table 3 shows the equation devised by the semiconductor industry consortium, International SEMATECH, to compute costs-of-ownership for alternative lithography systems. Its focus is on capital costs (represented by depreciation) plus operating expense related to an output, namely, the successful exposure or patterning of electronic elements on one level of a silicon wafer. Other relations are also utilized, such as capital cost plus operating expense related to throughput of silicon wafers per period.

Table 3
''Cost of Ownership'' Equation (a)

\[
C_{pwle} \;=\; \frac{C_e + C_l + C_f + C_c + C_r\,Q_{rw}\,N_c}{T_n\,Y_p} \;+\; \frac{C_m}{N_{wm}} \;+\; C_{other}
\]

where:
C_pwle  = cost per wafer level exposure
C_e     = annual cost of lithographic, coating and pattern transfer equipment (including depreciation, maintenance, and installation, using 5-year, straight-line depreciation)
C_l     = annual cost of labor
C_f     = annual cost of clean-room space
C_c     = annual cost of other consumables (condenser, laser diodes)
C_r     = unit cost of chemical resist
Q_rw    = quantity of chemical resist used per wafer level exposure
N_c     = number of wafer levels coated with chemical resist per year
T_n     = net throughput of wafer levels per year
Y_p     = average yield of good wafer level exposures
C_m     = cost of a mask (reticle)
N_wm    = number of wafer levels exposed per mask
C_other = other lithography related costs (etch, cleans, etc.)

In this model, cost-of-ownership of a lithography system refers to the allocated full cost (depreciation plus operating expense) of achieving one level or layer of electronic elements on a silicon wafer. The complete fabrication of chips on a wafer involves using lithographic tools to pattern as many as fifteen to twenty layers of circuits and interconnections. The International SEMATECH cost-of-ownership model incorporates more than 100 input variables per technology. Its use is usually linked to sensitivity analyses intended to highlight the variables to which cost-of-ownership is most susceptible.

(a) Adapted from P. Seidel, ''Initial Cost of Ownership Analysis for 45 nm Half Pitch 2009 Applications'', International SEMATECH, November 3, 2004. Reproduced by permission.

In progress reports to industry conferences from 2002 onward, EUV LLC delegates and others confirmed that extreme-ultraviolet power sources were indeed being boosted closer to commercial levels. Wattage was increasing while the costs associated with the illuminator component were reducing. By early 2006, what had been a potential ''showstopper'' was no longer regarded as the most significant issue affecting the commercialization of extreme-ultraviolet lithography. Such demonstrations of engineering advance were interlinked with costs-of-ownership assessments. At their most subtle and sophisticated, those assessments went far beyond mere applications of an intricate cost equation like that of Table 3.
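The bare mechanics of such an application are nevertheless easy to sketch. The following Python fragment implements the Table 3 relation; every numerical input is a hypothetical placeholder of ours, not a SEMATECH or Roadmap datum:

```python
# Sketch of the International SEMATECH cost-of-ownership relation (Table 3).
# All numerical inputs below are hypothetical, for illustration only.

def cost_per_wafer_level_exposure(
    c_e: float,      # annual equipment cost (depreciation, maintenance, installation)
    c_l: float,      # annual cost of labor
    c_f: float,      # annual cost of clean-room space
    c_c: float,      # annual cost of other consumables
    c_r: float,      # unit cost of chemical resist
    q_rw: float,     # resist used per wafer level exposure
    n_c: float,      # wafer levels coated with resist per year
    t_n: float,      # net throughput, wafer levels per year
    y_p: float,      # average yield of good wafer level exposures
    c_m: float,      # cost of a mask (reticle)
    n_wm: float,     # wafer levels exposed per mask
    c_other: float,  # other lithography-related costs per exposure
) -> float:
    """C_pwle = (C_e + C_l + C_f + C_c + C_r*Q_rw*N_c)/(T_n*Y_p) + C_m/N_wm + C_other."""
    annual_costs = c_e + c_l + c_f + c_c + c_r * q_rw * n_c
    return annual_costs / (t_n * y_p) + c_m / n_wm + c_other

# Hypothetical comparison in the spirit of the three-factor analysis discussed
# below: tool capital cost, reticle cost and throughput dominate the outcome.
for name, tool_cost, mask_cost, wafers_per_hour in [
    ("193i", 20e6, 100_000, 100),
    ("EUV", 20e6, 100_000, 100),  # the contested question: can EUV match these?
]:
    t_n = wafers_per_hour * 24 * 365 * 0.7  # wafer levels/year at 70% utilization
    cpwle = cost_per_wafer_level_exposure(
        c_e=tool_cost / 5,                  # 5-year straight-line depreciation
        c_l=500_000, c_f=300_000, c_c=200_000,
        c_r=50.0, q_rw=0.01, n_c=t_n,
        t_n=t_n, y_p=0.95,
        c_m=mask_cost, n_wm=50_000, c_other=5.0,
    )
    print(f"{name}: ${cpwle:.2f} per wafer level exposure")
```

On these invented inputs the two systems' costs-of-ownership coincide, which is the form of the argument pursued in the Silverman (2005) analysis discussed next: if capital cost, mask cost and throughput can be equalized, so can cost-of-ownership.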
At issue was not only to affirm the viability of an extreme-ultraviolet lithography, but also to shape expectations regarding cost and price in markets for the various components comprising the system. Take the calculations discussed by an Intel manager at an international lithography conference during 2004 (Silverman, 2005). The intent was to compare the estimated costs-of-ownership of an extreme-ultraviolet system to that of its optical predecessor, 193 nm immersion lithography, from the perspective of a high-volume semiconductor firm (Table 4). The relative costs-of-ownership of the systems in question would depend on three factors, it was argued, notwithstanding the large number of variables incorporated in the SEMATECH model. These were: the capital cost per machine; the cost per ''reticle'' or mask containing the image of the integrated circuit; and an output measure such as throughput of silicon wafers per hour. The analysis sought to convince delegates that, although ''[determining] the cost-of-ownership of a new lithographic technology is extremely difficult and very controversial'', there were no good reasons for assemblers, such as ASML, Canon and Nikon, to charge semiconductor firms a higher price for an extreme-ultraviolet machine than for a 193 nm immersion tool, once the former reached high-volume production levels. The benchmark capital cost was taken as $20 million for an immersion machine, based on assemblers' price lists and estimates.
The first versions of an extreme-ultraviolet tool, produced in very low volumes and to enable semiconductor firms to develop their fabrication processes, would carry a unit capital cost significantly higher than that. But, it was argued, if one analysed the components of both systems, it would become clear that the capital cost per EUV machine should fall quickly to that of a 193 nm immersion system (Silverman, 2005, pp. 4–5). Suppliers of illuminator components – specifically, the EUV power source – could indeed be expected to charge several times that for an Excimer laser (Table 4). Power (or light) sources might cost $2 to $5m per unit, depending on the specific application, compared to $1 to $2m per unit for the lasers. But this added cost to the system assemblers should be offset by lower expense elsewhere. For instance, even if aspheric-mirrors and optics for the EUV system cost more per unit than the lenses used in 193i lithography, fewer of them would be used per machine. Based on such an extensive component-by-component analysis, it was argued, ‘‘a good case can be made that the [capital] cost of an EUV exposure system [to a semiconductor firm] should not be significantly more than the cost of a 193 nm immersion exposure system’’. Turning from capital cost per machine to the expense of ‘‘reticles’’, it was held that, while significant additional cost attended the development of base materials or ‘‘substrates’’ for EUV masks, the costs of mask fabrication
Table 4
Capital cost comparison: EUV and 193 nm immersion systems (a)

Component/module     | 193 nm Immersion (193i)                                                   | EUV                                  | Relative cost
Light source         | Excimer laser                                                             | Plasma source                        | EUV ≥ 193i
Illumination optics  | Multi-element refractive                                                  | 6 element reflective                 | EUV ≤ 193i
Projection optics    | 30 element catadioptric                                                   | 6 element reflective                 | EUV ≤ 193i
Stages               | Air bearing; ambient                                                      | Air bearing; vacuum                  | EUV = 193i
Body structure       | ‘‘Endoskeleton’’; vibration isolated                                      | ‘‘Exoskeleton’’; vibration isolated  | EUV = 193i
Environment          | Nitrogen purged; thermal enclosure; massively filtered; fluid management  | Vacuum chamber                       | EUV = 193i
Focus and align      | Off-axis optical                                                          | Off-axis optical                     | EUV = 193i
Reticle              | Transmitting; OPC; PSM                                                    | Reflective                           | EUV = 193i

(a) Table adapted from Silverman (2005, p. 4).
If capital cost per machine and ‘‘reticle’’ expense were roughly equal for both systems, the relative costs-of-ownership of an EUV and a 193 nm immersion system would hinge on measures of output. The immersion system was reliably expected to reach a throughput of 100 silicon wafers per hour. While additional development work remained, there was ‘‘growing confidence’’ that EUV machines capable of comparable output levels would be available between 2007 and 2009. ‘‘[If] this occurs, then the cost-of-ownership of EUV will be strongly competitive with 193 nm immersion lithography’’ (ibid., p. 5). Repeated demonstrations of the Virtual National Laboratory’s prototype tool had shown that, technically, extreme-ultraviolet did not entail ‘‘any unpleasant surprises’’ such as had ‘‘doomed’’ other lithographic systems. Now, based on an unpacking of the components of that prototype, it was suggested that no unpleasant financial surprises should be in store, either, if both assemblers of the system and suppliers of its components embraced reasonable expectations. Markets for the system would be made sooner, facilitating quicker learning and repayment of investment for all industry participants.

The case of post-optical lithography illustrates clearly the importance of mediating instruments in the making of markets. The process of forming, revising and mediating expectations between actors and arenas, between science and the economy, is more than a matter of rule-following or routinised behaviour. Technological and financial risks had to be assessed, and technological competencies enabled. Requirements had to be spelled out in detail, and timings specified. Even if Moore’s Law provides some highly general ‘rules’ to which the future of semiconductors is made to conform, these are insufficient on their own to mediate between apparent scientific or technological imperatives on the one hand, and economic or financial imperatives on the other. While Moore’s Law served to problematize rates of investment at a very high level of generality, it was left to technology roadmapping practices to resolve technological disagreements while satisfying economic requirements. Calculations of the cost-of-ownership of post-optical lithography, and comparisons with extending existing optical forms of lithography, also had to be performed. Capital budgeting, science and the economy had to be linked, and this was achieved by performing and connecting up a whole series of calculations based on Moore’s Law, technology roadmaps, and cost-of-ownership models. Together, these mediating instruments helped link formally separate actors and arenas, in such a way as to adhere to the apparently beneficent imperatives of Moore’s Law.
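The three-factor comparison can be expressed schematically, as in the sketch below. The module cost splits, tool life, reticle expense and throughput figures are hypothetical placeholders, chosen only to be consistent with the magnitudes cited above (a $20 million immersion benchmark, a power source costing several times an Excimer laser, throughput near 100 wafers per hour); the analysis reported in Silverman (2005) rests on far more detailed component data.

```python
# Schematic version of the three-factor cost-of-ownership comparison discussed
# above: (1) capital cost per machine, built up component-by-component as in
# Table 4; (2) reticle (mask) expense; (3) throughput. All values hypothetical.

# Hypothetical module costs in $m. The EUV plasma source costs more than the
# Excimer laser, but the all-reflective, fewer-element optics offset it.
MODULES_193I = {"light source": 1.5, "illumination optics": 4.0,
                "projection optics": 8.0, "stages": 3.0,
                "body, environment, focus/align": 3.5}
MODULES_EUV = {"light source": 4.0, "illumination optics": 3.0,
               "projection optics": 6.0, "stages": 3.5,
               "body, environment, focus/align": 3.5}

def cost_per_wafer(capital_cost_m, life_years, annual_reticle_expense_m,
                   wafers_per_hour, productive_hours_per_year=6000):
    """Crude cost-of-ownership per wafer: straight-line depreciation plus
    reticle expense, spread over annual wafer throughput (other operating
    costs omitted for brevity)."""
    annual_cost = (capital_cost_m / life_years + annual_reticle_expense_m) * 1e6
    return annual_cost / (wafers_per_hour * productive_hours_per_year)

for name, modules, wph in [("193i", MODULES_193I, 100), ("EUV", MODULES_EUV, 100)]:
    capital = sum(modules.values())  # both roll up to roughly $20m here
    print(f"{name}: capital ${capital:.1f}m, "
          f"${cost_per_wafer(capital, 5, 2.0, wph):.2f} per wafer")
```

On these placeholder assumptions the two systems converge, which is precisely the argumentative structure of the conference presentation: once component prices and throughput align, the cost-of-ownership case against EUV dissolves.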
Conclusion

We have examined in this paper how certain instruments link science and the economy through acting on capital budgeting decisions, and in doing so how they contribute to the process of making markets. We have used the term ‘mediating instruments’ to refer to those practices that frame the capital spending decisions of individual firms and agencies, and that help to align them with investments made by other firms and agencies in the same or related industries. Our substantive focus has been on the microprocessor industry, and the roles of ‘‘Moore’s Law’’ and ‘‘technology roadmaps’’. We have examined the ways in which these instruments envision a future, and how they link a multitude of actors and domains in such a way that the making of future markets for microprocessors and related devices can continue. We have addressed these issues in three stages. Firstly, we have considered the role of ‘‘Moore’s Law’’ in shaping the fundamental expectations of an entire set of industries about rates of increase in the power and complexity of semiconductor devices, and the timing of those increases. Secondly, we have examined the roles of ‘‘technology roadmaps’’ in translating the simplified imperatives of Moore’s Law into a framework that can guide and encourage the myriad of highly uncertain and confidential investment decisions of firms and other agencies. Thirdly, we have explored one particular and recent example of major capital investment, that of post-optical lithography.
In examining these practices, our aim has been to contribute to the literature on capital budgeting and investment appraisal, as well as to the somewhat distinct yet overlapping literatures of what can loosely be termed science studies and economic sociology. The literature within accounting on capital budgeting has, we have argued, developed surprisingly little in recent decades, particularly with respect to the study of the actual capital budgeting practices deployed within and between firms and other agencies. This is notwithstanding the strident calls in the 1970s to examine the capital budgeting process as an issue of general management, rather than a narrow issue of financial valuation techniques. These calls, we have argued above, were largely neglected by accounting researchers for approximately two decades. While there has been some revival of interest in capital budgeting within the accounting literature in the past decade or so, there remain very few studies of actual capital budgeting processes. This is notwithstanding the existence of survey evidence that has charted the formal usage of known evaluation techniques, and of the real options literature, which has suggested refinements in methods for valuing investment opportunities. If anything, we suggest, such developments have served to reinforce the view of capital budgeting as a narrow matter of financial valuation, rather than a much broader issue of managing and coordinating investments.

Most generally, we argue for a shift in focus away from the constrained perspective of valuation techniques, to which capital budgeting has been consigned for several decades by studies in finance, in favour of a focus on the more complex managerial and institutional processes in which investment evaluation is embedded. We need to acknowledge the significant limitations of survey evidence for studying capital budgeting, and we need to see many more studies of actual capital budgeting practices and processes. We need to document and understand the range of metrics and processes within which the capital budgeting practices of individual firms are located, and we need to analyse the roles of novel or idiosyncratic practices, such as the investment bundling and technology ‘roadmapping’ that we have addressed here and elsewhere. We should also pay much
greater attention to the ways in which managers evaluate and coordinate parallel and complementary investment opportunities, which necessarily operate at both intra-firm and inter-firm levels. Such interrelated investments have assumed increasing importance in the modern economy. While accounting researchers working in some areas have begun to lessen the fixation on the traditional hierarchical and bounded organization, this has not occurred for capital budgeting, where the ‘vertical imperative’ continues to dominate. We need to know much more about how investment proposal and evaluation processes are managed and coordinated not only within organizations, but among sets of organizations. In short, accounting researchers need to rethink fundamentally what counts as capital budgeting, by examining the mediating instruments that frame investments across the boundaries of firms and other agencies.

Our contribution to the increasingly interlinked literatures of science studies and economic sociology, we argue, is twofold. Firstly, and substantively, we have sought to begin to redress the startling neglect of Moore’s Law, a phenomenon that has profound implications for those interested in science and technology on the one hand, and the economy on the other. We have shown how Moore’s Law links technological and financial trajectories, and we have also shown how technology roadmaps translate the simplified imperatives of Moore’s Law into targets and timelines that individual firms can embed in their own planning and investment processes. Secondly, we have sought to extend existing ways of theorizing the roles of calculative practices. We have suggested that the notion of ‘mediating instruments’ is consistent with previous studies of ‘mediating machines’ and ‘mediating models’. But we have argued also that the notion of mediating instruments, when applied to the linking of science and the economy, can contribute to the analysis of the processes that contribute to the making of markets. To this extent, our paper contributes to and extends earlier literatures on ‘techno-economic networks’, as well as more recent work on the ‘laws of the markets’. Our focus, however, is primarily on the ‘intermediaries’ or instruments
that link actors and domains, rather than on the network as a whole or the actors that form it. We are interested in the ways in which such instruments not only make markets in a literal sense, but also frame and stabilize the interrelations among the multitude of components that constitute one important sector of the modern economy.

Finally, we suggest that our study demonstrates the potential and wider applicability of the notion of mediating instruments. Clearly, we have only begun to address the complexities of capital budgeting in the microprocessor industry, and the ways in which a range of instruments help to mediate between distinct domains and actors. Much more could be done, for instance, to examine the ways in which cost-of-ownership calculations have been conducted for specific issues, and the links between such calculations at industry and firm levels merit further consideration. Equally, it would be rewarding to see studies of other industries that use similar or related instruments for linking medium- and long-term strategic objectives to shorter-term capital budgeting decisions. More generally, it would be good to see the ‘gap’ between strategy and capital budgeting lessened.

We are also optimistic about the opportunities for utilizing the notion of mediating instruments in very different domains. For instance, attempts to make markets in the field of health care depend fundamentally on calculative instruments that link clinical and financial categories. Similarly, formal ‘partnership’ arrangements between healthcare and social care agencies can act as mediating instruments linking distinct professional groups, administrative arrangements and financial rules. These are only some indications of what we see as the potential to strengthen the already fruitful overlaps and intersections among the accounting, science studies and economic sociology literatures.
Acknowledgement

We are grateful to Anthony Hopwood and to two anonymous reviewers for their insightful and constructive comments on an earlier version of the manuscript.
References

Abolafia, M. Y. (1996). Making markets. Cambridge, MA: Harvard University Press.
Ackerman, R. (1970). Organization and the investment process. Cambridge, MA: Harvard Business School Press.
Aharoni, Y. (1966). The foreign investment decision process. Boston: Graduate School of Business Administration, Harvard.
Baldwin, C. Y., & Clark, K. B. (1994). Capital budgeting systems and capabilities investments in US companies after the second world war. Business History Review, 68(Spring), 73–109.
Barwise, P., Marsh, P. R., & Wensley, R. (1989). Must finance and strategy clash? Harvard Business Review, 85–90.
Beunza, D., & Stark, D. (2004). Tools of the trade: The socio-technology of arbitrage in a Wall Street trading room. Industrial and Corporate Change, 13(2), 369–400.
Bower, J. (1972). Managing the resource allocation process. Homewood, IL: Irwin.
Bromwich, M. (1976). The economics of capital budgeting. Harmondsworth: Penguin.
Burchell, S., Clubb, C., & Hopwood, A. G. (1985). Accounting in its social context: Towards a history of value added in the United Kingdom. Accounting, Organizations and Society, 10(4), 381–413.
Burchell, S., Clubb, C., Hopwood, A., & Hughes, J. (1980). The roles of accounting in organizations and society. Accounting, Organizations and Society, 5(1), 5–27.
Callon, M. (1991). Techno-economic networks and irreversibility. In J. Law (Ed.), A sociology of monsters: Essays on power, technology and domination. London: Routledge.
Callon, M. (Ed.). (1998). The laws of the markets. Oxford: Blackwell.
Callon, M. (2002). From science as an economic activity to socioeconomics of scientific research: The dynamics of emergent and consolidated techno-economic networks. In P. Mirowski & E.-M. Sent (Eds.), Science bought and sold: Essays in the economics of science. Chicago: University of Chicago Press.
Callon, M., Courtial, J. P., Crance, P., Larédo, P., Mauguin, P., Rabeharisoa, V., et al. (1991). Tools for the evaluation of technological programmes: An account of work done at the centre for the sociology of innovation. Technology Analysis and Strategic Management, 3, 3–41.
Callon, M., Larédo, P., & Mustar, P. (1997). Techno-economic networks and the analysis of structural effects. In M. Callon, P. Larédo, & P. Mustar (Eds.), The strategic management of research and technology. Paris: Economica International.
Callon, M., Larédo, P., & Rabeharisoa, V. (1992). The management and evaluation of technological programs and the dynamics of techno-economic networks: The case of the AFME. Research Policy, 21, 215–236.
Carr, C., & Tomkins, C. (1996). Strategic investment decisions: The importance of SCM. A comparative analysis of 51 case studies in UK, US and German companies. Management Accounting Research, 7, 199–217.
Carr, C., & Tomkins, C. (1998). Context, culture and the role of the finance function in strategic decisions: A comparative analysis of Britain, Germany, the USA and Japan. Management Accounting Research, 9, 213–239.
Chappell, J. (2001). EUV LLC says technology is ready. Electronic News, March 26th (p. 1, electronic version).
Chua, W. F. (1995). Experts, networks and inscriptions in the fabrication of accounting images: A story of the representation of three public hospitals. Accounting, Organizations and Society, 20(2/3), 111–145.
Cohen, S., & Zysman, J. (1987). Manufacturing matters. New York: Basic Books.
Dertouzos, M., Lester, R., & Solow, R. (1989). Made in America. Cambridge, MA: MIT Press.
Fasca, C. (1997). Lithography powerhouse formed. Electronic News, September 15th (p. 1, electronic version).
Fligstein, N. (2001). The architecture of markets: An economic sociology of twenty-first-century capitalist societies. Princeton, NJ: Princeton University Press.
Fligstein, N., & Sweet, A. (2002). Constructing politics and markets: An institutionalist account of European integration. American Journal of Sociology, 107(5), 1206–1243.
Geppert, L. (2004). Chipmaking’s wet new world. IEEE Spectrum, 41(5), 29–33.
Graham, J., & Harvey, C. (2002). How do CFOs make capital budgeting and capital structure decisions? Journal of Applied Corporate Finance, 15(1), 8–23.
Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American Journal of Sociology, 91(3), 481–510.
Hacking, I. (1983). Representing and intervening. Cambridge: Cambridge University Press.
Hacking, I. (1992). The self-vindication of the laboratory sciences. In A. Pickering (Ed.), Science as practice and culture. Chicago: University of Chicago Press.
Haka, S. F., Gordon, L. A., & Pinches, G. E. (1985). Sophisticated capital budgeting selection techniques and firm performance. The Accounting Review, LX(4), 651–669.
Hand, A. (2001). Commercializing new generation lithography. Semiconductor International, December 1st (p. 1, electronic version).
Hand, A. (2004). Potential tool solutions make a shift. Semiconductor International, February 1st (p. 1, electronic version).
Hayes, R. H., & Abernathy, W. A. (1980). Managing our way to economic decline. Harvard Business Review, 58, 67–77.
Hayes, R. H., & Garvin, D. A. (1982). Managing as if tomorrow mattered. Harvard Business Review, 60, 71–79.
Hogan, C. (1977). Reflections on the past and thoughts about the future of semiconductor technology. Interface Age, 2, 24–36.
Hopwood, A. (1996). Looking across rather than up and down: On the need to explore the lateral processing of information. Accounting, Organizations and Society, 21(6), 589–590.
Jensen, M. (1993). The modern industrial revolution, exit and the failure of internal control systems. Journal of Finance, 48(3), 831–880.
Johnson, H. T., & Kaplan, R. S. (1987). Relevance lost: The rise and fall of management accounting. Boston, MA: Harvard Business School Press.
Jones, T. C., Currie, W. L., & Dugdale, D. (1993). Accounting and technology in Britain and Japan: Learning from field research. Management Accounting Research, 109–137.
Jones, T. C., & Dugdale, D. (1994). Academic and practitioner rationality: The case of investment appraisal. British Accounting Review, 26, 3–25.
Jones, T. C., & Lee, B. (1998). Accounting, strategy and AMT investment. Omega, 26, 769–783.
Kalthoff, H. (2005). Practices of calculation: Economic representations and risk management. Theory, Culture and Society, 22(2), 69–97.
Kaplan, R. S. (1986). Must CIM be justified by faith alone? Harvard Business Review (March–April), 87–93.
King, P. (1974). Strategic control of capital investment. Journal of General Management, 2(1), 17–28.
King, P. (1975). Is the emphasis of capital budgeting theory misplaced? Journal of Business, Finance and Accounting, 2(1), 69–82.
Klammer, T. (1972). Empirical evidence of the adoption of sophisticated capital budgeting techniques. The Journal of Business, 45(3), 387–397.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.
Latour, B. (1996). Aramis or the love of technology. Cambridge, MA: Harvard University Press.
Macher, J., Mowery, D., & Hodges, D. (1998). Reversal of fortune? The recovery of the US semiconductor industry. California Management Review, 41(1), 107–136.
MacKenzie, D. (1996). Knowing machines: Essays on technical change. Cambridge, MA: MIT Press.
MacKenzie, D., & Millo, Y. (2003). Constructing a market, performing theory: The historical sociology of a financial derivatives exchange. American Journal of Sociology, 109(1), 107–145.
Miller, P. (1991). Accounting innovation beyond the enterprise: Problematizing investment decisions and programming economic growth in the United Kingdom in the 1960s. Accounting, Organizations and Society, 733–762.
Miller, P. (1992). Accounting and objectivity: The invention of calculating selves and calculable spaces. Annals of Scholarship, 9(1/2), 61–86.
Miller, P. (1994). Accounting as social and institutional practice: An introduction. In A. G. Hopwood & P. Miller (Eds.), Accounting as social and institutional practice. Cambridge: Cambridge University Press.
Miller, P. (1997). The multiplying machine. Accounting, Organizations and Society, 22(3/4), 355–364.
Miller, P., Kurunmaki, L., & O’Leary, T. (2007). Accounting, hybrids and the management of risk. Accounting, Organizations and Society, in press. doi:10.1016/j.aos.2007.02.005.
Miller, P., & O’Leary, T. (1987). Accounting and the construction of the governable person. Accounting, Organizations and Society, 12(3), 235–265.
Miller, P., & O’Leary, T. (1993). Accounting expertise and the politics of the product. Accounting, Organizations and Society, 18, 187–206.
Miller, P., & O’Leary, T. (1994a). Accounting, ‘‘economic citizenship’’ and the spatial reordering of manufacture. Accounting, Organizations and Society, 19(1), 15–43.
Miller, P., & O’Leary, T. (1994b). The factory as laboratory. Science in Context, 7(3), 469–496.
Miller, P., & O’Leary, T. (1997). Capital budgeting practices and complementarity relations in the transition to modern manufacture. Journal of Accounting Research, 35(2), 257–271.
Miller, P., & O’Leary, T. (2005a). Capital budgeting, coordination and strategy: A field study of inter-firm and intra-firm mechanisms. In C. S. Chapman (Ed.), Controlling strategy: Management, accounting and performance measurement. Oxford: Oxford University Press.
Miller, P., & O’Leary, T. (2005b). Managing operational flexibility in investment decisions. Journal of Applied Corporate Finance, 17(2), 18–24.
Miller, P., & Rose, N. (1990). Governing economic life. Economy and Society, 1–31.
Miller, P., & Rose, N. (1995). Production, identity and democracy. Theory and Society, 24(3), 427–467.
Miller, P., & Rose, N. (1997). Mobilising the consumer: Assembling the subject of consumption. Theory, Culture and Society, 14, 1–36.
Mirowski, P. (1989). More heat than light: Economics as social physics, physics as nature’s economics. Cambridge: Cambridge University Press.
Moore, G. (1965). Cramming more components onto integrated circuits. Electronics, 35, 114–117.
Moore, G. (1975). Progress in digital integrated electronics. Technical digest of international electron devices meeting. New York: Institute of Electrical and Electronics Engineers.
Moore, G. (1995). Lithography and the future of Moore’s Law. In T. Brunner (Ed.), Proceedings of the international society for optical engineering, Santa Clara, CA.
Morrison, M., & Morgan, M. S. (1999). Models as mediating instruments. In M. S. Morgan & M. Morrison (Eds.), Models as mediators: Perspectives on natural and social science. Cambridge: Cambridge University Press.
Myers, S. C. (1984). Finance theory and financial strategy. Interfaces, 14(January–February), 126–137.
National Advisory Committee on Semiconductors (1989). A strategic industry at risk. Washington, DC: NACS.
National Advisory Committee on Semiconductors (1990a). Preserving the vital base. Washington, DC: NACS.
National Advisory Committee on Semiconductors (1990b). Capital investment in semiconductors. Washington, DC: NACS.
Northcott, D. (1991). Rationality and decision making in capital budgeting. British Accounting Review, 23, 219–233.
Noyce, R. (1977). Microelectronics. In R. Noyce (Ed.), A scientific American book. San Francisco, CA: W.H. Freeman and Co.
Parker, A. (1999). Extreme ultra-violet lithography: Imaging the future. Livermore National Laboratory Science and Technology Review (November), 1–9.
Pettigrew, A. M., Whittington, R., Melin, L., Sanchez-Runde, C., van den Bosch, F., Ruigrok, W., et al. (2003). Innovative forms of organizing: International perspectives. London: Sage.
Pickering, A. (1992). From science as knowledge to science as practice. In A. Pickering (Ed.), Science as practice and culture. Chicago: University of Chicago Press.
Pike, R. H. (1983). A review of recent trends in formal capital budgeting processes. Accounting and Business Research (Summer), 201–208.
Pike, R. H. (1988). An empirical study of the adoption of sophisticated capital budgeting practices and decision-making effectiveness. Accounting and Business Research, 18(72), 341–351.
Pinches, G. (1982). Myopia, capital budgeting and decision making. Financial Management (Autumn), 6–19.
Portes, A. (1998). Social capital: Its origins and applications in modern sociology. Annual Review of Sociology, 24, 1–24.
Power, M. (1994). From the science of accounts to the financial accountability of science. Science in Context, 7(3), 355–387.
Power, M. (1997). The audit society: Rituals of verification. Oxford: Oxford University Press.
Robson, K. (1992). Accounting numbers as ‘inscription’: Action at a distance and the development of accounting. Accounting, Organizations and Society, 17(7), 685–708.
Robson, K. (1994a). Connecting science to the economic: Accounting calculation and the visibility of research and development. Science in Context, 7(3), 497–514.
Robson, K. (1994b). Inflation accounting and action at a distance: The Sandilands episode. Accounting, Organizations and Society, 19(1), 45–82.
Rose, N., & Miller, P. (1992). Political power beyond the state: Problematics of government. British Journal of Sociology, 43(2), 173–205.
Ross, I. (1989). Chairman’s introduction. In National Advisory Committee on Semiconductors, A strategic industry at risk. Washington, DC: NACS.
Schaller, R. (2004). Technological innovation in the semiconductor industry. Unpublished doctoral dissertation, George Mason University.
Semiconductor Industry Association (1992). National Technology Roadmap for Semiconductors. San Jose, CA: SIA.
Semiconductor Industry Association (1994). National Technology Roadmap for Semiconductors. San Jose, CA: SIA.
Semiconductor Industry Association (1997). National Technology Roadmap for Semiconductors. San Jose, CA: SIA.
Semiconductor Industry Association (1999). International Technology Roadmap for Semiconductors. San Jose, CA: SIA.
Semiconductor Industry Association (2000). Fourth international next-generation-lithography workshop report. Austin, TX: International SEMATECH.
Semiconductor Industry Association (2001). Fifth next-generation-lithography workshop report. Austin, TX: International SEMATECH.
Semiconductor Industry Association (2001a). International Technology Roadmap for Semiconductors. San Jose, CA: SIA.
Silverman, P. (2005). Extreme ultra-violet lithography: Overview and development status. Journal of Microlithography, Microfabrication and Microsystems, 4(1), 1–5.
Smit, H., & Trigeorgis, L. (2004). Strategic investment: Real options and games. Princeton, NJ: Princeton University Press.
Tomkins, C. (1991). Corporate resource allocation: Financial, strategic and organizational perspectives. Oxford: Blackwell.
Trigeorgis, L. (1996). Real options. Cambridge, MA: MIT Press.
Wise, M. N. (1988). Mediating machines. Science in Context, 2(1), 77–113.