Insights Volume 4 November 2008


Melbourne Economics and Commerce volume 4 november 2008


Big brother or a fair go By Graham Sewell

Market competitiveness By Bryan Lukas

Accounting induced performance anxiety By Anne Lillis

Understanding global imbalances By Richard Cooper

An interview with Robert E. Lucas By Ian King

Closing the gap? By Paul Smyth

Forward with fairness By John Denton

Intelligent systems in accounting By Stewart Leech

New agenda for prosperity By Stephen Sedgwick

Mailing Address: The Faculty of Economics and Commerce, The University of Melbourne, Victoria 3010, Australia. Telephone: +61 3 8344 2166. Email: gsdir-ecom@unimelb.edu.au. Internet: http://insights.unimelb.edu.au. Published by the Faculty of Economics and Commerce, November 2008. © The University of Melbourne

Real options analysis By Bruce Grundy

Inflation targeting By Guay Lim

New economic geography By Russel Hillberry

Paying doctors to improve health By Anthony Scott

Disclaimer

Insights is published by the University of Melbourne for the Faculty of Economics and Commerce. Opinions published are not necessarily those of the publisher, printers or editors. The University of Melbourne does not accept responsibility for the accuracy of information contained in this journal. No part of this journal may be reproduced without the permission of the editors.

Innovation: high value-added strategy By Danny Samson

Neuromarketing By Phil Harris



Welcome

Insights publishes condensed and edited versions of important public lectures connected with the Faculty of Economics and Commerce. Its objective is to share with the wider public – especially Alumni – the issues presented and developed in these lectures. It also constitutes an archival source of an important element of Faculty life.

This volume appears in the middle of the most serious international monetary crisis since 1929. In our last issue, we published a prophetic paper by Satyajit Das about the inevitable bursting of the liquidity bubble. This bubble has clearly burst. Unfortunately, the deadline for this issue prevents us from publishing two forthcoming lectures dealing more directly with the causes and consequences of the credit crisis. These will appear in the next volume of the journal.

However, the recent large number of public lectures has provided a rich harvest for inclusion in this volume. Three professors expound their particular areas of interest in their Inaugural Lectures. These are followed by papers on a number of contemporary issues – global imbalances, industrial relations, a summary by Professor Sedgwick of the proceedings of the Economic and Social Outlook Conference, intelligent systems in accounting, and an interview with Nobel Laureate Lucas on inequality and incomes across societies. Finally, this issue includes condensed versions of six refresher lectures given by leading Faculty members.

Insights: Melbourne Economics and Commerce
ISSN: 1834-6154
Editor: Emeritus Professor Joe Isaac, AO
Associate Editor: Ms Brooke Young
Sub-editor: Ms Rebecca Gleeson
Advisory Board: Professor Robert Dixon, Professor Bruce Grundy, Professor Bryan Lukas
Illustrator: Ms Caroline Stirling
Design: Ms Sophie Campbell

These lectures are aimed at updating Alumni on recent developments and research in a number of fields. Insights is sent to a large number of Alumni and other interested people, and it is also accessible on the Faculty’s webpage. We hope it is widely read and finds a place in waiting rooms and on coffee tables for others to read. So far this year, more than 8,000 people have accessed the publication online from Australia, the US, the Netherlands, Japan, Germany, the UK, India and Singapore.

Insights is a team effort. Caroline Stirling has done the artistic illustrations in this issue, which are suggestive of the themes of some of the papers. We congratulate her on depicting the substance of the first article for the journal’s cover so tastefully. Finally, comments and suggestions from readers – on all aspects of the journal – are most welcome.

Joe Isaac
Editor
jei@unimelb.edu.au


insights vol 4

Table of contents

03 Big brother or a fair go: is workplace surveillance coercive or does it guarantee our rights at work? By Graham Sewell
08 When a firm is market-oriented: product and brand management implications By Bryan A. Lukas
11 Accounting induced performance anxiety: consequences and cures By Anne Lillis
19 Understanding global imbalances By Richard Cooper
23 An interview with Robert E. Lucas Jnr. By Ian King
26 Closing the gap? The role of wage, welfare and industry policy in promoting social inclusion By Paul Smyth
31 Forward with fairness: a business perspective on Labor’s reform agenda By John Denton
37 The use and misuse of intelligent systems in accounting: the risk of technology dominance By Stewart Leech
42 New agenda for prosperity By Stephen Sedgwick

Alumni refresher lecture series:
47 Real options analysis and investment appraisal: the opportunities and challenges By Bruce D. Grundy
51 Inflation targeting By Guay C. Lim
57 New economic geography and manufacturing By Russel Hillberry
61 For love or money? Paying doctors to improve the quality of health By Anthony Scott
65 Innovation: a high value-added strategy By Danny Samson
69 Neuromarketing – marketing insights from neuroimaging research By Phil Harris

Insights Melbourne Economics and Commerce

01



big brother or a fair go: is workplace surveillance coercive or does it guarantee our rights at work?

We are greatly in need of informed debate in Australia about the purpose and consequences of workplace monitoring

by graham sewell

A condensed version of his Inaugural Lecture given at the University of Melbourne on 29 July 2008.

“My anxiety is that we are sleepwalking into a surveillance society where much more information is collected about people, accessible to far more people, shared across many more boundaries, than British society would feel comfortable with.”
– Richard Thomas, UK Information Commissioner (interviewed in The Times, 16 August 2004)

Is it time to reconsider Big Brother? Sentiments like those expressed above by Richard Thomas have become a regular feature of news reports. In his novel Nineteen Eighty-Four, George Orwell famously predicted that surveillance would play a crucial role in maintaining an oppressive social order under a totalitarian regime. The name he coined for such a pervasive, centralised and oppressive surveillance system was, of course, ‘Big Brother’. Through its attachment to the eponymous television show, the term has become associated with trivial voyeurism and we now face the prospect of a constant diet of Facebook, YouTube and reality television. It is creating a generation that does not know the importance of privacy; a generation where the default state is one of total self-disclosure, even to complete strangers. This has led the outgoing Chief Justice, Murray Gleeson, to comment that some personal information previously considered to be self-evidently private may no longer be so.

Yet one important lesson we can learn from Orwell’s dystopian vision is that those who hold personal information on us are also able to exercise power over us. Although a well-developed sense of privacy is one way to counter this power, there are some things that we have always been compelled to reveal about ourselves. This is particularly the case in the employment relationship, where employers consider it legitimate to gather all kinds of personal information about their employees through workplace surveillance. One question arises from this state of affairs: is workplace surveillance, and the power that accompanies it, exercised in a benign and paternalistic way or in a malign and coercive way? In Orwell’s Oceania, surveillance is enrolled in an oppressive and brutal project of control, but it is possible to imagine circumstances where surveillance protects us and helps to maintain our basic freedoms. In this article I will present the workplace as a familiar setting where these opposing views both hold sway – that is, there are well-established intellectual traditions that see surveillance as either a tool of management oppression or as a way of enforcing basic notions of fairness in the workplace. What we must be able to establish is how these intellectual traditions affect our stance toward the purpose and necessity of workplace surveillance. This is important, as I shall argue that neither provides a completely satisfactory account of workplace surveillance if taken in isolation.


Consequently, in order to appreciate the personal and organisational impacts of workplace surveillance, we need to develop a conceptual and empirical approach that takes into account these opposing views. This is because the status of surveillance is invariably paradoxical and our experiences of it are often contradictory. For example, a common reaction to the failure of surveillance to prevent crime or misconduct is to argue for more of it – rather than, say, pursue another course of action that could potentially be more successful. Another paradox of surveillance is that it is potentially a two-way street in that it can be used to expose the activities of people who abuse positions of authority. This most famously occurred in 1991, when amateur film footage was used to identify members of the Los Angeles Police Department who beat Rodney King. Finally, but not exhaustively, I offer another common paradox of surveillance in that, despite the familiar adage that ‘If you’ve got nothing to hide you’ve got nothing to fear’, we can appreciate the protective benefits of surveillance when it is directed at others yet cavil when it is directed at us. If we are all the targets of surveillance then one of its effects, psychologically speaking, at least, is that we are all seen as potential criminals. The UK provides a vivid example of this. Anyone who is detained by the British police can be compelled to give a DNA sample which is retained indefinitely regardless of whether that person is charged or not, let alone convicted. One of my tasks in this article is to show how these paradoxes of surveillance play out in the workplace, so that we can begin to appreciate the need for a reformulation of our ideas about surveillance to take into account its ambiguous role as a means of coercion or protection.

Measuring everything that moves: why managers cannot resist the seductions of surveillance

As countless scholars – most notably Max Weber – have observed, one of the defining features of modern organisations is the use of hierarchical systems of command and control. Weber saw this as a sine qua non for the achievement of organisational ‘incorporation’ – that is, getting a group of people from diverse backgrounds to work toward a common purpose. In a modern capitalist organisation there is no guarantee that common purpose will emerge spontaneously or by consent. Organisational goals are usually devised by an elite group of decision-makers who must then ensure that employees’ efforts are directed toward them. In a bureaucratic organisation this is achieved by breaking a job into a series of specified duties. Determining whether an individual is performing these duties becomes the responsibility of an immediate superior, thus giving rise to the familiar notion of the bureaucratic hierarchy. This is, quite literally, a form of ‘over-sight’ or ‘sur-veillance’; and it captures the operating principles of this form of organisational control. Of course, it has become fashionable to talk of ‘post-bureaucratic’ or ‘flat’ organisations where such forms of hierarchical control are supposed to have been consigned to the dustbin of history. Even if such pronouncements are true – and I for one doubt that they are – I have argued extensively in my published work that our theoretical as well as our everyday understanding of organisational control is still dependent on this conceptual notion of oversight or surveillance.

In drawing attention to the importance of bureaucratic hierarchy as a form of oversight, Weber was formally capturing the familiar Christian belief that the omnipresent eye of God will see your every sinful transgression and punish you accordingly – a principle famously depicted in Hieronymus Bosch’s painting, The Seven Deadly Sins. In organisational literature, however, our understanding of the disciplinary effect of surveillance has more recently drawn on Michel Foucault’s discussion of the Panopticon – a prison design proposed by Jeremy Bentham where warders watch prisoners held in a ring of cells arrayed around a central tower.
Importantly, overseers in the tower could not be seen by the inmates, leaving them with the impression that they could at any moment be under surveillance. This is the source of panoptic discipline: the effect of surveillance would appear to be continuous even if inmates were not necessarily under constant scrutiny. It is tempting to take the Panopticon quite literally as a model of perfected surveillance.


Indeed, authors talk about the prospect of an electronic or information Panopticon to such an extent that it runs the risk of becoming a cliché. I think that one way of avoiding this fate is to see the Panopticon as a figurative expression of the desire to know all there is to know about others in the belief that this will render them self-disciplining subjects. Thus, Bentham’s Panopticon provides us with a secular version of the prospect of divine retribution leading the God-fearing to refrain from sinful behaviour – that is, to fall into line behind some culturally determined notion of what constitutes appropriate behaviour or right conduct. Importantly, the enduring nature of this belief is not dependent on the actual means of surveillance approximating to the operating principles of the Panopticon. Thus, although some argue that technological developments mean that the panoptic model no longer holds in some social settings, the workplace is nevertheless still one such setting where managers claim the legal and moral right to subject their subordinates to a form of scrutiny that involves the centralised storage and dissemination of personal information. This is particularly the case when it comes to the measurement and evaluation of personal work performance, meaning that we can still talk about the cultural and normative force of the Panopticon – it stands as a powerful metaphor for managers’ desires to collect information that can be used to compare the performance of individual employees. Indeed, it is a widely held belief that collecting such information is an essential part of running an efficient and effective organisation.

Coercion versus care: the purpose and necessity of workplace surveillance

But why should individual performance measurement – part of a process of enhanced managerial control as I have presented here – be so important at a time when employee ‘empowerment’ is now the mantra of management consultants? In the sociology of organisations our general understanding of the purpose and necessity of this form of workplace surveillance falls into two ideologically opposed camps. There are those who see it as being essentially coercive – a case of the few watching the many to serve the interest of the few. In other words, managers are agents of a minority class of capitalists who use performance measurement to ensure that employees are working as hard as they can all the time. In contrast, there are those who see workplace surveillance as being essentially ‘caring’ – a case of the few watching the many to protect the interests of everyone.



In other words, managers are disinterested technocrats who use performance measurement to ensure that employees and employers each do as promised in an employment contract, in the process protecting everyone from the effects of self-interested or anti-social behaviour – for example, free-riding, bullying or various forms of harassment. In essence, this is the source of the argument: that surveillance can protect our rights at work because it ensures that employees do not suffer injustices, be they perpetrated by their colleagues or by their employers. As I indicated above, bureaucratic control is associated with very detailed descriptions of roles and responsibilities; but in today’s world, where job descriptions are open-ended and employees expect flexibility, there is – some might say paradoxically – an even greater desire to subject the performance of employees to scrutiny. A coercive view would frame this as using surveillance to maximise the opportunities created by employee discretion and flexibility to intensify work. This is because performance monitoring is a way of ensuring that employees are using their discretion for the good of the organisation rather than themselves. Interestingly, exactly the same argument can be advanced from a caring perspective except that, in this case, a belief in the essential fairness of the employment contract means that flexibility benefits all. Thus, only the small minority who wish to use their increased discretion to indulge in free-riding or anti-social behaviour should have anything to fear from this form of surveillance.

The problem for researchers and employees alike is that it is possible to simultaneously see the merit of each of these perspectives on workplace surveillance. Like a citizen who does not appreciate the power of the state until they are arrested and detained on the strength of a false accusation, we may well be inclined to see surveillance as being essentially benign and protective for most of the time. To repeat the old adage: it is, after all, only those who have something to hide that have something to fear. Yet, when our performance does not measure up and our work efforts are called into question, especially if we really have been working to the best of our abilities, then we may well be inclined to think of performance measurement as being little more than an instrument of managerial oppression. It is at moments like this that neither the coercive nor the caring perspective on its own can provide an adequate account of the consequences of workplace surveillance.

This gives us pause to think about how the other paradoxes of surveillance are played out at work. For example, it is easy to use the failure to apprehend a single high-profile offender – perhaps someone who abuses their position at work to obtain personal gain – as a reason to intensify surveillance of everyone’s activities without ever stopping to think about the consequences. This might include increasing compliance costs, say, by having employees do more and more paperwork, eroding trust, or actually reducing employee discretion. You cannot begin to measure a job unless you break it down to its standard component parts which, in itself, goes against the spirit of discretion. Rather than putting it down to a failure of surveillance, it may well be better to think about how such a person was appointed in the first place. In other words, it is the selection process that needs to be tightened up, not the work monitoring systems.

Finally, we may cavil as more and more of our work activities become subject to measurement, but there are times when it can work to our advantage insofar as we may be able to demonstrate that we have actually reached the required standards to be promoted or to win a bonus. Similarly, we can use things like mobile phone or internet usage records to demonstrate that we were actually doing what we said we were doing, should our conduct ever be called into question. These are moments when the products of surveillance can be used to the advantage of employees; moments when we can use performance information to get managers to keep their side of the bargain.

Where do we draw the line?

As ever, the question is where we draw the line that determines what is acceptable and what is unacceptable in terms of surveillance. In civil society, such matters are dealt with through legal and democratic processes, with varying levels of success. However, in the workplace few equivalent avenues exist. For example, in Australia, personal employee information – in practice taken to include individual performance information – is excluded from Federal privacy legislation. This was because it was intended to be an industrial relations matter but, even under the current government’s Forward with Fairness proposals, workplace privacy is not explicitly addressed. As a result, the status and role of workplace surveillance is a matter of good faith bargaining between employers and employees, with little or no reference to minimum statutory protections or maximum permitted intrusions. This means that we are greatly in need of informed debate in Australia about the purpose and consequences of workplace monitoring; a debate that can only be improved by developing a subtle appreciation of the paradoxes of surveillance and the tensions these create when we are trying to understand whether it is coercive or protects our rights at work.

Professor Graham Sewell is in the Department of Management and Marketing at the University of Melbourne.



when a firm is market-oriented: product and brand management implications

A market-oriented firm is a business-model innovator, a product-market pioneer and a brand developer

by bryan a. lukas

This essay served as the basis for his Inaugural Lecture given at the University of Melbourne on 10 June 2008.

The meaning of being market-oriented

One of the important debates in the marketing literature since the 1990s is the question of what it means for a firm to be market-oriented. The debate stems from mounting evidence that market-oriented firms deliver superior financial performance to their owners. Most firms that I have engaged with as a consultant or researcher claim to be market-oriented. Their claim usually goes something like this: ‘I believe in listening to the market carefully. I spend a fortune on market research. I integrate my market research findings in my decision-making process. I am, therefore, market-oriented.’ The truth, however, is that market research does not make a firm market-oriented. Most firms that I engage with are not market-oriented at all. So, what makes a firm market-oriented?

In my experience, the main building blocks of market orientation are to be customer-oriented, competitor-oriented and collaborator-oriented. Collaborator-oriented means that a firm will put itself in the position of its collaborators – for example, in the position of a supplier or strategic alliance partner. Competitor-oriented means that a firm will put itself in the position of its direct competitors. Customer-oriented means that a firm will put itself in the position of its customers. Then, once a collaborator-competitor-customer perspective has been adopted, the firm has to respond to the following question: ‘What would I as a collaborator, competitor and customer really expect to benefit from my own firm?’ In this short essay, I do not have the space to explore all three orientations in detail. Therefore, let me concentrate on customer orientation – probably the most important and difficult orientation of the three for a firm aspiring to be market-oriented to execute.

Customer orientation as a component of market orientation

To be sure, customer orientation means that a firm will (a) look at itself by pretending to be a customer and (b) answer the question stated above: ‘What would I as my own customer really expect to benefit from my firm?’ The reason why market research of customers does not make a firm market-oriented is that the firm does not become its own customer through market research – market research consists usually of asking people outside of the firm what benefits are needed from the firm. Only by the firm becoming its own customer can it take into consideration the full scope and depth of customer benefits that are technically and otherwise possible. Why? Because only the firm knows what is really possible. Only the firm has insights into what can be really offered to customers. Let us look at some firms that have managed to be market-oriented and have listened to their own answers to their question.


The first example is Virgin Group’s domestic airline venture in Australia. The firm asked itself what it would want from a domestic airline in Australia. The firm admitted that meals on short flights between capital cities were really not value-adding, paper tickets did not add anything either, and that a bit of in-flight fun would not go astray. So Virgin Blue was launched as a no-frills, deep-discount and fun-loving airline in Australia.1 The next example is Starbucks. The firm asked itself what it would want from drinking coffee in a coffee shop in the US. The answer was: to be part of a small community. So Starbucks aimed to become the ‘third place’ in America, in addition to work and home. The firm said to its customers: ‘Stay as long as you like in our coffee shops; bring your family, friends and colleagues along; bring your work along; and you do not have to order a new cup of coffee every twenty minutes to earn your right to be here.’ Then there is the case of ING Group. The Dutch firm asked itself what it would want from a retail bank. The bank conceded that it would expect a decent return for its savings deposits, would not want to pay fees for those deposits, and would want 24-hour access to those deposits from any customer location. So ING Direct was launched as a virtual bank that paid above-average interest on savings deposits and did not charge savings-account fees. The virtual bank was launched in Australia, Europe and North America. Let us look at these examples more closely. Not only do Virgin Blue, Starbucks and ING Direct exemplify what it means to be customer-oriented; in each instance, these firms have also changed the way business is conducted in their line of work. Specifically, it appears that a truly market-oriented firm is an innovator – but rather than being a product2 innovator in an engineering sense, a truly market-oriented firm is a business-model innovator. Virgin Blue revolutionised Australian domestic air travel.
Starbucks revolutionised the American way of ‘having a coffee’. ING Direct revolutionised savings accounts in Australia, Europe, and to some extent even in North America. Revolutions require a certain mindset – they are disruptive, and they often require a firm to abandon its traditional product markets. Let us explore this observation further.

Market orientation implications for product markets

Truly market-oriented firms are prepared to move away from their existing product market(s), and to ignore both typical customer segments and commonly accepted product lines. Consider Virgin Blue again. The firm started up and established itself successfully with a focus on tourists and travellers who could not afford to fly with the incumbent airlines. Business people were not part of Virgin Blue’s original target audience. As for Starbucks, this firm started off selling fresh-roasted, gourmet coffee beans and related equipment in a small retail store. Today, it makes coffee in thousands of coffee shops. Finally, look at ING Direct. Its Dutch parent company, ING Group, is a full-service bank that offers banking, insurance and asset management services. In contrast, ING Direct focuses mainly on the savings accounts business and does not offer insurance and asset management products. Abandoning traditional product markets has a number of follow-on effects. Usually, one effect is that existing product brands need to be adjusted, or new brands need to be built, in order to accommodate the change in product markets. Let me explain this observation in more detail.

Market orientation implications for brand management

A brand’s ability to affect a consumer’s product choice is a function of what consumers know about a brand. What customers know about a brand is, in turn, a function of customers experiencing the brand in its designated product market. If that product market is abandoned by a firm because its market orientation necessitates that move, then the existing brand is detached from its designated context. Therefore, the brand needs to be adjusted, or a new brand needs to be put in place, to fit the new product market(s) of the market-oriented firm. Let us consider Virgin Blue once again. Virgin Group did not call its airline venture in Australia ‘Virgin Group’. Nor did it use any of its existing airline sub-brands – for example, Virgin Express or Virgin Atlantic.



Instead, it created a new sub-brand by using a new suffix: ‘Blue’. With this move, new associations distinctive to the Australian context were added to the well-established parent-brand associations related to ‘Virgin’. ING Group did something similar to Virgin Group by replacing ‘Group’ with ‘Direct’, thereby creating a new sub-brand for its new operations. Starbucks is an example of adopting a new brand. The first Starbucks coffee shop to make coffee the way we experience Starbucks coffee today was actually called ‘Il Giornale’. Mr Schultz, the owner of Il Giornale, bought the small coffee bean roaster, Starbucks, to grow his business. He then dropped the Il Giornale business name and registered all of his shops as ‘Starbucks’. I presume that the name ‘Il Giornale’ did not lend itself as well as ‘Starbucks’ did, as a memorable name, to an aspiring coffee shop empire aimed at being present in nearly every US city and town. In summary, to be market-oriented is to be a business-model innovator, a product-market pioneer and a brand developer. The concept of market orientation is the philosophical cornerstone of the discipline of marketing.

Professor Bryan A. Lukas is Professor of Marketing and Head of the Department of Management and Marketing at the University of Melbourne.

1 In recent times, Virgin Blue has changed its strategic stance from a cost leader to a differentiator. It remains fun-loving, but no longer aims to be a no-frills price-breaker. For the purpose of this essay, however, Virgin Blue’s original strategic stance as a cost leader – lasting from the firm’s launch to approximately 2006 – is a good example.
2 By ‘product’, I mean both goods and services.



accounting-induced performance anxiety: consequences and cures

The anxiety to perform favourably against accounting performance benchmarks has both intended and unintended consequences

by anne lillis*

A condensed version of her Inaugural Lecture delivered at the University of Melbourne on 16 September 2008.

Introduction In 1975, Stephen Kerr published a simple but important paper entitled ‘On the folly of rewarding A while hoping for B’, which captured the intended and unintended consequences of performance measures. Kerr identified the ways in which commonly used performance metrics induce behaviour that may enhance reported performance on the measures used, but do not have any substantive performance impact – for example, achieving ‘reject rate’ reduction by stopping quality inspections rather than improving attention to quality. The prominence of accounting metrics in performance evaluation of organisations and their management brings on what I call accounting-induced performance anxiety. The anxiety to perform favourably against accounting performance benchmarks has both intended and unintended consequences. This lecture examines the phenomenon of accounting-induced performance anxiety by reflecting on three collaborative research projects: 1. The way intense performance anxiety – for example making losses – affects decision making at the firm level; 2. The way conventional accounting performance measurement practices within firms can inhibit effective strategy implementation; and

3. The potential for recent performance-measurement innovations, such as the Balanced Scorecard, to reduce the problem of accounting-induced performance anxiety.

Project 1 – Intense performance anxiety: reporting accounting losses1

This project examines accounting-induced performance anxiety that derives from capital market pressure to report profits. The accounting profit/loss threshold represents a powerful, empirically derived decision heuristic because:
– The reporting of a loss acts as a trigger for outside intervention (e.g. by boards, regulatory agencies, equity markets and lenders); and
– Investors asymmetrically ‘devalue’ a firm in a small loss situation relative to a small profit situation, even though these firms are in fact very similar economically.
Although the likelihood of crossing the profit/loss – zero profit – threshold creates severe accounting-induced performance anxiety, this threshold is not a particularly important benchmark in an economic sense. Small profit firms are not ‘profitable’ in an economic sense, as their earnings would not be generating sufficient return on capital.

* I acknowledge helpful comments and/or assistance with the preparation of this inaugural lecture from Margaret Abernethy, Albie Brooks, Marc Costabile, Jennifer Grafton, Richard Lee and Matthew Pinnuck.

Insights Melbourne Economics and Commerce



So if the anxiety was really performance driven, it should be evident somewhere in the small profit area. However, the anxiety centres on the loss threshold, which is a relatively arbitrary accounting threshold with little economic meaning.

Managers have a variety of mechanisms available to them to avoid reporting losses. One is ‘earnings management’ – the ability to use the discretion that is available within the reporting of accounting numbers to show the firm in a profit position. It is well documented in the literature that a disproportionately large number of firms report small profits, and a disproportionately small number of firms report small losses. This suggests firms use available mechanisms to avoid the reporting of losses, including the use of accounting discretion to stay on the right side of the threshold. The asymmetric distribution of firms around the zero profit/loss threshold is almost certainly evidence of accounting-induced performance anxiety.

A different reflection of the same type of anxiety arises from the behaviour of firms that have crossed the profit/loss threshold – into a loss position – and want to get back to reporting a profit as quickly as possible. Presumably, these firms have unsuccessfully exploited all available potential for accounting discretion in the bid to report a small profit. More drastic action can include discarding less productive investments that may have been ‘carried’ during more profitable times, or cost cutting in discretionary areas like R&D. Investments in fixed assets are lumpy and irreversible. Reducing the level of employment, on the other hand, provides a continuum (non-lumpy) of divestment potential. Cutting employees turns out to be a highly valuable ‘response’ lever in this situation.

Figure 1: Employment changes – average annual percentage change in number of employees as a function of different levels of net profit (either side of the zero threshold). Source: Pinnuck, M. & Lillis, A.M. 2007, op. cit.

An examination of changes in employee numbers for a very broad cross-section of US firms, classified by their level of earnings deflated by total assets, shows the following patterns:
– The average percentage growth in number of employees is systematically lower for loss-making firms than for profit-making firms. There is a significant discontinuity at the threshold and the difference is not explained by differences in economic fundamentals between small loss and small profit firms (Figure 1).2
– Firms that cross the threshold from a profit to a loss reduce their investment in employees disproportionately in the year that they cross the threshold.
– The effect is more significant in the year following the ‘threshold crossing’ – consistent with firms taking some time to decrease their investment in fixed labour.
– In comparing firms that crossed the threshold with firms with a similar earnings-drop that did


not cross the threshold, across all quartiles of earnings decreases, the fall in employment is greater for the firms that switched from reporting a profit to reporting a small loss.

We conclude from this analysis that the profit/loss threshold acts as an anxiety-inducing decision heuristic which has significant economic consequences. The fact that employees are so easily divested – relative to other assets – may render them particularly susceptible to ‘management’ to achieve accounting performance benchmarks.
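The discontinuity pattern in Figure 1 can be illustrated with a toy simulation. The numbers and the functional form below are illustrative assumptions of mine, not the study’s data or method: employment growth is taken to be smooth in economic fundamentals, with a discrete cut applied only to firms reporting a loss.

```python
# Illustrative simulation (hypothetical parameters, not the study's data):
# employment growth rises smoothly with scaled earnings, except for a
# discrete cut for loss-makers -- the discontinuity at the zero threshold.
import random

random.seed(42)

def employment_growth(earnings_to_assets: float) -> float:
    """Assumed employment response: smooth in fundamentals,
    plus a discrete cut when the firm reports a loss."""
    smooth = 0.03 + 0.15 * earnings_to_assets                  # fundamentals
    threshold_cut = -0.02 if earnings_to_assets < 0 else 0.0   # anxiety response
    return smooth + threshold_cut + random.gauss(0, 0.005)     # noise

# Earnings deflated by total assets for a cross-section of simulated firms.
firms = [random.uniform(-0.10, 0.10) for _ in range(10_000)]

small_loss = [employment_growth(e) for e in firms if -0.01 <= e < 0]
small_profit = [employment_growth(e) for e in firms if 0 <= e < 0.01]

avg = lambda xs: sum(xs) / len(xs)
gap = avg(small_profit) - avg(small_loss)
print(f"small-loss avg growth:   {avg(small_loss):.3%}")
print(f"small-profit avg growth: {avg(small_profit):.3%}")
print(f"discontinuity at zero:   {gap:.3%}")  # roughly 2 percentage points
```

Despite near-identical fundamentals either side of zero, the two bins show a visible jump in average employment growth, which is the shape of the discontinuity in Figure 1.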

Accounting-induced performance anxiety within firms

In order to describe the way accounting-induced performance anxiety plays out within firms, consider the following dilemma. A manager of a large division of a local manufacturing company considers recommending the acquisition of a small niche manufacturing company with technology and intellectual capital assets that offer strong potential to enhance the division’s future earnings. This manager is faced with two sets of financial and non-financial information that may convey quite different messages:
1. Decision-facilitating information – information provided to the manager about the financial and non-financial consequences of the decision. For example: an analysis of the strengths and capabilities of the target company; and discounted cash flow analysis of potential financial consequences of acquisition over a five-year horizon, compared with the ‘do nothing’ option.
2. Decision-influencing information – information collected by higher-level management to evaluate the performance (decision outcomes) of the subunit managers. For example: divisional profit and return on assets (ROA), measured annually – the basis for managerial performance measurement and bonuses.
The messages may conflict. The discounted cash flow analysis of the acquisition versus the ‘do nothing’ option may support the acquisition, but that decision might ‘depress’ both divisional profit and ROA for the next year or two as the costs of the acquisition are taken up, the asset

base is expanded immediately, and the expected payoff is deferred. Which message is ‘right’? Routine annual performance measures will never completely reflect the quality of managerial decisions when measured over short time intervals. Many managerial decisions generate short-term costs and delayed benefits. It is the multi-period analysis that gives the more comprehensive picture of events. Yet we know accounting is always locked into arbitrary reporting cycles. Which signal will exert the strongest influence on decision-making? Generally, the literature tells us that the decision-influencing information – performance measurement, evaluation, incentives – will dominate. It is difficult to ignore rewarded behaviour. Annual accounting performance metrics within firms tend to induce myopic decision-making and subunit-level optimisation.
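The conflict can be put in numbers. The figures in the sketch below are hypothetical (mine, not the lecture’s): a five-year discounted cash flow analysis supports the acquisition, while the annual ROA used for evaluation argues against it.

```python
# A stylised version of the manager's dilemma (hypothetical numbers):
# the acquisition has positive NPV over five years, yet depresses
# next year's divisional ROA -- the measure that drives the bonus.

def npv(rate: float, cashflows: list[float]) -> float:
    """Discounted value of cashflows; cashflows[0] occurs at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Acquisition: pay 100 now; payoffs are back-loaded over five years.
acquisition_cf = [-100.0, 5.0, 15.0, 35.0, 50.0, 60.0]
print(f"NPV at 10%: {npv(0.10, acquisition_cf):.1f}")  # positive -> acquire

# Decision-influencing view: annual divisional ROA, with and without the deal.
division_profit, division_assets = 30.0, 200.0
roa_before = division_profit / division_assets
roa_after = (division_profit + 5.0) / (division_assets + 100.0)  # year-1 payoff only
print(f"ROA without acquisition: {roa_before:.1%}")
print(f"ROA in year 1 with it:   {roa_after:.1%}")  # lower -> 'don't acquire'
```

The multi-period signal says acquire; the annual signal says don’t. Because the annual signal is the one that is rewarded, it tends to dominate.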

Project 2: Customer-responsive manufacturing3

These two related projects examine how performance measurement practices can subvert good strategic decision-making when firms pursue strategies focused on customer responsiveness. Conventional measures of manufacturing performance focus on efficiency and productivity (e.g. cost variances, scrap, downtime). Conventional structures within manufacturing firms also tend to support mass production at low cost. Efficiency was historically encouraged in manufacturing subunits through task segregation, by ‘buffering’ manufacturing subunits from the vagaries of markets and customers. Sales subunits were charged with responsibility for dealing with customers. Interdependencies between manufacturing and sales were always high, but they were sequential in nature, with the sales and manufacturing interface managed through scheduling.

These measures and structures are well suited to manufacturing firms with high levels of product standardisation, stable production processes and a focus on cost minimisation. However, these attributes no longer reflect current strategic priorities in manufacturing.



Figure 2: Classified summary of strategic orientations (low cost; quality; service, responsiveness & dependability). Source: Lillis, A.M. 2002, op. cit.

Figure 3: Manufacturing performance measurement practices in the 36 firms reflected in Figure 2 (cost and efficiency/productivity measures; quality measures; customer service measures). Source: Lillis, A.M. 2002, op. cit.

Figure 2 classifies 36 local manufacturing firms by strategy, and demonstrates the prevalence of multiple strategies focused on quality, service, customer responsiveness and dependability rather than low cost.

The shift from low cost to responsive manufacturing is captured in a quote from a local manufacturer producing heavy-duty men’s work clothes. They “used to do long runs of ‘97 regulars’ or efficient combinations of ‘82 regulars’ and ‘112 stouts’, put them in inventory and sell them.” Now, however, they face demand for a varied mix of products at short lead times. While the product range in this example has remained relatively stable, the firm has significantly increased its responsiveness to customers by shortening lead times and meeting greater within-order variety. These market-initiated changes have increased the rate of production changeovers, reduced batch sizes and increased disruption.

Accounting-induced performance anxiety arises here from the failure of performance measurement approaches to keep pace with shifting strategic priorities. Figure 3 reflects the manufacturing subunit performance measures used by the same 36 firms reflected in Figure 2. The paradox here is that the same firms that have moved away from a low cost strategy are using efficiency and productivity measures extensively. Is this the folly of rewarding efficiency while hoping for responsiveness?

It raises a challenge for management: how to elicit responsiveness from manufacturing subunits. How do you shift the mindset of efficient lot sizes and maximum throughput to allow for the costs of disruption associated with frequent changeovers, reduced batch sizes and greater product variety? In order to be flexible enough to meet variable customer demands and associated short lead times, manufacturing and sales need to work much more collaboratively than has historically been the case. Interdependencies are described as ‘reciprocal’ rather than sequential, as manufacturing and sales managers negotiate ‘joint’ optimal solutions, rather than manufacturing determining optimal scheduling and efficient batch sizes.

We found that firms deal with this in a couple of ways: – A structural response; or – Reducing the intensity and accounting focus of performance measurement.

Structural response
The structural response involves investing in integrative structural arrangements that facilitate cross-functional co-ordination and interaction – more organic, less mechanistic structures. These structural mechanisms include cross-functional teams, task forces and daily cross-functional meetings. The aim of these devices is to link the efficiency and productivity-focused mindset of manufacturing with the customer-focused mindset in sales, and to facilitate the joint development of optimal production solutions.

Reducing the intensity of the accounting focus of performance measurement
In settings with a strong commitment to customer responsiveness, the absence of standardisation makes it increasingly difficult to specify unambiguous performance standards. Performance standards in manufacturing subunits are generally constructed in the form of standard product costs that specify standard expectations in relation to material, labour and overhead input per unit of output. Such standards are set to reflect a specific efficiency or productivity level (an assumed labour standard to assemble a certain number of units per hour). It is notoriously difficult to build the costs of disruption and unpredictable frequent changeovers into standard costs. Firms struggle with this – they are unable to rewrite efficiency standards to incorporate the costs of disruption, but they want manufacturing subunits to be prepared to ‘wear’ the disruption in the interests of customer responsiveness. The way they deal with it is to ‘play down’ the pressure around cost budgets and efficiency standards. In effect, they rely less on accounting performance benchmarks, as these would be ‘counter-strategic’.

So what happens when firms do not get this right – when structure and performance measurement impede flexibility?

“We’re nowhere near as flexible as we would like. We always liken it to a battleship where it takes miles to turn it around.”

“Production is so intent on meeting their weekly targets, if a special order comes in they tend to say, ‘Oh no, what a nuisance’, rather than looking at the opportunity presented. And that’s fair enough, that’s where they’re valued at...That’s their whole reward system. Yes [the special order does get done], but it takes a lot of management effort to tell people that they are going to do it.”

“Setters and leading hands are imbued with this view that the line must not stop, and try as we may we cannot get that out of their thinking. The trouble is we have far too many long-serving employees and they know that they have to get 25,000 products off that line this shift and they will do it...They’ll believe that they have done a good job and in fact some of the management mechanisms may tell them they’re doing a good job. But it might not match the customer service angle and that’s what’s wrong.”

What happens when they get it right? “We have a fairly informal management structure. It’s run a bit like a big milk bar.”

Project 3: The Balanced Scorecard – a mechanism to reduce accounting-induced performance anxiety4

The Balanced Scorecard (BSC) is a performance measurement innovation designed specifically to counter the adverse effects of managing directly by accounting numbers. The BSC is a much broader performance measurement protocol embracing a wide-ranging set of financial and non-financial metrics that should have a causal link with future profits. At a minimum, the effect should be to dilute the influence of accounting in evaluation, and thus reduce accounting-induced performance anxiety.

The BSC captures particularly well the dual role of performance measures, as it is fundamentally designed as a decision-facilitating tool. It provides a dashboard of measures that are basically designed to enhance the information available to managers in decision-making. By targeting leading indicators, managers who seek to make decisions that improve on the BSC metrics should theoretically be implementing strategy effectively and driving future financial performance improvements.

However, there are challenges associated with implementation of a BSC. Some of these challenges relate to the potential for evaluation mechanisms to subvert the good intentions of the BSC. There is also the assumption that the BSC is a neutral management tool that will be used exactly as it is designed to be used: to enhance managerial effectiveness.



The literature has been either silent or equivocal on how this performance measurement innovation interacts with the mechanisms used to evaluate the performance of managers. Thus, we are potentially back where we started. We have a good decision tool that might not be used if the decisions it signals are not consistent with evaluation mechanisms. What if the manager is expected to use the BSC in decision-making, but her bonus depends on subunit profit?

Other researchers have documented that firms initially using a BSC try to reinforce the use of the scorecard by attaching incentives to performance on the full range of metrics. The problem is that perceptions of validity and reliability of metrics for evaluation and bonuses are somewhat different from the way the same criteria would be applied to information considered relevant for decision-making. There are documented tendencies in practice to over-rely on conventional financial metrics when it comes to evaluation.

This question was addressed in a study5 that surveyed 183 profit-centre managers and asked them to identify two sets of performance measures – the measures they consider the most informative for running the business (typically a BSC set of measures) and the measures that are used by the next level of management up the hierarchy to evaluate their performance. It rated the ‘commonality’ between the two sets of measures and assessed the association between ‘commonality’, decisions and profit-centre outcomes.

Figure 4: Extent of use of performance measures for ‘information’ and ‘evaluation’. Source: Grafton, J., Lillis, A.M. & Widener, S. (2008), op. cit.

It found:
– Among the managers interviewed, there was a moderate degree of commonality between the measures they considered ‘best’ for running the business and those used to evaluate their performance (approximately 62 per cent average commonality).
– The weighting on aggregate financial measures in evaluation is significantly greater than their usefulness in running the business (Figure 4). Managers consider disaggregated financial information useful for running the business, such as sales by subunit/product line, costs, cash flows, etc. They consider aggregate financial measures such as profit and ROA less useful. Yet there is a disproportionate reliance on aggregate financial measures in evaluation.
– Outcomes improve when firms broaden evaluation protocols to embrace more of the measures that managers consider important. The greater the level of commonality between the two sets of measures, the more the managers actually use the measures that are identified as important in running the business – and by definition, the lower the level of commonality, the less they use these measures. The use of these measures improves the firm’s ability to exploit


its capabilities – both existing and future – as managers are using the range of ‘high quality’ measures available to them more effectively. In turn, the cases with higher commonality produce better financial performance outcomes – a result which appears to be a function of greater use of the measures and more strategic responsiveness, not just a direct result of measuring accounting performance and driving improvement on that metric.

So, is this the cure? Do we just need to broaden the performance measurement base to incorporate a comprehensive set of non-financial leading indicators that will drive up future profits? The BSC is a performance measurement innovation that has gained significant traction in practice. It has led to a significant shift in performance measurement practice within organisations, in that managers rely less on broad accounting performance measures than they did a decade ago.

However, there are still challenges. The BSC is designed primarily to facilitate decisions. Whether or not it manages to do so depends very much on how it links with performance evaluation throughout the firm. There are practical impediments to the complete adoption of the BSC in evaluation, and firms seem reluctant to do it. Many of the issues relate to undue complexity with so many measures, the inescapable ‘softness’ of some measures on a BSC, and the fact that they are inherently situation-specific and thus not comparable across subunits. There are many examples of firms reverting to the use of accounting measures in evaluation. To the extent that evaluation protocols remain accounting-focused and disconnected from the array of decision-facilitating measures that managers want to use, performance anxiety will remain and decisions are likely to be accounting-driven.
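For concreteness, ‘commonality’ can be operationalised as a simple overlap measure. The survey’s exact scoring is not reproduced in this lecture, so the function and the example measure sets below are illustrative assumptions of mine:

```python
# One plausible way to operationalise 'commonality' (not the survey's exact
# scoring): the share of the measures a manager finds most informative for
# running the business that also appear in the set used to evaluate her.
def commonality(informative: set[str], evaluative: set[str]) -> float:
    """Fraction of the decision-facilitating set that overlaps
    with the evaluation set."""
    if not informative:
        return 0.0
    return len(informative & evaluative) / len(informative)

# Hypothetical measure sets for one profit-centre manager.
informative = {"sales by product line", "unit costs", "cash flow",
               "on-time delivery", "customer complaints", "divisional profit",
               "market share", "scrap rate"}
evaluative = {"divisional profit", "ROA", "unit costs", "cash flow",
              "sales by product line", "scrap rate"}

# 5 of the 8 informative measures are also used in evaluation.
print(f"commonality: {commonality(informative, evaluative):.1%}")  # -> 62.5%
```

In this constructed example the manager’s aggregate financial measures are all used in evaluation, while the softer leading indicators (delivery, complaints, market share) are not — the pattern of over-reliance on financial metrics described above.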

Concluding comments

This lecture has addressed the issue of accounting-induced performance anxiety from a range of perspectives. Accounting-induced performance anxiety arises from the prominence of accounting in performance measurement practice, and has several consequences. In response to capital market pressures, accounting benchmarks can be shown to have socio-economic consequences as firms change employment patterns when they report accounting losses. There are many ways in which accounting benchmarks can induce short-term decisions that compromise long-term firm value in response to capital market pressures.

Within firms, accounting performance measures create silos, and cultivate self-interest and ‘short-termism’ among managers. It becomes difficult to achieve collaborative ‘firm-wide’ solutions, and managers focus on driving up short-term evaluation measures rather than adding long-term value.

There is no apparent cure for capital-market-induced performance pressure. Capital markets will always induce efforts to improve and ‘manage’ accounting performance to convey particular messages about the firm and its prospects. There is much more potential to reduce the adverse impact of accounting-induced decision-making within firms, by making adjustments to organisation structure, control systems and performance measurement practices. The BSC is a partial remedy to the dangers of accounting-focused evaluation within firms. However, there are many challenges associated with the reconciliation of the decision-facilitating role of the BSC and the mechanisms used in evaluation. To date there is little evidence of sustainable change in evaluation practice.

Professor Anne Lillis is Professor of Management Accounting and Deputy Head of the Department of Accounting and Business Information Systems.

1 Pinnuck, M. & Lillis, A.M. (2007). Profits versus Losses: Does Reporting an Accounting Loss Act as a Heuristic Trigger to Exercise the Abandonment Option and Divest Employees?, The Accounting Review, 82:4, pp. 1031-1053.
2 The threshold-crossing effects described in the remaining dot points are not evident in Figure 1.
3 Abernethy, M.A. & Lillis, A.M. (1995). The impact of manufacturing flexibility on management control system design, Accounting, Organizations and Society, 20:4, pp. 241-258; Lillis, A.M. (2002). Managing multiple dimensions of manufacturing performance – an exploratory study, Accounting, Organizations and Society, 27:6, pp. 497-529.
4 Grafton, J., Lillis, A.M. & Widener, S. (2008). The influence of evaluation mechanisms on the use of decision-facilitating performance measurement information, Working Paper.
5 Idem.




understanding global imbalances

Far from being unsustainable, the large and growing US current account deficit is likely to endure for some years – and Australia shares some of the key features of the United States

by richard n. cooper

A condensed version of the David Finch Lecture given at the University of Melbourne on 28 March 2008.

Introduction

The large and growing US current account deficit has elicited increasing concern, even alarm, and claims that it is unsustainable. This lecture argues that the large US deficit is a natural consequence of two significant worldwide developments – demographic change and globalisation of financial markets. Far from being unsustainable, it is likely to endure for some years. Serious efforts to reduce it significantly are likely to do more harm than good. While the focus is on the US deficit, Australia could be substituted for the US and much of the argument would still apply. Although smaller in scale and better endowed with natural resources, Australia shares some of the key features of the US.

The key ‘laws’ of economics

Like physics, economics has its ‘laws’ that cannot be broken. The key laws of economics are accounting identities – adding-up requirements – that must be satisfied when contemplating any change from any observed situation. Much of economics is devoted to the study of behavioural regularities, but these may vary in important detail from place to place and from time to time. The accounting identities must always be satisfied. Yet much public and journalistic discussion ignores them, and thus implicitly contemplates changes that are not in fact viable.

Three accounting identities will inform this lecture and, I hope, throw light on the origin and the sustainability of the current pattern of world imbalances. The first is that, apart from measurement errors, which may be substantial, any country’s current account balance must equal the difference between its total expenditure and its total output or, equivalently, between its domestic investment and its national savings. The second is that current account balances around the world must sum to zero – again, apart from measurement errors. The third is that a country’s current account balance is equal, with usually minor qualifications, to its net foreign investment. Thus, a country in current account deficit must be experiencing a net inflow of capital from abroad. The US current account deficit must be judged in light of these three accounting identities.
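In standard national-accounts notation (the symbols are mine, not the lecture’s), the three identities can be written as:

```latex
\begin{align*}
\text{(1)}\quad CA &= Y - A \;=\; S - I
  && \text{balance = output less expenditure = saving less investment}\\
\text{(2)}\quad \sum_{i} CA_i &= 0
  && \text{current account balances across all countries sum to zero}\\
\text{(3)}\quad CA &\approx NFI
  && \text{the balance equals net foreign investment abroad}
\end{align*}
```

Here $Y$ is national output (income), $A$ is total expenditure (absorption), $S$ is national saving, $I$ is domestic investment and $NFI$ is net foreign investment; a deficit country ($CA<0$) spends more than it produces and receives a net capital inflow.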

Current account deficits and surpluses

The US current account deficit has grown significantly in recent years, to around $800 billion or six per cent of GDP in 2006. (It declined modestly in 2007 in response to a slowdown of the US economy and a depreciation of the US dollar.) Many attribute this growth to a decline in US savings, and argue that savings must be increased to reduce the deficit. But the decline in US savings accounts for only a portion of the rise in the US deficit. Moreover, the US economy is part of a complicated, interdependent global system. Raising US savings by itself (e.g. through higher tax rates) would not necessarily reduce the current account deficit, or might reduce it in an



undesirable way, for example by causing a US recession. The US deficit has its exact counterpart in surpluses elsewhere, implying excess savings in the rest of the world. Raising US savings will not automatically reduce savings or increase investment elsewhere, and might indeed have the opposite effect. We must address why those surpluses exist, and how easily they will decline, as they must do if the US deficit is to be reduced.

One reason for large savings elsewhere is the sharp increase in oil prices since 2002, greatly augmenting the revenues of oil-exporting countries. These surpluses are likely to decline in the coming years, as revenues move into the income stream of the oil-exporting countries and into imports, and as oil prices decline.

But oil-exporters are not the only countries in surplus. So are China, Japan, Germany, and a host of smaller European and East Asian countries. These countries are going through dramatic demographic change, with increasing longevity and birth rates well below the replacement rate. These developments, other things equal, are likely to sustain savings but weaken domestic investment in such countries, as the need for housing, schools, and equipment for new entrants to the labour force declines. Returns to capital are also likely to be depressed. Residents of these countries have an incentive to place some of their savings abroad, to build assets for retirement, implying current account surpluses. China, while experiencing similar demographic change, is in a different situation from the rich countries, in that it will continue to experience significant rural-to-urban migration and will thus need to house and equip the new urban workers, as well as upgrade the housing of urban Chinese as they become richer.
As globalisation proceeds, the traditional bias toward allocating saving at home will decline, more information on the possibilities for foreign investment will become available, and institutions will respond to increased interest by making it easier for citizens in one country to invest in another. Thus globalisation of financial markets will reinforce demographic change as a factor leading to greater foreign investment. Net foreign investment abroad will necessarily be reflected in a current account surplus, and, similarly, the net recipients of net inward foreign investment will run a current account deficit, as both Australia and the US do.

Destinations of foreign investments

Why invest in the US? Net foreign investment by saving-surplus countries takes place in many places, not just in the US. Yet the US accounts for over a quarter of gross world product, and for roughly half of marketable financial securities (stocks and bonds). Moreover, property rights in the US are secure, and dispute settlement is impartial and reasonably quick compared with other countries (though not by an absolute standard). Effective confiscations by Argentina, Bolivia, Russia and Venezuela have reminded savers around the world that foreign private investment is not always secure in emerging markets. Funds for retirement seek security even more than yield.

Some have emphasised the role of foreign central banks in investing in the US, particularly in US Treasury securities. It is true that extensive foreign official investment in the US occurred in this decade. But it has been dwarfed four-to-one by the inflow of private funds, and has never been sufficient to match the current account deficit. While some apparently private inflows were undoubtedly beneficially owned by official bodies – for example, in the oil-exporting countries – the choice of investment has been made by private investment managers. Moreover, some of the official funds compensate for the unwillingness (e.g. in Japan) or the inability (e.g. in China) of private parties to invest abroad, when it is in the country’s long-term interest to do so. Thus even central banks must be viewed as financial intermediaries in a global economy.

Have foreign claims on the US become unsustainably large? Surprisingly, the net international investment position of the US has improved since 2001, despite large and growing current account deficits. Net foreign claims have declined from 23 per cent of US GDP to 17 per cent over the period 2001-2006, despite cumulative current account deficits of 27 per cent of GDP.
The explanation lies in capital gains on US assets abroad in excess of capital gains on foreign claims on the US. Some of these capital gains (measured in US dollars) can be explained by a depreciation of the dollar, since most US claims on the rest of the world are denominated in foreign currencies, and their dollar value rises as the dollar depreciates. But that accounts for only about a quarter of the 2001-2006 gain – the rest is due to capital gains on US equity holdings measured in local currency.

The forces of demography and globalisation are deep-seated and long-lasting. They are not likely to disappear soon, and indeed may even strengthen. But will the US produce enough financial assets to satisfy the world demand for them? An interesting feature of foreign investment in the US, private as well as public, has been its concentration on interest-bearing securities. This may slowly change with the growth of sovereign wealth funds and their interest in pursuing higher-yield overseas investments. Foreign investment by Americans, in contrast, concentrates on equity investment.
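The valuation channel described above can be sketched with stylised numbers (all values below are hypothetical, chosen only to show the mechanism):

```python
# Stylised arithmetic for the valuation channel: US assets abroad are largely
# denominated in foreign currency, while foreign claims on the US are largely
# in dollars, so a dollar depreciation improves the US net position even
# while current account deficits continue.
us_assets_abroad_fc = 100.0   # US-owned foreign assets, in foreign-currency units
foreign_claims_usd = 120.0    # foreign-owned US assets, already in dollars

usd_per_fc_before = 1.00
usd_per_fc_after = 1.15       # dollar depreciates 15 per cent against the foreign currency

net_position_before = us_assets_abroad_fc * usd_per_fc_before - foreign_claims_usd
net_position_after = us_assets_abroad_fc * usd_per_fc_after - foreign_claims_usd

print(f"net position before depreciation: {net_position_before:+.0f}")  # -20
print(f"net position after depreciation:  {net_position_after:+.0f}")   # -5
```

The dollar value of the foreign-currency assets rises while the dollar liabilities are unchanged, so the net position improves with no change in underlying holdings; in the actual 2001-2006 episode this exchange-rate effect explains only about a quarter of the gain, with equity capital gains doing the rest.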

Thus the US serves as a global financial intermediary, exchanging debt for equity, as well as being a destination for net investment. This practice implies that, over time, Americans earn more on their foreign assets than they pay on their liabilities to foreigners. It also implies that foreign claims on the US capital stock are indirect rather than direct. So in reckoning the share of foreign ownership in the US, and the possibility that it may become unsustainably high, we need to look at total financial assets in the US.

Due to financial innovation and the development of ever-more refined financial instruments to appeal to diverse tastes, the financial side of the US economy has grown more rapidly than the real side during the past half century, such that by 2006 financial assets amounted to ten times GDP, up from five times in 1965. Foreign ownership of these assets amounted to under ten per cent, but up from four per cent in 1980. The sub-prime mortgage crisis of course signaled that not all of this financial innovation was sound, and some de-leveraging of the US economy may well occur in 2008.

Insights Melbourne Economics and Commerce

21


However, the long-term trend is likely to resume thereafter. There does not seem to be any shortage of US financial assets in the near term.

Future outlook

Of course, eventually, aging societies will reduce their savings and desire to run down their accumulated assets. At some point the excess savings will disappear, as will the associated current account surpluses. At that point the US deficit will undoubtedly also decline. But that point may not be reached for many years.

Some have advocated that to wait would be unwise, even reckless, since the large deficit might precipitate a ‘hard landing’ for the US economy and for the world. Thus, there are proposals for a large engineered depreciation of the US dollar, combined with reduced spending in the US and increased spending abroad. But depreciation of the dollar implies appreciation of the currencies of the countries in surplus, and that is likely to depress investment in such export-oriented economies as China, Germany and Japan, not stimulate it. And why should aging consumers spend more now rather than save for a retirement of ever-increasing but uncertain length? Furthermore, to avoid world recession, such a package deal would require consummate management of macroeconomic policies across countries. This is improbable and would require larger government deficits in countries such as Germany and Japan, which have been struggling to reduce their budget deficits.

The US has a vibrant, innovative economy. Its demographics differ markedly from those of other high-income countries in that birth rates have not fallen so far, while immigration, concentrated in young adults, can be expected to continue on a significant scale. In these respects the US, although high-income and politically mature, can be said to be a young and even a developing country. It has an especially innovative financial sector, which continually produces new products to cater to diverse portfolio tastes. In a globalised market, the US has a comparative advantage in producing marketable securities and in exchanging low-risk debt for higher-risk equity.
It is not surprising that savers around the world have wanted to put a growing portion of their savings into the US economy. The US current account deficit and the corresponding surpluses elsewhere, although conventionally described as imbalances, do not necessarily signal economic disequilibrium in a globalised world economy, and they may well remain large for years to come.

Professor Richard Cooper has been Maurits C. Boas Professor of International Economics at Harvard University since 1981. Previously, he was Chairman, National Intelligence Council, 1995–97; Chairman, Federal Reserve Bank of Boston, 1990–92; Under-Secretary of State for Economic Affairs, 1977–81, and Deputy Assistant Secretary of State for International Monetary Affairs, 1965–66, US Department of State; Frank Altschul Professor of International Economics, 1966–77, Provost, 1972–74, and assistant professor, 1963–65, Yale University; and senior staff economist, Council of Economic Advisers, 1961–63. He is Vice-Chairman of the Global Development Network and a member of the Trilateral Commission.


an interview with robert e. lucas, jr.

A Nobel Laureate welcomes the closure of the huge gaps of inequality and incomes across societies

by ian king

Professor Ian King, Director of the Centre for Macroeconomics, University of Melbourne, interviewed Professor Lucas prior to the public lecture delivered at the University of Melbourne on 26 March 2008, and the following is an abridged transcript of that conversation.

I’d like to start by asking you some questions related to your lecture ‘The Industrial Revolution and the Macroeconomics of Ricardo and Marx’. What do you see as the key lessons, if any exist, that we can learn from the experiences of the industrial revolution?

It’s still going on. What has to happen, though, bearing in mind that people are more or less the same everywhere, as are the laws of economics, is that it can’t be a permanent situation where people on one continent have living standards that are a tenth or a fifteenth of people on some other continents. It just doesn’t make any damned sense! I think the industrial revolution, in the sense that it creates huge gaps of inequality and incomes across societies, has to end some time. It has to be a story with an end. And I think, in the end, economies will be rich and growing, more or less equally. So how’s that going to play out? I don’t know. What have we learned from the industrial revolution? Everything! That sustained growth is possible!

How does Karl Marx fit in?

Ricardian theory evolved a dynamics that returned the living standards of ordinary working people to some equilibrium point – a kind of Malthusian subsistence equilibrium level. If you had the blessing of technology, an invention, or the discovery of new lands, you would get a temporary increase in living standards, but this would ultimately come out as population increase, and living standards of working people would be restored back to subsistence. So, when the industrial revolution got under way, Marx thought that this was just as transient. It’s like all the other innovations in human history (and there were many of them): growth is just a transition dynamic to some new steady state, in which some people are going to own everything, and the ordinary working people are no better off than they were in the Middle Ages. Everything we knew about economic theory, at that time, was on that side. So Marx thought we had to break into this process of private ownership. Override this process – these dynamics that kept returning us to the same point – and somehow create something different. Bust out of this recurrent process of discovery and new class growth and then returning to the old equilibrium. We had to break out of that. The thing is that industrialisation created a new class that didn’t just peter out – instead, it just keeps going and going. I think that’s what we’ve learned as an historical observation. The Ricardian/Malthusian model just didn’t work.

Do you feel that capitalism generates a long-run equilibrium distribution of income that we can live with?

Absolutely. Let’s just take Japan, as an isolated society. Suppose that was the whole world.


There are some differences – some people work harder, some people have a little more luck, but it’s not like there’s one class that’s ruling over everybody else. That’s not what a capitalist society is about … there’s not some barrier. Children from parents with modest incomes can go anywhere in these societies. Now, as you start looking at countries that have disadvantaged groups, and that’s why I picked Japan, you start getting little blots on this happy picture. And if you start looking at the world as a whole, you get huge blots in the picture because two-thirds of the world is left out. But I think we are heading to the point where the whole world, more or less, looks like Europe, or North America, or Japan. Now, is there too much inequality? I don’t see it. I think people who drop out of high school, take drugs and so on are going to be poorer than the guys who worked hard. It doesn’t bother me at all. Why shouldn’t they be poor? It’s hard to work!

You think that the distribution of income reflects effort more than it does, say, luck?

Both. But how do you get all this luck? How did Bill Gates get all this good luck? He wasn’t sitting on his ass smoking dope or something like that!

But how can we observe that? You say he’s rich because of effort, but there are lots of guys who work very hard and get nowhere.

Let me get back to Gates. Gates’ dad is a pretty prosperous lawyer in Seattle. In fact, my brother was a partner in his law firm, Gates and Lucas. And Gates went to Lakeside, which was the only really first-class private high school in Seattle. So his dad was not only pretty well off, but he spent his money on education for his kids as opposed to a big yacht or something. He cared, and I’m sure the Gates household, when he was growing up, was a stimulating environment. So Gates had a lot of things going for him. But there were plenty of people who put in a huge amount of devotion, hard work, and risk taking.

Your assessments of the long-run prospects for humanity and the global economy seem considerably more upbeat than many others today. To what do you attribute this difference of views? Why are you so optimistic when others seem to be so pessimistic about the future?

Give me an example. Most people don’t know anything…

Particularly with regard to, say, global warming, resources running out, the ‘doomsday scenarios’ that people have.

I have no idea what the relative price of oil is going to look like 100 years from now. It could be higher than it is now. When you add taxation, environmental problems are orthogonal [unrelated] to what I’m talking about.

So, you don’t feel the development process itself somehow has environmental implications?

It’s a pseudo-question. We may not like the fact that China is going to be as prosperous as we are, demanding materials in competition with the rest of us – but that’s what’s going to happen. Do you seriously think we would rather keep the Chinese in some sort of concentration camp, and deprive them of their access to markets? Surely it’s a blessing that people in China are moving into the modern world and doing very well out of it. And the fact is that it means the things that we rich people want to buy are going to sell at higher prices because the Chinese are going to want them too! If that’s a problem, then we should always have such problems. That’s related to your questions and concerns about distribution. Now you’re saying you’re concerned about distribution, but when the distribution problems start to resolve themselves we say, ‘Uh-oh, this has gone too far.’ The Chinese are going to be as rich as we are one of these days. We’ve got to get used to it. My view is we ought to take pleasure in it, and do everything we can to help it.

I’d like to turn to your advice for graduate students in economics. For students considering a career in academic research, if they’re on the cusp and they’re doing their honours, is there one key piece of advice you’d like to give to a bright student?

Hang around with smart people, idealistic people. The number of people that really want to advance knowledge, and are good enough, well trained and smart enough to actually be doing it, is a lot fewer than you think. So, do your best to hang out with smart, idealistic people. Avoid careerists. If that involves some sacrifices, to be in good places or have opportunities to interact with good people, do it! Think about the trade-offs people make – a guy getting a degree in economics can double his salary if he decides to teach accounting. That’s OK, those guys are going to enjoy their wealth but they’re not going to be contributing to the future development of social sciences in the way they could be.

When choosing a topic for a PhD thesis, it seems to me there is a trade-off between going after a truly fundamental question and possibly failing and not being able to publish their piece….

…and you never get over it.

There’s a trade-off between going for a big one or going into more established ways of thinking and making marginal but safer contributions.

You want to work on a problem where you could learn something. It usually turns out that if what you learn is new to you then it’s new to a lot of the rest of us, too. It’s going to get recognised. Maybe not – it’s hard to say. There are a lot of good problems out there. You’ve got to try to avoid short-cuts and try to understand the issue in a way that satisfies you, yourself. Chances are it will end up being, some of the time anyway, a contribution the world knows and recognises – you could make a living out of it. That’s what I think. That’s not very operational advice but it’s hard to make it explicit.

When I think of conversations with people I’ve worked with – guys like Edward Prescott and Leonard Rapping – when we get deep into a problem, we don’t give a damn about anything except for solving the problem and satisfying ourselves that we’ve got the answer. It’s hugely exciting to work with people who have the same goals, same high ambitions as scientists. It’s different from what I call careerism. You just want to solve a problem, get really into something. Of course, sometimes you don’t end up solving it, but when I start out on something I just think, ‘God, I’m going to really blast this problem.’ If you don’t quite make it – no problem, it’s OK. Just go onto the next one.

It seems to me that, for a lot of papers, the actual end product of the research can be quite different from the original question.

I know. It can’t be helped. Sometimes it’s better!

Robert E. Lucas, Jr. is the John Dewey Distinguished Service Professor of Economics at the University of Chicago. He has held numerous prestigious posts throughout his career, including the Presidencies of both the Econometric Society and the American Economic Association, and the Editorship of the Journal of Political Economy. He became a Nobel Laureate in 1995 “for having developed and applied the hypothesis of rational expectations, and thereby having transformed macroeconomic analysis and deepened our understanding of economic policy.” He is widely regarded as the central figure driving the agenda of macroeconomic research over the past three decades.

closing the gap? the role of wage, welfare and industry policy in promoting social inclusion

The perception of a social policy crisis created by WorkChoices is fundamentally mistaken and based on a narrow reading of the ‘Australian way’ of doing social policy

by paul smyth

A condensed version of the 22nd Foenander Lecture delivered at the University of Melbourne on 3 October 2007. The full paper appears in The Journal of Industrial Relations, 50: 4, September 2008.

Introduction

Soon after Federation, a distinctive system of ‘social protection by other means’ (SPM) emerged in Australia, and has become widely invoked in the comparative literature as the ‘wage earners’ welfare state’. Its foundation lay in the system of judicial minimum wage determination which persisted – if in diminishing forms – from 1907 to 2005. Now seen as a watershed in Australian social policy, 2005 was the year that the Howard Government passed industrial relations reforms known as the WorkChoices legislation. These reforms were widely thought to reflect a view that wage minima should be determined more by what the market can afford than by welfare criteria.

For some, this was seen as social policy vandalism, opening the way to a US-style flood of working poor. Others saw it as an inevitable consequence of globalisation, and proposed that Australia, along with other SPM countries, must reckon with creating new European-style welfare state systems of ‘flexicurity’ – which can combine a deregulated market economy with a welfare state-regulated form of social compensation.

I propose that this perception of a social policy crisis created by WorkChoices is fundamentally mistaken and based on a narrow reading of the ‘Australian way’ of doing social policy. A revisioned history will point the way to a quite different and readily attainable social policy framework which can indeed ‘close the gap’ and promote a socially inclusive Australia.

Three features of the new industrial relations agenda

The last decade witnessed the emergence of a policy and legal vacuum in relation to the respective roles of wage and welfare policies in Australia. Older pro-regulatory stances in labour law were displaced, creating a challenge to forge new pro-regulatory assumptions which are post-protectionist. There are three features of the new industrial relations agenda which are of particular relevance to this endeavour.

Firstly, labour law has seen a switch of emphasis from what is termed ‘protectionism’ to efficiency. The former is said to have expressed twentieth-century welfare goals concerned with securing worker rights in an employee-employer relationship conceived as fundamentally unequal. Internationally, this goal of worker protection has been overtaken in the last decade and a half by deregulatory approaches emphasising economic efficiency and competitiveness.


The main division of opinion is now between those who see competitiveness best achieved through simple deregulation of the labour market and those who see that the achievement of competitiveness actually requires extensive government intervention.

The second feature of the new labour law has been the widening of interest in its regulatory field from the ‘workplace’ to the broader ‘world of work’. This emphasis links issues of efficiency at work to the requirement for workers to have appropriate education, training, housing and work-life balance.

The third aspect of the new labour law relates to the sites of regulation. The curtailed powers of the Australian Industrial Relations Commission will not mean the end of regulation but rather its dispersal among other regulators, creating new difficulties for any coordinated approach to wage and welfare policies.

These three themes – protectionism replaced by competition, the linking of the market to its social foundations, and the challenge of coordinated governance – comprise what I would see as leading edges of contemporary Australian social policy development.

From the ‘Australian settlement’ to the ‘Australian way’

It is vital that our analysis be informed by an accurate sense of history. ‘Path dependency’ remains a critical factor in the ongoing evolution of national social policy. The ‘initial steps’ in the Australian social policy path are still widely and falsely understood in terms of an Australian settlement characterised by protectionism. Here, wage protection is seen as an element of a policy package also comprised of tariff protection and the immigration barrier of the White Australia policy.

A more accurate interpretation is in terms of an ‘Australian way’, in which protectionism was not a distinctive feature at all but rather a bias towards regulating in a way which would promote employment and growth while seeking egalitarian outcomes. The distinctive feature of the Australian way was ‘social investment’.

Australia initially chose tariffs over free trade, but the point is that this was not distinctive. North Americans, continental Europeans and all the self-governing members of the Empire rejected the British push for free trade in this period. Following more the ideas of Friedrich List than Adam Smith, these countries adopted tariff protection as a way of building up their own industrial base. Australia chose to use its land-based wealth to steer its economic development towards the new industrial sector and so develop a more diverse, higher-wage economy which would also be attractive to immigrants and build a larger population base.

Most distinctive was the level of government spending. Along with the US and New Zealand, Australia has led the world in spending on education since the 1870s – with the outstanding social policy achievement of free public primary education and the beginning of state secondary and technical education. Importantly, this ‘social investment state’ was as much the product of the business sector as it was of the labour movement. An industry policy which took the high-wage manufacturing path would require a better educated workforce while offering the prize of higher wages. Good social policy would also be good for business.

The idea of a ‘living wage’, as set out in the Harvester Judgement, combined with an industry policy to set Australia on a high-wage path to development – thus creating the national sense of a ‘fair go’. Adequately rewarded workers would become self-trusting individuals free from dependence on charity or the state.

In practice, the living wage had less impact on wage setting than many suppose. It was not until the 1920s that it could be said that federal and state tribunals were applying living wage principles consistently. The lack of standard working arrangements further attenuated its influence.
Moreover, the basic wage did not keep pace with prices and living standards; and, following the Great Depression, wage setting came to be driven more by the capacity of industry to pay than by calculations of family needs. Nevertheless the practice of indexing the basic wage did set a floor under the labour market with a real effect on the wages of the weak.


At the same time, tariff policy stimulated the emerging manufacturing sector in a way that enabled Australia to expand its small population base. The employment growth with higher wages allowed for the maintenance of a larger population than could have been expected otherwise.

The vision informing the Australian way was thus about investing in people so that they could master risk. Making work pay through wage regulation was intended to place trust in individuals and families, that they could manage their own affairs rather than have them supervised by charities or the state.

Overlaying the social investment state

Three significant transformations to this Australian way are necessary to note if we are to fully appreciate the challenges involved in the re-layering of the social investment approach today. These transformations include the ‘economic state’, the ‘welfare state’, and the Accord.

The ‘economic state’ refers to social policy in the Keynesian period (1940s to 1970s). The central emphasis of policy was stabilising investment to ensure full employment. Reliance on agriculture and minerals increased, with tariffs becoming a way of avoiding rather than mastering risk. It was a time of lost opportunities, as Australia’s average per capita income slipped from fifth highest among the developed countries in 1950 to tenth in 1973.

The ‘welfare state’ emerged in the 1960s and 70s, reflecting an awareness that full employment with decent wages was not sufficient to ensure an adequate social infrastructure. Social investment in education, health and social services expanded in a much more planned fashion – although Australia lagged behind international trends. The relationship between wage and welfare policy was fundamentally changed in this period. Wage rises meant that the old ‘living wage’ functioned less and less as a floor against poverty for the ordinary worker. Basic wage awards operated alongside awards for skill, which became increasingly significant in workers’ pay. Automatic quarterly cost of living adjustments were discontinued in 1956. Wage setting became less related to measures of adequacy for the needs of families and was driven more by industry capacity to pay. Eventually, in 1966, the basic wage was replaced by the ‘total wage’ and a new ‘minimum wage’ was created; however, few depended on the new minimum and arbitration tribunals refused to link it to any understanding of needs.

A second postwar development was the entry of larger numbers of women into the paid workforce, creating pressure to formally remove the now anomalous ‘family wage’ component from wage determinations. This occurred in 1974, when the abolition of the family component was finally declared. In its decision, the Commission pointed to the diversity of family composition and declared itself lacking in the necessary information to discriminate between the needs of families. Importantly, the Commission declared that it was an industrial arbitration tribunal, not a social welfare agency.

A third layering came with the first integrated approach to wage, tax and welfare policies under the Accords, whereby governments and the ACTU agreed to trade-offs between ‘social wage’ increases – including items like compulsory superannuation and public health insurance – and take-home pay. This shifted the institutional focus of social welfare provision away from the arbitration tribunal towards social policy institutions proper. However, this development was overtaken by the growing free market and neoliberal orientation of economic policy. The welfare state froze into what seemed like a state of permanent austerity, while in 1993 enterprise bargaining was introduced, with award wage setting focused on a residual safety net of awards rather than award coverage for all.

Reframing the Australian way today

As the twenty-first century has progressed, a new economic and social policy agenda has emerged around the twin goals of investing in human capital and promoting social inclusion. In economic policy, we observe a shift from cost cutting and fiscal stabilisation to a concern about the limits to growth created through poor human capital development and social cohesion problems.


The trend is reflected in two areas: the policy objective expressed in terms of the three Ps – population, participation and productivity; and the Council of Australian Governments’ national reform agenda embracing the development of Australia’s human capital. The new agenda reaffirms an open, market-oriented framework but assumes that social investment in participation and productivity has emerged as a critical driver of future prosperity. To improve human capital, we must address its social dimension.

Ways to tackle this social dimension have been shaping up in a second, separately developing, policy agenda associated with the goal of promoting social inclusion. Deriving in part from New Labour in the UK, this involves:

– A shift from monetary-based poverty lines to multidimensional analyses;
– A focus on the particular social/economic dynamics affecting different spaces and population groups;
– Reintroduction of social cohesion as a policy objective, with emphases on trust, social capital, and community/neighbourhood strengthening;
– Encouragement of an ‘active society’ rather than passive welfare; and
– A more people-centred, personalised welfare governance.

The convergence of these trends offers an opportunity for a reintegration of economic and social policy which has eluded us for more than three decades. A new calculus of human capital is now emerging, as are well-being prerequisites for each citizen to participate successfully through each of the key transitions of the life cycle.

From WorkChoices to the Australian way

WorkChoices reflected the high tide of deregulatory policy. Today, how might we govern the transition to the human capital and social inclusion model? A vital lesson from our past is that Australia has evolved a hybrid of the welfare society and the welfare state models.

Our preference has been for building up the public infrastructure to allow individuals to exercise real freedom, but not constraining everyone within a universal welfare state. In this context, the minimum wage ought to remain a key source of welfare. However, our new model will demand a very different welfare safety net – not just modest income support but a clearly articulated set of entitlements that each citizen will need for full economic and social participation. Quantitative and qualitative indicators should be established which are benchmarked against the best in the world; and periodic monitoring, evaluation and review should occur.

Australia’s newly created Social Inclusion Board could provide the kind of agency needed to exercise the concerted social policy management that will be required. However, such work cannot usefully be undertaken in isolation from Fair Work Australia, the new national agency responsible for setting a safety net within the wage system. The time is surely ripe for us to reshape the principles of justice associated with the Harvester Judgement into a new wage and welfare settlement appropriate for the twenty-first century.

Paul Smyth is Professorial Fellow in the School of Political Science, Criminology and Sociology at the University of Melbourne; and General Manager for Social Action and Research, Brotherhood of St Laurence.



forward with fairness: a business perspective on labor’s reform agenda

How to get the balance right between competitiveness, fairness and flexibility in labour regulation

by john w. h. denton

A condensed version of the 23rd Foenander Lecture delivered at the University of Melbourne on 20 August 2008. The full paper is available on the Department of Management and Marketing website: www.managementmarketing.unimelb.edu.au*

Introduction As Chair of the Rudd Government’s Business Advisory Group, I have been concerned to ensure that the voice of the business community is heard as the Government translates its Forward with Fairness policy into legislation. However, rather than going into the detail of the current reform process in this paper, I want to take a broader view of how this process fits within both the historical and global contexts.

Key objectives of workplace reform

I can discern three clear objectives from Labor’s Forward with Fairness policy. The Government aims to come up with a workplace relations system that balances the need for:

– National competitiveness;
– Fairness for employees; and
– The flexibility and productivity needs of businesses.

Reconciling this troika of competing objectives is no easy task. But it is a project that is critical to Australia’s future economic and social prosperity. For example, the importance of labour market flexibility as a determinant of national competitiveness is highlighted by the World Economic Forum’s most recent ‘Global Competitiveness Index’ (GCI). While Australia is ranked 13th in the GCI on labour market efficiency, restrictive labour regulations were identified as the third-most problematic factor for doing business in Australia.

I will now focus more closely on the objectives of workplace reform that I have identified, by developing two key themes.

1. The lessons of history

In determining the shape of workplace regulation, it is important to consider how businesses are structured, how they operate, and what the broader economy looks like.

The economic and workplace setting in ‘Fortress Australia’

If we step back in time to the late 1890s/early 1900s, we can see that the federal conciliation and arbitration system that emerged in 1904 was closely linked to – and designed to meet the needs of – contemporary business and economic conditions.

* Footnotes have been omitted from this condensed version. References to all source material are to be found in the full paper.


At the turn of the last century, the Australian economy was characterised by:

The economic and workplace setting in ‘Networked Australia’

– A relatively under-developed manufacturing sector;

Since the 1980s, the Australian economy has been transformed from the isolated protectionism of the Federation era. The tariff walls have been virtually dismantled in the interests of making local firms more efficient and internationally competitive. As a consequence, manufacturing industry has declined from its peak in the early 1960s (when it accounted for more than a quarter of GDP) – by 2005, the manufacturing sector represented less than 12 per cent of output and employment. Agriculture and mining accounted for 8.5 per cent of Australia’s output in 2005, five per cent of the workforce, and 55 per cent of exports – while services made up more than 70 per cent of GDP and almost threequarters of the workforce.

– Numerous small single-product businesses that served local markets; and – The exporting of raw materials for processing/ production by foreign manufacturers, which were imported back to Australian retailers. In 1901, the pastoral, agricultural and mining industries made up 30 per cent of GDP; service industries for the import and export trades (e.g. government, finance, distribution), only a third of GDP; and manufacturing, only 12 per cent of GDP. The policy-makers of the fledgling Federation settled upon ‘New Protection’ as the preferred model for ordering the national economy, with two intertwined elements: – First, significant tariff protections for manufacturers along with various types of subsidies for the farming sector; and – Second, the compulsory conciliation and arbitration system: workers shared in the rents generated by the wedge between domestic and import prices through a centralised wage fixing system that tied minimum wages to price increases. Much has been written about the origins and nature of Australian conciliation and arbitration, so I will simply emphasise a few key points here. The bitter industrial battles of the 1890s led the framers of the Australian Constitution to make provision for federal legislative power over interstate industrial disputes. The Conciliation and Arbitration Act 1904 (Cth) instituted the concepts of the ‘living wage’ and ‘wage justice’, based on uniquely Australian notions of egalitarianism and a ‘fair go’. It is clear – as The Hon Justice Kirby has put it – that ‘The 1904 Act grew out of the legal and economic environment of the late nineteenth century.’ However, in my view, Australia’s system of workplace regulation has failed to keep pace with changes in both the economy and the world of work.


Forward with fairness

Increasingly, Australian companies – like those in many other countries – are operating in global supply chains, or ‘transnational production networks’ based around multinational corporations. Globalisation has also brought with it greater international financial integration, foreign investment and trade liberalisation – and new forms and structures of work that reflect the new globalised businesses. Some examples of the changed nature of employment relations in the global workplace include:
– A shift from internal labour markets to much looser connections between firms and workers, focused on cross-utilisation of employees and recognition of their ‘intellectual capital’;
– Abandonment of the implicit promise of employment security in favour of employability – the ability to acquire skills that will enhance employees’ opportunities not just in one firm but in the broader labour market as well; and
– A growing number of employers that, alongside the multinationals, are small businesses with links to other firms through franchises and joint ventures – with workers increasingly engaged as casuals, homeworkers, subcontractors or on some other flexible basis.


This is the new world economic order that our system of workplace regulation has to adjust to, and it will be different again in five years’ time. Whatever system we have, it also has to cater for an ever more diverse workforce, with its complex age and generational mix, as well as workers who want flexibility in working time and remuneration arrangements to suit their aspirational, technologically-geared lifestyles.

Forward with Fairness ‘plus’: the further reforms Australia needs
I acknowledge that through the implementation of Forward with Fairness, the Government is trying to get the balance right in the system of workplace relations regulation. But in my view, the current reforms will not on their own:
– Deliver the kind of flexibility that the modern Australian workplace requires; nor
– Assist the project of boosting our national competitiveness.
They will therefore need to be reviewed over time. Many would no doubt suggest that WorkChoices – and the High Court’s endorsement of it – was a ‘quantum leap’ away from the Federation-era IR system based on the constitutional labour power. And in some respects, I would agree with them. However, just consider how much of the ‘old’ system – or the remnants of it – will still remain, even after Labor’s Forward with Fairness policy is fully implemented:
– Industrial awards – albeit ‘modernised’ ones – that will contain fairly detailed regulation of employment terms and conditions;
– An arbitral body of sorts – Fair Work Australia – with significant powers; and
– Extensive rules governing registered employee and employer organisations, union right of entry, freedom of association, and protected industrial action.
Added to this, we will have a swag of new provisions regulating bargaining – designed mainly to deal with agreement negotiations between employers and unions. This all adds up to a continuing regulatory focus on the concerns of a bygone industrial era – big institutions, employer bodies, and trade unions. But the Australian economy – and workforce – have moved on. I am firmly of the view that in addition to the important changes to the legislative framework that Labor is implementing, further reforms will be needed to drive the productivity agenda that the Government is also committed to. Starting right now, some serious thinking needs to be done about how governments can enable firms to pursue strategies of alignment and engagement with the workforce – and assist them to become the kind of innovative, ‘high performance’ organisations that will be critical to Australia’s future competitiveness. The Government should look closely at some overseas models here – such as Ireland’s National Centre for Partnership and Performance and New Zealand’s Workplace Productivity Project. Reflecting international trends, the Rudd Government is embarking on a limited ‘re-regulation’ of our national labour laws – mainly, to restore a measure of fairness that the electorate has expressed that it wants to see in the workplace relations system. However, this cannot be the end of the workplace reform process. Forward with Fairness should be seen as a bridge to the next generation of reform, and all stakeholders need to focus on how we will deliver the all-important flexibility/productivity and competitiveness components of the reform equation.

2. A global perspective
The current workplace reform debate in Australia is a local manifestation of the challenge policy-makers around the world have faced for some years – how to ‘humanise global capital’. This is now occurring in a context where leaders in public policy debate are, increasingly, questioning the benefits of globalisation. Despite this, policy-makers in Australia must maintain our support for open trade and investment borders. The global engagement of Australian businesses generates higher levels of productivity, growth and living standards – along with access to new learning opportunities, technologies, ideas and skills for our people. As well as acknowledging the international policy context in which changes to Australia’s workplace relations laws are occurring, it is useful to examine how policy-makers elsewhere are addressing these challenges. I will now examine how some governments overseas have sought to balance the objectives of competitiveness, fairness and flexibility.

China’s Labour Contract Law
China’s importance to Australia cannot be overstated. China is the key emerging power in the East Asian region – and Australian engagement with China is critical to our own economic success. In June 2007, China adopted a new Labour Contract Law – in effect from 1 January 2008 – which has significantly increased the legal protections offered to employees, particularly in relation to job security, the payment of wages, and the rights of labour hire, casual and fixed-term employees. Importantly, the Labour Contract Law also seeks to address the concerns of business, especially the many multinational companies with operations in China – for example, it allows employers considerable latitude in the use of ‘restraint’ or ‘non-compete’ clauses in employment contracts. Increased certainty in the content and enforcement of China’s labour regulation will assist the nation’s quest to remain a desirable location for foreign investment.

The UK: ‘regulating for competitiveness’
Under Britain’s ‘New Labour’ governments since 1997, deregulation of the labour market has continued at the same time as the introduction of significant new individual rights for employees – including a statutory minimum wage, greater protections for part-time and fixed-term workers, a right to request family-friendly working hours, and union recognition provisions. However, as Professor Hugh Collins has observed, labour regulation in the Blair/Brown years has been motivated not so much by the traditional protective goals of labour law as by an overriding concern ‘to improve the competitiveness of businesses’. ‘Regulating for competitiveness’ in the UK has been accompanied by a strong policy push in favour of ‘partnership’ relationships between management and unions. A final feature of New Labour’s business-friendly approach to labour regulation has been its stance towards the EU. The trend here has generally been one of UK resistance to EU-level regulatory initiatives and, failing that, minimalist domestic implementation of EU directives. This approach reflects ongoing scepticism about the European social model, which the UK Government views as incompatible with the flexibility needed to meet the challenges of globalisation.

‘Flexicurity’ in the EU
I have also been critical of the inefficiencies and costs to stakeholders of the type of regulation imposed by the EU’s many work directives. However, since 1997, the European Commission has promoted ‘the importance of both flexibility and security for competitiveness and the modernisation of work organisation’. ‘Flexicurity’, as it has come to be described, is a means whereby employees and companies can better adapt to the insecurities associated with global markets. Some of the strategies that form part of the flexicurity approach are as follows:
– A focus on employment security, rather than job security – recognising that few workers stay in the same job for life;
– Enabling companies, especially small-to-medium enterprises, to adapt their workforce to changing economic conditions;
– Flexible/reliable contractual arrangements;
– Career progression through life-long learning programs, in-company training, and entrepreneurship – internal flexicurity;
– More dynamic labour markets, enabling workers to move easily between jobs – external flexicurity;
– Promoting gender equality and equal opportunities; and
– Modern social protection systems, namely, adequate income support for the unemployed.
Flexicurity generally has strong support among the European social partners, including the European Trade Union Confederation and employer bodies such as Business Europe. Adaptability to change through flexicurity is now an entrenched feature of EU social policy.

USA: the Employee Free Choice Act
Early last year, a bill was introduced into the US Congress that would significantly alter the current arrangements for union-based collective bargaining under the National Labor Relations Act (NLRA). The ‘Employee Free Choice Act’ (EFCA) proposes three main changes to the NLRA:
(i) Allowing unions to obtain collective bargaining rights without having to hold a secret ballot of employees in all cases;
(ii) Setting timelines for mediation and, if necessary, arbitration of a ‘first contract’ (i.e. collective agreement); and
(iii) Introducing stronger penalties for employer violations of the NLRA.
There are strongly divergent views about the EFCA, between its proponents in the US labour movement, on the one hand, and employer lobbyists, on the other. However, it is interesting to observe that even in the home of ‘muscular free enterprise’, a debate is currently taking place that in some ways reflects our own in Australia – about how to get the balance right between competitiveness, fairness and flexibility in labour regulation.

Further implications for Australia
A key concern for the countries I have examined, and the EU, is to ensure that workplace regulation fits with broader economic goals – enabling them to compete in globalised product and service markets, and ensuring they are able to attract international investment. Australia faces the same challenges. But the experience of these other nations suggests that new approaches offer better prospects for resolving the tensions between competitiveness, flexibility and fairness than traditional labour law frameworks. The UK is perhaps the best exemplar of the kind of approach that Australia should adopt, by:
– Focusing labour regulation on a strong ‘floor’ of individual employment rights;
– Providing collective negotiation and bargaining processes for those that still want to use them (but not making them the primary focus of the regulatory system);
– Promoting cooperative workplace relationships rather than traditional adversarial posturing; and
– Subjecting all regulation to the overarching goal of competitiveness.
We can also learn a lot from the EU’s efforts to ‘fuse’ the goals of flexibility and fairness through the concept of flexicurity – and the harnessing of this concept to the project of enhancing the economic competitiveness of EU member states. And, returning to the other main theme of this paper, China has shown that labour laws must keep pace with structural changes in the economy. Just as China’s new Labour Contract Law reflects the profound shift from a centrally-controlled to an open, market economy, so must Australia move away from an IR system whose design is grounded in an economy, and an approach to business, that is no longer relevant for the majority of participants – towards one that meets the needs of a fast-moving, globally-integrated economy.

Insights Melbourne Economics and Commerce



Conclusion
In this paper, I have sought to outline two broad arguments that I think are critical – but largely neglected – in the current workplace reform debate in Australia:
– First, that the historical development of our system of industrial regulation limits our thinking about what is possible for the future – and we have to remove those historical ‘blinkers’ if we’re to move up the international ‘league tables’ of competitiveness.
– Second, that the recent experience of a number of other countries shows that the competing goals of national competitiveness, fairness for employees, and flexibility for businesses can be reconciled. While this has involved a degree of re-regulation of the labour market, it has only occurred to the extent necessary to temper the sometimes harsh impacts of globalisation – without abandoning the overall project of internationalist economic policy.
In summary, the big challenges for policy-makers in our field include the following:
– To continue to modernise our system of workplace regulation, balancing fairness with the flexibility that our firms need to compete globally – and that our increasingly savvy, knowledge-rich workforce demands;
– To develop an agenda for workplace reform that goes beyond statutory regulation – exploring other policy levers that could help drive productivity in Australian businesses;
– To stay the course in the global economic order, through a continued commitment to open trade and investment policies – this means resisting the clamouring of certain interest groups for a return to the cosseted comfort of protectionism; and
– To examine how other countries have gone about addressing these issues, learning from their successes and failures – and most importantly, coming up with solutions that will work for Australia.



Mr John Denton is Partner & Chief Executive Officer of leading Australian law firm Corrs Chambers Westgarth. He was recently appointed Chairman of the Australian Federal Government’s Business Advisory Group on Workplace Relations, is Councillor and Chairman of the Trade & International Taskforce for the Business Council of Australia, and is a founding member of the Australian Institute for Public Policy. He has advised business and government on a wide range of industrial relations issues, provided strategy advice on major power privatisations, mining and maritime labour negotiations, and corporate restructuring in a range of industries including manufacturing and airlines.


the use and misuse of intelligent systems in accounting: the risk of technology dominance
Designing intelligent systems to enable less experienced staff to make decisions normally made by more experienced staff is possibly not a good strategy. However, there appears to be potential for success in using intelligent systems to complement and support experts’ decisions.
by stewart leech
A condensed version of the 68th Annual CPA Australia/University of Melbourne Research Lecture, delivered at the University of Melbourne on 10 September 2007.

Judgment and decision-making in accounting
Accountants and auditors are called upon daily to use their judgment to make crucial decisions. However, questions arise. How good is that judgment? What factors affect it? How do they combine and mentally weigh the numerous factors that lead them to make a decision? What do we really know about how an auditor makes a going-concern judgment, or about how an insolvency practitioner makes a decision to trade on or liquidate a business in financial difficulty?
Over the past twenty years, major accounting firms turned to technology to assist them with making such judgments. Computer-based intelligent systems were developed with the aim of making more consistent decisions and sharing expertise – and in the hope that novice staff could make the same decisions as experts. What do we know about the use of intelligent systems in accounting, and how might they be used more effectively?
An intelligent system is a computer-based system intended to replicate the decisions of a human expert. We expect an intelligent system to show some form of intelligence – that is, it stores expertise and, given the factors of a particular case, undertakes reasoning and makes recommendations. Such systems include the audit support systems used by the Big Four and other major audit firms, although some are more ‘intelligent’ than others.

The Theory of Technology Dominance
In theory, it was assumed that intelligent systems would allow relatively junior or novice staff to make the same decisions as experts. This assumption was questioned in the Theory of Technology Dominance, proposed in the late 1990s by Arnold & Sutton as ‘a model for understanding the conditions under which success (of intelligent systems) is more likely to occur.’ The theory attempts to understand the impact of intelligent systems on a decision-maker’s judgments, including the short-term impacts and the long-term implications.
Firstly, a basic requirement of success for using an intelligent system is reliance on the system by a user. Reliance implies two conditions – acceptance and influence on the decision outcomes. To enhance the likelihood of reliance on the system:
– The task should be highly complex;
– The system should be familiar; and
– There should be good cognitive fit between system and user.





The second part of the theory explores the short-term impacts. Here, the theory posits that when the expertise of the user matches the level of the intelligent system, decision-making will be enhanced; but where the intelligent system has more expertise than the user, the system can lead to poorer decisions. Thus, technology dominance is the state of decision-making whereby the intelligent system, rather than the user, controls the decision-making process, and a user with limited expertise is unable to use the system properly and might misinterpret its output. The theory then goes on to examine the epistemological implications of using intelligent systems in the long term. Here, the theory predicts that continued use of intelligent systems could affect accounting expertise negatively in the longer term.
How do we test a theory that addresses the use and misuse of intelligent systems in accounting? What intelligent system would we use? And in what field of accounting? While work on most intelligent systems has been in audit and tax, my colleagues and I decided to concentrate on a little-researched field in accounting that relies on substantial human judgment – that of corporate recovery and insolvency.

Corporate recovery and insolvency
We set off on a journey to first build an intelligent system (‘INSOLVE’) that would allow insolvency practitioners to input factors about a company in financial distress, while the system would use its expert knowledge to make a recommendation on how to proceed – to either liquidate or trade the company on – and to provide the reasons and explanations for the decision. We were motivated by two objectives: firstly, a cognitive modelling rationale, since we were interested in understanding how expert insolvency practitioners made decisions about companies in financial distress; and secondly, a behavioural science rationale, since the system could be used to test the Theory of Technology Dominance. Under what conditions is an intelligent system likely to lead to improved decision-making? In building INSOLVE, knowledge was acquired from 23 experts from major accounting firms and banks. INSOLVE was then extensively validated against insolvency cases, resulting in a high level of agreement between the experts and INSOLVE.
How are companies in financial distress dealt with by insolvency practitioners? An initial decision is made to either liquidate or trade on the business. The objectives of trading on are:
– Reconstructing the business prior to returning it to the directors;
– Enhancing and/or preserving the sale value of the business as a going concern; or
– Completing work prior to liquidation.
Such decisions may extend over a considerable period and are made on the basis of both financial information and qualitative judgments about the business, stakeholders and the business environment. The initial assessment is based on several factors: the business has ceased or cannot become viable; there is no cash and no way to generate cash; key staff vital to the business have left and cannot be replaced; or essential customers and/or suppliers will not support trading on. If the decision is made to trade on, an assessment is made of the stakeholders – including the directors, staff, secured creditors, customers, suppliers and unions – and of the financial situation, which involves a comparison of the auction value of the assets, the sale value as a going concern, and the future projected profitability of the business. All of these factors were included in INSOLVE. Once provided with the facts of an insolvency case, the system uses its inference engine to undertake reasoning and produce a recommendation. INSOLVE produces a report that recommends one of three courses: liquidate, sell as a going concern, or hand the business back to the directors.
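The decision logic just described – knock-out factors assessed first, then a comparison of realisable values – can be sketched as a simple rule-based system. This is an illustrative sketch only: the factor names, rules and reason wordings below are my own assumptions, not INSOLVE's actual knowledge base, which was acquired from 23 experts.

```python
# Illustrative sketch of an INSOLVE-style recommendation.
# All factor names, rules and reason texts are hypothetical assumptions,
# not the actual expert knowledge acquired for INSOLVE.

from dataclasses import dataclass


@dataclass
class CaseFacts:
    business_viable: bool          # can the business become (or remain) viable?
    cash_available: bool           # cash on hand, or a way to generate cash
    key_staff_retained: bool       # staff vital to the business still in place
    stakeholders_supportive: bool  # customers/suppliers will support trading on
    going_concern_value: float     # estimated sale value as a going concern
    auction_value: float           # estimated break-up (auction) value of assets
    projected_profitable: bool     # future projected profitability


def recommend(facts):
    """Return a recommendation and the reasons behind it (a crude rule trace)."""
    reasons = []
    # Initial assessment: any knock-out factor forces liquidation.
    if not facts.business_viable:
        reasons.append("business has ceased or cannot become viable")
    if not facts.cash_available:
        reasons.append("no cash and no way to generate cash")
    if not facts.key_staff_retained:
        reasons.append("key staff have left and cannot be replaced")
    if not facts.stakeholders_supportive:
        reasons.append("essential customers/suppliers will not support trading on")
    if reasons:
        return "liquidate", reasons
    # Trade-on assessment: compare profitability and realisable values.
    if facts.projected_profitable:
        reasons.append("projected profitability supports returning to directors")
        return "hand back to directors", reasons
    if facts.going_concern_value > facts.auction_value:
        reasons.append("going-concern value exceeds auction value of assets")
        return "sell as a going concern", reasons
    reasons.append("going-concern value does not exceed break-up value")
    return "liquidate", reasons
```

A real system such as INSOLVE would of course hold far more rules and qualitative weightings; the point of the sketch is only the separation of case facts, rules, and a traceable recommendation.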

Testing the Theory of Technology Dominance
We then used INSOLVE to test the short-term propositions of the Theory of Technology Dominance, that is:
– When there is a strong match between the user and an intelligent system, the judgment of the user will improve; and
– When there is a mismatch between the user and an intelligent system in terms of expertise, the risk of poor decision-making increases.
The difficulty in testing these two propositions is the ambiguity in defining a better or worse decision in domains that are highly subjective. One of the challenges was to place the decision environment into an observable and measurable context. One approach recommended in decision-making research is to focus on a specific source of judgment error. We used a similar approach by examining certain specific types of judgment error during the completion of a complex task by insolvency practitioners – some aided by INSOLVE and some without INSOLVE. The research hypotheses can be summarised as follows: for novices aided by INSOLVE, there will be an increase in judgment error; for experts aided by INSOLVE, there will be reduced judgment error.
A training session (experiment) was used to test the hypotheses. A real, reconstructed insolvency case was used: 80 insolvency practitioners had access to INSOLVE and 87 did not. Experts were partners and managers; novices were staff and seniors. In each of three stages of the insolvency case, the participants were asked to give an assessment of whether they would trade on or liquidate. The results indicated the existence of a detrimental effect of the intelligent system on the decision-making processes of novices. On the other hand, the intelligent system was effective at reducing the judgment error in the decision-making processes of experts. These results supported the Theory of Technology Dominance.
INSOLVE was a basic intelligent system lacking a substantial explanation facility. We needed to address the question: would the provision of a fully functional explanation facility in an intelligent system have an effect on the results so far? This meant providing users with four types of explanations: definitions, rule trace, justification and strategic. Once developed, we tested INSOLVE II (with the explanation facility) in two further studies. The first study was designed to see if and how the explanation facility affected the decision-making behaviour of both novices and experts. The results showed overall that both novices and experts are more likely to rely on the recommendation of the intelligent system when explanations are provided.
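The four explanation types can be illustrated schematically. The example wordings below are invented for illustration and do not reproduce INSOLVE II's actual explanation content.

```python
# Schematic of the four explanation types (definition, rule trace,
# justification, strategic) an intelligent system can attach to a
# recommendation. All example wordings are invented for illustration.

EXPLANATIONS = {
    "definition": "A 'going concern' is a business expected to continue operating.",
    "rule_trace": "Rule fired: going-concern value > auction value, so sell as a going concern.",
    "justification": "A going-concern sale usually preserves more value for creditors than a break-up sale.",
    "strategic": "Overall strategy: test knock-out liquidation factors first, then compare realisable values.",
}


def explain(kinds):
    """Return the requested explanation types, ignoring unknown kinds."""
    return [EXPLANATIONS[k] for k in kinds if k in EXPLANATIONS]
```

In a full system, each rule would carry its own trace and justification text, so a user could ask why a particular recommendation was made rather than simply accepting it.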


However, novices still tended to accept INSOLVE’s recommendation and move towards reliance, supporting the previous research. The second study used INSOLVE with major accounting firms in Singapore and compared the results with Australia. Intelligent systems developed in one country are often used in other countries by large accounting firms. The results showed that the overall judgments were no more consistent between Singapore and Australia when using INSOLVE. However, we did find that the intelligent system developed in Australia altered the way the Singaporeans evaluated some of the evidence of an insolvency case, leading them to be more in line with Australian decision-making processes. INSOLVE did change the Chinese cultural attitude to aspects of insolvency decisions, which means that an intelligent system could lead to more consistency in judgment across cultural boundaries – if indeed that is considered desirable. Other evidence from the US, using intelligent tax systems, generally supported our results.
In summary, designing intelligent systems to enable less experienced staff to make decisions normally made by more experienced staff is possibly not a good strategy. On the other hand, there appeared to be potential for success in using intelligent systems to complement and support experts’ decisions.
Finally, I turn to the possible long-term implications of the widespread use of intelligent systems in accounting. Here, the Theory of Technology Dominance posits that continued use of an intelligent system will result in the deskilling of users’ abilities and have a negative effect on the growth and advancement of knowledge. There has been virtually no research in this area until recently, partly because it is a very difficult question to research.
There was some evidence from one of the US tax studies, where participants using the intelligent tax system had difficulty in completing the tax returns manually, whereas those who completed manual tax returns first could then easily use the intelligent system. We tested the association between the extent of decision support embedded in the audit support systems of three major audit firms and the declarative knowledge possessed by long-term users (Dowling, Leech, and Moroney, 2007). We required auditors, without the aid of their firms’ audit support systems, to list the key business risks common to clients in an industry familiar to them. We found that auditors who normally use an audit support system that was not an intelligent system were able to list more relevant risks than auditors who normally use a system that was more of an intelligent system. While this was a very simple exploratory study, it does ring some alarm bells in the direction of the theory, and suggests that the way audit support systems are designed has a role to play in providing sufficient opportunities for auditors to develop their knowledge.

Summary
Intelligent systems, and the way they are designed, do have an impact on the decision-making behaviour of accountants. The expectation that novice accountants can use intelligent systems and perform like experts is not supported by the evidence, and is a potentially dangerous assumption in the use of such systems. There needs to be a good match between the expertise embodied in the intelligent system and that of the user. A good explanation facility can make a difference to the decisions being made, and an intelligent system developed in one country can affect the decisions being made in a country of a different culture. Finally, while the jury is still out on the longer-term consequences, there are possible alarm bells that we need to heed.
Professor Stewart Leech is in the Department of Accounting and Business Information Systems at the University of Melbourne. This is a report of research into intelligent systems in accounting that Professor Leech has undertaken over many years in collaboration with co-researchers Phil Collier, Vicky Arnold, Steve Sutton, and more recently Carlin Dowling.




new agenda for prosperity
How much longer can Australia’s current boom last? And how do we build the physical and human capital needed to maximise the growth of living standards as the population ages?
by stephen sedgwick
The election of a new Government late last year presented a unique opportunity to take stock of the framework of economic and social policies currently operating in Australia. This was the backdrop to the Fifth Economic and Social Outlook Conference, hosted jointly by the Melbourne Institute and The Australian. Held over two days – 27-28 March 2008 – the conference attracted 65 speakers drawn from politics, academia, business, non-government organisations and the commentariat, along with a capacity audience. This is a personal summary of some of the proceedings. Copies of most presentations and a recording of each speech are available at the conference website: www.melbourneinstitute.com/conf2008/default.html

The context
The conference was held in the lead-up to the 2008 Commonwealth Budget. After years of strong economic growth and declining unemployment, the overriding domestic economic policy concern was to contain inflation by restraining the growth of demand to a pace more in line with the economy’s capacity to supply the necessary resources. Monetary tightening and fiscal conservatism were central to this task. A range of other policy issues were also prominent at the time. For example, rising interest rates together with high real house prices prompted debate about deteriorating housing affordability. Sustained growth had stretched the capacity of key infrastructure to cope, and there were reports of incipient labour shortages. Treasury modelling suggested that Australia’s future growth trajectory would be lower than that of the past 40 years unless policy changes increased incentives to work or productivity growth picked up dramatically. Debate continued about reform of education, health and the overlapping responsibilities between the Commonwealth and the States. An intense debate on climate change had focused mainly on the desirability of adopting an emissions trading scheme. Moreover, there were abiding concerns about the plight of those left behind despite the sustained strong economy, with renewed interest in equity and social inclusion, including for indigenous Australians. These were amongst the issues covered during the conference.

A decoupled economy?
The fundamental conundrum for policy in the short term, however, stemmed from the fact that while Australia’s monetary authorities were raising interest rates, others were easing aggressively. This easing was intended to insulate major developed economies from credit tightening induced by the recent collapse of the sub-prime mortgage market in the US. Although inflation had edged up, the stronger fears overseas were that growth would stall (or worse) in the US and/or that financial contagion and global illiquidity would undermine prosperity in Australia. An important contextual question, therefore, was the extent to which the Australian economy could effectively decouple itself from these developments. Two speakers – Phillip Glyde, Executive Director, Australian Bureau of Agricultural and Resource Economics, and Chris Richardson, Director, Access Economics – addressed this issue, answering firmly in the negative. After more than a decade in which Australia had benefited from an historically strong upswing in demand for commodities such as iron ore and coal, some slowing in Australia’s growth is desirable in order to rein in inflation. However, assuming sensible macroeconomic policies at home and successful policy interventions by the US Federal Reserve Board, the consensus at the conference was that any slowdown would be temporary. The Australian Bureau of Agricultural and Resource Economics (ABARE) predicted strong growth for the next five years. Essentially, the dynamism of the developing world and its weight in world GDP were believed to provide a significant buffer against a slowdown in the US for trade-exposed economies such as Australia. Central to this was an analysis that the structure of the world economy has changed. The emerging economies now account for two-thirds of world economic growth. ABARE also argued that, compared to most of the postwar period, there is now much greater openness to world trade amongst the major economies, growth is broadly dispersed amongst the world’s economies, and financial markets are more resilient and can respond well to shocks. By 2050, China is expected to be the largest economy and to account for almost one-third of world output – compared to about 16 per cent currently. India is also likely to have claimed a much larger share of world output. Both countries have a very large, low-wage labour force, which will support their participation in global trade for some time. Importantly, the World Bank predicts that the proportion of the world’s population in the middle class – those earning between $US4,000 and $US17,000 in purchasing power parity terms – will treble over the next 20 years. Their purchasing power will underpin significantly higher demand for manufactured and agricultural products, minerals and energy-intensive products as consumption patterns adjust to higher real incomes. This, in turn, is expected to sustain strong demand for Australian exports, but will also pose significant challenges to securing a global position on climate change. Of course, risks were identified to this generally benign outlook. Both India and China have deep-seated structural problems.
In the case of China, significant reform of the legal framework, the financial system and the operations of State Owned Enterprises is required. Moreover the Chinese authorities need to address economic and social inequality within China, infrastructure bottlenecks and mounting environmental problems. India has similar issues to address.

Infrastructure and incentives

The adequacy of Australia’s ports, rail, road and other economic infrastructure figured prominently in the debates. Glyde, for example, suggested such bottlenecks explain why comparatively little additional coal has been shipped from Australia in recent years, despite the strength of coal prices. He contrasted this with the more robust supply response of iron ore shippers. Several, including Ross Garnaut, suggested that the apparent infrastructure imbalances were a symptom of a broader issue, namely the strength of debt-fuelled consumption expenditure, which reduced resources available for investment in infrastructure, including public infrastructure. In essence, resources had been devoted to consumption at the expense of necessary long-term investment. The availability of cheap credit and surging household incomes contributed to this. The former owed much to a cyclical easing in monetary policy during a period of sustained low inflation from the mid-eighties, coupled with financial sector innovation. Prolonged low and stable inflation also removed some of the risk premium built into interest rates, helping to keep real interest rates low. Steady declines in unemployment may also have reduced the perceived risk of unemployment and reduced the precautionary motive for saving. Rising real incomes were fuelled in Australia by falling unemployment and by the unprecedented rapid and sustained improvement in our terms of trade. However, achieving an adequate supply of infrastructure requires more than increased investment. Indeed, Michael Keating, Chairman, Independent Pricing and Regulatory Tribunal of NSW, challenged whether any increase in the total value of infrastructure is required to meet emerging needs. He, and others, argued that the infrastructure dollar needs to be applied to its highest valued ends, just like any other scarce resource.
At the least this requires professional, independent assessments of the costs and benefits of investment proposals. Although the methodologies are well established, they are not always well or routinely applied. He questioned whether bad infrastructure investments have consumed resources and diminished Australia’s growth potential. Insights Melbourne Economics and Commerce



Much progress has occurred in recent decades in reducing the inefficiencies of the statutory monopolies that have dominated service delivery in these areas. Even so, Gary Banks, Chairman of the Productivity Commission, reported that its annual review of the financial performance of government enterprises found that although the aggregate return on their assets has slowly improved over time, more than half still do not earn a commercial rate of return. Their investment decision-making remains constrained by undue political interference, ill-defined or unfunded non-commercial obligations, constraints on pricing and restrictions on borrowing. Institutional reform has been on the Council of Australian Governments’ agenda for some years. However, several speakers argued that progress has been faltering and slow. While speakers supported increased public investment in some areas, Rod Sims, Director, Port Jackson Partners Ltd, for example, noted that improved pricing signals, improved access arrangements and stronger competitive pressures are also required before faster progress will occur. Several argued for faster reforms in a number of areas, including not only elements of the resources supply chain such as rail, land freight and ports but also urban infrastructure (including traffic congestion), water, broadband telecommunications and climate change. There were a number of common elements in the proposals for reform. These included the better alignment of prices and, in the case of roads, usage charges, with the economic costs of supply, including all externalities imposed on others; and regulatory arrangements that promote competition and allow markets to work – or that create them, say, to enable trading of rights to water or to emit carbon. Such arrangements will promote decisions that better allocate scarce resources to the uses that secure the highest returns for society, and better match supply with demand at least cost.
However, some infrastructure is characterised by economies of scale that lead to natural monopolies. Other infrastructure gives rise to externalities that cannot be captured by the provider from users, for example the reduction in road congestion facilitated by an efficient urban rail network. Governments should intervene to prevent the exercise of undue market power by natural or artificial monopolies, and to ensure that externalities that cannot be captured in user charging are adequately reflected in the financial returns earned via subsidies or required from publicly owned providers. Regulators have a delicate job in those circumstances: while preventing exploitation of monopoly power, they need to ensure that prices are allowed to do their work. The Productivity Commission has argued that price regulation that prevents ‘above-normal’ returns may also diminish incentives to invest in long-lived assets in short supply and thus can be counterproductive at times.

Creating markets

The creation and efficient regulation of markets is a subtle policy problem. Regulations which change existing access rights, or which permit the efficient trading of rights to use resources previously held in common such as water or the atmosphere, not only introduce new signals to guide behaviour but also create new assets or change the values of existing ones. The adequacy of supply responses will be affected by the perceptions of investors about the security of their asset values and their capacity to earn a return on investment. Investors require predictability about the regulatory regime sufficient to make well-informed business judgments. Reforms to the regulation of water, telecommunications and the generation of carbon emissions are cases in point. Several speakers reflected on the consequences of such realities. Competitive markets leading to efficient pricing of water, and effective opportunities to trade between agricultural and domestic uses on the basis of market returns, do not exist across the world: the policy and institutional challenges are clearly not easy. However, Gary Banks argued that the potential benefits are nonetheless substantial and worth the effort. Michael Keating argued that, short of establishing fully fledged markets, efficient pricing of water based on the costs of sufficient supply would substantially address existing water supply deficiencies at least cost. Indeed, he argued that Sydney’s water supply problems for the next decade could be addressed through a relatively affordable increase in domestic water prices2 to cover the expected costs of introducing water desalination and other recycling schemes.


He argued that a price set at the level of long-run marginal cost would provide sufficient incentive for investment in the necessary capacity, including by the private sector. Some argued for caution and predictability in establishing a carbon emissions trading regime, so as not to unsettle investment patterns in electricity generation or other pursuits that are energy and capital intensive. Garnaut argued in favour of approaches to regulation that minimise the scope for rent-seeking behaviour, relying on well-functioning markets and competition to promote better economic outcomes than would be achievable by corruptible administrative fiat. He argued that his proposed approach to an emissions trading scheme would satisfy this test better than an equivalent carbon tax, once an emissions reduction trajectory has been established.

Addressing the regulatory burden

Effective regulation underpins the efficient operation of a market economy. Several speakers addressed the case for regulatory reform. Dr Stephen King, Commissioner, Australian Competition and Consumer Commission, argued that competition, not regulation per se, protects consumers. Several speakers argued that the regulators themselves and some companies have vested interests in increased regulation and complexity. It is important that the incentives confronting law-makers, business and the regulators are properly aligned. External review mechanisms, such as requirements to subject new or amended regulations to a Regulation Impact Statement, provide an independent check on these vested interests. However, some argued that these mechanisms have been ineffective in Australia, having been implemented as a check on compliance with procedural requirements rather than as substantive re-assessments of alternative approaches. Even so, Nicholas Gruen, Chief Executive Officer, Lateral Economics, argued that while Australia is not good at regulatory reform, we are, nonetheless, one of the best in the world. Several also sought greater harmonisation, if not uniformity, of regulations across jurisdictions in Australia, claiming that Australia’s market is too small to justify the compliance and other costs associated with having different, state-based approaches to regulation, especially for nationally operating businesses. Two cautions apply, however. Capital and, increasingly, labour are mobile, so a degree of competition between jurisdictions may promote improved regulatory efficiency. Moreover, the objective is to standardise on the best approach, not the lowest common denominator.

Human capital

Concerns about shortages of infrastructure were matched by concerns about the quantity and quality of labour available to support sustained strong economic growth. Industry groups had complained of skill shortages. Modelling reported in Treasury’s Intergenerational Report 2007 predicted slower economic growth for Australia in the decades ahead and an increase in the ratio of non-working to working Australians because of demographic trends, principally population ageing. Policies to promote greater workforce participation and higher productivity can help to ameliorate these trends. Andrew Leigh, Fellow, Economics Program, Research School of Social Sciences, Australian National University, argued that the demand for labour has increased in some industries and geographic areas in recent years as fast economic growth has become entrenched. In the short run, supply is relatively fixed, putting upward pressure on wages in those occupations and areas. A feature of Australia’s current, more flexible labour market arrangements is that these pressures have been accommodated with less generalised impact on wage inflation and inflationary expectations than during earlier periods of strong demand. Leigh argued that the correct policy response to the prospect of continuing strong demand for labour is the introduction of measures to improve the quality, quantity and equity of Australia’s human capital, not greater intervention in the market to attempt to match specific demand for labour with supply. Such measures would improve workforce participation and productivity. In fact, speakers addressed reform of the education system from a number of different perspectives, reflecting the centrality of human capital formation to contemporary policy discourse. There was a high degree of commonality in their analysis and policy prescriptions.



Geoff Masters, Chief Executive Officer, Australian Council for Educational Research, acknowledged that Australia’s education system performs reasonably well in world terms. Yet there is scope for improvement. For example, compared to some countries against which we often benchmark ourselves, considerably fewer Australian teenagers reach international benchmarks of excellence in standard tests, relatively fewer complete secondary school, and relatively fewer young people who have not earlier completed school undertake further education between 20 and 24 years of age. Improved teacher preparation and classroom practices, better processes to monitor the performance of individual students, a more consistent and relevant curriculum across the country and better targeting of resources to educational need are required. Reward systems that better recognise teaching excellence and reward teachers who take on hard schools and teaching assignments, for example in remote areas, were also strongly supported. Similarly, there was strong support for higher admission requirements for student and beginning teachers. Collette Tayler, Chair of Early Childhood Education and Care, Melbourne Graduate School of Education, The University of Melbourne, amongst others, argued that the education revolution needs to start with early childhood education and care. Lifelong learning is shaped by experiences in early childhood, and any disadvantage in early education tends to persist. Effective interventions at this early stage can help to raise average achievement levels and thus improve human capital and long-term productivity. Tayler argued that society in general is the principal beneficiary of improved outcomes from early childhood education, justifying increased public investment in the sector and better integration of education and childcare services. However, the bulk of the data on which early childhood policy is based in Australia is drawn from overseas.
There is an urgent need for more research based on Australian data to support evidence-based policy development. There are complex interactions between childcare provision and workforce participation by women of child-bearing age. Participation by Australian women in these years lags that of a number of other countries, for example, Norway.


Social mores contribute. So, too, do economic incentives. Access to subsidised childcare may assist to close this gap. However, Guyonne Kalb, Principal Research Fellow, Melbourne Institute, also reported evidence that developmental outcomes, especially cognitive skills, improve if a child has close support from one or both parents during their first year of life. These benefits for the child can also be long lasting. She, amongst others, argued that improved access to parental leave could reduce workforce participation of women temporarily – or encourage only part-time work – but have longer-term benefits that would strengthen the economy’s growth trajectory through stronger developmental, learning and economic outcomes for children as they progress through school, mature and enter the workforce.

Conclusion

This paper has not attempted to summarise every presentation or issue addressed at the conference. It has illustrated two key policy themes that dominate contemporary policy discourse: how much longer can Australia’s current boom last? And how do we build the physical and human capital needed to maximise the growth of living standards as the population ages? The second agenda is broad, reaching well beyond the levels of investment in infrastructure to embrace the efficiency of pricing and markets, regulatory reform, and the effectiveness of education and training. However, it also extends to a range of issues not addressed in this paper, such as the efficacy of the health system, the effectiveness of administrative and economic incentives to maximise workforce participation amongst those of working age, and the quality of social capital and social inclusion. Despite several decades of effort, it is clear that the scope of work still required is large.

Professor Sedgwick is Director of the Melbourne Institute of Applied Economic and Social Research at the University of Melbourne.

1 This section and the next draw heavily on the presentation of Philip Glyde.

2 Keating noted that the Independent Pricing and Regulatory Tribunal, NSW, has determined that the cost-recovery tariff has to rise to $1.83 per kL over the next four years in real terms (from $1.31). A typical household would then spend an additional $203 p.a. on water, an annual rate of increase of 6.3 per cent. The price of $1.83 for a thousand litres of tap water compares with a similar price for a single one-litre bottle of water at the supermarket.


alumni refresher lecture series

real options analysis and investment appraisal: the opportunities and challenges

Real Options Analysis allows us to recognise in a systematic manner the impact of future expansion and contraction decisions on value today, and hence on whether an initial investment is worthwhile

by bruce d. grundy

A condensed version of his Alumni Refresher Lecture delivered at the University of Melbourne on 13 August 2008.

Distinguishing between financial and real options

The option to pay a fixed amount in return for a share during a specified period – a call option – and the option to receive a fixed amount in return for a share during a specified period – a put option – are familiar examples of financial options, where the fixed amount is referred to as the option’s exercise price. Financial options are options to exchange a financial asset such as a share, a bond or a futures contract for cash. Less familiar, but no less important, are real options, which are options to exchange a real asset for some other asset. For example, undertaking an R&D program can give a drug company a real expansion option to acquire the revenues from any new product by paying the costs of producing and marketing the drug. The further option to discontinue the product later, should an even better mouse-trap be invented, is a real abandonment option to give up the low net cash flows from a failing product and to receive instead the scrap value of the machinery previously used to produce the product. While financial options only truly blossomed in 1973 with the advent of the Chicago Board Options Exchange – initially situated in the smokers’ lounge of the Chicago Board of Trade – and the publication of the Black-Scholes Option Pricing Model, real options to alter the operations of an enterprise have always been with us. The mathematician and philosopher Thales (circa 635BC-543BC) made his fortune by correctly predicting a bumper olive harvest and buying options giving him the right to rent olive presses at the time of the next harvest in return for a pre-agreed fixed payment. When his forecast of high demand for olive presses proved to be correct, Thales was able to rent presses at well below what turned out to be a very high market rate that season.

Valuing options

Valuation techniques suitable for financial options such as the Black-Scholes model are often inapplicable when one wishes to value real options. The fundamental determinants of the value of financial options are usually clear and easy to measure. The exercise price and option maturity – usually a few months hence – are contractually defined, the market price of the underlying share or bond or futures contract is observable, and the volatility of the underlying price over the next few months can generally be accurately estimated. On the other hand, the opportunities inherent in real options to expand or contract a business are extremely complex. Further, the market can be relatively uninformed about the current value of the business that might in the future be expanded or contracted.
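Because those determinants are so concrete, a financial option can be valued in a few lines of code. The sketch below implements the standard Black-Scholes formula for a European call on a non-dividend-paying asset; the parameter values in the comment are illustrative, not drawn from the lecture.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, maturity: float,
                       rate: float, vol: float) -> float:
    """Black-Scholes value of a European call on a non-dividend-paying asset.

    spot: current price of the underlying; strike: exercise price;
    maturity: years to expiry; rate: continuously compounded risk-free rate;
    vol: annualised volatility of the underlying.
    """
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# A three-month at-the-money call on a $100 share, five per cent rate,
# 20 per cent volatility: worth about $4.61.
price = black_scholes_call(spot=100, strike=100, maturity=0.25, rate=0.05, vol=0.2)
```

For a real option, by contrast, every one of these five inputs may itself be a rough estimate.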




The estimation of the volatility of the future value of the business can be more an art than a science. Yet valuing and operating the business today requires an understanding of how to optimally value and manage the inherent options to grow and shrink the business in the future. The CEO of Berkshire Hathaway has expressed his view on the difficulty of valuing future growth options by musing: ‘On the final exam [for a business valuation course], I’d probably take an Internet company and [ask], How much is it worth? And anybody that gave me an answer I’d flunk…’ Most finance academics warm to these words, and not just because of Warren Buffett’s tacit approval of such a speedy method of grading exam scripts. Much of the difficulty lies in the looseness of language when discussing real options. The common practice of confusing choices and valuable options is at the root of the problem. Simply having a choice about how to manage a business does not imply that the choice, or the business itself, has any value.

When are property rights valuable?

Property rights that allow you to choose to give up one asset in exchange for another at off-market rates are valuable. Acquiring (or selling) an asset at off-market rates means acquiring (or selling) it for less (or more) than its market value. An option has value only to the extent the option-holder has an ability to do something others cannot – namely, an ability to exchange assets at off-market rates. A call option on BHP Billiton gives its holder an option to acquire a share in BHP at a fixed price, which is valuable precisely because everyone else who desires a BHP share must pay the market price – the option to buy a share at its market price is a valueless right. I might be happier to be able to include BHP in my portfolio, but having that right along with everyone else does not make me a wealthier man. The source of market value in any option, whether the option is real or financial, must be traceable to some property right that the owner of the option has. An analogy may help. I have the option of remembering my beautiful wife’s birthday. And having optimally exercised that option I dutifully place $50 in my wallet and consider the option of buying flowers or chocolates. Again I know that life will be even more pleasant if I make the right operating decision and stop at the florist. Surely I am adding value. I then consider roses or tulips. Perhaps a difficult problem for some, but in my case I know that the optimal exercise strategy is to select tulips. I am feeling wealthier still. The choice between purple and red tulips is resolved appropriately and, having made this series of important strategic choices, the purple tulips are to be gift-wrapped. Now I feel like a king and with my Midas touch open my wallet and find but a single $50 note. Optimally exercising all my options has not added any value. My ability to exchange $50 worth of chocolates for $50 worth of purple tulips is not a valuable property right. Similarly, a company may face boundless choices and yet still be valueless.

Exercising a financial option

Even where the property right is clear, the optimal exercise policy of a real option may differ from that for a financial option. Consider a call option giving the right to acquire 1,000 ounces of gold at a fixed price of $500 per ounce at any time over the coming year. Call options on gold are financial options traded on the Commodity Exchange in New York. Suppose spot gold is trading for $900, the interest rate is seven per cent per annum and the one-year gold futures price is $963. The option must be worth at least $400 since the option-holder could immediately pay $500 cash to acquire $900 worth of gold. The spot-futures differential in our example is such that gold is being priced as a store of value. The spot price is simply the present value of the futures price: i.e. $900 = $963/1.07. Suppose you were considering exercising your call today and netting the $400. You would be better off by not exercising today and instead:

(a) committing to exercise at the end of the year;

(b) selling forward the gold you will receive when you do exercise at year-end, thereby locking in the receipt of $963; and

(c) borrowing $900 today against your future receipt of the forward price.


This alternative strategy dominates because instead of paying $500 today when you exercise immediately, you can earn one year’s interest on $500 by delaying any exercise of your call until year-end. And you would be better off still by not committing to exercise your call at year-end, instead waiting till year-end and then exercising your call only if the gold price is above $500 at year-end. This is an illustration of the familiar injunction: never exercise a call option on a non-dividend-paying asset early. Since gold is typically priced as a store of value, there is no convenience yield built into its price.
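The dominance argument can be checked directly with the per-ounce figures from the example above (a sketch; the numbers come from the text):

```python
spot, strike, futures_price, rate = 900.0, 500.0, 963.0, 0.07

# Gold priced as a store of value: the spot price equals the present
# value of the one-year futures price ($900 = $963/1.07).
assert abs(spot - futures_price / (1 + rate)) < 1e-9

# Exercise immediately: pay $500 now for $900 worth of gold.
immediate = spot - strike                                  # $400 per ounce

# Commit to exercising at year-end, sell the gold forward at $963 and
# borrow against that receipt: worth the present value of ($963 - $500).
delayed_committed = (futures_price - strike) / (1 + rate)  # about $432.71

# Delaying the $500 outlay earns a year's interest, so waiting dominates.
assert delayed_committed > immediate
```

Waiting without committing is worth more again, because exercise can then be skipped entirely if the price ends below $500.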

Exercising a real option

Now consider an apparently analogous real option. Suppose you manage a one-year lease on a site that you know contains 1,000 ounces of gold. For simplicity assume that at any time during the life of the lease the gold can be extracted instantaneously at a cost of $500 per ounce. The lease is in effect a call option on 1,000 ounces of gold with an exercise price of $500 per ounce and a one-year life. But what would happen if you manage the lease on behalf of a public company whose shareholders are concerned that this may be another Bre-X situation? (For some particularly sobering reading see http://geology.about.com/cs/mineralogy/a/aa042097.htm: ‘The Bre-X Gold Scandal: First there is a mountain of gold, then there is no mountain, then there is no gold mining company named Bre-X.’) If you simply announce the existence of the lease then, rather than netting $400 today, you would be delaying extraction till year-end. In the circumstances, the capital market may conclude that the shares are near valueless and your tenure as a manager would be brief. Therefore, you may find yourself forced to exercise the real option early to reassure investors that the underlying gold exists: information asymmetries between insiders and outsiders can affect the optimal exercise policy for real options when the firm’s goal is to maximise the current share price being set in a relatively uninformed market-place.

As a final example of the importance of understanding the property right underlying a valuable real option, suppose you are considering the purchase of acreage in a finger of the Yarra Valley at the edge of Melbourne with a view to planting grapes. The land is currently used for dairying. The investment in land and vines will amount to $10 million. In six years you will taste your first bottling, at which point you will discover whether you have produced a premium wine or yet another ‘blah’ wine. You must then decide whether to exercise your growth option and invest in the marketing and facilities necessary for a winery, or, alternatively, exercise your abandonment option and instead turn the property into a residential subdivision. Unless the wine is great it will be optimal to abandon grape-growing. You estimate the likelihood of producing a great wine. You also estimate the costs and revenues associated with producing wine and the costs and revenues associated with a residential development. You then estimate the expected payoff from your venture in seven years’ time, assuming that you exercise your growth and abandonment options optimally. Finally, you discount back at your estimate of the opportunity cost of capital and obtain an estimated present value of $25 million. On these calculations, the investment has a net present value of $15 million.
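The calculation can be sketched as a two-state decision tree. The text supplies only the $10 million outlay and the $25/$15 million values; the probability, payoffs and discount rate below are hypothetical numbers chosen to reproduce roughly those totals.

```python
# Hypothetical inputs (not from the lecture): a 50 per cent chance of a
# premium wine, year-7 payoffs in $ millions, a 10 per cent cost of capital.
p_great = 0.5
winery_payoff = {'great': 70.0, 'blah': 10.0}   # if the growth option is exercised
subdivision_payoff = 30.0                        # if the abandonment option is exercised
r = 0.10
outlay = 10.0

# Optimal exercise: in each state, take the better of the two options.
expected = (p_great * max(winery_payoff['great'], subdivision_payoff)
            + (1 - p_great) * max(winery_payoff['blah'], subdivision_payoff))

pv = expected / (1 + r) ** 7    # about $25.7m
npv = pv - outlay               # about $15.7m

# Ignoring the abandonment option (building the winery regardless) destroys value:
pv_rigid = (p_great * winery_payoff['great']
            + (1 - p_great) * winery_payoff['blah']) / (1 + r) ** 7   # about $20.5m
```

The roughly $5 million gap between `pv` and `pv_rigid` is the value of the abandonment option itself: the flexibility, exercised optimally, carries part of the venture's worth.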

The skill source underlying this profitable venture

It is necessary to identify the source of the skill that enables an investment of $10 million to create a business that you value at $25 million. If you cannot identify your relative skill as either a vintner or a land developer, you should conclude that the present value is at best $10 million. Absent a comparative advantage, you can expect to lose rather than make money. But suppose you do have a rare skill. You have observed something unknown to others, namely, that the clay soil in just this finger of the valley is surprisingly similar to that of the Coonawarra region. You may well have discovered cool-climate Coonawarra country. Having identified your property right you must then seek to enhance it. For example, you might buy options to acquire the surrounding dairy farms. You buy the options now because, some years later, when you may be winning wine medals based on the ideal soil of this property, your neighbours will want a much higher price for their land.




And, if you lack the capital to buy both this property and options on the surrounding properties, you should hide the real source of the value of the venture – the quality of the soil – by instead lauding your winemaking skills. However, what if the valuable property right is in fact the skill of your winemaker? What if what you in fact possess is the ability to identify a uniquely gifted member of the graduating class of Roseworthy College in South Australia whose skill is currently not appreciated by others? In the circumstances, the present value of the expected future cash flows associated with your venture may really be $25 million. But who will be entitled to those profits if the winemaker does prove to be so skilled and you have immediately promoted her to chief winemaker? Your star employee is likely to demand the rents to her rare skill, and you as the winery owner may enjoy only one gloriously profitable initial year. Thereafter, she will enjoy all the rents and you may as well subdivide the property. Having understood that your particular skill is the ability to recognise skill in people before others do, how can you profit from it? Perhaps you need to try to convince your winemaker – and any would-be buyers of her services – that her skills are worth nothing without the unique blend of the soil on the property. Further, in initial contract negotiations with her, you might offer deferred compensation that gives her a large claim on the future profits from the winery in the form of restricted stock that will only fully vest if she works with you for 10 years. That way, you may also enjoy a share of 10 years of profits. Readers who mutter that only a finance professor could recommend ‘underpaying’ young talented employees should remember that if you do not conceal certain facts and/or contract in a way that locks her in, you will have spent $10 million proving to the world just how valuable she is. Once you then find that you cannot afford to retain her and are forced to subdivide, you will simply have lost money in giving her career such a great start.


Real options analysis and investment appraisal

The real promise of real options analysis

What, then, is the real promise of real options analysis (ROA)? First, ROA allows us to recognise in a systematic manner the impact of future expansion and contraction decisions on value today, and hence on whether an initial investment is worthwhile.

Second, option valuation formulae can sometimes be used in addition to, but not as a substitute for, a traditional discounted cash flow (DCF) approach. A correctly implemented DCF valuation of a business will recognise and value the expected future net cash flows from optimally exercising any expansion and contraction options inherent in the business. When the two valuation techniques appear to yield different answers, it will be because inconsistent assumptions have been made about the business in applying them. Identifying and eliminating such inconsistencies will improve the accuracy of your valuation.

Finally, and most importantly, the property rights that make the real options valuable must be identified and protected. If you cannot identify the property right, then any purported positive net present value associated with an investment opportunity will be no more than a mathematical error on your part. However, if you can identify the property right, then you can be confident that yours is truly a real option.

Professor Bruce Grundy is in the Department of Finance at the University of Melbourne.


inflation targeting

A review of the theoretical foundations of inflation targeting and current research in the area, with special reference to the Australian experience

by guay c. lim

A condensed version of a lecture presented at the Faculty of Economics and Commerce 2008 Alumni Refresher Lecture Series.

What is inflation targeting?

Inflation targeting is the practice of monetary policy in which the central bank aims to keep inflation within a quantitatively defined band and – just as importantly – communicates the policy to the public. To date, inflation targeting is practiced in 26 countries around the world (see Table 1 for a chronological list).

Table 1: Inflation Targeting Countries*

Year of adoption | Country
1989 | New Zealand
1991 | Chile, Canada
1992 | Israel, United Kingdom
1993 | Sweden, Australia
1998 | Czech Republic, South Korea, Poland
1999 | Mexico, Brazil, Colombia
2000 | Switzerland, South Africa, Thailand
2001 | Norway, Iceland, Hungary
2002 | Philippines, Peru
2005 | Indonesia, Romania, Slovakia
2006 | Turkey
2007 | Ghana

* Various sources. Finland and Spain abandoned inflation targeting when they joined the European Monetary Union in January 1999; Slovakia, Poland, the Czech Republic and Hungary are expected to give up inflation targeting when they join the Eurozone.

As shown, inflation targeting is practiced by different types of economies: industrialised countries such as New Zealand, Canada and the UK, and newly industrialised and emerging economies such as Chile, Korea and Mexico. The operational details vary, and the practice has been variously described as pure inflation targeting, flexible inflation targeting, full-fledged inflation targeting, forward-looking inflation targeting and strict inflation targeting – to name just a few.

Inflation targeting policy frameworks

Table 2 shows the operational details of inflation targeting for Australia, compared with New Zealand, Canada and the United Kingdom.

Table 2: Inflation Targeting Policy Frameworks*

Country | Measure of inflation | Target band and horizon | Policy timeline
Australia | Consumer Price Index (underlying) | Average of 2-3% over the medium term | No explicit timeframe to correct deviations
New Zealand | Consumer Price Index | Average of 1-3% over the medium term | No explicit timeframe to correct deviations
Canada | Consumer Price Index (core) | Midpoint 2%, ±1% band | 6-8 quarters to correct deviations
United Kingdom | Consumer Price Index | Midpoint 2%, ±1% band | If the deviation exceeds 1%, the Governor must provide a written explanation to the Chancellor

* For updates and details, see the websites of the central banks.



The consumer price index (excluding volatile elements) is generally used as the measure of inflation. The target band for Australia is between two and three per cent, a range of one per cent compared with the more common range of two per cent adopted by many other countries. Like all central banks that have adopted inflation targeting, the Reserve Bank of Australia communicates its policy stance and releases monthly updates explaining its policy decisions. But, unlike other countries' institutional structures, the Reserve Bank of Australia does not face an explicit timeline to correct deviations from the target band. This gives the Reserve Bank the flexibility to pay attention to the state of the economy – as measured by, say, the output gap or the employment gap – in its deliberations about the stance of monetary policy.

Inflation targeting as practiced in Australia is by no means strict. The Reserve Bank is not an 'inflation nutter', a term coined by Mervyn King, the Governor of the Bank of England, to describe central banks that focus exclusively on inflation.

A simple graphical way to present inflation targeting with respect to the unemployment gap is shown in Figure 1. The vertical axis shows the inflation gap (the deviation of actual inflation from the mid-point of the target band of 2.5 per cent), while the horizontal axis shows the unemployment gap (the deviation of actual unemployment from an underlying nine-quarter moving average). The top left-hand quadrant shows the occasions when the inflation gap was high and the unemployment gap low, indicating that a tightening of monetary policy (a positive change in the cash rate) was warranted. In contrast, the bottom right-hand quadrant shows the occasions when a loosening of monetary policy (a negative change in the cash rate) was warranted, as the inflation gap was low and the unemployment gap high.
The graphical analysis in Figure 1 provides some indication of the importance of unemployment in the Australian inflation targeting policy framework.
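As a rough illustration of how these gaps might be computed, the sketch below classifies quarters into the tightening and loosening quadrants just described. All data are invented, and the nine-quarter moving average is implemented as a simple trailing average; the Reserve Bank's actual procedures are more involved.

```python
# Illustrative sketch of the Figure 1 construction (invented data):
# inflation gap = actual inflation - 2.5 (midpoint of the 2-3% band);
# unemployment gap = actual unemployment - its 9-quarter moving average.
# High inflation gap + low unemployment gap -> tighten (raise cash rate);
# low inflation gap + high unemployment gap -> loosen (cut cash rate).

def moving_average(series, window=9):
    """Trailing moving average; early entries average the data seen so far."""
    return [sum(series[max(0, i - window + 1): i + 1]) /
            len(series[max(0, i - window + 1): i + 1])
            for i in range(len(series))]

def policy_signal(inflation, unemployment, midpoint=2.5):
    """Classify each quarter into the quadrants described in the text."""
    u_trend = moving_average(unemployment)
    signals = []
    for pi, u, ut in zip(inflation, unemployment, u_trend):
        infl_gap, unemp_gap = pi - midpoint, u - ut
        if infl_gap > 0 and unemp_gap < 0:
            signals.append("tighten")
        elif infl_gap < 0 and unemp_gap > 0:
            signals.append("loosen")
        else:
            signals.append("ambiguous")
    return signals

# Invented quarterly data for illustration only.
inflation = [3.1, 3.4, 2.2, 1.9]
unemployment = [5.0, 4.6, 5.2, 5.6]
print(policy_signal(inflation, unemployment))
```

The first quarter always classifies as ambiguous here, since the trailing average of a single observation equals the observation itself.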


Figure 1: Scatter plot of the inflation gap, and the unemployment gap; positive changes in the cash rate (grey); negative changes in the cash rate (black); 1993:1-2007:4

Why adopt inflation targeting?

The quantity theory of money equation (MV = PY) serves as a convenient framework for thinking about the development of inflation targeting. If output Y is determined by real factors, it follows that the price level P is determined by the nominal term MV (the quantity of money multiplied by the velocity of money). In the past, when velocity V was fixed by technology, P was determined by the supply of money M; maintaining price stability was therefore tantamount to managing the money supply. When velocity became more volatile and less predictable owing to financial innovation, controlling the money supply to control P was no longer feasible. Instead, the policy strategy became the direct management of P – in other words, inflation targeting.

Australia's history of monetary policy mirrors this point. In the 1960s, monetary policy was about controlling the money supply, in particular bank deposits. During this time the exchange rate was fixed, and it served as the nominal anchor. In the 1970s, however, the breakdown of the Bretton Woods system saw many countries abandon fixed exchange rates in favour of flexible rates; the Australian dollar was floated in December 1983. These were also years of increasing globalisation and financial innovation, so much so that it became increasingly difficult to define the money supply. Management of the growth of money – monetary targeting – was abandoned, and the checklist approach to monetary policy was adopted.
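The quantity-theory logic above can be illustrated with a small numerical sketch. The figures are invented, and the function is simply MV = PY solved for P; the point is only that P tracks M when V is stable, and ceases to do so when V is volatile.

```python
# Quantity theory of money: M * V = P * Y.
# With V fixed and Y set by real factors, the price level P moves
# one-for-one with the money supply M -- the case for money targeting.
# Once V becomes volatile, M no longer pins down P, which is the
# motivation given in the text for targeting inflation directly.

def price_level(m, v, y):
    """Solve MV = PY for the price level P."""
    return m * v / y

y = 100.0        # real output, determined by real factors
v_fixed = 2.0    # stable velocity

# Money supply growing 10% a year: P also grows 10% a year.
for m in (50.0, 55.0, 60.5):
    print(price_level(m, v_fixed, y))

# Volatile velocity: the same money path no longer pins down P.
for m, v in zip((50.0, 55.0, 60.5), (2.0, 1.7, 2.4)):
    print(price_level(m, v, y))
```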


Over the next few years, the Reserve Bank gravitated towards managing inflation directly; and finally, in April 1993, Australia formally adopted inflation targeting as the strategy for monetary policy. Note that a floating exchange rate is a requirement for a well-functioning inflation targeting system: in a world of high capital mobility, independent monetary policy cannot coexist with a pegged exchange rate regime – the so-called impossible trinity.

In parallel with globalisation and financial innovation, two critical developments in economic theory promoted monetary policy as a short-run demand management tool focused on inflation: the relationship between inflation and unemployment, and the importance of commitment and credibility in anchoring expectations.

Inflation and unemployment

In 1958, A.W. Phillips published his famous article demonstrating a negative relationship between unemployment and the rate of change of money wages in the UK. Put simply, during periods of high unemployment, employees are unlikely to demand big increases in pay. In 1959, while on sabbatical leave at the University of Melbourne, Phillips estimated his second 'Phillips Curve' and once again established the negative relationship between changes in money wages and the unemployment rate, this time for Australia over the period 1947–1958.

Since wage inflation and price inflation are highly correlated, academic research became increasingly focused on the negative relationship between price inflation and the unemployment rate. In particular, the burning question became: if there exists a negative relationship between inflation and unemployment, must we accept high inflation in order to have low unemployment?

Figure 2 shows the scatter plot of wage and price inflation rates against unemployment rates for the period 1965–2007. It would be difficult indeed to see any empirical evidence there to support a negative relationship between inflation and unemployment for Australia.

Figure 2: Scatter plot of wage and price inflation against the unemployment rate 1965–2007
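The slope of a relationship like the one plotted in Figure 2 can be estimated with ordinary least squares. The sketch below uses invented, exactly linear data purely to show the mechanics; real Phillips-curve estimation must confront the structural breaks discussed in the text.

```python
# A minimal least-squares estimate of a Phillips-curve slope on invented
# data: wage inflation regressed on the unemployment rate. A negative
# slope is the relationship Phillips documented; whether it is stable
# across subperiods is exactly the question the figures raise.

def ols_slope_intercept(x, y):
    """Simple bivariate OLS for y = a + b*x + e; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Invented observations with a built-in negative relationship.
unemployment = [2.0, 3.0, 4.0, 5.0, 6.0]
wage_inflation = [8.0, 6.5, 5.0, 3.5, 2.0]

a, b = ols_slope_intercept(unemployment, wage_inflation)
print(a, b)   # prints 11.0 -1.5: higher unemployment, lower wage growth
```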

However, there have been many structural changes over this sample period of more than 40 years. Figure 3 illustrates the changing nature of the relationship between inflation and unemployment. We see a positive, vertical relationship before the breakdown of the Bretton Woods system of exchange rate determination (1964–1974), a negative relationship before the floating of the Australian dollar (1975–1983), a steeper relationship before inflation targeting (1984–1992), and a flatter relationship since 1993, after allowing for the introduction of the GST in 2000. Much econometric research has gone into estimating the slopes of these relationships, as they provide invaluable information about the 'trade-off' between inflation and unemployment.

Figure 3: Phillips Curves over time

(horizontal axis of each panel: unemployment rate)



Alongside the empirical research, developments in economic theory began to question the existence of a long-run trade-off between inflation and unemployment. Put simply, while attempts by a central bank to change the inflation rate might produce a nominal 'surprise' (with real short-run consequences), rational agents would factor the price changes into their decision-making process. Thus, over time, there would be no real effects and, consequently, no long-run trade-off.

Figure 4: Inflation and wage expectations

More importantly, academic economists began to argue that since unemployment is a real phenomenon, issues about labour supply and productivity were better managed as long-run problems via the fiscal arm of government. In contrast, inflation is a nominal phenomenon and should be managed by monetary policy. So with financial innovations and globalisation, inflation targeting became the preferred monetary policy strategy to manage short-run demand issues.

Anchoring expectations

The second influential strand of economic theory bearing on inflation targeting was the body of research showing that commitment to a strategy enhances credibility. This in turn 'anchors' expectations and leads to better outcomes. In other words, if a central bank sends a clear signal that inflation control is a priority and the policy is well-communicated, people soon expect – and act on the expectation of – stable prices, and actual inflation will remain low and stable.

The left-hand graph in Figure 4 shows the high correlation between actual underlying inflation and the Melbourne Institute measure of consumer inflationary expectations. More interestingly, look at the behaviour of wage expectations over the recent period of cash rate increases (the right-hand graph). When the Reserve Bank began its period of monetary tightening, wage expectations remained stable and in fact turned down, suggesting a strong belief in the downturn of inflation and the credibility of the inflation targeting policy. Wage expectations only began to creep up when the Reserve Bank kept raising the cash rate because inflation stayed stubbornly high.


The practice of inflation targeting

The interest rate is the instrument used to achieve the inflation policy objective. In practice, the Reserve Bank of Australia meets on the first Tuesday of every month to determine the cash rate, and its deliberations are announced. The effective transmission of a policy change then depends on the banks and on changes in private sector behaviour. Much empirical research has been devoted to improving our understanding of how a change in the policy interest rate is transmitted to the rest of the economy.

Australian economic performance pre and post inflation targeting

Figure 5 shows the performance of the Australian economy before and after the adoption of inflation targeting. The level of inflation has certainly come down; GDP growth remains high but is less volatile; the employment to population ratio is trending up; and the unemployment rate is trending down. By all accounts, the economic indicators are more favourable post inflation targeting, but how much of this is due to the good luck of the resources boom, and how much to good management, is still debatable.


Figure 5: Some macroeconomic indicators – 1984–2007 (panels: inflation rate; GDP growth rate; employment to population ratio; unemployment rate)

Figure 6: Tradables and non-tradables (panels: growth in prices; contributions to aggregate inflation)

Criticisms of inflation targeting

The strategy of inflation targeting has been criticised mainly on two fronts. The first criticism is that a narrow inflation-only focus can lead to undesirable outcomes for growth and employment. As we have already noted, this criticism is not relevant in Australia. The other criticism is that inflation is mainly imported and hence beyond the control of the Reserve Bank. But is this the case for Australia? The left-hand graph in Figure 6 shows the recent divergence in the rate of growth of the price indexes of tradables and non-tradables, while the right-hand graph shows the smaller contribution of tradables to overall inflation. It would seem that, in recent times, inflationary pressures in Australia have been predominantly home-grown, notwithstanding the contribution – direct and indirect – of the recent hikes in energy and food prices.
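The contribution measure behind the right-hand graph of Figure 6 can be sketched as a weighted sum. The weights and inflation rates below are invented for illustration; actual CPI contribution calculations use detailed expenditure weights.

```python
# Sketch of the decomposition behind Figure 6 (weights and rates invented):
# aggregate inflation is the expenditure-weighted sum of tradables and
# non-tradables inflation, so each component's contribution is weight * rate.

def contributions(weights, rates):
    """Contribution of each component to aggregate inflation."""
    return {k: weights[k] * rates[k] for k in weights}

weights = {"tradables": 0.4, "non_tradables": 0.6}   # expenditure shares
rates = {"tradables": 1.5, "non_tradables": 4.5}     # annual inflation, %

contrib = contributions(weights, rates)
print(contrib)                # tradables contribute ~0.6 points, non-tradables ~2.7
print(sum(contrib.values()))  # aggregate inflation: ~3.3 per cent
```

On these invented numbers, most of aggregate inflation is home-grown, which is the shape of the argument made in the text.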

Concluding remarks

We have looked briefly at the international practice of inflation targeting and noted the importance of the quantitative target and of communicating the policy. Developments in support of inflation targeting include financial innovation and globalisation, and academic research on the nature of the trade-off between inflation and unemployment, the role of monetary policy as a short-run demand management tool, and the importance of commitment and credibility.

The Australian experience to date has been favourable, but how much is due to good luck and how much to good management is still to be explored. A great deal of current research, much of it based on dynamic stochastic general equilibrium models, is focused on ways to enhance the strategy of inflation targeting in the face of a range of shocks, especially asset-price shocks. There is still much we can learn about the strategy of inflation targeting.

Professor Guay C. Lim is Professorial Research Fellow, Melbourne Institute of Economic and Social Research. She is also Adjunct Professor in the Department of Economics at the University of Melbourne.




new economic geography and manufacturing

Understanding the existence of cities and regularities about the location of manufacturing activities within countries

by russel hillberry

Revelations of night photos from space

The US space agency NASA publishes composite photos that show night-time views of the earth from space.1 What is striking about these images is that a casual observer can usually identify particular locations with little more than the geographic scattering of light to guide them. For the purpose of this lecture, we take light that is visible from space as a useful indicator of urbanisation.

This lecture discusses the New Economic Geography (NEG) literature in economics. Among other things, the NEG literature seeks to explain the pattern of light in these images. Why do (brightly lit) heavily populated urban areas and (dark) rural areas coexist? Why are urban areas so often located near one another in economic space? These are not simply questions about why the world looks as it does at night; they are fundamental questions about human behaviour and the organisation of industrial societies.

A related topic, though one not as well captured by the night photos, is the extreme localisation of particular types of production. Why, for example, did Pittsburgh, Pennsylvania, become the home of several steel companies, while Detroit, Michigan, was the home of several auto companies? Why are most US band instruments produced in the small city of Elkhart, Indiana, while most carpet and flooring materials are produced in Dalton, Georgia? The recent revival of economic geography has focused on these topics.

Head and Mayer (2004) provide a guide to the characteristics of a standard NEG model. They note that an NEG model typically has five components.

First, there are increasing returns to scale in production, and these are internal to the firm. Second, the existence of increasing returns generates market power, and this must be modelled appropriately. Third, there are costs of trading over geographic space. Fourth, firms are able to choose their locations. Fifth, the location of demand is endogenous, either through the mobility of households or via the effect on the demand for intermediate goods that arises from the location decisions of downstream firms.

A model of this type has the following implications. Increasing returns to scale imply that production will occur in a limited number of locations, because spreading production over many places would leave scale economies unexploited. Because there are trade costs, firms choose to locate near the bulk of demand. Trade costs also give consumers an incentive to locate near the firms, to avoid paying more for goods.
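The feedback between firm and worker location can be caricatured in a few lines of code. This is not a full NEG model (there are no prices and no explicit trade costs); the drift rules and parameters are invented solely to show how a small initial advantage can snowball into a core-periphery outcome.

```python
# A deliberately minimal two-region sketch of the agglomeration logic
# described above: firms prefer the region with more demand (workers),
# workers prefer the region with more firms (lower delivered prices under
# trade costs), and the feedback loop concentrates activity in one core.

def simulate(firm_share=0.55, worker_share=0.5, speed=0.2, steps=200):
    """Iterate the firm/worker feedback; shares are region 1's fractions."""
    for _ in range(steps):
        # Firms drift toward the region holding more demand (workers).
        firm_share += speed * (worker_share - 0.5)
        # Workers drift toward the region holding more firms.
        worker_share += speed * (firm_share - 0.5)
        # Keep both shares inside [0, 1].
        firm_share = min(1.0, max(0.0, firm_share))
        worker_share = min(1.0, max(0.0, worker_share))
    return firm_share, worker_share

# A small initial advantage for region 1 snowballs into a core.
print(simulate(firm_share=0.55))   # -> (1.0, 1.0): full agglomeration
# From an exactly symmetric start, nothing moves: an unstable equilibrium.
print(simulate(firm_share=0.5))    # -> (0.5, 0.5)
```

The symmetric starting point illustrates why small shocks matter in these models: the even split is an equilibrium, but any perturbation tips the system toward a core-periphery outcome.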

Core and peripheral areas

This basic model thus generates the co-location of firms and workers in a single location – a city. There are forces in the model that push some firms toward breaking away and locating in peripheral areas, but these are typically overcome by the benefits, generated by increasing returns and trade costs, of being in the geographic 'core'. The core-periphery structure of equilibria in these models is the common outcome that makes them useful and interesting. Among the outcomes that emerge is that, in some equilibria, higher wages are sustained in the core than in the periphery.



The core may also be able to support higher tax rates and more generous public services than can be supported in the periphery. The advantages maintained by the core can sometimes persist in the face of certain shocks (i.e. changes in trade costs). However, the effects of such shocks are non-linear: in some cases a shock will lead to a rapid unravelling of the core.

These results offer us a way to understand not only the existence of cities, but also regularities about the location of manufacturing activities within countries. The example most often used in the literature is the 'manufacturing belt' in the northeast of the US. From the late 1800s to the mid 1900s, US manufacturing prowess was unrivalled in the world. Its workers received very high wages by world standards, yet were often able to produce on a scale that made their goods seem cheap by world standards. During this period of US manufacturing dominance, much of the country's manufacturing activity was geographically clustered in a small section of the country – the Great Lakes region and the Northeast.2 Estimates reported in Krugman (1991) suggest that 70 per cent of US manufacturing activity occurred within this belt in 1900, and 64 per cent as late as 1957. A map showing the distribution of night-time light in the late 1950s would have made the unusual level of activity in this region clear to the naked eye.

Much of the country's population lived inside the manufacturing belt. Consistent with the models, manufacturing firms and the bulk of final demand were co-located. As some NEG models indicate, the manufacturing belt was able to sustain higher wages than the rest of the country. The region also sustained higher tax rates and better quality public services. For example, it hosted the bulk of the country's great cultural institutions, including the most prominent universities, museums and symphonies.
An important feature of economic activity within the US manufacturing belt was the importance of intermediate goods trade. Detroit produced autos, so auto parts were produced in nearby areas of Michigan, Ohio and Indiana.


NEG models have been adapted to include intermediate goods. A typical result is that intermediate goods trade magnifies many of the results of the basic model: the core pays an even larger wage premium, even larger gaps between core and periphery taxes can be supported, and the unravelling of the core can be even more disruptive.

Intermediate goods trade can also be understood as a reason for the high levels of localisation that are sometimes observed. Note the co-location of auto parts producers and auto producers in the region surrounding Michigan. The existence of auto producers in Michigan (along with trade costs) made it too costly for auto parts producers to locate anywhere else; and once the auto parts producers were located in the region, the auto producers had an additional reason to stay. It is in this way that intermediate goods trade magnifies the results of the basic model: a higher wage gap between core and periphery can be maintained, as can higher tax rates. However, if a shock is large enough to induce firms to leave (e.g. low-wage manufacturing in China, or lower costs of trading auto parts over distance), the implications for the core are more severe.

As an aside, let me note my belief that many of the current political and economic tensions in the US have to do with the unravelling of the core that was once the manufacturing belt. Lower trade costs, rising incomes in the rest of the world, the emergence of low-wage manufacturing in Asia – all of these could unravel the belt. Indeed, manufacturing activity is moving offshore, but it is also moving to other parts of the US. States that were in the core are finding it difficult to maintain the generous public spending and welfare provisions to which their citizens had become accustomed.
Wages in much of the old manufacturing belt are stagnant or falling, especially when compared with the rest of the country. The unravelling of the core that was the US manufacturing belt is a large enough phenomenon to affect political and economic choices at the national level.


China and India

The casual empirical lessons we learned from the US manufacturing belt are also useful for thinking about China and India. Rapid growth in Chinese manufacturing is often attributed to very low wages there. While low wages have certainly been important, another part of the story may well be that China is in a very good position to fully exploit any scale economies that exist, be they external or internal. Not only does China have a vast domestic market; because of lowered tariffs, it also has access to a vast world market. Thus it seems likely that China is able to reap all the benefits of scale that the US once did, and more.

If this is the case, we might also expect certain features of the US manufacturing landscape to appear in China. Just as cities in the US became so specialised that they were identified with particular products, we are seeing the same in China. For example, Qiaotou, near Wenzhou in Zhejiang province, is known as the button city: it produces 60 per cent of the world's buttons, and 80 per cent of the buttons produced in China. Other products, including lampshades and badminton racquets, seem to be localising in much the same way. As in the US, Chinese manufacturing also appears to be co-locating within particular geographic regions. At the moment there seem to be three main agglomerations (around Shenzhen, Shanghai and Beijing) that contain much of the activity. While these cities are distant from one another, it is notable that they are all located in China's east – close to much of China's population and to the global marketplace.

As with China, the NEG models have policy lessons for India. Compared with China, India remains relatively rural and poor. It does not appear to have the same vast agglomerations of manufacturing activity, and at least two policies are partially responsible. First, India has had an explicit policy of reserving some sectors for small firms. This policy limits the ability of Indian firms to achieve scale economies. Fortunately, it is rapidly being changed.

A second policy constraint that limits agglomeration is the set of interstate trade barriers that limit trade within India. Faced with such restrictions, it can be more attractive to produce in multiple states than to produce in a core region and export to the periphery.

Despite these constraints, there remain great prospects for further development in India and China. China seems already to have begun exploiting the scale economies that can be achieved in large-scale agglomerations of manufacturing activity. It may well be that a Chinese core will remain competitive even after Chinese wages rise. One big threat to such a core will be India, which also has the potential to be a core manufacturing centre serving the world.

Dr Russel Hillberry is Senior Lecturer in the Department of Economics at the University of Melbourne.

References

Head, K. and Mayer, T. (2004) 'The Empirics of Agglomeration and Trade', in Handbook of Regional and Urban Economics, Volume 4: Cities and Geography, Amsterdam: North-Holland.

Krugman, P. (1991) Geography and Trade, Cambridge, MA: MIT Press.

1 http://nssdc.gsfc.nasa.gov/planetary/image/earth_night.jpg
2 Incidentally, much of Canada's manufacturing activity lies just over the border from this region.




for love or money? paying doctors to improve the quality of health

The methods through which doctors are paid have been shown to influence the decisions they make, and therefore the quality and costs of health care provided

by anthony scott

A condensed version of his Alumni Refresher Lecture delivered at the University of Melbourne on 20 August 2008.

Introduction

A key area of health expenditure is the reimbursement of services provided by doctors. In 2005-06, 1.6 per cent of GDP ($15.5bn) was paid to doctors through Medicare fees, representing around 19 per cent of recurrent health care expenditure. This excludes the cost of doctors employed by public hospitals. The treatment decisions made by doctors – such as prescribing, referrals, admission to hospital and diagnostic tests – indirectly determine the level of most other types of health care expenditure. The decisions doctors make are the key to improving efficiency and equity in health care.

The methods through which doctors are paid have been shown to influence the decisions they make, and therefore the quality and costs of the health care provided. The aim of this short paper is to examine the role of fee-for-service (FFS) payment and to suggest options for the reform of physician payment in Australia. The paper is concerned with how doctors receive their remuneration rather than with higher-level funding arrangements. In practice, any third party payer – from the Health Insurance Commission to proposed managed care or social insurance models – can adopt different payment schemes for the doctors it contracts with.

Current payment arrangements for doctors in Australia

There are a number of issues with the current FFS payment which deserve further attention. These can be illustrated with the following quotation:

'That any sane nation, having observed that you could provide for the supply of bread by giving bakers a pecuniary interest in baking for you, should go on to give a surgeon a pecuniary interest in cutting off your leg, is enough to make one despair of political humanity.' – George Bernard Shaw, The Doctor's Dilemma, 1911

The first issue raised is the objectives of the patient when visiting a physician. What do patients want from their physician? Better health is a clear objective, depending on how it is defined, but other possible outcomes can influence a patient's welfare, such as the provision of information and reassurance, and the process by which treatment is delivered – for example, whether or not a procedure is invasive. Linking these objectives to a payment scheme requires that they can be both measured and attributed to the doctor's actions. A 'fee per health improvement' would be the ideal payment system. However, the health status of patients is not routinely measured before and after they receive a treatment. The only outcome that is routinely measured is death.



Even if an improvement in health could be measured, it may be difficult to attribute it to the actions of the doctor, given the many other factors that influence health status. Attribution may be easier where good evidence from clinical trials links a health intervention to better health, but this evidence base is far from complete. And so the usual metric for FFS systems is the number of services. This is easily measurable, but it depends on the volume of services provided, and its relationship with quality and outcomes is uncertain.

The second issue raised by Shaw's quotation is the motivation of doctors. If doctors were purely motivated by self-interest and money, and alternative treatments attracted a lower fee, then they would undertake many amputations. However, we know that doctors also care about their patients' health status, and most adopt a more conservative practice style that is more closely aligned with their patients' best interests – they have intrinsic motivation. Some doctors will therefore trade off a higher income for the benefit of patients. If this is the case, then there is no need for a fee schedule as complicated as the one we have now. The nature of, and variations in, doctors' motivations should therefore determine the type of remuneration system used.

A key issue here is the absence of evidence for many health-care interventions. In many disease areas there are a number of alternative treatments that could be pursued (including doing nothing), but there is little or no evidence to guide doctors' recommendations. Where there is discretion as to which treatment to recommend, a physician in an FFS system is more likely to recommend the most highly remunerated option. Where fee relativities are not based on evidence of relative cost-effectiveness, inefficiency will prevail and the system will provide the wrong incentives for doctors.
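The tension between intrinsic motivation and fee income described above can be sketched as a simple choice model. The weight `alpha`, the treatment names and all the numbers are hypothetical; the point is only that misaligned fee relativities change the chosen treatment for weakly intrinsically motivated doctors.

```python
# Hypothetical illustration of the incentive problem described in the text:
# a doctor choosing among treatments values both the patient's benefit and
# the fee. The weight alpha on patient benefit stands in for 'intrinsic
# motivation'; all names and numbers below are invented.

def chosen_treatment(treatments, alpha):
    """Pick the treatment maximising alpha*benefit + (1-alpha)*fee."""
    return max(treatments,
               key=lambda t: alpha * t["benefit"] + (1 - alpha) * t["fee"])

# Two clinically plausible options with misaligned fee relativities:
# the conservative option helps the patient more but pays less.
treatments = [
    {"name": "conservative", "benefit": 9.0, "fee": 2.0},
    {"name": "procedure",    "benefit": 6.0, "fee": 8.0},
]

print(chosen_treatment(treatments, alpha=0.9)["name"])  # -> conservative
print(chosen_treatment(treatments, alpha=0.2)["name"])  # -> procedure
```

Under FFS, only the strongly intrinsically motivated doctor (high `alpha`) recommends the option that serves the patient best, which is the sense in which fee relativities unbacked by cost-effectiveness evidence provide the wrong incentives.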
FFS also creates a culture where new technologies require new fees. Because some doctors are extrinsically motivated, they will not provide services unless they are paid, even where there is clear benefit to patients. There is a propensity to add on services and fees as technology advances, rather than to replace services. This is less likely to happen under other types of remuneration systems.

The ability of doctors to determine the level of their own fees means that costs over and above the Medicare reimbursement are passed on to patients. Although patient charges are not new in Australia, there is much evidence to suggest that those who are deterred from using the health care system are more likely to be relatively poor and in worse health – those most in need of health care. So we see a rise in health care costs while fewer patients are being seen.1 FFS, together with doctors' ability to set fees and determine the volume of care provided, means that although user charges do reduce demand, they do not reduce doctors' incomes. This also depends on the responsiveness of demand to changes in patients' out-of-pocket payments. FFS also discourages team working, continuity of care for patients with chronic disease, working in under-served and rural areas, universal access for equal need, and specialty choices that meet the needs of the population.

Alternatives to FFS

The economic theory of incentives argues that in complex jobs, where quality and outcomes are difficult to measure, FFS payment is likely to be inefficient.2 This is because the costs of contracting and monitoring are too high, and because doctors will only do what they are paid to do.3 In a complex job, it is therefore difficult to determine an efficient fee schedule. The largely theoretical literature suggests that a mixed system of payment, or a salaried payment option accompanied by subjective performance review and with incentives for effort provided through the career and promotion structure, may be more efficient.

There have been almost 30 years of international empirical research into physician payment systems, and a number of reviews of this literature have been published.4 Compared with salaried and capitation payment, FFS has consistently been shown to lead to a higher volume and intensity of care being provided. What this literature has yet to show, however, is whether this represents 'too much' care, or over-servicing. For this, it is necessary to examine the effect of different payment systems on the health outcomes of patients. This is a key area where the literature is lacking.

Capitation payment, where doctors are paid according to the number of patients they are responsible for, has been shown to lead to lower levels of health care provision and a more conservative approach to treatment by doctors. Salaried payment has likewise been shown to produce lower levels of treatment than FFS, although there has been no empirical research on the role of incentives contained within salary scales and careers.

In order to avoid the more extreme opportunities to provide too much or too little care, blended or mixed systems of remuneration have been advocated as the way forward.6 Such systems might involve a number of elements:

1. A proportion of income that is fixed, with an additional element to reflect experience or seniority. This may be paid to all doctors providing a 'core' set of agreed services to minimum standards. Alternatively, it could form the basis of a salaried system of payment. This fixed element could also be determined by capitation payment, which is perhaps more suitable for GPs than for specialists.

2. The addition of an FFS element is desirable if fees can be linked to health improvements (e.g. immunisation) or to evidence-based guidelines of good practice in certain priority disease areas where they exist (e.g. cost-effective prescribing in coronary heart disease, chronic disease management). This is a pay-for-performance element.

3. The addition of 'non-core' payments for services that doctors can choose to provide, such as after-hours care.

A further aspect of alternative payment schemes that has not been researched in health care is the role of the payment scheme in influencing the relative attractiveness of jobs for doctors. Workforce issues are a key concern in Australia and other countries, and it is important to examine how the payment system can influence recruitment and retention. The level of expected future income has been shown to influence recruitment into certain specialties in the US. The type of remuneration scheme is also likely to influence recruitment and retention in geographical areas, as it will influence other non-pecuniary job characteristics. It may, therefore, be necessary to have a plurality of payment schemes, such that doctors can choose to be salaried employees, for example.



This may be beneficial in rural areas and is already happening in Australia to a small extent. It may also be beneficial for those doctors who desire more flexible working hours, or who do not want to run a small business or bear the costs of the red tape associated with FFS payment.

Professor Anthony Scott is a Professorial Fellow and leads the Health Economics Research Programme at the Melbourne Institute of Applied Economic and Social Research at the University of Melbourne.

Evidence from Australia

The Practice Incentive Program (PIP) for GPs in Australia was introduced in 1999. In addition to the usual FFS payments, the PIP provided capitation payments to improve practice infrastructure, and incentive payments to improve the quality of care for patients with diabetes, asthma and mental health problems, and to improve coverage in cervical screening. This pay-for-performance scheme was recently evaluated in relation to diabetes treatment,5 and the evaluation found that the HbA1c (blood glucose) test was between 15 and 20 per cent more likely to be ordered by GPs in the PIP scheme than by GPs outside it. The study controlled for a wide variety of patient and GP characteristics, and also for the self-selection of GPs into the PIP scheme. The results suggest that modifications to the FFS scheme can have marked effects on quality of care.

Conclusions

Fee-for-service payment is widely regarded as potentially inefficient and inflationary. Any proposed alternative payment system for doctors should follow a number of principles. Where possible, remuneration should be linked to performance in terms of patients' health outcomes. This requires linking evidence-based clinical guidelines and standards to the payment system, which will not be achievable for many disease areas. A blended or mixed system of remuneration should be used to avoid the extreme incentives of under- or over-servicing, and it should be tailored to reflect the different motivations of doctors. A plurality of payment schemes should, therefore, be available to all doctors. This would have a positive effect on recruitment and retention. Finally, there should be experimentation with, and rigorous evaluation of, any new payment schemes.


1 Scott, A. (2006), 'The productivity of the health workforce', Australian Economic Review, 39, 312-317.
2 Prendergast, C. (1999), 'The provision of incentives in firms', Journal of Economic Literature, 37, 7-63; Burgess, S. and Metcalfe, P. (1999), 'Incentives in organisations: a selective overview of the literature with application to the public sector', Centre for Market and Public Organisation, University of Bristol, Working Paper no. 99/016.
3 Eggleston, K. (2005), 'Multitasking and mixed systems for provider payment', Journal of Health Economics, 24(1), 211-223.
4 Gosden, T., Pedersen, L. and Torgerson, D. (1999), 'How should we pay doctors? A systematic review of salary payments and their effect on doctor behaviour', Quarterly Journal of Medicine, 92(1), 47-55; Gosden, T., Forland, F., Kristiansen, I.S., Sutton, M., Leese, B., Guiffrida, A., Sergison, M. and Pedersen, L. (2001), 'Impact of payment method on the behaviour of primary care doctors: a systematic review', Journal of Health Services Research and Policy, 6, 44-55; Robinson, J.C. (2001), 'Theory and practice in the design of physician payment incentives', Milbank Quarterly, 79, 149-177.
5 Scott, A., Schurer, S., Jensen, P.H. and Sivey, P. (2008), The Effects of Financial Incentives on Quality of Care: The Case of Diabetes, Working Paper no. 12/08, Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
6 Robinson, op. cit.; Eggleston, op. cit.


innovation: a high value-added strategy

Systematically innovative organisations have new ideas coursing through the DNA of all aspects of the firm, and while not all ideas are successful, a culture of innovation will reap significant benefits

by danny samson

Introduction

This article sets out some key ideas about innovation in organisations. It attempts to answer some important questions: what is innovation, how do you get it, and what is the business value of pursuing it? In its broadest sense, innovation is a business strategy that might be applied to differentiate your firm broadly from others, or to assist it in a niche-based strategy of specialisation. It may be one-off – a software package that you have developed in-house, a new product or service offering, an analytical technique, or a more efficient way to do your work. Such one-off innovations can create significant value; however, nothing lasts forever. So truly excellent organisations, large or small, aspire to be systematically innovative – coming up with a stream of new products and services, processing and operating methods, or even business models. Systematically innovative organisations have new ideas coursing through the DNA of all aspects of the firm, and while not all ideas are successful, they get enough things right to provide an overall handsome reward. Well-known companies in this realm include Apple, Sony, 3M and Toyota. All of these companies have had their share of failures, because innovation means taking some risks, however carefully evaluated those risks might be. There is no reason why all companies cannot pursue a similar systematic innovation capability. Indeed, many are doing so, and are delivering superior outcomes and performance to their stakeholders.

Innovation: what is it?

Innovation generally implies new things – but not necessarily new to the world. It can mean something new to a firm, or part of it; or new to a customer, client, industry or country. It is not confined to technical innovation or a new product or service line. Rather, it may be a new way of organising the internal processes of an organisation, or an innovative business model or structure. The point is that innovation in any of these domains brings an opportunity to create new business value. The latest buzzword for this in 'management-speak' is Blue Ocean Strategy, which refers to finding a new market space for new products and services.

Innovation can refer to big things or little things, and in my view the best large organisation in the world at achieving both is Toyota. At the 'big innovation' end was the massive development project of the Hybrid Synergy Drive; at the other end of the scale, in many plants, a substantial number of improvement suggestions come from employees. Each of these is a relatively small innovation – mostly process driven – and, after evaluation, the majority of them are worthy of implementation.

Innovation can occur in any and every industry, not just in high-technology product sectors. For example, there is tremendous innovation activity going on in the commodity industries of mining, oil and gas. In mining, great process innovations are occurring, such as 'block caving' mining methods and the use of biological methods ('bugs') to extract valuable minerals and metals from rock. There is plenty of innovation in services too, including everything from the use of the Internet for banking, retailing and travel, through to new service business ideas – for example, women-only gyms, premium seats in movie theatres, and downloadable music.



How to evaluate innovation and innovativeness

As the old saying goes, 'If you want to lead and manage something, you have to be able to measure it.' Driving forward on innovation requires a clear strategy, the resourcing of that strategy through operating activities, and clear expectations and measures of innovativeness. It is possible to measure innovativeness at three points: inputs, process intensiveness and outputs.

An input measure would answer the following question: relative to a firm that is just pushing out standard solutions in standard ways, what total effort and dollar resource are we putting into differentiation through innovation?

Innovation intensity inside the firm is more difficult to quantify, but it can be assessed qualitatively. Ask your staff whether their key priority is to push fairly standard solutions to client needs – with solid productivity and quality built in – or whether they can and do take the time to 'think outside the square', at least some of the time. And in terms of internal process issues, to what extent do they, and you, look for new and better ways to run your business operations, marketing activities, and so on?

As to innovation outputs and their performance impact, you can list the innovations your firm has achieved and, alongside that, aggregate them to calculate a total financial return. In other words, seek to 'show me the money!' On this measure, 3M has as one corporate metric the percentage of total revenue that comes from new products: it insists that its divisions achieve over 10 per cent of sales revenue from new products every year. For service firms, this does not mean new contracts, or clients for whom the same or similar works and projects are being served up, but new lines of business, new techniques or new market segments. For example, you could ask, 'What percentage of this year's revenue comes from products and service lines that were not in place last year?' If it is one or two per cent, then yours is hardly a systematically innovative firm; if it is more than 10 per cent, then it very likely is.
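The output metric behind that question is simple arithmetic. As a rough sketch (the function name, product lines and figures below are invented for illustration, not drawn from 3M or any real firm):

```python
def new_revenue_share(revenue_by_line, new_lines):
    """Share of total revenue (per cent) from product or service lines
    introduced since last year - a hypothetical innovation-output metric."""
    total = sum(revenue_by_line.values())
    new = sum(v for line, v in revenue_by_line.items() if line in new_lines)
    return 100.0 * new / total

revenue = {'legacy product': 8_000_000,
           'new service':    1_500_000,
           'new product':      500_000}
share = new_revenue_share(revenue, {'new service', 'new product'})
print(f'{share:.0f}%')  # 20% - comfortably above a 10 per cent threshold
```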


Innovation: much more than invention

Innovation includes the creativity stage when new ideas are born, but it goes far beyond this. There is a great deal of hard work involved in bringing a new idea to market, no matter how brilliant the breakthrough might be. Innovation can be considered as the act of invention plus all that is involved in commercialisation. The full innovation cycle involves everything from having the initial idea to scaling it up to achieve commercial success, and turning the innovation into economic surplus – put simply, wealth creation.

With this in mind, if you have an idea for a new product or service, what tests should you apply to determine the feasibility of turning that concept into reality? The following list of seven key tests may help in your assessment of its viability:

1. The functionality test: does the new product, service, technology or process provide benefits in a manner that is clearly superior to existing services or methods? Can you articulate the 'value proposition' of what is new, and why it is better, in terms that customers or clients can appreciate?

2. The mass production test: can the concept be mass-produced in the volumes, and with the consistent quality to its specification, needed to satisfy the market? Many ideas have made it to prototype but, when it came time to scale up, proved not to be mass-producible, or prohibitive from a cost perspective.

3. The marketing test: have you determined or assessed demand, and do you have a channel to the client or consumer base? Many inventors end up with a garage or warehouse full of their products because they did not do their homework on the marketing test. The whole marketing mix must be planned as part of the commercialisation process, including design, branding, pricing, distribution, sales and other factors.


4. The intellectual property control test: you have to make decisions around your IP, and either buy, own or license-in the core technologies involved.

5. The leadership test: do the people involved in this initiative have the knowledge, skills, experience and courage to take it through to fruition?

6. The ROI (return on investment) test: this represents the financial bottom line of the innovation. Will it pay? The new concept must generate enough profit to make it worthwhile, accounting for risk and the time-discounted value of money.

7. Finally, more and more, new concepts must pass the corporate social responsibility test. This is also sometimes referred to as the sustainable development or sustainability test, and covers the environmental sustainability of the initiative as well as its social and community outcomes. Products, services and technologies must now at least not harm the environment and community, and where possible should produce positive bottom-line outcomes on these dimensions.

A new product, service, technology or business process must pass all of the above tests, and must clear the hurdle convincingly on most of them. These tests are useful for inventors, but also for investors considering the merits of underwriting a new invention, and for banks considering lending money to fund the development of a new service or product.
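The decision rule – pass every test, and clear the hurdle convincingly on most – can be encoded as a toy screen. The seven test names come from the article; the 0–10 scoring scale, thresholds and function are invented for this sketch:

```python
TESTS = ['functionality', 'mass production', 'marketing', 'IP control',
         'leadership', 'ROI', 'social responsibility']

def screen_concept(scores, pass_mark=5, strong_mark=8, strong_quota=4):
    """Hypothetical screen: every test (scored 0-10) must reach pass_mark,
    and 'most' tests - here at least 4 of 7 - must score strong_mark or more."""
    if any(scores[t] < pass_mark for t in TESTS):
        return False  # fails at least one test outright
    strong = sum(1 for t in TESTS if scores[t] >= strong_mark)
    return strong >= strong_quota

scores = {'functionality': 9, 'mass production': 8, 'marketing': 6,
          'IP control': 8, 'leadership': 9, 'ROI': 7,
          'social responsibility': 6}
print(screen_concept(scores))  # True: all seven pass, four cleared strongly
```

The design point is that the screen is conjunctive first (one outright failure kills the concept) and only then rewards strength across the board.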

Managing the intellectual property in the venture

It is important to plan and execute a strategy for preserving the core knowledge involved in a new idea as it is taken to the world. There are four generic strategies for attempting to control and maximise the value of an invention as it is taken through to commercialisation.

The first is to 'run' – simply keeping ahead of the competition by continually updating technology and designs, such that by the time competitors have learned and perhaps copied or adapted your invention, you have already moved it into a new and better phase. Apparel designers often use a 'run' approach: knowing that their work will be copied, they rely on continually coming up with new ideas for the market.

It may be possible in some circumstances to 'hide' the core knowledge in an innovation. Famous examples are KFC and Coca-Cola. They have kept their recipes secret for many years, and their distinctive edge has remained very profitable while they have kept their IP under control.

A well-known strategy for protecting knowledge is to 'block' others from using it by legal means such as patents, trademarks and copyright. These laws exist to provide incentives for firms and individuals to take risks and invest in innovative activities, and they can be effective in providing protection. However, they require significant investment, especially when taking out patents in a substantial number of countries.

The fourth IP strategy is to form a network of cooperating firms in order to use combinations of technologies – 'teaming up'. In some industries, such as biotechnology, a single firm may have only part of the technology needed to solve a problem and pass the functionality test, so there may be advantages in teaming up with other companies that can satisfy the other tests.

Encouraging innovation in organisations

For an organisation to be systematically innovative, there are some clear and important requirements. First is strong leadership: to implement strategies and policies, to provide the resources to make innovation happen, and to develop a rewards system that focuses staff on innovation. 3M does this very well. Another key ingredient is the organisation's valuing of new knowledge – its creation and its exploitation. This goes with a tolerance of risk, because the innovative firm must acknowledge that not everything it tries will work.



Indeed, firms that succeed with systematic innovation manage a portfolio of investments in new activities, professionally project-managed so that effective 'go/no go' decisions are taken at critical stages of development. Finally, culture and behaviour are important. Staff in innovative companies are well trained and expected to think outside the square, take calculated risks when appropriate, and show initiative and tenacity in problem-solving. They are customer and benefit focused.

What benefits arise from innovation?

Many great advances come from new services, products and technological processes. Society-wide benefits are clear and valued: most of us, for example, have benefited from penicillin derivatives at some time in our lives. But let us consider the benefits of innovation from the supply side – what are the benefits to those bringing the innovation to market?

First, there is obviously the revenue growth that comes from the innovation. Think of the monetary gains made by Google, Microsoft and Facebook, and by 3M's Post-it notes, Apple's iPod and iPhone, and the Toyota Hybrid Synergy Drive. A further benefit often comes with a reputation for innovation, in the form of a price premium. Apple's design capability and its unique products allow it to command such premiums over products that are technically as good, yet don't carry the innovation credentials. Likewise, some services companies, such as architects, engineers and research firms, have innovation reputations and so can charge a premium relative to competitors that trot out only standard solutions and designs.

Furthermore, being an innovative organisation helps firms to win the 'war for talent' in the labour market. Most people would rather work in a firm that is doing interesting and exciting things by investing in new knowledge than in one that is not. Finally, there is a good deal of evidence, gathered over several decades, showing that when innovation is done well there is a sound return on investment: such firms are often 20 to 50 per cent more profitable than the market average.


Professor Danny Samson is Professor of Management in the Department of Management and Marketing at the University of Melbourne. Comments to d.samson@unimelb.edu.au


neuromarketing – marketing insights from neuroimaging research

Increasing evidence suggests that much of our decision-making occurs via mechanisms that are inaccessible to our more rational and conscious thought processes

by phil harris

A condensed version of his Alumni Refresher Lecture delivered at the University of Melbourne on 27 August 2008.

Decision without reason?

Imagine for a moment that you are in your local liquor store, shopping for a bottle of wine. You have a delicious meal planned for the evening, and a good bottle of red will add the final touch. Imagine that your gaze falls on two bottles, side by side on the shelf: one French, one German. The bottles are a similar price, a similar style and, to all intents and purposes, of similar quality. You feel that either will meet your needs, but how do you choose? Imagine that as you ponder your choice, the sound of a very French piano-accordion melody gently floats through the store. Will it affect your choice? And if so, will you be aware that your choice has been swayed?

Recently, a group of researchers examined exactly this question, and found striking results. Wine shoppers were roughly three times more likely to purchase wine of the same nationality as the background music. Striking enough, but critically, only one of the forty-four shoppers interviewed suggested that the background music had influenced their purchase decision, and over three-quarters specifically said that it had not affected their choice of wine! How could they have been so out of touch with the influences on their behaviour?

Mounting evidence suggests that the wine shoppers are not alone. Indeed, there is increasing evidence from psychology and neuroscience research to suggest that very few of us are aware of some of the main determinants of our decisions. We fancy ourselves as rational decision-makers, able to weigh up the factors relevant to our decisions and arrive at reasoned choices. However, it appears that much of our decision-making is driven by thought processes that occur 'below the surface'. Indeed, increasing evidence suggests that much of our decision-making occurs via mechanisms that are inaccessible to our more rational and conscious thought processes.

This mounting evidence presents a major challenge for marketing researchers and the industries they supply with consumer insights. Consumers provide valuable insights to organisations about their attitudes and likely behaviour towards products, services and ideas. Communication industries, in particular, are heavily reliant on consumers' insights to inform the design of marketing communications, such as television advertisements. There is ample data to suggest that such research can provide valuable insights to organisations but, in the light of mounting evidence, we must seriously question the emphasis we place on consumers' explicit thoughts regarding their choices.



Neuroimaging research informs marketing models

Neuroscience-based research methods are increasingly being viewed as a means to provide these types of insight. The nascent field of 'neuromarketing' seeks to access the thought processes underlying decision-making by capturing consumer responses to marketing materials at the moment they are presented. By examining how different regions of the brain 'light up' on a second-by-second basis when exposed to stimuli, and linking this information with an understanding of how different brain areas contribute to thought, this novel approach is providing unique insights into the thought processes underlying choices in real-world situations.

As an example, consider a recent study examining neural responses associated with the Coca-Cola®/Pepsi® taste test. Coke and Pepsi are similar cola-flavoured carbonated drinks. The marginally sweeter Pepsi flavour is typically preferred by its core target market, aged between 16 and 24 years. However, this preference is strongest when consumers are unaware of the cola brand they are consuming. In branded tasting tests of Coca-Cola and Pepsi, preferences favour Coke. Since 1975, Pepsi has promoted the 'Pepsi Challenge' to emphasise the advantage of the Pepsi taste and so provide a rational basis for brand preference. However, neuroimaging research, which has recorded the brain's responses while these factors are at play, shows the limited role of rational decision-making factors in determining overall brand preference. It provides a fascinating insight into the mechanics of marketing.

In 2004, researchers replicated the Pepsi Challenge while consumers' brain activity was recorded. When consumers sipped cola drinks from unlabelled cups, cola preference for either brand activated a region of the brain linked with preferences based on sensory information. The greater the sensation, such as taste, the more this region of the brain lit up. This response mirrors the Pepsi Challenge findings when taste alone is used as the basis for choice.

However, what about responses in a more realistic choice situation? How does the addition of brand information affect choice? The 2004 research showed that preferences for Pepsi did not change when the Pepsi brand was identified, whereas Coke preferences increased. Branded Coke preferences stimulated activity in a completely different neural system, this time associated with long-term memory – memory embedded for life. It appears that as consumers experience the flavour of Coke, exposure to the Coke brand image evokes associations that have been stored in long-term memory circuits. Critically, the memory regions are strongly connected with brain regions that bias preference. As a result, associations held in memory influence other brain areas that respond to taste. Thus, by examining the neural responses associated with brand effects, we can see the mental mechanics of the strongest brand at work.

Emotional influences on decisions In another recent study, researchers offered consumers a choice between familiar beer and coffee products of similar quality and with similar attributes. In this case, the researchers were

70

Neuromarketing


interested in strategies consumers would use to make a choice between similar products when the comparison is not well supported by a rational comparison of product features. Interestingly, products that were not chosen ‘lit up’ the brain region associated with the use of reasoning strategies. In this case, the authors suggest that when consumers applied a rational approach to evaluate products with no tangible differential advantage, these products were not favoured. In contrast, products that were eventually chosen ‘lit up’ the brain region linked to the use of emotional experience to guide decision-making. This region draws on the emotional value of previous experiences to subtly bias decision-making in future encounters. In this case, the authors suggest that emotional associations with the brand, developed either through exposure to marketing communications or actual experience, bias the product evaluation process or result in a preference for that brand most strongly associated with positive emotional cues. Note that the evaluation process involved comparison of nearly identical products, yet in this case, the decision was based on intangible brand-related factors. In sum, this fascinating research suggests that two very different types of thought processes underlie consumer decision-making: a reasoning chain which conducts a rational analysis of purchase factors; and an emotional chain which biases the decision-making process as a result of previous emotional experience. Figure 1 Choice alternatives Reasoning chain

Reasoning strategies

Facts

Options

Emotional chain

Covert activation of biases related to previous emotional experience with the brand

Outcomes

Evaluation boost

Evaluation circle

Choice

A growing body of research indicates that subtle emotionally-driven thinking processes perform a fundamental role in everyday decision-making. Individuals automatically draw on cues that reflect the emotional value of stimuli derived from previous experience. These automatic processes are so important that without the impact of these subtle emotional biases, individuals demonstrate poorly adaptive behaviour in everyday life. Importantly, this ‘emotional chain’ of decisionmaking draws on mental processes that, to a large extent, occur covertly or unconsciously. Consumers may be simply unaware of the impact of these emotional biases on decisions. In this connection, the question arises whether research methods that probe rational and conscious processes are able to obtain reliable information on consumer behaviour. For example, common among many of the methods used to test the effectiveness of advertising stimuli, are survey or interview-based approaches which gauge the extent to which consumers consciously recall advertising content and ‘get the message’. In the light of increasing evidence demonstrating an impact on consumers’ reactions to advertising and brands at a subtler neural level, communications research methods must adapt to these new insights. Assessing these emotionally-driven covert consumer responses presents a formidable challenge to researchers. However, with increasing evidence of the impact of unconscious thought processes on behaviour and a growing dissatisfaction with conventional testing methods, the drive for these unique insights has prompted the rise of commercially-oriented marketing research services based on brain activity responses.

Neuroscience techniques as tools for business

What are these measures, and how useful are they for business? Typically, research firms attempting to capture these implicit thought processes assess brain responses to marketing stimuli via electroencephalography (EEG) – the electrical signal generated by neural regions firing simultaneously – or by using medical imaging technology such as magnetic resonance imaging (MRI).

Image adapted from Deppe, Schwindt, Kugel, Plassman and Kenning (2005)



To collect EEG data, participants are fitted with special headgear that records the electrical signal at the scalp while viewers are exposed to marketing stimuli. Using this approach, responses of individual brain regions to stimuli may be examined continuously, providing diagnostic information about the impact of specific elements of the marketing stimulus. By contrast, functional MRI (fMRI) research provides more detailed but less dynamic ‘snapshots’ of brain responses.

Opinions differ regarding the value of these measures in commercial contexts. Proponents of EEG research point to its unique ability to reflect dynamic changes in viewer responses to marketing stimuli: for the first time, advertisers and media organisations may capture audience responses linked to specific scenes or messages, with little interference from rational and explicit thought processes. Others, however, point to the weak link between brain electrical activity measures collected in market research settings and constructs that usefully predict consumer behaviour. Brian Knutson, professor of neuroscience and psychology at Stanford University, has likened the use of EEG for marketing research purposes to ‘standing outside a baseball stadium and listening to the crowd to figure out what happened.’

Clearly, the value of brain activity measures for commercial research depends on the extent to which they can capture responses in commercially viable settings while still providing reliable and useful marketing research constructs. To date, the peer-reviewed research literature provides limited support for the use of these techniques in commercial contexts, and at this early stage the jury is still out on the long-term prospects of market research techniques drawing on brain response metrics. Ethical issues associated with the use of brain responses to guide marketing programs also remain an important but relatively unexplored source of debate.
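As a purely illustrative aside (not drawn from the lecture), the kind of continuous signal analysis described above can be sketched in a few lines of Python. The snippet below fabricates a synthetic ‘EEG’ trace – a 10 Hz alpha rhythm whose amplitude doubles halfway through the recording, plus background noise – and tracks alpha-band (8–12 Hz) power in overlapping windows. All signal parameters here are invented for the sketch; real EEG pipelines involve multi-channel headgear recordings, artifact rejection, and proper spectral estimation.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def rolling_alpha_power(signal, fs, window_s=1.0, step_s=0.5):
    """Alpha-band (8-12 Hz) power in overlapping windows, giving a
    continuous time course rather than a single summary number."""
    win, step = int(window_s * fs), int(step_s * fs)
    return [band_power(signal[i:i + win], fs, 8, 12)
            for i in range(0, len(signal) - win + 1, step)]

# Synthetic 'EEG': a 10 Hz alpha rhythm that doubles in amplitude
# halfway through an 8-second recording, plus background noise.
np.random.seed(0)
fs = 256
t = np.arange(0, 8, 1.0 / fs)
amp = np.where(t < 4, 1.0, 2.0)
eeg = amp * np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(len(t))

powers = rolling_alpha_power(eeg, fs)
first = np.mean(powers[:len(powers) // 2])
second = np.mean(powers[len(powers) // 2:])
print(second > first)  # True: the amplitude change shows up in the time course
```

The point of the sketch is the shape of the analysis, not the numbers: a windowed metric turns a raw continuous signal into a time series that can, in principle, be aligned with specific scenes or messages in a marketing stimulus.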
Neuroimaging methods provide the scope for a better understanding of the complexity of human decision-making processes. Commercial applications aside, rigorous implementation of these techniques in academic research contexts will support unique interdisciplinary advances in the application of marketing and neuroscience theory. A better understanding of human responses to marketing stimuli will in future play a key role in more informed policy development regarding marketing communications and stimuli. Importantly, knowledge gained through these scientific methods will give consumers and marketers alike a better understanding of the role that ubiquitous commercial stimuli play in our everyday decision-making. Fancy a Coke, or Pepsi?

Dr Phil Harris is a Lecturer in the Department of Management and Marketing at the University of Melbourne.

