Ch 1: An Integrated Approach to Economic and Political Change Part 1

1.2 An integrated approach to economic and political change

Following the research project set out by authors such as Joseph Schumpeter (1911, 1942, 1954), Nathan Rosenberg (1976, 1982), Nelson and Winter (1982), Chris Freeman (1987), Freeman and Louca (2001) and Frank Geels (2002, 2004), the integrated approach presented in this section places particular emphasis on the role of technology in driving economic change. Such an approach is considered particularly relevant for the study of greenhouse gas mitigation because emissions are largely a function of the various technologies employed, for the most part, in the generation of energy. It is an integrated approach insofar as it endeavours to provide a synthesis between evolutionary theory and equilibrium-based theory, arguing that the neoclassical framework – with extensions – is most relevant for understanding relatively short-term incremental change in the economy, while evolutionary theories are more relevant for interrogating the underlying structure and long-term competitiveness of an economy.

Furthermore, it is argued that to answer applied questions of government policy or firm strategy, such analysis must be positioned within an institutional context. This should provide an appropriate historical and geographical setting to the analysis with due regard for research on the realities of human behaviour (e.g. Kahneman and Tversky, 1979; Granovetter, 1985). This approach aims to place the focus of enquiry on understanding change in the actual world, rather than some hypothetical world constructed for the sake of parsimony or abstract theoretical conjecture. This focus on the close observation of the empirical world and the resultant embracing of diversity across social and economic outcomes also firmly positions this thesis’s epistemic cornerstone within the fields of economic and political geography.

Many applied economists may already be following a course of analysis similar to the treatment of neoclassical theory within this integrated approach. What is most likely missing from their theoretical background, however, is a broader appreciation of the evolutionary processes which shape economic change. Such an appreciation may go a long way towards injecting a sense of longer-term value into the analytical framework of mainstream economics, and it helps explain why economic outcomes can differ so drastically across countries and firms. Following Schumpeter, it asks the analyst to look strategically beyond short-term ‘market returns’ earned within relatively ‘stable states’ towards potentially radically different technological pathways, characterised by increasing returns over time and by competition to install new technologies through processes of path creation and path dependence.

By placing the nature and direction of technological change at the centre of the analysis, this framework also seeks to explicitly draw out subtle, yet profound differences in the conception of ‘the economy’ implicit in each type of analysis. As somewhat dryly noted by Clarence Ayres, economics is, by definition, the study of the economy – but what is this ‘economy’? On one thing, at least, Ayres suggests all economists seem to agree (Ayres, 1962:xii):

It is a system of interrelated activities to do with the ordinary business of living. Alfred Marshall assumed that as a matter of course the agency by which all these activities takes place is through the market – the buying and selling mechanism, supply and demand. In an observation which has even more currency today perhaps, John Kenneth Galbraith (1958:6-18) referred to this as the “conventional wisdom” of our society – to question it is to be almost unintelligible.

In the traditional economic analysis of climate change this conventional wisdom has manifested itself through the application of standard welfare economics (Edgeworth, 1881; Dalton, 1920; Pigou, 1932; Hicks, 1939), which adopts a utilitarian logic to establish the costs and benefits of a given level of pollution and the optimal, or most cost-effective, way to achieve it through regulation of prices, quantities, or both. To do this, two broad alternative approaches are usually considered: the Pigouvian tax, where polluters pay for the externality they impose on others; and the Coasian approach, which allocates property rights for use of the environment and argues that these be made transferable (Coase, 1960). To balance each approach’s various advantages and disadvantages, hybrid schemes have also been suggested which incorporate price floors or ceilings into emissions trading design. For a selection of work in this area see: Baumol and Oates, 1988; Cropper and Oates, 1992; Pizer, 2002; Jacobi and Ellerman, 2004; Hepburn, 2006; Nordhaus, 2007; Philibert, 2009; and Fankhauser and Hepburn, 2010a, 2010b.
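
A stylised way of expressing the Pigouvian logic described above – a standard textbook formulation rather than one drawn from the works cited – is that the regulator chooses the emissions level at which marginal abatement cost equals marginal external damage, and sets the tax equal to marginal damage at that point:

\[
\min_{e} \; \big[\, C(e) + D(e) \,\big]
\;\;\Longrightarrow\;\;
-\,C'(e^{*}) = D'(e^{*}),
\qquad
t^{*} = D'(e^{*}),
\]

where C(e) denotes total abatement cost as a (decreasing) function of the emissions level e and D(e) the external damage from those emissions. A polluter facing the tax t* abates until its marginal abatement cost equals t*, reproducing the efficient level e*; a cap-and-trade scheme instead fixes the quantity directly and lets the permit price emerge, while a hybrid scheme bounds that price with a floor and a ceiling.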

The logic of the standard approach suggests that having ‘internalised the externality of pollution’ by setting a price on carbon, the problem of energy policy can be reduced to ensuring that energy prices reflect the full social costs of energy production and utilization. This is on the basis that the world is far too complex for politicians to “pick winners” with direct subsidies and, aside from providing some informational support or changing relative prices to correct for any externalities, it is best to leave people “free to choose” and let “the market decide” as much as possible.

Market-driven interventions, such as carbon price-based policy, fit comfortably within the rubric of neoclassical equilibrium analysis, where well-informed consumers purchase products and energy services in a way that maximises their own welfare and, by extension, best promotes the interests of society at large. In this vision of the world, investment decisions by supply companies are driven by consumer demand for what is seen as a relatively homogeneous good – energy. Furthermore, issues of path dependency are put to one side and firms are most often assumed to be able to bring different supply technologies online and offline smoothly to reflect changing preferences and prices – the theoretical sliding up and down the demand (marginal revenue) and supply (marginal cost) curves.

It was Hayek who in the 1930s vigorously championed this market model over the more planned regulatory approaches which he observed emerging in the Union of Soviet Socialist Republics and other nations. He argued that in the market system actors can be coordinated through a single signal – prices – whereas, in a planned system, to allocate resources efficiently across the economy regulators would need to know all the utility and production functions of all the actors – an impossibly vast amount of information.

Hayek argued that this uncertainty would create “government failure” where, through acting on incomplete information, or becoming beholden to the special interests of a powerful minority, the interests of society at large (efficiency) would be compromised. Taking inspiration from Arrow (1951), this notion was further supported by extensions of the standard welfare economics approach which directly applied this logic to the political process through public choice theory, providing yet another warning of the imperfections of centrally-planned regulations (Buchanan and Tullock, 1962; Olson, 1965).

Within the economic and policy mainstream, this helped support a culture which held that while market failures were bad, intervention to correct them was to be approached very cautiously. These concerns were further heightened by the recognition that the actual economy was so riddled with market failures and policy distortions that even if policy makers managed to put in place an efficient policy with respect to, say, a carbon tax, its interaction with other market failures or previous interventions in other parts of the economy could actually lead to a fall in aggregate welfare (see Corden, 1997). Together, these bodies of theory helped create a culture of aversion among economists to government intervention in the economy.

It would be wrong, however, to characterize the economic mainstream as seeing low carbon technological change as something to be completely determined through the broad interaction of price signals in the market for a homogeneous good – energy. For example, Nicholas Stern (2007:111) argues in his Review on the Economics of Climate Change:

Many commentators are sceptical about technology policy, saying it is wrong for bureaucrats to ‘pick winners’. There is something in this, but it is also naïve or dogmatic in its underlying assumption that markets work perfectly unless distorted by government. In this case, markets do not work well unless assisted by government.

The theory of induced innovation, for example, is a hypothesis that seeks to explain the nature and direction of the technological change underpinning invention in the traditional model through the impact of factor prices (Ahmad, 1966; Kamien et al., 1968; Binswanger, 1974). Within this body of work, technological externalities have been identified which provide a rationale for government intervention in a specific technology market (Acemoglu et al., 2009). For example, technological externalities in research and development are likely to exist because many technologies are non-rival (one person’s use of the technology need not exclude another’s) and non-excludable, meaning property rights with respect to the technology are difficult to enforce.

Because firms are assumed to be profit maximisers, they will only invest in research and development if they can capture sufficient economic rents from the effort; as this investment is likely to have broader applicability, which would benefit the rest of the economy, positive spillovers are also likely to exist. Isaac Newton’s “standing on the shoulders of giants” is an apt metaphor here. The conclusions from this body of research for energy, the environment and innovation are usefully summarised by Grubb and Ulph (2002:104):

…while environmental policies [such as carbon pricing] may induce innovation that will lead to cleaner technologies, the theoretical and empirical evidence we have does not give us a great deal of confidence that environmental policies alone will be sufficient to bring about major environmental innovation. It seems, that to have a significant impact, it will be necessary to pursue both environmental and technology policies. Given that we are dealing with two distinct areas of externality [the GHG externality and the technology externality], this conclusion is hardly surprising.

The fundamental theorems of welfare economics state that, under certain assumptions, an equilibrium reached in a perfectly competitive market by trading between buyers and sellers at market prices will be economically efficient; and that, given a suitable redistribution of the initial allocation of resources, any efficient allocation can be supported as such an equilibrium (Arrow and Debreu, 1954). This logic means that government interventions are framed in terms of trying to recreate a theoretical model of perfect competition – where the goal is achieving the single equilibrium that allocates resources efficiently across the economy (Foxon, 2011a:136).
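
In compact form – a standard textbook statement, requiring the usual convexity and local non-satiation assumptions, and included here only as a reference point rather than as part of the cited works – the two theorems read:

\[
\text{(I)}\quad (p^{*}, x^{*}) \ \text{a competitive equilibrium} \;\Longrightarrow\; x^{*} \ \text{is Pareto efficient};
\]
\[
\text{(II)}\quad \hat{x} \ \text{Pareto efficient} \;\Longrightarrow\; \text{there exist lump-sum transfers and prices } \hat{p} \ \text{such that } (\hat{p}, \hat{x}) \ \text{is a competitive equilibrium}.
\]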

At this point, there is no further impetus for change emerging from within the model. If change does occur, it is through exogenous factors determined outside the model, such as a change in technology or in consumer preferences – and then firms adapt to the new set of circumstances and reach a new equilibrium. While it is acknowledged that these exogenous forces can be profound, and things are almost certainly never ceteris paribus because of the great diversity of social and economic systems and outcomes across time and in different locations, the model’s logic requires these assumptions to be made.

One of the great advantages of the traditional neoclassical economic analysis is its parsimony. However, as is often the way, in its greatest strength lies its greatest weakness. Because the problem of climate change and GHG reduction is modelled in a demand and supply framework, the tools at the analyst’s disposal are those within the model’s analytical logic. As such, focus is given to the properties of price and quantity and the elasticities of demand and supply, while marginal change is explored through the application of differential calculus. In this framework, however, non-price regulation is analytically problematic, so in many cases it is simply ignored, or dismissed as an inferior tool on the basis of the informational requirements it faces in achieving efficiency.
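
By way of illustration of this toolkit – a generic textbook formulation rather than one specific to the works discussed here – the own-price elasticity of demand and the market-clearing condition are typically written as:

\[
\varepsilon_{D} \;=\; \frac{\partial Q_{D}}{\partial P}\cdot\frac{P}{Q_{D}},
\qquad
Q_{D}(P^{*}) \;=\; Q_{S}(P^{*}),
\]

with policy analysis then proceeding by examining how small, marginal changes in prices or taxes shift the equilibrium price and quantity.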

What is missing in the traditional model is a set of tools which enable a more comprehensive understanding of the forces that influence the mechanics of economic change. For many short-term or sector-specific circumstances, the assumptions of the standard approach may not be problematic, but for analyzing long-term, non-marginal phenomena, such as those which underpin a low carbon technological transition, they represent a significant theoretical weakness. This is especially the case for long-term planning decisions, particularly in industries characterized by increasing returns and by investments with large sunk costs, which can lead to path dependence, such as energy supply and distribution. It is here that this thesis argues an evolutionary perspective on climate policy is particularly important.

The difference between an evolutionary perspective and the traditional approach can be most easily understood through an interrogation of the core metaphors from which each paradigm draws its analytical logic: evolution versus equilibrium. The former emphasizes change, movement, progression, unpredictability, crisis and metamorphosis; whereas the latter emphasizes stability or stasis, a state of rest or balance due to the equal action of opposing forces. As one might expect, the implications of their epistemological differences for economics and public policy are profound.

Joseph Schumpeter challenged the conventional wisdom set out by Alfred Marshall by emphasising the role of technology as the engine of economic development and by focusing on how fundamental scientific discoveries and their application were able to destroy old markets and create new ones through product and process innovation. Rather than seeing firms competing on the basis of prices, he saw firms competing on the basis of technology – whoever had the technological edge would win the day (Schumpeter, 1911:64):

Add as many mail coaches as you please, you will never get a railway by doing so.

A key text in this approach is Nelson and Winter’s (1982) An Evolutionary Theory of Economic Change. This builds on the work of Joseph Schumpeter (1942) and Herbert Simon (1957, 1991) to place technology at the centre of the economic model, where economic agents display ‘bounded rationality’ – that is, they are limited in their ability to access and process information, and hence look for satisfactory, ‘satisficing’, solutions. Economic change in this model can come in a variety of forms, at “critical junctures” or in “gales of creative destruction” which can be far from marginal – however, the key point is that change is incessant (Schumpeter, 1942:82-3):

Capitalism, then, is by nature a form or method of economic change and not only never is, but never can be stationary. And this evolutionary character of the capitalist process is not merely due to the fact that economic life goes on in a social and natural environment which changes and by its change alters the data of economic action; this fact is important and these changes (wars, revolutions, and so on) often condition industrial change, but they are not its prime movers. Nor is this evolutionary character due to a quasi-automatic increase in population and capital or to the vagaries of monetary systems of which exactly the same thing holds true. The fundamental impulse that sets and keeps the capitalist engine in motion comes from the new consumers’ goods, the new methods of production or transportation, the new markets, the new forms of industrial organisation that capitalist enterprise creates.

From this Nelson and Winter (1982) introduced the fundamental notion of the ‘routine’ which could be any technical, procedural, organizational or strategic process used by a firm as part of its normal business activities; for example, its R&D strategy or a particular production profile. They argued that firms compete by searching for better techniques or processes that satisfy their chosen criteria – whether that be for profit, market share, or some other objective – and which evolve out of a historically embedded adaptive process. From this, stable routines may emerge (technological pathways), formed by a dominant set of actors. However, this outcome was also observed to be highly contingent on the starting point of the system, so outcomes may be quite heterogeneous according to initial conditions.

Unlike most of his neoclassical colleagues, Schumpeter argued that the decision to invest in new technologies should be considered endogenous to the economic process since it originates from firms’ attempts to claim a monopolistic position by technological advancement. However, over time this monopolistic power would be limited by the ability of competitors to copy the innovating firm’s product and process innovations. This process is neatly described in Dosi et al. (1988):

When a new engineering or economic possibility comes along usually there are several ways to carry it through. In the 1890s the motor carriage could be powered by steam, or by gasoline, or by electric batteries…. They may ‘compete’ unconsciously and passively like species compete biologically, if adoptions of one technology displace or preclude adoption of its rivals; or they may compete consciously and strategically, if they are products that can be priced and manipulated…. What makes competition between technologies interesting is that the more they are adopted, the more useful and attractive they become. Competition between technologies becomes competition between bandwagons.

The outcome of such competition is that, if a long-term perspective on economic change is adopted, increasing returns mean that rather than one efficient equilibrium, multiple outcomes become possible. Such multiple possibilities can then find expression contingent on each region’s or country’s institutional context – which may favour or penalise one technology over another. Probably one of the most ubiquitous examples of this phenomenon, familiar to every international traveller, is the differing electrical sockets used across countries. The same principle applies to numerous other technologies: car engines (ethanol, electric or petroleum based); energy generation, for instance the balance between coal, gas, nuclear, hydroelectricity and other renewables; within energy generation, the type of technology used, such as the numerous differing design templates for nuclear power plants, hydroelectric stations, wind turbines and solar panels, each with their own national government backers; right down to the type of light bulb used in homes and offices.
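
The mechanism can be illustrated with a minimal simulation sketch in the spirit of Arthur’s adoption model – an illustrative toy written for this discussion rather than a model taken from the works cited, with all parameter values being arbitrary assumptions. Two otherwise identical technologies compete for sequential adopters whose payoffs rise with each technology’s installed base; repeated runs of the same market lock in to different winners purely because of early chance events.

# A minimal, illustrative sketch (not drawn from the cited works) of an
# Arthur-style increasing-returns adoption process. Parameter values are
# arbitrary assumptions chosen only to make the lock-in dynamic visible.
import random

def run_market(n_adopters=2000, returns=0.01, noise=0.5, seed=None):
    """Simulate one market of sequential adopters; return final adoption shares."""
    rng = random.Random(seed)
    installed = {"A": 0, "B": 0}
    for _ in range(n_adopters):
        # Each adopter's payoff = idiosyncratic preference shock
        # + increasing returns from the technology's installed base.
        payoff_a = rng.uniform(-noise, noise) + returns * installed["A"]
        payoff_b = rng.uniform(-noise, noise) + returns * installed["B"]
        installed["A" if payoff_a >= payoff_b else "B"] += 1
    return installed["A"] / n_adopters, installed["B"] / n_adopters

if __name__ == "__main__":
    # Identical technologies and identical rules, yet each run tends to
    # lock in around one technology or the other depending on early adoptions.
    for run in range(5):
        share_a, share_b = run_market(seed=run)
        print(f"run {run}: share A = {share_a:.2f}, share B = {share_b:.2f}")

Which technology wins varies from run to run; the point, as the path dependence literature emphasises, is that relative efficiency alone does not determine the outcome once increasing returns are present.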

Firms, often in partnership with governments, struggle to have their own products and national champions achieve ‘lock-in’ in as many markets as possible. Once lock-in is achieved, industrial protagonists and their government sponsors have an incentive to follow policies which ‘let the market decide’, content in the knowledge that, since increasing returns have set in around their own businesses, the competition is not in fact played on a level field: the odds are firmly stacked in favour of those with historical support. Indeed, this may help explain many less developed nations’ scepticism towards the ‘free-trade’ reforms that underpinned the Washington Consensus.

As Ayres (1962:xvii) points out:

this [technological] conception of the economy is not a denial of the market aspect any more than traditional price theory is a denial of machine technology. However, the question is, which is the dog and which is the tail?

Here, the notion of path dependence, or hysteresis, has been particularly influential: how decisions taken today or in the past can influence future decisions, closing off certain options and opening up others (David, 1985, 1993, 2005; Arthur, 1988, 1989, 1994). This body of work emphasizes increasing returns effects due to forces such as technical interrelatedness; economies of scale; dynamic learning and coordination effects; and self-reinforcing expectations. It suggests that these forces, combined with the quasi-irreversibility of investments and large fixed set-up costs, can lead to lock-in of – in our case – high carbon industrial infrastructure and consumption patterns (Unruh, 2000).

While path dependence theory has had a profound effect across the social sciences, it has also been criticized, in its canonical form, as placing too much emphasis on the concept of lock-in to a stable equilibrium state and neglecting de-locking phenomena (Martin, 2010). Thus the model can describe the emergence of heterogeneous economic structures across different geographies, but once an equilibrium state is ‘selected’, little insight is provided into how the process of change occurs between stable states.

The idea that the economy lurches from one stable state to another is similar to Schumpeter’s “gales of creative destruction” and has been explored within the field of complex systems under the umbrella term of ‘punctuated equilibrium’ (Beinhocker, 2007). It is within such stable states or technological pathways (involving replication of a well-established technology) that the neoclassical model is suggested to provide a useful framework for analysis; for changes between states (the novel application of a relatively unestablished technology), an evolutionary perspective is more relevant.

This picture of an integrated model of economic and political change is described in graphical form by Figures 1.1, 1.2 and 1.3. It is significantly informed by Strategic Niche Management (SNM) and its related Multi-Level Perspective (MLP), which arose out of the sociology of technology and were originally developed to understand transitions and regime shifts (Schot et al., 1994; Kemp, 1994; Rip and Kemp, 1998; Kemp et al., 1998; Levinthal, 1998; Kemp et al., 2001; Geels, 2002, 2005; Schot and Geels, 2008). As Geels (2002:1259) points out, the Multi-Level Perspective is not meant as an ontological description of reality, but as a set of analytical and heuristic concepts to understand the mechanics of socio-technical change. The same qualifications apply to the framework presented below.

Within this framework Geels and Kemp (2006: 234-236) highlight three different types of change according to their scope and underlying mechanisms: reproduction, transformation and transition.

Firstly, reproduction involves change only at the socio-technical regime level, not at the landscape or niche levels, and refers to the reproduction and refinement of existing technological routines. As pointed out by Rosenberg (1976, 1982), such innovations can still lead to considerable productivity improvements over time, but there is little re-organisation of dominant actors, key technologies or the knowledge base (High Carbon Pathway 1 in the figures below).

Secondly, transformation is defined as change at the regime and landscape levels, but with little interaction from new technological niches. An example could be where a significant change in politics shifts institutional incentives towards a new, ‘cleaner’ technological trajectory, such as through the imposition of a carbon price, but the new technologies developing in niches outside the mature market cannot compete to replace the existing systems (High Carbon Pathway 2 in Figure 1.1).

Finally, a transition is said to occur when niches, the dominant socio-technical regime and the economic landscape interact to bring about a major qualitative shift in the economic system, most closely resembling Schumpeter’s gales of creative destruction. In transitions, incumbents are likely to attempt to suppress the emergence of new niches which may be better aligned with the social and economic landscape but are yet to achieve market maturity. This strategic behaviour may take the form of lobbying against the social and political reforms aimed at supporting the new niche – as forcefully described in the Australian coal industry context by Hamilton (2007). While formidable path dependent and behavioural barriers are likely to confront transitions, if successful they can bring about a significant shift in the knowledge base and introduce new technological objects, infrastructure and regulatory frameworks, along with shifts in consumer preferences.

This model is usefully illuminated by Unruh’s (2002) ‘Escaping carbon lock-in’, which helps to explain how technologies can emerge from niches to enter the mainstream. Innovation scholars have tended to model technological evolution as long periods of incremental innovation punctuated by episodes of rapid change (Abernathy and Utterback, 1978; Dosi, 1982; Sahal, 1985; Tushman and Anderson, 1986; Nelson and Winter, 1993; Tushman and Murmann, 1998). In these models major technological breakthroughs are generally seen to be driven by some exogenous force so compelling that users switch to the new technology (from fixed lines to mobile phones, for example).

While neoclassical theory would posit that even incremental improvements should lead to the adoption of a new technology over time, empirical studies have shown that it usually takes substantial improvements to induce a transition. One of the most frequently cited examples is David (1985), who found that time savings of 20-30% were insufficient to cause a switch from the QWERTY to the DVORAK keyboard layout. Other authors show empirically that improvements of a much larger magnitude are required to initiate a transition to a new technology (Foster, 1986; Drucker, 1993; Grove, 1996).
