7 Valuing the far-off future: discounting and its alternatives
Cameron Hepburn
1. Introduction
The challenges of climate change, biodiversity protection, declining fish stocks and nuclear waste management mean that policy makers now have to take important decisions with impacts decades, if not centuries, into the future. The way we value the future is crucial in determining what action to take in response to such challenges.
Whenever economists think about intertemporal decisions, whether concerning trade-offs between today and tomorrow or between the present generation and our distant descendants, we reach almost instinctively for the discount rate. This instinct is not without good reason – the practice of discounting, embedded in social cost–benefit analysis, has served us extremely well in formulating policy over the short to medium term. For longer term decisions, however, results from this trusty tool can appear increasingly contrary to intergenerational equity and sustainable development. In response, some have advocated jettisoning the tool altogether and turning to alternative methods of valuing the future. Others take the view that these long term challenges bring trade-offs between intergenerational efficiency and equity into sharp focus, and it is no surprise that social cost–benefit analysis, which generally ignores distributional considerations, supports efficient but unsustainable projects. They conclude that the tool is functioning properly, but must be employed in a framework that guarantees intergenerational equity. A third hypothesis is that although the tool works correctly for short term decisions, it needs repairing and refinement for long term decisions. In particular, if future economic conditions are assumed to be uncertain – a reasonable assumption when looking decades or centuries into the future – using a constant discount rate is approximately correct over shorter time periods (up to about 30 years), but is increasingly incorrect thereafter. The more accurate procedure is to employ a declining discount rate over time.
This chapter reviews social discounting (section 2), addresses the arguments for and against a zero discount rate (section 3), outlines the research on declining social discount rates (section 4), and considers some alternatives to discounting in social decision-making (section 5).
2. Exponential discounting and its implications
Cost–benefit analysis, efficiency and equity
Economics has a long tradition of separating efficiency from equity, and social cost–benefit analysis is no exception: the Kaldor–Hicks criterion is relied upon to justify projects that are efficient. Distributional effects are ignored, which is argued to be legitimate when the decision-maker also controls the tax system and can redistribute income to achieve equity. In practice, of course, the distributional effects of some projects are important, and cost–benefit analysis should be employed as a guide for decision-making rather than a substitute for judgement (Lind, 1982). It can be a very useful guide because, when done properly, it focuses our attention on the valuation of the most important impacts of a decision.
For intergenerational investments, distributional effects are often especially important because there is no intergenerational tax system available to redistribute wealth (Lind, 1995; 1999). Although economic instruments can create wealth transfers between generations (such as certain changes to tax law and fiscal policy), there is no guarantee that the transfer will reach the intended recipient when there are many intervening generations. Drèze and Stern (1990) note that ‘hypothetical transfers of the Hicks–Kaldor variety . . . are not relevant when such transfers will not take place’. In such circumstances, explicit consideration of intergenerational equity appears to be necessary.
Estimating the social discount rate
In social cost–benefit analysis, the social discount function, D(t), is used to convert flows of future costs and benefits into their present equivalents. If the net present value of the investment exceeds zero, the project is efficient. The social discount rate, s(t), measures the annual rate of decline in the discount function, D(t). In continuous time, the two are connected by the equation:

s(t) = -\frac{\dot{D}(t)}{D(t)} \qquad (7.1)
A constant social discount rate, s(t) = s, implies that the discount function declines exponentially, D(t) = e^{-st}.
As practitioners know, the value of the social discount rate is often critical in determining whether projects pass social cost–benefit analysis. As a result, spirited debates have erupted in the past over its correct conceptual foundation. Happily, the debate was largely resolved at a 1977 conference, where Lind (1982, p. 89) reported that the recommended approach is to ‘equate the social rate of discount with the social rate of time preference as determined by consumption rates of interest and estimated on the basis of the returns on market instruments that are available to investors’. Under this approach, the social discount rate, for a given utility function, can be expressed by the well-known accounting relation:

s = \delta + \eta g \qquad (7.2)

where δ is the utility discount rate (or the rate of pure time preference), η is the elasticity of marginal utility and g is the rate of growth of consumption per capita. Even if the utility discount rate, δ, is zero, the social discount rate is positive when consumption growth, g, is positive and η > 0.
Equation (7.2) shows that in general, the appropriate social discount rate is not constant over time, but is a function of the expected future consumption path.
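Equation (7.2) is simple enough to evaluate directly. The following minimal sketch in Python uses purely illustrative parameter values (the figures are assumptions for the example, not numbers from this chapter):

```python
# Ramsey-style accounting relation (equation 7.2): s = delta + eta * g
# Parameter values below are illustrative assumptions only.

def social_discount_rate(delta, eta, g):
    """Social discount rate from the utility discount rate (delta),
    the elasticity of marginal utility (eta) and the growth rate of
    consumption per capita (g)."""
    return delta + eta * g

# Even with a zero utility discount rate, positive growth makes s positive:
print(social_discount_rate(delta=0.0, eta=1.0, g=0.02))    # 0.02  (2 per cent)
print(social_discount_rate(delta=0.015, eta=1.5, g=0.02))  # 0.045 (4.5 per cent)
```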
The discounting dilemma
In recent years, debates about the correct foundation for the social discount rate have been replaced by controversy over discounting and intergenerational equity. To see that the evaluation of long term investments is extremely sensitive to the discount rate, observe that the present value of £100 in 100 years’ time is £37 at a 1 per cent discount rate, £5.20 at 3 per cent, £2 at 4 per cent and only 12p at 7 per cent. Because small changes in the discount rate have large impacts on long-term policy outcomes, arguments about the ‘correct’ number have intensified. For instance, the marginal damage from emissions of carbon dioxide is estimated by the FUND model (Tol, 2005) to be $58/tC at a 0 per cent utility discount rate, $11/tC at a 1 per cent utility discount rate, and −$2.3/tC (i.e. net benefits) at a 3 per cent utility discount rate. Indeed, exponential discounting at moderate discount rates implies that costs and benefits in the far future are effectively irrelevant. While this might be entirely appropriate for individuals (who will no longer be alive), many people would argue that this is an unsatisfactory basis for public policy.
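The present-value figures quoted above are easy to verify; a short Python check (discrete annual compounding assumed):

```python
# Present value of £100 received in 100 years at various discount rates,
# reproducing the figures in the text (discrete annual compounding).
for r in (0.01, 0.03, 0.04, 0.07):
    pv = 100 / (1 + r) ** 100
    print(f"{r:.0%}: £{pv:.2f}")
# Output: 1%: £36.97, 3%: £5.20, 4%: £1.98, 7%: £0.12
```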
3. Zero discounting
Given these difficulties, some people find it tempting to suggest that we should simply not discount the cash flows in social cost–benefit analysis. But not discounting amounts to using a social discount rate of s = 0, which is extremely dubious given our experience to date with positive consumption growth: g > 0 in equation (7.2). In contrast, a credible argument for employing a zero utility discount rate (δ = 0) can be advanced, based upon the ethical position that the weight placed upon a person’s utility should not be reduced simply because they live in the future. Indeed, this ethical position is adopted by Stern et al. (2006) and supported by a string of eminent scholars, including Ramsey (1928), Pigou (1932), Harrod (1948) and Solow (1974); even Koopmans (1965) expressed an ‘ethical preference for neutrality as between the welfare of different generations’. Broome (1992) provides a coherent argument for zero discounting based on the presumption of impartiality found both in the utilitarian tradition (Sidgwick, 1907; Harsanyi, 1977) and also in Rawls (1971), who concluded that ‘there is no reason for the parties [in the original position] to give any weight to mere position in time.’
However, not all philosophers and economists accept the presumption of impartiality. Beckerman and Hepburn (2007) stress that reasonable minds may differ; Arrow (1999), for instance, prefers the notion of agent-relative ethics advanced by Scheffler (1982). Even if one does accept a presumption of impartiality and zero discounting, there are four counter-arguments that might overturn this presumption: the ‘no optimum’ argument, the ‘excessive sacrifice’ argument, the ‘risk of extinction’ argument, and the ‘political acceptability’ argument. We examine all four.
First, Koopmans (1960, 1965) demonstrated that in an infinite horizon model, there is no optimum if a zero rate of time preference is employed.
Consider a unit of investment today that yields a tiny but perpetual stream of consumption. Each unit investment causes a finite loss of utility today, but generates a small gain in utility to an infinite number of generations. It follows that no matter how low current consumption, further reductions in consumption are justified by the infinite benefit provided to future generations. The logical implication of zero discounting is the impoverishment of the current generation. Furthermore, the same logic applies to every generation, so that each successive generation would find itself being impoverished in order to further the well-being of the next. Broome (1992), however, counters that humanity will not exist forever. Furthermore, Asheim et al. (2001) demonstrate that zero utility discounting (or ‘equity’, as they term it) does not rule out the existence of an optimum under certain reasonable technologies.
Second, even if we suppose a finite but large number of future generations, a zero discount rate is argued to require excessive sacrifice by the current generation, in the form of extremely high savings rates. Arrow (1999) concludes that the ethical requirement to treat all generations alike imposes morally unacceptable and excessively high savings rates on each generation. But Parfit (1984) has argued that the excessive sacrifice problem is not a reason to reject zero utility discounting. Rather, it should be resolved by employing a utility function with a minimum level of well-being below which no generation should fall. Asheim and Buchholz (2003) point out that the ‘excessive sacrifice’ argument can be circumvented, under plausible technologies, by a utility function which is more concave.
Third, each generation has a non-zero probability of extinction. Suppose that the risk of extinction follows a Poisson process such that the conditional probability of extinction at any given time is constant. Yaari (1965) demonstrated that this is equivalent to a model with an infinite time horizon where utility is discounted at the (constant) Poisson rate. As such, accounting for the risk of extinction is mathematically identical to positive utility discounting.
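The equivalence is immediate once the survival probability is written down. If extinction arrives with a constant hazard λ, survival to time t has probability e^{−λt}, so expected welfare under a zero utility discount rate is

\mathbb{E}[W] = \int_0^\infty e^{-\lambda t}\, u(c_t)\, dt,

which is formally identical to discounting the utility stream at the constant rate λ.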
While admitting the strength of this argument, Broome (1992) asserts that extinction risk and the pure rate of time preference ‘should be accounted for separately’. But extinction risk is clearly not project-specific, so it would be accounted for in the same way across all projects (except projects aimed at reducing an extinction risk). Irrespective of how this is done, the mathematical effect is the same – the well-being of future generations is effectively discounted. Hence Dasgupta and Heal (1979) argue that ‘one might find it ethically feasible to discount future utilities at positive rates, not because one is myopic, but because there is a positive chance that future generations will not exist’. Given that the risk of human extinction is probably (and hopefully) quite low, the appropriate utility discount rate would be very small.
Finally, Harvey (1994) rejects zero utility discounting on the basis that it is so obviously incompatible with the time preference of most people that its use in public policy would be illegitimate. While the significance of revealed preferences is debatable (Beckerman and Hepburn, 2007), Harvey is surely correct when he states that the notion that events in ten thousand years are as important as those occurring now simply does not pass ‘the laugh test’.
In summary, the ‘no optimum’ argument and the ‘excessive sacrifice’ argument for positive time preference are refutable. In contrast, the ‘risk of extinction’ argument provides a sound conceptual basis for a positive utility discount rate. This might be backed up at a practical level by the ‘political acceptability’ argument, or by the more fundamental view that impartiality is not a compelling ethical standpoint. Overall, the arguments for a small positive utility discount rate appear persuasive. Zero discounting is not intellectually compelling.
4. Declining discount rates
Over recent years, several persuasive theoretical reasons have been advanced to justify a social discount rate that declines as time passes. Declining discount rates are appealing to people concerned about intergenerational equity, but perhaps more importantly, they are likely to be necessary for achieving intergenerational efficiency. Groom et al. (2005) provide a detailed review of the case for declining discount rates. This section provides an overview of the main arguments.
Evidence on individual time preference
Evidence from experiments over the last couple of decades suggests that humans use a declining discount rate, in the form of a ‘hyperbolic discounting’ function, in making intertemporal choices. In these experiments, people typically choose between different rewards (for example, money, durable goods, sweets or relief from noise) with different delays, so that an implicit discount function can be constructed. The resulting discount functions suggest that humans employ a higher discount rate for consumption trade-offs in the present than for trade-offs in the future.
While other interpretations, such as similarity relations (Rubinstein, 2003) and sub-additive discounting (Read, 2001), are possible, the evidence for hyperbolic discounting is relatively strong.
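To see the difference in implied rates, one can compare an exponential discount function with a simple one-parameter hyperbolic form. The Mazur-style function D(t) = 1/(1 + kt) used below is just one common specification from the experimental literature, chosen here for illustration; the parameter values are assumptions:

```python
import math

# Exponential versus (Mazur-style) hyperbolic discount functions.
# Functional form and parameter values are illustrative assumptions.
def exponential(t, s=0.05):
    return math.exp(-s * t)

def hyperbolic(t, k=0.05):
    return 1.0 / (1.0 + k * t)

for t in (1, 10, 50, 100):
    # Average discount rate implied over horizon t: -ln D(t) / t
    r_exp = -math.log(exponential(t)) / t
    r_hyp = -math.log(hyperbolic(t)) / t
    print(f"t={t:>3}: exponential {r_exp:.4f}, hyperbolic {r_hyp:.4f}")
# The exponential rate is constant at 5 per cent; the hyperbolic rate
# declines with the horizon, consistent with the experimental evidence.
```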
Pearce et al. (2003) present the argument that if people’s preferences count, and these behavioural results reveal underlying preferences, then declining discount rates ought to be integrated into social policy formulation. Pearce et al. recognize, however, that the assumptions in this chain of reasoning might be disputed. First, as hyperbolic discounting provides an explanation for procrastination, drug addiction, undersaving, and organizational failure, the argument that behaviour reflects preferences is weakened. Second, Pearce et al. and Beckerman and Hepburn (2007) stress that Hume would resist concluding that the government should discount the future hyperbolically because individual citizens do. The recent literature on ‘optimal paternalism’ suggests, amongst other things, that governments may be justified in intervening not only to correct externalities, but also to correct ‘internalities’ – behaviour that is damaging to the actor. Whether or not one supports a paternalistic role for government, one might question the wisdom of adopting a schedule of discount rates that explains procrastination, addiction and potentially the unforeseen collapses in renewable resource stocks (Hepburn, 2003).
Pessimism about the future
Equation (7.2) makes it clear that the consumption rate of interest – and thus also the social rate of time preference in a representative agent economy – is a function of consumption growth. If consumption growth, g, will fall in the future, and the utility discount rate, δ, and the elasticity of marginal utility, η, are constant, it follows from equation (7.2) that the social discount rate also declines through time. Furthermore, if decreases in the level of consumption are expected – so that consumption growth is negative – the appropriate social rate of time preference could be negative. Declines in the level of consumption are impossible in an optimal growth model in an idealized economy with productive capital. For the social discount rate to be negative, either capital must be unproductive, or a distortion, such as an environmental externality, must have driven a wedge between the market return to capital and the consumption rate of interest (Weitzman, 1994).
Uncertainty
It is an understatement to say that we can have little confidence in economic forecasts several decades into the future. In the face of such uncertainty, the most appropriate response is to incorporate it into our economic models.
Suppose that the future comprises two equally likely states with social discount rate either 2 per cent or 6 per cent. Discount factors corresponding to these two rates are shown in Table 7.1. The average of those discount factors is called the ‘certainty-equivalent discount factor’ and working backwards from this we can find the ‘certainty-equivalent discount rate’, which starts at 4 per cent and declines asymptotically to 2 per cent as time passes. In this uncertain world, a project is efficient if it passes social cost–benefit analysis using the certainty-equivalent discount rate, which declines through time.
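The numbers behind this example are easy to reproduce. A minimal sketch, assuming continuous compounding:

```python
import math

# Certainty-equivalent discount rate when the true rate is 2% or 6%
# with equal probability (continuous compounding assumed).
for t in (1, 10, 50, 100, 200, 400):
    factor = 0.5 * math.exp(-0.02 * t) + 0.5 * math.exp(-0.06 * t)
    rate = -math.log(factor) / t    # certainty-equivalent rate at horizon t
    print(f"t={t:>3}: {rate:.4f}")
# The rate starts near 4 per cent and falls towards 2 per cent (the lowest
# of the possible rates) as the horizon lengthens.
```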
The two key assumptions in this example are that the discount rate is uncertain and persistent, so that the expected discount rate in one period is correlated with the discount rate the period before. If these two assumptions hold, intergenerational efficiency requires a declining social discount rate (Weitzman, 1998, 2001).
The particular shape of the decline is determined by the specification of uncertainty in the economy. Newell and Pizer (2003) use data on past US interest rates to estimate a reduced-form time series process which is then employed to forecast future rates. The level of uncertainty and persistence in their forecasts is high enough to generate a relatively rapid decline in the certainty-equivalent discount rate, with significant policy implications. While econometric tests reported in Groom et al. (2006) suggest that Newell and Pizer (2003) should have employed a state-space or regime-shifting model instead, their key conclusion remains intact – the certainty-equivalent discount rate declines at a rate that is significant for the appraisal of long term projects.
Gollier (2001, 2002a, 2002b) provides an even more solidly grounded justification for declining discount rates by specifying an underlying utility function and analysing an optimal growth model. He demonstrates that a similar result can hold for certain types of utility functions. Under uncertainty, the social discount rate in equation (7.2) needs to be modified to account for an additional prudence effect:

s = \delta + \eta g - \tfrac{1}{2}\,\eta P \sigma^2 \qquad (7.3)

where σ² is the variance of the growth rate of consumption and P is the measure of relative prudence introduced by Kimball (1990).
This prudence effect leads to ‘precautionary saving’, reducing the discount rate. Moreover, if there is no risk of recession and people have decreasing relative risk aversion, the optimal social discount rate is declining over time. These two sets of results show that employing a declining social discount rate is necessary for intergenerational efficiency (Weitzman, 1998) and also for intergenerational optimality under relatively plausible utility functions (Gollier, 2002a, b). The theory in this section provides a compelling reason for employing declining discount rates in social cost–benefit analysis.
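To give a feel for magnitudes, consider equation (7.3) with purely illustrative values; for CRRA utility, relative prudence is P = η + 1:

```python
# Extended Ramsey rule with a prudence term (equation 7.3):
#   s = delta + eta * g - 0.5 * eta * P * sigma**2
# All parameter values are illustrative assumptions. For CRRA utility,
# relative prudence is P = eta + 1 (Kimball, 1990).
delta, eta, g, sigma = 0.015, 2.0, 0.02, 0.02
P = eta + 1
s = delta + eta * g - 0.5 * eta * P * sigma ** 2
print(f"{s:.4f}")  # 0.0538: the precautionary term lowers s by 0.0012
```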
Intergenerational equity
Not only are declining social discount rates necessary for efficiency, it turns out that they are also necessary for some specifications of intergenerational equity. Chichilnisky (1996, 1997) introduces two axioms for sustainable development requiring that the ranking of consumption paths be sensitive to consumption in both the present and the very long run. Sensitivity to the present means that rankings are not solely determined by the ‘tails’ of the consumption stream. Sensitivity to the future means that there is no date after which consumption is irrelevant to the rankings. These axioms lead to the following criterion:

W = \theta \int_0^\infty u(c_t)\,\Delta(t)\,dt + (1-\theta) \lim_{t \to \infty} u(c_t) \qquad (7.4)

where Δ(t) is the utility discount function and 0 < θ < 1 is the weight placed on the integral part. Heal (2003) notes that the Chichilnisky criterion has no solution under standard exponential discounting, where Δ(t) = exp(−δt). It makes sense initially to maximize the integral part, before switching to maximizing the asymptotic path. This refuses to yield a solution, however, because it is always optimal to delay the switching point, as this increases the integral part with no reduction in the asymptotic part. Interestingly, however, equation (7.4) does have a solution provided that the utility discount rate, δ, declines over time, asymptotically approaching zero. In short, a declining utility discount rate is necessary for a solution satisfying Chichilnisky’s axioms of sustainable development. Li and Löfgren (2000) propose a similar model which examines a society of two individuals, a utilitarian and a conservationist. The implication of this model is similarly that the utility discount rate must decline along the optimal path.
Conclusions on declining discount rates
Incorporating uncertainty into social cost–benefit analysis leads to the conclusion that a declining social discount rate is necessary for efficient decision-making. Indeed, it was on this basis that the United Kingdom government incorporated declining social discount rates into its most recent HM Treasury (2003) Green Book, which contains the official guidance on government project and policy appraisal. Pessimistic future projections and, to a lesser extent, the evidence from individual behaviour could further support that conclusion. Finally, the fact that declining discount rates also emerge from specifications of intergenerational equity employed by Chichilnisky (1996, 1997) and Li and Löfgren (2000) suggests that they are an ideal way to navigate between the demands of intertemporal efficiency and the concerns of intergenerational equity.
5. Alternatives to discounting
Although declining discount rates provide an appealing solution to the dual problems of intergenerational efficiency and equity, there are other possible solutions. Schelling (1995) proposes an alternative based around ignoring discount rates and specifying a richer utility function. Kopp and Portney (1999) and Page (2003) suggest using voting mechanisms. Finally, discounting reflects a consequentialist ethical position, so alternatives based upon deontological ethics are considered.
Schelling’s utility function approach
Schelling (1995) argues that investments for people in the far-distant future should not be evaluated using the conventional discounted cash flow framework. Instead, such investments should be considered much like foreign aid. For instance, investment now to reduce future greenhouse gas emissions should not be viewed as saving, but rather as a transfer of consumption from ourselves to people living in the distant future, which is similar to making sacrifices now for the benefit of our contemporaries who are distant from us geographically or culturally. The only difference is that the transfer mechanism is no longer the ‘leaky bucket’ of Okun (1975), but rather an ‘incubation bucket’, where the gift multiplies in transit. Given that people are generally unwilling to make sacrifices for the benefit of richer people distant in geography or culture, we should not expect such sacrifices for richer people distant in time.
In other words, the ‘utility function approach’, as Schelling (1995) calls it, would drop the use of a discount rate, and instead present policy makers with a menu of investments and a calculation of the utility increase in each world region (and time period) for each investment. This approach has the merit of insisting on transparency in the weights placed on consumption flows at each point in time and space, which is to be welcomed. However, debate would focus on the appropriate utility function to employ to value consumption increases in different regions at different times. Ultimately, in addition to reflecting marginal utilities at different points in time and space, the weights would probably also have to reflect the human tendency to discount for unfamiliarity along temporal, spatial and cultural dimensions.
Voting mechanisms
Many scholars have argued that although discounting is appropriate for short term policy evaluation, it is stretched to breaking point by complex long term challenges such as climate change. For instance, global climate policy is likely to have non-marginal effects on the economy, implying that conventional consumption discounting is inappropriate. Consumption discounting rests on the assumption that the project or policy being evaluated is a small perturbation on the business as usual path. If the project is non-marginal, then the consumption discounting ‘short cut’ is inapplicable, and a full welfare comparison of different paths is necessary instead.
Of course, conducting a full welfare comparison involves a certain amount of complexity. Alternatives to the welfare economics approach include the use of mock referenda, proposed by Kopp and Portney (1999), where a random sample of the population would be presented with a detailed description of the likely effects – across time and space – of the policy being implemented or not. The description would include all relevant information, such as the full costs of the policy and even the likelihood of other countries taking relevant action. Respondents would then vote for or against the policy. By varying the estimate of the costs for different respondents, a willingness to pay locus for the policy would be determined.
Their approach has the appeal of valuing the future by asking citizens directly, rather than by examining their behaviour or by reference to particular moral judgements. Problems with this approach, as Kopp and Portney (1999) note, include the usual possible biases in stated preference surveys and the difficulty of providing adequate information for an appropriate decision on such a complex topic.
Page (2003) also proposes that voting should be considered as an alternative to discounted cash flow analysis for important long term public decisions. In contrast to cost–benefit analysis, with its emphasis on achieving efficiency, he notes that voting mechanisms (with one-person-one-vote) are more likely to produce fair outcomes.
One difficulty with both proposals is that the people affected by the policy – future human beings – remain disenfranchised, just as they are on current markets. Unlike Kopp and Portney, Page tackles this problem by proposing to extend voting rights hypothetically to unborn future generations. Under the (unrealistic) assumption that there will be an infinite number of future generations, he concludes that intergenerational voting amounts to an application of the von Weizsäcker (1965) overtaking criterion. This leads to a dictatorship of the future, so ‘safeguards’ protecting the interests of the present would be needed which, Page argues, would be easy to construct given the position of power of the present generation.
The challenge with this proposal is to make it operational. Without safeguards, the implication is that the present should impoverish itself for future generations. As such the safeguards would in fact constitute the crux of this proposal. Determining the appropriate safeguards amounts to asking how the interests of the present and the future should be balanced, and this appears to lead us back to where we started, or to employing a different ethical approach altogether.
Deontological approaches
Sen (1982) argues that the welfare economic framework is insufficiently robust to deal with questions of intergenerational equity because it fails to incorporate concepts of liberty, rights and entitlements as ends in themselves. He considers an episode of torture, where the person tortured (the ‘heretic’) is worse off and the torturer (the ‘inquisitor’) is better off after the torture. Further, suppose that although the inquisitor is better off, he is still worse off than the heretic. Then the torture is justified under a utilitarian or Rawlsian social welfare function. Sen (1982) contends that society may want to grant the heretic a right to personal liberty that cannot be violated merely to achieve a net gain in utility or an improvement for the worst-off individual. He adds that an analogy between pollution and torture is ‘not absurd’, and that perhaps the liberty of future generations is unacceptably compromised by the present generation’s insouciance about pollution.
If the consequentialist foundations of cost–benefit analysis are deemed inadequate, discounted cash flow analysis must be rejected where it generates results that contravene the rights of future generations. Howarth (2003) lends support to this position, arguing that although cost–benefit analysis is useful to identify potential welfare improvements, it is trumped by the moral duty to ensure that opportunities are sustained from generation to generation. Page (1997) similarly argues that we have a duty – analogous to a constitutional requirement – to ensure that intergenerational equity is satisfied before efficiency is considered.
Pigou (1932) agreed that such duties existed, describing the government as the ‘trustee for unborn generations’. But Schwartz (1978) and Parfit (1983) question whether the notion of a duty to posterity is well-defined, on the grounds that decisions today determine not only the welfare but also the identities of future humans. Every person born, whether wealthy or impoverished, should simply be grateful that, by our actions, we have chosen them from the set of potential persons. Howarth (2003) answers that, at a minimum, we owe well-defined duties to the newly born, thus creating duties for at least an expected lifetime.
Assuming a duty to posterity is conceptually possible, the final step is to specify the content of the duty. Howarth (2003) reviews several different formulations of the duty, which ultimately appear to amount to a duty to ensure either weak or strong sustainability. As such, deontological approaches comprise the claim that intergenerational equity is captured by a (well-defined) duty of sustainability to future generations, and that this duty trumps considerations of efficiency. While these approaches do not reject the use of discounting, they subjugate efficiency considerations to those of rights and/or equity. This is not inconsistent with the view expressed in section 2 above that cost–benefit analysis is a guide for decision-making rather than a substitute for judgement (Lind, 1982).
6. Conclusion
This chapter has explained why discounting occupies such an important and controversial place in long-term policy decisions. While intertemporal trade-offs will always be important, the developments reported in this chapter provide reason to hope that discounting may eventually become less controversial. Arguments for a zero social discount rate need not be taken seriously unless they are based upon extremely pessimistic future economic projections. Arguments for a zero utility discount rate are more plausible, but not necessarily convincing. Indeed, there is a good case for employing a positive, but very low, utility discount rate to reflect extinction risk.
Furthermore, the fact that declining social discount rates are necessary for efficiency reduces the degree of conflict between intergenerational equity and efficiency. Economists detest inefficiency, and it is surely only a matter of time before other governments adopt efficient (declining) social discount rates. If so, the discounting controversies of the future will concern the particular specification of economic uncertainty and the precise shape of the decline, rather than the particular (constant) discount rate.
Finally, even if declining discount rates reduce the tension between intergenerational equity and efficiency, they do not eliminate it. Discounting and cost–benefit analysis provide a useful guide to potential welfare improvements, but unless infallible mechanisms for intergenerational transfers become available, project-specific considerations of intergenerational equity will continue to be important. The ethical arguments, consequentialist and deontological, outlined in this chapter provide some guidance.
Ultimately, however, the appropriate trade-off between equity and efficiency, intergenerationally or otherwise, raises fundamental issues in philosophy. Consensus is unlikely, if not impossible. At least the clarification that efficient discount rates should be declining reduces the domain of disagreement.
8 Population and sustainability
Geoffrey McNicoll
1. Introduction
Problems of sustainability can arise at almost any scale of human activity that draws on natural resources or environmental amenity. In some regions minuscule numbers of hunter-gatherers are thought to have hunted Pleistocene megafauna to extinction; complex pre-industrial societies have disappeared, unable to adapt to ecological changes – not least, evidence suggests, changes they themselves wrought (Burney and Flannery, 2005; Janssen and Scheffer, 2004). But modern economic development has brought with it sustainability problems of potentially far greater magnitude – a result not only of the technological capabilities at hand but of the demographic realities of much larger populations and an accelerated pace of change.
A simple picture of those modern realities is seen in Figure 8.1. It charts a staggered series of population expansions in major world regions since the beginning of the industrial era, attributable to lowered mortality resulting from nutritional improvements, the spread of medical and public health services, and advances in education and income. In each of the regions population growth slows and eventually halts as fertility also drops, completing the pattern known as the demographic transition. The population trajectories shown for the 21st century are forecasts, of course, but moderately secure ones, given improving economic conditions and absent major unforeseen calamities. Worldwide, the medium UN projections foresee world population increasing from its 2005 level of 6.5 billion to a peak of about 9 billion around 2075. Very low fertility, if it persists, will lead to actual declines in population size – an all but certain near-term prospect in Europe and a plausible prospect by mid-century in East Asia.
Historically, the increase in population over the course of a country’s demographic transition was typically around three- to five-fold, with the pace of change seldom much above 1 per cent per year; in the transitions still underway the increases may end up more like ten-fold or even greater, and growth rates have peaked well above 2 per cent per year. In both situations the size changes are accompanied by shifts in age composition – from populations in which half are aged below 20 years to ones with half over 50 – and in concentration, from predominantly rural to overwhelmingly urban.
The lagged onset and uneven pace of the transitions across regions generate striking regional differences in population characteristics at any given time. Many population–environment and population–resource issues are thus geographically delimited; for others, however, the scale of environmental spillovers, migration flows and international trade may require an interregional or global perspective. This chapter reviews the implications of these various features of modern demographic change for sustainable development – gauged in terms of their effects both on the development process and on its outcomes (human well-being and environmental conditions).
The discussion need not be narrowed at the outset by specifying just what sustainable development sustains. The conventional polar choices are the wherewithal needed to assure the welfare of future generations – a generalized notion of capital – and that part of it that is not human-made – what is now usually termed natural capital. Conservation of the former, allowing substitutability among forms of capital, is weak sustainability, and conservation of the latter is strong sustainability. (See, for example, Chapters 3, 4 and 6 of this volume on these concepts and the problems associated with them.) I take as a premise, however, that sustainable development is a topic of interest and importance to the extent that substitutability of natural capital with other kinds of capital in the processes yielding human well-being is less than perfect.
2. Population and resources in the theory of economic growth
For the classical economists, fixity of land was a self-evident resource constraint on the agrarian economies of their day. The course of economic growth was simply described. With expanding (man-made) capital and labour, an initial period of increasing returns (derived from scale economies and division of labour) gave way over time to diminishing returns, eventually yielding a stationary state. To Adam Smith and many others, that notional end point was a bleak prospect: profit rates dropped toward zero, population growth tailed off, and wages fell to subsistence levels. A very different, more hopeful, vision of stationarity, still in the classical tradition, was set out by J.S. Mill in a famous chapter of his Principles of Political Economy (1848): population and capital would again have ceased to grow, but earlier in the process and through individual or social choice rather than necessity. Productivity, however, could continue to increase. Gains in well-being would come also from the earlier halting of population growth, and consequent lower population–resource ratios.
A similarly optimistic depiction of a future stationary state – with the ‘eco- nomic problem’ solved and human energies diverted to other pursuits – was later drawn by Keynes (1932).
As technological change increasingly came to be seen as the driver of economic growth, and as urban industrialization distanced most economic activity from the land, theorists of economic growth lost interest in natural resources. With a focus only on capital, labour and technology, and with constant rates of population growth, savings and technological change, the models yielded steady-state growth paths in which output expanded indefinitely along with capital and labour. More elaborate formulations distinguished among different sectors of the economy. In dualistic growth models, for example, a low-productivity, resource-based agricultural sector provided labour and investment to a dynamic but resource-free modern sector, which eventually dominated the economy (see also Chapter 14).
With recognition of non-linearities associated with local increasing returns and other self-reinforcing mechanisms in the economy, there could be more than one equilibrium growth path, with the actual outcome sensitive to initial conditions or to perhaps fortuitous events along the way (see, for example, Becker et al., 1990; Foley, 2000).
Although it typically did not do so, this neoclassical modelling tradition was no less able than its classical forebears to take account of resource constraints. (See Lee, 1991, on this point.) Renewable resources would simply add another reproducible form of capital as a factor of production.
Non-renewable resources, assuming they were not fully substitutable by other factors and not indefinitely extendable through technological advances, would be inconsistent with any steady-state outcome that entailed positive population growth. Requiring population growth, in the long term, to come to an end is not, of course, a radical demand to make of the theory.
While the actual role of population and resources in economic development is an empirical issue, a lot of the debate on the matter has been based on modelling exercises little more complicated than these. Much of it takes the form of window dressing, tracing out over time the implications of a priori, if often implicit, assumptions about that role. A single assumed functional form or relationship – an investment function, a scale effect, presence (or absence) of a resource constraint – after some initial period comes to dominate the model’s behaviour. Familiar examples can be drawn from two models occupying polar positions in the resources debate of the 1970s and 1980s: the model underpinning Julian Simon’s The Ultimate Resource (1981) and that supporting the Club of Rome’s Limits to Growth scenarios (Meadows et al., 2004). In Simon’s case, the existence of resource constraints on the economy is simply denied. Positive feedbacks from a larger population stimulate inventiveness, production and investment, and favour indefinite continuation of at least moderate population growth, leading both to economic prosperity and to vastly expanded numbers of people. (The discussion of the model’s output ignores that latter expansion by being couched only in per capita values – see Simon, 1977.) For the Meadows team, negative feedback loops working through food production crises and adverse health effects of pollution lead to dramatic population collapses – made even sharper when lagged effects are introduced. Such models, heroically aggregated, are better seen as rhetorical devices, buttresses to qualitative argument, rather than serious efforts at simulation.
Their output may point to parts of the formulation that it is important to get right, but it does not help in getting it right. While their authors were persuaded that they were accurately portraying the qualitative evidence about population and resources, as they respectively read it, the models in themselves merely dramatized their differences.
More focused models can achieve more, if at a lower level of ambition.
The demonstration of ‘trap’ situations involving local environmental degradation is a case in point – see Dasgupta (1993). As an example, the PEDA (Population–Environment–Development–Agriculture) model developed by Lutz et al. (2002) describes the interactions among population growth, education, land degradation, agricultural production and food insecurity.
It permits simulation of the vicious circle in which illiteracy, land degradation and hunger can perpetuate themselves, and points to the conditions required for that cycle to be broken. While still quite stylized, it is cast at a level that permits testing of its behaviour against actual experience, supporting its value for policy experiment.
3. Optimal population trajectories
Since population change is in some measure a matter of social choice, it can notionally be regarded as a policy variable in a modelling exercise. Varying it over its feasible range then allows it to be optimized for a specified welfare function. The concept of an optimum population size for some designated territory – at which, other things equal, per capita economic well-being (or some other welfare criterion) was maximized – followed as a simple consequence of diminishing returns to labour. A small literature on the subject begins with Edwin Cannan in the late nineteenth century (see Robbins, 1927) and peters out with Alfred Sauvy (1952–54) in the mid-twentieth.
This is distinct, of course, from the investigation of human ‘carrying capacity’ – such as the question of how many people the earth can support.
At a subsistence level of consumption some of these numbers are extravagant indeed – Cohen (1995) assembles many of them – but the maximization involved, although in a sense it is concerned with the issue of sustainability, has closer ties to the economics of animal husbandry than to human welfare. (The technological contingency of such calculations is well indicated by the estimate, due to Smil (1991), that fully one-third of the present human population would not exist were it not for the food derived from synthetic nitrogenous fertilizer – a product of the Haber–Bosch process for nitrogen fixation developed only in the early 20th century.) If it is assumed that present-day rich-country consumption patterns are to be replicated worldwide, carrying capacity plummets: for Pimentel et al. (1999) the earth’s long-term sustainability calls for a population less than half its present level.
The question of optimal size also arises for the populations of cities. The urban ‘built environment’, after all, is the immediate environment of half the human population. Beyond some size, scale diseconomies deriving from pollution, congestion and other negative externalities affecting health or livability may eventually outweigh economies of agglomeration (see, for example, Mills and de Ferranti, 1971; Tolley, 1974). But other dimensions of the built environment, including its aesthetic qualities, would equally warrant attention in a welfare criterion. Singling out the relationship of population size to the subjective welfare of the average inhabitant, among all the other contributors to urban well-being, seems of limited value. Not surprisingly, like the broader topic of optimum population, this too has not proven a fruitful area of research.
What might be of more interest is the optimal path of population change over time. The age-structure dynamics of population growth are analogous to the vintage dynamics of capital stock, though with more limited scope for policy influence. For specified welfare criteria, optimal population trajectories can be derived to show how resource-constrained stationarity should be approached (see Pitchford, 1974; Arthur and McNicoll, 1977; Zimmerman, 1989).
Abstract theorizing of this kind is a means of playing with ideas rather than deriving actual policies. Nonetheless, just such an optimization exercise, part static and part dynamic, lay behind the introduction in 1979 of China’s radical one-child-per-family policy. The background, recounted by Greenhalgh (2005), was the belated conviction on the part of China’s leadership in the 1970s that the country’s population growth was damaging its development prospects, and the consequent recasting of the problem, as they saw it, from being one for social scientists and political ideologues to one for systems engineers and limits-to-growth theorists. The latter experts were at hand in the persons of a group of engineers and scientists (led by a missile engineer, Song Jian), who became the principals in promoting the new technocratic approach. They investigated both the static optimum – the target population size – and alternative trajectories that would lead toward it. On the former, as they summarized it: ‘We have done studies based on likely economic development trends, nutritional requirements, freshwater resources, and environmental and ecological equilibrium, and we conclude that 700 million seems to be China’s optimum population in the long run’ (Song et al., 1985, p. 214). They then solved the optimal control problem of how fertility should evolve to reach the target population over the next century, given that the peak population was not to exceed 1.2 billion, that there were pre-set constraints on the acceptable lower bound of fertility and upper bound of old-age dependency, and that there was to be a smooth transition to the target population while minimizing the total person-years lived in excess of 700 million per year. The resulting policy called for fertility to be quickly brought down to its lower bound, held there for 50 years or so (yielding, after a time, negative population growth), then allowed to rise back to replacement level. While various minimum fertility levels were considered, one child per family was argued to be the best. The human costs of attaining such a trajectory (involving ‘a lot of unpleasantness in the enforcement of the program’) and the social and economic problems of the ensuing rapid population ageing were held to be unavoidable in making up for the ‘dogged stubbornness of the 1950s’ when Maoist pronatalism prevailed (Song et al., 1985, p. 267).
For both countries and cities, the specification of a welfare criterion to be optimized requires decisions on the ingredients of well-being and on how its distribution over the population and over time is to be valued. The inherent arbitrariness of that exercise explains the lack of enthusiasm for the concept of an optimum as a formal construct – though the idea may hold some political potency. Changes in trade and technology – either of which can transform economies of scale – erode what little meaning there is in a static optimum population for a country or locality. A fortiori, the inherent unpredictability of those trends, along with the many unknowns in future environmental change, vitiates the usefulness of more ambitious modelling over time – modelling that has necessarily to assume known dynamics.
4. Exhaustible resources and environmental services
Past worries about rapid or continued population growth have often been linked to the idea that a country – or the world – is running out of some supposedly critical natural resource (see Demeny, 1989 for an historical perspective). There have been numerous candidates for those resources in the past. Mostly, such claims have turned out to be greatly overstated; almost always they neglect or underplay the scope for societal adaptation through technological and social change. A classic case was the concern in 19th century Britain that its industry would be crippled as coal supplies were mined out (Jevons, 1865). The widely-publicized wagers between economist Julian Simon and biologist Paul Ehrlich on whether stocks of selected mineral resources were approaching exhaustion, to be signalled by steadily rising prices, were all won by Simon as prices fell over the specified period (Simon, 1996, pp. 35–6). A prominent historian of China titled a study of that country’s environmental history: ‘three thousand years of unsustainable development’ (Elvin, 1993).
Moreover, even if we accept, contra Simon in The Ultimate Resource, that stocks of many resources are indeed finite and exhaustible, it does not follow that the link to population should necessarily be of much consequence. For many resources, indeed, the pace of approach to exhaustion might be at most marginally affected by feasible changes in population growth. As put bluntly in a 1986 panel report from the US National Research Council: ‘slower population growth delay[s] the time at which a particular stage of resource depletion is reached, [but] has no necessary or even probable effect on the number of people who will live under a particular stage of resource depletion. [T]he rate of population growth has no effect on the number of persons who are able to use a resource, although it does, of course, advance the date at which exhaustion occurs . . . Unless one is more concerned with the welfare of people born in the distant future than those born in the immediate future, there is little reason to be concerned about the rate at which population growth is depleting the stock of exhaustible resources’ (US National Research Council, 1986: 15).
But that judgement is altogether too dismissive of the problem as a whole. ‘Mining’ a resource that would be potentially renewable, such as a fishery or an aquifer, or degrading land through erosion or salination may be a population-related effect. (The resources allowed as potential sources of concern by the NRC panel were fuelwood, forest land, and fish; many would add access to fresh water.) These are cases where the concept of a sustainable yield is straightforward enough, but constructing and maintaining the institutional conditions required to safeguard that yield are demanding. Far from a society simply using up one resource and moving on to other things – presumably having replaced that part of its natural capital by other resources or by other forms of capital – the outcome may amount to an effectively irreversible loss in welfare.
The shift in focus here is from physical ‘stuff’, epitomized by stocks of minerals in the ground, to environmental services that humans draw upon.
Environmental services encompass not only provision of food and fuel but also climate regulation, pollination, soil formation and retention, nutrient cycling, and much else. And they include direct environmental effects on well-being through recreation and aesthetic enjoyment. A massive study of time trends in the use of these services, judged against sustainable levels, is the Millennium Ecosystem Assessment. In its first report (2005), the Assessment finds that most of the services it examined are being degraded or drawn on at unsustainable rates. Dryland regions, covering two-fifths of the world’s land surface and containing one-third of the world population, are especially affected.
But to what extent can this degradation be linked to population change rather than to economic growth or to the numerous factors that might lead to irresponsible patterns of consumption? People’s numbers, but also their proclivities to consume and their exploitative abilities, can all be factors in degrading environmental services. In stylized form, this proposition is conveyed in the familiar Ehrlich–Holdren ‘IPAT’ identity: Impact = Population × Affluence × Technology (Ehrlich and Holdren, 1972). ‘Impact’ here indicates a persisting rather than transitory environmental effect. It is an external intrusion into an ecosystem which tends to reduce its capacity to provide environmental services. An example of an environmental impact is a country’s carbon dioxide emissions, which degrade the environmental service provided by the atmosphere in regulating heat radiation from the earth’s surface. The P × A × T decomposition in that case would be population times per capita GDP times the ‘carbon intensity’ of the economy.
At a given level of affluence and carbon intensity, emissions rise in proportion to population.
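A minimal sketch of the carbon decomposition, with invented magnitudes (the figures below are assumptions for illustration, not data):

```python
# IPAT decomposition for carbon emissions: I = P * A * T,
# i.e. population x per-capita GDP x carbon intensity of output.
# All figures are invented, for illustration only.
P = 6.5e9      # population
A = 9.0e3      # affluence: GDP per capita (dollars per person per year)
T = 1.7e-4     # technology: tonnes of carbon per dollar of output
I = P * A * T  # total emissions (tonnes of carbon per year)
print(f"{I:.3e} tC/yr")  # ~9.9e+09; doubling P doubles I, other things equal
```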
Interpreted as a causal relationship rather than as an identity, the IPAT equation is commonly used to emphasize the responsibility for environmental damage on the part, jointly, of population size, a high-consumption lifestyle, and environmentally-destructive technology, each amplifying the others. Implicitly, it asserts that these factors can together be seen as the main human causes of degradation. The categorization should not, of course, be taken for granted. In particular, social organizational and behavioural factors would often warrant separate scrutiny as causes of degradation rather than being subsumed within A and T.
If P, A and T were independent of each other, the multiplicative relationship would be equivalent to an additive relationship among growth rates. In the carbon case, the growth rate of emissions would equal the sum of the growth rates of the three components. However, P, A and T are not in fact independent of each other. For any defined population and environment, they are variables in a complex economic, demographic and socio-cultural system. Each also has major distributional dimensions and is a function of time. Consumption – or any other measure of human welfare – is an output of this system; environmental effects, both intended and unintended, are outputs as well. And even at the global level the system is not autonomous: it is influenced by ‘natural’ changes in the environment and by environmental feedbacks from human activity.
Because of the dependency among P, A and T, the Ehrlich–Holdren formula cannot resolve disputes on the relative contributions of factors responsible for environmental degradation. For this task, Preston (1994) has proposed looking at the variances of the growth rates of I, P, A and T over different regions or countries. Writing these variances as σ²_I, σ²_P and so on, the additive relationship among growth rates implies the following relationship among variances and covariances:

\sigma_I^2 = \sigma_P^2 + \sigma_A^2 + \sigma_T^2 + 2\,\mathrm{cov}(g_P, g_A) + 2\,\mathrm{cov}(g_P, g_T) + 2\,\mathrm{cov}(g_A, g_T)
The covariance terms are the interaction effects. If each is relatively small in a given case, there is a simple decomposition of the impact variance into the variance imputed to each factor. Otherwise, the one or more significant interaction terms can be explicitly noted.
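A sketch of the decomposition in code, using invented regional growth rates rather than Preston’s data:

```python
import numpy as np

# Variance decomposition of emissions growth across regions, in the
# spirit of Preston (1994). Growth rates are invented for illustration.
# Rows: g_P, g_A, g_T; columns: four hypothetical regions.
g = np.array([
    [0.020, 0.015, 0.005, 0.028],    # population growth
    [0.010, 0.030, 0.020, 0.005],    # affluence growth
    [-0.005, -0.010, 0.000, -0.002], # technology (carbon-intensity) growth
])
g_I = g.sum(axis=0)  # additive identity: g_I = g_P + g_A + g_T per region

C = np.cov(g)        # 3x3 covariance matrix of the component growth rates
# Var(g_I) is the sum of every element of C: the three variances plus
# twice each covariance (the interaction effects).
print(np.isclose(C.sum(), np.var(g_I, ddof=1)))  # True
print(np.diag(C) / C.sum())  # own-variance contribution of each factor
```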
In Preston’s analysis of carbon emission data for major world regions over 1980–90, used as an illustration, population growth makes a minor contribution to the total variance; the major contributors are the growth of A and T, with a substantial offsetting effect from the interaction of A and T.
Given the 50 per cent or so increase in global population projected for this century, the future role of population growth in carbon emissions is nonetheless of some importance. Detailed studies of this relationship include Bongaarts (1992), Meyerson (1998) and O’Neill et al. (2001).
Important too, of course, are the demographic consequences of any resulting climate change, such as those working through shifts in food production, disease patterns and sea levels. Specification of a more general functional relationship, I = f(P, A, T), permits calculation of impact elasticities with respect to the three factors, rather than implicitly assuming elasticities of 1. At the country level there is some evidence that the population elasticity is indeed close to 1 for carbon emissions but may be higher for some other pollutants (see Cole and Neumayer, 2004).
Complicating any estimation of population–environment relationships is the non-linearity of environmental systems. The Millennium Ecosystem Assessment, mentioned above, warns of an increasing likelihood of ‘non-linear changes in ecosystems (including accelerating, abrupt, and potentially irreversible changes), with important consequences for human well-being’ (2005, p. 11). Holling (1986) notes that ecosystems may be resilient under the pressure of human activity until a point is reached at which there is sharp discontinuous change. Kasperson et al. (1995) identify a series of thresholds in nature–society trajectories as human activity in a region intensifies beyond sustainability: first a threshold of impoverishment, then endangerment, and finally criticality – the stage at which human wealth and well-being in the region enter an irreparable decline. The working out of the process is detailed in particular settings: criticality is exemplified by the Aral Sea basin. More dramatic historical cases are described by Diamond (2005). Curtailing growth in human numbers may not be a sufficient change to deflect those outcomes, nor may it even be necessary in some circumstances (as discussed below), but population increase has usually been an exacerbating factor.
5. Institutional mediation
Most important links between population and environmental services are institutionally contingent. Under some institutional arrangements – for example, a strong management regime, well-defined property rights, or effective community norms and sanctions – population growth in a region need not adversely affect the local environment. Access to a limited resource can be rationed or governed in some other way so that it is not overused. Or the institutional forms may be such that the population growth itself is prevented – by negative feedbacks halting natural increase (a condition apparently common in hunter-gatherer societies) or by diverting the growth elsewhere, through migration. If this institutional mediation ultimately proves inadequate to the task, the limits on the environmental services being drawn on would be exceeded and degradation would ensue. This can happen well short of those limits if economic or political change undermines a management regime or erodes norms and sanctions. Excessive deforestation can often be traced to such institutional breakdowns (or to ill-considered efforts at institutional reform) rather than to population growth itself. In other cases, a resource may have been so abundant that no management or sanctions were needed: that is a setting where the familiar ‘tragedy of the commons’ may unfold as the number of claimants to the resource, or their exploitative abilities, increase (see Hardin, 1968).
An appreciable amount of literature now exists on these issues of institutional design, both theoretical and empirical, and ranging in scale from local common-pool resources such as irrigation water or community forests to the global environment (see, for example, Ostrom, 1990; Baden and Noonan, 1998). Small common-pool resource systems receive most attention: a favourite example is the experience of Swiss alpine villages, where social regulation limiting overgrazing has been maintained for many generations. Larger systems usually show less symmetry in participant involvement and participant stakes: benefits can be appropriated by favoured insiders, costs shed to outsiders. Judgement of sustainability in such cases may depend on where a system’s boundaries are placed, and whether those cost-shedding options can be curtailed (see McNicoll, 2002).
Physical spillover effects of human activity beyond the location of that activity, such as downwind acid rain from industrial plants or downstream flooding caused by watershed destruction, present relatively straightforward technical problems for the design of a governance regime.
The greater difficulties are likely to be political. These can be formidable even within a country, a fortiori where the environmental effects involve degradation of a global commons. Population change here raises added complications. Thus, in negotiating a regulatory regime to limit global carbon emissions, anticipated population growth in a country can be treated either as a foreordained factor to be accommodated by the international community – occasioning a response analogous to political redistricting in a parliamentary democracy – or treated wholly as a domestic matter (an outcome of social policy) that should not affect assignment of emission quotas.
Adverse effects of human activity can also be transferred from one region to another through the normal economic relationships among societies, notably through trade. A poorer society may be more willing to incur environmental damage in return for economic gain, or be less able to prevent it. The concept of a community’s ‘ecological footprint’ was developed to account for such displaced effects by translating them back into material terms, calculating the total area required to sustain each community’s population and level of consumption (see Wackernagel and Rees, 1996, and, for criticism, Neumayer, 2003; see also Chapter 20). An implicit presumption of environmental autarky would disallow rich countries buying renewable resources from poor countries; notionally, if implausibly, they could maintain their consumption by somehow reducing their population.
6. Population ageing and population decline
As noted earlier, the age composition of populations that emerge from the transition to low mortality and fertility is heavily weighted toward the elderly, and after transitional effects on the age distribution have worked themselves out, actual declines in population numbers are likely. For example, if fertility were to stay at the current European average of around 1.4 lifetime births per woman (0.65 births below the replacement level), each generation would be about one-third smaller than its predecessor. Change of that magnitude could not be offset by any politically feasible level of immigration.
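As a rough check of that arithmetic (ignoring mortality before childbearing age and the precise sex ratio at birth), the ratio of each generation’s size to its predecessor’s is approximately fertility divided by replacement-level fertility:

$$\frac{N_{t+1}}{N_t} \approx \frac{1.4}{1.4 + 0.65} = \frac{1.4}{2.05} \approx 0.68,$$

that is, roughly one-third smaller.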
After the ecological damage associated with industrialization it might be expected that the ending of the demographic transition would have positive effects on sustainability. There are fewer additional people, or even fewer people in total, and those there are will mostly live compactly in cities and have the milder and perhaps more environmentally-friendly consumption habits of the elderly. There may be scope for ecological recovery. In Europe, for instance, the evidence suggests a strong expansion in forested area is occurring as land drops out of use for cultivation and grazing (Waggoner and Ausubel, 2001). The so-called environmental Kuznets curve (see Chapter 15) – the posited inverted-U relationship between income and degradation – gives additional grounds for environmental optimism since post-transition societies are likely to be prosperous. But there are countervailing trends as well. Household size tends to diminish, for example, and small households, especially in sprawling suburbs, are less efficient energy users (see O’Neill and Chen, 2002).
Moreover, ecosystem maintenance increasingly calls for active intervention rather than simply halting damage. Mere neglect does not necessarily yield restoration. Many human-transformed landscapes that have valued productive or amenity qualities similarly require continuing maintenance. Expectations of strengthened environmentalism around the world may not be borne out – preferences, after all, tend to adapt to realities – and even a strong environmental ethic is powerless in the face of irreversibilities.
Population decline, of course, can come about for reasons other than post-transition demographic maturity: from wars or civil violence and natural disasters, and (a potentially larger demographic threat) from epidemic disease (see Smil, 2005). These events too have implications for sustainability, at least locally. Their effect is magnified to the degree they do harm to the productive base of the economy (including its natural resource base) and to the social institutions that maintain the coherence of a society over time.
7. Conclusions and research directions
Much of the research that would shed light on demographic aspects of sustainability is best covered under the general heading of sustainable development. This is largely true for the long-run changes that constitute the demographic transition. To a considerable degree the transition is neither an autonomous process nor policy-led, but a by-product of economic and cultural change, and it is this latter that should be the research focus.
For example, in studies of rainforest destruction – a standard illustration of adverse demographic-cum-development impact on the environment – a basic characteristic of the system is precisely its demographic openness.
Demographic ‘pressure’ supposedly leads to land clearing for pioneer settlement, but a broader research perspective would investigate the economic incentives favouring that kind of settlement over, say, cityward migration (Brazil’s rural population in 2005 was one-third smaller than its 1970 peak). As to policy influence, migration and fertility might be seen as potential candidates to be demographic control variables in a population–economy–environment system, but even if they technically lie within a government’s policy space, aside from cross-border movement, most governments have very limited, if any, direct purchase over them.
While there may thus be less content in population and sustainability than first appears, an important research agenda remains. A critical subject, signalled above, is the design of governing institutions for population–economy–environment systems, able to ensure sustainable resource use. Those institutions are of interest at a range of system levels – local, national and international – and are likely to entail intricate combinations of pricing and rationing systems and means of enforcement.
At the local level, and possibly at other levels too, governing institutions might seek to include measures aiming at the social control of population growth.
A less elusive but similarly important research area concerns demographic effects on consumption. How resource- and energy-intensive will the consumption future be, given what we know about the course of population levels and composition? How do we assess substitutability in consumption – say, between ‘real’ and ‘virtual’ environmental amenity? And, well beyond the demographic dimension but still informed by it, are we, in confronting sustainability problems, dealing with time-limited effects of a population peaking later this century (with an additional 2–3 billion people added to the world total) but then dropping, allowing some measure of ecological recovery, or are we entering a new, destabilized environmental era in which sustainability in any but the weakest sense is continually out of reach?
9 Technological lock-in and the role of innovation Timothy J. Foxon
1. Sustainability and the need for technological innovation
Despite increases in our understanding of the issues raised by the challenge of environmental, social and economic sustainability, movement has been frustratingly slow towards achieving levels of resource use and waste production that are within appropriate environmental limits and provide socially acceptable levels of economic prosperity and social justice.
As first described by Ehrlich and Holdren (1971), the environmental impact (I) of a nation or region may be usefully decomposed into three factors: population (P), average consumption per capita, which depends on affluence (A), and environmental impact per unit of consumption, which depends on technology (T), in the equation (identity) I = P × A × T.
Limiting growth in environmental impact and eventually reducing it to a level within the earth’s ecological footprint (Chapter 20) will require progress on all three of these factors. Chapter 8 discussed issues relating to stabilizing population levels, and Chapter 16 addresses social and economic issues relating to moving towards sustainable patterns of consumption. This chapter discusses the challenge of technological innovation required to achieve radical reductions in average environmental impact per unit of consumption.
Section 2 argues that individual technologies, and their development, are best understood as part of wider technological and innovation systems.
Section 3 examines how increasing returns to the adoption of technologies may give rise to ‘lock-in’ of incumbent technologies, preventing the adoption of potentially superior alternatives. Section 4 examines how similar types of increasing returns apply to institutional frameworks of social rules and constraints. Section 5 brings these two ideas together, arguing that technological systems co-evolve with institutional systems. This may give rise to lock-in of current techno-institutional systems, such as high carbon energy systems, creating barriers to the innovation and adoption of more sustainable systems. Section 6 examines the challenge for policy makers of promoting innovation for a transition to more sustainable socio-economic systems. Finally, Section 7 provides some conclusions and assesses the implications for future research and policy needs.
2. Understanding technological systems
The view that individual technologies, and the way they develop, are best understood as part of wider technological and innovation systems was significantly developed by studies in the late 1980s and early 1990s. In his seminal work on the development of different electricity systems, Hughes (1983) showed the extent to which such large technical systems embody both technical and social factors. Similarly, Carlsson and Stankiewicz (1991) examined the ‘dynamic knowledge and competence networks’ making up technological systems. These approaches enable both stability and change in technological systems to be investigated within a common analytical framework. Related work examined the processes of innovation from a systems perspective. Rather than being categorized as a one-way, linear flow from R&D to new products, innovation is seen as a process of matching technical possibilities to market opportunities, involving multiple interactions and types of learning (Freeman and Soete, 1997). An innovation system may be defined as ‘the elements and relationships which interact in the production, diffusion and use of new, and economically-useful, knowledge’ (Lundvall, 1992). Early work focused on national systems of innovation, following the pioneering study of the Japanese economy by Freeman (1988). In a major multi-country study, Nelson (1993) and collaborators compared the national innovation systems of 15 countries, finding that the differences between them reflected different institutional arrangements, including: systems of university research and training and industrial R&D; financial institutions; management skills; public infrastructure; and national monetary, fiscal and trade policies. Innovation is the principal source of economic growth (Mokyr, 2002) and a key source of new employment opportunities and skills, as well as providing potential for realizing environmental benefits (see recent reviews by Kemp, 1997; Ruttan, 2001; Grubler et al., 2002; and Foxon, 2003).
The systems approach emphasizes the role of uncertainty and cognitive limits to firms’ or individuals’ ability to gather and process information for their decision-making, known as ‘bounded rationality’ (Simon, 1955; 1959).
Innovation is necessarily characterized by uncertainty about future markets, technology potential and policy and regulatory environments, and so firms’ expectations of the future have a crucial influence on their present decision-making. Expectations are often implicitly or explicitly shared between firms in the same industry, giving rise to trajectories of technological development which can resemble self-fulfilling prophecies (Dosi, 1982; MacKenzie, 1992).
3. Technological lock-in
The view outlined above suggests that the development of technologies both influences and is influenced by the social, economic and cultural setting in which they develop (Rip and Kemp, 1998; Kemp, 2000). This leads to the idea that the successful innovation and take-up of a new technology depends on the path of its development – so-called ‘path dependency’ (David, 1985) – including the particular characteristics of initial markets, the institutional and regulatory factors governing its introduction and the expectations of consumers. Of particular interest is the extent to which such factors favour incumbent technologies against newcomers.
Arthur examined increasing returns to adoption, that is, positive feedbacks which mean that the more a technology is adopted, the more likely it is to be further adopted. He argued that these can lead to ‘lock-in’ of incumbent technologies, preventing the take-up of potentially superior alternatives (Arthur, 1989). Arthur (1994) identified four major classes of increasing returns: scale economies, learning effects, adaptive expectations and network economies, which all contribute to this positive feedback that favours existing technologies. The first of these, scale economies, occurs when unit costs decline with increasing output. For example, when a technology has large set-up or fixed costs because of indivisibilities, unit production costs decline as they are spread over increasing production volume. Thus, an existing technology often has significant ‘sunk costs’ from earlier investments, and so, if these are still yielding benefits, incentives to invest in alternative technologies will be diminished. Learning effects act to improve products or reduce their cost as specialized skills and knowledge accumulate through production and market experience. This idea was first formulated as ‘learning-by-doing’ (Arrow, 1962), and learning curves have been empirically demonstrated for a number of technologies, showing unit costs declining with cumulative production (IEA, 2000), as sketched below. Adaptive expectations arise as increasing adoption reduces uncertainty and both users and producers become increasingly confident about the quality, performance and longevity of the current technology. This means that there may be a lack of ‘market pull’ for alternatives. Network or co-ordination effects occur when advantages accrue to agents adopting the same technologies as others (see also Katz and Shapiro, 1985). This effect is clear in telecommunications technologies, for example: the more that others have a mobile phone or fax machine, the more it is to your advantage to have one (which is compatible). Similarly, infrastructures develop based on the attributes of existing technologies, creating a barrier to the adoption of alternative technologies with different attributes.
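The learning effects just mentioned are commonly summarized by an experience curve; in one standard parameterization (the notation here is illustrative), unit cost falls as a power of cumulative production:

$$C(q) = C_0 \left(\frac{q}{q_0}\right)^{-b},$$

so that each doubling of cumulative output multiplies unit cost by the ‘progress ratio’ $2^{-b}$, a form consistent with the empirically estimated learning curves reported in IEA (2000).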
Arthur (1989) showed that, in a simple model of two competing technologies, these effects can amplify small, essentially random, initial variations in market share, resulting in one technology achieving complete market dominance at the expense of the other – referred to as technological ‘lock-in’.
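A minimal simulation sketch of a model in the spirit of Arthur’s two-technology setup (the payoff and bonus values below are illustrative assumptions, not parameters from Arthur, 1989):

```python
import random

def simulate(steps=10_000, bonus=0.01, seed=None):
    """Two agent types arrive in random order; each adopts the technology
    with the higher payoff, where payoffs rise with prior adoptions."""
    rng = random.Random(seed)
    n_a = n_b = 0
    for _ in range(steps):
        if rng.random() < 0.5:  # R-type agent: prefers technology A a priori
            payoff_a, payoff_b = 1.0 + bonus * n_a, 0.8 + bonus * n_b
        else:                   # S-type agent: prefers technology B a priori
            payoff_a, payoff_b = 0.8 + bonus * n_a, 1.0 + bonus * n_b
        if payoff_a >= payoff_b:
            n_a += 1
        else:
            n_b += 1
    return n_a, n_b

# Early random draws tip the installed-base lead; once the lead exceeds
# 0.2 / bonus adopters, both agent types choose the leader -- lock-in.
for run in range(3):
    print(simulate(seed=run))
```

Different seeds typically lock in different technologies, illustrating Arthur’s point that an essentially random early history can determine which technology achieves complete market dominance.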
Arthur speculated that, once lock-in is achieved, this can prevent the take-up of potentially superior alternatives. David and others performed a series of historical studies which showed the plausibility of arguments of path dependence and lock-in. The best-known example is the QWERTY keyboard layout (David, 1985), which was originally designed to slow down typists to prevent the jamming of early mechanical typewriters, and has now achieved almost universal dominance, at the expense of arguably superior designs. Another example is the ‘light water’ nuclear reactor design, which was originally developed for submarine propulsion, but, following political pressure for rapid peaceful use of nuclear technology, was adopted for the first nuclear power stations and rapidly became the standard design in the US (Cowan, 1990). Specific historical examples of path dependence have been criticized, particularly QWERTY (Liebowitz and Margolis, 1995), as has the failure to explain how ‘lock-in’ is eventually broken, but the empirical evidence strongly supports the original theoretical argument (David, 1997).
4. Institutional lock-in
As described in section 2, the systems approach emphasizes that individual technologies are not only supported by the wider technological system of which they are part, but also by the institutional framework of social rules and conventions that reinforces that technological system. To better understand the development of such frameworks, insights may be drawn from work in institutional economics, which is currently undergoing a renaissance (Schmid, 2004). Institutions may be defined as any form of constraint that human beings devise to shape human interaction (Hodgson, 1988). These include formal constraints, such as legislation, economic rules and contracts, and informal constraints, such as social conventions and codes of behaviour. There has been much interest in the study of how institutions evolve over time, and how this creates drivers and barriers for social change, and influences economic performance. North (1990) argues that all the features identified by Arthur as creating increasing returns to the adoption of technologies can also be applied to institutions. New institutions often entail high set-up or fixed costs. There are significant learning effects for organizations that arise because of the opportunities provided by the institutional framework. There are co-ordination effects, directly via contracts with other organizations and indirectly by induced investment, and through the informal constraints generated. Adaptive expectations occur because increased prevalence of contracting based on a specific institutional framework reduces uncertainty about the continuation of that framework. In summary, North argues, ‘the interdependent web of an institutional matrix produces massive increasing returns’ (North, 1990, p. 95).
Building on this work, Pierson (2000) argues that political institutions are particularly prone to increasing returns, because of four factors: the central role of collective action; the high density of institutions; the possibilities for using political authority to enhance asymmetries of power; and the complexity and opacity of politics. Collective action follows from the fact that, in politics, the consequences of an individual or organization’s actions are highly dependent on the actions of others. This means that institutions usually have high start-up costs and are subject to adaptive expectations. Furthermore, because formal institutions and public policies place extensive, legally binding constraints on behaviour, they are subject to learning, co-ordination and expectation effects, and so become difficult to change, once implemented. The allocation of political power to particular actors is also a source of positive feedback. When actors are in a position to impose rules on others, they may use this authority to generate changes in the rules (both formal institutions and public policies) so as to enhance their own power. Finally, the complexity of the goals of politics, as well as the loose and diffuse links between actions and outcomes, make politics inherently ambiguous and mistakes difficult to rectify. These four factors create path dependency and lock-in of particular political institutions, such as regulatory frameworks. This helps to explain significant features of institutional development: specific patterns of timing and sequence matter; a wide range of social outcomes may be possible; large consequences may result from relatively small or contingent events; particular courses of action, once introduced, can be almost impossible to reverse; and, consequently, political development is punctuated by critical moments or junctures that shape the basic contours of social life.
5. Co-evolution of technological and institutional systems
The above ideas of systems thinking and increasing returns to both technologies and institutions may be combined, by analysing the process of co-evolution of technological and institutional systems (Unruh, 2000; Nelson and Sampat, 2001). As modern technological systems are deeply embedded in institutional structures, the above factors leading to institutional lock-in can interact with and reinforce the drivers of technological lock-in.
Unruh (2000, 2002) suggests that modern technological systems, such as the carbon-based energy system, have undergone a process of technological and institutional co-evolution, driven by path-dependent increasing returns to scale. He introduces the term ‘techno-institutional complex’ (TIC), composed of technological systems and the public and private institutions that govern their diffusion and use, and which become ‘inter-linked, feeding off one another in a self-referential system’ (Unruh, 2000, p. 825).
In particular, he describes how these techno-institutional complexes create persistent incentive structures that strongly influence system evolution and stability. Building on the work of Arthur (1989, 1994), he shows how the positive feedbacks of increasing returns both to technologies and to their supporting institutions can create rapid expansion in the early stages of development of technology systems. However, once a stable techno-institutional system is in place, it acquires a stability and resistance to change. In evolutionary language, the selection environment highly favours changes which represent only incremental modifications to the current system, but strongly discourages radical changes which would fundamentally alter the system. Thus, a system which has benefited from a long period of increasing returns, such as the carbon-based energy system, may become ‘locked-in’, preventing the development and take-up of alternative technologies, such as low carbon, renewable energy sources. The work of Pierson (2000) on increasing returns to political institutions, discussed in
Section 4, is particularly relevant here. Actors, such as those with large investments in current market-leading technologies, who benefit from the current institutional framework (including formal rules and public policies) will act to try to maintain that framework, thus contributing to the lock-in of the current technological system.
Unruh uses the general example of the electricity generation TIC, and we can apply his example to the particular case of the UK electricity system. In this case, institutional factors, driven by the desire to satisfy increasing electricity demand and a regulatory framework based on increasing competition and reducing unit prices to the consumer, fed back into the expansion of the technological system. In the UK, institutional change (liberalization of electricity markets) led to the so-called ‘dash for gas’ in the 1990s – a rapid expansion of power stations using gas turbines. These were smaller and quicker to build than coal or nuclear power stations, thus generating quicker profits in the newly-liberalized market. The availability of gas turbines was partly the result of this technology being transferred from the aerospace industry, where it had already benefited from a long period of investment (and state support) and increasing returns. This technological change reinforced the institutional drivers to meet increasing electricity demands by expanding generation capacity, rather than, for example, creating stronger incentives for energy efficiency measures. Such insights were employed in a recent study of current UK innovation systems for new and renewable energy technologies (ICEPT/E4Tech, 2003; Foxon et al., 2005a). There it was argued that institutional barriers are leading to systems failures preventing the successful innovation and take-up of a wider range of renewable technologies.
6. Promoting innovation for a transition to more sustainable socio-economic systems
We conclude by examining some of the implications of this systems view of technological change and innovation for policy making aiming to promote a transition to more sustainable socio-economic systems. As we have argued, individual technologies are not only supported by the wider technological system of which they are part, but also the institutional framework of social rules and conventions that reinforces that technological system. This can lead to the lock-in of existing techno-institutional systems, such as the high carbon fossil-fuel based energy system. Of course, lock-in of systems does not last for ever, and analysis of examples of historical change may usefully increase understanding of how radical systems change occurs.
A useful framework for understanding how the wider technological system constrains the evolution of technologies is provided by the work on technological transitions by Kemp (1994) and Geels (2002). Kemp (1994) proposed three explanatory levels: technological niches, socio-technical regimes and landscapes. The basic idea is that each higher level has a greater degree of stability and resistance to change, due to interactions and linkages between the elements forming that configuration. Higher levels then impose constraints on the direction of change of lower levels, reinforcing technological trajectories (Dosi, 1982).
The idea of a socio-technical regime reflects the interaction between the actors and institutions involved in creating and reinforcing a particular technological system. As described by Rip and Kemp (1998): ‘A sociotechnical regime is the rule-set or grammar embedded in a complex of engineering practices; production process technologies; product characteristics, skills and procedures; ways of handling relevant artefacts and persons; ways of defining problems; all of them embedded in institutions and infrastructures.’ This definition makes it clear that a regime consists in large part of the prevailing set of routines used by the actors in a particular area of technology.
A landscape represents the broader political, social and cultural values and institutions that form the deep structural relationships of a society. As such, landscapes are even more resistant to change than regimes.
In this picture of the innovation process, whereas the existing regime generates incremental innovation, radical innovations are generated in niches.
As a regime will usually not be totally homogeneous, niches occur, providing spaces that are at least partially insulated from ‘normal’ market selection in the regime: for example, specialized sectors of the market or locations where a slightly different institutional rule-set applies. Such niches can act as ‘incubation rooms’ for radical novelties (Schot, 1998).
Niches provide locations for learning processes to occur, and space to build up the social networks that support innovations, such as supply chains and user–producer relationships. The idea of promoting shifts to more sustainable regimes through the deliberate creation and support of niches, so-called ‘strategic niche management’, has been put forward by Kemp and colleagues (Kemp et al., 1998). This idea, that radical change comes from actors outside the current mainstream, echoes work on ‘disruptive innovation’ in the management literature (Utterback, 1994; Christensen, 1997).
Based on a number of historical case studies, this argues that firms that are successful within an existing technological regime typically pursue only incremental innovation within this regime, responding to the perceived demands of their customers. They may then fail to recognize the potential of a new innovation to create new markets, which may grow and eventually replace those for the existing mainstream technology.
Geels (2002, 2005) examined a number of technological transitions, for example that from sailing ships to steamships, using the three-level niche, regime, landscape model introduced above (see also Elzen et al., 2004). He argued that novelties typically emerge in niches, which are embedded in, but partially isolated from, existing regimes and landscapes. For example, transatlantic passenger transport formed a key niche for the new steamship system. If these niches grow successfully, and their development is reinforced by changes happening more slowly at the regime level, then it is possible that a regime shift will occur. Geels argues that regime shifts, and ultimately transitions to new socio-technological landscapes, may occur through a process of niche-cumulation. In this case, radical innovations are used in a number of market niches, which gradually grow and coalesce to form a new regime.
Building on this work, Kemp and Rotmans (2005) proposed the concept of transition management. This combines the formation of a vision and strategic goals for the long-term development of a technology area, with transition paths towards these goals and steps forward, termed experiments, that seek to develop and grow niches for more sustainable technological alternatives. The transition approach was adopted in the Fourth
Netherlands Environmental Policy Plan, and the Dutch Ministry of Economic Affairs (2004) is now applying it to innovation in energy policy.
The Ministry argues that this involves a new form of concerted action between market and government, based on:
Relationships built on mutual trust: Stakeholders want to be able to rely on a policy line not being changed unexpectedly once adopted, through commitment to the direction taken, the approach and the main roads formulated. The government places trust in market players by offering them ‘experimentation space’.
Partnership: Government, market and society are partners in the process of setting policy aims, creating opportunities and undertaking transition experiments, for example through ministries setting up ‘one stop shops’ for advice and problem solving.
Brokerage: The government facilitates the building of networks and coalitions between actors in transition paths.
Leadership: Stakeholders require the government to declare itself clearly in favour of a long-term agenda of sustainability and innovation that is set for a long time, and to tailor current policy to it.
In investigating some of the implications of the above ideas for policy making to promote more sustainable innovation, a couple of case studies (of UK low carbon energy innovation and of EC policy-making processes that support alternative energy sources in vehicles) and a review of similar policy analyses in Europe (Rennings et al., 2003) and the US (Alic et al., 2003) are worth considering. Foxon et al. (2005b) outlines five guiding principles for sustainable innovation policy based on the findings of these studies.
The first guiding principle argues for the development of a sustainable innovation policy regime that brings together appropriate strands of current innovation and environmental policy and regulatory regimes, and is situated between high-level aspirations (for example promoting sustainable development) and specific sectoral policy measures (for example a tax on non-recyclable materials in automobiles). This would require the creation of a long-term, stable and consistent strategic framework to promote a transition to more sustainable systems, seeking to apply the lessons that might be gleaned from experience with the Dutch Government’s current ‘Transition Approach’.
The second guiding principle proposes applying approaches based on systems thinking and practice, in order to engage with the complexity and systemic interactions of innovation systems and policy-making processes.
This type of systems thinking can inform policy processes, through the concept of ‘systems failures’ as a rationale for public policy intervention (Edquist, 1994; 2001; Smith, 2000), and through the identification and use of ‘techno-economic’ and ‘policy’ windows of opportunity (Nill, 2003; 2004; Sartorius and Zundel, 2005). It also suggests the value of promoting a diversity of options to overcome lock-in of current systems, through the support of niches in which learning can occur, the development of a skills base, the creation of knowledge networks, and improved expectations of future market opportunities.
The third guiding principle advances the procedural and institutional basis for the delivery of sustainable innovation policy, while acknowledging the constraints of time pressure, risk-aversion and lack of reward for innovation faced by real policy processes. Here, government and industry play complementary roles in promoting sustainable innovation, with government setting public policy objectives informed by stakeholder consultation and rigorous analysis, and industry providing the technical knowledge, resources and entrepreneurial spirit to generate innovation. Public–private institutional structures, reflecting these complementary roles, could be directed at specific sectoral tasks for the implementation of sustainable innovation, and involve a targeted effort to stimulate and engage sustainable innovation ‘incubators’.
The fourth guiding principle promotes the development of a more integrated mix of policy processes, measures and instruments that would cohere synergistically to promote sustainable innovation. Processes and criteria for improvement could include: applying sustainability indicators and sustainable innovation criteria; balancing benefits and costs of likely economic, environmental and social impacts; using a dedicated risk assessment tool; assessing instruments in terms of factors relevant to the innovation process; and applying growing knowledge about which instruments work well or poorly together, including in terms of overlapping, sequential implementation or replacement (Porter and van der Linde, 1995; Gunningham and Grabowsky, 1998; Makuch, 2003a; 2003b). The fifth guiding principle is that policy learning should be embedded in the sustainable innovation policy process. This suggests the value of providing a highly responsive way to modulate the evolutionary paths of sustainable technological systems and to mitigate the unintended harmful consequences of policies. This would involve monitoring and evaluation of policy implementation, and the review of policy impacts on sustainable innovation systems.
7. Conclusions and ways forward
This chapter has reviewed issues relating to the role of technological change and innovation in moving societies towards greater sustainability. Though the importance of technologies in helping to provide sustainable solutions is often promoted by commentators from all parts of the political spectrum, policy measures to promote such innovation have frequently failed to recognize the complexity and systemic nature of innovation processes. As we have seen, increasing returns to adoption in both technological systems and in supporting institutional systems may lead to lock-in, creating barriers to the innovation and deployment of technological alternatives.
This emerging understanding of innovation systems and how past technological transitions have occurred could provide insight into approaches for promoting radical innovation for greater sustainability, for example, through the support of niches and a diversity of options. However, efforts to steer or modulate such a transition will also require significant institutional change in many countries. For example, the UK policy style has been based largely on centralized decision-making processes and heavy emphasis on the use of market-based instruments without addressing other institutional and knowledge factors relating to the creation of markets for new technologies. This contrasts with a policy style of more decentralized and public–private collaborative decision-making, which has enabled the Netherlands to become a leader in practising and learning how a technology transition for sustainability could be promoted. Further practical experience and analysis will be needed for the implementation of the above ideas and principles for promoting sustainable innovation to overcome technological and institutional lock-in.
Estimating the social discount rate
In social cost–benefit analysis, the social discount function, D(t), is used to convert flows of future costs and benefits into their present equivalents. If the net present value of the investment exceeds zero, the project is efficient. The social discount rate, s(t), measures the annual rate of decline in the discount function, D(t). In continuous time, the two are connected by the equation:

$$s(t) = -\frac{\dot{D}(t)}{D(t)} \qquad (7.1)$$
A constant social discount rate, s, implies that the discount function declines exponentially: $D(t) = e^{-st}$.
As practitioners know, the value of the social discount rate is often critical in determining whether projects pass social cost–benefit analysis. As a result, spirited debates have erupted in the past over its correct conceptual foundation. Happily, the debate was largely resolved at a 1977 conference, where Lind (1982, p. 89) reported that the recommended approach is to ‘equate the social rate of discount with the social rate of time preference as determined by consumption rates of interest and estimated on the basis of the returns on market instruments that are available to investors’. Under this approach, the social discount rate, for a given utility function, can be expressed by the well-known accounting relation:

$$s = \delta + \eta g \qquad (7.2)$$

where $\delta$ is the utility discount rate (or the rate of pure time preference), $\eta$ is the elasticity of marginal utility and g is the rate of growth of consumption per capita. Even if the utility discount rate is zero, the social discount rate is positive when consumption growth, g, is positive and $\eta > 0$.
Equation (7.2) shows that in general, the appropriate social discount rate is not constant over time, but is a function of the expected future consumption path.
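To illustrate magnitudes (the numbers here are purely illustrative, not a recommendation): with a utility discount rate of 1.5 per cent, an elasticity of marginal utility of 1 and consumption growth of 2 per cent per annum, equation (7.2) gives

$$s = \delta + \eta g = 1.5\% + (1 \times 2\%) = 3.5\%.$$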
The discounting dilemma
In recent years, debates about the correct foundation for the social discount rate have been replaced by controversy over discounting and intergenerational equity. To see that the evaluation of long term investments is extremely sensitive to the discount rate, observe that the present value of £100 in 100 years’ time is £37 at a 1 per cent discount rate, £5.20 at 3 per cent, £2 at 4 per cent and only 12p at 7 per cent. Because small changes in the discount rate have large impacts on long-term policy outcomes, arguments about the ‘correct’ number have intensified. For instance, the marginal damage from emissions of carbon dioxide is estimated by the FUND model (Tol, 2005) to be $58/tC at a 0 per cent utility discount rate and $11/tC at a 1 per cent utility discount rate, with damages of −$2.3/tC (i.e. net benefits) at a 3 per cent utility discount rate. Indeed, exponential discounting at moderate discount rates implies that costs and benefits in the far future are effectively irrelevant. While this might be entirely appropriate for individuals (who will no longer be alive), many people would argue that this is an unsatisfactory basis for public policy.
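A quick check of those present-value figures (a throwaway sketch, assuming discrete annual compounding):

```python
# Present value of £100 received in 100 years at various discount rates.
for r in (0.01, 0.03, 0.04, 0.07):
    pv = 100 / (1 + r) ** 100
    print(f"{r:.0%}: £{pv:.2f}")
# Output: 1%: £36.97   3%: £5.20   4%: £1.98   7%: £0.12
```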
3. Zero discounting
Given these difficulties, some people find it tempting to suggest that we should simply not discount the cash flows in social cost–benefit analysis. But not discounting amounts to using a social discount rate of s = 0 per cent, which is extremely dubious given our experience to date with positive consumption growth: g > 0 in equation (7.2). In contrast, a credible argument for employing a zero utility discount rate ($\delta = 0$) can be advanced, based upon the ethical position that the weight placed upon a person’s utility should not be reduced simply because they live in the future. Indeed, this ethical position is adopted by Stern et al. (2006) and supported by a string of eminent scholars, including Ramsey (1928), Pigou (1932), Harrod (1948) and Solow (1974), and even Koopmans (1965), who expressed an ‘ethical preference for neutrality as between the welfare of different generations’. Broome (1992) provides a coherent argument for zero discounting based on the presumption of impartiality found both in the utilitarian tradition (Sidgwick, 1907; Harsanyi, 1977) and also in Rawls (1971), who concluded that ‘there is no reason for the parties [in the original position] to give any weight to mere position in time.’
However, not all philosophers and economists accept the presumption of impartiality. Beckerman and Hepburn (2007) stress that reasonable minds may differ; Arrow (1999), for instance, prefers the notion of agent-relative ethics advanced by Scheffler (1982). Even if one does accept a presumption of impartiality and zero discounting, there are four counter-arguments that might overturn this presumption: the ‘no optimum’ argument, the ‘excessive sacrifice’ argument, the ‘risk of extinction’ argument, and the ‘political acceptability’ argument. We examine all four.
First, Koopmans (1960, 1965) demonstrated that in an infinite horizon model, there is no optimum if a zero rate of time preference is employed.
Consider a unit of investment today that yields a tiny but perpetual stream of consumption. Each unit of investment causes a finite loss of utility today, but generates a small gain in utility to an infinite number of generations. It follows that no matter how low current consumption, further reductions in consumption are justified by the infinite benefit provided to future generations. The logical implication of zero discounting is the impoverishment of the current generation. Furthermore, the same logic applies to every generation, so that each successive generation would find itself being impoverished in order to further the well-being of the next. Broome (1992), however, counters that humanity will not exist forever. Furthermore, Asheim et al. (2001) demonstrate that zero utility discounting (or ‘equity’, as they term it) does not rule out the existence of an optimum under certain reasonable technologies.
Second, even if we suppose a finite but large number of future generations, a zero discount rate is argued to require excessive sacrifice by the current generation, in the form of extremely high savings rates. Arrow (1999) concludes that the ethical requirement to treat all generations alike imposes morally unacceptable and excessively high savings rates on each generation. But Parfit (1984) has argued that the excessive sacrifice problem is not a reason to reject zero utility discounting. Rather, it should be resolved by employing a utility function with a minimum level of well-being below which no generation should fall. Asheim and Buchholz (2003) point out that the ‘excessive sacrifice’ argument can be circumvented, under plausible technologies, by a utility function which is more concave.
Third, each generation has a non-zero probability of extinction. Suppose that the risk of extinction follows a Poisson process such that the conditional probability of extinction at any given time is constant. Yaari (1965) demonstrated that this is equivalent to a model with an infinite time horizon where utility is discounted at the (constant) Poisson rate. As such, accounting for the risk of extinction is mathematically identical to positive utility discounting.
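A sketch of the equivalence: if the extinction date T arrives at a constant hazard rate $\lambda$, so that survival to time t has probability $e^{-\lambda t}$, then expected undiscounted utility equals utility discounted at rate $\lambda$:

$$\mathbb{E}\left[\int_0^{T} U(c_t)\,dt\right] = \int_0^{\infty} \Pr(T > t)\,U(c_t)\,dt = \int_0^{\infty} e^{-\lambda t}\,U(c_t)\,dt.$$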
While admitting the strength of this argument, Broome (1992) asserts that extinction risk and the pure rate of time preference ‘should be accounted for separately’. But extinction risk is clearly not project-specific, so it would be accounted for in the same way across all projects (except projects aimed at reducing an extinction risk). Irrespective of how this is done, the mathematical effect is the same – the well-being of future generations is effectively discounted. Hence Dasgupta and Heal (1979) argue that ‘one might find it ethically feasible to discount future utilities at positive rates, not because one is myopic, but because there is a positive chance that future generations will not exist’. Given that the risk of human extinction is probably (and hopefully) quite low, the appropriate utility discount rate would be very small.
Finally, Harvey (1994) rejects zero utility discounting on the basis that it is so obviously incompatible with the time preference of most people that its use in public policy would be illegitimate. While the significance of revealed preferences is debatable (Beckerman and Hepburn, 2007), Harvey is surely correct when he states that the notion that events in ten thousand years are as important as those occurring now simply does not pass ‘the laugh test’.
In summary, the ‘no optimum’ argument and the ‘excessive sacrifice’ argument for positive time preference are refutable. In contrast, the ‘risk of extinction’ argument provides a sound conceptual basis for a positive utility discount rate. This might be backed up at a practical level by the ‘political acceptability’ argument, or by the more fundamental view that impartiality is not a compelling ethical standpoint. Overall, the arguments for a small positive utility discount rate appear persuasive. Zero discounting is not intellectually compelling.
4. Declining discount rates
Over recent years, several persuasive theoretical reasons have been advanced to justify a social discount rate that declines as time passes. Declining discount rates are appealing to people concerned about intergenerational equity, but perhaps more importantly, they are likely to be necessary for achieving intergenerational efficiency. Groom et al. (2005) provide a detailed review of the case for declining discount rates. This section provides an overview of the main arguments.
Evidence on individual time preference
Evidence from experiments over the last couple of decades suggests that humans use a declining discount rate, in the form of a ‘hyperbolic discounting’ function, in making intertemporal choices. In these experiments, people typically choose between different rewards (for example, money, durable goods, sweets or relief from noise) with different delays, so that an implicit discount function can be constructed. The resulting discount functions suggest that humans employ a higher discount rate for consumption trade-offs in the present than for trade-offs in the future.
While other interpretations, such as similarity relations (Rubinstein, 2003) and sub-additive discounting (Read, 2001), are possible, the evidence for hyperbolic discounting is relatively strong.
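For concreteness, one widely used generalized-hyperbolic parameterization (an illustrative form; it is not the only one fitted in this literature) is:

$$D(t) = (1 + \alpha t)^{-\beta/\alpha}, \qquad s(t) = -\frac{\dot{D}(t)}{D(t)} = \frac{\beta}{1 + \alpha t},$$

so the implied discount rate starts at $\beta$ and declines towards zero as the delay grows, in contrast to the constant rate implied by exponential discounting.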
Pearce et al. (2003) present the argument that if people’s preferences count, and these behavioural results reveal underlying preferences, then declining discount rates ought to be integrated into social policy formulation. Pearce et al. recognize, however, that the assumptions in this chain of reasoning might be disputed. First, as hyperbolic discounting provides an explanation for procrastination, drug addiction, undersaving, and organizational failure, the argument that behaviour reflects preferences is weakened. Second, Pearce et al. and Beckerman and Hepburn (2007) stress that Hume would resist concluding that the government should discount the future hyperbolically because individual citizens do. The recent literature on ‘optimal paternalism’ suggests, amongst other things, that governments may be justified in intervening not only to correct externalities, but also to correct ‘internalities’ – behaviour that is damaging to the actor. Whether or not one supports a paternalistic role for government, one might question the wisdom of adopting a schedule of discount rates that explains procrastination, addiction and potentially the unforeseen collapses in renewable resource stocks (Hepburn, 2003).
Pessimism about the future
Equation (7.2) makes it clear that the consumption rate of interest – and thus also the social rate of time preference in a representative agent economy – is a function of consumption growth. If consumption growth, g, will fall in the future, and the utility discount rate, $\delta$, and the elasticity of marginal utility, $\eta$, are constant, it follows from equation (7.2) that the social discount rate also declines through time. Furthermore, if decreases in the level of consumption are expected – so that consumption growth is negative – the appropriate social rate of time preference could be negative. Declines in the level of consumption are impossible in an optimal growth model in an idealized economy with productive capital. For the social discount rate to be negative, either capital must be unproductive, or a distortion, such as an environmental externality, must have driven a wedge between the market return to capital and the consumption rate of interest (Weitzman, 1994).
Uncertainty
It is an understatement to say that we can have little confidence in economic forecasts several decades into the future. In the face of such uncertainty, the most appropriate response is to incorporate it into our economic models.
Suppose that the future comprises two equally likely states with social discount rate either 2 per cent or 6 per cent. Discount factors corresponding to these two rates are shown in Table 7.1. The average of those discount factors is called the ‘certainty-equivalent discount factor’ and working backwards from this we can find the ‘certainty-equivalent discount rate’, which starts at 4 per cent and declines asymptotically to 2 per cent as time passes. In this uncertain world, a project is efficient if it passes social cost–benefit analysis using the certainty-equivalent discount rate, which declines through time.
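A small numerical sketch of this construction (reproducing the two-state example, with continuous compounding assumed):

```python
import numpy as np

# Two equally likely future states, with discount rates of 2% and 6%.
rates = np.array([0.02, 0.06])
for t in (1, 10, 50, 100, 200, 400):
    factor = np.mean(np.exp(-rates * t))  # certainty-equivalent discount factor
    ce_rate = -np.log(factor) / t         # implied certainty-equivalent rate
    print(f"t = {t:>3} years: certainty-equivalent rate = {ce_rate:.4f}")
# The rate starts near 4% and declines towards 2%: at long horizons the
# low-rate state dominates the expected discount factor.
```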
The two key assumptions in this example are that the discount rate is uncertain and persistent, so that the expected discount rate in one period is correlated with the discount rate the period before. If these two assumptions hold, intergenerational efficiency requires a declining social discount rate (Weitzman, 1998, 2001).
The particular shape of the decline is determined by the specification of uncertainty in the economy. Newell and Pizer (2003) use data on past US interest rates to estimate a reduced-form time series process which is then employed to forecast future rates. The level of uncertainty and persistence in their forecasts is high enough to generate a relatively rapid decline in the certainty-equivalent discount rate, with significant policy implications. While econometric tests reported in Groom et al. (2006) suggest that Newell and Pizer (2003) should have employed a state-space or regime-shifting model instead, their key conclusion remains intact – the certainty-equivalent discount rate declines at a rate that is significant for the appraisal of long term projects.
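The flavour of their exercise can be reproduced with a stylized Monte Carlo simulation; the AR(1) process and all parameter values below are illustrative assumptions, not Newell and Pizer’s estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

def ce_discount_rates(horizon=200, n_paths=10_000,
                      r0=0.04, mean=0.04, persistence=0.98, shock_sd=0.005):
    """Simulate persistent AR(1) interest-rate paths and compute the
    certainty-equivalent discount rate at each horizon."""
    rates = np.full(n_paths, r0)
    cum_discount = np.ones(n_paths)
    ce = []
    for t in range(1, horizon + 1):
        rates = mean + persistence * (rates - mean) \
                + shock_sd * rng.standard_normal(n_paths)
        cum_discount *= np.exp(-rates)
        # Expected discount factor across paths, inverted back to a rate
        ce.append(-np.log(cum_discount.mean()) / t)
    return ce

ce = ce_discount_rates()
for t in (1, 50, 100, 200):
    print(f"t = {t:3d}: {100 * ce[t - 1]:.2f} per cent")
```

The higher the persistence parameter, the faster the certainty-equivalent rate falls with the horizon; with no persistence the decline largely disappears, which is why the two assumptions noted above are key.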
Gollier (2001, 2002a, 2002b) provides an even more solidly grounded justification for declining discount rates by specifying an underlying utility function and analysing an optimal growth model. He demonstrates that a similar result can hold for certain types of utility functions. Under uncertainty, the social discount rate in equation (7.2) needs to be modified to account for an additional prudence effect:

r = δ + ηg − ½ηPσ²   (7.3)

where P is the measure of relative prudence introduced by Kimball (1990) and σ² is the variance of the growth rate of consumption.
This prudence effect leads to ‘precautionary saving’, reducing the discount rate. Moreover, if there is no risk of recession and people have decreasing relative risk aversion, the optimal social discount rate is declining over time. These two sets of results show that employing a declining social discount rate is necessary for intergenerational efficiency (Weitzman, 1998) and also for intergenerational optimality under relatively plausible utility functions (Gollier, 2002a, b). The theory in this section provides a compelling reason for employing declining discount rates in social cost–benefit analysis.
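To get a feel for the size of the prudence correction in equation (7.3), take illustrative values (these numbers are assumptions for the example, with P = η + 1 as holds under iso-elastic utility):

$$
\delta = 1\%,\; \eta = 2,\; g = 2\%,\; \sigma = 4\%,\; P = \eta + 1 = 3:\qquad
r = 0.01 + 2(0.02) - \tfrac{1}{2}(2)(3)(0.04)^2 \approx 4.5\%.
$$

The correction is small for modest σ, but if uncertainty about growth compounds with the horizon – for example, because shocks to growth are positively correlated – the precautionary term grows and pulls long-horizon discount rates down, which is the mechanism behind Gollier’s declining schedule.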
Intergenerational equity
Not only are declining social discount rates necessary for efficiency, it turns out that they are also necessary for some specifications of intergenerational equity. Chichilnisky (1996, 1997) introduces two axioms for sustainable development requiring that the ranking of consumption paths be sensitive to consumption in both the present and the very long run. Sensitivity to the present means that rankings are not solely determined by the ‘tails’ of the consumption stream. Sensitivity to the future means that there is no date after which consumption is irrelevant to the rankings. These axioms lead to the following criterion:

W = θ ∫₀^∞ U(c_t) Δ(t) dt + (1 − θ) lim_{t→∞} U(c_t)   (7.4)

where Δ(t) is the utility discount function and 0 < θ < 1 is the weight placed on the integral part. Heal (2003) notes that the Chichilnisky criterion has no solution under standard exponential discounting, where Δ(t) = exp(−δt). It makes sense initially to maximize the integral part, before switching to maximizing the asymptotic path. This approach fails to yield a solution, however, because it is always optimal to delay the switching point: doing so increases the integral part with no reduction in the asymptotic part. Interestingly, however, equation (7.4) does have a solution provided that the utility discount rate, δ, declines over time, asymptotically approaching zero. In short, a declining utility discount rate is necessary for a solution satisfying Chichilnisky’s axioms of sustainable development. Li and Löfgren (2000) propose a similar model which examines a society of two individuals, a utilitarian and a conservationist. The implication of this model is similarly that the utility discount rate must decline along the optimal path.
Conclusions on declining discount rates
Incorporating uncertainty into social cost–benefit analysis leads to the conclusion that a declining social discount rate is necessary for efficient decision-making. Indeed, it was on this basis that the United Kingdom government incorporated declining social discount rates into its most recent HM Treasury (2003) Green Book, which contains the official guidance on government project and policy appraisal. Pessimistic future projections and, to a lesser extent, the evidence from individual behaviour could further support that conclusion. Finally, the fact that declining discount rates also emerge from the specifications of intergenerational equity employed by Chichilnisky (1996, 1997) and Li and Löfgren (2000) suggests that they are an ideal way to navigate between the demands of intertemporal efficiency and the concerns of intergenerational equity.
5. Alternatives to discounting
Although declining discount rates provide an appealing solution to the dual problems of intergenerational efficiency and equity, there are other possible solutions. Schelling (1995) proposes an alternative based around ignoring discount rates and specifying a richer utility function. Kopp and Portney (1999) and Page (2003) suggest using voting mechanisms. Finally, discounting reflects a consequentialist ethical position, so alternatives based upon deontological ethics are considered.
Schelling’s utility function approach
Schelling (1995) argues that investments for people in the far-distant future should not be evaluated using the conventional discounted cash flow framework. Instead, such investments should be considered much like foreign aid. For instance, investment now to reduce future greenhouse gas emissions should not be viewed as saving, but rather as a transfer of consumption from ourselves to people living in the distant future, which is similar to making sacrifices now for the benefit of our contemporaries who are distant from us geographically or culturally. The only difference is that the transfer mechanism is no longer the ‘leaky bucket’ of Okun (1975), but rather an ‘incubation bucket’, where the gift multiplies in transit. Given that people are generally unwilling to make sacrifices for the benefit of richer people distant in geography or culture, we should not expect such sacrifices for richer people distant in time.
In other words, the ‘utility function approach’, as Schelling (1995) calls it, would drop the use of a discount rate, and instead present policy makers with a menu of investments and a calculation of the utility increase in each world region (and time period) for each investment. This approach has the merit of insisting on transparency in the weights placed on consumption flows at each point in time and space, which is to be welcomed. However, debate would focus on the appropriate utility function to employ to value consumption increases in different regions at different times. Ultimately, in addition to reflecting marginal utilities at different points in time and space, the weights would probably also have to reflect the human tendency to discount for unfamiliarity along temporal, spatial and cultural dimensions.
Voting mechanisms
Many scholars have argued that although discounting is appropriate for short term policy evaluation, it is stretched to breaking point by complex long term challenges such as climate change. For instance, global climate policy is likely to have non-marginal effects on the economy, implying that conventional consumption discounting is inappropriate. Consumption discounting rests on the assumption that the project or policy being evaluated is a small perturbation on the business as usual path. If the project is non-marginal, then the consumption discounting ‘short cut’ is inapplicable, and a full welfare comparison of different paths is necessary instead.
Of course, conducting a full welfare comparison involves a certain amount of complexity. Alternatives to the welfare economics approach include the use of mock referenda, proposed by Kopp and Portney (1999), where a random sample of the population would be presented with a detailed description of the likely effects – across time and space – of the policy being implemented or not. The description would include all relevant information, such as the full costs of the policy and even the likelihood of other countries taking relevant action. Respondents would then vote for or against the policy. By varying the estimate of the costs for different respondents, a willingness to pay locus for the policy would be determined.
Their approach has the appeal of valuing the future by asking citizens directly, rather than by examining their behaviour or by reference to particular moral judgements. Problems with this approach, as Kopp and Portney (1999) note, include the usual possible biases in stated preference surveys and the difficulty of providing adequate information for an appropriate decision on such a complex topic.
Page (2003) also proposes that voting should be considered as an alternative to discounted cash flow analysis for important long term public decisions. In contrast to cost–benefit analysis, with its emphasis on achieving efficiency, he notes that voting mechanisms (with one-person-one-vote) are more likely to produce fair outcomes.
One difficulty with both proposals is that the people affected by the policy – future human beings – remain disenfranchised, just as they are on current markets. Unlike Kopp and Portney, Page tackles this problem by proposing to extend voting rights hypothetically to unborn future generations. Under the (unrealistic) assumption that there will be an infinite number of future generations, he concludes that intergenerational voting amounts to an application of the von Weizsäcker (1965) overtaking criterion. This leads to a dictatorship of the future, so ‘safeguards’ protecting the interests of the present would be needed which, Page argues, would be easy to construct given the position of power of the present generation.
The challenge with this proposal is to make it operational. Without safeguards, the implication is that the present should impoverish itself for future generations. As such the safeguards would in fact constitute the crux of this proposal. Determining the appropriate safeguards amounts to asking how the interests of the present and the future should be balanced, and this appears to lead us back to where we started, or to employing a different ethical approach altogether.
Deontological approaches
Sen (1982) argues that the welfare economic framework is insufficiently robust to deal with questions of intergenerational equity because it fails to incorporate concepts of liberty, rights and entitlements as ends in themselves. He considers an episode of torture, where the person tortured (the ‘heretic’) is worse off and the torturer (the ‘inquisitor’) is better off after the torture. Further, suppose that although the inquisitor is better off, he is still worse off than the heretic. Then the torture is justified under a utilitarian or Rawlsian social welfare function. Sen (1982) contends that society may want to grant the heretic a right to personal liberty that cannot be violated merely to achieve a net gain in utility or an improvement for the worst-off individual. He adds that an analogy between pollution and torture is ‘not absurd’, and that perhaps the liberty of future generations is unacceptably compromised by the present generation’s insouciance about pollution.
If the consequentialist foundations of cost–benefit analysis are deemed inadequate, discounted cash flow analysis must be rejected where it generates results that contravene the rights of future generations. Howarth (2003) lends support for this position, arguing that although cost–benefit analysis is useful to identify potential welfare improvements, it is trumped by the moral duty to ensure that opportunities are sustained from generation to generation. Page (1997) similarly argues that we have a duty – analogous to a constitutional requirement – to ensure that intergenerational equity is satisfied before efficiency is considered.
Pigou (1932) agreed that such duties existed, describing the government as the ‘trustee for unborn generations’. But Schwartz (1978) and Parfit (1983) question whether the notion of a duty to posterity is well-defined, on the grounds that decisions today determine not only the welfare but also the identities of future humans. On this view, every person born, whether wealthy or impoverished, should simply be grateful that, by our actions, we have chosen them from the set of potential persons. Howarth (2003) answers that, at a minimum, we owe well-defined duties to the newly born, thus creating duties for at least an expected lifetime.
Assuming a duty to posterity is conceptually possible, the final step is to specify the content of the duty. Howarth (2003) reviews several different formulations of the duty, which ultimately appear to amount to a duty to ensure either weak or strong sustainability. As such, deontological approaches comprise the claim that intergenerational equity is captured by a (well-defined) duty of sustainability to future generations, and that this duty trumps considerations of efficiency. While these approaches do not reject the use of discounting, they subjugate efficiency considerations to those of rights and/or equity. This is not inconsistent with the view expressed in section 2 above that cost–benefit analysis is a guide for decision-making rather than a substitute for judgement (Lind, 1982).
6. Conclusion
This chapter has explained why discounting occupies such an important and controversial place in long-term policy decisions. While intertemporal trade-offs will always be important, the developments reported in this chapter provide reason to hope that discounting may eventually become less controversial. Arguments for a zero social discount rate need not be taken seriously unless they are based upon extremely pessimistic future economic projections. Arguments for a zero utility discount rate are more plausible, but not necessarily convincing. Indeed, there is a good case for employing a positive, but very low, utility discount rate to reflect extinction risk.
Furthermore, the fact that declining social discount rates are necessary for efficiency reduces the degree of conflict between intergenerational equity and efficiency. Economists detest inefficiency, and it is surely only a matter of time before other governments adopt efficient (declining) social discount rates. If so, the discounting controversies of the future will concern the particular specification of economic uncertainty and the precise shape of the decline, rather than the particular (constant) discount rate.
Finally, even if declining discount rates reduce the tension between intergenerational equity and efficiency, they do not eliminate it. Discounting and cost–benefit analysis provide a useful guide to potential welfare improvements, but unless infallible mechanisms for intergenerational transfers become available, project-specific considerations of intergenerational equity will continue to be important. The ethical arguments, consequentialist and deontological, outlined in this chapter provide some guidance.
Ultimately, however, the appropriate trade-off between equity and efficiency, intergenerationally or otherwise, raises fundamental issues in philosophy. Consensus is unlikely, if not impossible. At least the clarification that efficient discount rates should be declining reduces the domain of disagreement.
8 Population and sustainability Geoffrey McNicoll
1. Introduction
Problems of sustainability can arise at almost any scale of human activity that draws on natural resources or environmental amenity. In some regions, minuscule numbers of hunter-gatherers are thought to have hunted Pleistocene megafauna to extinction; complex pre-industrial societies have disappeared, unable to adapt to ecological changes – not least, evidence suggests, changes they themselves wrought (Burney and Flannery, 2005; Janssen and Scheffer, 2004). But modern economic development has brought with it sustainability problems of potentially far greater magnitude – a result not only of the technological capabilities at hand but of the demographic realities of much larger populations and an accelerated pace of change.
A simple picture of those modern realities is seen in Figure 8.1. It charts a staggered series of population expansions in major world regions since the beginning of the industrial era, attributable to lowered mortality resulting from nutritional improvements, the spread of medical and public health services, and advances in education and income. In each of the regions population growth slows and eventually halts as fertility also drops, completing the pattern known as the demographic transition. The population trajectories shown for the 21st century are forecasts, of course, but moderately secure ones, given improving economic conditions and absent major unforeseen calamities. Worldwide, the medium UN projections foresee world population increasing from its 2005 level of 6.5 billion to a peak of about 9 billion around 2075. Very low fertility, if it persists, will lead to actual declines in population size – an all but certain near-term prospect in Europe and a plausible prospect by mid-century in East Asia.
Historically, the increase in population over the course of a country’s demographic transition was typically around three- to five-fold, with the pace of change seldom much above 1 per cent per year; in the transitions still underway the increases may end up more like ten-fold or even greater, and growth rates have peaked well above 2 per cent per year. In both situations the size changes are accompanied by shifts in age composition – from populations in which half are aged below 20 years to ones with half over 50 – and in spatial concentration, from predominantly rural to overwhelmingly urban.
The lagged onset and uneven pace of the transitions across regions generate striking regional differences in population characteristics at any given time. Many population–environment and population–resource issues are thus geographically delimited; for others, however, the scale of environmental spillovers, migration flows and international trade may require an interregional or global perspective. This chapter reviews the implications of these various features of modern demographic change for sustainable development – gauged in terms of their effects both on the development process and on its outcomes (human well-being and environmental conditions).
The discussion need not be narrowed at the outset by specifying just what sustainable development sustains. The conventional polar choices are the wherewithal needed to assure the welfare of future generations – a generalized notion of capital – and that part of it that is not human-made – what is now usually termed natural capital. Conservation of the former, allowing substitutability among forms of capital, is weak sustainability, and conservation of the latter is strong sustainability. (See, for example, Chapters
3, 4 and 6 of this volume on these concepts and the problems associated with them.) I take as a premise, however, that sustainable development is a topic of interest and importance to the extent that substitutability of natural capital with other kinds of capital in the processes yielding human well-being is less than perfect.
2. Population and resources in the theory of economic growth
For the classical economists, fixity of land was a self-evident resource constraint on the agrarian economies of their day. The course of economic growth was simply described. With expanding (man-made) capital and labour, an initial period of increasing returns (derived from scale economies and division of labour) gave way over time to diminishing returns, eventually yielding a stationary state. To Adam Smith and many others, that notional end point was a bleak prospect: profit rates dropped toward zero, population growth tailed off, and wages fell to subsistence levels. A very different, more hopeful, vision of stationarity, still in the classical tradition, was set out by J.S. Mill in a famous chapter of his Principles of Political Economy (1848): population and capital would again have ceased to grow, but earlier in the process and through individual or social choice rather than necessity. Productivity, however, could continue to increase. Gains in well-being would come also from the earlier halting of population growth, and consequent lower population–resource ratios.
A similarly optimistic depiction of a future stationary state – with the ‘economic problem’ solved and human energies diverted to other pursuits – was later drawn by Keynes (1932).
As technological change increasingly came to be seen as the driver of economic growth, and as urban industrialization distanced most economic activity from the land, theorists of economic growth lost interest in natural resources. With a focus only on capital, labour and technology, and with constant rates of population growth, savings and technological change, the models yielded steady-state growth paths in which output expanded indefinitely along with capital and labour. More elaborate formulations distinguished among different sectors of the economy. In dualistic growth models, for example, a low-productivity, resource-based agricultural sector provided labour and investment to a dynamic but resource-free modern sector, which eventually dominated the economy (see also Chapter 14).
With recognition of non-linearities associated with local increasing returns and other self-reinforcing mechanisms in the economy, there could be more than one equilibrium growth path, with the actual outcome sensitive to initial conditions or to perhaps fortuitous events along the way (see, for example, Becker et al., 1990; Foley, 2000).
Although it typically did not do so, this neoclassical modelling tradition was no less able than its classical forebears to take account of resource constraints. (See Lee, 1991, on this point.) Renewable resources would simply add another reproducible form of capital as a factor of production.
Non-renewable resources, assuming they were not fully substitutable by other factors and not indefinitely extendable through technological advances, would be inconsistent with any steady-state outcome that entailed positive population growth. Requiring population growth, in the long term, to come to an end is not, of course, a radical demand to make of the theory.
While the actual role of population and resources in economic development is an empirical issue, a lot of the debate on the matter has been based on modelling exercises little more complicated than these. Much of it takes the form of window dressing, tracing out over time the implications of a priori, if often implicit, assumptions about that role. A single assumed functional form or relationship – an investment function, a scale effect, presence (or absence) of a resource constraint – after some initial period comes to dominate the model’s behaviour. Familiar examples can be drawn from two models occupying polar positions in the resources debate of the 1970s and 1980s: the model underpinning Julian Simon’s The Ultimate Resource (1981) and that supporting the Club of Rome’s Limits to Growth scenarios (Meadows et al., 2004). In Simon’s case, the existence of resource constraints on the economy is simply denied. Positive feedbacks from a larger population stimulate inventiveness, production and investment, and favour indefinite continuation of at least moderate population growth, leading both to economic prosperity and to vastly expanded numbers of people. (The discussion of the model’s output ignores that latter expansion by being couched only in per capita values – see Simon, 1977.) For the Meadows team, negative feedback loops working through food production crises and adverse health effects of pollution lead to dramatic population collapses – made even sharper when lagged effects are introduced. Such models, heroically aggregated, are better seen as rhetorical devices, buttresses to qualitative argument, rather than serious efforts at simulation.
Their output may point to parts of the formulation that it is important to get right, but it does not help in getting it right. While their authors were persuaded that they were accurately portraying the qualitative evidence about population and resources, as they respectively read it, the models in themselves merely dramatized their differences.
More focused models can achieve more, if at a lower level of ambition.
The demonstration of ‘trap’ situations involving local environmental degradation is a case in point – see Dasgupta (1993). As an example, the PEDA (Population–Environment–Development–Agriculture) model developed by Lutz et al. (2002) describes the interactions among population growth, education, land degradation, agricultural production and food insecurity.
It permits simulation of the vicious circle in which illiteracy, land degradation and hunger can perpetuate themselves, and points to the conditions required for that cycle to be broken. While still quite stylized, it is cast at a level that permits testing of its behaviour against actual experience, supporting its value for policy experiment.
3. Optimal population trajectories
Since population change is in some measure a matter of social choice, it can notionally be regarded as a policy variable in a modelling exercise. Varying it over its feasible range then allows it to be optimized for a specified welfare function. The concept of an optimum population size for some designated territory – at which, other things equal, per capita economic well-being (or some other welfare criterion) was maximized – followed as a simple consequence of diminishing returns to labour. A small literature on the subject begins with Edwin Cannan in the late nineteenth century (see Robbins, 1927) and peters out with Alfred Sauvy (1952–54) in the mid-twentieth.
This is distinct, of course, from the investigation of human ‘carrying capacity’ – such as the question of how many people the earth can support.
At a subsistence level of consumption some of these numbers are extravagant indeed – Cohen (1995) assembles many of them – but the maximization involved, although in a sense it is concerned with the issue of sustainability, has closer ties to the economics of animal husbandry than to human welfare. (The technological contingency of such calculations is well indicated by the estimate, due to Smil (1991), that fully one-third of the present human population would not exist were it not for the food derived from synthetic nitrogenous fertilizer – a product of the Haber-Bosch process for nitrogen fixation developed only in the early 20th century.) If it is assumed that present-day rich-country consumption patterns are to be replicated worldwide, carrying capacity plummets: for Pimentel et al.
(1999) the earth’s long-term sustainability calls for a population less than half its present level.
The question of optimal size also arises for the populations of cities. The urban ‘built environment’, after all, is the immediate environment of half the human population. Beyond some size, scale diseconomies deriving from pollution, congestion and other negative externalities affecting health or livability may eventually outweigh economies of agglomeration (see, for example, Mills and de Ferranti, 1971; Tolley, 1974). But other dimensions of the built environment, including its aesthetic qualities, would equally warrant attention in a welfare criterion. Singling out the relationship of population size to the subjective welfare of the average inhabitant, among all the other contributors to urban well-being, seems of limited value. Not surprisingly, like the broader topic of optimum population, this too has not proven a fruitful area of research.
What might be of more interest is the optimal path of population change over time. The age-structure dynamics of population growth are analogous to the vintage dynamics of capital stock, though with more limited scope for policy influence. For specified welfare criteria, optimal population trajectories can be derived to show how resource-constrained stationarity should be approached (see Pitchford, 1974; Arthur and McNicoll, 1977; Zimmerman, 1989).
Abstract theorizing of this kind is a means of playing with ideas rather than deriving actual policies. Nonetheless, just such an optimization exercise, part static and part dynamic, lay behind the introduction in 1979 of China’s radical one-child-per-family policy. The background, recounted by Greenhalgh (2005), was the belated conviction on the part of China’s leadership in the 1970s that the country’s population growth was damaging its development prospects, and the consequent recasting of the problem, as they saw it, from being one for social scientists and political ideologues to one for systems engineers and limits-to-growth theorists. The latter experts were at hand in the persons of a group of engineers and scientists (led by a missile engineer, Song Jian), who became the principals in promoting the new technocratic approach. They investigated both the static optimum – the target population size – and alternative trajectories that would lead toward it. On the former, as they summarized it: ‘We have done studies based on likely economic development trends, nutritional requirements, freshwater resources, and environmental and ecological equilibrium, and we conclude that 700 million seems to be China’s optimum population in the long run’ (Song et al., 1985, p. 214). They then solved the optimal control problem of how fertility should evolve to reach the target population over the next century, given that the peak population was not to exceed 1.2 billion, that there were pre-set constraints on the acceptable lower bound of fertility and upper bound of old-age dependency, and that the transition to the target population was to be smooth, while minimizing the total person-years lived in excess of the 700 million target. The resulting policy called for fertility to be quickly brought down to its lower bound, held there for 50 years or so (yielding, after a time, negative population growth), then allowed to rise back to replacement level. While various minimum fertility levels were considered, one child per family was argued to be the best. The human costs of attaining such a trajectory (involving ‘a lot of unpleasantness in the enforcement of the program’) and the social and economic problems of the ensuing rapid population ageing were held to be unavoidable in making up for the ‘dogged stubbornness of the 1950s’ when Maoist pronatalism prevailed (Song et al., 1985, p. 267).
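One stylized way to write down the trajectory problem just described – the notation and functional form are a reconstruction for illustration, not the Song group’s published formulation – is:

$$
\min_{f(\cdot)} \int_0^T \max\{N(t) - \bar N,\, 0\}\, dt
\quad \text{subject to} \quad
N(t) \le 1.2 \times 10^9,\qquad f(t) \ge f_{\min},\qquad D(t) \le D_{\max},
$$

where N(t) is population size evolving under age-structured dynamics driven by the fertility path f(t), N̄ is the 700 million target, and D(t) is the old-age dependency ratio. The bang-bang character of the published solution – fertility dropped to its floor, held, then released – is characteristic of control problems whose objective is linear in the state.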
For both countries and cities, the specification of a welfare criterion to be optimized requires decisions on the ingredients of well-being and on how its distribution over the population and over time is to be valued. The inherent arbitrariness of that exercise explains the lack of enthusiasm for the concept of an optimum as a formal construct – though the idea may hold some political potency. Changes in trade and technology – either of which can transform economies of scale – erode what little meaning there is in a static optimum population for a country or locality. A fortiori, the inherent unpredictability of those trends, along with the many unknowns in future environmental change, vitiates the usefulness of more ambitious modelling over time – modelling that has necessarily to assume known dynamics.
4. Exhaustible resources and environmental services
Past worries about rapid or continued population growth have often been linked to the idea that a country – or the world – is running out of some supposedly critical natural resource (see Demeny, 1989 for an historical perspective). There have been numerous candidates for those resources in the past. Mostly, such claims have turned out to be greatly overstated; almost always they neglect or underplay the scope for societal adaptation through technological and social change. A classic case was the concern in 19th century Britain that its industry would be crippled as coal supplies were mined out (Jevons, 1865). The widely-publicized wagers between economist Julian Simon and biologist Paul Ehrlich on whether stocks of selected mineral resources were approaching exhaustion, to be signalled by steadily rising prices, were all won by Simon as prices fell over the specified period (Simon, 1996, pp. 35–6). A prominent historian of China titled a study of that country’s environmental history: ‘three thousand years of unsustainable development’ (Elvin, 1993).
Moreover, even if we accept, contra Simon in The Ultimate Resource, that stocks of many resources are indeed finite and exhaustible, it does not follow that the link to population is necessarily of much consequence. For many resources, indeed, the pace of approach to exhaustion might be at most marginally affected by feasible changes in population growth. As a 1986 panel report from the US National Research Council put it bluntly: ‘slower population growth delay[s] the time at which a particular stage of resource depletion is reached, [but] has no necessary or even probable effect on the number of people who will live under a particular stage of resource depletion. . . . [T]he rate of population growth has no effect on the number of persons who are able to use a resource, although it does, of course, advance the date at which exhaustion occurs . . . Unless one is more concerned with the welfare of people born in the distant future than those born in the immediate future, there is little reason to be concerned about the rate at which population growth is depleting the stock of exhaustible resources’ (US National Research Council, 1986, p. 15).
But that judgement is altogether too dismissive of the problem as a whole. ‘Mining’ a resource that would be potentially renewable, such as a fishery or an aquifer, or degrading land through erosion or salination may be a population-related effect. (The resources allowed as potential sources of concern by the NRC panel were fuelwood, forest land, and fish; many would add access to fresh water.) These are cases where the concept of a sustainable yield is straightforward enough, but constructing and maintaining the institutional conditions required to safeguard that yield are demanding. Far from a society simply using up one resource and moving on to other things – presumably having replaced that part of its natural capital by other resources or by other forms of capital – the outcome may amount to an effectively irreversible loss in welfare.
The shift in focus here is from physical ‘stuff’, epitomized by stocks of minerals in the ground, to environmental services that humans draw upon.
Environmental services encompass not only provision of food and fuel but also climate regulation, pollination, soil formation and retention, nutrient cycling, and much else. And they include direct environmental effects on well-being through recreation and aesthetic enjoyment. A massive study of time trends in the use of these services, judged against sustainable levels, is the Millennium Ecosystem Assessment. In its first report (2005), the Assessment finds that most of the services it examined are being degraded or drawn on at unsustainable rates. Dryland regions, covering two-fifths of the world’s land surface and containing one-third of the world population, are especially affected.
But to what extent can this degradation be linked to population change rather than to economic growth or to the numerous factors that might lead to irresponsible patterns of consumption? People’s numbers, but also their proclivities to consume and their exploitative abilities, can all be factors in degrading environmental services. In stylized form, this proposition is conveyed in the familiar Ehrlich–Holdren ‘IPAT’ identity: Impact = Population × Affluence × Technology (Ehrlich and Holdren, 1972). ‘Impact’ here indicates a persisting rather than transitory environmental effect. It is an external intrusion into an ecosystem which tends to reduce its capacity to provide environmental services. An example of an environmental impact is a country’s carbon dioxide emissions, which degrade the environmental service provided by the atmosphere in regulating heat radiation from the earth’s surface. The P × A × T decomposition in that case would be population times per capita GDP times the ‘carbon intensity’ of the economy.
At a given level of affluence and carbon intensity, emissions rise in proportion to population.
Interpreted as a causal relationship rather than as an identity, the IPAT equation is commonly used to emphasize the responsibility for environmental damage on the part, jointly, of population size, a high-consumption lifestyle, and environmentally-destructive technology, each amplifying the others. Implicitly, it asserts that these factors can together be seen as the main human causes of degradation. The categorization should not, of course, be taken for granted. In particular, social organizational and behavioural factors would often warrant separate scrutiny as causes of degradation rather than being subsumed within A and T.
If P, A and T were independent of each other, the multiplicative relationship would be equivalent to an additive relationship among growth rates. In the carbon case, the growth rate of emissions would equal the sum of the growth rates of the three components. However, P, A and T are not in fact independent of each other. For any defined population and environment, they are variables in a complex economic, demographic and socio-cultural system. Each also has major distributional dimensions and is a function of time. Consumption – or any other measure of human welfare – is an output of this system; environmental effects, both intended and unintended, are outputs as well. And even at the global level the system is not autonomous: it is influenced by ‘natural’ changes in the environment and by environmental feedbacks from human activity.
Because of the dependency among P, A and T, the Ehrlich–Holdren formula cannot resolve disputes on the relative contributions of factors responsible for environmental degradation. For this task, Preston (1994) has proposed looking at the variances of the growth rates of I, P, A and T over different regions or countries. Writing these growth rates as i, p, a and t, the additive relationship i = p + a + t implies the following relationship among variances and covariances:

Var(i) = Var(p) + Var(a) + Var(t) + 2Cov(p, a) + 2Cov(p, t) + 2Cov(a, t).
The covariance terms are the interaction effects. If each is relatively small in a given case, there is a simple decomposition of the impact variance into the variance imputed to each factor. Otherwise, the one or more significant interaction terms can be explicitly noted.
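A minimal sketch of this decomposition on made-up regional growth-rate data (the numbers are purely illustrative, not Preston’s):

```python
import numpy as np

# Hypothetical annual growth rates (per cent) for five regions:
# population (p), affluence (a), technology / carbon intensity (t).
p = np.array([2.1, 1.8, 0.4, 0.9, 2.5])
a = np.array([1.0, 3.5, 2.2, 0.5, 1.2])
t = np.array([-0.5, -1.0, -1.5, 0.2, -0.3])
i = p + a + t  # growth of impact follows from the IPAT identity

var = lambda x: np.var(x, ddof=0)
cov = lambda x, y: np.cov(x, y, ddof=0)[0, 1]

total = var(i)
parts = {
    "Var(p)": var(p), "Var(a)": var(a), "Var(t)": var(t),
    "2Cov(p,a)": 2 * cov(p, a), "2Cov(p,t)": 2 * cov(p, t),
    "2Cov(a,t)": 2 * cov(a, t),
}
for name, value in parts.items():
    print(f"{name:>10}: {value:6.3f}  ({100 * value / total:5.1f}% of Var(i))")
print(f"    Var(i): {total:6.3f}")
```

By construction the six terms sum exactly to Var(i), so each factor’s share – and the size of the interaction terms – can be read off directly.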
In Preston’s analysis of carbon emission data for major world regions over 1980–90, used as an illustration, population growth makes a minor contribution to the total variance; the major contributors are the growth of A and T, with a substantial offsetting effect from the interaction of A and T.
Given the 50 per cent or so increase in global population projected for this century, the future role of population growth in carbon emissions is nonetheless of some importance. Detailed studies of this relationship include Bongaarts (1992), Meyerson (1998), and O’Neill et al. (2001).
Important too, of course, are the demographic consequences of any resulting climate change, such as those working through shifts in food production, disease patterns and sea levels. Specification of a more general functional relationship, I = f(P, A, T), permits calculation of impact elasticities with respect to the three factors, rather than implicitly assuming elasticities of 1. At the country level there is some evidence that the population elasticity is indeed close to 1 for carbon emissions but may be higher for some other pollutants (see Cole and Neumayer, 2004).
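Making the elasticity calculation explicit: in the log-linear form standardly used for such estimates (a generic specification, not necessarily the one in Cole and Neumayer, 2004),

$$
\ln I = \alpha + \varepsilon_P \ln P + \varepsilon_A \ln A + \varepsilon_T \ln T + u,
\qquad \varepsilon_P = \frac{\partial \ln I}{\partial \ln P},
$$

the pure IPAT identity corresponds to the special case ε_P = ε_A = ε_T = 1.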
Complicating any estimation of population–environment relationships is the non-linearity of environmental systems. The Millennium Ecosystem
Assessment, mentioned above, warns of an increasing likelihood of ‘non-linear changes in ecosystems (including accelerating, abrupt, and potentially irreversible changes), with important consequences for human well-being’ (2005, p. 11). Holling (1986) notes that ecosystems may be resilient under the pressure of human activity until a point is reached at which there is sharp discontinuous change. Kasperson et al. (1995) identify a series of thresholds in nature–society trajectories as human activity in a region intensifies beyond sustainability: first a threshold of impoverishment, then endangerment, and finally criticality – the stage at which human wealth and well-being in the region enter an irreparable decline. The working out of the process is detailed in particular settings: criticality is exemplified by the Aral Sea basin. More dramatic historical cases are described by Diamond (2005). Curtailing growth in human numbers may not be a sufficient change to deflect those outcomes, nor may it even be necessary in some circumstances (as discussed below), but population increase has usually been an exacerbating factor.
5. Institutional mediation
Most important links between population and environmental services are institutionally contingent. Under some institutional arrangements – for example, a strong management regime, well-defined property rights, or effective community norms and sanctions – population growth in a region need not adversely affect the local environment. Access to a limited resource can be rationed or governed in some other way so that it is not overused. Or the institutional forms may be such that the population growth itself is prevented – by negative feedbacks halting natural increase (an apparent condition found often in hunter-gatherer societies) or by diverting the growth elsewhere, through migration. If this institutional mediation ultimately proves inadequate to the task, the limits on the environmental services being drawn on would be exceeded and degradation would ensue. This can happen well short of those limits if economic or political change undermines a management regime or erodes norms and sanctions. Excessive deforestation can often be traced to such institutional breakdowns (or to ill-considered efforts at institutional reform) rather than to population growth itself. In other cases, a resource may have been so abundant that no management or sanctions were needed: that is a setting where the familiar ‘tragedy of the commons’ may unfold as the number of claimants to the resource or their exploitative abilities increases (see Hardin, 1968).
An appreciable amount of literature now exists on these issues of institutional design, both theoretical and empirical, and ranging in scale from local common-pool resources such as irrigation water or community forests to the global environment (see, for example, Ostrom, 1990; Baden and Noonan, 1998). Small common-pool resource systems receive most attention: a favourite example is the experience of Swiss alpine villages, where social regulation limiting overgrazing has been maintained for many generations. Larger systems usually show less symmetry in participant involvement and participant stakes: benefits can be appropriated by favoured insiders, costs shed to outsiders. Judgement of sustainability in such cases may depend on where a system’s boundaries are placed, and whether those cost-shedding options can be curtailed (see McNicoll, 2002).
Physical spillover effects of human activity beyond the location of that activity, such as downwind acid rain from industrial plants or downstream flooding caused by watershed destruction, present relatively straightforward technical problems for the design of a governance regime.
The greater difficulties are likely to be political. These can be formidable even within a country, a fortiori where the environmental effects involve degradation of a global commons. Population change here raises added complications. Thus, in negotiating a regulatory regime to limit global carbon emissions, anticipated population growth in a country can be treated either as a foreordained factor to be accommodated by the international community – occasioning a response analogous to political redistricting in a parliamentary democracy – or treated wholly as a domestic matter (an outcome of social policy) that should not affect assignment of emission quotas.
Adverse effects of human activity can also be transferred from one region to another through the normal economic relationships among societies, notably through trade. A poorer society may be more willing to incur environmental damage in return for economic gain, or be less able to prevent it. The concept of a community’s ‘ecological footprint’ was developed to account for such displaced effects by translating them back into material terms, calculating the total area required to sustain each community’s population and level of consumption (see Wackernagel and Rees, 1996, and, for criticism, Neumayer, 2003; see also Chapter 20). An implicit presumption of environmental autarky would disallow rich countries buying renewable resources from poor countries; notionally, if implausibly, they could maintain their consumption by somehow reducing their population.
6. Population ageing and population decline
As noted earlier, the age composition of populations that emerge from the transition to low mortality and fertility are heavily weighted toward the elderly, and after transitional effects on the age distribution have worked themselves out, actual declines in population numbers are likely. For example, if fertility were to stay at the current European average of around
1.4 lifetime births per woman (0.65 births below the replacement level), each generation would be about one-third smaller than its predecessor.
Change of that magnitude could not be offset by any politically feasible level of immigration.
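The arithmetic behind the ‘one-third smaller’ figure, treating the ratio of actual to replacement fertility as the generation-to-generation growth factor (an approximation that ignores mortality differences and migration):

$$
\frac{1.4}{1.4 + 0.65} = \frac{1.4}{2.05} \approx 0.68,
$$

that is, each generation is roughly 32 per cent smaller than the one before.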
After the ecological damage associated with industrialization it might be expected that the ending of the demographic transition would have positive effects on sustainability. There are fewer additional people, or even fewer people in total, and those there are will mostly live compactly in cities and have the milder and perhaps more environmentally-friendly consumption habits of the elderly. There may be scope for ecological recovery. In Europe, for instance, the evidence suggests a strong expansion in forested area is occurring as land drops out of use for cultivation and grazing (Waggoner and Ausubel, 2001). The so-called environmental Kuznets curve (see Chapter 15) – the posited inverted-U relationship between income and degradation – gives additional grounds for environmental optimism, since post-transition societies are likely to be prosperous. But there are countervailing trends as well. Household size tends to diminish, for example, and small households, especially in sprawling suburbs, are less efficient energy users (see O’Neill and Chen, 2002).
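The environmental Kuznets curve mentioned above is usually estimated in a reduced quadratic form; the notation here is the generic one from that literature rather than anything specific to this chapter:

$$
\ln E = \alpha + \beta_1 \ln y + \beta_2 (\ln y)^2 + u, \qquad \beta_1 > 0,\ \beta_2 < 0,
$$

where E is a measure of degradation and y is per capita income; degradation peaks at the turning-point income given by ln y* = −β₁/(2β₂) and declines thereafter.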
Moreover, ecosystem maintenance increasingly calls for active intervention rather than simply halting damage. Mere neglect does not necessarily yield restoration. Many human-transformed landscapes that have valued productive or amenity qualities similarly require continuing maintenance. Expectations of strengthened environmentalism around the world may not be borne out – preferences, after all, tend to adapt to realities – and even a strong environmental ethic is powerless in the face of irreversibilities.
Population decline, of course, can come about for reasons other than post-transition demographic maturity: from wars or civil violence and natural disasters, and (a potentially larger demographic threat) from epidemic disease (see Smil, 2005). These events too have implications for sustainability, at least locally. Their effect is magnified to the degree they do harm to the productive base of the economy (including its natural resource base) and to the social institutions that maintain the coherence of a society over time.
7. Conclusions and research directions
Much of the research that would shed light on demographic aspects of sustainability is best covered under the general heading of sustainable development. This is largely true for the long-run changes that constitute the demographic transition. To a considerable degree the transition is neither an autonomous process nor policy-led, but a by-product of economic and cultural change, and it is this latter that should be the research focus.
For example, in studies of rainforest destruction – a standard illustration of adverse demographic-cum-development impact on the environment – a basic characteristic of the system is precisely its demographic openness.
Demographic ‘pressure’ supposedly leads to land clearing for pioneer settlement, but a broader research perspective would investigate the economic incentives favouring that kind of settlement over, say, cityward migration (Brazil’s rural population in 2005 was one-third smaller than its 1970 peak). As to policy influence, migration and fertility might be seen as potential candidates to be demographic control variables in a population–economy–environment system, but even if they technically lie within a government’s policy space, aside from cross-border movement, most governments have very limited if any direct purchase over them.
While there may thus be less content in population and sustainability than first appears, an important research agenda remains. A critical subject, signalled above, is the design of governing institutions for population–economy–environment systems, able to ensure sustainable resource use. Those institutions are of interest at a range of system levels – local, national and international – and are likely to entail intricate combinations of pricing and rationing systems and means of enforcement.
At the local level, and possibly at other levels too, governing institutions might seek to include measures aiming at the social control of population growth.
A less elusive but similarly important research area concerns demographic effects on consumption. How resource- and energy-intensive will the consumption future be, given what we know about the course of population levels and composition? How do we assess substitutability in consumption – say, between ‘real’ and ‘virtual’ environmental amenity? And, well beyond the demographic dimension but still informed by it, are we, in confronting sustainability problems, dealing with time-limited effects of a population peaking later this century (with an additional 2–3 billion people added to the world total) but then dropping, allowing some measure of ecological recovery, or are we entering a new, destabilized environmental era in which sustainability in any but the weakest sense is continually out of reach?
9 Technological lock-in and the role of innovation Timothy J. Foxon
1. Sustainability and the need for technological innovation
Despite increases in our understanding of the issues raised by the challenge of environmental, social and economic sustainability, movement has been frustratingly slow towards achieving levels of resource use and waste production that are within appropriate environmental limits and provide socially acceptable levels of economic prosperity and social justice.
As first described by Ehrlich and Holdren (1971), the environmental impact (I) of a nation or region may be usefully decomposed into three factors: population (P), average consumption per capita, which depends on affluence (A), and environmental impact per unit of consumption, which depends on technology (T), in the equation (identity) I = P × A × T.
Limiting growth in environmental impact and eventually reducing it to a level within the earth’s ecological footprint (Chapter 20) will require progress on all three of these factors. Chapter 8 discussed issues relating to stabilizing population levels, and Chapter 16 addresses social and economic issues relating to moving towards sustainable patterns of consumption. This chapter discusses the challenge of technological innovation required to achieve radical reductions in average environmental impact per unit of consumption.
Section 2 argues that individual technologies, and their development, are best understood as part of wider technological and innovation systems.
Section 3 examines how increasing returns to the adoption of technologies may give rise to ‘lock-in’ of incumbent technologies, preventing the adoption of potentially superior alternatives. Section 4 examines how similar types of increasing returns apply to institutional frameworks of social rules and constraints. Section 5 brings these two ideas together, arguing that technological systems co-evolve with institutional systems. This may give rise to lock-in of current techno-institutional systems, such as high carbon energy systems, creating barriers to the innovation and adoption of more sustainable systems. Section 6 examines the challenge for policy makers of promoting innovation for a transition to more sustainable socio-economic systems. Finally, Section 7 provides some conclusions and assesses the implications for future research and policy needs.
2. Understanding technological systems
The view that individual technologies, and the way they develop, are best understood as part of wider technological and innovation systems was significantly developed by studies in the late 1980s and early 1990s. In his seminal work on the development of different electricity systems, Hughes (1983) showed the extent to which such large technical systems embody both technical and social factors. Similarly, Carlsson and Stankiewicz (1991) examined the ‘dynamic knowledge and competence networks’ making up technological systems. These approaches enable both stability and change in technological systems to be investigated within a common analytical framework. Related work examined the processes of innovation from a systems perspective. Rather than being categorized as a one-way, linear flow from R&D to new products, innovation is seen as a process of matching technical possibilities to market opportunities, involving multiple interactions and types of learning (Freeman and Soete, 1997). An innovation system may be defined as ‘the elements and relationships which interact in the production, diffusion and use of new, and economically-useful, knowledge’ (Lundvall, 1992). Early work focused on national systems of innovation, following the pioneering study of the Japanese economy by Freeman (1988). In a major multi-country study, Nelson (1993) and collaborators compared the national innovation systems of 15 countries, finding that the differences between them reflected different institutional arrangements, including: systems of university research and training and industrial R&D; financial institutions; management skills; public infrastructure; and national monetary, fiscal and trade policies. Innovation is the principal source of economic growth (Mokyr, 2002) and a key source of new employment opportunities and skills, as well as providing potential for realizing environmental benefits (see recent reviews by Kemp, 1997; Ruttan, 2001; Grubler et al., 2002; and Foxon, 2003).
The systems approach emphasizes the role of uncertainty and cognitive limits to firms’ or individuals’ ability to gather and process information for their decision-making, known as ‘bounded rationality’ (Simon, 1955; 1959).
Innovation is necessarily characterized by uncertainty about future markets, technology potential and policy and regulatory environments, and so firms’ expectations of the future have a crucial influence on their present decision- making. Expectations are often implicitly or explicitly shared between firms in the same industry, giving rise to trajectories of technological development
which can resemble self-fulfilling prophecies (Dosi, 1982; MacKenzie, 1992).
3. Technological lock-in
The view outlined above suggests that the development of technologies both influences and is influenced by the social, economic and cultural setting in which they develop (Rip and Kemp, 1998; Kemp, 2000). This leads to the idea that the successful innovation and take-up of a new technology depends on the path of its development – so-called ‘path dependency’ (David, 1985) – including the particular characteristics of initial markets, the institutional and regulatory factors governing its introduction and the expectations of consumers. Of particular interest is the extent to which such factors favour incumbent technologies against newcomers.
Arthur examined increasing returns to adoption: positive feedbacks whereby the more a technology is adopted, the more likely it is to be adopted further. He argued that these can lead to ‘lock-in’ of incumbent technologies, preventing the take-up of potentially superior alternatives (Arthur, 1989). Arthur (1994) identified four major classes of increasing returns – scale economies, learning effects, adaptive expectations and network economies – all of which contribute to the positive feedback favouring existing technologies. The first of these, scale economies, occurs when unit costs decline with increasing output. For example, when a technology has large set-up or fixed costs because of indivisibilities, unit production costs decline as those costs are spread over increasing production volume. An existing technology thus often carries significant ‘sunk costs’ from earlier investments and, while these are still yielding benefits, incentives to invest in alternative technologies will be diminished. Learning effects act to improve products or reduce their cost as specialized skills and knowledge accumulate through production and market experience. This idea was first formulated as ‘learning-by-doing’ (Arrow, 1962), and learning curves have been empirically demonstrated for a number of technologies, showing unit costs declining with cumulative production (IEA, 2000). Adaptive expectations arise as increasing adoption reduces uncertainty and both users and producers become increasingly confident about the quality, performance and longevity of the current technology. This means that there may be a lack of ‘market pull’ for alternatives. Network or co-ordination effects occur when advantages accrue to agents adopting the same technologies as others (see also Katz and Shapiro, 1985). This effect is clear in telecommunications, for example: the more people who have a (compatible) mobile phone or fax machine, the greater the advantage of having one yourself. Similarly, infrastructures develop around the attributes of existing technologies, creating a barrier to the adoption of alternative technologies with different attributes.
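The learning-curve relationship can be stated compactly: unit cost falls by a fixed fraction, the learning rate, with each doubling of cumulative production. The short Python sketch below illustrates the arithmetic only; the function name, the 20 per cent learning rate and the initial cost are hypothetical choices for illustration, not figures drawn from IEA (2000).

```python
import math

def unit_cost(cumulative_output, initial_cost, learning_rate):
    """Cost of a unit after `cumulative_output` units have been produced.

    Single-factor learning curve: C(x) = C0 * x**(-b), where the
    progress ratio 2**(-b) equals (1 - learning_rate), so each
    doubling of cumulative output cuts unit cost by `learning_rate`.
    """
    b = -math.log2(1.0 - learning_rate)
    return initial_cost * cumulative_output ** (-b)

# Hypothetical example: 20% learning rate, initial unit cost of 100.
for x in (1, 2, 4, 8, 16):
    print(f"cumulative output {x:>2}: unit cost {unit_cost(x, 100.0, 0.20):.1f}")
```

With these illustrative numbers, each doubling multiplies unit cost by 0.8, so costs fall from 100 to 80, 64, 51.2 and 41.0 over four doublings of cumulative output.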
Arthur (1989) showed that, in a simple model of two competing technologies, these effects can amplify small, essentially random, initial variations in market share, resulting in one technology achieving complete market dominance at the expense of the other – referred to as technological ‘lock-in’.
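A minimal simulation conveys the logic of this result. The Python sketch below is a deliberately simplified rendering of the mechanism, not Arthur’s original specification, and all parameter values are invented for illustration: two agent types arrive in random order, each with a natural preference for a different technology, and each agent’s payoff rises with the number of previous adopters. Once the gap in adopter counts is large enough to outweigh natural preference, every subsequent agent joins the leader and the market locks in.

```python
import random

def simulate(n_agents=10_000, natural_pref=1.0, returns_per_adopter=0.02, seed=None):
    """Two technologies, A and B, compete under increasing returns to adoption."""
    rng = random.Random(seed)
    adopters = {"A": 0, "B": 0}
    for _ in range(n_agents):
        agent_type = rng.choice(["R", "S"])  # R-agents lean to A, S-agents to B
        payoff_a = (natural_pref if agent_type == "R" else 0.0) + returns_per_adopter * adopters["A"]
        payoff_b = (natural_pref if agent_type == "S" else 0.0) + returns_per_adopter * adopters["B"]
        adopters["A" if payoff_a >= payoff_b else "B"] += 1
    return adopters

# Each run typically ends with one technology holding almost the whole
# market, but which one wins depends on the random order of early arrivals.
for run in range(5):
    print(simulate(seed=run))
```

With these numbers, once one technology leads by 50 adopters the increasing-returns term exceeds the natural preference, so both agent types choose the leader thereafter: small early fluctuations decide the outcome, and the outcome is then effectively irreversible.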
He speculated that, once lock-in has occurred, it can prevent the take-up of potentially superior alternatives. David and others performed a series of historical studies demonstrating the plausibility of path-dependence and lock-in arguments. The best-known example is the QWERTY keyboard layout (David, 1985), originally designed to slow typists down and so prevent the jamming of early mechanical typewriters, which has since achieved almost universal dominance at the expense of arguably superior designs. Another example is the ‘light water’ nuclear reactor design, originally developed for submarine propulsion, which, following political pressure for rapid peaceful use of nuclear technology, was adopted for the first nuclear power stations and quickly became the standard design in the US (Cowan, 1990). Specific historical examples of path dependence have been criticized, particularly QWERTY (Liebowitz and Margolis, 1995), as has the failure to explain how ‘lock-in’ is eventually broken, but the empirical evidence strongly supports the original theoretical argument (David, 1997).
4. Institutional lock-in
As described in Section 2, the systems approach emphasizes that individual technologies are supported not only by the wider technological system of which they are part, but also by the institutional framework of social rules and conventions that reinforces that technological system. To better understand the development of such frameworks, insights may be drawn from work in institutional economics, which is currently undergoing a renaissance (Schmid, 2004). Institutions may be defined as any form of constraint that human beings devise to shape human interaction (Hodgson, 1988). These include formal constraints, such as legislation, economic rules and contracts, and informal constraints, such as social conventions and codes of behaviour. There has been much interest in the study of how institutions evolve over time, how this creates drivers of and barriers to social change, and how it influences economic performance. North (1990) argues that all the features identified by Arthur as creating increasing returns to the adoption of technologies can also be applied to institutions. New institutions often entail high set-up or fixed costs. There are significant learning effects for organizations that arise because of the opportunities provided by the institutional framework. There are co-ordination effects, directly via contracts with other organizations and indirectly via induced investment and the informal constraints generated. Adaptive expectations occur because the increased prevalence of contracting based on a specific institutional framework reduces uncertainty about the continuation of that framework. In summary, North argues, ‘the interdependent web of an institutional matrix produces massive increasing returns’ (North, 1990, p. 95).
Building on this work, Pierson (2000) argues that political institutions are particularly prone to increasing returns, because of four factors: the central role of collective action; the high density of institutions; the possibilities for using political authority to enhance asymmetries of power; and the complexity and opacity of politics. Collective action follows from the fact that, in politics, the consequences of an individual or organization’s actions are highly dependent on the actions of others. This means that institutions usually have high start-up costs and are subject to adaptive expectations. Furthermore, because formal institutions and public policies place extensive, legally binding constraints on behaviour, they are subject to learning, co-ordination and expectation effects, and so become difficult to change, once implemented. The allocation of political power to particular actors is also a source of positive feedback. When actors are in a position to impose rules on others, they may use this authority to generate changes in the rules (both formal institutions and public policies) so as to enhance their own power. Finally, the complexity of the goals of politics, as well as the loose and diffuse links between actions and outcomes, make politics inherently ambiguous and mistakes difficult to rectify. These four factors create path dependency and lock-in of particular political institutions, such as regulatory frameworks. This helps to explain significant features of institutional development: specific patterns of timing and sequence matter; a wide range of social outcomes may be possible; large consequences may result from relatively small or contingent events; particular courses of action, once introduced, can be almost impossible to reverse; and, consequently, political development is punctuated by critical moments or junctures that shape the basic contours of social life.
5. Co-evolution of technological and institutional systems
The above ideas of systems thinking and increasing returns to both technologies and institutions may be combined by analysing the process of co-evolution of technological and institutional systems (Unruh, 2000; Nelson and Sampat, 2001). As modern technological systems are deeply embedded in institutional structures, the above factors leading to institutional lock-in can interact with and reinforce the drivers of technological lock-in.
Unruh (2000, 2002) suggests that modern technological systems, such as the carbon-based energy system, have undergone a process of technological and institutional co-evolution, driven by path-dependent increasing returns to scale. He introduces the term ‘techno-institutional complex’ (TIC), composed of technological systems and the public and private institutions that govern their diffusion and use, and which become ‘inter-linked, feeding off one another in a self-referential system’ (Unruh, 2000, p. 825).
In particular, he describes how these techno-institutional complexes create persistent incentive structures that strongly influence system evolution and stability. Building on the work of Arthur (1989, 1994), he shows how the positive feedbacks of increasing returns, both to technologies and to their supporting institutions, can create rapid expansion in the early stages of development of technology systems. However, once a stable techno-institutional system is in place, it acquires stability and resistance to change. In evolutionary language, the selection environment strongly favours incremental changes to the current system, but strongly discourages radical changes which would fundamentally alter the system. Thus, a system which has benefited from a long period of increasing returns, such as the carbon-based energy system, may become ‘locked-in’, preventing the development and take-up of alternative technologies, such as low carbon, renewable energy sources. The work of Pierson (2000) on increasing returns to political institutions, discussed in Section 4, is particularly relevant here. Actors who benefit from the current institutional framework (including formal rules and public policies), such as those with large investments in current market-leading technologies, will act to maintain that framework, thus contributing to the lock-in of the current technological system.
Unruh uses the general example of the electricity generation TIC; we can apply his example to the particular case of the UK electricity system. Here, institutional factors – driven by the desire to satisfy increasing electricity demand and by a regulatory framework based on increasing competition and reducing unit prices to the consumer – fed back into the expansion of the technological system. In the UK, institutional change (the liberalization of electricity markets) led to the so-called ‘dash for gas’ in the 1990s – a rapid expansion of power stations using gas turbines. These were smaller and quicker to build than coal or nuclear power stations, thus generating quicker profits in the newly liberalized market. The availability of gas turbines was partly the result of this technology being transferred from the aerospace industry, where it had already benefited from a long period of investment (and state support) and increasing returns. This technological change reinforced the institutional drivers to meet increasing electricity demand by expanding generation capacity, rather than, for example, by creating stronger incentives for energy efficiency measures. Such insights were employed in a recent study of current UK innovation systems for new and renewable energy technologies (ICEPT/E4Tech, 2003; Foxon et al., 2005a), which argued that institutional barriers are leading to systems failures that prevent the successful innovation and take-up of a wider range of renewable technologies.
6. Promoting innovation for a transition to more sustainable socio-economic systems
We conclude by examining some of the implications of this systems view of technological change and innovation for policy making that aims to promote a transition to more sustainable socio-economic systems. As we have argued, individual technologies are supported not only by the wider technological system of which they are part, but also by the institutional framework of social rules and conventions that reinforces that technological system. This can lead to the lock-in of existing techno-institutional systems, such as the high carbon, fossil-fuel based energy system. Of course, lock-in of systems does not last for ever, and analysis of examples of historical change may usefully increase understanding of how radical systems change occurs.
A useful framework for understanding how the wider technological system constrains the evolution of technologies is provided by the work on technological transitions by Kemp (1994) and Geels (2002). Kemp (1994) proposed three explanatory levels: technological niches, socio-technical regimes and landscapes. The basic idea is that each higher level has a greater degree of stability and resistance to change, due to interactions and linkages between the elements forming that configuration. Higher levels then impose constraints on the direction of change of lower levels, reinforcing technological trajectories (Dosi, 1982).
The idea of a socio-technical regime reflects the interaction between the actors and institutions involved in creating and reinforcing a particular technological system. As described by Rip and Kemp (1998): ‘A sociotechnical regime is the rule-set or grammar embedded in a complex of engineering practices; production process technologies; product characteristics, skills and procedures; ways of handling relevant artefacts and persons; ways of defining problems; all of them embedded in institutions and infrastructures.’ This definition makes it clear that a regime consists in large part of the prevailing set of routines used by the actors in a particular area of technology.
A landscape represents the broader political, social and cultural values and institutions that form the deep structural relationships of a society. As such, landscapes are even more resistant to change than regimes.
In this picture of the innovation process, whereas the existing regime generates incremental innovation, radical innovations are generated in niches.
As a regime will usually not be totally homogeneous, niches occur, providing spaces that are at least partially insulated from ‘normal’ market selection in the regime: for example, specialized sectors of the market or locations where a slightly different institutional rule-set applies. Such niches can act as ‘incubation rooms’ for radical novelties (Schot, 1998).
Niches provide locations for learning processes to occur, and space to build up the social networks that support innovations, such as supply chains and user–producer relationships. The idea of promoting shifts to more sustainable regimes through the deliberate creation and support of niches, so-called ‘strategic niche management’, has been put forward by Kemp and colleagues (Kemp et al., 1998). This idea – that radical change comes from actors outside the current mainstream – echoes work on ‘disruptive innovation’ in the management literature (Utterback, 1994; Christensen, 1997).
Based on a number of historical case studies, this literature argues that firms that are successful within an existing technological regime typically pursue only incremental innovation within that regime, responding to the perceived demands of their customers. They may then fail to recognize the potential of a new innovation to create new markets, which may grow and eventually displace those for the existing mainstream technology.
Geels (2002, 2005) examined a number of technological transitions, for example that from sailing ships to steamships, using the three-level niche, regime, landscape model introduced above (see also Elzen et al., 2004). He argued that novelties typically emerge in niches, which are embedded in, but partially isolated from, existing regimes and landscapes. For example, transatlantic passenger transport formed a key niche for the new steamship system. If these niches grow successfully, and their development is reinforced by changes happening more slowly at the regime level, then it is possible that a regime shift will occur. Geels argues that regime shifts, and ultimately transitions to new socio-technological landscapes, may occur through a process of niche-cumulation. In this case, radical innovations are used in a number of market niches, which gradually grow and coalesce to form a new regime.
Building on this work, Kemp and Rotmans (2005) proposed the concept of transition management. This combines the formation of a vision and strategic goals for the long-term development of a technology area with transition paths towards these goals and steps forward, termed experiments, that seek to develop and grow niches for more sustainable technological alternatives. The transition approach was adopted in the Fourth Netherlands Environmental Policy Plan, and the Dutch Ministry of Economic Affairs (2004) is now applying it to innovation in energy policy.
The Ministry argues that this involves a new form of concerted action between market and government, based on:
Relationships built on mutual trust: Stakeholders want to be able to rely on a policy line not being changed unexpectedly once adopted, implying commitment to the direction taken, the approach chosen and the main roads formulated. The government places trust in market players by offering them ‘experimentation space’.
Partnership: Government, market and society are partners in the process of setting policy aims, creating opportunities and undertaking transition experiments, for example through ministries setting up ‘one stop shops’ for advice and problem solving.
Brokerage: The government facilitates the building of networks and coalitions between actors in transition paths.
Leadership: Stakeholders require the government to declare itself clearly in favour of a long-term agenda of sustainability and innovation that is set for a long time, and to tailor current policy to it.
In investigating some of the implications of the above ideas for policy making to promote more sustainable innovation, two case studies (of UK low carbon energy innovation and of EC policy-making processes supporting alternative energy sources in vehicles) and a review of similar policy analyses in Europe (Rennings et al., 2003) and the US (Alic et al., 2003) are worth considering. Foxon et al. (2005b) outline five guiding principles for sustainable innovation policy based on the findings of these studies.
The first guiding principle argues for the development of a sustainable innovation policy regime that brings together appropriate strands of current innovation and environmental policy and regulatory regimes, and is situated between high-level aspirations (for example promoting sustainable development) and specific sectoral policy measures (for example a tax on non-recyclable materials in automobiles). This would require the creation of a long-term, stable and consistent strategic framework to promote a transition to more sustainable systems, seeking to apply the lessons that might be gleaned from experience with the Dutch Government’s current ‘Transition Approach’.
The second guiding principle proposes applying approaches based on systems thinking and practice, in order to engage with the complexity and systemic interactions of innovation systems and policy-making processes.
This type of systems thinking can inform policy processes, through the concept of ‘systems failures’ as a rationale for public policy intervention (Edquist, 1994; 2001; Smith, 2000), and through the identification and use of ‘techno-economic’ and ‘policy’ windows of opportunity (Nill, 2003; 2004; Sartorius and Zundel, 2005). It also suggests the value of promoting a diversity of options to overcome lock-in of current systems, through the support of niches in which learning can occur, the development of a skills base, the creation of knowledge networks, and improved expectations of future market opportunities.
The third guiding principle advances the procedural and institutional basis for the delivery of sustainable innovation policy, while acknowledging the constraints of time pressure, risk-aversion and lack of reward for innovation faced by real policy processes. Here, government and industry play complementary roles in promoting sustainable innovation, with government setting public policy objectives informed by stakeholder consultation and rigorous analysis, and industry providing the technical knowledge, resources and entrepreneurial spirit to generate innovation. Public–private institutional structures, reflecting these complementary roles, could be directed at specific sectoral tasks for the implementation of sustainable innovation, and involve a targeted effort to stimulate and engage sustainable innovation ‘incubators’.
The fourth guiding principle promotes the development of a more integrated mix of policy processes, measures and instruments that cohere synergistically to promote sustainable innovation. Processes and criteria for improvement could include: applying sustainability indicators and sustainable innovation criteria; balancing the benefits and costs of likely economic, environmental and social impacts; using a dedicated risk assessment tool; assessing instruments in terms of factors relevant to the innovation process; and applying growing knowledge about which instruments work well or poorly together, including in terms of overlapping, sequential implementation or replacement (Porter and van der Linde, 1995; Gunningham and Grabosky, 1998; Makuch, 2003a; 2003b).
The fifth guiding principle is that policy learning should be embedded in the sustainable innovation policy process. This offers a highly responsive way to modulate the evolutionary paths of sustainable technological systems and to mitigate the unintended harmful consequences of policies. It would involve monitoring and evaluation of policy implementation, and the review of policy impacts on sustainable innovation systems.
7. Conclusions and ways forward
This chapter has reviewed issues relating to the role of technological change and innovation in moving societies towards greater sustainability. Though the importance of technologies in helping to provide sustainable solutions is often promoted by commentators from all parts of the political spectrum, policy measures to promote such innovation have frequently failed to recognize the complexity and systemic nature of innovation processes. As we have seen, increasing returns to adoption in both technological systems and in supporting institutional systems may lead to lock-in, creating barriers to the innovation and deployment of technological alternatives.
This emerging understanding of innovation systems and of how past technological transitions have occurred could provide insight into approaches for promoting radical innovation for greater sustainability, for example through the support of niches and of a diversity of options. However, efforts to steer or modulate such a transition will also require significant institutional change in many countries. The UK policy style, for example, has been based largely on centralized decision-making processes and a heavy emphasis on the use of market-based instruments, without addressing other institutional and knowledge factors relating to the creation of markets for new technologies. This contrasts with the more decentralized, public–private collaborative style of decision-making that has enabled the Netherlands to become a leader in practising and learning how a technology transition for sustainability could be promoted. Further practical experience and analysis will be needed to implement the above ideas and principles for promoting sustainable innovation and so overcome technological and institutional lock-in.