Thursday, August 25, 2016

The Allure of Catastrophe Bonds

There are at least two big problems inherent in the way that the world has become accustomed to dealing with natural disasters in low- and middle-income countries, including weather-related, disease-related, and geological varieties. One problem is that disaster aid from donor nations often arrives too little and too late. The other problem is that it often becomes apparent, in considering the aftermath of a disaster, that relatively inexpensive precautionary steps could have substantially reduced the effects of the disaster. "Catastrophe bonds" (to be explained in a moment, if you haven't heard of them) offer a possible method to ameliorate both problems. Theodore Talbot and Owen Barder provide an overview in "Payouts for Perils: Why Disaster Aid Is Broken, and How Catastrophe Insurance Can Help to Fix It" (July 2016, Center for Global Development Policy Paper 087).

Here's a quick overview of the costs of natural disasters in recent decades. The green bars show estimates of deaths from natural disasters from the Centre for Research on the Epidemiology of Disasters (CRED), while the red bars show estimated deaths from the Munich Reinsurance Company. In especially bad years, deaths from natural disasters can reach into the hundreds of thousands. 

Here are estimates from the same two sources of damages from natural disasters. In especially bad years, the losses run into the hundreds of billions of dollars. The additional blue bars seek to estimate losses due to reduced human capital. You'll notice that the patterns of deaths and damages from natural disasters aren't perfectly correlated. One of the patterns in this data is that financial damages from natural disasters are typically higher in high-income countries, while deaths from natural disasters are often higher in low- and middle-income countries (for more on this point, see "Natural Disasters: Insurance Costs vs. Deaths," April 16, 2015).
Disaster aid is falling short of addressing these costs along a number of dimensions. At the most basic level, it isn't coming close to covering the losses. The red area shows the costs of disasters. The blue area shows the size of disaster aid.

But insufficient funds are only part of the problem, as Talbot and Barder spell out in some detail. Aid is often too slow. Here is a sampling of their comments:
"As Bailey (2012) has set out in a detailed study of the 2011 famine in Somalia, slow donor responses meant that what might have been a situation of deprivation descended into mass starvation. As he points out, this happened even though early-warning systems repeatedly notified the global public sector about the emergency ... Mexican municipalities that receive payouts from Fonden, a natural disaster insurance programme, grow 2-4% faster than those that also experience a hazard but did not benefit from insurance cover, ultimately generating benefit to cost ratios in the range of 1.52 to 2.89."
"For example, food aid is often the default mechanism donors use to address food shortages, even though it would often be cheaper, faster, and much more effective to provide cash to governments or directly to households, enabling markets to react ..."
"Yet examining data on aid flows from 1973 to 2010 reported to the OECD by donors indicates that less than half a cent of the average dollar– just 0.43% – of disaster-related aid has been labelled as earmarked for reducing the costs future hazards (“prevention and preparedness”, which we refer to elsewhere using the standard label “disaster risk reduction”). Put differently, the vast majority of our funding is devoted to delivering assistance when hazards have struck, not reducing the losses from hazards or preventing them from evolving into disasters. ... For humanitarian response, a study funded by DFID, the UK aid agency, evaluated $5.6 million-worth of preparedness investments in three countries– such as building an airstrip in Chad for $680,000 to save $5.2 million by not having to charter helicopters in the rainy season– and concluded that the overall portfolio of investments had an ROI of 2.1, with time savings in faster responses ranging from 2 to 50 days."
Instead of cobbling together disaster assistance on the fly each time a disaster happens, can global insurance markets be brought into play for low- and middle-income countries? After all, the global insurance industry covered catastrophe costs of $105 billion in 2012, mainly because of flooding in Thailand.  But private insurance markets, even given their access to the reinsurance markets that currently end up covering 55%-65% of the costs of what private insurance pays in large natural disasters, don't seem to be large enough to handle the costs of natural disasters.

This line of thought leads Talbot and Barder to a discussion of catastrophe bonds, which they describe like this:
"[T]he principle is simple: rather than transferring risk to a re-insurer, an insurance firm creates a single company (a “special purpose vehicle”, or SPV) whose sole purpose is to hold this risk. The SPV sells bonds to investors. The investors lose the face value of those bonds if the hazard specified in the bond contracts hits, but earn a stream of payments (the insurance premiums) until it does, or the bond’s term expires. This gives any actor – insurer, re-insurer, or sovereign risk pool like schemes in the Pacific, Caribbean and Sub-Saharan Africa, which we discuss below – a way to transfer risks from their balance sheets to investors.
"Bermuda has been the centre of the index-linked securities market because it has laws that enable insurance firms to create easily independent “cells” housing the SPVs that underlie index-linked securities transactions (In 2014, 60% of outstanding index-linked contracts globally were domiciled there.) The combination of low yields in traditional assets like stocks and bonds (due to historically low interest rates) and the insurance features of index-linked securities have contributed to fast growth in the instrument. According to deal tracking of the catastrophe bond and index-linked security markets, demand is healthy, and global issuance has grown quickly. ... London is another leading centre. ... The UK is considering developing enabling legislation to boost the number of underlying holding companies or SPVs that are domiciled there, taking advantage a capacious insurance and reinsurance sector ..." 
Here's a graph that shows the amount of catastrophe bonds issued in the last two decades.

Again, those who buy a catastrophe bond hand over money, and receive an interest rate in return. If a catastrophe occurs, then the money is released. Investors like the idea because the interest rates on catastrophe bonds (which work a lot like insurance premiums) are often higher than what's currently available in the market, and also because the risk of natural disasters occurring is not much correlated with other economic risks (which makes cat bonds a useful choice in building a diversified portfolio). Countries like having definite access to a pool of financial capital. Those who would be donating to disaster relief can instead help by subsidizing the purchase of these bonds.
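For readers who like the mechanics spelled out, here's a minimal sketch of the cash flows from the investor's side. It is not modeled on any actual contract; the numbers are invented, and real catastrophe bonds vary in details such as whether any coupon is paid in a trigger year:

```python
# A simplified catastrophe-bond cash-flow profile from the investor's side:
# pay the principal up front, collect premium-like coupons each year, and
# forfeit the principal if the specified catastrophe strikes first.

def cat_bond_cash_flows(principal, annual_coupon_rate, term_years,
                        catastrophe_year=None):
    """Investor cash flow by year; catastrophe_year is None if no trigger."""
    flows = [-principal]  # year 0: investor buys the bond
    for year in range(1, term_years + 1):
        if catastrophe_year is not None and year == catastrophe_year:
            # Trigger hits: the principal is released to the insured party
            # instead of being returned to the investor.
            flows.append(0.0)
            break
        coupon = principal * annual_coupon_rate
        if year == term_years:
            coupon += principal  # principal returned at maturity
        flows.append(coupon)
    return flows

print(cat_bond_cash_flows(100, 0.08, 3))                      # [-100, 8.0, 8.0, 108.0]
print(cat_bond_cash_flows(100, 0.08, 3, catastrophe_year=2))  # [-100, 8.0, 0.0]
```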

There are obvious practical questions with catastrophe bonds, which are being slowly worked out over time. One issue is how to define in advance what counts as a catastrophe, so that money will be released. Talbot and Barder explain the choices this way:
Tying contracts to external, observable phenomena such as Richter-scale readings for the extent of earthquakes or median surface temperature for droughts means that risk transfer can be specifically tailored to the situation. There are three varieties of triggers: parametric, modelled-loss, and indemnity. Parametric triggers are the easiest to calculate based on natural science data – satellite data reporting a hurricane’s wind speed is transparent, publicly available, and cannot be affected by the actions of the insured or the insurer. When a variable exceeds an agreed threshold, the contract’s payout clauses are invoked. Because neither the insured nor the insurer can affect the parameter, there is no cost of moral hazard, since the risks – the probabilities of bad events happening – cannot be changed. Modelled losses provide estimates of damage based on economic models. Indemnity coverage is based on insurance claims and loss adjustment, and is the most expensive to operate and takes the most time to pay out (or not).
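The logic of a parametric trigger is simple enough to write down in a few lines. Here's a sketch; the wind-speed threshold and face value are hypothetical, not taken from any real contract:

```python
# A parametric trigger in miniature: the payout depends only on an
# externally observed physical parameter, with no loss adjustment.

def parametric_payout(observed_wind_speed_mph: float,
                      trigger_threshold_mph: float = 150.0,
                      face_value: float = 10_000_000.0) -> float:
    """Release the face value if the observed parameter meets the agreed
    threshold; otherwise pay nothing."""
    if observed_wind_speed_mph >= trigger_threshold_mph:
        return face_value
    return 0.0

# Because the payout depends only on satellite-reported wind speed, neither
# the insured nor the insurer can influence whether the trigger fires,
# which is why parametric triggers avoid moral hazard.
print(parametric_payout(162.0))  # above threshold: full payout
print(parametric_payout(140.0))  # below threshold: nothing
```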
Several organizations are now operating to provide insurance against catastrophes in different ways. There's the Pacific Catastrophe Risk Assessment and Financing Initiative, which covers Vanuatu, Tonga, the Marshall Islands, the Cook Islands, and Samoa, and where what the countries pay is subsidized by the World Bank and Japan. There's the Caribbean Catastrophe Risk Insurance Facility, covering 16 countries in the Caribbean. There's the African Risk Capacity, which is just starting out and has so far provided natural disaster coverage to only a handful of countries, including Niger, Senegal, Mauritania, and Kenya.

These organizations are still a work in progress. As a general statement, it seems fair to say that these organizations have been more focused on how to assure quick payments, rather than on linking the amounts paid to taking preventive measures that can ameliorate the effects of future disasters. As an example of a success story, after Haiti's earthquake in 2010, apparently $8 million in disaster relief was available from the common insurance pool just hours after the quake struck. It seems theoretically plausible that countries should be able to pay lower returns on their catastrophe bonds if they have taken certain steps to limit the costs of disasters, but negotiating the specifics is obviously tricky. There are also questions of how to spell out the "trigger" event for a catastrophe involving pandemics, where a physical trigger like wind speed or the size of an earthquake won't work.

Catastrophe bonds have their practical problems and limits. But they can play a useful role in planning ahead for natural disasters, which has a lot of advantages over reacting after they occur. For those interested in the economics of natural disasters, here are a couple of earlier posts on the subject:

Wednesday, August 24, 2016

How Much Slack is Left in US Labor Markets?

When the US monthly unemployment rate topped out at 10% back in October 2009, it was obvious that the labor market had a lot of "slack"--an economic term for underused resources. But the unemployment rate has been 5.5% or below since February 2015, and 5.0% or below since October 2015. At this point, how much labor market slack remains? The Congressional Budget Office offers some insights in its report, An Update to the Budget and Economic Outlook: 2016 to 2026 (August 23, 2016).

I'll offer a look at four measures of labor market slack mentioned by the CBO: the "employment shortfall," hourly labor compensation, the rates at which workers are being hired or are quitting jobs, and hours worked per week. The bottom line is that a little slack remains in the US labor market, but not much.

From the CBO report: "The employment shortfall, CBO’s primary measure of slack in the labor market, is the difference between actual employment and the agency’s estimate of potential (maximum sustainable) employment. Potential employment is what would exist if the unemployment rate equaled its natural rate—that is, the rate that arises from all sources except fluctuations in aggregate demand for goods and services—and if the labor force participation rate equaled its potential rate. Consequently, the employment shortfall has two components: an unemployment component and a participation component. The unemployment component is the difference between the number of jobless people seeking work at the current rate of unemployment and the number who would be jobless at the natural rate of unemployment. The participation component is the difference between the number of people in the current labor force and the number who would be in the labor force at the potential labor force participation rate. CBO estimates that the employment shortfall was about 1.4 million people in the second quarter of 2016; nearly the entire shortfall (about 1.3 million people) stemmed from a depressed labor force participation rate."
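To make the decomposition concrete, here is a small sketch of the arithmetic the CBO is describing. The function follows the definitions in the quoted passage; the input numbers below are my own rough, illustrative approximations of mid-2016 magnitudes, not the CBO's actual model inputs:

```python
# Employment shortfall = unemployment component + participation component,
# following the CBO definitions quoted above.

def employment_shortfall(labor_force, unemployment_rate, natural_rate,
                         participation_rate, potential_participation_rate,
                         working_age_population):
    # Extra jobless people relative to the natural rate of unemployment.
    unemployment_component = labor_force * (unemployment_rate - natural_rate)
    # People missing from the labor force relative to its potential rate.
    participation_component = working_age_population * (
        potential_participation_rate - participation_rate)
    return unemployment_component + participation_component

# Illustrative magnitudes (assumed, not the CBO's): a labor force near 159
# million, unemployment just above an assumed natural rate, and actual
# participation half a percentage point below an assumed potential rate.
shortfall = employment_shortfall(
    labor_force=159_000_000,
    unemployment_rate=0.049,
    natural_rate=0.048,
    participation_rate=0.627,
    potential_participation_rate=0.632,
    working_age_population=253_000_000)
print(f"Shortfall: {shortfall / 1e6:.1f} million")  # on the order of 1.4 million
```

With numbers in this neighborhood, the participation component dominates, which matches the CBO's finding that about 1.3 million of the 1.4 million shortfall stems from depressed participation.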

Here's a figure from the CBO measuring the employment shortfall in millions of workers. During the recession, the blue lines showing unemployment made up most of the employment shortfall. Now, it's pretty much all workers who would be expected to be working but are "out of the labor force": they are not counted as unemployed because they have stopped looking for work.
To get a better sense of what's behind this figure, it's useful to see the overall patterns of the labor force participation rate (blue line in the graph below) and the employment/population ratio (red line). The difference between the two is that the "labor force" as a concept includes both the employed and the unemployed. Thus, you can see that the employment/population ratio veers away from the labor force participation rate during periods of recession, and then the gap declines when the economy recovers and employment starts growing again. Looking at the blue line in the figure, notice that the labor force participation rate peaked around 2000, and has been declining since then. As I've discussed here before, some of the reasons behind this pattern are that women were entering the (paid) workforce in substantial numbers from the 1970s through the 1990s, but that trend topped out around 2000. Since then, various groups like young adults and low-skilled workers have seen their participation rates fall, and the aging of the US workforce tends to pull down labor force participation rates as well. Thus, the CBO is estimating what the overall trend of labor force participation should be, and saying that it hasn't yet rebounded back to the long-term trend. But you can also see, if you squint a bit, that the drop in labor force participation has leveled out in the recent data. Also, the employment/population ratio has been rising since about 2010.
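Since the two series differ only in how they count the unemployed, their definitions are worth writing out explicitly. A minimal sketch, with illustrative numbers of roughly mid-2016 magnitude (assumed, not taken from the report):

```python
# The labor force = employed + unemployed (jobless and actively looking).
# Both rates are shares of the civilian noninstitutional population.

def participation_rate(employed, unemployed, population):
    """Labor force participation rate: (employed + unemployed) / population."""
    return (employed + unemployed) / population

def employment_population_ratio(employed, population):
    """Employment/population ratio: employed / population."""
    return employed / population

# In a recession, employment falls while many of the newly jobless still
# count as unemployed members of the labor force, so the employment/
# population ratio drops faster than the participation rate.
emp, unemp, pop = 151_000_000, 8_000_000, 253_000_000
print(f"LFPR: {participation_rate(emp, unemp, pop):.1%}")    # ~62.8%
print(f"EPOP: {employment_population_ratio(emp, pop):.1%}")  # ~59.7%
```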



A second measure of labor market slack looks at compensation for workers (including wages and benefits). The argument here is that when labor market slack is low, employers must compete harder for scarcer workers, and so compensation should start to rise more quickly. This figure from the CBO report shows the change in compensation with actual data through the end of 2015, and projections after that. There does seem to be a little bump in hourly labor compensation toward the end of 2015 (see here for earlier discussion of this point), so as data for 2016 becomes available, the question will be whether that increase is sustained.

One more measure of labor market slack is the rate at which workers are being hired, which shows the liveliness of one part of the labor market, and the rate at which workers are quitting. The quit rate is revealing because when the economy is bad, workers are more likely to hang onto their existing jobs. Both hiring and quits have largely rebounded back to pre-recession levels, as shown by this figure from the August 2016 release of the Job Openings and Labor Turnover Survey conducted by the US Bureau of Labor Statistics.
Finally, average hours worked per week is also a common measure of labor market slack. The CBO report notes that this measure has mostly rebounded back to its pre-recession level. Here's a figure from the US Bureau of Labor Statistics showing the pattern.

All economic news has a good news/bad news quality, and the fall in labor market slack is no exception. The good news is obvious: unemployment rates are down and wages are showing at least some early signs of rising. It wasn't obvious, back during the worst of the Great Recession in 2008-2009, how quickly or how much the unemployment rate would decline. As one example of the uncertainty, the Federal Reserve announced in December 2012 that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent," along with some other conditions, to reassure markets that its policy interest rate would remain low. But then the unemployment rate fell beneath 6.5% in April 2014, and the Fed decided it wasn't yet ready to start raising interest rates, so it retracted its policy from less than 18 months earlier.

The corresponding bad news is that whatever you dislike about the labor market can't really be blamed on the Great Recession any more. So if you're worried about issues like a lack of jobs for low-wage labor, too many jobs paying at or near the minimum wage, not enough on-the-job training, not enough opportunities for longer-term careers, loss of jobs in sectors like manufacturing and construction, too much part-time work, or inequality of the wage distribution, you can no longer argue that these issues will be addressed naturally as the economy recovers. After all, labor market slack has already declined to very low levels.

Monday, August 22, 2016

Automation and Job Loss: Leontief in 1982

Wassily Leontief is not especially well-known at present by the general public, but he was one of the giants of 20th century economics. (He died in 1999.) When the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (commonly known as "the Nobel prize in economics") first started to be given in 1969, there was a backlog of worthy winners, and those who won the prize in its first decade or so formed a particularly elite group. Leontief won in 1973 "for the development of the input-output method and for its application to important economic problems".

Thus, it was a big deal when Leontief wrote an essay for Scientific American in September 1982 arguing that new trends in mechanization and computing were displacing jobs. The title and subtitle give a good sense of his themes: "The Distribution of Work and Income: When workers are displaced by machines, the economy can suffer from a loss of their purchasing power. Historically the problem has been eased by shortening the work week, a trend currently at a standstill." (The archives of Scientific American from these years are not readily available on-line, as far as I can tell, but many libraries will have back issues on their shelves.) That special issue of Scientific American contained seven other essays about how American jobs were being lost to the "mechanization of work," with articles discussing how mechanization was reducing jobs in a wide range of industries: manufacturing, design and coordination of manufacturing, agriculture, mining, commerce (including finance, transport, distribution, and communications), and information-based office work.

Leontief's concern was of course not a new one in 1982. Indeed, his essay starts by hearkening back to the Luddite movement of the early 19th century in which hand-weavers banded together to destroy some of the machines that were automating the textile industry. I've posted before on this website about other episodes in which concerns about automation and job loss ran especially high: for example, here's a discussion of "Automation and Job Loss: Fears of 1964" (December 1, 2014) and "Automation and Job Loss: Fears of 1927" (March 16, 2016). Joel Mokyr, Chris Vickers, and Nicolas L. Ziebarth provide a long-term perspective on these issues in "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" which appeared in the Summer 2015 issue of the Journal of Economic Perspectives.

Of course, Leontief knew perfectly well that in the past, technology had been one of the main drivers of disruptions that over time raised the average standard of living. Why would the effects of new technologies be different? In terms that seem very similar to the concerns raised by some current writers, Leontief wrote in 1982:
There are signs today, however, that past experience cannot serve as a reliable guide for the future of technological change. With the advent of solid-state electronics, machines that have been displacing human muscle from the production of goods are being succeeded by machines that take over the functions of the human nervous system not only in production but in the service industries as well ... The relation between man and machine is being radically transformed. ... Computers are now taking on the jobs of white-collar workers, performing first simple and then increasingly complex mental tasks. Human labor from time immemorial played the role of principal factor of production. There are reasons to believe human labor will not retain this status in the future.
Re-reading Leontief's 1982 essay today, with the benefit of hindsight, I find myself struck by how he sometimes hits, and then sometimes misses or sideswipes, what I would view as the main issues of how technology can lead to dislocation and inequality.

For example, Leontief expresses a concern that "the U.S. economy has seen a chronic increase in unemployment from one oscillation of the business cycle to the next." Of course, he is writing in 1982, after the tumultuous economic movements of the 1970s. The US unemployment rate was above 10% from September 1982 (when his article was published) through June 1983. But since then, there have been multiple periods (the late 1980s, the mid- to late 1990s, the mid-2000s, and the period since February 2015) when the monthly unemployment rate has been 5.5% or lower. With the benefit of three decades of hindsight since Leontief's 1982 essay, the issue of technological disruption is not being manifested in a steadily higher unemployment rate, but instead in the dislocation of workers and the way in which technology contributes to inequality of wages.

If one presumes (for the sake of argument) a continued advance in technology that raises output, then the question is what form these gains will take. More leisure? If not more leisure, will the income gains be broadly or narrowly based?  

Leontief emphasizes that one of the gains of technology in broad historic terms was a shorter work week. For example, he writes of the "reduction of the average work week in manufacturing from 67 hours in 1870 to somewhat less than 42 hours" by the mid-1940s, and points out that the work week did not continue to decline at the same pace after that. This notion that economic gains from technology will lead to a dramatically shorter work week was not new with Leontief: for example, John Maynard Keynes in his 1930 essay "Economic Possibilities for Our Grandchildren" (available a number of places on the web, like here and here) wrote about how technology was going to be so productive that it would move us toward a 15-hour work-week.

The hidden assumption behind this prediction of less working time seems to be that production will, either soon or in the not-too-distant future, become sufficient to cover everyone's desires, so that as technology continues to increase production beyond that level, total hours worked can decline substantially. Back in 1848, the greatest economist of his time, John Stuart Mill, was already arguing that the richer countries already had plenty of production, and what was needed was just a more equal distribution of that production. Indeed, if society were mostly content with the mixture of goods and services available to middle-class England in 1848, or to Keynes in 1930, or to Leontief in 1982, then the work-week could be a lot shorter. But I don't see a lot of Americans out there who would be willing to settle for, say, the information technology or health care technology or the housing or transportation from those earlier times.

If technology doesn't just make the same things more cheaply, but also makes new goods and services that people desire, then the gains from technology may not lead to dramatically  shorter work weeks. Very little in Leontief's essay discusses how technology can produce brand-new industries and jobs, and how these new industries provide consumers with goods and services that they value.

Concerning the issue of how technology can lead to greater inequality of incomes, Leontief offers some useful and thought-provoking metaphors. For example, here's his Adam and Eve comparison:
"Adam and Eve enjoyed, before they were expelled from Paradise, a high standard of living without working. After their expulsion they and their successors were condemned to eke out a miserable existence, working from dawn to dusk. The history of technological progress over the past 200 years is essentially the story of the human species working its way slowly and steadily back into Paradise. What would happen, however, if we suddenly found ourselves in it? With all goods and services provided without work, no one would be gainfully employed. Being unemployed means
receiving no wages. As a result until appropriate new income policies were formulated to fit the changed technological conditions everyone would starve in Paradise."
As noted earlier, the evidence since 1982 doesn't support a claim of steadily higher unemployment rates. But it does support a concern of increasing inequality, where those who find themselves in a position to benefit most from technology will tend to gain. One need not be worried about "starving in Paradise" to be worried that the economy could be a Paradise for those receiving a greater share of income, but not for those on the outside of Paradise looking in.

Leontief also offers an interesting image about what it means to be a worker who can draw on a larger pool of capital, using an example of an Iowa farmer. He writes:
What I have in mind is a complex of social and economic measures to supplement by transfer from other income shares the income received by blue- and white-collar workers from the sale of their services on the labor market. A striking example of an income transfer of this kind attained automatically without government intervention is there to be studied in the long-run effects of the mechanization of agriculture on the mode of operation and the income of, say, a prosperous Iowa farm.
Half a century ago the farmer and the members of his family worked from early morning until late at night assisted by a team of horses, possibly a tractor and a standard set of simple agricultural implements. Their income consisted of what essentially amounted to wages for a 75- or 80-hour work week, supplemented by a small profit on their modest investment. Today the farm is fully mechanized and even has some sophisticated electronic equipment. The average work week is much shorter, and from time to time the family can take a real vacation. Their total wage income, if one computes it at the going hourly rate for a much smaller number of manual-labor hours, is probably not much higher than it was 50 years ago and may even be lower. Their standard of living, however, is certainly much higher: the shrinkage of their wage income is more than fully offset by the income earned on their massive capital investment in the rapidly changing technology of agriculture.
The shift from the old income structure to the new one was smooth and practically painless. It involved no more than a simple bookkeeping transaction because now, as 50 years ago, both the wage income and the capital income are earned by the same family. The effect of technological progress on manufacturing and other nonagricultural sectors of the economy is essentially the same as it is on agriculture. So also should be its repercussions with respect to the shortening of the work day and the allocation of income.
Leontief here is eliding the fact that the share of American workers in agriculture was about 2-3% back in 1982, compared to 25-30% about 50 years earlier. He is discussing a smooth transfer of new technology for a single family, but with the rise in agricultural output for that family, something like 90% of their neighbors from 50 years earlier ended up transferring out of farming altogether. When Leontief and other modern writers talk about how modern technology is fundamentally more disruptive than earlier technology, I'm not sure I agree. The shift of the US economy to mechanized agriculture was an extraordinarily disruptive change.

But Leontief also has his finger on a central issue here, which is that jobs which find ways to use technology and investment as a complement are more likely to prosper. Along these lines, I'm intrigued by the notion that when workers use web-based connectivity and applications, they are accessing a remarkable global capital infrastructure that complements their work--even though the Internet isn't physically visible in my side yard like a combine harvester.

A final Leontief metaphor might be called the "horses don't vote" issue. In a short article written at about this same time for a newsletter called Bottom Line Personal (April 30, 1983, 4:8, pp. 1+), Leontief wrote:
People cannot eat much more than they already do. They cannot wear much more clothing. But they certainly can  use more services, and they begin to purchase more of them. This natural shift comes simultaneously with  the technological changes. But in the long run, the role of labor diminishes even in service industries.  Look at banking, where more and more is done electronically and automatically,  and at secretarial areas, where staff work is being replaced by word processors.
The problem becomes: What happens to the displaced labor? In the last century, there was an analogous problem with horses. They became unnecessary with the advent of tractors, automobiles and trucks. And a farmer couldn't keep his horses and postpone the change to tractors by feeding them less oats. So he got rid of the horses and used the more productive tractor. After all, this doesn't precipitate  a political problem, since horses don't vote. But it is more difficult to find a solution when you have the  same problem with people. You do not need them as much as before. You can produce without them. 
So the problem becomes the task of reevaluating the role of human labor in production as it becomes less important. It is a simple fact that fewer people will be needed, yet more goods and services can be produced. But the machinery and technology will not benefit everyone equally. We must ask: Who will get the benefit? How will the income be distributed? We are accustomed to rewarding people for work based on market mechanisms, but we can no longer rely on the market mechanism to function so conveniently.
As noted earlier, when Leontief says that it's "a simple fact" that fewer people will be needed, I think he is overstating his case. Since 1982, the prediction of steadily rising unemployment rates has not come true. However, the prediction of steadily rising inequality of incomes and diminished opportunities for low-skilled labor has come true.

The extent to which one views inequality as a problem isn't a matter of pure economics, but involves political and even moral or aesthetic judgments. The same can be said about preferred political solutions.  Leontief, who did his early college studies at the University of Leningrad and his Ph.D. work at the University of Berlin, both in the 1920s, had a strong bias that more government planning was a necessary answer. His essay is heavily sprinkled with comments about how dealing with distributional issues will require "close and systematic cooperation between management and labor carried on with government support," and with support for the German/Austrian economic policy model of the 1980s. 

With Leontief's policy perspective in mind, I was intrigued to read this comment from his 1982 essay: "In the long run, responding to the incipient threat of technological unemployment, public policy should aim at securing an equitable distribution of work and income, taking care not to obstruct technological progress even indirectly." My own sense is that if you take seriously the desire not to obstruct technological progress, even indirectly, then you need to allow for and even welcome the possibility of strong disruptions within the existing economy. In the world of US-style practical politics, you must then harbor grave doubts about a Leontief-style strong nexus of government along with the management and labor of existing firms.

I agree with Leontief that economic policy should seek to facilitate technological change and not to obstruct it, even indirectly. But rather than seeing this as a reason to support corporatist public policy, I would say that when technology is contributing to greater inequality of incomes, as it seems to be doing in recent decades, then we should address the inequality directly. Appropriate steps include taxes on those with higher incomes, direct subsidies to lower-income workers in ways that increase their wages, and indirect subsidies in the form of public spending on schools, retraining and job search; public transportation and public safety; and parks, libraries, and improvements in local living environments.

Friday, August 19, 2016

Convert Carbon Dioxide from the Air to Methanol?

When it comes to rising levels of carbon dioxide and other greenhouse gases in the atmosphere, I'm in favor of a consider-everything approach, including carbon capture and storage, geoengineering, noncarbon energy sources, energy conservation, and any other options that come to hand. But perhaps the most miraculous possibilities involve finding ways to absorb carbon dioxide from the air directly and then use it as part of a fuel source like methanol. This technology is not yet close to practical on any wide scale, but here are three examples of what's happening.

For example, researchers at Argonne National Laboratory and the University of Illinois Chicago have been working on what can be viewed as an "artificial leaf" for taking carbon dioxide out of the atmosphere. A press release from Argonne described it this way: "To make carbon dioxide into something that could be a usable fuel, Curtiss and his colleagues needed to find a catalyst — a particular compound that could make carbon dioxide react more readily. When converting carbon dioxide from the atmosphere into a sugar, plants use an organic catalyst called an enzyme; the researchers used a metal compound called tungsten diselenide, which they fashioned into nanosized flakes to maximize the surface area and to expose its reactive edges. While plants use their catalysts to make sugar, the Argonne researchers used theirs to convert carbon dioxide to carbon monoxide. Although carbon monoxide is also a greenhouse gas, it is much more reactive than carbon dioxide and scientists already have ways of converting carbon monoxide into usable fuel, such as methanol."  The research was just published in the July 29 issue of Science magazine, in "Nanostructured transition metal dichalcogenide electrocatalysts for CO2 reduction in ionic liquid," by a long list of co-authors headed by Mohammad Asadi (vol. 353, issue 6298, pp. 467-470).

As another example, researchers at the USC Loker Hydrocarbon Research Institute at the University of Southern California "have directly converted carbon dioxide from the air into methanol at relatively low temperatures," according to a summary of the research by Robert Perkins. "The researchers bubbled air through an aqueous solution of pentaethylenehexamine (or PEHA), adding a catalyst to encourage hydrogen to latch onto the carbon dioxide under pressure. They then heated the solution, converting 79 percent of the carbon dioxide into methanol." The researchers hope that the method might be viable at industrial scale in 5-10 years. The research was published earlier this year in the Journal of the American Chemical Society, "Conversion of CO2 from Air into Methanol Using a Polyamine and a Homogeneous Ruthenium Catalyst," by Jotheeswari Kothandaraman, Alain Goeppert, Miklos Czaun, George A. Olah, and G. K. Surya Prakash (2016, 138:3, pp 778–781).

One of the co-authors of the USC study, Alain Goeppert, points out in an article in the Milken Institute Review by Lawrence M. Fisher (Third Quarter, 2016, pp. 3-13) that a company in Iceland is already recycling carbon to make methanol and exporting it to Europe.
“A company in Iceland is already doing that: Carbon Recycling International,” Goeppert said. “There, they are recycling CO2 with hydrogen they obtain from water. They use geothermal energy, which is relatively cheap. They have been producing methanol that way for five years, exporting it to Europe, to use as a fuel. It’s still relatively small scale, but it’s a start.”
Methanol can easily be mixed into gasoline, as ethanol is today, or cars can be adapted fairly cheaply to run on 100% methanol. Diesel engines can run on methanol, too.

Of course, I don't know if carbon-dioxide-to-methanol can put a real dent into atmospheric carbon in any cost-effective way. But again, I'm a consider-everything kind of guy.  And before I get too skeptical about how fields of artificial leaves might work for this purpose, it's worth remembering that fields of solar collectors didn't look very practical as a method of generating electricity a couple of decades ago, either.

Thursday, August 18, 2016

Patterns in US Information Technology Jobs

Would you expect that the number of US jobs in information technology fields is rising or falling over time? On one side, the growing importance of IT in so many areas of the US economy suggests that the job totals should be rising. On the other side, one often reads warnings about how a combination of advances in technology and outsourcing to other countries is making certain jobs obsolete, and it seems plausible that a number of IT-related jobs could be either eliminated or outsourced to other countries by improved web-based software and more powerful and reliable computing capabilities. So which effect is bigger? Julia Beckhusen provides an overview in "Occupations in Information Technology," published by the US Census Bureau (August 2016, American Community Survey Reports ACS-35).

The top line is that US jobs in IT seem to be roughly doubling in each decade since the 1970s. Here's an illustrative figure.

What exactly are these jobs? Here's a breakdown for 2014. The top five categories, which together make up about three-quarters of all the IT jobs, are software developers, systems and applications software; computer support specialists; computer occupations, all other; computer and information systems managers; and computer systems analysts.


Are these IT jobs basically in the category of high-paying jobs for highly educated workers? Some are, some aren't. The proportion of workers in each of these IT job categories with a master's degree or higher is shown by the bar graphs on the left. The median pay for each job category is shown by the dot-graph on the right. Unsurprisingly, more than half of all those categorized as "computer and information research scientists" have a master's degree or higher; what is perhaps surprising here is that almost half of those in this job category don't have this level of education. But in most of these IT job categories, only one-quarter, and in many cases much less than one-quarter, of those holding such an IT job have a master's degree. Indeed, I suspect that in many of the lower-paid IT job categories, many do not have a four-year college degree either--there are a lot of shorter-term programs to get some IT training. In general, IT jobs do typically pay more than the average US job. But the highest-paid category, "computer and information research scientists," also has the smallest number of workers (as shown in the graph above).


Finally, to what extent are these IT jobs held by those born in another country who have immigrated at least for a time to the United States? As the bars at the top of the figure show, 17% of all US jobs are held by foreign-born workers; among IT workers, it's 24%.


Beckhusen provides lots more detail in breaking down IT jobs along various dimensions. My own guess is that the applications for IT in the US economy will continue to be on the rise, probably in a dramatic fashion, and that many of those applications will turn out to be even more important for society than Twitter or Pokémon Go. The biggest gains in jobs won't be for the computer science researchers, but instead will be for the people installing, applying, updating, and using IT in an enormously wide range of contexts. If your talents and inclinations lead this way, it remains a good area in which to pick up some additional skills.

Tuesday, August 16, 2016

What are Motivated Beliefs?

"Motivated beliefs" is a relatively recent development economics which offers a position between traditional assumptions of rational and purposeful behavior and the conventional approaches of behavioral economics. It is introduced and explored in a symposium in the Summer 2016 Journal of Economic Perspectives. Nicholas Epley and Thomas Gilovich contribute an introductory essay in "The Mechanics of Motivated Reasoning." Roland Bénabou and Jean Tirole have written: "Mindful Economics: The Production, Consumption, and Value of Beliefs."  Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri look at one aspect of motivated beliefs in "The Preference for Belief Consonance."  Francesca Gino, Michael I. Norton, and Roberto A. Weber focus on another aspect in "Motivated Bayesians: Feeling Moral While Acting Egoistically." 

Of course, I encourage you to read the actual papers. I've worked as the Managing Editor of JEP for 30 years, so I always want everyone to read the papers! But here's an overview and a taste of the arguments.

In traditional working assumptions of microeconomics, people act in purposeful and directed ways to accomplish their goals. Contrary to the complaints I sometimes hear, this approach doesn't require that people have perfect and complete information or that they are perfectly rational decision-makers. It's fairly straightforward to incorporate imperfect information and bounded rationality into these models. But even so, this approach is built on the assumption that people act purposefully to achieve their goals and do not repeatedly make the same mistakes without altering their behavior.

Behavioral economics, as it has usually been practiced, is sometimes called the "heuristics and biases" approach. It points to certain patterns of behavior that have been well-demonstrated in the psychology literature: for example, people often act in a short-sighted or myopic way that puts little weight on long-term consequences; people have a hard time evaluating how to react to low-probability events; people are "loss averse" and treat a loss of a certain amount as a negative outcome that is bigger in absolute value than a gain of the same amount; people show a "confirmation bias" of interpreting new evidence so that it tends to support previously held beliefs; and others. In this view, people can make decisions and regret them, over and over. Short-sighted people may fail to save, or fail to exercise, and regret it. People who are loss-averse and have a hard time evaluating low-probability events may be sucked into buying a series of service plans and warranties that don't necessarily offer them a good value. When decision-making includes heuristics and biases, people can make the same mistakes repeatedly.
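As a small illustration of just one of these patterns, loss aversion is often written as a kinked value function in which losses loom larger than gains. Here's a minimal sketch; the coefficient of 2.25 is Kahneman and Tversky's oft-cited estimate, and the rest is simplified (their full function also bends gains and losses nonlinearly):

```python
# A bare-bones loss-averse value function: losses are scaled up relative
# to gains of the same size.

def subjective_value(change_in_wealth: float, loss_aversion: float = 2.25) -> float:
    """Gains count at face value; losses are multiplied by the
    loss-aversion coefficient."""
    if change_in_wealth >= 0:
        return change_in_wealth
    return loss_aversion * change_in_wealth

# A $100 loss feels more than twice as bad as a $100 gain feels good,
# which helps explain why warranties against modest losses sell so briskly.
print(subjective_value(100))   # 100
print(subjective_value(-100))  # -225.0
```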

The theory of motivated beliefs falls in between these possibilities. In these arguments, people are not strictly rational or purposeful decision-makers, but neither does their decision-making involve built-in flaws. Instead, people have a number of goals, which include feeling moral, competent, and attractive, fitting in with their existing social group, and achieving higher social status. As Epley and Gilovich explain in their introductory essay,
"This idea is captured in the common saying, “People believe what they want to believe.” But people don’t simply believe what they want to believe. The psychological mechanisms that produce motivated beliefs are much more complicated than that. ... People generally reason their way to conclusions they favor, with their preferences influencing the way evidence is gathered, arguments are processed, and memories of past experience are recalled. Each of these processes can be affected in subtle ways by people’s motivations, leading to biased beliefs that feel objective ...
One of the complexities in understanding motivated reasoning is that people have many goals, ranging from the fundamental imperatives of survival and reproduction to the more proximate goals that help us survive and reproduce, such as achieving social status, maintaining cooperative social relationships, holding accurate beliefs and expectations, and having consistent beliefs that enable effective action. Sometimes reasoning directed at one goal undermines another. A person trying to persuade others about a particular point is likely to focus on reasons why his arguments are valid and decisive—an attentional focus that could make the person more compelling in the eyes of others but also undermine the accuracy of his assessments. A person who recognizes that a set of beliefs is strongly held by a group of peers is likely to seek out and welcome information supporting those beliefs, while maintaining a much higher level of skepticism about contradictory information (as Golman, Loewenstein, Moene, and Zarri discuss in this symposium). A company manager narrowly focused on the bottom line may find ways to rationalize or disregard the ethical implications of actions that advance short-term profitability (as Gino, Norton, and Weber discuss in this symposium). 
The crucial point is that the process of gathering and processing information can systematically depart from accepted rational standards because one goal— desire to persuade, agreement with a peer group, self-image, self-preservation—can commandeer attention and guide reasoning at the expense of accuracy. Economists are well aware of crowding-out effects in markets. For psychologists, motivated reasoning represents an example of crowding-out in attention. In any given instance, it can be a challenge to figure out which goals are guiding reasoning ... 
In one classic study, mentioned in the overview and several of the papers, participants were given a description of a trial and asked to evaluate whether they thought the accused was guilty or innocent. Some of the participants were assigned to play the role of prosecutors or defense attorneys before reading the information; others were not assigned a role until after evaluating the information. Those who were assigned to be prosecutors before reading the evidence were more likely to evaluate the evidence as showing the defendant was guilty, while those assigned to be defense attorneys before reading the evidence were more likely to evaluate the evidence as showing the defendant to be not guilty. The role you play will often influence your reading of evidence.

Bénabou and Tirole offer a conceptual framework for thinking about motivated beliefs, and then apply the framework in a number of contexts. They argue that motivated beliefs arise for two reasons, which they label "self-efficacy" and "affective." In the self-efficacy situation, people use their beliefs to give their immediate actions a boost. Can I do a good job in the big presentation at work? Can I save money? Can I persevere with a diet? In such situations, people are motivated to distort their interpretation of information and their own actions in a way that helps support their ability to persevere with a certain task. In the "affective" situation, people get immediate and visceral pleasure from seeing themselves as smart, attractive, or moral, and they can also get "anticipatory utility" from contemplating pleasant future outcomes.

However, if your motivated beliefs do not reflect reality, then in some cases reality will deliver some hard knocks in response. The authors analyze certain situations in which these hard knocks, again through a process of motivated beliefs, make you cling to those beliefs harder than ever. Moreover, if you are somewhat self-aware and know that you are prone to motivated beliefs, then you may be less likely to trust your own interpretations of evidence, which complicates the analysis further. Bénabou and Tirole apply these arguments in a wide array of contexts: political beliefs (a subject of particular interest in 2016), social and organizational beliefs, financial bubbles, and personal identity. Here's one example of a study concerning political beliefs (most citations omitted).

The World Values Survey reveals considerable differences in beliefs about the role of effort versus luck in life. In the United States, 60 percent of people believe that effort is key; in Western Europe, only 30 percent do on average, with major variations across countries. Moreover, these nationally dominant beliefs bear no relationship to the actual facts about social mobility or how much the poor are actually working, and yet they are strongly correlated with the share of social spending in GDP. At the individual level, similarly, voters’ perceptions of the extent to which people control their own fate and ultimately get their just deserts are first-order determinants of attitudes toward inequality and redistribution, swamping the effects of own income and education.
In Bénabou and Tirole (2006), we describe how such diverse politico-ideological equilibria can emerge due to a natural complementarity between (self-)motivation concerns and marginal tax rates. When the safety net and redistribution are minimal, agents have strong incentives to maintain for themselves, and pass on to their children, beliefs that effort is more important than luck, as these will lead to working hard and persevering in the face of adversity. With high taxes and generous transfers, such beliefs are much less adaptive, so fewer people will maintain them. Thus, there can coexist: i) an “American Dream” equilibrium, with just-world beliefs about social mobility, and little redistribution; and ii) a “Euro-pessimistic” equilibrium, with more cynical beliefs and a large welfare state. In the latter, the poor are less (unjustly) stigmatized as lazy, while total effort (annual hours worked) and income are lower, than in the former. More generally, across all steady-states there is a negative correlation between just-world beliefs and the size of the welfare state, just as observed across countries.
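Their 2006 model is richer than anything I can reproduce here, but the fixed-point logic can be caricatured in a few lines of code. This is a toy illustration with invented parameters, not Bénabou and Tirole's actual model: beliefs that effort pays are worth maintaining only when after-tax returns to effort are high, and voters choose low taxes only when most of them hold such beliefs.

```python
# A toy two-equilibria illustration (invented parameters, not the authors'
# model): beliefs about effort and the tax rate reinforce each other.

EFFORT_RETURN = 1.0  # pre-tax payoff to working hard
BELIEF_COST = 0.4    # cost of instilling/maintaining the "effort pays" belief

def maintains_belief(tax_rate):
    """The belief is adaptive only if the after-tax return to the effort
    it motivates exceeds the cost of holding it."""
    return (1 - tax_rate) * EFFORT_RETURN > BELIEF_COST

def chosen_tax(share_believers):
    """Majority voting: believers in effort vote for low redistribution."""
    return 0.2 if share_believers > 0.5 else 0.6

def equilibrium(initial_share, rounds=20):
    share = initial_share
    for _ in range(rounds):
        tax = chosen_tax(share)
        share = 0.9 if maintains_belief(tax) else 0.1
    return share, tax

print(equilibrium(0.8))  # "American Dream": beliefs persist, taxes stay low
print(equilibrium(0.2))  # "Euro-pessimism": beliefs fade, taxes stay high
```

Starting beliefs determine which self-reinforcing configuration the society settles into, which is the sense in which the two equilibria can coexist.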
Golman, Loewenstein, Moene, and Zarri consider one aspect of motivated beliefs, the "preference for belief consonance," which is the desire to be in agreement with others in one's immediate social group. They endeared themselves to me by starting with a quotation from Adam Smith's first great work, The Theory of Moral Sentiments (Part VII, Section IV): "The great pleasure of conversation, and indeed of society, arises from a certain correspondence of sentiments and opinions, from a certain harmony of minds, which like so many musical instruments coincide and keep time with one another." They write:

Why are people who hold one set of beliefs so affronted by alternative sets of beliefs—and by the people who hold them? Why don’t people take a live-and-let-live attitude toward beliefs that are, after all, invisibly encoded in other people’s minds? In this paper, we present evidence that people care fundamentally about what other people believe, and we discuss explanations for why people are made so uncomfortable by the awareness that the beliefs of others differ from their own. This preference for belief consonance (or equivalently, distaste for belief dissonance) has far-ranging implications for economic behavior. It affects who people choose to interact with, what they choose to exchange information about, what media they expose themselves to, and where they choose to live and work. Moreover, when people are aware that their beliefs conflict with those of others, they often try to change other people’s beliefs (proselytizing). If unsuccessful in doing so, they sometimes modify their own beliefs to bring them into conformity with those around them. A preference for belief consonance even plays an important role in interpersonal and intergroup conflict, including the deadliest varieties: Much of the conflict in the world is over beliefs—especially of the religious variety—rather than property ... 
A substantial group of studies show that if you ask people about their opinions on certain issues, and if you ask people about their opinions while telling them that certain other specific groups hold certain opinions, the patterns of answers can be quite different. Personally, I'm always disconcerted that for every opinion I hold, some of the others who hold that same opinion are people I don't like very much.

Gino, Norton, and Weber take on another dimension of motivated beliefs in  their essay on "feeling moral while acting egoistically." They explain that when given some wiggle room to manage their actions or their information, people often choose to act in a way that allows them to feel moral while acting selfishly. Gino, Norton, and Weber write: 
 In particular, while people are often willing to take a moral act that imposes personal material costs when confronted with a clear-cut choice between “right” and “wrong,” such decisions often seem to be dramatically influenced by the specific contexts in which they occur. In particular, when the context provides sufficient flexibility to allow plausible justification that one can both act egoistically while remaining moral, people seize on such opportunities to prioritize self-interest at the expense of morality. In other words, people who appear to exhibit a preference for being moral may in fact be placing a value on feeling moral, often accomplishing this goal by manipulating the manner in which they process information to justify taking egoistic actions while maintaining this feeling of morality.
They cite many studies of this phenomenon. Here's an overview of one: 
[P]articipants in a laboratory experiment distribute two tasks between themselves and another participant: a positive task (where correct responses to a task earn tickets to a raffle) and a negative task (not incentivized and described as “rather dull and boring”). Participants were informed: “Most participants feel that giving both people an equal chance— by, for example, flipping a coin—is the fairest way to assign themselves and the other participant to the tasks (we have provided a coin for you to flip if you wish). But the decision is entirely up to you.” Half of participants simply assigned the tasks without flipping the coin; among these participants, 90 percent assigned themselves to the positive task. However, the more interesting finding is that among the half of participants who chose to flip the coin, 90 percent “somehow” ended up with the positive task—despite the distribution of probabilities that one would expect from a two-sided coin. Moreover, participants who flipped the coin rated their actions as more moral than those who did not—even though they had ultimately acted just as egoistically as those who did not flip in assigning themselves the positive task. These results suggest that people can view their actions as moral by providing evidence to themselves that they are fair (through the deployment of a theoretically unbiased coin flip), even when they then ignore the outcome of that coin flip to benefit themselves.
The theory of motivated beliefs still views people as motivated by self-interest. However, the dimensions of self-interest expand beyond the standard concerns like consumption and leisure, and encompass how we feel about ourselves and the social groups we inhabit. In this way, the analysis opens up insights into behavior that is otherwise puzzling in the context of economic analysis, as well as building intellectual connections to other social sciences such as psychology and sociology.

Monday, August 15, 2016

Alfred Marshall and the Origin of Ceteris Paribus

When non-economists ask me questions, they often seem to be jumping from topic to topic. A question about the effects of raising the minimum wage, for example, shifts from how it will affect jobs, and earnings, and companies that hire minimum wage workers, and work effort, and automation, and the overall income distribution, and children of minimum wage earners, and so on. The questions are all reasonable. But I become self-aware that economists have trained themselves into a one-thing-at-a-time method of analysis, and so bouncing from one topic to another can feel somehow awkward.

The ceteris paribus or "other things equal" assumption involves an intellectual approach, common among economists, of trying to focus on one thing at a time. After all, many economic issues and policies have a number of possible causes and effects. Rather than hopscotching among them, economists often try to isolate and discuss one factor at a time, and then to move on to other factors, before combining it all into an overall perspective. The use of this approach in economic analysis traces back to Alfred Marshall's 1890 classic Principles of Economics.

The Library of Economics and Liberty provides a useful place for finding searchable editions of many classic works in economics. The site provides the 8th edition of Marshall's Principles, published in 1920. In Book V, Chapter V, "Equilibrium of Normal Demand and Supply, Continued, With Reference To Long and Short Periods," Marshall describes the overall logic of looking at one thing at a time, offers some hypothetical examples from a discussion of supply and demand shocks in fish markets, and points out that the longer the time period of analysis, the harder it becomes to assume that everything else is constant. Marshall writes:
"The element of time is a chief cause of those difficulties in economic investigations which make it necessary for man with his limited powers to go step by step; breaking up a complex question, studying one bit at a time, and at last combining his partial solutions into a more or less complete solution of the whole riddle. In breaking it up, he segregates those disturbing causes, whose wanderings happen to be inconvenient, for the time in a pound called Cœteris Paribus. The study of some group of tendencies is isolated by the assumption other things being equal: the existence of other tendencies is not denied, but their disturbing effect is neglected for a time. The more the issue is thus narrowed, the more exactly can it be handled: but also the less closely does it correspond to real life. Each exact and firm handling of a narrow issue, however, helps towards treating broader issues, in which that narrow issue is contained, more exactly than would otherwise have been possible. With each step more things can be let out of the pound; exact discussions can be made less abstract, realistic discussions can be made less inexact than was possible at an earlier stage. ...

The day to day oscillations of the price of fish resulting from uncertainties of the weather, etc., are governed by practically the same causes in modern England as in the supposed stationary state. The changes in the general economic conditions around us are quick; but they are not quick enough to affect perceptibly the short-period normal level about which the price fluctuates from day to day: and they may be neglected [impounded in cœteris paribus] during a study of such fluctuations.

Let us then pass on; and suppose a great increase in the general demand for fish, such for instance as might arise from a disease affecting farm stock, by which meat was made a dear and dangerous food for several years together. We now impound fluctuations due to the weather in cœteris paribus, and neglect them provisionally: they are so quick that they speedily obliterate one another, and are therefore not important for problems of this class. And for the opposite reason we neglect variations in the numbers of those who are brought up as seafaring men: for these variations are too slow to produce much effect in the year or two during which the scarcity of meat lasts. Having impounded these two sets for the time, we give our full attention to such influences as the inducements which good fishing wages will offer to sailors to stay in their fishing homes for a year or two, instead of applying for work on a ship. We consider what old fishing boats, and even vessels that were not specially made for fishing, can be adapted and sent to fish for a year or two. The normal price for any given daily supply of fish, which we are now seeking, is the price which will quickly call into the fishing trade capital and labour enough to obtain that supply in a day's fishing of average good fortune; the influence which the price of fish will have upon capital and labour available in the fishing trade being governed by rather narrow causes such as these. This new level about which the price oscillates during these years of exceptionally great demand, will obviously be higher than before. Here we see an illustration of the almost universal law that the term Normal being taken to refer to a short period of time an increase in the amount demanded raises the normal supply price.  ...

Relatively short and long period problems go generally on similar lines. In both use is made of that paramount device, the partial or total isolation for special study of some set of relations. In both opportunity is gained for analysing and comparing similar episodes, and making them throw light upon one another; and for ordering and co-ordinating facts which are suggestive in their similarities, and are still more suggestive in the differences that peer out through their similarities. But there is a broad distinction between the two cases. In the relatively short-period problem no great violence is needed for the assumption that the forces not specially under consideration may be taken for the time to be inactive. But violence is required for keeping broad forces in the pound of Cœteris Paribus during, say, a whole generation, on the ground that they have only an indirect bearing on the question in hand. For even indirect influences may produce great effects in the course of a generation, if they happen to act cumulatively; and it is not safe to ignore them even provisionally in a practical problem without special study. Thus the uses of the statical method in problems relating to very long periods are dangerous; care and forethought and self-restraint are needed at every step. The difficulties and risks of the task reach their highest point in connection with industries which conform to the law of Increasing Return; and it is just in connection with those industries that the most alluring applications of the method are to be found.
For those who want more on the history of ceteris paribus (the modern spelling no longer uses the ligature version that ties together the o and e), Joseph Persky offers a nice introduction in his 1990 article "Retrospectives: Ceteris Paribus," which appeared in the Journal of Economic Perspectives (4:2, pp. 187-193). Persky finds early uses of the term back in the 1600s, including a 1662 passage by the economist William Petty that was often quoted in the 19th century--and thus may have inspired Marshall's use of the term.

Persky notes the dueling concerns: on the one hand, economists may feel they should avoid big-picture subjects like the global economy or historical analysis because the ceteris are not always paribus; on the other, economic research may end up focusing on one factor while other important factors are also changing. But as Persky points out, the ceteris paribus assumption is not meant as a literal statement that nothing else has changed, but only as a reminder that the analysis may be leaving something out. As Persky writes: "Economists could do much worse than to flag our fallibility with a bit of Latin."