Sunday, March 22, 2015

Wren-Lewis Takes a Stab at It

Simon Wren-Lewis will unfortunately have to join Brad DeLong and Nick Rowe in the ranks of not-ready-for-prime-time monetary economists. There is always hope, though. We can allow him a retake of David Levine's Keynesian economics exam. He could even attempt the same problem, if he wants. Quoting yours truly from my previous post:
If Levine's piece were a prelim question, I'm afraid we would have to fail both Brad and Nick. Brad can't quite get off the ground, as he doesn't understand that Levine's model is indeed a monetary economy and not a barter economy. Nick achieves liftoff, and we can give him points for recognizing the double coincidence problem and that the phone is commodity money. But then he stalls and crashes, walking off in a huff complaining that Levine doesn't know what he's talking about. Levine has posted an addendum to his original post, which I think demonstrates that he does in fact have a clue.
Simon says:
When we allow for the existence of money, it becomes quite clear how the ‘wrong’ real interest rate can lead to a demand deficient outcome. Brad DeLong takes Levine to task for trying to use a barter economy and Say’s Law to refute Keynesian ideas, and Nick Rowe turns the knife.
So, Simon compared notes with Brad and Nick after the exam. Bad idea. Everyone knows that talking to the guys who failed isn't going to help you pass the retake.

What did Simon do on his exam? He followed the time-honored approach of not answering the question he was asked, but answering one he thought he knew the answer to instead. What he gives us is not a critique of what Levine did, but a discussion of New Keynesian (NK) vs. RBC models. To summarize his discussion, Simon thinks that people who work with competitive equilibrium business cycle models (RBC for example) are contradicting themselves. According to him, their models are supposed to be microfounded, but prices are set by some Walrasian auctioneer. That's pretty silly, he thinks. He argues that NK models are superior in this respect, as the suppliers of goods actually set prices in an NK model, just as suppliers do in the real world. He elaborates by saying that NK models
...replace the auctioneer with a more modern macroeconomics - a macroeconomics where firms set prices and central banks change interest rates to achieve a target.
As well, repeating from above:
When we allow for the existence of money, it becomes quite clear how the ‘wrong’ real interest rate can lead to a demand deficient outcome.

First, as I explained in my previous post, monetary exchange - whether it's commodity money or fiat money - is critical to how Levine's example works. That's how the "demand shock" propagates, and where the big multiplier comes from. Further, in Levine's sticky-price equilibrium, the fact that the real interest rate is wrong is exactly the problem. In fact, the real interest rate is constrained to be zero in the sticky price equilibrium, when efficiency dictates that it should be lower. If you want to call that a "demand deficiency," I guess you can, but part of the point is that the terminology isn't actually descriptive of the basic inefficiency.

Since Simon brought up NK and RBC models, let's discuss that. First, there is in fact no Walrasian auctioneer in a competitive equilibrium. The Walrasian auction was a story thought up by someone (no idea who - anyone know?) to justify focusing attention on equilibrium outcomes - it's entirely outside the model. In a competitive equilibrium, everyone optimizes, markets clear, and that's it. But, does dropping competitive equilibrium make much difference? Well, not really. If we take Prescott's RBC model, and add Dixit-Stiglitz monopolistic competition, what do we get? The model behaves in roughly the same way, except there are some monopoly rents in the production of goods. For a lot of problems, we're not going to care about the difference between monopolistic competition and competitive equilibrium, so we might as well take the easy route, and use competitive pricing. But Woodford can't take that route, because he is concerned with sticky prices and relative price distortions. You can't study those in competitive equilibrium, so he needs a technical device, and Dixit-Stiglitz works. He doesn't use it because it's somehow more realistic.

Further, if monetary exchange and central banking are so important to Simon, I'm not sure why he likes NK so much. A Woodford "cashless" model is just that. There's no money in sight, except that people quote prices in terms of some virtual unit of account, and the central bank determines an interest rate in terms of that unit of account. If this is realism, I'm confused. Actual central banks issue some liabilities, hold some assets, and their key policy actions involve swapping some of their liabilities for assets. I don't see that happening in an NK model. What I see is an assumption that the central bank can set a price. I have no idea why this central bank can do that - the model certainly doesn't tell me anything about it.

Here's something Simon says of Levine:
He does not talk about central banks, or monetary policy. If he had, he would have to explain why most of the people working for them seem to believe that New Keynesian type models are helpful in their job of managing the economy.
I work for one of these institutions, and I have a hard time answering that question, so it's not clear why Simon wants David to answer it. Simon posed the question, so I think he should answer it.

Friday, March 20, 2015

No One Expects the Spanish Inquisition: More on D.K. Levine and J.M. Keynes

I guess I shouldn't be surprised. David Levine's piece on Keynesian economics appears to have generated plenty of heat. See for example the comments section in my post linking to Levine. I'm imagining an angry mob dressed like the Pythons, as in the photo above, running through the streets of Florence looking for Levine. Each has a copy of the General Theory, and they're aiming to inflict torture by taking turns reciting it to David, until he renounces his heretical writings.

What drew my attention to Levine's piece initially were blog posts by Brad DeLong and Nick Rowe. If Levine's piece were a prelim question, I'm afraid we would have to fail both Brad and Nick. Brad can't quite get off the ground, as he doesn't understand that Levine's model is indeed a monetary economy and not a barter economy. Nick achieves liftoff, and we can give him points for recognizing the double coincidence problem and that the phone is commodity money. But then he stalls and crashes, walking off in a huff complaining that Levine doesn't know what he's talking about. Levine has posted an addendum to his original post, which I think demonstrates that he does in fact have a clue.

In any case, I thought Levine's example was interesting, and I'd like to follow John Cochrane's suggestion of filling in some of the spaces, which will require some notation, and a little algebra. First, adding to David's addendum, let's generalize what he wrote down. This is just a version of an economy with an absence of double coincidence of wants. If any two people in this world meet, it will never be the case that each can produce what the other wants. It's roughly like Kiyotaki and Wright (1989), except with 4 goods instead of 3. And of course there are some very old versions of the double coincidence problem in the work of Jevons and Wicksell, for example. Brad DeLong, who reads the old stuff assiduously, perhaps missed those things.

Commodity Money
Let's first imagine a world with T types of people, indexed by i = 1,2, ..., T. There are many people of each type. Indeed, for convenience assume that there is a continuum of each type with mass 1. A person of type i can produce one indivisible unit of good i at a utility cost c, and receives utility u from consuming one indivisible unit of good i + 1 (mod T) (i.e. T + 1 (mod T) = 1). We need T >= 3 for a double coincidence problem, and T will matter for some elements of the problem, as we'll see. A key feature of the problem will be that each person can meet only one other person at a time to trade - that's a crude way to capture the costs of search and exchange. We could allow for directed search, and I think that would make no difference, but we'll just cut to the chase and assume that each person of type 1 meets with a type 2, each type 2 meets with a type 3, etc., until the type T - 1 people meet with the type Ts. Further, we'll suppose that, as in David's example, good 1 is perfectly durable and costless to store, while all the other goods are perishable - they have infinite storage costs. Assume that u - c > 0 (with some modifications later).

A key element of the problem is that the indivisibility of goods fixes the prices - indeed, in a Keynesian fashion - so long as we only permit these people to trade using pure strategies. That is, David assumes that when two people meet they both agree to exchange one unit of a good for one unit of some other good, or exchange does not take place. But let's do something more general. Suppose that 2 people who meet can engage in lotteries. That is, what they agree to is an exchange where a good is transferred with some probability, in exchange for the other good with some probability. Then, the probabilities play the role of prices. That is, with indivisible goods, we can think about an equilibrium with lotteries as a flexible price equilibrium, and the Levine equilibrium, where one thing always trades for one other thing, as a sticky price equilibrium.

This sounds like it's going to be hard, but it's actually very easy. Work backwards, starting with a meeting between a type T - 1 and a type T. Trade can only happen if the type T - 1 has good 1, which is what the type T consumes, so suppose that's the case. We have to assume something about how these two would-be trading partners bargain. The simplest bargaining setup is a take-it-or-leave-it offer by the "buyer," i.e. the person who is going to exchange something he or she doesn't want for something the "seller" produces. The buyer has one unit of good 1, which is of no value to him/her, so the buyer is willing to give this up with probability one. Since u > c, the seller is willing to produce one unit of good T in exchange, so the buyer's optimal offer is in fact the Levine contract - one unit of good 1 in exchange for one unit of good T. And the same applies to the meetings where types 2, 3, ..., T-1 are the buyers.

But, the type 1 people - these are the producers of the commodity money in this economy - are different. Unlike the buyers in the other meetings, they have to produce on the spot. And, since they make a take-it-or-leave-it offer, they are in a position to extract surplus from sellers - and they do it. So, the trade they agree to is an exchange where each type 2 person produces one unit of good 2 and gives it to a type 1 person, and the type 1 person agrees to produce good 1 with probability p(1), where

p(1) = c/u.

This offer leaves the type 2 person indifferent: producing costs c, and the commodity money received with probability p(1) buys one unit of good 3 for sure, yielding utility u.
So, in equilibrium, only a fraction c/u of each of types 2, 3,..., T gets to consume, and all the type 1s - the money producers - consume. There is a welfare loss from this commodity money system, in that the money producers are extracting seignorage from everyone else. In the fixed price equilibrium, where everyone has to trade one unit of a good for one unit of another good, the type 1s are worse off, and everyone else is better off.
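As a concrete check on that arithmetic, here is a minimal Python sketch of the type 1 meeting. The function name is mine, purely for illustration; this is a reconstruction of the indifference argument, not code from Levine or anyone else:

```python
def money_production_prob(c, u):
    """Probability p(1) with which the type 1 buyer offers to produce the
    commodity money (good 1), under a take-it-or-leave-it offer.

    The offer drives the type 2 seller's expected payoff, -c + p(1)*u,
    down to zero, so p(1) = c/u."""
    assert 0 < c < u, "we assumed a positive gain from trade, u - c > 0"
    return c / u

# With c = 1 and u = 2, type 1 produces money half the time; c/u = 0.5 is
# also the fraction of each of types 2,...,T that gets to consume.
p1 = money_production_prob(1.0, 2.0)
print(p1)  # 0.5
```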

Now, consider a "demand shock." That is, suppose that all the type T people receive utility u* from consuming, where u* < c, and everyone else is the same as before. A point I want to make here is that the Keynesian failure can come from anywhere in the chain - it need not come from the money producers. If we consider the flexible price, or lottery, equilibrium, now the type T - 1 buyers have to do something different in order to get the type T sellers to produce. It is still best for the buyers in these meetings to offer their commodity money (good 1) with certainty, but the take-it-or-leave-it offer the buyer makes involves the seller producing with probability

p(T) = u*/c,

which again extracts all the surplus from the seller, since -p(T)c + u* = 0.
Further, now it is possible that, because the type T - 1 buyers get a bad deal, they won't be so willing to work when they are trading with the type T - 2 buyers. Indeed, in this equilibrium (if there is one - more about that later), we can work backward to determine that a type i person, for i = 3, 4, ..., T, will produce with probability

p(i) = min{1, (u*/c)(u/c)^(T-i)}.
What the type 1 and type 2 people do is potentially a little different, because this involves the behavior of the type 1s, who are the commodity money suppliers. Again, working backward, we can show that an equilibrium will exist if and only if

(u*/c)(u/c)^(T-1) >= 1.    (4)
If that inequality does not hold, then there is not enough total surplus in this economy to support trade, and everything shuts down. But, if (4) holds, then the solutions for p(1) and p(2) are:

p(1) = min{1, c/(p(3)u)}, p(2) = min{1, p(3)u/c}.
So, the effects of the "demand shock" could be transmitted back in the chain, even to the commodity money supplier if the problem is severe enough, i.e. if u*/c is very small. This is quite interesting, as what is going on is that financial arrangements (albeit crude ones - this is just a commodity money system) propagate "shocks." A decline in demand in one sector gets transmitted to others. And all this interconnection and specialization could in fact shut the economy down - even without sticky prices.
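The propagation is easy to see by coding the backward recursion. The closed forms below are my derivation from the stated take-it-or-leave-it bargaining assumptions (a reconstruction, not equations copied from Levine's post): type T produces with probability u*/c, each earlier seller's production probability is min{1, p(i+1)u/c} because money is worth p(i+1)u one link ahead, and the type 1-type 2 meeting is handled separately.

```python
def chain_probs(u, c, ustar, T):
    """Production probabilities in the flexible-price (lottery) equilibrium
    of the commodity money chain, with a demand shock u* < c hitting type T.

    Returns a dict {i: p(i)}, or None if there is not enough surplus for
    the type 1 money producers and the economy shuts down."""
    # Existence: the type 1 money producer must still earn a surplus.
    if (ustar / c) * (u / c) ** (T - 1) < 1.0:
        return None
    p = {T: min(1.0, ustar / c)}             # type T indifference: -p*c + u* = 0
    for i in range(T - 1, 2, -1):            # types T-1 down to 3
        p[i] = min(1.0, p[i + 1] * u / c)    # money buys p(i+1)*u at the next link
    p[2] = min(1.0, p[3] * u / c)            # type 2 produces for lottery money
    p[1] = min(1.0, c / (p[3] * u))          # type 1 produces money at cost c
    return p

# Example: u = 1.5, c = 1, u* = 0.3, T = 6. The shock damps production all
# the way back to the type 4 sellers, even though only type T's tastes changed.
probs = chain_probs(1.5, 1.0, 0.3, 6)
# p(6) = 0.3, p(5) = 0.45, p(4) = 0.675, p(3) = p(2) = 1, p(1) = 2/3
```

With u*/c small enough the existence condition fails and the function returns None: the whole chain shuts down, sticky prices or not, which is the point of the paragraph above.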

Note that I'm putting "demand shock" in scare quotes. Why? Because, in spite of the fact that the comparative static experiment involved a decline in the utility type Ts receive from consuming, it affects everything a type T does, in particular his or her labor supply. Why work if you don't like to eat? This illustrates why terms like "demand shock" and "demand deficiency" have no meaning in a properly specified general equilibrium model. This is a standard criticism of IS/LM models that goes back to at least the mid-1970s. For example, the IS curve is shifting because the behavior of some consumers changed, but those consumers are the same people who are supplying labor in the labor market, and holding money in the money market. Why don't we take account of that? Why indeed. Spelling these things out in the model means you don't miss that, which could be very important.

The next step is easy, as Levine already did it. If prices are fixed (all trade is one thing for one other thing), then the "demand shock" will shut everything down. The flexible price lottery equilibrium is, as far as I can tell, Pareto efficient, so that's a useful benchmark. So note first that having this economy shut down - in this extreme example - will sometimes be efficient, if (4) does not hold. Thus, in that case, the fixed price equilibrium is actually OK. But that's not what interests us. Suppose u* < c, but (4) holds. Then clearly the fixed price equilibrium is not Pareto efficient. But how would we fix it? David goes through some possibilities, but the key message is that, if the government is going to intervene in this world in a good way, it has to redistribute. Somehow the government has to move surplus to type Ts from everyone else, so that the type Ts are willing to trade. If the shocks are causing some inefficiency, we can't correct the problem through some blunt policy which says the government should just buy some stuff, and it really doesn't matter what. Indeed, it does matter, and this crude model is an illustration of that fact.

As well, note that a typical justification for thinking about the sticky price equilibrium rather than the flexible price equilibrium, is that pricing is hard for the people in the model to figure out. Indeed, that's the case here. People sometimes argue that mixed strategies (as in the flexible price equilibrium) are very difficult to implement in practice. But that doesn't let the government off the hook. If it wants to correct the pricing - the prices are the wrong ones in the sticky price equilibrium - it has to do so by replicating the flexible price equilibrium, and that involves lotteries. That's just an example of a general problem in Keynesian economics.

Fiat Money
So, you might wonder why we would worry about a commodity money economy, if that's not the type of world we currently live in. Well, it's not so hard to extend the idea to a fiat money economy. Some things change in an interesting way, but the basic idea stays intact. We're going to work in an overlapping generations framework. Samuelson's OG model is not used so much anymore, but it was a standard workhorse for monetary economics at the University of Minnesota until about the mid-1980s. For this example, it works nicely.

The people that live in this world look much like the people in the commodity money economy, except they each live for two periods. They can produce an indivisible unit of the current perishable consumption good when young at a cost c, and receive utility u from consuming an indivisible unit of the consumption good when old. Each period, a continuum of two-period-lived people with unit mass is born. In period 1, there is a continuum of old people who live only one period. The initial old each receive utility u from consumption of one unit of the consumption good, and each has one unit of indivisible fiat money. We'll make the bold assumptions that fiat money cannot be counterfeited, and that it is perfectly durable. Each period, each young person is matched with one old person.

We'll suppose, as in the commodity money economy, that in a meeting between a young person and an old person with money, the old person makes a take-it-or-leave-it lottery offer to the young person. This is actually easier to analyze than the commodity money equilibrium, as the initial old people want to give up their money no matter what - they're not like the commodity money producers who have a cost of producing money. So, here the flexible price equilibrium and the fixed-price equilibrium are the same thing. Each period, every old person exchanges one unit of money for one good produced by a young person, every young person receives utility -c + u > 0, every initial old person receives utility u, and money circulates forever.

Now, suppose that sometime in the future, in period T, utility from consuming is lower for all the people who are born in that period, i.e. they receive u* from consuming when old, and u* < c, just as before. Again, this is easier than in the commodity money case, as this economy will not shut down under flexible pricing. Letting p(t) denote the probability that a young person produces in a trade with an old person in period t, we get

p(t) = min{1, (u*/c)(u/c)^(T-t)},

which should look familiar from the commodity money case, but now this holds for all t = 1,2,...,T. For t = T+1, T+2, ..., p(t) = 1 as before. So now we get a temporal interpretation of the idea: future anticipated shocks propagate backward in time.

As in the commodity money economy, everything shuts down if everyone has to trade at fixed prices. In period T, the young will not accept money, and so by induction no one will. Here, just as with commodity money, the problem in the fixed-price economy is not a monetary problem - it's that the prices are wrong. It's always puzzled me, for example, why Mike Woodford thinks of his models as prescriptions for how central banks should behave, as the relative price distortions that exist in those models look like problems for the fiscal authority to work on. I haven't worked out the details, but I think that a policy that would work in the fixed price equilibrium is to simply replicate the flexible price equilibrium with a sequence of taxes on old agents (random confiscations of money) and subsidies for the young (random transfers of money). You can do something similar in a Woodford model with consumption taxes (see this paper by Correia et al.).

We could also think about unanticipated preference shocks in this model. For example, suppose the utility of consumption for old persons is a random draw, which they learn when they are young. With probability q they receive u*, and with probability 1 - q, they receive u. Then, we can construct an equilibrium in which the young produce with probability s* when their utility when old will be u*, and produce with probability s when their utility from consuming when old will be u. For an equilibrium to exist requires

-c + qu* + (1-q)u >= 0.

So the economy shuts down unless the unconditional expected utility of a given agent who always receives his or her consumption good when old is nonnegative. If an equilibrium exists, then s = 1, and

s* = (1-q)u*/(c - qu*).
Therefore, the random preference shocks produce random business cycles in which production and consumption are low in bad states and high in good states. But these cycles are efficient. Low demand for goods means low willingness to work, but note that this doesn't mean that the person with the "demand shock" consumes less. They work less and supply fewer consumption goods.
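The equilibrium objects in this stochastic version can be computed directly. A short Python sketch, using my reconstruction of the two conditions described above (existence requires the unconditional surplus -c + qu* + (1-q)u to be nonnegative, and the low-state probability solves the low-type young person's indifference condition under take-it-or-leave-it offers by the old):

```python
def og_cycle_probs(u, ustar, c, q):
    """Production probabilities (s, s*) in the OG economy with i.i.d.
    preference shocks: with prob q a cohort draws u* < c when old, else u.

    My reconstruction: if q*u* + (1-q)*u < c, money is not valued and the
    economy shuts down (return None). Otherwise the high types produce for
    sure (s = 1), and the low types' indifference condition
        s* * c = (q*s* + (1-q)*s) * u*
    pins down s*."""
    if q * ustar + (1 - q) * u < c:
        return None                       # not enough expected surplus
    s = 1.0
    sstar = (1 - q) * ustar / (c - q * ustar)
    return s, sstar

# Example: u = 2, u* = 0.5, c = 1, q = 0.3 gives s = 1 and s* = 0.35/0.85,
# a cycle in which output is about 41% of normal in the bad state.
print(og_cycle_probs(2.0, 0.5, 1.0, 0.3))
```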

As in all the previous cases, if prices are fixed, then this economy shuts down because of these demand shocks. There is always a positive probability that the young next period will not accept money, so it is not valued in equilibrium and there is no trade. Again, the problem is that the prices are wrong. A fix for this is for the government to step in, if it can, and replicate the allocation that was achieved under flexible prices. What should work is that, when a bad shock is realized (the young learn that their utility from consumption when old is u*), the government taxes money away from old people, at random, and gives it to young people, again at random. Note that this doesn't involve running a deficit - it's a tax/transfer scheme with taxes = transfers. Again, the cure is redistribution. Further, note that an optimal allocation has cycles - it's not optimal to smooth the cycle completely, even if that were possible (and I'm not sure it is).

Conclusion
So, I think this is an interesting example. It's obviously special, and we wouldn't take it to the U.S. Treasury and tell them about it, hoping to influence their decisions. The message is that whatever anyone thinks they know about "Keynesian" ideas, and the "Keynesian" policies derived from those ideas, they should reconsider. There's nothing obvious about that stuff. We can write down coherent models in which Keynesian phenomena occur, and the optimal policies don't look anything like what Paul Krugman recommends. And it's not because his IS/LM model is "right." Far from it. We understood that long ago.

Monday, March 16, 2015

Levine on Keynes

This piece by David Levine is a lot of fun. Paul Krugman refers to Levine as "incompetent" and "ignorant," which I think we can interpret as a strong positive signal.

Wednesday, March 11, 2015

Lucas and His Critics

Thanks to Brad DeLong and Noah Smith for resurrecting a 2011 panel discussion on the roots of modern macroeconomics, which includes Michael Lovell, Robert Lucas, the late Dale Mortensen, Robert Shiller, and Neil Wallace. If you're interested in the history of macro thought, this is fascinating.

What's of interest to DeLong and Smith is something else altogether, though. As Noah lets on at the outset, it's basically macro-dissing. Noah tells us about two kinds of macro-dissers:
Attacker Group 1: "Old Keynesian" economists who want to use aggregate-only models.

Attacker Group 2: Decision theorists and other micro theorists who want to make macro use more realistic models of agent behavior.
DeLong and Smith are more-or-less in the first camp and the second, respectively, though they might be better-slotted in a third category - journalists who no longer practice economics (much), but have a lot to say about what economists do or should do. The focus of these two macro-dissers is some comments in the panel discussion by Bob Lucas. What we get is Lucas's comments, taken out of context, with parts edited out, and filtered by DeLong's and Smith's notions of what modern macroeconomists do. If you're thinking this won't be entirely grounded in reality, you're on the right track.

Somewhere in the panel discussion, Shiller makes some comments about behavioral economics, which sets Lucas off. He has very different views, as you might already know. Here are Lucas's comments (not edited):
One thing economics tries to do is to make predictions about the way large groups of people, say, 280 million people are going to respond if you change something in the tax structure, something in the inflation rate, or whatever. Now, human beings are hugely interesting creatures; so neurophysiology is exciting, cognitive psychology is interesting – I’m still into Freudian psychology – there are lots of different ways to look at individual people and lots of aspects of individual people
that are going to be subject to scientific study. Kahnemann and Tversky haven’t even gotten to two people; they can’t even tell us anything interesting about how a couple that’s been married for ten years splits or makes decisions about what city to live in – let alone 250 million. This is like saying that we ought to build it up from knowledge of molecules or – no, that won’t do either, because there are a lot of subatomic particles – . . . we’re not going to build up useful economics in the sense of things that help us think about the policy issues that we should be thinking about starting from individuals and, somehow, building it up from there. Behavioral economics should be on the reading list. I agree with Shiller about that. A well trained economist or a well educated person should know something about different ways of looking at human beings. If you are going to go back and look at Herb Simon today, go back and read Models of Man. But to think of it as an alternative to what macroeconomics or public finance people are doing or trying to do . . . there’s a lot of stuff that we’d like to improve – it’s not going to come from behavioral economics. . . at least in my lifetime. {laughter}
That seems pretty interesting. Lucas is open to thinking about alternative views of human behavior, but he's making an assessment about what the productive avenues for research in macroeconomics are. He says behavioral economics does not appear to be one of them. And I think he's right - for now, at least.

This is interesting, as I had never thought about the macroeconomic problem in the way Lucas has laid it out. Modern macro is built up from "conventional" economic theory - competitive analysis, information economics, game theory, for example. In a macro model - with heterogeneous agents, a representative stand-in, or whatever - economic agents are rational, in that they are maximizing some objective function given available information and constraints. But the basic economic theory we work with is almost never used to make sense of the behavior of the individual. The one study I know about is John Rust's 1987 Econometrica paper. That's a dynamic programming model of bus engine replacement, fit to the observed behavior of Harold Zurcher. But I don't think the standard consumer behavior we teach in econ 101 would do a very good job of predicting what I do when I go out shopping. And I'm actually trained to think like an econ 101 consumer, so econ 101 certainly won't explain my neighbor's behavior. But we teach the things we do in econ 101 for good reasons - we think that this basic theory does a decent job of explaining how reasonably large groups of people behave. Not the 280 million that Lucas mentions - I think our usual notion of "large" in this respect is much smaller.

So, when we say that macroeconomic theory has "microfoundations," what we mean is not that it is built up from theory that explains the behavior of individuals. For a lot of economic behavior, we're not going to do very well in explaining the behavior of an individual. And, as Lucas notes, behavioral economists can't do it either. Rather, what "microfoundations" is about is finding the model elements - optimizing behavior, constraints, information - that explain the behavior of large (that's large in the "larger than one but much smaller than 280 million" sense) groups of economic agents from first principles. Then we can make predictions about the effects of policy on the behavior of really large (i.e. 280 million for example) groups of people. So, until we come up with something better, that's the state of the art.

Finally, I thought Noah's description of "attacker group 2" was interesting. This was "decision theorists and other micro theorists who want to make macro use more realistic models of agent behavior." Two reactions to this:

1. It seems a lost cause to try to "make" some other researcher do something that you would like them to do. These are quite independently-minded types. Best to do it yourself.
2. As Lucas points out, we don't care about someone else's notion of what might be "realistic." We go with what's useful.


Also, it's not as if macroeconomists never get outside the box - in the behavioral sense. In contrast to what Noah seems to think, we're not straitjacketed into some narrow class of models. Some examples:

Krusell/Smith 2003
Recent survey by Driscoll/Holden at BoG
Greg Mankiw blog post

So people have experimented with behavioral economics in the macro literature, they have held conferences, they have written papers. But it's not widely used. And Lucas goes some way in explaining why.

Monday, March 9, 2015

Fed Policy in 1995

I was interested in Paul Krugman's NYT column from this morning, mainly for his take on mid-1990s monetary policy in the United States. Krugman says:
Recent job gains have brought the Fed to a fork in the road very much like the situation it faced circa 1995. Now, as then, job growth has taken the official unemployment rate down to a level at which, according to conventional wisdom, the economy should be overheating and inflation should be rising. But now, as then, there is no sign of the predicted inflation in the actual data.
So, let's look at the data, restricting attention just to the 1990s:
You can see that, indeed, the unemployment rate in 1995 was about 5.6% vs. 5.5% in the last survey. But the pce inflation rate in 1995 was in the vicinity of 2%, versus 0.2% year-over-year in January. As Krugman notes, a Phillips curve view of the world would not help you much in making sense of the data, either in 1995, or now. For example, if we look at a scatter plot of the inflation and unemployment data for the 1990s, and link the observations in order, we get:
There are brief periods during the decade when inflation and unemployment actually move in opposite directions, but more often than not we get the reverse case. Indeed, from 1992 until 1999, the unemployment rate falls about 3 1/2 percentage points while the inflation rate falls about one percentage point. So much for the inflationary pressures of a tight labor market.

What was the Fed doing during the 1990s, and in 1995 in particular? Here's how Krugman describes it:
In the early-to-mid 1990s, the Fed generally estimated the Nairu as being between 5.5 percent and 6 percent, and by 1995, unemployment had already fallen to that level. But inflation wasn’t actually rising. So Fed officials made what turned out to be a very good choice: They held their fire, waiting for clear signs of inflationary pressure.
Here's what is in the data. We'll plot the fed funds rate and inflation, and the fed funds rate and the unemployment rate for the 1990s:
So, you can see that, by early 1995, the Fed had just finished a substantial tightening cycle - the fed funds rate had increased from about 3% at the beginning of 1994 to about 6% at the beginning of 1995. So, the Fed wasn't "holding its fire" in 1995, it had just launched a major artillery barrage, and had stopped shooting until the smoke cleared. And the fed funds rate in 1995 was at 6%, not at (effectively) zero, as is the case now.

Krugman's conclusion is:
What’s worrisome is that it’s not clear whether Fed officials see it that way [Krugman's way]. They need to heed the lessons of history — and the relevant history here is the 1990s, not the 1970s. Let’s party like it’s 1995; let the good, or at least better, times keep rolling, and hold off on those rate hikes.
Here's one way to see it as Krugman does. Apparently he thinks that 1995 Fed policy was appropriate. Suppose then that we fit a Taylor rule to 1990s data, of the form:

R = a + b*i + c*u,

where R is the fed funds rate, i is the pce inflation rate, u is the unemployment rate, and a, b, and c are coefficients we are going to estimate. OLS regression gives us

a = 9.2
b = 1.2
c = -1.2

And here are the raw data and the fitted values:
Note that policy is actually tighter in 1995 than average Fed behavior over the 1990s predicts. But what if the Fed followed Krugman's advice and behaved like it did in the 1990s? Well, the fitted Taylor rule, given an unemployment rate of 5.5% and a pce inflation rate of 0.2%, implies a fed funds rate of 2.8%. So if the Fed had been behaving like it did in the 1990s, it would have lifted off the zero lower bound long ago. Apparently Krugman is confused.
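The implied-rate calculation is simple to reproduce. A minimal sketch using the coefficients reported in the text (with the underlying series in hand, an OLS fit would recover them):

```python
# The fitted Taylor rule from the text: R = a + b*i + c*u, where i is pce
# inflation and u is the unemployment rate, estimated by OLS on 1990s data.
# (With the series in hand, numpy.linalg.lstsq on columns [1, i, u] would
# reproduce the fit.) Here we just take the reported coefficients:
a, b, c = 9.2, 1.2, -1.2

def taylor_rate(inflation, unemployment):
    """Fed funds rate (percent) implied by the fitted rule."""
    return a + b * inflation + c * unemployment

# Conditions cited in the post: 0.2% pce inflation, 5.5% unemployment.
implied = taylor_rate(0.2, 5.5)
print(round(implied, 1))  # 2.8 - well above the zero lower bound
```

So average 1990s Fed behavior, applied to current conditions, calls for a funds rate near 2.8%, which is the figure in the paragraph above.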

Monday, February 9, 2015

Larry Summers Writes It Down

This Financial Times article by Larry Summers articulates ideas that he has been talking about recently.

First, I take it as a good sign that Summers understands that the Phillips curve is not all it's cracked up to be:
...the idea that below normal unemployment will necessarily lead to accelerating inflation as suggested by the so called Phillips curve is very uncertain. Contrary to such predictions, inflation did not decelerate by much even a few years ago when unemployment was in the range of 10 per cent. Nor was there much evidence of accelerating inflation in the 1990s when the unemployment rate fell below 4 per cent.
But Summers is trying to make the case that, for the Fed, continuing with ZIRP (zero interest rate policy) would be a good thing. He says:
...if inflation were to accelerate a bit this would be a good thing. It is now running and is expected to run below the Fed target. Prices are about 4 per cent below where they would have been if 2 per cent inflation had been maintained since 2007. So there is a case for some inflation above 2 per cent to catch up to the Fed’s price level target path. There may also be a case for inflation a little bit above 2 per cent for the next few years to allow real interest rates low enough to promote recovery when the next recession comes.
So, here he seems confused. If the Phillips curve doesn't explain what's going on, how do we get more inflation with continued ZIRP except through a Phillips curve mechanism? Further, Summers seems worried about the "next recession." Presumably if the Fed still has ZIRP at that point, it's powerless (except perhaps with unconventional tools) to do anything about it.
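Summers' 4 per cent figure implies something about average inflation since 2007. A back-of-envelope sketch, assuming an eight-year window (2007-2015 is our assumption) and taking his numbers at face value:

```python
# If the price level is 4% below a 2%-per-year target path started in 2007,
# what average inflation rate does that imply over the period?

years = 8                            # assumed window, 2007-2015
target_path = 1.02 ** years          # price level if 2% inflation had held
actual_level = target_path * 0.96    # 4% below the target path
avg_inflation = actual_level ** (1 / years) - 1

print(round(100 * avg_inflation, 2))  # 1.48 - roughly 1.5% per year
```

That is, his claim amounts to average inflation of about 1.5% per year, roughly half a point below target.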

Next, we enter the realm of the bad analogy:
...a plane that accelerates too rapidly as it takes off may cause passengers discomfort while a plane that accelerates too slowly may crash at the end of the runway. Historical experience is that inflation accelerates only slowly so the costs of an overshoot on inflation are small and reversible with standard tightening policies. In contrast, aborting recovery and risking a further slowing of inflation is potentially catastrophic — as Japan’s experience demonstrates. So in a world where economic forecasts are highly uncertain, prudence in avoiding the largest risks counsels in favour of Fed restraint in raising rates.
His assumption, again, is that continued ZIRP will make the inflation rate go up. But "as Japan's experience demonstrates," 20 years of ZIRP just serves to produce low inflation.

Pining for the Fjords

This post by Simon Wren-Lewis brings the dead parrot sketch to mind (with Simon playing the Michael Palin role). I'm interested in the last couple of paragraphs of his post. Simon is analyzing the U.K. policy problem, and asserts:
For whatever reason (resistance to nominal wage cuts being the most obvious), inflation ceases to be a good indicator of underutilised resources when inflation starts off low and we have a major negative demand shock.
First, for inflation to "cease" to be a good indicator of underutilized resources, it must once have been a good one. Suppose, as many economists do, that the unemployment rate is a good measure of underutilized resources. Then, if inflation is a good "indicator" of underutilization, there must be a stable Phillips curve - a negative relationship between the unemployment rate and the rate of inflation.

Clearly, Simon thinks that there has been a "major demand shock" - presumably he means the financial crisis - which caused the Phillips curve to break down. So, suppose we look at the data on CPI inflation and the unemployment rate in the U.K. for 2000-2006. That's arbitrary of course, but if this Phillips curve is so stable, we should see it in the data for that period. Here's the time series (quarterly data):
The inflation rate rises steadily over that period; the unemployment rate goes down, and then up again. Here's the scatter plot, with the observations connected in chronological order:
You can see that's not much of a Phillips curve. Suppose I knew that the inflation rate was 1.8%, and I could not observe the unemployment rate. What would the observed inflation rate tell me about the unemployment rate, given the last chart? Not much. And, in fact, I can observe the unemployment rate. So what is the inflation rate telling me about underutilization?

So after the "major demand shock" - otherwise known as the financial crisis - occurred, what does the Phillips curve in the U.K. look like? We'll first plot the data for 2007-2014 (quarterly) using the headline CPI to measure the inflation rate. Here's the time series:
I think you can tell that this isn't going to produce a nice Phillips curve. Here's the scatter plot:
In particular, note that, from peak unemployment during the recession, the unemployment rate drops about 2 1/2 points, while the inflation rate drops about 3 points. You can see why Simon is worried about the inflation rate as an indicator of underutilization. Presumably utilization has been rising in the U.K., but inflation is dropping like a rock.
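The sign of the slope makes the point. A one-line sketch using the rough magnitudes cited above:

```python
# Post-recession U.K. changes cited in the text: from peak unemployment, the
# unemployment rate falls about 2.5 points while inflation falls about 3
# points. A Phillips curve requires a negative slope; here:

d_unemployment = -2.5   # percentage points
d_inflation = -3.0      # percentage points

slope = d_inflation / d_unemployment
print(slope)  # 1.2 - positive, the "wrong" sign for a Phillips curve
```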

In case you're wondering what happens if we use core inflation instead of headline inflation, here's what that yields:
So, that's even worse - the Phillips curve has the wrong slope.

Simon goes on:
...it seems quite possible that GDP continues to be quite a few percentage points below where it could be without inflation exceeding its target...
So, in spite of the fact that Phillips curve logic is inconsistent with the data - and the problem seems more severe post-recession - Simon continues to use that logic. He imagines that it is underutilization that is holding back the inflation rate, as he states:
...so we continue to waste resources on a huge scale. This is money down the drain that we will never get back. It is like taxing households thousands of pounds or dollars or euros a year and burning that money.
Thus, the possibility of an output gap becomes a certainty - it's now a waste of resources on a huge scale.

Simon finishes the post with:
...if, at low inflation rates, inflation becomes a noisy, weak and asymmetric indicator of the output gap, then focusing on inflation is going to perform badly. In these circumstances it could be many years before it becomes clear that we have been continually running the economy under capacity, and needlessly wasting resources. Unfortunately even when that point of realisation arrives, for obvious reasons monetary policymakers are going to be reluctant to acknowledge the mistake.
Simon's chief worry is the latent output gap - the inefficiency he suspects is there, but seems at a loss to measure. It's unclear why we can't see it now, but might realize sometime in the future that it was there all the time - massive, and highly persistent. That certainly doesn't seem like a Keynesian inefficiency - or, to be more accurate, a New Keynesian inefficiency - as that's a temporary phenomenon. If Simon is so certain this beast exists, he should be able to tell us what it is.

I think it might help Simon, as a start, if he declares the parrot dead. The Phillips curve is not resting, sleeping, or pining for the fjords. It is dead, deceased, passed away. It has bought the farm. Rest in peace.

The Bank of England has been close to the zero lower bound for a long time. The Bank Rate has been set at 0.5% since March 2009. Here's the latest inflation projection from the Bank:
And here's the latest inflation data, up to December 2014:
So, like Simon, the Bank seems not to have learned that the parrot is dead. In spite of a long period in which inflation is falling while the economy is recovering, they're projecting that inflation will come back to the 2% target. But 20 years of zero-lower-bound experience in Japan and recent experience around the world tell us that sticking at the zero lower bound does not eventually produce more inflation - it just produces low inflation.