Monday, July 18, 2016

More Neo-Fisher

What follows is an attempt to make sense of Narayana's note on Neo-Fisherism. That discussion will lead into comments on a paper by George Evans and Bruce McGough.

Start with basics. What are Neo-Fisherite ideas anyway? Narayana says
...in the absence of shocks, the equilibrium inflation rate should be constant if the nominal interest rate is pegged forever. The Fisher equation then implies that the inflation rate should move one for one with the nominal interest rate. This logic is sometimes referred to as “neo-Fisherian”.
I would actually call these New Keynesian (NK) claims. For example, in "Interest and Prices," Mike Woodford takes pains to address the concern, which came out of the previous macro literature, that nominal interest rate pegs are unstable. Woodford's claim is that a Taylor rule that conforms to the Taylor principle (a greater than one-for-one increase in the nominal interest rate in response to an increase in inflation) will imply determinacy. That is, if there are no shocks, then the nominal interest rate is pegged at a constant forever, and the inflation rate is a constant - the inflation target. Further, in the basic NK model, if Woodford's claim is correct, then in the absence of shocks a central bank that wants to increase its inflation target should increase the nominal interest rate one-for-one with the increase in the target, and actual inflation will respond accordingly. Under basic NK logic, this behavior is supported by promises to increase the nominal interest rate in response to higher inflation - and this inflation never materializes in equilibrium.

But, whatever we think Neo-Fisherite or New Keynesian ideas are, Narayana is making a particular argument in his note, and we want to get to the bottom of it. I don't think the analogy part is particularly helpful though. There are two problems considered in Narayana's note. One is an asset pricing problem, and the other has to do with the properties of a particular NK model. As far as I can tell, the extent of the commonality is that solving each problem can involve geometric series. Otherwise, understanding one problem won't help you much with the other.

The asset pricing problem looks like a trick question you might give to unwitting PhD students on a prelim exam. The equilibrium one-period real interest rate is negative and constant forever, and we're asked to price an asset that pays out a constant real amount each period forever. Question: Solve for the steady state price of the asset. Answer: Dummy, there is no steady state price for the asset. Since a rational economic agent in this world values future payoffs more than current payoffs, if we compute the present value of the payoffs, it will be infinite.

Well, so what? On to the second problem. Narayana uses a version of the standard NK model. We're in a world with certainty - no shocks. I'll change the notation so I don't have to use Greek letters. From standard asset pricing, and assuming constant relative risk aversion utility, we can take logs and get
Here, y is the output gap (the difference between actual output and efficient output), i is the inflation rate, R is the nominal interest rate, and r is the subjective discount rate (or the "natural real interest rate"). The second equation is a Phillips curve
This is the only difference from standard NK, as the Phillips curve doesn't have a term in anticipated inflation. This makes the solution easy, but I don't think it otherwise changes the basic mechanics.
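For concreteness, the two equations are roughly of the following form (a sketch of the standard reduced form, not the original expressions; the labels a for the risk aversion coefficient and b for the Phillips curve slope are my notation here):

```latex
% A sketch of the reduced form: log-linearized bond pricing ("NK IS curve") and a
% static Phillips curve with no anticipated-inflation term.
\begin{align}
y(t) &= y(t+1) - \tfrac{1}{a}\big(R(t) - i(t+1) - r\big), \tag{1}\\
i(t) &= b\, y(t), \qquad a, b > 0. \tag{2}
\end{align}
```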

In general, we can solve to get the difference equation
Then, an equilibrium involves finding a sequence of inflation rates that solves the difference equation (3) given some sequence of nominal interest rates, or some policy rule governing the central bank's choice of the nominal interest rate each period.
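Roughly, substituting the Phillips curve into the first equation gives a difference equation of the form (again, a sketch based on the reduced form above):

```latex
% Sketch of the implied difference equation in the inflation rate.
\begin{equation}
i(t) = \Big(1 + \tfrac{b}{a}\Big)\, i(t+1) - \tfrac{b}{a}\big(R(t) - r\big). \tag{3}
\end{equation}
```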

So, suppose that the nominal interest rate is a constant R forever, and suppose that, in period T the inflation rate is i(T). Then, we can solve the difference equation (3) forward to get
Similarly, we can solve (3) backward to get
So, for any real number i(T) equations (4) and (5) describe an equilibrium. Thus, there is a whole continuum of equilibria, indexed by i(T). In equation (4), the second term on the right-hand side converges to zero as n goes to infinity, for any i(T). Thus, all equilibria converge in the limit to an inflation rate of R-r. That's the long-run Fisher relation. In equation (5), the second term does not converge as n goes to infinity, i.e. as time runs backward to minus infinity. If i(T) < R - r, then inflation runs off to minus infinity as time runs backward, and if i(T) > R - r, then inflation runs off to infinity as time runs backward. This is typical of course - we have a difference equation that's stable if we solve it forward, and it's unstable if we solve it backward. Note that one equilibrium is i(t) = R - r in every period.
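Written out, with the nominal interest rate fixed at R in every period, the forward and backward solutions are roughly (a sketch consistent with the convergence properties just described):

```latex
\begin{align}
% Forward solution: the deviation from R - r dies out as n grows.
i(T+n) &= (R - r) + \Big(\tfrac{1}{1 + b/a}\Big)^{\!n}\big(i(T) - (R - r)\big), \tag{4}\\
% Backward solution: the deviation from R - r explodes as n grows.
i(T-n) &= (R - r) + \Big(1 + \tfrac{b}{a}\Big)^{\!n}\big(i(T) - (R - r)\big). \tag{5}
\end{align}
```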

What Narayana does is to take equation (5), and let T go to infinity, so he's only looking at the backward solution. As should be clear, I hope, that's not describing all the equilibria. By any conventional notion of what we mean by convergence and stability, the nominal interest rate peg is stable, and all the equilibria converge in the limit to R - r. The Fisher relation holds in the long run. As a practical implication of this, I've heard many people argue that, if the central bank holds its nominal interest rate at zero, then surely inflation will eventually rise to the 2% inflation target. Well, they can't be thinking about this model then. In any equilibrium with R = 0 forever and with inflation initially lower than some inflation target i*, inflation either falls to -r in the limit, or rises to -r in the limit. If -r < i*, the central bank will never achieve its target by staying at zero.

But, with a nominal interest rate pegged at some value forever, we have an indeterminacy problem - there exists a plethora of equilibria. This makes it hard to make statements about what happens when the interest rate goes up or down. For example, it's certainly correct that, if we set T=0 in equation (4), and think of time running from zero to infinity, solving the difference equation (3) forward, then given i(0), the inflation rate will be higher along the whole equilibrium path, if R rises. But i(0) is not predetermined - it's not an initial condition, it's endogenous and the first step in only one equilibrium path. Who is to say that economic agents don't treat R as a signal and jump to another equilibrium path? We might also be tempted to set i(0) = R*-r, then solve for the equilibrium path given R = R**, and think of that as describing the effects of an increase in the nominal interest rate from R* to R**, since an inflation rate of R* - r is the long run inflation rate when R = R*. Though that's suggestive, it's not precise, due to the indeterminacy problem.

So what to do about that? If we follow the usual NK approach, we would specify a Taylor rule
In equation (6), the Taylor principle is d > 1, and Mike Woodford says that gives us determinacy. But what he means by that is local determinacy - that is, determinacy in a neighborhood of the inflation target i*. But this model is simple enough that it's easy to look at global determinacy - or indeterminacy, in this case. From equations (3) and (6), we get
And the picture looks like this:
D is the difference equation from (7). Note that the kink in the difference equation is where the nominal interest rate hits the zero lower bound (for low inflation rates). A is the desired steady state where the central bank hits its inflation target, and B is the undesired steady state in which the inflation rate is - r and the nominal interest rate is zero. A is an equilibrium, but it's unstable - there are many equilibria that converge in the limit to B. We won't discuss equilibria in which inflation increases without bound, as the model needs to be fixed a bit so that those make sense, but that's possible in a slightly modified model. These are well-known results - the Taylor principle has "perils," i.e. it yields indeterminacy, and there are many equilibria in which the central bank falls short of its inflation target forever - not great.
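To see those perils without the picture, here is a minimal simulation sketch. It assumes the difference equation sketched above and a Taylor rule of the form R(t) = max[0, r + i* + d(i(t) - i*)], and the parameter values are purely illustrative:

```python
# Illustrative sketch only: iterate the assumed difference equation forward under a
# Taylor rule with a zero lower bound, starting just below the inflation target.
a, b, r = 1.0, 0.5, 0.02       # assumed risk aversion, Phillips curve slope, discount rate
i_star, d = 0.02, 1.5          # inflation target and Taylor-rule coefficient (d > 1)

def next_inflation(i_t):
    """From i(t) = (1 + b/a) i(t+1) - (b/a)(R(t) - r), solve for i(t+1)."""
    R_t = max(0.0, r + i_star + d * (i_t - i_star))   # Taylor rule with the ZLB kink
    return (i_t + (b / a) * (R_t - r)) / (1.0 + b / a)

i = i_star - 0.001             # start slightly below target: steady state A is unstable
for t in range(200):
    i = next_inflation(i)

# Inflation converges to -r (steady state B), with the nominal rate stuck at zero.
print(round(i, 4), round(-r, 4))
```

Starting slightly above the target instead, inflation runs off upward, which is the case set aside in the text.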

So, we might look for other policy rules that are better behaved. Here's one:
That rule implies a difference equation that looks like this:
The equilibrium is
The first part of the rule, (8), acts to offset effects of future inflation on current inflation, thus killing off equilibrium paths that will imply current inflation above target. (8) is only an off-equilibrium threat. The second part of the rule, (9), acts to bring inflation back to target next period. The equilibrium result is that inflation can be lower than the target in period 0, but the central bank hits its target in every future period. Further, note that the rule is neo-Fisherian, in more than one way. First, the central bank reacts to low inflation by increasing the nominal interest rate above its long-run level, temporarily. Second, the equilibrium satisfies the properties in the quote at the beginning of this post. After period 0, the nominal interest rate is constant forever, and inflation is constant. If the inflation target increases, then the nominal interest rate increases one-for-one in periods 1,2,3,... Narayana says those are Neo-Fisherian properties, and I stated above that I thought these were claims made of standard NK models under the Taylor principle. Seemingly, these are deemed by some people to be good properties of a monetary policy rule.

What Narayana seems to be getting at is that stickiness in expectations matters. In the example he gives in his note, fixed expectations in the infinite future can have very large effects today. You can see that in equation (5), for example, if we fix i(T) and solve backward. Indeed, it seems that conventional central banking wisdom comes from considering expectations as fixed, as is common practice in some undergraduate IS-LM/Phillips curve constructs. Take equation (1), fix all future variables, and an increase in the current nominal interest rate makes output and inflation go down. Indeed, sticky expectations is what George Evans and Bruce McGough have in mind. Here's their claim:
Following the Great Recession, many countries have experienced repeated periods with realized and expected inflation below target levels set by policymakers. Should policy respond to this by keeping interest rates near zero for a longer period or, in line with neo-Fisherian reasoning, by increasing the interest rate to the steady-state level corresponding to the target inflation rate? We have shown that neo-Fisherian policies, in which interest rates are set according to a peg, impart unavoidable instability. In contrast, a temporary peg at low interest rates, followed by later imposition of the Taylor rule around the target inflation rate, provides a natural return to normalcy, restoring inflation to its target and the economy to its steady state.
We can actually check this out in Narayana's model. Following Evans-McGough (E-M), we'll assume a form of adaptive expectations. Let e(t+1) denote the expected rate of inflation in period t+1 possessed by economic agents in period t. Assume that
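A natural form for (11), consistent with the two limiting cases discussed next, is the standard adaptive-expectations updating rule, so take this as my sketch of it:

```latex
% Sketch of (11): adaptive expectations, with weight h on currently observed inflation.
\begin{equation}
e(t+1) = h\, i(t) + (1 - h)\, e(t), \qquad 0 \le h \le 1. \tag{11}
\end{equation}
```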
So, h determines the degree of stickiness in inflation expectations - there is less expectational inertia as h increases. Using (1), (2), and (11) we can solve for current inflation and expected inflation for next period given the current nominal interest rate and expected inflation as of last period:
How this dynamic system behaves depends on parameters. To see some possibilities, consider extreme cases. If h = 0, this is the fixed-expectations case - expectations are so sticky that economic agents never learn. Letting e denote fixed inflation expectations,
That's the undergrad IS-LM/P-curve model. If you want inflation to go up, reduce the nominal interest rate. The other extreme is h = 1, which is essentially rear-view-mirror myopia - economic agents expect inflation next period to be what it was this period. This gives
That's extreme Neo-Fisherism. If you want inflation to go up by 1%, increase the nominal interest rate by 1%.

The question is, what happens for intermediate values of h? There are three cases: sticky expectations
medium-sticky expectations:
Not-so-sticky expectations:
The sticky expectations case gives the results that E-M are looking for. If the central banker follows a Taylor rule then, if inflation expectations are sufficiently low, the central banker goes to the zero lower bound, inflation increases, the Taylor rule eventually kicks in, and inflation converges in the limit to the inflation target i*. But, with medium-sticky or not-so-sticky expectations, from (12) increases in the nominal interest rate increase inflation. Further, if expectations are not-so-sticky there are Taylor rule perils. If d > 1, then there always exist equilibria converging to the zero lower bound with i = -r in the limit. In those equilibria the central bank undershoots its inflation target forever.

Under no circumstances is the standard Taylor rule with d > 1 well-behaved. At best, if inflation is initially below target, the inflation target is only achieved in the limit, and at worst the central banker gets stuck at the zero lower bound forever. But, there are other rules. Here's one:
Under this rule, the central banker hits the inflation target every period, provided initial inflation expectations are not too far below the inflation target. In the worst case, the central banker spends a finite number of periods at the zero lower bound when inflation expectations are too low. But, if inflation expectations are medium-sticky or not-so-sticky, the period at the zero lower bound exhibits inflation above the inflation target - i.e. a period at the zero lower bound can serve to bring inflation down.

The critical value for inflation expectations is
That is, under the rule (19), the central banker goes to the zero lower bound if inflation expectations fall below e*. Note that e* is decreasing in h and goes to minus infinity as h goes to 1. As expectations become less sticky, the zero lower bound kicks in only for extreme anticipated deflations.

In their paper, E-M say
As we have shown, the adaptive learning viewpoint argues forcefully against the neo-Fisherian view and in support of the standard view.
As I hope I've made clear, that's overstated. I take the "standard view" to be (i) staying at the zero lower bound will eventually make inflation go up; (ii) a standard Taylor rule is the best the central bank can do. In Narayana's model, under adaptive learning, (i) is only correct under some parameter configurations - actual inflation and expected inflation both have to be sufficiently sticky. Further, (ii) is never correct.

Tuesday, June 21, 2016

Attitude Adjustment

For this post, note the disclaimer at the top of the page. I'm just speaking for myself here, and my views do not necessarily reflect those of the St. Louis Fed, the Federal Reserve System, or the Board of Governors.

This is a reply to Narayana's recent Bloomberg post, which is a comment on this St. Louis Fed memo.

First, Narayana says that Jim Bullard thinks that
... the economy is so weak that a mere quarter-percentage-point increase would be enough for the foreseeable future.
I don't think the memo actually characterizes the economy as "weak" - it's not a pessimistic view of the world as, for example, Larry Summers or Robert Gordon might see it. As I noted in this post, one would not characterize the labor market as "weak." It's in fact tight, by conventional measures that we can trust. The view in the St. Louis Fed memo is that growth in real GDP, at 2% per annum, is likely to remain lower than the pre-financial crisis trend for the foreseeable future - i.e. "weaker" than we've been accustomed to. But "so weak" is language that is too pessimistic. And there remains the possibility that this will turn around.

Second, Narayana says:
Bullard’s rationale focuses on productivity...
That's not correct. The memo mentions low productivity growth, but a key part of the argument is in terms of low real rates of interest. According to conventional asset pricing and growth theory, low productivity growth leads to low consumption growth, which leads to low real rates of interest. But that effect alone does not seem to be strong enough to explain the fall in real interest rates in the world that has occurred for about the last 30 years or so. There is another effect that we could characterize as a liquidity premium effect, which could arise, for example, from a shortage of safe assets. I've studied that in some of my own work, for example in this paper with David Andolfatto. In recent history, the financial crisis, sovereign debt problems, and changes in banking regulation have contributed to the safe asset shortage, which increases the prices of safe assets, and lowers their yields. This problem is particularly acute for U.S. government debt. A key point is that a low return on government debt need not coexist with low returns on capital - see the work by Gomme, Ravikumar, and Rupert cited in the memo.

Third, Narayana thinks that:
Bullard uses a somewhat obscure measure of inflation developed by the Dallas Fed, rather than the Fed’s preferred measure, which is well below 2 percent and is expected to remain there for the next two to three years.
"Obscure," of course, is in the eye of the beholder. Let's look at some inflation measures:
The first measure is raw PCE inflation - that's the Fed's preferred measure, as specified here. The second is PCE inflation, after stripping out food and energy prices - that's a standard "core" measure. The third is the Dallas Fed's trimmed mean measure. Trimmed mean inflation doesn't take a stand on what prices are most volatile, in that it strips out the most volatile prices as determined by the data - it "trims" and then takes the mean. Then we calculate the rate of growth of the resulting index. One can of course argue about the wisdom of stripping volatile prices out of inflation measures - there are smart people who come down on different sides of this issue. One could, for example, make a case that core measures of inflation give us some notion of where raw PCE inflation is going. For example, in mid-2014, before oil prices fell dramatically, all three measures in the chart were about the same, i.e. about 1.7%. So, by Fisherian logic, if the real interest rate persists at its level in mid-2014, then an increase in the nominal interest rate of 50 basis points would make inflation about right - perhaps even above target. Personally, I think we don't use Fisherian logic enough.
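Just to be concrete about the "trim and then take the mean" step, here is a stylized sketch. It is unweighted, the component price changes and trim fractions are made up, and the actual Dallas Fed measure uses expenditure weights and its own trim points, so treat this purely as an illustration of the idea:

```python
# Stylized illustration of a trimmed-mean inflation measure: sort the component price
# changes, drop a share of each tail, and average what's left. All numbers are made up.

def trimmed_mean(price_changes, lower_trim=0.2, upper_trim=0.2):
    """Average of component price changes after dropping the most extreme observations."""
    x = sorted(price_changes)
    n = len(x)
    lo = int(n * lower_trim)          # observations dropped from the bottom tail
    hi = n - int(n * upper_trim)      # index where the dropped upper tail starts
    return sum(x[lo:hi]) / len(x[lo:hi])

# Hypothetical year-over-year component price changes (percent), with volatile items
# (think energy) sitting in the tails.
components = [-12.0, -3.0, 0.5, 1.2, 1.5, 1.7, 1.8, 2.0, 2.3, 9.0]

print(round(sum(components) / len(components), 2))   # raw mean, pulled around by the tails
print(round(trimmed_mean(components), 2))            # trimmed mean, closer to the central tendency
```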

Finally, Narayana says:
...the risk of excess inflation is relatively manageable.
That's a point made in the memo. The forecast reflects a view that Phillips curve effects are unimportant, and thus an excessive burst in inflation is not anticipated.

Here's a question for Narayana: Why, if a goal is to have "capacity to lower rates" in the event of "say, global financial instability," does he want rates reduced now?

Should We Think of Confidence as Exogenous?

I don't always agree with Roger Farmer, but I admire his independence. Roger doesn't like to be bound by the constraints of particular research groups, and typically won't accept the assumptions decreed by some New Keynesians, Monetarists, New Fisherites, or whoever. Farmer is a Farmerite. But, Roger falls into a habit common to others who call themselves Keynesians, which is to describe what he does in terms of some older paradigm. The first time I saw Roger do this was in 1994, when he gave this paper at a Carnegie-Rochester conference. The paper was about quantitative work on a class of models which were one step removed from neoclassical growth models. Such models, with unique equilibrium and exogenous stochastic productivity shocks, had been used extensively by real business cycle (RBC) proponents, but Roger's work (and that of other people, including Jess Benhabib) was aimed at studying indeterminacy and endogenous fluctuations. The indeterminacy in Roger's work came from increasing returns to scale in aggregate production. Sufficient increasing returns, he showed, permitted sunspot equilibria, and those equilibria could look much like the stochastic equilibria in RBC models. That seemed promising, and potentially opened up a role for economic policy aimed at dealing with indeterminacy. Old Keynesian economics says we should offset exogenous shocks with fiscal and monetary policy; baseline RBC theory says such stabilization policy is a waste of time. But with indeterminacy, policy is much more complicated - theoretically, we can construct policies that eliminate particular equilibria through off-equilibrium promises. In equilibrium, we wouldn't actually observe how the policymaker was doing his or her job. While promising, this approach introduced some challenges. How do we deal econometrically with indeterminacy? How would we know if real-world policymakers had actually figured out this problem and were solving it?

Though teaching and entertaining ourselves have a lot to recommend them, most economists are interested in persuading other people of the usefulness of their ideas. Though I haven't had a lot of experience with dissemination of ideas in other professions, I think economists are probably extreme in terms of how we work out ideas in public. Seminars and conferences can be combative. We have fun arguing with each other, to the point where the uninitiated find us scary. And all economists know it's an uphill battle to get people to understand what we're doing, let alone to have them think that we've come up with the greatest thing since indoor plumbing. There's an art to convincing people that there are elements of things they know in our ideas. That's intuition - making the idea self-evident, without making it seem trivial, and hence unpublishable (horrors).

So, what does this have to do with Roger, indeterminacy, and 1994? In the talk I heard at CMU in 1994, to make his paper understandable Roger used words like "demand and supply shocks," "labor supply and demand curves," and, particularly, "animal spirits." Given that language, one would think that the elements of the model came from the General Theory and textbook AS/AD models. But that was certainly not the case. The elements of the model were: (i) the neoclassical growth model, which most of the people in the room would have understood; (ii) increasing returns to scale which, again, was common currency for most in the room; (iii) sunspot equilibria, which were first studied in the late 1970s by Cass and Shell. This particular conference was in part about indeterminacy, so there were people there - Russ Cooper, Mike Woodford, Rao Aiyagari, for example - who understood the concept well, and could construct sunspot equilibria if you asked them to. But there were other people in the room - Alan Meltzer for example - who would have no clue. But having Roger tell the non-initiated that his paper was actually about AD/AS and animal spirits would not help anyone understand what he was doing. If Roger had just delivered his indeterminacy paper in unadulterated form, no undergraduate versed in IS-LM AS-AD would have drawn any connection, and if Keynes had been in the room he would not have seen any similarity between his work and Roger's ideas. But once Roger said "animal spirits," Keynes would have thought, "Oh, now I get it." He would have left the conference with the impression that Roger was just validating the General Theory in a more technical context. And he would have been seriously misled.

Roger was hardly the first macroeconomist who made use of language from the General Theory, Hicksian IS-LM, or post-Hicksian static AS-AD to provide intuition for ideas they thought might appeal to people schooled in those traditions. Peter Diamond did it in 1982 – “aggregate demand” was in the title of the paper in which Diamond constructed a model with search and increasing returns in the matching function. That model could give rise to multiple steady states – equilibria with high output and low "unemployment" could coexist with equilibria with low output and high unemployment. If you knew some combination of one-sided search models, the Phelps volume, or had seen work by Dale Mortensen and Chris Pissarides on two-sided search, you could get it. People like Peter Howitt, Ken Burdett, and John Kennan could get it, because they were Northwestern students and had been in contact with Mortensen. But an IS-LM Keynesian wouldn’t get it. For those people, using the words “aggregate demand” is a dog whistle – a message that everything is OK. “Don’t worry, we’re not doing anything that you would object to.”

New Keynesians took some of these lessons in presentation to heart, and went far beyond dog whistles. A New Keynesian model is basically a neoclassical growth model with exogenous aggregate shocks, and with sticky prices in the context of price-setting monopolistically-competitive firms - and with something we could think of as monetary policy. Again, Keynes would not have the foggiest idea what this was about, but in some incarnations (three-equation reduced form), this was dressed up in a language that had been taught to undergraduates for about thirty years prior to the advent of New Keynesian frameworks in the late 1990s – the language of “aggregate demand,” “IS curves,” and “Phillips curves.”

New Keynesian economics was no less radical than what Lucas, Prescott, and others were up to in the 1970s and 1980s, but Lucas and Prescott were very in-your-face about what they did. That’s honest, and refreshing, but getting in the faces of powerful people can get you in trouble. I think Mike Woodford learned from that. Better to calm the powerful people who might have a hard time understanding you – get them on your side, and give them the impression that they get it. If Woodford had been in-your-face like Lucas and Prescott, he would probably have the reputation that, perhaps surprisingly, Lucas and Prescott still enjoy among some Cambridge (MA) educated people of my generation. For some, Lucas and Prescott are put in a class with the low life of society – Ponzi schemers, used car salespeople, and other hucksters. Not by the Nobel committee, fortunately.

But, there’s a downside to being non-confrontational. Woodford’s work, and the work of people who extended it, and did quantitative work in that paradigm, is technical – no less technical than the work of Lucas, Sargent, Wallace, Prescott, etc., from which it came. Not everyone is going to be able to do it, and not everyone will get it if it is presented in all its glory. But the dog whistles, and other more explicit appeals to defunct paradigms - or ones that should be - make some people think that they get it. And when they think they get it, they think that the defunct paradigms are actually OK. And, if the person who thinks he or she gets it is making policy decisions, we’re all in trouble.

Why are we in trouble? Here’s an example. I could know a lot more math and econometrics than I do, and I’ve got plenty of limitations, as we all do. But I’ve had a lot of opportunities to learn firsthand from some of the best people in the profession – Rao Aiyagari, Mark Gertler, Art Goldberger, John Geweke, Chuck Wilson, Mike Rothschild, Bob Lucas, Ed Prescott, Larry Christiano, Narayana Kocherlakota, etc., etc. But I couldn’t get NK models when I first saw them. What’s this monetary model with no money in it? Where’s that Phillips curve come from? What the heck is that central bank doing without any assets and liabilities? I had to read Woodford’s book (and we know that Woodford isn’t stingy with words), listen to a lot of presentations, read some more papers, and work stuff out for myself, before I could come close to thinking I was getting it. So, trust me, if you hear the words “IS curve,” “Phillips curve,” “aggregate demand,” and “central bank,” and think you’ve got NK, you’re way off.

Way off? How? In this post, I wrote about a simplified NK model, and its implications. Some people seem to think that NK models with rational expectations tell us that, if a central bank increases its nominal interest rate target, then inflation will go down. But, in my post, I showed that there are several ways in which that is false. NK models in fact have Fisherian properties – or Neo-Fisherian properties, if you like. Fortunately, there are some people who agree with me, including John Cochrane and Rupert and Sustek. But, in spite of the fact that you can demonstrate how conventional macroeconomic models have Neo-Fisherian properties – analytically and quantitatively – and cite empirical evidence to back it up, the majority of people who work in the NK tradition don’t believe it, and neither do most policymakers. Part of this has to do with the fact that there indeed exists a model from which one could conclude that an increase in the central bank’s nominal interest rate target will decrease inflation. That model is a static IS-LM model with a Phillips curve and fixed (i.e. exogenous) inflation expectations. That’s the model that many (indeed likely the majority) of central bankers understand. And you can forgive them for thinking that’s roughly the same thing as a full-blown NK model, because that’s what they were told by the NK people. Now you can see the danger of non-confrontation – the policymakers with the power may not get it, though they are under the illusion that they do.

I know I’m taking a circuitous route to discussing Roger’s new paper, but we’re getting there. A few years ago, when Roger started thinking about these ideas and putting the ideas in blog posts, I wrote down a little model to help me understand what he was doing. Not wanting to let that effort go to waste, I expanded on it to the point where I could argue I was doing something new, and submitted it to a journal. AEJ-Macro rejected it (an unjust decision, as I’m sure all your rejections are too), but I managed to convince the JMCB to take it. [And now I'm recognizing some of my errors - note that "Keynesian" is in the title.] Here’s the idea. In his earlier work Roger had studied a type of macroeconomic indeterminacy that is very different from the multiple equilibrium models most of us are used to. In search and matching models we typically have to deal with situations in which two economic agents have to divide the surplus from exchange. There is abundant theory to bring to bear here - generalized Nash bargaining, Kalai bargaining, Rubinstein bargaining, etc. - but if we're to be honest with ourselves, we have to admit that we really don't know much about how people will divide the surplus in exchange. That idea has been exploited in monetary theory - for example by Hu, Kennan, and Wallace. Once we accept the idea that there is indeterminacy in how the surplus from exchange is split, we can think about artificial worlds with multiple equilibria. In my paper, I first showed a simple version of Roger's idea. Output is produced by workers and producers, and there is a population of people who can choose to be either, but not both. Each individual in this world chooses an occupation (worker or producer) and then goes through a matching process in which workers are matched with producers (there's a matching function). Some get matched, some do not, and when there is a match output gets produced and the worker and producer split the proceeds and consume. In equilibrium there are always some unmatched workers (unemployment) and unmatched producers (unfilled vacancies). There is a continuum of equilibria indexed by the wage in a match. A high wage is associated with a high unemployment rate. That's because, in equilibrium, everyone has to be indifferent between becoming a producer and becoming a worker. If the wage is high, an individual receives high surplus as a worker and low surplus as a producer. Therefore, it must be easier in equilibrium to find a match as a producer than as a worker - the unemployment rate must be high and the vacancy rate low.
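To see the logic in one line, here is a stylized version of the occupational-choice condition (my notation, not the paper's): let p_w and p_p be the matching probabilities for a worker and a producer, y the output of a match, and w the wage.

```latex
% Stylized indifference condition between occupations: expected surplus as a worker
% equals expected surplus as a producer (my sketch, ignoring entry costs and discounting).
\begin{equation}
p_w \, w \;=\; p_p \,\big(y - w\big).
\end{equation}
% A higher wage w raises the left side and lowers the right side, so equilibrium requires
% a lower p_w relative to p_p: matching is harder for workers (higher unemployment) and
% easier for producers (lower vacancy rate).
```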

What I did was to extend the idea by working this out in a monetary economy - for me, a Lagos-Wright economy where money was necessary to purchase goods. Then, I could think about monetary (and fiscal) policy, and how policymakers could achieve optimality. As in the indeterminacy literature, this required thinking about how policy rules could kill off bad equilibria.

On to Roger's new paper. He also wants to flesh out his ideas in a monetary economy, and there's a lot in there, including quantitative work. As in Roger's previous work, and my interpretation of it, there are multiple steady states, with high wage/high unemployment steady states. As it's a monetary economy (overlapping generations), there are also multiple dynamic equilibria, and Roger explores that. So, that all seems interesting. But I'm having trouble with two things. The first is Roger's "belief function." In Roger's words:
To close our model, we assume that equilibrium is selected by ‘animal spirits’ and we model that idea by introducing a belief function as in Farmer (1993, 2002, 2012b). We treat the belief function as a fundamental with the same methodological status as preferences and endowments and we study the implications of that assumption for the ability of monetary policy to influence inflation, output and unemployment.
So, a lot of people have done work on indeterminacy, and I have never run across a "belief function" that someone wants me to think is going to deliver beliefs exogenously. In Roger's model, the belief function is actually an equilibrium selection device, imposed by the modeler. The model tells us there are multiple equilibria, and that's all it has to say. "Beliefs," as we typically understand them, are in fact endogenous in Roger's model. And calling them exogenous does not accomplish anything, as far as I can tell, other than to get people confused, or cause them to raise objections, as I'm doing now.

Second complaint: This goes back to my lengthy discussion above. Roger's paper has "animal spirits" in the title, it cites the General Theory, and the words "aggregate demand" show up 7 times in the paper. Roger also sometimes comes up with passages like this:
Our model provides a microfoundation for the textbook Keynesian cross, in which the equilibrium level of output is determined by aggregate demand. Our labor market structure explains why firms are willing to produce any quantity of goods demanded, and our assumption that beliefs are fundamental determines aggregate demand.
And this:
Although our work is superficially similar to the IS-LM model and its modern New Keynesian variants; there are significant differences. By grounding the aggregate supply function in the theory of search and, more importantly, by dropping the Nash bargaining assumption, we arrive at a theory where preferences, technology and endowments are not sufficient to uniquely select an equilibrium.
In how many ways are these silly statements? This model is related to the Keynesian Cross and IS-LM as chickens are related to bears. The genesis of Roger's framework is Paul Samuelson's overlapping generations model, work on indeterminacy in monetary versions of that model (some of which you can find in the Minneapolis conference volume), and the search and matching literature. NK models are not "variants" of IS-LM models - they are entirely different beasts. It's not "aggregate demand" that is determining anything in Roger's model - there are multiple equilibria, and that's all.

Maybe you think this is all harmless, but it gets in the way of understanding, and I think Roger's goal is to be understood. Describe a bear as if it's a chicken, and you're going to confuse and mislead people. And they may make bad policy decisions as a result. Better to get in our faces with your ideas, and bear the consequences.

Friday, June 17, 2016

Dazed and Confused?

In October 2015, after a September payroll employment estimate of 142,000 new jobs, described as "grim" and "dismal" in the media, I wrote this blog post, arguing that we might well see less employment growth in the future. That conclusion came from simple labor force arithmetic. With the working-age population (ages 15-64) growing at a low rate of about 0.5%, if the labor force participation rate failed to increase and the unemployment rate stopped falling, payroll employment could grow at most by 60,000 per month, as I saw it last October.
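Here is that back-of-the-envelope arithmetic written out, with the payroll employment level rounded and the participation rate and unemployment rate assumed flat, as in the post:

```python
# Back-of-the-envelope labor force arithmetic: with flat participation and unemployment
# rates, employment can grow no faster than the working-age population.
payroll_employment = 142_000_000      # roughly the level of payroll employment at the time
pop_growth_annual = 0.005             # ~0.5% annual growth in the working-age population

max_monthly_job_gains = payroll_employment * pop_growth_annual / 12
print(round(max_monthly_job_gains / 1000))   # roughly 60 (thousand jobs per month)
```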

After the last employment report, which included an estimate of a monthly increase of 38,000 in payroll employment, some people were "shocked," apparently. Let's take a look at a wider array of labor market data, and see whether they should be panicking.

If you have been following employment reports in the United States for a while, you might wonder why the establishment survey numbers are always reported in terms of the monthly change in seasonally-adjusted employment. After all, we typically like to report inflation as year-over-year percentage changes in the price level, or real GDP as quarterly percentage changes in a number that has been converted to an annual rate. So, suppose we look at year-over-year percentage changes in payroll employment:
That wouldn't quite make your cat climb the curtains. Employment growth rates were above 2% for a short time in early 2015, and the growth rate has fallen, but we're back to growth rates close to what we saw in 2013-2014.

What's happening with unemployment and vacancies?
The unemployment rate is currently at 4.7%, only 0.3 percentage points higher than its most recent cyclical low of 4.4% in May 2007, and the vacancy rate (JOLTS job openings rate) has been no higher since JOLTS came into being more than 15 years ago. Thus, by the standard measure we would use in labor search models (ratio of vacancies to unemployment), this job market is very tight.

If we break down the unemployment rate by duration of unemployment, we get more information:
In this chart, I've taken the number of unemployed for a particular duration, and expressed this as percentage of the labor force. If you add the four quantities, you get the total unemployment rate. Here, it's useful again to compare the May 2016 numbers with May 2007. In May 2007, the unemployment rates for less than 5 weeks, 5 to 14 weeks, 15-26 weeks, and 27 weeks and over were 1.6%, 1.4%, 0.7%, and 0.7%, respectively. In May 2016 they were 1.4%, 1.4%, 0.7%, and 1.2%, respectively. So, middle-duration unemployment currently looks the same as in May 2007, but there are fewer very-short-term unemployed, and more long-term unemployed. But long-term unemployment continues to fall, with a significant decline in the last report.

Some people have looked at the low employment/population ratio and falling participation rate, and argued that this reflects a persistent inefficiency:
So, for example, if you thought that a large number of "involuntarily" unemployed had dropped out of the labor force and were only waiting for the right job openings to materialize, you might have thought that increases in labor force participation earlier this year were consistent with such a phenomenon. But the best description of the data now seems to be that labor force participation leveled off as of mid-2015. Given the behavior of unemployment and vacancies in the previous two charts, and the fact that labor force participation has not been cyclically sensitive historically, the drop in labor force participation appears to be a secular phenomenon, and it is highly unlikely that this process will reverse itself. Thus, it seems wrongheaded to argue that some persistent wage and price stickiness is responsible for the low employment/population ratio and low participation rate. There is something to explain in the last chart alright (for example, Canada and Great Britain, with similar demographics, have not experienced the same decline in labor force participation), and this may have some connection to policies in the fiscal realm, but it's hard to make the case that there is some alternative monetary policy that can make labor force participation go up.

Another key piece of labor market information comes from the CPS measures of flows among the three labor force states - employed (E), unemployed (U), and not in the labor force (N). We'll plot these as percentage rates, relative to the stock of people in the source state. For the E state:
The rate of transition from E to U is at close to its lowest value since 1990, but the transition rate from E to N is relatively high. This is consistent with the view that the decline in labor force participation is a long-run phenomenon. People are not leaving E, suffering a period of U, and then going to N - they're going directly from E to N. Next, the U state:
In this chart, the total rate at which people are exiting the U state is lower than average and, while before the last recession the exit rate to E was higher than the exit rate to N, these rates are currently about the same. This seems consistent with the fact that the unemployment pool currently has a mix that tilts more toward long-term unemployed. These people have a higher probability than the rest of the unemployed of going to state N rather than E. Finally, for state N:
Here, the rates at which people are leaving state N for both states E and U are relatively low. Thus, labor force participation has declined both because of a high inflow (from both E and U) and a low outflow. But, the low outflow rate to U from N (in fact, the lowest since 1990) also reflects the tight labor market, in that a person leaving state N is much more likely to end up in state E rather than U (though no more likely, apparently, than was the case historically, on average).

The last thing we should look at is productivity. In this context, a useful measure is the ratio of real GDP to payroll employment, which looks like this:
By this measure, average labor productivity took a large jump during 2009, but since early 2010 it has been roughly flat. There has been some discussion as to whether productivity growth measures are biased downward. Chad Syverson, for example, argues that there is no evidence of bias in measures of output per hour worked. So, if we take the productivity growth measures at face value, this is indeed something to be shocked and concerned about.

Conclusions

1. The recent month's slowdown in payroll employment growth should not be taken as a sign of an upcoming recession. The labor market, by conventional measures, is very tight.
2. The best forecast seems to be that, barring some unanticipated aggregate shock, labor force participation will stay where it is for the next year, while the unemployment rate could move lower, to the 4.2%-4.5% range, given that the fraction of long-term unemployed in the unemployment pool is still relatively high.
3. Given an annual growth rate of about 0.5% in the working age population, and supposing a drop of 0.2-0.5 percentage points in the unemployment rate over the next year, with half the reduction in unemployment involving transitions to employment, payroll employment can only grow at about 80,000 per month over the next year, assuming a stable labor force participation rate. Thus, if we add the striking Verizon workers (about 35,000) to the current increase in payroll employment, that's about what we'll be seeing for the next year. Don't be shocked and concerned. It is what it is.
4. Given recent productivity growth, and the prospects for employment growth, output growth is going to be low. I'll say 1.0%-2.0%. And that's if nothing extraordinary happens.
5. Though we can expect poor performance - low output and employment growth - relative to post-WWII time series for the United States, there is nothing currently in sight that represents an inefficiency that monetary policy could correct. That is, we should expect the labor market to remain tight, by conventional measures.

Tuesday, June 14, 2016

Dave Backus

Dave Backus has passed away. Dave was the Heinz Riehl Professor in the Stern School at NYU, and had previous positions at Queen's University, UBC, and the Minneapolis Fed. Dave leaves behind a solid body of work in macroeconomics, and many sad colleagues, students, friends, and family. Dave and I crossed paths in Kingston Ontario, Minneapolis, and on the editorial board of the JME. He was always straightforward, helpful, a dedicated scientist, and one of our honourary Canadians. Dave is interviewed here.

Thursday, April 14, 2016

Neo-Fisherian Denial

Accepting neo-Fisherism is a 12-stage program. The first stage is admitting you have a problem. The twelfth stage is helping others to admit that they have a problem too. Going from stage one to stage twelve may be a tough battle - many could temporarily fall off the wagon. But take it one day at a time. Most people, for example Larry Summers, are still at stage one. In this video, about two minutes in, after the jokes, Summers says that neo-Fisherism is most likely to be remembered as a confusion. So, if the problem is only confusion, I would like to help him out.

Neo-Fisherism says, basically: "Excuse me, but I think you have the sign wrong." Conventional central banking wisdom says that increasing interest rates reduces inflation. Neo-Fisherites say that increasing interest rates increases inflation. Further, it's not like this is some radical, novel theory. Indeed, a cornerstone of Neo-Fisherism is:

Neo-Fisherian Folk Theorem: Every mainstream macroeconomic monetary model has neo-Fisherian properties.

Let me illustrate that. A nice, simple, version of the standard New Keynesian (NK) model is the one in Narayana Kocherlakota's slides from this conference put on by the Becker Friedman Institute. I'll use my own notation. NK's version of the NK model is a reduced form, with two equations. The first comes from a pricing equation for a nominal bond - what's often called the "NK IS curve," or
Here, y is the output gap, the difference between actual output and what is efficient, pi is the inflation rate, R is the nominal interest rate, r is the subjective rate of time preference, and a is the coefficient of relative risk aversion. The second equation is
That's just a Phillips curve, with b >0 determined by the degree of price stickiness. In the underlying model, some fraction of firms is constrained to set prices to the average price from last period. Thus, there's no expectations term in the Phillips curve, as there's no forward-looking pricing. That makes the model easy to solve.
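Roughly, then, the two equations are as follows (a sketch of the standard reduced form in the notation just defined, not NK's slides themselves):

```latex
% Sketch of the reduced form: the "NK IS curve" and the static Phillips curve.
\begin{align}
y_t &= y_{t+1} - \tfrac{1}{a}\big(R_t - \pi_{t+1} - r\big), \tag{1}\\
\pi_t &= b\, y_t, \qquad a, b > 0. \tag{2}
\end{align}
```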

So, substitute for y in equation (1) using the Phillips curve equation, to get
So, you can see why people think this type of model is a foundation for conventional central banking ideas. If inflation expectations are "anchored," which I guess means exogenous, on the right-hand side of the equation, then an increase in the current nominal interest rate would have to imply that the current inflation rate goes down. Indeed, if the central banker experiments, by choosing the nominal interest rate each period at random, then he or she will observe a negative correlation between inflation and nominal interest rates, which would tend to confirm conventional beliefs.
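For reference, the substituted equation (3) is roughly (my sketch, based on the forms above):

```latex
% Sketch of (3): with \pi_{t+1} held fixed ("anchored") on the right-hand side, a higher
% R_t lowers current inflation \pi_t, which is the conventional reading.
\begin{equation}
\pi_t = \Big(1 + \tfrac{b}{a}\Big)\, \pi_{t+1} - \tfrac{b}{a}\big(R_t - r\big). \tag{3}
\end{equation}
```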

But consider the following. Suppose we look at the deterministic version of the model, and use (3) to solve for a first-order difference equation in the inflation rate:
Then, an equilibrium is a sequence of inflation rates solving (4), and we can solve for output from (2). As is typical of monetary models, there's no initial condition to tie things down, so there are potentially many equilibria. We can say, however, that in a steady state, from (1),
And then (2) gives
So, what "anchors" inflation and inflation expectations in the long run is the long run nominal interest rate. And then the Phillips curve determines output. That's the first Neo-Fisherian property of this standard model.

Next, from the difference equation, (4), if the nominal interest rate is a constant R forever, then there is a continuum of equilibria, indexed by the initial inflation rate, and they all converge to a unique steady state, which is given by (6) and (7). To see this, start with any initial pi, and solve (4) forward. So, we know that the long run is Fisherian. But what about the short run?

We'll consider the transition to a higher nominal interest rate. In the figure, the nominal interest rate is constant until period T, and then it increases permanently, forever. In the figure, D1 is the difference equation (4) with a lower nominal interest rate; D2 is (4) with a higher nominal interest rate. We'll suppose that everyone perfectly anticipates the interest rate increase from the beginning of time. Again, there are many equilibria, and they all ultimately converge to point B, but every equilibrium has the property that, given the initial condition, inflation will be higher at every date than it otherwise would have been without the increase in the nominal interest rate. A straightforward case is the one where the equilibrium is at A until period T, in which case the inflation rate increases monotonically, as shown, to a higher steady state inflation rate. Inflation never goes down in response to a permanent increase in the nominal interest rate. That's consistent with what John Cochrane finds in a related model.
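Here is a minimal simulation sketch of that experiment, using the recursion implied by the sketched equations above (parameter values are illustrative, and the equilibrium selected is the one that sits at A until period T):

```python
# Illustrative sketch: a perfectly anticipated, permanent increase in the nominal rate at
# date T, iterating the assumed recursion pi(t+1) = (pi(t) + (b/a)(R(t) - r)) / (1 + b/a).
a, b, r = 1.0, 0.5, 0.02
R_low, R_high, T = 0.02, 0.04, 10       # the nominal rate rises permanently at period T

pi = R_low - r                           # start at the old steady state (point A)
path = [pi]
for t in range(40):
    R = R_low if t < T else R_high
    pi = (pi + (b / a) * (R - r)) / (1.0 + b / a)
    path.append(pi)

# Inflation never falls, and it rises monotonically to the new steady state R_high - r.
print(all(later >= earlier for earlier, later in zip(path, path[1:])))
print(round(path[-1], 4), round(R_high - r, 4))
```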

So, that's the second Neo-Fisherian property, embedded in this NK model. The NK model actually doesn't conform to conventional central banking beliefs about how monetary policy works. What's going on? From equation (1), an increase in the current nominal interest rate will increase the real interest rate, everything else held constant. This implies that future consumption (output) must be higher than current consumption, for consumers to be happy with their consumption profile given the higher nominal interest rate. But, it turns out that this is achieved not through a reduction in current output and consumption, but through an increase in future output and consumption. This serves, through the Phillips curve mechanism, to increase future inflation relative to current inflation. Then, along the path to the new steady state, output and inflation increase. But, if you read Narayana's Bloomberg post from five days ago, you would have noted that he thinks that lowering the nominal interest rate raises inflation and output:
Monetary policy makers should be seeking to ease, not tighten. Instead of satisfying a phantom need to “normalize” rates, the Fed should do what’s needed to get employment and inflation back to normal.
Apparently he's thinking about some other model, as the one he constructed tells us the opposite.

For more depth on this, you should read this paper by Peter Rupert and Roman Sustek. Here's their abstract:
The monetary transmission mechanism in New-Keynesian models is put to scrutiny, focusing on the role of capital. We demonstrate that, contrary to a widely held view, the transmission mechanism does not operate through a real interest rate channel. Instead, as a first pass, inflation is determined by Fisherian principles, through current and expected future monetary policy shocks, while output is then pinned down by the New-Keynesian Phillips curve. The real rate largely only reflects consumption smoothing. In fact, declines in output and inflation are consistent with a decline, increase, or no change in the ex-ante real rate.

Conventional central banking wisdom is embedded in Taylor rules. For simplicity, suppose the central banker just cares about inflation, and follows the rule
Here pi* is the central bank's inflation target. Under the Taylor principle, d > 1, i.e. the central bank controls inflation by moving interest rates up when inflation goes up - and the nominal interest rate adjustment is more than one-for-one. It's well known from the work of Benhabib et al. that Taylor rules have "perils," and this model can illustrate that nicely. The difference equation determining the path for the inflation rate becomes
In the next figure, A is the intended steady state in which the central bank achieves its inflation target, and that is one equilibrium. But there are many equilibria for which the initial inflation rate is greater than -r and smaller than the inflation target, and all of these equilibria (like the one depicted) converge to the zero lower bound (ZLB), where the central banker gets stuck, with an inflation rate permanently lower than the target. Potentially, there could be equilibria with an initial inflation rate higher than the inflation target, which have the property that inflation increases forever. But in this model, that also implies that output increases without bound, which presumably is not feasible.

Rules with -1 < d < 1 all have the property that there are multiple equilibria, but these equilibria all converge to the inflation target - there's a unique steady state in those cases. Note that the Taylor rule central banker is Neo-Fisherian if d < 0, and that this can be OK in some sense. But aggressive neo-Fisherism, i.e. d < -1 - 2(a/b), is bad, as this implies that the inflation rate cycles forever without hitting the inflation target.

But if the central banker actually wants to consistently hit the inflation target, there are better things to do than (8). For example, consider this rule:
Plug that into (4), and you'll get
And so, (10) implies that
So, under that forward-looking Taylor rule, the central bank always hits its target, and in equilibrium the central bank is purely Fisherian. If it wants to increase its inflation target - and actual inflation - it just increases the nominal interest rate one-for-one with the increase in the inflation target. So, I've lost count now, but I think that's Neo-Fisherian property 3 [see the addendum below. There's a glitch that needs to be fixed in the rule (10) to account for the ZLB.]
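For what it's worth, here is one candidate form for (10) that delivers exactly that, given the difference equation sketched above; treat it as my reconstruction, and it ignores the ZLB glitch dealt with in the addendum:

```latex
% A candidate forward-looking rule delivering \pi_t = \pi^* every period, assuming
% \pi_t = (1 + b/a)\pi_{t+1} - (b/a)(R_t - r):
\begin{equation}
R_t = r + \pi_{t+1} + \tfrac{a}{b}\big(\pi_{t+1} - \pi^*\big). \tag{10}
\end{equation}
% Substituting into the difference equation gives \pi_t = \pi^* in every period, so in
% equilibrium \pi_{t+1} = \pi^* and R_t = r + \pi^*: constant, and moving one-for-one
% with the inflation target.
```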

The rule (10) specifies out-of-equilibrium behavior that kills all of the equilibria except the desired steady state. Why does this work? If the central banker sees incipient inflation in the future, he or she knows that this will tend to increase current output, increase current inflation, and increase future output, which will also increase current inflation. To nullify these effects, the central banker commits to offset this completely, if it happens, with an increase in the nominal interest rate. In equilibrium the central banker never has to carry out the threat. Maybe you think that's not plausible, but that's the nature of the model. NK adherents typically emphasize forward guidance, and that's not going to work without commitment to future actions.

Some people (e.g. Garcia-Schmidt and Woodford) have argued that Neo-Fisherian results go out the window in NK models under learning rules. As was shown above, these models are always fundamentally Fisherian in that any monetary policy rule has to somehow adhere to Fisherian logic on average - basically the long-run nominal interest rate is the inflation anchor. But there can also be learning rules that give very Fisherian results. For example, suppose that the economic agents in this world anticipated that next period's inflation is what they are seeing this period, that is
Plug that into equation (1), and we get
So, for this learning rule, inflation is determined period-by-period by the nominal interest rate - this is about as Fisherian as you can get.
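One way the algebra can go, under the additional assumption (mine) that agents also forecast next period's output gap statically, so that y in equation (1) is expected to be unchanged next period:

```latex
% With static forecasts \pi^e_{t+1} = \pi_t and (assumed) y^e_{t+1} = y_t, equation (1) becomes
\begin{equation}
y_t = y_t - \tfrac{1}{a}\big(R_t - \pi_t - r\big)
\quad\Longrightarrow\quad
\pi_t = R_t - r,
\end{equation}
% so current inflation is pinned down one-for-one by the current nominal interest rate.
```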

Thus, if conventional central bankers are basing their ideas on some model, it can't be a mainstream NK model, since increasing the nominal interest rate makes inflation go up in mainstream NK models. But don't get the idea that it's some other mainstream model they're thinking about. As the Neo-Fisherian Folk Theorem says, all the mainstream models have these properties, though some of the other implications of those models differ. For example, it's easy to show that one can get exactly the same dynamics from Alvarez, Lucas and Weber's segmented markets model. That's a model with limited participation in asset markets and a non-neutrality of money that comes from a distribution effect. Everyone in the model has fixed endowments forever, and they buy goods subject to cash-in-advance. The central bank intervenes through open market operations, but the people on the receiving end of the initial open market operation are only the financial market participants. The model was set up to deliver a liquidity effect, i.e. if money growth goes up, this increases the consumption of market participants (and decreases everyone else's consumption), and this will reduce the real interest rate. Thus, you might think (like the NK model) that this produces the result that, if the central bank increases the nominal interest rate, then inflation will go down.

But, the inflation dynamics in the Alvarez et al. segmented markets model are identical to what we worked out above. In fact, the model yields a difference equation that is identical to equation (4), though the coefficients have a different interpretation. Basically, what matters is the degree of market participation, not the degree of price stickiness - it's just a different friction. And all the other results are exactly the same. But the mechanism at work is different. The quantity theory of money holds in the segmented markets model, so what happens when the nominal interest goes up is that the central bank has to choose a path for open market operations to support that. This has to be a path for which the inflation rate is increasing over time, but at a decreasing rate. This will imply that consumption grows over time at a decreasing rate, so that the liquidity effect (a negative real interest rate effect) declines over time, and the Fisher effect increases.

So, once you get it, you can form your own Neo-Fisherian support group. Moving from denial to advocacy is important.

Addendum 1: Thanks to Narayana. This took some work, but this is a Taylor rule that assures that the central banker hits the inflation target period-by-period, implying that the nominal interest rate is constant in equilibrium, and will move one-for-one with the inflation target. If future inflation is anticipated to be sufficiently high, then the central banker follows the forward-looking rule (10):
This rule offsets incipient high inflation, and assures that the central bank hits the inflation target. But, low inflation is a problem for (16), as the ZLB gets in the way. So, if there is incipient low inflation, the central banker follows the rule:
And the critical value for future inflation is
How does (17) work? Any equilibrium has to satisfy (4), but (4) and (17) imply
So future inflation must be greater than the inflation target. But (17) says that the central banker chooses this rule only when future inflation is less than pi**, which is less than the inflation target. So this can't be an equilibrium. I like (17), as the central banker is Neo-Fisherian - he or she kills off low inflation with a high nominal interest rate.

Addendum 2: This is interesting too. Suppose the policy rule is
Then there is a critical value for the initial inflation rate,
such that, if the initial inflation rate is below this critical value, then the inflation rate goes to the inflation target in the next period and stays there. If the initial inflation rate is above the critical value, then the initial nominal interest rate is zero, and the inflation rate falls to the inflation target, and stays at the target forever. So, that's a Fisherian rule that has nice properties.

Addendum 3: Here's another one. Central bank follows rule (20) if current inflation is below the inflation target. Central bank follows rule (10) if current inflation is at or above the inflation target. With inflation below the target, this implies raising the nominal interest rate to get inflation to target. With inflation at or above the target, the central bank promises to raise the nominal interest rate in response to incipient inflation. At worst, this implies one period of inflation below target in equilibrium.