Monday, November 24, 2014


So you've all forgotten who Thomas Piketty is, right? Recall that he is the author of the 685-page tome, Capital in the Twenty-First Century, a bestseller of the summer of 2014, but perhaps also the least-read bestseller of the summer of 2014. I was determined, however, not to be like the mass of lazy readers who bought Capital. I have slogged on, through boredom, puzzlement, and occasional outrage, and am proud to say I have reached the end. Free at last! Hopefully you have indeed forgotten Piketty, and are not so sick of him you could scream. Perhaps you're even ready for a Piketty revival.

Capital is about the distribution of income and wealth. For the most part, this is a distillation of Piketty's published academic work, which includes the collection and analysis of a large quantity of historical data on income and wealth distribution in a number of countries of the world. Of course, data cannot speak for itself - we need theory to organize how we think about the data, and Piketty indeed has a theory, and uses that theory and the data to arrive at predictions about the future. He also comes to some policy conclusions.

Here's the theory. Piketty starts with the First Fundamental Law of Capitalism, otherwise known as the definition of capital's share in national income, or

(1) a = r(K/Y),

where a is the capital share, r is the real rate of return on capital, K is the capital stock, and Y is national income. Note that, when we calculate national income, we deduct depreciation of capital from GDP. That will prove to be important. The Second Fundamental Law of Capitalism states what has to be true in a steady state in which K/Y is constant:

(2) K/Y = s/g,

where s is the savings rate, and g is the growth rate of Y. So where did that come from? If k is the time derivative of K, and y is the time derivative of Y, then in a steady state in which K/Y is constant,

(3) k/K = y/Y.

Then, equation (3) gives

(4) K/Y = k/y = (k/Y)/(y/Y) = s/g,

or equation (2). It's important to note that, since Y is national income (i.e. output net of depreciation), the savings rate is also defined as net of depreciation.
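To make the steady-state arithmetic concrete, here is a quick numerical check of equation (2); the savings rate and growth rate are assumed purely for illustration. If net saving adds sY to the capital stock each period while national income grows at rate g, the ratio K/Y converges to s/g.

```python
# Hedged sketch: verify the Second Fundamental Law, K/Y -> s/g.
# The values s = 0.10 and g = 0.02 are illustrative assumptions.

def capital_output_ratio(s, g, K0=1.0, Y0=1.0, periods=2000):
    """Iterate K_{t+1} = K_t + s*Y_t while Y grows at rate g; return K/Y."""
    K, Y = K0, Y0
    for _ in range(periods):
        K += s * Y      # net saving (after depreciation) adds to capital
        Y *= 1.0 + g    # national income grows at rate g
    return K / Y

print(capital_output_ratio(s=0.10, g=0.02))  # converges to s/g = 5.0
```

Whatever the initial capital stock, the ratio settles at s/g, which is all equation (2) says.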

So, thus far, we don't have a theory, only two equations, (1) and (2). The first is a definition, and the second has to hold if the capital/output ratio is constant over time. Typically, in the types of growth models we write down, there are good reasons to look at the characteristics of steady states. That is, we feel a need to justify focusing on the steady state by arguing that the steady state is something the model will converge to in the long run. Of course, Piketty is shooting for a broad audience here, so he doesn't want to supply the details, for fear of scaring people away.

Proceeding, (1) and (2) imply

(5) a = r(s/g)

in the steady state. If we assume that the net savings rate s is constant, then if r/g rises, a must rise as well. This then constitutes a theory. Something is assumed constant, which implies that, if this happens, then that must happen. But what does this have to do with the distribution of income and wealth? Piketty argues as follows:

(i) Historically, r > g typically holds in the data.
(ii) There are good reasons to think that, in the 21st century, g will fall, and r/g will rise.
(iii) Capital income is more concentrated among high-income earners than is labor income.

Conclusion: Given (5) and (i)-(iii), we should expect a to rise in the 21st century, which will lead to an increasing concentration of income at the high end. But why should we care? Piketty argues that this will ultimately lead to social unrest and instability, as the poor become increasingly offended by the filthy rich, to the point where they just won't take it any more. Thus, like Marx, Piketty thinks that capitalism is inherently unstable. But, while Marx thought that capitalism would destroy itself, as a necessary step on the path to communist nirvana, Piketty thinks we should do something to save capitalism before it is too late. Rather than allow the capitalist Beast to destroy itself, we should just tax it into submission. Piketty favors marginal tax rates at the high end in excess of 80%, and a global tax on wealth.
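The comparative static in (5) that drives this conclusion is easy to put numbers on; the values of r, s, and g below are assumptions for illustration only.

```python
def capital_share(r, s, g):
    """Equation (5): capital's share of national income, a = r*(s/g)."""
    return r * (s / g)

# Holding r and the net savings rate s fixed, a fall in g raises capital's share:
print(capital_share(r=0.05, s=0.10, g=0.03))   # roughly 0.17
print(capital_share(r=0.05, s=0.10, g=0.015))  # roughly 0.33
```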

Capital is certainly provocative, and the r > g logic has intuitive appeal, but how do we square Piketty's ideas with the rest of our economic knowledge? One puzzling aspect of Piketty's analysis is his use of net savings rates, and national income instead of GDP. In the typical growth models economists are accustomed to working with, we work with gross quantities and rates - before depreciation. Per Krusell and Tony Smith do a nice job of straightening this out. A key issue is what happens in equation (2) as g goes to zero in the limit. Basically, given what we know about consumption/savings behavior, Piketty's argument that this leads to a large increase in a is questionable.

Further, there is nothing unusual about r > g in standard economic growth models, which have no implications at all for the distribution of income and wealth. For example, take a standard representative-agent neoclassical growth model with inelastic labor supply and a constant relative risk aversion utility function. Then, in a steady state,

(6) r = q + bg,

where q is the subjective discount rate and b is the coefficient of relative risk aversion. So, (6) implies that r > g whenever b >= 1; if b < 1, we still have r > g unless g exceeds q/(1-b). And, if g is small, then we must have r > g, since q > 0. But, of course, the type of model we are dealing with is a representative-agent construct. This could be a model with many identical agents, but markets are complete, and income and wealth would be uniformly distributed across the population in equilibrium. So, if we want to write down a model that can give us predictions about the income and wealth distribution, we are going to need heterogeneity. Further, we know that some types of heterogeneity won't work. For example, with idiosyncratic risk, under some conditions the model will essentially be identical to the representative agent model, given complete insurance markets. Thus, it's generally understood that, for standard dynamic growth models to have any hope of replicating the distribution of income and wealth that we observe, these models need to include sufficient heterogeneity and sufficient financial market frictions.
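A sketch of the steady-state condition (6) and the r > g comparison; the parameter values are chosen only for illustration.

```python
def steady_state_r(q, b, g):
    """Equation (6): steady-state real interest rate, r = q + b*g."""
    return q + b * g

# r > g is equivalent to q > (1 - b)*g, so with b >= 1 it holds for any g > 0,
# and for g near zero it holds as long as q > 0:
q, b = 0.02, 2.0
for g in (0.0, 0.01, 0.03):
    r = steady_state_r(q, b, g)
    print(g, r, r > g)
```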

Convenient summaries of incomplete markets models with heterogeneous agents are in this book chapter by Krusell and Smith, and this paper by Heathcote et al. In some configurations, these models can have difficulty in accounting for the very rich and very poor. This may have something to do with financial market participation. In practice, the very poor do not hold stocks, bonds, and mutual fund shares, or even have transactions accounts with banks in some circumstances. As well, access to high-variance, high-expected-return projects, for example entrepreneurial projects, is limited to very high-income individuals. So, to understand the dynamics of the wealth and income distributions, we need to understand the complexities of financial markets, and market participation. That's not what Piketty is up to in Capital.

How might this matter? Well, suppose, as Piketty suggests, that g declines during the coming century. Given our understanding of how economic growth works, this would have to come about due to a decline in the rate of technological innovation. But it appears that technological innovation is what produces extremely large incomes and extremely large pots of wealth. To see this, look at who the richest people in America are. For example, the top 20 includes the people who got rich on Microsoft, Facebook, Amazon, and Google. As Piketty points out, the top 1% is also well-represented by high-priced CEOs. If Piketty is right, these people are compensated in a way that is absurdly out of line with their marginal productivities. But, in a competitive world, companies that throw resources away on executive compensation would surely go out of business. Conclusion: The world is not perfectly competitive. Indeed, we have theories where technological innovation produces temporary monopoly profits, and we might imagine that CEOs are in good positions to skim off some of the rents. For these and other reasons, we might imagine that a lower rate of growth, and a lower level of innovation, might lead to less concentration in wealth at the upper end, not more.

Capital is certainly not a completely dispassionate work of science. Piketty seems quite willing to embrace ideas about what is "just" and what is not, and he can be dismissive of his fellow economists. He says:
...the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.
Not only are economists ignoring the important problems of the world, the American ones are in league with the top 1%:
Among the members of these upper income groups are US academic economists, many of whom believe that the economy of the United States is working fairly well and, in particular, that it rewards talent and merit accurately and precisely. This is a very comprehensible human reaction.
Sales of Capital have now put Piketty himself in the "upper income group." Economists are certainly easy targets, and it didn't hurt Piketty's sales to distance himself from these egghead ivory-tower types. This is a very comprehensible human reaction.

To think about the distribution of income and wealth, to address problems of misallocation and poverty, we need good economic models - ones that capture how people make choices about occupations, interhousehold allocation and bequests, labor supply, and innovation. Economists have certainly constructed good models that incorporate these things, but our knowledge is far from perfect - we need to know more. We need to carefully analyze the important incentive effects of taxation that Piketty either dismisses or sweeps under the rug. Indeed, Piketty would not be the first person who thought of the top 1% as possessing a pot of resources that could be freely redistributed with little or no long-term consequences. It would perhaps be preferable if economists concerned with income distribution were to focus more on poverty than the outrageous incomes and wealth of the top 1%. It is unlikely that pure transfers from rich to poor through the tax system will solve - or efficiently solve - problems of poverty, in the United States or elsewhere. My best guess is that our time would be well spent on thinking about human capital accumulation and education, and how public policy could be reoriented to promoting both in ways that have the highest payoff.

Thursday, November 13, 2014

Neo-Fisherians: Unite and Throw off MV=PY and Your Phillips Curves!

I've noticed a flurry of blog activity on "Neo-Fisherianism," and thought I would contribute my two cents' worth. Noah Smith drew my attention to the fact that Paul Krugman had something to say on the matter, so I looked at his post to see what that's about. The usual misrepresentations and unsubstantiated claims, apparently. Here is the last bit:
And at the highest level we have the neo-Fisherite claim that everything we thought we knew about monetary policy is backwards, that low interest rates actually lead to lower inflation, not higher. At least this stuff is being presented in an even-tempered way.

But it’s still very strange. Nick Rowe has been working very hard to untangle the logic of these arguments, basically trying to figure out how the rabbit got stuffed into the hat; the meta-point here is that all of the papers making such claims involve some odd assumptions that are snuck by readers in a non-transparent way.

And the question is, why? What motivation would you have for inventing complicated models to reject conventional wisdom about monetary policy? The right answer would be, if there is a major empirical puzzle. But you know, there isn’t. The neo-Fisherites are flailing about, trying to find some reason why the inflation they predicted hasn’t come to pass — but the only reason they find this predictive failure so puzzling is because they refuse to accept the simple answer that the Keynesians had it right all along.
Well, at least Krugman gives Neo-Fisherites credit for being even-tempered.

Let's start with the theory. Krugman's claim is that "all of the papers making such claims involve odd assumptions that are snuck by readers in a non-transparent way." Those sneaky guys, throwing up a smoke screen with their odd assumptions and such. Actually, I think Cochrane's blog post on this was pretty clear and helpful, for the uninitiated. I've written about this as well, for example in this piece from last year, and other posts you can find in my archive. More importantly, I have a sequence of published and unpublished papers on this issue, in particular this published paper, this working paper, and this other working paper. That's not all directed at the specific issue at hand - "everything we thought we knew about monetary policy is backwards" - but covers a broader range of issues relating to the financial crisis, conventional monetary policy, and unconventional monetary policy. If this is "flailing about," I'm not sure what we are supposed to be doing. I've taken the trouble to formalize some ideas with mathematics, and have laid out models with explicit assumptions that people can work through at their leisure. These papers have been presented on repeated occasions in seminars and conferences, and are being subjected to the refereeing and editorial process at academic journals, just as is the case for any type of research that we hope will be taken seriously. The work is certainly not out of the blue - it's part of an established research program in monetary and financial economics, which many people have contributed to over the last 40 years or so. Nothing particularly odd or sneaky going on, as far as I know. Indeed, some people who work in that program would be happy to be called Keynesians - the only Good Guys, in Krugman's book.

So, let me tell you about a new paper, with David Andolfatto, which I'm supposed to present at a Carnegie-Rochester-NYU conference later this week (for the short version, see the slides). This paper had two goals. First, we wanted to make some ideas more accessible to people, in a language they might better understand. Some of my work is exposited in terms of Lagos-Wright type models. From my point of view, these are very convenient vehicles. The goal is to be explicit about monetary and financial arrangements, so we can make precise statements about how the economy works, and what monetary policy might be able to do to enhance economic performance. It turns out that Lagos-Wright is a nice laboratory for doing that - it retains some desirable features of the older money/search models, while permitting banking and credit arrangements in convenient ways, and allowing us to say a lot more about policy.

Lagos-Wright models are simple, and once you're accustomed to them, as straightforward to understand as any basic macro model. Remember what it was like when you first saw a neoclassical growth model, or Woodford's cashless model. Pretty strange, right? But people certainly became quickly accustomed to those structures. Same here. You may think it's weird, but for a core group of monetary theorists, it's like brushing your teeth. But important ideas are not model-bound. We should be able to do our thinking in alternative structures. So, one goal of this paper is to explore the ideas in a cash-in-advancey world. This buys us some things, and we lose some other things, but the basic ideas are robust.

The model is structured so that it can produce a safe asset shortage, which I think is important for explaining some features of our recent zero-lower-bound experience in the United States. To do that, we have to take a broad view of how assets are used in the financial system. Part of what makes new monetarism different from old monetarism is its attention to the whole spectrum of assets, rather than some subset of "monetary" assets vs. non-monetary assets. We're interested in the role of assets in financial exchange, and as collateral in credit arrangements, for example. For safe assets to be in short supply, we have to specify some role for those safe assets in the financial system, other than as pure stores of wealth. In the model, that's done in a very simple way. There are some transactions that require currency, and some other transactions that can be executed with government bonds and credit. We abstract from banking arrangements, but the basic idea is to think of the bonds/credit transactions as being intermediated by banks.

We think of this model economy as operating in two possible regimes - constrained or unconstrained. The constrained regime features a shortage of safe assets, as the entire stock of government bonds is used in exchange, and households are borrowing up to their credit limits. To be in such a regime requires that the fiscal authority behave suboptimally - basically it's not issuing enough debt. If that is the case, then the regime will be constrained for sufficiently low nominal interest rates. This is because sufficient open market sales of government debt by the central bank will relax financial constraints. In a constrained regime, there is a liquidity premium on government debt, so the real interest rate is low. In an unconstrained regime the model behaves like a Lucas-Stokey cash-in-advance economy.

What's interesting is how the model behaves in a constrained regime. Lowering the nominal interest rate will result in lower consumption, lower output, and lower welfare, at least close to the zero lower bound. Why? Because an open market purchase of government bonds involves a tradeoff. There are two kinds of liquidity in this economy - currency and interest-bearing government debt. An open market purchase increases currency, but lowers the quantity of government debt in circulation. Close to the zero lower bound, this will lower welfare, on net. This implies that a financial shock which tightens financial constraints and lowers the real interest rate does not imply that the central bank should go to the zero lower bound. That's very different from what happens in New Keynesian (NK) models, where a similar shock implies that a zero lower bound policy is optimal.

As we learned from developments in macroeconomics in the 1970s, to evaluate policy properly, we need to understand the operating characteristics of the economy under particular fiscal and monetary policy rules. We shouldn't think in terms of actions - e.g. what happens if the nominal interest rate were to go up today - as today's economic behavior depends on the whole path of future policy under all contingencies. Our analysis is focused on monetary policy, but that doesn't mean that fiscal policy is not important for the analysis. Indeed, what we assume about the fiscal policy rule will be critical to the results. People who understand this issue well, I think, are those who worked on the fiscal theory of the price level, including Chris Sims, Eric Leeper, John Cochrane, and Mike Woodford. What we assume - in part because this fits conveniently into our analysis, and the issues we want to address - is that the fiscal authority acts to target the real value of the consolidated government debt (i.e. the value of the liabilities of the central bank and fiscal authority). Otherwise, it reacts passively to actions by the monetary authority. Thus, the fiscal authority determines the real value of the consolidated government debt, and the central bank determines the composition of that debt.

Like Woodford, we want to think about monetary policy with the nominal interest rate as the instrument. We can think about exogenous nominal interest rates, random nominal interest rates, or nominal interest rates defined by feedback rules from the state of the economy. In the model, though, how a particular path for the nominal interest rate is achieved depends on the tools available to the central bank, and on how the fiscal authority responds to monetary policy. In our model, the tool is open market operations - swaps of money for short-term government debt. To see how this works in conjunction with fiscal policy, consider what happens in a constrained equilibrium at the zero lower bound. In such an equilibrium, c = V+K, where c is consumption, V is the real value of the consolidated government debt, and K is a credit limit. The equilibrium allocation is inefficient, and there would be a welfare gain if the fiscal authority increased V, but we assume it doesn't. Further, the inflation rate is i = B[u'(V+K)/A] - 1, where B is the discount factor, u'(V+K) is the marginal utility of consumption, and A is the constant marginal disutility of supplying labor. Then, u'(V+K)/A is an inefficiency wedge, which is equal to 1 when the equilibrium is unconstrained at the zero lower bound. The real interest rate is A/[Bu'(V+K)] - 1. Thus, note that there need not be deflation at the zero lower bound - the lower is the quantity of safe assets (effectively, the quantity V+K), the higher is the inflation rate, and the lower is the real interest rate. This feature of the model can explain why, in the Japanese experience and in recent U.S. history, an economy can be at the zero lower bound for a long time without necessarily experiencing outright deflation.
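To illustrate these formulas, here is a sketch of the constrained regime at the zero lower bound. The CRRA form u'(c) = c^(-sigma) and all parameter values are my assumptions, not the paper's specification.

```python
def constrained_regime(V, K, B=0.96, A=1.0, sigma=2.0):
    """Inflation i = B*u'(V+K)/A - 1 and real rate A/[B*u'(V+K)] - 1
    in a constrained zero-lower-bound equilibrium, where c = V + K and
    u'(c) = c**(-sigma) is an assumed CRRA marginal utility."""
    mu = (V + K) ** (-sigma)   # marginal utility of consumption
    inflation = B * mu / A - 1.0
    real_rate = A / (B * mu) - 1.0
    # Note (1 + inflation)*(1 + real_rate) = 1: the nominal rate is zero.
    return inflation, real_rate

# A smaller stock of safe assets V means higher inflation and a lower real rate:
print(constrained_regime(V=0.5, K=0.5))
print(constrained_regime(V=0.3, K=0.5))
```

The second line, with fewer safe assets, shows the combination the post emphasizes: a liquidity premium pushes the real rate down while inflation rises, so a long stay at the zero lower bound need not mean deflation.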

Further, in this zero lower bound liquidity trap, inflation is supported by fiscal policy actions. The zero nominal interest rate, targeted by the central bank, is achieved in equilibrium by the fiscal authority increasing the total stock of government debt at the rate i, with the central bank performing the appropriate open market operations to get to the zero lower bound. There is nothing odd about this, in terms of reality, or relative to any monetary model we are accustomed to thinking about. No central bank can actually "create money out of thin air" to create inflation. Governments issue debt denominated in nominal terms, and central banks purchase that debt with newly-issued money. In order to generate a sustained inflation, the central bank must have a cooperative government that issues nominal debt at a sufficiently high rate, so that the central bank can issue money at a high rate. In some standard monetary models we like to think about, money growth and inflation are produced through transfers to the private sector. That's plainly fiscal policy, driven by monetary policy.

In this model, we work out what optimal monetary policy is, but we were curious to see how this model economy performs under conventional Taylor rules. We know something about the "Perils of Taylor Rules," from a paper by Benhabib et al., and we wanted to have something to say about this in our context. Think of a central banker that follows a rule

R = ai + (1-a)i* + x,

where R is the nominal interest rate, i is the inflation rate, a > 0 is a parameter, i* is the central banker's inflation target, and x is an adjustment that appears in the rule to account for the real interest rate. In many models, the real interest rate is a constant in the long run, so if we set x equal to that constant, then the long-run Fisher relation, R = i + x, implies there is a long-run equilibrium in which i=i*. The Taylor rule peril that Benhabib et al. point out, is that, if a > 1 (the Taylor principle), then the zero lower bound is another long run equilibrium, and there are many dynamic equilibria that converge to it. Basically, the zero lower bound is a trap. It's not a "deflationary trap," in an Old Keynesian sense, but a policy trap. At the zero lower bound, the central banker wants to aggressively fight inflation by lowering the nominal interest rate, but of course can't do it. He or she is stuck. In our model, there's potentially another peril, which is that the long-run real interest rate is endogenous if there is a safe asset shortage. If x fails to account for this, the central banker will err.
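The two steady states, and the trap under the Taylor principle, show up in a bare-bones simulation. This is my own sketch, not the model in the paper: inflation evolves under perfect foresight via the Fisher relation R = i' + x (where i' is next period's inflation), the rule is truncated at zero, and all numbers are illustrative.

```python
def simulate_taylor(a, i_star=0.02, x=0.01, i0=0.015, T=500):
    """Iterate the truncated Taylor rule R = max(0, a*i + (1-a)*i_star + x)
    together with the Fisher relation i_next = R - x; return long-run inflation."""
    i = i0
    for _ in range(T):
        R = max(0.0, a * i + (1.0 - a) * i_star + x)
        i = R - x
    return i

# Starting with inflation a bit below target:
print(simulate_taylor(a=0.5))  # non-aggressive rule: converges to i* = 0.02
print(simulate_taylor(a=1.5))  # Taylor principle: falls to the ZLB steady state, i = -x
```

With a > 1 the intended steady state is unstable from below: the rule eventually prescribes a negative nominal rate, the truncation binds, and inflation gets stuck at -x. That is the policy trap.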

In the unconstrained - i.e. conventional - regime in the model, we get the flavor of the results of Benhabib et al. If a < 1 (a non-aggressive Taylor rule), then there can be multiple dynamic equilibria, but they all converge in the limit to the unique steady state with i = i*: the central banker achieves the inflation target in the long run. However, if a > 1, there are two steady states - the intended one, and the zero lower bound. Further, there can be multiple dynamic equilibria that converge to the zero lower bound (in which i < i* and there is deflation) in finite time. In a constrained regime, if the central banker fails to account for endogeneity in the real interest rate, the Taylor rule is particularly ill-behaved - the central banker will essentially never achieve his or her inflation target. But, if the central banker properly accounts for endogeneity in the real interest rate, the properties of the equilibria are similar to the unconstrained case, except that inflation is higher in the zero-lower-bound steady state. How can the central banker avoid getting stuck at the zero lower bound? He or she has to change his or her policy rule. If the nominal interest rate is currently zero, there is no alternative: if what is desired is a higher inflation rate, the central banker has to raise the nominal interest rate. But how does that raise inflation? Simple. This induces the fiscal authority to raise the rate of growth in total nominal consolidated government liabilities. But what if the fiscal authority refused to do that? Then higher inflation can't happen, and the higher nominal interest rate is not feasible. In the paper, we get a set of results for a model which does not have a short-term liquidity effect. Presumably such a liquidity effect is the motivation behind a typical Taylor rule.
A liquidity effect associates downward shocks to the nominal interest rate with increases in the inflation rate, so if the Taylor rule is about making short-run corrections to achieve an inflation rate target, then maybe increasing the nominal interest rate when the inflation rate is above target will work. So, we modify the model to include a segmented-markets liquidity effect. Typical segmented markets models - for example, this one by Alvarez and Atkeson - are based on the redistributive effects of cash injections. In our model, we allow a fraction of the population - traders - to participate in financial markets, in that they can use credit and carry out exchange using government bonds (again, think of this exchange as being intermediated by financial intermediaries). The rest of the population are non-traders, who live in a cash-only world.

In this model, if a central banker carries out random policy experiments - moving the nominal interest rate around in a random fashion - he or she will discover the liquidity effect. That is, when the nominal interest rate goes up, inflation goes down. But if this central banker wants to increase the inflation rate permanently, the way to accomplish that is by increasing the nominal interest rate permanently. Perhaps surprisingly, the response of inflation to a one time jump (perfectly anticipated) in the nominal interest rate, looks like the figure in John Cochrane's post that he labels "pure neo-Fisherian view." It's surprising because the model is not pure neo-Fisherian - it's got a liquidity effect. Indeed, the liquidity effect is what gives the slow adjustment of the inflation rate.

The segmented markets model we analyze has the same Taylor rule perils as our baseline model; for example, the Taylor principle produces a zero-lower-bound steady state which is the terminal point for a continuum of dynamic equilibria. An interesting feature of this model is that the downward adjustment of inflation along one of these dynamic paths continues after the nominal interest rate reaches zero (because of the liquidity effect). This gives us another force which can potentially give us positive inflation in a liquidity trap.

We think it is important that central bankers understand these forces. The important takeaways are: (i) The zero lower bound is a policy trap for a Taylor rule central banker. If the central banker thinks that fighting low inflation aggressively means staying at the zero lower bound, that's incorrect. Staying at the zero lower bound dooms the central banker to permanently undershooting his or her inflation target. (ii) If the nominal interest rate is zero, and inflation is low, the only way to increase inflation permanently is to increase the nominal interest rate permanently.

Finally, let's go back to the quote from Krugman's post that I started with. I'll repeat the last paragraph from the quote so you don't have to scroll back:
And the question is, why? What motivation would you have for inventing complicated models to reject conventional wisdom about monetary policy? The right answer would be, if there is a major empirical puzzle. But you know, there isn’t. The neo-Fisherites are flailing about, trying to find some reason why the inflation they predicted hasn’t come to pass — but the only reason they find this predictive failure so puzzling is because they refuse to accept the simple answer that the Keynesians had it right all along.
Why? Well, why not? What's the puzzle? Well, central banks in the world with their "conventional wisdom" seem to have a hard time making inflation go up. Seems they might be doing something wrong. So, it might be useful to give them some advice about what that is, instead of sitting in a corner telling them the conventional wisdom is right.

Tuesday, November 11, 2014

Monetary Policy Normalization

Here's a common view of how the Fed implemented monetary policy prior to the financial crisis.
The chart shows a typical framework, pre-financial crisis, that was used in hitting a particular fed funds rate target, depicted by R* in the figure. The downward-sloping curve is a demand curve for reserves, which essentially captures a short-run liquidity effect. The larger the quantity of reserves in the system overnight, the lower the fed funds rate. But, a key problem for policy implementation was that this demand curve was highly unstable, shifting due to unanticipated shocks to the financial system, and with anticipated shocks related to the day of the week, month of the year, etc. Operationally, the way the Fed approached the problem of hitting the target R* was, effectively, to estimate the demand curve each day, and then intervene so as to assure that Q* reserves were in the market, implying that the market would clear at R*, if the demand curve estimate were correct. Note in the figure that the fed funds rate was bounded above by the discount rate (no depository institution, or DI, would borrow from another institution at more than the Fed was offering) and by the interest rate on reserves, IOER (no DI would lend to another DI at less than what the Fed was offering). Prior to the financial crisis, the IOER was zero.
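The daily operating procedure described above - estimate the demand curve for reserves, then supply the quantity that clears the market at the target - can be sketched as follows. The log-linear demand curve and all numbers are assumptions for illustration only.

```python
import math

def reserves_supply_for_target(R_target, c, b):
    """Invert an assumed demand curve R = c - b*ln(Q) to find the reserve
    supply Q* at which the market clears at the target funds rate R_target."""
    return math.exp((c - R_target) / b)

# If today's estimated demand is R = 0.08 - 0.01*ln(Q), hitting a 3% target requires:
Q_star = reserves_supply_for_target(R_target=0.03, c=0.08, b=0.01)
print(Q_star)  # about 148 (exp(5)), in whatever units reserves are measured
```

If the estimate of the demand curve is off - the instability emphasized in the text - the realized funds rate misses R*, which is exactly the implementation problem described.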

There were certainly other approaches the Fed could have taken to implementing policy during the pre-crisis period. For example, since the Fed was typically intervening by varying its quantity of lending in the overnight repo market, an alternative approach might have been to follow a fixed-rate, full-allotment procedure, i.e. fix the repo rate, and lend whatever quantity the market wanted to take at that rate. That would work very nicely if the overnight repo rate and the fed funds rate were identical, but they are not. Typically, overnight interest rates move together, but there can be a substantial amount of variability in the margin between the repo rate and the fed funds rate, for various reasons. For example, fed funds lending is unsecured while repos are secured; lending one day and settlement the next day happen at different times in the two markets, etc. So, given that the directive from the FOMC was in terms of a fed funds target, pegging the repo rate need not be the best way to carry out the directive. One could make an argument that targeting a secured overnight rate makes more sense than targeting the fed funds rate, but that would have required revamping the whole structure of FOMC decision making.

An important point to emphasize is that, contrary to central banking myth, the mechanics of the fed funds market pre-crisis had little to do with reserve requirements. The myth is that fed funds market activity was driven by the need of DIs to meet their reserve requirements - if reserves were too low, a DI would borrow on the fed funds market, and if reserves were too high it would lend. But the same would hold true if there were no reserve requirements, as reserve balances must be nonnegative overnight. Indeed, in Canada, where reserve requirements were eliminated long ago, the Bank of Canada operates within a channel system. The overnight target interest rate is bounded by the Bank of Canada's lending rate (on the high side) and by the Canadian counterpart of the IOER (on the low side). Overnight reserves in Canada are typically effectively zero, but we could characterize Bank of Canada intervention in roughly the same manner as in the figure above.

Before the Fed was permitted to pay interest on reserves, people speculated (see, for example, this paper by Marvin Goodfriend) that the IOER would establish a floor for the fed funds rate. Indeed, as Goodfriend pointed out, with sufficient reserves in the system, the IOER should determine the fed funds rate. Essentially, this is what happened in Canada from spring 2009 until mid-2010, when the Bank of Canada operated with a floor system and an overnight rate (equal to the interest rate on reserves) of 0.25%. But since the Fed began paying interest on reserves in late 2008, the effective fed funds rate has traded below the IOER, which has been at 0.25%. The margin between the IOER and the fed funds rate has been substantial - sometimes as much as 20 basis points.

In a previous post I discussed some details of Fed proposals to modify its approach to financial market intervention. Since then, the FOMC has made its plans for "normalization" explicit in this press release from September 17. Normalization refers to the period after liftoff - the point in time at which the FOMC decides to increase its policy rate. Basically, the FOMC will continue to articulate policy in terms of the fed funds rate, and proposes to control the fed funds rate through two means: the IOER, and an overnight reverse repurchase agreement (ON RRP) facility. As explained in my previous post, ON RRPs are overnight loans to the Fed from an expanded list of counterparties. ON RRPs do not "drain" reserves, as they are effectively reserves by another name. Reserve accounts are held by only some financial institutions in the United States, and not all institutions with reserve accounts can earn interest on those balances - the GSEs (government-sponsored enterprises) cannot. Thus, ON RRPs expand the market for Fed liabilities, and allow more institutions to receive interest on overnight balances held with the Fed.
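To see the sense in which an ON RRP is just reserves by another name, consider a toy balance-sheet calculation (the dollar figures are illustrative): an ON RRP transaction swaps one Fed liability for another, leaving total Fed liabilities unchanged.

```python
# Sketch: an ON RRP transaction swaps one Fed liability (reserves) for
# another (ON RRP), leaving total Fed liabilities unchanged - nothing is
# "drained." Dollar figures ($ billions) are illustrative.

fed_liabilities = {"currency": 1300, "reserves": 3000, "on_rrp": 200}

def do_on_rrp(liabs, amount):
    """A counterparty lends `amount` to the Fed overnight via ON RRP;
    payment settles out of reserve balances."""
    liabs = dict(liabs)
    liabs["reserves"] -= amount
    liabs["on_rrp"] += amount
    return liabs

after = do_on_rrp(fed_liabilities, 100)
print(sum(fed_liabilities.values()), sum(after.values()))  # 4500 4500
```

The composition of Fed liabilities changes, but the total does not - which is why the question is who holds the liabilities, not how many there are.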

The Fed's plans for modifying policy implementation will actually matter little for how monetary policy works. Whatever the means, the basic idea is to use monetary policy to influence short-term nominal interest rates by targeting an overnight interest rate, just as was the case before the financial crisis. The key issues in implementation concern how the ON RRP facility should be managed. What should the ON RRP rate be? Should there be quantity caps on ON RRPs? If there are caps, how large should they be? A smaller difference between the IOER and the ON RRP rate potentially tightens the bounds on the fed funds rate. However, the majority of fed funds market activity currently consists of activity to (imperfectly) arbitrage the difference between the interest rate that GSEs can earn on reserve balances (zero) and the IOER. That arbitrage activity should disappear if the IOER/ON RRP rate margin decreases sufficiently, in which case the effective fed funds rate could actually increase above the IOER. As well, the smaller the IOER/ON RRP rate spread, the larger the quantity of ON RRPs relative to reserves on the liabilities side of the Fed's balance sheet. This distributes Fed liabilities differently in the financial system, in ways that we perhaps do not understand well, and with unclear implications.
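The GSE arbitrage is easy to quantify with a back-of-the-envelope calculation. The rates and balance below are hypothetical, but in the right ballpark given the 25 basis point IOER: a GSE earns zero on its reserve balance, so it will lend fed funds at any positive rate, and a bank can borrow from the GSE and park the funds at the Fed to earn the IOER.

```python
# Sketch of the GSE/IOER arbitrage that keeps the effective fed funds rate
# below the IOER. Rates and the balance are hypothetical illustrations.

ioer = 0.25 / 100          # 25 bp: what the Fed pays banks on reserves
fed_funds = 0.09 / 100     # an effective fed funds rate below the IOER
balance = 1_000_000_000    # $1 billion lent overnight by a GSE (hypothetical)

# One night's profit for the borrowing bank, on a 360-day money-market year:
spread = ioer - fed_funds
bank_profit = balance * spread / 360
print(round(bank_profit, 2))   # 4444.44: about 16 bp annualized on $1 billion
```

Because the arbitrage is imperfect (balance sheet costs, FDIC fees, counterparty limits), the spread is not competed away to zero, which is why the fed funds rate sits persistently below the IOER.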

How long after liftoff will the Fed's altered financial market intervention persist? This depends on how long it takes for the Fed's balance sheet to "normalize." In this case, normality means a balance sheet for which currency accounts for most of the Fed's liabilities, as was the case prior to the financial crisis. Currently, total Fed liabilities are about $4.5 trillion, of which $1.3 trillion is currency and $3.2 trillion consists of other liabilities, including reserves and ON RRPs. Currency as a fraction of total Fed liabilities can increase through two means: (i) assets disappear from the Fed's balance sheet; (ii) the demand for currency goes up. Assets can disappear from the Fed's balance sheet because they mature or are sold. The FOMC's Policy Normalization Principles and Plans state no intention of selling assets - either Treasury securities or mortgage-backed securities - and specify that reinvestment will continue until after liftoff, i.e. the size of the Fed's asset portfolio is currently being held constant in nominal terms. Thus, for now, reductions in the quantity of non-currency liabilities will happen only as the stock of currency grows.

So, how long will it take until the Fed is in a position to conduct policy exactly as before the financial crisis? If the Fed continued its reinvestment program, and if currency grew at 5% per year, it would take more than 25 years for Fed liabilities to consist of currency alone. An end to reinvestment could bring normalization in 10 years or less, barring circumstances in which the Fed chooses to embark on new quantitative easing programs. In any case, under the FOMC's currently announced plans, the current regime - a type of floor-on-the-floor monetary regime - should be with us for some time.
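The 25-year figure is easy to check. With reinvestment holding total liabilities fixed at roughly $4.5 trillion and currency (about $1.3 trillion) assumed to grow 5% per year, the question is just how long compound growth takes to close the gap:

```python
# Back-of-the-envelope check of the normalization timeline: solve
# currency * (1 + g)^t = total for t. Figures are those quoted in the text;
# the 5% currency growth rate is an assumption.

import math

total = 4.5      # $ trillions, held roughly constant under reinvestment
currency = 1.3   # $ trillions
growth = 0.05    # assumed annual currency growth rate

years = math.log(total / currency) / math.log(1 + growth)
print(round(years, 1))   # 25.5: consistent with "more than 25 years"
```

Faster currency growth or asset runoff shortens the horizon, which is where the 10-years-or-less figure for an end to reinvestment comes from.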