Saturday, July 27, 2013

China's Circularity Problem

In the context of forward-looking monetary policy, the circularity problem is the difficulty central banks face when they try to use market signals to guide policy. The general problem is that market signals embed expectations of future policy in addition to expectations of future shocks, so the signals can fool the central bank into pursuing inappropriate policy. For example, if the private sector believes there will be a large shock to consumer demand in the future, but also believes that the central bank will fully offset the shock, then market expectations of inflation may not change. If the central bank looks at those inflation expectations and concludes that there is no threat to aggregate demand, it may end up not offsetting the shock, and markets fall in response.

The most recent example of this in U.S. financial markets is the Fed taper. Before the taper talk began, the general expectation was that quantitative easing would continue into the indefinite future and that there would be no premature tightening. As a result, the stock market seemed very resilient, because expectations of strong growth were conditional on continued Fed easing. The Fed misinterpreted these expectations as independent of QE and decided to tighten.

However, fiscal authorities can face the same circularity problem. If an economy is highly dependent on government spending, then current real economic conditions may be strong only conditional on expected future fiscal easing. If the fiscal authority reads those strong current conditions as a justification for austerity, this too may cause a fall in growth, in the same way that a premature monetary contraction can slow growth.

The Chinese government is currently facing this fiscal policy circularity problem. In a June 18th interview, the IMF mission chief for China, Markus Rodlauer, notes that high frequency data such as retail sales and investment growth all point to moderate growth. Even though the PMI may have faltered a little bit, it is well within historical ranges. Rodlauer concludes from this that there is really no need for stimulus.

While he may be right, it is also possible that the Chinese government could fall into a fiscal circularity problem. Because Chinese fiscal policy can reallocate a large amount of resources, much of business is conducted on the basis of expectations of future government policy. Under these conditions, concluding that economic conditions are strong on the basis of high frequency data may cause the fiscal authority to be too sluggish in responding to a slowdown in growth.

Wednesday, July 24, 2013

Casting and Melting with Paired Data

Today's post is not about economics; rather, it's a note from an R programming struggle that may be helpful for fellow undergraduate researchers.

I'm often testing forecasting models, and this ends up creating a bunch of "forecasted" variables that are paired with the "actual" values. R has fabulous faceting capabilities, and I have often wanted to reshape the data so that the category of forecasted variable serves as an identifier, with two columns listing the forecasted and actual values. In other words, suppose the data start from something like


       aAct      aPred       bAct      bPred id
1 1.2076384 -0.6735547  1.4994464 -1.0691975  1
2 0.4999706 -0.7188215 -0.3601551  0.7224729  2
3 1.0340859 -0.1108304 -0.5941295  0.5027085  3

I want to convert it so that one column has the id, another identifies whether I'm forecasting a or b, a third holds the forecasted value, and a fourth holds the actual value.

The procedure in R involves "melting" the data frame and then "casting" it. Melting is rather simple -- you provide a set of identifiers, and the data frame is melted down to those identifiers, the values, and an indicator variable that tells you what each value is supposed to represent. If we let df be the data frame above, I would run:


library(reshape)  # melt() and cast() come from the reshape package
df.m = melt(df, id.vars = 'id')

   id variable      value
1   1     aAct  1.2076384
2   2     aAct  0.4999706
3   3     aAct  1.0340859
4   1    aPred -0.6735547
5   2    aPred -0.7188215
6   3    aPred -0.1108304
7   1     bAct  1.4994464
8   2     bAct -0.3601551
9   3     bAct -0.5941295
10  1    bPred -1.0691975
11  2    bPred  0.7224729
12  3    bPred  0.5027085


Now I need to "unmelt" part of the data frame to get the forecast/actual pairings. In R, this is known as casting, and I personally had a pretty hard time decoding the documentation. The function call looks like

cast(df.m, <IDENTIFIERS> ~ <VALUES>)

The second part is known as the casting formula, and it is the part I struggled with. In its simplest form, the variables on the left of the ~ jointly identify each row of the cast frame, and the <VALUES> variable on the right supplies the labels for the new value columns. If that sounds confusing, I apologize. Perhaps working through the example will help.

First, I need to find a way to identify whether a row is looking at a or b, and whether it is a forecast or an actual variable. So I first create these variables:

df.m$var = substring(df.m$variable, 1, 1)   # the series being forecast: "a" or "b"
df.m$type = substring(df.m$variable, 2)     # whether the value is an "Act" or a "Pred"

Which gives me the data frame:


> df.m
   id variable      value type var
1   1     aAct  1.2076384  Act   a
2   2     aAct  0.4999706  Act   a
3   3     aAct  1.0340859  Act   a
4   1    aPred -0.6735547 Pred   a
5   2    aPred -0.7188215 Pred   a
6   3    aPred -0.1108304 Pred   a
7   1     bAct  1.4994464  Act   b
8   2     bAct -0.3601551  Act   b
9   3     bAct -0.5941295  Act   b
10  1    bPred -1.0691975 Pred   b
11  2    bPred  0.7224729 Pred   b
12  3    bPred  0.5027085 Pred   b

Now I can cast the frame. In this case, I would use the formula

df.mc = cast(df.m, id + var ~ type)

Here is how to interpret the formula. id + var means that every observation is uniquely identified by its id code and the variable we're forecasting -- a or b. Then "type" on the right side supplies the new column names that will be filled by the values.
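On the toy data above, the cast frame should then look something like this (the exact row ordering may depend on your version of the reshape package):

> df.mc
  id var        Act       Pred
1  1   a  1.2076384 -0.6735547
2  1   b  1.4994464 -1.0691975
3  2   a  0.4999706 -0.7188215
4  2   b -0.3601551  0.7224729
5  3   a  1.0340859 -0.1108304
6  3   b -0.5941295  0.5027085

Each row is one id/variable pair, with the actual and forecasted values side by side -- exactly the shape that plays nicely with faceting.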

I hope this is useful to others so they don't end up spending hours agonizing over the issue as I did.


Monday, July 22, 2013

More on Growth and Convergence Within Countries

In my last post on China, I touched on the issue of Chinese growth by showing a graph with the distribution of Chinese per capita incomes by province, and arguing that there is a strong convergence story pushing China towards more growth. In the comments, Tamar notes that many countries do not converge. For example, per capita incomes in Mississippi and Connecticut differ by a factor of about 2, even though the United States is a relatively developed country. He also suggested that I take a look at Brazil. And so I did. I took a look at the distribution of province per capita income divided by country per capita income for three emerging market economies: Brazil, Mexico, and China, and found that indeed, they were quite close!


I wondered if it was because I didn't weight for populations, so I downloaded some Mexican population data from their government's website. I didn't have time to do Brazil, but even comparing China and Mexico I found that the distributions were quite similar.

On first pass, this bodes poorly for a convergence hypothesis.

But let's think back to the Solow model. We should only observe convergence in income levels if technologies and savings rates are identical. It's entirely plausible that these differ across provinces, and that they differ for extended periods of time. Therefore a better metric for evaluating convergence is not whether incomes converge in levels, but whether they converge in growth rates. In the Solow model, at the steady state, every economy grows at a rate equal to the rate of population growth plus the rate of technological change. If they're all bound together (e.g. if they're all large provinces in one country), then the rate of technological change g should be similar across them, and demographic trends typically do not differ hugely among provinces in the long run.
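To state the textbook result behind that claim: along the Solow balanced growth path, with population growth $n$ and labor-augmenting technological progress at rate $g$,

$$\frac{\dot{Y}}{Y} = n + g, \qquad \frac{\dot{y}}{y} = g,$$

so total output grows at $n + g$ while per capita income grows at $g$ alone, regardless of the savings rate.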

So if we look at growth rates, now we see convergence at work. As a technical note, I only had data for Mexico from 2003 to 2010, so I got the 10 year ratio by raising the 7 year growth ratio to the power 10/7.
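As a minimal sketch of that adjustment in R (the numbers here are made up for illustration):

# hypothetical per capita incomes for one province, in 2003 and 2010
gdp_pc_2003 <- 70000
gdp_pc_2010 <- 100000

ratio7  <- gdp_pc_2010 / gdp_pc_2003  # growth ratio over the 7 years of available data
ratio10 <- ratio7^(10 / 7)            # implied 10 year ratio, assuming the same average growth rate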



So even though Mexico and China have similar distributions of within-country income levels, they have widely different distributions of growth. Therefore I stand by my original belief that China still has a lot of long run growth potential left as the poor provinces catch up with the rich.


China's Provinces and why National Data can Mislead

Close your eyes and think of China. What do you see?

If you were like me, you saw a large metropolis filled with high rise apartment buildings, choked with chronic air pollution, humming along to the sounds of millions of residents getting through their days.

I believe this is also the image many economic commentators have in their minds when they talk about an upcoming "Chinese" slowdown. But what I want to do in this short little post is to demonstrate why thinking this way neglects one of China's most important qualities: its size.

China has a total of 1.34 billion people spread over 23 provinces, 4 municipalities, and 5 autonomous regions. Individual provinces in China can have as many people as entire countries. The coastal province of Guangdong has a population of 105 million -- just shy of Mexico's 112 million and far exceeding every country in the European Union. Sichuan, an inland province (known for its spicy food), has a total of 80 million inhabitants -- larger than the entire Western United States combined. In this sense, it's better to think of China as a collection of smaller countries united under a currency union called China, and not as a uniform economic entity.

For example, consider the following map from Wikipedia showing per capita income by province.


As can be seen, there are vast disparities in income. Whereas the coastal provinces are quite rich, the inland ones are quite poor. However, the chart understates these differences because it uses a log color scale. Below is a histogram of the 2012 per capita income and population statistics pulled from the China Data Center associated with the University of Michigan.
GDP per capita in Shanghai was 85,000元 whereas GDP per capita in neighboring Anhui was only 28,792元. Translated into market exchange rates, this means an average GDP per capita of $13,848 in Shanghai and only $4,690 in Anhui. If we take the Solow model seriously, what this suggests is that there is a massive potential for convergence within China. Even if the inland provinces do not face conditions as favorable as the coastal provinces did when they got rich, do you really expect the 80 million residents of inland Sichuan to stay at 60% of coastal Guangdong's income forever? Especially since China does so much manufacturing, Dani Rodrik's work on unconditional manufacturing convergence suggests that these poorer provinces will inevitably partially catch up with the richer provinces. There's just not enough income for them to get caught in a middle income trap.

There is also no systematic relationship between population and income. No matter the combination of big or small, rich or poor, there is a Chinese province that fits the description.




Recognizing this heterogeneity also explains why China's national GDP per capita statistics paint an overly rosy picture of China's wealth and an overly dour picture of China's future growth. Because a few provinces are now somewhat rich while most provinces are still very poor, mean GDP per capita for the nation does not accurately represent the plight of most provinces. You can see this in the fact that most provinces in the above scatter plot lie below the regression line that approximates the mean level of GDP per capita. As a result, we underestimate the role convergence has to play in bringing more Chinese economies out of poverty and therefore underestimate China's true growth potential.

The bottom line is that "turning point" arguments that fail to consider the subtleties of individual provinces will lead us astray. Too often, we associate China with middle income images of massive apartment complexes, when in reality much of China is still very poor. Any serious evaluation of where China is going requires careful consideration of how growth in individual provinces will evolve. And based on the provincial data, I am quite optimistic.

Friday, July 19, 2013

A Market Monetarist Approach to the Interest Rate Puzzle

What’s going on with real rates, inflation breakevens, and the stock market? From the beginning of 2010 to the end of 2012, these three variables affected each other in a predictable way. Higher inflation breakevens pushed up the stock market, as they served as a sign that aggregate demand was rising. Growth in real rates was associated with increases in the stock market, as the real rates served as a predictor of future growth. However, these relationships have broken down in the first half of 2013. In this post, I aim to explain why. By combining movements in market data with traditional economic theory, there is convincing evidence that the recent change is due to a positive aggregate supply shock, and therefore bodes well for economic growth looking forward.

This post will proceed in three acts. In Act One, I introduce some work that has already been done on this question. In Act Two, I present a new approach to process the market data and the theory that justifies the observations. And in Act Three, I address any residual concerns. Let us now begin.

Act One -- The Work that Has Been Done

Recent trends in financial markets since 2010 are summarized in the chart below, which shows the movement in the 10 year real interest rate, the 10 year inflation breakeven, and the SP500. During the 2010-2012 period, the 10 year inflation breakeven was very tightly correlated with the SP500, and if you squint you will notice that increases in the 10 year treasury yield were also correlated with increases in the SP500. However, this seemed to reverse itself starting in 2013. Even as inflation expectations were falling, the SP500 still gained steady ground. And when the 10 year real interest rate spiked in recent weeks, we saw a temporary fall in the SP500.



Evan Soltas has documented the breakdown of the interest rate relationship. There are two signals communicated by a rising rate. First, it could be a signal of stronger future growth -- which should send the SP500 up. On the other hand, it could be a sign that monetary policy will be too tight -- which should send the SP500 down. By looking at 90 day rolling correlations between the daily percent change in the 10 year treasury yield and the SP500 stock index, we can tell the difference. Evan has observed that the correlation coefficient between the two changes is quickly approaching zero. According to him, this signals that “over the past 90 days, monetary tightening has been as important to rates as has been macroeconomic strengthening”. The June survey of primary dealers further confirms this hypothesis.

Brad DeLong and Matt Yglesias have both come into this debate on Evan’s side, arguing that the Fed has been engaging in a stealth monetary policy tightening. To them, these trends are signs that growth could suffer again in the upcoming months as the Fed decides to tighten too early.

On the other hand, I have looked at the relationship between inflation breakevens and the SP500 and believe what we’re really looking at is a positive supply shock. I find that even though 2013 has been characterized by falling inflation breakevens alongside a rising SP500, marginal increases in the TIPS spread still have a positive effect on equity prices. The only difference is that the SP500 seems to have a higher trend growth level -- an alpha with respect to inflation, if you will. I interpret this as an expectation of higher output at every level of inflation. I identify this with a textbook increase in aggregate supply, and thus argue against the monetary tightening hypothesis.

Act Two -- Another Look at the Data

One unfortunate oversight of the analysis Evan and I have each done is that we don’t fit our stories together. He says tightening, I say aggregate supply, and we each point to our individual data. But an open question remains: how do our theories explain the other person’s data?

To estimate this, I roll with Evan’s calculations, but with a slight modification. Instead of calculating correlation coefficients, I compute rolling regression coefficients. I look at week to week changes in inflation breakevens, the 10 year TIPS yield, and the SP500. For each week, I regress percent changes in the SP500 against percentage point changes in the TIPS yield and inflation breakevens over the past 26 week window. The regression slopes measure the response of the SP500 to either interest rate changes or expected inflation, and correspond loosely to the correlation coefficient Evan calculates. The regression intercept measures the “intrinsic” trend growth of the SP500, independent of interest rates or inflation. By looking at these coefficients in context, I will try to construct a more holistic vision of what the financial markets are trying to say.
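For readers who want to replicate the idea, here is a minimal sketch of the rolling regressions in R. The data frame `dat` and its columns `sp500`, `tips10`, and `breakeven10` are hypothetical stand-ins for weekly closes of the SP500, the 10 year TIPS yield, and the 10 year breakeven; this is not the exact code behind the charts.

# weekly changes: percent changes for the SP500, percentage point changes for the rates
dsp   <- diff(dat$sp500) / head(dat$sp500, -1) * 100
dtips <- diff(dat$tips10)
dbe   <- diff(dat$breakeven10)

# regress SP500 changes on one regressor over each trailing 26-week window,
# returning the intercept ("intrinsic" trend growth) and the slope
roll_fit <- function(y, x, k = 26) {
  t(sapply(k:length(y), function(i) {
    idx <- (i - k + 1):i
    coef(lm(y[idx] ~ x[idx]))
  }))
}

rate_coefs <- roll_fit(dsp, dtips)  # SP500 response to real rate changes
be_coefs   <- roll_fit(dsp, dbe)    # SP500 response to breakeven changes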



First, let us take a look at the right side panels which describe the responsiveness of the SP500 to expected inflation. In some sense, changes in the SP500 represent changes in expected future nominal GDP. Therefore, when we look at the relationship between inflation expectations and the SP500, this serves as a proxy for the relationship between inflation and nominal GDP.

In my view, the spike in the TIPS breakeven intercept is a smoking gun for a positive aggregate supply shock. Think about what the higher intercept means. The regression is of changes in the SP500 against changes in the TIPS breakeven. Therefore an increase in the intercept means that the SP500 grows faster at every level of expected inflation. This effect is quantitatively important as well. In comparison to the 6 months ending in 2012, the intercept for the past 6 months suggests that the SP500's weekly trend growth independent of inflation expectations has kicked up from about 0% to about 0.8%. Meanwhile, the slope for the breakeven-SP500 relationship is still positive. This all suggests that a more permanent aggregate supply shock is driving the intercept up, whereas day to day aggregate demand shocks keep the slope positive. A diagram of this is shown below.


However, careful readers will note that you can also get “more output at every price” from a story with a structural shift in aggregate demand and marginal shocks coming from aggregate supply. This hypothesis fails on two counts. First, if marginal changes in inflation reflected changes in aggregate supply, not demand, then because aggregate supply shocks send prices in the opposite direction of output, we should expect the TIPS breakeven slope to be negative. Second, the AD story does not match up with the changes in levels. As I showed above, inflation expectations have fallen while the SP500 has risen. If there were a large aggregate demand shock, then we should have seen both the SP500 and the TIPS breakeven rise in levels. Therefore, a positive aggregate supply shock provides the most natural interpretation of the right hand panels.

Now comes the out of sample test. Can an aggregate supply shock explain the low slope and moderately higher intercept in the SP500-real rate relation? Absolutely.

To see how, I appeal to a version of the IS-MP (Investment Savings, Monetary Policy) model, pictured below. In the diagram, nominal GDP growth is on the x-axis and the real interest rate is on the y-axis. The IS curve is the standard IS curve from intro macro. It describes the combinations of interest rates and nominal GDP levels that give equilibrium in the goods market. At lower levels of the real interest rate, people want to hold onto less money and consume more goods. This results in higher levels of nominal GDP and a downward sloping curve. The MP curve is slightly different because it describes not equilibria but a central bank reaction function. At higher levels of nominal GDP, the Fed sets a higher interest rate in order to prevent rapid inflation. These two curves give a unique equilibrium characterized by an interest rate and a level of nominal GDP.
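For concreteness, here is one deliberately simple linear parameterization of the two curves (my own shorthand for the diagram, not something taken from the post or a specific textbook), with nominal GDP growth $y$ on the horizontal axis and the real rate $r$ on the vertical axis:

$$\text{IS: } y = \bar{a} - b\,r, \qquad \text{MP: } r = \bar{m} + d\,y, \qquad b, d > 0.$$

A positive supply shock that makes the Fed comfortable with easier policy at every level of nominal GDP shows up as a fall in $\bar{m}$, shifting the MP curve down along an unchanged IS curve, so the new intersection has a lower real rate and higher nominal GDP growth.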




Now what happens if there is a supply shock? The increased productive capacity, to a first approximation, has no effect on the IS curve. To see why, suppose the monetary authority does not react. Because a supply shock leaves nominal GDP relatively unchanged, the IS curve should not move. However, because the Federal Reserve is an inflation targeting central bank, the MP curve shifts down. Now that every unit of nominal GDP consists of more real growth and less inflation, monetary policy becomes easier. Therefore, the new monetary policy curve will look something like MP(2) in the picture above.

This matches two more details from the regression.

First, the downward shift of the MP curve means that at every interest rate you observe more output. This matches the somewhat higher regression intercept on the interest rate graph.



Second, since the MP curve is moving, we should expect a weaker correlation between interest rates and output. This is illustrated above. If the MP curve is held constant while the IS curve shifts back and forth, then we will observe a strong correlation between interest rates and output, as shown by the blue line. On the other hand, if the MP curve is moving to MP(2) at the same time, we may end up observing the red dots and finding that the correlation drops. We also should expect this correlation confusion to be a bigger deal for the monetary policy shift than for the aggregate supply shift. As Bernanke is finding out, shifts in MP are linked to relatively unstable market expectations whereas a positive AS shock from something like oil discoveries is much more predictable. The theory behind this explanation of the fall in the correlation is illustrated in the sketch below, and it is actually exactly what we observe in the markets during the first half of this year.



The final step is to get the higher interest rate from the taper, and this can be seen as just the effect of a slight Fed tightening along with a slight rightward shift of the IS curve as business confidence recovers. In the end, you have higher growth and higher rates, even though the Fed has tightened (as per the Dealer survey) relative to where it was before.

Act Three -- Addressing Additional Concerns

While I believe the above story is the one most consistent with the regression data, there are always additional concerns.

Most importantly: what is the positive supply shock? I believe the most plausible candidate is the further discovery and development of unconventional oil and gas reserves. Compared to the counterfactual of perpetually rising oil prices, the new discoveries make it easier for policy makers to respond to energy shocks and improve the economy’s productive capacity.

An important note is that a rise in oil prices, when it occurs alongside a rising SP500, does not contradict the aggregate supply hypothesis. An aggregate supply shock is characterized by a general fall in inflation as output rises. But if aggregate demand is moving at the same time, we could end up observing higher prices with even higher output. Therefore we identify an aggregate supply shock by seeing higher output *for any given level of inflation*. And this is precisely what we see from the rolling regressions.

Also, I have a somewhat harder time explaining the past movements in the intercepts and slopes. Fortunately, the two intercepts seem to move up and down together, as do the two slopes. Moreover, the intercepts often move in the opposite direction from the slopes. This suggests that supply shocks may be more recurrent than we are led to believe.

Others may criticize the above approach as too ad-hoc. While to some extent, it certainly is, I believe I have done justice to the spirit of the AS/AD and IS/MP models. Furthermore, if you break down all the layers of abstraction and ad-hoc econometrics, the story is quite simple:

The massive increase in U.S. petroleum resources has expanded aggregate supply, allowing the economy to attain higher levels of output at every level of inflation. This serves as a massive tailwind for equity markets that no longer depend on aggregate demand inflation to grow. This requires a muddled monetary policy adjustment -- reducing the previously observed correlation between interest rates and growth. Nonetheless, the aggregate supply shock has increased trend growth, making the fluctuations in interest rates matter less.

Fin.

Tuesday, July 16, 2013

The Reach for Real Bills

Awash with liquidity and starved of paper, must financial markets slip out of control? This is the central question behind the “financial stability” argument against additional monetary easing. According to this objection, the zero bound on interest rates means that the Fed’s easing can do little for the real economy, and the cash created by open market operations just fuels a speculative excess termed a “reach for yield”. I have already addressed one reason why this theory is incorrect. If QE indeed spurred a reach for yield, then the taper talk should have reversed this and caused a flight to safety. Yet after the taper dust settled, we saw cyclicals rally strongly while safe assets fell -- indicating that QE was likely encouraging healthy risk taking and not an anomalous reach. However, that evidence primarily came from equities. In this post, I want to take a different approach to expand the scope of my argument against financial stability concerns. I will start with some monetary history and discuss why thinking in terms of financial stability can be very misleading. In short, adopting a financial stability approach to monetary policy is unwise and will likely worsen both the business and financial cycles.

First, let’s consider the motivating evidence for the financial stability position. Below is a chart prepared by UM alumnus Naufal Sanaullah charting the loan-deposit gap at US commercial banks. According to Naufal, this shows that the usual lending mechanism we learn in intro macro doesn't work any more. No more loans are going out, and therefore nothing makes it to the real economy. And while the real economy is unaffected, this domestic savings glut drives a reach for yield, as banks still need to pay their depositors.



If this theory is correct and monetary policy is completely ineffective, the Fed should taper earlier. If the costs to financial markets are great enough, and if the benefits to real economies are small enough, it may be worth it for the Fed to fumigate any excess risk in markets by raising interest rates.

Thinking in terms of financial stability may seem novel, but the Federal Reserve actually had the same debate during the Great Depression. Julio Rotemberg, in his recent paper for the NBER monetary policy conference, does a wonderful job summarizing the literature on the thought process of the Fed at that time.

Friedman and Schwartz (1963) stressed instead the substantial declines in the money supply that followed. These were, in part, the result of the Fed’s refusal to lend to banks subject to runs. In addition, and in spite of the exhortations of various Federal Reserve officials at various times, the Fed resisted embarking in large-scale open-market purchases to offset the declines in banking. Under pressure of Congress, such a program was started in April 1932, though it quickly ended in August of the same year. This was rationalized on the ground that conditions were “easy” since there were ample excess reserves. Some officials thought the increase in excess reserves (and reduction in borrowing from the Fed) proved that the program was ineffective.

Given subsequent developments, it seems likely that some members also viewed excess reserves with fear. As excess reserves accumulated in the mid-1930s these fears were openly discussed, and Friedman and Schwartz (1963, p. 523) quote extensively from a 1935 memo that clarifies their nature. In effect, the Fed worried that banks would use these funds for speculative purposes that would ultimately be costly. Or, as the 1937 Annual Report put it, the Board feared “an uncontrollable increase in credit in the future.” These concerns were sufficiently intense that the Fed raised reserve requirements by 50% in August 1936. Further increases in 1937 left them at double their 1935 values (Meltzer 2003, p. 509).

If you look closely, the parallels to the Fed’s dramatic QE policies and current financial stability concerns are uncanny. In both stories, the recession was identified as the result of speculative excess. In response to the crash, both times the Federal Reserve embarked on a program of monetary easing. However, in both instances excess reserves failed to budge, and this was interpreted as a sign that banks just didn’t want to lend -- the Fed was pushing on a string. Finally, as excess reserves persisted, the threat of “speculative purposes” was used to bully the Fed into tightening. The key difference between now and then is that we have a Fed that recognizes its role in supporting the real recovery. Those in 1936 were not as lucky.

Why did the Fed go on such a destructive path in the 1930s? Rotemberg identifies the tightness of policy as a consequence of something called the “real bills doctrine”. Under the real bills doctrine, the Fed saw its role as providing credit so that there was enough, and no more, credit to invest in “productive uses”. Since the Great Depression was preceded by a speculative stock bubble, Fed officials put a premium on making sure credit was put to “productive uses”; the real bills doctrine was the result. According to this doctrine, monetary policy should tighten in recessions when demand for credit falls so as to make sure what credit remains is put towards productive uses. Conversely, monetary policy should ease in booms because firms are looking for credit to fund their projects. In other words, the real bills doctrine prescribed a procyclical monetary policy.

This goes to show that we need to avoid framing effects when thinking about monetary policy. Because the Great Depression was the result of an equity bubble, the economists of the day were so concerned about bubbles that they pursued destructive monetary policy. It is just as important not to make the same mistake today. As the real bills doctrine shows, using the tools of financial economics to solve monetary problems can be very destructive.

In particular, the concern about excess reserves or a loan-deposit imbalance comes from ignoring general equilibrium. Walras' law states that the values of excess demands sum to zero across all markets in an economy. So if there is a lack of demand for goods, it must be the result of an excess demand for money that goes into savings. But if the interest rate is low enough, it may no longer be worth it to hold onto the money as savings, and people will spend it. In the limit, if people knew that all of their cash would disappear when the next day started, they would certainly spend today. There must be a real interest rate, perhaps negative, that would make people want to give up enough money to equilibrate the goods market. This recasts the question as whether that negative rate is attainable. Once you can reach any arbitrary rate, the money markets and goods markets are sure to equilibrate.
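To state the textbook version of that law: if $ED_i$ denotes the excess demand in market $i$ and $p_i$ its price, then

$$\sum_i p_i \, ED_i = 0,$$

so an excess supply of goods (negative excess demand) must be matched, in value, by an excess demand somewhere else -- here, for money.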

Of course if the Fed was stuck at the current interest rate it could never attain the negative rate. But that’s where forward guidance comes into play. What forward guidance allows the Fed to do is pin down the future price level -- even if there appear to be no tools right now. This is the well known escape clause in Krugman’s original analysis of the liquidity trap. If the Fed can commit to a future policy path, the zero lower bound no longer matters.

To get a more intuitive feel for this argument, you should think in terms of an observable Fed policy rate (r) and an unobservable Wicksellian, or full employment, rate (w). The full employment rate is so named because it is the interest rate at which all resources are fully employed. In this example, I set both interest rates to be nominal, so r cannot go below zero. At any given instant in time, the stance of monetary policy is determined by where the policy rate, r, is relative to the Wicksellian rate, w. If the Fed rate is higher than the Wicksellian rate, the Fed is tightening. If it is lower, the Fed is easing. Dynamically, the Fed's policy stance is determined by the blue area minus the red over all time.
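In symbols, one way to write the cumulative stance the picture describes (my shorthand, not a formula from the original post) is

$$\int_0^T \big(w(t) - r(t)\big)\, dt,$$

which is positive when policy has, on net, been easy (the policy rate held below the Wicksellian rate) and negative when it has, on net, been tight.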




This gives a natural interpretation for why forward guidance works at the zero lower bound. Even though the Fed’s rate, r, is stuck at zero and is currently above the Wicksellian rate w, the Fed can still generate inflation by promising to keep policy easy in the future, even when the Wicksellian rate rises. This can move the economy to a different equilibrium. With higher expected inflation, the nominal Wicksellian rate rises, which means people are willing to part with their money (read: have no excess demand for money) at higher interest rates. As a result, even though the Fed is constrained right now, it still has power over the future policy path. This goes to show that the zero lower bound is not a serious reason to discount the Fed's ability to conduct monetary policy.



To get back on track, the Fed must commit to keeping rates low until the price (or nominal GDP) level is back to trend. On the other hand, if the Fed were to raise interest rates now, this would collapse expected inflation, lowering the Wicksellian curve and knocking the economy into a low output, low interest rate environment. So even if you think the low rate environment is causing financial distortions, the only way to get higher rates in the future and to solve the apparent financial distortions of low interest rates is, ironically, to promise to keep short rates low now.

The financial stability view gets off track because it ignores general equilibrium effects. In partial equilibrium analysis, when there's an excess stock of something, such as bank reserves, the natural response is to cut supply. But this is a misleading analogy for bank reserves, because an excess supply of bank reserves actually represents an excess demand for money. Therefore the proper response is to maintain lower rates and not prematurely tighten.

Therefore the real bills/financial stability doctrine fails for three reasons. First, it identifies excess reserves as the result of reduced borrowing that the Fed cannot control, whereas the excess reserves are actually symptoms of an excess demand for money that easier monetary policy can address. Second, this misdiagnosis leaves us thinking the Fed is powerless, whereas the Fed can pin down the price level through forward guidance. Third, it ignores the general equilibrium relationship between money and goods. Prematurely raising rates actually depresses interest rates in the long run and worsens the excess demand for money. Bottom line? Worrying too much about financial stability can exacerbate the business cycle and actually prolong the period of low rates. Instead, the Fed should keep its eyes on the real economic prize, and keep financial decisions separate from its monetary ones.

Thursday, July 11, 2013

Micro and Macro Benefits Should Stay Separate

Recent posts by Mark Thoma and Michael Roberts have spurred me to think more about how to evaluate fiscal policies at the zero lower bound. Both Thoma and Roberts argue that because crowding out is less severe at the zero lower bound, certain government investments become much more desirable. For Thoma, this policy is increased infrastructure investment, whereas Roberts focuses more on investments in environmental policy. As such, Roberts asks: “how do we more generally evaluate the costs and benefits of public policies in a depressed economy?” This post will be an attempt at answering that question. My core thesis is that while government investment may well be desirable, it is no more desirable when the economy is at the zero lower bound than when it is not.

Why is the zero lower bound important anyway? One argument against fiscal policy is that government spending can crowd out private spending, leaving net expenditure unchanged. This can happen in two ways. First, through direct channels -- when the government builds a new school, this may crowd out a private school that would have been built in the region. Second, there is an interest rate channel. To finance the new spending, the government has to borrow from financial markets, crowding out private borrowing and thereby attenuating any positive effect of fiscal policy. However, when the economy is at the zero lower bound, private investment is typically weak and interest rates are low. These conditions mean that government spending will likely result in “crowding in” as multiplier effects stimulate more activity. This was the major argument behind the DeLong and Summers paper about fiscal policy at the zero lower bound. Therefore government investment spending carries a “double dividend” at the zero lower bound; it boosts output and long run growth while also avoiding crowding out effects.

I see two major problems with this argument. First, it ignores the Sumner Critique about monetary policy offset. If monetary policy controls the nominal growth path of an economy, then there’s no point in trying to get more aggregate demand with government investments. Any multiplier effect will just be canceled out by the monetary authority passively tightening in response. While we haven’t seen as much tightening in the U.S. economy, we have seen this process work in reverse. Even as the government has severely tightened fiscal policy, signs of aggregate demand have held surprisingly steady. A comparison with Europe -- a continent going through a similarly savage bout of austerity -- leads us to conclude that monetary policy still has wide latitude in determining aggregate demand even at the zero lower bound. Japan’s recent spike in growth has also shown that monetary policy can have an effect even after a long period of zero rates. This contradicts the assumption in the DeLong and Summers paper that monetary policy becomes powerless at the zero lower bound, and it means that any multiplier effects of government investment are minimal. Therefore multiplier or “crowding in” effects do not serve as a sound basis for evaluating government investment.

Now suppose for some reason that the monetary authority has imperfect credibility and cannot pull the economy out of the zero lower bound. Does government investment become more attractive as a result? Still no. This is because the proper benchmark is not the absence of government spending, but rather the next best government spending option. When considering all these investment proposals, we should remember that the government could always spend its money on “firework shows” or “alien defenses”. This (inefficient) policy scheme would capture all the multiplier expenditure effects with none of the long run growth effects. As a result, the dividends of government investment are no greater at the zero lower bound than when interest rates are positive.

Some have made the even more radical argument that aggregate supply reducing policies, such as more stringent environmental regulations, can actually have macro benefits at the zero lower bound by increasing inflation. However, a look at forecast data in Japan around the time of the tsunami and in the U.S. around the time of the Libyan oil shocks shows that adverse supply shocks are, well, adverse. Output doesn't rise in response to adverse supply shocks -- even at the zero lower bound.

So where do we end up? While the “double dividend” hypothesis might be a strong political argument for government investment, the core, apolitical economic analysis suggests that the zero lower bound does not make investment more desirable than usual. As a result, focus needs to be directed towards identifying the efficiency costs of low investment, not low output -- focus on the Harberger triangles, not the Okun gaps.

Wednesday, July 10, 2013

The Taper and Growth -- A Reply to Brad DeLong

Intellectual honesty means disagreeing with even those who are “on my side”. So when the arguments I have made against Reaching for Yield are also arguments against doomsday predictions for the taper, I have to speak up.

In this case, I have Brad DeLong in mind. In a post today, Brad sees the recent rise in real interest rates as measured by the 10 Year TIPS yield and the fall in inflation expectations as measured by the 10 year breakeven as cause for alarm, claiming that “Not since 1991 have we had such a large and rapid contractionary shift in the market's belief about what the Federal Reserve's reaction function.”

Reading his post, it almost sounds like Fed policy is going to collapse growth. But I would argue that while Fed policy is failing to promote maximum employment, it hardly follows that growth will collapse. I come to this conclusion also by looking at financial data. Below I reproduce a plot of the real interest rate, inflation breakeven (both 10 year), and add a plot of the SP500. I focus in on 2013 to see the recent dramatic changes.



We can make some stylized observations. First, the real interest rate has been on a steady rise since May. Second, the inflation breakeven has been on secular decline since about March. Third, in spite of all of this, the SP500 has been steadily growing, rising more than 10% on a year to date basis.

How should we interpret this? If we accept the uncontroversial proposition that stock market movements reflect expectations of future growth, it should be clear that the fall in inflation expectations does not reflect a fall in expected future nominal GDP. This is a break from the trends of 2010 to 2012. But if inflation is not moving in the same direction as output, it must be that a positive supply shock is the driving factor behind the fall in inflation breakevens.

The natural candidate for the positive supply shock is the fall in oil prices. The recent slowdown in emerging markets and the massive expansion in oil production have lowered energy costs for the United States. This is a textbook expansion in aggregate supply, and we should naturally expect output to rise and inflation to fall -- precisely what we observe above.



Nonetheless, I still agree that more monetary stimulus is desirable. To see this, we should consider the first differences of inflation expectations and the SP500. In the plot below, I have plotted weekly percent changes in the inflation breakeven and the SP500. Blue denotes points in the 2010-2012 time period, and red denotes the points on a year to date basis. Note that in both samples there is a positive relationship between changes in inflation expectations and changes in the SP500. However, the year to date group has a higher intercept, reflecting that the SP500 has shifted to a higher trend growth path relative to the 2010-2012 period. Indeed, if you run the regressions on the first differences, you find that in the 2010-2012 period, the SP500 would gain only 0.23% in a week if inflation expectations were unchanged. In 2013, the value is 0.76% -- almost triple the previous trend growth rate.
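A minimal sketch of those two regressions in R, assuming a weekly data frame `wk` with hypothetical columns `date`, `sp500_chg` (weekly percent change in the SP500), and `be_chg` (weekly change in the 10 year breakeven):

# split the weekly changes into the two samples
pre <- subset(wk, date >= as.Date("2010-01-01") & date <= as.Date("2012-12-31"))
ytd <- subset(wk, date >= as.Date("2013-01-01"))

# the intercept estimates weekly SP500 growth when breakevens are unchanged;
# the post reports roughly 0.23% for 2010-2012 and 0.76% for 2013 year to date
coef(lm(sp500_chg ~ be_chg, data = pre))["(Intercept)"]
coef(lm(sp500_chg ~ be_chg, data = ytd))["(Intercept)"]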



These facts show the simplest version of the aggregate supply/aggregate demand model in action. If inflation falls while nominal GDP rises, then it must be a positive supply shock. For every level of inflation we achieve a higher level of output. But even after the positive supply shock, aggregate demand policy still plays a role -- i.e. any marginal rise in inflation still translates to a rise in output.

What went wrong in DeLong’s original analysis was that he reasoned from a price change. He started by talking about inflation and interest rates and then translated that into a statement about monetary policy. On the other hand, I started with a quantity -- the SP500 -- and used that to interpret the price changes. This allows me to fit the data into the standard AS/AD model.

I can then map potential data changes into events in the AS/AD model. DeLong writes out a list of four possibilities for interpreting changes in the real interest rate and the inflation breakeven. I have produced a similar table below that translates the AS/AD arguments I made above. My version provides endogenous predictions for the real interest rate -- the market indicators are inflation expectations and the SP500.

In my view, the economy is in state (4). Inflation is weak, but growth will be strong. These growth prospects are also corroborated by the relative strength of cyclical stock sectors relative to safe ones. Investors are ramping up -- not buckling down -- as expectations of future nominal GDP rise. Bottom line? The taper isn't going to knock growth far off track.

This rate story shows how important markets are in market monetarism. TIPS spreads and movements in the SP500 make for an easy breakdown of aggregate supply and aggregate demand. We should take them seriously, even if it’s politically inconvenient for those of us arguing for monetary easing. Interest rate movements signal changes in the reactions of the Fed. But since it's unlikely that the Fed will screw up so badly as to keep interest rates elevated for an extended period while the economy is suffering, rising long rates almost always indicate higher expected nominal GDP. These financial indicators provide policy makers with forward looking data on which to base policy -- a cornerstone of market monetarism.

I want to end on what the above means for monetary policy and advocates of monetary easing, such as myself.

First, the recent fall in inflation breakevens should not be interpreted as a monetary tightening -- the change is not being driven by demand, but rather by supply. Second, the Fed is severely failing its dual mandate. Now that inflation is falling, the Fed should have even more latitude to pursue its full employment objectives. In this light, the taper is madness. Third, advocacy for monetary easing should focus on the human costs, not financial costs, of tight money. Wall Street will move on, but Fed complacency in the face of half a decade of slow job growth will leave scars on Main Street for years to come.

Tuesday, July 9, 2013

Debt Is Not Damning. Debt is Just Debt

This post is meant to add a few final goodies to the work I did with Miles Kimball on the Reinhart and Rogoff results. In short, Miles and I took another look at the RR dataset as prepared by Herndon et al. and found that the long run effects of debt on growth were vanishingly small. The major innovation driving our finding was to look not at contemporaneous growth, but rather at growth 5 to 10 years out in either direction. We ended up finding that while low past growth does a good job of predicting high current debt, current debt does a rather poor job of predicting low future growth.

The column had a very strong response, and as such we each had a few follow up posts. Immediately after the first article, Miles started addressing new points brought up by various commentators, and these can be seen here. I later wrote about controlling for the possibility that policy makers would manipulate their debt levels in expectation of future growth rates, and Miles had a post on the importance of taking enough lags of growth rates into account. We pursued these ideas further in another Quartz article that featured (in my opinion) a very good looking scatterplot that lets you see the individual countries that contribute to the regression results.

Since part of this controversy was over data sharing practices, I made sure to make all the code available in the Data section of my blog.

Before presenting the original article, I want to add two more pictures to the debate. The first is a scatter plot that breaks down the relationships between future growth, past growth, and debt by both country and time -- something that Evan Soltas asked for when the article was released. These diagrams show that even when the observations are grouped by decade, the general conclusion that debt does not slow future growth  still holds. This plot also allows you to see which countries are the influential outliers. With any luck, this granularity can inspire some more posts about what the experiences of those individual countries can teach us about debt and growth.


To me, what is most stark about these plots is that the Solow growth model implies both panels should have downward sloping lines. Since debt levels usually rise as countries approach the technological frontier and start welfare states, we should expect debt to be negatively correlated with growth -- both past and future. Therefore the nearly flat slopes in the left panel really do suggest debt's effect on future growth is quite small.

As a second plot on this point, I want to present a version of the debt/GDP buckets plot, because that nonparametric approach was a big part of the RR debt/growth message. It turns out that as soon as we look at future growth, the buckets no longer show much of a slowdown at all at moderate levels of debt. However, the buckets maintain the robust negative relationship between past growth and current debt. Even though this image might be provocative, I would recommend not reading too much into it. The standard errors should be quite large and are not included, and none of these bar charts adjust for past growth.


And now, the full text of the article. If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:
© May 29, 2013: Miles Kimball and Yichuan Wang, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.



After Crunching Reinhart and Rogoff’s Data, We Found No Evidence High Debt Slows Growth
Miles Kimball and Yichuan Wang


Leaving aside monetary policy, the textbook Keynesian remedy for recession is to increase government spending or cut taxes. The obvious problem with that is that higher government spending and lower taxes tend to put the government deeper in debt. So the announcement on April 15, 2013 by University of Massachusetts at Amherst economists Thomas Herndon, Michael Ash and Robert Pollin that Carmen Reinhart and Ken Rogoff had made a mistake in their analysis claiming that debt leads to lower economic growth has been big news. Remarkably for a story so wonkish, the tale of Reinhart and Rogoff’s errors even made it onto the Colbert Report. Six weeks later, discussions of Herndon, Ash and Pollin’s challenge to Reinhart and Rogoff continue in earnest in the economics blogosphere, in the Wall Street Journal, and in the New York Times.

In defending the main conclusions of their work, while conceding some errors, Reinhart and Rogoff point out that even after the errors are corrected, there is a substantial negative correlation between debt levels and economic growth. That is a fair description of what Herndon, Ash and Pollin find, as discussed in an earlier Quartz column, “An Economist’s Mea Culpa: I relied on Reinhardt and Rogoff.” But, as mentioned there, and as Reinhart and Rogoff point out in their response to Herndon, Ash and Pollin, there is a key remaining issue of what causes what. It is well known among economists that low growth leads to extra debt because tax revenues go down and spending goes up in a recession. But does debt also cause low growth in a vicious cycle? That is the question.

We wanted to see for ourselves what Reinhart and Rogoff’s data could say about whether high national debt seems to cause low growth. In particular, we wanted to separate the effect of low growth in causing higher debt from any effect of higher debt in causing low growth. There is no way to do this perfectly. But we wanted to make the attempt. We had one key difference in our approach from many of the other analyses of Reinhart and Rogoff’s data: we decided to focus only on long-run effects. This is a way to avoid getting confused by the effects of business cycles such as the Great Recession that we are still recovering from. But one limitation of focusing on long-run effects is that it might leave out one of the more obvious problems with debt: the bond markets might at any time refuse to continue lending except at punitively high interest rates, causing debt crises like those faced by Greece, Ireland, and Cyprus, and to a lesser degree Spain and Italy. So far, debt crises like this have been rare for countries that have borrowed in their own currency, but are a serious danger for countries that borrow in a foreign currency or share a currency with many other countries in the euro zone.

Here is what we did to focus on long-run effects: to avoid being confused by business-cycle effects, we looked at the relationship between national debt and growth in the period of time from five to 10 years later. In their paper “Debt Overhangs, Past and Present,” Carmen Reinhart and Ken Rogoff, along with Vincent Reinhart, emphasize that most episodes of high national debt last a long time. That means that if high debt really causes low growth in a slow, corrosive way, we should be able to see high debt now associated with low growth far into the future for the simple reason that high debt now tends to be associated with high debt for quite some time into the future.

Here is the bottom line. Based on economic theory, it would be surprising indeed if high levels of national debt didn’t have at least some slow, corrosive negative effect on economic growth. And we still worry about the effects of debt. But the two of us could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth.

The graphs at the top show our first take at analyzing the Reinhart and Rogoff data. This first take seemed to indicate a large effect of low economic growth in the past in raising debt combined with a smaller, but still very important effect of high debt in lowering later economic growth. On the right panel of the graph above, you can see the strong downward slope that indicates a strong correlation between low growth rates in the period from ten years ago to five years ago and more debt, suggesting that low growth in the past causes high debt. On the left panel of the graph above, you can see the mild downward slope that indicates a weaker correlation between debt and lower growth in the period from five years later to ten years later, suggesting that debt might have some negative effect on growth in the long run. In order to avoid overstating the amount of data available, these graphs have only one dot for each five-year period in the data set. If our further analysis had confirmed these results, we were prepared to argue that the evidence suggested a serious worry about the effects of debt on growth. But the story the graphs above seem to tell dissolves on closer examination.

Given the strong effect past low growth seemed to have on debt, we felt that we needed to take into account the effect of past economic growth rates on debt more carefully when trying to tease out the effects in the other direction, of debt on later growth. Economists often use a technique called multiple regression analysis (or “ordinary least squares”) to take into account the effect of one thing when looking at the effect of something else. Here we are doing something that is quite close both in spirit and the numbers it generates for our analysis, but allows us to use graphs to show what is going on a little better.

The effects of low economic growth in the past may not all come from business cycle effects. It is possible that there are political effects as well, in which a slowly growing pie to be divided makes it harder for different political factions to agree, resulting in deficits. Low growth in the past may also be a sign that a government is incompetent or dysfunctional in some other way that also causes high debt. So the way we took into account the effects of economic growth in the past on debt—and the effects on debt of the level of government competence that past growth may signify—was to look at what level of debt could be predicted by knowing the rates of economic growth from the past year, and in the three-year periods from 10 to 7 years ago, 7 to 4 years ago, and 4 to 1 year ago. The graph below, labeled “Prediction of Debt Based on Past Growth,” shows that knowing these various economic growth rates over the past 10 years helps a lot in predicting how high the ratio of national debt to GDP will be on a year-by-year basis. (Doing things on a year-by-year basis gives the best prediction, but means the graph has five times as many dots as the other scatter plots.) The “Prediction of Debt Based on Past Growth” graph shows that some countries, at some times, have debt above what one would expect based on past growth and some countries have debt below what one would expect based on past growth. If higher debt causes lower growth, then national debt beyond what could be predicted by past economic growth should be bad for future growth.
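To make this prediction step concrete, here is a minimal R sketch of one way to implement it. The data frame and column names (rr, debt_gdp, g_past1, g_10_7, g_7_4, g_4_1, and year) are hypothetical stand-ins for a country-year panel in the style of the Reinhart and Rogoff data, not the names from our actual code.

# Hypothetical panel `rr`: one row per country-year, with the debt-to-GDP
# ratio and average real growth over the past windows described above.
# Column names here are illustrative only.
debt_fit <- lm(debt_gdp ~ g_past1 + g_10_7 + g_7_4 + g_4_1,
               data = rr, na.action = na.exclude)

# Fitted values correspond to the "Prediction of Debt Based on Past Growth";
# residuals are debt beyond what past growth would predict ("excess debt").
rr$predicted_debt <- fitted(debt_fit)
rr$excess_debt    <- residuals(debt_fit)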



Our next graph below, labeled “Relationship Between Future Growth and Excess Debt to GDP,” shows the relationship between a debt-to-GDP ratio beyond what would be predicted by past growth and economic growth 5 to 10 years later. Here there is no downward slope at all. In fact there is a small upward slope. This was surprising enough that we asked others we knew to see what they found when trying our basic approach. They bear no responsibility for our interpretation of the analysis here, but Owen Zidar, an economics graduate student at the University of California, Berkeley, and Daniel Weagley, a graduate student in finance at the University of Michigan, were generous enough to analyze the data from our angle to help alert us if they found we were dramatically off course and to suggest various ways to handle details. (In addition, Yu She, a student in the master’s program in applied economics at the University of Michigan, proofread our computer code.) We have no doubt that someone could use a slightly different data set or tweak the analysis enough to make the small upward slope into a small downward slope. But the fact that we got a small upward slope so easily (on our first try with this approach of controlling for past growth more carefully) means that there is no robust evidence in the Reinhart and Rogoff data set for a negative long-run effect of debt on future growth once the effects of past growth on debt are taken into account. (We still get an upward slope when we do things on a year-by-year basis instead of looking at non-overlapping five-year growth periods.)
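Continuing the sketch above, and again using hypothetical column names (here adding g_future_5_10 for average growth from 5 to 10 years later), the second step would look something like this:

# Relate growth 5 to 10 years later to debt beyond what past growth predicts.
# A downward slope would suggest a corrosive long-run effect of debt on growth;
# inspect the sign and size of the coefficient on excess_debt.
future_fit <- lm(g_future_5_10 ~ excess_debt, data = rr)
summary(future_fit)

# Scatter plot with a fitted line, in the spirit of the graph described above.
library(ggplot2)
ggplot(rr, aes(x = excess_debt, y = g_future_5_10)) +
  geom_point() +
  geom_smooth(method = "lm") +
  labs(x = "Debt to GDP beyond prediction from past growth",
       y = "Average growth, 5 to 10 years later")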



Daniel Weagley raised a very interesting issue: the very slight upward slope shown in the “Relationship Between Future Growth and Excess Debt to GDP” graph is composed of two different kinds of evidence. Times when countries in the data set, on average, have higher debt than would be predicted tend to be associated with higher growth in the period from five to 10 years later. But at any given time, countries whose debt is unexpectedly high not only compared to their own past growth, but also compared to the unexpected debt of other countries at that time, do indeed tend to have lower growth five to 10 years later. This is only speculation, but it is what one might expect if the main mechanism for long-run effects of debt on growth is more of the short-run effect we mentioned above: the danger that the “bond market vigilantes” will start demanding high interest rates. It is hard for the bond market vigilantes to take their money out of all government bonds everywhere in the world, so having debt that looks high compared to other countries at any given time might be what matters most.
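One way to see the two kinds of evidence separately, still in the hypothetical notation above, is to split excess debt into the average across countries in each year and each country’s deviation from that year’s average, and let each component have its own slope:

# Decompose excess debt into a world-average-by-year component and a
# within-year, country-relative component.
library(dplyr)

rr <- rr %>%
  group_by(year) %>%
  mutate(excess_debt_year_avg = mean(excess_debt, na.rm = TRUE),
         excess_debt_within   = excess_debt - excess_debt_year_avg) %>%
  ungroup()

# The coefficient on the year-average component reflects the evidence from
# world-wide trends over time; the coefficient on the within-year component
# reflects the cross-country comparison at a given time.
decomp_fit <- lm(g_future_5_10 ~ excess_debt_year_avg + excess_debt_within,
                 data = rr)
summary(decomp_fit)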



Our view is that evidence from trends in the average level of debt around the world over time is just as instructive as the cross-national evidence from debt in one country being higher than in other countries at a given time. Our last graph (just above) shows what the evidence from trends in average levels over time looks like. High debt levels in the late 1940s and the 1950s were followed five to 10 years later by relatively high growth. Low debt levels in the 1960s and 1970s were followed five to 10 years later by relatively low growth. High debt levels in the 1980s and 1990s were followed five to 10 years later by relatively high growth. If anyone can come up with a good argument for why this evidence from trends in the average levels over time should be dismissed, then only the cross-national evidence about debt in one country compared to another would remain, which by itself makes debt look bad for growth. But we argue that there is not enough justification to say that special occurrences each year make the evidence from trends in the average levels over time worthless. (Technically, we don’t think it is appropriate to use “year fixed effects” to soak up and throw away the evidence from those trends over time in the average level of debt around the world.)
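For readers who want to see what the “year fixed effects” choice amounts to in code, here is a minimal sketch in the same hypothetical notation. Adding a dummy for each year absorbs all of the variation in the world-average level of debt over time, leaving only the cross-country comparison within each year.

# With year dummies, the slope on excess_debt is identified only from
# cross-country differences within each year; the time-series evidence
# from world-average debt levels is soaked up by the dummies.
fe_fit <- lm(g_future_5_10 ~ excess_debt + factor(year), data = rr)
summary(fe_fit)$coefficients["excess_debt", ]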

We don’t want anyone to take away the message that high levels of national debt are a matter of no concern. As discussed in “Why Austerity Budgets Won’t Save Your Economy,” the big problem with debt is that the only ways to avoid paying it back or paying interest on it forever are national bankruptcy or hyper-inflation. And unless the borrowed money is spent in ways that foster economic growth in a big way, paying it back or paying interest on it forever will mean future pain in the form of higher taxes or lower spending.

There is very little evidence that spending borrowed money on conventional Keynesian stimulus—spent in the ways dictated by what has become normal politics in the US, Europe and Japan—(or the kinds of tax cuts typically proposed) can stimulate the economy enough to avoid having to raise taxes or cut spending in the future to pay the debt back. There are three main ways to use debt to increase growth enough to avoid having to raise taxes or cut spending later:

1. Spending on national investments that have a very high return, such as scientific research or fixing roads and bridges that have been sorely neglected.

2. Using government support to catalyze private borrowing by firms and households, such as government support for student loans, and temporary investment tax credits or Federal Lines of Credit to households used as a stimulus measure.

3. Issuing debt to create a sovereign wealth fund—that is, putting the money into the corporate stock and bond markets instead of spending it, as discussed in “Why the US needs its own sovereign wealth fund.” For anyone who thinks government debt is important as a form of collateral for private firms (see “How a US Sovereign Wealth Fund Can Alleviate a Scarcity of Safe Assets”), this is the way to get those benefits of debt, while earning more interest and dividends for taxpayers than the extra debt costs. And a sovereign wealth fund (like breaking through the zero lower bound with electronic money) makes the tilt of governments toward short-term financing caused by current quantitative easing policies unnecessary.

But even if debt is used in ways that do require higher taxes or lower spending in the future, it may sometimes be worth it. If a country has its own currency, and borrows using appropriate long-term debt (so it only has to refinance a small fraction of the debt each year) the danger from bond market vigilantes can be kept to a minimum. And other than the danger from bond market vigilantes, we find no persuasive evidence from Reinhart and Rogoff’s data set to worry about anything but the higher future taxes or lower future spending needed to pay for that long-term debt. We look forward to further evidence and further thinking on the effects of debt. But our bottom line from this analysis, and the thinking we have been able to articulate above, is this: Done carefully, debt is not damning. Debt is just debt.

Monday, July 8, 2013

Where did the Reach for Yield Go?

Friday’s strong data caused bond yields to spike. This has caused some consternation among economic commentators, and Paul Krugman in particular has argued that the rise in interest rates will have severe economic impacts. While I want to touch on these issues, I will approach them from a different debate -- the one over the "reach for yield". My thesis? The recent rebalancing in the stock market shows that the reach for yield was overstated, and that, as a result, we can conclude the taper will not have a severe negative effect on growth.

Let’s start by refreshing our memory of “reaching for yield”. In his February speech, Jeremy Stein argued that because many institutional investors need to meet nominal return requirements, these investors were reaching into riskier assets. Even though these assets may not offer high expected returns, their variance profiles offer better chances of hitting the nominal requirement. This game of distributions is illustrated below. Even though the safe red (i.e. low variance) asset has a higher expected return, the risky blue (high variance) asset has a better chance of getting the fund manager over the critical red required return line. As a result, a market wide reach for yield may result in a mispricing of risk, jeopardizing financial stability.
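A toy calculation makes the point. In the minimal R sketch below (with made-up numbers, not drawn from Stein's speech), a low-variance asset with a higher expected return is still far less likely to clear a 7% nominal return requirement than a high-variance asset with a lower expected return.

# Illustrative numbers only: probability of clearing a nominal return
# requirement for a safe (low-variance) versus risky (high-variance) asset,
# assuming normally distributed returns.
required <- 0.07                        # nominal return requirement

safe_mean  <- 0.05; safe_sd  <- 0.01    # safe asset: higher mean, low variance
risky_mean <- 0.04; risky_sd <- 0.10    # risky asset: lower mean, high variance

p_safe  <- 1 - pnorm(required, mean = safe_mean,  sd = safe_sd)
p_risky <- 1 - pnorm(required, mean = risky_mean, sd = risky_sd)

c(p_safe = p_safe, p_risky = p_risky)
# Roughly 0.02 for the safe asset versus roughly 0.38 for the risky one:
# the riskier asset "reaches" the requirement far more often despite its lower mean.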


These arguments have been echoed by many other commentators. Here’s Martin Feldstein in the WSJ using the reach for yield as an argument to taper:
Although the economy is weak, experience shows that further bond-buying will have little effect on economic growth and employment. Meanwhile, low interest rates are generating excessive risk-taking by banks and other financial investors. These risks could have serious adverse effects on bank capital and the value of pension funds. In Fed Chairman Ben Bernanke's terms, the efficacy of quantitative easing is low and the costs and risks are substantial.
And here’s Rajan in a speech at the Bank for International Settlements:
If effective, the combination of the "low for long" policy for short term policy rates coupled with quantitative easing tends to depress yields across the yield curve for fixed income securities. Fixed income investors with minimum nominal return needs then migrate to riskier instruments such as junk bonds, emerging market bonds, or commodity ETFs, with some of the capital outflow coming back into government securities via foreign central banks accumulating reserves. Other investors migrate to stocks. To some extent, this reach for yield is precisely one of the intended consequences of unconventional monetary policy. The hope is that as the price of risk is reduced, corporations faced with a lower cost of capital will have greater incentive to make real investments, thereby creating jobs and enhancing growth.
Indeed, Bernanke felt it was necessary to address these financial stability concerns in his February and May testimonies. He argued that even if low rates encourage a reach for yield, the only way to get sustainably higher rates in the long run is to keep rates low now. In Kocherlakota's metaphor, you need to keep the coat on until you are warm enough to take it off.

Bernanke can rest easy. Financial data since his testimonies have further strengthened the case against a reach for yield. To see why, it is important to remember two stylized facts. First, "reach for yield" is a story about financial stability: because people are going into riskier assets, risk ends up systematically underpriced. Second, it’s a story about increasing risk appetites: excessively low interest rates trigger a flight *from* quality as fund managers look to hit their nominal return requirements.

But recent moves in equity prices contradict this story. The WSJ observes that defensive sectors are underperforming.
He said he still favors stocks over bonds, and has avoided "bond proxies" such as utilities, real-estate investment trusts, and other sectors with high dividend payouts.
Those areas are "really expensive, and they have little to no earnings growth. They have benefited hugely from easing," he said. 
Those traditionally defensive sectors dragged on benchmarks. The sole decliners in late trading were the utilities and consumer-staples sectors, which lost 0.9% and 0.3%, respectively. Those areas were among the biggest gainers in the beginning of the year, when yields on Treasury bonds remained low.
Whereas cyclicals are responding very well:
Given the cross currents in the market, including the Fed's commitment to keeping overnight rates low at least until the unemployment rate falls through 6.5%, investors wouldn't want to overinterpret what's happened to the yield curve. But the stock market told a similar story of stronger growth expectations Friday. Shares of economically sensitive companies, like banks, retailers and manufacturers, rallied, while defensive areas, like utility and telecom shares, did poorly.
In other words, the recent taper has caused people to pivot out of safe sectors into riskier ones -- the opposite of what a reach for yield story would suggest. In fact, there appears to have been a flight *to* quality that is only recently being reversed. These movements are also consistent with recent data showing that the equity risk premium, a measure of the expected return on stocks relative to bonds, is at extremely elevated levels. With the taper we should expect this premium to fall as investors naturally increase their risk appetites.

Now, some may argue that there was a reach for yield in the fixed income market that is now being unwound. Indeed, mortgage rates and junk bond yields are rising:
Rates on a 30-year mortgage have climbed from 3.45% in April to more than 4% in June, according to Freddie Mac. The 30-day average yield on new bonds sold by companies with "junk" credit ratings hit 7.72% in June, up from 5.79% in April, according to S&P Capital IQ LCD.
The risk premium on high yield bonds has also risen slightly. But there are two reasons why we should discount this observation.


First, the bond spreads will have a minimal effect on financial stability. The concern shouldn't be whether individual funds will suffer, but rather whether there has been a massive mispricing of risk. But if risk were underpriced in the debt market, then equities should also have been overpriced, to match the artificially low cost of capital. Since there did not appear to be a reach for yield in equities, any reach for yield in fixed income should also have negligible effects.

Second, the high yield spread is still not outside of its historical range. Even in the 1990s, when the Fed was never criticized for promoting a reach for yield, the spread was still very low. Therefore we should be skeptical of arguments that there was a massive mispricing in the corporate debt market to begin with.

This perspective from the reach for yield debate leads to two insights.

First, Fed policy has not been distorting financial markets. If anything, people have been too conservative on equities. Monetary policy, by encouraging risk taking, has been doing the right thing to help reboot the market. Even if you don’t want firms reaching for yield, they should at least be encouraged to stretch.

Second, it’s not clear that the taper will be all that “terrible” for equities. Of course, the human cost of tight monetary policy is enormous. I personally believe that the Fed should not taper in the face of such elevated levels of unemployment and depressed levels of nominal GDP. Nonetheless, the taper is likely to have only moderate impacts on the stock market, in spite of what short-term correlations may suggest.

The greatest irony is that only after the Fed tightens do we realize that the Fed didn’t need to tighten at all. But now that it has, it doesn’t look like the financial impacts will be that large after all.