Monday, April 30, 2012

Globalization's Second Rebundling

When Old Trade Theory turned New, New Trade Theory turned Old, and Future Trade Theory may turn New again.



What will the production technology of the future look like?  A recent Economist essay on 3D printing hypothesizes a third industrial revolution, in which small parts could be “printed” on an individualized basis.  This would be a departure from the economies of scale that characterize capital intensive industries, as one would no longer need to submit orders for thousands of a specialized part; one prototype would get the job done.

But how will this transformation affect patterns of international trade?  Before arriving at a conclusive claim, it’s important to get some perspective on the development of trade theory over the past few decades.  Old Trade, circa Ricardo and Heckscher-Ohlin-Samuelson, was founded upon comparative advantage and natural factor endowments.  Countries scarce in labor would import labor-intensive products, whereas countries scarce in capital would import capital-intensive products.  This theory then failed to explain the fact that many capital-rich countries tended to import capital-intensive products!  In a large scale test, HO predicted the direction of trade across multiple industries with 49.8% accuracy – worse than a coin flip.

New Trade arose to explain these empirical irregularities by incorporating monopolistic competition, internal scale economies, and transportation costs.  Now that firms were monopolistically competitive, intra-industry trade began to play a much larger role.  Because the various firms in the capital-intensive core could trade more cheaply with other core firms for intermediate inputs, firms would agglomerate in the core.  This dynamic would also create home-market effects, in which an increase in demand for an industry's goods in the core creates a more than proportionate increase in the production of those goods.  Part of the new production goes to feeding domestic industries, while the rest goes to export as a result of a more competitive industry.
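The home-market effect can be boiled down to a toy reduced form (my own stylized sketch, not the full Krugman model): a region's share of world production responds more than one-for-one to its share of world demand.  The `magnification` parameter below is an arbitrary illustrative value; only the fact that it exceeds one matters.

```python
def production_share(demand_share, magnification=1.5):
    """Reduced-form home-market effect: the production share moves more
    than one-for-one with the demand share around the symmetric point.
    magnification > 1 is the defining feature; 1.5 is illustrative."""
    share = 0.5 + magnification * (demand_share - 0.5)
    return min(1.0, max(0.0, share))  # shares stay in [0, 1]

# A core region holding 60% of world demand ends up with 65% of
# world production -- a more than proportionate response.
print(round(production_share(0.60), 2))  # -> 0.65
```

In a full model the magnification emerges from scale economies and trade costs rather than being assumed, but the reduced form captures the empirical prediction that made the theory testable.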

However, recent innovations in transportation and communications technologies have reversed this trend.  In the original model, transportation costs were a key reason why intra-industry trade concentrated in the core.  Without transportation costs, the firms in the core could just as well trade with the firms in the periphery for intermediate goods; there would be no home-market effect.  This reversal of the predictions of New Trade became increasingly apparent as new data rolled in.  Even in his Nobel Prize acceptance speech for his work in trade theory, Krugman noted that New Trade theory was getting old.  One of the canonical examples of New Trade was the development of auto industries.  Within the United States, home-market effects tended to concentrate auto production in the Midwest (Mitten State represent!).  A Chicago Fed map of auto parts suppliers clearly illustrates this phenomenon:



Yet, in more recent times, a substantial amount of auto production shifted to the South, where labor was cheaper.  In effect, scale economies were overwhelmed by factor endowment advantages; New Trade theory was becoming old.

Richard Baldwin has called this trend, on the international level, "globalization's second unbundling."  The reduction of transportation and communication costs allows supply chains to be spread out over a larger region.  Intermediate goods don't all need to be produced by one country.  Rather, they can be produced by smaller firms distributed over a larger region.  While Japan might be the home of Honda, Thailand does much of the assembly, and has prospered much more as part of a new "factory Asia."  Malaysia, on the other hand, tried the old method of building the entire supply chain domestically, and is doing substantially worse.  This new regionalism allows for a greater diffusion of industry and, as a result, is more sensitive to initial factor endowments, much like Old Trade.

It is in this light that we can analyze the prospects of 3D printing for transforming the face of international trade.  What happens will depend crucially on whether 3D printing is generalized or specialized.  If 3D printing uses very general materials, then one firm could create key goods for a wide variety of industries.  There would not be an auto parts supplier and a tablet chip supplier; there would just be one 3D printing factory that could produce all of the small components as needed.  The versatility of 3D printing would mean less monopolistic competition, as firms would not be as differentiated.  Taking away this key source of internal economies of scale may accelerate the second unbundling, as there would be fewer reasons for firms to agglomerate in one region.

On the other hand, if 3D printing is highly specialized, then new firms would quickly proliferate to fill the various market niches.  Monopolistic competition would actually heighten, creating more incentives for industries to agglomerate.  Along with increases in oil and transportation costs, shipping from distant factories may make it increasingly difficult for foreign suppliers to provide the specialized components that 3D printing would demand.  3D printing would thus support a high level of intra-industry trade.  Especially if the parts become highly specialized, it's not unreasonable to imagine that certain 3D printing firms may create the critical components for other 3D printers to continue their operations.  The result is that we may return to the world of New Trade Theory, in which initial geographic distributions can have a large effect on the movement and agglomeration of firms.  And as the original Economist article argues, this may bring companies back to rich countries as firms try to capitalize on the larger markets there.

With this possible transformation on the horizon, trade policy becomes increasingly important as industrial organization now may become even more fixed in the future.  Yet the fixed nature of trade may be an even stronger argument for higher labor mobility.  If factor mobility and free trade are substitutes, global welfare will be best served by allowing the citizens of poor countries to move to rich countries.  If the firms don’t move, the people will.

Increasing agglomeration of firms will also have an effect on currency unions and exchange rate pegging.  This was a key issue in the debate for the Eurozone:
As the EU moves towards a monetary union, it can be expected that the geographic concentration of industries will increase further, in line with developments so far, and paralleling United States experience (Krugman, 1991b). This, in turn, would raise the likelihood of asymmetric shocks affecting EMU member countries, thereby raising difficulties of adjustment in the absence of nominal exchange-rate instruments.
If countries become increasingly heterogeneous and labor mobility is still limited by de facto issues such as language and culture, national sovereignty over monetary policy will be vital for each country’s stability.

With the technological revolution of 3D printing, New Trade Theory may become relevant again.  Globalization unbundled itself with the initial decrease in transportation costs that facilitated international trade.  It then rebundled itself as monopolistic competition and intra-industry trade caused firms to agglomerate in a global core, away from the periphery.  In recent times, the world has unbundled again with decreases in coordination costs allowing more regional supply chains.  The future may hold a second rebundling, in which 3D printing causes firms to return to countries and retreat from deep global integration.  This will have massive implications for globalization and trade policy as we watch these trends unfold in the coming decades.


Saturday, April 28, 2012

Links and Minor Thoughts

Chinese politics is complicated.  Especially as the Bo Xilai political turmoil gets more complicated, it's probably important to have somewhat of a grasp on what's going on.

What are the effects of cities on the environment, and how do new trends in manufacturing shape this?  With the past reduction in manufacturing, and now the present revival, it's likely that factories can be built in centers to improve environmental efficiency.

Tepid PMI production news from China.  The tail risk is still worrying me because I suspect that neither the actual probability nor the impact are truly calculable.

Could decentralized solar power send shockwaves through developing countries?  Recent innovations in SMS payment have led a revolution in banking; could it transfer to small scale utilities as well?
Decentralized solar, sweet...

If society is to be based on stochastic tinkering, we need patent reform.  Current patent law makes it impossible for small scale inventors to truly defend their creations.

Yves Smith points out another serious risk for banks: interest rate risk.  Especially if we decide to weight monetary policy more towards a NGDP target, there's a serious possibility for higher interest rate volatility which can wreak havoc on these banks with such long maturity bonds.

I remember perusing Peter Diamond's paper on the Beveridge curve after he won the Nobel Prize. I never truly grasped the significance of it, so I found Andy Harless' mini-analysis on the Beveridge curve quite interesting.

High multipliers in times of low interest rates: DSGE edition.  No surprise here, although it would likely change if monetary policy would pull more weight.  This, along with several robust pieces of evidence that austerity fails (private debt much?), all are bad news for the Eurozone.

A reminder for humility: how accurate are the national accounts anyways?  Considering precision can't be that high, don't take each decimal point that seriously.

Friday, April 27, 2012

Escaping from the Golden Fetters

An intermediate to currency disunion

The prospects for the Eurozone look bleak.  CDS spreads are going up, large countries have had their debt downgraded, politics is turning against fiscal consolidation, and the central bank still feels that trying to save the Euro would jeopardize (what's left of) its credibility.  This has led to widespread pessimism about the long-run sustainability of the Euro, including from my fellow soon-to-be-undergraduate blogger Evan.  But we must remember that "The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again."  Our goal for the endgame of the European economy is irrelevant if sovereign debt crises interrupt our theorizing.  Especially given the precarious state of several other key economic powerhouses, such as India and China, adding a large-scale currency adjustment hardly seems like a credible option.  The sheer physical demands of such a transition would make an orderly exit nigh impossible.

Thus, even if all the nations are bound by the "golden fetters" of the Euro, simply abandoning the fetters is not a sufficient answer.  To begin this walk away from the abyss, there needs to be more room for national policy in a continent that is otherwise deeply integrated.  The apparent struggles of the Eurozone are a particularly interesting case of what Dani Rodrik calls for in a path to a saner globalization.  We cannot let the perfect become the enemy of the good, and thereby let a potentially destabilizing deep integration get in the way of the many beneficial forms of shallower integration.  Nations are different; why should their laws be the same?  In the words of Rodrik:
We have to think of these differences not as aberrations from the norm of international harmonization, but as the natural consequences of varying national circumstances.  In a world where national interests, perceived or real, differ, the desire to coordinate regulations can do more harm than good.  Even when successful, it produces either weak agreements based on the lowest common denominator or tougher standards that may not be appropriate to all.  It is far better to recognize these differences than to presume that they can be papered over given sufficient time, negotiation, and political pressure (The Globalization Paradox, 262).
Too often, attempts at harmonization result in policies that are "one size fits none."  Interest rates or macroprudential requirements may be too high in one polity, too low in another, thereby aggravating the procyclical tendencies of both.  Stricter fiscal pacts and banking unification won't help; if anything, they would likely worsen the situation.

So if the Euro isn't going away in the short run, then there needs to be a way to introduce frictions to give national governments "policy space" to adjust.  This is where national macroprudential policies and potentially capital controls come into play.  Even if Eurobonds and enhanced lender of last resort capabilities are necessary, they need to be paired with policies that can ensure that the question of liquidity does not evolve into a question of later solvency.  Without national policy space, differing unit costs of capital and labor can evolve into serious financial issues, replicating the current European debt crisis.

This kind of financial segmentation would be even more appropriate given the dangerous role of private capital flows in precipitating the current crisis.  Private capital, freed from exchange rate risk and in search of higher yields, flowed from core banks to the periphery.  However, due to the poor institutional underpinnings of the periphery, there was no proper way to organize that capital (interfluidity).  Come the time of adjustment, the common currency prevented any kind of external devaluation that otherwise might have blunted the highly procyclical capital flows.  Since the currency cannot adjust to the capital flows, the governments may have to take a stronger role in ensuring that destructive capital flows don't distort national economies.

Eventually, when things have stabilized, the Euro can be steadily phased out.  But with the current uncertainty and chaos, that option hardly seems viable.

Update 4/29/12:


A new article from The Economist discusses the prospects for financial integration:
Breaking that interrelationship requires a number of things, Lord Turner argues. He would like to see Eurobonds that can, among other things, act as a risk-free asset that liberates banks from the “wrong-way risk” of holding their own sovereign’s debt; and he argues, too, for a pan-euro-zone approach to bank resolution, deposit insurance and supervision. National authorities should, he thinks, have responsibility for pulling “macroprudential” levers designed to prick booms before they get out of hand.
A much more integrated euro-zone banking system is a logical response to the euro crisis, but boy will it be difficult. Just imagine the implications. A big European supervisory authority that excludes Britain, the continent’s biggest financial centre; a system that would see taxpayers in creditor countries backing the banks of debtor countries; a process that could end up with supervisors in Frankfurt telling the Spanish, say, they cannot have more credit. Thorny stuff, but still better than the direction in which the euro zone is now travelling.
But why are we trying to force all European countries into the same financial straitjacket if they're, quite obviously, Not The Same?

Thursday, April 26, 2012

Systemic Bank Runs? Why Pure Regulatory Oversight Isn't the Answer

A Systemic Crisis, a Bank Run, or a Way to Regulate Both?






Modern finance is tightly coupled, and therefore prone to complex, unpredictable Black Swan events.  This is not in dispute.  But what should we do about it?  From the introduction of a new Fed paper on systemic institutions:
There are at least two contesting views on the causes of systemic risk. In the so-called micro-prudential view, the systemic risk arises from contagion of the failure of a financial institution to the rest of the financial sector whereas in the so-called macro-prudential view, the systemic risk arises from the collective failure of many financial institutions because of their common risk factor exposures.
The microprudential approach is basically that of Dodd-Frank and its focus on "systemically important institutions", whereas the macroprudential approach looks at assets and overall market trends.  Before looking more closely at this discussion, it is important to give a historical snapshot of the financial crisis, namely the run on the money market funds that followed the Lehman bankruptcy:
After the Lehman Brothers’ bankruptcy filed on September 15, 2008, its outstanding debt collapsed in price almost immediately. Since one of the largest money market mutual funds (MMMFs), the Reserve Primary Fund, was highly exposed to Lehman Brothers' collapsing short-term debt, the next day its net asset value (NAV) fell below par. Since MMMFs offer stable NAV and investors can redeem anytime at par, an immediate run on the Reserve Primary Fund occurred, causing it to shut down. This failure opened up the possibility that other MMMFs were similarly exposed and a run on the MMMFs started. Since MMMFs are a primary source for the commercial paper market, this run opened up the possibility of capital shortfalls at many financial institutions that needed to roll over commercial paper. Only after the government guaranteed the MMMF deposits 100% the run came to a halt and the slide was stopped.
How should one interpret this history?  Did the crisis occur because Reserve was a systemically important institution, or was it because Reserve collapsed due to a systemically important asset?

Based on this description, Wallison and Cochrane find that the focus really should be put on the assets, not the firms.  For them, the problem was that too many firms were exposed to the same assets; whether the firms themselves were coupled didn't play that much of a role.  Wallison and Cochrane also make an argument against the top-down nature of the current regulatory system: how does a regulator decide what qualifies as a "systemically important firm"?  Since there's no firm guideline, the clauses about systemically important firms in Dodd-Frank could easily result in overregulation and a mentality of "regulate everything that moves."

Interestingly, a recent Federal Reserve paper discussed by Yves Smith articulated a similar criticism of Dodd-Frank.  According to the paper, Dodd-Frank did not take a strong enough position on repo agreements that carried a significant amount of counterparty risk.  These instruments create very tight connections between firms, as a failure of one firm to get overnight funding for its liabilities could cause insolvency and wipe out the assets of the next firm.  As a result, financial shocks were amplified and propagated through the market.  The authors' answer to this problem is to change the paradigm of regulation to a macroprudential standpoint, as outlined in the paper:
What we propose is that instead of attempting regulation of systemically important financial institutions (a “top down” approach to regulating systemically important assets and liabilities), prudential regulation be built in the form of a “bottom up” approach – one that works at the level of the SIALs rather than at the level of the SIFI that owns them. Under the “bottom-up” approach there would be an automatic stabilizer built for each SIAL. The automatic stabilizers could be in the form of government-provided but appropriately-charged deposit insurance, centrally cleared SIALs with initial and variation margins or haircuts charged by a clearinghouse or dedicated resolution authority for those SIALs, and in extreme cases, lender-of-last-resort from the central bank against eligible assets (but to avoid moral hazard, only to firms that pay a market-rate fee).  This way, when an SIFI fails, it is not the orderly resolution of an individual SIFI that has to be effected, but rather the resolution of its various SIALs; the parts of its capital structure that are not systemically important would be resolved by market-determined contracts and relevant bankruptcy procedures.  A particularly attractive feature of the “bottom up” approach is that it requires no uniform institution-level insolvency process and therefore might be the simplest way of achieving international agreement on resolving the financial distress of G-SIFIs (as long as there is global agreement on resolution mechanisms for SIALs).
Although Dodd-Frank might focus on systemically important institutions, the proposed regulatory framework works on assets and liabilities, much like the Wallison and Cochrane view of the crisis.  This recent Federal Reserve paper finds that firms' importance is primarily determined by the extent to which they hold systemically important assets and liabilities.  If this is the case, then it is those instruments that need to be regulated.  The authors go on to provide a very long list of possible ways to ensure their stability.  Via Yves:

The authors argue for reducing risk in each of these areas via a variety of mechanisms: central clearing with adequate margining, deposit insurance, resolution mechanisms targeted to each asset type. The advantage is that by addressing only the systemically dangerous parts of financial firm balance sheets, the other pieces can be handled through existing legal/regulatory mechanisms.

Under this system, large banks would not be penalized simply for being large; rather, they would be at a disadvantage because of their systemically important assets.  Yet these new approaches to regulation could easily proliferate into something more.  How many tools do we need before we have a "variety of mechanisms", and how do we calibrate each tool?

This is where policies to increase financial diversity become incredibly important.  The runs on the MMMFs wouldn't have occurred if the funding sources of all the firms hadn't been the same.  While the Modigliani-Miller theorem tells us funding sources don't matter, a world of uncertain parameters strongly favors diversification for long-term stability.  As I've written before, a possible supplement to individual new authorities is a Pigouvian tax on "following the market".  Capital requirements could differ across banks depending on how strongly they correlate with the market index.  If a given fund had a very strong correlation with the market as a whole, it would be put under stringent requirements that internalize the costs of complexity and financial monoculture.  This is the standard answer to externalities; why should the government opt for firm- and asset-specific command and control policies instead?  It would also help address bounded rationality, as neither the firms nor the regulators would be put in the position of evaluating the effect of every single asset on every other institution in the economy.  When it comes to specific numbers on financial regulations, we don't quite know what we're doing.  If there's a way to simplify the framework, we should pursue it.
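To make the idea concrete, here is a minimal sketch of what such a correlation-based surcharge could look like.  Everything in it is hypothetical: the function, the 8% base requirement, and the 4% maximum surcharge are illustrative numbers of my own, not a calibrated proposal.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def capital_requirement(bank_returns, market_returns,
                        base=0.08, max_surcharge=0.04):
    """Hypothetical Pigouvian rule: banks whose returns track the market
    index pay more capital, internalizing the cost of financial
    monoculture.  Only positive co-movement is penalized; an independent
    or contrarian strategy adds diversity and pays only the base."""
    rho = pearson(bank_returns, market_returns)
    return base + max_surcharge * max(rho, 0.0)

random.seed(0)
market = [random.gauss(0, 1) for _ in range(250)]
herd_bank = [m + 0.1 * random.gauss(0, 1) for m in market]  # follows the index
diverse_bank = [random.gauss(0, 1) for _ in range(250)]     # independent book

print(capital_requirement(herd_bank, market))     # near base + max_surcharge
print(capital_requirement(diverse_bank, market))  # near the base alone
```

The herd bank, nearly perfectly correlated with the index, pays close to the full surcharge; the independent bank pays roughly the base requirement.  The point is the shape of the incentive, not the particular numbers.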

Another powerful advantage of a tax to facilitate diversity is that it would help foster resiliency across borders.  The U.S. could even move forward unilaterally, and the benefits would not even depend on how important the U.S. is to global finance.  If the U.S. is a major player in finance, our move towards a more diverse system would be able to increase the diversity of global banking as a whole.  If the U.S. is not a major player, then forcing our firms to become more diverse also forces them to be more diverse from global finance.  This would limit the effect of systemic crises in other parts of the world on our economy. We may become more segmented, but also more robust.  In every way, diversity can help with the problem of financial fragility.

These ideas on diversity and debt provide a template from which to guide policy.  For truly effective finance, more and more regulatory oversight is not enough; there needs to be a coherent framework to promote institutional diversity while limiting international spillovers.  The problem of a monoculture of systemically important assets needs to be addressed, and macroprudential policy is a key way to accomplish that goal.

Wednesday, April 25, 2012

Debt and Growth: The Chicken or the Egg?

..., and what to put in the European omelet?


How does debt affect growth?  The well-known correlation found by Reinhart and Rogoff in their analyses of financial crises shows that growth tends to slow down substantially as the debt-to-GDP ratio approaches 90%.  Based on this correlation, policymakers began to advocate austerity as a way to reduce debt, and thereby restore growth.

Of course, correlation does not prove causation, and Paul Krugman jumped at that, pointing out that it was quite likely that causation ran the other way.  As a result of anemic growth, countries would pursue countercyclical fiscal policy.  This would create the appearance that low-growth countries pursue higher levels of debt, when the association between the two variables actually arises from textbook countercyclical fiscal policy.  Had the government decided to retrench, the economy would have suffered even more, compounding the growth problem.

So how to resolve these issues?  A recent VoxEU article on the relationship between debt and growth caught my eye, as it proposed a novel mechanism to estimate the effect of debt on growth.  While I don't understand the full mechanics, the working paper seems to use the stock of foreign debt as an instrument for the stock of debt.  At the end of the analysis, the authors conclude that, while R&R find a strong correlation, the data instrumented for endogeneity cannot reject the null hypothesis that debt has no effect on growth.  This has a critical role in determining policy, because it suggests that countries in dire output straits should not ignore the role of fiscal policy.  The debt effects are unlikely to be strong enough to overwhelm any first-order effects from fiscal stimulus.  Especially given the recent interest in hysteresis and reductions in potential output, it seems almost certain that fiscal retrenchment is not the answer.  Along these lines, I also found some older articles, one a study of old U.S. time series data and another on Latin American growth, that both suggest we should be worried about the output gap.  Long-run growth can be seriously affected by temporary deviations, and therefore we need to fill the gap, whether by fiscal or monetary policy.
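As a sketch of why instrumenting matters here, consider a minimal two-stage least squares exercise on synthetic data, in the spirit of the paper's strategy (my own toy construction; the variable names and magnitudes are made up).  The true causal effect of debt on growth is set to zero, yet naive OLS finds a strong negative "effect" purely from reverse causation; the IV estimate recovers roughly zero.

```python
import random

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

random.seed(1)
n = 500
foreign_debt = [random.gauss(0, 1) for _ in range(n)]   # the instrument
shock = [random.gauss(0, 1) for _ in range(n)]          # recessions raise debt...
growth = [-s + random.gauss(0, 0.5) for s in shock]     # ...and lower growth
debt = [f + s + random.gauss(0, 0.5)                    # debt responds to both
        for f, s in zip(foreign_debt, shock)]

naive = ols_slope(debt, growth)  # contaminated by reverse causation

# Stage 1: project debt on the instrument; Stage 2: regress growth on
# the fitted values.  Equivalent here to cov(z, y) / cov(z, x).
stage1 = ols_slope(foreign_debt, debt)
fitted = [stage1 * f for f in foreign_debt]
iv = ols_slope(fitted, growth)

print(round(naive, 2))  # strongly negative despite a zero causal effect
print(round(iv, 2))     # approximately zero
```

The sketch leans on the identifying assumption that the instrument is unrelated to the growth shock, which is exactly the part of the real paper that has to be argued rather than computed.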

Another note in the paper that was particularly interesting was that the authors traced much of the negative correlation between debt and growth to the destructive policies governments would pursue when at high levels of debt.  In the words of the authors:
We believe that there is a subtle channel through which high levels of public debt can have a negative effect on growth. In the presence of multiple equilibria, a fully solvent government with a high level of debt may decide to put in place restrictive fiscal policies aimed at reducing the probability that a change in investors’ sentiments would push the country towards the bad equilibrium. These policies, in turn, may reduce growth (Perotti 2012), especially if implemented during a recession (such policies may even be self-defeating and increase the debt-to-GDP ratio, DeLong and Summers 2012, UNCTAD 2011).3 In this case, it would be true that debt reduces growth, but only because high debt leads to panic and contractionary policies.
This argument piqued my interest for two key reasons.  The first is that it echoes a paper by Bernanke on the effect of supply shocks on the economy.  In the paper, Bernanke argued that a lot of the damage from a supply shock wasn't actually from the shock itself, but rather from the monetary policy response.  In the case of debt, much of the harm from high levels of debt isn't from the debt, but rather from the fiscal policy response.

The second is that this advice on debt and growth seems particularly pertinent for the Eurozone.  This paper provides strong evidence that the austerity cure is anything but.  Growth is needed, but fiscal retrenchment is not the answer.  The Netherlands' recent rejection of harsh cuts and France's electoral shift both show that democracy is having its say and is refusing this cup of self-defeating suffering.  Voters recognize that cutting budgets cannot be the only way forward, and that policy needs to be fundamentally changed to recognize the elaborate chicken-and-egg relationship between debt and fiscal policy.  I only hope that they figure out that omelet before the bond markets take away the chance.

Tuesday, April 24, 2012

This Time is Different

Just a short post with a 150-word essay I wrote for a scholarship.  The prompt was "The Importance of Having Good Credit."


Credit, as a tool, is a promise: for hope today, a return tomorrow.
Credit, as a score, is a reputation: for promises yesterday, your faith today.
At least this was how it is supposed to work, but a rapidly inflating housing bubble made us look the other way.  No Income, No Job or Asset (NINJA) loans were lent, with no regard for reputation or promise.  “This Time is Different”, they told us.  And we, in our fervor, listened.
If only it were different, if only our house prices could, like Dorothy’s house, defy gravity.  Only after the crisis did we realize otherwise.
Never again, we now tell ourselves.  And so the recent crisis reminded us of why good credit matters.  It is a promise which harbors the hope for tomorrow, so that the future can truly be different, and prosperity truly sustained.

Gazelles at the Credit Hole: Small Business JOBS

Is it quite as simple as just "jobs, jobs, jobs"?


The recent Obama JOBS Act was done under the auspices of helping small businesses, notably technology startups, by reducing documentation requirements.  But is it really reasonable to help small businesses this much?  And while on that topic, what are small businesses, and what do they truly do for employment?

The rhetoric on Capitol Hill tends to be that small businesses are absolutely critical to job creation.  This line of argument had its roots in a 1979 empirical study conducted by David Birch that found that small businesses contributed a disproportionate amount of job growth.  He subsequently added to his thesis, talking about the business environment as if it were the savannah.  On one side, you had the small businesses, comprising the mice and the gazelles.  On the other side were the large businesses like WalMart, termed "elephants."  The mice are the low-impact firms, the plumbers, the electricians, and the barber shops, whereas the gazelles are the high-impact firms, the (as of then nonexistent) Apples of the world.  The mice tend to be relatively unproductive, with high turnover, and really have not much reason to grow.  They're just part of the growth in services and hospitality; they aren't the technological game changers that politicians love to idolize.  The gazelles, on the other hand, while only about 5% of the small business crowd, were actually responsible for about 70% of total job growth.

So do these businesses need easier credit, much like that offered by the JOBS Act, to survive?  On a first pass, the answer is no.  In a recent survey of small business owners, only 4% complained of financing difficulties, whereas taxes and low sales accounted for 56% of the complaints.  For mice, who don't grow much anyway, there seems little need for credit.  For the gazelles, credit hardly seems like an issue at all!  Although the recent financial crisis made it more difficult for businesses, especially small ones, to get access to credit, the gazelles among the small businesses were hardly hit; they just kept on growing.  They have been able to sustain high revenue per employee throughout the recession.  These firms were already gazelles, and they had already clearly proven themselves in the most productive fields.  How could a credit crunch truly affect them?

This really goes to show how schizophrenic small business policy is in the United States.  While favorable tax policies toward small businesses may give an initial boost, they can also foster a system of incentives in which firms rationally choose not to grow.  Additionally, the policies we enact to reduce compliance costs for small businesses can have rather dangerous second order effects on hedge funds, which may end up creating new financial crises that devastate businesses.  Why create more market opacity in an attempt to feed businesses what they're not asking for?

And if credit is somehow the actual issue holding back key businesses, wouldn't this just be another argument for better monetary policy?  If there is a guaranteed level of nominal output growth, there is more assurance that the demand for your products will be there.  This would also be less distortionary than the current system in making credit more available.  Nominal stability increases the supply of safe assets, meaning firms would have an easier time getting funds.  Banks would be more willing to lend because the NGDP target effectively guarantees that, in aggregate, these businesses cannot all fail.  The Fed would manage the systemic risk; banks would only have to deal with the more tractable firm level risk.  The more stable sales that naturally arise from an NGDP targeting system would also help these extremely promising firms start out.

The JOBS Act really goes to show how dangerous ostensibly liberalizing legislation can be.  Oftentimes, what we wish to achieve fiscally to solve certain persistent problems would matter far less in a world with stable monetary policy.  Yet even with all these factors hanging in the balance, monetary policy has remained tight.  What a shame this is.

Funny comic on this issue of presidents, legislation, and jobs from Saturday Morning Breakfast Cereal.  I sometimes wonder if this really is how bills are named.


Sunday, April 22, 2012

Terminal Velocity? Or Why Economics Isn't Physics



Recent posts from Evan Soltas on impulse functions and growth inertia have led to some interesting discussions on whether there are consistent correlations that could predict behavior in economic data, much in the way the movement of physical objects can be predicted.

On a first pass, one should be skeptical of this proposition.  Even a weak form EMH would imply that there is no systematic way to predict the movement of GDP on a quarter to quarter basis.  I'm no expert in econometrics, but this core tenet about efficient markets is cause for suspicion.

After looking at the data, it's not clear that there are any meaningful thresholds either.  While Evan focused more on logistic growth models and quantitative ways to model the entry into recession, I tried to look at what kind of growth can be seen before recessions.  I looked at percent NGDP growth from one year ago, percent NGDP growth from one quarter ago, as well as the RGDP equivalents, and tested a variety of traits of those numbers, but found nothing.  Among them, the NGDP growth from one year ago seemed the least persnickety, with smaller standard deviations relative to the mean, so I use it for this post.

I used the NBER recession classification table to classify all the quarters with a dummy variable, with 1 indicating a recession and 0 indicating regular times.  As the interest was in predicting entry into a recession, I focused on the quarters before the start of a recession and ignored the quarters within the recession.  I looked at the growth rates one, two, three, and four quarters before the recession, the average growth over the four quarters before the recession, and many other measures that are listed in the chart below.


The last two rows are the average and standard deviation in that column.  The four quarter/six quarter OLS measures are kind of like a second derivative: I was looking at the rate of change in the "percent change from one year ago" number over the past 4 or 6 quarters.  Nothing.  Even if some of these confidence intervals exclude zero, they're not very helpful, as they would predict a recession in almost any quarter.  Diagnostics aren't very helpful if they keep returning false positives.  Besides, shouldn't these be regime dependent anyway?  It really depends on how far governments are willing to let economies go.  Perhaps there are stronger indications from other parameters, such as private debt or real wages.  But so far, it doesn't look like it's in an economy's growth speed.
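For concreteness, the "OLS slope as second derivative" measure can be sketched in a few lines of Python.  This is a toy illustration with made-up growth numbers, not the actual FRED series behind the chart:

```python
import numpy as np

def rolling_slope(series, window):
    """OLS slope of the last `window` observations against time --
    a rough "second derivative" of a percent-change-from-year-ago series."""
    t = np.arange(window)
    slopes = []
    for i in range(window - 1, len(series)):
        y = np.asarray(series[i - window + 1 : i + 1], dtype=float)
        slopes.append(np.polyfit(t, y, 1)[0])  # coefficient on the time trend
    return slopes

# Toy NGDP percent-change-from-one-year-ago series (illustrative only)
growth = [5.1, 5.3, 4.9, 4.4, 3.8, 3.0, 2.1, 1.0]
print(rolling_slope(growth, 4))  # decelerating growth shows up as negative slopes
```

Even when a measure like this turns negative ahead of a downturn, the problem described above remains: the confidence intervals are too wide to flag recessions without constant false positives.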


P.S. If you want the excel spreadsheet I used, I'd be happy to email it to you.  Google docs doesn't do xlsx very well.

Saturday, April 21, 2012

99 Reasons to Fail: Financial Monoculture

Is size necessarily fragile?  Another look at Too Big to Fail




The scale of finance has drawn heightened scrutiny in the years after the financial crisis.  Yet in spite of this concern, the size of banks has only grown.  There's fear that the government may have to intervene again if another financial crisis comes along, and Fisher, the Dallas Fed president, has blasted this trend.
It is imperative that we end TBTF. In my view, downsizing the behemoths over time into institutions that can be prudently managed and regulated across borders is the appropriate policy response. Only then can the process of “creative destruction”— which America has perfected and practiced with such effectiveness that it led our country to unprecedented economic achievement— work its wonders in the financial sector, just as it does elsewhere in our economy. Only then will we have a financial system fit and proper for serving as the lubricant for an economy as dynamic as that of the United States.

Of course, the regulatory confusion that would arise from breaking up the banks would have massive effects, but are there also other theoretical reasons to be suspicious of "just" breaking up banks?  According to the traditional narrative, large banks are too vulnerable to unseen risk.  If all the models are calibrated, and something outside those models surfaces, the entire machine could break down.  This puts the entire market in jeopardy.  Large banks, knowing this, are then willing to take on more risk, as they know that the government will intervene to save them.  This combination of factors then creates a "too big to fail" phenomenon, and society pays the price while the bankers continue picking up pennies off of the train tracks.

Yet small banks are very vulnerable as well.  What sometimes gets forgotten in discussions of banking is that size per se is not why big banks are so problematic.  Systemic risk is at the root of the problem.  With their high frequency Gaussian models hedging alphas, betas, deltas, and gammas, a single tail event can cause even the largest bank to collapse, sending ripples through the entire financial system.  An analysis of the Fed's interbank lending system showed that 75% of the payments involved 0.1% of the nodes and 0.3% of the linkages between nodes in the banking network.  From this image, it's very easy to imagine a failure at one bank causing a cascade throughout the network.

But is size the only issue?  Not necessarily.  Some earlier studies of bank resiliency actually indicated that larger banks should decrease bank failure rates!  Banks, as a result of their size, are able to diversify more and limit their exposure to sector-specific volatility.  Notably, the United States has relatively low bank concentration compared to other countries.  The three largest banks in the United States controlled only 19% of the industry in 2003, while the corresponding numbers for Finland and New Zealand were 85% and 77% respectively.  Research from the NBER found that a one standard deviation increase in bank size resulted in about a 1 percentage point decrease in the proportion of bank failures.  Considering that the risk of bank failure was only 4% in the whole sample, that 1 percentage point is a significant decrease in bank failure rates.

Of course, this does not mean that large banks are better; the fact that many of them collapsed in the recent financial crisis suggests otherwise.  Additionally, the focus on the probability of bank failure glosses over the issue of magnitude; the smaller number of bank failures most likely had massive individual effects.  But this does suggest that the relationship is not as simple as one might think, and that the true relationship likely has a severe non-linearity.

Moreover, a market populated by small banks is prone to a crisis because there are "too many to fail".  If the market is fed on the same monoculture of debt priced with bad models, there is still the risk that a systemic crisis could run through the system.  With VaR models that are ill suited for complex environments and hyperspeed algorithmic trading models, it's not unthinkable that traders could feed on themselves and trigger stock market shocks.  This system would be difficult, if not impossible, to effectively monitor, especially when any given bank can have large systemic effects.

As a result, some have called for an increase in the diversity in financial systems.  The voxEU article specifically outlines four positive externalities from diversity:

  1. Bailout/Moral Hazard Externality - banks tend to pursue the same investments, as they are consequently more likely to all be bailed out.
  2. Systemic Risk Externality - as banks have a hard time taking into account the effect of their actions on other firms, homogeneity leads to inefficient levels of systemic risk.
  3. Herding/Momentum Externality - as markets tend to herd, whether for psychological or principal-agent reasons, increasing diversity would limit the swings in the market.
  4. Insurance Externality - higher diversity makes cross-insurance more robust, reducing risk.

With the discussion framed in terms of externalities, the natural answer is a Pigouvian tax.  The authors propose a system of capital requirements based on how much a given bank's profits or share prices correlate with the market as a whole.  The government would "tax" banks that "go with the flow".  This would take into account both "too big to fail" and "too many to fail".  Large banks would be required to hold more capital, as they, by virtue of their size, are highly correlated with the market.  Small banks would be pushed to try different strategies to avoid higher capital requirements.  The simplicity of the rule is also elegant: a heuristic, not a parameter sensitive to model error.  Thus, in an inverse of Taleb's criticism of the current financial system, this kind of macroprudential regulation may promote a certain level of antifragility, as individual banks could play for the lottery tickets with undefined payoffs.  It may not be enough, but coupled with robust layers of monetary policy, there may yet be hope for complex economies in an unpredictable world.
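A minimal sketch of such a correlation-based capital rule, with hypothetical numbers (the base ratio, surcharge, and return series are all made up for illustration, not taken from the voxEU proposal):

```python
import numpy as np

def capital_requirement(bank_returns, market_returns,
                        base=0.08, surcharge=0.04):
    """Base capital ratio plus a surcharge proportional to the bank's
    correlation with the aggregate market, floored at zero -- a "tax"
    on going with the flow."""
    rho = np.corrcoef(bank_returns, market_returns)[0, 1]
    return base + surcharge * max(rho, 0.0)

market = np.array([0.02, -0.01, 0.03, -0.02, 0.01])
herd = market * 1.1                                    # mirrors the market
maverick = np.array([0.01, 0.02, -0.01, 0.01, -0.02])  # marches to its own drum

print(capital_requirement(herd, market))      # pays the full surcharge
print(capital_requirement(maverick, market))  # negatively correlated: base only
```

In practice one would use longer return histories and calibrate the surcharge, but the heuristic character of the rule is exactly its appeal.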

Friday, April 20, 2012

Food for Thought

The fear should not be that markets are irrational, but rather that markets are perfectly rational, yet chronically unstable.

Fragile Finance - A Look at Macroprudential Regulation

Modern finance is fragile, so what should we do?

Last year, Olivier Jean Blanchard wrote a "Seoul paper" on macro and financial issues, calling for a rethinking of the way macroeconomic policy is conducted.  In the old approach:
We thought of monetary policy as having one target, inflation, and one instrument, the policy rate. So long as inflation was stable, the output gap was likely to be small and stable and monetary policy did its job. We thought of fiscal policy as playing a secondary role, with political constraints sharply limiting its de facto usefulness. And we thought of financial regulation as mostly outside the macroeconomic policy framework.
This shift was significant, as previous financial regulation was primarily concerned with the micro picture.  But with the realization that serious systemic risks permeate markets, interest has shifted to looking at financial regulation from a macro perspective.  Since then, macroprudential policy has been integrated into the G-20 framework, and there is a large and growing literature on how to implement it.

This is an especially thorny issue because we're not quite sure what we're looking at.  Unlike monetary policy, macroprudential policy does not have the equivalent of a DSGE for analysis.  Moreover, what measures of risk should be used?  Capital ratios?  Loan-to-value ratio?  Does one follow a rule based approach or allow for more discretion?  This has been the fundamental problem with more formal analyses of macroprudential policy, as "both theoretical and empirical work linking the financial sector to the macroeconomy is far from a stage where it can be operationalized and used for risk analysis and policy simulations."  There simply isn't enough data to thoroughly analyze macroprudential effects.

A recent study has suggested that certain macroprudential policies, such as caps on loan to value ratios or dynamic provisioning, have been effective in reducing the procyclicality of credit growth.  As debt is fragile and promotes unpredictable complexity, any way to reduce its use in times of economic growth is welcome.  Ideally, debt would be limited to digging oneself out of holes, not reaching for extreme heights of economic euphoria.

Note that this kind of regression analysis, although it is dealing with debt, which increases the probability of black swans, is still appropriate because it's looking at the growth of debt versus the growth of GDP.  Models aren't dependent on the exact magnitude of these parameters; rather, we use changes in the parameters to determine whether a given policy is appropriate.

However, this macroprudential approach is not without concerns.  It is not sufficient on its own, and safety net policies will still be necessary.  Additionally, capital controls in and of themselves may severely harm long run economic growth.  As we're dealing with systemic risk, it may be that regulations to limit systemic risk only end up replicating it elsewhere, in industries that are not as easily regulated.  This would be even more worrisome, as previously known risks go on to evolve into unknown unknowns: the realm of Extremistan.

In spite of this, I feel that macroprudential policy will be increasingly important in the future, especially if we move to a more nominally stable NGDP targeting regime.  When aggregate demand is stabilized, the largest welfare costs will arise from aggregate supply shocks.  And as the financial sector is one of the critical industries for system wide credit, the question of how to regulate finance is fundamentally an aggregate supply issue.  In the market monetarist framework of Scott Sumner, macroprudential policy will be critical for shaping the composition of NGDP growth.  This will also be very important for developing nations, as they are disproportionately harmed by large swings in real growth.  A massive drop in export and natural resource demand can let their capital stock deteriorate, damaging their prospects for development.  This move towards "increasing transaction costs" in order to improve global finance echoes Dani Rodrik's arguments for a more sustainable version of global trade.  Much as better trade does not equal more integration, better finance may not entail more transnational capital flows.  And without stable and robust finance, there will be neither stable nor robust growth.  That forms the basis of macroprudential regulation.

Monday, April 16, 2012

Correlations Across Time: How Stable are the Curves?

What is the Phillips curve, and how do we know it's there?  It was originally documented by Irving Fisher in 1926, when he noted the negative correlation between inflation and unemployment.  Of course, he was not the first to realize this connection between prices and employment, as Hume commented on this exact issue almost 200 years before:

In my opinion, it is only in the interval or intermediate situation, between the acquisition of money and the rise in prices, that the increasing quantity of gold or silver is favourable to industry. . . . The farmer or gardener, finding that their commodities are taken off apply themselves with alacrity to the raising of more. . . . It is easy to trace the money in its progress through the whole commonwealth; where we shall find that it must first quicken the diligence of every individual, before it increases the price of labour
For this reason, Milton Friedman often said that modern macroeconomics has advanced just one derivative past Hume.  Instead of focusing on the first derivative and changes in the price level, we now look at the second derivative and changes in the inflation rate.

Robert Hall took this one step further in his 1986 exposition on efficient monetary policy and, instead of looking at one more derivative, looked at one more parameter.  Instead of just looking at the levels of unemployment and inflation, he theorized on the relationship between the volatility of the two variables.  He hypothesized the existence of an efficient policy frontier, a trade-off between price stability and unemployment stability that would prevent both variables from settling down in the face of periodic random shocks.

But have either of these correlations held throughout time?  The Phillips curve originally worked very well in the 1960s, but then broke down as stagflation struck in the 1970s and expected inflation shifted the "stable" Phillips curve.  Thus, there is a severe issue with measuring the Phillips curve: where should one start and end the observation window?  The analysis can easily become utterly meaningless, as:

To see how meaningless correlation can be outside of Mediocristan, take a historical series involving two variables that are patently from Extremistan, such as the bond and the stock markets, or two securities prices, or two variables like, say, changes in book sales of children's books in the United States, and fertilizer production in China; or real-estate prices in New York City and returns of the Mongolian stock market. Measure correlation between the pairs of variables in different subperiods, say, for 1994, 1995, 1996, etc. The correlation measure will be likely to exhibit severe instability; it will depend on the period for which it was computed. Yet people talk about correlation as if it were something real, making it tangible, investing it with a physical property, reifying it. The same illusion of concreteness affects what we call "standard" deviations. Take any series of historical prices or values. Break it up into subsegments and measure its "standard" deviation. Surprised? Every sample will yield a different "standard" deviation. Then why do people talk about standard deviations? Go figure. 
Note here that, as with the narrative fallacy, when you look at past data and compute one single correlation or standard deviation, you do not notice such instability (Taleb, The Black Swan, my emphasis).
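Taleb's experiment is easy to replicate with synthetic data.  The sketch below draws two independent heavy-tailed ("Extremistan") series and measures their correlation over four subperiods; the printed values wander even though the true correlation is zero throughout:

```python
import random

def corr(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Pareto draws stand in for fat-tailed financial series
x = [random.paretovariate(1.5) for _ in range(400)]
y = [random.paretovariate(1.5) for _ in range(400)]

for start in range(0, 400, 100):  # four "subperiods" of 100 observations
    print(round(corr(x[start:start + 100], y[start:start + 100]), 2))
```

The same exercise with subsegment standard deviations makes Taleb's second point: each slice of a fat-tailed series yields a different "standard" deviation.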

So, in this post, I want to look at the time series data and see how the correlation evolves over time.  This is important for both the Phillips curve and the efficient policy frontier, as one can see whether either of those relationships actually holds across all time periods.

Monthly CPI and unemployment data are obtained from the St. Louis Federal Reserve website, and the variability of each variable is measured by the standard deviation of the past year's worth of observations.  Correlations are then calculated in five year windows, such that the correlation coefficient at month t is the correlation between the variables of interest in months t-59 to t.  As the concept of a standard deviation is a bit abstract, I took the logarithms of the standard deviations to allow an interpretation in terms of percentage increases in one variable leading to percentage increases in another.
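The procedure can be summarized in a short sketch.  The series here are synthetic stand-ins for the FRED data, but the window lengths (12 months for the volatilities, 60 months for the correlations) match the description above:

```python
import numpy as np

def rolling_std(series, window=12):
    """Standard deviation of the trailing year of observations, per month."""
    return np.array([np.std(series[t - window + 1 : t + 1])
                     for t in range(window - 1, len(series))])

def windowed_corr(a, b, window=60):
    """Correlation over months t-59..t, one coefficient per month t."""
    return np.array([np.corrcoef(a[t - window + 1 : t + 1],
                                 b[t - window + 1 : t + 1])[0, 1]
                     for t in range(window - 1, len(a))])

rng = np.random.default_rng(0)
infl = rng.normal(3, 1, 240)                      # 20 years of monthly "inflation"
unemp = 6 - 0.5 * infl + rng.normal(0, 0.5, 240)  # Phillips-curve-ish "unemployment"

level_corr = windowed_corr(infl, unemp)           # the Phillips curve panel
vol_corr = windowed_corr(np.log(rolling_std(infl)),
                         np.log(rolling_std(unemp)))  # the volatility panel
print(len(level_corr), len(vol_corr))
```

Coloring each coefficient by sign and magnitude gives the heatmap-style "time series" shown below.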

Below is a tool to gain a qualitative understanding of the evolution of the correlations.  Red denotes high numbers (strong positive correlation), while green denotes low numbers (strong negative correlation).  The black lines mark every 10 years to give a sense of scale in the colorful "time series".


As expected, the correlation coefficients fluctuated throughout history.  For the Phillips curve, old Keynesian theory would predict a negative correlation.  However, if there's a supply shock, both inflation and unemployment move in the same direction.  This fits the two major supply shocks in recent history: the negative aggregate supply oil shocks of the late 1970s and early 1980s, and the positive aggregate supply shock of the 1990s.

With this in mind, we see that the Phillips curve relationship was actually quite stable up until the 1990s.  Although the oil price shock did force the correlation positive for a short period, it quickly reverted to a negative value.  However, from about 1990 on, the correlation between unemployment and inflation became consistently, if only weakly, positive.  Since both inflation and unemployment rose over that period, this is another piece of evidence suggesting that much of the aggregate supply gains of the 1990s were steadily reversed in the 2000s.

However, the relationship between the two volatilities is not as clear cut.  A log-log regression of unemployment volatility on inflation volatility over the entire 60 years yields a slope of 0.44, with a 95% confidence interval between 0.346 and 0.540, suggesting that a 1% increase in inflation volatility was associated with about a 0.44% increase in unemployment volatility.  Yet this overall correlation masks the variance.  Around the 1980s and 2010, the correlation was strongly positive, while in the 1970s and 2000s it was strongly negative.
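For reference, here is how such a slope and 95% confidence interval can be computed.  The data are synthetic, with the elasticity set to 0.44 by construction, so the estimate should land near it:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.5, 720)             # log inflation volatility (synthetic)
y = 0.44 * x + rng.normal(0.0, 0.3, 720)  # log unemployment volatility

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
# Standard error of the slope, then a 95% confidence interval
se = np.sqrt((resid @ resid) / (len(x) - 2)
             / ((x - x.mean()) @ (x - x.mean())))
ci = (slope - 1.96 * se, slope + 1.96 * se)
print(round(slope, 3), tuple(round(c, 3) for c in ci))
```

Running the same regression over subperiods, rather than the full sample, is what exposes the sign flips described above.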

From this, some general conclusions can be made.  First, policy is not efficient.  Even if there were an efficient policy frontier, we're not on it.  The many zones of positive correlation indicate that there's much more monetary policy can do to limit volatility in the two variables.  Second, there are interesting things going on with transmission mechanisms that cause uncertain inflation to translate into uncertain output.  Third, if there are severe risks from inflation volatility, it may be in our interest to lower unemployment volatility as well.  Moderating the relationship between these two variables may become one of the biggest benefits of NGDP targeting, as uncertainty along the Phillips curve may cause movement towards higher levels of volatility.

Saturday, April 14, 2012

The "Efficient-as-you-get" Market Hypothesis - Limits to Knowledge

Quantum Entanglement in markets: A new look at the EMH



The concept of tail risk in Chinese housing markets made me think more about the efficient market hypothesis.  If there truly are events that lie beyond the public's ability to predict, how can markets be truly efficient?

No doubt, the strong form of the EMH, which states that anything that can possibly be known about an asset is incorporated into its price, seems unreasonable.  Given cognitive limits, it's doubtful that market participants could fully incorporate every shred of information into complex models that, in many instances, are necessarily non-linear and unpredictable.  Even the weak and semi-strong versions have been called into question in light of persistent instances of momentum.  Market bubbles have also been used as a reason to reject the EMH, the decoupling of prices from fundamental value showing how markets can never be truly efficient.  And then there are the legions of behavioral economists who argue that biases such as overconfidence and hyperbolic discounting prove that there are gaps in individual decision making.

These inefficiencies have been thoroughly discussed, but I think they miss another dimension: the fundamental unknowability of future events.  Prediction markets, in theory, incorporate all possible information into their judgments, but they are still contingent on what public information is available.  Also, just because prediction markets are more accurate than other forecasts doesn't mean they're sufficiently accurate to support highly leveraged and fragile investments.  The Black Swan events that shake the foundations of markets are, by definition, unknown unknowns.  These Black Swans can be even more pernicious because the information that could predict them may be out there.  However, the market may not be able to piece the information together, whether due to bounded rationality or the fact that certain information is not always public.  In the end, it may be these non-obvious rogue investments that make much of the other observed information irrelevant.  Thus, this new formulation of the EMH differs from the others by rejecting the idea that all information is incorporated.  Not all of it is, and what is incorporated might not be truly understood.

But what impact does this have on the practical application of the EMH?  Are there any meaningful practical implications to be drawn from the fallibility of informationally inefficient markets?  On this issue, I like the analogy of attempts to use quantum entanglement to transfer messages over long distances.  Quantum entanglement appears to offer a way to transfer a signal faster than the speed of light, but the information transferred is random.  As a result, no net, low entropy information can be communicated faster than the speed of light.  As applied to markets, the EMH would say that even if prices deviate from their fundamental value, the deviation does not convey any information, because there's no a priori way to know what the fundamental value is.  Even if there's information that's not incorporated into the price, there's no way for you to know what the new information is, or how that new information should interact with the accumulated knowledge of all the other investors.  You don't know what the price is telling you.  The errors are unknowable ex-ante, and only obvious ex-post.

This model incorporates several aspects of the EMH and criticisms thereof very nicely.  First, it still maintains that there's no point in playing the market.  Even if prices don't reflect all information, it's impossible for you to consistently pluck reality out, save with enough time and invisible hands.  It's pointless to get good at trading, because the excess returns will always be gobbled up by firms who are smarter and computers that are faster.  When companies trade on the basis of milliseconds, do you really think your human thinking will get you anywhere?  This makes advertisements for the Online Trading Academy particularly laughable.  Pity in all those finance mini-lessons they don't teach the foundation of financial theory.

Second, crises don't disprove this formulation of the EMH.  "Seismic" price adjustments don't occur in any predictable manner, which means mispricings are random.  The price adjustment may not even have been the result of newly discovered information; it could have arisen from a new conceptualization of already available information.  Again, there's no way to predict from the past.  This leads to Scott Sumner's disdain for tighter subprime regulation as a possible solution to the 2006 housing bubble (my emphasis):
One can look at the sub-prime fiasco from a theoretical perspective, or a empirical perspective, but what one cannot do is compare an ideal regulatory scheme to actual banking practices.  No one doubts that we would be better off if we could go back in time and install a regulation banning sub-prime mortgages in 2004.  But if we had that ability, the bankers would have also known what was coming, and would never had made the loans in the first place.
Hindsight is 20/20; the efficiency of markets is an ex-ante postulate, not an ex-post proof.

Third, informational criticisms based on computer science seem particularly nonsensical.  This random information argument is not "perfect markets everywhere", but rather "ok markets everywhere".  Additionally, this interpretation of the EMH actually builds in the limited rationality that results from algorithms that can only run in polynomial time.  But even if markets aren't efficient, there's no way for you to exploit it.  If there are more efficient allocations, your central planning algorithms can't target them on a case-by-case basis.

Fourth, while we can't prepare for any individual crisis, we can still take stock of certain warning signs.  Here I'm talking about payoffs, not probabilities.  There are certain limits to our conception of small probabilities, but it's not infeasible to consider the issue of impact.  On this issue, I think specifically about the impact of debt.  Debt financed cycles seem particularly problematic, as they magnify the impact of a crisis.  I have no idea what the fundamental stable value for debt is, but I can definitely be scared of the deleveraging effects of debt.  The fragility of the financial system becomes really apparent when small shocks can propagate themselves through chains of defaults.

As a result, policy should be geared towards moderating the aggregates, such as debt, that give rise to fragility.  This may not allow policy makers to avoid crises, but the reduction in fragility should substantially reduce their severity.  NGDP targeting can even play a powerful role here: given enough crises, the high leverage strategy would become dominated by the more conservative strategy, as the government could allow the fragile banks to fall apart.

This policy recommendation might seem a bit peculiar; if markets are truly efficient, how can the government have any recommendations for them?  As the argument for market efficiency is fundamentally an informational one, it's possible that information about systemic issues is substantially less obvious than the fundamentals underlying each asset price.  But more importantly, the concern about market aggregates arises less from understanding whether the crisis will unfold than from what happens if it does.  I don't really know what a safe level of debt is, and my estimates may be randomly wrong, but I don't want to be caught on the wrong side of a left-skewed distribution.  Nobody knows, but the individual investor is free to guess wrong; he or she can take the chance.  However, policy makers are tasked with averting these large scale systemic crises, and therefore have to be much more aware of the fragility inducing effects of debt.

So while markets may not incorporate all information into prices, it's a fool's errand to try to figure out what the excluded information is.  Yet while markets may be efficient for investors, regulators may want to be aware of factors that lead to large scale systemic crises outside the domain of traditional models.  It's this focus on payoff, and not probabilities, that forms the basis of activist policy in an "efficient" market.

Edit: More analysis on the issue of timing

Momentum Trading

I recently read an interesting article on momentum trading, and I was wondering how it jibes with the EMH.  It left me wondering whether there is some degree of survivorship bias when it comes to momentum trading: the people who trade on the way up can make some returns, while the people who get burned by guessing the turning point wrong eventually leave the market.

This kind of asymmetry would also create an environment with an incentive to bid up increases in prices.  If the general belief is that rising prices will keep going up, one can make money by buying stocks that are rising in price.  However, when the music ends, the people who bought on the way up still have their money; the people who got burned on the way down are no longer in the market to be evaluated.  In a sense, there's a coordination problem in momentum trading.  While all firms would prefer not to bid up the price of a stock, they are almost "forced" to by the asymmetric arbitrage opportunities.  There seem to be interesting game theory dynamics in a possibly efficient market.

Wednesday, April 11, 2012

Chinese Housing Market: More than "Tail Risk"

The story of the Chinese housing market has been a nerve-wracking one.  Property prices have been soaring, and as with all rapid property price growth, there is fear that a bubble could pop.  This problem is especially prominent in the public consciousness in light of the U.S. real estate bubble, which seemed to show that no matter what governments may say about soft landings, housing bubbles can accumulate into larger macroeconomic crises.  While, yes, I am aware that housing crashed two years before the sharp drop in NGDP, the lack of a Chuck Norris-esque monetary declaration in China to maintain steady NGDP growth suggests that a banking crisis, which is essentially an aggregate supply issue, can accelerate into broader macroeconomic trouble.

Some recent news out of the housing market has been simultaneously comforting and concerning.  Housing prices have fallen five days in a row, yet markets are still rallying behind the bonds of several major Chinese property companies.  This has been interpreted as a prediction of a soft landing, as investors are still willing to finance housing.  The CNBC article also outlines some other reasons for optimism:

In a report issued last month, Standard Chartered also pointed out that there were signs of hope after the drumbeat of negative news last year. "Apartment sales have improved since the Lunar New Year break," Lan Shen and Stephen Green wrote. "Developers are a little more confident about apartment sales, and price cuts of 10-20 percent are apparently helping to nurture demand."
On Friday, China's largest property developer Vanke seemed to confirm a rebound, reporting a 24 percent increase in sales in March over the previous year. It was the second consecutive month Vanke had reported a year-on-year sales increase.
Scott Sumner is also very optimistic, framing recent concerns as just another false alarm in a long line of punditry.
The price system works surprisingly well in China, despite the half-communist nature of their economy.  Chinese buyers actually use their own money to buy homes, so in a sense the US housing market circa 2005 was much more “communist” than the Chinese market.
China boosters like Robert Fogel claim that China will soon grow to be twice as rich as France the EU.  Others pundits claim it will get stuck in the middle income trap.  Both the boosters and pessimists are wrong.  Like Japan, like Britain, like France, indeed like almost all developed countries, it will grow to be about 75% as rich as the US, and then level off.  It won’t get there unless it does lots more reforms.  But the Chinese are extremely pragmatic, so they will do lots more reforms.
China is currently a very poor country, so the Chinese model has nothing to teach the West.  If we want to learn from the Chinese culture, learn from Singapore(or Hong Kong), which is how idealistic Chinese technocrats would prefer to manage an economy; indeed it’s how China itself would be managed if selfish rent-seeking special interest groups didn’t get in the way.  But they do get in the way—hence China won’t ever be as rich as Singapore; it will join the ranks of Japan, Korea, Taiwan, and the other moderately successful East Asian countries.
This isn’t any sort of “miracle.”  Go visit China and look at the airports, roads, subways, office buildings, shopping malls, etc, that they are building.   Look at educational levels in the cities (to which they are rapidly moving.)  It would be a miracle if a country that could do those things got stuck at the middle income level.  I’ve visited both Mexico and China quite often.  Mexico is a middle income country that is currently richer than China.  But any tourist who visits both places (with eyes wide open) can quickly see who will be much richer in 30 years.  There’s no stable equilibrium where the coastal Han Chinese get fully developed and the interior Han Chinese stay middle income.  And the coastal Chinese are closing in on developed status very rapidly.
While the long run predictions seem accurate (convergence seems quite reasonable for China), the concern is with the short run, while the storm is still here.  Interestingly enough, the first CNBC article trumpeting the resilience of the markets seems internally contradictory.  I don't buy the "price cuts of 10-20 percent are apparently helping to nurture demand" argument.  If there is so much demand, why would prices need to be lowered to "nurture" it?  Even the argument that equities are rallying because of confidence in the housing market is specious.  Why should the relationship between the housing market and the housing companies' bonds be linear?  Markets are opaque: how do we know whether the rallying reflects perceptions of a short term price spike before a medium term collapse, or forecasts that simply won't match reality?  Fundamentally, we just don't know why the price movements are happening.  Quoting Taleb in The Black Swan:
History is opaque. You see what comes out, not the script that produces events, the generator of history. There is a fundamental incompleteness in your grasp of such events, since you do not see what's inside the box, how the mechanisms work. What I call the generator of historical events is different from the events themselves, much as the minds of the gods cannot be read just by witnessing their deeds. You are very likely to be fooled about their intentions (8). 
If we really don't know the causes, optimism is unfounded.  Scott's confidence that the housing market will keep going up also ignores the dual nature of the price movement.  The continual upward climb can be either evidence of stable growth or an indicator that a crash is coming.  Past performance does not guarantee future results.  Taleb, again:
Let us go one step further and consider induction's most worrisome aspect: learning backward. Consider that the turkey's experience may have, rather than no value, a negative value. It learned from observation, as we are all advised to do (hey, after all, this is what is believed to be the scientific method). Its confidence increased as the number of friendly feedings grew, and it felt increasingly safe even though the slaughter was more and more imminent. Consider that the feeling of safety reached its maximum when the risk was at the highest! But the problem is even more general than that; it strikes at the nature of empirical knowledge itself. Something has worked in the past, until—well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading (41). 
Back to the CNBC article: it continues with analysis of severe risks among the housing companies:
Some credit analysts are also warning that the sector faces judgement day as builders struggle to pay back debt. On Thursday, Standard and Poor's cut the credit rating on Hopson Development and Glorious Property. 
"We have to remember, ratings agencies are not the leading indicators, ratings agencies follow the market," says SJ Seymour's Yadav. 
According to her, the best thing for investors to do is to choose the bigger names, which have exposure to mass-market housing and projects outside the large tier-1 cities. She recommends the corporate bonds of Evergrande. 
"We reckon the bigger players, simply because of the bigger number of projects, the diversification, will assist them. The smaller players can come under pressure very quickly," says Yadav. 
Meanwhile, analysts expect further consolidation among the smaller developers. Donald Han, Senior Advisor, HSR Property Group said his firm was advising some of these smaller companies. 
"A lot of these companies by in large have a fairly strong asset bases in terms of balance sheet but the difficulty is trying to move sales and converting that into stronger liquidity," Han told CNBC. "Some of the smaller companies may go through consolidation. The result of the consolidation exercise would turn some of the new entities into a bigger more stable companies."
I see multiple red flags here.  That rating agencies are no longer comfortable with the levels of debt hints at problems bubbling up from underneath.  That they follow the market is not very comforting either; it merely suggests that the economy is already moving in that negative direction.  Additionally, while the analyst recommends investing in large firms for stability, one has to be aware that this consolidation of firms merely masks tail risk behind apparent stability.  Quoting Taleb:
Just as there is a fallacy of aggregation, I believe in the fallacy of scale (because of concavities). Properties change with scale.
It is as if the firms are locking arms to face the wind.  They are more stable against small gusts, but a large one can pull them all down together.

The last line about "fairly strong asset bases" but lack of "stronger liquidity" is especially concerning.  It suggests that the market is very prone to panics and shocks; there's no buffer of liquidity to assuage the fears of creditors.  This all assumes that the asset bases are priced correctly and are actually strong.  Yet even a few minor errors here can propagate systemic risk through the markets incredibly quickly.

Some anecdotal evidence from my last trip to China also gave me shivers.  We were looking at houses near Shanghai and found houses relatively far from the city center in the 5m RMB range, or about 830,000 USD.  And these relatively "new" houses were not very well constructed either: the walls often had large cracks and the windows were not properly sealed.  Given the apparently high depreciation rate of housing capital, what could justify the high cost?

Of course, that only speaks to my gut feelings; perhaps there is some benefit to owning a home at a distance that keeps the city center at least accessible.  But when we were in the cab after looking at houses, I remember the driver talking about how housing prices would never fall, and that there was nothing to worry about.  Worst case scenario, there would be a soft landing, but no large scale concerns about housing prices.  This terrified me.  When one can buy 2 RMB newspapers about the housing market from street vendors, and then hear people saying housing price collapses are impossible, it's time to consider the "impossible": a hard landing for housing prices.

Now some may critique this argument, saying that the same arguments about opacity apply to my narrative as well.  The whole problem with low probability, high impact events is that they are fundamentally unpredictable.  We've never observed the probability before: how would we know?  That my evidence is anecdotal makes it even more suspect; I could just be telling a story rather than listing facts.  But the argument I am making here is not that the housing market will crash by 30% in 6 months and 16 days, but rather that we can't just wave our hands and hide the risks in China.  The problem is more than tail risk.  It's asymmetric tail risk.

If China continues on its current path, it's not going to magically achieve substantially higher growth.  Even if all the loans go off without a hitch, and all the banks stay liquid and solvent, the Chinese economy cannot grow much faster than 12%.  But if, for some reason, a systemic crisis befalls China, the negative payoff has a very fat tail.  Borrowing and collateral chains would collapse worldwide, and longtime trade partners would suffer great turbulence adapting to the new pattern of international trade.  Especially with the crisis in Europe, even a small credit shock in China could have massive global effects.  History does not crawl; it leaps.  My fear is that while we crawl forward with housing construction, we may find ourselves in a "Great Leap Backwards", destroying many of the gains of the past decade.
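The asymmetry can be made concrete with back-of-the-envelope arithmetic.  The growth and loss figures below are illustrative assumptions, not estimates: the point is only that with a capped upside and a fat-tailed downside, even a modest crisis probability drags the expected payoff down sharply.

```python
def expected_growth(p_crisis, normal_growth=0.10, crisis_loss=-0.40):
    """Bounded upside, fat-tailed downside: expected payoff as a
    function of the (unknown) crisis probability."""
    return (1 - p_crisis) * normal_growth + p_crisis * crisis_loss

for p in (0.01, 0.05, 0.10, 0.20):
    print(f"p(crisis)={p:.2f} -> expected growth {expected_growth(p):+.3f}")
```

A one-in-five chance of a crisis is enough to wipe out the entire expected gain, even though the "normal" scenario is healthy growth.  And since the crisis probability itself is the thing nobody can estimate, the payoff skew, not the point estimate, is what should drive policy.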

Monday, April 9, 2012

Fiscal Policy in a Monetary Union


Recently, I read a post about the desirability of the various UK austerity programs within the UK monetary union.  What is special about the countries of the UK monetary union is that they are, to a certain extent, a fiscal union as well.  Scotland, Northern Ireland, and Wales all have voting power in the UK parliament at Westminster.  England, however, has no voting power in the devolved parliaments of the other nations, which therefore retain some degrees of freedom to pursue their own fiscal policy, including the power to finance it through bond markets.  What this means is that fiscal policy, in a sense, can be devolved from the “federal” government to the individual countries.  Such a possibility is discussed by Brian Ashcroft when he calls upon the Scottish government to pursue policy to counteract the effects of fiscal austerity.
Well, first, it suggests that an independent Scotland as an accepted part of the UK sterling monetary union should be able to adopt a different fiscal policy stance to stabilise GDP than rUK. However, it also suggests that providing the degree of fiscal devolution is sufficient to allow changes in tax and spend that can influence aggregate demand then this option is also available within the UK political union. Further academic research is required on the appropriate form and degree of fiscal devolution for effective stabilisation but there is little doubt that stabilisation at the level of nations and regions within the UK is feasible. 
Moreover, if an independent Scotland is part of the sterling monetary union the Bank of England and rUK government will almost certainly require that the Scottish government abide by a set of fiscal rules - see this earlier post. The fiscal framework could be little different under devolution from that under independence. Under devolution, Scotland could have a separate stabilisation policy as well as the benefits from the risk pooling arrangements e.g. social security, bank bailouts etc. that are available as part of the UK. It is true that the high levels of trade with the UK would make it difficult for fiscal policy to chart a radically different stabilisation path from rUK but that would apply to an independent Scotland too.
Based on this concept, what if fiscal policy were devolved in the United States?  Devolution would allow each state to design a fiscal policy appropriate for its own macroeconomic conditions.  Such a policy would be broadly consistent with the concept of Market Preserving Federalism (MPF).  MPF was originally used by Weingast in the context of public choice: how do subnational units efficiently provide public goods to their citizens?  Samuelson argued that it was impossible, and that due to cross-border externalities the central government had to intervene.  However, Weingast, building on Tiebout, argued that subfederal units could be thought of as firms competing against each other to reach optimal public good bundles.  For this to occur, five conditions must be met:
  1. There exists a hierarchy of governments with a delineated scope of authority (for example, between the national and subnational governments) so that each government is autonomous in its own sphere of authority.
  2. The subnational governments have primary authority over the economy within their jurisdictions.
  3. The national government has the authority to police the common market and to ensure the mobility of goods and factors across subgovernment jurisdictions.
  4. Revenue sharing among governments is limited and borrowing by governments is constrained so that all governments face hard budget constraints.
  5. The allocation of authority and responsibility has an institutionalized degree of durability so that it cannot be altered by the national government either unilaterally or under the pressures from subnational governments.

Each of these five conditions is important to supporting MPF.  Without a hierarchy of governments (1), there is no federalism; the unitary state cannot be differentiated from the federal unit.  If subnational governments lack primary control (2), they can't properly compete against each other.  Without factor mobility (3), there is no competition.  As MPF was originally formulated in the context of public choice, the factors of production must have a choice in where they locate; subnational control over factor mobility in the common market would prevent this.  Without (4), transfer payments can be used to smooth over competitive differences, limiting efficiency.  Finally, without (5), there is too much policy uncertainty, and the MPF regime may fall apart.

In devolved fiscal policy, the public good is no longer something concrete like transportation infrastructure or police protection; it's aggregate demand management.  AD policy truly is a public good: when the macroeconomy is doing well in an area, nobody can opt out of the benefit (nonrival), and the government cannot effectively exclude any citizen from it, short of arbitrary jailing (nonexcludable).  Yet in the provision of this public good, the fourth condition, the hard budget constraint, becomes very problematic.  A hard budget constraint substantially limits the ability of sub-federal units to pursue counter-cyclical fiscal policy: in recessions, the sub-federal units are forced to cut spending, and the economy suffers from the shortfall in aggregate demand.  This is confirmed by data from the 2008 recession: over the course of the downturn, state and local spending collapsed, effectively counteracting the effect of the federal stimulus.

So what happens if the fourth condition is loosened; what if state and local governments were allowed to borrow?  In effect, there then would be fifty states with sovereign fiscal policies in a common monetary union; a situation quite similar to that in Europe.  During the construction of the European monetary union, the United States was often used as a model in the literature to help describe how Europe would work.  In this case, Europe can be used to help evaluate a hypothetical United States, in which sub-federal, and not the federal, units have borrowing power.

In Europe, the concern was that, in the absence of borrowing rules, the fiscal policies of the individual European states would err towards fiscal irresponsibility.  Since prices for most goods would not be affected by an individual country’s fiscal policy, aggregate supply would be much more elastic, heightening the effect of expansionary fiscal policy.  As a result, each state would have an incentive to boost its output through government spending and push the debt to the future.  However, once every state decides to pursue fiscal expansion, the aggregate supply relation puts the brakes on output growth, resulting in higher inflation instead.
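This incentive structure is essentially a prisoner's dilemma.  A minimal sketch, with made-up payoff numbers chosen only to match that ordering (think of them as output gains net of inflation costs), shows why expansion is each state's dominant strategy even though joint restraint is better for everyone:

```python
# Stylized 2x2 game between two member states of a monetary union.
# The payoff numbers are illustrative, not calibrated estimates.
payoffs = {
    ("restrain", "restrain"): (2, 2),  # joint restraint: stable prices
    ("restrain", "expand"):   (0, 3),  # the expander free-rides on stable prices
    ("expand",   "restrain"): (3, 0),
    ("expand",   "expand"):   (1, 1),  # joint expansion: inflation eats the gains
}

def best_response(other_action):
    """The row state's best reply to the other state's action."""
    return max(("restrain", "expand"), key=lambda a: payoffs[(a, other_action)][0])

# Expansion is dominant, so both states expand and land on (1, 1)
# instead of the jointly preferred (2, 2).
print(best_response("restrain"), best_response("expand"))  # expand expand
```

The (1, 1) outcome is the "higher inflation instead of output growth" equilibrium described above: each state's fiscal expansion looks attractive holding the others fixed, but once every state expands, the aggregate supply constraint binds.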

Would the same thing happen with devolved fiscal policy in the United States?  According to analysis by McKinnon, the answer is “not necessarily.”  Because of Ricardian equivalence, a state’s decision to take on higher levels of debt can be interpreted as an obligation to raise taxes or cut spending on other programs in the future.  Assuming sufficient factor mobility to create horizontal competition between states, the taxes needed to finance the debt spur firms to move from indebted states toward lower-debt states.  Consequently, debt financed expansions would be limited to public goods with a positive return in the state, as those would be the only programs for which firms would be willing to pay taxes.  Note that these public goods can include education and health care systems as well: if workers are drawn to the state by productive investments in human capital, firms will have more opportunities to find talented workers there.  These local public goods also answer the spillover problem of fiscal stimulus.  As per open economy models of fiscal stimulus, simple increases in consumption have a high probability of being spent in other states, whereas local investments keep more of the stimulatory effect within the state.  As a result of competition between states, spending within states is more likely to be efficient.

What is particularly attractive about this arrangement is that it forces aggregate demand management in recessions to function as quasi-aggregate supply management.  As “easy” stimulus would diffuse across borders, so the more onerous task of improving the capital stock, both human and traditional, becomes the key mechanism through which to achieve macroeconomic stability.  This also creates exciting possibilities with regards to the interaction between each subnational unit’s fiscal policy and national monetary policy.  If the price level is pushed down as a result of an increase in aggregate supply, it may force the hand of an inflation targeting central bank that is otherwise unwilling to fill the output gap.

Some may point to Europe as an example of why this system would not work; devolved fiscal policy under a monetary union has only led to severe debt crises there.  However, one key difference is the extent of factor mobility in Europe as compared to the United States.  Much of the empirical literature on the Eurozone has pointed out the limited mobility of labor within the Eurozone.  Although many of the de jure barriers to immigration have been removed, the de facto barriers are still very problematic.  Moving from Germany to Portugal is not quite as simple as moving from Maine to California.  One has to learn a new language, use it effectively in a job, and then absorb new cultural mores to truly fit in.  Consequently, labor mobility in the Eurozone has been estimated at about one third of that in the United States.  After adding high levels of transfer payments between countries, there is very little competition between the European countries for labor; there is no “factor-price equalization.”  As a result, firms cannot easily relocate, giving individual countries more space to pursue inefficient levels of government spending.  The threat of future taxes embodied by present debt is not strong enough to spur firms to move.

With Scott Sumner proudly proclaiming a market monetarist end to macro, it is time to turn our attention to what we can do afterwards.  In a world in which knowledge is increasingly dispersed, and centralization is unequipped to deal with the complex nature of economic policy, devolving policy domains, such as fiscal policy, may be the answer.