GOOD READ: China’s Households Exposed to Housing Bubble ‘That Has to Burst’

BloombergBriefs’ Tom Orlik on a fascinating and revealing survey:

China’s households are massively exposed to an oversupplied property market, according to a new survey by economist Gan Li, a professor at Southwestern University of Finance and Economics in Chengdu, Sichuan, and at Texas A&M University in College Station, Texas.

A 2013 survey of 28,000 households and 100,000 individuals provides striking insights on the level and distribution of household income and wealth, with far-reaching implications for the economy. About 65 percent of China’s household wealth is invested in real estate, said Gan.

Ninety percent of households already own homes, and 42 percent of demand in the first half of 2012 came from buyers who already owned at least one property.

“The Chinese housing market is clearly oversupplied,” said Gan. “Existing housing stock is sufficient for every household to own one home, and we are supplying about 15 million new units a year. The housing bubble has to burst. No one knows when.”

When it does, the hit to household wealth will have a long-term negative impact on consumption, he said. China’s household income is significantly higher than the official data suggest. Average urban disposable income was 30,600 yuan in 2012, according to the survey. That’s 24 percent higher than in the National Bureau of Statistics’ data. These results suggest official statistics, which show household income as an extremely low share of GDP, may overstate China’s structural imbalances.

Many wealthy households understate their income in the official data. China’s richest 10 percent of urban households enjoy an average disposable income of 128,000 yuan per capita a year, according to Gan’s survey. That’s twice as high as the same measure in the NBS report. The poorest 20 percent get by on about 3,000 yuan, pointing to significantly greater wealth inequality than in the U.S. or other OECD countries.

The wealth disparity helps explain China’s imbalance between high savings and investment and low consumption. Rich households have a significantly higher savings rate than poor households. The wealthiest 5 percent save 72 percent of their income, compared with a national average of 36 percent, while 40 percent of households had no savings at all in 2012.

“The solution to boosting consumption is income redistribution,” said Gan. “Compared to the U.S. and other OECD countries, China has done very little in this area.”

The survey also provides insights into China’s widespread informal lending. A third of households are involved in peer-to-peer lending, according to Gan. Zero-interest loans between friends make up the majority. Interest, when charged, is typically high, averaging a 34 percent annual rate. That underscores the usurious cost of credit for businesses and households excluded from the formal banking sector.

 

GOOD READ: THE IT THREAT

On January 24, 2014, I posted Google chief warns of IT threat. Danny, my geek son, had been warning me about that for most of last year. It is now a reality. The Economist ran a great article (Tks Gary) on that last week (“The future of jobs: The onrushing wave”). Some excerpts:

(…) A 2013 paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, argued that jobs are at high risk of being automated in 47% of the occupational categories into which work is customarily sorted. That includes accountancy, legal work, technical writing and a lot of other white-collar occupations.

Answering the question of whether such automation could lead to prolonged pain for workers means taking a close look at past experience, theory and technological trends. The picture suggested by this evidence is a complex one. It is also more worrying than many economists and politicians have been prepared to admit. (…)

The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age”, a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. (…)

A startling progression of inventions seems to bear their thesis out. Ten years ago technologically minded economists pointed to driving cars in traffic as the sort of human accomplishment that computers were highly unlikely to master. Now Google cars are rolling round California driver-free, and no one doubts such mastery is possible, though the speed at which fully self-driving cars will come to market remains hard to guess. (…)

The machines are not just cleverer, they also have access to far more data. The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers. Text-mining programs will displace professional jobs in legal services. Biopsies will be analysed more efficiently by image-processing software than by lab technicians. Accountants may follow travel agents and tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into good-enough news stories.

Jobs that are not easily automated may still be transformed. New data-processing technology could break “cognitive” jobs down into smaller and smaller tasks. As well as opening the way to eventual automation this could reduce the satisfaction from such work, just as the satisfaction of making things was reduced by deskilling and interchangeable parts in the 19th century. (…)

There will still be jobs. Even Mr Frey and Mr Osborne, whose research speaks of 47% of job categories being open to automation within two decades, accept that some jobs—especially those currently associated with high levels of education and high wages—will survive (see table). Tyler Cowen, an economist at George Mason University and a much-read blogger, writes in his most recent book, “Average is Over”, that rich economies seem to be bifurcating into a small group of workers with skills highly complementary with machine intelligence, for whom he has high hopes, and the rest, for whom not so much.

And although Mr Brynjolfsson and Mr McAfee rightly point out that developing the business models which make the best use of new technologies will involve trial and error and human flexibility, it is also the case that the second machine age will make such trial and error easier. It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers. Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.

In a forthcoming book Thomas Piketty, an economist at the Paris School of Economics, argues along similar lines that America may be pioneering a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers” grab a growing share of national income and accumulate an increasing concentration of national wealth. The rise of the middle-class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics. (…)

 

TWO GOOD READS

I came across two interesting articles which are related although the authors, both named Hunt, are not. Lacy Hunt argues why all the QEs are experimental failures with unknown (uncertain) consequences. Ben Hunt explains the differences between decision-making under risk vs decision-making under uncertainty.

Federal Reserve Policy Failures Are Mounting

Lacy H. Hunt, Ph.D., Economist

(…) Four considerations suggest the Fed will continue to be unsuccessful in engineering increasing growth and higher inflation with their continuation of the current program of Large Scale Asset Purchases (LSAP):

  • First, the Fed’s forecasts have consistently been too optimistic, which indicates that their knowledge of how LSAP operates is flawed. LSAP obviously is not working in the way they had hoped, and they are unable to make needed course corrections.
  • Second, debt levels in the U.S. are so excessive that monetary policy’s traditional transmission mechanism is broken.
  • Third, recent scholarly studies, all employing different rigorous analytical methods, indicate LSAP is ineffective.
  • Fourth, the velocity of money has slumped, and that trend will continue—which deprives the Fed of the ability to have a measurable influence on aggregate economic activity and is an alternative way of confirming the validity of the aforementioned academic studies.

1. The Fed does not understand how LSAP operates

If the Fed were consistently getting the economy right, then we could conclude that their understanding of current economic conditions is sound. However, if they regularly err, then it is valid to argue that they are misunderstanding the way their actions affect the economy.

During the current expansion, the Fed’s forecasts for real GDP and inflation have been consistently above the actual numbers. (…)

One possible reason why the Fed have consistently erred on the high side in their growth forecasts is that they assume higher stock prices will lead to higher spending via the so-called wealth effect. The Fed’s ad hoc analysis on this subject has been wrong and is in conflict with econometric studies. The studies suggest that when wealth rises or falls, consumer spending does not generally respond, or if it does respond, it does so feebly. During the run-up of stock and home prices over the past three years, the year-over-year growth in consumer spending has actually slowed sharply from over 5% in early 2011 to just 2.9% in the four quarters ending Q2.

Reliance on the wealth effect played a major role in the Fed’s poor economic forecasts. LSAP has not been able to spur growth and achieve the Fed’s forecasts to date, which certainly undermines the Fed’s continued assurances that this time will truly be different.

2. US debt is so high that Fed policies cannot gain traction

Another impediment to LSAP’s success is the Fed’s failure to consider that excessive debt levels block the main channel of monetary influence on economic activity. Scholarly studies published in the past three years document that economic growth slows when public and private debt exceeds 260% to 275% of GDP. In the U.S., from 1870 until the late 1990s, real GDP grew by 3.7% per year. It was during 2000 that total debt breached the 260% level. Since 2000, growth has averaged a much slower 1.8% per year.

Once total debt moved into this counterproductive zone, other far-reaching and unintended consequences became evident. The standard of living, as measured by real median household income, began to stagnate and now stands at the lowest point since 1995. Additionally, since the start of the current economic expansion, real median household income has fallen 4.3%, which is totally unprecedented. Moreover, both the wealth and income divides in the U.S. have seriously worsened.

Over-indebtedness is the primary reason for slower growth, and unfortunately, so far the Fed’s activities have had nothing but negative, unintended consequences.

3. Academic studies indicate the Fed’s efforts are ineffectual

(…) It is undeniable that the Fed has conducted an all-out effort to restore normal economic conditions. However, while monetary policy works with a lag, the LSAP has been in place since 2008 with no measurable benefit. This lapse of time is now far greater than even the longest of the lags measured in the extensive body of scholarly work regarding monetary policy.

Three different studies by respected academicians have independently concluded that indeed these efforts have failed. These studies, employing various approaches, have demonstrated that LSAP cannot shift the Aggregate Demand (AD) Curve. (…)

The papers I am talking about were presented at the Jackson Hole Monetary Conference in August 2013. The first is by Robert E. Hall, one of the world’s leading econometricians and a member of the prestigious NBER Cycle Dating Committee. He wrote, “The combination of low investment and low consumption resulted in an extraordinary decline in output demand, which called for a markedly negative real interest rate, one unattainable because the zero lower bound on the nominal interest rate coupled with low inflation put a lower bound on the real rate at only a slightly negative level.”

Dr. Hall also wrote the following about the large increase in reserves to finance quantitative easing: “An expansion of reserves contracts the economy.” In other words, not only have the Fed not improved matters, they have actually made economic conditions worse with their experiments. (…)

The next paper is by Hyun Song Shin, another outstanding monetary theorist and econometrician and holder of an endowed chair at Princeton University. He looked at the weighted-average effective one-year rate for loans with moderate risk at all commercial banks, the effective Fed Funds rate, and the spread between the two in order to evaluate Dr. Hall’s study. He also evaluated comparable figures in Europe. In both the U.S. and Europe these spreads increased, supporting Hall’s analysis.

Dr. Shin also examined quantities such as total credit to U.S. non-financial businesses. He found that lending to non-corporate businesses, which rely on the banks, has been essentially stagnant. Dr. Shin states, “The trouble is that job creation is done most by new businesses, which tend to be small.” Thus, he found “disturbing implications for the effectiveness of central bank asset purchases” and supported Hall’s conclusions.

Dr. Shin argued that we should not forget how we got into this mess in the first place when he wrote, “Things were not right in the financial system before the crisis, leverage was too high, and the banking sector had become too large.” For us, this insight is highly relevant since aggregate debt levels relative to GDP are greater now than in 2007. Dr. Shin, like Dr. Hall, expressed extreme doubts that forward guidance was effective in bringing down longer-term interest rates.

The last paper is by Arvind Krishnamurthy of Northwestern University and Annette Vissing-Jorgensen of the University of California, Berkeley. They uncovered evidence that the Fed’s LSAP program had little “portfolio balance” impact on other interest rates and was not macro-stimulus. (…)

4. The velocity of money—outside the Fed’s control

The last problem the Fed faces in their LSAP program is their inability to control the velocity of money. The AD curve is planned expenditures for nominal GDP. Nominal GDP is equal to the velocity of money (V) multiplied by the stock of money (M), thus GDP = M x V. This is Irving Fisher’s equation of exchange, one of the important pillars of macroeconomics.
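To make Fisher's identity concrete, here is a minimal sketch in Python; the nominal GDP figure is an illustrative assumption, not a number from the article:

```python
# Irving Fisher's equation of exchange: nominal GDP = M x V,
# so velocity can be backed out as V = nominal GDP / M.
nominal_gdp = 16.9e12  # nominal GDP in dollars (illustrative assumption)
m2 = 10.8e12           # M2 money stock in dollars (figure cited later in the piece)

velocity = nominal_gdp / m2
print(f"Implied velocity of M2: {velocity:.2f}")  # ~1.56 with these inputs

# The Fed can expand M through asset purchases, but it does not control V;
# if V falls as M rises, nominal GDP need not respond.
```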

V peaked in 1997, as private and public debt were quickly approaching the nonproductive zone. Since then it has plunged. The level of velocity in the second quarter is at its lowest level in six decades. By allowing high debt levels to accumulate from the 1990s until 2007, the Fed laid the foundation for rendering monetary policy ineffectual. Thus, Fisher was correct when he argued in 1933 that declining velocity would be a symptom of extreme indebtedness just as much as weak aggregate demand.

Fisher was able to make this connection because he understood Eugen von Böhm-Bawerk’s brilliant insight that debt is future consumption denied. Also, we have the benefit of Hyman Minsky’s observation that debt must be able to generate an income stream to repay principal and interest, thereby explaining that there is such a thing as good (productive) debt as opposed to bad (non-productive) debt. Therefore, the decline in money velocity when there are very high levels of debt to GDP should not be surprising. Moreover, as debt increases, so does the risk that it will be unable to generate the income stream required to pay principal and interest.

(chart from Ed Yardeni)

Perhaps well intended, but ill advised

The Fed’s relentless buying of massive amounts of securities has produced no positive economic developments, but has had significant negative, unintended consequences.

For example, banks have a limited amount of capital with which to take risks with their portfolio. With this capital, they have two broad options: First, they can confine their portfolio to their historical lower-risk role of commercial banking operations—the making of loans and standard investments. With interest rates at extremely low levels, however, the profit potential from such endeavors is minimal.

Second, they can allocate resources to their proprietary trading desks to engage in leveraged financial or commodity market speculation. By their very nature, these activities are potentially far more profitable but also much riskier. Therefore, when money is allocated to the riskier alternative in the face of limited bank capital, less money is available for traditional lending. This deprives the economy of the funds needed for economic growth, even though the banks may be able to temporarily improve their earnings by aggressive risk taking.

Perversely, confirming the point made by Dr. Hall, a rise in stock prices generated by excess reserves may sap, rather than supply, funds needed for economic growth.

Incriminating evidence: the money multiplier

It is difficult to determine for sure whether funds are being sapped, but one visible piece of evidence strongly suggests that this is the case: the unprecedented downward trend in the money multiplier.

The money multiplier is the link between the monetary base (high-powered money) and the money supply (M2); it is calculated by dividing M2 by the monetary base. Today the monetary base is $3.5 trillion, and M2 stands at $10.8 trillion, so the money multiplier is 3.1. In 2008, prior to the Fed’s massive expansion of the monetary base, the money multiplier stood at 9.3, meaning that $1 of base money supported $9.30 of M2.
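The multiplier arithmetic can be checked directly from the article's own figures; a quick sketch:

```python
# Money multiplier = M2 / monetary base, using the figures quoted above.
monetary_base = 3.5e12  # $3.5 trillion
m2 = 10.8e12            # $10.8 trillion

multiplier = m2 / monetary_base
print(f"Money multiplier: {multiplier:.1f}")  # -> 3.1, versus 9.3 in 2008
```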

If reserves created by LSAP were spreading throughout the economy in the traditional manner, the money multiplier should be more stable. However, if those reserves were essentially funding speculative activity, the money would remain with the large banks and the money multiplier would fall. This is the current condition.

The September 2013 level of 3.1 is the lowest in the entire 100-year history of the Federal Reserve. Until the last five years, the money multiplier never dropped below the old historical low of 4.5 reached in late 1940. Thus, LSAP may have produced the unintended consequence of actually reducing economic growth.

Stock market investors benefited, but this did not carry through to the broader economy. The net result is that LSAP worsened the gap between high- and low-income households. When policy makers try untested theories, risks are almost impossible to anticipate.

The near-term outlook

Economic growth should be very poor in the final months of 2013. Growth is unlikely to exceed 1%—that is even less than the already anemic 1.6% rate of growth in the past four quarters.

Marked improvement in 2014 is also questionable. Nominal interest rates have increased this year, and real yields have risen even more sharply because the inflation rate has dropped significantly. Due to the recognition and implementation lags, only half of the 2013 tax increase of $275 billion will have been registered by the end of the year, with the remaining impact to come in 2014 and 2015.

Additionally, parts of this year’s tax increase could carry a negative multiplier of two to three. Currently, many of the taxes and other cost burdens of the Affordable Care Act are in the process of being shifted from corporations and profitable small businesses to households, thus serving as a de facto tax increase. In such conditions, the broadest measures of inflation, which are barely exceeding 1%, should weaken further. Since LSAP does not constitute macro-stimulus, its continuation is equally meaningless. Therefore, the decision of the Fed not to taper makes no difference for the outlook for economic growth.

Ben Hunt (Epsilon Theory) sent me this note which I reproduce in its entirety because of its importance in the investment decision making process.

Epsilon Theory: The Koan of Donald Rumsfeld

There are known knowns; there are things we know we know.

We also know there are known unknowns; that is to say, we know there are some things we do not know.

But there are also unknown unknowns – the ones we don’t know we don’t know.

Donald Rumsfeld

There is an unmistakable Zen-like quality to this, my favorite of Donald Rumsfeld’s often cryptic statements. I like it so much because what Rumsfeld is describing perfectly in his inimitable fashion are the three forms of game theoretic decisions:

Decision-making under certainty – the known knowns. This is the sure thing, like betting on the sun coming up tomorrow, and it is a trivial sub-set of decision-making under risk where probabilities collapse to zero or one.

Decision-making under risk – the known unknowns, where we are reasonably confident that we know the potential future states of the world and the rough probability distributions associated with those outcomes. This is the logical foundation of Expected Utility, the formal language of microeconomic behavior, and mainstream economic theory is predicated on the prevalence of decision-making under risk in our everyday lives.

Decision-making under uncertainty – the unknown unknowns, where we have little sense of either the potential future states of the world or, obviously, the probability distributions associated with those unknown outcomes. This is the decision-making environment faced by a Stranger in a Strange Land, where traditional cause-and-effect is topsy-turvy and personal or institutional experience counts for little, where good news is really bad news and vice versa. Sound familiar?

The sources of today’s market uncertainty are the same as they have always been throughout history – pervasive credit deleveraging and associated political strife. In the Middle Ages, these periods of deleveraging and strife were typically the result of political pursuit of wars of conquest … Edward III and his 14th century exploits in The Hundred Years War, say, or Edward IV and his 15th century exploits in The War of the Roses. Today, our period of deleveraging and strife is the result of political pursuit of la dolce vita … a less bloody set of exploits, to be sure, but no less expensive or impactful on markets. PIMCO co-CIO, Mohamed El-Erian, has a great quote to summarize this state of affairs – “Investors are in the back seat, politicians in the front seat, and it is very foggy through the windscreen.” – and the events of the past two weeks in Washington serve to confirm this observation … yet again. Of course, central banks are political institutions and central bankers are political animals, and the largest monetary policy experiment ever devised by humans should be understood in this political context. The simple truth is that no one knows how the QE story ends or what twists and turns await us. The crystal ball is broken and it’s likely to stay broken for years and years.

We are enduring a world of massive uncertainty, which is not at all the same thing as a world of massive risk. We tend to use the terms “risk” and “uncertainty” interchangeably, and that may be okay for colloquial conversation. But it’s not okay for smart decision-making, whether the field is foreign policy or investment, because the process of rational decision-making under conditions of risk is very different from the process of rational decision-making under conditions of uncertainty. The concept of optimization is meaningful and precise in a world of risk; much less so in a world of uncertainty. That’s because optimization is, by definition, an effort to maximize utility given a set of potential outcomes with known (or at least estimable) probability distributions. Optimization works whether you have a narrow range of probabilities or a wide range. But if you have no idea of the shape of underlying probabilities, it doesn’t work at all. As a result, applying portfolio management, risk management, or asset allocation techniques developed as exercises in optimization – and that includes virtually every piece of analytical software on the market today – may be sub-optimal or downright dangerous in an uncertain market. That danger also includes virtually every quantitatively trained human analyst!

All of these tools and techniques and people will still generate a risk-based “answer” even in the most uncertain of times because they are constructed and trained on the assumption that probability estimations and long-standing historical correlations have a lot of meaning regardless of circumstances. It’s not their fault, and their math isn’t wrong. They just haven’t been programmed to step back and evaluate whether their finely honed abilities are the right tool for the environment we’re in today.

My point is not to crawl under a rock and abandon any attempt to optimize a portfolio or an allocation … for most professional investors or allocators this is professional suicide. My point is that investment decisions designed to optimize – regardless of whether the target of that optimization is an exposure, a portfolio, or an allocation  – should incorporate a more agnostic and adaptive perspective in an uncertain market. We should be far less confident in our subjective assignment of probabilities to future states of the world, with far broader margins of error in those subjective evaluations than we would use in more “normal” times. Fortunately, there are decision-making strategies designed explicitly to incorporate this sort of perspective, to treat probabilities in an entirely different manner than that embedded in mainstream economic theory. One in particular – Minimax Regret – eliminates the need to assign any probability distribution whatsoever to potential outcomes.

Minimax Regret, developed in 1951 by Leonard “Jimmie” Savage, is a cornerstone of what we now refer to as behavioral economics. Savage played a critical role, albeit behind the scenes, in the work of three immortals of modern social science. He was John von Neumann’s right-hand man during World War II, a close colleague of Milton Friedman’s (the second half of the Friedman-Savage utility function), and the person who introduced Paul Samuelson to the concept of random walks and stochastic processes in finance (via Louis Bachelier) … not too shabby! Savage died in 1971 at the age of 53, so he’s not nearly as well-known as he should be, but his Foundations of Statistics remains a seminal work for anyone interested in decision-making in general and Bayesian inference in particular.

As the name suggests, the Minimax Regret strategy seeks to minimize your maximum regret in any decision process. This is not at all the same thing as minimizing your maximum loss. The concept of regret is a much more powerful and flexible concept than mere loss, because it injects an element of subjectivity into a decision calculus. Is regret harder to program into a computer algorithm than simple loss? Sure. But that’s exactly what makes it much more human, and that’s why I think you may find the methodology more useful.

Minimax Regret downplays (or eliminates) the role that probability distributions play in the decision-making process. While any sort of Expected Utility or optimization approach seeks to evaluate outcomes in the context of the odds associated with those outcomes coming to pass, Minimax Regret says forget the odds … how would you feel if you pay the cost of Decision A and Outcome X occurs? What about Decision A and Outcome Y? Outcome Z? What about Decision B and Outcome X, Y, or Z?  Make that subjective calculation for every potential combination of decision + outcome you can imagine, and identify the worst possible outcome “branch” associated with each decision “tree”. Whichever decision tree holds the best of these worst possible outcome branches is the rational decision choice from a Minimax Regret perspective.
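For concreteness, here is a minimal sketch of the procedure Hunt describes, applied to an invented payoff table; the decisions, outcomes, and numbers are hypothetical, and only the procedure itself is Savage's:

```python
# Savage's minimax-regret rule on a toy decision problem.
# Payoffs are hypothetical illustrations, not market forecasts.
payoffs = {
    # decision: {outcome: payoff}
    "stay fully invested": {"rally": 20, "sideways": 5, "crash": -30},
    "hold mostly cash":    {"rally": 2,  "sideways": 2, "crash": 2},
    "hedged portfolio":    {"rally": 10, "sideways": 3, "crash": -5},
}
outcomes = ["rally", "sideways", "crash"]

# Regret = best payoff achievable under that outcome minus what you actually got.
best_by_outcome = {o: max(p[o] for p in payoffs.values()) for o in outcomes}
max_regret = {
    d: max(best_by_outcome[o] - p[o] for o in outcomes)
    for d, p in payoffs.items()
}

# Pick the decision whose worst regret branch is smallest. No probabilities
# are assigned anywhere in the calculation.
choice = min(max_regret, key=max_regret.get)
print(max_regret)                        # {'stay fully invested': 32, ...}
print("Minimax-regret choice:", choice)  # -> 'hedged portfolio' with this table
```

With this table, the all-cash stance is penalised for the rally it would miss and the fully invested stance for the crash it would suffer, so the hedged stance wins; that is regret, rather than raw loss, doing the work, which is exactly the distinction drawn in the next paragraph.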

This is different from maximum loss calculation in many respects. For example, if the maximum loss outcome is rather apocalyptic, where it is extremely costly to prepare and you’re still pretty miserable even if you did prepare, most people will not experience this as a maximum regret outcome even if they make no preparations whatsoever to mitigate its impact. On the other hand, many people will experience substantial regret, perhaps even maximum regret, if the outcome is a large gain in which they do not share because they failed to prepare for it. Minimax Regret is a subjective decision-making strategy that captures the disutility of both missed opportunities as well as suffered losses, which makes it particularly appropriate for investment decisions that must inevitably incorporate the subjective feelings of greed and fear.

Minimax Regret requires a decision-maker to know nothing about the likelihood of this future state of the world or that future state. Because of its subjective foundations, however, it requires its practitioners to know a great deal about their own utility for this future state of the world or that future state. The motto of Minimax Regret is not Know the World … it’s Know Thyself.

It’s also an appropriate decision-making strategy where you DO know the odds associated with the potential decision-outcomes, but where you have so few opportunities to make decisions that the stochastic processes of the underlying probability distributions don’t come into play. To use a poker analogy, my decision-making process should probably be different if I’m only going to be dealt one hand or if I’m playing all night. The former is effectively an environment of uncertainty and the latter an environment of risk, even though the risk attributes are clearly defined in both. This is an overwhelming issue in decision-making around, say, climate change policy, where we are only dealt a single hand (unless that Mars terraforming project picks up speed) and where both decisions and outcomes take decades to reveal themselves. It’s less of an issue in most investment contexts, but can certainly rear its ugly head in illiquid securities or strategies.

Is this a risk-averse strategy? In theory, no, but in practice, yes, because the most regret-filled outcomes tend to be low-probability outcomes. If the “real” probability distributions of future outcomes were magically revealed to us … if we could just get our crystal ball working again … then an Expected Utility analysis of pretty much any Minimax Regret decision-making process would judge it as risk-averse. But that’s just the point … our crystal ball isn’t working, and it won’t so long as we have profound political fragmentation within and between the major economic powers of the world.

I’m not saying that Minimax Regret is the end-all and be-all. The truth is that the world is never entirely uncertain or without historical correlations that provide useful indications of what may be coming down the pike, and there are plenty of other ways to be more agnostic and adaptive in our investment decision-making without abandoning probability estimation entirely. But there’s no doubt that our world is more uncertain than it was five years ago, and there’s no doubt that there’s an embedded assumption of probabilistic specification in both the tools and the people that dominate modern risk management and asset allocation theory. Minimax Regret is a good example of an alternative decision-making approach that takes this uncertainty and lack of probabilistic specification seriously without sacrificing methodological rigor. As a stand-alone decision algorithm it’s a healthy corrective or decision-making supplement, and I believe it’s possible to incorporate its subjective Bayesian tenets directly into more mainstream techniques. Stay tuned …

If you want to sign up for the free direct distribution of Ben’s weekly notes and occasional emails, either contact him directly at ben.hunt@epsilontheory or click to Follow Epsilon Theory. All prior notes and emails are archived on the website.

 

The Equity Drumbeaters Are Out

(Note: my good friend I. Bernobul just published this post on his blog)

Now that equity prices have more than doubled and it has become fashionable to be bullish, the gurus are out with their often convoluted theories and the media are all too happy to act as megaphones. However, one would expect the more serious media to be a little critical and choosy.

James W. Paulsen, chief investment strategist at Wells Capital Management, was given front-page exposure in Monday’s Financial Times to trumpet the arrival of “the second confidence-driven bull market of the postwar era”. Unfortunately, his facts and figures are not what one would expect from the FT. Some excerpts with my comments (my emphasis):

(…) But while many investors have turned cautious, the bull market has probably not ended. This is because the primary force driving the stock market is not earnings performance, low yields or quantitative easing; rather, it is a slow but steady revival in confidence, a trend that is just beginning.

In this scenario, investors need not be overly concerned about slower earnings growth. While earnings are obviously important, stock prices have frequently diverged from earnings trends. In fact, for the third time in the postwar era, stock prices and earnings are repeating a remarkably similar three-stage cycle.

Here’s Paulsen’s recipe:

First, earnings surge while the stock market remains essentially flat (the earnings production cycle). Second, earnings performance flattens while the stock market surges (the valuation cycle). Finally, both stock prices and earnings move in tandem (the traditional cycle). It appears the contemporary bull market has just entered the second phase, making earnings growth less important. (…)

So, for the third time in the postwar era, we would be in a “remarkably similar” three-stage cycle which apparently begins with an earnings surge accompanied by a flat market. Mr. Paulsen does not divulge when exactly his first stage began, but the fact is that the stock market has doubled along with surging earnings between March 2009 and May 2012.

Nonetheless, “it appears the contemporary bull market has just entered the second phase”, when “earnings performance flattens while the stock market surges”. If there is such a “second phase”, it began in the spring of 2012 when equity prices rose 30% on flat earnings.

So much for the “remarkably similar” three-stage cycle. But there’s more:

In both the 1950s and the 1980s, the earnings cycle was followed by an explosive stock market run despite almost flat earnings performance. Between 1952 and 1962 the market rose about 3.5 times, while from 1982 to about 1994 it surged almost fourfold.

Here’s the chart for the 1952-62 period during which we can see five periods when earnings and equity prices moved pretty much in tandem. Based on monthly closes, the S&P 500 Index actually tripled between January 1952 and the December 1961 peak.

(chart)

There were actually two broad stages between 1952 and 1962:

  • Between 1952 and September 1958, earnings grew only 18% while the S&P 500 Index doubled from extremely undervalued levels after going through highly volatile times following the end of WWII. Inflation went from 2% in 1945 to 20% in 1947 to -3% in 1949, to 9% in mid-1951, to -0.7% in mid-1955 and to 3% at the end of 1957. Understandably, investors were totally uncomfortable with this extreme volatility. As a result, P/Es on trailing earnings plummeted from 22 times in June 1946 to a deeply undervalued 6 times in June 1949. By January 1952, P/Es had recovered to 10x but were still very undervalued based on the Rule of 20 formula, which then called for P/Es around 17-18x.
  • Multiples reached 19 in December 1958 as inflation decelerated from 3.6% in the spring of 1958 to less than 1% 12 months later. Between September 1958 and the end of 1962, equities rose 26% while earnings rose 20%, not a meaningful discrepancy.

The same can be said of the 1982-1994 period which is Paulsen’s second “remarkably similar period”.

Inflation reached 14.8% in March 1980 when the U.S. economy was in recession. Inflation receded to 8% in early 1982, but a second recession had begun in July 1981. Equity prices troughed in July 1982 at 7.7x earnings as 10Y Treasury yields reached 14%. Earnings bottomed in December 1982 and doubled by June 1989. Meanwhile, the S&P 500 Index more than tripled as P/E ratios reached 14.5 in June 1989, right where the Rule of 20 said they should be, since inflation was then 5%.

The U.S. entered a mild recession in July 1990 but the sudden 170% jump in oil prices between July and October 1990 created a severe margins squeeze which brought earnings down 25% by the end of 1991. The successful Operation Desert Storm and the subsequent rapid decline in oil prices led investors to expect a rapid restoration of profit margins which brought P/E ratios to 21x by the spring of 1992.

(chart)

In brief, Paulsen’s characterisations of the 1952-1962 and the 1982-1994 periods are far-fetched attempts to fit his theory, and those periods are certainly not even close to the present circumstances.

But on with more recent data:

In the contemporary era, since autumn 2012, despite earnings growth slowing to low single-digit rates, the price-earnings multiple has risen from about 13 times to about 16 times. If this means the stock market just entered its third “valuation cycle” of the postwar era, is slower earnings growth really that worrying?

We are obviously nowhere near the extreme undervaluation levels of the so-called “remarkably similar periods”. Actually, using trailing earnings rather than Paulsen’s forecast, the S&P 500 Index is currently selling at 17x earnings. With inflation at 1.8%, the Rule of 20 P/E is 18.8x, a mere 6% below the “20” fair value level. If there was a “third valuation cycle”, we’ve just had it.
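Since the Rule of 20 anchors this blog's valuation work, here is the arithmetic behind that sentence as a small sketch; the rule is the blog's stated convention and the inputs are the figures just quoted:

```python
# Rule of 20: trailing P/E plus CPI inflation should equal 20 at fair value.
trailing_pe = 17.0  # S&P 500 trailing P/E (from the text)
inflation = 1.8     # CPI inflation, percent (from the text)

rule_of_20_pe = trailing_pe + inflation        # -> 18.8
gap_to_fair_value = (20 - rule_of_20_pe) / 20  # -> 0.06, i.e. 6% below 20
print(f"Rule of 20 P/E: {rule_of_20_pe:.1f} "
      f"({gap_to_fair_value:.0%} below the fair-value level of 20)")
```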

As to the question whether “slower earnings growth is really that worrying?”, the chart below, covering 1946 to the present, is a clear reminder of the importance of profits in equity valuation.

(chart)

Yes, equity markets can, and sometimes do, for brief periods, rise in spite of slow or even negative earnings growth. Unless they are correcting extremely low valuation levels, such advances inevitably bring markets to overvalued levels, which significantly raises investors’ risk.

Mr. Paulsen does address some of the risks:

One concern is that the recent rise in US bond yields will abort the stock market bull run. But rising bond yields reflect improving economic confidence, rather than increasing inflation expectations or concerns about the creditworthiness of the US government. A rise in bond yields predicated on a growing belief the “world will not soon end” hardly seems bad for the stock market. Indeed, since 1967, when bond yields have risen in tandem with consumer confidence, the stock market has advanced at almost 12 per cent a year. (…)

Hmmm! Never heard that one. It would have been nice to get the details, especially given that, during the 45 years since 1967, only the first 14 years have seen a marked rise in long-term interest rates.

(chart)

Just for fun, however, I checked Paulsen’s assertion since 1977, the first year for which I have data on the Conference Board Consumer Sentiment Index. I found only three periods when interest rates rose in tandem with consumer confidence:

  • April 1983 to July 1984: S&P 500 down 7.9%.
  • January 1987 to October 1987: down 8%.
  • September 1998 to February 2000: +34%. The dot.com bubble years!

You might want to scratch that last one. But there is more:

Since 1900, there have been three major bull markets, in the 1920s, 1950s-60s and the 1980s-90s. Both the first and the third of these were driven by a persistent decline in interest rates, an option not feasible today. However, the 1950s-60s bull market was characterised by a simultaneous rise in both stock prices and bond yields, driven by rising confidence. (…)

May I just mention that earnings grew at a 5.0% compound annual growth rate between 1950 and 1969. As to rising confidence, read the above comments on the 1952-62 period once again.

Mr. Paulsen’s conclusion:

Certainly earnings, bond yields and Fed actions will create some turbulence along the way. But beware of becoming too myopically focused on these mainstream issues lest you miss what could be the second confidence-driven bull market of the postwar era.

Now, how confident are you?

(chart)

(Thanks to news-to-use.com for the charts)

 

The Coming Arctic Boom

As the Ice Melts, the Region Heats Up. Excerpts from a July 2013 Foreign Affairs article written by Scott G. Borgerson. Good read.

The ice was never supposed to melt this quickly. (…) In 2007, the Intergovernmental Panel on Climate Change estimated that Arctic summers would become ice free beginning in 2070. Yet more recent satellite observations have moved that date to somewhere around 2035, and even more sophisticated simulations in 2012 moved the date up to 2020. Sure enough, by the end of last summer, the portion of the Arctic Ocean covered by ice had been reduced to its smallest size since record keeping began in 1979, shrinking by 350,000 square miles (an area equal to the size of Venezuela) since the previous summer. All told, in just the past three decades, Arctic sea ice has lost half its area and three quarters of its volume. 

It’s not just the ocean that is warming. In 2012, Greenland logged its hottest summer in 170 years, and its ice sheet experienced more than four times as much surface melting as it had during an average year over the previous three decades. That same year, eight of the ten permafrost-monitoring sites in northern Alaska registered their highest-ever temperatures, and the remaining two tied record highs. (…)

The region’s melting ice and thawing frontier are yielding access to troves of natural resources, including nearly a quarter of the world’s estimated undiscovered oil and gas and massive deposits of valuable minerals. Since summertime Arctic sea routes save thousands of miles during a journey between the Pacific Ocean and the Atlantic Ocean, the Arctic also stands to become a central passageway for global maritime transportation, just as it already is for aviation. (…)

Most cartographic depictions conceal the Arctic’s physical vastness. Alaska, which U.S. maps usually relegate to a box off the coast of California, is actually two and a half times as large as Texas and has more coastline than the lower 48 states combined. Greenland is larger than all of western Europe. The area inside the Arctic Circle contains eight percent of the earth’s surface and 15 percent of its land. 

It also includes massive oil and gas deposits — the main reason the region is so economically promising. (…) The Arctic may be home to an estimated 22 percent of the world’s undiscovered conventional oil and gas deposits, according to the U.S. Geological Survey. These riches have become newly accessible and attractive, thanks to retreating sea ice, a lengthening summer drilling season, and new exploration technologies.

Private companies are already moving in. Despite high extraction costs and regulatory hurdles, Shell has invested $5 billion to look for oil in Alaska’s Chukchi Sea, and the Scottish company Cairn Energy has invested $1 billion to do the same off the coast of Greenland. Gazprom and Rosneft are planning to invest many billions of dollars more to develop the Russian Arctic, where the state-owned companies are partnering with ConocoPhillips, ExxonMobil, Eni, and Statoil to tap remote reserves in Siberia. (…) Moreover, [the fracking] boom has also reached the Arctic. Oil fracking exploration has already begun in northern Alaska, and this past spring, Shell and Gazprom signed a major deal to develop shale oil in the Russian Arctic.

Then there are the minerals. Longer summers are now providing additional time to prospect for mineral deposits, and retreating sea ice is opening deep-water ports for their export. The Arctic is already home to the world’s most productive zinc mine, Red Dog, in northern Alaska, and its most productive nickel mine, in Norilsk, in northern Russia. (…)

Alaska has more than 150 prospective deposits of rare-earth elements, and if the state were its own country, it would rank in the top ten in global reserves for many of these minerals. And all these assets are just the beginning. The Arctic has only begun to be surveyed. Once the digging starts, there is every reason to expect that, as often happens, even greater quantities of riches will be uncovered.

The coming Arctic boom will involve more than just mining and drilling. The region’s Boreal forests of spruces, pines, and firs account for eight percent of the earth’s total wood reserves, and its waters already produce ten percent of the world’s total fishing catch. Converted tankers may someday ship clean water from Alaskan glaciers to southern Asia and Africa. (…)

As the sea ice melts, once-fabled shipping shortcuts are becoming a reality. (…) Although the Northern Sea Route has a long way to go before it siphons off a meaningful portion of traffic from the Suez and Panama Canals, it is no longer just a mariner’s fantasy; it is an increasingly viable seaway for tankers looking to shave thousands of nautical miles off the traditional routes that go through the Strait of Malacca and the Strait of Gibraltar. It also provides a new export channel for warming farmlands and emerging mines along Russia’s northern coast, where some of the country’s largest rivers empty into the Arctic Ocean. (…)

 

GOOD READ: Decision time for China

(…) Then the businessman added: “Look, I don’t lose too much sleep over China’s economic troubles; but I do worry, tremendously, about a political explosion tearing the place apart”. The dramatic political destruction in March of Bo Xilai, one of China’s most thrustingly ambitious and charismatic regional Communist Party bosses, has set off that explosion. The shockwaves are convulsing China at a crucial political juncture. (…)

From “Decision time for China” by Rosemary Righter, The Times Literary Supplement (TLS).

 

GOOD READ: OIL: THE NEXT REVOLUTION

THE UNPRECEDENTED UPSURGE OF OIL PRODUCTION CAPACITY AND WHAT IT MEANS FOR THE WORLD
LEONARDO MAUGERI, Harvard Kennedy School (Belfer Center for Science and International Affairs)

EXECUTIVE SUMMARY (Full report (pdf))

Contrary to what most people believe, oil supply capacity is growing worldwide at such an unprecedented level that it might outpace consumption. This could lead to a glut of overproduction and a steep dip in oil prices.

Based on original, bottom-up, field-by-field analysis of most oil exploration and development projects in the world, this paper suggests that an unrestricted, additional production (the level of production targeted by each single project, according to its schedule, unadjusted for risk) of more than 49 million barrels per day of oil (crude oil and natural gas liquids, or NGLs) is targeted for 2020, the equivalent of more than half the current world production capacity of 93 mbd.

After adjusting this substantial figure considering the risk factors affecting the actual accomplishment of the projects on a country-by-country basis, the additional production that could come by 2020 is about 29 mbd. Factoring in depletion rates of currently producing oilfields and their “reserve growth” (the estimated increases in crude oil, natural gas, and natural gas liquids that could be added to existing reserves through extension, revision, improved recovery efficiency, and the discovery of new pools or reservoirs), the net additional production capacity by 2020 could be 17.6 mbd, yielding a world oil production capacity of 110.6 mbd by that date – as shown in Figure 1. This would represent the most significant increase in any decade since the 1980s.

(Figure 1)
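The summary's chain of adjustments restates as simple arithmetic; a small sketch using only the figures quoted above (the net depletion offset is implied by the excerpt, not stated in it):

```python
# The report's top-line capacity accounting, in mbd (million barrels per day).
current_capacity = 93.0         # current world production capacity
unrestricted_additions = 49.0   # project targets for 2020, unadjusted for risk
risk_adjusted_additions = 29.0  # after country-by-country risk adjustment
net_additions = 17.6            # after depletion and reserve growth

implied_depletion_offset = risk_adjusted_additions - net_additions
capacity_2020 = current_capacity + net_additions
print(f"Implied net depletion offset: {implied_depletion_offset:.1f} mbd")  # 11.4
print(f"Projected 2020 capacity: {capacity_2020:.1f} mbd")                  # 110.6
```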

The economic prerequisite for this new production to develop is a long-term price of oil of $70 per barrel. Indeed, at current costs, less than 20 percent of the new production seems profitable at prices lower than this level.

REBUTTAL: New Energy Report from Harvard Makes Unsupportable Assumptions

 

GOOD READ: Richard Koo On Why Europe’s Austerity Will Cause Deflationary Spiral

Good stuff from ZeroHedge:

While not new to our thoughts, Richard Koo, Nomura’s Balance Sheet Recession guru, has penned a lengthy but complete treatise on why governments need to borrow and spend now or the world faces a deflationary spiral. The Real-World Economics Review posting makes it clear how his balance sheet recessionary perspective of the deleveraging and ZIRP trap we now live in means bigger and more Keynesian efforts are needed to pull ourselves out of the hole. While we agree wholeheartedly with his diagnosis of the problem, his belief in the solution…

Rest is here.