First, occupational certificate programs can serve as efficient, alternative paths to a middle class wage. But they are often criticized as a form of “tracking” that takes low-income students off the academic path. Making certificates “stackable”—like the energy industry and Texas community colleges have—allows students to layer individual certificates and build toward a higher credential if they want one. Students can then take smaller doses of occupational training when they need it and be confident it all counts toward something larger. Not all innovation requires a wireless signal.
Second, thanks to advances in technology, the components of a postsecondary education—content, instruction, assessment—are now inexpensive and abundantly available. Advances like online learning and competency-based models, where students earn credit based on how much they learn rather than time spent in class, could dramatically reduce the cost of higher education. Targeted, short-term occupational training can help close skills gaps even if it doesn’t lead to a formal credential.
But regulatory barriers like accreditation and the rules governing financial aid keep innovative, low-cost providers out and make it difficult for existing colleges to change. Lowering these barriers to entry would both expand the number of affordable options and put pressure on existing colleges to contain their costs.
Fed Chair Janet Yellen thinks the big drop in the labor force participation rate (LFPR) since 2007 is “mostly structural” rather than cyclical — but concedes it’s tough to apportion causality. In a new research note, Citi takes its best shot at determining how much of the drop in the jobless rate reflects a structural, as opposed to cyclical, decline in the LFPR:
We define a shadow unemployment rate due to certain types of underemployed workers who are not counted as part of the official unemployment rate. In addition to the unemployed themselves, it includes part-time employees working fewer hours than they would wish, as well as workers who very recently left the labor force but stand ready to reenter when conditions improve. We provide a more refined approach than existing broader measures of unemployment (like the BLS’s U-6, for instance) by pinpointing how many of these underemployed workers are cyclical as opposed to frictional or structural.
Our estimate of the current shadow unemployment rate is 7.1 percent, which is ½ percentage point above the official rate. While a 50 basis point adjustment is not trivial by any means – i.e., it increases the gap between the official rate and the long-run natural rate of unemployment by almost 50 percent – it is also far smaller than suggested by other broader measures of unemployment. All told, our deep investigation of these sources of underemployment reveals surprisingly little cyclicality over and above standard measures of the unemployed. As a result, the shadow unemployment rate will likely provide only a moderately sized buffer to wage and inflation pressure as the economy continues to improve.
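Citi’s arithmetic is easy to check with a quick sketch. The official jobless rate (roughly 6.6% at the time) and the long-run natural rate (roughly 5.5%) are my fill-ins for illustration, not figures from the Citi note:

```python
# Sketch of Citi's shadow-unemployment arithmetic.
# Assumed inputs: an official jobless rate of ~6.6% and a long-run
# natural rate of ~5.5%; Citi's exact inputs aren't quoted above.
official_rate = 6.6
natural_rate = 5.5
shadow_rate = official_rate + 0.5  # Citi's 50bp adjustment -> 7.1%

gap_official = official_rate - natural_rate   # slack vs. natural rate, official
gap_shadow = shadow_rate - natural_rate       # slack including shadow unemployed
increase = (gap_shadow - gap_official) / gap_official

print(f"shadow rate: {shadow_rate:.1f}%")     # 7.1%
print(f"gap widens by {increase:.0%}")        # ~45%, i.e. 'almost 50 percent'
```

With these assumed inputs the half-point adjustment widens the slack gap by about 45%, matching the note’s “almost 50 percent” characterization.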
Here is Goldman Sachs on the same LFPR issue:
– The labor force participation rate has dropped sharply since 2007 and has only recently begun to show signs of stabilization. Economists generally agree that about half of the post-crisis decline is due to demographic factors but disagree on how much of the remainder is due to cyclical, as opposed to structural, factors. In today’s comment we examine the participation rate of the young and the old to shed new light on this issue.
– For young individuals, we show that “continuing with school” accounts for almost the entire drop in the participation rate of the 18-24 year olds. The decline in their participation rate relative to the long-run trend therefore appears both cyclical and reversible. For old individuals, stock market performance matters to those who are near retirement age. But given the cyclicality of stock prices, the division between “cyclical” and “structural” factors is blurred.
– Taken together, the evidence presented above is broadly consistent with our view that a significant proportion of the participation rate decline was driven by cyclical factors and that the unemployment rate understates the extent of slack in the labor market. But our analysis also highlights that the uncertainty about the size of labor market slack is considerable.
Google recently announced it was acquiring UK artificial-intelligence firm DeepMind Technologies. Reportedly as part of the deal, Google agreed to create an ethics board to make sure the AI technology was not abused. Now whether this was due to privacy concerns or Skynet concerns isn’t clear. But the action did prompt many media stories about the likelihood of out-of-control computers destroying humanity. Historian Edward Tenner is somewhat less worried:
Artificial intelligence researchers themselves acknowledge that many tasks have taken far longer than their predecessors had predicted, leading in the past to disappointing results and funding slumps known as “AI winters.” Computer scientists specializing in computational complexity aren’t sure whether brain modeling belongs in the category of problems so hard that centuries of hardware and software progress couldn’t solve them. Every so often, strikingly efficient computer procedures take experts by surprise, such as Google’s search algorithm in the 1990s. Artificial superintelligence may seem improbable, but history is full of great minds who said new inventions were impossible. As science fiction writer Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” In this case, will it be black magic?
The most serious reason for skepticism about such technological developments is not a philosophical, physical, or psychological objection but one from everyday experience. I would take warnings about the dangers of superintelligent machines more seriously if today’s computers were able to make themselves more resistant to human hackers and to detect and repair their own faults. Organizations with access to some of the most advanced supercomputers and gifted programmers have been hacked again and again by individuals and groups with modest resources, compromising everything from credit card numbers to espionage secrets. We must balance charts of exponential growth of computing power, like those displayed by Kurzweil in How to Create a Mind, against more sobering ones of continuing electronic fragility.
Of course there are ways to make computer systems more robust. Some of the greatest practical successes of artificial intelligence depend on elaborate techniques to compensate for the difference between computer reasoning and human thinking. Advanced aircraft systems such as the Airbus 320 are based on five or more computers answering the same questions with diverse hardware and software, comparing answers, and “voting” where necessary; any bug in a single computer will be overruled. IBM’s Watson also did not attempt to answer Jeopardy! questions as a human contestant would but instead used many techniques in parallel and assigned a probability to each one. So if superintelligence arises, it will probably be manifested not in a super-network of total social control but in clearly defined, usually proprietary environments. And as computing power becomes ever cheaper, there will be more redundant systems watching over each other, as on the Airbus; what doomed the fictional mission in Stanley Kubrick’s and Clarke’s 2001: A Space Odyssey was that there was a single, unchecked master computer, HAL.
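Tenner’s Airbus example — several redundant channels computing the same answer and “voting,” so that any single buggy channel is overruled — can be sketched in a few lines. This is an illustrative toy, not avionics code:

```python
from collections import Counter

def vote(answers):
    """Majority vote among redundant channels: a single faulty
    channel is overruled as long as a strict majority agree."""
    winner, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        # No majority: fail loudly rather than return a bad answer.
        raise RuntimeError("no majority among redundant channels")
    return winner

# Three healthy channels and one faulty one: the fault is outvoted.
print(vote([42.0, 42.0, 42.0, 41.7]))  # -> 42.0
```

The design choice worth noting is the failure mode: when no majority exists, the system refuses to answer rather than silently picking one channel, which is the same “fail safe, not silent” logic behind flying with five computers instead of one HAL.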
Any way you figure it, America’s $16 trillion economy is ridiculously large. But as big as it is, it’s probably bigger than government data show. For instance: the EU is making the UK estimate the value of drug deals and prostitution, and then add that amount — some $17 billion — to official output numbers. The US shadow economy, everything from illegal activity to off-the-books work, has been estimated at $2 trillion.
But even that doesn’t catch everything. On my recent Ricochet Money & Politics podcast, I asked AEI economist Stephen Oliner about how technological progress distorts GDP numbers:
One other point that [The Second Machine Age authors Erik Brynjolfsson and Andrew McAfee] make is that they don’t think the current economic statistics do a good job of capturing the rate of innovation and technological progress. Do you have confidence that we have a pretty good feel for the reality of the situation when you look at the productivity data?
I don’t feel entirely confident. I mean, this is one of the things that I’ve been doing research on is whether the economic statistics properly capture the impacts of information technology. And I think there are reasons to be skeptical.
I think the particular thing that Brynjolfsson was talking about is that the things that are available for free on the Internet, you know, setting up a Facebook account, for example, because they aren’t priced, they aren’t part of measured GDP. Even though they have value, people get utility out of those products, but they’re not priced, and therefore they’re not in GDP.
What I’m talking about are whether the statistics that we actually have are distorted in some way. And one example that reflects the research I’m doing now concerns the prices that we measure in the producer price index, which is the U.S. official price index produced by the Bureau of Labor Statistics for semiconductors, particularly the microprocessors that go into computers, laptops, desktops, tablets, et cetera.
The PPI shows that the price declines for those goods, which were extremely rapid throughout almost the entire history that they’ve been produced, have basically come to a halt — that in the last couple of years there have been no price declines to speak of at all, which is very strange and is in conflict with the fact that innovation in that part of the economy is still proceeding at a rapid rate.
And it raises questions about whether the procedures that are being used to measure those prices are appropriate. And I personally think that they’re not, that prices are actually falling more rapidly than the official statistics would show.
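Oliner’s measurement point can be illustrated with toy numbers (mine, not his): if a chip’s list price is flat across generations while its performance doubles, a naive matched-price comparison sees no decline, but the price per unit of performance — what a quality-adjusted index tries to track — has halved.

```python
# Toy illustration of quality adjustment (all numbers invented):
# two chip generations with flat list prices but rising performance.
old = {"price": 300.0, "perf": 1.0}   # performance in arbitrary units
new = {"price": 300.0, "perf": 2.0}

# A naive matched-price comparison sees no decline...
naive_change = new["price"] / old["price"] - 1

# ...while the performance-adjusted price has fallen 50%.
adjusted_change = (new["price"] / new["perf"]) / (old["price"] / old["perf"]) - 1

print(f"naive: {naive_change:+.0%}, quality-adjusted: {adjusted_change:+.0%}")
# -> naive: +0%, quality-adjusted: -50%
```

If the official procedure behaves more like the naive comparison, measured semiconductor prices would look flat even while effective prices keep falling — which is the distortion Oliner suspects.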
I could go on for longer, but I do agree with Erik’s basic point that the measurement framework that we have in the United States does a pretty good job of capturing IT, but there are problems. And I think, overall, it understates how much benefit we’re getting from information technology.
The other day I wrote about the very high US corporate tax rates, both statutory and effective, versus other advanced economies. Turns out the story is nearly as bad concerning capital gains taxes. The Tax Foundation:
Currently, the United States’ top marginal tax rate on long-term capital gains income is 23.8 percent. In addition, taxpayers face state-level capital gains tax rates as low as zero and as high as 13.3 percent. As a result, the average combined top marginal rate in the United States is 28.7 percent. This rate exceeds the average top capital gains tax rate of 18.2 percent faced by taxpayers throughout the industrialized world. Even more, taxpayers in some U.S. states face top rates on capital gains over 30 percent, which is higher than most industrialized countries. In fact, California’s top marginal capital gains tax rate of 33 percent is the third highest in the industrialized world.
And when you look at the combined tax rate on capital, you find a huge gap. The US integrated rate is 67.8% versus 43.7% for OECD economies.
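The “integrated” rate stacks the corporate-level tax on top of the investor-level capital gains tax: profit is taxed once at the firm, and what remains is taxed again when realized as a gain. Here is a simplified version of the stacking formula; the Tax Foundation’s actual 67.8% figure reflects additional layers, so these illustrative inputs won’t reproduce it exactly.

```python
def integrated_rate(corporate_rate, capital_gains_rate):
    """Combined tax on a dollar of corporate profit later realized
    as a capital gain: taxed at the firm, then the remainder taxed
    again at the shareholder level."""
    after_corp = 1 - corporate_rate
    return 1 - after_corp * (1 - capital_gains_rate)

# Illustrative inputs: a 39.1% combined corporate rate and the 28.7%
# average top capital gains rate quoted above.
print(f"{integrated_rate(0.391, 0.287):.1%}")  # -> 56.6%
```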
Economist Robert Brusca takes issue with consensus view that the Yellen Fed represents continuity with the Bernanke Fed, certainly a view Yellen seems eager to promote:
Bernanke was a CYCLICAL DOVE. Bernanke undertook his policy course because of his understanding of risk in this severe business cycle. His knowledge and study of the Great Depression guided him, not ideology, and led him to the conclusion that the biggest mistake made in the past was to tighten policy too soon after such an event. That conclusion left Bernanke with the dovish strategy. Janet Yellen is a lifelong Democrat. She is more likely to be a STRUCTURAL DOVE. Democrats generally believe more fundamentally in the role of government and in the right of government to intercede in the economy generally, not just at special points of a severe business cycle. We will not know when we reach a point where Janet Yellen’s policy diverges from what Bernanke’s policy might have been. We undoubtedly will reach such a point. Be wary of that.
Brusca’s take will strike many as intuitively true. But as I have written, former Feddie and current AEI scholar Stephen Oliner takes issue with the supposed Yellen dovishness:
I read the 42 speeches she has given over the past five years, focusing on her comments on inflation. This reading leads to only one conclusion: Yellen is not soft on inflation. Those who believe otherwise either haven’t done their homework, have misread the evidence, or are willfully misrepresenting her views.
MKM Partners economist Mike Darda offers the market monetarist perspective:
While trends in productivity, innovation and labor force growth determine the standard of living over the long haul, the most recent recession and slow recovery was largely a nominal phenomenon.
We know this because an adverse supply-side shock would reduce real growth and raise inflation (or raise unemployment and wage rates); an adverse demand-side shock (a nominal or monetary shock) would reduce real growth and inflation (or increase unemployment and lower wage growth).
Moreover, an adverse supply-side shock would cause NGDP and RGDP to diverge, yet they have been tightly correlated during the recession and recovery. This recovery has also featured low and stable nominal wage growth and the second-lowest average inflation rate of any post-war business cycle.
This alone contradicts the widespread but false notion that monetary policy is “excessively loose” and also suggests that the U.S. economy is operating below its potential. How far below? Probably between 3.2% and 4.6% based on estimates of potential GDP and labor market slack. By our calculations, just over half of the decline in labor force participation is cyclical (based on the drop in prime-age participation relative to total participation since the cycle peak in 2007).
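Darda’s identification logic can be sketched with made-up numbers. Since NGDP growth is approximately RGDP growth plus inflation, an adverse demand shock drags output and inflation down together, while an adverse supply shock pushes them apart. The function and inputs below are my illustration, not Darda’s calculation:

```python
def classify_shock(ngdp_growth, rgdp_growth, trend_rgdp, trend_infl):
    """Rough sign test in the spirit of the market-monetarist argument:
    inflation is backed out as NGDP growth minus RGDP growth."""
    infl = ngdp_growth - rgdp_growth
    if rgdp_growth < trend_rgdp and infl > trend_infl:
        return "supply shock (output down, inflation up)"
    if rgdp_growth < trend_rgdp and infl < trend_infl:
        return "demand shock (output and inflation both down)"
    return "no clear adverse shock"

# Illustrative recession-year numbers: NGDP growth collapses along with
# RGDP, against an assumed 3% RGDP trend and 2% inflation trend.
print(classify_shock(ngdp_growth=0.5, rgdp_growth=-1.0,
                     trend_rgdp=3.0, trend_infl=2.0))
```

With NGDP and RGDP falling together and inflation below trend, the sign test flags a demand shock — the pattern Darda argues actually characterized the recession.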
Again, this recommends both continued monetary offset and supply-side reforms to raise growth potential.
A report last year from McKinsey Global Institute suggested a bunch of “game changers” to boost US productivity. One obvious idea was to improve efficiency and output in “quasi-government,” innovation-resistant sectors. We’re looking at you, health care and education.
Easier recommended than done, of course. One key to accomplishing that goal is subjecting these sectors to maximum competitive intensity via market forces. Again, easier recommended than done. Often the game can’t be changed: it is rigged in favor of incumbents, reducing potential challenges from new competitors and the disruptive innovation they threaten. Take higher education. The problem isn’t a lack of financial aid, but an out-of-control institutional cost structure leading to more student debt. One big reason, notes New America’s Anya Kamenetz: higher administrative costs as schools staff up in pursuit of affluent students. Higher expenses lead to demands for more funding. Rinse and repeat. Not only do traditional institutions need reform, but so does our broader notion of what “higher-ed” even is.
Most importantly, current US public policy is poorly equipped to drive lower-cost higher education at existing institutions. The regulatory framework under which colleges and universities currently operate, known as “accreditation,” is a largely collegial peer-review process: the accrediting agencies that evaluate colleges and universities, vet their quality, and control access to federal funding are staffed by members from the same institutions they are regulating. This lowers the likelihood that accreditors will rock the boat by calling their peer institutions to task for high costs. It also stifles innovation, as accrediting agencies act as regional cartels designed to keep out new providers of higher education.
New reforms proposed by GOP senators Mike Lee and Marco Rubio attempt to deal with the college cartel and the related issue of how to make higher-ed more market-like. What their ideas have in common are a variety of market-based approaches to improving educational opportunity. Among them: (a) give students better info on outcomes and costs; (b) lower barriers to entry by creating alternative, state-level accreditation systems so new providers of cheaper courses and programs could compete against existing players; (c) leverage private capital to finance individual students through income-share agreements.
The $10,000 bachelor’s degree is one reasonable goal for a reformed system. Also: greater opportunity for non-college higher-ed options such as, AEI’s Andrew Kelly explains, apprenticeship programs and specialized courses offered by a wide variety of actors, from firms to labor unions to community organizations. Then the game would certainly be changed.
A “very concerned” Janet Yellen told a congressional panel today that she thinks income inequality is “one of the most important issues and one of the most disturbing trends facing the nation at the present time.”
But why, exactly?
1.) If you buy the thesis that a big jump in high-end inequality has been mostly driven by technology and globalization, then the alternative is more equality, perhaps, but less innovation here and more extreme poverty abroad. Now that’s a disturbing scenario.
2.) If you are concerned about upward mobility, then family structure, education, and geographic segregation are bigger issues than 1%-99%-style inequality, which has zero correlation with climbing the opportunity ladder.
3.) Along those same lines, here are e21's Scott Winship and Donald Schneider, whose recent work undermines “the idea that rising inequality has hurt economic mobility … the accumulating evidence [is] that mobility has been stable and that there is little robust correlation between inequality and mobility levels across geographic areas. We argue that instead of trying to construct an unsupportable case that mobility is falling and that inequality is to blame, Democrats should simply point to the insufficiently high mobility levels experienced by poor children.”
4.) If you are concerned about poverty, income inequality is a distraction. Poverty expert Ron Haskins of Brookings: “If our goal is to increase opportunity, it seems unlikely that limiting income at the top of the distribution or taking more money from the rich will increase opportunity.”
5.) If you are concerned that crony capitalist links between big business and big government are promoting inequality and making the American Dream seem like a rigged game, well, right on! But Yellen didn’t talk about that.
6.) Does Yellen know that the top 1% own a slightly smaller share of US wealth than a generation ago?
7.) Here is social scientist Lane Kenworthy, a progressive who just wrote a book on creating a Nordic-sized welfare state in the US: “Faster economic growth would be a good thing (particularly if with it came a shift towards greener growth). But there is little evidence that the American economy will grow more rapidly if the US manages to reduce income inequality. … Income inequality is too high in the US. It would be good to reduce it. But it is a mistake, in my view, to put inequality reduction at the top of the agenda.”
What disturbs me is the lack of (a) economic growth, (b) good-paying, full-time jobs, (c) social mobility, and (d) educational opportunity in a time of advancing automation. Those are the disturbing trends Yellen, given her prominence and bully pulpit, should be talking about.
In congressional testimony today, new Federal Reserve Chair Janet Yellen said, “The recovery in the labor market is far from complete.”
To say the least. Then Yellen mentioned two metrics beyond the official jobless rate that prove her case: the share of long-term unemployed and the high percentage of Americans working part-time who would prefer full-time jobs. Here are some charts illustrating Yellen’s points:
1.) Long-term unemployment remains at historically high levels:
2.) The unemployment/underemployment rate remains at historically high levels:
3.) The share of the unemployed who’ve been out of work for six months or longer remains at historically high levels: