The First Semiconductor Trade War

Faced with the prospect of an Asian nation overtaking the United States as the world’s preeminent manufacturer of vital technology, the president struck a nationalist pose. “The health and vitality of the U.S. semiconductor industry are essential to America’s future competitiveness,” he said, announcing huge new tariffs and setting the stage for subsidies to domestic microchip makers. “We cannot allow it to be jeopardized by unfair trading practices.”

This was not President Joe Biden or former President Donald Trump taking a stand against China. It was President Ronald Reagan responding to the growing technological prowess of Japan in 1987.

A year earlier, the Reagan administration had reached a deal that was supposed to limit Japanese companies’ sales of computer chips to America. Unsatisfied with the results, Reagan decided to escalate the trade war in 1987 by slapping 100 percent tariffs on all semiconductors imported from Japan. One year later, Congress approved a $500 million industrial policy to subsidize American chipmakers, but the effort largely fell flat.

Nearly 35 years later, semiconductors are once again at the center of trans-Pacific trade tensions, and American politicians are fretting about global supply chains for computer chips. Taiwan is now the world’s leading manufacturer of semiconductors—the tiny, thin wafers of silicon that power computers, smartphones, cars, and many household appliances. But it is China that is rankling American politicians by spending huge sums to stoke domestic production.

While the circumstances are not identical, China’s growing technological prowess is triggering the same American political reactions that Japan’s did 30 years ago. If policy makers are not careful, they will end up repeating some of those wasteful mistakes.

The levies Reagan imposed made TVs and personal computers more expensive but did little to curb Americans’ appetite for those products. Some tariffs were lifted just a year after they were imposed, while the rest were quietly yanked by President George H.W. Bush in 1991.

The more interesting lesson for policy makers is the failure of Sematech (a portmanteau of "semiconductor manufacturing technology"), a public-private consortium created by 14 American chip manufacturers in 1987 and partially funded by the federal government. Sematech's creation coincided with the publication of a Pentagon report that declared "it is simply no longer possible for individual U.S. semiconductor firms to compete independently." The report warned that government inaction was "a direct threat to the technological superiority deemed essential to U.S. defense systems." A year later, Congress authorized $500 million (about $1.2 billion in today's dollars) in subsidies for Sematech to be spread over five years. The 14 member companies agreed to kick in 1 percent of their semiconductor sales revenue, up to $15 million a year.

The combination of public funding and private investment was supposed to help Sematech build a “world-class” semiconductor fabrication facility in Austin, Texas. Research, development, and manufacturing know-how launched from there would be shared among the members, allowing American companies to once again surpass those in Japan.

America did soon reclaim its place as the world’s most prolific chipmaker, but not because of anything the consortium did. “A close look at Sematech confirms all the darkest suspicions of industrial-policy critics,” Brink Lindsey, now a vice president at the Niskanen Center, wrote for Reason in 1992, around the time that Sematech was asking Congress for five more years of funding. The Austin facility was churning out chips that were anything but cutting-edge. Even though Sematech was able to essentially borrow the best ideas from its member companies, it never did anything more than “reproduce manufacturing results that other private companies had achieved years before,” Lindsey wrote.

Later reviews of Sematech were equally harsh. The federal government's investments in the consortium "do not induce more semiconductor research than would otherwise occur," trade historians Douglas Irwin and Peter Klenow concluded in a 1996 paper published by the National Academy of Sciences. They found that the subsidies caused member companies to cut their own spending on semiconductor R&D.

Despite Sematech’s obvious shortcomings, American semiconductor manufacturing boomed during the 1990s, and the panic over Japan’s technological advances dried up. “U.S. firms prospered because of their ability to innovate and compete effectively, not because of such techno-nationalist or protectionist measures,” says James L. Schoff, a senior fellow at the Carnegie Endowment for International Peace, a foreign policy think tank.

In a 2020 paper looking back at the U.S.-Japan semiconductor conflict, Schoff argued that cooperation and integration, rather than insular protectionism, allowed both the U.S. and Japan to strengthen their competitiveness in the diversifying global market. By 1996, in fact, Japanese companies were joining Sematech, which had by then been cut off from government funding and was focused primarily on facilitating the sharing of ideas.

Politicians pushing semiconductor industrial policy today are living in an alternate reality where foreign-made semiconductors are a threat. Earlier this year, Senate Majority Leader Chuck Schumer (D–N.Y.) successfully pushed a bill to provide $52 billion in new subsidies for American chipmakers by arguing that government action is necessary “to preserve our competitive edge” and warning that nothing less than America’s “economic and national security” was at stake.

Republicans, formerly more skeptical of government intervention in industry, have been eager to line up behind the cause as a way of demonstrating their anti-China hawkery. Sen. Tom Cotton (R–Ark.), a supporter of Schumer's subsidy plan, says government intervention is necessary because "the United States has fallen behind and given the Chinese Communist Party dangerous leverage over our nation's future." And so the cycle begins again.

Policy makers should be skeptical of these techno-nationalist ideas, not least because they are based on an inaccurate understanding of the global marketplace. Yes, most semiconductors used in America are manufactured overseas. But U.S.-based companies control 47 percent of the global industry, according to the Semiconductor Industry Association.

Those American companies do not need government aid. Revenue for global chip manufacturers was up 10 percent in 2020, despite a pandemic-induced slowdown in demand, The New York Times reported in May. Equity investors fell over one another to pour more than $12 billion into the industry last year.

As the Sematech saga demonstrates, the nationalist approach also suffers when the crucible of competition is replaced by the potential windfall from lobbying. In a 1996 paper, Irwin noted that Robert Noyce, a co-inventor of the integrated circuit and chairman of Intel, spent 20 percent of his time in Washington, D.C., during the early 1980s. It is unlikely he was there because major technological breakthroughs were happening in the nation’s capital.

Meanwhile, the best chips made in China are several generations behind those made in America and Taiwan, and closing the gap will be difficult, especially now that the U.S. has banned the sale of semiconductor manufacturing equipment to China. But politicians rarely let a good panic get interrupted by inconvenient facts.

While semiconductors have changed since the 1980s, the rules of economics have not. Nationalist policies such as tariffs against foreign competitors and subsidies for domestic producers are not likely to be any more successful today than they were three decades ago, because there is no reason to think the federal government has gotten better at picking winners and losers.