Jack O'Connor

Waste: Uncovering the Global Food Scandal, a new book by British author Tristram Stuart, will soon be hitting shelves in the UK and the US. It is a detailed indictment of the massive amount of edible food that industrialized countries throw away, both in the factory and at home. “In America, around 50 per cent of all food is wasted,” the Telegraph summarizes, “while over here [in the UK], we dump 20 million tons of food every year. Put all this together and—to make a wearisomely predictable but inescapable point—you could easily feed the world’s hungry several times over.”

The Movement Behind the Man

Both the book and its author have close ties to a new kind of conservationism, colloquially known as “freeganism.” Members of the movement cut down on waste—and make a point at the same time—by living partially or entirely off of food they find in other people’s trash. Lars Eighner described the practice in his famous essay “On Dumpster Diving,” and freegans like Stuart have turned that efficiency into advocacy. The Guardian described their message: “If we waste less food, we’ll need less land to grow it on, and hence will cut down fewer trees; we’ll use less water to irrigate that land and less carbon to transport and process the food it produces.”

[Image: One man's trash is another man's lunch.]

That message is catching on. A Welsh millionaire and professional sculptor has taken up the freegan lifestyle, inspired by his experiences with discarded electronics in Japan. A new website, freegan.info, notifies the community about big scavenging opportunities like college move-outs.

The relentless drive for efficiency has motivated some excellent innovations. Stuart himself claims to make cottage cheese from leftover custard donuts. Food banks have expanded, particularly in the US, to help grocery stores donate their unsold extras to the homeless. At the same time, Stuart leaves some questions unanswered. Waste criticizes stores and factories for overstocking their products, but as the Financial Times points out, overstocking can make good economic sense. How can what looks like a complete waste of private property be the daily routine of a profitable, competitive industry?

Questions like that aren’t particularly important to culture and lifestyle, and they’ve rightly taken a back seat to more pressing issues, like how to make cottage cheese. Inevitably, though, freeganism and other conservation movements are growing out of private life and into public policy. In the halls of government, those nagging questions of efficiency are critically important, and the economic underpinnings of this cultural movement will demand some scrutiny.

As it turns out, Stuart makes a common but crucial mistake. He ignores the invisible. With all the focus on obvious waste—dumpsters, landfills, and so on—it’s easy to forget that our most precious resource is something we never find in those places. And no, I’m not talking about air.

The Question Restated

[Image: Paying less for a better product.]

When we recall the industrial successes that have shaped modern life, we usually think of new inventions—plastics, automobiles, and so on. The greatest victories of industry, however, came not from new products but from making old products cheaper. Most of what we consume today—food, clothes, housing, refrigeration, steel, light, and so on—has been available for centuries. Our products are usually nicer, but the biggest difference is the price.

It’s not immediately obvious why our goods should be so cheap. After all, the nails I buy in a hardware store are made with machines vastly more expensive than the forges and hammers blacksmiths once used. They’re also shipped farther, and their quality is more consistent. By all rights they should cost more than they used to, but instead they cost orders of magnitude less. Why?

In Nature, Much Goes To Waste

Although a wire nail requires more machinery, electricity, and gasoline than the cut nails and hand-made nails that came before it, it demands much less of one crucial ingredient: human effort.

The most important resource in the world is us. Our labor and our time. Our blood, sweat, and tears. Things that still take a lot of human effort to make are expensive. Nearly everything else is cheap, because we’ve figured out how to get it without working so hard.

[Chart: What capitalism has done for you lately.]

If we look at the history of America’s GDP per capita, a rough estimate of how much stuff the average American made each year, we can see that process in motion. The typical worker in 1790 had a harder job with longer hours, yet he produced forty times less than he would today. Forty times less. Compared to the modern workforce, early American workers wasted roughly 97.5% of their time and energy.
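To make the arithmetic behind that figure explicit, here is a minimal back-of-the-envelope sketch in Python; the forty-fold ratio is the only input, and it is the same rough estimate used above:

    # Back-of-the-envelope arithmetic for the productivity comparison above.
    # The forty-fold ratio is a rough estimate taken from the text.
    ratio = 40                     # modern output per worker vs. 1790
    share_of_modern = 1 / ratio    # a 1790 worker's output as a share of today's
    implied_waste = 1 - share_of_modern
    print(f"Share of modern output: {share_of_modern:.1%}")             # 2.5%
    print(f"Implied 'waste' by today's standard: {implied_waste:.1%}")  # 97.5%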

As human effort has become more productive, it has also become more expensive. Many early conservation practices—using the entire buffalo, so to speak—no longer make sense now that the proverbial buffalo is cheap and the labor to process it is expensive. This is what Tristram Stuart is missing when he criticizes our overstocked grocery stores and factories. True, their garbage is red ink on the balance sheet, but getting rid of it requires learning more about what customers will buy and applying that knowledge at every stage of production. That costs precious time and effort, which are too valuable to waste on a problem that overstocking solves so cheaply.

Once again, the answer to our question is Henry Hazlitt’s most important lesson. The challenge of economics is to mind all costs, both the obvious, like a pile of garbage, and the invisible, like an hour misspent.  Human effort is our dearest resource, and we should be happy to spare it even at great material expense. Conservation movements all too often neglect these human costs, and if our governments make the same mistake, we’ll find ourselves a good deal poorer with no idea why.

The Washington Examiner has published my op-ed on net neutrality:

A war is raging over the future of the Internet. On one side are the supporters of “net neutrality,” a proposal to ban Internet service providers (ISPs) from giving different treatment to network traffic from different sources. The Internet Freedom Preservation Act of 2009, introduced in the House two weeks ago, is their latest salvo.

On the other side are those who believe that regulation will threaten the very freedom that has allowed the Internet to thrive.

The net neutrality movement is an unfortunate departure from the “keep your hands off my Internet” attitude long held by many on the Web. Advocates of neutrality legislation are asking Congress to write into law what they see as an Internet that treats everyone equally. They are concerned that new technologies and business models might give big players an advantage over the little guy, or worse, that ISPs might use their market power to force a crippled Internet on their customers. Both fears rest on significant misconceptions.

The Internet has never been a level playing field. Big companies like Google, for example, offer their customers an Internet “fast lane” by building server farms all over the world. Cable broadband providers still reserve most of their bandwidth for cable TV. Far from hurting the Internet, these non-neutral elements have been essential to pay for the wires and servers that carry the Web as we know it.

Neutrality is not an all-or-nothing choice. Different elements coexist and make each other better. Companies that take advantage of openness can wipe the floor with those who do not, as AOL’s competitors did in the late nineties. At the same time, if Google’s servers gave the company no advantage, Google would never have built them, and the Internet would be slower for everyone.

Future innovations will be just as helpful, if we allow them. ISPs might save their customers money by “unbundling” Internet access, as we often wish cable companies would. Or they might take a cue from mobile providers and let their customers choose “preferred sites.” Some of the strongest proponents of neutrality laws—Google, Amazon, and eBay—made their fortunes with the same “dynamic pricing models” that they want to deny to ISPs. No one could have predicted the diversity of prices and services that has made AdWords possible, and there is no reason ISPs and their customers cannot benefit from the same strategy.

Many neutrality advocates admit that non-neutrality could help the Internet, but they worry that ISPs will exploit non-neutrality to swindle their customers. Doomsayers warn that ISPs will start cutting users off from some parts of the Internet in exchange for bribes from powerful players. Neutrality advocates want to make these practices illegal, to stop the problem before it starts.

That theory has several problems. The first mark of a monopolist is price gouging, not shoddy service. There is no evidence that ISPs are gouging prices, and even if they were, net neutrality would do nothing to stop them. More importantly, though, if competition were lacking, neutrality laws could only make the problem worse.

It is nearly impossible to compete directly with a powerful company. Instead, competitors try to enter the market by offering something new, like Progresso did with upscale canned soup or Apple did with the iPhone. Yet the goal of neutrality legislation is that ISPs should compete only on price. By forcing new companies to use the same business model as the big dogs, the law would make competition much more difficult.

Many advocates answer that ISPs will never be competitive, and that the best we can hope for is to regulate them. In fact, that is exactly what regulators thought in the 1920s, when the Bell telephone monopoly was just taking off. They assumed that competition had no chance, so they ignored the anti-competitive effects of their rules. Those mistakes choked the telephone industry for decades.

Competition is not perfect. It never has been and never will be. But assuming that we can do without it, that we can help consumers by prohibiting diversity, is a blunder too costly to make again.

The Internet is a process in motion. New sites and applications come and go in the blink of an eye, and that dynamism has created a wealth of content like nothing ever before. We cannot expect anything less of the technologies that carry that content, or of the businesses that pay for those technologies. They too must come and go and change as the Internet grows. The Web should not rely on one unchanging business model any more than it should run on just one browser.

We had it right the first time. Congress, keep your hands off!

I propose the following rule:

“Think of the children” rhetoric shall be reserved for those situations in which the author is not, in fact, thinking of everyone.

Ridiculous? I thought so too, until I read Tom Sydnor’s testimony to Congress on the dangers of file-sharing programs like KaZaA and LimeWire. “It is simply absurd,” he said, “for anyone to have urged children to recursively share the My Documents folder for their family computer.” (Italics from the original.)

Perhaps children aren’t competent to operate LimeWire. If that were the extent of Mr. Sydnor’s argument, he could have a point. However, he doesn’t really think that educated adults are responsible enough either, and he invokes “the children” only to make his incredible lack of faith sound more plausible. He says as much in his choice of evidence: inadvertently shared tax returns and leaked flight plans for Marine One. Not exactly child’s play.

File sharing has its risks, just like email, wireless Internet, and user-selected passwords. No one denies that. But Mr. Sydnor suggests that LimeWire and KaZaA are intentionally and unjustifiably risky, and in the process he tries to pass off well-established design practices as evidence of neglect or mischief. Here are just a few. (Mr. Sydnor’s writing in bold.)

  • “Why cram [an inadvertent sharing] warning into a little square when the entire screen was available? Why make the little square appear in the bottom-right hand corner of the screen?”
    Because message boxes are standard in user interface design. Occupying the entire screen with hypothetical warnings is hostile to the user, as Windows Vista customers are acutely aware.
  • “Obscure files stored in a hidden folder invisible to the average user can cause the newly-installed version to automatically begin sharing all files shared by the previously uninstalled version.”
    Modern software never stores settings in files that the average user will see. Retaining settings after reinstallation is a convenience to users, especially those who required help to configure the original installation.
  • “The folder-structure on an ordinary personal computer was never intended to segregate a subset of the user’s personal files that he or she might want to ‘share’ with anonymous strangers.”
    Folders have always been used to separate public and private data. Reading permissions have been around for decades, and modern operating systems often include a special folder for sharing files. Recursive operations on directory trees (which Mr. Sydnor also calls “outdated”) are standard for numerous familiar operations, including copy and delete.
  • One mistaken click on LimeWire 5.1’s dangerously ambiguous “share all” feature can publish all of the audio, video, image, and document files in a user’s “Library.”
    It’s impossible to even count the catastrophes a user could cause with just a few clicks, like visiting a malicious web site, disabling a firewall, or saving a password on a public computer. If designers never sacrificed safety for convenience, modern software would be totally unusable.

Any program can sound dangerous and irresponsible if we take it out of context and paint it in alarmist language, but the reality of file sharing is nothing of the sort. As Mr. Sydnor himself describes, LimeWire and KaZaA explicitly ask the user which files should be shared. If the user changes his mind, he can easily adjust the settings later. Nothing is hidden, and the user is in complete control.

Inadvertent file-sharing requires a user competent enough to download and install file-sharing software, yet careless enough to confirm harmful settings. In other words, it’s only those users with knowledge and permission sufficient to install new programs–precisely those who should know better–who are vulnerable.

Users who install software simply have to be responsible for its behavior. No other standard is possible on the open Internet, where users are often exposed to malicious software deliberately written to deceive them. The dangers of malware, or of child predation for that matter, demand personal responsibility on the part of users and parents. It’s that same responsibility that will solve the problem of inadvertent file-sharing.

The Financial Times published a piece describing how the pioneering innovations of Aeroflot, the USSR’s preeminent airline, have resurfaced in modern airlines across the free world. Here’s my rough prediction for how that trend might play out in the future:

2015: For the first time, all of the world’s ten largest airlines spend more money advertising their “extreme sports” image than advertising safety.

2019: As a concession to pilots’ unions, Southwest and United allow family members of the crew to take turns flying the plane. The agreement comes as a compromise after bargaining the pilots down from complimentary in-flight vodka.

2024: Northwest Airlines incorporates the hammer and sickle into its logo. Delta, wary of trademark violations, incorporates only the hammer.

2031: In anticipation of a coming recession, major airlines scramble to become state-owned. In exchange for taking on Aeroflot’s mounting debt, American Airlines obtains a favorable buyout from the Russian government.

Of course, given the Financial Times’ supposedly free market slant, these predictions might be overly rosy. Readers should judge for themselves.

The American Psychological Association’s “Task Force on the Interface Between Psychology and Global Climate Change” published its report this week:

Many people are taking action in response to the risks of climate change, but many others are unaware of the problem, unsure of the facts or what to do, do not trust experts or believe their conclusions, think the problem is elsewhere, are fixed in their ways, believe that others should act, or believe that their actions will make no difference or are unimportant compared to those of others….Some or all of the structural barriers must be removed but this is not likely to be sufficient. Psychologists and other social scientists need to work on psychological barriers.

Translation: If you’re a green skeptic, the APA thinks you need a shrink.

The arrogance of this report is astounding. It presumes only one rational position in an incredibly complicated policy debate. CEI’s Marlo Lewis has a recent video discussing the science behind skepticism, but without even going into the details, let’s just recall the questions that have to come before action:

  1. What will be the extent and the effects of global warming?
  2. How much of that will be our fault?
  3. Of that portion, how much are we willing and able to abate?
  4. How much would that abatement cost?
  5. Is that cost lower than the additional damage the avoided warming would have caused?

Every one of those questions is open, and while the APA is certainly entitled to its answers, the idea that no one in his right mind could disagree is absurd and counterproductive.

The report makes a notable omission. More than a century and a half ago, Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds described humanity’s penchant for irrational crowd mentalities in movements like alchemy, the crusades, witch-hunts, and market bubbles. The same pattern has shown up again and again in popular panics: the nationalism of WWI, the crash of 1929, Nazism, Malthusian overpopulation scares in the ’60s, and fears of global cooling in the ’70s.

Might today’s green movement be more mania than reason? No, surely not.

High-frequency stock trading — the markets where sophisticated algorithms running on bleeding edge hardware trade assets using information only fractions of a second old — is under attack from Senator Chuck Schumer. In response, The Business Insider has republished a detailed piece explaining why Schumer’s criticism is unfounded.

Readers interested in the full details of the debate should take a look at that piece; this short post will simply clear the air around one concern. ArsTechnica described what many are probably thinking:

The real issue is that when the average retail investor gets an E*Trade account and tries to play the stock market, she typically has no idea that she’s going up against the market equivalent of IBM’s chess grandmaster-thumping supercomputer, Deep Blue.

That’s true, and it should be frightening. Most of us have no business betting against those odds. But if that sounds unfair, we should remember that it’s not the only game in town.

As far as we little people are concerned, we can divide investments into two big categories. Let’s call them “growing the pie” and “cutting the pie.” If we think of the stock market as a pumpkin pie, growing the pie means spreading your money throughout the whole dish, hoping that it gets bigger. Cutting the pie, on the other hand, is like trying to guess which slices of the pie will grow faster than others and putting your money only in those slices.

In practical terms, growing the pie could mean investing in an index fund. These funds invest in stocks according to broad public indices, like the S&P 500.  That index rises and falls along with the whole economy, and because the economy grows reliably over time, the index fund does too. With the S&P 500, investors can expect long-term average growth around 8-10% per year, and anyone can piggyback off that growth essentially for free.
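To see what that kind of compounding looks like over a working lifetime, here is a minimal sketch; the $10,000 starting balance, 9% average return, and 30-year horizon are illustrative assumptions, not a forecast:

    # Illustrative compounding of a broad index investment.
    # The starting balance, 9% average return, and 30-year horizon are
    # assumptions for illustration only, not a prediction of future returns.
    principal = 10_000
    annual_return = 0.09
    years = 30

    balance = principal
    for year in range(years):
        balance *= 1 + annual_return

    print(f"${principal:,} grows to about ${balance:,.0f} in {years} years")
    # -> $10,000 grows to about $132,677 in 30 years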

Cutting the pie, on the other hand, means trying to do better than average. This describes investors who pick stocks, like day traders or i-bankers. Unlike growing the pie, cutting it is a zero-sum game. For every investor who beats the averages, another falls short. It’s in this kind of trading that people are competing against computers, and the computers are only getting better.

Personal investors really have nothing to worry about, though, because they shouldn’t be playing that game in the first place. Trying to beat professionals at their own sport–literally competing with investment bankers for the same dollar–is always going to be a losing proposition. The only prudent approach is to invest in the whole pie, let other people spend their time on the pieces, and just sit back and watch it grow.

Of course, some people will always want to gamble against the odds, and that’s a risk they’re allowed to take. But if we try to “level the playing field” for them by banning the superefficient trading that goes on in high-frequency markets, we’ll end up slowing the growth of the pie for everyone else.

The latest missive from the folks at Free Press has crossed the line:

When challenged, the wireless carriers actually compare their industry to another: soda.

This is from the Times editorial on July 22:

Phone companies point out that exclusivity agreements are commonplace in other industries. For example, they say, it is not often that one finds a restaurant serving Coke and Pepsi.

Sorry, but cell phones aren’t soda. Unlike carbonated sugar water, cell phone choice, network access and the mobile Web are increasingly essential components of a democratic society. We rely on them for access to the information we need to be engaged citizens in the 21st century.

Free Press doesn’t even bother to challenge the logic, because it’s absolutely true. Exclusivity deals are as anticompetitive as vending machines, which is to say, not at all. But no, apparently the state needs to take control of mobile phones because that market is more “essential.”

What isn’t essential? Can our democracy forgo cars or trains? Could this Great Society exist without food, water, or power? What about televisions, computers, or operating systems? Books or universities?

And what is the track record so far for government’s hand in industry? The Interstate Commerce Commission was founded in 1887 to ensure “fair” operation of the railroads, and it quickly became the very definition of regulatory capture. FDR created the Civil Aeronautics Board with the same intentions, yet its greatest success was finally managing to dismantle itself. The US Postal Service survives today as an anemic jobs program, because competing with it is illegal. How many failures does it take to lose faith?

It should have taken just one. We tried regulating phones before. We wanted to ensure universal service as far back as the 1920s. The federal government nationalized the industry during WWI and then gifted it back to AT&T, in exchange for the company’s help in building a nationwide network. The network grew entirely as planned–just as regulators wanted–and we created a monster that held back the telephone industry for decades.

Free Press claims that essential services can’t be trusted to the market. I can only ask, who on Earth do they trust?

CEI’s broadband reply comments from earlier this week were generously quoted by Ars Technica’s Nate Anderson. Mr. Anderson took issue, however, with our claim that net neutrality mandates are essentially price controls:

“In particular, [neutrality rules] require ISPs to offer content providers a price of zero, and to differentiate prices to consumers only in certain limited ways,” says CEI’s filing. “The disastrous consequences of price controls are all too familiar. And while neutrality may currently align with industry best practices, that fact limits the possible benefits just as much as the possible harm.”

Content providers pay for bandwidth on the competitive market, so it’s not clear what the line about “a price of zero” refers to (that money is passed along to other ISPs along the network path through the mechanism of “peering and transit”). But it is clear what groups like CEI want from a broadband plan: nothing at all.

There’s a lot more to say about net neutrality, especially regarding antitrust and regulatory capture. (For a brief summary of CEI’s broadband comments, check out our topic-by-topic summary.) This post aims to address Mr. Anderson’s objection on net neutrality in particular.

One of the most incendiary moments in the history of the neutrality debate came during a 2005 interview with Ed Whitacre, then CEO of SBC. Ars reported Whitacre’s remarks:

How do you think they’re [Google etc.] going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes? The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!

Reactions to that comment have been at the core of the neutrality debate. Whitacre was asserting SBC’s right to charge content providers directly for their use of SBC’s lines — in essence, the right to set the price of premium service quality higher than zero — and neutrality advocates have clamored ever since to prohibit that kind of pricing. CEI wasn’t the first group to recognize the dangers of price controls at the core of net neutrality. A paper by Robert Hahn and Scott Wallsten,  “The Economics of Net Neutrality,” made the same point three years ago:

Mandating net neutrality amounts to price regulation. In this case, the regulation would state, in part, that broadband providers charge content providers a price of zero.

Mr. Anderson was correct when he pointed out that content providers already pay ISPs indirectly through various transit and peering agreements, and he linked to an excellent Ars piece explaining how these payments work. The Cato Institute’s Timothy Lee raised the same point in his 2008 policy analysis, “The Durable Internet,” in reference to Hahn and Wallsten’s argument. Lee ultimately acknowledged, however, that direct and indirect payments are not perfect substitutes, and his conclusion was simply that direct payments are inefficient:

With thousands of network owners and hundreds of millions of users, it would be prohibitively expensive for every network to charge every user (or even every online business) for the bandwidth it uses. Transaction costs would absorb any efficiency gains from such an arrangement. It would make no more sense than an automobile manufacturer requiring its customers to make separate payments to the manufacturers of every component of a new automobile. One of the services an ISP provides to its customers is “one stop shopping” for Internet connectivity. This arrangement has important economic advantages and is unlikely to change in the foreseeable future.

It’s indeed unlikely that direct payments would be worth the cost to negotiate them. Net neutrality is targeting prices that would probably remain zero anyway, at least for the foreseeable future. But for the most dynamic marketplace in history, etching the business models that prevail today in stone would be unwise — especially considering how often inefficient, outdated regulations impede market evolution.

It’s impossible to predict the evolution of content and technology online or the ways in which new developments might conflict with one another, and thus with neutrality. ISPs might even invent ways to save money for consumers by “unbundling” content, like the FCC nearly forced cable companies to do. No one knows. What is certain, though, is that thwarting innovation in service and pricing will close the widest open door to competition.

CEI submitted our initial comments to the FCC on broadband policy last month, and this week we submitted our reply comments. A brief overview:

  • International Comparisons: The gap between the US and other industrialized nations is vastly overstated. The differences between the leaders and the rest amount to only a few months given the current extraordinary rate of growth. Much of our alleged lag is due to the fact that we subsidize broadband less than others, and yet we still seem to get better use out of it.
  • Open Access: Line sharing mandates and other access requirements are just another form of price control. We’ve tried that before. Price controls either entrench monopolies or deter any investment at all. To retain investors, open access regulations would have to be accompanied by subsidy programs, and past experience has shown such programs to be actively hostile to competition as well.
  • Universal Service Fund: Even those who recommend extending the USF to broadband admit that the fund is full of waste, fraud, and abuse. There is no reason to expect future subsidy programs to behave any differently. The snail’s pace with which these subsidies are phased out tends to entrench old technologies beyond their optimal lifetimes. If the money can’t be returned to taxpayers, it would be far more efficient to simply distribute it directly to the underserved customers it is intended to help.
  • Network Neutrality: Advocates of government involvement pretend that the pure, unblemished neutrality of the Internet is under attack. In reality, the Internet has never been so neutral. Companies like Google and Akamai have already spent billions of dollars on server farms, effectively buying “fast lanes” for their own content. Cable television–the largest and most popular proprietary network in the world–travels over the very same wires as IP traffic. Far from hurting the Internet, however, these non-neutral elements have been essential to pay for infrastructure. Neutrality and non-neutrality coexist very effectively online, and if we call on government to deal with this “problem,” we will buy ourselves an expensive lesson in regulatory capture.
  • Special Access: The NoChokePoints coalition doesn’t even hide its intentions to enact price controls. The obscene profits cited by these advocates are calculated from ARMIS data that were never intended to accurately reflect earnings, and their claims have been roundly rejected. Price controls on middle mile wires, in addition to having the same well-known consequences that all price controls do, will also thwart investment in wireless backhaul.
  • Explaining Deficiencies: First and foremost, there is no reason to expect that all areas of the US should have the same preferences regarding broadband, and if some rural areas don’t value broadband highly enough to warrant investment, that is no justification for the FCC to intervene. Second, though broadband growth is extremely rapid, there are plenty of government impediments holding it back from even more impressive growth. The largest by far is spectrum allocation; the FCC and other agencies are holding on to spectrum rights that would be worth trillions of dollars on the open market, and the cost of relinquishing those rights would be orders of magnitude smaller than that value.
  • Policy Recommendations: The vast majority of proposed regulations for broadband are just explicit or hidden price controls. There is simply no defense for price controls as a mechanism of policy. The most important goal of the FCC should be to free up spectrum for public and private use. US broadband markets do not need subsidies, much less wasteful and fraudulent subsidies, and those funds should be returned to the taxpayer.

The web is all aflutter with the debate over handset exclusivity. Harold Feld of Public Knowledge describes in a recently posted video how exclusive deals prevent competition between handsets and raise prices. Wayne Crews and Ryan Young of CEI have fired back, pointing to a handset market with literally dozens of competing devices.

The notion that exclusivity necessarily precludes competition is simply absurd. Apple’s deal with AT&T is precisely the opposite of monopoly. Far from cornering the market on smartphones, Apple has openly refused to sell the iPhone to most of its potential customers. If anything, nonexclusive sales would have discouraged competing handsets, undercutting the incentive for Verizon and Sprint to pay for their exclusive rights to the Blackberry Storm and the Palm Pre. Mr. Feld bemoans that these top-tier phones aren’t competing within any single provider, but this is just like stating that Coke and Pepsi don’t compete because they are sold in separate vending machines.

On the second point, though–that exclusive deals raise prices–Mr. Feld and other pro-regulation advocates have a point. AT&T pays Apple a hefty sum not to make the iPhone available to customers of other providers. That means the phones cost AT&T more than they would’ve otherwise, and customers in turn pay more for them. High prices are a signal to new entrants, of course, but Mr. Feld would certainly push the point. Could Congress really lower prices for consumers, without price controls or their attendant shortages, in one stroke of the regulatory pen?

Well, yes and no. It is likely that the price of the iPhone would fall if government forced Apple to abandon its agreement with AT&T. Prices would fall further still if regulators subpoenaed Apple’s schematics and source code and revoked its patent claims. But while critics attack exclusivity in the margins of Apple’s profits, no one questions the very core of those profits: the intellectual property and corporate secrets that make the iPhone so valuable. Why such different reactions to essentially the same business practice? Because novelty is scary. Apple’s sole production rights to the iPhone are nothing special, but its deal with AT&T is somewhat new.

We’re not used to seeing exclusive monopolies in established products, and for good reason. A monopoly is extremely difficult to maintain, and usually only possible with the help of government. It would certainly be unusual if steel, bananas, or personal computers were controlled by a single manufacturer, and it was terrible for consumers when Ma Bell—with great help from the FCC—owned the entire American telephone industry. On the other hand, there’s nothing unusual at all about Scholastic’s sole publishing rights to Harry Potter, or Amazon’s exclusive ownership of the Kindle. Why are we so accustomed to monopolies in some sectors, but wary of them in others?

The answer is that exclusivity can be perfectly natural, and sometimes even essential, for new and innovative products. Every invention starts out exclusive to its creator. Only by leveraging that exclusivity can the creator make a profit. Once a product is well-established, only an act of government can restrict its supply. It took several acts for the FCC to entrench the Bell monopoly, and it would take another to stop Apple’s competitors from building a better smartphone. Good things come to those who wait.

Ultimately, what Mr. Feld is advocating is a textbook case of the broken window fallacy. Whenever a new product is invented, society can always gain by revoking the creator’s exclusive rights, if we look only at that product in isolation. But it’s like cheating at poker: eventually your friends learn not to play. Prohibitions on exclusivity create shortages just like any other price control, even if these innovation shortages don’t make the evening news. Prominent benefits and hidden losses are a magnet for bad policy, and they can fool even economically literate folks like Mr. Feld who should know better.