Available September 2, 2010



Several years ago, Marina heard that Hewlett-Packard was looking for people to participate in “economic experiments.” Something intriguing was happening in her own backyard, and by the sound of it had been going on for years, yet she knew nothing about it. Marina’s personal curiosity added to the universal lure of easy money, so she signed up. Before long she was at HP Labs in Palo Alto, with a dozen or so other participants in what looked like a corporate classroom: a gray, windowless room with rows of desktop computers facing a large whiteboard. Once she and the other guests were seated at their terminals, a hyperkinetic man dressed like a math professor began pacing the front of the room. He scribbled an exchange rate on the whiteboard (1,500 experimental dollars = $1 U.S.) and gave brief instructions before setting the research participants to work.

Each participant would play the role of either buyer or seller, interacting with the others through the networked computers. Buyers would order goods from the sellers and pay for them, while sellers would receive payment and choose how much to actually ship. At the end of each period, buyers and sellers would see who’d fulfilled which orders. Given this information, buyers would choose whom to do business with in the next period; sellers, meanwhile, would decide how to proceed themselves. At the end of the game, the money earned during the experiment would convert to U.S. dollars: Marina and the others would earn real cash in proportion to how well they’d played.

The game lasted about two hours. Everyone played in silence, and couldn’t so much as send instant messages to each other. Instead, they could communicate only through the buying and selling choices they entered into their computers. Yet they could see so much: which sellers shipped as promised and which ones cheated, how this information seemed to affect future buying decisions, and, in the last bidding period, who stayed honest when honesty no longer paid. They were seeing something of the essence of human nature at work.

When the game was over, a research assistant entered the room with a cashbox in hand and counted out each player’s winnings. Marina received about $75.

She wanted to know more: what did any of this have to do with HP’s business? What she’d known about HP Labs was what most people in Silicon Valley knew—that its research led to the development of inkjet printing, one of the company’s most profitable offerings. But what was a company known for its computers, printers, and calculators doing paying people to play these games?

As a reporter, Marina carried a license to ask nosy questions. When she approached the man who’d run the experiment, he handed her his card: “Kay-Yut Chen, Principal Scientist, Decision Technology Dept.”

Kay-Yut, it turned out, wasn’t just running that day’s experiment—he had started the whole lab, right after finishing his PhD in economics at Caltech. Not only that. HP’s lab was the first-ever experimental economics laboratory inside a corporation. That made Kay-Yut one of a kind; he’d been featured in Newsweek, and Marina went on to write about his work in Scientific American and Portfolio.com.

Learning from Mistakes the Easy Way

The premise of using economic experiments in a company is simple: to make good decisions about major business processes, test them out in the safety of a lab. Without such testing, an HP marketing manager told Marina, “You could waste millions of dollars implementing a program that isn’t good.” Indeed, that’s what would have happened had the manager not asked Kay-Yut to test the idea of rewarding retailers for being among the top three in sales of HP products. On the surface, some healthy competition between the likes of Walmart, Best Buy, and the many other HP vendors seems intuitively appealing; in fact, winner-take-all sales contests are popular in many other industries. But Kay-Yut’s experiments, like the buyer-seller game Marina tried, had shown that this incentive would backfire in two ways. In the experiments, most participants playing retailers, seeing that they had little or no chance to win, gave up at the outset. The sure winners, on the other hand, had no incentive to do any better, since they were likely to get the bonus anyway.

Because of these test results, HP scrapped the idea, keeping in place its existing incentive program, which simply rewarded retailers according to how much business they brought in. That wasn’t just because the new program had fared poorly in the lab. Had the programs been equivalent—indeed, even if the new program had offered marginally better results than the old one—it still might never have gotten the green light. Any major change carries hefty costs, such as training and legal review. By telling you exactly how much better one program or policy is than another, experiments help managers decide which changes are worth the expense.

But why even go through the trouble of running an experiment—why not just use a spreadsheet to see which program will make you more money? Managers do this all the time. For example, to compare two incentive programs, a manager might set up a formula to calculate total revenues based on each program—and plug in guesstimates for the relevant variables. The manager might assume, for example, that a winner-take-all program will boost revenues among the top three performers by 5 percent—and then have the spreadsheet calculate total revenues and ultimate profits. And if the assumptions are right, then the spreadsheet will give the right answer, much as a mortgage calculator will correctly tell you which of two mortgages will cost you less over the life of a loan. But what if your assumptions are wrong and your guesstimate is off—what if the program doesn’t boost revenues by even 1 percent? Then the spreadsheet won’t work, by the immutable law of Garbage In, Garbage Out.
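The spreadsheet logic in this passage can be sketched in a few lines of code. This is an illustration only: the `projected_profit` function, the 5 percent boost, and every dollar figure are invented for the example, not taken from HP’s actual numbers.

```python
# A toy version of the spreadsheet model described above. All figures are
# invented for illustration; nothing here comes from HP's real data.

def projected_profit(base_revenue, assumed_boost, bonus_cost, margin=0.20):
    """Profit projection driven by a guesstimate: the assumed revenue
    boost from a winner-take-all incentive program."""
    revenue = base_revenue * (1 + assumed_boost)
    return revenue * margin - bonus_cost

base = 10_000_000   # hypothetical current revenue from top retailers
bonus = 50_000      # hypothetical cost of the winner-take-all bonuses

optimistic = projected_profit(base, assumed_boost=0.05, bonus_cost=bonus)
realistic = projected_profit(base, assumed_boost=0.00, bonus_cost=bonus)
status_quo = projected_profit(base, assumed_boost=0.00, bonus_cost=0)

# Feed in the 5 percent guesstimate and the program looks like a win;
# let the boost fail to materialize and the same model shows a loss.
# Garbage in, garbage out.
```

Swap the guesstimate and the model’s verdict flips, which is exactly why testing the assumption matters more than refining the arithmetic.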

So, to know how people will really respond to incentives—to see how they’ll actually think and behave, not how you guess they will—you need to test your assumptions. And rather than testing these the hard way, by rolling out a plan and seeing how it does in the real world, you can run an experiment.

The idea of testing business ideas and confining your mistakes to the lab is so obvious when you hear it that you may wonder, as Marina did, why more companies don’t do it. In fact, those on the cutting edge do. Some, like Google and Yahoo!, use in-house labs for fine-tuning keyword auction rules and other advertising programs; others, including eBay, Ford, and Hitachi, have enlisted experimental economists on a consulting basis. A couple of companies, Capital One and Harrah’s, have become well-known in the business press for running field experiments, testing out new marketing programs on small samples of actual and prospective customers before rolling out the most effective programs company-wide. And some lesser-known companies use randomized tests to help online retailers see which Web page designs will lead to the most purchases.

But on the whole, experiments are still a highly unusual way of making decisions in business. When the University of Chicago’s Booth School of Business premiered a class called “Using Experiments in Firms” in 2008, it was the first of its kind. The teachers were two economists who’d never taught at the business school—Steven Levitt (of Freakonomics fame) and John List (who conducts clever field experiments to see, for example, why car mechanics price-discriminate against people in wheelchairs). In most companies, says List, “the level of experimentation is abysmal.”

Anybody who’s worked in a company knows this, yet it’s remarkable given how far experimental economics as a whole has come. As an academic field, experimental economics has been around for several decades, and in 2002 two pioneering experimentalists, economist Vernon Smith and psychologist Daniel Kahneman, shared the Nobel Prize in Economics. Today there are well over a hundred economics labs around the world, not including labs running experiments in related fields such as experimental psychology, management science (or operations research), and marketing science. Even government agencies, not usually the most forward-thinking of organizations, have turned to experimental economists to improve policy decisions, such as how best to sell broadcast spectrum rights or how best to grant rights to use tracks on a public railroad. But businesses, which should care more than anyone about maximizing profits, have largely ignored the experimental approach.

Some of the reasons make sense. Unlike basic academic experiments, ones that use a simple, highly stylized game to look at just one aspect of human behavior, the kinds of experiments run by Kay-Yut and his counterparts at other companies often incorporate the myriad important details of an actual business setting. For example, in an experiment we’ll look at later about the effects of different Minimum Advertised Price policies, HP simultaneously took into account many variables, such as the differing goals of the company’s many retailers (from e-commerce partners who care mainly about market share to big-box retailers who try to maximize quarterly profits), relationships between sales of different products, and product life cycles (from introduction to phaseout). Designing an experiment like that takes not just experimental know-how, but also a deep knowledge of the business. Equally daunting, it takes time and attention, something managers busy fighting the fire du jour don’t have.

In the end, the biggest obstacle to adopting experimental techniques may be simple inertia: it’s easier to just keep doing things the way they’ve always been done, especially if your competitors aren’t threatening to outpace you in their methods. If something ain’t broke, why fix it?

But following tradition doesn’t always yield the best possible results. Even so-called best practices may have better alternatives—and only by trying these alternatives do better practices ever emerge. For example, a long-standing tradition in the airline industry is to schedule flights through hubs, such as O’Hare, JFK, and LAX. This hub-and-spoke system makes it possible to offer many more routes with the same number of planes, so major national airlines have gone along with it at least as far back as the 1980s, when the government stopped regulating which cities the airlines must serve.

For all its efficiency, the hub-and-spoke system creates congestion and delays, and it forces travelers between smaller cities (like El Paso, Texas, and Fresno, California) to take two or more connecting flights. But since every airline faced the same constraints, none had to change its ways. Unchallenged for many years, routing flights through the world’s busiest airports remained a “best practice.”

Southwest Airlines founder and former CEO Herb Kelleher saw an opportunity to break with this tradition. After all, passengers prefer direct flights and hate delays. What’s more, delays are costly for the airlines themselves. By abandoning the hub-and-spoke model, Southwest became the most profitable airline in history. As Kelleher would say, “Those airplanes aren’t making any money while they’re sitting on the ground.” In part by flying into smaller, less congested airports, he was able to offer a seemingly impossible combination: better service at lower prices.

But the lesson here isn’t to be like Kelleher. His point-to-point strategy, like Southwest’s many other innovations, could have easily backfired, in which case nobody would be writing books about “Southwest Airlines’ crazy recipe for business and personal success.” The idea is simply that by always following established best practices, a person or organization will never find out if something else might work better. To innovate and distinguish yourself from the pack—and to surpass your own personal best— you must take some risks, and to make these risks less risky, you can test out your ideas, keeping what works and abandoning what doesn’t.

The experimental approach is just starting to catch on. The movement, which began in economics and psychology departments (and went on to become standard practice in both), has made its way to leading business schools—including Wharton, Harvard, Stanford, and MIT’s Sloan School—where researchers have used experiments to better understand how people lend and invest money (the field of “behavioral finance”), how managers make efficient use of resources (“behavioral operations management”), how people act in groups (“organizational behavior”), and how shoppers choose what to buy (“consumer behavior”). From the business schools, which teach the next generation of business leaders, the next step is application in firms, as the early-adopting companies are already showing.

Still, not every company will start its own experimental economics lab, and not every business decision merits the cost—especially in time—of designing and running such carefully controlled tests. Sometimes, you have to make the best decision you can with the information you already have. Yet your current knowledge may not be enough; for most people, that knowledge comes from a messy mix of habit and intuition, conventional wisdom, personal experience, and anecdotal data about what seems to have worked for others—not the most evidence-based way to do business.

Fortunately, you don’t have to choose between running the kinds of experiments HP does or simply flying by the seat of your pants. That’s because plenty of decisions can benefit from the many insights already gleaned from the research. For example, the HP buyer-seller experiment Marina participated in is part of a larger body of experiments on reputation. These experiments reveal broader truths about how reputation works and how people use information about reputations to make business decisions. Whether you’re running a large organization or simply managing your own career, you’d be wise to learn from them.

As you’ll see throughout the book, the same is true for the vast body of research done by economists and other social scientists working in universities. Their experimental investigations into people’s sense of fairness and reciprocity, attitudes toward risk and trust, the human tendency to game the system, novel methods of prediction—all these areas of research and more offer new, scientifically grounded ways of thinking about the forces that drive human behavior in business.

The most brilliant businesspeople already have a good feel for many of these principles, though they can’t always articulate them. The distinguished economist Charlie Plott, a mentor and sometime collaborator of Kay-Yut, happens to be an avid fisherman—and he compares a great businessperson to a fish in a stream. “The fish knows how to operate so as to catch a fly. It can move through the water in an energy-minimizing way, position itself, and strike effectively. But the fish can’t understand hydrodynamics. Like the fish, the businessman is good at doing what he does. That doesn’t mean he understands the principles that govern what he’s doing.” This lack of insight into why what you’re doing works, Plott suggests, comes from being too immersed. Or, as Plott puts it, “To understand the science you need to be outside the water.”

But what if you’re already great at what you do, fortunate to have excellent business sense—why should you bother learning formal principles like the ones we lay out? And why should you listen to the advice of eggheads with little firsthand experience running a business? For the same reason seasoned business leaders like Mark Hurd of HP and Meg Whitman, formerly of eBay, have. Here a sports analogy helps: every top athlete, despite natural talent and years of practice, needs a coach. And not just to push the athlete, but to improve form and make a great technique even better. In a sense, that’s what Kay-Yut and his colleagues have done for the business leaders who turn to them, and it’s what this book aims to do for you.

About Our Approach

This book draws on several decades of experimental research in business, psychology, and economics. Although the experimental approach is at the heart of this book, we don’t limit ourselves to tightly controlled lab studies, which, as you’ll see in later chapters, have their own limitations. Where possible, we bring in field experiments, but these, like lab studies, are sometimes impossible to run for either ethical or practical reasons; so we cite dozens of findings from other scientific ways of knowing, particularly so-called natural experiments, such as those on the effects of hygiene report cards on restaurant revenues. And though we’re skeptical of stories and cherry-picked examples as a form of evidence—just as you’re right to be—memorable anecdotes are an excellent way to illustrate a point; that’s why we use them throughout the book. Likewise, though experiments can’t always answer why people do what they do, people are curious about such things, so we often offer plausible (if sometimes speculative) explanations, citing evidence when it’s available. In general, we believe that the more you understand and remember about how people make business decisions, the better your own decisions in dealing with others will be.

What happens when you don’t have a good grasp of human economic behavior? If you’re chairman of the Federal Reserve Board, the result can be national disaster. During the 1990s boom, Alan Greenspan famously mentioned “irrational exuberance” as a cause for escalating stock prices; after the financial meltdown, Greenspan admitted before Congress that the meltdown—unprecedented in his long and distinguished career—shocked him into seeing a fundamental flaw in his understanding of how the world works.

Your own decisions may never have the kind of impact (for good or ill) that a central banker’s would—and this book isn’t about government policy anyway. But whomever you’re dealing with—whether it be your rivals, your boss, your customers, your suppliers, or your employees— knowing more about what drives human behavior will help you make better use of the power you do have.

How to Read This Book

We start by exploring what makes people tick. Traditional thinking in economics, and even in the business world, has been that people are essentially greedy, selfishly driven by the desire for personal profit above all else. There’s obviously some truth to this, but over the past two or three decades new research has told a more complex story, as we show in chapters 1, 2, and 3. Chapter 1 focuses on something most people don’t want yet that constantly hovers above us all: uncertainty. We show how much people are willing to pay to reduce uncertainty—and help you see ways to profit from reducing uncertainty without going into the insurance business. Chapter 2 takes you through the research on fairness around the world. We also look at several other important values (or what economists call “social preferences”) that consistently drive human behavior when money enters the picture. Chapter 3 turns to another major motivator: we look at the business and philanthropic implications of the powerful human urge to reciprocate, as well as the ways an overreliance on financial incentives can disrupt other motives.

Once you understand what people want, how do you know what they’ll actually do? That’s the question we tackle in chapters 4 through 7. Chapter 4 shows the limits of our ability to optimize for what we want. Since people will never perform as optimally as machines, we show how to nudge your partners’ decisions closer to the optimal level. Chapters 5 and 6 are twins, both dealing with how people cope with one type of uncertainty: uncertainty about other people’s behavior. Reputation is one way to deal with this type of uncertainty, and it’s the focus of chapter 5. We show the economic value of a reputation and lay out the many ways you can capitalize on a good one. We also disentangle several often-confused aspects of reputation and offer caveats about what reputation can and can’t predict. Chapter 6 extends the discussion of coping with people’s unpredictability through a broader discussion of trust; we introduce the Trust Game and show what it reveals about developing trusting, wealth-creating relationships with others. And we go beyond the Trust Game to show several scientifically grounded ways of proving yourself trustworthy and deciding whom to trust.

Chapter 7 deals with what people do in situations where rules are important. People may be selfish or altruistic, and limited in their ability to maximize for whatever goals they have, but whatever system you set up, many will try to game it, often subverting the system’s actual intent. Focusing on systems with timing rules—from negotiations and auctions to compensation schemes and penalty systems—we show the systematic ways people game them and offer advice for avoiding similar pitfalls as you make up the rules for your own systems.

In chapter 8 we delve into a hot topic in business experiments: predicting the seemingly unpredictable. We discuss the ins and outs of “crowd wisdom” prediction markets, from the just-for-fun Hollywood Stock Exchange to internal markets in HP and other companies. We also introduce other ways to make business predictions and reduce the costs of uncertainty.

In the final chapter, we leave you with some ideas about how to start applying the many principles you’ve learned throughout the book.

Through Secrets of the Moneylab, we hope to inspire a fresh way of looking at the world, as if through the eyes of an economist. You may never run a controlled experiment or even hire an outside expert to do it for you, but we hope you’ll at least begin to question all-or-nothing thinking, seek data to challenge your hunches, track distributions and not just averages, watch hidden costs, question one-size-fits-all advice, and harness the big power of small changes.