If you want to pinpoint a place and time that the first glints of the Management Century appeared on the horizon, you could do worse than Chicago, May 1886. There, to the recently formed American Society of Mechanical Engineers, Henry R. Towne, a cofounder of the Yale Lock Manufacturing Company, delivered an address titled “The Engineer as an Economist.”
Towne argued that there were good engineers and good businessmen, but seldom were they one and the same. He went on to assert that “the management of works has become a matter of such great and far-reaching importance as perhaps to justify its classification also as one of the modern arts.”
Towne’s speech heralded a new reality in at least three respects. Call the first consciousness raising: Management was to be viewed as a set of practices that could be studied and improved. It was to be rooted in economics, which to this crowd meant achieving maximum efficiency with the resources provided. And the members of the audience were engineers. In the decades to come, such masters of the material universe, from Frederick Winslow Taylor to Michael Porter, Tom Peters, and Michael Hammer, would have a disproportionate effect on management’s history.
Towne was catching a wave. During the century that followed, management as we know it would come into being and shape the world in which we work. Three eras punctuate the period from the 1880s until today. In the first, the years until World War II, aspirations to scientific exactitude gave wings to the ambitions of a new, self-proclaimed managerial elite. The second, from the late 1940s until about 1980, was managerialism’s era of good feelings, its apogee of self-confidence and widespread public support. The third and ongoing era has been marked by a kind of retreat—into specialization, servitude to market forces, and declining moral ambitions. But it has also been an era of global triumph, measured by agreement on certain key ideas, steadily improving productivity, the worldwide march of the MBA degree, and a general elevation of expectations about how workers should be treated.
Americans and a few other Anglophones dominated management’s early history, in the sense that their ideas on the subject gained the greatest currency. There were exceptions: In 1908 Henri Fayol, an engineer who had run one of France’s largest mining companies, enumerated a list of management principles, which included a hierarchical chain of command, a separation of functions, and an emphasis on planning and budgeting. Still, his 1916 magnum opus, Administration Industrielle et Générale, took decades to be translated and to have much effect beyond France. Although globalization promises more-diverse sources of thinking on management, most of our story so far takes place in the United States. (See the sidebar “Saving Ourselves from Global Provincialism.”)
The Age of “Scientific Management”
In the last two decades of the 19th century, the U.S. was shifting—uneasily—from a loosely connected world of small towns, small businesses, and agriculture to an industrialized network of cities, factories, and large companies linked by rail. A rising middle class was professionalizing—early incarnations of the American Medical Association and the American Bar Association date from this era—and mounting a progressive push against corrupt political bosses and the finance capitalists, who were busy consolidating industries such as oil and steel in the best robber-baron style.
Progressives claimed special wisdom rooted in science and captured in processes. Frederick Taylor, who wrote that “the best management is a true science, resting upon clearly defined laws, rules, and principles,” clearly counted himself in their camp (fans such as Louis Brandeis and Ida Tarbell agreed). His stated goal was the “maximum prosperity for the employer, coupled with the maximum prosperity for each employee,” through “a far more equal division of the responsibility between the management and the workmen.” Translation (lest the reader overestimate Taylor’s respect for workers’ potential contributions): The laborer should work according to a process analyzed and designed by management for optimum efficiency, “the one best way,” allowing him to do as much as humanly possible within a specific time period.
The publication, in 1911, of Taylor’s Principles of Scientific Management—originally a paper presented to that same American Society of Mechanical Engineers—set off a century-long quest for the right balance between the “things of production” and the “humanity of production,” as the Englishman Oliver Sheldon put it in 1923. Or, as some would have it, between the “numbers people” and the “people people.” It’s the key tension that has defined management thinking.
The cartoon version of management history depicts the human relations movement, begun in the 1920s and 1930s, as a reaction to Taylor’s relentless, reductive emphasis on the quantifiable. A better view regards the two as complementary. As evidence, consider the research of Elton Mayo and the other men behind the pathbreaking Hawthorne studies. Their work shares Taylor’s overarching ambition to improve productivity and cooperation with management through the application of science, though in this case the science was psychology and, to a lesser extent, sociology.
The studies, which were mostly conducted at Western Electric’s Hawthorne plant in Cicero, Illinois, began in 1924 and ran through 1932; eventually they involved other factories and other companies. The analysis was largely done at Harvard Business School, including outposts such as its Fatigue Lab. (If tired workers were the problem, how to design operations to lessen exhaustion but still achieve maximum output?) The Hawthorne studies were easily the most important social science research ever done on industry. The project is worth unpacking a bit, if only to dispel the popular myth about what was learned. (“They turned the lights up, productivity improved. They turned the lights down, same thing. Any sign of attention from management was enough to improve productivity.”) Essentially the studies progressed through a succession of hypotheses, dismantling each one. Neither changes in physical conditions (better illumination), nor in work schedules (more rest breaks), nor even in incentive systems could fully explain why productivity steadily improved among a set of young women—always labeled “the girls”—assembling parts in a test room.
After years of experiments it began to dawn on Mayo, an Australian psychologist who had joined the HBS faculty, that at least two factors were driving the results. First, the women had forged themselves into a group, and group dynamics—members encouraging one another—were proving a strong determinant of output. Second, “the girls” had been consulted by the researchers at every step of the way: The intention of the experiment had been explained to them, and their suggestions had been solicited. From such heady stuff were distilled the basic insights of the human relations school—that workers were not mere automatons to be measured and goosed with a stopwatch; that it was probably helpful to inquire after what they knew and felt; and that a group had substantial control over how much it was prepared to produce.
Those insights sound humane, and they were. Still, the aim of the experiments was always to discover how psychology could be used to raise productivity, resist unionization, and increase workers’ cooperation with management. Behind the effort of Taylor and the so-called “Harvard Circle” was an elitism, a class arrogance, almost incomprehensible by today’s standards. Dean Wallace B. Donham, who founded HBR, ardently believed that an educated managerial cadre—a “new managing class”—was the answer to the nation’s problems: to the Depression, to inept government, to social upheaval. He and others on the banks of the Charles looked down on the typical worker as a lesser being, one to be manipulated in service of higher purposes (or as Taylor said of the type of man best suited to load pig iron, “so stupid and so phlegmatic that he more nearly resembles in his mental make-up the ox”).
Then into these stuffy rooms blew a blast of fresh, cleansing air. Call it Hurricane Drucker. Personal confession: Although I, like every other right-thinking student of management, have long tugged my forelock in ritual homage to the great man, I never fully appreciated what a revolutionary thinker he was until I steeped myself in the work of his predecessors.
The application of humanistic psychology to social institutions.
Beginning with Concept of the Corporation (1946) and continuing through The Practice of Management (1954) and Managing for Results (1964), Peter Drucker laid out a vision of the corporation as a social institution—indeed, a social network—in which the capacity and potential of everyone involved were to be respected. Out with the vocabulary (and mind-set) of “boss,” “foreman,” and “worker”; in with “manager” and “employee.” If Drucker didn’t invent the concept of “management,” he did more than anyone else to introduce the “m” word and all its iterations into how we think about running organizations.
Drucker didn’t foment the managerial revolution alone. Fritz Roethlisberger, in his masterly 1937 summation of the Hawthorne experiments, described organizations as “social systems.” Management’s job, he stated, was to maintain their equilibrium. In the 1950s Drucker began to revive interest in the work of Mary Parker Follett, a largely forgotten figure from the 1920s whose ideas about management—“power with” rather than “power over,” “constructive confrontation,” and the pursuit of “win-win” solutions—found new resonance in the postwar era.
Others chimed in. Douglas McGregor, first as the president of Antioch College and then as a management professor at MIT’s Sloan School, drew on the humanistic psychology of Abraham Maslow—he of the hierarchy of human needs—to propound the famous Theory X (people are inherently lazy and will shirk their duties if they aren’t closely policed) and Theory Y (people want to find meaning in their work and will contribute in positive ways if the work is well designed). X and Y, McGregor took pains to assert, were “different cosmologies [that is, beliefs about the nature of man]…but not managerial strategies.” To his dismay, nobody much listened to him on that detail.
The overall thrust of the postwar managerial thinkers was to elevate the “humanity of production.” Workers will be most productive, the reasoning went, if they’re respected and if managers rely on them to motivate themselves and solve problems on their own.
“The development, strengthening, and multiplication of socially minded business men is the central problem of business.”—Dean Wallace B. Donham
Not that the old order went down without a fight. After researching General Motors for Concept of the Corporation, Drucker persuaded rising GM executive Charlie Wilson to propose a set of reforms including greater autonomy for plant managers and what we’d call “worker empowerment” today. Two forces killed the idea. One was the rest of GM management, including CEO Alfred P. Sloan. The other was the United Auto Workers, in the person of Walter Reuther, who wanted no blurring of the line between management and labor.
More-enlightened managerial attitudes combined with other forces—a democratization of American society following World War II; an explosion of deferred demand for economic goods—to usher in two decades of good spirits and seeming contentment with corporations and their conduct. The number of strikes and other job actions dropped precipitously from the nasty levels seen just after the war; union membership, as a percentage of the workforce, peaked and then began the long, slow decline that continues to this day. (Managerial solicitude was probably stimulated by an unemployment rate that fell below 3% in 1953.)
The rise of strategic thinking.
In addition to its more-enlightened attitudes toward employees, the postwar period brought a heightened sense of what managers could accomplish. Here again Drucker led the way.
From our perch in the 21st century, when every company has a strategy and every executive a set of key objectives, it’s difficult to comprehend the lack of direction that is said to have characterized earlier generations. But, as Drucker pointed out in The Practice of Management, “the early economist”—and, by implication, other students of management—“conceived of the businessman and his behavior as purely passive: success in business meant rapid and intelligent adaptation to events occurring outside, in an economy shaped by impersonal, objective forces that were neither controlled by the businessman nor influenced by his reaction to them.” (Whether real-live businesspeople felt quite that powerless is debatable.)
This was not good enough, Drucker proclaimed. “[M]anagement has to manage. And managing is not just passive, adaptive behaviour.” Managers had to take charge; they should be “attempting to change the economic environment…constantly pushing back the limitations of economic circumstances on the enterprise’s freedom of action.” To aid in the effort, he argued, managers should have objectives and should manage according to them.
“Organizations endure, however, in proportion to the breadth of the morality by which they are governed.”—Chester Barnard
Drucker’s 1964 book, Managing for Results, set the bar higher, arguing that “businesses exist to produce results” and that managers should systematically scan their markets for opportunities to grow the enterprise. In a 1985 preface to a new edition, the author would claim that the book was the first one on business “strategy” but also recall that he and his publisher had been dissuaded from using that word.
Consultants felt no such hesitation. In 1963 Bruce Henderson, a former Westinghouse executive and a true American original, started what was to become the Boston Consulting Group. In short order the firm would take as its mission defining corporate strategy—before then the term had barely been used—and bringing the light of its gospel to corporations.
This was no mere shift in vocabulary. The rise of corporate strategy represented a bold new reach by those concerned with the “things of production.” BCG’s building-block concepts—the experience curve, the growth-share matrix—were enormously influential, but even more important was the analytical passion that lay underneath. The consultants insisted on delving into the numbers behind costs, customers, and competitors to a depth that few companies had plumbed before. Strategy’s constant companion and facilitator was what I’ve elsewhere called Greater Taylorism—the imperative to take a sharp pencil and a stopwatch not just to some poor schlub’s daily labor but to every aspect of the company’s operations.
Strategy was aggressive. The point of gathering all those numbers was to figure out where you stood in relation to competitors and how you might seize advantage over them. With graphs and diagrams, BCG relentlessly brought home the importance of being number one or number two in your line of business.
By 1967, when John Kenneth Galbraith published The New Industrial State, some had begun to worry that American companies and their leadership had perhaps become too aggressive. Galbraith decried the fact that firms had grown so large and successful—by 1974 the 200 largest U.S. manufacturing companies controlled two-thirds of the country’s manufacturing assets and more than three-fifths of its sales, employment, and income. He argued that social goals increasingly were adapted to corporate goals. A largely anonymous “technostructure” of business leaders could dictate to consumers what to buy and, implicitly, how to live—or so the theory went.
The Era of Nervous Globalism
After two decades without serious recession, the oil shocks of the 1970s and an accompanying economic malaise put paid to the notion of managerialism triumphant. A 1966 Harris poll had found 55% of Americans voicing “a great deal of confidence” in the leaders of large companies. By 1975 the percentage had dropped to 15.
Forces for change.
Multiple new forces confronted American executives, unleashing heightened competition and eventually disrupting the relative amity that had prevailed among business, labor, and government.
In an attempt to fight runaway inflation, U.S. President Jimmy Carter launched efforts to deregulate airlines, railroads, and trucking. His successors would leap onto the deregulatory bandwagon, turning their attention to telecommunications and finance. Meanwhile, U.S. attempts to encourage global trade were succeeding all too well. As imported cars, steel, and consumer electronics piled into domestic markets, doubts snuck in, too: “Could it be that a bunch of foreigners, particularly the Japanese, know more about management than we do?”
Technology, especially computer technology, steadily increased the calculating power available to the numbers people, in the form of the integrated circuit (late 1950s), the minicomputer (mid 1960s), the microprocessor (early 1970s), and then the microcomputer (mid 1970s), soon to morph into the ubiquitous PC. Greater Taylorism, launched in the era of slide rules, had found the means to create ever more precise models of how a business should perform.
As stocks began to heat up again in 1982—it had taken the Dow Jones Industrial Average 10 years to crawl back to its 1972 high of 1,000—a lively market for corporate control developed. Old constraints against hostile takeovers fell away, new sources of funds became available to potential acquirers (think junk bonds), and financiers realized there was money to be made in buying up companies that had a dog’s breakfast of businesses and selling off the parts. More than 25% of the firms on the Fortune 500 list in 1980 were acquired by 1989.
“There is only one valid definition of business purpose: To create a customer.”—Peter Drucker
Shareholder capitalism’s ascent over stakeholder capitalism.
During this period of intense change, the purpose of strategy, and indeed of corporate management, took on new clarity: It was to create wealth for shareholders. To be sure, that idea had always been around, dating back to the buccaneering financiers of the 19th century. But during management’s era of good feelings, a more inclusive notion had taken root in some quarters. As Michael Lind notes in Land of Promise, his recent economic history of the United States, in 1951 the chairman of Standard Oil of New Jersey had proclaimed, “The job of management is to maintain an equitable and working balance among the claims of the various directly affected interest groups…stockholders, employees, customers, and the public at large.” Sometimes labeled “stakeholder capitalism,” this broader-minded conception would be steadily chipped away at by partisans of “shareholder capitalism,” to the point that it almost disappeared from discussions of corporate purpose.
Management thinkers responded to the new pressures besetting corporations by sharpening their focus. With his 1980 book, Competitive Strategy, Michael Porter did more than anyone else to give strategy an academic rigor it sometimes lacked among consultants. His next book, Competitive Advantage (1985), would arm companies with concepts such as the value chain, enabling them to break down every stage of their operations into units that could be costed out, benchmarked, and measured for competitiveness.
As the economy quickened and the deal making and excitement on Wall Street mounted, more people sought to join the ranks of management—or at least to obtain the entry credential, the MBA, whose luster was burnished by the salaries paid to degree-holders by investment banks and management consulting firms. Some 26,000 MBAs had been awarded in the United States in 1970; by 1985 the number was up to 67,000.
At business schools, strategy experts such as Porter displaced the faculty who had been teaching “business policy.” Professors of finance assumed greater pride of place, shoving toward the periphery teachers of the “softer” subjects—human behavior, organizational dynamics—once central to Donham’s and Mayo’s visions of management education. Across the board, both in the corporate world and in academia, the numbers people seemed to be winning, bringing greater quantitative precision to increasingly specialized domains of expertise.
But they weren’t necessarily winning the hearts of the wider managerial population. In 1982 two McKinsey consultants, Tom Peters and Bob Waterman, published In Search of Excellence. It was a paean to the importance of culture in organizations, an attack on strategy as a merely quantitative exercise, and a celebration of the human element in making companies successful. “Soft is hard,” Peters observed. The book sold more than 6 million copies, astonishing its authors and alerting the publishing industry to the existence of a huge audience for books on managerial wisdom. It didn’t hurt that Excellence extolled U.S. companies and their practices, this shortly before President Ronald Reagan would announce “It’s morning again in America” and well after everyone outside Japan had grown tired of being lectured on the superiority of that country’s management methods.
For the next 30 years, right down to our own day, the two strains of thought—the numbers-driven push for greater profitability, and the cry for more respect for the “humanity of production”—would coexist in uneasy tension. And not just on the commanding heights of management thought, where ideas, books, gurus, and academics battle for attention. The debate has also taken place in conference rooms and offices—and in the minds of executives deliberating tough choices—where the fates of businesses, and people, are decided.
Fueled by tax cuts and deficit spending under President Reagan, the U.S. economy surged ahead after 1982. But, unlike in the 1950s, this rising tide didn’t lift all boats. In the name of beating foreign competition, completing (or avoiding) takeovers, and serving the interests of shareholders, it became acceptable to sell off businesses that didn’t fit the new corporate strategy and to lay off battalions of workers. Most famously at General Electric under Jack Welch, the old employer-employee contract, with its implicit assurance of something like lifetime employment, was ripped up. And the stock market cheered, as did many an individual investor lured back into the action by rising share prices and a dazzling variety of mutual funds, 401(k) plans, and individual retirement accounts.
Management literature provided the intellectual undergirding for the new aggressiveness. Since the 1960s, strategists had sounded a wake-up call about the need to know your competition, an adjuration almost totally neglected by the big thinkers of the prior era. Exploiting tools such as the Freedom of Information Act (1966) and databases such as LexisNexis (1970s), consultants helped clients fill in the details to see how their situations fit the frameworks devised by Porter and others.
“Strategy is about making choices, trade-offs; it’s about deliberately choosing to be different.”—Michael Porter
In two notable HBR articles in the 1980s, Michael Jensen resurrected agency theory, providing a rationale for takeover activity. The idea held that although companies existed to enrich shareholders, too often their managers developed interests of their own, particularly if they didn’t have a sufficiently large ownership stake in the enterprise. To keep their eyes on the prize, they needed both the stick of potentially being acquired and the carrot of incentives linked to stock price.
Conveniently, in 1993 the U.S. Congress changed the tax code to encourage the grant of stock options as executive compensation. As Lind points out, by the end of the decade more than half the payout to the typical Fortune 500 executive took that form. So what if the ratio of CEO pay to that of the humble cubicle worker was climbing to Olympian heights? Think of all the value the CEO was creating. OK, perhaps this wasn’t the kind of moral leadership that Wallace Donham had had in mind for the managerial class, but he was long dead, his voice largely forgotten.
With the reengineering movement, the imperative to exploit the latest information technology turbocharged the push for efficiency and competitiveness. Obliterate your existing processes, admonished Michael Hammer in a celebrated 1990 HBR article and then with coauthor James Champy in a best-selling book. Redesign them with your ultimate customer in view, the sight line provided by the wonders of new electronic communications. Companies flocked to reengineering’s banner, but too often merely as a lofty-sounding justification for layoffs. Eventually so many corporate innocents had been slaughtered in its name that the movement was discredited, to be held up later as a chief example of a management fad gone horribly wrong.
The shift toward leadership and innovation.
Advocates for the humanity of production, meanwhile, pursued a blurrier line. Within a couple of years of the publication of In Search of Excellence, BusinessWeek reported that a third of the companies held up in the book no longer met the authors’ criteria for superiority. This embarrassment suggested a more general confusion that was to dog the humanists—namely, just what were the managerial practices that would bring out the best from employees, and how were they to be measured and their value to the company calculated?
Strategy at least had a fairly clear paradigm and set of frameworks for successive generations of thinkers to build on. Champions of shareholder value gloried in their single yardstick, the stock price, as the measure of all things. By comparison, students of human behavior in organizations were all over the landscape. Scholars in the field decried what one of them, Jeffrey Pfeffer, called its “fairly low level of paradigm development” and the fact that they couldn’t agree among themselves about which problems most needed to be addressed.
Such eclecticism, to put it charitably, was mirrored on the list of business best sellers. Books on how to be a learning organization jostled with ones on the wisdom of teams, the power of corporate loyalty, the necessity of core competences, the importance of delighting customers, and the imperative to navigate change, as in figuring out who’d moved your cheese.
If the thinking on the human side coalesced at all, it was around two themes: leadership and innovation. Over the last two decades of the 20th century, business schools revised their mission from “educating general managers” to “helping leaders develop.” Unfortunately, despite some inspiring writing on how leaders differ from managers, no consensus has formed on exactly what constitutes a leader or how those exalted beings come to exist. (The current downturn has raised questions about corporate leaders’ sources of authority as well. See the sidebar “Who Made You the Boss?”)
Innovation invites less controversy. Both humanists and numbers people recognize its critical, company-saving importance in an era when new rivals can emerge suddenly from nowhere, industry leadership can change hands in a trice, and competitive advantages once thought unassailable are eroded in months. Books by Richard Foster and Clayton Christensen demonstrated to a wide managerial audience how new technologies systematically displace old ones, in the process overthrowing the pecking order of entire industries.
“If you only do what worked in the past, you will wake up one day and find that you’ve been passed by.”—Clayton Christensen
Innovation is where satisfying the fierce demands of the market depends, as never before, on eliciting the best from the humanity of production. No one yet appears to have been able to automate the invention of the new or to come up with machine-replicable substitutes for the spark of human imagination. Perhaps the biggest managerial challenge facing the 21st-century company will be finding ways to free that spark, resident in employees, from the organization’s tidal pull to keep doing the same old things.

The age of management isn’t over, of course. Management thought is spreading to wherever capitalism and more-or-less free markets find a home. By some counts that home has welcomed 3 billion new inhabitants over the past two decades, with the fall of Soviet communism and the economic liberalization of China and India. Capitalism and the managerial ideas that struggle to make it more productive have indisputably rendered the world richer and better educated. And not just the capitalist and managerial elite, including the estimated half million people awarded an MBA or its foreign equivalent last year: The percentage of people worldwide living below the poverty line has dropped dramatically in the past 50 years, while literacy rates have steadily climbed.
In the world of offices, factories, stores, and even cubicle farms, particularly those of large organizations, people expect to be treated with fairness and respect (even if their long-term job security is less assured). Blatant sexism, naked bullying, and outrageous managerial behavior are more likely to be called out, even if they haven’t been totally eliminated. Upon going through the front door of a company almost anywhere in the capitalist world, a visitor can typically assume that certain rules will be followed, certain procedures observed.
True, managerialism hasn’t solved all the problems in the workplace. Most notably, it hasn’t figured out a way to employ everyone who wants a job, though achieving that goal probably requires the efforts of government and economists as well, along with a public consensus. Ironies, occasionally cruel, have accompanied managerialism’s rise. As Peter Drucker observed, when he was a boy growing up in Vienna the people who put in the longest hours were those at the bottom of the economic ladder—the lady’s maid expected to wait up for her mistress to return home from the opera. Nowadays it’s the executive elite, fielding 300 e-mails a day and calls from around the world, who toil into the night.
And managerialism’s work is not finished. Because management is, finally, about how to make humans and their organizations more effective—and because humans stubbornly cling to their propensity to be, well, human—there will never be “the one best way.” But there’s almost always a better way. Management will keep looking for it.