In just four years, the Malcolm Baldrige National Quality Award has become the most important catalyst for transforming American business. More than any other initiative, public or private, it has reshaped managers’ thinking and behavior. The Baldrige Award not only codifies the principles of quality management in clear and accessible language; it also provides companies with a comprehensive framework for assessing their progress toward the new paradigm of management and such commonly acknowledged goals as customer satisfaction and increased employee involvement.

There is, however, another side to the Baldrige Award. Despite its popularity, the award has come under increasingly heavy fire. Critics cite three major deficiencies. First, reports of enormous investments by companies intent on winning the Baldrige contest have led to claims that “the Baldrige can be bought.” Both Xerox, a 1989 winner, and Corning, a 1989 finalist, admit to having spent, respectively, $800,000 and 14,000 labor hours preparing applications and readying employees for site visits by Baldrige examiners. Second, Baldrige critics note that the award does not reflect outstanding, or even exceptionally good, product quality. Here they single out Cadillac, a 1990 winner that has yet to crack the top ranks of most surveys of automobile quality. And third, the poor sales and earnings growth of some past winners has led critics to question whether the award is in fact an accurate gauge of a company’s competitiveness and profit potential. The evidence here is Cadillac again, but also Motorola and Federal Express.

At first blush, these criticisms may seem plausible. But a more thoughtful analysis of the award, its criteria, and its judging process shows that the criticisms in fact reflect deep misunderstandings. This confusion is not surprising: part of it stems from the confidentiality requirements of the award itself, which have kept the necessary data out of reach. While Baldrige winners are required by law to share their experiences publicly, critical comparative information—the applications, scores, and examiners’ evaluations of all companies that have applied, winning and non-winning—remains confidential under strict rules set by the National Institute of Standards and Technology. Consequently, the award has been subject to little systematic study.

Ideally, NIST should find some way both to respect confidentiality and to make the data available as a learning tool for American quality improvement. In the absence of the data, the best alternative is to tap into the people who are most familiar with the Baldrige Award: the Baldrige judges, senior examiners, and examiners. Last June and July, I interviewed 20 of these quality experts to seek their views on the award, the evaluation process, and how companies can best use the criteria. (See the insert “Baldrige Interview Participants.”) Their comments and candid observations provide an accurate and sophisticated understanding of what the Baldrige Award is and, equally important, what it is not.

What Is the Baldrige Award?

The award originated from the Malcolm Baldrige National Quality Improvement Act, signed by President Ronald Reagan on August 20, 1987. That act, named after a former Secretary of Commerce, called for the creation of a national quality award and the development of guidelines and criteria that organizations could use to evaluate their quality improvement efforts. Awards were to be given in three categories—manufacturing, service, and small business—with no more than two awards per category per year. The legislation gave favorable mention to a number of management principles and tools: worker involvement, strategic quality planning, statistical process control, management-led and customer-oriented programs. But the act said little about the award’s scoring system, judging process, or criteria for evaluation. It was left to the National Bureau of Standards (known today as the National Institute of Standards and Technology, or NIST) to work out the details.

Collaborating closely with industry experts, NIST produced the seven-category, 1,000-point scoring system and the three-level judging process that are still used today. (The criteria and subcategories have evolved over time; for the most recent version, see the insert “Scoring the 1991 Baldrige Award.”) Companies submit applications of up to 75 pages (up to 50 pages for small businesses) describing their quality practices and performance in each of seven required areas—leadership, information and analysis, strategic quality planning, human resource utilization, quality assurance of products and services, quality results, and customer satisfaction—and are then graded by teams of trained examiners. The Baldrige judges, who come from industry, academia, and consulting firms and are all recognized quality experts, choose a small set of high-scoring applicants for site visits. A team of senior examiners and examiners visits each company for at least several days, conducting interviews and checking documents. The judges then meet a final time to review the top applicants and to select winners.

This process, so simple on the surface, has produced an extraordinary level of confusion. At its core are two problems: a misreading of the Baldrige criteria and a flawed conception of the award. Together, they have made the Baldrige into something of a Rorschach test, a projective device in which people see what they want to see and from which they draw their own, often self-serving, conclusions. One way to clear up the confusion and develop a more accurate picture of the award is to take a close look at the three main criticisms—or myths—of the Baldrige Award.

Myth #1: The Baldrige Award requires large expenditures on the application and preparation for site visits.

This claim, prompted by reports of heavy up-front investments at Xerox and Corning, has settled into a highly stylized debate. Critics of the award continue to cite the Xerox and Corning examples; supporters respond by pointing to Globe Metallurgical and Milliken & Co., 1988 and 1989 Baldrige winners, who spent far less on the application process. They also note that Xerox and Corning describe their expenditures as long-term investments in quality improvement—attempts to “find the warts” and develop a new set of quality goals and initiatives rather than spending money just to bring home a prize.

Lost in this debate are the underlying assumptions that are being made about the Baldrige process. Arguing that the “Baldrige can be bought” is akin to saying that there are required steps and activities that must be described on an application or discussed on a site visit if one is to win the award—an “off the shelf” quality kit that companies buy and install. An example might be statistical process control or a cost-of-quality reporting system. Without these standard items, the argument goes, an applicant will quickly run into trouble, because they are essential to success. Better to spend the money now and put the systems in place to satisfy the predilections of the Baldrige judges and examiners.

Unfortunately, this argument reflects a complete misreading of the Baldrige Award. The award criteria are indeed strongly prescriptive on philosophy and values. But they are open-minded about practices and procedures. To win, companies must have customer-oriented quality programs that are led by senior management, a high level of employee involvement, an understanding of internal processes, and “management by fact” rather than by instinct or feel. But each company is free to choose its own precise techniques to achieve these goals, and there is room for enormous variety. For example, nowhere in the Baldrige criteria is there a requirement that stipulates the techniques to be used for problem solving. Examiners regard such standard tools as statistical process control, Pareto charts, and quality function deployment favorably because they have proven track records. But these tools are not mandatory, and any other approach is acceptable, as long as it produces verifiable, repeatable, controllable results that meet customers’ needs. In fact, in the upper ranges of scoring, tailored and idiosyncratic approaches are the norm. Several judges observed that every Baldrige winner had developed its own “house brand” of quality.

The best way to understand the Baldrige criteria is as an audit framework, an encompassing set of categories that tells companies where, and in what ways, they must demonstrate proficiency—but not how to proceed. The categories are in no sense a “to do” list, and it is simply incorrect to suggest that the criteria specify particular programs or techniques. There is no Baldrige system to be bought off the shelf. Completeness, however, is an important virtue (meaning that a company must address all 32 areas on the application), as is deployment of the quality effort throughout the organization. Here, too, it is impossible for a company to spend its way to success. Examiners not only want to know what programs are in place, they also want to know when they were introduced. With access to nearly every employee in an organization during site visits, they inevitably learn the truth.

Myth #2: The Baldrige Award is flawed because it fails to predict a company’s financial success.

Several Baldrige winners have stumbled after winning the award. The reasons have varied—design problems at Motorola, international difficulties at Federal Express, depressed demand at Cadillac—but the results have been the same: poor financial performance. Critics have seized on these problems as evidence that the Baldrige Award is doing little to enhance American competitiveness or improve corporate performance. A high score on the Baldrige, they claim, tells us nothing about tomorrow’s financial results.

Here the critics are right—but wrong. The Baldrige Award and short-term financial results are like oil and water: they don’t mix and were never intended to. As one judge commented, “I don’t believe that financial performance belongs in the award. If you put it in, then, almost automatically, there is only one category, because it would overshadow everything else. Anyway, we all know the companies that have good bottom lines. That’s not much of a secret. What we don’t know are the companies that have good total quality management processes.”

To fault the Baldrige Award for not rewarding financial success is meaningless—it was never meant to. There are no profit guarantees accompanying a high score. Indeed, winning is neither a necessary nor a sufficient condition for financial success. It is not necessary because there are routes to profitability other than superior quality management. A long-standing patent, for example, or a one-of-a-kind production process can ensure financial success even if a company falls woefully short on the Baldrige criteria. Nor are the criteria sufficient for financial success, since they leave out such vital tasks of management as effective marketing, innovative R&D, and sound financial planning. None of these is considered in the evaluation process, leading to what one examiner has called “the problem of the Baldrige-winning buggy-whip manufacturer.”

Some Wall Street analysts carry these arguments a step further: they actually short the stock of Baldrige winners in anticipation of poor financial performance. Winning, after all, is usually followed by a letdown, and the managers of these companies have grown accustomed to a long-term perspective rather than the pursuit of quarterly earnings. While this reasoning may work for brief periods, over any reasonable time horizon, it is destined to fail. Baldrige winners are as vulnerable as other companies to economic downturns, changes in fashion, and shifts in technology. But they are far better positioned to recover gracefully because they have superior management processes in place. The Baldrige Award is thus a strong predictor of long-term survival and a leading indicator of future profitability. In fact, the General Accounting Office has found that Baldrige winners and semifinalists perform well on a host of important operating and financial measures—quality, of course, but also market share, return on assets, customer satisfaction, and employee relations. (See the insert “Does the Baldrige Award Improve Corporate Performance?”) These are the kinds of indicators that suggest future profitability.

Ideally, Wall Street should be using a longer, less-biased lens to assess the impact of the Baldrige Award. It might, for example, apply the same standards that it uses to judge research and development spending. Like R&D, quality efforts are long term in nature, with an uncertain payback. And like R&D, quality initiatives are costly up front. Yet in both cases, continued investment is essential to competitive success.

Myth #3: The Baldrige Award does not honor superior product or service quality.

This is also known as the “Cadillac criticism” because of the hue and cry over Cadillac’s 1990 Baldrige Award. Critics claimed that the company’s cars had yet to distinguish themselves, earning less than stellar product ratings from sources such as Consumer Reports and J.D. Power. The August 1991 J.D. Power report ranked Cadillac eighth in overall customer satisfaction for 1990, down from fourth place in 1989.

Like the criticism in myth #2, this concern, while accurate, misses the point. The Baldrige Award was never designed to reward product or service excellence alone. Quality results do matter; 250 of the available 1,000 points are for product, service, process, supplier, and customer satisfaction results. But the bulk of the award focuses on management systems and processes. There Cadillac performed extremely well.

Yet a central question remains: How should the Baldrige Award be positioned for maximum effectiveness? At one extreme lies a narrowly defined award, limited to product and service excellence and, perhaps, traditional quality control. At the other extreme lies an all-encompassing award, designed to reward overall management excellence and not quality management alone. Surprisingly—and importantly—in its current form, the Baldrige Award sits firmly between the two poles.

The award is certainly not limited to product and service excellence or traditional quality control. The judges award only a quarter of the total points for quality results, and categories such as employee well-being and morale and public responsibility (which includes business ethics, environmental protection, and waste management) are well outside the scope of a narrowly focused quality award. Yet, at the other extreme, the criteria clearly omit areas of great importance to managers: innovativeness, marketing savvy, strategic positioning, organization design, financial performance, and countless others. Baldrige is in no way a complete award for corporate excellence.

This positioning opens the award to criticism from both sides. The Cadillac criticism suggests that the Baldrige criteria are not narrow enough; Tom Peters, among other management consultants, argues that additional categories are required, noting that the award fails to address “bureaucratic bulge.” Is the Baldrige Award stuck hopelessly in the middle, with little chance of redemption? Far from it. The middle ground is precisely where the Baldrige should be.

Consider the two extremes. If the Baldrige Award became a pure product or service award or was limited to traditional quality control, it would no longer capture the attention of senior managers. Winning would no longer be viewed in personal terms, as something for which they shared responsibility; it would become the job of the company’s engineers, factory workers, and QC department. The groundswell of interest created by the Baldrige—and especially the animated conversations now occurring in executive suites—would soon fade away because it is precisely the award’s comprehensiveness that is so appealing to corporate leaders.

But if comprehensiveness is an asset, why not include additional categories and make the award a more demanding test of management excellence? Largely because such an award would be almost impossible to judge. Applications would undoubtedly run several hundred pages, and knowledgeable examiners, skilled in all required areas, would be hard to find. Moreover, determining the weights to be placed on each category—Is innovation worth 5% of the total score? 10%? What about effective advertising?—would be enormously difficult because the process would rapidly become political. Different industries have different competitive needs, and each would be certain to lobby for a scoring system slanted in its favor.

The Baldrige’s current positioning, then, is in many ways ideal. The award is neither so narrow that it is uninspiring, nor so broad as to be unmanageable. While this leaves it an easy target—as one examiner put it, “We’re sort of straddling the fence”—the position is ultimately one of strength. Since 1988, NIST has distributed over 450,000 copies of the application guidelines throughout the world. There may be flaws in the Baldrige criteria, but to judge by this number, the award stands at the top of many managers’ lists as a framework for assessing management processes and launching improvement programs.

Using Baldrige as a Road Map

Understanding Baldrige is one thing, putting it to use another. After all, quality improvement takes time. It is a developmental process with many steps, not a “quick fix” that managers install and then forget. A company can create a world-class quality system only with years of work and constant refinement.

For companies about to embark on this journey, the first question is obvious: Where do we stand? Here, Baldrige provides a helpful road map. Most judges and examiners agree that there are clear patterns in the applications they have reviewed, with distinct bands of performance as measured by the scores in each category. Companies can be arrayed along a continuum, from best to worst. The mature, high-scoring quality programs, the medium-rung performers, and the low scorers will each cluster around common profiles and shared strengths and weaknesses.

At the heart of these groupings are two concepts central to the judging process: deployment and integration. Deployment is derived from the military term meaning “to spread troops across an extended front”; today, in business, it retains much of the original flavor. Quality experts use the term in two ways: to measure the extent or spread of the quality effort across an organization (what I call horizontal deployment) and to measure the extent to which strategic, customer-oriented objectives have made their way from the CEO to lower levels of the organization (what I call vertical deployment).

Horizontal deployment is important because it provides a quick, but effective, maturity test. A company that has firmly established its quality program in manufacturing or operations but has made little dent in support areas will not receive a high Baldrige score. Such programs are typically in the “adolescent stage”—they have grown up, but have yet to move away from their original home. Examiners work hard to distinguish them from more mature programs, where quality activities exist throughout the company and have become pervasive.

Examiners use various techniques to assess horizontal deployment. One examiner noted that, on site visits, he heads immediately for a company’s legal department or maintenance group to see how it has responded to the quality effort; another seeks out the company’s “worst” department; a third pushes hard to uncover marginal or secondary product lines where the quality effort may not yet have taken hold. (As one judge observed, site visits uncover the darnedest things. One of his favorites, in paraphrase: “We build Ferraris, and the Ferraris are terrific. And, by the way, we didn’t mention this on our application, but we also make toasters. They’re not so hot.”)

Vertical deployment, by contrast, involves the tying together of lower-level activities and strategic goals. It is closely linked to the Japanese concept of policy deployment, or hoshin planning: the cascading of senior management’s vision and objectives down through the organization so that, at each succeeding level, activities are aligned with, and derived from, higher goals. The top and the bottom of the organization will then move in concert. As one judge remarked: “If the CEO says, ‘We’re going to go West, we’re doing it by train, and we’re leaving Wednesday,’ the person [in the factory] knows when to start packing.”

Here too, companies differ sharply in maturity, and examiners have developed a number of litmus tests. One, used on virtually all site visits, is to ask the same question, usually something about customers or a major quality initiative, of both the CEO and shop floor employees. If their answers are the same, vertical deployment is high; if they diverge, it is low. A related test involves quality action teams, consisting of low- and mid-level employees, and the degree to which their activities are independent and unrelated or fit within a larger, well-understood strategy and plan. Low-scoring companies have few quality action teams and offer them little or no strategic guidance; medium scorers have many teams that work with few ties to the strategic plan; and high scorers have a large number of teams all working to meet strategic goals.

Integration is closely related to deployment, and at times, judges and examiners seem to use the terms interchangeably. Integration refers to the degree of alignment or harmony in an organization—whether different departments and levels speak the same language and are tuned to the same wavelength. This concept is often hard to grasp, and an analogy may help. Consider the children’s game of telephone. Several children stand in a circle and, one by one, pass along a message or phrase by whispering it in their neighbor’s ear. The last person in the circle says the phrase out loud; to everyone’s delight, the final wording is usually a far cry from the original.

In the typical game of telephone, integration is poor. Each child remains in a separate world, and there is no attempt to improve communication or eliminate the sources of bias and distortion. A well-integrated telephone group would be quite different. Because of prior training, common goals, continued practice, and years of collaboration, there would be few slips in communication. The group would function like a superb relay team, with flawless hand-offs. Moreover, the hand-offs would be quick. Because of the team’s shared understanding, it would communicate words and phrases rapidly, with little fear of error. Speed is a particularly good measure of integration because it reflects the tightness of the coupling within the organization. Sharply reduced cycle times are both a result and a key indicator of organizational integration.

Cycle times measure an organization’s speed of response and are an increasingly important part of the Baldrige examination. In part, this is because improvements in quality and speed are often traceable to the same sources—work simplification, for example, or process redesign. But equally important is the role of cycle time as a proxy for, and a measure of, integration. New product development times can be cut in half only if engineering and manufacturing are better integrated; patent filings can be done in three rather than ten months only if lab technicians and lawyers are more tightly linked. This observation suggests a simple litmus test: Do critical activities and processes (new product development, billing, patent filings, customer complaint resolution) still take the same amount of time that they did five years ago, or have they been speeded up? In most cases, the greater the reduction in cycle time, the tighter the integration.

With these concepts in mind, we can return to the three bands of performance observed by judges and examiners. Low scorers—300 points or less—are weak in most areas. They often have the right ideas, but implementation is poor. At best, there are “islands of excellence”: one or two showcase projects, or a single Baldrige category in which the company performs well. In some cases, this is leadership, because senior management has aggressively led the quality effort (even though it has little tangible to show for it); at other times, it is an area of traditional strength, like human resource utilization at a service company or quality assurance at a high-tech manufacturer. Breadth, however, is lacking, and most categories receive mediocre or poor scores. The result is a single “spike” in the scoring. (See the chart “Mapping Progress Through the Baldrige.”) Measurable improvements in performance are also lacking, with spotty trends and few substantial gains. In part, this is because the quality programs at low-scoring companies have only recently taken hold, and progress takes time. But equally important is the fact that deployment is nonexistent outside of manufacturing or operations, and integration is poor. As one examiner put it, “At these companies, there is no evidence that different functional areas work well together.”

Mapping Progress Through the Baldrige: These charts are illustrative only and are derived from discussions with Baldrige judges, senior examiners, and examiners. They depict general trends of how companies progress through the Baldrige criteria over time.

Companies in the middle rung—scores of 400 to 600 points—have more balanced programs. Typically, they are strong in two or three Baldrige categories, with pronounced spikes, but significantly weaker in others. The strengths are often in the core Baldrige categories of leadership, human resource utilization, and customer satisfaction, and the weaknesses are in information and analysis, quality results, and strategic quality planning. Companies in this group have senior managers who are committed to quality, understand its concepts and methods, and are actively engaged in improvement efforts. They have encouraged employee involvement, begun training programs, and initiated problem-solving teams. They have both surveyed and satisfied their customers.

Weaknesses, however, do exist. Many of the medium-scoring companies are still using packaged programs or off-the-shelf solutions: employee suggestion programs, customer surveys, or statistical process control packages that they have purchased from outside vendors. They remain consumers of quality management, not creators, and have yet to put a personal stamp on their programs. Deployment is incomplete outside manufacturing or operations, and lower-level activities are not fully aligned with strategic goals. Most teams, for example, are still “doing their own thing.” A complete quality information system—an important tool for integration and alignment—is not yet in place, nor are quality results impressive, sustained, or organizationwide.

The top companies—scores of 700 or more—have balanced, outstanding performance across the board. There are no spikes in their scoring; all seven Baldrige categories are rated excellent. At this level, the distinction between winners and near-winners is a matter of degree. But as one judge noted, the winners have certain traits in common: “What really blows us away are integration and deployment. You can’t go anywhere in the company that isn’t affected by quality management; the right hand and the left hand know what they’re doing, and cycles are fast.”

As quality programs reach this level, they develop signatures of their own. Each winning company has a mark of distinction, an area of special expertise that fits its culture and sets its quality program apart. At Milliken, the focus is people and teamwork; at IBM-Rochester, it is technology transfer; and at Federal Express, it is problem solving. In each case, the company has used its central theme to build a tailored, personalized approach to quality management.
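
For readers attempting a rough self-assessment, the scoring bands described above can be boiled down to a few lines of code. The Python sketch below is illustrative only: it simply restates the judges’ three bands and leaves the in-between scores unclassified, since the interviews did not characterize them.

    def baldrige_band(total_score):
        """Map a total self-assessment score (out of 1,000 points) to the
        performance bands described by Baldrige judges and examiners."""
        if total_score <= 300:
            return "low scorer: islands of excellence, weak deployment and integration"
        if 400 <= total_score <= 600:
            return "middle rung: committed leadership, but packaged programs and spotty results"
        if total_score >= 700:
            return "top tier: balanced, outstanding performance across all seven categories"
        return "between bands: not characterized in the interviews"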

Step-by-Step Through the Baldrige Categories

Let’s say you’ve conducted a Baldrige self-assessment and find yourself in the middle of the pack. Your quality program is alive and kicking, but hardly surging ahead. You’ve made progress and are committed to more, but realize that improvements are needed in all seven Baldrige categories. What next?

Clearly, the answer is to identify areas of weakness and set targets for improvement. Unfortunately, that is easier said than done. Few managers have the required understanding of quality management or the Baldrige categories to make concrete proposals about “how to get from here to there.” For them, the concepts remain vague and the categories distressingly open-ended. Yet according to Baldrige judges and examiners, in every category, there are simple litmus tests that managers can apply to identify strengths or weaknesses and suggest needed improvements.

Category #1: Leadership.

The twin pillars of this category are symbolism and active involvement. Symbolic acts are required to cement the importance of quality in the minds of employees and to elevate it above financial and efficiency goals, which have long dominated decision making. The examples are legion: Bob Galvin, the former CEO of Motorola, restructuring his policy committee meetings so that quality was the first item on the agenda—and then walking out after quality issues had been discussed and before financial matters were introduced; Roger Milliken, the CEO of Milliken & Company, insisting that he and his senior management team take an intensive course in statistical methods before the material was taught to lower-level managers and employees; David Kearns, the former CEO of Xerox, holding up a vital new product launch because of “minor” quality problems, despite strong protests from the sales force. In each of these cases, the CEO used the same, heroic approach: he undertook a highly visible action, involving personal inconvenience or risk, to reinforce the company’s quality message and capture the attention of the work force.

To the dismay of many chief executives, heroic acts are not enough to ensure a successful quality program. It takes day-to-day leadership as well. This can take many forms: the CEO and senior managers may help teach quality training classes; they may lead or belong to quality improvement teams; they may personally conduct quality systems reviews; or they may meet individually with customers and employees. (One judge, who considers frequent meetings between the CEO and customers to be essential for excellence, calls it “the Mayor Koch test”— named after the three-term mayor of New York who was always asking his constituents, “How am I doing?”) Whatever the form, senior managers’ day-to-day quality activities must involve real commitments of time and energy and must go well beyond slogans and lip service.

One way judges separate rhetoric from reality is by reviewing logbooks or calendars. On site visits, many examiners ask the CEO and other senior managers to open their calendars and review their last few weeks of activity. How much time did they spend talking with customers? Meeting with employees? Leading quality improvement teams? Reviewing the progress of the quality program? What percentage of their total time was spent on quality-related activities? There are no magic numbers that define acceptable levels of commitment. Still, these reviews are extremely revealing because they marshal hard data that show whether senior managers have actually been “walking the talk.” James R. Houghton, the chairman and CEO of Corning Inc., was so taken with this approach that he reviewed his calendar for two years, 1987 and 1989, to discover how he had allocated his time and to see whether changes were necessary. (See the insert “Lessons in Leadership: James R. Houghton.”)

At the very best companies, leaders share two additional qualities: they have intimate knowledge of how their company’s work actually gets done—as one examiner explained, they have developed “a sense of the reality of the organization”—and they possess impressive listening skills. The two are obviously related. A willingness to listen carefully to employees and customers and, if necessary, to hear the bad news means that senior managers will not become isolated or removed from critical feedback. Managers who jump several levels below their direct reports to interact directly with the grass roots of the organization (a practice known as “skip-level communication”) almost always know what is really going on. The technique eliminates many traditional biases that plague companies and leads to heightened sensitivity and an almost intuitive understanding of how the organization works.

Category #2: Information and Analysis.

Because Baldrige winners must demonstrate “fact-based management”—a reliance on hard data, not assumptions, when making decisions—this category is more important than its point total would suggest. The company’s information base must be comprehensive, accessible, and well validated. It must cover all critical areas (customers, competitors, employees, suppliers, and internal processes), with systems that assure the data’s consistency and correctness. Moreover, the data must be easy to find and use. Collecting customers’ comments and then locking them in the files of the market research group or developing detailed process knowledge that is available only through a confusing computer program serves little purpose.

A company’s approach to benchmarking—how it collects and uses information on other organizations’ practices and performance—is also assessed in this category. It can be a powerful tool, but all too often, according to the judges, it is misunderstood. Too many companies approach benchmarking as one-stop shopping: they go to Motorola or another Baldrige winner and use it as a benchmark for all processes, even though its managers would be the first to admit that they are best-in-class only in some activities and weak in others. It is also a mistake to benchmark results alone, without looking at a company’s processes or practices. Identifying a competitor with 25% higher capital productivity in its rolling mill is of little use without an understanding of how it performs the operation.

The truly excellent companies use benchmarking as a catalyst and enabler of change, a learning process rather than a score card. They scan the world widely for organizations that are skilled at what they do, visit them to gain a better understanding of their processes and ways of working, and use the findings to stretch their imaginations and develop new ways of operating. They seldom confine the search to direct competitors: Xerox has benchmarked American Express’s billing processes and L.L. Bean’s approach to distribution. And the benchmarking goes well beyond measures of performance. Moreover, from beginning to end, the approach is purposeful. A common benchmarking error is to visit superior companies without a clear agenda, just to see how they work; a targeted approach is invariably more productive. Managers begin with a business need, identify precisely what needs to be benchmarked, target the company that is best-in-class, conduct the benchmarking visit, and then incorporate their findings directly into the strategic planning process.

Category #3: Strategic Quality Planning.

Strategic quality plans are the glue holding together a company’s quality effort. The plans need not be elaborate, stand-alone documents; at the best companies, they are practically indistinguishable from the business plan. Far more important than their form is their content; and on that dimension, excellent quality plans have much in common. All are concrete, focused, integrated, and aggressive.

As one examiner commented, “What we’re looking for are two or three specific goals in a one- to two-year period, and the fact that the company can tell us, explicitly and specifically, what they’re going to improve and why.” Endless promises and laundry lists of objectives win few points, as do hazy, ill-formed goals. It is far better to seek “a 20% improvement in the reliability of our three major product lines” or “a 50% cut in telephone waiting times” than it is to propose grandiose objectives, like “becoming the auto industry’s supplier of choice” or “being recognized as the world’s best metal-working company.” These last goals lack focus and provide few guidelines; they are general enough to justify almost any behavior.

The best quality plans provide alignment and integration. They incorporate the findings of benchmarking visits, use customer data to drive goal-setting and improvement activities, dovetail neatly with the business planning process, and provide an umbrella for a constellation of quality initiatives and projects. At the winning companies, they also include aggressive “stretch” goals: staggering rates of improvement—tenfold or hundredfold reductions in defects over a five- to ten-year period—that cannot be accomplished without massive changes. They force companies to rethink the way they do business and ensure that complacency is kept at bay.

Category #4: Human Resource Utilization.

The idea that companies should empower their employees and unleash the full potential of the work force did not arise with the Baldrige Award. It has been a staple of the human relations movement for years. But the award has provided a powerful boost, because it has added precision and compliance tests.

To see if empowerment really exists, examiners look at the ability of frontline employees to act in the interest of customers without getting prior approval. Can a saleswoman, on her own authority, make a $5,000 adjustment for a customer, or is she limited to $10 or less? Can a customer service representative deviate from established procedures if he feels it will help a client, or must he clear it first with his boss? Within the factory, do shop floor employees have access to a “stop the line” button that they can use to halt the assembly line if they detect quality problems? Because these steps are so much more effective when they are accompanied by a supportive environment, examiners probe further on supervisory practices. What happens when things go wrong? Are employees punished, or do they receive coaching and support? Is personal initiative valued or feared?

There are other, more tangible tests of empowerment and involvement. The number of employee teams is one; another is the number of employee suggestions. In both cases, effectiveness matters: the volume of ideas is less important than the percent that are implemented. To encourage the cooperation of their employees, the best companies have tight loops for responding quickly to proposals—within 24 hours or, at most, 48 hours—and translating them into action. As one judge observed, “Good companies tell you how they collect employee suggestions. Great companies tell you how they use employee suggestions.”

Because a well-trained employee is more likely to contribute than one who lacks essential skills, examiners probe to find ample education and training. Quality training involves a package of skills that includes increased awareness, problem-solving tools (statistics, data analysis, customer-supplier relationships), group process skills (leading meetings, teamwork, making presentations), and job-specific skills. All are necessary, and all must be deployed widely. As quality programs mature, companies typically spend a larger percentage of their revenues on education and training. They also tighten the links between training and application, because the sooner lessons can be applied, the more likely they are to stick. The best training programs couple skills training with real-time problem solving and feedback. After employees have completed the programs, they are asked to reevaluate and propose changes; these are incorporated in the next version. Follow-up systems are also in place to ensure that training programs produce the desired results. If they do not, the programs are retooled.

Monitoring of this sort is simply one type of bottom-up communication; others include attitude surveys and informal meetings of managers and lower-level employees. Even at successful companies, this type of communication is rare. One judge observed, “At some level of excellence, the company can tell you how the troops find out what the leaders are thinking. And they do that really precisely. But they fail to tell you how the troops tell the leaders what [they] are thinking. That must be more difficult, because it’s more likely to be missing.”

In the end, excellence in this category comes down to a simple test: the voice of the people. On site visits, examiners practice “management by sitting around”: they sit down to lunch with a random group of employees and ask them about their work. How do they view their jobs? Are they rewarded for taking initiative? Do they see themselves as hamstrung by others (“The home office tells us to do this, the home office tells us to do that”)? Or are they passionately involved (“This is the approach I plan to take, and here’s why I think it will benefit our customers”)? Do employees covet quality awards, or are they viewed as a management ploy? The answers to the questions are the bottom line on human resource management because they capture the combined impact of training, communication, and involvement programs. One examiner put it succinctly, “Empowerment is in the eyes of the empowered. It’s that simple.”

Category #5: Quality Assurance of Products and Services.

Weaknesses in this category usually reflect poor process thinking. The idea that companies have processes—new product development, for example, or billing—that combine activities from different departments to produce a specific output is unfamiliar to many managers who are accustomed to thinking along functional lines. There, both the organization and information flow are vertical. In a process, by contrast, both move horizontally.

The poorest performing companies have little understanding of their fundamental processes; they have not mapped them using process flow diagrams, measured them quantitatively, or controlled them through statistical methods. The better companies understand the processes that are central to their business and may even have improved their performance. At a high-tech company, a key process might be new product development; at a hotel, it might be guest registration and departure. But they have little knowledge of their business and support processes, such as billing, customer service, engineering changes, or patent applications, and have made little effort to reduce their defects or cycle times. The best companies have tackled these areas as aggressively as they have pursued improvements in their core processes.

Category #6: Quality Results.

The measures in this category must be objective, like number of defects or on-time delivery rates. Examiners are looking for “meaningful trends”; they are not impressed by a single year of stellar performance, or improvements in areas of no strategic significance. Successful companies report timely data, covering at least three years, show sustained improvements on critical measures, achieve high levels of performance relative to competitors and best-in-class benchmarks, and have proof that quality initiatives, not serendipity, drove their trends.

Winners go a step further: using statistical methods, they are able to correlate their objective quality results with measures of customer satisfaction. Winners are able to say, “We knew that our customers valued on-time delivery, but we weren’t sure how much. Now we’ve established that a 10% improvement in on-time delivery performance raises our customer satisfaction scores two to three points.” This ability, which allows companies to predict changes in customer satisfaction from a small set of internal quality measures, is one of the most powerful litmus tests for separating Baldrige winners from runners-up.
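
As a purely hypothetical illustration of the kind of analysis described above, a company might fit a simple regression of customer satisfaction scores against an internal quality measure such as on-time delivery. The data, variable names, and linear model in this Python sketch are invented for illustration and are not drawn from any Baldrige applicant.

    # Hypothetical sketch: estimating how customer satisfaction moves with
    # on-time delivery, using ordinary least squares on invented monthly data.
    import numpy as np

    on_time_delivery = np.array([82, 85, 88, 90, 93, 95, 97])  # percent of orders on time
    satisfaction = np.array([71, 72, 72, 73, 74, 74, 75])      # survey score, 0-100 scale

    slope, intercept = np.polyfit(on_time_delivery, satisfaction, deg=1)

    # Under the fitted line, a 10-point gain in on-time delivery implies
    # roughly this change in the satisfaction score.
    print(f"Estimated satisfaction gain per 10-point delivery gain: {10 * slope:.1f}")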

Category #7: Customer Satisfaction.

This is the most heavily weighted Baldrige category, and the one that many examiners turn to first. They are looking for evidence of customer understanding and commitment, as well as impressive results. Companies must show that they possess customer information from a wide range of sources—focus groups, surveys, one-on-one meetings, letters to the chairman, sales visits, telephone hot lines—and that their measures are objective and validated, not anecdotal. A common failing is to access only current customers, ignoring those that have been lost or are still being pursued. Lost customers have a great deal to share about their sources of dissatisfaction; a competitor’s customers can help pinpoint vulnerabilities. Both contribute important nuances that current customers alone seldom provide. With this information, companies can stratify their customers into groups; typically, the more groups, the more refined the understanding. Xerox has divided its copier customers into six categories—large- and small-customer major accounts, large and small named accounts, general markets, and government/education—and has identified the purchase criteria in each segment.

The system that a company has for collecting, monitoring, and responding to customer complaints demonstrates its customer commitment. Surprisingly, a small number of complaints is often cause for alarm; it suggests an unwillingness to view them as a form of customer feedback rather than a source of bad news. At the best companies, complaints are easy to register and are actively pursued. Nintendo, for example, has a toll-free telephone hot line for assisting users of its video games; every caller is asked to evaluate the company’s products and offer suggestions, even if the call was for another purpose. Excellent companies then analyze the complaints they have collected, aggregate the data, circulate it internally, and insist on remedial action. What is true of employee suggestions is true of customer complaints: the best companies do not simply collect them, they use them to prevent future problems.

Yet even the best complaint-handling systems are incomplete. Because they are confined to customer dissatisfaction, they are able to tell companies little about the things that truly please customers or make them happy. Companies repeatedly fail to make this distinction; according to examiners, many are “managing complaint systems” without also “holding conversations with customers.” To overcome this, managers must ask their customers what they are looking for and observe them in action. There is no substitute for customer contact.

Winning companies set their sights still higher, aiming at customer delight. Their goal is to exceed expectations and anticipate needs that customers have not yet articulated. The Sony Walkman and Black & Decker Dustbuster are examples of products that meet this test. The required levels of understanding are daunting, but the payoff is huge. As a judge observed, “You need to know things about customers that they don’t know about themselves.” Xerox found that a “very satisfied” customer was six or seven times more likely to repurchase its products than a “satisfied” customer. Its goal today is “100% very satisfied customers.”

The Baldrige Legacy

The Baldrige Award is a demanding competition, with every company subject to the same stringent tests. Points are awarded for originality, and there are only six possible winners a year. One would expect these rules to produce clannishness and secrecy, as each company pursues its own gains. In fact, the results have been the opposite: an outpouring of cooperative behavior and a level of corporate sharing seldom seen in this country.

Business audiences have shifted from politely listening to speeches about quality to absorbing them. Xerox talks to over 100,000 people a year, many of them customers and suppliers. All come seeking information and advice. “We absolutely don’t believe this would have happened without the Baldrige Award,” said one Baldrige examiner.

The award has created a common vocabulary and philosophy bridging companies and industries. Managers now view learning across the boundary lines of business as both possible and desirable. The abhorrence for anything “not invented here,” once a source of corporate uniqueness and pride, is being replaced by an unabashed zeal for borrowing ideas and practices from others.

In many ways, this spirit of cooperation is the legacy of the Baldrige Award. Winners are compelled by law to share their knowledge; that they have done so without suffering competitively has led other companies to follow suit. Benchmarking is by definition a cooperative activity, and it is an award requirement. Even warring factions of the quality movement have united under the Baldrige banner. To become more competitive, American companies have discovered cooperation.
