
2.5 Share of Requirements
Purpose: To understand the source of market share in terms of breadth
and depth of consumer franchise, as well as the extent of relative
category usage (heavy users/larger customers versus light users/smaller
customers).
Construction
Share of Requirements: A given brand's share of
purchases in its category, measured solely among customers who have
already purchased that brand. Also known as share of wallet.
When calculating share of requirements, marketers may consider either
dollars or units. They must ensure, however, that their heavy usage index
is consistent with this choice.
Unit Share of Requirements (%) = Brand Purchases
(#)/Total Category Purchases by Brand Buyers (#)
Revenue Share of Requirements (%) = Brand Purchases
($)/Total Category Purchases by Brand Buyers ($)
The best way to think about share of requirements is as the average
market share enjoyed by a product among the customers who buy it.
Example: In a given month, the unit purchases of
AloeHa brand sunscreen ran 1,000,000 bottles. Among the households that
bought AloeHa, total purchases of sunscreen came to 2,000,000 bottles.
Share of Requirements = AloeHa Purchases/Category Purchases by AloeHa
Customers = 1,000,000/2,000,000 = 50%
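For readers who prefer to see the arithmetic in code, the following is a minimal Python sketch of the unit share-of-requirements calculation using the AloeHa figures above; the function and variable names are illustrative, not part of any standard tool.

```python
# Minimal sketch: unit share of requirements, using the AloeHa figures above.

def share_of_requirements(brand_purchases, category_purchases_by_brand_buyers):
    """Return share of requirements as a fraction (0.0 to 1.0)."""
    return brand_purchases / category_purchases_by_brand_buyers

aloeha_units = 1_000_000              # bottles of AloeHa bought in the month
category_units_by_buyers = 2_000_000  # all sunscreen bought by AloeHa households

print(f"{share_of_requirements(aloeha_units, category_units_by_buyers):.0%}")  # -> 50%
```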
Share of requirements is also useful in analyzing overall market share.
As previously noted, it is part of an important formulation of market
share.
Share of requirements can thus be calculated indirectly by decomposing
market share.
Example: Eat Wheats brand cereal has a market share in
Urbanopolis of 8%. The heavy usage index for Eat Wheats in Urbanopolis is
1. The brand's penetration share in Urbanopolis is 20%. On this basis, we
can calculate Eat Wheats' share of requirements in Urbanopolis:
Share of Requirements = Market Share/(Heavy Usage Index * Penetration
Share) = 8%/(1 * 20%) = 8%/20% = 40%
Note that in this example, market share and heavy usage index must both
be defined in the same terms (units or revenue). Depending on the
definition of these two metrics, the calculated share of requirements will
be either unit share of requirements (%) or revenue share of requirements
(%).
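The indirect calculation can also be sketched in Python, reusing the Eat Wheats figures; the names are illustrative and inputs are expressed as fractions rather than percentages.

```python
# Sketch: share of requirements recovered from the market-share decomposition
# (Eat Wheats example above). Inputs are fractions, not percentages.

def sor_from_decomposition(market_share, heavy_usage_index, penetration_share):
    """Share of Requirements = Market Share / (Heavy Usage Index * Penetration Share)."""
    return market_share / (heavy_usage_index * penetration_share)

sor = sor_from_decomposition(market_share=0.08,
                             heavy_usage_index=1.0,
                             penetration_share=0.20)
print(f"{sor:.0%}")  # -> 40%
```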
Data Sources, Complications, and Cautions
Double Jeopardy: Some marketers strive for a "niche"
positioning that yields high market share through a combination of low
penetration and high share of requirements. That is, they seek relatively
few customers but very loyal ones. Before embarking on this strategy,
however, a phenomenon known as "double jeopardy" should be considered.
Generally, the evidence suggests that it's difficult to achieve a high
share of requirements without also attaining a high penetration share. One
reason is that products with high market share generally have high
availability, whereas those with low market share may not. Therefore, it
can be difficult for customers to maintain loyalty to brands with low
market share.
Related Metrics and Concepts
Sole Usage: The fraction of a brand's customers who
use only the brand in question.
Sole Usage Percentage: The proportion of a brand's
customers who use only that brand's products and do not buy from
competitors. Sole users may be die-hard, loyal customers. Alternatively,
they may not have access to other options, perhaps because they live in
remote areas. Where sole use is 100%, the share of wallet is 100%.
Sole Usage (%) = Customers Who Buy Only the Brand in Question
(#)/Total Brand Customers (#)
Number of Brands Purchased: During a given period,
some customers may buy only a single brand within a category, whereas
others buy two or more. In evaluating loyalty to a given brand, marketers
can consider the average number of brands purchased by consumers of that
brand versus the average number purchased by all customers in that
category.
Example: Among 10 customers for cat food, 7 bought the
Arda brand, 5 bought Bella, and 3 bought Constanza. Thus, the 10 customers
made a total of 15 brand purchases (7 + 5 + 3), yielding an average of 1.5
brands per customer.
Seeking to evaluate customer loyalty, a Bella brand manager notes that
of his firm's five customers, three bought only Bella, whereas two bought both
Arda and Bella. None of Bella's customers bought Constanza. Thus, the five
Bella customers made seven brand purchases (1 + 1 + 1 + 2 + 2), yielding
an average of 1.4 (that is, 7/5) brands per Bella customer. Compared to
the average category purchaser, who buys 1.5 brands, Bella buyers are
slightly more loyal.
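The same bookkeeping is easy to express in code. The sketch below reproduces the cat food example and, because the data are already at hand, the related sole usage figure for Bella; the counts are the illustrative ones from the example above.

```python
# Sketch: average brands purchased per customer, category-wide and among
# Bella buyers, using the illustrative cat-food counts above.

brand_buyer_counts = {"Arda": 7, "Bella": 5, "Constanza": 3}
total_category_customers = 10

# Category-wide: 15 brand-customer relationships across 10 customers.
avg_brands_all = sum(brand_buyer_counts.values()) / total_category_customers   # 1.5

# Among Bella's 5 customers: 3 bought only Bella, 2 bought both Arda and Bella.
bella_only, bella_plus_arda = 3, 2
avg_brands_bella = (bella_only * 1 + bella_plus_arda * 2) / (bella_only + bella_plus_arda)  # 1.4

# Related metric: sole usage for Bella (customers buying only Bella).
sole_usage_bella = bella_only / (bella_only + bella_plus_arda)   # 0.6, i.e., 60%

print(avg_brands_all, avg_brands_bella, sole_usage_bella)
```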
Repeat Rate: The percentage of brand customers in a
given period who are also brand customers in the subsequent period.
Repurchase Rate: The percentage of customers for
a brand who repurchase that brand on their next purchase occasion.
Confusion abounds in this area. In these definitions, we have tried to
distinguish a metric based on calendar time (repeat rate) from one based
on "customer time" (repurchase rate). In Chapter 5, "Customer
Profitability," we will describe a related metric, retention, which is
used in contractual situations in which the first non-renewal
(non-purchase) signals the end of a customer relationship. Although we
suggest that the term retention be applied only in contractual situations,
you will often see repeat rates and repurchase rates referred to as
"retention rates." Due to a lack consensus on the use of these terms,
marketers are advised not to rely on the names of these metrics as perfect
indicators of how they are calculated.
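To make the calendar-time versus customer-time distinction concrete, here is a small hypothetical Python sketch; the data structures, and the choice to anchor the repurchase check on each customer's first purchase of the brand, are simplifying assumptions rather than a standard method.

```python
# Sketch: repeat rate (calendar time) versus repurchase rate ("customer time").
# Data and simplifications are hypothetical.

# Repeat rate: share of Period 1 brand customers who are also Period 2 customers.
period1_customers = {"Ann", "Bo", "Cal", "Dee"}
period2_customers = {"Bo", "Cal", "Eve"}
repeat_rate = len(period1_customers & period2_customers) / len(period1_customers)  # 0.50

# Repurchase rate: share of brand customers whose next category purchase,
# whenever it occurs, is the same brand. Here we anchor on each customer's
# first purchase of brand "X" and look at the following purchase occasion.
histories = {                    # customer -> ordered list of brands bought
    "Ann": ["X", "X", "Y"],
    "Bo":  ["X", "Y"],
    "Cal": ["Y", "X", "X"],
}
brand = "X"
buyers = {c: h for c, h in histories.items() if brand in h}
repurchased = sum(
    1 for h in buyers.values()
    if h.index(brand) + 1 < len(h) and h[h.index(brand) + 1] == brand
)
repurchase_rate = repurchased / len(buyers)   # 2 of 3, about 0.67

print(repeat_rate, repurchase_rate)
```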
The importance of repeat rate depends on the time period covered.
Looking at one week's worth of purchases is unlikely to be very
illuminating. In a given category, most consumers only buy one brand in a
week. By contrast, over a period of years, consumers may buy several
brands that they do not prefer, on occasions when they can't find the
brand to which they seek to be loyal. Consequently, the right period to
consider depends on the product under study and the frequency with which
it is bought. Marketers are advised to take care to choose a meaningful
period.
2.6 Heavy Usage Index
Purpose: To define and measure whether a firm's consumers are "heavy
users."
The heavy usage index answers the question, "How heavily do our customers
use the category of our product?" When a brand's heavy usage index is greater
than 1.0, this signifies that its customers use the category to which it
belongs more heavily than the average customer for that category.
Construction
Heavy Usage Index: The ratio that compares the
average consumption of products in a category by customers of a given brand
with the average consumption of products in that category by all customers
for the category.
The heavy usage index can be calculated on the basis of unit or dollar
inputs. For a given brand, if the heavy usage index is greater than 1.0, that
brand's customers consume an above-average quantity or value of products in
the category.
Heavy Usage Index (I) = Average Total Purchases in
Category by Brand Customers (#,$)/Average Total Purchases in Category by All
Customers for That Category (#,$)
Example: Over a period of one year, the average shampoo
purchases by households using Shower Fun brand shampoo totaled six 15-oz
bottles. During the same period, average shampoo consumption by households
using any brand of shampoo was four 15-oz bottles.
The heavy usage index for households buying Shower Fun is therefore 6/4, or
1.5. Customers of Shower Fun brand shampoo are disproportionately heavy users.
They buy 50% more shampoo than the average shampoo consumer. Of course,
because Shower Fun buyers are part of the overall market average, when
compared with non-users of Shower Fun, their relative usage is even higher.
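A minimal Python sketch of this calculation, using the Shower Fun figures above (the names are illustrative):

```python
# Sketch: heavy usage index for Shower Fun, from the figures above.

def heavy_usage_index(avg_category_purchases_by_brand_customers,
                      avg_category_purchases_by_all_customers):
    return avg_category_purchases_by_brand_customers / avg_category_purchases_by_all_customers

hui = heavy_usage_index(6, 4)   # 15-oz bottles per household per year
print(hui)   # -> 1.5: Shower Fun buyers purchase 50% more shampoo than the average consumer
```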
As previously noted, market share can be calculated as the product of three
components: penetration share, share of requirements, and heavy usage index
(see Section 2.4). Consequently, we can calculate a brand's heavy usage index
if we know its market share, penetration share, and share of requirements, as
follows:
Heavy Usage Index (I) = Market Share/(Penetration Share * Share of
Requirements)
This equation works for market shares defined in either unit or dollar
terms. As noted earlier, the heavy usage index can measure either unit or
dollar usage. Comparing a brand's unit heavy usage index to its dollar heavy
usage index, marketers can determine whether category purchases by that
brand's customers run above or below the average category price.
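The rearranged decomposition in code (a sketch; inputs are expressed as fractions, and the figures reuse the Eat Wheats example from Section 2.5):

```python
# Sketch: heavy usage index derived from market share, penetration share,
# and share of requirements, as in the equation above. Inputs are fractions.

def heavy_usage_index_from_shares(market_share, penetration_share, share_of_requirements):
    return market_share / (penetration_share * share_of_requirements)

print(heavy_usage_index_from_shares(0.08, 0.20, 0.40))   # -> 1.0
```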
Data Sources, Complications, and Cautions
The heavy usage index does not indicate how heavily customers use a
specific brand, only how heavily they use the category. A brand can have a
high heavy usage index, for example, meaning that its customers are heavy
category users, even if those customers use the brand in question to meet only
a small share of their needs.
Related Metrics and Concepts
See also the discussion of brand development index (BDI) and category
development index (CDI) in Section 2.3.

2.7 Awareness, Attitudes, and Usage (AAU): Metrics of the Hierarchy of
Effects
Studies of awareness, attitudes, and usage (AAU) enable marketers to
quantify levels and trends in customer knowledge, perceptions, beliefs,
intentions, and behaviors. In some companies, the results of these studies are
called "tracking" data because they are used to track long-term changes in
customer awareness, attitudes, and behaviors.
AAU studies are most useful when their results are set against a clear
comparator. This benchmark may comprise the data from prior periods, different
markets, or competitors.
Purpose: To track trends in customer attitudes and behaviors.
Awareness, attitudes, and usage (AAU) metrics relate closely to what has
been called the Hierarchy of Effects, an assumption that customers progress
through sequential stages from lack of awareness, through initial purchase of
a product, to brand loyalty (see
Figure 2.2). AAU metrics are generally designed to track these stages of
knowledge, beliefs, and behaviors. AAU studies also may track "who" uses a
brand or product––in which customers are defined by category usage
(heavy/light), geography, demographics, psychographics, media usage, and
whether they purchase other products.

Information about attitudes and beliefs offers insight into the question of
why specific users do, or do not, favor certain brands. Typically, marketers
conduct surveys of large samples of households or business customers to gather
these data.
Construction
Awareness, attitudes, and usage studies feature a range of questions that
aim to shed light on customers' relationships with a product or brand (see
Table 2.4). For example, who are the acceptors and rejecters of the product?
How do customers respond to a replay of advertising content?
Table 2.4 Awareness, Attitudes, and Usage: Typical Questions
Type      | Measures                    | Typical Questions
Awareness | Awareness and Knowledge     | Have you heard of Brand X? What brand comes to mind when you think "luxury car"?
Attitudes | Beliefs and Intentions      | Is Brand X for me? On a scale of 1 to 5, is Brand X for young people? What are the strengths and weaknesses of each brand?
Usage     | Purchase Habits and Loyalty | Did you use Brand X this week? What brand did you last buy?
Marketers use answers to these questions to construct a number of metrics.
Among these, certain "summary metrics" are considered important indicators of
performance. In many studies, for example, customers' "willingness to
recommend" and "intention to purchase" a brand are assigned high priority.
Underlying these data, various diagnostic metrics help marketers understand
why consumers may be willing––or unwilling––to recommend or purchase
that brand. Consumers may not have been aware of the brand, for example.
Alternatively, they may have been aware of it but did not subscribe to one of
its key benefit claims.
Awareness and Knowledge
Marketers evaluate various levels of awareness, depending on whether the
consumer in a given study is prompted by a product's category, brand,
advertising, or usage situation.
Awareness: The percentage of potential customers or
consumers who recognize––or name––a given brand. Marketers may research
brand recognition on an "aided" or "prompted" level, posing such questions
as, "Have you heard of Mercedes?" Alternatively, they may measure "unaided"
or "unprompted" awareness, posing such questions as, "Which makes of
automobiles come to mind?"
Top of Mind: The first brand that comes to mind when a
customer is asked an unprompted question about a category. The percentage of
customers for whom a given brand is top of mind can be measured.
Ad Awareness: The percentage of target consumers or
accounts who demonstrate awareness (aided or unaided) of a brand's
advertising. This metric can be campaign- or media-specific, or it can cover
all advertising.
Brand/Product Knowledge: The percentage of surveyed
customers who demonstrate specific knowledge or beliefs about a brand or
product.
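These awareness metrics are all simple proportions drawn from the same survey base. The hypothetical sketch below shows how unaided awareness, top of mind, and aided awareness might be tallied from individual responses; the data structure and field names are assumptions made for illustration, not a standard survey format.

```python
# Hypothetical sketch: tallying unaided awareness, top of mind, and aided
# awareness for one brand from individual survey responses.

responses = [
    # unaided: brands named spontaneously, in order; aided: recognizes the brand when prompted
    {"unaided": ["Mercedes", "BMW"], "aided_mercedes": True},
    {"unaided": ["BMW"],             "aided_mercedes": True},
    {"unaided": [],                  "aided_mercedes": False},
    {"unaided": ["Mercedes"],        "aided_mercedes": True},
]
n = len(responses)
brand = "Mercedes"

unaided_awareness = sum(brand in r["unaided"] for r in responses) / n         # 0.50
top_of_mind       = sum(r["unaided"][:1] == [brand] for r in responses) / n   # 0.50
aided_awareness   = sum(r["aided_mercedes"] for r in responses) / n           # 0.75

print(unaided_awareness, top_of_mind, aided_awareness)
```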
Attitudes
Measures of attitude concern consumer response to a brand or product.
Attitude is a combination of what consumers believe and how strongly they feel
about it. Although a detailed exploration of attitudinal research is beyond
the scope of this book, the following summarizes certain key metrics in this
field.
Attitudes/Liking/Image: A rating assigned by
consumers––often on a scale of 1–5 or 1–7––when survey respondents are asked
their level of agreement with such propositions as, "This is a brand for
people like me," or "This is a brand for young people." A metric based on
such survey data can also be called relevance to customer.
Perceived Value for Money: A rating assigned by
consumers––often on a scale of 1–5 or 1–7––when survey respondents are asked
their level of agreement with such propositions as, "This brand usually
represents a good value for the money."
Perceived Quality/Esteem: A consumer rating––often on a
scale of 1–5 or 1–7––of a given brand's product when compared with others in
its category or market.
Relative Perceived Quality: A consumer rating (often
from 1–5 or 1–7) of a brand's products compared with those of others in the category or market.
Intentions: A measure of customers' stated willingness
to behave in a certain way. Information on this subject is gathered through
such survey questions as, "Would you be willing to switch brands if your
favorite was not available?"
Intention to Purchase: A specific measure or rating of
consumers' stated purchase intentions. Information on this subject is
gathered through survey respondents' reactions to such propositions as, "It
is very likely that I will purchase this product."
Usage
Measures of usage concern such market dynamics as purchase frequency and
units per purchase. They highlight not only what was purchased, but also when
and where it was purchased. In studying usage, marketers also seek to
determine how many people have tried a brand. Of those, they further seek to
determine how many have "rejected" the brand, and how many have "adopted" it
into their regular portfolio of brands.
In measuring usage, marketers pose such questions as the following: What
brand of toothpaste did you last purchase? How many times in the past year
have you purchased toothpaste? How many tubes of toothpaste do you currently
have in your home? Do you have any Crest toothpaste in your home at the
current time?
In the aggregate, AAU metrics concern a vast range of information that can
be tailored to specific companies and markets. They provide managers with
insight into customers' overall relationships with a given brand or product.
Data Sources, Complications, and Cautions
Sources of AAU data include
- Warranty cards and registrations, often using prizes and random drawings
to encourage participation.
- Regularly administered surveys, conducted by organizations that
interview consumers via telephone, mail, Web, or other technologies, such as
hand-held scanners.
Even with the best methodologies, however, variations observed in tracking
data from one period to the next are not always reliable. Managers must rely
on their experience to distinguish seasonality effects and "noise" (random
movement) from "signal" (actual trends and patterns). Certain techniques in
data collection and review can also help managers make this distinction.
- Adjust for periodic changes in how questions are framed
or administered. Surveys can be conducted via mail or telephone, for
example, among paid or unpaid respondents. Different data-gathering
techniques may require adjustment in the norms used to evaluate a "good" or
"bad" response. If sudden changes appear in the data from one period to the
next, marketers are advised to determine whether methodological shifts might
play a role in this result.
- Try to separate customer from non-customer responses;
they may be very different. Causal links among awareness, attitudes, and
usage are rarely clear-cut. Though the hierarchy of effects is often viewed
as a one-way street, on which awareness leads to attitudes, which in turn
determine usage, the true causal flow might also be reversed. When people
own a brand, for example, they may be predisposed to like it.
- Triangulate customer survey data with sales revenue,
shipments, or other data related to business performance. Consumer
attitudes, distributor and retail sales, and company shipments may move in
different directions. Analyzing these patterns can be a challenge but can
reveal much about category dynamics. For example, toy shipments to retailers
often occur well in advance of the advertising that drives consumer
awareness and purchase intentions. These, in turn, must be established
before retail sales. Adding further complexity, in the toy industry, the
purchaser of a product might not be its ultimate consumer. In evaluating AAU
data, marketers must understand not only the drivers of demand but also the
logistics of purchase.
- Separate leading from lagging indicators whenever
possible. In the auto industry, for example, individuals who have just
purchased a new car show a heightened sensitivity to advertisements for its
make and model. Conventional wisdom suggests that they're looking for
confirmation that they made a good choice in a risky decision. By helping
consumers justify their purchase at this time, auto manufacturers can
strengthen long-term satisfaction and willingness to recommend.
Related Metrics and Concepts
Likeability: Because AAU considerations are so important
to marketers, and because there is no single "right" way to approach them,
specialized and proprietary systems have been developed. Of these, one of
the best known is the Q Scores rating of "likeability." A Q Score is derived
from a general survey of selected households, in which a large panel of
consumers share their feelings about brands, celebrities, and television
shows.4
Q Scores rely upon responses reported by consumers. Consequently, although
the system used is sophisticated, it is dependent on consumers understanding
and being willing to reveal their preferences.
Segmentation by Geography, or Geo-clustering: Marketers
can achieve insight into consumer attitudes by separating their data into
smaller, more homogeneous groups of customers. One well-known example of
this is Prizm. Prizm assigns U.S. households to clusters based on Zip Code,5
with the goal of creating small groups of similar households. The typical
characteristics of each Prizm cluster are known, and these are used to
assign a name to each group. "Golden Ponds" consumers, for example, comprise
elderly singles and couples leading modest lifestyles in small towns. Rather
than monitoring AAU statistics for the population as a whole, firms often
find it useful to track these data by cluster.

2.8 Customer Satisfaction and Willingness to Recommend
Within organizations, customer satisfaction ratings can have powerful
effects. They focus employees on the importance of fulfilling customers’
expectations. Furthermore, when these ratings dip, they warn of problems
that can affect sales and profitability.
A second important metric related to satisfaction is willingness to
recommend. When a customer is satisfied with a product, he or she might
recommend it to friends, relatives, and colleagues. This can be a powerful
marketing advantage.
Purpose: Customer satisfaction provides a leading indicator of consumer
purchase intentions and loyalty.
Customer satisfaction data are among the most frequently collected
indicators of market perceptions. Their principal use is twofold.
- Within organizations, the collection, analysis, and dissemination of
these data send a message about the importance of tending to customers and
ensuring that they have a positive experience with the company's goods and
services.
- Although sales or market share can indicate how well a firm is
performing currently, satisfaction is perhaps the best indicator of
how likely it is that the firm's customers will make further purchases in the future. Much research has focused on the relationship between
customer satisfaction and retention. Studies indicate that the ramifications
of satisfaction are most strongly realized at the extremes. On the scale in
Figure 2.3, individuals who rate their satisfaction level as "5" are
likely to become return customers and might even evangelize for the firm.
Individuals who rate their satisfaction level as "1," by contrast, are
unlikely to return. Further, they can hurt the firm by making negative
comments about it to prospective customers. Willingness to recommend is a
key metric relating to customer satisfaction.
Construction
Customer Satisfaction: The number of customers, or
percentage of total customers, whose reported experience with a firm, its
products, or its services (ratings) exceeds specified satisfaction goals.
Willingness to Recommend: The percentage of surveyed
customers who indicate that they would recommend a brand to friends.
These metrics quantify an important dynamic. When a brand has loyal
customers, it gains positive word-of-mouth marketing, which is both free and
highly effective.
Customer satisfaction is measured at the individual level, but it is almost
always reported at an aggregate level. It can be, and often is, measured along
various dimensions. A hotel, for example, might ask customers to rate their
experience with its front desk and check-in service, with the room, with the
amenities in the room, with the restaurants, and so on. Additionally, in a
holistic sense, the hotel might ask about overall satisfaction "with your
stay."
Customer satisfaction is generally measured on a five-point scale (see
Figure 2.4).

Satisfaction levels are usually reported as either "top box" or, more
likely, "top two boxes." Marketers convert these expressions into single
numbers that show the percentage of respondents who checked either a "4" or a
"5." (This term is the same as that commonly used in projections of trial
volumes; see Section 4.1.)
Example: The general manager of a hotel in Quebec institutes a new system
of customer satisfaction monitoring (see
Figure 2.5). She leaves satisfaction surveys at checkout. As an incentive
to respond, all respondents are entered into a drawing for a pair of free
airline tickets.

The manager collects 220 responses, of which 20 are unclear or otherwise
unusable. Among the remaining 200, 3 people rate their overall experience at
the hotel as very unsatisfactory, 7 deem it somewhat unsatisfactory, and 40
respond that they are neither satisfied nor dissatisfied. Of the remainder, 50
customers say they are very satisfied, while the rest are somewhat satisfied.
The top box, comprising customers who rate their experience a "5," includes
50 people or, as a percentage, 50/200 = 25%. The top two boxes comprise
customers who are "somewhat" or "very" satisfied, rating their experience a
"4" or "5." In this example, the "somewhat satisfied" population must be
calculated as the total usable response pool, less customers accounted for
elsewhere, that is, 200 - 3 - 7 - 40 - 50 = 100. The sum of the top two boxes
is thus 50 + 100 = 150 customers, or 75% of the total.
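The same top-box arithmetic in a short Python sketch, with the counts taken from the hotel example above:

```python
# Sketch: top-box and top-two-box satisfaction from the Quebec hotel example.
# Keys are ratings on the 1-5 scale; values are usable responses.

responses = {1: 3, 2: 7, 3: 40, 4: 100, 5: 50}
total = sum(responses.values())                        # 200 usable surveys

top_box     = responses[5] / total                     # 50 / 200  = 25%
top_two_box = (responses[4] + responses[5]) / total    # 150 / 200 = 75%

print(f"Top box: {top_box:.0%}  Top two boxes: {top_two_box:.0%}")
```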
Customer satisfaction data can also be collected on a 10-point scale.
Regardless of the scale used, the objective is to measure customers' perceived
satisfaction with their experience of a firm's offerings. Marketers then
aggregate these data into a percentage of top-box responses.
In researching satisfaction, firms generally ask customers whether their
product or service has met or exceeded expectations. Thus, expectations are a
key factor behind satisfaction. When customers have high expectations and the
reality falls short, they will be disappointed and will likely rate their
experience as less than satisfying. For this reason, a luxury resort, for
example, might receive a lower satisfaction rating than a budget motel––even
though its facilities and service would be deemed superior in "absolute"
terms.
Data Sources, Complications, and Cautions
Surveys constitute the most frequently used means of collecting
satisfaction data. As a result, a key risk of distortion in measures of
satisfaction can be summarized in a single question: Who responds to surveys?
"Response bias" is endemic in satisfaction data. Disappointed or angry
customers often welcome a means to vent their opinions. Contented customers
often do not. Consequently, although many customers might be happy with a
product and feel no need to complete a survey, the few who had a bad
experience might be disproportionately represented among respondents. Most
hotels, for example, place response cards in their rooms, asking guests, "How
was your stay?" Only a small percentage of guests ever bother to complete
those cards. Not surprisingly, those who do respond probably had a bad
experience. For this reason, marketers can find it difficult to judge the true
level of customer satisfaction. By reviewing survey data over time, however,
they may discover important trends or changes. If complaints suddenly rise,
for example, that may constitute early warning of a decline in quality or
service. (See number of complaints in the following section.)
Sample selection may distort satisfaction ratings in other ways as well.
Because only customers are surveyed for customer satisfaction, a
firm's ratings may rise artificially as deeply dissatisfied customers take
their business elsewhere. Also, some populations may be more frank than
others, or more prone to complain. These normative differences can affect
perceived satisfaction levels. In analyzing satisfaction data, a firm might
interpret rating differences as a sign that one market is receiving better
service than another, when the true difference lies only in the standards that
customers apply. To correct for this issue, marketers are advised to review
satisfaction measures over time within the same market.
A final caution: Because many firms define customer satisfaction as
"meeting or exceeding expectations," this metric may fall simply because
expectations have risen. Thus, in interpreting ratings data, managers may come
to believe that the quality of their offering has declined when that is not
the case. Of course, the reverse is also true. A firm might boost satisfaction
by lowering expectations. In so doing, however, it might suffer a decline in
sales as its product or service comes to appear unattractive.
Related Metrics and Concepts
Trade Satisfaction: Founded upon the same principles as
consumer satisfaction, trade satisfaction measures the attitudes of trade
customers.
Number of Complaints: The number of complaints lodged by
customers in a given time period.

2.9 Willingness to Search
Purpose: To assess the commitment of a firm's or a brand's customer base.
Brand or company loyalty is a key marketing asset. Marketers evaluate
aspects of it through a number of metrics, including repurchase rate, share of
requirements, willingness to pay a price premium, and other AAU measures.
Perhaps the most fundamental test of loyalty, however, can be captured in a
simple question: When faced with a situation in which a brand is not
available, will its customers search further or substitute the best available
option?
When a brand enjoys loyalty at this level, its provider can generate
powerful leverage in trade negotiations. Often, such loyalty will also give
providers time to respond to a competitive threat. Customers will stay with
them while they address the threat.
Loyalty is grounded in a number of factors, including
- Satisfied and influential customers who are willing to recommend the
brand.
- Hidden values or emotional benefits, which are effectively communicated.
- A strong image for the product, the user, or the usage experience.
Purchase-based loyalty metrics are also affected by whether a product is
broadly and conveniently available for purchase, and whether customers enjoy
other options in its category.
Construction
Willingness to Search: The likelihood that customers
will settle for a second-choice product if their first choice is not
available. Also called "accept no substitutes."
Willingness to search represents the percentage of customers who are
willing to leave a store without a product if their favorite brand is
unavailable. Those willing to substitute constitute the balance of the
population.
Data Sources, Complications, and Cautions
Loyalty has multiple dimensions. Consumers who are loyal to a brand in the
sense of rarely switching may or may not be willing to pay a price premium for
that brand or recommend it to their friends. Behavioral loyalty may also be
difficult to distinguish from inertia or habit. When asked about loyalty,
consumers often don't know what they will do in new circumstances. They may
not have accurate recall about past behavior, especially in regard to items
with which they feel relatively low involvement.
Furthermore, different products generate different levels of loyalty. Few
customers will be as loyal to a brand of matches, for example, as to a brand
of baby formula. Consequently, marketers should exercise caution in comparing
loyalty rates across products. Rather, they should look for category-specific
norms.
Degrees of loyalty also differ between demographic groups. Older consumers
have been shown to demonstrate the highest loyalty rates.
Even with these complexities, however, customer loyalty remains one of the
most important metrics to monitor. Marketers should understand the worth of
their brands in the eyes of the customer––and of the retailer.
Footnotes
1. "Wal-Mart Shopper Update," Retail Forward,
February 2005.
2. "Running Out of Gas," Business Week, March 28th,
2005.
3. American Marketing Association definition. Accessed 06/08/2005.
http://www.marketingpower.com/live/mg-dictionary.php?SearchFor=market+concentration&Searched=1.
4. Check the Marketing Evaluations, Inc., Web site for more detail:
http://www.qscores.com/.
Accessed 03/03/05.
5. Claritas provides the Prizm analysis. For more details, visit the
company Web site:
http://www.clusterbigip1.claritas.com/claritas/Default.jsp. Accessed
03/03/05.