CHAPTER [2]
Business Intelligence, Analysis and Reporting, Data Management
Business Intelligence
Business intelligence (BI) uses knowledge management, data warehousing, data
mining, and business analysis to identify, track, and improve key processes
and data, as well as to identify and monitor trends in corporate, competitor,
and market performance.
Analysis and Reporting
Business intelligence reporting and monitoring include ad hoc and standardized
reports, dashboards, triggers, and alerts. Business analytics include trend
analysis, predictive forecasting, pattern analysis, optimization, guided
decision-making, and experiment design.
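To make the reporting-and-alerting idea concrete, here is a small, purely
illustrative Python sketch (the revenue figures, threshold, and column names
are invented) that computes a month-over-month trend and raises an alert when
revenue drops past a threshold, the kind of trigger a BI dashboard would
surface automatically.

```python
# Minimal sketch: a threshold alert and a simple trend calculation over
# monthly revenue figures. Column names and the threshold are illustrative.
import pandas as pd

revenue = pd.DataFrame(
    {"month": pd.date_range("2006-01-01", periods=6, freq="MS"),
     "revenue": [120_000, 118_500, 121_700, 117_900, 116_200, 113_400]}
)

# Trend analysis: month-over-month percentage change.
revenue["mom_change"] = revenue["revenue"].pct_change()

# Trigger/alert: flag any month where revenue drops more than 2%.
ALERT_THRESHOLD = -0.02
alerts = revenue[revenue["mom_change"] < ALERT_THRESHOLD]

for _, row in alerts.iterrows():
    print(f"ALERT: revenue fell {row['mom_change']:.1%} in "
          f"{row['month']:%B %Y}")
```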
Data Management
Getting CRM right means integrating processes both within and across business
functions to drive more effective customer interactions and unlock greater
customer value. More mature areas such as campaign management, sales force
automation, contact centers, and ecommerce are adding advanced capabilities
through analytics, business process management, and knowledge management
tools. Newer areas such as field service, marketing resource management, and
sales asset management are broadening departmental capabilities and enabling
CRM to reach new heights. Customer data integration (CDI), customer
interaction hubs, and customer experience management make the relationship
visible and customer interactions cohesive throughout the organization.
Customer value analysis and customer data mining enable more insightful
customer interactions in context.
From Bettermanagement.com: a six-part series of data management articles
sponsored by DataFlux. As a leader in the data management market, DataFlux
helps you turn data into a strategic information asset.
[1] The Challenges of Data Management
Poor data quality costs U.S. companies billions of dollars each year. A
frequently quoted study published by The Data Warehousing Institute in 2002
put the total cost at over $600 billion. This is a staggering amount of money,
and it is likely greater today because of the increase in business transacted
in the U.S. since 2002 and the exponential growth in the amount of data
produced every year (computers are running non-stop every day, churning out
terabytes of new data).
However, taken at face value, the dollar figure doesn't convey the sorts of
data problems that companies face each day, and it doesn't provide a real
sense of how such problems can impair a company's ability to function at peak
levels. To change this situation and limit the losses due to poor data
quality, companies need to be cognizant of the importance of their data and of
the problems that can impact its quality. They should also be aware that there
are tools and strategies available that can clean up their data and help keep
it clean and accurate in perpetuity.
In its most basic sense, high-quality data is essential to a company's ability
to understand its customers. Customer data that is riddled with errors (e.g.,
incorrect addresses or other personal information, misspelled customer names),
inconsistent (lacking a single, standardized format), redundant (multiple
records for the same customer), or outdated will undermine a company's ability
to understand its customers. After all, how can a company understand a
customer if it doesn't know where the customer lives or how to spell the
customer's name? If a company cannot understand its customers, it will have
problems serving them according to their needs, preferences, goals, and the
like.
Equally important, companies will have limited success up-selling and
cross-selling to customers without accurate and up-to-date customer
information at their fingertips. They will have difficulties distinguishing
high-value customers and segmenting customers for promotions and campaigns.
Moreover, the absence of good-quality data will increase the costs of
obtaining and retaining customers. If a company has two records for a single
customer, the cost of sending a promotion to that customer will double, while
the duplicated mailing itself could irritate the customer and cost the company
customer loyalty and goodwill.
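As a rough illustration of the duplicate-record problem just described, the
following Python sketch (with invented sample records; this is not DataFlux's
matching logic) normalizes names and addresses before comparing records, which
is enough to catch the simple duplicates that inflate mailing costs.

```python
# A minimal sketch of duplicate-customer detection: normalize names and
# addresses, then compare match keys. Record fields and data are hypothetical.
import re
from itertools import combinations

def normalize(value: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    value = re.sub(r"[^\w\s]", "", value.lower())
    return re.sub(r"\s+", " ", value).strip()

ABBREVIATIONS = {"street": "st", "avenue": "ave"}

def match_key(record: dict) -> tuple:
    """Build a match key from the normalized name and address."""
    words = [ABBREVIATIONS.get(w, w) for w in normalize(record["address"]).split()]
    return (normalize(record["name"]), " ".join(words))

customers = [
    {"id": 1, "name": "John Q. Smith", "address": "12 Oak St."},
    {"id": 2, "name": "john q smith",  "address": "12 Oak Street"},
    {"id": 3, "name": "Mary Jones",    "address": "8 Elm Ave."},
]

for a, b in combinations(customers, 2):
    if match_key(a) == match_key(b):
        print(f"Possible duplicate: record {a['id']} and record {b['id']}")
```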
However, customer data is only one part of the overall problem. Business data,
sometimes called non-name-and-address data, is just as crucial to a company's
health and success. Business data can be anything from an email address to a
part number to a genome sequence. If a company doesn't have a correct email
address for a customer, it may have trouble contacting the customer or
directing a promotion to the customer if email is the customer's preferred
method of contact. Or consider the case in which two digits of a part number
are accidentally transposed. In a manufacturing setting, the transposition
could delay the arrival of the correct part at the assembly line, which in
turn could delay the assembly process. The transposition could also affect the
recorded value of the company's inventories and, depending on the cost of the
correct and incorrect parts, create variances in the company's books. And an
incorrect genome sequence could negatively impact scientific research, drug
discovery, or patient trials.
(Please see The Challenges of Data Management, by Robert Lerner, Current
Analysis.)
[2] Data Profiling: The Blueprint for Effective Data Management
Let's suppose that we want to travel from Washington, DC, to La Jolla,
California, to meet an old acquaintance. We decide to save some money and,
instead of flying, we rent a car and begin driving. Of course, there is some
time and cost involved in this venture, so we stuff our pockets with $20 (for
tolls and miscellaneous expenses that might crop up) and take off.
Although we know our destination (and the approximate time our friend will be
waiting for us), we don't bother consulting a map. Why should we? Everyone
knows that La Jolla is in California, and all we need to do is point the car
in the appropriate direction and everything will come out right. We will meet
our friend and save money at the same time.
Certainly, this is a ludicrous plan, and it is bound to fail or at least
exceed our budget, especially if we turn down one dead end after another
trying to wend our way to La Jolla. This is not to say that we won't make it
to La Jolla, only that there are quicker, cheaper, and more effective ways of
traveling there.
Indeed, no sane traveler would embark on such a trip without a map, and yet
data-driven projects of all kinds begin this very way. Organizations will
decide on, say, a CRM application, and they will then go about implementing it
without first consulting a roadmap of their data.
It is therefore not surprising that over half of all CRM implementations
either fail or fail to live up to expectations: many organizations attempt to
implement applications without such a roadmap. To put it another way, too many
companies lack a complete and necessary understanding of their data. Without
such an understanding, or data roadmap, organizations will spend more time,
energy, and money than they should, only to achieve limited results from the
application. As any IT manager knows, no enterprise application will ever
deliver on its promises if the data populating it is of questionable quality.
To ensure the best results from any data-driven project, an organization
should begin by making a thorough inspection of its data, noting all the
problems or potential problems and assessing the time and effort needed to
rectify them. While this can be done manually, a manual review process tends
to be long, labor-intensive, and costly. Furthermore, manual review is not
only susceptible to human error, but it is also completely impractical for
large organizations that have thousands, if not millions, of customer and
product records. It is unnecessary as well, because of the data profiling
technology that is now available.
A data profiling tool is designed to provide an organization with a thorough
understanding (or roadmap) of its data. It can inspect the content and
structure of the data and provide detailed information on its accuracy and
completeness. It can also uncover areas in the data that are ambiguous or
redundant. Ultimately, a data profiling tool provides information on whether
the data is fit for the purpose for which it was, and is, intended.
(Please see Data Profiling: The Blueprint for Effective Data Management, by
Robert Lerner, Current Analysis.)
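To give a rough sense of what a profiling pass reports, here is a small Python
sketch over an invented customer table; the metrics shown (fill rate, distinct
counts, character patterns) are common profiling outputs, not any particular
vendor's.

```python
# A minimal data-profiling sketch: for each column of a hypothetical customer
# table, report fill rate, distinct-value count, and the most common character
# patterns (digits -> 9, letters -> A).
import re
from collections import Counter

records = [
    {"name": "John Smith", "zip": "22101", "phone": "703-555-0101"},
    {"name": "Mary Jones", "zip": "9210",  "phone": "(858) 555-0199"},
    {"name": "",           "zip": "92037", "phone": None},
]

def pattern(value: str) -> str:
    """Reduce a value to its character pattern, e.g. '22101' -> '99999'."""
    return re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", value))

for column in ["name", "zip", "phone"]:
    values = [r[column] for r in records]
    filled = [v for v in values if v]          # non-empty, non-None values
    fill_rate = len(filled) / len(values)
    patterns = Counter(pattern(v) for v in filled)
    print(f"{column}: fill rate {fill_rate:.0%}, "
          f"{len(set(filled))} distinct, patterns {patterns.most_common(2)}")
```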
[3] The Data Quality Process
Organizations today have an ever-increasing amount of data and data sources at
their fingertips. A large organization, for instance, will typically have
numerous databases, data warehouses, and data marts, as well as a variety of
enterprise applications such as CRM, ERP, and SCM. It will also have a massive
amount of unstructured data and a range of third-party data sources. A small
organization may have less data and fewer data sources, but this is only a
matter of degree, for it will also have a smaller staff to manage the data and
its sources.
Organizations depend on information to be competitive in the market and to
function smoothly and effectively. The real concern is that much of this
information contains errors of some sort (incorrect values, missing values,
inconsistent values, etc.), and all too frequently the data sources (the
applications, databases, etc.) are incompatible because each has its own data
format and business rules. Such problems inhibit an organization's ability to
leverage its data to the fullest, which ultimately impacts the quality of
decisions based on that data.
Now, it is certainly possible for an organization to assess its data and to
address manually whatever problems it discovers. For most organizations,
however, this is not an efficient or cost-effective method for handling data
quality issues. It is also unlikely that any manual effort, even if it could
be accomplished in one's lifetime, would achieve the same results as a solid
set of data quality tools.
The only practical, effective method of rectifying data problems is through
the use of next-generation data quality tools and processes, which can do more
than correct data errors and render disconnected information meaningful. Such
data quality tools can also keep information clean and consistent on an
ongoing basis. For this, an organization should consider a data management
solution that includes a tightly integrated data quality tool set and
processes. The data quality process begins by using ...
(Please see The Data Quality Process, by Robert Lerner, Current Analysis.)
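To illustrate one step such a tool set automates, the sketch below (with
hypothetical standardization rules and sample values) brings inconsistent
state spellings and phone formats into a single standard form.

```python
# A minimal cleansing/standardization sketch: map free-form values to a single
# standardized format. Rules and sample data are hypothetical.
import re

STATE_CODES = {"virginia": "VA", "va": "VA", "california": "CA", "calif.": "CA"}

def standardize_state(value: str) -> str:
    """Map free-form state spellings to a two-letter code."""
    return STATE_CODES.get(value.strip().lower(), value.strip().upper())

def standardize_phone(value: str) -> str:
    """Normalize a 10-digit US phone number to NNN-NNN-NNNN."""
    digits = re.sub(r"\D", "", value)
    if len(digits) != 10:
        return value  # leave anything unexpected untouched for manual review
    return f"{digits[0:3]}-{digits[3:6]}-{digits[6:10]}"

raw = [{"state": "Calif.", "phone": "(858) 555 0199"},
       {"state": "virginia", "phone": "703.555.0101"}]

clean = [{"state": standardize_state(r["state"]),
          "phone": standardize_phone(r["phone"])} for r in raw]
print(clean)  # [{'state': 'CA', 'phone': '858-555-0199'}, {'state': 'VA', 'phone': '703-555-0101'}]
```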
[4] Enhancing the Value of Data Through Integration and Enrichment
To be successful, businesses require accurate, consistent, and timely
information to make sound, productive decisions. Regardless of whether the
data concerns customers, products, suppliers, employees, or anything else,
information must be readily available to every person in the organization who
needs it, even if they are located in different departments, divisions,
subsidiaries, or geographical regions.
Unfortunately, not every organization has easy access to information. In fact,
even organizations with relatively accurate data can have data silos: data (in
applications, departments, etc.) that is not shared with the rest of the
organization.
Consider the case in which an organization's call center application does not
share data with its CRM application, perhaps because the technologies are from
different vendors (and the data has different formats) or because the
applications reside in different business units. Since the applications don't
share data, any new data (about customer interactions, updated customer
information, new customers, etc.) arriving through the call center is unlikely
to be available to the CRM application, and vice versa. Thus the overall value
of the applications to the organization is diminished, since each application
needs complete, accurate, and timely information about the customer, not bits
and pieces, to fulfill its promise. As a result, the organization's ability to
understand and support its customers will suffer, while meaningful reporting
and analysis across the applications will be difficult at best.
Multiply this situation across the organization and it is easy to see how
silos of information can hurt an organization's ability not only to know its
customers but also to have real insight into its business.
However, there is a solution to this problem: data integration. To get the
most from its data, and to ensure that it has the best foundation on which to
make business decisions, an organization must integrate its data. In fact,
data integration is the critical next step in the data management process,
following data cleansing and standardization (if poor-quality data is
integrated into an application, database, or other target, the value and
effectiveness of that target will be undermined).
Of course, linking, matching, and standardization can be part of the data
integration process, but they are not the entire process. In most instances,
an organization will require a data integration tool to integrate data
throughout the organization or into a data warehouse, application, or a
repository such as a customer data master file.
Essentially, the data integration step of data management entails ... Once
completed, the entire organization can then operate on essentially the same
data.
(Please see Enhancing the Value of Data Through Integration and Enrichment, by
Robert Lerner, Current Analysis.)
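As a simplified picture of what integration buys, the following sketch
(hypothetical field names and data) merges call center and CRM records on a
shared customer identifier into one consolidated view.

```python
# A minimal integration sketch: consolidate customer records from two silos
# (a call center system and a CRM application) into a single view keyed on a
# shared customer identifier. Field names and sample data are hypothetical.
call_center = [
    {"customer_id": "C100", "phone": "703-555-0101", "last_contact": "2007-03-02"},
    {"customer_id": "C101", "phone": "858-555-0199", "last_contact": "2007-02-11"},
]
crm = [
    {"customer_id": "C100", "name": "John Smith", "segment": "Gold"},
    {"customer_id": "C101", "name": "Mary Jones", "segment": "Silver"},
]

consolidated: dict[str, dict] = {}
for source in (crm, call_center):
    for record in source:
        # Merge each source's fields into one record per customer_id.
        consolidated.setdefault(record["customer_id"], {}).update(record)

for customer in consolidated.values():
    print(customer)
# Each printed record now combines CRM attributes with call center activity.
```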
[5] Keeping on Top of Data
Suppose that we have finally eliminated most, if not all, of our
organization's data problems. We began the process by profiling our data,
after which we cleansed it, integrated it, and finally enriched it. Now,
imagine we completed this task at 5:00 on Friday afternoon. With nothing else
to do, we turn out the lights, lock the doors, and prepare to enjoy our
weekend, confident that no one will touch the data until Monday morning.
On Monday morning, we are the first ones in the office. Rested, we go about
our work as we normally do, but we quickly discover that some of our
applications are not delivering the results we had anticipated. This is
surprising, since there is no particular reason why the data, and by extension
the organization, shouldn't be operating optimally. Soon we discover the cause
of our problems: our once pristine data now has errors in it. Without anyone
even touching it, the data has declined in both quality and usefulness, and it
is now negatively impacting our organization.
Of course, this is an absurd tale, but it does highlight one of the central
truths about data: data, even if it's left alone, changes. The validity of
data is always temporary, and changing data is as inevitable as the sun rising
on Monday morning. Data changes, or decays, because people and things change.
Over the course of this hypothetical weekend, any number of customers have
changed some aspect of their personal information (e.g., addresses, phone
numbers); some have changed some aspect of their household (married, divorced,
added children); and others may have died or simply severed their relationship
with the organization. Left unchecked, data quality declines to levels similar
to those that led the organization to implement a data management solution in
the first place.
Consider the statistics compiled by Dun & Bradstreet on what happens on a
typical morning between 9:00 and 11:00: ...
Without any intervention, and without the fault of any individual, the quality
of an organization's data will begin to decline almost the instant it has been
cleansed, integrated, and enriched. But the problems impacting the quality of
an organization's data do not just originate outside the organization.
Consider the host of data problems that could crop up throughout the rest of
the week, such as input errors, the integration of incompatible third-party
data, and the repurposing of some existing data. By Friday, our data is rife
with problems, and its usefulness is being questioned.
(Please see Keeping on Top of Data, by Robert Lerner, Current Analysis.)
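One practical answer is to keep measuring. The sketch below (with an invented
validity rule, baseline, and tolerance) shows the shape of a recurring data
quality check that compares current metrics against a baseline captured right
after cleansing and flags decay.

```python
# A minimal ongoing-monitoring sketch: recompute a data quality metric on a
# schedule and flag decay against a baseline. The threshold, field names, and
# example rule (valid email syntax) are illustrative only.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def quality_metrics(records: list[dict]) -> dict:
    """Return the share of records with a syntactically valid email address."""
    valid = sum(1 for r in records if r.get("email") and EMAIL_RE.match(r["email"]))
    return {"valid_email_rate": valid / len(records) if records else 0.0}

def check_against_baseline(current: dict, baseline: dict, tolerance: float = 0.05) -> list[str]:
    """Report metrics that have slipped more than `tolerance` below baseline."""
    return [name for name, value in current.items()
            if value < baseline.get(name, 0.0) - tolerance]

baseline = {"valid_email_rate": 0.98}   # captured right after cleansing
records = [{"email": "john@example.com"}, {"email": "mary@example"}, {"email": None}]

slipped = check_against_baseline(quality_metrics(records), baseline)
if slipped:
    print("Data quality decay detected in:", slipped)  # triggers a review/cleanse cycle
```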
[6] How to Choose a Data Management Solution
In the previous articles of this series, we discussed data management
technology. With those articles in mind, we can now consider how to buy a data
management solution, or what to look for when evaluating one.
What follows is a discussion of some of the features and functions that an
organization should consider when making a buying decision. However, we are
assuming that the organization has already come to some understanding of the
depth of its data problems and, ultimately, of its goal in implementing a data
management solution. We are also assuming that medium- to large-sized
organizations are the most likely to undertake this strategy, since small
organizations may lack the resources to accomplish these goals effectively.
Data Support ...
(Please see How to Choose a Data Management Solution, by Robert Lerner,
Current Analysis.)
[7] The Challenges of Customer Data Integration
Certainly, most companies have an intuitive grasp of the significance of
obtaining a single, unified view of their customers. But many companies fail
to understand its overall importance, and far too many underestimate the
difficulty of getting this view.
Consider, for example, a company that has duplicate customer records. In all
likelihood, some customer interactions with the company will be associated
with some records and not with others. The duplicate records complicate the
effort to track customer actions, such as buying habits, as well as customer
interactions with various touch points (e.g., the Web, the call center). The
company will also have difficulty determining the total value of the customer.
As a result, the company's ability to serve and support the customer may be
limited.
In fact, the company's ability to retain the customer could be jeopardized.
Customers are easily put off by customer support representatives who do not
have a complete history of their interactions with the company. Moreover, many
customers have ceased to do business with a company after being bombarded with
marketing messages that the company intended for different individuals but
that were, in fact, directed to the same individual because of duplicate
customer records.
(Please see The Challenges of Customer Data Integration, by Robert Lerner,
Current Analysis.)
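A small illustration of the total-value problem: in the sketch below (invented
IDs and amounts), the same customer's spending is split across two records
until a matching step links them, at which point the true customer value
becomes visible.

```python
# A minimal sketch of the total-customer-value problem described above: the
# same customer exists under two record IDs, so per-record totals understate
# their true value until the duplicates are linked. Data is hypothetical.
from collections import defaultdict

# Interactions recorded against two IDs that actually belong to one customer.
interactions = [
    {"customer_id": "C100", "channel": "web",         "amount": 250.0},
    {"customer_id": "C734", "channel": "call center", "amount": 400.0},
    {"customer_id": "C100", "channel": "web",         "amount": 125.0},
]

# Output of a (separate) matching step: duplicate ID -> surviving ID.
id_links = {"C734": "C100"}

per_record = defaultdict(float)
consolidated = defaultdict(float)
for event in interactions:
    per_record[event["customer_id"]] += event["amount"]
    consolidated[id_links.get(event["customer_id"], event["customer_id"])] += event["amount"]

print(dict(per_record))    # {'C100': 375.0, 'C734': 400.0}  -- fragmented view
print(dict(consolidated))  # {'C100': 775.0}                 -- true customer value
```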
[8] Emerging Issues: Master Data Management and Data Quality
Master data is best defined as mission-critical data, such as customer data,
product data, bills of materials, and so forth. It is not metadata or
transactional data. Unlike transactional data, master data helps to classify
transactional data, and it is driven by changes to the organization (e.g., the
introduction of a new supplier or product line), while transactional data is
generated by events such as a sale.
But while such master data problems are not uncommon, the need to manage
master data effectively is gaining a sense of urgency for a number of other
reasons. The size and complexity of organizations are increasing, for example,
while at the same time large organizations are becoming more global. All of
this puts pressure on organizations to increase the number of systems or
applications needed to run both the organization and its individual business
units. Furthermore, mergers and acquisitions add to the problem (how do you
integrate master data from disparate organizations?), as does regulatory
compliance, which is forcing organizations to control their master data more
effectively for reporting. As a result, organizations are experiencing
difficulties understanding and properly valuing their customers, suppliers,
and even partners, and they are facing problems controlling costs, executing
effectively on business strategies, and complying effectively and
cost-efficiently with Sarbanes-Oxley, Basel II, and other regulations.
(Please see Emerging Issues: Master Data Management and Data Quality, by
Robert Lerner, Current Analysis.)
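To make the master/transactional distinction concrete, the following sketch
(hypothetical product and transaction data) shows a small product master
classifying the transactions that reference it, which is exactly the role
master data plays in reporting.

```python
# A minimal sketch of master vs. transactional data: a product master
# classifies the transactions that reference it by key. Data is hypothetical.
product_master = {  # master data: changes only when the business adds or retires a product
    "P-100": {"name": "Ski Pass",  "category": "Lift Tickets"},
    "P-200": {"name": "Green Fee", "category": "Golf"},
}

transactions = [    # transactional data: generated by events such as a sale
    {"sku": "P-100", "amount": 89.0},
    {"sku": "P-200", "amount": 140.0},
    {"sku": "P-100", "amount": 89.0},
]

# The master record classifies each transaction (e.g., for reporting by category).
revenue_by_category: dict[str, float] = {}
for t in transactions:
    category = product_master[t["sku"]]["category"]
    revenue_by_category[category] = revenue_by_category.get(category, 0.0) + t["amount"]

print(revenue_by_category)  # {'Lift Tickets': 178.0, 'Golf': 140.0}
```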
[9] A CDI Solution for the Rest of Us
Unfortunately, the technologies and methodologies that many organizations have
leveraged to achieve a unified customer view have offered only modest success.
ETL, EAI, and EII tools have their uses, but they have proven inadequate for
providing a true 360-degree view of a customer. An ETL tool, for example, can
effectively load customer data into a repository, but it does little to ensure
that the data it loads is clean, accurate, and unique (i.e., free of duplicate
records for the same customer). It also lacks the ability to share consistent,
accurate customer data with an organization's applications or to guarantee
that the data in the repository or in these applications remains consistent
and accurate. Of course, small organizations can standardize on a single
vendor's applications, which would at least increase the consistency of their
customer records, but this is impractical for enterprises that depend on
multiple applications and data sources, and it doesn't ensure that the data
remains accurate on an ongoing basis.
What an organization needs, therefore, is technology that is specifically
designed to create the 360-degree view of the customer: technology that can
ensure that customer data is clean, accurate, and unique, and that can share
this data with all of the organization's applications, databases, data
warehouses, and the like. To achieve a true 360-degree view of a customer, an
organization requires a customer data integration (CDI) solution. By CDI, I
mean the technology and services that enable a company to create a single,
consistent, and complete view of a customer across all of its applications,
databases, and other customer data sources.
CDI is not a new concept, but it can be a slightly confusing one, given the
diversity of CDI solutions and approaches that are currently available.
Unfortunately, not all of these approaches are effective, and few are suitable
for most organizations. What I will offer below is one possible CDI solution.
Of the CDI solutions available today, however, it is one of the most effective
and one that is applicable to almost any organization.
(Please see A CDI Solution for the Rest of Us, by Robert Lerner, Current
Analysis.)
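As a simplified illustration of the "single, consistent, and complete view" a
CDI solution aims for, the sketch below builds one golden customer record from
two source records using a basic survivorship rule (newest non-empty value
wins); the rule and fields are illustrative, not a specific product's
behavior.

```python
# A minimal golden-record sketch: per field, keep the newest non-empty value
# across all source records. Sources, fields, and dates are hypothetical.
from datetime import date

source_records = [
    {"source": "CRM",         "updated": date(2007, 1, 15),
     "name": "John Q. Smith", "email": "",                   "phone": "703-555-0101"},
    {"source": "call center", "updated": date(2007, 3, 2),
     "name": "John Smith",    "email": "jsmith@example.com", "phone": ""},
]

def golden_record(records: list[dict], fields: list[str]) -> dict:
    """Pick, per field, the newest non-empty value across all sources."""
    merged = {}
    for field in fields:
        candidates = [(r["updated"], r[field]) for r in records if r.get(field)]
        if candidates:
            merged[field] = max(candidates)[1]  # newest value wins
    return merged

print(golden_record(source_records, ["name", "email", "phone"]))
# {'name': 'John Smith', 'email': 'jsmith@example.com', 'phone': '703-555-0101'}
```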
[10] A Real-World CDI Implementation
The company (hereafter referred to as the "Company") that implemented this CDI
solution owns and operates upscale ski villages, golf resorts, and beach
resorts. Each year millions of people flock to its properties. Competition for
customers in this industry is strong, so the Company is constantly looking for
ways to enhance the experience it provides its customers. In addition, the
Company began to look for new and innovative services to offer a select group
of high-profile customers.
Prior to implementing a CDI solution, the Company relied on CRM (customer
relationship management) and related technologies to enhance customer service
levels. However, while its CRM applications improved its ability to understand
some of its customers, the applications never truly lived up to their
promises. First, the Company had underlying data problems that CRM is not
designed to resolve. Second, each property had its own CRM application, and
few of these applications were compatible with the CRM applications at other
properties.
Compounding the Company's problems, each of the properties had a variety of
applications or systems that captured customer information and transactions
for specific purposes (e.g., food service, lodging, green fees) but did not
(or could not) share this information with the other applications and
properties of the Company. The upshot was that the Company could not get a
complete, or 360-degree, view of all of its customers, and this dramatically
hindered the Company's ability to compete and to deliver new services that
might attract certain customers.
To address the problem of inconsistent enterprise customer data, the Company
looked for a technological solution that could tackle each of these problems.
The Company considered a number of technologies, including an application
developed in-house, but it ultimately decided to implement a CDI solution,
which would target the very problem it faced in understanding its customers. A
variety of CDI implementations were considered before the Company decided on a
solution similar to the one described in the second article: a solution that
combined a customer data repository with a tightly integrated data quality
solution and was built on an SOA (service-oriented architecture).
The solution delivered the promised results. Once implemented, the customer
data repository became the central source for the Company's customer reference
data. The repository consolidated all the customer reference data from the
Company's data sources and properties, after which it shared accurate,
consistent, and timely views of this data with all of the Company's
subscribing data sources, systems, etc. In essence, the repository eliminated
the silos of customer data that had existed throughout the Company and each of
its properties. Updates to the data gathered at one property, or new customer
information gathered at a different property, became immediately available to
every property throughout the Company. Because of the repository, the entire
Company was on the same page concerning its customers.
(Please see A Real-World CDI Implementation, by Robert Lerner, Current
Analysis.)
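As a schematic of the repository pattern the case study describes (not the
Company's actual implementation), the sketch below shows a central customer
hub that merges updates from any property and immediately pushes the
consolidated record to every subscribing system.

```python
# A minimal sketch of a central customer repository with publish/subscribe
# distribution to property systems. Names and data are hypothetical.
from typing import Callable

class CustomerHub:
    """Central repository of customer reference data with simple publish/subscribe."""

    def __init__(self) -> None:
        self.records: dict[str, dict] = {}
        self.subscribers: list[Callable[[str, dict], None]] = []

    def subscribe(self, callback: Callable[[str, dict], None]) -> None:
        """Register a property system to receive every customer update."""
        self.subscribers.append(callback)

    def update(self, customer_id: str, fields: dict) -> None:
        """Merge an update into the central record and notify all subscribers."""
        record = self.records.setdefault(customer_id, {})
        record.update(fields)
        for notify in self.subscribers:
            notify(customer_id, dict(record))

hub = CustomerHub()
hub.subscribe(lambda cid, rec: print(f"[ski village] {cid} -> {rec}"))
hub.subscribe(lambda cid, rec: print(f"[golf resort] {cid} -> {rec}"))

# An update captured at one property is immediately visible to the others.
hub.update("C100", {"name": "John Smith", "tier": "Platinum"})
```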