The most sophisticated of today’s enterprise marketing organisations should already have the basic infrastructure in place to measure the customer journey effectively. As customer information is assembled across multiple communication channels, organisations have discovered that having the full customer journey and understanding it are two entirely different things. However an organisation’s data is ultimately structured, to be used effectively it must be modelled so it can be understood. Gary Angel, President of Semphonic, explains that what organisations need to create is a data model, independent of platform or technology, designed to show how the key elements of a wide range of customer touch points can be understood and combined seamlessly to re-create the “customer journey.”
Extracting hidden meaning from the data
Traditional marketing data consisted of fields that everyone understood. Variables like gender, income, and education create identifiable human categories to which we can attach attitudes and match to business opportunities.
Digital data is not so forthcoming. It consists of a “stream” of facts rather than discrete atomic data (age, gender). There is meaning trapped and hidden within these streams: once we understand what the pages mean, we can infer from the stream what the visitor was trying to accomplish. We can decide whether they were successful and measure their efficiency and difficulties in completing their task. All of this has meaning that can be translated into effective customer communications and marketing opportunities.
Unfortunately, this stream of data presents real problems for analysis, because most analysis tools have been built with a deep bias toward “atomic” data, viewing each “row” of data as an entity. That approach doesn’t work with stream data, where the meaning is contained in the whole of the stream, not necessarily in any single piece of it. As a result, most analysis tools are much better at culling out relationships between rows than between streams of data. Analysts must find ways to first create entities from streams before they can analyse the relationship and meaning of those entities. Traditional data manipulation tools (SQL) and data analysis tools (SAS) can do this, but they are not optimised for the task and make it difficult to accomplish.
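As a minimal sketch of that first step – creating entities from streams – the snippet below groups a row-level stream of page views into one entity per visit. The field names (`visit_id`, `page`) and the sample pages are illustrative assumptions, not a real schema.

```python
from collections import defaultdict

# Hypothetical raw event stream: one row per page view, in time order.
events = [
    {"visit_id": "v1", "page": "home"},
    {"visit_id": "v1", "page": "product"},
    {"visit_id": "v1", "page": "checkout"},
    {"visit_id": "v2", "page": "home"},
    {"visit_id": "v2", "page": "support"},
]

def collapse_to_entities(events):
    """Group row-level events into one entity (an ordered page list) per visit."""
    streams = defaultdict(list)
    for e in events:
        streams[e["visit_id"]].append(e["page"])
    return dict(streams)

entities = collapse_to_entities(events)
```

Once the stream is an entity, each visit can be analysed as a whole rather than row by row.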
For years, segmentation has been the single most powerful data aggregation technique in marketing. Traditional visitor segmentations typically start with the business relationship; at the highest level, we tend to think of “customers” vs. “prospects” vs. “non-qualified.” Customers are often segmented into groups based on the strength, number, value and duration of their relationships and beneath this, a focus on additional customer facts: their demographics, psychographics and interests.
This type of visitor segmentation doesn’t go away when it comes to online. Organisations still care about the relationship between themselves and their customers.
As important as the visitor segmentation is, it doesn’t solve digital data aggregation problems. Two completely different types of visitors might have identical Web behavioural records. What’s more, digital data is often completely anonymous, leaving analysts without an intelligible visitor-level segmentation scheme.
Another type of segmentation is needed that helps aggregate digital data in a meaningful fashion. This second tier should allow the collapsing of a stream of web behavioural events (server calls or page views) into a single or small number of fields that capture the essential meaning of the tracked behaviours.
Visit Type Segmentation
Creating a Visit Type Segmentation is hard. Typically, visits must be “signatured” using a hierarchical set of rules that describe the behavioural patterns characteristic of that visit. They frequently require the entire visitor behavioural stack to figure out what a visitor did most of and what a visitor did first, or early in a session. In addition, because the rules are hierarchical, they must be executed in a controlled sequence.
This makes for a complicated and programmatic Extract Transform Load (ETL) task – one that is entirely customised to every enterprise. On the positive side, it yields a fantastic aggregation set. At the Visit Level, we can collapse most of the meaning of a stream of Web activity down into a few simple fields: one field to capture the visit type and one or more fields to capture the metrics relative to that visit type’s recency, frequency, and success.
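The hierarchical, ordered nature of the signaturing rules can be sketched as a first-match-wins rule chain: the sequence in which the rules are checked encodes the hierarchy. The rules and visit-type names below (purchase, support, research, browse) are hypothetical examples, not a prescribed taxonomy.

```python
def signature_visit(pages):
    """Collapse a visit's page stream into a single visit-type field.

    Rules are checked in a controlled sequence; the first match wins,
    so higher-priority patterns sit earlier in the chain.
    """
    if "checkout_complete" in pages:
        return "purchase"                       # reaching this milestone trumps everything
    if pages and pages[0].startswith("support"):
        return "support"                        # first click was support content
    if pages.count("product") / max(len(pages), 1) > 0.5:
        return "research"                       # visit dominated by product pages
    return "browse"                             # default bucket
```

A real signaturing ETL would apply dozens of such rules across the full behavioural stream, but the output shape is the same: one visit-type field per visit, alongside its recency, frequency and success metrics.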
The goal at the visitor level is to describe the customer’s entire Web experience in a small set of fields. What might have seemed almost impossible when looking at dozens of visits and hundreds of pages of web behaviour now becomes much easier. Most of what we need to know about a visitor’s online experience is captured by knowing what types of visits they’ve had and how successful those visits were. Our Visit Segmentation provides exactly that. This type of model not only makes it easier for marketers to use the data to answer the questions they have, it helps them frame meaningful questions.
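The visitor-level rollup can be sketched as follows: given visits that have already been signatured, reduce the whole history to a few fields per visit type – frequency, success count, and recency. The record layout and dates are assumptions for illustration.

```python
from datetime import date

# Hypothetical visit records, already signatured at the visit level.
visits = [
    {"type": "research", "date": date(2012, 3, 1), "success": True},
    {"type": "research", "date": date(2012, 3, 9), "success": False},
    {"type": "support",  "date": date(2012, 3, 5), "success": True},
]

def visitor_profile(visits, today=date(2012, 3, 10)):
    """Roll a visitor's visit stream up into a few summary fields per type."""
    profile = {}
    for v in visits:
        p = profile.setdefault(v["type"], {"frequency": 0, "successes": 0,
                                           "recency_days": None})
        p["frequency"] += 1
        p["successes"] += v["success"]          # bool counts as 0/1
        age = (today - v["date"]).days
        if p["recency_days"] is None or age < p["recency_days"]:
            p["recency_days"] = age             # days since most recent visit of this type
    return profile

profile = visitor_profile(visits)
```

The entire behavioural history collapses into a handful of numbers per visit type – exactly the kind of compact field set a marketing database can store and query.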
Digital marketing is still fairly new and the methods to isolate and understand consumer segments aren’t well known. A powerful Visit-Type Segmentation coupled with a traditional Recency, Frequency, Success model doesn’t just expose the data in a performance-efficient manner, it helps marketers understand how the data fits together to form a meaningful picture of the customer’s experience and the resulting marketing opportunity.
How to build an effective Visit Segmentation from behavioural data
To do the job right, you need a process for finding and categorising visit types, and a method of proving your work. This typically involves categorising numerous web-stream variables – types of content viewed, sequence and first click, search terms, milestones reached, time spent in navigation, time spent by area/topic, types of searching or transaction behaviour, topical interest and more – followed by a cluster analysis to identify true visit segments. In some cases, filter-based rules can be constructed using traditional Web analytics tools to capture straightforward visit patterns.
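Before any clustering can run, each visit must first be turned into a numeric feature vector built from variables like those listed above. A minimal sketch, with entirely hypothetical features (content-type shares, a first-click flag, time on site):

```python
def visit_features(pages, durations):
    """Turn one visit's stream into a numeric feature vector for clustering.

    Features (all hypothetical): share of product pages, share of support
    pages, whether the first click was a search, and total time on site.
    """
    n = max(len(pages), 1)
    return [
        sum(p.startswith("product") for p in pages) / n,   # product-content share
        sum(p.startswith("support") for p in pages) / n,   # support-content share
        1.0 if pages and pages[0] == "search" else 0.0,    # first-click-was-search flag
        sum(durations),                                    # total seconds on site
    ]

vec = visit_features(["search", "product", "product"], [5, 30, 40])
```

Vectors like these are what a clustering algorithm (k-means, hierarchical, etc.) would consume to surface candidate visit segments.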
In general, online survey data can be incorporated to both refine the visit segmentation and prove its efficacy. By matching your behavioural segments to self-identifications, you can establish the degree to which your analysis captures “real” intent.
Behavioural cues are not always accurate, but fortunately in this segmentation, there’s no necessity to be right all the time.
Extending the Model
One of the most profound aspects of this model is its extensibility. The Two-Tiered Segmentation was originally designed to model Web behaviour. It turns out to be equally applicable to a wide range of digital and even offline activities. By casting your customer contacts into the model of “Intent” and “Success”, you can effectively unify data from sources as disparate as ATMs, Call Centre, Mobile Applications and the Web. When you’ve done this, you have a truly refined view of that most elusive of marketing concepts – the 360 degree customer view.
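To make the cross-channel unification concrete, here is a hedged sketch: once every contact, whatever its channel, has been reduced to the two fields the model needs (“Intent” and “Success”), they can be aggregated together. The channel names, intents and records are illustrative assumptions.

```python
# Hypothetical per-channel contacts, each already cast into the
# Intent/Success model regardless of where the contact happened.
contacts = [
    {"channel": "web",         "intent": "research",   "success": True},
    {"channel": "call_centre", "intent": "complaint",  "success": False},
    {"channel": "atm",         "intent": "withdrawal", "success": True},
]

def success_by_intent(contacts):
    """Aggregate success rate per intent across every channel."""
    totals = {}
    for c in contacts:
        won, n = totals.get(c["intent"], (0, 0))
        totals[c["intent"]] = (won + c["success"], n + 1)
    return {intent: won / n for intent, (won, n) in totals.items()}

rates = success_by_intent(contacts)
```

Because the model only asks each channel for intent and outcome, ATM logs, call-centre records and web streams all fit the same two-tiered structure.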
Summary and Conclusion
Digital data has placed unprecedented demands on both marketing analysts and database professionals. Huge data volumes, complex data, multiple channels with distinct data, and a fundamental shift in most channels from simple event-based data to streams of data are all difficult for most analysts and tools to manage. It adds up to a huge new set of data analysis challenges.
What’s required is a way to model the data from all these disparate and complex sources in a way that makes sense of it. Like most real solutions, segmentation is hard work. However, compared to the cost of storing, processing and integrating the vast amounts of customer journey data now finding its way into the warehouse, it’s truly a small investment. It’s a methodology that can make the difference between success and failure in the single most important investment in the world of Marketing IT.