Tuesday, August 24, 2010
It was simpler in the olden days: you bought the tool, read the documentation and voila! you'd taught yourself web analytics (well, almost). Now, to stay at the top of your game in this business you need to be learning continuously. One of the great things about working in the web analytics industry is its rate of development: new tools and techniques are introduced all the time, and competing ideas abound on how to do the job properly.
The budding web analyst has a variety of learning resources available. Many blogs in the web analytics field debate the latest issues, give advice and suggest new ways to tackle old problems (I've listed a few in my blog list to the right, if you're interested). There are also forums, books, and white papers from consultancies and vendors, catering to the visual learners. For auditory learners there are a number of podcasts out there (see also the banner to the right). That leaves the excitingly named kinesthetic learners, who learn by doing, which sounds like the perfect opportunity to plug the Analysis Exchange.
So there are a number of places a web analyst can rely on to keep up to date with what's going on. But this puts me in mind of the former US Secretary of Defense, Donald Rumsfeld, talking about known unknowns. These resources are all great at helping you find out about things you know exist but know little about: the known unknowns. But what about the unknown unknowns? How can you get a definitive list of everything a web analyst should know, to determine whether you're on top of it all? I believe this is something the WAA is missing. Whilst it currently has the syllabus for the WAA Certification, publishing a list of the areas involved in "Web Analytics" might help define the role of the web analyst better, and help web analysts in their efforts to define themselves too. It could help build a coherent, self-referenced set of pages on the intricacies of web analytics, with suggestions for the metrics and reports to use in given scenarios. Whilst there's plenty of information out there providing overviews of web analytics and the tools to use, the advice quite often glosses over the details, or is one-dimensional, failing to mention other related reports or analyses that could be carried out. This, then, would become the definitive site for a web analytics education.
The science of web analytics has been around for a while now. So why hasn't this "open-source" educational resource been created yet? Being spoon-fed information isn't the best way to learn - what good, curious web analyst would want to learn that way? The current web analytics sphere is also very tool-centric, which makes information harder to share as silos develop. And there's an element of self-interest: handing out the information on a plate loses business for practitioners, and it spoils book sales too.
And yet, I still feel that open-source education is the way forward. Whilst the web analytics industry has been around for a while, it's still not mature. The public doesn't trust it, and whilst the majority of companies have at least one web analytics solution on their site, there's little evidence it's being used to its potential, with only the largest or bravest allowing their online strategy to be steered by it. To deal with this, we need to grow the number of individuals with the knowledge to become advocates, dedicated to analysing their website on a full-time basis. Restricting the ease with which they can learn is short-termist - we need to think about the long term. By growing an army of trained web analysts, the case for the benefits of analytics can be made to those businesses still too small or immature to have made the transition, transforming companies from being satisfied with a list of their top 10 pages into ones competing on analytics, to paraphrase Stephane Hamel's OAMM model. As a critical mass of sites that truly use analytics is reached, the remainder will have to engage or die. Competition breeds improvements in techniques and ideas. Then, as the world learns that sophisticated web analytics requires proper resourcing, the opportunity for consulting services and more specialist knowledge will grow, and the availability of information on the internet becomes irrelevant. No-one teaches themselves accountancy - they hire an accountant. By sharing now, we can create the demand for tomorrow.
Thursday, August 19, 2010
A Levels and KPIs - an analogy
Today is A-level results day in the UK, when high-school students find out whether they've got the grades to go to university. Traditionally it's the day the media go to town bashing either students or standards, depending on your viewpoint. Students are angry that society deems them unworthy of the high grades they've earned, while society finds it hard to believe that the ubiquitous A grades being handed out really reflect the students' understanding.
To my mind there are two clear problems with A-level results: the grading classification itself, and how students are being taught - and both issues rear their heads in the web analytics world.
Firstly, the grading. Here in the UK, A levels are marked from A to E, with a new A* grade introduced this year to try to distinguish the really bright students from the bright ones. However, as more and more students receive the top grades each year, it becomes hard to tell the brighter students from the bright - the metric isn't transparent. This is a problem many analysts try to overcome in their reporting. If the KPI doesn't clearly indicate what's going on, it's going to be hard to take action. Knowing that 50% of your visitors viewed 3 or more pages of your site, and are thus "engaged", doesn't help much. Knowing the distribution of page views per visit lets you isolate the extreme cases and determine who's really ploughing through your site compared with those who view just three pages - the sketch below illustrates the difference. In the case of A levels, replacing the grading classification with a simple percentage score would give universities a more accurate reflection of students' abilities, and allow comparison between them.
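To make that concrete, here's a minimal sketch in Python with pandas, purely for illustration: the visits.csv file and its page_views column are assumptions of mine, not any particular tool's export format.

```python
import pandas as pd

# One row per visit, with a page_views column (hypothetical data layout)
visits = pd.read_csv("visits.csv")

# The threshold KPI: a single headline number that hides the detail
engaged = (visits["page_views"] >= 3).mean()
print(f"Visits with 3+ page views: {engaged:.0%}")

# The full distribution: separates the three-page skimmers from the heavy readers
distribution = visits["page_views"].value_counts(normalize=True).sort_index()
print(distribution)

# Extreme cases worth isolating and investigating separately
threshold = visits["page_views"].quantile(0.95)
heavy = visits[visits["page_views"] >= threshold]
print(f"Top 5% of visits view {int(threshold)}+ pages ({len(heavy)} visits)")
```

Two sites could report the identical "50% engaged" headline while one has a long tail of twenty-page visits and the other has almost none; only the distribution tells you which you're looking at.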
Secondly, metric manipulation. A common complaint is that students are taught to pass exams rather than to broaden their knowledge of a subject. Complaints abound that first-year university students can't string a sentence together or display an understanding of basic numeracy, yet they have a clutch of A grades to their name. Back in the world of web analytics this manipulation often rears its head too, for example on content sites where articles are split across multiple pages, requiring the reader to click through to each one and thereby generating extra page views. This not only frustrates the visitor, but also implies an artificially high level of engagement with the site - a quick sketch of the effect follows.
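As a toy illustration of the inflation, here's a hedged sketch showing how collapsing paginated URLs back to their parent article gives a more honest count. The "/page/N" URL pattern and the hit data are invented for the example; adapt them to your own site's pagination scheme.

```python
import pandas as pd

# Hypothetical hit-level data for a single visit
hits = pd.DataFrame({"url": [
    "/ten-seo-tips/page/1",
    "/ten-seo-tips/page/2",
    "/ten-seo-tips/page/3",
    "/about-us",
]})

# Raw page views: 4 - the paginated article is counted three times
print("Page views:", len(hits))

# Strip the pagination suffix so each article counts once: 2 article views
hits["article"] = hits["url"].str.replace(r"/page/\d+$", "", regex=True)
print("Article views:", hits["article"].nunique())
```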
Of course KPIs are essential in the world of web analytics - they're our bread and butter. And whilst we strive to improve our sites through monitoring these KPIs, we need to bear some things in mind. A KPI is useless unless it accurately depicts the outcomes you're trying to monitor. And manipulating metrics is essentially cheating. As our Mums all told us, when you cheat, you're only cheating yourself.
Monday, August 2, 2010
Multichannel: the digital holy grail or a poisoned chalice?
Received wisdom has it that the 360-degree view of the customer should be the ultimate goal of all marketing functions. By combining customers' online and offline activities, a company can monitor how frequently, and through which channels, those customers touch it and are touched by it, ascertain how they respond to incentives, and use this information to determine how best to market to them. But should this be the goal of analytical functions? Might it actually be a waste of time?
There are a number of implementation issues to overcome if this sort of project is to avoid turning from the digital holy grail into a poisoned chalice. Firstly, accuracy. Whilst we've been told before that accuracy isn't so important in web analytics, that isn't the case when combining multiple databases - an all-singing, all-dancing database means nothing if your data isn't up to scratch. Your offline data needs regular cleaning to remove deceased customers and update addresses, otherwise your finely honed marketing campaign will be flawed from the start. On top of this there's the problem of linking offline to online records - if either set is inaccurate, you're going to have trouble merging the two, as the small sketch below illustrates. So there's little point embarking on this project unless you're satisfied the databases can be accurately linked. Finally, this isn't cheap. That in itself is no reason not to implement a scheme, provided the ROI warrants it. But can that be guaranteed, given these potential pitfalls?
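To show why even the simplest linkage fails without basic hygiene, here's a minimal sketch under an assumption the post doesn't make: that both datasets carry an email address to join on (a common but by no means universal choice). Names and figures are invented for illustration.

```python
import pandas as pd

# Hypothetical offline (CRM) and online records sharing an email join key
offline = pd.DataFrame({
    "email": ["Jane.Doe@example.com ", "bob@example.com"],
    "lifetime_value": [1200, 300],
})
online = pd.DataFrame({
    "email": ["jane.doe@example.com", "bob@example.com"],
    "visits_last_30d": [14, 2],
})

def normalise(emails: pd.Series) -> pd.Series:
    # Strip whitespace and lower-case so trivially different records still match
    return emails.str.strip().str.lower()

offline["key"] = normalise(offline["email"])
online["key"] = normalise(online["email"])

# Without the normalisation step, Jane's record would silently fail to link
merged = offline.merge(online, on="key", suffixes=("_offline", "_online"))
print(f"Linked {len(merged)} of {len(offline)} offline records")
```

And this is the easy case: real linkage usually has to cope with changed addresses, shared household emails and typos, which is exactly why the cleaning effort described above never really ends.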
Once the implementation problems are out of the way, you face the next hurdle (which ideally you'd have considered before implementing anything): the type of industry your company sits in. A multichannel programme is going to be of little use if your customers don't purchase from you very often, which could be the case in a highly competitive industry, or one where purchase frequency is naturally low (cars, for example). Infrequent purchases cause two problems: the statistical robustness of the data is weakened, and the accuracy of the data you hold is likely to deteriorate between purchases as customers' identifying information changes. Finally, and crucially, this data only covers those who purchase through your various channels; it won't pick up those who fail to purchase, assuming visitors to your site only log in during the purchasing process. It provides little help in converting prospects by determining why they didn't buy from you.
And this leads me to my main point: assuming you've got this far and set up an accurate multichannel database, with a 360-degree view of customers who purchase frequently enough to hold it all together, what next? It won't have happened overnight, and it won't have been free. Is being able to determine that customer A responded better to direct mail than to an email campaign, whereas customer B only purchases online, really going to provide huge insight? Obviously insight is there to be found, and it may benefit some companies. But wouldn't a well-constructed, segmented email campaign have already told you that customer A didn't respond well? Considered in terms of the opportunity cost of such a costly and time-consuming scheme, isn't there something more productive you could have done instead? Segmenting your email campaigns more effectively, say, and analysing recipients' on-site behaviour against that of other visitors, might improve your campaign response rates.
It seems to me that this is the end result of a marketer's fantasy gone mad, with little thought for the practical realities of its implementation and its shortfalls. It's part of a familiar scenario: we have too much data and we're struggling to deliver true, clear insight, so what do we do? Bring in more data, or try to link existing datasets in the hope of finding it. But often there is no identifiable answer - perhaps customer A simply asked his partner, who happens to be customer B, to buy it for him online. Focussing on the basics could provide as much ROI as implementing a multichannel solution.
Let me reiterate: I'm not saying there's nothing to be gained from doing this, just that it's being sold as the solution to all our problems, some sort of Utopian ideal, when in fact it takes a lot of time and money to implement, and many key questions remain unanswered.