Digital Analytics Review
Tuesday, January 18, 2011
The customer's always right
What's not to love about Voice of Customer (VOC) data? It's so much easier to figure out what a customer was up to when they just come out and tell you, instead of hunting for patterns in clickstream data.
Yet whilst it's great when your customers tell you why your site sucks, it can be depressing too. Surely it's obvious where that information is, you cry - it's staring you in the face on the home page! But as we all know, the customer's always right, and what's obvious for us won't always be for them: we need to go and fix the problem. Standard stuff. However, we can do more with this information. As well as monitoring how satisfaction and other customer metrics change after we make these fixes, we can track the comments themselves.
Building categories from the comments allows you not only to track the improvements you've made but also to keep an eye out for future issues. For example, you could create a category based on feedback suggesting the user can't find the information they're looking for, and from it compute the percentage of "lost" responders. You could pivot this metric by visit purpose or traffic source to get an idea of where the issues are. Some tools let you integrate your VOC data with your analytics package, so it may be possible to build segments based on these responders and overlay them on your clickstream data. Maybe they can't find it because it's below the fold on their screen size?
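To make the idea concrete, here's a minimal sketch in Python/pandas of bucketing free-text comments into a "lost" category with simple keyword rules and computing the percentage of lost responders by traffic source. The column names, keyword list and sample data are all hypothetical - a real taxonomy would be much richer.

```python
import pandas as pd

# Hypothetical VOC export: one row per survey response (column names are illustrative)
voc = pd.DataFrame({
    "comment": [
        "couldn't find the returns policy anywhere",
        "great site, checkout was quick",
        "where is the size guide??",
        "page kept crashing on my phone",
    ],
    "traffic_source": ["search", "email", "search", "social"],
})

# Simple keyword rules for the "lost" category; a real taxonomy would be richer
LOST_KEYWORDS = ["can't find", "couldn't find", "where is", "looking for"]

def is_lost(comment: str) -> bool:
    """Flag comments suggesting the visitor couldn't find what they came for."""
    text = comment.lower()
    return any(keyword in text for keyword in LOST_KEYWORDS)

voc["lost"] = voc["comment"].apply(is_lost)

# Percentage of "lost" responders overall, then pivoted by traffic source
print(f"Lost responders: {voc['lost'].mean() * 100:.0f}%")
print(voc.groupby("traffic_source")["lost"].mean() * 100)
```

The same flags could be exported as a segment definition if your analytics tool supports importing VOC responses.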
Of course, you don't have to stick with just those who "can't find stuff" - you could also segment by more vitriolic comments, technical problems, basket problems and so on. So take a couple of paracetamol, be brave, and explore your VOC data!
Wednesday, December 15, 2010
Privacy, reputation and ethics
The public's grievances with tracking are not going away, fuelled by articles in the WSJ, extensions which block tracking and murmurings of a tracking ban. In an attempt to engage with and inform the public, the WAA has recently updated its code of ethics. The code proposes a list of statements that websites should agree to, centring on privacy, transparency, consumer control and education.
Although some believe that a ban is inevitable, if hard to enforce, we can begin to fight back by considering how a website owner's decision to monitor traffic responsibly can affect their reputation. The decision to adhere to the code or not will likely be influenced by how concerned a site's visitors are with privacy and data security, as well as the policy/code's perceived cost (implementing and enforcing the relevant processes, displaying it on the site, etc.).
Although websites and their visitors vary, it's likely that in order to avoid the potential negative effects on reputation for a relatively small implementation cost, most would choose to publicly sign up to the code. With individual complaints now able to build momentum into public campaigns, websites need to take reputation management very seriously. Would publicly signing up to the WAA code pacify privacy campaigners? Not entirely - the code requires the public's trust that it is being faithfully enforced, and trust is one of the current stumbling blocks. This is why a clear, intuitive argument for tracking, backed up by the site's privacy policy and support for the code, is needed to make a compelling case that tracking is in both parties' interests, and that upholding the principles of the code is too.
And yet not everyone is aware of this debate, and many have yet to make the decision. This is where the WAA needs to keep on evangelising, talking to the likes of the WSJ and putting across our side of the argument. We can do our bit by signing up to the code and improving our own sites' privacy policies. With both sides of the argument becoming more vocal, the number of those existing in blissful ignorance should soon diminish.
Tuesday, November 9, 2010
Using browser data as a net sophistication proxy
Today marks Firefox's 6th birthday, and what better way to celebrate than with a blog post on browser data?
Within web analytics packages there are plenty of metrics and dimensions that describe user behaviour on your site. Without resorting to external sources, though, it's hard to build a picture of individuals and their characteristics as opposed to the behaviour they exhibit on your site. There are, however, data within web analytics packages that can hint at these characteristics. One example is the browser breakdown report: a user's choice of browser works as a proxy for their level of internet sophistication.
Historically, we could say with a fair degree of confidence that those who didn't use Internet Explorer were more advanced users of the internet than those who did. In more recent years, although this still rings true it's not as black and white as it used to be, as IE's market share diminishes in light of the general public's increasing awareness of the alternatives. There are of course exceptions to this - those who use multiple browsers, or those using the internet in a work environment where their browser choice is restricted, although this can be overcome. However, in general those who use non-IE browsers are by definition exhibiting preferences that indicate their more sophisticated use of the internet.
This definition of sophistication can be refined by looking at browser versions rather than browsers alone. Doing this could give you an indication of how early adopters (those using dev or beta versions of browsers) interact with your site as opposed to luddites (those still on IE6), and gives you more flexibility in how you define people. The downside is that you need to keep your definitions up to date, as browser updates now come thick and fast. And, of course, you don't have to stop there - adding other dimensions (for example keywords used or keyword count) can refine them further.
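As a rough illustration, the sketch below buckets (browser, major version) pairs into sophistication tiers along the lines described above. The cut-off versions and tier names are entirely hypothetical and would need updating as new releases ship.

```python
# Hypothetical bucketing of (browser, major version) pairs into sophistication tiers.
# The version cut-offs are illustrative only and would need updating as releases ship.
EARLY_ADOPTER = {("Firefox", 4), ("Chrome", 9)}   # dev/beta channels at the time
LAGGARD = {("Internet Explorer", 6)}

def sophistication_tier(browser: str, major_version: int) -> str:
    """Return a coarse sophistication label for a visitor's browser choice."""
    if (browser, major_version) in EARLY_ADOPTER:
        return "early adopter"
    if (browser, major_version) in LAGGARD:
        return "laggard"
    if browser == "Internet Explorer":
        return "mainstream"
    return "switcher"  # actively chose a non-default browser

# Classify a few visitors pulled from a browser version report
for browser, version in [("Internet Explorer", 6), ("Firefox", 4), ("Chrome", 8)]:
    print(f"{browser} {version} -> {sophistication_tier(browser, version)}")
```

These labels could then be used as segment definitions in whichever tool you use.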
Creating segments based on these definitions can open up a lot of insights into your site behaviour and traffic sources. However, you need to bear in mind that it is a proxy, and first and foremost it describes the difference in behaviour of visitors using different browsers - so if you see some weird and wacky behaviour as a result of this, your first port of call should be to check how your site functions for this browser rather than put it down to users being less/more sophisticated than average. That said, with some common sense and imagination you can uncover plenty of interesting stuff using different interpretations of "standard" web analytics dimensions.
Tuesday, October 19, 2010
Improving Engagement
Engagement's back on the menu. Eric Peterson gave a webinar recently with an excellent overview of engagement and discussed some new white papers on how to measure it. However, due to the ambiguous nature of engagement, these measurement techniques (despite being the best attempt so far) are quite complicated and need to be tailored to the individual website.
Historically we measured engagement using page views per visit, time on site and the like, but there was no way to capture positive or negative sentiment, or to distinguish an engaged visitor from a lost one. Currently companies are trying to bring in other datasets at their disposal to help, such as social and Voice of the Customer, but it's still fuzzy and subjective. We need to spend more time thinking about what engagement is, rather than just thinking of it as "someone who digs my site". To what extent are current methodologies for engagement measurement capturing actual engagement? Are we using the metrics at our disposal to define the concept of engagement as well as measure it?
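To make the limits of those classic metrics concrete, here's a minimal sketch of the kind of composite index they lend themselves to: a weighted sum of capped, normalised per-visit metrics. The metric names, weights and caps are entirely hypothetical and would need tailoring to the site in question; this is not any vendor's or author's published formula.

```python
# Hypothetical per-visit metrics; weights and caps are arbitrary and site-specific
WEIGHTS = {"pages_viewed": 0.4, "minutes_on_site": 0.3, "comments_left": 0.3}
CAPS = {"pages_viewed": 10, "minutes_on_site": 15, "comments_left": 3}

def engagement_index(visit: dict) -> float:
    """Weighted sum of capped, normalised metrics, giving a score between 0 and 1."""
    score = 0.0
    for metric, weight in WEIGHTS.items():
        normalised = min(visit.get(metric, 0), CAPS[metric]) / CAPS[metric]
        score += weight * normalised
    return score

visit = {"pages_viewed": 6, "minutes_on_site": 4, "comments_left": 1}
print(f"Engagement index: {engagement_index(visit):.2f}")  # 0.42 for this example
```

Even a toy like this shows the problem: the number says nothing about sentiment or intent, only about volume of activity.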
Ideally we'd define engagement by what the visitors to our sites are thinking whilst carrying out their activities on the site as much as by what they did. But this isn't an ideal world. What we need to avoid is defining it purely by the metrics we happen to be able to hold up against it. Broadly, an engaged visitor is likely to view more pages and exhibit an increased propensity to interact with your site, whether internally (e.g. leaving comments on posts) or externally (e.g. linking to your site). Obviously this again depends on the site and site type, so can't be defined too tightly, creating the engagement paradox: we have a limited number of valid metrics at our disposal to capture behaviour that is too varied to define accurately. But it gets worse - we also need to bear in mind that visitors are unique and as such will interact differently with a site. Another thing we might need to take into consideration is what stage they're at in the customer lifecycle.
Whilst we can argue the toss at which stage a visitor would become engaged, we can certainly agree that the latter stages would define engaged visitors. However, visitors in these different latter categories would likely display different types of behaviour, even though they were "engaged". A visitor who's yet to make a purchase but is close to making a decision would behave differently to a loyal multi-buyer. To me this highlights the fact that with the current tools at our disposal it's going to be hard work to build an engagement model anytime soon.
Might this change? Looking to the future, the increased importance of mobile to analytics and its implications for future web behaviour (geolocation) will bring more parameters and data to be used in the calculation of engagement. Whether this will make the calculation of engagement easier or not is debatable. Perhaps this is somewhere that the paid tools can bring some innovation to the market by looking to build an engagement feature into the interface? We're forever hearing the concerns around the amount of data available, but the lack of information coming out of it - this would be a great opportunity to right that wrong.
Tuesday, September 28, 2010
The future of web analytics
As the web analytics industry matures, its future remains uncertain. In this post I'll look at some of the questions we'll need to answer soon.
There's been a lot of takeover activity in the industry of late, and two of the main analytics commentators, John Lovett and Eric Peterson, have written about the mergers and what they mean. Now only WebTrends is left as an independent tool, zigging whilst the others zag, and IBM is bursting at the seams with its three recent acquisitions, making 23 in the last four years. Whilst they may not have announced plans to close any of these products down, one has to wonder whether this will be good for the industry, given the potential for stifling the very innovation the industry craves at the moment.
Then there's the bifurcation-in-tools debate. Some maintain that to do truly sophisticated analytics you need a more powerful (and expensive) tool, with the likes of Google Analytics being left to the marketers. Whilst Google doesn't offer a visitor-level intelligence tool as some of the paid solutions do, no-one can deny the progress the tool has made in recent years. But will it ever truly catch up and end the bifurcation of tools (and is it in Google's commercial interests to do so)? And what about Google's future itself - how reliant on its parent is Google Analytics? With Facebook and others starting to take on the big G, and its recent attempts to enter the social arena backfiring, the company's future isn't guaranteed, and its analytics package isn't at the top of its list of priorities. However, the tool has no clear competitors in the free arena, with Yahoo! Analytics maintaining its non-mainstream, enterprise-only position for the foreseeable future. What if a new (suitably big) entrant decided to get in on the free game? Perhaps Microsoft might reconsider their exit from this field? If they or another did, it could force Google to up its game further.
Finally, there's the soft side of analytics - the skills required to do the job. Currently a knowledge of statistics isn't that important; being business savvy or having coding knowledge is more helpful. But what if other factors change? Will the rise of mobile require a more technical person to understand its intricacies? Will the rise of intuitively designed and easy-to-implement analytics packages mean that company knowledge and the interpretation of the numbers become more important for bringing context and relevancy? As sites evolve and improve through competition and analytical insight, will visitor-level tools become obligatory for commercial sites? If they do, the skill set of the web analyst will have to expand too.
Wednesday, September 15, 2010
The future of the web and the implications for its measurement
Guessing the future of the web is a game that everyone likes to play, but because it's still early days and the web is still volatile, the forecaster normally ends up looking silly. But I'm going to carry on and make some predictions anyway and have a think about what this means for those in the web analytics community.
Although it typically accounts for less than 10% of visits to websites, it's obvious that with the rise of smartphones and tablets the mobile share of web browsing will soon dominate. Handset manufacturers will focus on the devices' power and on bringing in new functionality, with more consumer focus being put on the operating system and software. What new apps will be developed? Currently geolocation is the flavour of the month, with Facebook joining in the fun. In my experience the geography data fed back through web analytics tools is not that accurate. With the future of the internet becoming more reliant on geography, this might be something we need to improve. Whilst there are again privacy implications to improving this accuracy, imagine the potential for finding out where your customers are when browsing your site, or for integrating check-in data with your web analytics data.
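As a rough illustration of what such an integration might look like, here's a minimal sketch in Python/pandas that attaches the most recent check-in (within a two-hour window) to each web visit by the same visitor. The column names, the shared visitor_id and the sample data are entirely hypothetical - in practice obtaining a common key across the two datasets would be the hard part.

```python
import pandas as pd

# Hypothetical exports: web visits and check-ins from a location-based service,
# both keyed by a shared (and in practice hard-won) visitor_id
visits = pd.DataFrame({
    "visitor_id": ["a1", "b2"],
    "visit_start": pd.to_datetime(["2010-09-10 12:05", "2010-09-10 18:30"]),
    "pages_viewed": [7, 2],
})
checkins = pd.DataFrame({
    "visitor_id": ["a1", "b2"],
    "checkin_time": pd.to_datetime(["2010-09-10 11:50", "2010-09-09 09:00"]),
    "venue": ["Coffee shop, Shoreditch", "Heathrow Airport"],
})

# For each visit, attach the most recent check-in by the same visitor
# within the two hours before the visit started
merged = pd.merge_asof(
    visits.sort_values("visit_start"),
    checkins.sort_values("checkin_time"),
    left_on="visit_start",
    right_on="checkin_time",
    by="visitor_id",
    tolerance=pd.Timedelta(hours=2),
    direction="backward",
)
print(merged[["visitor_id", "visit_start", "venue", "pages_viewed"]])
```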
With issues around privacy, complaints about its applications and other negative publicity, Facebook seems to be peaking. We regularly hear about the risks of putting all your marketing eggs in the Facebook basket, but doesn't the web analytics industry risk doing the same thing? Nevertheless, companies are now not only building relationships with customers on their fan pages within these arenas, but also monitoring what's said about them outside those fan pages. Sentiment analysis is one area with real potential, but it relies on still-developing artificial intelligence. To me, this is closely linked with the struggle to get to Web 3.0 - the semantic web, where we try to bring more meaning to the content on the internet and build relationships between data and datasets. Thinking about how we struggle to manage our data now, and how the providers struggle to present it, makes it clear how much progress will be required to accurately manage, link and present this new era of data. I think that how this data is managed, owned and presented is one of the largest challenges facing our industry, perhaps second only to how we address our current privacy issues.
The majority of the world is still coming to terms with the implications of the "always available" internet, and its potential for increased communication, whether for good or ill. As the authorities attempt to track illicit online behaviour, there's a growing conflation of monitoring civilians' behaviour and data with web analytics. Whilst I believe we need to step up and nip this in the bud, I would hope that eventually the public takes a more relaxed attitude to tracking, in the way that they do to store loyalty cards, for example. We also need to consider the implications of a generation growing up with the internet as its main resource for entertainment and education. It doesn't seem beyond the realms of possibility for future companies to be set up to help 18 year olds change their identity and escape their permanently documented youthful transgressions, as hypothesised by Eric Schmidt recently. Might there be an opportunity to build tools that help individuals track their online presence? Whilst the ease with which students can now research information will help them discover more, on the downside it's now easier to plagiarise others' work for assignments and communicate in exams. The recently introduced Tynt Tracer may be developed further to help track illicit copying here, with analytics agencies perhaps being set up to monitor other people's sites rather than just the sites they're optimising.
Indeed, it's this side of analytics that I think we need to be considering now. Whilst the model of working for a company to help optimise their website is the current standard, perhaps we should start thinking outside the box. The internet is now central to more and more people's lives, and whilst this will continue to drive the existing model for those in the web analytics industry, there are opportunities to be had working on other sites. These could be governmental or educational, analysing external sites for a company (as suggested above), or indeed working for individuals, perhaps measuring the data held on them by other companies. All in all, the web analytics industry should be kept quite busy keeping up with developments on the internet.
I'd love to hear your thoughts on this - am I way off the mark? Have I missed something you think we need to consider?
Tuesday, August 24, 2010
Improving self-improvement: a call for open-source education
It was simpler in the olden days: you bought the tool, read the documentation and voila! you'd taught yourself web analytics (well, almost). Now, to be at the top of your game in this business you need to be continuously learning. One of the many great things about working in the web analytics industry is its rate of development, with lots of new tools and techniques being introduced and different schools of thought on how to do the job properly.
There are a variety of learning resources available to the budding web analyst. There are many blogs in the web analytics field debating the latest issues, giving advice and suggesting new ways to tackle old problems (I've listed a few in my blog list to the right, if you're interested). There are also forums, books, and white papers provided by consultancies and vendors, catering to those in the visual learner category. For those auditory learners there are a number of podcasts out there (see also banner to the right). This then leaves the excitingly named kinesthetic learners who learn by doing, which sounds like the perfect opportunity to plug the Analysis Exchange.
So there are a number of places a web analyst can rely on to keep up-to-date with what's going on. But this puts me in mind of the former US Secretary of Defense, Donald Rumsfeld, talking about known unknowns. These resources are all great at helping you find out information about things you know exist but know little about, the known unknowns. But what about the unknown unknowns? How can you get a definitive list of everything that a web analyst should know, to determine if you're on top of it all? I believe that this is something the WAA is missing. Whilst they currently have the syllabus for the WAA Certification, publishing a list of the areas involved in "Web Analytics" might help define the role of the web analyst better, and help analysts in their efforts to define themselves too. It could help build a coherent, cross-referenced set of pages on the intricacies of web analytics, with suggestions for the metrics and reports to use for given scenarios. Whilst there's plenty of information out there providing overviews of web analytics and the tools to use, quite often the advice glosses over the details, or is one-dimensional, failing to mention other related reports or analyses that could be carried out. This, then, would become the definitive site for a web analytics education.
The science of web analytics has been around for a while now. So why hasn't this "open-source" educational resource been created yet? Being spoon-fed the information isn't the best way to learn - what good, curious web analyst would want to learn this way? With the current web analytics sphere being very tools-centric, it becomes harder to share information as silos develop. And there's also an element of self-interest. Handing out the information on a plate loses business for practitioners; it also spoils book sales.
And yet, I still feel that open-source education is the way to move forwards. Whilst the web analytics industry has been around for a while, it's still not mature. The public doesn't trust it, and whilst the majority of companies have at least one web analytics solution on their site, there's little evidence it's being used to its potential, with only the largest or bravest allowing their online strategy to be steered by it. In order to deal with this, we need to grow the number of individuals with the necessary knowledge to become advocates, dedicated to analysing their website on a full time basis. Restricting the ease with which they can learn is a short-termist approach - we need to think about the long term. By growing an army of trained web analysts, the case for the benefits of analytics can be made to those businesses still too small or immature to have made the transition, transforming companies from being satisfied with a list of their top 10 pages to ones competing on analytics, to paraphrase Stephane Hamel's OAMM model. As a critical mass of sites that truly use analytics is reached, the remainder will have to engage or die. Competition breeds improvements in techniques and ideas. Then, as the world learns that sophisticated web analytics requires sufficient resourcing, the opportunity for consulting services and more specialist knowledge will grow, and the availability of information on the internet becomes irrelevant. No-one teaches themselves accountancy - they hire an accountant. By sharing now, we can create the demand for tomorrow.