The Unfulfilled Promise of Online Analytics, Part 2

Perfection is achieved,
not when there is nothing more to add,
but when there is nothing left to take away.
– Antoine de Saint-Exupéry, Wind, Sand and Stars

Readers can find the previous entry in this arc at The Unfulfilled Promise of Online Analytics, Part 1.

First, I want to thank all the people who read, commented, twittered, emailed, skyped and phoned me with their thoughts on Part 1.

My special thanks to the people with reputations and company names who commented in Part 1. Avinash Kaushik and Jim Novo, I thank and congratulate you for stepping up and responding (I asked others if I could include them in this list; they never responded). Whether you intended to or not, whether you recognize it or not, you demonstrated a willingness to lead and a willingness to get involved. Please let's keep the discussion going.

Also my thanks to those who took up the gauntlet by propagating the discussion via their own blogs. Here Chris Berry (and I also note that Chris' The Schism in Analytics, A response to Carrabis, Part II post presages some of what I'll post here) and Kevin Hillstrom come to mind. My apologies to others I may not have encountered yet.

Second, I was taken aback by the amount of activity that post generated. I was completely unprepared for the responses. It never occurred to me there was a nerve to be struck; only one person interviewed responded purely in the positive, and that lack of positive response had led me to believe this information was self-evident.

Well…there was one of the problems. It was self-evident. Like the alcoholic brother-in-law elephant in the living room, it took someone new to the family to point and say, “My god is that guy drunk or what!”

And like the family who's been working very hard making sure nobody acknowledges the elephant, the enablers came forward — okay, they emailed, skyped and phoned forward. One industry leader commented, saw my response and asked that their comment be removed. I did so with great regret because there can be no leadership without discussion, no unification of voices until all voices are heard.

Please note that some quotes appearing in this entry may be from different sources than in part 1 and (as always) are anonymous unless a) express permission for their use is given or b) the quote is in the public domain (Einstein, Saint-Exupery, etc).

Okay, enough preamble. Enjoy!

“The whole industry needs a fresh approach. This situation isn't going to improve itself.”

There was a sense of exhaustion among respondents regarding the industry. It took two forms and I would be hard pressed to determine which form took precedence.

One form I could liken to the exhaustion a spouse feels when their partner continually promises that tomorrow will be better, that they'll stop drinking/drugging/gambling/overeating/abusing or otherwise acting out.

It wasn't always the case. Once upon a time (that phrase was actually used by more than one respondent) there was a belief that if things were implemented correctly, if a new tool could be developed, if management would understand what was being done, if if if… Things could and would be better. Promises were made that were never kept and were then comfortably forgotten.

The second form I could liken to the neglected child who starts acting out simply to get attention. Look at me, Look at me! But mom&dad always have something else to focus their attention on. There's the new product launch, opening new markets, having to answer to the Board, (and probably the worst) the other children (marketing, finance, logistics, …), …

“When you know the implementation is correct you have to wonder if the specifications are wrong.”

Several respondents showed an impressive level of self-awareness. Many of them have moved on, either out of the industry completely or into more fulfilling positions within. All recognized that any industry that succumbs to promise and hype will ultimately end in disappointment.

“First we're told to nail things down, then given a block of unobtainium to nail them in, then told to do it now!”

The disappointment took two primary forms (clear schisms abounded in this research; clear schisms are usually indicative of deep-level challenges to unification in social groups) and the division was along personality types. Respondents who were more analytic than business focused were disappointed because “…a fraction of implementations achieve business goals. A tiny fraction of those actually work.”

Respondents who were more business than analytics focused were disappointed because the industry didn't help them achieve their career goals.

For many in both camps moving on was a recognition of their own personal growth and maturation; for most it was frustration based, a running away from pain rather than a movement towards pleasure. This latter again demonstrates a victim mentality, a child caught in the middle between warring parents.

“When the tools don't agree management's solution is to get a new tool.”

“Deciding on tools is more politics than smarts. Management doesn't ask us, they just go with the best promises.”

Respondents demonstrated frustration with clients/organizations and vendors that refuse to demonstrate leadership. This was such a strong theme that I address it at length below. Sometimes a lack of leadership is the result of internal politics (“…and that (competition, keeping knowledge to themselves, backstabbing) is starting to happen (we see the schism (right word?) between Eric's 'hard' position and Avinash's 'easy' (and others)…”).

Leadership vacuums also develop when power surges back and forth between those given authority positions by others. Family dynamics recognizes this when parents switch roles without clearly letting children know who's taking the lead (think James Dean's “You're tearing me apart” in Rebel Without a Cause). This frustration was exacerbated when respondents began to recognize that no tool was truly new, only the interfaces and report formats changed.

There was a sense among respondents that vendors and clients/organizations were switching roles back and forth, neither owning leadership for long, and again, the respondents were caught in the middle.

“Management pays attention to what they paid for, not what you tell them.”

Some respondents are looking at the horizon and reporting a new (to them) phenomenon; as vendors merge, move and restructure there's an increasing lack of definition around “what can we do with this?” This is disturbing in lots of ways.

“…everybody's agreeing with their own ideas and nobody else's.”

Analysts will begin to socially and economically bifurcate (there will be no “middle class”). Those at the bottom of the scale will get into the industry as a typical “just out of school” job then move elsewhere unless they're politically adept. The politically adept will join the top runners, either associating themselves with whatever exemplars exist or by becoming exemplars themselves. But the social setting thus created allows for a multitude of exemplars, meaning there are many paths to the stars, meaning one must choose wisely, meaning most will fail, and thus the culture bifurcates again and fewer will stay long enough to reach the stars. “You have to pick who you listen to. I get tired figuring out who to follow each day.”

Respondents admitted to lacking (what I recognize as) research skills. I questioned several people about their decision methods — had they considered this or that about what they did or are planning to do — and universally they were grateful to me for helping them clarify issues. Those who had appreciable research skills were hampered by internal politics (“Until my boss is ready nothing gets done.”).

Most respondents confused outputs with outcomes (as noted in part 1) because tools are presented and taught at two levels (this is my conclusion based on discussions; I'm happy to be corrected). There's the tool core that only a few learn to use and there's the tool interface that everyone has access to.

Everyone can test and modify their plans based on the interface outputs, but what happens at the core level — how the interface outputs are arrived at — is the great unknown, hence can't be defended in management discussions: “…I can't explain where it came from so I'm ignored.” Management's (quite reasonable, to me) response follows Arthur C. Clarke's “Mankind never completely abandons any of its ancient tools”; they go with what they know, especially when analysts themselves don't demonstrate confidence in their findings. “I can only shrug so many times before they stop listening, period.”

Management is left to make decisions based on experience, and now we see the previously mentioned bifurcation creeping into business decisions. Those with the most experience, the most tacit knowledge, win. As John Erskine wrote, “Opinion is that exercise of the human will that allows us to make a decision without information”, and management — asking for more accountability — is demanding to understand the basis for the information given.

“Did you ever get the urge when someone calls up or sends e-mails asking, 'How's that data coming?' to say, 'Well, we're about two hours behind where we would be if I didn't have to keep stopping to answer your goofy-?ss phone calls and e-mails.' This is called project management, I guess.”

“Some tools are rejected even when they make successful predictions.”

“Ignore them” as a strategy for responding to business requests works both ways. Management repeatedly asking difficult-to-solve questions results in their being ignored by analysts until the final results are in. By that time both question and answer are irrelevant to a tactical business decision and once again the “promise” is lost. In-house analysts can suggest new tools and must deal with their suggestions gaining little traction. “Management works in small networks that look at the same thing. They're worse than g?dd?mn children. You have to whack them on the side of the head to get their attention.”

Management's reluctance to take on different tools and methodologies is understandable. Such decisions increase risk and no business wants risk.

“To change the form of a tool is to lose its power. What is a mystery can only be experienced for the first time once.”

As online analytics matures it must evolve to survive. I asked for clarification of the statement above and was told that yes, there are times when old paradigms need to be tossed aside, and knowing when is a recognizable management skill that can only be exercised by extremely high-level management, by insanely confident upstarts and lastly by (you guessed it) trusted leaders/guides. The speaker had recently returned to the US from a study of successful EU-based startups. When and how paradigms should be shifted and abandoned is a hot topic among 30ish EU entrepreneurs.

“We're supposed to be solving problems. But I can't figure out what problems we're supposed to solve.”

“Random metric names and symbols is not an equation.”

(The quote above is from Anna O'Brien's Random Acts of Data blog.)

Business and Science are orthogonal, not parallel. Any science-based endeavor works to overcome obstacles. If not directly, then to provide insight into how and what obstacles can be overcome. Business-based endeavors work to generate profit. Science involves empirical investigation. Investigation takes time and only certain businesses can afford time because unless the science is working at overcoming a business obstacle, it's a cost, not a profit.

So if you can't afford the time involved in research and are being paid to solve business problems your options are limited. Most respondents relied on literature (usually read at home during “family time” or while traveling), conferences, private conversations and blogs. Literature is only produced by people wanting to sell something (this includes yours truly). It may be a book, a conference ticket, a tool, consulting, a metaphysic, …, and even when what they offer is free (such as most blogs) consumers pay with their attention, engagement and time (yes, I know. Especially with my posts).

“…I don't believe in WA anymore, I haven't seen any of my clients change because of it and all the presentations that I've seen are always similar…”

Conferences and similar venues are biased by geographies, time and cost (again, even if free you're paying somehow. Whoever is picking up the bar tab and providing the munchies is going to be boasting about how many attended).

Private conversations provide limited access and that leaves blogs. The largest audiences will be (most often) offline in the form of books and online in the form of blogs.

Behold, and without most people realizing it's happening, exemplars form. The exemplar du jour provides the understanding du jour, hence a path to what problems can be solved du jour. Who will survive?

Historical precedent indicates that exemplars who embrace and encourage new models will thrive. More than thrive, they will continue as positive exemplars. Exemplars not embracing or at least acknowledging new models will quickly become negative exemplars, and the “negativity” will be demonstrated socially, first in small group settings, then spilling over into large group settings once a threshold is reached (and once that threshold is reached, watch out!). The latter won't happen overnight and it will definitely happen (my opinion) because all societies follow specific evolutionary and ecological principles (evolutionary biology, Red Queen, Court Jester, evolutionary dynamics, niche construction and adaptive radiation rules (along with others) all apply). The online analytics world is no different.

Some people contacted me about Stephane Hamel's Web Analytics Maturity Model. I knew nothing about it, contacted Stephane, asked to read his full paper (not the shortened version available online), did so, talked with him about it, told him my conclusions and take on it, and got his permission to share those conclusions and takes here. I also asked Stephane if I could apply his model to some of my work with the goal of creating something with objective metricization that would be predictive in nature, and he agreed (if you treat Stephane's axes as clades and consider each node as a specific situation then cladistic analysis tools via Situation Calculus look very promising (asleep yet?)).

A case in point is Stephane Hamel and his Web Analytics Maturity Model (WAMM). Stephane will emerge as an exemplar for several reasons and WAMM is only one of them.

“KISS should be part of the overall philosophy.”

WAMM is (my opinion) an excellent first step toward solving some of the issues recognized in part 1 because it does something psycholinguists know must be done before any problem can be solved: it gives the problem a name. Organizations can place themselves or be placed on a scale of 0-5, Impaired to Addicted (Stephane, did you know that only 1-4 would be considered psychologically healthy?). WAMM helps the online analytics world because it creates a codification, an assessment tool for where an organization is in its online efforts.
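Purely as an illustration of what “placing an organization on a scale” might look like, here is a hypothetical WAMM-style self-assessment sketch. Only the 0-5 range and the Impaired/Addicted endpoints come from the model as described here; the intermediate level names, axes and scores below are invented:

```python
# Hypothetical sketch of a WAMM-style maturity self-assessment.
# Only the 0-5 scale and the "Impaired"/"Addicted" endpoints are from
# the post; the intermediate level names, axes and scores are invented.
LEVELS = ["Impaired", "Level 1", "Level 2", "Level 3", "Level 4", "Addicted"]

def maturity_profile(scores):
    """Map per-axis scores (0-5) to level names.

    Overall maturity is taken as the weakest axis, reflecting the idea
    that success requires excelling evenly along all axes.
    """
    profile = {axis: LEVELS[s] for axis, s in scores.items()}
    overall = min(scores.values())
    return profile, LEVELS[overall]

# Invented example organization:
scores = {"management": 3, "skills": 2, "process": 1, "technology": 4}
profile, overall = maturity_profile(scores)
print(overall)  # prints "Level 1" -- the weakest axis dominates
```

The min-aggregation mirrors the observation later in this post that one can excel along any axis but must excel evenly along all of them to succeed; a real assessment would of course use Stephane's actual dimensions.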

I asked Stephane if he thought his tool was a solution to what I identified in part 1. He agreed with me that it wasn't. Its purpose (my interpretation, Stephane agreed) is to create a 2D array, create buckets therein and then explain what goes in each bucket.

I asked Stephane if he believed WAMM provided a metricizable solution with universally agreed to objective measures (I told Stephane that I wasn't grasping how WAMM becomes an “x + y = z” type of tool and asked if I'd missed something). Stephane replied “…no, you haven't missed anything, because it is NOT a x+y=z magical/universal formula, that's not the goal. The utmost goal is to enable change, facilitate discussion, and it's not 'black magic'. A formula would imply there is some kind of recipe to success. Just like we can admire Amazon or Google success and could in theory replicate everything they do, you simply can't replicate the brains working there – thus, I think there is a limit to applying a formula (or 'brain power' is a huge randomized value in the formula).”

WAMM and any similar models would be considered observational tools (I explain “observational” tools further down in this post). Most observational tools (I would write “all” but don't have enough data to be convinced) trace their origins (and this is a fascinating study) to surveying: people could walk the land and agree “here is a rise, there is a glen” but it wasn't until surveying tools (the plumb & line, levels, rods & poles, tapes, compass, theodolite, …) came along that territories literally became maps (orienteers can appreciate this easily) that told you “You are here” and gave very precise definitions of where “here” was.

The only problem with observational tools is that the map is not the territory. Yes, large enough maps can help you figure out how to get from “here” to “there”, and how far you can travel (how much your business can successfully change) depends on the size of your map, your confidence in your guide/leader, … . Lots of change means maps have to be very large (i.e., very large data fields/sets) and updated regularly (to ensure where you're walking is still where you want to walk). The adage “Here there be dragons” places challenges in a fixed, historical location; it doesn't account for population and migrational dynamics (market movements, audience changes).

Or you need lots of confidence in your leaders.

“…any science first start as art until it's understood and mature enough, no?”

A conclusion of this research is that online analytics is still more art than science, more practitioner than professional (at least in the client/organization's mind). This was demonstrated as a core belief in responses, as the ratio of respondents using “practitioner” to “professional” was 6:1. This language use truly shocked me. Even among non-AmerEnglish speakers the psycholinguistics of “practitioner” and “professional” makes itself known. “Practitioner” is to “professional” as “seeking” is to “doing”, “deed” to “task”, “questing” to “working”, …

“The disconnect between what practitioners do and what businesses need is an embarrassment. There's a widening gulf between [online analytics] and business requirements.”

Online analytics makes use of mathematics (statistics, anyway) and although some people use formulae the results are often not repeatable except in incredibly large frames, hence any surgical work is highly questionable. As the USAF Ammo Troop manual states, “Cluster bombing from B-52s is very, very accurate. The bombs are guaranteed always to hit the ground.”

A challenge for online analysts may be recognizing that the current state is more art than science and promoting both it and themselves accordingly. They are doing themselves and those they answer to a disservice if they believe and promote that they're doing “science” while the error rates between methods are recognized (probably non-consciously) as “art” by clients. Current models and methods allow for high degrees of flexibility (read “unaccountable error sources”).

“Modern medical science has no cure for your condition. Fortunately for you, I'm a quack.”

A good metaphor is modern medicine. Without a diagnosis there can be no prognosis. You can attempt a cure but without a prognosis you have no idea if the patient is getting better or not. Most people think a prognosis is what they hear on TV and in the movies. “Doctor, will he live?” “The prognosis is good.” Umm…no. A prognosis is a description of the normal course of something, a prediction based on lots of empirical data seasoned with knowledge of the individual's general health. A prognosis of “most people turn blue then die” coupled with observations of “the skin is becoming a healthy pink and the individual is running a marathon” means the cure has worked and that the prognosis has failed.

Right now the state of online analytics is like the doctor telling the patient “We know you're ill but we don't know what you have.” The patient asks “Is there a cure?” and the doctor responds, “We don't know that either. Until we know what you have we don't know how to treat you…but we're willing to spend lots of money figuring it out.”

This philosophy is good for the individual but not for the whole (as witnessed by the public outcries about the recently published mammogram studies; no better demonstration of the difficulty of communicating science to non-scientists has occurred in recent years).

But once the disease is named? Then we have essentially put a box around whatever it is. We know its size, its shape and its limits.

There can be no standardization, no normalization of procedure or protocol, when the patient can shop for opinions until they find the one they want.

The challenge current models and methods face is that they serve the hospitals (vendors), not the doctors (practitioners) nor the patients (clients/organizations). It doesn't matter if all the doctors agree on a single diagnosis, what matters is whether or not there is a single prognosis that will heal the client. In that sense, WA is still much more an art than it is a science, and while we may all attend Hogwarts, our individual levels of wizardry may leave much to be desired.

“…but give us a second and we'll run the data again.”

If you wish to claim the tools of mathematics then you must be willing to subject yourself to mathematical rigor. Currently there can be no version of Karl Popper's falsifiability when the same tool produces different results each time it's used (forget about different tools producing different results; when the same tool produces different results you're standing at the scientific “Abandon Hope All Ye Who Enter Here” gate).

“…gathered data that [we] knew how to gather rather than asking what data would be useful to gather and figuring out how to gather it.”

All the online tools currently available are “observational” (anthropologists, behavioral ethologists, etiologists, …, rely heavily on such tools). “Observation” is the current online tool sets' origin (going back to the first online analytics implementation at UoH in the early 1990s) and not much has changed. The challenge to observational tools is that they can only become predictive tools when amazingly large numbers are involved. And even then you can only predict generalized mass movement, neither small group nor individual behavior (for either you need what PsyOps calls ITATs — Individualizing Target Acquisition Technologies), with the mass' size determining the upper limit of a prediction's accuracy.
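The scale dependence claimed above is ordinary sampling statistics: the standard error of an aggregate estimate shrinks as 1/√n, so a mass-level rate stabilizes long before anything can be said about a small group, let alone an individual. A minimal simulation (the 10% conversion rate is an arbitrary stand-in, not a real benchmark):

```python
import random

def estimated_rate(n, true_rate=0.1, seed=1):
    """Simulate n visitors who each convert with probability true_rate,
    then return the observed (measured) conversion rate."""
    rng = random.Random(seed)
    hits = sum(rng.random() < true_rate for _ in range(n))
    return hits / n

# Small samples swing widely around the true rate; only very large
# samples pin down the mass behavior -- and even then the estimate
# says nothing about which individual visitor will convert.
for n in (100, 10_000, 1_000_000):
    print(n, estimated_rate(n))
```

This is why observational tools can predict generalized mass movement but not individual behavior: the 1/√n improvement applies only to the aggregate, with the mass's size setting the upper limit on accuracy.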

At this point we start circling back to part 1's discussions about “accountability” and why the suggestion of it gets more nervous laughter than serious nods. Respondents' language indicates there is currently more desire to keep WA an art than a science. There is less accountability when things are an art form. But “metrics as an art” is in direct conflict with client goals, and unless a great majority of practitioners wish their industry to mature there is no cure for its current malaise.

“The promise has been unfulfilled since 2003. We were talking about more effective marketing, improved customer retention and all that stuff back then.”

One solution to this is giving the industry time to mature. Right now there is conflict between the art and science paradigms, between Aristophanes' “Let each man exercise the art he knows” and Lee Mockenstrum's “Information is a measure of the reduction of uncertainty.”

Time as a solution has been demonstrated historically, most obviously in our medical metaphor. Village wisdomkeepers gave way to doctors and then to university degrees in medicine because the buying public (economic pressure) demanded consistency of care/cures. Eventually things will circle back, again due to economic pressure: enough clients will seek alternatives not provided by institutional medicine and go back to practitioners of alternative medicine, at which point the cycle will begin again. People have been openly seeking alternative cures to catastrophic illnesses since the 1960s. Eventually money began escaping institutional medicine's purview and insurers were being forced to pay. The end result was that institutional medicine and insurers started recognizing and accepting alternative medical technologies…provided some certification took place, usually through some university program.

It will be interesting to see how WAMM economizes the online analytics ecology: will practitioners decide institutions lower in the WAMM matrix are too expensive to deal with? If so, such institutions — which require experienced practitioners to survive — will only be able to afford low-quality, less experienced practitioners to help them. This can be likened to a naval gunnery axiom, “The farther one is from a target, either the larger the shell or the better the targeting mechanism”, and companies will opt for larger shells (poorly defined efforts) rather than better targeting mechanisms (experienced practitioners).

“A dominant strand for [online analytics] the past ten to fifteen years has been incorporating web information with executive decisions.”

So far no single solution to concerns raised in this research is apparent (to me). Instead a solution matrix of several components seems most likely to succeed (WAMM is a type of solution matrix; you can excel along any axis and to be successful you need to excel evenly along all axes). So far three matrix elements — time, a lack of leadership and realism — have been identified. Time to mature is culture dependent so the online community as a whole must do the work.

“Not enough gets said about the importance of abandoning crap.”

(I believe the quote above originated with Ira Glass.)

Realism — in the sense of being realistic about what should be expected and what can be accomplished — deals with social mores and leads into the “lack of leadership” concern. There can be no “realism” until the social frame accepts “realism” as a standard, until hype and promise are dismissed, and this isn't likely to happen until leaders/exemplars emerge who make it so.

“Yes, I see your point. Please remove my post from your blog.”

Progress in any discipline depends on public debate and the criticism of ideas. That recognized, it is unfortunate that the current modes of online analytics public debate and criticism are limited to conferences, private conversations and (as witnessed here) online posts. Conferences (by their nature) only allow for stentorian and HiPPOish debate. Private conversations only allow for senatorial flow. In both cases the community at large doesn't take part.

Blogs and related online venues are an interesting situation. They provide a means for voices to be raised from the crowd. Social mechanics research NextStage has been doing (we're working on a whitepaper) documents how leaders emerge (become senatorial, sometimes stentorian and in some cases HiPPOtic), how they fade, how to create and destroy them (for marketing purposes), (probably most importantly) how a given audience will perceive and respond to a given leader and what an individual can do regarding their own leadership status.

“The WAA is very US focussed.”

I bring this into the discussion because several people commented publicly (both in Part 1 comments and elsewhere) and privately (emails and skypes) that the industry (more true of web than search) suffers from a lack of leadership.

People who enjoy the mantle of leadership yet refuse to lead are not leaders. Recognized names had an opportunity to both join and take leadership in the discussion (I mention some who did at the top of this post). Yet the majority of others either failed to respond, chose to ignore the discussion or — as indicated by the quote opening this section — simply backed away when the discussion was engaged. No explanation, no attempt at writing something else. Considering the traffic, twits, follow-up posts on other blogs (for something I posted, anyway), this was an opportunity for people to step forward. Especially when lots of other people were writing that there was a leadership vacuum.

Leaders/Influencers take different forms (as documented in the previously mentioned social mechanics paper). Two forms are Guide and Responder. Guides are those who are in front. They may know the way (hence are “experts”) and may not. Experts may or may not be trusted depending on how well they can demonstrate their expertise safely to their followers (you learn to trust your guide quickly if you've ever gone walking on Scottish bogs. They demonstrate their knowledge by saying “Don't step there”, you step there and go in over your head at which point they pull you out and say “I said, 'Don't step there'.” A clear, clean, quick demonstration of expertise).

Guides who don't know the way rely heavily on the trust of those following them and can be likened to “chain of command” situations; they are followed because they are trusted and have the moral authority to be followed.

The Guide role is definitely riskier. It's also the more respected one because Guides lead by “being in front of the pack, stepping carefully, being able to read the trail signs hence guiding them safely”. The Responder doesn't lead by being in front. Instead they assume a position “closer to the end, perpetually working at catching up, but always telling the pack where to go, where to look and what to do”. The major problem for Responders is that people don't have much respect for that latter role. They may respect the individual, but most people will quickly recognize the role being played and the lack of respect will filter backward to the individual.

This plays greatly into any industry's maturation cycle. New school will replace old school and unless our forebears' wisdom is truly sage — evergreen rather than time&place dependent — the emerging schools will seek their own influencers, leaders and guides. This is already being demonstrated in the fractionalizing of the conference market.

One industry leader offered three points in a comment, saw my response and asked that I remove their comment before it went live. I'm going to address two points (the third was narrative and doesn't apply) because I believe the points should be part of the discussion and more so due to their origin.

First, Web Analytics is not a specific activity.

“People need to look beyond the first conclusions that come to mind.”

I responded that nothing I'd researched thus far led me to think of 'Web Analytics' as an 'always do this – always get that' type of activity and offered that while different people use 'Web Analytics' for different purposes, the malaise is quite pervasive. Whether or not 'Web Analytics' includes a host of different activities is irrelevant to the discussion. The analysts' dissatisfaction with their role in the larger business frame, their dissatisfaction with the tools they are asked or choose to use, their dissatisfaction with their 'poor country cousin' position in the chain-of-command, …, are what need to be addressed.

Second, the individual wrote that there was no “right way” to do web analytics.

I both agreed and disagreed with this and explained that there are lots of ways to dig a hole. In the end, the question is 'Did you dig the hole?' More specifically, if one is asked to excavate a foundation hole, dig a grave, plow a field, dig a well, plant tomatoes, …, all involve digging holes, yet each requires different tools (time dependency for completion becomes an issue, I know. You could excavate a foundation hole with a hand trowel. I wouldn't want to, but you could). Stating that 'there is a right way to do it' is a faulty assumption demonstrates a belief that standardization will never apply and that chaos is therefore the rule.

Chaos being the rule usually indicates the crossing of a cultural boundary (such as a western-educated individual having to survive in the Bush; none of their socio-cognitive rules apply until they learn the rules of Bush culture) or crazy-making behavior (from family and group dynamics theory). Culture of any kind is basically a war against chaos: what cultures do is create rules for proper conduct and tool use within their norms.

One could conjecture that the cross-cultural boundary is the analytics-management boundary. So long as management controls that boundary, a) there will be no 'one way' to do analytics (the patients will self-diagnose and self-prescribe) and b) analytics will never be granted a seat at the grown-ups' table.

The numbers need a context.

So there had better be a 'right way to do it', at least as far as delivering results and being understood are concerned, because without that the industry, or more accurately its practitioners, are lost.

“I could tell them 'It is not possible to send in the Armadillos for this particular effort but communication will continue without interruption' and they'd nod and agree.”

Two needs surfaced quickly:

  • recognize what's achievable when (so people aren't set up to fail) and
  • learn how to promote faster adoption of an agenda (without going to Lysistratic extremes, of course. Everybody wants to keep their job).

Accepting increased accountability addresses some issues but not all. Concepts from several sources (some distilled and not in quotes, some stated more elegantly than I could and in quotes) revealed the following additional matrix components:

1) “[online] Analysts need to share the error margins, not the final analysis, of their tools”
2) stop or at least recognize and honestly report measurement inflation
3) “Trainings need to focus on a proficiency threshold”
4) “…provide a strong evidence of benefit”
5) understand what [a tool] is really reporting
6) “It's better to come at [online analytics] from a business background than the other way around…” (“…but who wants the cut in pay?”)
7) “We should standardize reports because the vendors won't”
8) initiate regular, recognized adaptive testing for higher level practitioners
9) include communication and risk assessment training (sometime when we're at a conference, ask me about the bat-and-ball question. It's an amazingly simple way to discover one's risk assessment abilities)

We must work to get uncertainty off the table.

“The problem is uncertainty…”

That's a long component list and most readers will justifiably back away or become overwhelmed and disheartened. Fortunately there are historically proven, overlapping strategies for dealing with the above items collectively rather than individually.

  • Analysts live with uncertainty, clients fear it, so “…get uncertainty off the table” when presenting reports (this was termed “stop hedging your bets” by some respondents). This single point addresses items 1, 2, 4, 5, 8 and 9 above (hopefully you begin to appreciate that working diligently on any one component suggested here will accrue benefits in several directions, so to speak).
  • Identify the real problem so you can respond to their (management's) problem. This point addresses items 1, 2, 3, 4, 5, 6, 7 and 9.
  • Speak their (management's) language. Items 4, 5, 6, 7 and 9.
  • Learn to communicate the same message many ways without violating the core message (we've isolated eight vectors addressing this and the previous item: urgency, certainty, integrity, language facility, positioning, hope, outcome emphasis (Rene, I'm seeing another tool. Are you?)). Items 3, 4, 5, 6, 7, 8 and 9 are handled here.
  • Be drastic. Rethink and redo from the bottom up if you have to. This point deals with items 1, 2, 4, 5, 8 and 9.
  • Focus on opportunities, not difficulties. This point deals with items 4, 5, 6 and 9.

Any one of the above will cover several matrix components right out of the gate. The benefit to any of the above stratagems is that implementing any one will cause other stratagems to root over time as well, and thus the shift

  • in what the numbers are about,
  • how they are demonstrated,
  • how to derive actionable meaning from them and
  • how accountability is framed

mentioned at the end of Part 1 can be easily (well, at least more easily) achieved.


I wrote a little about how this study was done in Part 1. We contacted some people via email, others via phone, others via Skype, and some in face-to-face conversation, performing various analyses on the responses in each case. All electronic information exchanges were retained and analyzed using a variety of analog and digital tools. Face-to-face conversations were conducted with at least one other observer present to check for personal biasing in the resulting analysis.

Like any research, others will need to add their voices and thoughts to the work presented here. I make no claims to its completeness, only that it's as complete as current time and resources allow.



35 thoughts on “The Unfulfilled Promise of Online Analytics, Part 2”

  1. Joseph, again you bring out some very interesting points in your post. You talk about 'disappointment' early on in the post — part of it is definitely caused by [to quote Geoffrey Moore] “the chasm between the visionaries and the pragmatists”. One group sold dreams while the other wanted solutions.

    You mention “….unless the science is working at overcoming a business obstacle, it's a cost …”.
    I would venture and say that science is always a cost — it is a matter of deciding how much of that cost are you willing to incur and when do you want to say enough and apply the results. Either extreme can hurt the business – I have seen cases where folks hardly had any scientific rigor to their analysis (for example, say in the selection of a control group) and have also seen where the data was overkilled by using numerous methods ending up with wasted time and more confusion as to which route to take.
    Joseph Response: Howdy, Ned, good to read you again.
    Point taken regarding science always being a cost (from a business frame) and the need to make the decisions you've noted. There are lots of examples such as those you gave. Thanks for catching this.

    Having said that, I do think that the web analytics community can benefit greatly from a dose of Experimental Design. I constantly see instances of confusion between 'correlation and causality', 'moderators vs mediators', 'random sample designs for the task at hand' etc. This obviously leads to conclusions that do not match the reality of the business — which in turn adds to the 'lack of trust' in the tools, the data, and anything else they can point a finger at.
    Joseph Response: Noted and agreed to. By the way, I'd never actually heard the phrase “random sample designs for the task at hand” and it gave me a chuckle.

    On the WAMM – kudos to Stephane, it is an excellent piece of work. I agree it is a great start, but folks have got to realize that they cannot use it as an Ouija board and expect it to communicate a magical solution. Also, there can never be a “formula” for success (no pun intended). Even if two businesses are identical in all measurable aspects (size, revenue, #customers, products sold, etc.), there are a lot of intangible factors that simply cannot be captured in a formula (things like management philosophy and the personality of decision makers) that would undoubtedly influence the degree of success even if the formula outcome is the same for both. I really think of WAMM as an excellent scaffolding on which a firm can stand and build their tower of success. But to build this tower they need additional materials and labor (another topic, another discussion).
    Joseph Response: Well…here I think I'll politely disagree. Or desire more discussion. And please forgive what may seem like a plug because I don't mean it as such. It is meant as a point of technology/scientific clarification.
    You write that management philosophy and personalities can't be captured in a formula. That is much of the work I've been involved in over the past 20+ years and (finally) have patent(s) wherein I demonstrate that such things can be mathematically defined to a reasonably high degree of precision. At least enough precision to be within a +/- 2 dB error margin. We've even documented lots of those successes in various scientific papers and such.

    And as always, thanks for reading and commenting.

  2. I could rage on about the 'art' implications. You're pointing out what others have said. I'll respond to them by saying that that isn't what analytics should be. Communication is an art – sure. The process of information extraction and learning must be based on science.
    Joseph Response: Howdy, Chris. Thanks for reading and commenting.
    I agree with what you write above — definitely that the process should be based on science. I also know (from personal experiences with my masters/mentors) that there's a place where technical knowledge and ability are subsumed by… maybe a better concept would be “become wholly integrated into”… one's being. Pardon the mysticism and I'll admit this is part of my philosophy; whatever you do and however you do it, your acts will be a demonstration of the who and what that you are. I know from my research that people can't separate the two…not for very long anyway.
    That end product, that merging of the who and what that produces the “how someone does themselves” is (to me) a very high art, one I aspire to.
    So I completely agree with what you've written and offer that it's the individual doing the practice or professional act that can demonstrate their science as an art. Too often I've seen (and there's external documentation to corroborate) that artists often lose the feeling for their art when they learn to think of it as a science. But you do need to learn the (scientific) basics before your analytics becomes your personal art form.
    So yes, again, I agree.

    Science is about the progressive discovery of yet more uncertainty. The notion of getting it off the table…it's counter to what we do. It's counter to progress. Progress is an expanding library of questions yet to be answered. Progress is an expanding library of questions that have been answered. And going back and reading those answers.
    Joseph Response: This I may not agree with. At least not with how I interpret what you've written (and I'm willing to be educated). No, wait…I'll accept it. Science is about the progressive discovery of yet more uncertainty…yes. I'll agree with that. I also accept Lee Mockensturm's “Information is a measure of the reduction of uncertainty.” The two are corollaries to a certain degree.

    The matrix of solutions you outline is constructive.
    Joseph Response: Thanks. That was my hope.

    Questions of accuracy and understanding what the tool is really saying (definition) should be known. There ought to be a level of professionalism involved. Granted, I'm now talking about what analytics should be, not how it's actually practiced today.

    The quantification of uncertainty, honesty, integrity, periodic testing, evidence…all of it. You know it smacks of a physicians' college, eh?
    Joseph Response: Darn…I've been found out…must be because I'm doing research on the health industry and citizens' health initiatives…

    I'm reading in there a definitive call for leadership. So. Who leads?

    Is there a Moses?
    Is there a Genghis Khan?
    Are there both?
    Who has the authority to lead?
    Joseph Response: These are excellent root questions, methinks. Fortunately, I'm not qualified to answer them.
    And thanks again for a wonderful discussion.

  3. Joseph, good to trade thoughts with you too.
    Joseph Response: Hello again, Ned

    Ahh – you know what I meant by the 'random sampling designs for the task' :-). Random sampling is cool — if designed correctly and fitted to what one is wanting to study.
    Joseph Response: Yes, I had a good idea what you meant, I'd simply never heard it put that way. I have a relative who Monte Carloed his way through a PhD in physics. His battle cry was “Random sampling be damned!”

    On the last point, my apologies for not being more explicit. I am definitely not disagreeing with the technological aspects here but more on the dynamic nature of these factors as they apply in reality. The issue I have with “formulas” and such is that most people use them as static factors: they plug in the values, get an output, and for the remainder of the project use that output in their decision making. This is what I was countering.
    Joseph Response: Understood, accepted and agreed with. When/Should we meet we can talk about mathematical engines and variables channels. That's how I dealt with your concern re formulae being used as static factors. My original thinking was that standard mathematical forms weren't going to cut it so I came up with variations of field effect equations that modified themselves in real time based on what they were required to do.

    Probably could have written that last sentence in Gaelic and had as many people nod.

    While I agree that personality and such can be “mathematically defined/captured” using latent variable modeling and/or proprietary techniques – [I think] one has to be very very careful on how the mathematical definition is applied.
    Joseph Response: Agreed.

    In most businesses, there are many folks involved in a decision. To make matters more convoluted, there are folks who are in and out (e.g. a Senior VP who just 'drops in' for a chat but in the process offers some 'thoughts'); and then there are folks who were there at the beginning and then left, and those who were not there at the beginning but came later. Anyway, [to me] mathematical definitions of factors like personality are great to guide decisions but should not end up being decision points in themselves.
    Joseph Response: Agreed!

    (Even great tools/techniques can do damage if used by the wrong person, for the wrong reason).
    Joseph Response: You've read our Principles, eh?
    Thanks again for a great discussion.

  4. Once again, well done. Thank you for playing an important role in representing the needs of an analytics community.

  5. I was going to comment here, but it was getting so long that I decided to give you a nod on my blog.

    However, I see that others also have long replies :)

    It's a good thing. Open-minded, constructive discussion, even when there is misunderstanding or disagreement over some points, is as it should always be!

    I second Kevin: thank you for spending so much time helping us!


  6. Ah! We are not a discipline. Not yet? Ever will?
    Joseph Response: Howdy, Jacques. Thanks for joining the discussion.
    It's not for me to determine whether online analytics is a discipline. I think if one were to query a group of online analysts they might claim yes, it is a discipline. I was reporting on language use, concept and cognitive analytics, … Using those tools (and even if online analytics is a discipline), the respondents did not believe it was one (as a core value) or that it was being treated as such.

    Interesting that our field evolved in the age of self-publication (blogs, tweets, etc.). We could say anything, utter any absurdity, and never be called on our BS. So little evidence-based material is “published”, never going through the peer review filters. I think this is one of the big reasons we're still going after legitimacy as a “field” (again, not a discipline).
    Joseph Response: Yes, I'd agree with that. I do have some concerns with your thoughts on peer review filters…I believe they do exist and are just demonstrated differently. I believe people might not point at something and proclaim “That's BS” but I do think some form of ostracism (mild or not, depending on the degree of obvious BSity) would occur.
    I've seen such things at conferences in the online and other fields. I've heard statements along those lines on various occasions.
    I'm curious to know other people's thoughts on this.

    Rapidly, some sort of star system got in place; there's so much brown-nosing in WA, it often makes me sick.
    Joseph Response: I believe this is changing. I have no direct, identifiable evidence of this at present. Right now it's more hearsay — “I was talking with…and they said…”, “A bunch of us were talking…” type of thing — than data points I've been collecting. This post and its antecedent grew out of research so maybe I'll have some data to make you feel better in a while. We can but hope…

    Debates quickly turn into bullying matches.
    Joseph Response: Pity.

    Why? Well, I guess because it is at heart commercially driven. Most people who publish (yours truly as well) have vested commercial interests in building their reputation (did I say expertise?), and place themselves as high as possible in the food chain. This is also true for solution vendors.

    Don't get me wrong; there is absolutely nothing wrong with being in business and wishing for that business to thrive. I do this everyday. What I am saying is that we should not be surprised if we are facing the present situation. I am also not naive enough to believe there is no star system in academia, but at least they have forums (i.e. publications) where people are asked to base what they say on evidence.

    Maybe one possible avenue would be to create such a place, where we would say “Here's what I discovered, how I did it, and what I think we should conclude from that”. Does it have to be some sort of Journal of Web Analytics, at the risk of sounding pompous? I don't know, but I guess that type of endeavor could and should certainly be under the WAA's leadership.
    Joseph Response: I think some of what you're hoping for is available in academic marketing journals, usability journals and such.
    I disagree that it should be under the WAA's leadership, though. My disagreement is based on some research I'm doing for eMetrics (thanks to Matthew Finlay for letting me go public with that) and discussions I've had with WAA members while doing the research presented in these posts and for eM.
    Basically, the WAA is perceived as a political organization, hence subject to ego and with a recognized and defined agenda, therefore may not be a truly impartial judge of any material submitted. The end result would be a journal that might well include excellent material yet never gain respect due to public perception of who/what got published and why.
    (I queried some WAA members about the above line before including it and one person commented “That's what the WAA looks like… politics & egos… there seems to be no leadership because they are afraid of stepping on any vendors' toes. Defining standards should be led by the WAA, but it's not happening because of that.” This was largely in keeping with what respondents were communicating in this research.)
    That offered, do I think such a journal is a good idea? Yes, definitely. Any social science (I think online analytics falls into that camp and am willing to be convinced otherwise) that eventually gains widespread recognition (meaning recognition outside of its own community) invariably produces a “house organ”, a publication that demonstrates there is some field-specific rigor applied to the research, results, standards definitions (good luck with that one), …

    Now, there's a potential leader right there.
    Joseph Response: Umm…

    And thanks for reading and commenting.

  7. Very interesting reading. I am not a web analytics expert, but a humble PhD student trying to discover and understand the shifts taking place in media. If my thoughts seem off track or naive, that is why.
    Joseph Response: Greetings and welcome to the discussion. My opinion: your thoughts are neither off track nor naive. NextStage Principle #34: Never be afraid to appear a fool when asking a question. It's the ones who won't ask questions who are truly the fools.

    Your discussions are providing some valuable insight.
    Joseph Response: Thank you and thank goodness.

    Just a few comments:

    My grandmother gave birth to five children, all of whom were either stillborn or died at birth. Then a medical quack floated down the Missouri River, treated my grandmother with a few concoctions, gave her a diet to follow when pregnant and structured a supporting girdle of sorts, and she went on to have seven healthy children, my father being one. Two heroes in this story: the quack, and my grandmother who, although ridiculed by family and neighbors, followed through, and the rest is history.
    Joseph Response: I, too, have a great respect for “quacks”. Your anecdote is worthy.

    I am a little concerned with the discussion of science putting a box around it.
    Joseph Response: I probably didn't make myself clear on this point. My “But once the disease is named? Then we have essentially put a box around whatever it is. We know its size, its shape and its limits.” comes more from linguistics, psycho-linguistics and semiotics and has to do with the human cognitive need to give something a name before anything else can be done regarding that thing. This deals with the human mind being designed to categorize things. Until we have a “name” (category) for something we don't know how to respond to it. Socio-culturally, we tend to discard or ignore things we can't identify (people who don't give us answers we can understand). Neuro-cognitively, our wiring is to completely ignore things that don't “fit” into our experiential maps (Dr. Jerome Lettvin's Frog's-Eye Principle was probably the quintessential demonstration of this). I apologize for the confusion. Science's job (as I think Chris Berry notes in his comment) is to shatter boxes on the one hand and put things in boxes on the other. But create the boxes themselves? Nope, that's language's job.
    So you are correct to be concerned and thank you for inviting me to clarify (hope I did).

    I applaud the art of science discussion and see great hope in expanding the box or discarding the box altogether. The Spanish observer, Miguel de Unamuno, suggested:

    “The mind seeks what is dead, for what is living escapes it; it seeks to congeal the flowing stream in blocks of ice; it seeks to arrest it. In order to analyze a body it is necessary to extenuate or destroy it. In order to understand anything it is necessary to kill it, to lay it out rigid in the mind. Science is a cemetery of dead ideas, even though life may issue from them… The rational, in effect, is simply the relational; reason is limited to relating irrational elements. Who can extract the cube root of an ash-tree? Nevertheless we need logic…”

    I don't think Unamuno was suggesting science could only study the dissected and thus the dead, but that present-day science does this and claims victory. We use the box to dilute the complexity, but web analytics is such a dynamic environment that until we can study it as a whole living thing, we will never understand it. As I mentioned in a comment on Unfulfilled Promise Part 1 [Joseph note: also comment], we need complexity experts who can build bridges between many disciplines. On the other hand, I have seen some who have worked so hard on an analytics tool (like Colonel Nicholson in The Bridge on the River Kwai) that they are willing to let it deliver bad results because they are so proud of the tool. These same new leaders will need the courage to blow up a few traditional bridges as well.
    Joseph Response: Well stated and well done. Thank you.

    If I understand you correctly, Joseph, I take it you are not a fan of Malcolm Gladwell's Blink concepts of rapid cognition in suggesting we look beyond the first conclusions? I agree. Making the right decisions for the right reasons requires more justification than simply intuition, emotion, ego, etc.
    Joseph Response: There's lots of evidence about what types of problems can be correctly solved via rapid cognition and what can't. Most people can't answer the following correctly (I've seen this myself and took part in similar studies; the most recent was with about 4k participants): You have a bat and a ball. The bat costs exactly one dollar more than the ball. Together the bat and ball cost US$1.10.
    How much does the ball cost?

    The majority of people (including the majority of college graduate students surveyed) couldn't answer this question correctly because they non-consciously activated incorrect brain regions and accepted results derived from them. However, if one either learns or has training to recognize which parts of the brain need to fire to solve particular types of problems, few errors occur. Or fewer obvious errors occur, anyway.
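    (For readers who want to check the trap for themselves, the algebra takes two lines. A minimal sketch in Python, assuming nothing beyond the dollar figures in the problem statement above:)

    ```python
    # Bat-and-ball: bat + ball = $1.10 and bat = ball + $1.00.
    # The intuitive answer is "the ball costs $0.10", but then the bat would
    # cost $1.00 and the difference would be only $0.90, not $1.00.
    from fractions import Fraction  # exact decimal arithmetic, no float rounding

    total = Fraction("1.10")       # bat + ball
    difference = Fraction("1.00")  # bat - ball

    ball = (total - difference) / 2  # solve the pair of equations
    bat = ball + difference

    assert bat + ball == total and bat - ball == difference
    print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")
    # prints: ball = $0.05, bat = $1.05
    ```

    Working it symbolically rather than by gut feel is exactly the kind of "activating the right brain regions" discussed above.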

    Outstanding part two. Will there be a part three?
    Joseph Response: Well…um…I wasn't planning on it. I'm open to suggestions, though.

    And thanks for reading and commenting.

  8. Thanks for a thoughtful Part 2 Joseph!
    Joseph Response: Greetings, Good Sir! Thanks for taking part in the discussion, part 2. The pleasure is mine.

    Your “lots of ways to dig a hole” metaphor is wonderful. Based on something I had heard Guy Creese talk about, I have hypothesized that Multiplicity is a key to success in identifying actionable insights on the web. The reason is simple: the questions we have to answer are so different that we need to be agile and flexible enough to use the right tool.
    Joseph Response: I agree, agility and flexibility are imperative when investigating new “worlds”. I don't think the online world is so mature (it's not as well understood as the highway system, for example) that we can take anything for granted. Some areas may be close to that and not all.

    To tie it to your metaphor: 1. Make sure you understand what kind of hole needs to be dug (foundation, grave, well, etc.) and then 2. Pick the right tool (hand trowel, excavator, etc.).
    Joseph Response: Yes again. It's still safer to investigate whether something is currently solvable, then whether it needs to be solved, then whether the current tools are up to providing an accurate solution, … Unless we are willing to be agile and flexible walking our solution paths, we may find ourselves deep under the ice.

    Yet as human beings we gravitate towards one “god's gift to humanity” tool. It is also not helpful that in 90% of large- to medium-size companies there is a distinct lack of clarity about what the online strategy is (and what kinds of holes need to be dug).

    Both challenges will ease; we humans take time giving up our entrenched mental models.
    Joseph Response: Yes, Arthur C. Clarke's “Mankind never completely abandons any of its ancient tools” applies once again.

    [PS: My thanks to Mike Mitchell as well for his excellent comment and wonderful observations!]

  9. What should a leader be doing or saying, specifically? Can you give some examples, more than “we need to have leadership”? If you perceive the absence of leadership, you must have some criteria that aren't being met right now. If a real leader existed, what would they be saying? What needs to be said?

    1. Hello again, Chris Grant,
      Your question(s) are worthy and I would much prefer the community have the discussion. Like you, and as noted in your comment on Part 1, I come from another disciplinary background. My observations are valid within their framework, not necessarily elsewhere. You wrote in your first comment that you have seen this phenomenon, these responses and even this discussion elsewhere due to your org dev background, hence your knowledge is probably more valuable than mine in this setting.

      What does org dev indicate as leader actions and statements?

      Regarding this industry specifically, some leaders have come forward. These are individuals who demonstrate their willingness to lead (sometimes without stating it as such) simply by adding their voice to an uncomfortable discussion. True leaders are willing to place themselves (not others) at risk and demonstrate that risk by taking unpopular positions within a community. I also believe true leadership is inclusive, especially of dissent (NextStage Principle #48: True Authority becomes such by acknowledging, understanding and incorporating all points of view, especially those which disagree.
      Authority cannot exist without growth and change, and authority which can't include or disprove disagreement is no authority at all.) These and many others are my thoughts and may not echo those of the community, or may not even be valid due to some community structures of which I'm unaware.

      I also note that the community is indicating a lack of leadership in their responses to me. This was a data point to me, not something I was aware of as I don't have much involvement with the online analytics community as such. I would rather the community come forward (anonymously if they wish) to determine what leadership “criteria” need be.

      What would leaders be saying? What needs to be said? Those are questions I invite the community to discuss, either here or elsewhere, and I admit I'd love to know where and how this thread might travel. I note it's already picked up some volume and mass on Stephane Hamel's Immeria Blog and has been noted on Chris Berry's and Kevin Hillstrom's blogs in the past.

      Thanks for reading and commenting. Let me know if you'd like more/other and I'll do my best.

      1. “(NextStage Principle #48: True Authority becomes such by acknowledging, understanding and incorporating all points of view, especially those which disagree.)”

        Analytics is hard. (because people make it hard)
        Joseph Response: 1) Howdy, Chris.
        2) I've been thinking I should do some explaining around the meaning of “hard” as in “x is hard”. Something can be “hard” (to understand, to do, to believe, …) and aside from physical challenges the reasons are psychosocial in nature. People exist within a sociocultural model and that model wraps its populace in laws, mores and rules. Sometimes it is these laws, mores and rules that make something difficult for an individual existing in that model to understand, do, …

        Some people profess that “math” is too hard for them. Talk to them, work things back and eventually they tell you about some bad experience with a teacher, parent, a deficient math studies program, a lack of (educational/family/…) support, something like that. Rarely does one state as a fact “I don't have any aptitude for math” in the way someone states “I can't sing a note” or “I couldn't learn languages if I tried”.

        Are there people who don't have a math (or whatever) aptitude? Yes, definitely. Fortunately, most of them don't select a mathematics (or whatever) career. They follow some other path that doesn't require an aptitude they don't possess.

        There's also research about “x is hard” psychologies. These psychologies derive ego and self-identification rewards by stating “x is hard” then doing “x”, essentially stating “See how good I am? I can do this hard thing!”. This was exemplified in Part 1's “See this tool? I must know what I'm doing because I use this tool” comment.

        There's also the martyr concept at work, the mathematician who lets everyone know just how hard their work is so that they and their work will be honored and revered in their community.
        There are lots of other reasons for the promotion of “x is hard”. All of them tend to fall away when tools such as appreciative inquiry, positive deviance, dynamic evaluation and story method are used as interrogation measures. The general (and obvious) form of such methods can be summed up as “if something is a challenge then you're doing it because…” Follow the answers back until they start to repeat and some very interesting psychologies make themselves known.

        Anyway, one of the objectives of scientific inquiry is to develop technologies that make things easier for the greatest number of people. Any technologist, technology, scientist or science that intentionally makes or claims things to be difficult is stating that it will forever remain in the hands of the few. That means it will eventually become extinct because (as I've said and written before) the history of technology is the placing of the most power in the greatest number of people's hands economically. (Recent NSE research adds another dimension to that, and that's for another blog or post.) The rules of population dynamics and evolutionary theory, niche-management, (environment)web-diversification, etc., force these things to be true. And remember, Mother Nature bats last and owns the stadium.

        Analytics is easy. (because quick wins DO exist and anybody can do it)
        Joseph Response: Given time and tide, given understanding and a willingness to break/change the rules of a socio-cultural dynamic, yes. Everything is easy. All it takes is a willingness to understand “x” at its kernel.

        Models are valuable abstractions for understanding the world. (because they are).
        The Novo-Mason variant of RFM modeling can be profitable.
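        (The Novo-Mason variant itself isn't spelled out in this post. For readers unfamiliar with RFM modeling in general, here is a minimal sketch of classic RFM scoring — rank customers by Recency, Frequency and Monetary value — using made-up purchase data; none of the numbers or ranking choices below come from the discussion above.)

```python
from datetime import date

# Toy purchase history: (customer_id, purchase_date, amount). Illustrative only.
purchases = [
    ("a", date(2024, 6, 1), 50.0),
    ("a", date(2024, 6, 20), 30.0),
    ("b", date(2024, 1, 5), 200.0),
    ("c", date(2024, 5, 15), 20.0),
    ("c", date(2024, 6, 25), 20.0),
    ("c", date(2024, 6, 28), 10.0),
]
today = date(2024, 7, 1)

# Aggregate per customer: recency (days since most recent purchase),
# frequency (purchase count), monetary (total spend).
stats = {}
for cid, d, amt in purchases:
    rec, freq, mon = stats.get(cid, (None, 0, 0.0))
    days = (today - d).days
    rec = days if rec is None else min(rec, days)
    stats[cid] = (rec, freq + 1, mon + amt)

def rank_score(values, reverse=False):
    """Map each distinct value to a 1..n rank score (higher = better)."""
    order = sorted(set(values), reverse=reverse)
    return {v: i + 1 for i, v in enumerate(order)}

recs = [s[0] for s in stats.values()]
freqs = [s[1] for s in stats.values()]
mons = [s[2] for s in stats.values()]
# Lower recency is better, so rank it in reverse.
r_rank = rank_score(recs, reverse=True)
f_rank = rank_score(freqs)
m_rank = rank_score(mons)

# Each customer gets an (R, F, M) score triple.
rfm = {cid: (r_rank[r], f_rank[f], m_rank[m]) for cid, (r, f, m) in stats.items()}
```

        (Production RFM work typically bins customers into quintiles over much larger populations; the simple rank scoring here just shows the shape of the idea.)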

        So fine. We're all having a good time in the sandbox.

        And then hard choices have to be made.

        Every healthy field of practice I know has clearly defined axioms. You just can't expect to progress if you're always going back and tearing yourself apart over axioms. Look at what has happened in Information Architecture over the years for a clear case study.
        Joseph Response: I agree with the above with the following clarification: Every healthy field of practice has clearly defined axioms and knows those axioms will change as more data becomes information becomes knowledge becomes wisdom.

        At which point, Joseph, does the leader with authority say “I see your point, I've noted your exceptional exceptions, but we're going to keep on building our boat over here using Oak instead of Pine, and you're welcome to come with.”?
        Joseph Response: I don't know of many (true) leaders who posit things as above. What I've seen demonstrated repeatedly and with amazing success is a variant: “I see your point and I've noted your exceptional exceptions. I need us to keep building our Oak boat, though. Are you willing to put in a little extra time on your Pine idea, just enough so I can make a case to the group? And I'm willing to help you after hours if you're willing to work after hours with me.”

        True leaders keep their options open (my experience). They also honor dissent and recognize when their vested interest is getting in the way of what's best for their community/environment. My own best learning experiences have come from mentors who disagreed with my ideas and were willing to work with me until either I was convinced my ideas weren't worthy or they were convinced their disagreement was in error.
        I needed their experience, they needed my “third eye blind” approach. We both learned and grew. As one of my mentors said to me when I gave him the proofs of Evolution Technology, “This is so obvious only you could have come up with it.”
        The fact is that not all solutions are obvious to those who refuse to look. People have to be willing to look, to take the necessary steps, to fail and to succeed.
        (Rene will no doubt claim this is one of my more mystical responses…)

          1. Yes.

            Kuhn. Axioms change. Yes.

            Alright – so somebody comes by with Pine. Maybe there's some new benefit of Pine that we don't know about. Like an oil. A leader should be willing to learn.

            If the merits of Pine versus Oak have been debated to death, and there is a whole raft (haha) of literature on it, they should be pointed to the literature – “Have you thought of all this?” – and they say “Yes, and the oil is the key”. Then woot. Awesome.

            A Scientific Revolution ensues and we all know what that's like. (Or will).

            Problem is, so many people come with their Pine without any regard for what has already been debated. I doubt that they're pointed to the literature very often. And the cycle goes on and on, without any progress.

            Your main thrust is a point well taken. Leadership would solve my central concern around debates that really don't help the situation.

            We're in need of a very specific type of leader (and you certainly opened up a can of whoop-ass here).

            Incorporation of all points of view – yes. I'll accept that.

            How to ensure there's enough fertilizer for 100 flowers to bloom?

          2. Howdy, Chris,
            My solution would be to first determine what kinds of flowers could bloom with reasonable surety given the amount of fertilizer I know I have. Next, decide if those are the kinds of flowers I want to bloom. If yes, start planting and fertilizing. If no, go find more fertilizer, decide if I really want to be a gardener, accept being able to bloom fewer than or more than 100 different flowers this season, determine if what I know I can bloom is acceptable if I get more fertilizer next season…

            I worked on dairy farms as a kid through my early college life. Never had to worry about not having enough fertilizer, only knowing where to spread it.

  10. (this is the comment I was going to leave on Stephane Hamel's Immeria Blog)
    Howdy again,
    Adding to a comment made by Hugh Gage re analyst despondency: two of the quotes I used in Part 2 bear on this. One was “I don't believe in WA anymore…” because clients haven't changed; the other was that things haven't changed since 2003.
    These were exemplar quotes to me (the sentiment was profuse, those quotes captured it wonderfully) because I've been asking (since 2001) “What does web analytics do? Please show me repeatable examples of how web analytics a) saved money, b) increased profits, c) benefited morale, d) caused a rethink/redo of business practices, …” (over time I included search in my questioning. With due respect to all WA folk, the question is much easier to answer in SA (search analytics)). Basically I was looking for some kind of “if x then y”. If such didn't exist then I was at a loss as to why it was done.
    I mean, I can understand playing with numbers for fun. That I love to do. I also appreciate that my time, my company's time and my clients' time are precious. Experimenting for fun and profit I do either on my own time or with the client's express written permission and a mutual understanding of what and where the experimentation may yield/lead. Perhaps the latter is an understood truth in the online analytics world, I don't know.
    I accept that WA has benefited some companies; I only offer that it hasn't been demonstrated as a repeatable phenomenon. This is based on my experiences. I won't say all, but definitely most of the clients we've had over the years say something like “I get all these reports but nothing that tells me what to do with them”. I also recognize and usually explain that the reports they're getting may lead to some actions, but the methodologies involved in determining that path require very large numbers (sociologic large numbers, as in “enough people so that a behavioral change is propagated throughout that population and becomes dominant as a behavioral standard within that population”) to produce reliably actionable results. I have no idea if others would agree with me that “you need large numbers to produce reliably actionable results” is an accurate description of taking traditional analytics tools/methods and getting that “if x then y” product I mentioned earlier, and I'm quite open to being corrected.
    But that non-repeatability — especially when best efforts are applied — is psycho-emotionally debilitating.
    Perhaps the industry is too immature (a nod to Stephane's WAMM and any similar models I've not encountered yet, yes) to have repeatable methodologies surface.
    My opinion, though, is that such methodologies a) will appear, b) make themselves known and c) become widely adopted.
    They will appear and become known (the “history of technology” model demands it) and once recognized they will become adopted (ditto). Right now when I interview online analysts I learn that the majority of their engagements involve application/implementation methodologies, not actionable outcomes. In my research actionable outcomes were either mentioned in the negative or on the defensive.
    As long as the best training is training on how to guess best, large companies will employ industry top feeders and industry striations will be the rule. The only way to excel will be to associate oneself with a top feeder (as mentioned in Part 2 in the Management pays attention to what they paid for, not what you tell them. section). Once training becomes logic and method based, analysts will need to learn how to market themselves because everyone will be able to deliver the same results (+/- 2dB).
    Consistency of results across all analysts carries its own demons; how will analysts differentiate themselves when all analytics produces comparable results?
    The discussion will perforce move to who can repeatably turn their results into actionable outcomes. Analysts will perforce need to speak the C-level language. It will be the only way to translate what they know into something management can do.
    Just my thoughts, folks.
    And loving the conversation, Stephane.

    1. Ms. Thayer,
      thanks for reading parts 1 and 2 and posting a follow up on your Trending Upward blog. I posted a comment there agreeing that analytics has a bright future and that that future will be determined by whatever environmental/ecologic pressures come into play.
      Thanks again for reading and posting.

  11. I've used web analytics and search to target sites for affiliate marketing but never had any luck. Maybe the tools or people weren't up to the task? Have your readers had any luck using online tools for this?

    1. Somaie,
      thanks for reading and reaching out. I have no knowledge or experience in affiliate marketing. Perhaps others can offer you more.

  12. […] The Unfulfilled Promise of Online Analytics, Part 2 The continuation of yesterday’s post: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” Related posts:Twitter chatter: The Unfulfilled Promise of Online Analytics, Part 1 The Unfulfilled Promise of Online Analytics, Part 1 Buckle up… […]

Comments are closed.