From TheFutureOf (13 Mar 09): The Analytics Ecology

I'm going to build a bit on what I wrote in From TheFutureOf (5 Jan 09): Omniture and Google Considered Environmentally.

I'm very curious to know where the whole concept of analytics is going to go given three factors: the current economy, the emerging Web X.0 technologies and the increasing requirement that any analytics be multichannel. One thing I didn't address in my response listed above was the concept of keystone species. A keystone species is critical to any ecological system because it directly affects the outcome of the entire ecosystem and helps shape that system. Without it everything collapses. Keystone species aren't necessarily the foundation of an ecosystem, merely close to it.

So what are the keystone species in the current analytics ecology? More importantly, is that species going to go away in the present economy? If so, will the ecology collapse or will something else come in to take its place? And if some other species takes the current keystone species' place, what will the resulting ecology and keystone species be like?

This line of questioning isn't arbitrary. I've been asked to co-author a whitepaper about NextStage's Evolution Technology (ET) because it “…is a technology powerful enough to fund a new industry and this is worth a WP that will still be relevant in years time. … the idea is to write a White Paper that will explore the possibilities of the technology and open the door to the future.” (these aren't my words, I'm quoting)

The request to co-author such a whitepaper is, to me, similar to Rene's “What if all we had was Omniture and Google Analytics?” It's really asking if ET could be a keystone species in an emerging ecology.

Interesting question, that. Biologic systems are inherently unstable – they have to be because instability creates conditions that determine what will survive. Biologic stabilities only occur in discrete “phase spaces” clustered around some “point attractors” or within some well-defined “limit cycle”.

<My thanks to Stephane Hamel for helping me to clarify the following>

Web analytics' phase space is the domain of (whatever) in which what we call “wa” exists. An attractor is a goal or a determinant, something we believe web analytics is providing us (note that it might not actually be doing so, only that we believe it is. Misbeliefs are primary reasons business ecologies collapse. Anybody checked their stock portfolio recently?). A limit cycle is the lifetime of a system. Web analytics' limit cycle was defined, and its demise (probably unrecognized at the time) foretold, when the WAA decided to define some standards. The moment you point at something and call it “A” you can't point at something different and call it “A” as well. The most you can hope for is to make a comparison.
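For readers who'd like the dynamical-systems vocabulary made concrete, here's a minimal sketch using the logistic map, a standard textbook system with no connection to web analytics itself: the very same rule settles onto a single point attractor at one parameter value and a period-2 limit cycle at another.

```python
# Illustration of the dynamical-systems terms above, using the classic
# logistic map x -> r * x * (1 - x). Purely a textbook example.

def logistic_orbit(r, x0=0.2, settle=500, keep=8):
    """Iterate the logistic map, discard transients, return long-run states."""
    x = x0
    for _ in range(settle):          # let transients die out
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.append(round(x, 4))
    return sorted(set(seen))

# r = 2.8: the orbit collapses onto a single point attractor (~0.6429).
print(logistic_orbit(2.8))

# r = 3.2: the orbit settles into a period-2 limit cycle (~0.5130, ~0.7995).
print(logistic_orbit(3.2))
```

Same system, different parameter, qualitatively different long-run behavior; that's the sense in which a technology change can shift a phase space and end a limit cycle.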

The challenge, of course, is that the definitions were based on a declining ecology. Technologies change the phase space hence the limit cycle comes to an end. Do I think the decline will happen tomorrow or the next day? Heavens no. It'll continue for a while, I'm sure. But this blog is about “the future of…”, after all.

<Thanks, Stephane>

Market ecologies and systems share those traits with the exception that market ecologies are usually artificially maintained.

And any systems ecologist knows that last is a dangerous and potentially erroneous statement, because the artificial maintenance becomes part of the ecological system; thus the current players and what they do to maintain their positions, market share, etc., are all part of the ecology, hence calculable.

Right now I'm guessing Google Analytics is the keystone species in the current ecology (this is where I again emphasize that I know nothing about web analytics). If Google Analytics went away an incredible vacuum would be created. But the vacuum wouldn't simply be market share, it would also be market space.

The removal of Google Analytics means the range it filled also collapses. That would be followed by an explosion of biodiversity as everybody and their kin attempts to alleviate that vacuum and fill that range. This explosion also means evolution would go into overdrive, testing out new lifeforms, creating them and mutating existing lifeforms, because the last successful lifeform ruined the ecosystem when it left. This means the entire ecosystem moves left (metaphorically speaking). The explosion of biodiversity literally causes the remaining lifeforms to shift their positions in the food-web, kind of like the fact that the Sahara was once a great lake. What survives is what best adapts.

Phase space, point attractor and limit cycle can be fairly well defined for the present analytics ecology. I've been very public in my thinking that web analytics (as I currently recognize it) is based on false attractors and that eventually these would become obvious. Someone far wiser than I wrote “…continuing to perceive the world through glasses that distort relations and priorities, actions are misguided, interpretations are obliged to maintain unwitting fictions, and emotions are inappropriately deployed. (Chickens obliged to wear prismatic lenses always peck to one side of the seed they are aiming for. When grain is plentiful, they nevertheless hit food often enough to survive, and may even, if I may be anthropomorphic, remain unaware that anything is wrong.)”

It's that last piece, “When grain is plentiful, …” that brought much of this together for me. The grain is no longer plentiful in the current ecology (the economy is shrinking). Further, the range (in a food-web sense) is shifting due to emerging Web X.0 technologies. Multichannel requirements take the place of changing phase spaces. The general taxa of Rene's original question and the comments to it demonstrate this, me thinks. What's left is the limit cycle and that's being defined by an increasingly attention-challenged culture.

Please take my following statement as intended, an encouragement to explore and traverse deeper: web analytics has always impressed me as only looking at what it can easily see. If there's something it wants that it can't easily see, it creates something and puts it in the place of what it wants. The flurry over engagement was (to me) just such a time. I think it's wonderful that that word is being used and I'm happy for people in the analytics community who are charging for it and making money at it. I also recognize one could just as easily have called it “fred” or “tulip”, assigned the desired meaning to it and gone on ahead. I know “fred” and “tulip” don't have the cachet of “engagement” but what the heck. (My preference has always been for uniquely valued metrics from which others can be built (as I've written elsewhere, I'm a second order tool maker).)

Nor am I suggesting I or NextStage has the answers. I wouldn't be asking these questions if I had the answers. People who know me know I bore rapidly once a problem is solved and quickly move on to the next challenge.

Stephane Hamel agreed with me. “I think you are right,” he wrote. “The current state of wa, as well as the attractors, are bound to change. The current cycle is about to end because of the economy, because the web is part of a larger whole that includes many different channels (lots of them offline), because of data integration, and ultimately, because of the changing consumer behavior.”

I'm very curious to know what people think. Truly.

<Also many thanks to Aurelie Pols for reading a draft and commenting. I hope she comments here, as well.>

From TheFutureOf (5 Jan 09): Omniture and Google Considered Environmentally

As usual, I'll respond to this question through some very different lenses. Questions like “What if all we had was Omniture and Google Analytics?” are (to me) basically questions of systems ecology, adaptive and evolutionary biology, environmental modeling, things like that. My covert suggestion is that rational actors don't exist (duh!) and that the rules of adaptive and evolutionary biology are far better at determining how markets will behave than traditional methods.

So when asked “What if all we had was Omniture and Google Analytics?” I wonder what kind of environment would be necessary for such to be, what kind of evolutionary path and ecologies had to come into and go out of existence in order for such a system to thrive.

For people with an interest, I've put together a bibliography that I used in putting together my response. You can find it at Partial Bibliography for Rene Dechamps' What if all we had was Omniture and Google Analytics?. My responses weren't done on the fly (at least that's not how I did it; it took me better than a month of reading and researching. But oh, what fun it was!).

Organisms don't evolve in absentia. The number of factors involved in any species becoming dominant (especially as dominant as hypothesized here) is just on this side of countability. One needs to investigate things as diverse as

  • studying the present environment to understand the future environment
  • scaling issues – what is required of the organism(s) under study to support that future environment?
  • are there historical analogues?
  • can the “future” organism support the energy costs required to exist in that future environment? Or do bio- and thermo-dynamics stop the scenario from happening? Then what will the organism do to achieve the scenario while working within bio- and thermo-dynamic norms?
  • can a future environment support the organism(s) under study?
  • studying the evolutionary record – how do changes in this landscape occur now and will this change methodology continue? For how long? Why will it continue/not continue? What would take its place?

Other questions, such as how this environment is organized, are also critically important. These multi- and inter-disciplinary approaches are (I believe) of greater and greater necessity as system complexity increases.

We can simplify the problem somewhat based on the conversation that has taken place thus far.

Rene (Wednesday, February 12th, 2008 at 3:27 am): What if all we had at our disposal was Google Analytics as a “basic” free tool and Omniture, the “enterprise” platform, serving the high-end of the market?

Then these two are predators. There are always fewer predators than there are prey. If there are only these two tools for all users, these tools/companies serve the roles of predators in the system with all users being their prey.

Rene (Wednesday, February 12th, 2008 at 3:27 am): How would this landscape affect consultants and practitioners? Would it be a good thing?

I think the only path that would allow them to survive in the ecology defined would be as scavengers on whatever the top predators didn't consume. From Rene's previous statement there are only two predators in the environment, therefore consultants and practitioners are not predators and not offering solutions to users. Working on the side of users means they are prey and will eventually be consumed by the top predators (note that this analogy means consultants and practitioners would probably be hired by, work for or work in conjunction with the top predators; this means they are definitely scavengers). The only other prey-based roles would be as parasite or symbiont. Either case means consultants and practitioners are manipulating the evolution of prey species (clients) to better benefit themselves.

So consultants and practitioners evolve over time. Would it be a good thing? Depends on how you define “good”. This scenario would be unsustainable in any environment or ecology. The question then becomes “How long would such a scenario exist in any given environment or ecology?” That question is very easy to answer — it would last as long as the environment and ecological systems could maintain balance. Hence it becomes mandatory for these organisms to create some kind of balance if they are to survive.

There's an upper limit on how much prey an environment can sustain. The mega-predators' prey must prey on something themselves. That would be us, consumers (of web content in whatever form and for whatever purpose). Therefore the clients are in competition for us, the environment can only sustain a countably finite number of us, therefore the number of clients is both countably and recognizably finite, therefore the ultimate size of GA and Omniture is finite.
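The bounded-prey, bounded-predator argument above can be sketched with a toy predator-prey model. This is a generic Lotka-Volterra variant with logistic prey growth; every parameter value is invented for illustration, not fitted to any market.

```python
# Toy predator-prey model with a finite carrying capacity on the prey,
# sketching the "ultimate size is finite" claim. In the post's metaphor
# prey plays the clients/users role and pred the vendor role; all
# parameters here are illustrative, not a model of any real market.

def simulate(steps=20000, dt=0.01, K=100.0):
    prey, pred = 50.0, 10.0
    r, a, b, m = 1.0, 0.02, 0.01, 0.2   # growth, predation, conversion, mortality
    peak_pred = pred
    for _ in range(steps):
        dprey = r * prey * (1 - prey / K) - a * prey * pred
        dpred = b * prey * pred - m * pred
        prey += dprey * dt
        pred += dpred * dt
        peak_pred = max(peak_pred, pred)
    return prey, pred, peak_pred

prey, pred, peak = simulate()
# The predator population stays bounded: it can never outgrow what a
# carrying-capacity-limited prey base supports.
print(f"final prey {prey:.1f}, final predators {pred:.1f}, peak {peak:.1f}")
```

With these numbers the system damps toward a stable equilibrium (prey near m/b, predators set by what the capped prey base can feed), which is the "finite ultimate size" point in miniature.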

This is one way balance occurs but said balance eventually fails due to a hysteresis loop occurring in the ecology (think of the acorn-mouse-deertick-wolf/coyote-deer cycle). So at this point the symbionts and parasites make themselves known in the diorama and push the cascade in one direction or another. They will push the cascade along the gradient of least resistance (that which benefits them the most).

Unlike oil companies, automobile manufacturers, investment houses, mortgage brokerages, etc., most parasites and definitely all symbionts know enough not to kill their hosts (although viruses and other parasites may do this at the last stage). Parasites direct the host organism to become better hosts, sometimes killing the host when the parasite moves on. Symbionts engage in a bio-molecular pas de deux with their hosts that benefits both.

What parasites can do that symbionts can't is cycle through host species. Some worms, for example, have life cycles that take them from insect to fish to mammal and back, each host species contributing some necessary environmental elements for the growth and development of the parasite.

Therefore consultants and practitioners that move seamlessly between client and GA or Omniture are more likely playing the role of parasite. The term “parasite” may have negative connotations, but not in biology and ecology (and definitely not in parasitology). Theories developed over the past 10-20 years indicate that parasitic activity has been a primary evolutionary force through geologic time.

Even so, Google can't sustain itself in any ecology without diversifying itself via either subspeciation or co-evolution. Omniture is a mega-predator going after mega-fauna in this model. Google is also defined as a mega-predator but going after micro-fauna, not a good place to be, really, as there will always be smaller and smaller fauna available to those willing to invest the energy into harvesting them.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Or would it be the end of analytics as we know it today?

Probably so, as any such specialization means some amazing things have happened to the environment and ecology. For two species to become the only species providing an ecological function to the environment, those two species have to be phenomenally unspecialized (I'll let actual users of these tools determine that). One specializes in mega-fauna (enterprise), the other in what's left. The enterprise predator has to become the more specialized hunter because the number of prey is smaller. Think of this as the polar bear first searching for then sitting by the ice hole for hours on end waiting for the one seal to surface. Expenditure of resources followed by high conservation of resources followed by maximal expenditure of resources followed by a long rest period to replenish resources.

The other tool is more like a blue whale sieving krill when it surfaces. It doesn't need to do much beyond what it's doing already — swimming, surfacing, some herding but only as a function of swimming and surfacing. But it does need to be aware that it has to keep moving because it will deplete the krill if it stays in one place for too long. All it needs to do is keep swimming and surfacing. Eventually it'll run into more krill, therefore the GA predator is feeding because the act of feeding is necessary, yes, but primarily because feeding drives some other, also evolutionarily desired activity. GA doesn't care about analytics for analytics' sake, it cares about analytics because analytics helps it achieve some other goal.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Training would be easier for consultants such as ourselves as we would have fewer tools to support and understand.

This is the scavenger model. Consultants, etc., who survive based on what the predators leave, ignore or excrete are scavengers. This means the consultants thriving in Omniture's wake are biologically (ie, business plans and goals) quite different from those thriving in Google's wake.

This also indicates a possible ecological niche in which a scavenger species could evolve into a highly specific predator — there will be prey that are too small for Omniture yet too large for Google (as we've defined them here). Omniture and Google will specialize in very different ways because their prey are very different (again, as we've defined them here) and basic co-evolution principles indicate that the prey species will also specialize to the predator. The fallout from this is that as each mega-predator and prey species co-evolves, larger and larger ecological gaps appear in the food-web due to mutation, etc. Eventually these gaps become large enough that mid-level predators evolve to exploit the vulnerabilities in that new niche.

The short, WA way of saying the above is “Clients will make demands that neither Omniture nor Google can easily meet, or new clients will appear that have demands beyond the scope of both Omniture and Google, and others will develop tools to address those demands.”

At some point this niche will either become large enough that the mega-predators start to feed on it (by accident or by intent), or the mid-level predators become large enough that they begin invading the mega-predators' territory. When this occurs, and depending on its duration, evolution goes into overdrive and there's a relatively brief explosion of new organisms (think Cambrian Explosion) until ecological balance is once again achieved. Then it's lather-rinse-repeat all over again.

Rene (Wednesday, February 12th, 2008 at 3:27 am): As for many industries, a duopoly generally leads to a lack of innovation.

I doubt a lack of innovation could occur. At some point these two predators would start exhausting their food supplies and would start encroaching on each other's territories and prey species, or the prey species would start evolving better defenses to the predators (ie “requires solutions neither GA nor Omniture can provide or address”). Either one would force evolutionary changes all around.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Certainly if there is collusion at hand and as GA's pricing model is different from Omniture's one, they might have a shared interest in locking the market between their solutions. After all, competition is good. Just take a look at how vendors have been competing these past years to release more powerful tools and better functionalities to address the complexity of Web Analytics

This is an example of evolutionary principles. Other predators come into the environment, assuming and establishing ecological niches. Pretty much what I wrote above.

Rene (Wednesday, February 12th, 2008 at 3:27 am): If Omniture would be the only enterprise solution, prices would remain high while I strongly believe that WA tools will more and more becoming a commodity, putting downward pressure on prices. Don't forget that a tool is just that: a tool and that you need people and processes in order to use them correctly, which are the most important factors in a WA project. We have customers doing great things with Google Analytics and I've seen very poor uses of expensive WA tools. Look also at Office suites, currently you could say that you have two main options: Microsoft and Star Office; Microsoft still sells their software at a very high price and they make margins of over 70%! If there was a real competition I bet that prices would be lower;

Again, a demonstration of niches coming into existence. Rene's statement that “…you need people and processes in order to use them correctly…” is an example of scavengers evolving into mid-level predators.

Rene (Wednesday, February 12th, 2008 at 3:27 am): Having just Omniture and Google Analytics wouldn't/couldn't suit every need. Not all websites are alike and we see it already today that a single tool doesn't fit all. Take for example Coremetrics that focuses on retailers and seems to be doing a great job regarding this vertical. Look also at Unica that allows big corporations to integrate easily WA to Campaign management.

Another example of what I wrote above.

Rene (Wednesday, February 12th, 2008 at 3:27 am): My opinion regarding this question is that it wouldn't be good for the industry if we ended up with just 2 products (I've taken Omniture and Google Analytics as they are the two most important tools nowadays, but it could apply to any other). As I mentioned tools are just part of the equation, an essential but not an important part.

Not within my ability to determine good and bad, sorry. I can only identify environmental, ecological and evolutionary principles at work and predict outcomes based on them. 'Lo, that I were a web analyst…

Rene (Wednesday, February 12th, 2008 at 3:27 am): How would you see yourself in this scenario?

That is an interesting question. Hmm… As NextStage doesn't offer the same products/services as WA does, we're not in the same ecology; not predator, prey or scavenger. Some aspects of NSE being in this environment, and a nod to our operating principles, indicate we serve the function of symbiont (a discussion of personal philosophies and metaphysics would quickly confirm this, me thinks). However, NSE could be deemed a mid-level predator by mega-predators at some point and would probably be consumed by them, as the market has already indicated there's an audience for NSE products and services and that audience is growing (ie, new prey or a new niche is evolving and invading the ecosystem). Since NSE got started other mid-level predators have come into the environment and, while they are currently more profitable than NSE, we do have that one great advantage all others lack: we (as a company) are specifically designed for phenomenally rapid evolution, kind of blending the best evolutionary benefits of viral, bacterial and herd species, co-opting different disciplines to respond to client requests (this response is an example of such). Our technology is both a core and a base technology, meaning NSE can rapidly adapt itself to whatever environment it finds itself in. Kind of like a fish leaping out of the water, sprouting wings and feathers and learning to fly before it dives back in again. Also, NSE can utilize resources from other, even alien, environments in order to survive in a given ecosystem until environmental variables change enough for NSE to thrive there.

That's a NextStageish way of offering “Remember all those ELE (extinction-level event) things that happen periodically? We're the species that survives them because we can adapt and breed faster than most.”

Readers familiar with r/K Selection Theory will recognize NextStage as opportunistic tending towards equilibrium over time in a given market because once we invade an environment we can rapidly generate offspring highly adapted to that environment.
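A minimal sketch of that r/K contrast, assuming nothing but textbook logistic growth with invented parameters: after a disturbance knocks both populations down to the same low level, the r-strategist (high intrinsic growth rate) reclaims the carrying capacity far faster than the K-strategist.

```python
# Toy r vs K comparison via discrete logistic growth. Both "species"
# crash to 10 individuals; rates and the shared carrying capacity are
# invented for illustration only.

def logistic_growth(pop, rate, capacity, steps):
    history = [pop]
    for _ in range(steps):
        pop += rate * pop * (1 - pop / capacity)
        history.append(pop)
    return history

K_CAP = 1000.0
r_strategist = logistic_growth(pop=10.0, rate=0.9, capacity=K_CAP, steps=60)
k_strategist = logistic_growth(pop=10.0, rate=0.1, capacity=K_CAP, steps=60)

# Steps needed to recover to half of carrying capacity after the crash:
r_recovery = next(i for i, p in enumerate(r_strategist) if p >= K_CAP / 2)
k_recovery = next(i for i, p in enumerate(k_strategist) if p >= K_CAP / 2)
print(f"r-strategist reaches K/2 in {r_recovery} steps, "
      f"K-strategist in {k_recovery}")
```

The gap between the two recovery times is the "survives the ELE by breeding faster" claim in one picture.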

Rene (Wednesday, February 12th, 2008 at 3:27 am): Do you want to see a two vendor market, kind of like Windows versus Apple, or do you like the diversity of options we have before us today?

This is another example of what I've written above. Other vendors (scavengers, mid-level predators, etc.) exist, just not in large or obvious enough numbers to be recognized as such, nor of becoming threats to the existing mega-predators' food supply.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): The Pareto Principle, based on Pareto's analysis in 1906 that 80% of Italy's income went to 20% of the population, suggests that 80% of the revenue in the field will go to 20% of the industry.

Dr. Geertz, Pareto's Principle also has applications in evolutionary analysis, often appearing in environmental economics, food-web ratios and relationships, and the like. The application is that 80% of an environment's resources go to 20% of the species in that environment. The numbers aren't exact but this is a reasonable definition of the mega-predator models I discuss above.
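For the curious, the 80/20 pattern is easy to sanity-check with a synthetic Pareto-distributed “resource” sample. The shape parameter below is the textbook value that lands near the classic split; the whole thing is illustrative, not a model of any real ecology or market.

```python
# Quick sanity check of the 80/20 pattern: draw "resources" from a
# Pareto distribution and measure the share held by the top 20%.
import random

random.seed(42)
alpha = 1.16   # shape value that yields roughly the classic 80/20 split
resources = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                   reverse=True)

top_20_pct = resources[: len(resources) // 5]
share = sum(top_20_pct) / sum(resources)
print(f"top 20% of 'species' hold {share:.0%} of the resources")
```

Because the distribution is heavy-tailed the sample share bounces around the theoretical 80%, which is itself a fair summary of "the numbers aren't exact."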

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): That 20% may come from a small set of businesses (e.g., automobile manufacturers) or a large set (residential construction), …

This is a definition of mega-fauna characteristics.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): How does economics decide how many companies occupy the 20% plateau? It comes down to how scalable and transportable the leading businesses are and how large are the barriers to market.

A definition of the environment/ecology.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): The investment involved starting up an auto manufacturing business is astronomical, operating as a barrier to market that holds down the number of players.

I'd need to think on this one a bit. (minutes tick away). Okay. Could a totally new species appear that is a direct challenger to an established mega-predator? No. Could an existing species evolve by exploiting an ecological niche to a point where it became a challenge to an existing mega-predator? Yes. A slight modification to what you're suggesting, me thinks.

Joseph James Geertz (Wednesday, February 12th, 2008 at 7:12 pm): Web analytics, to the extent it is a personalized service, …

That's an interesting thought, personalized service. One of the earlier implementations of NextStage's ET was to “personalize” web pages as they loaded into people's browsers so that the presentation most closely matched the individual visitor's psycho-emotive and -cognitive abilities. Basically it was a plug-in type of thing. Lots of fun, that.

Palani Balasundaram (Thursday, February 13th, 2008 at 7:20 am): Since the question is what if we had just two players, Omniture and GA, I would like to approach this discussion from the website's point of view.
It would make the work simpler for websites to adapt to web analytics. For organizations, to understand analytics they may try out with the free Google Analytics and once the belief is established they may go up the value chain and opt for Omniture. I do agree that migration would be a lot easier and there would be some sort of standardization.
I also share the view that the competition is one factor that helps in breeding better products and it also helps in pushing the prices down.
What would happen to the world of “Mobile Analytics”?

Nicely stated and nicely done. This is an example of co-evolution of predator and prey species, the K aspect of the r/K Selection Theory, until an equilibrium is established in the ecosystem.

The reference to Mobile Analytics and competition are examples of mutation, genetic drift, and the r aspect of r/K Selection Theory.

The challenge to the above comes from what I described above; prey can only evolve out of an ecology until the predator takes notice. If there are only two predators in the environment then prey will evolve to not be prey, predators will evolve to exploit the new adaptations, …

Eric Peterson (Thursday, February 13th, 2008 at 3:32 pm): …vendors around the world are popping up with innovation in measuring widgets, social networks, video, mobile, RIAs, engagement, etc.

Eric, forgive me if I'm mistaken, but don't you have a background in bio- or environmental sciences? What I'm describing is probably very obvious and simplistic to you, so my apologies. What you offer here is an example of what I wrote above about scavengers and other species evolving into mid-level predators, etc., to exploit gaps and niches, yes?

Eric Peterson (Thursday, February 13th, 2008 at 3:32 pm): And again, it's not like any one vendor is clearly leading the way into (say this with an ominous voice) “THE FUTURE OF WEB ANALYTICS”

Hey, I'm doing my best here…

Judah (Friday, February 14th, 2008 at 12:01 am): If the duopoly didn't provide capabilities for a certain type of measurement I needed to guide decision-making, then I wouldn't be able to make data-driven decisions until one of the duopolists decided to accommodate my need. The reason I have guided some businesses away from certain vendors (and some towards) is because they were deficient in features or capabilities that the business believed they needed to “win.” Competition catalyzes (no pun intended) innovation via differentiation. Competition leads to the genesis (pun intended :) of features and capabilities that answer market demand. In other words, I think the market should drive the product, not the product driving the market, which is what I fear in the scenario of duopoly (or worse yet monopoly).

Well stated, Judah. The former is an example of gaps in the food-web appearing, the latter an example of co-evolution. Depending on how often the latter occurs it could also be step-wise evolution wherein the prey rapidly evolves into a non-prey species due to some cataclysmic change in the environment. The predator species then either evolves or (because it is over-specialized) becomes extinct because it (being a K) can't evolve fast enough.

Ian Thomas (Friday, February 14th, 2008 at 3:00 pm): See, in the future, there will be more places you can do web analytics, not fewer. I made a prediction some years ago which I still stand by, which is that eventually the 'stand-alone' web analytics tools that we currently know and love will be absorbed into (or absorb, in some cases) adjacent technologies and tools, until there's no such thing as a “web analytics vendor”.

I think this is one of those ELE events I mentioned earlier.

Ian Thomas (Friday, February 14th, 2008 at 3:00 pm): …what if in the future there were no web analytics vendors, but web analytics was everywhere? What would the consultant community do then? Discuss.

Excellent question, me thinks. What is being described is a period change, kind of like the Ediacaran to Cambrian, with a large-scale extinction of the previous period's biota followed by an explosion of new life forms (ie, the Cambrian Explosion as mentioned earlier). What also occurs is a paleontological record of how surviving species became survivors. That would be an interesting study.

Denise Eisner (Saturday, February 15th, 2008 at 5:19 pm): If the WA space were to be dominated by the likes of GA and others that depend on cookies to track users, sites that prohibit cookies from a privacy standpoint would be out of luck. Websites that adhere to the Government of Canada's standards for example are cautioned against implementation of persistent cookies due to stringent privacy laws. This has all but stopped the use of GA for federal government web sites here in Canada.

Did I ever mention that NSE's ET doesn't use persistent cookies?

Rene (Sunday, February 16th, 2008 at 4:27 am): …I don't see large corporations using several 'little' tools in order to get answers. Large companies need an integrated tool that will allow them to deploy on a global scale…

Co-evolution at work.

Rene (Sunday, February 16th, 2008 at 4:27 am): @Joseph James, while I have known the pareto principle since I was a kid I don't see the relationship with the 10/90 or the 10/20/70 rule. I don't think that this industry can be 'measured' in terms of revenue as we have major players that have changed the rules.

My line of thought causes a rephrasing of your terms; “revenue” == “environmental resources”, “changed the rules” == “modified the ecology”. These redefinitions allow environmental economics to apply.

Daniel Shields (Sunday, February 16th, 2008 at 11:48 pm): I am not exactly sold on the idea that Google is 'competing' with Omniture. As I see it, Google has provided a means to compete by measurement in the market for web entities who cannot afford the pricetag of a commercial solution. In that regard, I think that they are complementary solutions.

See above my explanation of two mega-predators and how their prey differs.

Daniel Shields (Sunday, February 16th, 2008 at 11:48 pm): If speculation pans out to anything, Google has its eyes on cellular bandwidth.

A predator evolving to exploit an evolved prey species.

Anil Batra (Thursday, February 20th, 2008 at 6:24 pm): In three years there will be no Web Analytics vendor, but Web Analytics will be everywhere – I completely agree that Web Analytics will be everywhere in the next few years. This is already happening, as you mentioned, and you provided several examples. However, I disagree that there will be no Web Analytics Vendor. Microsoft, Google, Oracle, Atlas, Doubleclick etc. will (or already do) provide web analytics as an add-on to their products…

Excellent description of a transitory ecology.

Anil Batra (Thursday, February 20th, 2008 at 6:24 pm): …but there will still be a need for one web analytics product you can rely on to make strategic decisions.

Another fascinating concept wherein the model I use can either be of great benefit or deemed invalid. I asked another NextStageologist for some help on this one. The model I'm using here depends on what “strategic” defines in a time-sense and specifically in a time-sense in this particular ecology. IE, what is deep-time in this environment? The model I'm using can predict with excellent accuracy (look at the historic record (god, it almost hurts to write that)) what the future ecology will be and how the environment will change to support that ecology. A strategy that is based on what you can predict (with great accuracy) will allow any organism to thrive. A strategy that is based on what is currently available won't allow an organism to survive.

It's borderline amusing that while any evolved species carries in its DNA a genetic record of every change it's been through and every environment it's been exposed to (this is how biologies perform trend-analytics), and while several companies use trend-analytics as part of their offerings, most companies seem incapable of applying that tool to their environment's deep-time to determine where they should be and what they should be doing in their future.

You've come very close to recognizing something I believe has been missing in the other comments — that the environment is one of the players in any ecology. No ecology can sustain itself (no balance can be achieved) if the environment is not willing/able to support it. Consider the current world economic situation and my point (and the use of this model) is demonstrated in full.

Anil Batra (Thursday, February 20th, 2008 at 6:24 pm): Can you imagine having 15 different web analytics solutions that all give you different numbers?

You mean like a roomful of economists?

Or how about my favorite joke regarding consultants: A consultant is someone who asks to borrow your watch when you ask them what's the correct time, then tells you the time according to your watch, presents you with a bill and a list of suggestions on how to make your watch more accurate.

From TheFutureOf (11 Nov 08): Responding to Steve Jackson's 16 Sept 08 6:52am comment

NextStage: Predictive Intelligence, Persuasion Engineering, Interactive Analytics and Behavioral Metrics (sorry, I don't have a copy of Steve's comment)

Pretty much all your questions are answered on our FAQs page, I believe. What isn't answered there has probably been answered in my presentations. What hasn't been answered in my presentations is probably best answered in a live conversation as blogversations do not easily allow for course corrections and new learnings to take place. And as most people know, I'm remarkably slow in these things.

What behavioral targeting does is simply target ads and offers based on your behavior.

Please define “behavior”. It gets used a lot in these discussions and I'm still sure I don't understand how it's being used.

…behavioral network…


…you will receive ads based on preferences you have pre-identified or clicks you have made…

Challenges with the above start with “pre-identified” and work their way down. Most of those preference selector tableaus reveal much more about their authors than they ever could about the people filling them in. Ditto “clicks”, tritto “have made”, …

In your scenario after the auditory stimulation they might be *thinking* about food and your offers are presumably designed around this potential situation.

I believe what I wrote was “they received auditory stimulation while browsing the blog during a pause in their browsing, after that auditory stimulation they started thinking about food”. The neural circuits that trigger for food are easily recognizable and differentiable from other biologic needs. There's no “might be” involved. The number of psycho-physiologic changes that occur when people are thinking about food are … well, a lot.

In the behavioral targeting world you wouldn't know that unless the behavior indicated it. (IE they went to a search engine and typed burger – in which case you could serve an advert)

We might be getting to the crux of the thing with the above statement. Even if there's nothing on a given page that deals with food, even if the reasons an individual came to a site have nothing to do with food, knowing that the individual is now being influenced by hunger allows for a much more precise response range in content provided. This is touched on in From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on "What are we measuring when we measure 'engagement'".

I know nothing about Future Now's offering. I know who they are, of course, and not much more.

We matched our Rich Personae (see InFocus Reports and Personae Mapping Tool) because a client asked for it.

So is your method to design websites based on this kind of principle?

Have you met Rene, our new CEO? This sounds like a question he can answer better than I. I can put you in touch, if you'd like. (wink wink, nudge, nudge)

Passion about your subject is required…

A friend of mine says he never knows where my ego is as I don't respond to much. I always laugh at that. I'm very passionate about kite flying, music, my family, …, not so much about much else. I'm passionate in my research but am always surprised when others express interest. I suppose I've also got it in my head (oh, my god, the pun that's in the making here) that when I respond passionately to something it's due to certain parts of my thalamic and amygdalic clusters responding to environmental signals blah blah blah.

So when I recognize I'm getting passionate (and hopefully before others do) I can begin examining why which leads to a deeper understanding of myself and (more often than not) those interacting with me. This increased understanding often leads to mutual understandings, which often leads to …

One camp didnt believe Engagement was a valid metric while the other camp did.

Again, a surprise to me based on a different metaphysic. If someone doesn't believe Engagement is a valid metric then don't use it. It's an odd thing, I guess. My training is such that if someone told me they were going to design a craft to get to the moon and power it by tying geese to it…well, I'd probably help them because I'd want to learn a) did they know something I didn't, b) what caused them to have this belief, c) … Then at some point if they ran into difficulties that caused a violation in their metaphysic I could offer “Have you tried eagles? They fly higher, you know…”


And I do remember that I often have to tell people “I can do or I can teach and we only have two minutes to get this done. Tell me which one you want.” And it still comes down to, “if you don't accept it, don't do it.” It's the inability to move from “this is wrong for me therefore it is wrong for you” that shakes me. I don't use smartphones and I certainly don't stop others from doing so, nor do I tell them they're fools for using them. “Does it make your life easier? Then good for you!” It's right up there with forcing a new student to use a concert reed on their oboe. Most often all that happens is you kill their desire to learn the oboe. I'd prefer to help someone learn their options and let them make their own decision than force my belief system on them.

The valid arguments on both sides of the fence added fuel to the fire.

Remind me to tell you about taking a master class with Neil Simon about presenting valid arguments on both sides of the fence.

AND!!!! I think I'm caught up on all comments and posts. Obviously it's time for me to start some new research, yes?

From TheFutureOf (10 Nov 08): Responding to Jim Novo's 9 Sept 08 1013am comment

Jim's Comment

What is the behavior we're segmenting here?

Visits, and likelihood to Visit again.

So we take some level of Activity – 50 visits – to define "Best Visitors" and then ask, How are we doing with best Visitors, what percent have Visited in the past 60 days? You can segment this by Campaigns, by Content, etc.

For example, a study like this is extremely effective when you launch new Product / Content Categories, or change Policies. What effect did the changes have on Best Visitors? If Best Visitors = 80% of monetization, then that's the right question to ask. The question that doesn't give you the right answer is to randomly survey all Visitors – the “majority”, who only contribute 20% of the monetization. They will overrule the Visitors who contribute 80% because there are fewer of these Best Visitors.

Make sense?

The tricky thing with Visitors – especially if you don't have an e-mail address – is you have to do something *now* when you see certain patterns that historically lead to dis-engagement, for example, does not sign up for RSS feed or newsletter.

If you can hook dis-engagement to something tangible like lack of feed behavior – those who don't subscribe tend not to come back – you might be able to do something in CMS to address them. The bare bones version of this approach is the old exit pop newsletter subscribe.

If you can't hook it to something tangible and have to rely on visit patterns, then you're into something like Joseph's NextStage.

Either way, using Recency allows you to act more quickly against the testing – you don't have to wait for a segment to PROVE they're not coming back, you can PREDICT it.

My Response

Excellently stated. I'm wondering what the max-min number of visitors necessary for "accuracy" is. If I'm reading you correctly then the min is somewhere on the other side of 15k visitors, correct? And the time variable is about six months?

Also, based on the concepts you're demonstrating, historical records play heavily in your model, correct? If so, the length of time required to create accurate models is…?

Just asking for better understanding. Also touching back to my previous about replicability and transferability. If you've covered this elsewhere then my apologies for not noting it and could you please provide some pointers? Thanks.

I ask because of your caveats “…when you launch new Product / Content Categories, or change Policies.”

I'll admit to some amusement at “If you can't hook it to something tangible and have to rely on visit patterns, then you're into something like Joseph's NextStage.” Tangible? How are we defining tangible?

And in the end I agree, PREDICTion is what it's all about.
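A minimal sketch of the Best-Visitor/Recency check Jim describes above, assuming a toy visitor log of my own invention (the field layout, names, dates and thresholds are illustrative, not Jim's actual data or method):

```python
from datetime import date, timedelta

# visitor id -> (total visits, date of most recent visit); layout is assumed
visitors = {
    "v1": (62, date(2009, 3, 1)),
    "v2": (55, date(2008, 11, 2)),
    "v3": (8,  date(2009, 3, 10)),
}

TODAY = date(2009, 3, 13)
BEST_THRESHOLD = 50                 # Jim's "50 visits" activity cut
RECENCY_WINDOW = timedelta(days=60) # "Visited in the past 60 days?"

# "Best Visitors" = those past the activity threshold
best = {v: last for v, (n, last) in visitors.items() if n >= BEST_THRESHOLD}

# Of those, who has been back inside the recency window?
recent_best = [v for v, last in best.items() if TODAY - last <= RECENCY_WINDOW]

pct = 100 * len(recent_best) / len(best)
print(f"{pct:.0f}% of Best Visitors seen in the past 60 days")
```

The same two lines can then be re-run per Campaign or Content segment, which is the point: one cheap question, asked of the coherent population that actually drives monetization.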

From TheFutureOf (10 Nov 08): Responding to Jim Novo's 1 Sept 08 5:46pm comment

Jim's Comment

Steve, try reading again now and see if it makes more sense / this model is something you can start with. Or, tell me why it's not!

From a “culture” perspective, the great thing about using the LifeCycle Grid approach is it's exactly the same format every time. Once someone (Marketing) understands it, then every time you apply it to a new segment there isn't any downtime.

So, for example, let's say you want to see what dis-engagement looks like for buyers who became new customers 6 months ago. This way they have spent some time on the books and have had a chance to let the LifeCycle play out a bit.

First you apply the Grid to all buyers

Then you apply the Grid to only jewelry buyers

Then you apply the Grid to only jewelry buyers who have purchased over $1000 past 6 months

Then you apply the Grid to only jewelry buyers who have purchased over $1000 past 6 months who buy only precious stones

Then you apply the Grid to only jewelry buyers who have purchased over $1000 past 6 months who buy only precious stones mounted in gold

The same model, again and again. Going through these iterations teaches you a lot about what dis-engagement looks like across different segments. You start to see the patterns, and create tests for cells in the Grid, and discover where the most profitable “re-engagement” timing and offer is for each segment.

Make any sense, or not really your question?

BTW, you probably will not see anything very useful using a demographic segmentation. This is a behavioral model and the segment variables need to be behavioral – spend, type of product, content visited, actions taken.

My Response

Jim, I appreciate the time and effort put into what you've shared. If I read you correctly, it's both replicable and transferable (it worked more than once and it worked in a wide diversity of environments, not under highly isolated conditions).

Thanks for that. I admit to getting a little tired of reading about solutions that are so isolated in their application that any change in initial or durational parameters causes wildly varying results.

Of course, if I've misread what you've written…
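Jim's successive narrowings read like composed filters over one buyer table, the same Grid applied to ever-smaller segments. A sketch, with entirely made-up records and field names (not his LifeCycle Grid implementation):

```python
# Toy buyer records; the fields mirror Jim's jewelry example
buyers = [
    {"category": "jewelry", "spend_6mo": 1500, "stones": "precious", "setting": "gold"},
    {"category": "jewelry", "spend_6mo": 400,  "stones": "semi",     "setting": "silver"},
    {"category": "books",   "spend_6mo": 90,   "stones": None,       "setting": None},
]

# Each pass is the same Grid applied to a narrower segment
passes = [
    lambda b: True,                        # all buyers
    lambda b: b["category"] == "jewelry",  # jewelry buyers only
    lambda b: b["spend_6mo"] > 1000,       # over $1000 in the past 6 months
    lambda b: b["stones"] == "precious",   # precious stones only
    lambda b: b["setting"] == "gold",      # mounted in gold
]

segment = buyers
for i, rule in enumerate(passes, 1):
    segment = [b for b in segment if rule(b)]
    print(f"pass {i}: {len(segment)} buyers")  # same model, again and again
```

The cultural payoff Jim names falls out of the shape of the code: only the filter changes between passes, so once Marketing understands one pass it understands them all.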

From TheFutureOf (9 Nov 08): Responding to Steve Jackson's 1 Sept 08 7:56am comment

Steve Jackson's Comment

Jim's post closely correlates to one of the points I made on Eric's forthcoming white paper on engagement. I wanted to know how the engagement formula/model would measure dis-engagement, which is where I, and I'm sure Jim, believe the real value of the RF measurement model is.

I may have mentioned before that we were using segments of "engaged" visits by our definition and an RF comparison to determine likelihood to purchase in one particular case in which we saved a client in excess of $2 million.

We defined engagement as x amount of clicks and duration on the website. We explained the notion to the client that the more “recent and frequent” a visitor was the more likely they were to buy. By showing that our engaged segment was more recent and frequent (as well as purchasing slightly more) than global traffic we proved to them that certain traffic driving elements were poorer than others.

This meant the client made direct savings (as I said in the 7 figure bracket) that they wouldn't have ordinarily considered in terms of where (what media) they spent their money.

Since seeing this in practice I've been exploring how to use recency and frequency to predict when a potential customer will “dis-engage” because as I mentioned the recency and frequency of the engaged segment was much higher than global traffic.

What I need to understand in more depth is where and how to define a segment that will predict when a visitor is becoming disengaged so I can then advise clients to do something about it.

At least more than the way I do now which is by simply comparing the differences in RF metrics.

I think the formula you're producing and working on might go some way toward that and combines a lot of the current thinking on the subject. I'm wondering if it can be done.

What would a dis-engagement formula look like?

My Response

(finally, some time to catch up on things)

Funny that I'm getting to this now. The answer (if I didn't explain before) is encapsulated in an email response I gave to Debbie Pascoe's "…when you talk about engagement and measuring it, what is it that should be measured? I don't mean the variables, etc., to collect specific data points. Rather, it has to do with motivation. Someone comes to the site and we want to know whether they are engaged; engaged to do what? What are the variables measuring? How will we know if/when we've answered the question?

"We're not all the same, so my engagement is your time-waster. I don't know if I've made the question clear enough for your response. If not, let me know and I can provide more clarity."

I should point out that I'm waiting for some other NextStagers to approve what I wrote in response to Debbie before posting it here.

Let me see if I can abridge somewhat…

If the demonstration(s) of (whatever is being defined as) engagement is known then any diminutions in and/or the absence of these demonstrations is an indication of dis-engagement (by the definition of engagement).

IE, the equations being used to determine whatever is being defined as “engagement” must also determine “dis-engagement” by the same definition. If not, then either the definition of engagement is in error or the equations of the determination are in error.
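That symmetry can be put in code terms. A hedged sketch with a placeholder scoring function (whatever equation actually defines "engagement" stands in for `engagement_score`; the numbers below are invented):

```python
def engagement_score(demonstrations):
    # Placeholder: however "engagement" is computed, the SAME function
    # must be used on both sides of the comparison
    return sum(demonstrations)

def is_disengaging(earlier, later, threshold=0.0):
    # Dis-engagement = a diminution (or absence) of the very
    # demonstrations that defined engagement in the first place
    return engagement_score(later) - engagement_score(earlier) < threshold

print(is_disengaging([3, 2, 2], [1, 0, 1]))  # True: the score fell from 7 to 2
print(is_disengaging([1, 1], [2, 2]))        # False: the score rose
```

If you find yourself needing a second, different scoring function to detect dis-engagement, that is the tell: either the definition of engagement is in error or the equations are.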

I described to Debbie Pascoe that there exists a set of universals that demonstrate engagement regardless of individual and regardless of situation. In other words, “Are these things happening?” Answer yes, the person's engaged. Answer no, the person's not engaged. Answer yes and here are the values of these universals, know how engaged the person is in the situation they're in. Doesn't matter if the person's on the web, in their car, flying a kite, watching tv, playing music, pick your situation, doesn't matter.

The next time we're all at a conference somewhere ask me to demonstrate “engagement” and I'll be happy to do so. It's a demonstration I've been doing since the late 1980s and it never fails to impress. It's also a great way to make money if people are willing to bet.

You also write “I think the formula you're producing and working on might go some way toward that and combines a lot of the current thinking on the subject I'm wondering if it can be done.

What would a dis-engagement formula look like?”

The formula I provided Eric in the whitepaper can determine dis-engagement in any number of ways. The simplest versions are max-min problems and step-wise analytics (something I noted in my edits to the paper). Slightly more interesting versions (to me) involve partial differentials and some resorting to geometric solutions, and all still using the formula in the whitepaper. In all cases these methods would allow for increasingly refined measurements of dis-engagement (as I believe you're defining it).

Cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet. (Fermat's famous margin note: "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain." Something else to talk about at a conference)

(please, somebody laugh. please)

From TheFutureOf (7 Nov 08): Debbie Pascoe asked me to pontificate on “What are we measuring when we measure 'engagement'?”

Debbie Pascoe emailed me a while back, asking "…when you talk about engagement and measuring it, what is it that should be measured? I don't mean the variables, etc., to collect specific data points. Rather, it has to do with motivation. Someone comes to the site and we want to know whether they are engaged; engaged to do what? What are the variables measuring? How will we know if/when we've answered the question?

"We're not all the same, so my engagement is your time-waster. I don't know if I've made the question clear enough for your response. If not, let me know and I can provide more clarity."

Whoa. Great question, Debbie. To answer with any attempt at clarity will take a bit of detail, so go get a coffee or whatever suits you, sit back, relax, take a deep breath and let's begin (and remember, I'm explaining based on the types of research NextStage does and may not be what you're asking about)…

Internal v External Objects

You're reading this blog post. This blog post is an external object to you. Perhaps it's on a screen or you've printed it out. This external object has certain properties that can be defined/recognized as system variables. These system variables are defined by the blog publishing system, your computer settings, which browser/reader you're using, … these things are obvious.

Historically not obvious (except to us at NextStage and people in related disciplines, anyway) is that system variables also include things like font size, color, images, placement, content, amount of content, positioning, …

All these system variables are variables of the external object. These variables can be measured independently by various devices and will all be the same ±2db.

As you interact with this blog post in whatever external form you've chosen, you start to create an internal representation of it. This internal representation is comprised of estimator variables. Some of these estimator variables include things like your attitude towards me (the author of this post), whether or not I use colors that you have personal biases to, whether or not I use colors that your demographic has biases to, your gender (sorry to say gender is not a binary issue, folks. Neurologically, humans are not “male or female”. We may favor one neurologic gender when considering an amalgam of all thoughts in a given time period or situation, and we switch back and forth, mix and match as needs and environment dictate), your emotional state when you read this, …

System variables and the external objects they represent are interesting, definitely. You want to know about engagement, though? You need to understand the estimator variables and the internal object, the conscious and non-conscious sum that becomes the internal representation of the external object you're interacting with.

and people wonder why, when asked, I tell people that NextStage researches “how people interact with information in their environment”

You are also quite correct when you write "…my engagement is your time-waster."

Consider all the possible estimator variables involved and the order they're in in a given calculation (arithmetic associativity and commutativity don't apply in neuromathematics because biases cascade rather than lineate). Humans don't respond to stimuli in fixed manners except in the massive aggregate. Even so, assuming a massed aggregate means you're only capturing what's allowed by your error margin. Allowing for estimator variables equates to decreasing your error margin to an infinitesimal; therefore your capture becomes infinite.

this is why I often talk about probability solids and solid probabilities and hyperspatial systems when describing the math NextStage uses in its models
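To see why cascading biases break commutativity, here's a toy sketch. The two "bias" functions are pure inventions of mine (nothing to do with NextStage's actual estimator variables); the point is only that composing them in different orders gives different answers:

```python
def color_bias(score):
    # hypothetical estimator variable: amplifies whatever is already there
    return score * 1.5

def author_bias(score):
    # hypothetical estimator variable: a flat penalty
    return score - 0.4

start = 1.0
cascade_a = author_bias(color_bias(start))  # color first, then author: 1.5 - 0.4
cascade_b = color_bias(author_bias(start))  # author first, then color: 0.6 * 1.5

print(cascade_a, cascade_b)  # same two biases, different results
```

Arithmetic sums don't care about order; cascades of functions do, which is why the order the estimator variables fire in is itself part of the calculation.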

Can we measure “engagement”?

So what we're left with is “Can we measure that someone is engaged?”

Yes, we can (remember, I'm talking about how NextStage defines “engagement” and our measurement methodologies, using our technology).

Then what are we measuring?

We're measuring the estimator variables to determine that a very specific neurologic process is active in the given individual (ditto the above).

What is that “very specific neurologic process”?

A focusing of attention such that non-conscious and conscious activities at least intersect if not momentarily synchronize (see Attention, Engagement and Trust: The Internet Trinity and Websites, Defining Attention on Websites & Blogs, Know Your Audience, and Reach It, Focusing Your Customer's Attention, Get the attention you're already paying for or Defining Engagement (Again? Oh, Lordy!) and Exploring the Holes in Flawed Logic. We also have some for-pay whitepapers defining these things, how we measure them, case studies, research, etc., if anybody's interested).

Note that nothing in the definitions thus far concerns itself with the system variables of the external object. We don't care about them right now.

Because we don't care about them right now, we don't have to worry about "…my engagement is your time-waster." because we don't care what is engaging you, only that the neurologic processes of engagement are being demonstrated (see Modality-Specific Attention Under Imminent But Not Remote Threat of Shock: Evidence From Differential Prepulse Inhibition of Startle, Attention and awareness in stage magic: turning tricks into research, Social Decision-Making: Insights from Game Theory and Neuroscience, … There are 103 references in the NextStage library from sources such as Science, Nature, Neuroscience, Refereed Proceedings of the International Women's Conference, Journal of Computer-Mediated Communication, The Journal of Neuroscience, … have I lost anybody yet? And that was just a quick search. Knock yourself out and do a search for "attentional control" or "focused attention" (although there's a lot more trash around the latter)).

So it doesn't matter what's 'engaging' someone, only that they're demonstrating 'engagement'?

Quite correct. What's engaging to you may not be engaging to me and vice versa (at least let's hope not, yes?). But (!!!) the way you and I demonstrate 'engagement' will be the same ±2db (so to speak). The fact that you and I and all humans demonstrate 'engagement' in some very specific ways ("Engagement is the demonstration of Attention via psychomotor activity that serves to focus an individual's Attention.") is critical to the next part of this discussion.

You and I and just about everyone else can look at someone and tell whether they're "engaged" (focusing their attention) in what they're doing. We don't even have to see what they're doing in order to know if they're "engaged" in what they're doing. We can tell by the look on their face, their breathing, their lack of response to other external stimuli, …, that someone is focusing their attention to the exclusion of other stimuli. IE, the intersection of their non-conscious (breathing, the look on their face, etc) and conscious (what they're actually aware they're doing (and we'll need to come back to "aware" in a moment or two)) is occurring and pretty much without pause.

Here's an interesting (to me) aspect of engagement; if someone can tell you what they're doing when you ask them then their level of engagement is either very low or non-existent, meaning “they're not engaged”.

This means that if you ask someone what they're doing and they respond immediately — let's say they're looking at some websites and you say “What are you doing?” and they immediately respond “I'm looking up something” then they're not engaged in what they're doing because they were able to respond to a stimulus that wasn't part of the original external object – internal object pair (<ASIDE>this is one of the ways ET can determine if someone was on the phone, listening to music, watching tv, petting their dog, talking to someone else, etc., while browsing</ASIDE>).

However, if you ask someone a question and it takes them a moment or two to respond? Then they were engaged. Did they sigh before they responded? They were more engaged. Did they have to physically pull away from what they were doing? By golly they were engaged. Did you have to tap them or shout or something to get their attention (ask Susan about this)? Then my god were they engaged.

IE, if someone is aware of what they're doing then they're not engaged in what they're doing.

“Engagement” happens in the “now”, not over time, not in the future and not in the past

People can focus their attention on things not in their environment. They do so by bringing whatever isn't in their immediate external environment into their immediate mental environment. Top performance athletes do this when they mentally rehearse their game or event. Are they engaged? Definitely and they are engaged "right now", they are focusing their present time attention on some future event, they aren't focusing their future attention on the future. If the latter were true, you could interrupt their musings in the present and they would respond immediately, then some time in the future and without stimulus they'd look up suddenly and ask, "What?"

The fact that engagement is very much a “now” phenomenon plays into the concept of motivation.

So “motivation” is important to engagement?

Ah, we're getting closer to the key, me thinks.

Yes, motivation is important because people must be motivated to be engaged by what they're doing (motivation is demonstrated when the external environment is altered through the direct action of the individual in order to achieve some internally recognized goal or objective, creating an "internal-external congruency"). Motivation is one of those things I reference when discussing the {C,B/e,M} matrix (see From TheFutureOf (22 Jan 08): Starting the discussion: Attention, Engagement, Authority, Influence, … and From TheFutureOf (16 Jul 08): Responses to Geertz, Papadakis and others, 5 Feb 08 for more on this). Understanding what motivates people — how much effort they are willing to demonstrate in order to externalize an internal goal or objective — is a part of knowing how to engage them, ie how to focus their attention where we want their attention focused.

Let me give you an example of motivation: I want to play guitar well enough to be able to sit down with musicians and keep up with them. To do that, I must practice playing guitar. I do, and will often go into our music room and just play for 5-10 minutes during the day besides my usual practice time.

The internal goal/objective is a certain level of musicianship. The external recognition of that goal/objective is playing with musicians and keeping up with them. The effort demonstrated is daily practice. Lots of people want to play guitar, few people want to practice daily. We colloquialize this with “They're not motivated enough” and what we're recognizing is that the internal goal/objective doesn't have sufficient value to warrant the effort involved (a Fair-Exchange concept if ever there was one).

Then we can motivate them to be engaged in what we want them to do? How?

People may have heard or read my description of what NextStage does as essentially creating an equation, A + B = C (see Troublesome Targets: Where Analytics and Audiences Meet). I normally describe this equation as “The visitor + the marketing material = the desired response.”

Now let's get a bit more technical. The “The visitor + the marketing material = the desired response” is another form of “(Estimator variables defining Internal Object) + (System Variables defining External Object) = (Are they engaged or not?)”.

This restating of the simple equation is important because it provides definable, executable solutions to questions such as “What background color increases attention to a branding image/message?” The “What background color increases attention to a branding image/message?” is a semantic variation of “(increases attention) + (background color) = (branding image/message)”.

Marketers, advertisers, whoever, know they want "C", increased branding of their product in the consumer's mind. They also know they have "B", some background color palette. If "C" doesn't occur then "A", increased attention, hasn't happened because the choice of "B" was an incorrect choice, ie "B" wasn't motivating visitors to demonstrate "A", engagement.

To me (to me!!!) the value of measuring engagement as defined here is that the site owner knows now, almost immediately, if "B" is working in real time and will work or not pre-publication. Because we can measure "A" and we know "B", we can tell how "B" needs to be changed in order to produce the desired "C". The true richness of that "A + B = C" formulation is that it understands "A = Σᵢ fᵢ(Sⱼ(xⱼ))", etc., so that you can be near surgical in determining what exactly (and I do mean exactly) needs to be changed in order to produce the desired result "C".

So to "Someone comes to the site and we want to know whether they are engaged; engaged to do what? What are the variables measuring? How will we know if/when we've answered the question?"

A + B = C -> (they are engaged) + (the site) = (to do what?)

  • If you can describe what you want someone to do (“C”) and
  • You know what the demonstrations of engagement are for your selected audience (“A”) then
  • You can determine what the site (“B”) needs to be in order for “A” to happen such that “C” occurs.

What are the variables measuring? About 80 different psychomotor behaviors at present. More in the future, we hope.

How will we know if/when we've answered the question? Near immediately, depending on traffic volume, how well you've defined your target audience and things like that.

Pre-publication knowledge falls from the above by holding C and A constant, thus determining what modifications (if any) are required to B in order to create equality.
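The hold-two-solve-for-one logic above can be sketched in a few lines. To be clear, everything here is invented for illustration: the weighted-sum stand-in for “A = Σᵢ fᵢ(...)”, the feature names, and the numbers are all hypothetical, not NextStage's actual model.

```python
# Hypothetical sketch of the "A + B = C" framing: hold the desired
# response C and the audience's engagement profile A constant, then
# search candidate site variants B for the one that comes closest to C.

def engagement(audience_weights, site_features):
    """ "A": a weighted sum standing in for the fuller sum-of-functions form."""
    return sum(w * site_features.get(k, 0.0) for k, w in audience_weights.items())

def pick_variant(audience_weights, variants, target_c):
    """Return the candidate "B" whose predicted engagement is closest to "C"."""
    return min(variants, key=lambda b: abs(engagement(audience_weights, b) - target_c))

audience = {"contrast": 0.6, "warm_palette": 0.4}       # "A": audience profile
variants = [{"contrast": 0.9, "warm_palette": 0.1},     # candidate "B"s
            {"contrast": 0.3, "warm_palette": 0.8}]
best = pick_variant(audience, variants, target_c=0.7)   # "C": desired response
```

The point of the sketch is only the shape of the computation: because “A” is measurable and “B” is known, the gap to “C” tells you which “B” to publish before you publish it.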

(pant pant pant)

The last thing Debbie wrote me was “I look forward to your pontification on this issue.”

Okay, Debbie. How'd I do?

From TheFutureOf (19 Sep 08): Responding to Jim Novo's 29 Aug 08 10:54am comment

(sorry, no copy of Jim's comment)


No, I didn't know much of your effort is toward “teaching”. I'm happy to learn, though. This also helps me to understand your point of view. You favor “simplicity” and in teaching models I favor analogy and metaphor, often starting with very simple models and building upon them.

Done and done. Thanks.

Yes and Hear, hear! to your “forge direct links between the various behavioral sciences and successful marketing efforts.” And it seems to me we're doing that! One of my goals for this blog is being realized. Thank you for helping me to realize a goal.

You write “I would not suggest anyone use the Recency metric without some kind of segmentation, because since we are talking about likelihoods here, you need a coherent population of some kind. The error rate when looking at a single individual would be high, but across a population, again speaking to “likelihood”, it's a great yardstick for placing your bets.” Yes, yet another example of people making first order estimates based on inadequate data because they failed to figure out the basic parameters of their experimental systems.

Then you offer “…sometimes you find that the Recency relationship is not as simple as it is at a more macro level, not as linear as 'the longer it has been, the less likely they are to repeat'.” and I'm chuckling a bit.

Your description of response and profitability curves, etc., leading to new segmentations makes me wonder whether the way we (NSE) use Bohmian and Mandelbrot methods could provide a worthwhile predictive solution to this.

(you were just itchin' to get “dis-Engagement” in here, weren'cha?)

Good thoughts and reasoning. Variable clouds v segmenting? I'm not sure having two options makes things simple enough.

(laugh, darn it)

From TheFutureOf (18 Sep 08): Responding to Debbie Pascoe's 16 Aug 08 11:22am comment

Debbie Pascoe wrote:

Regarding “calibration”, let me back up and briefly describe the implementation. As you point out, it is a javascript on the page. The javascript has all the instructions regarding what bits of data to collect and pass to the vendor's server. When a visitor comes, a call goes out to the vendor's server and this is how the data is transmitted.

Now, if the owner of the site doesn't tag all the pages it intended to tag (this happens all the time), they will get no data from those pages, and will have an incorrect view of what happened during the visit. If the javascript is present but does not function properly (this happens a lot), then again they get no data from the page. If the javascript is present, and functioning but the information in the variables is incorrect (this happens – human error can and does creep in), then the data will be incorrect and the assumptions people make when studying the data will be wrong.

What I mean by “calibration” is automatically scanning the site and uncovering these conditions so they can be corrected. This is not a core competency of the WA vendors, and not something, IMO, they should undertake. (Ref. link to my most current post below). It is a complex problem, and getting more complex as websites get larger and more dynamic, using ever more complex technologies – ex. creating all-Flash modules and deploying tags inside the module.

On a related note, Michael Wexler just posted a significant article titled “What Web Analytics is Missing”, to which I responded.

My Response:

Yes, owners not tagging all the pages desired, intended or necessary. We share that evil, I guess.

I didn't know so many possible errors could creep in as far as javascript goes. It makes me happy that one of our original design decisions was to make things as simple as possible on the client's side, with all the heavy lifting done on our side. Our side we can control pretty well. The client? Not so much.

Thank you for describing what is meant by calibration (“automatically scanning the site and uncovering these conditions so they can be corrected”). This is something our technology does during the running of reports (uncovering invalidating conditions).
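The kind of scan Debbie describes can be sketched in miniature. This is a toy, not any vendor's product: real calibration tools also execute the tag and validate the variables it populates, while this version only checks for the tag's presence in each page's HTML. The tag snippet and page contents are invented.

```python
# Minimal sketch of a "calibration" scan: walk a set of pages and flag
# those where the expected analytics javascript tag is missing.

def find_untagged_pages(pages, tag_snippet):
    """Return the URLs of pages whose HTML does not contain the tag snippet."""
    return [url for url, html in pages.items() if tag_snippet not in html]

site = {
    "/home":    '<html><script src="//vendor.example/tag.js"></script></html>',
    "/pricing": '<html><!-- tag forgotten here --></html>',
}
missing = find_untagged_pages(site, "vendor.example/tag.js")
```

Even this crude presence check would catch the first failure mode Debbie lists (untagged pages); catching broken or mis-populated tags requires actually running the javascript.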

Calibration should not be a core competency of WA vendors? Yet WA vendors should partner with people who have that as a core competency. Hmm…

So if they should partner with such individuals, they must in some part be assuming responsibility for proper calibration, correct?

I read your Response to Michael Wexler's Post re: What Web Analytics is Missing and Michael Wexler's What Web Analytics is Missing… (with any luck I left a comment there). I asked some NextStagers about this. The discussion was very interesting. The core issue is something we've seen repeatedly in labs; people making first order estimates based on inadequate data because they failed to figure out the basic parameters of their experimental systems.

After a lot of conversation it pretty much came down to “Yes, calibration should be something offered by vendors” and there was a caveat that plays to your partnering theory, that such a service should be included in the fee structure.

The fact that this topic has risen to the “conversation topic” level is an indication that it happens enough to be recognized in the web analytics community. Of course, one of the joys of our society is that we think contracts assign and absolve responsibilities (except probably with very large clients).

Lucky us, huh? Sometime when we're together, remind me to tell you what one of our early employees, a fellow from Australia, thinks about contracts in the US.

The amusing piece of all this is (to me) that (I thought) web analytics was all about accountability.

Glad you enjoyed the thumb comment. I only wish it weren't true so often.

From TheFutureOf (11 Sep 08): Responding to Christopher Berry's 15 Jul 08 8:26am comment

(again, sorry, don't have the original comment)

You're welcome for both.

My thanks to and appreciation of Jim Novo, as well.

Recency…since that element was added to this discussion I've been doing some research on “recency”, the “notion that a human in habit tends to stay in habit” as you eloquently put it. The more reading I do, the more I liken it to NSE's Visitor Return Report. It definitely isn't Loyalty, although I'm thinking recency is often confused with loyalty (and this is probably already addressed by others in this discussion; I'm behind in my readings; what can I tell you?).

There is a phrase used in my studies: “As soon as you're ready to not put up with your life as it is any longer, it'll change.” This concept seems to be the domain in which recency concepts dwell, and it is not where loyalty concepts dwell. From what I've read, recency only deals with (what one would hope would be) an inverse exponential: the more often someone returns, the more likely they are to return; the shorter the time between visits, the shorter the time between their last visit and their next.

There is nothing in the recency model to account for several social science elements, many of which are elements of habituation (sequencing or chunking are two that come to mind immediately).
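That inverse-exponential reading can be made concrete with a toy decay curve. To be plain, this is my illustration, not Jim Novo's actual formulation, and the half-life value is a made-up example number.

```python
# Illustrative recency model: the likelihood of a repeat visit decays
# exponentially with time since the last visit. A visitor last seen
# `half_life_days` ago is treated as half as likely to return as one
# seen today. The parameter value is hypothetical.

def repeat_likelihood(days_since_last_visit, half_life_days=30.0):
    """A P(return) proxy that halves every `half_life_days`."""
    return 0.5 ** (days_since_last_visit / half_life_days)
```

A smooth curve like this is exactly what has no room for habituation effects such as sequencing or chunking; those would show up as bumps and plateaus the exponential cannot express.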

Stabbing at the Future

To see if it's done, perhaps?

I appreciate your “…there's fear that data driven insight is going to suck the creativity right out of most of our jobs, that we're all going to become slaves to the Algorithm in the end.” and “I predict that the real challenge in the Future of the Web Analytics is going to be more around the social technology implementation and maintenance, and not so much the physical technology.”

You just know I'm going to agree with you, yes? Let me push the envelope just a bit further; I've learned that people who are comfortable with their creative skills and creativity have absolutely no problem incorporating data-driven insight into their practice.

That “comfort” lends itself to being open to new methods and new technologies, especially if it means granting them the ability to improve their own processes.