
    Estimating Demand Impact And Conversion Rates

    December 30th, 2008

    I was recently working on an interesting project where I was estimating the demand impact of a change that we had implemented to our site.  Without getting into the details, a change was made so that the customer would be less distracted during their shopping experience.  This then – hopefully – keeps the visitor more engaged with the site and if everything else goes well, they will then buy.

    The tricky part is that the likelihood to convert changes at different points on the site, and those differences can be difficult to measure.

    For example, if all a visitor sees is the homepage, they are going to convert at some percent.  Assume 10% for easy math.  If a visitor doesn’t bounce – meaning come to the site, see one page, and leave – say the percent to convert increases to 20%.  If the visitor then sees a product page they are now going to convert 30% of the time.  And finally if they enter the checkout process they will convert 40% of the time.  The point is that the level that the customer is at in the site changes the likelihood of conversion.

    This seems like it would be a very obvious thing, and to a certain extent it is.  The key is not simply knowing that these differences exist.  The key is taking that knowledge into account when estimating the demand impact of a change.

    If visitors have all of these different conversion points and a change is made that causes 1,000 visitors to not leave the site, you need to take these conversion points into account.  Saying that 100 more people will buy (using 1,000 visitors * 10% conversion from the homepage) is just as misleading as saying that 400 people will buy (using 1,000 visitors * 40% conversion rate from checkout pages).  When making a demand impact estimate, make sure that you include a few inputs for these different areas.

    For example: 100 visitors * 10% + 300 visitors * 20% + etc.  As long as the visitor counts add up to your total – that is, the shares sum to 100% – you are in good shape.  You can then take this number times your average order value and you now have a demand estimate.  Note that you could even take this a step further and apply a different average order value to people who have been in different areas of the site.  For instance, someone who is shopping for outerwear or a laptop will probably have a different average order value than an individual looking at flip-flops or computer cables.
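    To make the arithmetic concrete, here is a minimal sketch of that kind of segmented estimate.  The segment names, visitor counts, conversion rates, and order values are all invented for illustration:

```python
# A minimal sketch of the segmented demand estimate described above.
# All visitor counts, conversion rates, and order values are made up.

segments = {
    # segment: (retained visitors, conversion rate, average order value)
    "homepage only":         (100, 0.10, 50.00),
    "browsed past homepage": (300, 0.20, 60.00),
    "reached product page":  (400, 0.30, 75.00),
    "entered checkout":      (200, 0.40, 80.00),
}

# Sanity check: the visitor counts must add up to the total impacted.
total_visitors = sum(v for v, _, _ in segments.values())
assert total_visitors == 1000

orders = sum(v * cvr for v, cvr, _ in segments.values())
demand = sum(v * cvr * aov for v, cvr, aov in segments.values())

print(f"Estimated incremental orders: {orders:.0f}")    # 270
print(f"Estimated incremental demand: ${demand:,.2f}")  # $19,500.00
```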

    Ultimately you can segment this to any level that you are able to get.  Just make sure that the work that you put into arriving at the final number is worth it – especially if you are using the Omniture Excel Client.  Make a judgment call.  If it is just going to be small dollars or you really just need a ballpark, then take the 1,000 visitors * 25% or something like that.  It is a guess, but it should be an educated guess.  Each analysis will require a different level of confidence.

    Have you done anything like this before?  How did it go?

    This has been a Thought From The Cake Scraps.



    Needless Comparison, Or Is It?

    November 11th, 2008

    What is the market doing?  How is my site doing in relation to other sites?  How much time are people spending on my competitors’ sites?  Hitwise can tell you all of this.  The real question is: where do you go from there?

    It has been my experience that people fall into two categories on this topic.  In the first group are the people who are absolutely convinced that you need to know how your competitors are doing and that Hitwise data is a must.  Residing in the second group are the people who say that the data may be interesting, but “it doesn’t impact what we as a company need to get done.”  It is important to note that the latter group isn’t saying the data is useless, rather that it is just not going to impact what the company does.

    That is what is said in the meeting.

    In practice I have found that the data does impact the business.  People do make decisions on the data – and they should!  Don’t ever believe otherwise.  You have to know what your competitors are doing if you want to position yourself correctly.

    Take the simple statistic of traffic to a site.  On one hand it is easy to say that you cannot change the traffic to another person’s site.  The data is not actionable.  Just focus on your own site.  I ask you to look deeper for a moment.

    You know that your traffic is down 10% – perhaps you even have an alert set for such dips – but their traffic is down 15% versus last year.  You could look at this and think that you are doing better than them.  That is good.  Keep doing what you’re doing.  Then you look at your ‘competitive set’, a.k.a. a group of sites similar to yours.  Their traffic is down 15% as well.  Still looking pretty good for you.  The danger is thinking just that.

    Yes, their traffic is down.  Your traffic is down less.  This is interesting, but you have to look for the real questions.  What are they doing or not doing compared to last year?  What are you doing or not doing compared to them?  This is where the power of the data lies: not in the data itself, but in adding that extra dimension.  If you can learn from them as well as from yourself, you can improve your own site at a much lower cost.

    I think this shows how the two groups mentioned at the beginning of this post really need to blend.  The first group – the Hitwise junkies – are wrong if they are just looking at the data.  There needs to be an additional dimension added to turn the data into information.  The second group – the naysayers – are also wrong if they see no value in the data.  The right questions have to be asked.  Comparisons can be made, but there needs to be an intent for action behind them.

    If this blending can happen and form a third group then real information, not just data, is at your fingertips.  Do you agree?

    This has been a Thought From The Cake Scraps.


    You Are Being Tracked: Product Page Finding Methods

    October 13th, 2008

    More often than not if you are somewhere you know how you got there.  Hopefully you don’t have too many weeknights (weekends I will exclude) where you just wake up and have no idea how you got to where you are.  You may be smart enough to know how you arrived at a particular location, but your website – at least by default – is not.

    This post covers the principle of having a Product Page Finding Method (PPFM) tag on your site.  If your site was successful in getting a visitor to a product page, you should really know how they got there.  And if you are a visitor you should know that this is one more way you are being tracked.  For more information on being tracked check out my posts on Internal Campaigns and E-Mail tracking.  I will point out now that this post is less about describing to a visitor how they are being tracked and more about how a website should track the visitor.  This is because a PPFM tag is less common and may not apply to many sites a visitor may go to.  Nevertheless, it is still something to keep an eye out for.

    Back to tracking how a visitor got to a product page.  The easy solution is a ‘Next Page’ or ‘Previous Page’ report.  This will tell you what pages a visitor was going to or coming from, respectively.  It may seem to answer our question of how the visitor arrived at a product page – and it does, at a simplistic level – but it is of no use for aggregating data.  Consider an index page that lists all of a company’s laptops.  How often does a customer click through to an individual laptop (a product page)?  There is no easy answer to this if you have more than a few laptops displayed.  A PPFM tag will solve this problem.

    If you add a PPFM – that’s Product Page Finding Method – tag to each link on the index page, then when the visitor clicks through to a product page you can tell Omniture to look for PPFM=INDEX_Laptops01 and it will store it in an eVar (a commerce variable).   Then you can run a report in Omniture and look for instances of INDEX_Laptops01.  Compare that to the page views for your laptop index page and you have the rate at which a person is clicking from that index page to a product page.
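    As a rough illustration, the tagging itself can be as simple as appending a query parameter to each product link.  The URL and helper function here are hypothetical:

```python
# Hypothetical sketch: append a PPFM query parameter to a product link
# so the analytics tool can capture the finding method on click-through.
from urllib.parse import urlencode

def tag_link(product_url: str, ppfm_value: str) -> str:
    separator = "&" if "?" in product_url else "?"
    return f"{product_url}{separator}{urlencode({'PPFM': ppfm_value})}"

print(tag_link("https://example.com/laptops/model-x", "INDEX_Laptops01"))
# -> https://example.com/laptops/model-x?PPFM=INDEX_Laptops01
```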

    Another trick is to make sure that all of your index pages are tagged and have INDEX in the PPFM tag.  That way you can do a search to pull back all instances of an index page click on any index page.  With any luck you have your pages named in a similar fashion – so you can get total index page views – and you can then get a site-wide rate at which people are clicking through to your products from your index pages.
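    Here is a sketch of that kind of prefix rollup, with invented tag values and counts:

```python
# Sketch: total up PPFM instances whose values start with "INDEX" and
# compare against total index page views for a site-wide click rate.
# All tag values and counts are made up for illustration.

ppfm_instances = {
    "INDEX_Laptops01":   480,
    "INDEX_Cables02":    150,
    "SEARCH_Direct":     310,
    "XSELL_ProductPage":  90,
}
index_page_views = 9000  # total page views across all index pages

index_clicks = sum(n for tag, n in ppfm_instances.items() if tag.startswith("INDEX"))
print(f"Index-to-product click rate: {index_clicks / index_page_views:.1%}")  # 7.0%
```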

    Now that we understand the concept of a PPFM, let’s look at a few other uses for it.

    Basically, you should not have an instance where a customer navigated to a product page and you do not know how they got there.  Other ways they could get to that product page include a ‘direct to product page’ search and a cross-sell placement from another product page.

    The ‘direct to product page’ is useful if you have a search box that will allow a customer to go directly to a product page without going through an index page.  An additional way to tag this would be to have a search results tag – for instances when a search returns many products – and then any click from that index/search page to a product page would give credit to the search tag.

    The cross-sell tag would be used on any product page where you are displaying some other products the customer might also like to buy.  Any click on these links will bring the customer to another product page and then the cross-sell tag would get credit.  You might also have a similar tag for items displayed in the cart.

    The last thing to discuss is credit.  On a $100 order, who gets the credit?  The simple way to do it is the last used tag.  The bad part of this method is that if a customer used an index page for the first three items but clicked a cross-sell link for the last item, the cross-sell tag will get all of the $100 attributed to it.  That isn’t really accurate.  The better way is to distribute the $100 via linear attribution.  That means that in the example above each of the index page clicks would get $25 and the cross-sell would get $25.  The tricky part here is that if a customer is browsing, they may click to 10 different products from 10 different index pages, and each of the index pages would get a 1/10 share of the revenue even though the customer only bought from one of them.  Just something to keep in mind.
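    A quick sketch of the two methods on the $100 order described above (the tag values are hypothetical):

```python
# Sketch: last-touch vs. linear attribution for a $100 order found via
# three index page clicks and one cross-sell click. Tags are made up.
from collections import Counter

order_value = 100.00
touches = ["INDEX_Laptops01", "INDEX_Shirts02", "INDEX_Cables03", "XSELL_ProductPage"]

# Last-touch: the final tag gets the entire order value.
last_touch = {touches[-1]: order_value}

# Linear: every touch gets an equal share of the order value.
share = order_value / len(touches)
linear = {tag: count * share for tag, count in Counter(touches).items()}

print(last_touch)  # {'XSELL_ProductPage': 100.0}
print(linear)      # each touch worth $25.00
```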

      With this tagging in place on your site you should always be able to answer how a customer arrived at your products.  It does not quite answer the question on a page-by-page basis – i.e. for Product A the PPFM tags used to arrive there were cross-sell 24%, index 53%, etc. – but it will give you a much better idea, on the whole, of how your visitors are getting to your product pages.  Just a little tip that can save a ton of work.

      This has been some Thoughts From The Cake Scraps.


      You Are Being Tracked: Internal Campaigns

      September 22nd, 2008

      So you know that you are tracked by e-mails.  You are going to beat the system.  You are not even going to use a search term to get to the website because you know Google will track you along with the website.  You are going to direct load – typing the URL into the address bar – and avoid being tracked.  Almost, but no.  Chances are that the site gave you a cookie last time you were there.  Oh well, you tried.  But that is not what this post is about.  Just because you got to the site without tracking, does not mean that you will not be tracked.

      Internal campaigns are exactly what they sound like.  They are campaigns that are internal to the site.  A campaign is anything that the site is doing to try to get you to buy more stuff (or whatever the conversion metric would be, such as filling out a survey or something).  E-mails are campaigns.  Billboards are campaigns.  A site or company runs advertising campaigns. You get the idea.

      The banner that you see across whatever site you are on is sure to include an element that says that you clicked it – a tag.  Note that I am only talking about a banner that is on the site and for the site, not an advertisement for a different site.  The advertisement for a different site would be an external campaign for the company that bought the ad.  We are talking about an ad for another item on your site – perhaps for an LCD monitor when you are looking at computers.  I call this a real estate campaign or an internal campaign.  I call it real estate because the site is tracking based on the tag on the banner and the site knows the location of the banner, the real estate. I call it an internal campaign because it is for another product that will take the visitor somewhere else internal to the site, not push them out the door to an external site.

      So you clicked the banner and were tagged.  It is in this way that the site can track how often the banner is being used (instances) as a rate of how many people saw the page it was on (page views).  This also allows the site to understand where someone is clicking on the site.  Each area of the banner could contain a different tag, thus if you clicked the t-shirt you could get tagged with a value of tshirtclick while if you clicked the jeans on the same banner you would get tagged with a value of jeansclick.
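      As a rough sketch, the reporting math is simple: instances of each tag value over page views of the page hosting the banner.  All numbers here are invented; the tag values come from the example above:

```python
# Sketch: click rate for each region of an internal banner, computed as
# tag instances over page views of the hosting page. Numbers invented.

page_views = 50_000  # views of the page carrying the banner
banner_clicks = {
    "tshirtclick": 1200,
    "jeansclick":   800,
}

for tag, clicks in banner_clicks.items():
    print(f"{tag}: {clicks / page_views:.2%} of page views")
```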

      Internal campaigns are very useful for a site because they allow for a wide variety of reporting.  The site will know how many conversions came from visitors who clicked on the banner and how much revenue is associated with it.  This is also a much easier way to track traffic from a page.  Perhaps a single page has multiple banners and the site wants to know how many people clicked the banners.  With no tagging on the banners, all the site would be able to do is look at what pages visitors went to next and add them up.  For instance, if there is no way to get to the jeans page from your home page and yet 20 of the 100 visitors took that path, you can assume that they must have clicked on the jeans banner.  But then adding that up with the page that took them to the t-shirts, and the page that took them to the pants, and so on, is a huge pain. By the time you get to the number of estimated clicks (because in theory they could have the page bookmarked or something like that) you won’t care anymore.

      Look for a forthcoming post about purchase influencer tagging on Thoughts From The Cake Scraps.


      Should ‘Visit’ Metric Be Updated?

      September 15th, 2008

      Interruption is a way of life here in America. I remember reading somewhere that in Japan if a person is working by themselves they are not as likely to be interrupted because it is assumed that they are in thought whereas in America if you are working by yourself it means you are available because it is assumed you are not busy. Not sure if that is true or not, but I know that if I see someone at their desk I will talk to them if I need them. I always ask if they have time, but I still ask because – in famous final words fashion – it will only take a second.

      How does this relate to web analytics? It relates because of the definition of a visit. If you are new to web analytics, you may wonder “What is a visit?”.  Web Trends Live has an excellent glossary of terms which is where I pulled this definition from:

      Visit: A visit is an interaction a unique visitor has with a website over a specified period of time or activity. In most cases, if a visitor has left a site or has not executed a click within 30 minutes, the visit session will terminate.

      My question is, is this the correct length of time? Should it be longer than 30 min because of how many distractions/interruptions we have in a day? I read a great interview at FastCompany.com about how often people get interrupted at work. The average time between switching tasks was 3 minutes and 5 seconds. That is a lot of moving around. It took an average of 23 minutes and 15 seconds to get back to something they switched from.

      This would give credence to the 30 minute rule that is laid out for us, but I still have to wonder if it is correct. I think that with tabbed browsing people are more likely to have a longer lag time between looking at one page and looking at another. I think that since the onset of ‘restore session’ – when you open up a browser that you previously exited with multiple tabs active – lag times between activity have increased. ‘Fires’ come up at work and need to be handled, e-mails come in, the phones ring, etc. The reality is that while a person may be idle for 30 minutes they would say that it was one visit. This raises the question of who defines a visit: the website or the viewer?
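      To see how much the cutoff matters, here is a small sketch that sessionizes the same invented clickstream with two different timeouts:

```python
# Sketch: the same clickstream counted as visits under two different
# inactivity timeouts. Timestamps (in minutes) are invented.

def count_visits(click_times_min, timeout_min):
    """Count visits: a gap longer than the timeout starts a new visit."""
    visits = 1
    for prev, curr in zip(click_times_min, click_times_min[1:]):
        if curr - prev > timeout_min:
            visits += 1
    return visits

clicks = [0, 3, 8, 45, 50, 120]  # one distracted reader's click times
print(count_visits(clicks, 30))  # -> 3 visits under the 30-minute rule
print(count_visits(clicks, 60))  # -> 2 visits with a longer timeout
```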

      My main concern is that this time frame may skew some data that looks at visits by a visitor for a given period of time. Perhaps you will get data that says people visit your site multiple times in one day – probably considered a good thing – when really you just can’t keep a visitor’s attention and they keep having a 30 minute or longer delay between their activity. This would actually be a bad thing because you are not keeping the viewer involved, which may discourage them from coming back.

      Clearly an industry needs standards and, honestly, web analysts are lucky to have any standards at all in a field that changes so quickly while being so young. That said, hanging on to old standards just for the sake of standards isn’t such a great policy either. It doesn’t need to change today, but it is something to keep thinking about as browsing habits evolve.


      Google Search History Update

      September 9th, 2008

      In my post You Are Being Tracked: E-mail Style there was some discussion/confusion about when I said:

      Hopefully you know that Google keeps track of everything you have searched for.  Ever.

      Well, this is both true and false, depending on how you read it.  GHamilton noted that Google does not keep everything you searched for.  Rather, they keep it for 18 months.  The key word here is “YOU”.  In a recent post on the Google Blog, Google announced:

      Today, we’re announcing a new logs retention policy: we’ll anonymize IP addresses on our server logs after 9 months. We’re significantly shortening our previous 18-month retention policy to address regulatory concerns and to take another step to improve privacy for our users.

      You may see where I am going here.  GHamilton is correct in that, from an individual IP perspective, after 18 months – or rather 9 months now – Google no longer has history on you specifically.  I am correct in that Google really does have “a history of everything you have searched for.  Ever.” with the caveat that they no longer know that you were the one that searched for it after 9 months.  If you are interested, CNET does a great job of getting into the nitty-gritty and explains why the ACLU is so critical of Google’s privacy policy.
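      Google hasn’t said exactly how the anonymization works; one common approach is to drop the host portion of each logged IP address, which is what this hypothetical sketch does:

```python
# Hypothetical sketch of IP anonymization: zero the last octet of an
# IPv4 address, keeping only the network portion in the log.
# (Google's actual method is not spelled out in the announcement.)

def anonymize_ipv4(ip: str) -> str:
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

print(anonymize_ipv4("203.0.113.57"))  # -> 203.0.113.0
```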

      Hopefully that helps clear things up a bit for people.  Let me know your thoughts.  Do you care that Google keeps your data for 9 months?  Should it be longer or shorter?  Should they keep anonymous search history forever?


      You Are Being Tracked: E-Mail Style

      September 6th, 2008

      Most people probably already know that they are being tracked.  There are all sorts of programs and ways to do this at all sorts of levels.  For instance your ISP may track you and give (sell) your data to a company like Hitwise – privacy policy can be found here.  I actually saw this in a newscast last week.   They interviewed some guy about what popular search terms are and tried to make it sound creepy.  Amazing! People search for weird stuff on the internet like “how to make bombs” and *gasp* “porn”.  This guy must be some sort of genius!  And he looks at historical data! Brilliant!

      Hopefully you know that Google keeps track of everything you have searched for.  Ever.  Anyway, the part that people probably don’t know as much about is how individual sites track you.  One way a site can track you is by tagging you when you click through on an e-mail they send you – the focus of this post.  Think of tags as dated stamps in your passport book.  Interestingly enough, some of this tagging can be easily found in the address bar of your browser.

      When you see something in the address bar that looks like emid=584783, that is telling the website that your internal – meaning site-specific – e-mail address ID is 584783.  This value is unique to a single e-mail address. Each e-mail sent to that address will have its unique emid attached to all links in the e-mail. This also allows a site to build a history of that e-mail address – not only for activity, but for response rate as well.  Now every time you click through an e-mail for that site they have more history.  Note that larger sites rarely look at individual behavior but instead classify a behavior and then analyze that group.  Still, the information is there.

      In addition to an e-mail ID, there is usually a campaign variable such as cid=Sep08FreeShipping.  This allows the site to report on everything with Sep08FreeShipping stored in the cid variable. All of this information is contained within the link that you click from the e-mail. If you get the e-mail and directly load their site, not through the e-mail, the activity will not be tracked because in a direct load no value would have been assigned to cid.
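      For the curious, pulling these values back out of a click-through URL is trivial.  The URL here is made up; the parameter names come from the examples above:

```python
# Sketch: extract the e-mail ID and campaign ID from a click-through
# URL. The URL is hypothetical; emid and cid match the examples above.
from urllib.parse import urlparse, parse_qs

url = "https://example.com/sale?emid=584783&cid=Sep08FreeShipping"
params = parse_qs(urlparse(url).query)

print(params["emid"][0])  # -> 584783
print(params["cid"][0])   # -> Sep08FreeShipping
```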

      These variables do not have to remain in the web address the entire time.  They are stored in the background after the initial click. So when you no longer see emid or cid in the address bar, but originally arrived at the site through the e-mail, you and your activity are still being tracked.

      Look for at least one more installment of how you are tracked. There I will focus more on how a site tracks internal campaigns. Hope this helped give some people a better understanding of how websites track you.


      If Only It Were Bigger…

      August 26th, 2008

      I get all sorts of spam in my e-mail about making things bigger, but none of them are for the one thing I really want bigger: the interface for Omniture Excel Client.

      It took me some time to get used to the Excel Client that Omniture offers.  Mostly it was because I was not sure what reports I needed or how I wanted the data.  That makes looking for an easy way to get the data difficult.  After a few weeks working with Omniture I was comfortable enough to begin using Omniture Excel Client (OEC) on a regular basis.  Life is filled with peaks and valleys from there on out.

      Because the interface is connecting to Omniture it is inherently slow at pretty much everything.  Now, it is not horrible (most of the time), but when working in Excel changes are instant – think changing the font – and working in this slower interface takes some getting used to.  I will dedicate a future post to the issues and gleeful moments I have with the OEC, but for now let’s get back to the size issue.

      OEC does not allow you to re-size the interface that you are working in.  So, for instance, if I am using the “Pages” or “Most Popular Pages” report and I see a list of page names, part of the names get cut off.  There is no side scrolling option to be found.  You just have to sit there and wonder what the full page name really is.  I will point out that this may not be an issue for lots of people, but if your site is large enough page names can get pretty long.  Also, I am a fan of descriptive page names so that when you see just the page name it is informative.  Names such as “Brand” “Item” “Size” or “Gender” “Size” “Shirt Type” are much better than a product number for page names.

      So the real question here is: why is the interface of OEC set up this way?  Did they not do QA testing on this?  I don’t have the answers.  All I know is that a side-scrolling bar – not my favorite option but better than nothing – cannot be all that difficult to add to the interface.  Never mind that this issue persists in the newest version of OEC. Long page names are a reality, and while you can use the search or advanced search to narrow your results, it is a pain to have to do. Plus, OEC doesn’t save your advanced search, so if you get results and need to edit it – such as excluding an additional word or phrase – be prepared to re-enter the entire search. Lots of NOT fun here too.

      I am not sure what it will take for Omniture to fix this.  Apparently they have not gotten the memo that size matters.


      What A Picture is Worth

      August 25th, 2008

      It is hard to value a picture of your family on vacation, of your friends at a party, or of your website 3 weeks ago.  As touching as the first two options are, this post will focus on the last of the three: a picture of your website.

      Now for many people, this is not a big deal.  If you are running a blog or some other site whose primary focus is serving content, your website may not change much in appearance over the course of a few weeks.  But for the people who work with sites focused on selling things with crisp pictures and captivating copy, things are changing all the time – and analysts may or may not know about it.

      As a beginner in web analytics, and in fact in marketing in general, I learned very early on that knowing why trends in your data are changing is nearly as important as the data itself.  What people want from an analyst is not “what is changing” but rather “why is that changing”.  Answering the first part will get their attention, but very few people are content knowing that traffic is up 15% without knowing why.

      Keeping a log of changes, either manually or by writing a simple script – currently beyond my abilities, but a co-worker of mine whipped something up – is of great value.  Now we have a copy of what pages looked like on any day we want.  If the creative changes or a sub-zone on a division tab moves, we have that change recorded.  With copies saved off, you are the owner.  Better yet, when someone asks for some analysis on your site’s homepage or division page you can not only show them the data and a cool graph, but also an image of a pre-change and post-change look at the site.
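      The co-worker’s script isn’t shown here, so this is a hypothetical stand-in: fetch each page’s HTML once a day and save it under a dated filename.  (A real setup might capture screenshots instead, but the idea is the same.)

```python
# Hypothetical sketch: save a dated copy of each watched page daily.
# The URLs are placeholders; schedule the script via cron or similar.
import datetime
import urllib.request

PAGES = {
    "homepage": "https://example.com/",
    "division": "https://example.com/division",
}

def snapshot(name: str, url: str) -> None:
    stamp = datetime.date.today().isoformat()
    html = urllib.request.urlopen(url).read()
    with open(f"{name}-{stamp}.html", "wb") as f:
        f.write(html)

for name, url in PAGES.items():
    snapshot(name, url)
```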

      It would be great if these things just happened, or if the web analysts always knew what changes were happening when, but the reality is that this is not always the case.  If you manage the saved copies, you can always be sure you will capture the change, whether or not you are informed about it ahead of time.  If there is a question about a content change, you can now look for yourself and deliver your analysis in a more timely and – more importantly – more informed manner.

      I’m not sure what a picture is worth, but a picture can turn data into information and information is priceless.

      UPDATE: Just found a great example of this in an article written by Bryan Eisenberg.  You can find it here.  It deals with how people interact with Google search and how that has changed over time.  Short, simple analysis, but cool.