Omniture SiteCatalyst Plug-ins

Omniture SiteCatalyst is without a doubt one of the best Web analytics solutions out there. However, like all analytics solutions, it can be difficult to implement when you are not a dedicated programmer or do not have programming resources at your disposal. Many times, Web analysts and the other people responsible for the Web analytics function within a company will also not have access to server-side code to implement better page names, or to set events and variables when certain circumstances require it. This is where the Omniture SiteCatalyst plug-ins enter the equation.

The primary advantage of the SiteCatalyst plug-ins is that they allow you to implement SiteCatalyst and its more advanced features without the need to touch server-side code. It should be noted, though, that editing server-side code to pass dynamic data to SiteCatalyst is almost always the preferred avenue if it is available to you. That being said, Omniture has created many plug-ins that allow data to be sent to SiteCatalyst by editing only the basic “s_code.js” file that is part of the implementation. Some of the more useful plug-ins (my opinion, of course) include:

  • Append List (s.apl)
    • This function is one of the most useful and versatile for someone without easy access to source code. As an example of how this function might be used, assume that you have a registration confirmation page that needs a success event fired. By using the “s.apl” function, you can write some very simple JavaScript that detects the Omniture page name and, if it is a match, fires your success event, all coded directly within the Omniture JavaScript file.
  • Link Handler (there are 3 of these)
    • There are three flavors of the link handler plug-in. One controls clicks on regular links, another controls exit links and the last controls download links. The “s.downloadLinkHandler” plug-in is especially helpful if you want to track all of the PDFs on your site by setting a specific custom event for only clicks on PDFs, while at the same time sending the URL of the PDF into a commerce variable (a.k.a. an eVar). By using these three plug-ins, you can easily begin tracking the clicks of select links on your site.
  • New vs. Repeat Visitors (s.getNewRepeat)
    • The name of this plug-in says it all. By using it, you will be able to segment your visitors and their interactions with your site into behavioral groups of new and return visitors. It is very useful when the Omniture prop is correlated or the eVar is fully subrelated.
  • Query Parameter (s.getQueryParam)
    • This might be the most basic plug-in, and it is part of almost every Omniture SiteCatalyst implementation that I’ve ever seen. While simple, it is extremely useful. Using this plug-in, you can capture the value of any query string parameter and send that value to an eVar or prop. When you couple this plug-in with one like “s.apl,” you have an easy way to capture your internal search phrases while at the same time setting a custom success event for internal searches.
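To make the append-list behavior concrete, here is a minimal, self-contained sketch of what “s.apl” does. This is illustrative logic written by me, not Omniture’s actual plug-in source: the idea is that a value is appended to a delimited list only if it is not already present, which is what keeps an event from being set twice on the same image request.

```javascript
// Illustrative sketch of the append-list idea (not Omniture's source):
// add a value to a delimited list only if it is not already there.
function apl(list, value, delim) {
  var items = list ? list.split(delim) : [];
  if (items.indexOf(value) === -1) {
    items.push(value);
  }
  return items.join(delim);
}

// Inside s_doPlugins, the real plug-in is typically called like this
// (the page name, event1, and the final case-sensitivity flag are
// illustrative values, not from any specific implementation):
//   if (s.pageName === "registration:confirmation") {
//     s.events = s.apl(s.events, "event1", ",", 2);
//   }
```

The same pattern works for any variable that holds a delimited list, which is why this plug-in shows up in so many implementations.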

These are but a few of the many plug-ins offered by Omniture. There are also more advanced ones such as:

  • Channel Manager (advanced tracking of your campaign data)
  • Cross Visit Participation (provides an understanding of campaign impact across visits)
  • Form Analysis (enables reporting on form errors, abandonment, etc.)

These last few are more difficult to implement in most cases, so you might consider contacting a consultant for them.

The plug-ins are one of the most useful features of a SiteCatalyst implementation but are often overlooked. I think that a session on a few of the more advanced plug-ins would be an excellent idea for an Omniture Summit session, don’t you?

Site Optimization and Targeting

All visitors are not the same: they can come in through different marketing channels, enter your site at different pages, and view different parts of your site. One of the great things about the Web and the tools that we have is that you can not only know that your visitors are doing different things, you can measure it and then act on it. TV, for example, has to rely on sampled panel services (i.e., Nielsen), where you can only estimate which groups of people saw your ad. On the Web, we know exactly what visitors viewed and who viewed it! If you have a robust analytics solution, you already have this data. The question is: what, if anything, are you doing to act on it?

As an Omniture user, I have always relied heavily upon tools like Omniture Discover because they let you segment your visitors into more meaningful groups. For an e-commerce site, I can easily see that the people clicking on the “View Larger Image” link on a product details page have a conversion rate that is 200% higher than those visitors who do not click on that link. Shouldn’t I be doing something about that, like running a test to optimize that link for the visitors who have not been clicking it?

If you are currently or will be running a site optimization solution like Omniture Test&Target, you should always be running monitoring campaigns on your site. This can allow you to always be tracking and reporting on how different segments of your visitors are converting on your site, allowing you to quickly act by launching a test that is targeted towards a high-value segment of your visitors.

You should consider targeting your site optimization efforts towards different segments of visitors such as:

  • Logged in visitors (vs. not logged in)
  • Visitors from paid search campaigns
  • Visitors from natural search
  • Visitors using specific keyword phrases on search engines
  • Visitors from email campaigns
  • Visitors that stop at a certain point in your conversion funnel
  • First time visitors
  • Repeat visitors
  • Visitors that enter your site via a specific page
  • Visitors from a specific geographic location

Targeting your site optimization efforts to segments of visitors is usually more effective than just launching a test that is served to all visitors of your site as if they were equal. The reason is that with different segments, you have an idea of their intentions. For example, if you are targeting a test that changes laptop product imagery for visitors who enter your site after searching for “laptops” on Google, you know that your test is being served to the segment of visitors that is actively considering purchasing a laptop now or in the near future.

When considering site optimization, always ask yourself for which group or segment of visitors is this test targeted? You should see your site optimization efforts paying off more quickly if you are targeting your tests.

Content Site Optimization

Most of the blogs and literature that you will see on the Web about site optimization is going to be about e-commerce Web sites. The reason? Anyone can understand your results when you say you’ve increased conversion rate by 20%, thereby seeing an incremental lift in revenue of $100,000 over the next 30 days. The case for optimization here is pretty obvious. This doesn’t mean that content sites and publishers that aren’t selling a product on their site should not be optimizing their sites.

One common excuse on the part of a lot of content and publisher sites is that they are not selling anything. If you are in business and making money while not selling anything, please let me know what business you are in so that I can start one up too! The reality of the situation is that often, it’s just harder to measure revenue from online activities and marketing for a content site or publisher. You are in fact “selling” some product or service to the visitors to your site, whether or not that “sale” is made online. Once you’ve realized this, you should also realize that your site could be better at selling to its visitors. In order to start optimizing your site, the first step is to identify and track your “conversions,” not just basic traffic data. For publishers or lead generation sites, these conversions could include (but are not limited to) any of the following:

  • Page views
  • Ad views
  • Completion of a registration form
  • Registration for a newsletter
  • Completion of a contact form

Once you have identified your conversions on your site, you are ready to optimize your site so that you can get your visitors to view more ads, visit more pages, complete your lead generation forms, and sign up for your newsletter more than ever before.

Many site optimization platforms, such as Omniture Test&Target, will integrate directly with your existing Web analytics solution, making it even easier to optimize your site since you won’t have to re-tag all of the conversions on your site. All optimization solutions should let you track “non-ecommerce” events in some fashion, but if you can leverage your existing Web analytics tagging, you should do so.

In terms of content sites, here are a few tests that you should be running on your content. These are what you might call the low-hanging fruit common to a lot of content sites:

  • If running paid search campaigns, test different ways of presenting calls to action for your conversions
  • Test what you have above the fold of your homepage so that you can decrease bounce rate and increase conversions
  • If you have search on your Web site, change how you are presenting search results

These are just a few, very generic options. The options available are unique to every company out there, and you each have your own opportunities to optimize your existing content.

Impact of WPP Investment in Omniture

So yesterday there were press releases from both Omniture and WPP announcing their partnership and the $25,000,000 common stock investment by WPP in Omniture. You can find the respective press releases on each company’s site (they’re essentially the same).

I think that this was very big news, and that it will impact both Web analytics practitioners and other vendors alike. As I see it, here is a short list of the ways this partnership might affect us practitioners of Web analytics:

  • With Omniture training an additional 500 WPP employees in Omniture technology, the available pool of people with Omniture on their resumes will significantly increase.
  • There might be an internal impact at Omniture on their Best Practices group. Will Omniture keep consulting in house in light of this $25 million investment by WPP? How many Omniture consultants might be asked to leave Orem to work within a WPP company (as was basically stated in the press release)?
  • This could be good for practitioners that are savvy enough to realize the impact now, and broaden their skill sets outside of Web analytics alone.

There’s also the potential impact on other vendors:

  • With the large client base at WPP, the impact on competitors such as Coremetrics and WebTrends is obvious.
  • The same large client base could also help Omniture in increasing use of other tools such as Test&Target (look out Optimost and SiteSpect), Merchandising (look out Endeca), Discover OnPremise (look out BI vendors), etc.
  • What’s the impact on the many other consultancies out there that help companies with Omniture implementations and optimization?

Please let me know if you have any further thoughts on what the impact of this investment might mean for WPP, Omniture, us practitioners of Web analytics, or Omniture’s and WPP’s competition.

In closing, here are a few early thoughts on the WPP/Omniture news from some people on Twitter:

WPP Omniture Partnership on Twitter

Campaign Revenue Attribution

One of the simplest questions asked in analytics is, “How much money are we making from our paid search campaign?” The problem is that there are many ways to answer this question, as well as many factors on the Web analytics side that can create different answers.

As a Web analyst working within a team of more traditional SQL-using data analysts, explaining how an analytics solution answers the above question can be challenging. The three primary variables that are part of a revenue attribution methodology are:

  1. The order in which the campaign credited with the sale occurs in relation to other campaigns
  2. The length of time that a campaign may receive credit for a sale
  3. How attribution is split (or not) among multiple campaigns

As for order, the most common approach is last touch. In other words, if a visitor clicks through your email campaign today and then through your Google ad tomorrow, the Google ad will get all of the credit because it was the last campaign touched before the purchase. The problem, of course, is that even though the email campaign was clicked first and might have impacted the sale, the email receives no credit. One alternative to last touch that gets around this is linear attribution. Basically, linear attribution would split the previously mentioned sale 50/50 between the email and the Google ad. But should it really be 50/50? In addition to last touch and linear, you can also have something like first (or original) touch, where the email would get all of the credit. So there are a lot of choices to mull over.
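The three ordering models above can be sketched in a few lines. This is a hypothetical illustration of my own (the function name, campaign names, and data shape are assumptions, not from any vendor tool):

```javascript
// Hypothetical sketch: split the revenue from one sale across an
// ordered list of campaign touches under three attribution models.
function attribute(touches, revenue, model) {
  var credit = {};
  touches.forEach(function (t) { credit[t] = 0; });
  if (model === "last") {
    // All credit to the final touch before the sale.
    credit[touches[touches.length - 1]] = revenue;
  } else if (model === "first") {
    // All credit to the original touch.
    credit[touches[0]] = revenue;
  } else if (model === "linear") {
    // Even split across every touch.
    touches.forEach(function (t) {
      credit[t] += revenue / touches.length;
    });
  }
  return credit;
}

// The email/Google example from above, under linear attribution:
// attribute(["email", "google-ad"], 100, "linear")
//   → { "email": 50, "google-ad": 50 }
```

Swapping the model string shows just how much the "answer" to the revenue question depends on this one choice.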

Now that I’ve talked about a few of the methodologies around the order of attribution, the variable of time needs to be added. Using the previous example, and assuming last touch as the order of attribution, how long after coming in through a Google ad should the ad get credit for the sale? Only if the visitor buys within the visit? 7 days? 30 days? The most typical solution is 30 days. However, this could very well extend out to months if your Web site is one of lead generation where the sales cycle is weeks or months long. Also, if you send out daily emails, is 30 days really a good choice for attribution? If you don’t have the choice of a custom solution, then 30 days is probably your best bet right now since that seems to be the standard. But just keep in mind that you might have the option of changing your attribution to any time period (or maybe even to an event on your Web site).
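The time variable can be layered on top of any ordering model by simply discarding touches that fall outside the lookback window before attributing. Here is a hedged sketch, assuming a touch is a record with a name and a timestamp (the data shape and the 30-day example are my assumptions):

```javascript
// Hypothetical sketch: keep only campaign touches that occurred
// within a lookback window (e.g. 30 days) before the sale.
function withinWindow(touches, saleTime, windowDays) {
  var windowMs = windowDays * 24 * 60 * 60 * 1000;
  return touches.filter(function (t) {
    var age = saleTime - t.time;
    return age >= 0 && age <= windowMs; // touch happened before the sale, inside the window
  });
}
```

Running the filtered list through whichever ordering model you use then gives window-aware attribution, and changing a single parameter lets you test how sensitive your numbers are to the 30-day convention.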

Earlier, I mentioned linear attribution as a method of splitting revenue between multiple campaigns. Aside from this even split among campaigns, there are not many other options out there. This is one of the biggest challenges in revenue attribution. One way around this is to export all of your analytics data by visitor ID for every visit (that’s a ton of data, to say the least). Once you have this data, you can create your own methodology to tie a sale back to every visit by the visitor and every campaign that they touched (and the time between), all the way back to maybe even the first campaign code ever touched by the visitor. We’ve done this at my current job, and I can tell you that it is not something that is easy to recreate on an ongoing basis.

It would be great if there were some solution on the Web analytics vendor side that would let you create a truly custom attribution methodology. However, the problem there is that if you can customize every aspect of attribution, you might end up creating a self-fulfilling prophecy. What I mean here is that if you weight the last touch before a sale as being worth more than the campaign touches between the first and last, then you might be overvaluing paid search if that is most often your last-touch marketing channel.
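To make that self-fulfilling-prophecy risk concrete, here is what a custom position-weighted model might look like. The 40/20/40 split is purely illustrative, and picking those numbers is exactly the judgment call that can over- or under-value a channel:

```javascript
// Hypothetical position-weighted model: 40% of revenue to the first
// touch, 40% to the last, 20% split among the middle touches.
// The weights are illustrative, not a recommendation.
function weightedAttribute(touches, revenue) {
  var credit = {};
  touches.forEach(function (t) { credit[t] = credit[t] || 0; });
  if (touches.length === 1) {
    credit[touches[0]] = revenue;
    return credit;
  }
  credit[touches[0]] += revenue * 0.4;
  credit[touches[touches.length - 1]] += revenue * 0.4;
  var middle = touches.slice(1, -1);
  var middleShare = revenue * 0.2;
  if (middle.length > 0) {
    middle.forEach(function (t) {
      credit[t] += middleShare / middle.length;
    });
  } else {
    // Only two touches: split the middle share between them.
    credit[touches[0]] += middleShare / 2;
    credit[touches[touches.length - 1]] += middleShare / 2;
  }
  return credit;
}
```

With ["email", "display", "search"] and a $100 sale, this hands $40 to email, $20 to display, and $40 to search; whether that reflects reality, or just your assumptions, is the whole debate.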

So what is the solution?

As far as I am concerned, it is short-sighted to value everything as last touch. You can’t just ignore the fact that other campaigns have in some way influenced or impacted your visitor prior to the purchase. To ignore this is to miss out on understanding and optimizing your marketing efforts from beginning to end. So, ditch last touch (in a perfect world, if you can).

Next, you’re going to need to create your own solution as to how to tie all of the different campaign touches together and appropriately attribute them to the sale. This is an easy thing to say, but not so easy to do obviously. In a later post I will try to flesh out an idea to actually do this.

Do you have any ideas as to how to improve upon existing ideas of revenue attribution? If you’re doing something custom yourself, let me know about it.

Programming and Web Analysts

Now that Omniture has APIs and WebTrends is doing more sophisticated things with tools that have ODBC connections, I have been thinking: should we Web analysts consider adding to our skill set? Primarily, should we begin to add programming abilities? Things like APIs are great, but only if you have the ability to create applications that access them. Should we Web analysts start learning languages and technologies like PHP, SOAP, and XML so that we can create our own applications?

Also, most popular Web analytics technologies are based upon JavaScript (from the implementation side, anyway). So a better understanding of JavaScript would most likely benefit us all, and could open some doors to better Web analytics opportunities for those not already proficient with it.

I think that we Web analysts should be immersing ourselves in programming so that we become more than just analysts and the users of tools like Omniture, GA, WebTrends, etc. I for one will be trying to pick up the following skills in 2009:

  • PHP/SOAP – for the purpose of programming with Web APIs and creating new applications for analytics and online marketing
  • JavaScript – I’m already decent with JS, but would like to be able to do some more advanced things for analytics
  • SQL/MySQL – for the purpose of querying Oracle, SQL Server, and MySQL databases

Are there any other skills that you think would benefit Web analysts? What additional skills are you trying to pick up on your own this next year?

Omniture API Development Contest

With all of the APIs that are out there for Google, Facebook, Twitter, etc., I can’t say that I’m surprised to see that Omniture has now started a contest to see what creative developers can do with the Omniture APIs. I received an e-mail the other day stating that first prize would be $10,000! That’s right, $10K. The interesting thing here is that the deadline for entry is some time in early February, before the 2009 Summit, where the winners will be announced. That is not a lot of time to develop something new if you haven’t already started.

I think that there’s a real opportunity here for some developers given the short time frame and what I would think would be a limited amount of competition. Developing something like this and winning a prize for it would also be a great career booster and way to get noticed for the use of product APIs in a Web 2.0 crazy world where every other developer on the planet has created some kind of Twitter application! Man, I’ve got to get back into some programming so that I can start using these APIs myself!

Omniture Dashboard Speed & Dates

This post is just to note a couple of things that I have discovered recently about Omniture dashboards. I hope that this might be of help to some of you that use Omniture.

Faster Omniture Dashboards

File this one under what is most likely common sense. I have seen many Omniture SiteCatalyst dashboards take forever and a day to run, or return the “unable to retrieve data” message. At first I thought this might be because I had seen it most often on dashboards for Omniture variables that were using 20–30 classifications. Maybe using that many classifications slowed everything down? But no, it was really just the number of metrics we had in each reportlet. I tried recreating dashboards with only revenue and, voila, the dashboards ran in no time. The downside here is that a dashboard is only so useful with a single metric. If you are experiencing problems with slow dashboards, you might want to reduce the number of metrics in your reportlets (maybe to just two or three) until they run in a reasonable amount of time. Adding calculated metrics is also a significant factor in slowing down or killing Omniture dashboards. Of course, if the dashboard is automated via email, you can add everything you like and the whole thing will be emailed perfectly fine.

Omniture Dashboard Dates

Just another Omniture dashboard experience that I thought I’d share. We have several dashboards that are set up so that each reportlet reports on ranked data for the last 30 days. Last 30 days was chosen since the current month would not be all that useful on the first of the month. One of the great advantages of dashboards in SiteCatalyst 14 (as opposed to earlier versions) is that you can change the date for all reportlets in a dashboard at the same time, which makes the dashboards much more useful. So, someone requested the executive dashboard for a custom date range. Knowing that you can do this in SiteCatalyst 14, I changed the date range to the custom one requested. Everything ran great, so I sent the dashboard to the person via the email function within SiteCatalyst 14. However, the dashboard that the person received was stuck on the default last-30-days range with which the dashboard was originally created. So, just be aware that while you can change Omniture dashboard dates to custom ranges, the emailed results will use the dashboard’s default range as it was created. I confirmed this with Omniture; it is not a bug, but just the way it was designed.

Commitment and Site Optimization

Reading a recent blog post from Jeffrey Eisenberg (Realistic Expectations For Conversion Rate Optimization) made me once again think about how a lot of companies fail to really commit to testing and site optimization once they purchase a tool (Test&Target, SiteSpect, Optimost, Google Website Optimizer, etc.). Right now, I see site optimization where I saw Web analytics about 5 years ago in terms of tools and commitment.

A few years ago, businesses were ready to go out and buy the biggest and best Web analytics solution out there, without having any kind of dedicated resources to leverage the information or to ensure that any kind of best practices were being followed or developed. Now, many companies have dedicated Web analysts that can implement analytics solutions and help their businesses leverage the information contained within. Site optimization is, as I see it, about to explode (more than it already has) because companies appear ready to commit resources to the effort as opposed to just buying a solution and running with it.

Most companies do not dedicate any resources to actually making their existing Web sites better.

Most design and development efforts are concerned with developing new features or content. Instead, companies need to remember that they have a ton of content out there that could probably be performing better than it already is. After all, how often do any of us get something perfect on the first try (or the second for that matter)?

There are several things that a company can do to ensure that they are committed to optimizing their Web site:

  • Dedicate some of the time of your design and development teams to optimization.
  • Commit to designing at least 2 versions of everything that goes out. Make optimization a part of the design process (within reason of course). This is often a big challenge as designers see testing as just doubling their work.
  • Find a way to get everyone invested/interested. A lot of companies make the testing process an internal contest of sorts where everyone watches results in real time.
  • Pay your employees for coming up with ideas that improve conversion rates. After all, shouldn’t you be paying your employees to impact the bottom line anyway? Here, it’s measurable!
  • Realize that optimization and testing is just as important as your paid search and e-mail marketing efforts. All require an ongoing commitment in resources and effort.

Do you have any other thoughts on what companies can do to ensure that they are committed to site optimization and testing?

Creating a Hypothesis for Site Optimization

Creating a hypothesis should be one of the first things that you do when you start running A/B and multivariate tests on your Web site. Just because you have the keys to an optimization tool (even a free one like Google Website Optimizer) does not mean you should start out saying, “hey, let’s see if changing this button from ‘Add To Cart’ to ‘Buy Now’ works better!” It’s vital to understand that you need to start with a hypothesis and then set clear goals before you start testing.

Setting a hypothesis is not a difficult thing to do, and it will help you stay clear on exactly what you are trying to accomplish in running a test. Here are a few examples of what might be appropriate hypotheses:

  • By changing the button on our product details page, we expect that we will be able to increase the rate at which visitors add products to their carts.
  • If we can shorten our shopping cart process by one complete step, we can make it easier for customers to complete their purchase, thereby increasing conversion rate.
  • If we can provide more targeted information on our most popular landing pages, we can decrease bounce rates.
  • Maybe if we make it easier for visitors to use our internal search, visitors will more easily find products of interest, increasing conversion rate.

The common theme among all of these ideas/hypotheses is that none of them address, specifically, what will be done. This is the best way to start, because:

Creating and starting with a hypothesis frees you from simply testing graphics and content, enabling you to test your business ideas and site effectiveness (i.e., conversion).

It is the hypothesis that you should be taking to the rest of your team when asking for the best user experience and design ideas to prove your hypothesis. You should not let a designer alone be the one that starts the process of site optimization.

Creating a hypothesis also makes it easier to measure the results of site optimization. If you start with just a design that is going to simply be “better than the last,” there’s no clear way to measure that. For example, if you were to change how you present your internal search results, is your success measure conversion rate, add to cart rate, product views or maybe average order value? There’s no real answer here, and starting a test without a hypothesis will result in a lot of debate over what success is when it comes time to evaluate the test.

Your hypothesis should make it clear what you are trying to improve, so that everyone can agree upon the success measure in advance of the test.

So if you start your testing and site optimization with an appropriate hypothesis, your goals and the eventual evaluation of your success should more easily fall into place.