Feb 18, 2014

The Need to Experiment

Marketing in all its forms is an inexact process driven largely by complex variables. When someone goes searching for the right answer to their marketing, they are asking the wrong question. There are no right or wrong answers; there are simply different answers. So let's look at why this is so.

The basic concept of marketing is creating influential communications with an audience of interest. The challenge is that nobody knows exactly what will influence the audience, and what influences the audience is constantly changing. Marketing techniques that created great results a few years ago are now dated and no longer draw a response. This is what creates this marketing rule:

Rule 1.01 – Experiment & evolve or die

Any product or service has a number of value statements, and each of those will appeal differently to different audiences; this is what we are constantly testing. The need to experiment is at the core of the marketing industry because the market keeps adapting. There was a time when getting 100 people into a webinar was simple because the technology was exciting and cutting edge. Today people get a dozen of these invites each day and delete 99% of those they receive. This attribute was best described by Seth Godin with his example of the Purple Cow. As long as the Purple Cow is different and attention-getting, being the Purple Cow works; but once the market adapts, it stops responding to the Purple Cow. It is this adaptation in the market that drives our need to never stop conducting experiments.

Setting up the experiment

Wikipedia defines an experiment as “an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis.” To set up an experiment we need that orderly procedure, so we must isolate exactly what we are trying to prove or disprove. This is more difficult than you would think because the marketing environment is constantly evolving.

Testing the margin of error

One of the first things we need to test is the margin of error, which is bigger than you think. We have run many tests with two sets of identical ad copy running in the same ad group, with the same campaign, keywords, and budgets, and the results still vary. What we have found is that with a response rate in the 2%-5% range and all other variables the same, it is common for results to differ by about 0.25%. This means that a 3% CTR is effectively equal to any other CTR from 2.75% to 3.25%.
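To see why, you can compute the statistical noise around a CTR directly. Here is a minimal sketch using the normal approximation for a proportion; the click and impression counts are hypothetical examples, not figures from our tests:

```python
import math

def ctr_interval(clicks, impressions, z=1.96):
    """Approximate 95% confidence interval for a click-through rate,
    using the normal approximation to the binomial."""
    p = clicks / impressions
    se = math.sqrt(p * (1 - p) / impressions)  # standard error of a proportion
    return p - z * se, p + z * se

# Hypothetical numbers: 150 clicks on 5,000 impressions is a 3% CTR
low, high = ctr_interval(150, 5000)
print(f"3.00% CTR, 95% interval: {low:.2%} to {high:.2%}")  # ~2.53% to ~3.47%
```

Under this approximation the band narrows as impressions grow; at roughly 20,000 impressions it comes down to about ±0.24%, close to the ±0.25% drift we see in practice.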

Reaching statistical validity

An experiment with too little data is simply an invalid result. Several years ago I read an AdWords book that discussed a test with 11 clicks over one night across 4 different ads, and the book claimed to have learned something from that. Let's start by saying that is just stupid, and state that it takes a lot more data than you want to reach a dependable number.

While there is some great math that you can do here, we use a simple guideline called the rule of 4. Simply stated, if you expect an event to happen 4 times and it has not, then it probably never will. If you expect an event to happen four times and it does, then there is roughly an 80% probability that it will continue to produce the same result. To use the old coin toss example from every entry-level stats course: if you get heads 4 times in a row, it is probably a two-headed coin. If I am testing a keyword for conversion and it has converted four times in the test data, then it is likely that it will continue to convert. Getting to the conversion rate is going to take a lot more data, because when you shift from a yes/no experiment to validating a rate the numbers get real big real fast.
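A quick sketch makes both points concrete: four heads from a fair coin is only a 6.25% event, and pinning down an actual rate takes orders of magnitude more data. The target rates and margins below are hypothetical examples, not rules:

```python
import math

# Four heads in a row from a fair coin happens only (1/2)**4 = 6.25%
# of the time, so four-for-four is unlikely to be pure luck.
print(f"P(4 heads | fair coin) = {0.5 ** 4:.2%}")  # 6.25%

def sample_size(p, margin, z=1.96):
    """Observations needed to estimate a rate p to within +/- margin
    at ~95% confidence (normal approximation)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Hypothetical targets: pinning a 3% conversion rate down to
# +/-0.5% versus +/-0.1% of absolute error.
print(sample_size(0.03, 0.005))  # 4472
print(sample_size(0.03, 0.001))  # 111791
```

Tightening the margin by a factor of five multiplies the required data by twenty-five, which is exactly how the numbers get big fast.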

Isolating the result

The key to any experiment is proving something worthwhile, and that means getting down as close as you can to a Boolean outcome (true or false, yes or no, 0 or 1). This is more difficult than you would think, because you can only change one value at a time. If you change two words in the headline when your experiment is to test whether one word improves conversions, you have designed failure into your experiment by changing the question.
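Once you have a single changed variable and a Boolean outcome (converted or not), a standard two-proportion z-test is one way to judge whether the difference beats the margin of error. This is a minimal sketch with hypothetical conversion counts, not our internal tooling:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate differ
    from variant A's by more than the margin of error?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: two ads identical except one changed headline word.
z = two_proportion_z(conv_a=120, n_a=4000, conv_b=150, n_b=4000)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at ~95%
```

Here z comes out around 1.86, short of the 1.96 cutoff, so even a 120-versus-150 conversion gap on 4,000 impressions each has not yet proven anything; the test keeps running.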

Understanding that some things are outside your control

In Internet marketing there are things outside of your control, which means your outcomes are clues, not facts. A great example of this is a project that we did for a snow-removal business in the Northeast. The client wanted a specific outcome, but everything we tried failed; that is, until it snowed. The weather is clearly outside of Google's control, at least for now, so the data can tell you lots of things, but it cannot make it snow.

Be careful of what you think you know

Getting from effect to cause is a path filled with danger. Let me explain this with a true story that happened to us several years ago. We were working with a client on an image ad campaign to be launched on a Monday morning for branding purposes. The client produced a beautiful set of image ads, and we had done the keyword research to get good placements in the display network. On Monday at 10am we activated the campaign, and within an hour we had over a million impressions and the site traffic spiked like it had been mentioned on Oprah. The measurement we had agreed on was an increase in brand searches, because the ads were designed for brand impressions, not a call to action. As we expected, brand searches spiked in step with the impression volume we were buying, and for the next few days everyone was very happy and we were congratulating each other on a job well done.

We had found the right combination of ad copy, creative, and placement to really move the needle. However (you knew there was going to be a however), what really happened was discovered about 4 days later. Six months before the launch, a PR firm that was no longer working for the client had placed an article, and that article hit a major publication and drove the spike in traffic. We had our eyes on the wrong data, and we were viewing it with an assumption about the traffic source in our heads, but we were very wrong.

Budgeting for experimentation

While the need to experiment is clear, the right level is not, because you need to allocate some budget to things you know are going to work so that you have the budget you need to test what you do not know. This is a matter of setting your risk aversion, and it changes over time. In an early-stage business your budget for experimentation could be as high as 100%, because you know very little about what works in your business. Over time you will learn from results, and this could allow you to push the known items to as much as 90%, but you have to resist the desire to hit 100%. If you quit testing you stop learning, and your competitors will out-think you.
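As a trivial illustration of that cap, here is a sketch of the split; the function name and dollar amounts are made up for the example:

```python
def split_budget(total, known_share):
    """Split monthly spend between proven tactics and experiments.
    known_share is capped at 0.90: hitting 100% means learning stops."""
    known_share = min(known_share, 0.90)
    proven = total * known_share
    return {"proven": proven, "experiments": total - proven}

# Hypothetical $10,000 monthly budget for a mature account;
# asking for 95% proven still leaves 10% for experiments.
print(split_budget(10_000, 0.95))  # {'proven': 9000.0, 'experiments': 1000.0}
```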

Never Stop Experimenting

As AdWords Experts, we recommend that all clients conceive and test new ideas and seek the next Purple Cow. Do it in balance with the other parts of your strategy, but never, never, never stop experimenting.