There are two possible paths to releasing any marketing campaign into the world.
The first is the Big Bang approach, which treats marketing as a secret weapon that gets unleashed onto customers to generate a shockwave of response.
Except when it doesn’t.
But we’ll get into that in a moment.
The second is an experimental approach, which gathers preliminary feedback from customers before dedicating resources to any particular direction.
Marketers have a lot to gain from testing their assumptions through experimentation before going all-in on any given idea.
Which is why, in order to encourage marketing success, Agile ways of working support an iterative cycle of experimentation from ideation to execution.
Agile marketers can leverage concepts such as minimum viable campaigns, work breakdown structure, and metrics to prove or disprove the assumptions they generate internally and de-risk their next steps before the brands they promote suffer the consequences.
In this article, we dive into each of the crucial steps that make Agile experimentation so powerful and explore how marketers can make use of them in their daily work to decrease the chance of failure and increase the chance of success of their marketing projects.
What do Pepsi’s Live for Now, Nivea’s White is Purity, Monopoly for Millennials and the Peloton Christmas Ad from a few years ago have in common?
They’re all examples of expensive, Big Bang campaigns executed by the marketing teams and agencies of well-known brands.
They’re also all marketing campaigns that flopped in front of the very customers they were trying to serve. In fact, in Peloton’s case, their inability to capture their audience’s reality translated into a $1.5 billion loss in market value overnight.
From the outside looking in, all of these campaigns appear to have been executed without the use of experimentation. Let’s examine the evidence.
For starters, a simple customer survey or focus group would have quickly shown that audiences just weren’t here for it. A small experiment would have surfaced, at a fraction of the cost, the same reaction customers had en masse when these campaigns were released globally all at once.
All Big Bang campaigns follow this familiar pattern.
A great idea is born inside the organization, it passes through levels of approval within the corporate hierarchy, and a project team is assigned to execute it.
When the team is done executing the preconceived plan, they release the campaign and all of its collateral to a target audience and hope for the best. In some cases, internal assumptions are proven correct and the campaign achieves desired results.
In many more cases, however, Big Bang campaigns come off as tone deaf, forced, or just odd because they don’t reflect the customer’s perspective.
Like a risky coin toss, Big Bang campaigns rarely manage to deliver on all of our customers’ expectations because, put simply, no one asked them in the first place.
Breaking down campaigns into smaller, low-risk experiments is the solution that Agile presents to this all-too-common marketing problem.
Minimum viable campaigns unite two concepts that allow marketers to interact with customers before a big campaign launch without giving everything away in the process.
The MVP of a marketing campaign is “the smallest piece of marketing that successfully achieves its desired outcomes.” It represents the smallest subset of work we could release that still delivers value, but isn’t the complete piece of work.
A minimum viable campaign is also “the smallest thing our team could create or do to prove or disprove an assumption.”
With both of these definitions of an MVP at the center of our work, we’re able to gather customer feedback that influences the final version of our marketing campaign.
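As a rough illustration of how a team might hold both definitions in one place (the field names and values below are hypothetical, not part of any standard Agile toolkit), each minimum viable campaign can be written down as a small record that names the assumption, the smallest deliverable that tests it, and the evidence that would count as proof:

```python
from dataclasses import dataclass

@dataclass
class MinimumViableCampaign:
    """One small, releasable piece of marketing tied to a single assumption."""
    assumption: str       # the internal belief we want to prove or disprove
    deliverable: str      # the smallest piece of marketing we could release
    success_metric: str   # what we will measure once it is out in the market
    target: float         # the benchmark the metric must reach to count as "proven"

# Hypothetical example: one assumption behind a larger product-launch campaign
subject_line_test = MinimumViableCampaign(
    assumption="Customers respond better to benefit-led subject lines",
    deliverable="Two subject-line variants sent to 5% of the mailing list",
    success_metric="open rate",
    target=0.25,
)
```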
As an additional benefit, MVPs also allow us to deliver incremental value to our customers much sooner than if they had to wait for the entire resource-heavy Big Bang campaign to arrive in their inbox, at their doorstep, or on their screen.
From the perspective of both our internal marketing team and our customers, delivering MVPs is a win-win approach.
As an alternative to the traditional Big Bang approach, a large campaign, like a new product launch, can be broken down into much smaller actionable steps that deliver value to customers sooner and with an increased rate of success.
To construct a viable experiment, the team must consider a number of different factors that make customer testing conclusive, factors such as:
To construct a minimum experiment, the executing team must consider an array of other vital criteria, like:
The perfect MVP lives at the intersection of what is minimum and what is viable, so marketing teams must account for both to construct an appropriate experiment.
Once a series of minimum viable experiments has been defined, these can be mapped on a test roadmap which the team can use to stay on track. Agile teams will recognize this roadmap as the Pivot and Persevere diagram, as seen below.
The diagram’s name refers to the process teams use to move through their experiments and make decisions.
Based on the results of each experiment, the team can decide whether to pivot, meaning to change direction or tweak their initial assumption, or persevere, continuing in the same direction and allocating progressively more resources on the path towards a full-blown campaign.
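A minimal sketch of that decision loop, assuming each experiment on the roadmap carries its own benchmark (the experiments, numbers and thresholds below are purely illustrative):

```python
def pivot_or_persevere(measured: float, target: float) -> str:
    """Persevere when the result supports the assumption; otherwise pivot."""
    return "persevere" if measured >= target else "pivot"

# Walking a hypothetical test roadmap: (experiment, measured result, benchmark)
roadmap = [
    ("benefit-led subject line", 0.28, 0.25),  # open rate vs. target open rate
    ("pastel colour palette",    0.9,  2.0),   # % click-through uplift vs. target
    ("30-second teaser video",   3.1,  2.0),   # % click-through uplift vs. target
]

for experiment, measured, target in roadmap:
    print(f"{experiment}: {pivot_or_persevere(measured, target)}")
```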
Depending on the characteristics of the product, a pivot and persevere diagram for a new product launch may include any of the following experiments (and many more) to test proposed messaging, colors, design etc.:
Like any other work the execution team is doing, MVPs are also represented on the team’s visualized workflow (e.g., a Kanban board).
Experiments identified in the test roadmap will be broken down even further into granular tasks that can be tracked on the Kanban as the team makes progress on each MVP, from fleshing out the idea for the experiment to putting it out into the market and tracking how it performs in the wild.
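To make that breakdown concrete (the cards and column names here are invented for illustration), a single experiment from the roadmap might appear on the board as a handful of granular cards:

```python
# One experiment from the test roadmap, broken into granular Kanban cards.
# Cards and columns are illustrative; every team will have its own workflow.
experiment = "Subject-line test for the product-launch email"

cards = [
    {"task": "Draft two benefit-led subject-line variants",        "column": "Done"},
    {"task": "Build the 5% audience segment for the test send",    "column": "In Progress"},
    {"task": "Send both variants and collect open-rate data",      "column": "To Do"},
    {"task": "Review results and record the pivot/persevere call", "column": "To Do"},
]

for card in cards:
    print(f"[{card['column']}] {experiment}: {card['task']}")
```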
When it comes to MVPs, running experiments in the market is only one half of the testing process that high-performing Agile marketing teams must go through for best results.
In fact, the whole reason MVPs exist is so that they can be measured.
Without the right measurement tools and benchmarks in place, it can be very difficult to capture the data from all of our experiments and make data-driven decisions.
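As a simple sketch of what having measurement and benchmarks in place can look like for a single MVP (the counts below are invented), even raw send-and-response numbers are enough to turn an assumption into a pivot-or-persevere call:

```python
def conversion_rate(conversions: int, recipients: int) -> float:
    """Fraction of the test audience that took the desired action."""
    return conversions / recipients if recipients else 0.0

# Hypothetical results from a two-variant MVP test send
control = conversion_rate(conversions=180, recipients=2_500)  # current messaging
variant = conversion_rate(conversions=240, recipients=2_500)  # new messaging

uplift = (variant - control) / control
print(f"control: {control:.1%}, variant: {variant:.1%}, uplift: {uplift:+.0%}")
```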
There are many aspects of customer interaction with our experiments that we might want to measure, but only four elements that we can reliably measure using the digital marketing tools at our disposal. They are:
In order to gather the data that can help us make the right choice whether to pivot or persevere, our teams need to be certain that the following components are in place before the test goes out:
A large campaign born of a single experiment is, in almost all cases, still a Big Bang campaign. One experiment is simply not enough to gather the information we need.
A Big Bang campaign, in all its forms, is a dangerous mess of assumptions. In order to unravel and dissect its pieces and make an informed decision about what our customers want and need, we need to be prepared to put each of its parts to the test.
That is why, in order to reap the full benefits of iterative experiments, Agile marketing teams test a number of different assumptions that converge to bring their large marketing campaign to life.
Testing and measuring customer feedback on a campaign’s visuals, messaging, design and intent means running a series of experiments to prove or disprove each.
Only then can we confidently arrive at a version of our original idea that our customers truly believe in and support.