
5 Ways to Improve Forecasting in Marketing with Agile

Written by Alex Novkov | Feb 10, 2021 8:15:41 AM


Even though humans are notoriously terrible at predicting the future, playing the prediction game is an inevitable part of any marketer’s role.

Whether working in an agency environment or serving stakeholders in a matrixed organization, we're constantly asked to look into the future and give prognoses about the outcomes and output of what we are working on today.

Faced with these requests, most of us turn to estimation (or, in the worst cases, guesstimation) to appease the stakeholders who rely on us. We give our answers and hope for the best.

If you want to consistently come closer to hitting the bullseye with your predictions, there's a better, more data-driven way: forecasting, guesstimation's more responsible cousin.

Unlike estimation, forecasting relies on historical data generated by the team to develop statistical probabilities. In this way, forecasts answer the common, crucial question -- when will it be done? -- and have a higher chance of actually being correct.

 

In this article, we'll arm you with five Agile forecasting methods to help you swap guesstimation for more accurate forecasts about your marketing tasks and campaigns. 

Forecasting in Marketing vs. Estimating

When planning our work and considering how long it will take to complete, we have two options: to make a prediction based on estimation or to offer up a forecast. Although forecasting and estimating might seem similar in theory, there's quite a difference between the two in practice.

Estimating is a technique for predicting future outcomes based mostly on judgment or opinion. It may or may not rely on historical data, and its accuracy depends heavily on the experience of the person making the estimate.

Forecasting is a prognosis about the future completion of work items based on historical data that includes both date range and probability. This approach assumes that the future is uncertain. Therefore, we can’t be 100% sure about future events and have to consider the probability of making an accurate prediction.

Even in how we phrase our estimations and forecasts, the difference between the two methods of answering the vital question “When will it be done?” shines through.

Estimation has the tendency to appear simpler. It sounds like, “based on the required effort, we will likely be ready by March 5th.”

On the other hand, a forecast appears more scientific. It sounds like, “based on the historical data we’ve collected, there's an 80% chance that we'll be ready by March 5th, a 90% chance by March 12th, and a 99% chance by March 21st.”

In the marketing function, forecasting can be quite a bit more complex than estimating because it requires frequent data gathering and the calculation of probability. However, when done correctly, it can bring peace of mind when committing to deadlines and planning our work, because forecasts rest not on wishful thinking but on data from the team.

With this approach in mind, there are several improvements we can make to our planning process to improve the accuracy of our forecasting.

Keep a Close Eye on Process Metrics

As our definition of forecasting makes clear, data is the very foundation of using this technique to predict our team's future output. To accurately forecast how long it will take us to complete any work in our backlog, or how many backlog items we're likely to finish within a predefined time frame, we can use data collected from three important process metrics:

  • Throughput/Velocity
  • Lead time
  • Cycle time

Throughput shows us how many work items our team as a unit has completed within a predefined period of time. It's widely used by Kanban practitioners as a primary metric for tracking the consistency of the team's output. Its counterpart in Scrum is called velocity, which tracks the team's output as the number of story points completed within a single Sprint.

To define an accurate forecast using these metrics in the marketing context, we track their average values over time per type of work.

So, if we are asked to forecast how many blog posts we can create within a month without making capacity adjustments, we reference the average number of posts we’ve been delivering per month in the past. Then, we account for potential problems and forecast a probable number of articles based on the likelihood of achieving this output.
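To make this concrete, here's a minimal Python sketch of how such a throughput-based forecast could be put together. The monthly counts are purely illustrative; in practice they would come straight from your team's tracking tool.

```python
# Minimal sketch: forecasting next month's blog post output from past throughput.
# The monthly counts below are illustrative, not real team data.
from statistics import mean, stdev

# Completed blog posts per month over the past six months (hypothetical)
monthly_throughput = [6, 8, 7, 5, 9, 7]

avg = mean(monthly_throughput)
spread = stdev(monthly_throughput)

# Subtracting one standard deviation gives a conservative figure that accounts
# for potential problems; adding it gives an optimistic one.
print(f"Average throughput: {avg:.1f} posts/month")
print(f"Likely range next month: {avg - spread:.1f} to {avg + spread:.1f} posts")
```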

Lead time and cycle time are process metrics that help us answer the question “when will it be done?” more accurately in terms of time, instead of pieces of work or story points.

Tracking lead time means we know how much time it takes to finish an assignment from the moment it lands in our backlog to the point of delivery to the customer. Cycle time covers the time frame from the moment the team begins processing a task to the moment it is delivered to the customer.

To make a forecast, we look at the average values of these metrics over a period of time.

Our interest lies in how much time it usually takes us to process a specific type of work. So, if the head of sales asks us “how much time would you need to create a case study about client X”, we can calculate the average lead time and cycle time of past case studies and give her the probable outcome.
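As a rough illustration, here's a minimal Python sketch of that calculation. It assumes we record the request, start, and delivery dates for each case study; the dates themselves are hypothetical.

```python
# Minimal sketch: average lead and cycle time for one type of work (case studies).
# The records and dates are hypothetical sample data.
from datetime import date
from statistics import mean

case_studies = [
    {"requested": date(2020, 9, 1),  "started": date(2020, 9, 8),   "delivered": date(2020, 9, 22)},
    {"requested": date(2020, 10, 5), "started": date(2020, 10, 9),  "delivered": date(2020, 10, 26)},
    {"requested": date(2020, 11, 2), "started": date(2020, 11, 10), "delivered": date(2020, 11, 24)},
]

# Lead time: backlog entry to delivery. Cycle time: start of work to delivery.
lead_times = [(c["delivered"] - c["requested"]).days for c in case_studies]
cycle_times = [(c["delivered"] - c["started"]).days for c in case_studies]

print(f"Average lead time:  {mean(lead_times):.0f} days")
print(f"Average cycle time: {mean(cycle_times):.0f} days")
```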

Apply Service Level Expectations

Armed with process data about the past output of our team, we can take forecasting to the next level by establishing service level expectations (SLEs) with our internal clients or external stakeholders.

As the name suggests, SLEs serve as universal forecasts about how long it is likely to take for our team to process a specific type of work from start to finish. Like any other forecast, a service level expectation consists of two parts: a period of elapsed time and a probability associated with that period.

SLEs take forecasting one step further by establishing clear boundaries and helping us react accordingly if there’s a danger of deviating from our forecasts. A typical SLE could be framed like this:

“We will deliver new white papers within 20 days of being requested with 90% probability, within 25 days with 95% probability, and within 30 days with 99% probability.”
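One way to arrive at numbers like these is to read them off the percentiles of our historical lead times. Here's a minimal Python sketch of that idea using a nearest-rank percentile over hypothetical data; the resulting day counts are exactly the figures we'd quote in an SLE like the one above.

```python
# Minimal sketch: deriving SLE thresholds from historical white paper lead times.
# The lead times (in days) are hypothetical sample data.
import math

lead_times_days = [14, 17, 18, 19, 20, 20, 21, 22, 24, 25, 26, 28, 29, 30, 31]

def nearest_rank_percentile(data, p):
    """Smallest observed value that at least p% of past items fit within."""
    data = sorted(data)
    rank = max(1, math.ceil(p / 100 * len(data)))
    return data[rank - 1]

for p in (90, 95, 99):
    days = nearest_rank_percentile(lead_times_days, p)
    print(f"{p}% of past white papers were delivered within {days} days")
```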

The word expectation is key here, because we're not making fixed formal commitments or promises. Instead, we're aiming to set realistic expectations based on probability with our stakeholders. While SLEs can act as a tool for stakeholder management, they're also focused on the team.

They help us keep work items that are in progress on track and may trigger changes in resourcing or capacity allocation to prevent stalled tasks that can put our initial forecasts at risk.

Apply User Story Mapping

So far, we have discussed forecasting in marketing on a granular task level. However, when we have to predict how long it will take to complete a whole marketing campaign that consists of several types of work, a forecast can become significantly more difficult to pin down.

To address this challenge, we can build on what we’ve already covered in this article by applying user story mapping. It enables us to create a dynamic outline of any upcoming marketing campaign and the buyer’s interactions with it over time.

Based on our map, we can break down our marketing user stories into actionable work items and plan to execute them at the moment when they're likely to make the greatest impact.

A user story map has two dimensions -- vertical and horizontal.

The horizontal dimension usually represents the steps a user takes toward becoming a satisfied customer through interaction with our campaigns.

The vertical dimension indicates priority and the order of our internal tasks that facilitate a successful campaign delivery.

By visualizing all the user stories and the smaller tasks that comprise them on the user story map, we can get a better understanding of the scope of a campaign before launch. As a result, we can gather the required information for making a forecast and combine the average values of the process metrics of each type of work easily. 

In addition, we can identify potential problems and account for them in our forecast, while also taking precautions early on.

All of this gives us a good foundation for predicting the probable time we'd need to process all the work items and deliver on the campaign goal.
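As a simple illustration of combining those averages, here's a hypothetical Python sketch that rolls the mapped work items up into a campaign-level figure. It assumes the items are processed one after another, which is a deliberate simplification; work done in parallel would shorten the calendar time.

```python
# Minimal sketch: rolling per-type average cycle times up into a rough
# campaign-level forecast. All work types, counts, and averages are hypothetical.
avg_cycle_time_days = {"blog post": 4, "email": 2, "landing page": 6, "case study": 12}

# Work items identified on the user story map for the upcoming campaign
campaign_items = ["blog post", "blog post", "email", "email", "landing page", "case study"]

# Sequential assumption: one item at a time, no parallel work
total_days = sum(avg_cycle_time_days[item] for item in campaign_items)
print(f"Rough forecast for the campaign: about {total_days} days of processing time")
```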

Apply Monte Carlo Simulations

Those analytical minds among us who like to work with a lot of data when trying to predict the future can improve their forecasting accuracy by applying Monte Carlo simulations.

This mathematical technique takes our historical process data and runs it through a large number of random simulations (e.g. 10,000+) to give us a range of probable outcomes, with probabilities from 1% to 100% depending on how frequently each outcome repeats.

Depending on the type of forecast we want to make, we can input past data for team throughput, task cycle time, or task lead time into the simulation algorithm. An advantage of using Monte Carlo simulations is that they can process large amounts of data very quickly and provide more statistically accurate forecasts.

The more times an outcome repeats, the more probable it is (and vice versa).
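To show the mechanics, here's a minimal Python sketch of a throughput-based Monte Carlo forecast: we repeatedly “replay” randomly chosen past weeks until a hypothetical backlog is empty, then read probabilities off the distribution of results. The throughput history and backlog size are made up for illustration; in practice you'd feed in your team's actual data.

```python
# Minimal sketch: Monte Carlo "when will it be done?" forecast from throughput data.
# The weekly throughput history and backlog size are hypothetical.
import random

weekly_throughput_history = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]  # items finished per week
backlog_size = 30                                           # items left to deliver
simulations = 10_000

results = []
for _ in range(simulations):
    remaining, weeks = backlog_size, 0
    while remaining > 0:
        # Randomly replay one of our past weeks
        remaining -= random.choice(weekly_throughput_history)
        weeks += 1
    results.append(weeks)

results.sort()
for confidence in (0.80, 0.90, 0.99):
    weeks_needed = results[int(confidence * simulations) - 1]
    print(f"{confidence:.0%} chance of finishing the backlog within {weeks_needed} weeks")
```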

They also allow us to test different hypothetical scenarios by simulating our probable output based on the team’s performance in different periods of time in the past.

In recent years, Monte Carlo simulations have gained popularity among Agile practitioners. As a result, a growing number of Agile management platforms have made them available or integrate with complementary platforms to provide this forecasting functionality.

Apply PDCA

PDCA is a cherished method for achieving continuous improvement within the team.

It consists of four steps that form a repeatable cycle for improving all of our processes, including forecasting. The steps of the cycle include planning, execution, reflection, and the definition of actionable takeaways for improvements:

  1. Plan
  2. Do
  3. Check
  4. Act

PDCA gives us the opportunity to improve our accuracy over time by following these four steps regularly. When it comes to improvement in our internal processes, our primary interest lies in the third and fourth stages of the cycle: check and act. 

When we plan our work, we forecast how long it will take to complete and then proceed to process it. When the task or campaign is delivered, our forecast will either prove true or fail to reflect reality.

In both cases, PDCA encourages the team to analyze and reflect on what transpired.

If things didn’t go as planned and we deviated from the initial forecast, we can identify and point out the reasons for that as a team at the “check” stage of PDCA. If we achieved better efficiency than expected, we have to use this to our advantage and gain an understanding of the “why” so we can replicate it in future iterations. 
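A lightweight way to support that reflection at the “check” stage is to compare forecast dates with what actually happened. The sketch below, built on hypothetical deliveries, computes a simple forecast hit rate the team could review together.

```python
# Minimal sketch for the "check" step: forecast vs. actual delivery dates.
# The items and dates are hypothetical sample data.
from datetime import date

deliveries = [
    {"item": "white paper",  "forecast": date(2021, 1, 20), "actual": date(2021, 1, 18)},
    {"item": "case study",   "forecast": date(2021, 1, 25), "actual": date(2021, 2, 1)},
    {"item": "landing page", "forecast": date(2021, 2, 5),  "actual": date(2021, 2, 4)},
]

on_time = sum(1 for d in deliveries if d["actual"] <= d["forecast"])
print(f"Forecast hit rate this cycle: {on_time / len(deliveries):.0%}")

for d in deliveries:
    slip = (d["actual"] - d["forecast"]).days
    status = "on time" if slip <= 0 else f"{slip} days late"
    print(f"  {d['item']}: {status}")
```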

If we want our forecasting to become more accurate in the future, whatever improvements we identify during the “check” phase of PDCA need to be implemented across the team's process as part of the “act” stage of this cyclical process.

Let’s Start Forecasting

Forecasting based on data is a powerful tool in the hands of any marketers who want to bring agility into every part of their work.

An analytical approach based on historical data has the potential not only to bring peace of mind to our stakeholders and the execution team, but also to improve the likelihood of delivering customer value in a predictable manner.

Monitoring a number of key process metrics including throughput, velocity, cycle time, and lead time diligently over time will help our team make the most of forecasting in the marketing context. With these data points at our fingertips, we can communicate better service level expectations with our internal and external stakeholders and create realistic forecasts even on the campaign level.

To ensure that our forecasts statistically reflect the most probable outputs of our work, we can put our team's historical data to work by running it all through thousands of random simulations with the help of the Monte Carlo simulation technique. 

Further, we can continuously improve the way we generate our forecasts by applying the PDCA cycle with emphasis on its later stages: check and act.

Ready to hit the bullseye with your next forecast? Just try any of these proven techniques and be amazed at how truly accurate your predictions will become.