
Four Things to Think About When Building a Marketing Mix Model (MMM) In-House

Our Marketing Analytics team give an insight into how we built a Marketing Mix Model in-house.


Click-based attribution has become less reliable, with tracking partners restricting the data we can receive from channels, and this has undermined our last-touch attribution model. We therefore decided to develop a Marketing Mix Model (MMM) at Cleo, both as an alternative attribution method and to guide our monthly budget planning.

In this post, I'll share our experience in building an MMM in-house and what you need to know if you're planning to do the same.

What is MMM?

MMM (Marketing Mix Model) is a statistical approach to modelling the outcomes of marketing investment. It helps us understand the incremental impact of our marketing investment in each channel on the KPI, or target variable, that matters. This target variable can be anything your company cares about, such as revenue or total new users.
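
To make this concrete, here's a minimal, hypothetical sketch of the regression that sits at the heart of most MMMs: each channel's spend is carried over time (adstock), pushed through a saturating response curve, and combined with a baseline and seasonality to predict the KPI. The function names and parameters below are ours for illustration, not from any particular package.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Carry a share of past spend into each period (geometric adstock)."""
    carried = np.zeros_like(spend, dtype=float)
    for t in range(len(spend)):
        carried[t] = spend[t] + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

def saturate(spend, half_saturation):
    """Hill-style diminishing returns: the response flattens as spend grows."""
    return spend / (spend + half_saturation)

def predict_kpi(spend_by_channel, betas, baseline, seasonality):
    """KPI = baseline + seasonality + the saturated, adstocked effect of each channel.

    spend_by_channel: list of 1-D numpy arrays (spend per period, one array per channel).
    """
    effect = sum(
        beta * saturate(adstock(spend), half_saturation=spend.mean())
        for spend, beta in zip(spend_by_channel, betas)
    )
    return baseline + seasonality + effect
```

Packages like LightweightMMM and Robyn estimate the betas, carryover and saturation parameters from your historical spend and KPI data; those fitted parameters drive the use cases below.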

There are several use cases for MMM:

  1. Attribution - Measuring KPI/target variable contribution by channel: MMM attributes KPI contribution from each channel without relying on individual click or user-level data. 
  2. Efficiency Insights - Understanding channel response curve: How does ROAS (Return on Ad Spend) change as we spend more in each channel? Identifying the point of diminishing returns helps us understand the spend limit we should aim for in each channel, or make a decision to work on channel optimisation (see the sketch after this list).
  3. Budget Planning - Identifying how to best allocate marketing budget across channels: By understanding the incrementality and response curve of each channel, we can use MMM to find an optimal allocation (how much to spend in each channel) given a certain level of budget.
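
To illustrate the second use case, a fitted saturation curve lets you read off ROAS at different spend levels and spot the point of diminishing returns. A rough sketch with invented numbers for a single channel:

```python
import numpy as np

# Hypothetical fitted parameters for one channel (illustrative only).
beta = 50_000             # incremental revenue at full saturation
half_saturation = 20_000  # spend level at which the response reaches half of beta

spend_levels = np.linspace(1_000, 100_000, 200)
incremental_revenue = beta * spend_levels / (spend_levels + half_saturation)

roas = incremental_revenue / spend_levels                       # average return per $ spent
marginal_roas = np.gradient(incremental_revenue, spend_levels)  # return on the next $

# One rule of thumb: stop scaling spend once the marginal ROAS drops below 1.
below_one = spend_levels[marginal_roas < 1.0]
cap = below_one[0] if below_one.size else spend_levels[-1]
print(f"Diminishing-returns threshold (marginal ROAS < 1): ~${cap:,.0f}")
```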

The figure below provides an overview of how MMM works:

At Cleo, we use the MMM output as part of our monthly budget planning process. At the end of each month, we run the budget optimisation feature of the MMM to inform how we should allocate our budget across channels for the month ahead, as illustrated below. Once the month has closed, we also run the MMM to understand attribution for the previous month and make sure we're going in the right direction.
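
Under the hood, that monthly step is a constrained optimisation: allocate a fixed total budget across channels so that the sum of their fitted response curves is maximised. LightweightMMM ships its own budget optimiser for this; the scipy sketch below just illustrates the idea, with invented channel names and curve parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted response curves: revenue(spend) = beta * spend / (spend + half_sat)
channels = {
    "paid_social": {"beta": 80_000, "half_sat": 30_000},
    "paid_search": {"beta": 60_000, "half_sat": 15_000},
    "influencers": {"beta": 40_000, "half_sat": 25_000},
}
total_budget = 100_000

def negative_total_revenue(spends):
    # Minimising the negative total revenue maximises revenue.
    return -sum(
        p["beta"] * s / (s + p["half_sat"])
        for s, p in zip(spends, channels.values())
    )

result = minimize(
    negative_total_revenue,
    x0=np.full(len(channels), total_budget / len(channels)),  # start from an even split
    bounds=[(0, total_budget)] * len(channels),                # no negative spend
    constraints=[{"type": "eq", "fun": lambda s: s.sum() - total_budget}],  # spend it all
)

for name, spend in zip(channels, result.x):
    print(f"{name}: ${spend:,.0f}")
```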

You can see from the charts below how using MMM has changed our budget allocation per channel, and the KPI uplift we've seen as a result:

Buy vs Build 

Generally, there are three options to implement MMM:

  1. Buy from MMM providers
  2. Hire an agency to build MMM 
  3. Build it in-house

There’s no “best” option, as it will depend on your company’s situation. Hiring an agency or buying an MMM product would be a straightforward solution for most startups. But some things you might want to consider include the capacity and domain expertise of your data and marketing teams, your data/reporting maturity, your timeline, and your long-term marketing plan.

We eventually decided to build it in-house for several reasons:

  • We already had good data foundations within marketing at Cleo. We had previously pulled data from various sources (channels, attribution partners, apps, etc.) and built robust datasets and an extensive reporting suite for analysis.
  • We wanted to have flexibility as well as build our domain understanding internally. We see this as a project for today but also a foundation for the long-term as we scale our marketing spend.
  • We were also growing our data team with additional Analytics Engineers and Marketing Analysts, therefore increasing our data team's capacity.
  • Our Head of Growth Marketing had prior experience with using MMM. He provided us with the initial knowledge we needed to get started.

To develop our MMM, we looked into LightweightMMM and Robyn (both open-source packages). Their features are fairly similar, but we decided to go with LightweightMMM because we have more Python experts in our data team. Here's some basic info you might want to consider if you need to choose between these two packages:
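
For a flavour of what the modelling code looks like, here's a rough sketch of fitting with LightweightMMM. We're writing this from memory, so treat the exact argument names (e.g. media_prior, number_warmup) as approximate and check the package's own examples; the input variables are placeholders.

```python
import jax.numpy as jnp
from lightweight_mmm import lightweight_mmm, preprocessing

# Placeholders: weekly media data (time x channels), total cost per channel, and the KPI.
# media_data, costs, target = ...

# Scale the inputs so channels with very different spend levels are comparable.
media_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean)
target_scaler = preprocessing.CustomScaler(divide_operation=jnp.mean)
media_scaled = media_scaler.fit_transform(media_data)
target_scaled = target_scaler.fit_transform(target)

# "hill_adstock" combines carryover (adstock) with a saturating response curve.
mmm = lightweight_mmm.LightweightMMM(model_name="hill_adstock")
mmm.fit(
    media=media_scaled,
    media_prior=costs,   # cost per channel, used to set the priors (name may differ by version)
    target=target_scaled,
    number_warmup=1000,
    number_samples=1000,
)
```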

Things to Think About When Building MMM In-House

Building an MMM wasn't smooth sailing; we encountered several challenges that needed addressing. Here are four challenges you're likely to face when building an MMM yourself, along with some tips on how we tackled them.

  1. How good is the data?

Any predictive model is only as good as the data you feed into it, and MMM is no exception. To ensure your model has a good set of explanatory variables, you'll need:

  • Complete channel data: If you don't have complete spend or impression data for all your marketing channels, the model won't be able to perform attribution correctly because it's missing necessary inputs.
  • Seasonality/external data: If your KPI/target variable is highly sensitive to external factors or seasonality, you'll also need to include these as additional regressors. Otherwise you won't get valid results, and the model will struggle to achieve good predictive power (see the sketch after this list).
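
For the seasonality point, most packages let you pass these alongside spend as extra regressors (LightweightMMM, for example, accepts them as extra features when fitting). One common way to encode smooth yearly seasonality is a set of Fourier terms; the helper below is our own illustration.

```python
import numpy as np
import pandas as pd

def fourier_seasonality(dates, period_days=365.25, order=2):
    """Sine/cosine regressors that let the model capture smooth yearly seasonality."""
    days = (pd.to_datetime(dates) - pd.Timestamp("2020-01-01")).days.to_numpy()
    features = {}
    for k in range(1, order + 1):
        features[f"sin_{k}"] = np.sin(2 * np.pi * k * days / period_days)
        features[f"cos_{k}"] = np.cos(2 * np.pi * k * days / period_days)
    return pd.DataFrame(features, index=pd.to_datetime(dates))

# Combine these with any external series you have (paydays, promotions, holidays)
# and pass them to the model as additional regressors.
```
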
  2. Multicollinearity

Performance marketers are very data-driven. When their channel performs well, they put more money into it, so it's not uncommon for several channels' spend to show a collinear trend. As MMM compares spend movement vs. KPI change to try to predict the incrementality of each channel, this multicollinearity creates an issue.
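
Before fitting anything, it's worth checking how bad the problem is, for example by looking at pairwise spend correlations and variance inflation factors (VIFs) across channels. A small pandas/statsmodels sketch, where spend_df is a placeholder DataFrame with one column per channel:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# spend_df: one row per week, one column per channel's spend (placeholder).
corr = spend_df.corr()
print(corr.round(2))  # pairwise correlations above ~0.8 are a warning sign

# VIF well above ~5 means a channel's spend is largely explained by the other channels,
# so the model will struggle to separate their effects.
X = sm.add_constant(spend_df)
vifs = pd.Series(
    [variance_inflation_factor(X.to_numpy(dtype=float), i) for i in range(1, X.shape[1])],
    index=spend_df.columns,
)
print(vifs.round(1))
```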

Some ways we tackled this issue:

  • A quick fix was to merge the collinear channels into one group. However, this isn't ideal and isn't how we want to use MMM effectively.
  • Communicating with channel managers about this issue and creating a plan to ramp up/down channel spend at different levels or periods.
  • Testing, testing, testing. Incrementality/lift testing helps to calibrate the model and makes multicollinearity issues less likely to occur. 
  3. Small channels with occasional spend

MMM doesn't work well for channels with many periods of zero spend. This is because MMM relies on the variability of spend or impression data from each channel to understand how it's driving the output KPI, so a lack of spend variability prevents the model from making comparisons across different spend levels for the channel.

There are a couple of options to deal with this:

  • If you have multiple channels with periods of $0 spend, try merging them together to create a group with non-zero spend in most periods (see the sketch after this list).
  • Alternatively, if you have a click-based attribution model or any other way to understand how many users or how much revenue comes from a channel, you can exclude it from the MMM completely. This isn't ideal, but it offers a workaround for this situation.
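
As a concrete illustration of the first option, you can check what share of periods each channel spent nothing in, and fold the sparse ones into a single combined group before fitting (thresholds and column names below are invented):

```python
import pandas as pd

# spend_df: one row per week, one column per channel's spend (placeholder).
zero_share = (spend_df == 0).mean()          # fraction of weeks with no spend, per channel
sparse = zero_share[zero_share > 0.5].index  # channels dark more than half the time

merged = spend_df.drop(columns=sparse)
if len(sparse) > 0:
    merged["small_channels_combined"] = spend_df[sparse].sum(axis=1)

print(zero_share.sort_values(ascending=False).round(2))
```
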
  4. Trust in your MMM findings and recommendations

The tricky part with any attribution method is that you'll never know the truth… at least not until you run an incrementality test! You might be able to evaluate model quality with metrics like MAPE (Mean Absolute Percentage Error, the mean absolute percentage difference between the model's predictions and the actual values) or the r_hat statistic (which measures the convergence, and therefore reliability, of your Bayesian model; you want it close to 1 and no greater than 1.1), but these aren't a substitute for knowing the truth about how much to spend on each channel.
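
MAPE is easy to compute yourself against a holdout window, and Bayesian MMM packages typically report r_hat as part of their sampler diagnostics (ArviZ is one common way to inspect it). A quick sketch of the MAPE check, with placeholder variable names:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error between actuals and model predictions."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

# e.g. compare predictions for a holdout window against what actually happened:
# print(f"MAPE: {mape(holdout_actuals, holdout_predictions):.1f}%")
```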

So, while MMM gives you the comfort of having your attribution and budget decision backed by data and model, you can never be 100% sure that its attribution is the truth and its budget allocation recommendation is optimal to the cent.

Here are a couple of observations from the MMM we developed:

  • It actually showed attribution broadly similar to our click-based attribution. But is this good? Maybe… that's why running a proper lift test is necessary.
  • At some point, the model told us that we had no organic LTV at all… which did not sound right! When we dug further, we realised this was because we didn't have enough explanatory variables to account for our organic contribution. Once we added an additional regressor and reduced the spend prior, it finally recognised that some users came organically. So don't forget to apply common sense to fine-tune your model.

Final Thoughts

It’s not enough to just build an MMM; you have to run experiments and find a balance between relying on MMM and occasionally running tests. Although incrementality tests can sometimes be impractical, they're still the only way to calibrate and validate your MMM.

If you're interested in learning more about incrementality tests, we've published a post about our geo testing approach here.

Interested in joining us at Cleo?
