
Using Customers’ Forecasts to Improve Manufacturers’ Forecasts: Two Case Studies

More and more supply chains receive regular forecasts from their customers. Management and customers expect that these forecasts will help with supply and demand planning. Unfortunately, it’s complicated: the added value is unclear, and using these external forecasts comes at the cost of extra workload for planners. Based on SupChains’ experience with two recent projects, this article discusses various quantitative and qualitative techniques for evaluating your customers’ forecasts and how to utilize them to enrich your own.

10 min read · Oct 15, 2025


For these case studies, we assume that we receive order forecasts from our customers. In other words, they forecast their own supply orders toward us.

Customers’ Forecasts

Over the last few years, I have had the pleasure of working with supply chains that have provided me with more and more data and insights to create forecasts: promotional and pricing calendars, sell-out and point-of-sale inventory, future confirmed orders, and, finally, customers' forecasts. Historically, using this data in a statistical forecasting model would have been close to impossible, so it was often left to planners to refine forecasts manually. Unfortunately, humans are biased, and manual processing is expensive. Thankfully, modern machine learning models make it easy to incorporate these insights into a single, unified global model.

Customers' forecasts are a rather unique data source, as they rely on collaborating with the supply planning process of an external party. This is quite different from using sell-out data, which is an actual, official figure rather than the output of your customers' planning process (which might be biased). To put it differently, pricing, discounts, and sell-out are tangible, actual numbers, whereas your customers' forecasts are what they think the future will be.

In 2025, SupChains delivered two different forecasting projects, each relying on customer forecasts. The first project tested including customers' forecasts in the forecasting model as features. The second focused on how planners could enhance their forecasts by incorporating their customers' forecasts.

Let’s take a closer look at each case.

Case Study 1 — Automotive Spare-Parts Manufacturer

A leading European automotive spare-parts manufacturer wanted to assess the added value of using one of its distributors' data as an input to a machine learning forecasting engine. The scope included around 3,500 products and historical sales data dating back to 2017. The main customer (referred to as the distributor) shared its sales, inventory, and forecasts with the manufacturer.

Customer Data

Before jumping to the results, let’s take a moment to review the data the distributor shared.

[Chart: distributor sell-in, sell-out, and reported inventory over time]

Visually, the inventory data reported by the customer is inconsistent with the sales and sell-out data.
We should observe,

∆ Inventory = Sell-in − Sell-out

But we don't. See the second significant inventory increase at the start of the time series: it doesn't coincide with a gap between sell-in and sell-out. Then, for nearly six months, sell-out is higher than sell-in, yet the inventory doesn't decrease. One inventory report is also missing.
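To make this kind of sanity check concrete, here is a minimal Python sketch of how it could be automated. The data and column names are illustrative, not the actual schema shared by the distributor.

```python
import pandas as pd

# Hypothetical monthly data; column names are illustrative, not the
# actual schema shared by the distributor.
df = pd.DataFrame({
    "month":     pd.period_range("2023-01", periods=6, freq="M"),
    "sell_in":   [100, 120, 90, 110, 95, 105],
    "sell_out":  [95, 100, 85, 130, 100, 90],
    "inventory": [50, 70, 75, 55, 50, 65],
})

# Month-over-month inventory change should equal sell-in minus sell-out.
df["delta_inventory"] = df["inventory"].diff()
df["expected_delta"] = df["sell_in"] - df["sell_out"]
df["gap"] = df["delta_inventory"] - df["expected_delta"]

# Flag months where the reported data violates the identity (beyond rounding).
inconsistent = df[df["gap"].abs() > 1]
print(inconsistent[["month", "delta_inventory", "expected_delta", "gap"]])
```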

Let’s now look at the forecasts shared by the distributor.

Figure 1 Forecasts shared by the distributor. Each grey time series is a different forecast submitted at a different point in time.

Unfortunately, we do not know how the distributor created these forecasts. But it seems that they follow some budget cycle (you can identify four different cycles in the figure): the overall volumes are stable within a given cycle (around 12 forecasts), then change drastically from one cycle to the next.

As with the inventory data, some forecasting cycles are missing. The distributor most likely forgot to share them.

Forecasting Evaluation

We decided to try out two different forecasting models: one using historical sales data only (ML Simple in the table below), and the second using all available customer information (referred to as ML Extended). To evaluate the models, we simulated various forecasting cycles following a specific calendar, as shown below,

Table 1 Example of evaluation calendar
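As an illustration of this kind of rolling-origin simulation, here is a minimal Python sketch. The `fit_and_forecast` placeholder stands in for any model (the actual SupChains engine is not shown); a 12-month moving average is used purely for illustration.

```python
import pandas as pd

def fit_and_forecast(history: pd.Series, horizon: int) -> pd.Series:
    # Placeholder model: repeat the 12-month moving average over the horizon.
    level = history.tail(12).mean()
    future = pd.period_range(history.index[-1] + 1, periods=horizon, freq="M")
    return pd.Series(level, index=future)

def backtest(actuals: pd.Series, origins: list, horizon: int = 6) -> pd.DataFrame:
    """Simulate one forecasting cycle per origin, using only data known then."""
    rows = []
    for origin in origins:
        history = actuals[actuals.index < origin]  # data available at cycle start
        forecast = fit_and_forecast(history, horizon)
        rows.append(pd.DataFrame({
            "origin": str(origin),
            "forecast": forecast,
            "actual": actuals.reindex(forecast.index),  # realized demand
        }))
    return pd.concat(rows)

# Example: monthly demand, with one simulated cycle per month in 2024-H2.
demand = pd.Series(range(100, 136),
                   index=pd.period_range("2022-01", periods=36, freq="M"))
cycles = list(pd.period_range("2024-06", periods=6, freq="M"))
print(backtest(demand, cycles).head())
```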

Here are the results we achieved,

[Table: Score comparison — distributor forecasts vs ML Simple vs ML Extended]

The distributor's forecasts got a Score of 102%, which is 13 to 16 points worse than our models. In other words, SupChains' forecasts are more than 10% better at predicting what the distributor will order than the distributor itself!

Unfortunately, the model using distributor data (sales, inventory, and forecasts) doesn’t seem to add much value compared to a simpler ML model. This slight difference (a 3% error reduction) may be due to random variations.

How do we measure Forecasting Quality?

To measure forecasting quality, we advise against relying solely on accuracy metrics (such as MAPE, MAE, or WMAPE).[1] Instead, we track both accuracy (using MAE) and bias, grouping them into a single metric, the Score (MAE + |Bias|), as recommended in Demand Forecasting Best Practices.
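For concreteness, here is a small Python sketch of this Score. Scaling by average actual demand is our assumption, made so the metric reads as a percentage like the figures quoted in this article.

```python
import numpy as np

def score(actuals: np.ndarray, forecasts: np.ndarray) -> float:
    """Score = MAE + |Bias|, scaled by average demand so it reads as a %.

    The scaling by mean(actuals) is an assumption to match the
    percentage figures quoted in this article.
    """
    errors = forecasts - actuals
    mae = np.mean(np.abs(errors))
    bias = np.mean(errors)  # signed error: captures systematic over/under-forecasting
    return (mae + abs(bias)) / np.mean(actuals)

# A forecast that systematically over-shoots is penalized twice.
actuals = np.array([100, 120, 90, 110])
forecasts = np.array([115, 130, 100, 125])
print(f"Score: {score(actuals, forecasts):.0%}")  # ≈ 24%
```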

How do we measure Forecasting Variability?

Variability measures how much forecasts (from the same model) change from one month to the next. Lower values indicate stable outputs from one planning cycle to the next. To compute the variability, we take the overlapping periods of two consecutive forecast sets and calculate the usual score metric (Score = MAE + |Bias|), treating one forecast as the actuals and the other as the prediction. We then scale the metric by the average of the two forecasts (only keeping the overlapping periods).
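And here is a minimal sketch of the variability computation, assuming each forecast set is a pandas Series indexed by month; the example numbers are hypothetical and mimic an "all-or-nothing" revision.

```python
import pandas as pd

def variability(prev_fcst: pd.Series, curr_fcst: pd.Series) -> float:
    """Score of one forecast set against the next, on overlapping periods."""
    overlap = prev_fcst.index.intersection(curr_fcst.index)
    a, b = prev_fcst[overlap], curr_fcst[overlap]
    errors = b - a                     # treat one set as actuals, one as prediction
    mae = errors.abs().mean()
    bias = errors.mean()
    scale = (a.mean() + b.mean()) / 2  # average of the two forecasts on the overlap
    return (mae + abs(bias)) / scale

# An "all-or-nothing" revision between two consecutive cycles scores very high.
prev = pd.Series([100, 100, 100], index=pd.period_range("2024-07", periods=3, freq="M"))
curr = pd.Series([0, 250, 0], index=pd.period_range("2024-08", periods=3, freq="M"))
print(f"Variability: {variability(prev, curr):.0%}")  # ≈ 133%
```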

[1] As explained in Data Science for Supply Chain Forecasting, tracking accuracy alone will mechanically promote under-forecasting.

Beyond the metrics, let’s examine one of the simulated forecasting cycles to gain insight into the potential added value of utilizing distributor forecasts.

[Chart: one simulated forecasting cycle — distributor forecast vs actual orders]

As you can see, the distributor's forecast is poor at predicting its own ups and downs (on top of delivering poor accuracy).

Recommendations

Unfortunately, the distributor's data is rather inconsistent, and its demand planning process is low-maturity, relating more to budgeting than to forecasting. Because of this, and supported by the quantitative analysis, we didn't recommend that the spare-parts manufacturer use the inputs coming from its distributor.


Case Study 2 — Chemical Manufacturer

SupChains partnered with Vantage, a US chemical manufacturer, to work on its overall demand planning capabilities. One of the first steps was to develop a new forecasting model using machine learning. As Vantage regularly received structured forecasts from its key customers, we decided to thoroughly analyze the forecasts of two of its main customers (referred to as Customer #1 and Customer #2) to see if and how we could use them to enrich Vantage's forecasts.

Future Confirmed Orders

Both customers typically confirm their orders well in advance of the delivery date. As shown in the chart below, Customer #1 confirms approximately 30% of its orders two months prior to delivery, while Customer #2 confirms around 20%. These confirmed future orders serve as a leading indicator of future demand and are fed into our machine learning engine to enhance forecasts.

[Chart: share of orders confirmed in advance of delivery, by months before delivery, for Customers #1 and #2]
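As a sketch of how such confirmed orders can be turned into model features, consider the following. The table layout and column names are hypothetical; the key point is that only orders confirmed before the cycle start are visible to the model.

```python
import pandas as pd

# Hypothetical confirmed order lines, each with a confirmation date
# and a requested delivery month.
orders = pd.DataFrame({
    "product":        ["A", "A", "B"],
    "confirmed_on":   pd.to_datetime(["2025-08-10", "2025-08-20", "2025-08-05"]),
    "delivery_month": pd.PeriodIndex(["2025-10", "2025-11", "2025-10"], freq="M"),
    "quantity":       [400, 150, 90],
})

cycle_start = pd.Timestamp("2025-09-01")  # the forecasting cycle we are running

# Only orders already confirmed when the cycle starts are visible to the model.
visible = orders[orders["confirmed_on"] < cycle_start]

# One feature per (product, delivery month): quantity already on the books.
confirmed_feature = (visible.groupby(["product", "delivery_month"])["quantity"]
                            .sum()
                            .rename("confirmed_qty"))
print(confirmed_feature)
```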

Collecting Customers’ Forecasts: A Cumbersome Process

Collecting customers' forecasts can be a time-consuming, manual, and at times error-prone task, even when the forecasts are shared using structured templates. Let's take a look at the steps we had to follow to process the forecasts for Customers #1 and #2.

· Align product naming and codes, as customers at times used their own (or outdated) nomenclature when sharing forecasts.

· Both customers only provide forecasts with a 2-month frozen horizon.[1] To provide accurate short-term forecasts, we have to enrich their forecasts using their firm orders (see the sketch after this list).

· Some of the forecasts were missing products.
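Here is a minimal sketch of that enrichment step. Overwriting the frozen months with the firm orders on the books is one simple, illustrative policy, not necessarily the exact rule used in the project.

```python
import pandas as pd

def enrich_frozen_horizon(customer_fcst: pd.Series,
                          firm_orders: pd.Series,
                          frozen_months: int = 2) -> pd.Series:
    """Replace the frozen short-term months with the firm orders on the books.

    Illustrative policy only; the exact enrichment rule used in the
    project is not documented here.
    """
    enriched = customer_fcst.copy()
    frozen_idx = enriched.index[:frozen_months]
    enriched.loc[frozen_idx] = firm_orders.reindex(frozen_idx).fillna(0).values
    return enriched

# Example: the customer repeats a frozen number, but firm orders tell another story.
months = pd.period_range("2025-11", periods=4, freq="M")
customer = pd.Series([200, 200, 180, 190], index=months)  # first 2 months frozen
firm = pd.Series([120, 260], index=months[:2])
print(enrich_frozen_horizon(customer, firm))
```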

Furthermore, as forecasts are received late in the month, they are always included in the following forecasting cycle. For example, if you receive your customers' forecasts in mid-October, you will use them in the next forecasting cycle, starting in November.

Let’s now review the quality of the forecasts we received from each customer.

Customer #1

Customer #1 is another US chemical manufacturer active in the personal care and beauty industry. They buy around 60 different products from Vantage.

To evaluate the quality of their forecasts, we compared them to a 12-month moving average and to our own machine learning model (which didn't use any of Customer #1's forecasts). We back-tested these models over nearly 12 different cycles and obtained the following results,

[1] Freezing forecasts is a bad forecasting practice. Forecasts should never be frozen.

[Table: Score and variability — Customer #1 forecasts (enriched with firm orders), 12-month moving average, and SupChains ML model]

This customer's forecasts (even when enriched with confirmed orders) don't beat our own model: we are 7% better at predicting their orders than they are! Customer #1's forecasts also suffer from an astonishing 136% variability.

Forecasting Variability

Grasping what a 136% variability means is difficult. One way to see what it looks like in practice is to compare different forecasting cycles for a single product.

As you can see in the following table, the customer tends to share “all-or-nothing” forecasts that can change drastically from one month to the next. On the other hand, our machine learning model delivers stable adjustments based on available data. For example, the SupChains forecast generated in the 2024–07 cycle is higher than the one from 2024–06, as the model just saw a 661-unit order in 2024–06. Note also that SupChains’ ML engine forecasts are consistently low for lag 1, as the model does not expect short-term orders if the customer has not yet provided a firm order.

Table 2 Analyzing different forecasting cycles as provided by Customer #1 and generated by SupChains forecasting engine.

You can also easily visualize variability by plotting consecutive forecasts.

Figure 2 Forecast Variability. The number next to each starting point denotes the variation compared to the previous forecast.
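If you want to reproduce this kind of chart, here is a minimal matplotlib sketch; the cycle data is hypothetical and mimics the "all-or-nothing" pattern shown in Table 2.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical consecutive forecast cycles for one product.
cycles = {
    "2024-06": pd.Series([600, 0, 0], index=pd.period_range("2024-07", periods=3, freq="M")),
    "2024-07": pd.Series([0, 650, 0], index=pd.period_range("2024-08", periods=3, freq="M")),
    "2024-08": pd.Series([0, 0, 700], index=pd.period_range("2024-09", periods=3, freq="M")),
}

fig, ax = plt.subplots()
for cycle, fcst in cycles.items():
    # Plot each cycle as its own line, starting at its submission month.
    ax.plot(fcst.index.to_timestamp(), fcst.values, marker="o", label=f"Cycle {cycle}")
ax.set_xlabel("Delivery month")
ax.set_ylabel("Forecast quantity")
ax.set_title("Consecutive forecast cycles for one product")
ax.legend()
plt.show()
```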

Customer #2

Customer #2 is also a US chemical manufacturer active in the personal care industry.

We evaluated Customer #2 over the same cycles as Customer #1, with a slightly smaller scope of around 45 active products.

[Table: Score and variability — Customer #2 forecasts and SupChains ML model]

The forecasts from Customer #2 are better than those from Customer #1, with a similar score to SupChains’ ML model, but with variability that is more than twice as high.

Demand Planners’ Concerns

Some planners may be concerned that we might miss some signals or sales opportunities by disregarding customer forecasts. But overall, and for both customers, SupChains’ model was (way) more optimistic about the future than the customers (as illustrated below for one of them).

[Chart: SupChains ML forecast vs one customer's forecast over time]

Recommendations

Insight-Driven Enrichments
At SupChains, we recommend enriching forecasts when demand planners have access to insights that the ML model doesn't have. Think insider information. For example, by contacting their customers, demand planners can learn new insights that they can use to enrich their forecasts. On the other hand, they shouldn't change forecasts based on a simple visual inspection: if they consistently manage to beat the model by visual inspection, the model should be upgraded.

Using customers' forecasts is a way to gain access to specific insights that the ML engine does not have. That is, if you can trust them.

Both forecasts and orders from Customer #1 are quite volatile, denoting low supply chain stability and maturity. Working on the overall supply chain stability of Customer #1 would go beyond a plan-sharing exercise. In the meantime, it is unclear how we could use their forecasts beyond capturing insights about new and upcoming products, if any (the products sold to Customer #1 are relatively stable). Capturing specific trends wouldn't be easy either, due to the extreme variability in their forecasts: looking at products with the highest variation from one cycle to the next would likely yield useless results.

Customer #2's forecasts were on par with SupChains' ML engine forecasts, but much more volatile. As collecting and processing these forecasts takes time, they shouldn't be used to override the baseline forecasts. However, demand planners could use them to help with specific new products or with products showing strong trends (that the ML engine hasn't captured yet).


Conclusions

Using customers' forecasts isn't straightforward. It takes time to collect and structure the data, which is at times inconsistent or missing. Moreover, not all supply chains can produce meaningful, stable supply plans, resulting in nervous forecasts that keep changing from one cycle to the next, with little actionable insight.

Receiving forecasts from your customers also often (but not always) comes with the expectation that you will deliver on these numbers as they were communicated to you. That is, unfortunately, an abusive supplier-customer relationship: customers expect the supplier to deliver on their forecasts while not providing any commitment on their end.

If customers want to collaborate with their suppliers, a simpler approach would be to commit to short- and mid-term firm orders (which ML forecast engines can use as leading indicators) or to minimal yearly volumes (which can also be fed to ML engines). Feeding future confirmed orders into a forecast engine is much easier than using customers' forecasts, while delivering higher and more consistent value. Beyond collaboration on planning and data-sharing, customers can also agree to order in smaller quantities to reduce artificial variability.

Before using any customer forecasts as inputs to your own, you should always first assess their consistency, variability, and added value compared to your baseline. Using high-variability customer forecasts is unlikely to add value; worse, it will result in a much higher workload, as planners spend time tracking and explaining the endless cycle-to-cycle changes.

Acknowledgments

Konrad Grondek, Brad Blasi, Vishal Bhavsar, Deshen Naidoo, Richard Maestas, Kenton Martin


Written by Nicolas Vandeput

Consultant, Trainer, Author. I reduce forecast error by 30% 📈 and inventory levels by 20% 📦. Contact me: linkedin.com/in/vandeputnicolas
