An End-to-End Supply Chain Optimization Case Study:
Part 2: Inventory Optimization
This is the second part of a supply chain optimization project. This second part focuses on inventory optimization. You can read the first part (about forecasting) here.
In this two-part proof of concept, SupChains and DragonRitter (a forecasting-as-a-service platform) helped a pharma distributor reduce its inventory while improving its service level. The POC showed a 25% forecast error reduction and an expected 40% reduction in excess inventory within six months, while securing higher service levels (>95%, compared to the current 80%). In addition, the model provided the client with a list of dead stock so they could take action immediately.
Based on these preliminary results (presented in November 2022), the client decided to use our forecast engine as of December 2022 and plan its purchases based on the model as of January 2023.
Our client is a pharma distributor active in Latin America, with a dozen active warehouses and around 10,000 unique products. In 2021, they reported a total revenue of $1B and employed more than a thousand people.
Like many supply chains worldwide, they suffer from both dead stock and shortages: despite stocking the equivalent of 45 days of sales, they only achieved a 60% fill rate in 2022 (up to 80% in the warehouse we used for the POC). They solicited the help of DragonRitter and SupChains to propose a joint end-to-end solution.
Our client’s objective is to increase their service levels while reducing their total inventory.
Our End-to-End Solution: Demand Forecasting and Inventory Optimization
To achieve this double objective of inventory reduction and service level increase, SupChains recommended a two-step approach:
- Improve the forecasting quality by implementing a machine learning forecasting engine.
- Compute adequate inventory levels capturing both demand and supply variability to achieve the desired service level targets.
In the first part of this business case, we presented our forecasting solution and its results — a 25% forecast error reduction (despite having little data).
Let’s continue with the inventory optimization engine. Business-wise, our objective is to assess how much inventory is required to achieve the desired fill rates. Once we have this target, we can evaluate the current inventory quality and how it will evolve based on our recommendations (stock targets and forecasts). Mathematically, we need to assess for each product-warehouse combination the optimal inventory level based on the safety stock required (to reach the service level targets) and the expected cycle stocks (based on the supply policies).
Inventory Quality. Percentage of the total inventory effectively required to achieve the desired service level. Technically, we segment inventory into cycle, safety, and excess stock and compute the quality ratio as inventory quality = (Cycle + Safety) / (Cycle + Safety + Excess)
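As a rough illustration of this definition (with made-up numbers, not the client's data), the quality ratio can be computed per product and aggregated:

```python
def inventory_quality(cycle: float, safety: float, excess: float) -> float:
    """Share of total inventory that is actually needed (cycle + safety).

    Illustrative helper -- not SupChains' production code.
    """
    total = cycle + safety + excess
    if total == 0:
        return 1.0  # no stock at all: nothing is in excess
    return (cycle + safety) / total

# Example: 300 units of cycle stock, 70 of safety stock, 630 of excess
print(round(inventory_quality(300, 70, 630), 2))  # 0.37
```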
To do so, we will:
- Analyze historical forecast errors.
- Use our inventory model to compute stock targets (based on historical forecast errors, fill rate targets, and lead times).
- Compare these targets against the current actual inventory levels (and upcoming deliveries).
- Simulate 26 weeks in a digital twin to see how inventory quality will evolve over time.
Let’s detail the steps we took to create our model.
SupChains’ Inventory Model
Creating an advanced inventory model requires much information:
- Historical forecast errors. To generate these, we used our forecasting model to produce two years of historical predictions.
- Expected supply lead times and suppliers’ reliability.
- Current inventory levels and expected deliveries.
- Fill rate targets.
- Future forecasts to simulate future inventory levels.
We used our forecast engine to compute historical and future forecasts, whereas the client provided the other pieces of information.
We created a dynamic inventory model targeting fill rates, based on custom forecast error distributions computed directly over each product's risk-horizon. (For more information, see Inventory Optimization: Models and Simulations.)
Let's detail these concepts one by one:
- Dynamic Targets. Our model adequately adapts its targets to the expected demand: higher forecasts usually mean that it is healthy to keep more safety stock. Alternatively, if the forecast is low (or declining), the model will reduce its stock targets.
- Forecast Error. Instead of looking at the demand deviation per period (and assuming normally distributed demand), the model looks at historical forecast errors to estimate future errors.
Looking at historical demand deviation instead of forecast errors is a typical mistake that should be avoided. In other words, you shouldn't look at demand variability (or CoV, the coefficient of variation) but at forecastability.
- Custom Forecast Error Distribution. The model does not assume normally distributed forecast errors. Instead, it creates custom error distributions for each product-warehouse combination.
Note that we do not recommend using the Poisson distribution, as it does not properly describe low-volume, erratic products. Moreover, it wouldn't apply to forecast errors (which can be negative, whereas a Poisson variable is non-negative).
- Risk-horizon. Instead of looking at the forecast error per week (per product per warehouse), we look at the error over the whole risk-horizon of each product (incoming supply lead-time plus its order review period). For example, if a product has a 3-week lead time with weekly orders, we will look at the forecast error distribution over four consecutive weeks. To put it differently, for each product, we look at what could possibly happen during the total risk-horizon.
Risk-Horizon. Maximum period of time that you need to wait to receive an order. During this period your inventory is at risk of being depleted. The term is coined in Inventory Optimization: Models and Simulations.
- Fill Rates. The model targets fill rates instead of the usual cycle service levels (which are used by most software vendors and the usual safety stock formula — more information about the difference here). Unfortunately, cycle service levels do not correlate well with business performance.
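The difference between the two metrics is easy to see on a toy example (illustrative numbers; these few lines are a sketch, not part of the production model):

```python
import numpy as np

demand = np.array([100, 120, 80, 150, 90])
served = np.array([100, 100, 80, 100, 90])  # stockouts in weeks 2 and 4

fill_rate = served.sum() / demand.sum()          # units served / units demanded
cycle_service_level = (served == demand).mean()  # share of periods without shortage

print(round(fill_rate, 3), cycle_service_level)  # 0.87 0.6
```

Two mild stockouts hurt the cycle service level badly (60%) while the fill rate stays at 87%, which is why the latter tracks business impact more closely.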
Even with these multiple features and refinements, the model still runs within a few minutes on a laptop.
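To make these ideas concrete, here is a simplified sketch of how a safety stock target could be derived from empirical forecast errors aggregated over the risk-horizon. All names and numbers are illustrative assumptions, and the quantile rule below is a simple proxy rather than SupChains' actual fill-rate-driven model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative data: 104 weeks of forecasts and actual demand for one product
forecast = rng.normal(100, 5, size=104).clip(min=0)
demand = (forecast + rng.normal(0, 20, size=104)).clip(min=0)
errors = demand - forecast  # positive error = under-forecast

# Risk-horizon: 3-week lead time + 1-week review period = 4 weeks.
# Sum errors over each rolling 4-week window: the empirical distribution
# of what can happen while you wait for the next order to arrive.
horizon = 4
horizon_errors = np.convolve(errors, np.ones(horizon), mode="valid")

# Non-parametric: no normality assumption. As a simple proxy, cover
# under-forecasting over the risk-horizon up to the 95th percentile.
safety_stock = max(np.quantile(horizon_errors, 0.95), 0)

print(f"Safety stock target: {safety_stock:.0f} units")
```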
Using our optimization engine, we ran a digital twin reproducing MRP behavior over a 6-month horizon (see the figure below). We could assess:
- How much inventory is required to achieve the target service level, and how this level fluctuates over time.
- The current inventory quality and how it will evolve in the future (as excess inventory gets depleted slowly).
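At its core, a digital twin of this kind replays an order-up-to policy week by week. The toy, single-product sketch below makes simplifying assumptions (deterministic lead time, flat demand, no supply constraints, illustrative numbers) and is not the production engine:

```python
from collections import deque

def simulate(initial_stock, demand, stock_target, lead_time=3):
    """Toy weekly order-up-to simulation for one product.

    Each week: receive due orders, serve demand (unmet demand is lost),
    then order up to the target based on the inventory position.
    """
    on_hand = initial_stock
    pipeline = deque([0] * lead_time)  # orders in transit, FIFO by week
    history = []
    for d in demand:
        on_hand += pipeline.popleft()       # receive this week's delivery
        on_hand -= min(on_hand, d)          # serve demand; shortages are lost
        position = on_hand + sum(pipeline)  # inventory position
        pipeline.append(max(stock_target - position, 0))  # place the order
        history.append(on_hand)
    return history

# 26 weeks of flat 100-unit demand, starting with heavy excess stock
weeks = simulate(initial_stock=1_000, demand=[100] * 26, stock_target=450)
print(weeks[0], weeks[-1])  # prints: 900 150
```

The trajectory shows the same qualitative behavior as the case study: excess inventory is eaten away by demand until the stock settles around its target-driven level.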
The model starts by replenishing all the missing goods (see the peak of deliveries in mid-November) — the immediate effect is that product shortages and lost sales are essentially stopped (assuming that supply is not constrained).
Then, excess inventory is reduced over time as it is slowly consumed by the forecasted demand (from around 700,000 units to about 425,000 units). Part of the excess inventory can be considered dead stock: natural consumption will never absorb the current stocks in the foreseeable future. Instead, you might have to collaborate with the sales team to get rid of these products.
Looking at the inventory quality, it starts low at 37% (in other words, only 37% of the starting inventory is actually needed to reach the desired service level). After six months, the inventory quality should be much higher at 60%.
As shown in the figure below, we can also project how the overall days-of-stock will evolve over time. Note that the spread between the required and actual inventory is shrinking over time (as excess stocks get depleted).
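Days-of-stock itself is a simple ratio of inventory to forecasted demand; a minimal illustration (numbers assumed):

```python
def days_of_stock(on_hand_units: float, forecast_units_per_day: float) -> float:
    """How many days the current inventory covers at the forecasted demand rate."""
    return on_hand_units / forecast_units_per_day

# Example: 900 units on hand, a forecast of 20 units/day -> 45 days of stock
print(days_of_stock(900, 20))  # 45.0
```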
Finally, this analysis helped the team spot dead stock (i.e., products that would still be in excess after six months) so they could start taking action right away.
As little data cleaning was needed (most of it was done in Phase 1, and we received flat files with expected lead times), only two weeks of work were required to create and test the inventory model. A final week was dedicated to analyzing the model's performance and writing the final report.