By Joannès Vermorel, last updated April 2012

The reorder point is the inventory level of a SKU which signals the need for a
replenishment order. The reorder point is classically viewed as the sum of the
lead demand plus the
safety stock. At a more fundamental level, the reorder point is a
quantile forecast of the future demand. The calculation of an optimized reorder point typically involves the
lead time,
service level, and the demand forecast. Relying on a
native quantile forecast vastly improves the quality of the reorder point for most retail and manufacturing businesses.
The concept we describe here under the name of "reorder point" is also known as ROP, reorder level, or reorder trigger level.
The reorder point is an important concept not only for inventory optimization but for
inventory automation as well. Indeed, most ERP and inventory management software associates a reorder point setting with each SKU in order to deliver some degree of automation for inventory management.
Quantile estimate of the demand
A
little understood aspect of inventory management is that the reorder point represents a
quantile forecast of the demand for a horizon equal to the lead time. Indeed, the reorder point represents the inventory quantity that, with a confidence of τ% (the desired service level), will not be surpassed by the demand. If the demand goes above this threshold, an event that occurs only with a frequency of 1-τ, then a stock-out is hit.
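To make this concrete, here is a minimal Python sketch of the idea; the function name and the demand figures are illustrative, and the lead demand scenarios would come from your own history or from a probabilistic forecast.

```python
import numpy as np

def reorder_point(lead_demand_samples, service_level):
    """Reorder point as the service-level quantile of the lead demand.

    lead_demand_samples: possible total demands over one lead time.
    service_level: desired probability of not hitting a stock-out (tau).
    """
    return float(np.quantile(lead_demand_samples, service_level))

# Illustrative data: total demand observed over past lead-time windows.
lead_demand = [3, 0, 5, 2, 7, 1, 4, 6, 2, 3, 9, 0]

rop = reorder_point(lead_demand, service_level=0.95)
print(rop)  # inventory level that covers 95% of the observed scenarios
```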
Native vs Extrapolated quantiles
Quantile forecasting models are complicated to write. As a result, most forecasting software only delivers
mean forecasts. Yet, as outlined here above, the reorder points are fundamentally quantile demand forecasts. Hence, the most popular
work-around for the lack of native quantile models consists of
extrapolating mean forecasts as quantile forecasts.
The extrapolation is typically based on the assumption that the forecast error follows a
normal distribution. Our guide about
safety stocks describes in detail how a plain
mean forecast can be extrapolated into a quantile forecast; a minimal sketch of this extrapolation is given after the list below. In practice, however, the assumption that the error is normally distributed is weak. Indeed, the normal distribution:
- Converges too quickly toward zero, much faster than empirical distributions observed in retail and manufacturing.
- Is perfectly smooth while demand comes in integral steps. The negative impact of this smoothness is strongest on intermittent demand.
- Is not suited for high service levels (in practice values above 90%). Indeed, the further away from the median (50%), the less accurate the normal approximation.
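For reference, here is a minimal sketch of this classical extrapolation under the normal-error assumption criticized above; the function name and the figures are illustrative.

```python
from statistics import NormalDist

def extrapolated_reorder_point(mean_lead_demand, sigma_lead_demand, service_level):
    """Classical extrapolation: reorder point = lead demand forecast + safety stock,
    assuming the forecast error is normally distributed."""
    z = NormalDist().inv_cdf(service_level)   # service-level factor
    safety_stock = z * sigma_lead_demand
    return mean_lead_demand + safety_stock

# Example: mean forecast of 20 units over the lead time, error std dev of 5 units.
print(extrapolated_reorder_point(20.0, 5.0, 0.95))  # roughly 28.2 units
```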
Rule of thumb: when to favor native quantiles
Despite the extra computation overhead,
native quantiles bring
significant benefits, from an inventory optimization viewpoint, when:
- Service levels are above 90%.
- Demand is intermittent, with less than 3 units sold per period (day, week, month depending on the aggregation).
- Bulk orders, i.e. a single client purchasing more than 1 unit at once, represent more than 30% of the sales volume.
In practice, the reorder point error (see section below) is typically reduced by more than 20% if any one of those three conditions is satisfied. This improvement is mostly explained by the fact that the extrapolation used to turn a
mean forecast into a
quantile one becomes the
weakest link of the calculation.
Accuracy of reorder points through the pinball loss function
Since the reorder point is nothing but a quantile forecast, it is possible to evaluate the
accuracy of this forecast through the use of the
pinball loss function.
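For reference, the pinball loss of a reorder point against the realized lead demand can be sketched as follows; the function name is illustrative, while the formula is the standard quantile loss.

```python
def pinball_loss(lead_demand, reorder_point, service_level):
    """Pinball (quantile) loss of a reorder point against the realized lead demand.

    Under-forecasts (demand above the reorder point) are weighted by tau,
    over-forecasts by (1 - tau), where tau is the service level.
    """
    tau = service_level
    if lead_demand >= reorder_point:
        return tau * (lead_demand - reorder_point)        # under-forecast: stock-out risk
    return (1.0 - tau) * (reorder_point - lead_demand)    # over-forecast: excess stock

# Example: reorder point of 50 units, realized lead demand of 62, 98% service level.
print(pinball_loss(62, 50, 0.98))  # 0.98 * 12 = 11.76
```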
Reducing the pinball loss for your inventory can only be achieved through better forecasts (quantile or extrapolated). As a rule of thumb, a reduction of 1% of the pinball loss will generate between 0.5% and 1% of safety stock reduction while preserving the same frequency of stock-outs.
With this, it becomes possible to
benchmark alternative stock strategies against your current practice. If an alternative strategy reduces the overall error, then it means that this strategy is
better for your company.
The process might appear a bit puzzling because we apply the term
accuracy in a context where no forecasts may exist (if the company does not have any forecasting process in place for example). The trick is that
target inventory levels by themselves represent implicit
quantile demand forecasts. The pinball loss function lets you evaluate the quality of those implicit forecasts.
Download: reorder-point-accuracy.xlsx

The Microsoft Excel sheet above illustrates how to assess your
reorder point accuracy using the pinball loss. The sheet includes several
input columns:
- Product name: for readability only.
- Service level: the desired probability of not hitting a stock-out.
- Lead time: the delay to complete a replenishment operation.
- Reorder point: the threshold (frequently called Min) that triggers the replenishment. Reorder points are the values being tested for their accuracy.
- Day N: the number of units sold during this day. The layout chosen in this sheet is handy because it becomes possible to compute the lead demand through the
OFFSET
function in Excel (see below).
Then, the sheet includes two
output columns:
- Lead demand: the total demand between the very start of Day 1 and the end of Day N (where N is equal to the lead time expressed in days). Here, the OFFSET function is used to make a sum over a varying number of days, using the lead time as argument.
- Pinball loss: the accuracy of the reorder point. This value depends on the lead demand, the reorder point and the service level. In Excel, we use the IF function to distinguish the case of over-forecasts from the case of under-forecasts.
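For readers who prefer code to spreadsheet formulas, a short Python sketch mirroring the sheet's logic is given below; the helper name and the SKU figures are illustrative.

```python
def sku_pinball_loss(daily_sales, lead_time_days, reorder_point, service_level):
    """Mirror of the spreadsheet logic: sum the demand over the lead time
    (the OFFSET-based sum), then score the reorder point against that lead
    demand with the pinball loss (the IF-based formula)."""
    lead_demand = sum(daily_sales[:lead_time_days])   # Day 1 .. Day lead_time
    tau = service_level
    if lead_demand >= reorder_point:
        return tau * (lead_demand - reorder_point)
    return (1.0 - tau) * (reorder_point - lead_demand)

# Illustrative SKUs: (daily sales, lead time in days, reorder point, service level)
skus = [
    ([2, 0, 1, 3, 0, 2, 1], 5, 8, 0.95),
    ([0, 0, 4, 0, 1, 0, 0], 3, 6, 0.90),
]

# Total pinball loss across SKUs, as in the lower right corner of the sheet.
print(sum(sku_pinball_loss(*sku) for sku in skus))
```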
For consistency of the analysis, the input settings (reorder points, service levels and lead times) need to be extracted at the same time. Based on the conventions we follow in this sheet, this time can be either at the very end of Day 0 or just before the beginning of Day 1. Then, those settings are validated against
the sales data observed afterward.
Gotcha: In most ERPs, the historical values for reorder points, lead times and service levels are not preserved. Hence, if you wish to benchmark your reorder points, you need to start by taking a snapshot of those values. Then, you need to wait for a duration that covers most of the lead times. In practice, you do not need to wait until the longest lead time is covered. In order to get a meaningful benchmark, you can settle for a duration that covers, say, 80% of your lead times.
Finally, once a pinball loss value is produced for each SKU, we compute the sum of the pinball losses at the lower right corner of the sheet. When comparing two methods to compute reorder points,
the method that achieves the lower total pinball loss is the best one.
Pinball loss, Questions/Answers
This pinball loss looks suspicious. Didn't you make this function up just for the sake of boosting Lokad's relative performance?
The pinball loss function has been known for decades. If you agree with the hypothesis that the reorder point should be defined as a value that covers the demand with a certain probability (the service level), then textbook statistics indicate that the pinball loss is the
one function that should be used to evaluate your quantile estimator. Early works on the question date from the late 1970s, but for recent materials see
Koenker, Roger (2005) Quantile Regression, Cambridge University Press.
How can you assess the quality of the reorder point for a single SKU with the pinball loss?
You cannot assess the quality of the reorder point for a single SKU by looking at a single point in time. Unless your service level is very close to 50%, the pinball loss has a strong variance. As a result, you need to average the loss values over several dozens of distinct dates to obtain a reliable estimate when looking at a single SKU. However, in practice, we suggest instead to average losses over many SKUs (rather than many dates). With a dataset containing more than 200 SKUs, the pinball loss is typically a fairly stable indicator, even if you only consider a single point in time to perform the benchmark.
The pinball loss reacts very strongly to very high service levels. Is it going to create very large stocks in case of very high service levels?
The reality of inventory management is that achieving a 99.9% service level requires an enormous amount of inventory. Indeed, 99.9% means that you don't want to afford more than 1 day of stockout every 3 years. With the classical
safety stock formula, using a very high service level does not generate massive stocks. However, using a very high service level
in the formula does not yield an equivalent service level in practice either. In short, you may enter 99.9% in your software, but in reality, your
observed service level will not rise above 98%. This situation is caused by the assumption that the demand is normally distributed. This assumption, used in the classical safety stock formula, is incorrect and leads to a false sense of security. Quantiles, however, respond much more aggressively to high service levels (i.e. bigger stocks). Yet, quantiles merely reflect the reality in a more accurate manner. Very high service levels involve very high stocks. You can't get a 100% service level; you need to compromise.
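To illustrate the effect, the sketch below compares high-service-level quantiles computed under a same-mean, same-variance normal approximation against those of a fat-tailed discrete distribution; the negative binomial parameters are purely illustrative.

```python
from scipy.stats import nbinom, norm

# Hypothetical intermittent-demand SKU: lead demand modeled as a negative
# binomial distribution (fat right tail); the parameters are illustrative.
n, p = 2, 0.2
mean = nbinom.mean(n, p)   # 8 units over the lead time
std = nbinom.std(n, p)

for tau in (0.90, 0.98, 0.999):
    q_fat_tailed = nbinom.ppf(tau, n, p)            # quantile of the fat-tailed demand
    q_normal = norm.ppf(tau, loc=mean, scale=std)   # normal approximation with the same mean and variance
    print(tau, q_fat_tailed, round(q_normal, 1))

# Around 90% the two quantiles are close; at 99.9% the normal approximation
# falls well short of the fat-tailed quantile, which is why the service level
# promised by the classical formula is not reached in practice.
```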
In your sample sheet, you use daily data. What about using weekly data instead?
If your lead times are long and can be expressed in weeks rather than days, then, yes, you can use historical data aggregated by week; the approximation should be good. However, if your lead times are shorter than 3 weeks on average, then the discrepancy introduced by the weekly rounding can be very significant. In those situations, you really should consider daily aggregated data. Daily data might complicate the data handling within the Excel sheet a bit, because of data verbosity. However, in practice, the pinball loss is not intended to be computed within an Excel sheet except for Proof-of-Concept purposes. The one aspect that really matters is to feed the inventory optimization system with daily data.
Misconception: reorder point leads to big infrequent orders
Relying on reorder points does not imply
anything about the quality of the inventory management. Indeed, as reorder points can be changed continuously (typically through software automation), any stocking strategy can be represented through ad hoc reorder point values varying over time.
Big and infrequent orders are found in
companies that do not dynamically update their reorder points. However, the problem is not caused by reorder points
per se, but by the lack of software automation that would regularly update those reorder points.
Multiple suppliers with distinct lead times
The inventory quantity to be compared to the reorder point is usually the sum of the
stock on hand plus the
stock on order. Indeed, when making an order, one has to anticipate the stock already on its way.
The situation can be more complicated if the same order can be placed with
multiple suppliers delivering the same SKUs with different lead times (and typically different pricing as well). In such a situation, a backorder made to a
local supplier can be delivered before an older backorder made to a
distant supplier.
In order to model more precisely a
two-supplier situation, it becomes necessary to introduce a
second reorder point for each SKU. The first reorder point triggers the replenishment from the
distant supplier (assuming this supplier is cheaper, otherwise there is no point in purchasing from this supplier), while the second pulls from the local supplier.
Since the local supplier has a shorter lead time, the second reorder point is lower than the first one. Intuitively, orders are made to the local supplier only when it becomes highly probable that a stock-out will be hit and that it is already too late to order from the
distant supplier.
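A minimal sketch of such a two-reorder-point policy, with illustrative names and thresholds, could look as follows.

```python
def replenishment_decision(on_hand, on_order, rop_distant, rop_local):
    """Two-supplier policy sketch: the higher reorder point pulls from the cheap,
    distant supplier; the lower one pulls from the fast, local supplier when a
    stock-out becomes likely before a distant order could arrive."""
    stock_position = on_hand + on_order   # stock on hand plus stock on order
    if stock_position <= rop_local:
        return "order from local supplier (short lead time)"
    if stock_position <= rop_distant:
        return "order from distant supplier (long lead time, cheaper)"
    return "no order"

# Example: 12 units on hand, 20 on order, reorder points of 40 (distant) and 15 (local).
print(replenishment_decision(12, 20, rop_distant=40, rop_local=15))
```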
Lokad's gotcha
Quantile forecasts are superior for computing reorder points in most situations encountered in retail and manufacturing. The strength of the approach can be most simply explained by the fact that, in statistics,
direct measurements trump indirect measurements. However, we do not imply that mean forecasts are useless. Mean forecasts have many other uses beyond the strict reorder point calculation. For example, when it comes to visualizing forecasts, quantiles tend to be harder to comprehend.