
News

Posted over 10 years ago
So far, we have covered data challenges and process challenges in the context of promotion forecasts. In this post, the last of the series, we cover the very notion of quantitative optimization when considering promotions. Indeed, the choice of the methodological framework used to produce the promotion forecasts and to measure their quantitative performance is critically important, and yet usually (almost) completely dismissed. As the old saying goes, there is no optimization without measurement. Yet, in the case of promotions, what are you actually measuring?

Quantifying the performance of promotions

Even the most advanced predictive statistics remain rather dumb in the sense that they are nothing but the minimization of some mathematical error function. As a consequence, if the error function is not deeply aligned with the business, no improvement is possible, because the measure of the improvement itself is off. Being able to move faster does not matter as long as you don't even know whether you are moving in the right direction. When it comes to promotions, the usual inventory economic forces are not the only ones at play:

- Inventory costs money; however, compared to permanent inventory, it can cost even more if the goods are not usually sold in the store, because any leftover after the end of the promotion will clutter the shelves.
- Promotions are an opportunity to increase your market share, but typically at the expense of the retailer's margin; a key profitability driver is the stickiness of the impulse given to customers.
- Promotions are negotiated rather than merely planned; a better negotiation with the supplier can yield more profit than a better plan.

All these forces need to be accounted for quantitatively; and here lies the great difficulty: nobody wants to be quantitatively responsible for a process as erratic and uncertain as promotions. Yet, without quantitative accountability, it is unclear whether a given promotion creates any value, and if it does, what can be improved for the next round. A quantitative assessment requires a somewhat holistic measure, starting with the negotiation with the supplier and ending with the far-reaching consequences of imperfect inventory allocation at the store level.

Toward risk analysis with quantiles

Holistic measurements, while desirable, are typically out of reach for most retail organizations, which rely on median forecasts to produce the promotion planning. Indeed, median forecasts are implicitly equivalent to minimizing the Mean Absolute Error (MAE), which, without being wrong, remains the archetype of a metric strictly agnostic of all the economic forces in presence. But how could improving the MAE be wrong? As usual, statistics are deceptive. Let's consider a relatively erratic promoted item to be sold in 100 stores. The stores are assumed to be similar, and the item has a 1/3 chance of facing a demand of 6 units, and a 2/3 chance of facing a demand of zero units. The best median forecast here is zero units. Indeed, 2 units per store would not be the best median forecast, but the best mean forecast, that is, the forecast that minimizes the MSE (Mean Square Error). Obviously, forecasting zero demand across all stores is broken. This example illustrates how MAE can extensively mismatch the business forces; MSE shows similar dysfunctions in other situations. There is no free lunch: you can't get a metric that is both ignorant of the business and aligned with the business.
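To make the 100-store example concrete, here is a minimal C# sketch, assuming the toy demand distribution above (6 units with probability 1/3, zero otherwise); the candidate forecast range of 0 to 6 units is an illustrative choice. It computes the expected MAE and MSE of each candidate per-store forecast and shows that the MAE is minimized at 0 units (the median) while the MSE is minimized at 2 units (the mean).

using System;
using System.Linq;

class MedianVsMeanForecast
{
    static void Main()
    {
        // Demand per store: 6 units with probability 1/3, 0 units with probability 2/3.
        var demand = new[] { (value: 6.0, proba: 1.0 / 3.0), (value: 0.0, proba: 2.0 / 3.0) };

        // Candidate per-store forecasts, in units (0 through 6).
        var candidates = Enumerable.Range(0, 7).Select(x => (double)x);

        // Expected MAE and MSE of each candidate forecast against the demand distribution.
        var scored = candidates.Select(f => (
            forecast: f,
            mae: demand.Sum(d => d.proba * Math.Abs(d.value - f)),
            mse: demand.Sum(d => d.proba * Math.Pow(d.value - f, 2))));

        var bestMae = scored.OrderBy(s => s.mae).First();
        var bestMse = scored.OrderBy(s => s.mse).First();

        // MAE is minimized at 0 units (the median), MSE at 2 units (the mean).
        Console.WriteLine($"Best MAE forecast: {bestMae.forecast} units (MAE = {bestMae.mae:F2})");
        Console.WriteLine($"Best MSE forecast: {bestMse.forecast} units (MSE = {bestMse.mse:F2})");
    }
}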
Quantile forecasts represent a first step toward producing more reasonable results for promotion forecasts, because they make it possible to perform a risk analysis, addressing questions such as: In the upper 90% best case, how many stores will face a stock-out before the end of the promotion? In the lower 10% worst case, how many stores will be left with more than 2 months of inventory? The design of the promotion can then be decomposed as a risk analysis, integrating the economic forces, sitting on top of quantile forecasts. From a practical viewpoint, this method has the considerable advantage of keeping the forecast strictly decoupled from the risk analysis, which is an immense simplification as far as the statistical analysis is concerned.

Couple both pricing and demand analysis

While a quantitative risk analysis already outperforms a plain median forecast, it remains limited by design in its capacity to reflect the supplier negotiation forces. Indeed, a retailer could be tempted to regenerate the promotion forecasts many times, varying the promotional conditions to reflect the scenarios negotiated with the supplier; however, such a usage of the forecasting system would lead to significant overfitting. Simply put, if a forecasting system is repeatedly used to seek the maximization of a function built on top of the forecasts, i.e. finding the best promotional plan considering the forecasted demand, then the most extreme value produced by the system is very likely to be a statistical fluke. Thus, the optimization process instead needs to be integrated into the system, analyzing at once both the demand elasticity and the varying supplier conditions, i.e. the bigger the deal, the more favorable the supplier conditions. Obviously, designing such a system is vastly more complicated than a plain median promotion forecasting system. However, not striving to implement such a system in any large retail network can be seen as a case of the streetlight effect:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is".

The packaged technology of Lokad offers limited support for handling promotions, but this is an area that we address extensively with several large retailers, albeit in a more ad hoc fashion. Don't hesitate to contact us, we can help.
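As a rough illustration of the kind of risk analysis described in the post above, here is a minimal Monte Carlo sketch that reuses the 100-store toy demand; the per-store allocation of 4 units, the random seed and the number of runs are arbitrary assumptions, not figures from the post. It estimates the distribution of the number of stores facing a stock-out and reads off its 90% quantile.

using System;
using System.Linq;

class PromotionRiskSketch
{
    static void Main()
    {
        const int stores = 100;
        const int allocationPerStore = 4; // assumed allocation pushed to each store (illustrative)
        const int runs = 10_000;          // number of Monte Carlo runs (illustrative)
        var rng = new Random(42);

        // For each run, count how many stores face a stock-out, i.e. demand exceeds the allocation.
        var stockOutCounts = new int[runs];
        for (var r = 0; r < runs; r++)
        {
            var stockOuts = 0;
            for (var s = 0; s < stores; s++)
            {
                // Toy demand from the post: 6 units with probability 1/3, 0 units otherwise.
                var demand = rng.NextDouble() < 1.0 / 3.0 ? 6 : 0;
                if (demand > allocationPerStore) stockOuts++;
            }
            stockOutCounts[r] = stockOuts;
        }

        // 90% quantile of the number of stores facing a stock-out.
        var sorted = stockOutCounts.OrderBy(x => x).ToArray();
        var q90 = sorted[(int)(0.9 * (runs - 1))];
        Console.WriteLine($"In 90% of the simulated runs, at most {q90} stores out of {stores} face a stock-out.");
    }
}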
Posted over 10 years ago
In our previous post, we covered data challenges in promotion forecasts. In this post, we cover process challenges: When are forecasts produced? How are they used? Etc. Indeed, while getting accurate forecasts is already tough, retailers frequently do not leverage forecasts the way they should, leading to sub-optimal use of the numerical results available. As usual, statistical forecasting turns out to be a counter-intuitive science, and it's too easy to take all the wrong turns.

Do not negotiate the forecast results

The purchasing department usually supervises the promotion planning process. Yet, as much as haggling can be tremendously effective for obtaining good prices from suppliers, haggling over forecasts doesn't work. Period. Still, we routinely observe that promotion forecasts tend to be some kind of tradeoff negotiated between Purchasing and Supply Chain, or between Purchasing and IT, or between Purchasing and Planning, etc. Assuming a forecasting process exists - which may or may not be accurate (that aspect is a separate concern) - the forecasts are not up for negotiation. The forecasts are just the best statistical estimate that can be produced for the company to anticipate the demand for the promoted items. If one of the negotiating parties has a provably better forecasting method available, then this method should become the reference; but again, no negotiation is involved. The rampant misconception here is the lack of separation of concerns between forecasting and risk analysis. From a risk analysis perspective, it's probably fine to order a 5x bigger volume than the forecast if the supplier is providing an exceptional deal for a long-lived product that is already sold in the network outside the promotional event. When people "negotiate" over a forecast, it's an untold risk analysis that is taking place. However, better results are obtained if forecasting and risk analysis are kept separate, at least from a methodological viewpoint.

Remove manual interventions from the forecasts

In general merchandise retail, all data processes involving manual operations are costly to scale at the level of the network: too many items, too many stores, too frequent promotions. Thus, from the start, the goal should be an end-to-end automated forecasting process. Yet, while (nearly) all software vendors promise fully automated solutions, manpower requirements creep in all over the place. For example, special hierarchies between items may have to be maintained just for the sake of the forecasting system. This could involve special item groups dedicated to seasonality analysis, or lists of "paired" products where the sales history of an old product is used as a substitute when a new product is found to have no sales history in the store. Also, the fine tuning of the forecasting models themselves might be very demanding, and while it is supposedly a one-off operation, it should be accounted for as an ongoing operational cost. As a small tip for store networks: beware of any vendor that promises to let you visualize forecasts; spending as much as 10 seconds per data point to look at them is hideously expensive for any fairly sized retail network. The time spent by employees should be directed to areas where the investment is capitalized over time - continuously improving the promotional planning - rather than consumed to merely sustain the planning activity itself.
Don't omit whole levels from the initiative

The most inaccurate forecasts that retailers produce are the implicit ones: decisions that reflect some kind of underlying forecast but that nobody has identified as such. For promotion forecasts, there are typically three distinct levels of forecasts:

- national forecasts, used to size the overall order passed to the supplier for the whole retail network;
- regional forecasts, used to distribute the national quantities between the warehouses;
- local forecasts, used to distribute the regional quantities between the stores.

We frequently observe that distinct entities within the retailer's organization end up being separately responsible for parts of the overall planning initiative: Purchasing handles the national forecasts, Supply Chain handles the regional forecasts and Store Managers handle the local forecasts. The situation is then made worse when parties start to haggle over the numbers. When the forecasting process is split over multiple entities, nobody is clearly accountable for the (in)effectiveness of the promotional planning. It's hard to quantify the improvement brought by any specific initiative because results are mitigated or amplified by interfering initiatives carried out by other parties. In practice, this complicates attempts at continuously improving the process.

Forecast as late as you can

A common delusion about statistical forecasting is the hope that, somehow, the forecasts will become perfectly accurate at some point. However, promotion forecasts won't ever be even close to what people would commonly perceive as very accurate. For example, across Western markets, we observe that for the majority of promoted items at the supermarket level, fewer than 10 units are sold per week for the duration of the promotion. Yet forecasting 6 units and selling 9 units already yields a forecast error of 50%. There is no hope of achieving less than 30% error at the supermarket level in practice. While the forecasts are bound to an irreducible level of inaccuracy, some retailers (not just retailers, actually) exacerbate the problem by forecasting further into the future than is required. For example, national forecasts are typically needed up to 20 weeks in advance, especially when importing goods from Asia. However, neither regional nor local forecasts need to be established so long in advance. At the warehouse level, planning can typically happen only 4 to 6 weeks in advance, and, as far as stores are concerned, the quantitative details of the planning can be finalized only 1 week before the start of the promotion. However, as the forecasting process is typically co-handled by various parties, a consensus emerges on a date that fits the constraints of all parties, that is, the earliest date proposed by any of the parties. This frequently results in forecasting demand at the store level up to 20 weeks in advance, generating wildly inaccurate forecasts that could have been avoided altogether by postponing the forecasts. Thus, we recommend tailoring the planning of promotions so that quantitative decisions are left pending until the last moment, when the final forecasts are produced, benefiting from the latest data.

Leverage the first day(s) of promotional sales at the store level

Forecasting promotional demand at the store level is hard.
However, once the first day of sales is observed, forecasting the demand for the rest of the promotion can be done with much higher accuracy than any forecast produced before the start of the promotion. Thus, promotion planning can be improved significantly by not pushing all goods to the stores upfront, but only a fraction, keeping reserves in the warehouse. Then, after one or two days of sales, the promotion forecasts should be revised with the initial sales to adjust how the rest of the inventory is pushed to the stores.

Don't tune your forecasts after each operation

One of the frequent questions we get from retailers is whether we revise our forecasting models after observing the outcome of a new promotion. While this seems a reasonable approach, in the specific case of promotion forecasts there is a catch, and a naive application of this idea can backfire. Indeed, we observe that, for most retailers, promotional operations - that is, sets of products promoted over the same period, typically with some unified promotional message - come with strong endogenous correlations between the uplifts. Simply put, some operations work better than others, and the discrepancy between the lowest-performing and highest-performing operations is no less than a factor of 10 in sales volume. As a result, after the end of each operation, it's tempting to revise all the forecasting models upward or downward based on the latest observations. Yet this creates significant overfitting problems: revised historical forecasts are artificially made more accurate than they really are. In order to mitigate overfitting problems, it's important to only revise the promotion forecasting models as part of an extensive backtesting process. Backtesting is the process of replaying the whole history, iteratively re-generating all forecasts up to and including the newly added promotional operation. Extensive backtesting mitigates large-amplitude swings in the anticipated uplifts of the promotions.

Validate "ex post" promotion records

As discussed in the first post of this series, data quality is an essential ingredient for producing sound promotion forecasts. Yet, figuring out the oddities of promotions months after they ended is impractical. Thus, we suggest not delaying the review of the promotion data and doing it at the very end of each operation, while the operation is still fresh in the minds of the relevant people (store managers, suppliers, purchasers, etc.). In particular, we suggest looking for outliers such as zeroes and surprising volumes. Zeroes reflect either that the operation has not been carried out or that the merchandise has not been delivered to the stores. Either way, a few phone calls can go a long way toward pinpointing the problem and then applying proper data corrections. Similarly, unexpectedly extreme volumes can reflect factors that have not been properly accounted for. For example, some stores might have allotted display space at their entrance, while the initial plan was to keep the merchandise in the aisles. Naturally, sales volumes are much higher, but this is merely a consequence of an alternative facing. Stay tuned: next time, we will discuss the optimization challenges in promotion planning.
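A minimal sketch of the two-stage push described in this post, assuming an illustrative 50% upfront fraction, 4 stores and made-up sales figures; it reallocates the warehouse reserve proportionally to the first-day sales, which is a simplification of the fuller forecast revision described above.

using System;
using System.Linq;

class StoreReallocationSketch
{
    static void Main()
    {
        // Pre-promotion store-level forecasts for 4 stores (illustrative numbers).
        var initialForecast = new double[] { 10, 10, 10, 10 };
        var totalStock = 60;            // units available for the whole promotion (illustrative)
        var upfrontFraction = 0.5;      // only push a fraction upfront, keep the rest in the warehouse

        // Initial push, proportional to the pre-promotion forecasts.
        var forecastSum = initialForecast.Sum();
        var upfront = initialForecast
            .Select(f => (int)Math.Floor(totalStock * upfrontFraction * f / forecastSum))
            .ToArray();

        // Observed sales during the first day of the promotion (illustrative numbers).
        var firstDaySales = new double[] { 2, 9, 1, 6 };

        // Second push: allocate the warehouse reserve proportionally to the first-day sales,
        // which are a much stronger signal than any pre-promotion forecast.
        // (Naive rounding; a real implementation would reconcile rounding against the reserve.)
        var reserve = totalStock - upfront.Sum();
        var salesSum = firstDaySales.Sum();
        var secondPush = firstDaySales
            .Select(s => (int)Math.Round(reserve * s / salesSum))
            .ToArray();

        for (var i = 0; i < upfront.Length; i++)
            Console.WriteLine($"Store {i + 1}: pushed {upfront[i]} upfront, then {secondPush[i]} after day 1.");
    }
}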
Posted over 10 years ago by Rinat Abdullin
Lokad Salescast is an inventory optimization platform for retail, capable of dealing with big datasets. It takes inventory and sales information and does some number crunching. The produced reports tell you when you need to reorder your products (and how much) in order to serve the forecasted demand and avoid overstocking. One of the objectives of Salescast is to be available and affordable for small customers. Hence we introduced the "Express Plan", which is free for small customers, but comes without any support. Making software free is easy. Making software usable without support is much harder. So Lokad developers had to create complicated heuristics to help customers deal with the problems.

TSV parsing is one of the problematic areas. Even though the major scenario for big data transfer at Lokad is "upload TSV-formatted text files to FTP", there are multiple things that can go wrong with this simple setup. No matter how precise the tech documentation is, people can always miss seemingly unimportant things that are critical for computers. Here are some examples: text encoding of files; culture-specific format of dates; culture-specific format of numbers; optional columns in invalid format; required columns missing; missing files; non-standard separators.

Yet, we are trying to provide the best experience out-of-the-box even with improperly formatted data. This requires doing a lot of smart TSV analysis in code; the output of one such analysis process is a log of human-readable messages (latest entries at the top). Message-driven design patterns help to develop and maintain such logic. The public contract of it in the code might look like a simple function (with a complicated heuristic inside):

static IMessage[] AnalyseInput(SomeInput input) { .. }

Here messages are strongly-typed classes that explain the results returned by that function (unlike event sourcing, they are not used for persistence). For example:

public class UsedNonstandardExtension : ITsvFolderScanMessage
{
    public readonly string Extension;

    public UsedNonstandardExtension(string extension)
    {
        Extension = extension;
    }

    public virtual AdapterTweet ToHumanReadableTweet()
    {
        return new AdapterTweet
        {
            Severity = AdapterTweetSeverity.Hint,
            Tweet = String.Format("Salescast found Lokad TSV files using" +
                " non-standard extension {0}.", Extension),
        };
    }
}

The function would return one or more event messages. Various input scenarios can be unit-tested using the given-when-expect approach, where we express a test case as: given certain inputs; when we invoke the function; expect certain outcomes and assert them (e.g. verify that we get the expected messages). Or in code:

public sealed class given_compressed_files_in_txt_format : tsv_folder_analysis_fixture
{
    public given_compressed_files_in_txt_format()
    {
        // setup all expectations in constructor, using helper methods
        // from the base class
        given_files(
            "Lokad_Items.txt.gzip",
            "Lokad_Orders.TXT.gzip");
    }

    [Test]
    public void expect_detection_with_extension_warning_and_compression_hint()
    {
        // assert expectations, using helper methods from the base class
        expect(
            new TsvFolderScanMessages.UsedNonstandardExtension("TXT"),
            new TsvFolderScanMessages.CompressedFilesDetected(),
            new TsvFolderScanMessages.StorageDetectionSucceeded(
                TsvInputFile.Item("Lokad_Items.txt.gzip").WithGzip(),
                TsvInputFile.Order("Lokad_Orders.TXT.gzip").WithGzip()
            ));
    }
}

This is an example of a single test scenario.
There could be many others for a single function, reflecting the complexity of the heuristics inside. Each of these test scenarios shares the same "when" method and helpers to set up "given" and "expect", so they are pushed to the base fixture class, which can be as simple as:

using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;

public abstract class tsv_folder_analysis_fixture
{
    readonly List<string> _folder = new List<string>();
    ITsvFolderScanMessage[] _messages = new ITsvFolderScanMessage[0];

    protected void given_files(params string[] files)
    {
        _folder.AddRange(files);
    }

    [TestFixtureSetUp]
    public void when_run_analysis()
    {
        // this is our "When" method. It will be executed once per scenario.
        _messages = TsvFolderScan.RunTestable(_folder);
    }

    static string TweetToString(ITsvFolderScanMessage message)
    {
        var tweet = message.ToHumanReadableTweet();
        var builder = new StringBuilder();
        builder.AppendFormat("{0} {1}", tweet.Tweet, tweet.Severity);
        if (!string.IsNullOrEmpty(tweet.OptionalDetails))
        {
            builder.AppendLine().Append(tweet.OptionalDetails);
        }
        return builder.ToString();
    }

    protected void expect(params ITsvFolderScanMessage[] msg)
    {
        CollectionAssert.AreEquivalent(
            msg.Select(TweetToString).ToArray(),
            _messages.Select(TweetToString).ToArray());
    }
}

If you look closely, you'll find a lot of resemblance with specification testing for event sourcing. This is intentional. We already know that such tests based on event messages are non-fragile as long as the events are designed properly. This additional design effort pays for itself really quickly when we deal with complicated heuristics. It makes the development process incremental and iterative, without fear of breaking any existing logic. Step by step, one can walk around the world. In essence, we go through all the hoops of expressing behaviours via messages just to: express the diverse outcomes of a single function; provide a simple functional contract for this function; make this function easily testable in isolation; ensure that tests are easily maintainable and atomic. Downstream code (code which will use components like this one) might need to transform a bunch of event messages into some value object before further use, but that is a rather straightforward operation. Interested in diving deeper into Lokad development approaches? We are looking for developers in Paris and Ufa. You can also learn some things by subscribing to the BeingTheWorst podcast, which explains the development ways of Lokad.
Posted over 10 years ago
Forecasting is almost always a difficult exercise, but there is one area in general merchandise retail considered to be an order of magnitude more complicated than the rest: promotion planning. At Lokad, promotion planning is one of the frequent challenges we tackle for our largest clients, typically through ad-hoc Big Data missions. This post is the first of a series on promotion planning. We are going to cover the various challenges faced by retailers when forecasting promotional demand, and give some insights into the solutions we propose. The first challenge faced by retailers when tackling promotions is the quality of the data. This problem is usually vastly underestimated, by mid-size and large retailers alike. Yet, without highly qualified data about past promotions, the whole planning initiative faces a garbage-in, garbage-out problem.

Data quality problems among promotion records

The quality of promotion data is typically poor - or at least much worse than the quality of the regular sales data. A promotional record, at the most disaggregated level, comprises an item identifier, a store identifier, a start date, an end date, plus all the dimensions describing the promotion itself. Those promotional records suffer from numerous problems:

- Records exist, but the store did not fully implement the promotion plan, especially with regard to the facing.
- Records exist, but the promotion never happened anywhere in the network. Indeed, promotion deals are typically negotiated 3 to 6 months in advance with suppliers. Sometimes a deal gets canceled with only a few weeks' notice, but the corresponding promotional data is never cleaned up.
- Off-the-record initiatives from stores, such as moving an overstocked item to an end-aisle shelf, are not recorded. Facing is one of the strongest factors driving the promotional uplift, and should not be underestimated.
- Details of the promotion mechanisms are not accurately recorded. For example, the presence of a custom packaging, and the structured description of that packaging, are rarely preserved.

After having observed similar issues in many retailers' datasets, we believe the explanation is simple: there is little or no operational imperative to correct promotional records. Indeed, if the sales data are off, it creates so many operational and accounting problems that fixing them becomes the No. 1 priority very quickly. In contrast, promotional records can remain wildly inaccurate for years. As long as nobody attempts to produce some kind of forecasting model based on those records, inaccurate records have a negligible negative impact on the retailer's operations. The primary solution to those data quality problems is to put data quality processes in place, and to empirically validate how resilient those processes are when facing live store conditions. However, even the best process cannot fix broken past data. As 2 years of good promotional data is typically required to get decent results, it's important to invest early and aggressively in the historization of promotional records.

Structural data problems

Beyond issues with promotional records, the accurate planning of promotions also suffers from broader and more insidious problems related to the way information is collected in retail.

Truncating the history: Most retailers do not preserve their sales history indefinitely. Usually "old" data get deleted following two rules:

- if the record is older than 3 years, then delete the record;
- if the item has not been sold for 1 year, then delete the item, and delete all the associated sales records.

Obviously, depending on the retailer, the thresholds might differ, but while most large retailers have been around for decades, it's exceptional to find a non-truncated 5-year sales history. Those truncations are typically based on two false assumptions:

- Storing old data is expensive: storing the entire 10-year sales history (down to the receipt level) of Walmart - and your company is certainly smaller than Walmart - can be done for less than 1,000 USD of storage per month. Data storage is not just ridiculously cheap now; it was already ridiculously cheap 10 years ago, as far as retail networks are concerned.
- Old data serve no purpose: while 10-year-old data certainly serve no operational purpose, from a statistical viewpoint even 10-year-old data can be useful to refine the analysis of many problems. Simply put, a long history gives a much broader range of possibilities to validate the performance of forecasting models and to avoid overfitting problems.

Replacing GTINs with in-house product codes: Many retailers preserve their sales history encoded with alternative item identifiers instead of the native GTINs (aka UPC or EAN13, depending on whether you are in North America or Europe). By replacing GTINs with ad-hoc identification codes, it is frequently considered that it becomes easier to track GTIN substitutions, and that it helps avoid segmented history. Yet, GTIN substitutions are not always accurate, and incorrect entries become near-impossible to track down. Worse, once two GTINs have been merged, the former data are lost: it's no longer possible to reconstruct the two original sets of sales records. Instead, it's a much better practice to preserve the GTIN entries, because GTINs represent the physical reality of the information being collected by the POS (point of sale). The hints for GTIN substitutions should then be persisted separately, making it possible to revise the associations later on - if the need arises (see the sketch at the end of this post).

Not preserving the packaging information: In food retail, many products come in a variety of distinct formats: from individual portions to family portions, from single bottles to packs, from the regular format to +25% promotional formats, etc. Preserving the information about those formats is important because, for many customers, an alternative format of the same product is frequently a good substitute when the other format is missing. Yet again, while it might be tempting to merge the sales into some kind of meta-GTIN where all size variants have been merged, there can be exceptions, and not all sizes are equal substitutes (ex: 18g Nutella vs 5kg Nutella). Thus, the packaging information should be preserved, but kept apart from the raw sales.

Data quality, a vastly profitable investment

Data quality is one of the few areas where investments are typically rewarded tenfold in retail. Better data improve all downstream results, from the most naïve to the most advanced methods. In theory, data quality should suffer from the principle of diminishing returns; however, our own observations indicate that, except for a few rising stars of online commerce, most retailers are very far from the point where investing more in data quality would not be vastly profitable. Then, unlike building advanced predictive models, data quality does not require complicated technologies, but a lot of common sense and a strong sense of simplicity.
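Returning to the GTIN point above, here is a minimal sketch of what persisting substitution hints separately from the raw sales could look like; the type and field names (SaleRecord, GtinSubstitutionHint, ResolveLatestGtin) are hypothetical, not an existing Lokad or retailer schema.

using System;
using System.Collections.Generic;

// Raw sales stay keyed by the native GTIN, exactly as collected by the POS.
public sealed class SaleRecord
{
    public string Gtin;        // native GTIN (UPC / EAN13), never rewritten
    public string StoreId;
    public DateTime Date;
    public int Quantity;
}

// Substitution hints live in a separate table, so an incorrect association
// can be revised later without having lost the original sales records.
public sealed class GtinSubstitutionHint
{
    public string OldGtin;
    public string NewGtin;
    public DateTime ObservedFrom; // when the replacement was first observed
    public string Source;         // e.g. supplier notice, manual review
}

public static class SalesHistory
{
    // Chase a chain of substitutions to the most recent GTIN at analysis time,
    // leaving the raw records untouched.
    public static string ResolveLatestGtin(string gtin, IReadOnlyDictionary<string, string> oldToNew)
    {
        // Bounded walk to avoid looping forever if the hints accidentally form a cycle.
        for (var hops = 0; hops < 32 && oldToNew.TryGetValue(gtin, out var newer); hops++)
            gtin = newer;
        return gtin;
    }
}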
Stay tuned: next time, we will discuss the process challenges of promotion planning.
Posted almost 11 years ago
Over the last couple of months, Salescast has been made much simpler by supporting data import from flat text files. This option has been improved and simplified further through the introduction of BigFiles, our cloud-based file hosting solution. Based on the feedback we received from many companies, the flat file approach proved to be superior to the SQL option in about every way:

- It's vastly faster; we routinely observe 100x speed-ups.
- It's easier to set up, to debug and to maintain.
- It's more secure, as Lokad does not require any remote access.

We could have maintained the SQL support indefinitely, until we realized that we were doing a disservice to our clients: by supporting an inferior option, we frequently faced situations where clients started with SQL, failed along the way (frequently because of performance issues), only to finally succeed with the flat file approach. In contrast, failing with files and then succeeding with SQL was never observed. Thus, we have decided to phase out the SQL support of Salescast. Support for SQL import/export will end on December 31st, 2013. Over the last couple of months, we have already transitioned most of our customer base from SQL toward flat files. We will keep working with all the remaining impacted clients to make the transition as smooth as possible. In particular, it must be noted that the flat file format of Salescast is quasi-equivalent to a raw dump of the SQL tables expected by Salescast; hence, the effort of transitioning from SQL to flat files is typically minimal. If you have any questions about this transition, don't hesitate to contact us.
Posted almost 11 years ago
The stock associated with each SKU is an anticipation of the future. From a more technical viewpoint, the reorder point of the SKU can be seen as a quantile forecast. The quantile indicates the smallest amount of inventory that should be kept to avoid stock-outs with a probability equal to the service level. While this viewpoint is very powerful, it does not actually say anything about the risk of overstocking, i.e. the risk of creating dead inventory, as only the stock-out side of the problem is directly statistically addressed. Yet, the overstocking risk is important if goods are perishable or if demand for the product can abruptly disappear - as happens in consumer electronics when the next-generation replacement enters the market.

Example: Let's consider the case of a Western retailer selling, among other things, snow chains. The lead time to import the chains is 3 months. The region where the retailer is located is not very cold, and only one winter out of five justifies the use of snow chains. For every cold winter, the local demand for snow chains is 1,000 kits. In this context, any quantile forecast with a service level above 80% suggests keeping 1,000 kits or more in stock in order to keep the stock-out probability under 20%. However, if the winter isn't cold, then the retailer will be stuck with its entire unsold stock of snow chains, 1,000 kits or more, possibly for years.

The reorder point calculated the usual way through quantiles focuses on upward situations with peaks of demand, but says nothing about downward situations where demand evaporates. Yet, the risk of overstock can be managed through quantiles as well; however, it requires a second quantile calculation, performed with a distinct set of values for tau (τ, no longer the service level) and lambda (λ, no longer the lead time). In the usual situation, we have:

R = Q(τ, λ)

where:
- R is the reorder point (a number of units)
- Q is the quantile forecasting model
- τ is the service level (a percentage)
- λ is the lead time (a number of days)

As illustrated by the example above, such a reorder point calculation can lead to large values that do not take into account the financial risk associated with a drop in demand, where the company ends up stuck with dead inventory. In order to handle the risk of overstocking, the formula can be revised as:

R = MIN(Q(τ, λ), Q(τx, λx))

where:
- τx is the maximal acceptable risk of overstocking
- λx is the applicable timespan to get rid of the inventory

In this case, the usual reorder point gets capped by an alternative quantile calculation. The parameter τx reflects the acceptable risk of overstock; hence, instead of looking at values around 90% as is done for usual service levels, it's typically a low percentage, say 10% or below, that should be considered. The parameter λx represents the duration beyond which the inventory value is at risk because the goods are perishable or obsolescent.

Example: Let's consider the case of a grocery store selling tomatoes with a lead time of 2 days. The retailer estimates that within 5 days on the shelf, the tomatoes will have lost 20% of their market value. Thus, the retailer decides that the stock of tomatoes should remain sufficiently low so that the probability of not selling the entire stock of tomatoes within 5 days remains less than 10%.
Thus, the retailer adopts the second formula for the reorder point R, with τ=90% and λ=2 days in order to maintain high availability, combined with τx=10% and λx=5 days in order to keep the risk of dead inventory under control. At the present time, Salescast does not natively support a double quantile calculation; however, it's possible to achieve the same effect by performing two runs with distinct lead time and service level parameters.
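A minimal sketch of the capped reorder point R = MIN(Q(τ, λ), Q(τx, λx)) described above; the empirical Quantile helper and the demand samples are illustrative stand-ins for a real quantile forecasting model Q, and the numbers are made up.

using System;
using System.Linq;

class DoubleQuantileReorderPoint
{
    // Empirical quantile of demand samples: a crude stand-in for a real
    // quantile forecasting model Q(tau, lambda).
    static double Quantile(double[] sortedDemand, double tau)
    {
        var index = (int)Math.Ceiling(tau * sortedDemand.Length) - 1;
        return sortedDemand[Math.Max(0, Math.Min(index, sortedDemand.Length - 1))];
    }

    static void Main()
    {
        // Hypothetical demand samples over the lead time (2 days) and over the
        // at-risk timespan (5 days); in practice these come from the forecasting model.
        var demandOverLeadTime = new double[] { 8, 12, 15, 9, 20, 11, 14, 10, 13, 18 };
        var demandOverRiskSpan = new double[] { 5, 30, 8, 35, 6, 32, 7, 9, 38, 10 };

        Array.Sort(demandOverLeadTime);
        Array.Sort(demandOverRiskSpan);

        var tau = 0.90;   // service level
        var tauX = 0.10;  // maximal acceptable risk of overstocking

        var serviceDriven = Quantile(demandOverLeadTime, tau);  // Q(tau, lambda)
        var overstockCap = Quantile(demandOverRiskSpan, tauX);  // Q(tauX, lambdaX)

        var reorderPoint = Math.Min(serviceDriven, overstockCap); // R = MIN(Q(τ,λ), Q(τx,λx))
        Console.WriteLine($"Service-driven quantile: {serviceDriven} units");
        Console.WriteLine($"Overstock cap: {overstockCap} units");
        Console.WriteLine($"Reorder point: {reorderPoint} units");
    }
}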
Posted almost 11 years ago
The Salescast Connector on Magento Connect

Optimizing inventory is not just for brick & mortar commerce, it's a cornerstone of online commerce as well. As a matter of fact, we have been receiving many requests for an integration between Salescast, our inventory optimization webapp, and Magento - one of the most (if not the most) popular ecommerce software on the market. While exporting data from Magento into the format expected by Salescast is not overly complicated, it's not trivial either, as it requires some SQL skills. Thus, we decided to team up with Wyomind, a company that specializes in Magento extensions. As of today, Wyomind has just released its Salescast Connector. This extension takes care of:

- extracting the relevant data from Magento;
- formatting the data into the Salescast format;
- uploading the data (via FTP) to Salescast.

Even better, the Salescast Connector schedules daily data exports; that way, your reports from Salescast are always up to date whenever you need them. Concerning the pricing, Salescast comes with an Express Plan which is free and does not expire. This plan is compatible with the Wyomind extension, and as our Express Plan covers up to 10,000 items, it's sufficient for the vast majority of smaller merchants. The Salescast Connector itself starts at 140€ as a one-time license fee.
Posted almost 11 years ago
Merchants frequently sell kits (or bundles), where several items are sold together, while the possibility remains to buy the items separately. The existence of kits further complicates inventory optimization because it introduces dependencies between items as far as availability is concerned. In this post, we try to shed some light on optimizing inventory in the presence of kits. There are two opposite approaches to dealing with kits:

- Do not store any kits, only separate items. Assemble the kits at the last moment, assuming that all items are available.
- Store all kits pre-assembled as separate SKUs. Kits are assembled in advance. If no kit is readily available, the kit is considered out-of-stock.

In practice, most inventory policies toward kits tend to be a mix of those two approaches. Let's start the review with the first approach. The primary benefit of keeping everything disassembled is that it maximizes the availability of the separate items; however, this comes at the expense of the kit availability. Indeed, assuming the availability levels of the items are independent and referred to as L1, L2, …, Lk (for a kit with k items), then the availability of the kit is LK = L1 x L2 x … x Lk. Let's assume that we have a kit with 5 items, all items having the same service level. The correspondence between the service level of the kit and the service level of the separate items follows directly: for example, with 5 items at a 90% service level, the kit ends up with a service level slightly below 60%. This illustrates the weakest-link behavior of kits: it only takes one item being out-of-stock to put the whole kit out-of-stock. Even if all items have fairly high availability, the kit availability can be much lower; and the bigger the kit, the worse it gets. If instead of 5 items we consider a kit with 10 items at a 90% service level, then the kit availability is reduced to 35%, which is typically unacceptable for most businesses.

The second approach consists of storing pre-assembled kits. This approach maximizes the availability of the kits. In this case, kits are treated like any other item: the demand for kits is forecast with quantile forecasts, and a reorder point is computed for the SKU representing the kit. This inventory policy preserves a strict decoupling of the kit and its items. With this approach, the service level of the kit is driven by the quantile calculation. As such, the kit is not negatively impacted by the separate availability of the items. Each item also gets its own reorder point. The primary drawback of this second approach is that, in the worst case, the amount of inventory can be doubled for limited or no extra availability. In practice, however, assuming that about half of the item consumption comes from kit sales, the stock is typically increased by roughly 50% when applying this second approach instead of the first one; the extra inventory is used to ensure the high availability of the kit itself.

The optimal inventory strategy, the one that maximizes the ROI (Return On Inventory), is usually a mix of those two approaches. The exact inventory optimization of kits is a relatively intricate problem; however, the problem can be rephrased as: at which point should the merchant start refusing to sell one of the kit's items separately, because she would risk losing more advantageous orders on kits instead?
Indeed, as long as kits are available, there is typically no incentive for the merchant to refuse selling a kit in order to preserve the availability of the separate items. (There might be an incentive if the items have a much higher gross margin than the kit, but for the sake of simplicity, this case is beyond the scope of the present discussion.) In order to determine how many items should be preserved for kits (assembled or not), one can use an alternative quantile forecast, where the service level is not set at a desired availability target, but at a much lower probability that reflects a probable sales volume that should be preserved. For example, let's assume that a 30% service level on a kit gives a quantile forecast of 5. This value can be interpreted as "there is a 70% chance that 5 or more units of the kit will be sold over the duration of the lead time". If a 70% confidence in selling 5 kits outweighs the benefit of selling the next item now (assuming only 5 items remain), then the item should be considered as reserved for kitting purposes. We are still only scratching the surface as far as kits are concerned. Don't hesitate to post your questions in the comments.
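A minimal sketch that reproduces the 5-item and 10-item figures quoted above, assuming independent item availabilities so that the kit service level is the product LK = L1 x L2 x … x Lk.

using System;
using System.Linq;

class KitAvailabilitySketch
{
    // Availability of a kit of independent items: LK = L1 x L2 x ... x Lk.
    static double KitServiceLevel(params double[] itemServiceLevels) =>
        itemServiceLevels.Aggregate(1.0, (acc, l) => acc * l);

    static void Main()
    {
        // 5 items at a 90% service level: the kit lands slightly below 60%.
        var kit5 = KitServiceLevel(Enumerable.Repeat(0.90, 5).ToArray());
        // 10 items at a 90% service level: the kit drops to roughly 35%.
        var kit10 = KitServiceLevel(Enumerable.Repeat(0.90, 10).ToArray());

        Console.WriteLine($"Kit of 5 items at 90%: {kit5:P1}");   // ~59.0%
        Console.WriteLine($"Kit of 10 items at 90%: {kit10:P1}"); // ~34.9%
    }
}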
Posted about 11 years ago
Many software companies advertise themselves as Big Data companies, but few have access to datasets as rich and exploitable as the ones that Lokad is processing on a daily basis. We are hiring a Big Data developer. As a software developer at Lokad, you will help us to design, implement and run our Big Data apps.