
Warehouse Design – Data Analysis

Updated: Nov 13, 2020


In each of my previous blogs:

1. Warehouse Design - Block Stacking

2. Shuttle Rack

3. VNA


there has been a section on data analysis. I have stated in each case that "the data analysis stage should not be overlooked, nor should its complexity be underestimated".


Most logistics operations are currently going through a period of change and have been subject to some form of re-evaluation. This may be due to one or more of the following:

  • Covid-19 – resource availability, with staff shielding or self-isolating

  • Covid-19 – social distancing and revised operating methods required

  • Sales volumes increasing / decreasing

  • Increased online sales volumes

  • Impact of Brexit

  • Availability of labour

  • Having to redesign the supply chain, possibly with the addition of an EU hub

Most of the consulting projects I have been engaged in recently are primarily focused on the picking operation. Picking operations generally account for at least 60% of the overall labour hours in a warehouse. In many cases, companies are looking at some of the following alternatives to their current picking operations:

  • Pick to light

  • Autonomous mobile robots (AMRs)

  • Follow robots (collaborative mobile robots)

  • Goods-to-person picking solutions

  • Conveyorised zone picking solutions

  • AS/RS (automated storage and retrieval systems)

  • AutoStore systems

  • Fully automated picking solutions

Whatever the type of operational change being considered, it is very important to have a detailed understanding of your operation supported by data analysis. Any future funding will be based upon some form of forecasted analysis and projected benefits.

This blog aims to guide you at a high level through the various stages I adopt when analysing an operation with data.

Task 1 - Obtain Data

When I engage with a client, the process is broadly similar in most cases. Once we have agreed a scope of work and I have been appointed, I issue data request documents. Each client and project is unique, and the requirements will differ; however, the analysis, validation and modelling stages are very similar, with varying degrees of complexity.

Data request – Typical Data set:

The typical data requirements are likely to include some or all of the following, by file type:

  • Supplier names / addresses / supplier codes

  • Customer names / addresses / customer codes

  • Inbound receipt data

  • SKU / item master file

      • SKU code / description / product group / supplier reference # / dimensions / weights / packaging type / Ti-Hi (cases per layer and layers per pallet) / cases per pallet

  • Picking data

      • Order number

      • SKU code

      • Order qty

      • Order type

          • Full pallet / full layer / case / split case / each pick

      • Customer type

          • Wholesale / Retail / B2C

  • Shipping data – loading times / carrier

  • Labour team / org chart – split by shift / function

  • KPIs (plus actual rates achieved)

  • MHE listing – by type / quantity / specification

Generally, I will perform the analysis for an average week and for a peak week. Ideally these weeks will be recent, so that the operators are familiar with any exceptional circumstances in each given week. The sketch below shows how these weeks can be picked out of the data.
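As a minimal illustration, the peak and average weeks can be identified directly from the picking extract. This assumes a Python / pandas workflow; the file and column names (picking_data.csv, pick_date) are placeholders, as a real WMS export will differ.

```python
import pandas as pd

# Picking extract -- file and column names are assumed for illustration.
picks = pd.read_csv("picking_data.csv", parse_dates=["pick_date"])

# Aggregate pick lines by ISO week.
weekly = (
    picks.assign(week=picks["pick_date"].dt.isocalendar().week)
         .groupby("week")
         .size()
         .rename("pick_lines")
)

peak_week = weekly.idxmax()      # busiest week in the extract
print(f"Peak week: {peak_week} with {weekly.max()} pick lines "
      f"(average week ~ {weekly.mean():.0f} lines)")
```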

I will also ask for future growth projections. These will include assumptions relating to:

  • SKU range growth

  • Stock depth

  • Customer base

  • Order forecasts – lines per order, orders per day, etc

  • Order Type / delivery method

Site Data

In addition to the historic data I also ask for site specific information:

  • Warehouse drawing – preferably in AutoCAD format

  • Warehouse location report – typically extracted from WMS / ERP

  • General site questionnaire – to include details such as operating hours, etc.

Task 2 - Data Validation

Once the data has been received, we can go about validating its accuracy and checking it for completeness. Typically, the data supplied is extracted from the client's systems, and the quality and accuracy can be inconsistent. Some of the data gaps may need to be filled with new data or, where considered appropriate, with averages or assumptions. SKU dimension data is the single most common area of missing data.
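As an example of this validation step, the sketch below checks an SKU master extract for missing dimension fields and, where that approach has been agreed, fills the gaps with product-group averages while flagging the estimated rows. The file and column names (sku_master.csv, length_mm, product_group, etc.) are assumptions for illustration.

```python
import pandas as pd

# SKU master extract -- file and column names are assumed for illustration.
sku = pd.read_csv("sku_master.csv")
dim_cols = ["length_mm", "width_mm", "height_mm", "weight_kg"]

# Quantify completeness per field before deciding how to fill any gaps.
missing_pct = sku[dim_cols].isna().mean().mul(100).round(1)
print("Percent missing by field:")
print(missing_pct)

# Where agreed with the client, fill gaps with the product-group average
# rather than a global average, and flag the estimated rows.
sku["dims_estimated"] = sku[dim_cols].isna().any(axis=1)
sku[dim_cols] = (
    sku.groupby("product_group")[dim_cols]
       .transform(lambda col: col.fillna(col.mean()))
)
```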

Task 3 - Data Analysis

The analysis is typically divided by functional areas of the warehouse:

Inbound Profile

If possible, we group the inbound deliveries into related categories, e.g. supplies from a factory, parcel carrier deliveries, cross dock receipts, etc. These can then be allocated a time profile for modelling purposes, with a load size attached. We will also need to understand the processing times through receipt for each load type.

Example Inbound Vehicle Arrival Graph

TH Comment: in this example my customer was looking to increase cross-docked operations; the capacity of the docks and the marshalling space was a major consideration, as was the site's capacity to handle additional traffic through the security area and the yard.
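To build an arrival graph of this kind, the inbound receipts can be bucketed into hourly slots by load type. A minimal sketch, again with illustrative file and column names (inbound_receipts.csv, arrival_time, load_type, pallets):

```python
import pandas as pd

# Inbound receipt extract -- file and column names are assumed.
inbound = pd.read_csv("inbound_receipts.csv", parse_dates=["arrival_time"])

# Bucket arrivals into hourly slots, split by load type
# (e.g. factory trunk, parcel carrier, cross dock).
profile = (
    inbound.assign(hour=inbound["arrival_time"].dt.hour)
           .pivot_table(index="hour", columns="load_type",
                        values="pallets", aggfunc="sum", fill_value=0)
)
print(profile)   # feeds the arrival graph and the dock capacity check
```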

Pick Data Analysis – By Order Type

Orders can be broken down by type – we will need to know the order cut-off times for each type of order, and the despatch times.

The orders will then be further scrutinised to understand, by order type:

  • number of lines per order

  • the number of items picked per order

  • the number of cases / pallets / layers per order

In the case of Ecom orders, we can analyse this further to identify how many orders are single-line orders. Single-line orders could be batch picked and packed in waves. A sketch of this order profiling follows.
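A minimal sketch of the order profiling, using the same illustrative picking extract; "ECOM" is an assumed label for the order type field:

```python
import pandas as pd

# Same illustrative picking extract as before; column names are assumed.
picks = pd.read_csv("picking_data.csv")

# Lines and units per order, split by order type.
order_profile = (
    picks.groupby(["order_type", "order_number"])
         .agg(lines=("sku_code", "count"), units=("order_qty", "sum"))
)
print(order_profile.groupby("order_type").mean())   # averages by order type

# Share of Ecom orders that are single-line (batch-pick candidates).
ecom = order_profile.loc["ECOM"]
single_line_pct = (ecom["lines"] == 1).mean() * 100
print(f"Single-line Ecom orders: {single_line_pct:.1f}%")
```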

ABC Pareto Analysis - The 80-20 Rule: “The Law of the Vital Few & The Trivial Many”


The Pareto 80:20 rule is based on the observation that, for many events, roughly 80% of the effects come from 20% of the causes. In a warehouse this logic is typically applied to picking analysis. In the example above, 20% of the SKUs account for 80% of the pick lines – these are classified as the "A" items.

Pareto Analysis can be used to identify a variety of key metrics:

  • the sales volumes associated with each individual SKU

  • pallets processed by SKU

  • order lines picked by SKU

  • the sales revenue generated by each SKU

Once we have classified items as A, B and C, we can then consider suitable storage and picking strategies for each group. We might further divide the A items and treat the very fastest as "Super A's", for which a different storage and picking strategy may be appropriate. A minimal classification sketch follows.
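The sketch below classifies SKUs by cumulative share of pick lines, again using the illustrative picking extract. The 80% / 95% cut-offs are common defaults, not fixed rules, and should be tuned per project:

```python
import pandas as pd

# Same illustrative picking extract; column names are assumed.
picks = pd.read_csv("picking_data.csv")

# Pick lines per SKU, ranked fastest-moving first.
lines_by_sku = picks.groupby("sku_code").size().sort_values(ascending=False)

# Cumulative share of total pick lines.
cum_share = lines_by_sku.cumsum() / lines_by_sku.sum()

# Common default cut-offs: A up to 80% of lines, B up to 95%, rest C.
abc = pd.cut(cum_share, bins=[0, 0.80, 0.95, 1.0],
             labels=["A", "B", "C"], include_lowest=True)
print(abc.value_counts())   # number of SKUs in each class
```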

Replenishment

The replenishment process and its frequency are often overlooked. If a pick cannot be completed due to a missed replenishment, or if pickers are redirected around the warehouse, operations become inefficient. Replenishment of fast-moving pick faces in particular is key to maintaining efficient operations.
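The required replenishment frequency for a pick face falls out of a simple calculation. The figures below are purely illustrative:

```python
# Replenishments per day for one pick face -- illustrative figures only.
daily_case_picks = 180      # cases picked from the face per day
pick_face_capacity = 48     # cases held in the face (e.g. one pallet)

replens_per_day = daily_case_picks / pick_face_capacity
print(f"~{replens_per_day:.1f} replenishments per day")   # ~3.8

# Faces needing several replenishments a day are candidates for a larger
# pick face or a dedicated replenishment wave ahead of picking.
```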

Outbound Analysis

To complete the data analysis, we will review the picking of orders by order type and by pick wave. This will also include analysis relating to the required vehicle loading and despatch times. The analysis may extend to include the arrival of items for cross docking if applicable. The despatch operations may require consolidation operations which we will need to account for.

Task 4 - Produce Summary Reports

Once the data analysis stage is complete and we have agreed on any assumptions, we can then create various charts and models to allow for detailed analysis of the current and future operations. The models I usually create include a DILO (Day in the Life Of) model, Material Flow Diagrams and Resource Models.

DILO

The Day in the Life Of model is a graphical representation of the activities in a warehouse and when they occur during a typical day. The model can be extended to show a typical week. It is very quick to create and a valuable tool for showing the operators and stakeholders in a project when different tasks occur. This can highlight potential issues, and also demonstrate an imbalance of activities during the day. Recently, with the huge increase in Ecom sales and the trend towards later order cut-offs and a condensed picking window, the DILO chart has become a valuable tool for demonstrating issues visually.


Note: the above model shows an operation that had 3 pick waves per day; each pick wave had a different cut-off time and a different despatch time window. Matching the available resource to the demand was very challenging.
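A DILO chart of this kind can be drafted quickly with matplotlib. The activity windows below are invented for illustration; in a real model they are driven by the cut-off and despatch times extracted from the data:

```python
import matplotlib.pyplot as plt

# Illustrative task windows as (start hour, duration in hours).
tasks = {
    "Receiving":   [(6, 4)],
    "Put-away":    [(7, 5)],
    "Pick wave 1": [(8, 3)],
    "Pick wave 2": [(12, 3)],
    "Pick wave 3": [(16, 4)],
    "Despatch":    [(11, 2), (15, 2), (20, 2)],
}

fig, ax = plt.subplots(figsize=(8, 3))
for row, windows in enumerate(tasks.values()):
    ax.broken_barh(windows, (row - 0.4, 0.8))   # one bar per activity window
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels(list(tasks))
ax.set_xlim(6, 22)
ax.set_xlabel("Hour of day")
ax.set_title("DILO – activity windows across a typical day")
plt.tight_layout()
plt.show()
```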

Material Flow Diagram

Following on from the data analysis stage, we can now create a Material Flow Diagram, which shows the volumes of units processed through each of the functional areas. This is useful for comparing the current average vs peak weeks, and for comparing future average and peak weeks based upon forecast volumes.
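A minimal sketch of the volume comparison that sits behind a material flow diagram, with purely illustrative volumes and growth assumptions:

```python
import pandas as pd

# Units through each functional area in a current average week --
# all figures and factors below are illustrative assumptions.
flows = pd.DataFrame(
    {"avg_week": [42000, 38000, 45000, 45000, 41000]},
    index=["receipt", "put_away", "replen", "pick_pack", "despatch"],
)

growth = 1.25         # forecast volume growth
peak_uplift = 1.40    # peak week vs average week

flows["peak_week"] = flows["avg_week"] * peak_uplift
flows["future_avg"] = flows["avg_week"] * growth
flows["future_peak"] = flows["future_avg"] * peak_uplift
print(flows.round(0))
```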


Resource Modelling

The resource model takes the outputs from the analysis and, by comparing the volumes to known task rates, identifies the number of people and items of equipment required per task in the warehouse (a simple calculation is sketched after the lists below). The resource model considers:

  • the shifts operated

  • the teams allocated to each shift

  • the number of operators by function, and the number of MHE required by each function

  • the volumes processed by each shift:

      • inbound

      • receipt

      • value add (if required)

      • put away

      • replen

      • pick

      • pack

      • consolidation

      • loading

The model will identify:

  • utilisation of operators

  • volumes processed through each area / shift

  • if for any reason, there is a shortfall in the volumes processed with the resources available

  • potential bottlenecks

  • seasonal trends and variations
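As referenced above, a minimal headcount calculation. The volumes, task rates, shift length and utilisation below are all illustrative assumptions; real figures come from the KPI data and time studies:

```python
import math

# Illustrative volumes (units per shift) and task rates (units per
# operator-hour).
shift_hours = 8
utilisation = 0.85    # planned share of the shift spent on task

tasks = {
    # task: (units per shift, rate per operator-hour)
    "receipt":  (2400, 60),
    "put_away": (2400, 45),
    "replen":   (1800, 40),
    "pick":     (9000, 110),
    "pack":     (9000, 95),
    "loading":  (2600, 70),
}

for task, (volume, rate) in tasks.items():
    operators = math.ceil(volume / (rate * shift_hours * utilisation))
    print(f"{task:9s}: {operators} operators")
```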

TH Comment: the resource model, once created, is a great tool for planning the recruitment of additional staff and the short-term hire of equipment (MHE) if required. Trying to secure short-term hire equipment in November can be problematic.

Task 5 - Alternative Comparison

Once the data analysis and resource modelling stages have been completed, we can then consider some "what if" modelling. What if we change:

  • the layout

  • the equipment used

  • automation of some tasks

  • the volumes processed through different functional areas

The resource model can be modified to show the revised flows and associated volumes through the different processes. At this point we may be able to demonstrate potential efficiency gains. The revised costs can be calculated, and if there is a requirement for capital investment, the Return on Investment timescale can be calculated, as sketched below.
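A minimal payback calculation under illustrative cost assumptions; real values come from the comparison model and supplier quotations:

```python
# Simple payback calculation -- all figures are illustrative assumptions.
capex = 450_000                  # capital investment, e.g. an AMR fleet
current_annual_cost = 1_200_000  # labour + MHE cost, current operation
future_annual_cost = 950_000     # projected cost after the change

annual_saving = current_annual_cost - future_annual_cost
payback_years = capex / annual_saving
print(f"Annual saving: £{annual_saving:,}; payback: {payback_years:.1f} years")
```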

Conclusion

Data analysis is a time-consuming, complex process that cannot be neglected. Any capital investment decisions will be based upon projected savings. If the analysis stage is not completed to a high level of accuracy, there is a risk that incorrect decisions will be made.

The comparison of alternative concepts should, where possible, be conducted by impartial, independent experts. Please do not hesitate to ask for advice. Of course, I would be very happy to provide a quotation for any assistance required.
