REQ 297616 DHL Mothercare ELC Vision

Revision as of 17:10, 26 March 2012 by Anw (v0.1 - Initial draft version)




DHL

Mothercare/ELC Requirements


CALIDUS Vision

26th March 2012 - 0.1
Reference: FS 297616

Introduction

This document presents the Mothercare/ELC requirements for CALIDUS Vision.

Objective

The primary purpose of this document is to record the requirements gathered from DHL at the Mothercare NDC in Daventry on 21-22 March 2012.

The document will set out the premise of the system being created, the modifications required to achieve it, and the development effort involved. Furthermore, all project-related costs will be estimated where they differ from, or extend, the standard estimates.

This document has been written in a manner such that it can be approved by non-technical representatives of DHL whilst also being of sufficient detail to allow the Functional or Technical Specification phase for this area to begin.

Scope and Limitations

This document is based on the documentation provided by DHL, as referred to in the appendices, as well as information gleaned from site visits and workshops with DHL.

  • The changes will be made in the latest version of the CALIDUS Vision system.


Client Requirements

The following is an extraction of the BRD created by the client, listing the critical requirements of the system (referenced as item 1 in Appendix B).

The Mothercare contract requires an LMS that can give them the following:

CRITICAL:

  • Productivity information, in the form of PI rate, at employee level and aggregated by shift, activity, site, made available to the site management team on a live* basis and with minimal intervention (* 15 minute update intervals)
  • Operator level PI rate with enough data integrity that it can be used for performance management purposes, and ability to look at rate over a rolling 6 week period, including the number of hours spent on each activity, to reflect the site's PIP policy as agreed with the Union
  • Ability to print off or export to Excel spreadsheet to keep a record
  • A detailed reporting suite that will allow both sites to look at performance at any level of hierarchy within the site (by activity, shift, sites, date ranges etc)
  • Enquiries to enable a user to select range of activities / dates or shifts / users or groups
  • Graphs of PI rate by employee to be displayed in each area (where each employee is identified by a unique confidential code or number)

NON CRITICAL:

  • Dashboard to display RAG by activity for current shift
  • Under-performers and over-performers report to list everyone who performed under 75% PI and over 100% PI for current shift and/or week-to-date
  • Effectiveness report to give a picture of indirect tasks and hours (vs. direct)
  • Monitoring tools to include:
    • PkMS / Kronos mismatch alerts
    • User swipes report
    • User log-on & log-off report
    • Exceptions alerts (forced moves, skipped picks, task change, nils…)

Scope

In Scope

  • All direct activities currently reported on the MIS for EDC and NDC
  • All indirect activities currently reported on the MIS for EDC and NDC
  • Interface with Kronos and both versions of PkMS

Out of Scope

  • PKMS and Kronos system changes
  • RCS grade jobs are excluded (e.g. admin, planning, superusers)

Measurement Areas

The following areas will be measured:

NDC

Area Activity
1 Intake NDC - Goods In Small Box
NDC - Goods In Large Box
NDC - Goods In Hanging
2 Putaway NDC - PD03
NDC - Direct To Active
NDC - Putaway boxed
NDC - Putaway hanging
3 Replen NDC - Replenishment large
NDC - Replenishment small
4 UK pick NDC - UK Picking BP20 - Large
NDC - UK Picking BP20 - Small
NDC - UK Picking DCR - Large
NDC - UK Picking DCR - Small
NDC - UK Picking DCR - Hanging
5 INT pick NDC - INT Picking BP20 - Large
NDC - INT Picking BP20 - Small
NDC - INT Picking DCR - Large
NDC - INT Picking DCR - Small
NDC - INT Picking DCR - Hanging
6 Mini Club inbound NDC - Mini Club PD06
NDC - Mini Club DTA
NDC - Mini Club Replen
7 Mini Club Pick NDC - Mini Club Picking - Ratio
NDC - Mini Club Picking - Single
8 Mini Club outbound NDC - Mini Club Nesting
NDC - Mini Club Despatch
9 UK marshalling NDC - UK Marshalling large
NDC - UK Marshalling small
10 INT marshalling NDC - INT Marshalling
11 UK despatch NDC - UK Despatch
12 INT nesting NDC - INT Nesting
13 INT despatch NDC - INT Despatch

EDC

Area Activity
1 Intake EDC - Goods In
2 Putaway EDC - Putaway
3 Replen EDC - Replenishment (Average)
4 UK pick EDC - UK Picking Bulk
EDC - UK Picking VNA/Wide
EDC - UK Picking Tote
5 INT pick EDC - INT Pick & stick
EDC - INT Picking small box
EDC - INT Picking large box
EDC - INT Picking FPP
6 UK consolidation EDC - UK Consolidation
7 INT consolidation EDC - INT Consolidation
8 UK marshalling EDC - UK Marshalling
9 INT marshalling EDC - INT Marshalling
10 UK despatch EDC - UK Despatch (Average)
11 INT despatch EDC - INT Despatch (Average)

Overview of Solution

CALIDUS Vision is a productivity and visibility tool, designed to mine and analyse system data.

Data Mining works by connecting to a known system and pulling data from the host database, analysing it and storing the result in the database for viewing within the CALIDUS Vision system.

The host systems in use at the Mothercare and ELC NDCs are PkMS (for WMS activity and system data) and Kronos (for activity and time information).

These 3 databases (2 separate instances of PkMS and 1 of Kronos) will be mined for data.

The PkMS data mines will be completed by connecting directly to the databases using an OLEDB connector and mining the data direct from the tables.

The Kronos data mine will be completed by importing a flat file from Kronos.

Each of these data mining processes will be created within CALIDUS Vision.

Additionally, several new screens will be created, and existing screens amended, based on the Critical list of changes above.

Each item will be discussed in detail in the following sections, along with any risks and technical assumptions.

Detailed Notes on Data Mining

This section focuses on the detailed mapping that occurred at the site. Where possible, areas have been mapped and queries created to extract the data as required. Note that not all areas have yet been mapped - further analysis work is required to finalise this.

The connection to the AS/400 databases will be through a tool used by the existing MIS system - HiT OLEDB AS/400 (website: HITSW.COM).

This is required to be installed as part of the CALIDUS Vision implementation on the server being provided by DHL. This will also be required by the development team within OBS. There is a license cost for this product, which will be part of the DHL project costs. There will be a purchase, support and potentially a developer cost per product - this cost information must be confirmed by the DHL IT team.

  • license/support per year: $500
  • Purchase cost: Unknown.
  • Developer cost: Unknown.

The server running the data mine will also require:

  • Microsoft .NET framework.
  • Microsoft IIS.
  • Oracle MySQL enterprise database.

The costs for the enterprise database have been covered in detail within previous estimates and are not covered again here.

The data mining programs will be written in Microsoft Visual Studio .NET 2010, as the driver is proven to work in this programming environment. Furthermore, the existing mechanism for writing data mining programs (through Windows Scripting Host) may not be fast enough, given the quantity of data to be mined and the potential slowness of some of the queries (see later for details). OBS have extensive experience of coding in .NET, but have no CALIDUS Vision data mining programs written in this language. The development time must therefore be extended to account for .NET development; it cannot be based on the existing VBScript developments.

The PkMS databases are not accessible from outside the DHL Mothercare network, so OBS will not be able to connect to a test instance of the DB to test the data mining code directly from the development environment.

In order to develop this solution effectively, OBS must either:

  • Create a DB2 database within the OBS domain, copying the structure and sample data of the existing system
  • Connect to a DB2 database as above, made accessible within the wider DHL network
  • Create a copy of the structure and sample data within another database type (e.g. MySQL).

The third approach will be followed, as the other two are impractical.

This will allow OBS to test functionality against similar tables and data within the development environment. However, the final testing must be completed on the destination machine itself, with a version of the product built specifically for the OLEDB driver installed on that machine.

Note:

The data cannot be extracted by the method agreed in previous discussions with the operation, nor in the way that CALIDUS Vision expects.

For example, the transaction table is the core of the records used to calculate the productivity. The existing MIS system extracts data by date from the Task Header and Detail records based on the date completed, not the transaction table. Each activity is extracted from different tables in separate queries.

Therefore, multiple table reads and SQL statements must be done for each extract based on the individual activity and area.

This affects the speed of the requirements analysis (i.e. analysing 8 streams rather than 1) and the speed at which these extracts will be processed.

Note:

Furthermore, the existing tables are not optimised for reading by Date, resulting in slower extraction. The data mining may not run as fast as was initially expected. This can only be evaluated once the system is built. A schedule (i.e. the rate at which the data mine will run) will be set when this speed can be evaluated.

Note:

As the data mine must now pull data on a timed basis from several tables separately, the existing mechanism of checking from the last data mine date and time to the current time must be changed to a stricter schedule.

For example, the first task extract might cover 08:00:00 to the current time (potentially 08:15:00). By the time the second table is mined, the current time will have moved on (for example to 08:15:20). This would result in data being mined across several areas with different parameters, showing unexpected results. The data mine must therefore hold to a schedule, i.e. from the last data mine (e.g. 08:00:00) to the next scheduled break point (e.g. 08:15:00), regardless of the current time.

Note:

The core systems will be unavailable at certain times in the day, for backup purposes. The data mining process must take this into account, when building the schedule for extraction.

If the core database is unavailable when attempting to connect to the database, the extraction will be abandoned for that run. Only when the database is available will the interval be calculated, based on the number of 15-minute periods that can be mined from the last successful data mine, the schedule end time not exceeding the current time.

For example, suppose the last successful data mine was for data up until 08:00:00 and the current time is now 08:20:00; if the database is unavailable, the mining will be abandoned. The next run, at current time 08:35:00, successfully connects. The number of 15-minute intervals will be calculated, ensuring that the calculated end time does not exceed the current time. In this example, the resulting interval will be 30 minutes (two 15-minute periods), giving an end time of 08:30:00. The next interval, 08:45:00, is not allowed, as this exceeds the current time.
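The interval calculation described above can be sketched as follows (a minimal illustration in Python; the production data mine will be written in .NET, and all names here are illustrative):

```python
from datetime import datetime, timedelta

INTERVAL = timedelta(minutes=15)

def next_mine_window(last_end, now):
    """Return (start, end) for the next data mine, or None if no full
    15-minute period has elapsed since the last successful mine.

    The end time is advanced in whole 15-minute periods and must never
    exceed the current time.
    """
    periods = (now - last_end) // INTERVAL
    if periods < 1:
        return None  # nothing to mine yet
    return last_end, last_end + periods * INTERVAL

# Example from the text: data mined up to 08:00:00; the 08:20:00 run found
# the database unavailable; the retry at 08:35:00 mines two whole periods,
# ending at 08:30:00 (08:45:00 would exceed the current time).
window = next_mine_window(datetime(2012, 3, 26, 8, 0),
                          datetime(2012, 3, 26, 8, 35))
```

On success, the end time of the window would be stored as the new "last successful data mine" point for the next run.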

The system will keep 48 hours of raw data (or more) so that figures can be recalculated as required. To speed up the process, the system will ensure that the calculations only recalculate the last 48 hours (more accurately, today and yesterday).

In all the data extracts below:

  • Warehouse - will be defaulted to "MOT" (for Mothercare NDC) or "EDC" (for the ELC DC).
  • Owner - will be mapped to "UK", "INT" (International), "MINI" (Mini Club) or Space-filled for tasks that are not related to these others (e.g. combined UK and International Receipt and Putaway).

System Data

System Data is seen to be data that shows the number of outstanding tasks of each type in the system at the time that the data mine occurred (e.g. 34 pallets awaiting putaway, 50 orders sent to pick, 350 individual pick tasks awaiting picking, etc).

Base activity figures (i.e. not broken down to large/small/hanging, UK/International etc) have little meaning to the operation. These figures will not be mined from the system, but will instead be calculated by CALIDUS Vision from the Extended Activity information.

The base task types that will require supporting are:

  • Intake = RC
  • Putaway = PU
  • Replen = RP
  • Pick = PP
  • Marshalling = MA - this is the move at the end of pick to the assigned marshalling bay. This is done by a separate user, not the picker.
  • Despatch = DE - Loading and Despatch
  • Nesting = NE - consolidation of pallets in marshalling across orders (INT and MINI only)
  • Consol = CO - consolidation of pallets in marshalling

Note that italicised tasks already exist within the system. Those not italicised will be added.

Inbound Shipments figures (e.g. preadvices in the system for today, and a count of SKUs and Total Qty) are not applicable to this system, as the data held by PkMS is not scheduled. These figures, if produced, would be meaningless to the operation.

Putaway and Replenishment figures will be mined with one query.

Pick tasks will be mined with another.

Marshalling, Nesting, Despatch and Consolidation system data does not exist within PkMS - these are ad-hoc activities and the number or quantity of outstanding tasks cannot be found.

The queries to extract this information have not yet been created - the DHL IT team will write the SQL for this system data extraction and provide this to OBS.

Note on picks Picks can be partially picked, then the user goes back to complete the task at a later time. For example:

  • The original number of pick tasks for an order is 4
  • The user picks all of one of the pick tasks and only part of the second.
  • The count of the number of pick tasks at that point will be 3, as there is still some of 2 left, which will count as a task.
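The outstanding-task count in this example can be sketched as follows (illustrative Python only; the field names are assumptions, not the PkMS schema):

```python
# A pick task remains outstanding until its full quantity has been picked.
def outstanding_pick_tasks(tasks):
    return sum(1 for t in tasks if t["picked"] < t["ordered"])

# The example from the text: 4 original tasks; the user fully picks the
# first and part-picks the second, leaving 3 outstanding tasks.
tasks = [
    {"ordered": 10, "picked": 10},  # task 1 - complete
    {"ordered": 10, "picked": 4},   # task 2 - part-picked, still counts
    {"ordered": 10, "picked": 0},   # task 3
    {"ordered": 10, "picked": 0},   # task 4
]
count = outstanding_pick_tasks(tasks)
```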

Note: The Total Number of Orders displayed on the System Information screen will be calculated from the Order Status Detail feed seen later in this document.

The Picking Containers information displayed on this screen is not applicable - this has been replaced with new tasks (Marshalling, Loading, Despatch, Nesting and Consolidation).

Standing Data - Users

The users from the two PkMS systems will be mined for the User ID and Name. The data was completely mapped.

WCS Alerts (Exceptions)

This file was mapped to 'Skip Pick' functionality within PkMS. This functionality exists within Mothercare and ELC and can be extracted in an identical way.

All fields required for this functionality were mapped with no issues.

It was noted that the latest version of the WCS Alerts screen (as yet unreleased to production) would be used, to allow the user to click on an exception to see the details.

WMS Order Status

These CALIDUS Vision screens are used to see an overview of the orders available in the core systems, and to see a 'drill-down' of the data, showing the individual orders at each status.

These flows were mapped in detail.

Slight changes will be required to the screens to support the statuses within the core systems, as follows:

Status  Description              Mapped to
10      Unselected/Avail         Available
15      Prewaved                 (New Status)
16      Awaiting Replen          (New Status)
20      Released for Pick        Pick Pending
35      In Picking               Pick Pending
40      Pick Pack Complete       Picked
55      Marshalled               (New Status)
58      Marshalling in progress  (New Status)
70      Loaded                   (New Status)

No Despatched figures will be sent across.
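Based on the status table above, the mapping can be sketched as follows (illustrative Python; statuses outside the table, such as Despatched, are simply not sent across):

```python
# Mapping of PkMS order statuses to CALIDUS Vision statuses, per the
# table above. Statuses marked "new" have no existing CALIDUS Vision
# equivalent and will be added.
STATUS_MAP = {
    "10": "Available",
    "15": "Prewaved",                 # new status
    "16": "Awaiting Replen",          # new status
    "20": "Pick Pending",
    "35": "Pick Pending",
    "40": "Picked",
    "55": "Marshalled",               # new status
    "58": "Marshalling in progress",  # new status
    "70": "Loaded",                   # new status
}

def map_status(pkms_status):
    # Unmapped statuses (e.g. Despatched) return "" and are not sent.
    return STATUS_MAP.get(pkms_status, "")
```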

The dates against the figures will be the Order Created date only - no booking date exists within PkMS for this purpose. Note that this could be modified on reading the data, by adding a number of days (e.g. 3) to the order creation date, to make visibility easier within CALIDUS Vision.

Approximately 150,000 orders will be pulled (based on the Mothercare live system on the day).

A new field 'Pick Wave' will be added to the existing Order Status Details table and screen, to allow the user to see the Wave under which the order was sent to pick.

The existing Order Type (Priority) and Haulier fields will be used to map the Order Type and Size respectively. The Owner will be used to map the general type (UK, INT or MINI).

Warehouse Map

The Warehouse Map is used to define the total of locations within areas and aisles, and is used to show the usage of the areas and aisles within the warehouse (i.e. percentage full).

It is possible to map the Pick Faces, Marshalling Lanes, Marshall Locations and Bulk Storage areas with several small queries, all of which have been created and mapped.

The number of cases in locations cannot be easily mapped, but the locations can be seen to be full or empty, allowing the existing screens to work without modification.

The following zones were identified:

  • Zone H/F - Bulk Locations
  • Zone I/P - Pack and Hold (Labelled as Marshalling Locations within CALIDUS Vision)
  • Zone D - Marshalling Locations
  • Zone H/F/X/Z/K/L (Excluding specific H/F Bulk locations) - Pick Faces.

Extended Activities Data

This activity feed forms the bulk of the information required by CALIDUS Vision to calculate the productivity of the users.

Each of the base task types will be extracted as follows:

  • Putaways/Replens: From the Tasks tables
  • Receipts: From the Transaction Table
  • Picks: From the Transaction tables linked to Order Header and Detail
  • Nesting: As yet unknown - see notes below.
  • Consolidation: As yet unmapped, as this is a function in EDC only.
  • Marshalling: Split into two extracts, one for UK and one for INT, linking from Transactions to the Carton tables.
  • Despatch: As yet unmapped.

Notes:

  • The flows for the Mothercare NDC were extensively mapped, but were still not complete at the time of writing.
  • The ELC DC flows have not been mapped at all.
  • It has been shown that extracting Nesting information without checking for INT or MINICLUB is fast, whereas finding this information through the database is extremely slow. It is recommended that this is either excluded from the extraction entirely, or the extraction does not attempt to split into UK or MINICLUB. If this is required, this extract will have to be evaluated when the datamine is built, to ensure that the schedule of runs is sufficiently long to allow this extract to complete.

Kronos Data Feeds

Note: As yet, examples of the two Kronos data feeds have not been provided and therefore have not been mapped. An example of each data feed needs to be provided by the DHL IT team so that analysis can be completed. For estimation purposes, this document assumes that the feeds will be text and fixed-format. If this is different, further costs may be accrued (for example, if the data feed is in MS Excel format, MS Office must be installed on the server to aid in extracting the file).

Examination of a similar file used within the existing MIS system shows that the data comes across in a raw format, showing only the scan codes that have been created to identify the specific activities within the warehouses. These codes will be entered into a cross-reference table within CALIDUS Vision.

The CALIDUS Vision table will be extended to include the new Kronos ID and indexed for cross-reference.

This lookup table will be used to stamp the Owner and User Activity (the task on which the current user is working).

The Date and Time stamps against the Kronos data will be used to provide the start and end time of tasks mined from PkMS from the Extended Activity feed above.

Note: It is core to the CALIDUS Vision system that the timestamps against the core systems (PkMS and Kronos) are identical. As the systems are not currently checked for this, CALIDUS Vision will include a simple '+/-Number of Seconds' parameter that will be user-maintainable. This will identify the number of seconds that the PkMS systems' time differs from the Kronos system time (one for each version of PkMS). If the core systems are synchronised with each other, this will not be necessary.
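The offset adjustment described in this note can be sketched as follows (illustrative Python; the offset values and the sign convention are assumptions, and the parameter would be user-maintainable within Vision):

```python
from datetime import datetime, timedelta

# User-maintainable parameters: seconds by which each PkMS system's clock
# differs from the Kronos clock (one per PkMS instance). Values here are
# purely illustrative; zero means the clocks are synchronised.
PKMS_CLOCK_OFFSET_SECONDS = {"MOT": -42, "EDC": 17}

def to_kronos_time(pkms_timestamp, warehouse):
    """Normalise a PkMS timestamp onto the Kronos clock."""
    offset = PKMS_CLOCK_OFFSET_SECONDS.get(warehouse, 0)
    return pkms_timestamp + timedelta(seconds=offset)
```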

Data Mining Process

The new datamining processes will be written in Visual Studio .NET 2010, utilising the HiT OLEDB AS/400 connector to access the core PkMS databases.

It is not yet known whether the 32-bit or 64-bit driver will be used (both exist), as this depends on the configuration of the server on which the system will reside.

The configuration of the AS/400 connector will be provided by the DHL IT team, as this is already in use within the operation for the existing MIS system.

Note that this program will be written with extensive logging and error checking built into the code. This is because the final test build will be:

  • Untested against the AS/400 database
  • Timed to ascertain the respective speeds of each area of a data mine (the data extract, the loading of the data, the analysis of the data), to ensure that the fastest processing is being used in all areas.

To aid in the process of debugging, the process will:

  • Write a text log file for each run, recreated each time.
  • Also write all logging information to the Vision database, for persistent logging.

In order to prevent the log files from building up over time, while retaining the maximum amount of log data for debugging purposes, the process will write to these log tables cyclically, ensuring that there are never more than X,000 records stored at any time. This limit will be controlled by a parameter within Vision.
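The cyclic trimming can be sketched as follows (illustrative Python; a list stands in for the Vision log table, and MAX_LOG_RECORDS stands in for the Vision parameter controlling the limit):

```python
# Illustrative stand-in for the Vision "X,000 records" parameter.
MAX_LOG_RECORDS = 5000

def write_log(log_table, message):
    """Append a log record, then trim the oldest entries so the table
    never holds more than MAX_LOG_RECORDS records."""
    log_table.append(message)
    del log_table[:-MAX_LOG_RECORDS]

log_table = []
for i in range(6000):
    write_log(log_table, "run message %d" % i)
# The table now holds only the most recent MAX_LOG_RECORDS messages.
```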

Main Process

The main process flow for the Data mining process will be as follows:

  • Connect to Vision database and get parameters for extracts.
  • Calculate interval parameter.
  • Get list of extracts to attempt.
  • Complete the Data Extracts for:
    • Mothercare NDC
    • ELC DC
    • Kronos 48-hour feed.
    • Kronos Incremental Times Feed.
  • Analyse the data and produce the productivity figures.

Mothercare NDC Extract

The Mothercare data extract will be as follows:

  • Connect to Core Database (through AS/400 OLEDB connector).
  • If not connected, log issue and move to next extract.
  • If connected:
    • Extract Standing Data:
      • Users
    • Extract System Data (Tasks Outstanding at this time):
      • Putaways (by detailed activity)
      • Replens (by detailed activity)
      • Picks (by detailed activity)
      • Order Status Details
      • Warehouse Map details (Area/Aisle Usage information)
    • Summarise Base Task information.
    • Summarise Order Status information for the Order Status Screens and the System Overview (total of orders available for pick)
    • Extract Detailed Activity Information:
      • Receipts
      • Putaways/Replens
      • Picks
      • Nesting
      • Marshalling (UK)
      • Marshalling (INT)
      • Despatch
    • All data will be stamped when extracting, with:
      • Shift (based on the task time)
      • Base Activity (Receipt, Putaway, Replen, etc)
      • Extended Activity (Small/Large/Hanging/Ratio, BP20/DCR/PD06)
      • Owner (UK, INT, MINICLUB, None)
      • Warehouse (MOT)

ELC DC Extract

The ELC data extract will be as follows:

  • Connect to Core Database (through AS/400 OLEDB connector).
  • If not connected, log issue and move to next extract.
  • If connected:
    • Extract Standing Data:
      • Users
    • Extract System Data (Tasks Outstanding at this time):
      • Putaways (by detailed activity)
      • Replens (by detailed activity)
      • Picks (by detailed activity)
      • Order Status Details
      • Warehouse Map details (Area/Aisle Usage information)
    • Summarise Base Task information.
    • Summarise Order Status information for the Order Status Screens and the System Overview (total of orders available for pick).
    • Extract Detailed Activity Information:
      • Receipts
      • Putaways/Replens
      • Picks
      • Consolidation
      • Marshalling (UK)
      • Marshalling (INT)
      • Despatch
    • All data will be stamped when extracting, with:
      • Shift (based on the task time)
      • Base Activity (Receipt, Putaway, Replen, etc)
      • Extended Activity (Bulk, VNA, Tote, Pick & Stick, Small/Large, FPP)
      • Owner (UK, INT, None)
      • Warehouse (EDC)
    • When all complete, store new interval to database.

Kronos 48-hour Feed

The Kronos 48-hour data extract will be as follows:

  • Check inbound folder for new extract file
  • If not exists, log issue and move to next extract.
  • If exists, move file to working folder.
  • Pre-process file, to remove any unnecessary information (repeating headers, etc)
  • Remove all stored Kronos data from Vision table.
  • Import data to table, calculating any time difference from EDC/NDC to the Dates/Times and storing this in additional fields.
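Since the feed format is still unconfirmed, the pre-processing step can only be sketched. The fragment below (illustrative Python) assumes a fixed-format text file, per the estimation assumption above; the field positions and the header marker are assumptions, not the real Kronos layout:

```python
# Strip blank lines and repeating header lines before import.
# "USERID" as a header marker is an assumption for illustration.
def preprocess(lines):
    return [ln for ln in lines if ln and not ln.startswith("USERID")]

# Fixed-format field positions - illustrative only.
def parse_record(line):
    return {
        "user_id": line[0:8].strip(),
        "scan_code": line[8:16].strip(),
        "timestamp": line[16:30].strip(),
    }

raw = [
    "USERID  SCANCODE        TIMESTAMP",   # repeating header - dropped
    "U001    PICKLG  20120326081500",
    "U002    PUTAWY  20120326081512",
]
records = [parse_record(ln) for ln in preprocess(raw)]
```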

Kronos Incremental Times Feed

The Kronos Incremental Times data extract will be as follows:

  • Check inbound folder for new extract file
  • If not exists, log issue and move to next extract.
  • If exists, move file to working folder.
  • Pre-process file, to remove any unnecessary information (repeating headers, etc)
  • Import data to table (keeping 48 hours).

The users table will be updated with the Kronos User ID and the current swiped activity.

Analysis of Data

At this point:

  • All Kronos data for the last 48 hours will be present
  • All transactional information will be present for at least the last 48 hours.
  • All data will be stamped with:
    • Shift
    • Base Activity
    • Extended Activity
    • Owner
    • Warehouse

The process will merge the two core data tables (Base System and Kronos), ordering the resulting data by Date and Time. The data will be totalled for each:

  • Warehouse
  • Employee
  • Owner
  • Shift
  • Extended Activity

calculating:

  • Number of Tasks
  • Sum of Time Taken
  • Sum of Quantity on Tasks

When complete, these figures will then be totalled into the Base Task Details.
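The totalling step above can be sketched as follows (illustrative Python; the record field names are assumptions standing in for the merged Base System / Kronos data):

```python
from collections import defaultdict

def summarise(records):
    """Total merged activity records by Warehouse, Employee, Owner, Shift
    and Extended Activity, summing tasks, time taken and quantity."""
    totals = defaultdict(lambda: {"tasks": 0, "time": 0, "qty": 0})
    for r in records:
        key = (r["warehouse"], r["employee"], r["owner"],
               r["shift"], r["extended_activity"])
        totals[key]["tasks"] += 1
        totals[key]["time"] += r["seconds"]
        totals[key]["qty"] += r["qty"]
    return dict(totals)

records = [
    {"warehouse": "MOT", "employee": "E1", "owner": "UK",
     "shift": "AM", "extended_activity": "UK Picking DCR - Small",
     "seconds": 120, "qty": 6},
    {"warehouse": "MOT", "employee": "E1", "owner": "UK",
     "shift": "AM", "extended_activity": "UK Picking DCR - Small",
     "seconds": 90, "qty": 4},
]
totals = summarise(records)
```

These per-key totals would then be rolled up again into the Base Task Details.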

The Analysis process will then create any cross-reference lookups or productivity settings required if new extended tasks have been added to the system.

Database Modifications

New data tables are required to store:

  • Kronos Data
  • Mothercare/ELC data
  • Kronos Activity Code Cross References

The existing Activity tables (emp_details and emp_details_ext) will be modified to store:

  • Shift

The existing Users table will be modified to store the Kronos User ID.

New rules are required to control:

  • Kronos Time Difference (NDC)
  • Kronos Time Difference (EDC)

New database procedures and views are required to calculate:

  • Activity summary information

Screens

The screens available for use for the operations based on the extracted data above will be:

  • Order Status 1 and 2
  • Time to Completion for all extractions of the following task types:
    • Putaway
    • Replen
    • Pick
  • Exceptions (WCS Alerts)
  • Extended Task Enquiries and Extractions
  • Base Task Enquiries and Extractions
  • Aisles Usage
  • Areas Usage
  • System Overview
  • Single and Summary screens for base tasks, showing:
    • Lowest 10 by Productivity
    • Highest 10 by Productivity
    • Current by Productivity vs Target
    • Daily by Productivity vs Target
  • Users and Users Details screens

It was noted that the latest version of the WCS Alerts screen (as yet unreleased to production) would be used, to allow the user to click on an exception to see the details of the exception, not just the summary.

The existing Admin and Settings screens will be used for all maintenance and will require no modifications.

Additional Menu Items

Menu items will be created for each of the Extended Detail Extractions, as shown in the Client Requirements section. One of the following screens will be added to the Extended Extractions menu for each extended type:

  • A Single Summary screen showing:
    • Lowest 10 by Productivity
    • Highest 10 by Productivity
    • Current by Productivity vs Target
    • Daily by Productivity vs Target
  • Time to Completion

Warehouse and Shift Summary Screens

New versions of the existing Warehouse Summary, Warehouse Weekly Summary and Shift Summary screens will be created to show Extended activity tasks rather than the existing base activities. These will also display the Owner Code on the screens.

Extended Detail Enquiry / Extended Productivity Enquiry

These screens will be modified to allow the user to select 'Shift' as a selection type from the existing 'Level' drop-down list.

If selected, a sub-list will be shown below this, allowing the user to select from the available shifts. The shift will be defaulted to the User's default Shift value, although this can be changed.

When selected, all productivity information for users that exist within the shift start and end times will be displayed.

These screens will also be modified to include the Owner Code in the grid display. Owner will be added to the selection criteria, if the user is allowed to view multiple owners, and allow the users to select from a list of available owners.

The existing functionality in the screen, allowing the users to:

  • select a date range of data to be reported
  • export the list to CSV or Formatted Output
  • select each Activity (extended extraction) through the Task Type drop-down list

will fulfil the following criteria:

  • Productivity information, in the form of PI rate, at employee level and aggregated by shift, activity, site, made available to the site management team on a live* basis and with minimal intervention (* 15 minute update intervals)
  • Operator level PI rate with enough data integrity that it can be used for performance management purposes, and ability to look at rate over a rolling 6 week period, including the number of hours spent on each activity, to reflect the site's PIP policy as agreed with the Union
  • Ability to print off or export to Excel spreadsheet to keep a record
  • A detailed reporting suite that will allow both sites to look at performance at any level of hierarchy within the site (by activity, shift, sites, date ranges etc)
  • Enquiries to enable a user to select range of activities / dates or shifts / users or groups

Note: The live basis of reporting is based entirely on the data mining refresh rate.

Area User PI Graph

A new graph will be created, allowing the following:

The user will be allowed to select an activity from the drop-down list. This will be populated with the available activities, as shown in the Client Requirements section. The user will also be able to select an Owner (from UK, INT, MiniClub or Blank), if they have not been set up with a default owner. The user will also be able to select a Shift, defaulting to the user's default shift.

Once selected, a bar chart of the average rate of the Activity within the warehouse will be shown. Two lines on the graph will indicate the Target and Minimum rates required for the activity and owner. The bar will be vertical, the y-axis showing percentage of Target. The bar will be RAG-coloured, red for below minimum, amber for between minimum and target and green for exceeding target.

A similar bar chart will be shown below that, broken down to each employee within the selected shift. Each bar will be marked with an Employee 'Code', which is representative of the user; this is expected to be the Kronos ID. All other functionality of the bar above will be maintained (Min and Target lines, RAG-colouring, etc.).
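The RAG rule described above can be sketched as follows (illustrative Python; the 75% and 100% thresholds are assumptions taken from the under-/over-performers requirement, and the actual minimum and target rates are maintained per activity and owner):

```python
# Red for below minimum, amber for between minimum and target,
# green for meeting or exceeding target. Rates are percentages of target.
def rag_colour(rate_pct, minimum_pct=75.0, target_pct=100.0):
    if rate_pct < minimum_pct:
        return "red"
    if rate_pct < target_pct:
        return "amber"
    return "green"
```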

This will fulfil the following criteria:

  • Graphs of PI rate by employee to be displayed in each area (where each employee is identified by a unique confidential code or number)


Appendix A: Table of SCRs and Ballpark Estimates

Developments Cost

SCR# | System | Area | Description | Estimate (Days) | Notes
1 | System Affected | Area Affected | Description of change | <Estimate> | Any notes, numerically listed


Development Plan

SCR# | System | Area | Description | Estimate (Days) | Notes
1 | System Affected | Area Affected | Description of change | <Estimate> | Any notes, numerically listed


Notes:

  1. Any high level ballpark estimates for development are based on the basic information provided and are subject to detailed design and creation of an SCR.
  2. <Further notes, referenced in the notes column of the grid above>


Appendix B: Document References

B.1 References

Ref No | Document Title & ID | Version | Date
1 | DE05 - Business Requirements Specification CV Mothercare V 0.03.doc | 0.03 | 15/03/2012


B.2 Glossary

Term or Acronym Meaning
Ad Hoc A task instigated on the device (spec. Ad Hoc Pallet Move), rather than a task instigated from the WMS and Stock Control.
Advice Note Number An external reference linked to a Goods Receipt.
Aisle A component of a location; usually a space through rows of racking or storage locations; a collection of locations;
Anchor Point A starting location for a search for a suitable storage location; auto-putaway location suggestion start point.
Area A collection of aisles; an area in the warehouse for a particular purpose.
Batch A production batch of a product; a quantity of product that is considered to have the same characteristics;
Bay (Warehouse) A physical loading or unloading point for the warehouse.
Bay A component of a location; usually a space between uprights in racking, comprising several levels (horizontal beams).
Block Stack A stable stack of pallets.
Bulk Bulk storage; Usually full-pallet storage areas, racked or stacked.
Cancellation The facility to cancel a task due to some problem, identified by the user when performing the task.
Check Digit A short code, usually randomly generated and stored against a location, used to help identify that a user is at the right location before they proceed with a warehouse task.
CSV Comma-separated values; a text file with multiple rows and values, usually separated with commas.
C-WCS CALIDUS WCS, the name of the OBS Logistics Warehouse Control system
C-WMS CALIDUS WMS, the name of the OBS Logistics Warehouse Management system
Dead Leg A movement of a truck without a pallet; wasted resource.
Despatch The final physical stage of an order; handover of goods to the haulier.
Drive-In A drive-in location, typically multi-level, multi-deep location.
Dual Cycling Processes utilizing P&D locations for interleaving tasks in and out of specific areas, reducing dead leg movements.
Exchange Specifically Pick Exchange or Task Exchange. The process of allowing a user to select a different pallet in a multi-pallet location and exchanging the expected pallet for this one. If the pallet is planned for another task, task exchange will complete this task instead of the expected one first. If the pallet is not planned, pallet exchange will swap the pallet (if suitable).
GR; GRN Goods Receipt; Goods Receipt Number or Note
High Bay Typically tall (greater than 5 level) racking, usually full pallet storage, usually Narrow Aisle.
JIT Just In Time; processes designed to trigger at the last instant.
KPI Key Performance Indicator.
Level A component of a location; usually the vertical compartments of an area, delineated by horizontal beams.
Loading The act of loading pallets onto a vehicle.
Location A uniquely identified space in the warehouse for storage of product. There are many types, most commonly Floor locations (for example, Marshalling, Inbound), Racking or Bulk Storage Locations and Pick faces.
Manifest The contents of a vehicle or container.
Marshalling The act of bringing pallets for an order or load together; an area to do so.
Multi-deep A location with 2 or more pallets stored sequentially i.e. only one can be accessed at a time.
NA Narrow Aisle; usually any area in the warehouse that is restricted access due to space limitations. Narrow Aisles have associated P&D locations.
P&D Pick-up and Drop-off locations; locations used to control the handover of pallets between distinct areas, for example between chambers and the wider area of the warehouse.
PI; Perpetual Inventory The act of continuously checking locations in a warehouse, identifying and correcting product quantity issues. Usually used in Bulk environments rather than Pick Faces. In pick faces, this process is called Residual Stock Balance and usually takes place after picking from a pick face.
Pick Face A location designed for picking part of a pallet of stock. Usually a low- or ground-level location.
Pick List (order) The instructions to pick pallets or cases from locations; the paper report associated to this; the stage of preparing these instructions; the sending of these instructions to WCS.
PO Purchase Order.
Pre-advice; Goods Receipt Pre-advice An advanced notification of what is being received. Part of a manifest. Pre-advices can be stock and quantity, or individual pallet level.
Putaway The physical move of a pallet to a storage location as a result of receiving it into the warehouse.
RAG Acronym for Red/Amber/Green, a traffic light colouration system depicting (in sequence) Errors, Warnings or Informational messages. Usually used in operational monitoring to effectively display when certain processes are not working as expected.
RDT Radio Data Terminal.
Replen; Replenishment The act of moving product (usually a pallet) from bulk storage to a pick face.
Reposition The facility to change the location of a movement or putaway when at the final destination, due to some issue discovered when performing the task.
RF Radio Frequency; An RF device is an RDT, typically used by CALIDUS WCS for executing warehouse tasks.
SCR; CR Software Change Request.
Short Pick The process of not fulfilling an order due to failure to identify sufficient product when picking. May also be used as a term to indicate Short Allocation.
SO Sales Order.
Truck Types Plant; Mechanical Handling Equipment. For example, Reach trucks, Counter-balance trucks, pallet riders, etc.
UOM Unit of Measure.
WA Wide Area; usually any area in the warehouse that is not restricted access due to space limitations, for example, floor areas, not Narrow Aisle.
WCS Warehouse Control System
WMS Warehouse Management System


B.3 Authorised By


Rev1

Rev1 Title
_____________________________