FS 336010 SCR-332569-2 GK PvA DiPS-ePOD Interface
Greene King
DiPS-ePOD Interface for Load and Order Sequencing
CALIDUS EPOD
23rd August 2016 - 1.0
Reference: FS 336010 SCR-332569-2
Functional Overview
Client Requirement
Orders will be exported from Aurora into DiPS for route planning. DiPS is a route optimisation tool that arranges the drops into optimal routes. Once planned within DiPS, the routed orders with load details will be exported back into Aurora through an existing interface containing only limited details. At the same time, DiPS will also generate a new, fully-populated CSV file for import into CALIDUS EPOD.
The file will be generated from DiPS in mid-afternoon for the following day; the routes will then be finalised and sent through to TomTom.
The file will contain all fields that can be exported from DiPS, to minimise any additional work required for any future requirements.
The file will contain either all routes for an individual depot, or all routes for all depots. Routes must be assigned to a vehicle and driver, and must contain postcodes and/or valid Lat/Long co-ordinates.
An example full file export has been provided and is referenced in Appendix B.
This file will be generated to a local or network folder accessible by CALIDUS EPOD - this can be through an FTP account or file sharing. The file is expected to be named uniquely, as a variant of DIPS2AXS.TXT. Note that the name must be unique per export or per depot, or the file must be created under a different name and then renamed for CALIDUS EPOD to pick up, to prevent partial uploading of the file.
CALIDUS EPOD will be modified to automatically import the file through a process triggered to run every few minutes. This will be a bespoke process written specifically for this purpose.
Additional orders, cancellations of orders and changes to orders already received throughout the day will necessitate changes to the plan. Aurora and DiPS will follow the same process: orders will be sent across to DiPS and route planned there. When exported from DiPS, the same import file will be created, containing all of the orders and loads.
When uploading files with orders and loads that have already been created, the process will update any existing Loads and Orders with the information from the new import file.
Solution Overview
The import process will parse the file and create Loads and Jobs from the information provided.
The first depot job on a load will be marked as a loading job, and all subsequent depot jobs on the load will be marked as unloading jobs. The Order Number (Job Code) for these jobs will be generated from a prefix of "LOA" or "UNL", plus the Load ID and a sequence number.
Break jobs on a load will be generated as normal jobs with the customer 'BREAK'. They will be given a job address based on the BREAK customer, but with the Lat/Long set to that of the last job processed on the load. The instructions will contain the break length. The Order Number (Job Code) for these jobs will be generated from a prefix of "BRK", plus the Load ID and a sequence number.
Note: DiPS can provide a planned driver break on the inbound file between the execution of orders, but not during the execution of an order. This solution expects that breaks will be created as delivery jobs with specific default information. These will then behave identically to delivery jobs within TomTom WEBFLEET.
No functionality is included here to make Break jobs visible within CALIDUS ePOD specifically as Break job types or to handle them differently to collections or deliveries on the mobile device. If the full CALIDUS ePOD solution is put in place, a further change will be undertaken there to handle break jobs on the Android device and within the general administration screens. This will be added as a change in the Greene King CALIDUS ePOD solution design document.
The Jobs and Loads created will be stored in the normal tables within CALIDUS ePOD, with additional fields created for the new planned data.
The process will generate the following standard CALIDUS ePOD tables:
- EPOD_LOAD - each unique load will be created, within the Site (Depot) owning the load.
- EPOD_JOB - a unique job will be created for the order on the load specified.
- EPOD_CUSTOMER - A customer record will be automatically created if one does not already exist, including the Lat/Long of the address.
- EPOD_JOB_ADDRESS - if the customer record exists, but the address or information on it differs to the information on the job, an Address will be created specifically for the job.
- EPOD_VEHICLE - A vehicle record will be automatically created for the vehicle assigned to the load, based on the Vehicle Registration, if one does not already exist.
Note: Vehicles created automatically in this way will not have a TomTom ID (Object ID) assigned to them, meaning that loads for vehicles automatically generated in this way will NOT be automatically sent to TomTom until this is completed.
- EPOD_USER - a driver record will be automatically created based on the driver name received from DiPS. The driver ID will be generated from the driver name, and the user will be given a generic password.
Note: Although not part of this project, these automatically created drivers may then be used on CALIDUS ePOD devices to complete the jobs. This will be disabled until CALIDUS ePOD is implemented.
The following assumptions are made regarding the data that CALIDUS ePOD will use:
- Load
- Site - the depot, from CALLS OWNING DEPOT
- Load ID - from CALLOVER SEQUENCE NO.
Note: This load ID must be unique for every load ever created for a depot.
- Load Planned Start/End times - from the individual jobs on the load.
- Vehicle ID - generated from VEHICLE IDENT
- User ID - generated from DRIVER'S NAME OR CARRIER
- New fields will be required to store:
- Route Code - TRIP LABEL
- Job
- Job Code (the main reference) - ORDER OR SHIPMENT ID.
Note: This reference must be unique for the order on the load (it can appear as both a collection and a delivery if required).
- Job Type - These will be identified as Delivery jobs only, regardless of whether they are collections or deliveries within DiPS. If functionality is added between Aurora and DiPS to identify collections to DiPS, this may generate additional development. As this would only affect the full CALIDUS ePOD solution (if implemented), it will be added as a change to that project.
- Job Group (the main configuration of the execution of the jobs) - based on the job type, hard-coded. For Loading, Unloading and Break jobs, a different job group will be used from that used for Collections and Deliveries.
- Job Instructions - a concatenation of the weights/quantities, or ORDER SPECIAL DELIVERY INST
- Planned Start Date/Time (Planned Arrival) - OPENING TIME 1
- Planned End Date/Time - CLOSING TIME 1
- Planned Distance - TRAVEL DISTANCE TO NEXT CALL from previous job on this load
- Customer Code - IDENT OF CALL OR DEPOT
- Customer Details, from:
- Name - ADDRESS LINE 1
- Address - ADDRESS LINE 2/3/4/5/POSTCODE
- Telephone - TELEPHONE NUMBER
- Sequence on Job List - LINK SEQ NO
- Loading Type - For Depot jobs.
- New fields will be required to store:
- Planned Travel Time (Minutes) - TRAVEL TIME TO NEXT CALL from the previous job
- Planned Work Time (Minutes) - WORK TIME
- ETA Date and Time - For TomTom WEBFLEET integration
- Job Code (the main reference) - ORDER OR SHIPMENT ID.
Note: There is a requirement to display the Weight and Quantities on the TomTom WEBFLEET device, combining them into the instructions field. This can be achieved provided the combined data does not exceed the maximum instruction length in CALIDUS ePOD. The DiPS fields from which this data is taken will need to be agreed during the mapping session. If the maximum length is exceeded, additional cost may be incurred.
Time windows will also be added to the jobs here if present in the file.
Additional orders, cancellations of orders and changes to orders already received throughout the day will necessitate changes to the plan. Aurora and DiPS will follow the same process: orders will be sent across to DiPS and route planned there. When exported from DiPS, the same import file will be created, containing all of the orders and loads.
When uploading files with orders and loads that have already been created, the process will update any existing Loads and Orders with the information from the new import file.
When all imports are completed for a load, the process will check if any orders already on the load were not included in the new upload. If orders are found, these will be deleted.
When all load imports are completed, the process will check if any loads already in the system for that date were not included in the new upload file. If loads are found, these will be deleted.
Scope
Note: This process will be used to create loads and jobs for a particular date. It is expected that this import will always interface all loads and jobs for the date. This can then be used to determine loads or jobs which are no longer valid, which can be removed from the CALIDUS EPOD database.
Note: This import is expected to be in Load and Job order. No additional orders for a load will be found in this file after the file has moved on to another load.
Note: One file is expected to be imported per depot. These files should never be combined, and should retain either a unique naming convention or unique directory for the file to be found.
Set-up
Pre-requisites
Menu Structure
Data
The Site must be linked to the XF Config ID for the file to be imported. This is expected to be set during the implementation. This is achieved through the Site maintenance screen in the CALIDUS EPOD Admin console, from the Admin tab, XF Config drop-down list.
Note: Users generated through this process will be consistent and reused. However, they will be given a default password by this process, which should be changed to a reasonable and secure password through the CALIDUS EPOD Admin User Maintenance screen before use. For usability, the users created are enabled immediately, so administrative users should change these passwords promptly.
Functional Description
Database/DAL
A new field will be added to the EPOD_LOAD table to hold the value of the Route Code from DiPS (TRIP LABEL):
- EPL_ROUTE_CODE - nvarchar(40).
All database packages and the DAL object will be updated to update this new field.
Note: It is not necessary to add this field as a search-able item.
New fields will be added to the EPOD_JOB table to hold various new values, as follows:
- EPL_TRAVEL_PLANNED - Planned Travel Time, in whole numbers of minutes - number.
- EPL_WORK_PLANNED - Planned Work Time, in whole numbers of minutes - number.
- EPL_ETA_DATE - ETA date from TomTom WEBFLEET in YYYYMMDD format - number
- EPL_ETA_TIME - ETA time from TomTom WEBFLEET in HHMMSShh format - number
All database packages and the DAL object will be updated to update these new fields.
Note: It is not necessary to add these fields as search-able items.
The existing field EPL_DISTANCE_PLANNED should be changed to be a float type (i.e. capable of storing decimal values).
Note: As standard, these fields will be added to the standard XML Webservice Import and Export flows. These changes (and all others for the project) have been documented and referenced in Appendix B. The standard XMLUpload.XSD file describing the XML in detail will also be updated as part of this project. Note also that the type of the distance planned object must change to "xsd:float".
Note: The following fields must be added to the standard Webservice import when creating addresses (customer or job):
- EPL_LAT
- EPL_LONG
Admin
XF Config Maintenance
The screen will allow the following to be created/edited:
- Config ID - The ID (e.g. GK).
- The Config Name - (e.g. DIPS Orders).
- Type - FILE or FTP only.
- ID - DIPS - DIPS Orders.
- Destination - the destination folder for the FTP Get or the FILE pick-up location.
- Direction - I - Inbound only.
- Filename - a filename pattern, for example "DIPS2EPOD_*.xls".
The screen will only allow these elements to be entered or edited, based on the ID (DIPS) selected.
Auto-Import
The existing Auto Import process will be modified to identify the new DIPS import ID. If a configuration is found for this new type, the new DIPS Import process will be run, passing in the file.
Note: This message type can be processed as FILE or FTP.
The standard processing for FILE and FTP types should be followed, in that the file to be imported should be found in the directory specified matching the pattern of the filename specified in the configuration. When found, the file should be copied and imported through the new DIPS orders import process.
Note: One file is expected to be imported per depot. These files should never be combined, and should retain either a unique naming convention or unique directory for the file to be found.
DIPS Import Process
Note: This process will be used to create loads and jobs for a particular date. It is expected that this import will always interface all loads and jobs for the date. This can then be used to determine loads or jobs which are no longer valid, which can be removed from the CALIDUS EPOD database.
Note: This import is expected to be in Load and Job order. No additional orders for a load will be found in this file after the file has moved on to another load.
Note: The extract file from DiPS contains over 150 columns. At this time, only a small section of these values will be used. However, it may be that this needs to expand for the customer in the future. To ensure that this process is easier to maintain moving forward, the extract file sent to CALIDUS EPOD will contain all the columns, with a header row labelling the contents. The process will be written in such a way as to extract the values by searching for the column required. This will make the function easier to maintain and more extensible in the future, should the format change or the required data in the file change.
The first row of the file being uploaded will be read to determine the positions of the required columns; these will be stored in a property of the import function. A getRowVal method will be written to retrieve values by column name: it will be passed a data row and the required column header, will look up that header in the stored header property, and will return the value from the corresponding column of the row. If the column is not found, NULL will be returned, distinguishing a missing column from one that is present but contains a zero-length value.
Accessing column data will then follow this typical pattern (with the header row, e.g. dsData.Tables[0].Rows[0], being stored first):
getRowVal(row, "ORDER SPECIAL DELIVERY INST").ToString();
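As an illustration only, a minimal sketch of this header-driven lookup is shown below, assuming the file has been loaded into a System.Data.DataTable; the class and member names are hypothetical and the real implementation will follow the existing import framework.

```csharp
using System;
using System.Collections.Generic;
using System.Data;

public class DipsImportSketch
{
    // Maps each DiPS column header to its ordinal position in the file.
    private Dictionary<string, int> headerIndex;

    // Called once with the first (header) row of the uploaded file.
    public void StoreHeader(DataRow headerRow)
    {
        headerIndex = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
        for (int i = 0; i < headerRow.Table.Columns.Count; i++)
        {
            string name = Convert.ToString(headerRow[i]).Trim();
            if (name.Length > 0 && !headerIndex.ContainsKey(name))
                headerIndex[name] = i;
        }
    }

    // Returns the value of the named DiPS column for the given data row, or null
    // when the column is not present at all (distinct from a zero-length value).
    public object getRowVal(DataRow row, string columnHeader)
    {
        int index;
        if (headerIndex == null || !headerIndex.TryGetValue(columnHeader, out index))
            return null;
        return row[index];
    }
}
```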
The process created here should be similar to the OBS Import process in structure, in that:
- Loads are found and created if required
- Jobs are found and created if required
- Standing Data (e.g. Vehicles, Users) should be found and created if necessary.
- Existing data not found in the import file (i.e. not updated as part of this run) will be removed after the update.
The process will read through each row (storing the header row as identified previously) and process the inbound records.
The Site for the upload will be set from the Site to which the XF configuration is linked.
Create Standing Data
The process will generate a Vehicle ID from the value in VEHICLE IDENT. The vehicle record will be looked up using the found Site ID and this Vehicle ID. If not found, a vehicle will be created with the following values:
- Site: the found Site ID
- Vehicle ID: VEHICLE IDENT with any spaces removed.
- Vehicle Reg/Description: VEHICLE IDENT
- Status: "Y", indicating Active.
- Last Updated Date and Time: Now
The process will generate a User ID from the value in DRIVER'S NAME OR CARRIER. This ID will be based on the first letter of the first name followed by the surname, converted to lowercase. The user will then be looked up. If a user is found and the name stored matches DRIVER'S NAME OR CARRIER, that user will be used. If the names do not match, a single digit (starting at 1) will be appended to the ID, incremented by one each time, and the lookup repeated until a matching name is found or no record is found (as sketched after the list below). If no record is found, one will be created as follows:
- Site: the found Site ID
- User ID: The generated user ID
- Password: "PASSWORD"
- User Name: DRIVER'S NAME OR CARRIER
- Admin: "N"
- Active: "Y"
- Last Updated Date and Time: Now
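A minimal sketch of this ID generation and clash handling is shown below; the EpodUser type and FindUser lookup are placeholders for the existing DAL objects, and the names are illustrative only.

```csharp
using System;

// Placeholder for the existing EPOD_USER DAL record (names are hypothetical).
class EpodUser { public string UserId; public string UserName; }

static class DriverIdSketch
{
    // Placeholder for the existing DAL lookup by Site + User ID.
    static EpodUser FindUser(string siteId, string userId) { return null; }

    // First letter of the first name plus the surname, lowercased; a numeric suffix
    // is appended on a clash with a different driver (jsmith, jsmith1, jsmith2, ...).
    // Assumes a non-empty driver name.
    public static string GenerateUserId(string siteId, string driverName)
    {
        string[] parts = driverName.Trim().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        string surname = parts.Length > 1 ? parts[parts.Length - 1] : parts[0];
        string baseId = (parts[0].Substring(0, 1) + surname).ToLowerInvariant();

        string candidate = baseId;
        int suffix = 0;
        while (true)
        {
            EpodUser existing = FindUser(siteId, candidate);
            if (existing == null || existing.UserName == driverName)
                return candidate;           // free ID, or this driver already exists and is reused
            suffix++;                       // clash with a different driver - try the next suffix
            candidate = baseId + suffix;
        }
    }
}
```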
Creating Loads
The process will find the load through the Site and Load id, from the found Site ID and CALLOVER SEQUENCE NUMBER respectively.
If the load is already present and is complete or cancelled, no update will be made to the load and no jobs will be imported for it. Otherwise, the load will be created (if not found) or updated with the following information:
- EPL_LOAD_START_PLANNED_DATE - DEPARTURE DATE (removing the last digit, as this is in format "YYYYMMDDW" - see the sketch below)
- EPL_VEHICLE_ID - from the found/created Vehicle ID
- EPL_USER_ID - From the found/created User ID
- EPL_TRAILER_ID - TRAILER_IDENT
- EPL_LOAD_INFORMATION - ROUTE_IDENT
- EPL_TIMEZONE - As normal, from the server
- EPL_STATUS - "P" if the load is being created, otherwise left at its current status.
- EPL_ROUTE_CODE - TRIP_LABEL
The load should NOT be updated at this time, only when the load changes or no more records exist in the file to be imported.
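As an illustration, a minimal sketch of the DEPARTURE DATE handling is shown below, on the assumption that the trailing character is simply dropped to leave a YYYYMMDD date; the method name is hypothetical.

```csharp
// DEPARTURE DATE arrives as "YYYYMMDDW"; drop the trailing digit to obtain YYYYMMDD.
static string ToPlannedDate(string departureDate)
{
    string s = (departureDate ?? "").Trim();
    return s.Length == 9 ? s.Substring(0, 8) : s;   // e.g. "201608237" -> "20160823"
}
```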
Create Jobs
The process will then create Delivery and Break jobs from the data.
The process will find the job using the found Site ID, the found/created Load ID, the ORDER OR SHIPMENT ID from the file (the job code) and the Job Type, which is derived from the column "ENTITY TYPE (C,D, K)" as follows:
- If this column is "B" or "C", the Job Type is "D".
- If this column is "D", and this is the first job processed on this load, the job type is "C", else "D".
- If this column is "D", the ORDER OR SHIPMENT ID will be blank. If so, the Job Code should be generated as "LOA_<EPL_LOAD_ID>_<LINK SEQ NO>", if this is the first job processed on the Load. If this is not the first job on the load, this should be generated as "UNL_<EPL_LOAD_ID>_<LINK SEQ NO>". The tagged values should be substituted with the found Load ID, and the DIPS column value LINK SEQ NO respectively.
- If this column is "B", the ORDER OR SHIPMENT ID will be blank. If so, the Job Code should be generated as "BRK_<EPL_LOAD_ID>_<LINK SEQ NO>", replacing the tags as above.
If the job is found, it will be updated if its status is not "X" or "C" (complete or cancelled). If not found, the job will be created and a new Job ID generated.
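A minimal sketch of these Job Type and Job Code rules is shown below; the method and parameter names are illustrative only.

```csharp
// Sketch of the Job Type / Job Code derivation rules above. entityType is the value
// of "ENTITY TYPE (C,D, K)"; orderId is ORDER OR SHIPMENT ID.
static class JobCodeSketch
{
    public static void Derive(string entityType, string orderId, string loadId,
                              string linkSeqNo, bool firstJobOnLoad,
                              out string jobType, out string jobCode)
    {
        if (entityType == "D")
        {
            // Depot call: the first job on the load is loading, later depot jobs are unloading.
            jobType = firstJobOnLoad ? "C" : "D";
            jobCode = string.IsNullOrEmpty(orderId)
                ? (firstJobOnLoad ? "LOA" : "UNL") + "_" + loadId + "_" + linkSeqNo
                : orderId;
        }
        else if (entityType == "B")
        {
            // Break: treated as a delivery job with a generated BRK job code.
            jobType = "D";
            jobCode = string.IsNullOrEmpty(orderId)
                ? "BRK_" + loadId + "_" + linkSeqNo
                : orderId;
        }
        else
        {
            // Customer call ("C"): a standard delivery using the order reference.
            jobType = "D";
            jobCode = orderId;
        }
    }
}
```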
Create Delivery Jobs
For Delivery Job Types:
- EPL_SITE_ID - From the found Site ID
- EPL_LOAD_ID - From the found Load ID
- EPL_JOB_ID - Generated if not found
- EPL_JOB_CODE - As generated above
- EPL_JOB_TYPE - As generated above
- EPL_JOB_GROUP - "DEL".
- EPL_JOB_INSTRUCTION - see the section below this for details.
- EPL_START_PLANNED_DATE - DEPARTURE DATE (YYYYMMDDW) (removing the last digit)
- EPL_START_PLANNED_TIME - EAT AT CALL
- EPL_END_PLANNED_DATE - DEPARTURE DATE (YYYYMMDDW) (removing the last digit)
- EPL_END_PLANNED_TIME - EDT AT CALL
- EPL_DISTANCE_PLANNED - TRAVEL DISTANCE TO NEXT CALL From previous job
- EPL_CUSTOMER_CODE - IDENT OF CALL OR DEPOT
- EPL_CUSTOMER_NAME - ADDRESS LINE 1
- EPL_ADDRESS_1 - ADDRESS LINE 3
- EPL_ADDRESS_2 - ADDRESS LINE 4
- EPL_ADDRESS_3 - ADDRESS LINE 5
- EPL_ADDRESS_5 - COUNTRY CODE
- EPL_POSTCODE - POSTCODE
- EPL_CONTACT - ADDRESS LINE 2 if not "."
- EPL_TELEPHONE - TELEPHONE NUMBER
- EPL_ORDER_DATE - defaulted to Now()
- EPL_SEQUENCE - LINK SEQ NO
- EPL_LINKED_ID - Generated from CUSTOMER SEQ - see below
- EPL_LOADING_TYPE - Blank
- EPL_ORDER_TIME - defaulted to Now()
- EPL_TRAVEL_PLANNED - TRAVEL TIME TO NEXT CALL From previous job
- EPL_WORK_PLANNED - WORK TIME
- EPL_LAT - LATITUDE
- EPL_LONG - LONGITUDE
- EPL_LAST_CHANGED_DATE - Now
- EPL_LAST_CHANGED_TIME - Now
As per normal processing, a Customer should be created if one does not exist. If a customer record is already stored but the details of the address differ from those held against the Customer Code, a Job Address should be generated specifically for the job.
Generating Time Windows
Time Windows should be generated for the job if there are non-zero values in the OPENING and CLOSING TIME fields in the incoming file. There are two sets of these values:
- OPENING TIME 1 and CLOSING TIME 1
- OPENING TIME 2 and CLOSING TIME 2
If non-zero values are found in these fields, an EPOD_TIME_WINDOW record should be created for each pair:
- ETW_SITE_ID - From the found Site ID
- ETW_TYPE - "J"
- ETW_FK_ID - EPL_JOB_ID from the found/created Job.
- ETW_TIME_START - OPENING TIME
- ETW_TIME_END - CLOSING TIME
Note: This DAL object is created in SCR FS 336012 SCR-332569-3 GK PvA Add Time Windows.
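A minimal sketch of the time-window creation described above is shown below, reusing the getRowVal lookup sketched earlier; the createTimeWindow delegate stands in for the EPOD_TIME_WINDOW DAL call introduced by FS 336012, and whether one or both times must be non-zero is an assumption to be confirmed.

```csharp
using System;
using System.Data;

static class TimeWindowSketch
{
    // Creates a time window for each OPENING/CLOSING TIME pair carrying a non-zero value.
    // createTimeWindow(siteId, type, fkId, timeStart, timeEnd) is a placeholder for the DAL call.
    public static void CreateJobTimeWindows(DataRow row, string siteId, string jobId,
                                            Func<DataRow, string, object> getRowVal,
                                            Action<string, string, string, string, string> createTimeWindow)
    {
        for (int i = 1; i <= 2; i++)
        {
            string open = Convert.ToString(getRowVal(row, "OPENING TIME " + i));
            string close = Convert.ToString(getRowVal(row, "CLOSING TIME " + i));
            int openVal, closeVal;
            bool hasOpen = int.TryParse(open, out openVal) && openVal != 0;
            bool hasClose = int.TryParse(close, out closeVal) && closeVal != 0;
            if (hasOpen || hasClose)                       // assumption: either time non-zero
                createTimeWindow(siteId, "J", jobId, open, close);
        }
    }
}
```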
Generating Job Instructions
The Job Instructions is expected to be populated with ORDER SPECIAL DELIVERY INST (example "pls call 30m b4 del 07970884489 Kul"), and the concatenated weight and quantity (example "179 KILOGRAM, 6 D-RB, 33 W+S").
The weight and quantity will be taken from the following fields, if populated and having a non-zero value:
- DELIVERY PRODUCT 1 to DELIVERY PRODUCT 12, using LABEL PRODUCT 1 to LABEL PRODUCT 12
The values will be concatenated with ", ".
For example:
No | DELIVERY PRODUCT | LABEL PRODUCT |
---|---|---|
1 | 179 | KGS |
2 | 2 | TUBS |
3 | 0 | ETUB |
4 | 0 | D-RB |
5 | 0 | E-RB |
6 | 6 | CASE |
7 | 0 | S-LP |
8 | 0 | S-SP |
9 | 0 | C-LP |
10 | 0 | C-SP |
11 | 0 | W+S |
12 | 2639 | ITEM |
The weight and quantity string generated would be:
"179 KGS, 2 TUBS, 6 CASE, 2639 ITEM"
This will then be concatenated with the ORDER SPECIAL DELIVERY INST to form the job instructions. The final instructions stored in this example would be as follows:
"pls call 30m b4 del 07970884489 Kul 179 KGS, 2 TUBS, 6 CASE, 2639 ITEM"
Generating Linked IDs
Linked IDs are used to consolidate jobs at the same address together on the device. The DiPS interface provides enough information to achieve this.
For normal delivery jobs, the value should be generated as the value in CUSTOMER SEQ, multiplied by 10.
For Loading, Unloading or Break jobs, the value should be stored as the previous job's value in the Linked ID, plus 1.
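As a small illustrative sketch (method name hypothetical):

```csharp
// Sketch of the Linked ID rules above.
// Normal delivery jobs: CUSTOMER SEQ multiplied by 10.
// Loading/Unloading/Break jobs: the previous job's Linked ID plus 1.
static int DeriveLinkedId(bool depotOrBreakJob, int customerSeq, int previousLinkedId)
{
    return depotOrBreakJob ? previousLinkedId + 1 : customerSeq * 10;
}
```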
Create Loading/Unloading Jobs
Loading and Unloading Job Types are populated almost identically to delivery jobs:
- EPL_SITE_ID - From the found Site ID
- EPL_LOAD_ID - From the found Load ID
- EPL_JOB_ID - Generated if not found
- EPL_JOB_CODE - As generated above
- EPL_JOB_TYPE - As generated above
- EPL_JOB_GROUP - "DEL".
- EPL_JOB_INSTRUCTION - Blank
- EPL_START_PLANNED_DATE - DEPARTURE DATE
- EPL_START_PLANNED_TIME - EAT AT CALL
- EPL_END_PLANNED_DATE - DEPARTURE DATE
- EPL_END_PLANNED_TIME - EDT AT CALL
- EPL_DISTANCE_PLANNED - TRAVEL DISTANCE TO NEXT CALL From previous job
- EPL_CUSTOMER_CODE - IDENT OF CALL OR DEPOT
- EPL_CUSTOMER_NAME - ADDRESS LINE 1
- EPL_ADDRESS_1 - ADDRESS LINE 2
- EPL_ADDRESS_5 - COUNTRY CODE
- EPL_POSTCODE - POSTCODE
- EPL_TELEPHONE - TELEPHONE NUMBER
- EPL_ORDER_DATE - defaulted to Now()
- EPL_SEQUENCE - LINK SEQ NO
- EPL_LINKED_ID - As generated above
- EPL_LOADING_TYPE - if first job, "L", else "U"
- EPL_ORDER_TIME - defaulted to Now()
- EPL_TRAVEL_PLANNED - TRAVEL TIME TO NEXT CALL From previous job
- EPL_WORK_PLANNED - WORK TIME
- EPL_LAT - LATITUDE
- EPL_LONG - LONGITUDE
- EPL_LAST_CHANGED_DATE - Now
- EPL_LAST_CHANGED_TIME - Now
Creating Break Jobs
For Break Jobs, the Job fields should be created as follows:
- EPL_SITE_ID - From the found Site ID
- EPL_LOAD_ID - From the found Load ID
- EPL_JOB_ID - Generated if not found
- EPL_JOB_CODE - As generated above
- EPL_JOB_TYPE - "D"
- EPL_JOB_GROUP - "OTHER"
- EPL_JOB_INSTRUCTION - ADDRESS LINE 1
- EPL_START_PLANNED_DATE - DEPARTURE DATE
- EPL_START_PLANNED_TIME - EAT AT CALL
- EPL_END_PLANNED_DATE - DEPARTURE DATE
- EPL_END_PLANNED_TIME - EDT AT CALL
- EPL_DISTANCE_PLANNED - TRAVEL DISTANCE TO NEXT CALL From previous job
- EPL_CUSTOMER_CODE - "BREAK"
- EPL_CUSTOMER_NAME - IDENT OF CALL OR DEPOT concatenated with CALLOVER SEQUENCE NUMBER
- EPL_ADDRESS_1 - IDENT OF CALL OR DEPOT concatenated with CALLOVER SEQUENCE NUMBER
- EPL_ADDRESS_5 - COUNTRY CODE
- EPL_POSTCODE - EPL_POSTCODE from previous job.
- EPL_ORDER_DATE - defaulted to Now()
- EPL_SEQUENCE - LINK SEQ NO
- EPL_LINKED_ID - As generated above
- EPL_LOADING_TYPE - Blank
- EPL_ORDER_TIME - defaulted to Now()
- EPL_TRAVEL_PLANNED - TRAVEL TIME TO NEXT CALL From previous job
- EPL_WORK_PLANNED - WORK TIME
- EPL_LAT - LATITUDE from previous job
- EPL_LONG - LONGITUDE from previous job
- EPL_LAST_CHANGED_DATE - Now
- EPL_LAST_CHANGED_TIME - Now
As per normal processing, a Customer should be created if one does not exist. If a customer record is already stored but the details of the address differ from those held against the Customer Code, a Job Address should be generated specifically for the job.
As each break job will have a different address to the standard BREAK customer address, a specific Job Address should be created for each Break job, storing:
- EPL_CUSTOMER_CODE - "BREAK"
- EPL_NAME - IDENT OF CALL OR DEPOT concatenated with CALLOVER SEQUENCE NUMBER
- EPL_ADDRESS_1 - IDENT OF CALL OR DEPOT concatenated with CALLOVER SEQUENCE NUMBER
- EPL_POSTCODE - EPL_POSTCODE from previous job
- EPL_LAT - LATITUDE from previous job
- EPL_LONG - LONGITUDE from previous job
Post Job Processing
When a job is created, the following items should be stored for potential use on the next job created:
- EPL_LOAD_ID - From the found Load ID
- EPL_DISTANCE_PLANNED - TRAVEL DISTANCE TO NEXT CALL
- EPL_TRAVEL_PLANNED - TRAVEL TIME TO NEXT CALL
- EPL_LAT - LATITUDE
- EPL_LONG - LONGITUDE
- EPL_START_PLANNED_TIME - EAT AT CALL - if earlier than the stored value
- EPL_END_PLANNED_DATE - DEPARTURE DATE - if later than the stored value
- EPL_END_PLANNED_TIME - EDT AT CALL - if later than the stored value
- EPL_LINKED_ID - as generated from CUSTOMER SEQ above.
- EPL_POSTCODE - as set on this job.
A list of all Loads and Jobs processed in the file should be stored as each job is processed. This should store the Load IDs and Job IDs of the jobs created or updated.
A list of all start planned dates in the file should be stored as each load is processed.
Post Load Processing
When all jobs on a load have been processed, the process should loop through all jobs on that load that are not in the list of jobs processed. If any are found, these and all child records should be deleted. That is:
- EPOD_JOB
- EPOD_CONTAINER
- EPOD_PRODUCT
- EPOD_JOB_ADDRESS
- EPOD_TIME_WINDOW
The Load should only be updated at this time if the load being processed differs from the previous load processed in this run, or when no further records remain in the file being processed. The following values will be set:
- EPL_LOAD_START_PLANNED_TIME - Earliest start planned time of jobs on this load
- EPL_LOAD_END_PLANNED_DATE - Latest end planned date of jobs on this load
- EPL_LOAD_END_PLANNED_TIME - Latest end planned time of jobs on this load
- EPL_LAST_CHANGED_DATE - Now
- EPL_LAST_CHANGED_TIME - Now
Note: The new change FS 336013 SCR-332569-4 GK PvA Automatically Set Loads In Progress details changes required to automatically set loads in progress when they are the earliest load for that vehicle. These changes must be implemented at this stage.
All previously stored values should be reset when the Load changes.
Post File Processing
Once all loads in the file are processed, all loads for the processed start planned dates should be retrieved, where the loads are not in the list of loads processed by this run. If any are found, these and all child records should be deleted. That is:
- EPOD_LOAD
- EPOD_JOB
- EPOD_PRODUCT
- EPOD_CONTAINER
- EPOD_JOB_ADDRESS
- EPOD_TIME_WINDOW
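A minimal sketch of the post-load and post-file clean-up described above is shown below; the class, list and DAL method names are placeholders only.

```csharp
using System.Collections.Generic;
using System.Linq;

class OrphanCleanupSketch
{
    // Accumulated while the file is processed.
    public HashSet<string> ProcessedLoadIds = new HashSet<string>();
    public HashSet<string> ProcessedJobIds = new HashSet<string>();

    // After a load completes: remove jobs on that load that were not in the file.
    public void CleanUpLoad(string siteId, string loadId)
    {
        foreach (string jobId in FindJobIdsOnLoad(siteId, loadId))
        {
            if (!ProcessedJobIds.Contains(jobId))
                DeleteJobWithChildren(siteId, jobId);   // job, containers, products, addresses, time windows
        }
    }

    // After the whole file: remove loads for the processed dates that were not in the file.
    public void CleanUpDates(string siteId, IEnumerable<string> startPlannedDates)
    {
        foreach (string date in startPlannedDates.Distinct())
        {
            foreach (string loadId in FindLoadIdsForDate(siteId, date))
            {
                if (!ProcessedLoadIds.Contains(loadId))
                    DeleteLoadWithChildren(siteId, loadId);   // load plus all child records listed above
            }
        }
    }

    // Placeholders for existing DAL operations.
    IEnumerable<string> FindJobIdsOnLoad(string siteId, string loadId) { yield break; }
    IEnumerable<string> FindLoadIdsForDate(string siteId, string date) { yield break; }
    void DeleteJobWithChildren(string siteId, string jobId) { }
    void DeleteLoadWithChildren(string siteId, string loadId) { }
}
```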
Appendix A: TEST PLAN
Test Script / Scenario Reference | DiPS-ePOD Interface for Load and Order Sequencing | Call Number(s): 336010 SCR-332569-2 |
---|---|---|
Test Script / Scenario Description | Testing the DIPS-EPOD interface | PASS / ISSUES / FAIL |
Menu Access | N/A | |
Pre-requisites | N/A | Tested By: |
Test Objective | To test that: the interface can be created through Admin; and the interface creates and amends jobs correctly. | Date: |
Step | Action | Result | Remarks | P/F |
---|---|---|---|---|
1 | Admin tests | | | |
1.01 | Create a new DIPS interface in the XF Config screen | All fields are enter-able as described here. The new interface is created. | | |
Step | Action | Result | Remarks | P/F |
---|---|---|---|---|
2 | Interface tests | | | |
2.01 | Create a new DIPS interface file as follows: | All Loads are created. All Delivery Jobs are created correctly. All Windows are created correctly. All Customers are created. All Breaks are created. All Vehicles are created. All Load/Unload jobs are created correctly. The jobs are marked as In Progress and WEBFLEET orders are created. | | |
2.02 | Create a new DIPS interface file similar to the first, with the following differences: | The new Load and all jobs are created as before. All existing loads and jobs from before are updated. The new user will be created with a single-digit '1' after the user id. Any loads or jobs not included in the file are deleted, including all children. | | |
2.03 | Create a new DIPS interface file similar to the second, with the following differences: | Additional jobs should be created for the new Unload and Break jobs created. Break and Unload Job IDs should change based on the sequence (link number) of the job in the load. Prior Loading/Unloading/Break jobs should be deleted. | | |
Appendix B: Quote & Document References
Cost Details
Activity | Estimate No. of Days | No. of Days | Rate per Day (£) | Cost (£ Exc. VAT) |
---|---|---|---|---|
Requirements | 0.00 | 0.00 | 750 | £0.00 |
Change Request Evaluation | 0.00 | 0.00 | 750 | £0.00 |
Functional Specification | 1.50 | 1.50 | 750 | £1,125.00 |
Technical Specification | 1.50 | 1.50 | 750 | £1,125.00 |
Development | 6.75 | 6.75 | 750 | £5,062.50 |
Testing and Release | 1.25 | 1.25 | 750 | £937.50 |
Implementation | 0.00 | 0.00 | 750 | £0.00 |
Project Management | 0.50 | 0.50 | 750 | £375.00 |
TOTAL | 11.50 | 11.50 | | £8,625.00 |
Estimate excludes training, release to live and go live support.
B.1 References
Ref No | Document Title & ID | Version | Date |
---|---|---|---|
1 | FS 336012 SCR-332569-3 GK PvA Add Time Windows | 1.0 | 16/06/2016 |
2 | EPOD Import Mapping | 5.1.2 | 17/06/2016 |
3 | EPOD Export Mapping | 5.1.2 | 17/06/2016 |
4 | FS 336013 SCR-332569-4 GK PvA Automatically Set Loads In Progress | 1.0 | 16/06/2016 |
B.2 Glossary
Term | Definition |
---|---|
EPOD | Electronic Proof of Delivery. The OBS EPOD system is CALIDUS ePOD. |
CALIDUS eSERV | The OBS mobile system to complete Service functionality in the field. This is part of the CALIDUS ePOD system. |
PDA | The mobile device on which the C-ePOD system will run in the field. This can be a Phone, EDA or industrial PDA, running Android. |
DAL | Data Access Layer. A mechanism for accessing data by the system that is removed from the application, allowing for simplified access and providing protection to the data, as only approved DAL methods can be used to modify it. |
GPS | Global Positioning System. A mechanism of retrieving accurate positioning information in the form of Latitude and Longitude (Lat-Long) co-ordinates from a device. |
GPRS, 3G, HSDPA, Data Service | All terms referring to mobile device network connectivity, and the speed at which the device connects to the internet. |
B.3 Authorised By
Rob Carter | Greene King Representative | _____________________________ |
Matt Tipping | OBSL Representative | _____________________________ |