Appendix 1
Great Western Railway
Prohibition on grounds of safety
Class 80x train dispatch on DOO(P) services using in-cab CCTV cameras
Stewart Player, Professional Head of Operations
Angela Prescott, Professional Head of Safety
Andrew Skinner, Professional Head of Engineering
Date: 23rd November 2018
1. Overview
1.1
In August 2018, safety validation GWR/SMC/2- Gateline 4d was approved for bringing the DOO capability of Class 80x trains into operation.
1.2
The validation was specifically aimed at the ability of the driver to monitor the Platform Train Interface using the in-cab CCTV monitors and to make safe decisions on the train safety check before moving. Hitachi had undertaken a human factors review (the CCD reports) of the number of cameras, and this was subject to review by GWR and the ORR. The testing did not consider different lighting and weather conditions.
1.3
Since the introduction of DOO services there have been numerous complaints from ASLEF that the CCTV system is not fit for purpose: the lenses are recessed and affected by sunlight, signal glare, dirt and rain, which makes the images unable to be seen clearly in the cab. RIS-2703-RST clause 3.2.3 states that 'OTCM camera systems shall be such that camera performance does not degrade as a result of naturally occurring foreign objects' and goes on to state that a 'camera housing having a glass optical element that is either flush or stands slightly proud of the camera housing aids cleaning and rejection of dirt'.
1.4
Whilst cleaning has been enhanced at a few stations, it is apparent from personal cab rides and from information and pictures passed to the Head of Operations by ASLEF members that the cameras are not fit for purpose, as the design is unable to perform consistently in an operational environment.
1.5
The key risk, if the driver fails to undertake a suitable train safety check prior to starting, is a passenger being trapped in the doors and dragged along the platform. Numerous RAIB investigations have highlighted this risk, not least our own incident at Hayes and Harlington.
2. Prohibition requirements
2.1
Class 80x services are prohibited from operating from stations in DOO(P) using the in-cab CCTV to perform the train dispatch process until the actions listed in section 3 are completed to the satisfaction of the three Professional Heads.
2.2
Class 80x services are permitted to operate from stations in DOO(P) only when the train is dispatched by conventional CD/RA or by a Dispatcher.
3. Actions required to be completed
3.1
A further CCD-type review must be undertaken to revisit the original findings on the ability of the driver to undertake the train safety check in a 'live' environment, at different times of day and at different stations. This review must also consider the drivers' views on the usability of the number of images on the screens following three months of operation.
3.2
An engineering solution for the lens so that it is able to cope with different lighting conditions, especially low sunlight.
3.3
An engineering solution for the lens housing that better conforms to RIS-2703-RST and protects the lens from dirt ingress.
3.4
An enhanced, permanent cleaning regime, introduced when a new design of lens is fitted, that ensures visual integrity is maintained for safe operation and is capable of lasting between maintenance exams.
Appendix 2
Great Western Railway Human Factors DOO System Review
Report No: MWHA/GWR/2019_20_010
Issued: 20th November 2019
Issue No: 2.0
Author: M.W. Halliday, M.W. Halliday Associates Limited
Copyright 2019 M W Halliday Associates Limited. This document has been prepared under the terms of a contract between GWR & MW Halliday Associates Ltd. This document, either in whole or in part, shall not be reproduced nor disclosed to others, or used for purposes outside the terms of the contract, without prior written permission.
Abbreviations:
DOO: Driver Only Operation
HF: Human Factors
GWR: Great Western Railway
HRA: Human Reliability Assessment
IEP: Intercity Express Project
IET: Intercity Express Train
ORR: Office of Rail and Road
PEM: Psychological Error Mechanism
PTI: Platform Train Interface
MWHA: MW Halliday Associates Ltd
RIS: Railway Industry Standard
RSSB: Rail Safety & Standards Board
(T)TA: (Tabular) Task Analysis
TMS: Train Management System
Glossary:
Camera design: System comprising the physical externally mounted position and components of the camera, such as the camera lens, camera lens recess, camera housing and deflectors.
Dirt: Debris comprising any material, such as insects, soil, rain etc., that may cover the camera lens.
Target duplication: When a single target or object appears in more than one monitor image.
Issue Record:
Draft 1 (31st October 2019): First draft for internal review
Draft 2 (5th November 2019): Second draft for external review
Issue 1 (13th November 2019): Incorporates comments from Hitachi and GWR
Issue 2 (20th November 2019): Incorporates further comments from GWR
1. EXECUTIVE SUMMARY
1.1 Background
1.2 Context
1.3 Scope
1.4 Limitations
2. METHODOLOGY
2.1 Questionnaires
2.2 Driver Discussions
2.3 Cab Rides
3. FINDINGS
3.1 Questionnaires
3.2 Driver Discussions
3.3 Cab Rides
3.3.1 Data captured by HF analyst
3.3.2 Photographic data captured by other stakeholders
4. HUMAN RELIABILITY ASSESSMENT
4.1 Scanning patterns
4.2 Scanning efficiency
4.3 Target duplication
4.4 Image quality
4.4.1 Field of view
4.4.2 Sunlight
4.5 Equipment design
4.5.1 Deflectors and Camera Housing
4.5.2 Monitor Labelling
5. CONCLUSIONS
6. FACTORS FOR CONSIDERATION
7. RECOMMENDATIONS
8. REFERENCES
APPENDICES
Appendix A: CCTV layout illustrations
Appendix B: CCTV layout photographs
Appendix C: DOO CCTV Monitor Questionnaire Responses
Appendix D: Briefing Notes
Appendix E: Photographic illustrations of image quality
Appendix F: Human Reliability Assessment Error Definitions
1. EXECUTIVE SUMMARY
MW Halliday Associates Ltd (MWHA) undertook a human factors study against GWR's 'Remit for further human factors review of class 800 DOO CCTV' [R1]. The key aim stated in MWHA's proposal in response to the remit [R2] was "To gain subjective confirmation from drivers and drivers' union representatives that the train's CCTV image quality, displayed on the in-cab monitors, is not impacted upon by real world environmental or operational factors sufficiently to prevent effective system usability (previously proven by controlled experimentation)".
The study primarily involved assessing the standard option by use of a series of subjective data collection methods (i.e. observations during cab rides and discussions with drivers), supported by driver union representation during these activities. A high-level human reliability assessment was also subsequently undertaken to provide a basis for comparison with these other findings. Importantly, although the later stages of the trial period saw the inclusion of two alternative layout options, the study as a whole did not seek to demonstrate comparative safety between layout options by replicating the earlier controlled human factors experimental studies within the wider operational and environmental context.
Drivers were also given the opportunity to express opinions and preferences regarding the standard layout and the two alternative options via a questionnaire. However, based on the responses received, no clearly preferred option could be identified.
In considering whether the study aim has been met, it is important to note that during the trial period itself a series of modifications to the image processing were undertaken by Hitachi which have produced significantly enhanced image quality, specifically including mitigations against sunlight impacting on the image quality. Furthermore, changes to implement enhanced cleaning regimes at both depots and platforms have been agreed by GWR and Hitachi, and this should significantly improve image quality by reducing the accumulation of dirt on the lens.
Accepting that image quality has been enhanced by the two measures outlined above, and in consideration of the project aim, the present study did not gather conclusive evidence that effective system usability is prevented by real world environmental or operational factors. However, the study did identify further generic usability issues associated with the design of the Class 800 DOO system, and in response to this, informal discussions were held with Hitachi engineers and GWR management to discuss the feasibility of implementing changes to the system to address these. The meeting highlighted that system redesign is constrained and that, whilst some improvements may enhance usability, not all of these are considered feasible due to the cost of implementation, engineering considerations and the need to demonstrate compliance with railway standards.
Importantly, the standard option (as described within the report) is the only option that has already been proven effective by experimental trial, and therefore implementation of an alternative option, even if technically feasible, would require further work to demonstrate an equivalent level of safety.
The report highlights a number of factors for consideration relating to the design as well as a number of recommendations that are considered viable and likely to improve the system’s effectiveness and that of future DOO system design.
1.1 Background
The following background information is taken from Great Western Railway's (GWR's) Remit document [R1]:
"Hitachi undertook a human factors study of the number of screens (up to 20) on the class 80x for DOO(P) operations. This was undertaken by CCD and reports issued [R3 & R4] that provided a level of safety assurance for Office of Rail and Road (ORR), Rail Safety and Standards Board (RSSB) and Great Western Railway (GWR) to accept the trains into Driver Only Operation (DOO). These tests were undertaken in Paddington, under cover, in daylight and involving actors. Due to a clerical oversight, ASLEF were not involved in this review or the subsequent report. Following discussions with ASLEF and other drivers during the first few weeks of operation, the usability of the cameras in real life was cause for concern and has been subject to a prohibition notice issued by the professional heads. One of the items in the prohibition notice is to undertake a further human factors review of the cameras based on real life situations."
GWR asked MW Halliday Associates Ltd (MWHA) to provide a proposal for this human factors work. The original proposal [R2] was written against GWR's 'Remit for further human factors review of class 800 DOO CCTV' [R1].
1.2 Context
1.2.1 Trial routes
Plan 1: Extract of GWR route plan showing the Paddington to Bedwyn and Paddington to Oxford trial routes
1.2.2 Vehicle Formations
1. 5 car
2. 9 car
1.2.3 Layouts
There were three trial monitor image layouts. The first was based on that trialled prior to the present study [R3-4] and is referred to as the 'standard' layout; the two further alternatives were designed by GWR and implemented by Hitachi (referred to as Alternative 1 and Alternative 2). Details of the layouts are described illustratively in Appendix A (taken from [R5]) and photographically in Appendix B.
The development of the alternative layouts was undertaken by Hitachi and the Project was therefore unable to dictate development timescales, nor control the period for which each option was available on the trial trains. As a result, the 'standard' option is likely to have been seen more often and by more drivers than the two later alternatives.
1.2.4 Quality
There were three types of image modification introduced by Hitachi which were 'active' at certain times during the trial. As with the image layout options, the development work was undertaken by Hitachi and the Project was therefore unable to dictate development timescales, nor control the period for which each modification was active on the trial trains. As a result, 'unmodified' images are likely to have been seen more often and by more drivers than any of the subsequent modifications.
Note 1: The parameters and process for the modifications were determined by Hitachi and not the Project team. Photographic illustrations of the image quality changes are presented in Appendix E.
Note 2: It is unclear how the image quality was benchmarked and therefore what the change in image quality was compared against and agreed as 'better' by Hitachi.
It is understood that the three principal changes were:
1. The image 'mist' setting was modified.
2. The 'wide dynamic range' setting was modified.
3. Image processing was modified.
1.2.5 Period of implementation
The period during which the standard layout was used for DOO operation was between approximately September 2018 and November 2018, but the monitors, although not used for dispatch, remained 'active' subsequently. The period during which the standard layout and the two alternative layouts were 'active' was between 09/09/2019 and 15/10/2019.
1.2.6 Briefing
GWR drivers were provided with a briefing note prior to each phase of the monitor layouts becoming active. The briefing notes are included as Appendix D.
1.3 Scope
The initial scope within the Remit was defined as to:
1. Undertake cab rides to monitor the camera visibility:
• During peak and non-peak
• During different weather conditions
• During different times of the day, especially in low sun conditions
• At different stations, especially where there is a blend of canopy/open platform
2. Discuss with DOO(P)-experienced drivers their thoughts on the number and style of the cameras, to confirm the findings of the CCD reports [R3-4].
3. Discussions with Hitachi regarding camera numbers, image quality and other issues arising.
The Proposal summarised this with a stated Aim: "To gain subjective confirmation from drivers and drivers' union representatives that the train's CCTV image layout and quality, displayed on the in-cab monitors, is not impacted upon by real world environmental or operational factors sufficiently to prevent effective system usability (previously proven by controlled experimentation)".
The scope of the project requirements, however, may be considered to have evolved since inception. This has arisen primarily because it became apparent that any image quality issues resulting from environmental conditions were by nature 'transient' and therefore very unlikely to be 'witnessed' during cab rides by the Project personnel. Accordingly, any evidence (such as photographs) taken during trial service operation (captured by project team members, drivers and/or union representatives) has been considered in the findings of this HF study. It was also
originally intended to capture subjective ratings from drivers regarding the image quality, as well as their confidence in hazard detection, but this was removed from scope as a result of concerns regarding inter-rater reliability as well as the transient nature of both hazards and environmental factors.
This report presents the method, findings, conclusions and recommendations of the human factors review of the DOO CCTV monitors during the Class 800 in-service trials.
1.4 Limitations
1. The study is based on the subjective opinion of the HF analyst, GWR Project personnel, GWR drivers and ASLEF members only, and did not seek to establish an objective experimental basis for the assessment of DOO monitor image quality, monitor layout or driver information processing (e.g. hazard detection rates, eye scan patterns, task timings etc.).
2. During the trial, the monitors, whilst 'active', were not utilised by the driver for the purposes of DOO dispatch.
3. Drivers' exposure to the monitor images was not controlled, such that not all drivers may have seen any given monitor image layout or monitor quality modification (or specific combinations of these).
4. The study could not control for image quality as affected by camera or monitor degradation, due for example to dirt or rain.
5. The study could not control for environmental factors manifest on the day of any cab rides or during the trial period, for example the level and position of external sunlight relative to the cameras.
6. The actual design of the monitor image layout was not part of the present HF project scope, although an assessment of it was within scope.
7. The actual design of the monitor image quality modifications was not part of the present HF project scope, although an assessment of them was within scope.
2. METHODOLOGY
As a result of the evolving study requirements, the methodology changed to reflect this. Although subjective ratings were initially intended to be sought regarding the image quality and the likelihood of hazard detection at each station, concerns about inter-rater reliability and the transient nature of the issues causing concern regarding the monitor images were thought to negate the benefit arising from this. Instead, a questionnaire was developed and provided to drivers during the trial period (after a period of exposure to the trial images) rather than 'live' during cab rides, since this was considered more likely to collect useful data.
2.1 Questionnaires
The development of the questionnaire was undertaken by GWR Projects and reviewed by MWHA. A copy of the questions and responses is provided in Appendix C. The key aim was to identify drivers' preferences regarding the alternative image layouts. The image layout formats are detailed pictorially in Appendices A & B.
2.2 Driver Discussions
MWHA undertook discussions with drivers and drivers' operational management at both Reading and Paddington mess rooms to discuss any issues associated with DOO CCTV image quality and layout. MWHA personnel were always accompanied by GWR Project management and on most occasions by driver union representatives.
2.3 Cab Rides
MWHA undertook a series of cab rides between Paddington and Bedwyn in order to observe and further understand the driving tasks involved, as well as the environmental context in which they occurred. Discussions with drivers during the cab rides were limited for reasons of operational safety. MWHA personnel were always accompanied by GWR Project management and on most occasions by driver union representatives.
3. FINDINGS The findings below are presented in the same sequence as their respective methods detailed above.
3.1 Questionnaires
1. Only eight drivers responded to the survey. Therefore, although this information is useful, it has not been considered in isolation in order to draw conclusions from the trial.
2. Only three drivers had seen all three layout variations, and therefore opinion is typically expressed with reference to the 'standard' layout and whichever other single layout option they had seen.
3. Responses taken as a whole reflect exposure to 5 and 9 car train formations.
4. Responses taken as a whole reflect exposure to a variety of environmental conditions.
5. Drivers' main concern was the cleanliness of the camera lenses and the subsequent degradation of in-cab monitor image quality.
6. The majority of drivers who responded, notwithstanding the issue associated with camera cleanliness, considered image quality to have improved over the course of the trial.
7. Of the drivers who had seen all three layouts, one preferred the standard layout, one Alternative 1 and one Alternative 2.
8. Only two of the eight drivers overall expressed a preference for the standard option.
9. Six of the eight drivers preferred the alternative option they had seen compared to the standard layout.
10. Although Alternative 2 was preferred by more drivers than any other option, the lack of controlled exposure to the image layouts and quality makes the finding non-conclusive if taken in isolation.
3.2 Driver Discussions
If drivers had not seen the alternative image layouts or the variations in image quality settings then, as part of any mess room discussion, they were shown photographic examples of these. The following represents a summary of the discussions held with GWR drivers.
Drivers were typically asked:
1. Whether they drove or had driven Class 387s, because these trains also utilise DOO and therefore those drivers already had exposure to an existing design of image layout and quality.
2. Which of the trial layouts they had seen.
3. Whether they had a preference based on the trial image layouts they had seen.
4. What they would like to see if they designed the system themselves.
5. What other issues they would like to raise, specifically those relating to image quality.
Summary responses are detailed below:
1. Drivers' experience of operating the Class 387s as well as IETs was mixed. This was in part dependent on which mess room the discussion took place in, but some drivers did state that they transition or had transitioned between the two types of traction.
2. Drivers reported having typically only seen the standard layout and one other alternative layout option (this supports the data from the survey).
3. From the discussions held with drivers in the mess rooms (when all three image layouts were presented), the majority preferred Alternative 2. One driver who preferred the standard layout provided a detailed explanation as to why he preferred it to Alternative 1, described below:
…the driver scanned the images on a 'coach by coach' basis (because that is what the standard layout required) and not by seeking to see the whole train length in one go. As a result, when trying to scan on a 'coach by coach' basis with Alternative Layout 1, this required additional vertical scanning of each
image pair (because they were vertically opposed but off-set pairs) before moving to the next one. The standard layout, although also composed of alternating forward and reverse direction images, shows these as horizontally opposed (not off-set) and therefore did allow a naturalistic left-to-right / top-to-bottom scan pattern (Z pattern) akin to normal text reading, which he believed made scanning the standard layout easier.
4. In response to the question of what they would design for themselves, drivers were almost unanimous (irrespective of their experience of driving 387s) in that they would like the system to be designed like that used by the 387s. Where drivers had not driven 387s this was typically expressed by stating that they would like to see just a single image for each carriage, and preferably one of the whole train length.
5. The biggest issue raised by drivers regarding image quality was the cleanliness of the cameras themselves and the resultant degradation in image quality on the in-cab monitors. Whilst drivers felt that the image quality needed improving, they did accept that some problems associated with bright sunlight and the angle of the sun relative to the cameras were likely to be sporadic and therefore difficult to prevent entirely.
A number of drivers felt that because the system was not operational during the trial they had not paid as much attention to either the image quality or layout as they would if they relied on the system to dispatch the trains. A number of drivers expressed concern that the layouts were 'busy' and wondered whether, for 5 car formations for example, the images could be made larger and any redundant images 'blanked' out.
3.3 Cab Rides
It was not always possible to discuss operational issues in detail with the drivers during service operation.
3.3.1 Data captured by HF analyst
However, the following points are noted based on drivers’ comments and the observations of the human factors analyst during the cab rides undertaken.
1. Time of day
• High sun presented no noticeable impact on the images.
• The photographs of 'low sun' on the monitor images (as presented by email only) show a significant impact on the monitor image quality. It is unclear if this is 'light on the lens' or an image processing issue. This is pictorially highlighted in Section 4.4.2.
• The initial mist setting showed a slightly unnaturalistic colour palette.
• The adjusted mist setting seemed to offer an improvement over the other mist setting.
2. Degradation
• Drivers reported camera image quality as poor for trains which had left depot environments prior to service operation.
• Cleaning at terminus stations (e.g. Paddington) was typically not witnessed.
• Image quality, as a result of the above, was typically witnessed as poor on some images when leaving a terminus station.
• Cab rides often seemed to show an increase in insect debris and/or dirt on some lenses during the journey.
• However, conversely, during some cab rides image quality seemed to improve despite the lack of cleaning.
3. Image quality and layout
The current layout makes it difficult to check the PTI intuitively:
• There is no 'continuous' view along one side of the train / PTI available with the standard image layout or Alternative 1, because the images are presented in pairs showing camera views in opposite directions.
• Although Alternative 2 may appear to provide a continuous view, currently the matching image for each pair is simply on the second monitor and technically requires to be viewed in order to gain a whole image of the PTI.
• The field of view is restricted, meaning that the 'train' element (particularly the door apertures) of the PTI is not as visible as on monitor images of the Class 387.
• Zooming of the front-most camera seems to provide a reasonable overview of the train length, but may be compromised depending on platform curvature and, in zooming, all other camera views are lost.
• The current layout means that people and objects on the platform may appear in multiple camera views, making 'tracking' a target's movement / location difficult.
• Furthermore, image 'pairs' result in the progress of a target along the platform appearing to move randomly rather than logically or linearly, making target tracking difficult.
• It is understood that in order to 'fit' the images onto the monitors (with the numbers required for 9 & 10 car operation) the images are 'cropped' (i.e. made small enough to fit).
o It is unclear if this reduces image quality / definition, which seems better in the zoomed formats.
o The field of view seems to be further reduced as a result of the images being cropped, which does not seem to be apparent with the zoom function.
o It is understood that downloadable images are not cropped and are therefore different to those actually seen by drivers. This discrepancy would pose issues for any incident investigation.
o It is unclear whether, for 5 car formations, the image size could be made bigger, which would improve both the field of view and, depending on the layout, give a better 'overall' view of the PTI.
o It is unclear whether there is scope to utilise larger monitor screens to increase the size of the cropped images, which would appear to offer the potential for improved field of view (i.e. by larger images) and image quality (i.e. by less cropping being required). Note: subsequent informal discussions with Hitachi on 7th November have noted that such a modification is considered not feasible.
o It is unclear whether monitor layouts can be made configurable (in terms of image size) dependent on the number of carriages. Note: subsequent informal discussions with Hitachi on 7th November have noted that such a modification is considered not feasible.
• It is unclear whether it is possible to use a single camera to cover either single carriages or the whole train because:
o Camera technology cannot provide a sufficiently clear image
o The mounting position (by virtue of the train design / gauging) prevents the camera being positioned to achieve a sufficiently clear image
o Standards (RIS) do not currently permit use of a single camera because of the carriage length (irrespective of the points above)
o It is cost prohibitive
o Nobody has checked
Note: subsequent informal discussions with Hitachi on 7th November have noted that such a modification is considered not feasible, primarily because of train length and the loss of PTI coverage on curved platforms, as well as the need to cover the end vehicles.
In order to establish the extent of the 'feasibility' of unidirectional cameras (as opposed to the forward and backward facing cameras currently in use), an ad-hoc trial of the Alternative 2 layout was undertaken to establish whether, if one of the monitors was 'covered' (thereby covering all of the visible images on that monitor), the images on the other monitor provided sufficient coverage of the PTI to view / track a target moving along the PTI over the whole train length. This trial was done for a 5 car and a 9 car formation train at Bedwyn and Paddington platform 9 respectively. The trial involved a 'test' subject walking the length of the train and the driver and/or union representative viewing the transition of the test subject along the train's whole length on the non-covered in-cab monitor.
The test showed that, with the exception of the leading and trailing cars, the images seemed to capture the 'target' from one end of the train to the other along the whole PTI. Significantly, although there was still some 'target' image duplication, the 'flow' of the duplicate image followed a strict directional logic that those present found easy to follow, compared to the seemingly random appearance of 'target' images resulting from paired opposing camera views, irrespective of their layout pattern.
3.3.2 Photographic data captured by other stakeholders
Photographic data from other stakeholders has been used within the following sections to illustrate the design and operational issues associated with the current DOO system.
4. HUMAN RELIABILITY ASSESSMENT
To date, the author of this report has not been able to establish the process by which the present DOO system was designed. However, the meeting with Hitachi on 7th November 2019 has highlighted more clearly the design constraints that Hitachi needed to consider, including regulatory input, distances between doors, platform curvature and the aerodynamics of the train.
The system can be thought of as a series of design and usability requirements. The system, for example, encompasses the design and positioning of the train's external cameras, the image processing, and the internal cab monitor image quality and layout. From a human factors perspective, adopting a 'first principles' approach to the design of the system would be the recommended method. This would encompass not only understanding the technical aspects of the system but also the user tasks, in order to define the system requirements. In theory, fulfilling the requirements would provide a functional and usable system. A key part of defining the requirements is understanding the drivers' tasks and integrating this into the system specification. This approach is now well captured by applicable standards such as EN 9241-210:2010 Ergonomics of human-system interaction: Human-centred design for interactive systems.
In order to illustrate how this process could be utilised, a high-level human reliability assessment of the drivers' tasks in relation to DOO dispatch is presented below. This is based on well-established techniques which are thoroughly detailed within [R6]. The aim is to analyse the driver's tasks in relation to using a DOO system and to determine the information / design necessary to support this. To do this, a high-level task analysis (TA) and decomposition was undertaken. The TA takes the form of a Tabular Task Analysis (TTA). The TTA involves breaking down the goals into tasks and plans. The goals, tasks and plans categories used in the TA are as follows:
• Goals: The unobservable task goals associated with the task in question; the intended objective.
• Tasks: The observable behaviours or activities that the operator has to perform in order to accomplish the goal of the task in question.
• Plans: The unobservable decisions and planning made on behalf of the operator.
Tasks have been further 'decomposed' for each task undertaken to determine the requirements underpinning the following:
• Equipment and interface design
• Equipment positioning
• Procedures
• Training
The process for determining the TTA was based on observation of the tasks and discussions with drivers, drivers' union representatives and GWR management. To date, the findings have not been subsequently validated with users and are therefore provisional. The results of the high-level assessment are presented below, and the error and error mechanism taxonomy is included as Appendix F. No formal 'plan' has been produced since the assessment is high-level, but typically it would be expected that Task 1 would precede Task 2.
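To make the structure of the decomposition easier to follow, the sketch below shows one way a single TTA record could be represented as a simple data structure, populated with the first subtask from the assessment that follows. It is an illustrative aid added to this write-up only; it is not part of the MWHA methodology or of [R6], and the TTARecord class and its field names are assumptions chosen to mirror the column headings of the tables below.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TTARecord:
    """One row of the Tabular Task Analysis; field names mirror the table headings below."""
    goal: str
    subtask: str
    error_types: List[str]
    error_descriptions: List[str]
    psychological_error_mechanisms: List[str]
    cause_consequence: str
    recovery: str
    mitigations: dict = field(default_factory=dict)  # keyed by decomposition category

# Illustrative record: Subtask 1 from the assessment below.
scan_monitor = TTARecord(
    goal="Check PTI for hazards prior to closing doors",
    subtask="Scan DOO monitor for hazard",
    error_types=["Action/Check omitted", "Action too little", "Wrong check"],
    error_descriptions=[
        "Fail to scan images",
        "Fail to scan all of the images",
        "Fail to scan correct images",
    ],
    psychological_error_mechanisms=[
        "Forget isolated act",
        "Slip of memory / Misinterpretation",
        "Mistake among alternatives",
    ],
    cause_consequence="Information may not be presented in a format that permits efficient scanning",
    recovery="None",
    mitigations={
        "Equipment / Interface Design": "Ensure layout of camera images facilitates an efficient scanning pattern",
        "Equipment positioning": "Ensure DOO monitors are visible from the normal seated position",
        "Procedures": "Standardise scanning across classes of vehicle",
        "Training": "Train drivers on the most effective scanning technique",
    },
)

if __name__ == "__main__":
    # Quick check that the record holds the expected fields.
    print(scan_monitor.subtask, "->", scan_monitor.mitigations["Training"])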
Goal: Check PTI for hazards prior to closing doors
Subtask 1: Scan DOO monitor for hazard
Error Type: 1. Action/Check omitted 2. Action too little 3. Wrong check
Error Description: 1. Fail to scan images 2. Fail to scan all of the images 3. Fail to scan correct images
Psychological Error Mechanism: 1. Forget isolated act 2. Slip of memory / Misinterpretation 3. Mistake among alternatives
Cause, consequence and comments: Information may not be presented in a format that permits efficient scanning
Recovery: None
Mitigation (Equipment / Interface Design):
1. Ensure layout of camera images facilitates an efficient scanning pattern of images
2. Ensure scanning is not impaired by the absence of information
3. Ensure scanning is aided by the presence of necessary information
4. Ensure scanning is not impaired by the duplication of information
Notes:
1. Layout of images should match a naturalistic scan pattern (top down, left to right?)
2. It should be possible to check a specific target (hazard) or location (e.g. door) by reference to a single image
3. Need to see a single image of a single target (hazard)
4. Need to see the side of the train and all door entrances to confirm no 'trapping'
5. Need to be able to gain an overview of the whole side of the train quickly
6. Target movement along the PTI should be easy to follow
Mitigation (Equipment positioning): Ensure DOO monitors are positioned to permit all percentile ranges to see the images from their normal seated position
Mitigation (Procedures): 1. Use the most effective scanning technique as determined by analysis 2. Standardise scanning across classes of vehicle
Mitigation (Training): 1. Ensure drivers are trained on the most effective scanning technique
Goal: Check PTI for hazards prior to closing doors
Subtask 2: Detect hazard image on monitor
Error Type: 1. Information not obtained 2. Right action on wrong object 3. Wrong action on right object 4. Wrong information obtained
Error Description: 1. Fail to detect hazard 2. Detect non-hazard believing it to be a hazard 3. Detect hazard believing it not to be a hazard 4. Observe information other than the hazard (miss hazard)
Psychological Error Mechanism: Misinterpretation; Mistake among alternatives
Cause, consequence and comments: Information may not be presented in a format that enables sufficient hazard detection
Recovery: None
Mitigation (Equipment / Interface Design):
1. Ensure image quality is sufficiently high to enable all 'targets' to be detected
2. Ensure the design of the external camera minimises dirt accumulation on the lens
3. Ensure the design of the external camera minimises the impact of sunlight on the lens
4. Ensure the field of view encompasses all relevant information
5. Ensure drivers know which image relates to which carriage
Notes:
1. Image quality should not vary based on geographic location
2. Image quality should not vary based on weather conditions
3. Image quality should not vary based on the sun's position relative to the camera
4. Image quality should enable target detection
5. Target movement should be presented in a manner that enables efficient tracking
6. Image labelling should be meaningful to the drivers
Mitigation (Equipment positioning): Ensure camera positioning enables a single camera to capture a view of a single carriage length and door apertures
Mitigation (Procedures): 1. Ensure external camera lenses are cleaned as frequently as possible 2. Drivers to check image quality at start up
Mitigation (Training): 1. Ensure maintenance staff understand and deliver the cleaning programme
The assessment yields a number of system requirements and, in doing so, raises a number of questions relating to whether the current design fulfils those requirements. In undertaking this assessment, it should be noted that the author lacks knowledge relating to the current design and specifically the constraints that have shaped it. The following questions and issues are noted from the high-level human reliability assessment:
4.1 Scanning patterns
The scanning pattern required to interpret images presented from pairs of opposing forward and backward facing cameras (whether presented as horizontally or vertically opposed pairs, or split between monitor screens) does not support a naturalistic (left to right, top to bottom) scan pattern that provides a single continuous view of the PTI.
4.2 Scanning efficiency
Presenting images of opposing views of each carriage results in twice as many images compared, for example, to the number utilised on the Class 387s. Irrespective of the image layout pattern, drivers are currently required to scan each 'pair' of images associated with any given train carriage in order to gain an overall 'picture' of that carriage and therefore, subsequently, the whole side of the train facing the platform (the PTI).
There is no selectable view of the whole train length as a single image. There is no selectable view of a specific train carriage. Although the view of the train bodyside encompasses the area of the train door, there is no selectable view of a specific train door. By this it is meant that there is no image that directly shows the door aperture itself (from either an external or internal vantage point).
Where drivers transition between types of trains operating DOO, it is considered likely that scanning efficiency would be enhanced if images were presented in the same format and scanning techniques were therefore transferable. This may help prevent errors resulting from stereotype takeover, where the more frequently used technique is applied in the incorrect setting.
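As a rough numerical illustration of the scanning workload point above, the short sketch below compares the number of displayed images a driver must scan under the paired (bidirectional) arrangement with a hypothetical unidirectional arrangement. The assumption of exactly two displayed images per carriage for the paired arrangement and one per carriage for a unidirectional layout is a simplification for illustration only.

# Rough illustration of the image-count argument in section 4.2 and conclusion 12.
# Assumes two displayed images per carriage for the paired arrangement and one per
# carriage for a unidirectional arrangement; actual camera counts may differ.

def images_to_scan(cars: int, paired: bool) -> int:
    """Number of in-cab images a driver must scan for a given formation."""
    return cars * (2 if paired else 1)

if __name__ == "__main__":
    for cars in (5, 9, 10):
        print(f"{cars:>2} car: paired = {images_to_scan(cars, True):>2}, "
              f"unidirectional = {images_to_scan(cars, False):>2}")
    # 10 car: paired = 20 images, matching the '20 cameras and 20 camera images'
    # figure quoted in the conclusions; a unidirectional layout would halve this.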
4.3 Target duplication
Target duplication (i.e. the number of times a single target is shown in camera images) is considered higher when using images presented from pairs of opposing forward and backward facing cameras compared to a unidirectional series of cameras.
FIGURE 1: CLASS 800 5 CAR IMAGE DUPLICATION
Figure 1 above shows a single image duplicated 5 times for a Class 800 5 car formation.
FIGURE 2: CLASS 387 8 CAR IMAGE (SINGLE MONITOR) DUPLICATION
Figure 2 above shows reduced image duplication for the Class 387 8 car formation (single screen image only). Although it is recognised that camera image overlap results in target duplication, the movement of a target along the PTI ('target flow') is considered more likely to be seen as 'logical' if the images are obtained from unidirectional cameras. Bidirectional (opposing) camera images result in targets appearing to 'jump' rather than 'flow' between the monitor images / screens.
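To illustrate the 'flow' versus 'jump' behaviour described above, the sketch below simulates the sequence of on-screen tiles in which a single target would appear as it walks along a five-car formation. The coverage model (each camera is assumed to see only the adjacent carriage it faces) and the interleaved tile ordering are assumptions made purely for illustration; they do not describe the actual Class 800 or Class 387 camera geometry or monitor layouts.

# Illustrative sketch only: the camera coverage and tile ordering below are
# simplifying assumptions, not the actual Class 800 / 387 geometry.

CARS = 5  # five-car formation

def unidirectional_tiles(zone: int) -> list[int]:
    """Assume one camera per car, displayed in car order: a target alongside
    car `zone` appears only in tile `zone`."""
    return [zone]

def paired_tiles(zone: int) -> list[int]:
    """Assume each car carries a forward- and a rearward-facing camera and the
    in-cab layout interleaves them (fwd car 1, rev car 1, fwd car 2, rev car 2, ...).
    A target alongside car `zone` is assumed visible to the forward camera of the
    car behind it and the rearward camera of the car ahead of it."""
    tiles = []
    if zone > 1:            # forward-facing camera on the car behind
        tiles.append(2 * (zone - 1) - 1)
    if zone < CARS:         # rearward-facing camera on the car ahead
        tiles.append(2 * (zone + 1))
    return tiles

if __name__ == "__main__":
    print("zone  unidirectional  paired")
    for zone in range(1, CARS + 1):
        print(f"{zone:>4}  {unidirectional_tiles(zone)!s:>14}  {paired_tiles(zone)}")
    # With the unidirectional arrangement the tile index increases monotonically
    # (1, 2, 3, ...), so the target appears to 'flow' across the layout.
    # With interleaved pairs the indices move non-monotonically (e.g. [4], [1, 6],
    # [3, 8], ...), which is the 'jumping' behaviour drivers described.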
4.4 Image quality
4.4.1 Field of view
The view presented by the Class 800 camera images on the monitor is considered to show insufficient detail to permit efficient scanning of the PTI and hazard identification. The current images (potentially as a result of the relative proximity of the camera to the train body) do not 'look back' towards the train; rather, they show a view 'along' the train. Drivers reported a strong preference for a view that encompasses the train's bodyside and more of the door aperture (and the doors themselves) to aid the detection of 'trapped' targets. To illustrate this, Figures 3 and 4 (below and overleaf) show the comparative fields of view for the Class 800 and the Class 387. The Class 387 images show a field of view that encompasses the side of the train and part of the train door itself; the Class 800 image does not. The HF analyst does not fully understand the technical cause of this, but does note that the camera on the Class 800 appears to be mounted closer to the train bodyside than on the Class 387 (see the Equipment design section for photographic illustrations of this).
FIGURE 3: FIELD OF VIEW CLASS 800
FIGURE 4: FIELD OF VIEW CLASS 387
Note: the images have not been controlled such that they show the same train at the same platform etc.; they are for illustrative purposes only.
Note: Hitachi engineers state that the size of objects within the Hitachi DOO system is uniform, unlike those displayed by the system in use on the Class 387s, in which objects appear larger or smaller based on their position relative to the camera.
4.4.2 Sunlight
Figure 5 (below) illustrates how strong low sunlight appears to result in image quality (at some locations, at certain times only) that is insufficient to support drivers' tasks.
FIGURE 5: IMPACT OF SUNLIGHT ON CLASS 800 IN-CAB MONITORS
The extent to which the most recent image processing changes have corrected this issue remains unclear to the HF analyst, but anecdotally the changes have improved image quality significantly.
4.4.3 Dirt
The external camera appears prone to fouling due to environmental conditions, as illustrated in Figures 6 and 7 below:
FIGURE 6: SNOW ON CLASS 800 CAMERA
FIGURE 7: DIRT ON CLASS 800 CAMERA AND LENS
4.5 Equipment design
4.5.1 Deflectors and Camera Housing
The reasons for the high accumulation of dirt on the camera housing and lens are not technically fully understood by the HF analyst, but possible reasons are outlined below. In part, the removal of dirt is a maintenance issue (both at depot and station), but it is possible that the equipment design and installation itself contributes to this.
The HF analyst notes that the Class 800 deflector system (used only for a trial period and no longer installed) seems less robust than the Class 387 deflector system (see Figures 8 & 9). Furthermore, the lens on the Class 800s is recessed in the camera housing, whereas the Class 387 lens appears flush with the camera housing and is also mounted at a greater angle to the train's bodyside. These factors may also contribute to a greater accumulation of dirt on the Class 800 camera lens.
The figures below also appear to highlight that the Class 800 camera is more closely mounted to the train's bodyside than is the case for the Class 387. Additionally, the lens itself appears larger on the Class 387s. Again, although the HF analyst does not fully understand the technical reasons for the current design, it is possible that this also impacts on the achievable field of view.
FIGURE 8: CLASS 387 CAMERA HOUSING AND DEFLECTORS
FIGURE 9: CLASS 800 CAMERA HOUSING AND TRIAL DEFLECTORS
4.5.2 Monitor Labelling
Although the current system of labelling on the in-cab monitors (alphanumeric data related to the camera's position on the left or right hand side of the train and its position within the train formation) does relate to the information on the TMS, drivers reported confusion in understanding it. This may be because it does not match drivers' mental models of the train formation (which, from discussions, is typically linked to the car number).
5. CONCLUSIONS
The following conclusions are made in consideration of the driver questionnaire, observations during cab rides, discussions with drivers, drivers' union representatives and GWR Project management, as well as a high-level human reliability assessment. Please note that the present study has not sought in any way to replicate the earlier CCD studies and therefore the results of the present study are not directly comparable with the earlier findings. The key conclusions drawn from this HF DOO CCTV monitor review are as follows.
Trial
1. Drivers' exposure to the monitor images was not 'controlled', such that opinions were expressed by drivers who may not all have seen any given monitor image layout or monitor quality modification (or specific combinations of these).
2. The monitor image layout design was not part of the present HF project scope.
3. The monitor image quality design and modifications were not part of the present HF project scope.
4. During the trial, the monitors, whilst 'active', were not utilised by the driver for the purposes of DOO dispatch and this may also have influenced the opinions expressed.
5. Despite the above caveats, the different sources of data collection provide a broadly unified appraisal of the present system design and operation. This is outlined in the further conclusions below.
System Engineering and Design
6. It should be noted that, to date, the HF analyst is unaware of documentation detailing the overall system design rationale, and therefore the points raised below are made in the absence of this data. Specifically, this includes how usability testing was integrated into each phase of the system design. Compliance with railway group standards, for example, does not necessarily equate to optimised usability. However, the meeting with Hitachi on 7th November 2019 has highlighted more clearly the design constraints that Hitachi needed to consider, including regulatory input, distances between doors, and the aerodynamics of the train.
7. Many of the issues presented below are considered likely to arise from the use of two cameras (as pairs) per carriage, specifically the fact that each camera in the pair faces in the opposite direction to the other, as do the associated images from these. This appears to be a key issue and the other issues arising from it are outlined in some of the further conclusions below.
Image Quality
8. One of the key findings is that dirt on the camera lens impacts significantly on the image quality presented to the driver on the in-cab CCTV monitors. Inadequate maintenance at both depot locations and terminus platform locations results in dirt accumulation on the camera lens, and the in-cab CCTV monitor images are degraded as a result.
Cleaning regimes implemented at depot locations were observed to be inconsistently implemented. Observations noted that on occasion trains having just left the depot into service operation already had dirt on the lens. Depot-based cleaning will by its nature be dependent on the service schedule of the train and is likely always to be performed less frequently than is required to ensure camera cleanliness.
Cleaning regimes implemented at terminus station platform locations were also observed to be inconsistently implemented. Although platform cleaning may be performed more frequently than at depot locations, platform-based maintenance is considered likely to have limited effectiveness because typically only cameras on the platform side can be reached safely, and the train's stopping pattern may utilise cameras on both sides of the train. Furthermore, the fact that the externally mounted cameras face both directions means that half of the cameras will always be facing the direction of travel and are therefore potentially more prone to the accumulation of dirt.
9. A further key finding is that low sunlight (as captured through the external CCTV cameras) can impact on the image quality and therefore the usability of the in-cab monitor images. On occasion the image degradation is such that it is considered that the images are not of sufficient quality to support drivers' tasks during DOO despatch.
Although this issue is considered likely to periodically affect any method of DOO despatch, the concerns raised by drivers suggest that it is significantly worse for the Class 800s than for the Class 387s and for on-platform mirror-based systems. The author does not fully understand the technical reasons for this issue (i.e. whether it is a function of the camera / lens design and/or the image processing) but does note that image processing modifications were implemented during the trial period. It is unclear whether this issue has been fully resolved by the modifications, but the subjective data captured from drivers and drivers' union representatives suggests that the image quality was perceived to have improved after the later modifications, which used a combination of adapted mist and wide dynamic range settings. As with the issues raised above in connection with the accumulation of dirt, the fact that cameras face both directions is likely to increase the cameras' exposure to low sun, with a potential increase in the image problems associated with this.
10. Again, although the author does not fully understand the technical reasons for the accumulation of dirt, the camera design itself (e.g. camera mounting position, lens, camera lens recess, camera housing and deflectors etc.) may be contributing to the accumulation of dirt. Anecdotally it appears to be worse than for the Class 387 units. The camera designs appear, from pictorial evidence, to differ significantly between the Class 800 and 387 and this could potentially be a cause of the increased dirt accumulation.
Layout and image format
11. All of the in-cab monitor screen layout options seen during the trial period show pairs of views (each pair comprised of views obtained from cameras facing in opposite directions).
12. This format of camera positioning by 'pairs' for the Class 800s results in approximately double the number of cameras and associated images per train formation compared to the Class 387 system. This increases the workload in terms of the processing time needed to scan the images.
13. In order to accommodate the maximum number of paired images required for 10 car operation (20 cameras and 20 camera images), the size of the displayed images has been reduced or 'cropped' to the largest size at which the two in-cab monitors can display the maximum number of images.
For Class 800 5 car formations the image size remains reduced / cropped despite screen space being available for an increase in image size.

14. Although the reasons for the reduced field of view in the Class 800 images compared to those of the Class 387s are not fully understood by the author (that is, whether it is a function of 'cropping' or reducing the size of the images and / or attributable to the camera design), it is important to note that drivers often cited the Class 800 field of view as problematic when monitoring the images for PTI hazards. Issues associated with field of view were particularly cited in relation to not being able to see the bodyside of the train and the train door position. It is noted that the Class 800 door is recessed, which contributes to the difficulty of seeing the aperture itself. However, comments also encompassed the overall field of view of the image covering the platform.

15. This format of camera positioning by 'pairs' is considered to have resulted in image layout patterns that, whilst each format has advantages and disadvantages, are overall not considered to support intuitive scanning, because the driver is unable to obtain a single unidirectional view of the PTI. The scanning is required to be done by pairs which, as noted, comprise opposing views. The standard option presents these pairs as horizontally opposed. Although this layout theoretically supports a 'zulu' or 'z' scan pattern (top down and left to right, which is the stereotypical method of reading in the UK), the use of pairs of opposite facing cameras means that the scanned information itself does not match the need to mimic 'looking out of the window back along the train', as one driver described it. Alternative option 1 presents the pairs as vertically opposed and offset (i.e. one image of the pair is not directly above the other). Alternative option 2 presents each image of the pair on a separate monitor. Whilst Alternative option 2 appears superficially to remove many of the issues associated with the image pairs being co-located on the monitor screens, the advantages may be negated if the driver is still required to observe all of the images across the two screens. An ad hoc trial of this option, utilising only the images from one screen, showed target 'gaps' only at each end of the train formation.

16. The DOO system design utilising opposite facing camera and image pairs is considered to have a negative impact in terms of target duplication.
There appears to be an increase in the number of duplicated targets on the Class 800s compared to the system utilised on the Class 387s, which comprises single, unidirectional cameras. Target duplication is considered to impede effective hazard identification. Additionally, the presentational format of the Class 800 image pairs on the in-cab monitors means that target duplication (which does also occur on the Class 387s), when tracking a target's movement on the PTI, results in a non-linear 'flow' of the target, such that the target appears to 'jump' around the screen rather than transition uniformly from one image to the next in sequence.

17. Although the in-cab on-screen image nomenclature / identification tags do match the terminology used by other in-cab systems such as the TMS, drivers still find the labelling confusing, meaning that they cannot easily associate an image with a specific location on the train. For the Class 387s, the vehicle number is displayed irrespective of the platform side used for dispatch. Additional training may be required to alleviate this issue.
6. FACTORS FOR CONSIDERATION
Please note that, subsequent to the original recommendations being made within the draft report, informal discussions held with GWR management and Hitachi engineers on 7th November 2019 highlighted issues with the implementation of a number of the recommendations because of cost, engineering feasibility (e.g. gauging, the vehicle's structural integrity, aerodynamics etc.) and the need to demonstrate compliance with railway standards. Note, however, that these discussions do not represent a formal, quantified analysis of engineering feasibility and cost benefit implications. Based on the informal discussions, many of the original recommendations are now documented in this section of the report as Factors for Consideration, and only those considered at the meeting as feasible are retained as Recommendations.

Factors for consideration are therefore documented as:

1. The railway standard governing the design of DOO systems is prescriptive in terms of the external camera layout and internal monitor image layout required for the train carriage lengths achieved by the Class 800. It is considered that the use of pairs of opposite facing cameras on each carriage to meet these requirements may actually impact negatively on achieving the most usable system from the perspective of supporting drivers' tasks.

2. In order to meet, in full, the usability requirements identified by the present study, the DOO system would require significant engineering redesign. This redesign is considered likely to include, but not be limited to:
a. Re-engineering the camera system design such that all utilised cameras are unidirectional.
b. Re-engineering the camera system design such that one camera per carriage provides a unified view of the carriage. These two changes alone would reduce driver workload by reducing the number of images to be scanned, reduce target duplication and improve the transitions of targets between camera images.
c. Increasing the in-cab monitor image size to the maximum that any redundant screen space (resultant from the above) permits.
d. Re-engineering the camera system design to increase the field of view so that monitor images show more of the train body side and door aperture.
e. As part of the above, or as an alternative viable option, providing a selectable 'door only' view, either on a carriage by carriage basis or as a 'global' train function.
f. Re-engineering the camera system design to reduce the accumulation of dirt.

3. The in-cab monitor layout should be amended to reflect the new system design as achieved by the above. This would permit standardisation with Class 387 operation.
7. RECOMMENDATIONS
The retained recommendations from the draft report are reiterated here, together with further recommendations to guide future system use and system development.

1. Ensure that driver training supports the use of the Class 800 system through initial and refresher programmes as required. As part of this, ensure that:
a. Scanning techniques adopted by drivers are optimised for performance.
b. Drivers understand the potential content and operation of the 'zoom' function.
c. Drivers understand the link between the image labelling on the in-cab DOO monitors and the labelling used on the TMS, by which the train carriage can be identified.

2. Ensure that lessons learned from the human factors evaluation of this completed DOO system design are fed into future DOO system designs sufficiently early in the development process to ensure that usability of the system by drivers is optimised.
8. REFERENCES
1. GWR Remit for further review of the DOO CCTV on Class 80x. Dated 2/12/2018.
2. MWHA proposal: Class 800 DOO CCTV Operational Review. Dated 08/02/2019.
3. CCD Report CCD-1752-REP-001-18 v2.0: Hitachi IET Driver Controlled Operation: Train Safety Check Experiment Report. 30/05/2018.
4. CCD Report CCD-P312-REP-001-17 v1.0: Hitachi Intercity Express Driver Only Operation - Human Factors Assessment. 01/03/2017.
5. GWR discussion document: Camera Layout Alterations. Received 08/05/2019.
6. Kirwan, B. (1994). A guide to practical human reliability assessment. Taylor and Francis.
APPENDICES
Appendix A: CCTV layout illustrations
Appendix B: CCTV Layout photographs
Photograph of Standard Option_5 car formation (incorporating original mist setting)
Photograph of Alternative Option 1_5 car formation (incorporating modified mist setting and wide dynamic range modification)
Photograph of Alternative Option 2_5 car formation (incorporating modified mist setting and wide dynamic range modification)
Appendix C: DOO CCTV Monitor Questionnaire Responses
The questionnaire asked each respondent:
Q1. Which unit are you providing comment on? (please choose only 1)
Q2. What route(s) have you been driving the unit on?
Q3. What times of day have you driven the unit?
Q4. In your opinion, did the layout of images on this unit make it easier or harder to see the platform / any hazards than Option 3 (i.e. the existing layout)?
Q5. Have you seen both types of new layout on the trial units? (i.e. Option 1 and Option 2)
Q6. Which of the 3 layouts do you prefer with regards to viewing hazards on the platform?
Q7. Is there anything else you wish to add about this trial?

Respondent 1 – Q1: 800307; Q2: Paddington - Oxford; Q3: Late Evening/Night (Darkness), Daytime; Q4: No; Q5: Yes; Q6: Existing Layout (Option 3 Layout)
Respondent 2 – Q1: 800304; Q2: Paddington - Oxford; Q4: No difference between the new and the existing layout; Q5: No; Q6: Existing Layout (Option 3 Layout)
Respondent 3 – Q1: 800018; Q2: Paddington - Oxford, Paddington - Bedwyn; Q3: Early Morning (Darkness), Daytime, Late Evening/Night (Darkness); Q4: Yes; Q5: No; Q6: 800018/800307 (Option 2 Layout)
Respondent 4 – Q1: 800018; Q2: Paddington - Oxford; Q3: Late Evening/Night (Darkness); Q4: Yes; Q5: No; Q6: 800011/800304 (Option 1 Layout)
Respondent 5 – Q1: 800304; Q2: Paddington - Oxford; Q3: Late Evening/Night (Darkness); Q4: Yes; Q5: Yes; Q6: 800011/800304 (Option 1 Layout)
Respondent 6 – Q1: 800018; Q2: Paddington - Oxford, Paddington - Bedwyn; Q3: Late Evening/Night (Darkness); Q4: Yes; Q5: No; Q6: 800018/800307 (Option 2 Layout)
Respondent 7 – Q1: 800011; Q2: Paddington - Bedwyn; Q3: Early Morning (Darkness), Daytime; Q4: Yes; Q5: No; Q6: 800018/800307 (Option 2 Layout)
Respondent 8 – Q1: 800018; Q2: Paddington - Oxford, Paddington - Bedwyn; Q3: Early Morning (Darkness), Daytime, Late Evening/Night (Darkness); Q4: Yes; Q5: Yes; Q6: 800018/800307 (Option 2 Layout)
Q7. Is there anything else you wish to add about this trial? Responses received:
• "Impossible to give a subjective view on these trial camera layouts at the present, as EVERY SINGLE UNIT I've encountered so far has nigh-on 80% of the camera lenses plastered in filth, making the images impossible to gauge. I would have thought regular lens cleanliness might have been insisted upon PRIOR to this trial taking place. Clearly not..."
• "For me, the problem is not the way the images are presented, but the quality of the images. The train has a high speed ethernet backbone - I don't understand why the images are so low resolution and so highly compressed, with such a low frame rate."
• "Whilst Option 2 is a big improvement to the existing layout it still isn't as good as Option 1. Existing option 3 layout should definitely be removed as it is confusing."
• "Image quality still poor when dirty and in the direction of the sun. Also, could the images be slightly wider as there is no need for 12 images only 10, plus be good to see slightly more of the bodyside."
• "Although I have used the layout on 800011 and although the layout looked better, I cannot honestly comment on whether risks were easier to see. This was due to the cameras being absolutely filthy."
• "New layout is better."
Appendix D: Briefing Notes
Appendix E: Photographic illustrations of image quality
Class 387 images
Photograph of original mist settings showing unnatural colour palette.
Photograph of modified mist setting with additional dynamic range adjustment
Appendix F: Human Reliability Assessment Error Definitions

The definitions of the External Error Modes and Psychological Error Mechanisms are detailed below.

External Error Modes
• Action omitted
• Action too early
• Action too late
• Action too much
• Action too little
• Action too short
• Action in wrong direction
• Right action on wrong object
• Wrong action on right object
• Misalignment error
• Information not obtained/transmitted
• Wrong information obtained/transmitted
• Check omitted
• Check on wrong object
• Wrong check
• Check mistimed
SHERPA Psychological Error Mechanisms
1. Failure to consider special circumstances: a task is similar to other tasks but special circumstances prevail which are ignored, and the task is carried out inappropriately.
2. Short cut invoked: a wrong intention is formed on familiar cues which activate a short cut or inappropriate rule.
3. Stereotype takeover: owing to a strong habit, actions are diverted along some similar but unintended pathway.
4. Need for information not prompted: a failure on the part of the external or internal cues to prompt a need to search for information.
5. Misinterpretation: the response is based on the wrong apprehension of information, such as misreading of text or misunderstanding of a verbal message.
6. Assumption: a response is inappropriately based on information supplied by the operator (e.g. recall or guesses) which does not correspond with information available from the outside.
7. Forget isolated act: the operator forgets to perform an isolated act or function, i.e. one that is not prompted by the operational context, or which does not have an immediate effect on the task sequence, or one that is not integrated into part of a memorised structure.
8. Mistake among alternatives: a wrong intention causes the wrong object to be selected and acted on, or the object presents alternative modes of operation and the wrong one is selected.
9. Place losing error: the current position in the action sequence is misidentified as occurring later on.
10. Other slip of memory.
11. Motor variability: a lack of manual precision, too great or too small a force is applied, or inappropriate timing.
12. Topographic or spatial orientation inadequate: in spite of the operator's correct intention and correct recall of identification marks / tagging etc., a task / act is unwittingly carried out in the wrong place or on the wrong object; this occurs as a result of following an immediate sense of locality where this is not applicable or has not been updated, due to surviving imprints or old habits.
Appendix 3
Project Report
Hitachi IET Driver Controlled Operation: Train Safety Check Experiment Report
Document Control
Client: Hitachi
Title: Hitachi IET Driver Controlled Operation: Train Safety Check Experiment Report
Reference: CCD-1752-REP-001-18
Version: 1.0
Status: First Issue
Issue Date: 30th April 2018
Authors: Daniel Simmons, Principal Consultant; Laura Jones, Consultant
Approved by: Martin Freer, Technical Director

Record of Issue
Name: Tien Bang – Company/Organisation: Hitachi

Document History
Version no. 1.0 – Date of Issue: 30th April 2018 – Document Changes: First Draft
References
1. Rail Industry Standard for Driver Only Operated On-train Camera / Monitor Systems, RIS-2703-RST, Issue 1, June 2014.
2. Hitachi Intercity Express Driver Controlled Operation Human Factors Assessment, CCD-P312-REP-001-17, v2.0.
3. Station Duties and Train Dispatch, GE-RT8000-Rulebook-Module SS1, Issue 4, September 2015.
4. Rail Industry Standard for Passenger Train Dispatch and Platform Safety Measures, RIS-3703-TOM, Issue 2, March 2013.
5. Hitachi IEP Driver Controlled Operation: Train Safety Check Experimental Assessment, CCD-P312-TECH-004-17, v1.0.
6. Hitachi Train Safety Check Experiment Proposal, CCD-1752-PROP-001-17, September 2017.
Table of Contents
1 Introduction
  1.1 Background
  1.2 Research Question
  1.3 Experiment Development
  1.4 Quantifying Reliability
  1.5 Important Qualifications
  1.6 IET Train Development
  1.7 Key Activity Dates
2 Method
  2.1 Overview
  2.2 Incident Scenarios
  2.3 DCO Driver Task Simulation
  2.4 Experimental CCTV Footage
      Footage Capture
      Footage Configuration
  2.5 Experimental Conditions
  2.6 Participants
3 Results
  3.1 Target Detection Rates
  3.2 Detection Rates by Participant
  3.3 Emerging Target Detection Rates
  3.4 Detection Failures from Timeouts
  3.5 Results by DOO Experience
  3.6 Results by Driving Experience
  3.7 Results by Age
  3.8 False Target Detection
  3.9 Response Time
  3.10 Timeouts
  3.11 Scanning Techniques
  3.12 Post-test Interviews
4 Discussion
  4.1 Target Detection Reliability
  4.2 Individual Differences
  4.3 Target Type
  4.4 Response Time
  4.5 False Target Detection
  4.6 Scanning Techniques
5 Conclusions
Acronyms List
CCD – CCD Design & Ergonomics Limited
CCTV – Closed Circuit Television
DCO – Driver Controlled Operation
DOO – Driver Only Operation
GWR – Great Western Railway
HF – Human Factors
HRE – Hitachi Rail Europe
IEP – Intercity Express Programme
IET – Intercity Express Train
ORR – Office of Rail Regulation
PTI – Platform Train Interface
RIS – Rulebook and Industry Standards
RSSB – Rail Safety and Standards Board
SPSS – Statistical Package for Social Sciences
TOC – Train Operating Company
VTEC – Virgin Trains East Coast
1 Introduction
This document describes an experimental assessment designed to assess driver reliability at using the onboard Driver Controlled Operation (DCO) CCTV system on Hitachi’s Intercity Express Train (IET) to complete the train safety check prior to departing a station.
1.1 Background

In Driver Controlled Operation (DCO) of passenger trains, drivers use in-cab CCTV monitors to conduct the train safety check before leaving the station. The current Rail Industry Standard RIS-2703-RST [Ref 1] allows drivers to operate DCO trains with up to x12 CCTV camera images (i.e. one image per car of a 12-car train). However, the HRE IET carriages are 26m in length, longer than other UK rolling stock, and will require two cameras per carriage side to ensure no blind spots occur on curved platforms (see Figure 1). The IET driver will therefore be provided with two images per car, twice as many as on other existing DCO systems, with potentially up to 24 images for one side of a 12-car train¹. The HRE IET train's CCTV system does not therefore comply with the current standard, and HRE remain responsible for demonstrating to the relevant authorities that the system is fit for purpose.
Figure 1 - Hitachi IEP train camera set up
HRE therefore contracted CCD to conduct an experimental assessment of the IET train to assess driver reliability at detecting persons / articles trapped in the train doors using up to x24 CCTV image pairs [Ref 1]. An experimental assessment was developed in consultation with the Office of Rail Regulation (ORR) and Rail Safety and Standards Board (RSSB) and conducted in 2017 [Ref 2]. This tested the ability of drivers to detect simulated incidents randomly inserted into sequences of CCTV footage captured from the IET train. The results demonstrated levels of driver performance comparable to those for other existing DCO trains, which only use up to x12 single images. However, following a review of the report, the ORR, RSSB and Train Operating Companies (TOCs) collectively concluded that the experimental remit did not enable its results to demonstrate that drivers would be able to reliably complete the train safety check. Specifically, the experiment did not assess whether the driver can reliably make a "final holistic check" to determine that the whole Platform Train Interface (PTI) is safe prior to making the decision to depart. The ORR explained that the industry's position on PTI risk has moved on from only considering trap-and-drag potential, as per previous DCO experiments, and that the rulebook [Ref 3] and industry standards [Ref 4] now expect the train safety check to include a final holistic check of the PTI prior to deciding whether it is safe for the train to depart.

This final holistic check of the PTI is considered important because there may be incidents / events that develop AFTER the driver's check of the doors has started or has been completed (as became apparent to the ORR after they tried IET dispatch at Ebbsfleet in January 2017). The concern is that these "developing incidents" could be missed during a purely sequential scan of the carriage doors for potential trap-and-drag scenarios. CCD's original IET experiment [Ref 2] did not cover this eventuality, since it used "static apparent trappings" at the carriage doors which were present at the start, and existed without change, for the duration of each image sequence test participants viewed. The contention is that the experiment only considered the sequential assessment of CCTV images to detect the trap-and-drag potential of passengers stuck in the doors, but did not test that drivers could reliably detect any "developing" incidents with a holistic final check of the PTI.

The ORR consider this to be a key issue specifically for the IET train because it is thought that the proposed 20 to 24 CCTV images presented to the driver will make a final holistic PTI check much more challenging than would be the case for other existing DCO trains (which only use up to a maximum of 12 CCTV images). HRE therefore instructed CCD to design, conduct and report a new experiment that would address the perceived shortcomings of the previous one, by assessing driver performance at a holistic final check with developing incidents.

¹ Train Operating Companies (TOCs) currently only plan to operate IET train formations of 5, 9 and 10 cars.
1.2 Research Question

The key question for the new experiment is whether drivers will be less reliable in making a holistic overview of the PTI because of the marked increase in the number of CCTV images presented by the IET train (up to 20-24 images) compared to existing DCO trains (up to a maximum of x12 images). The aim of the new experimental investigation was thus agreed as: to measure any difference in reliability between the IET CCTV configuration (2 opposing cameras per car) and a current operational CCTV configuration (1 camera per car) for driver detection of developing hazards.
1.3 Experiment Development

The experimental requirements were developed and agreed through a series of meetings between the Office of Rail Regulation (ORR), Rail Safety and Standards Board (RSSB), Great Western Railway (GWR), Virgin Trains East Coast (VTEC) and CCD, chaired by HRE, between July and September 2017. One of the key meetings reviewed RSSB incident data and agreed the set of incident scenarios that would be used in the new train safety check experiment (see Section 2.2), with attendees as detailed in Table 1.

Table 1 – Stakeholder Target Incident Scenario Meeting Attendees, July 14th, 2017
• Huw Gibson – RSSB
• Glen Brunsden – RSSB
• Martin Freer – CCD
• Cliff Cork – HRE
• Jon Colley – VTEC
• Sam Varghese – GWR
A further PTI Hazard Workshop was held at Hitachi offices on the 8th August 2017, which further reviewed the hazard scenarios and agreed a script for those to be used in the experiment. The attendees are detailed in Table 2.

Table 2 – PTI Hazard Review Meeting Attendees, August 8th, 2017
• Huw Gibson – RSSB
• Glen Brunsden – RSSB
• Martin Freer – CCD
• Cliff Cork – HRE
• Keith Shepherd – ORR
• Iain Ferguson – VTEC
• Tina Thompson – VTEC
• Malcolm Cook – GWR
• Alasdair Forsyth – GWR
• Andrew Penrose – GWR
The consultative development process culminated in a final experimental design document [Ref 5] that was reviewed and agreed by all stakeholders and which became the basis of CCD's commercial proposal to HRE [Ref 6]. This report details the design and results of the new experimental assessment.
1.4 Quantifying Reliability

In the absence of a standard / definition of the performance reliability that a DCO system needs to achieve, previous DCO experiments have made comparative assessment with representations of CCTV image arrangements that meet standards and are in use, as well as with the results of any previous DCO experiments. However, there has been no previous assessment of driver ability / reliability to make a final holistic check of the PTI, and hence there is no reliability data for different numbers of CCTV images against which a comparison of performance of this aspect of the train safety check with the IET train can be made. Therefore, as part of the new experiment, it was agreed to create some comparable data by generating test CCTV footage from an existing DCO train that can be directly compared with that from the IET.
1.5 Important Qualifications

The following qualifications are provided to facilitate understanding of the intent of the new experimental assessment:
• The experiment is not attempting to test the safety of DCO as a means of train dispatch.
• The experiment is not attempting to test whether DCO is an effective or appropriate mode of train operation.
• The experiment is not attempting to test the IET train (or DCO operation) in comparison with any other dispatch methods that do not employ on-board CCTV cameras and monitors² (such as platform mounted CCTV, dispatch with platform staff, dispatch with on-board guards, etc.).
• The experiment will not attempt to consider the potential impact of extraneous operational or environmental distractions on driver performance at DCO.
• The experiment will not attempt to consider the impact of competing workload or time pressures on driver performance at DCO.
• The experiment will not attempt to consider the issue of how driver expectation is shaped or how the fact that real incidents are very rare could affect driver performance at DCO.
• The experiment is not testing the effectiveness of the Hitachi IET DCO CCTV system itself, and the following assumptions are made:
  o IET CCTV system provides imagery of suitable quality to meet standards.
  o IET CCTV system provides suitable target image size and field of view.
  o IET CCTV system provides suitable imagery under all ambient / environmental lighting / weather conditions.

² DOO systems with on-board CCTV cameras and monitors exist and operate on the UK network in compliance with current standards and regulations; the only issue which this experiment is designed to consider is whether the increased number of images the IET train uses will cause a final holistic check of the PTI to be less reliable than would be the case using other existing and compliant DOO trains.
It was agreed by all parties that the experiment would test driver performance in a lab-based experiment using exemplar CCTV footage captured from both an IET train and an existing DCO train, which included incidents simulated using actors and props.
1.6 IET Train Development

Following the original assessment [Ref 2], changes to the IET CCTV image presentation on the in-cab monitors were suggested by the stakeholders, involving improvements to the grouping of image pairs and image boundaries. These agreed changes were implemented by Hitachi and featured in this new experiment.
1.7 Key Activity Dates
• Sunday 4th February – Scenario filming day at Paddington Station, Platform 1.
• Thursday 8th March – Wednesday 14th March – Simulation Testing at VTEC Training Centre in York.
• Thursday 15th March – Thursday 29th March – Simulation Testing at GWR Drivers Office at Paddington Station.
• Monday 30th April – Delivery of Experiment Report.
2 Method

CCD designed an experimental assessment of driver performance at detecting simulated targets (incidents) using exemplar CCTV footage of passengers waiting and moving alongside the train, as previously conducted in support of other DCO systems [Ref 2]. The experiment was designed to quantify driver error rate and task times for the in-cab CCTV scanning element of DCO duties. Its specific objectives were to determine:
• Whether drivers can reliably make a holistic final check of the PTI (i.e. detect incidents that develop during or after they have checked that the train doors are clear) with the increased number of paired CCTV images proposed for the IET train
• How long they require to do this effectively
• Whether driver reliability is different between the IET train and another existing DCO-equipped train that is compliant with current standards

2.1 Overview

The overall experimental design was similar in form to the previous IET assessment [Ref 2], in that it was a lab-based repeated measures test with driver participants from two TOCs (VTEC and GWR) being asked to view a series of CCTV images captured from live IET and DCO trains, some of which included Target Scenarios (i.e. incidents / events deemed to constitute an unsafe situation that would prevent dispatch) simulated by actors. The imagery also included varying levels of platform crowding / passenger density and movement. As in the original IET assessment [Ref 2], the driver participants were asked to conduct the train safety check using the CCTV imagery to decide whether it is safe to dispatch the train (i.e. whether a Target Scenario is present).

The principal difference in this experiment lies with the target scenarios (explained in detail in Section 2.2). Most of them were designed to develop during or after the driver makes their sequential check of the train doors, and importantly they can appear in images that drivers have already otherwise checked. To detect them reliably, the drivers need to be able to make an effective overall / holistic check of all imagery presented. It is this specific aspect of the train safety check that the experiment was designed to assess.

The experimental assessment of driver performance consisted of presenting CCTV video sequences taken from the train cameras, some of which randomly included simulated Target Scenarios, to a sample of train drivers under the control of a bespoke software system. Drivers were required to detect whether a Target Scenario was present in each sequence they viewed. The software control system recorded accuracy of detection and the time taken to do so as the key measures of driver performance. Driver responses were captured as follows:
• Target present and driver detects (true positive)
• Target present but driver fails to detect (false negative)
• No target present but driver detects (false positive)
• No target present and driver correctly gives the all clear (true negative)

In addition, on each occasion where drivers decided not to move the train, they were asked to point out the target they detected or explain their reasons for deciding it was unsafe to move.
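The report does not reproduce the bespoke control software, but the four outcome categories above correspond to standard signal-detection bookkeeping. The following Python sketch is illustrative only (all names are hypothetical, not CCD's implementation) and shows one plausible way such responses could be tallied and a detection rate derived.

```python
# Illustrative sketch (not the actual CCD software): tallying the four response
# outcomes described above and deriving a simple detection-rate summary.
from dataclasses import dataclass

@dataclass
class TrialResponse:
    target_present: bool   # a Target Scenario was present in the clip
    driver_detected: bool  # the driver decided it was unsafe to dispatch

def classify(response: TrialResponse) -> str:
    """Map one trial to the outcome categories used in the experiment."""
    if response.target_present and response.driver_detected:
        return "true_positive"      # target present and driver detects
    if response.target_present and not response.driver_detected:
        return "false_negative"     # target present but driver fails to detect
    if not response.target_present and response.driver_detected:
        return "false_positive"     # no target present but driver detects
    return "true_negative"          # no target and driver correctly gives the all clear

def detection_rate(responses: list[TrialResponse]) -> float:
    """Proportion of target-present clips that were correctly detected."""
    outcomes = [classify(r) for r in responses]
    hits = outcomes.count("true_positive")
    misses = outcomes.count("false_negative")
    return hits / (hits + misses) if (hits + misses) else float("nan")

if __name__ == "__main__":
    # Hypothetical data for illustration only.
    sample = [TrialResponse(True, True), TrialResponse(True, False),
              TrialResponse(False, False), TrialResponse(False, True)]
    print(detection_rate(sample))  # 0.5 on this made-up sample
```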
The imagery was developed in four sequences to provide the comparison between different train configurations and compliance with RIS-2703-RST. 10 car trains are the most likely IET configuration (at least to start with). 12 car configurations are the longest that already meet existing standards. The aim is to provide a comparison between the most likely and worst-case train lengths for the IET and existing compliant DCO trains.
1. Non-compliant arrangement: IET 10-car train with x2 opposing cameras per car = x20 images presented as x10 CCTV image pairs
2. Non-compliant arrangement: IET 12-car train with x2 opposing cameras per car = x24 images presented as x12 CCTV image pairs
3. Compliant arrangement: DCO 10-car train with x1 camera image per car = x10 single images
4. Compliant arrangement: DCO 12-car train with x1 camera image per car = x12 single images (the maximum on compliant DCO trains)
2.2 Incident Scenarios

A Stakeholder meeting was held on 14th July 2017 to define "developing incident" scenarios to use in the new experiment. A list of hazards and incidents was produced by RSSB Human Factors (HF) and Operations specialists prior to the meeting. When considering the scenarios, it was decided that, due to the view achievable from the train monitors (1.5m from the platform edge), all scenarios worthy of consideration should occur within the train dispatch corridor. The train dispatch corridor is currently defined in RIS-3703-TOM [Ref 4] as being:
i) The full length of the train or the full length of the platform, whichever is the shortest distance.
ii) The gap between the train and the platform.
iii) At least 1500 mm of the platform measured from the platform edge.
iv) At least the height of the doors.

This meant that any incidents judged to occur outside of the dispatch corridor, for example more than 1500mm from the train, would not be included in the new experiment. During the stakeholder meeting, RSSB's list of hazards and incidents was reduced to 14 to be considered for this experiment. These were:
1. Trap and drag incidents – Aware & Unaware (as defined in RIS-2703-RST)
2. Pushchair unattended in train dispatch corridor (as defined in RIS-2703-RST)
3. Small unaccompanied child within the train dispatch corridor (as defined in RIS-2703-RST)
4. Person running towards train and attempting to board (during the train dispatch process)
5. Person running alongside the train
6. Person fallen or lying down within the train dispatch corridor
7. Person leaning towards train having dropped an item and attempting to retrieve it from trackside
8. Falling between train and platform edge (not visible to driver)
9. Falling between train and platform edge (action being taken by other passengers to alert driver)
10. Erratic passenger behaviour
11. Disturbance at platform edge (horseplay, intoxication, assault)
12. Person riding on a wheeled device (bicycle, skateboard, skates) * Added from RIS-3703-TOM hazard list.
13. Adult in train dispatch corridor (moving or static)
14. None or only part of platform visible on the monitor (stop short or doors open on the wrong side)
The first 3 scenarios are hereinafter referred to as Static Targets, in that they are present at the start of an image sequence and remain unchanged for the duration. They are all defined in RIS-2703-RST and are identical to the targets used in the previous experiment [Ref 2]. However, note that "static" is not meant to imply that the target (person) does not move at all. The remaining 11 scenarios are types of developing incident, hereinafter referred to as Emerging Targets, in that they are introduced some time after the doors have been closed and during the image sequence (i.e. during or after the driver makes a check of all the doors). These have not been employed before in experimentation and have been used in this experiment to create delayed occurrences that the drivers had to make a final holistic safety check for, since they will appear in images already viewed, checked and cleared.

A second PTI Hazard Workshop was held at Hitachi offices on the 8th August 2017, involving ORR, RSSB, GWR, VTEC, HRE and CCD, which further reviewed the hazard scenarios and agreed a script for those to be taken forward in the experiment. The 10 agreed incident scenarios and their associated scripts are shown in Table 3.

Table 3 – Incident Scenarios
1. Trapped Aware (as defined in RIS-2703-RST) – Static – Script: Adult passenger caught in the train doors making vigorous efforts (for example, waving arms) to attract attention.
2. Trapped Unaware (as defined in RIS-2703-RST) – Static – Script: Adult passenger standing tight against the train but either as yet unaware they are trapped or otherwise making little effort to attract attention (for example, assuming doors will re-open to release them).
3. Pushchair unattended in train dispatch corridor (as defined in RIS-2703-RST) – Static – Script: Pushchair with baby, left on platform close to doorway (abandoned by, or stuck in, door).
4. Small Child (unaccompanied) within the train dispatch corridor (as defined in RIS-2703-RST) – Static – Script: A 2-year-old child; a model child 825 mm high and 150 mm deep placed close to the train doors (i.e. abandoned by, or stuck in, door).
5. Attempt to Board – person running towards the train and attempting to board (during the train dispatch process) – Emerging – Script: Adult runs into view and simulates trying to open doors to get onto the train.
6. Run Alongside – person running alongside the train – Emerging – Script: Adult runs into view (as above) then proceeds to run along the PTI (banging on train).
7. Collapsed – person fallen or lying down within the train dispatch corridor – Emerging – Script: Adult walks into scene, falls and remains lying in the PTI.
8. Retrieving dropped item – person leaning towards train having dropped an item and attempting to retrieve it from trackside – Emerging – Script: Adult walks into scene and leans into gap between carriages, then goes to knees / all fours pretending to retrieve something.
9. Fallen in – person has fallen between train and platform edge (action being taken by other passengers to alert driver) – Emerging – Script: Passengers on the platform react, crowd around the site of the incident and act as if trying to help. It was agreed that there would be no need to simulate the actual fall to the track, merely the reaction of passengers as if a fall had occurred and was seen by them.
10. Erratic passenger behaviour – Emerging – Script: Adult wanders about (as if drunk), perhaps bouncing off the train.
NOTE: it was agreed that none of the developing incidents would end / disappear; i.e. once started, they would continue to the end of the sequence. It was agreed that any scenario that ends or is resolved before the driver makes the decision to depart ceases to be an "incident" that would affect safe departure and can therefore be discounted. The workshop agreed to discount the scenarios in Table 4.

Table 4 – Discounted scenarios (all Emerging)
1. Falling between train and platform edge – Discounted: the initial event would be visible only very briefly (perhaps 1 to 2 seconds), and once at track level the victim would be out of the camera's field of view (obstructed by platform structure) and in darkness. If the victim were to stand, the visible portion of their head / torso would not constitute a target size that drivers could reasonably be expected to detect; the angle subtended at the eye by the target image would not meet the minimum target size in standards.
2. Disturbance at platform edge (horseplay, intoxication, assault) – Discounted: while this was considered a reasonable incident, it was agreed that this was no more instructive than Incident 10 (Erratic Passenger Behaviour); effectively the same incident but involving more than one person. It was agreed that there was no additional value to be had from including this scenario.
3. Adult in train dispatch corridor (moving or static) – Discounted: while this was considered a reasonable incident, it was agreed to be insufficiently different from, or more instructive than, other incidents already accepted.
4. None or only part of platform visible on the monitor (stop short or doors open on the wrong side) – Discounted: it was agreed that this situation is relevant to the opening of the doors, but not to train dispatch.
2.3 DCO Driver Task Simulation

The experiment was, as far as possible, run within the context of the driver task. However, service provision elements of that task, such as monitoring the platform when the doors are open and / or decisions about when to close the doors, were not included. A practicable experimental method cannot assess the decision / action to close the doors, since it would not be possible to extend or contract the lengths of pre-recorded video to match active decisions that drivers might make; an altogether different type of experiment with a live system would need to be conducted in real time to assess this. This experiment seeks only to quantify whether targets can be reliably detected using either the IET or DCO train camera systems for ten and twelve car formations. The experiment therefore only simulated a limited portion of the overall driving task, starting from the time when the doors (all single leaf interlock doors) have closed correctly (mechanically) and the interlock indicator has illuminated, as illustrated in Figure 2.
Figure 2 – Experimental DCO Driver Task Breakdown (the simulated task begins at "Doors Closed (interlock indicated)")

While the issue of how drivers ensure that doors are not closed while passengers are still boarding or alighting is important, it was agreed during development (see Section 1.3) that this is a general issue for DCO operation with all trains and is not specific to the Hitachi IET train. It was agreed that this aspect would need to be managed through operational processes and driver training and therefore did not need to be included in this experiment. Similarly, typical distractions that operational drivers may experience are not specific to the IET train and it was therefore agreed that these did not need to be considered / simulated in the experiment. The issue of whether the CCTV imagery should remain visible to the driver as the train moves off and leaves the station was not considered as part of this experiment.
2.4 Experimental CCTV Footage

Footage Capture

On Sunday 4th February 2018, CCD conducted the filming of the target scenarios with 200 hired actors to simulate commuter traffic on the platform (Figure 3 and Figure 4). The 10 target scenarios detailed in Table 3 were recorded live, multiple times and at multiple points along both the IET and DCO trains, as well as suitable "clear" footage of passengers waiting on the platform for the purposes of developing the CCTV imagery.
Figure 3 – Paddington Station Platform 1 hired for filming purposes
Figure 4 – CCD staff organising the filming day
All actors were dressed in neutral clothing representative of winter commuter wear, to be no more conspicuous than is likely in everyday life. All actors were asked not to cross the yellow line into the PTI unless they were instructed to do so by a member of the CCD team when filming an incident scenario. This was to ensure the incident / target scenarios were clearly identifiable during the simulation testing and to ensure that ambiguous scenes, which were not intended as targets but which might be interpreted as such in testing, did not occur. The actors were split into 8 groups, each managed by a member of the CCD team. The filming day was structured so that each of the incident scenarios shown in Figure 5 to Figure 7 was split into individual film sequences. These were then captured several times on each train carriage CCTV camera or camera pair. These, coupled with "clear" footage in the remaining train CCTV cameras from each film sequence, provided the CCTV imagery to develop the Simulation Test.
Figure 5 Static targets – Trapped Aware, Trapped Unaware, Unattended Pushchair and Small Child
Figure 6 Emerging Targets a) - Attempt to board, Run alongside, Collapsed
Figure 7 Emerging Targets b) - Retrieving dropped item, Fallen in, Erratic passenger behaviour
Footage Configuration

The CCTV imagery that the driver test participants viewed was developed into four sequences, two for each train and each train length, as illustrated in Figure 8. Sequences 1A and 1B featured the paired CCTV images from the IET train, while Sequences 2A and 2B featured single images from the DCO train. Each sequence was presented to drivers on two in-cab monitors, arranged vertically as in the train cab. Two images in the bottom right corner of the lower monitor were blanked off for the x2 10 car sequences.

Figure 8 – Structure of CCTV screens for the four different representations of imagery:
• Sequence 1A – IET train system (double camera), 10 car train, 20 images displayed, 40 video clips played
• Sequence 1B – IET train system (double camera), 12 car train, 24 images displayed, 48 video clips played
• Sequence 2A – DCO train system (single camera), 10 car train, 10 images displayed, 40 video clips played
• Sequence 2B – DCO train system (single camera), 12 car train, 12 images displayed, 48 video clips played
Each of the four sequences consisted of either 40 (10 car) or 48 (12 car) sets of video clips for the simulation test, each video clip consisting of 10 or 12 live images posted to the image slots available on the two monitors. Target Scenarios were inserted into 25% of the video clips within each of the four sequences. This level of target incidence is higher than real life frequency but is entirely consistent with previous DCO studies [Ref 2] and is done to give a reasonable prospect of capturing some errors within a feasible experimental timeframe. Participants were not told how many targets they were about to see nor what sort of frequency to expect. Note that the remaining 75% of video clips in each of the four sequences showed no targets, simulating the safe situation where the driver would expect to dispatch the train.

A significant difference with this experiment is that the frequency and location of the different Static and Emerging targets were fully randomised, rather than being managed, as was the case for the previous experiment [Ref 2]. The x4 Static targets appeared randomly, since it was not necessary to make sure that all examples appeared in all image positions (as in the previous IET assessment [Ref 2]), merely that enough appeared in different screen positions and at different times in the sequence to induce drivers to perform a sequential check of the doors. It was therefore agreed to use significantly fewer of these targets (i.e. in 8-10% of the video clips). The Emerging targets, which are the principal focus of this experiment, were the more frequent type, appearing in 15-17% of video clips. To negate potential order and position effects, the Emerging targets were randomly allocated to the 10 or 12 available image positions on the monitors, but were never shown concurrently with any other target scenario and appeared only once in each available image position. Each participant was shown an example of each of the 10 target types in each of the four sequences (with some randomised duplication of extra Emerging targets for the two 12 car versions).

Each video clip lasted up to a maximum of 30 seconds³; any subject failing to give a response before the time limit was recorded as a time-out error. Emerging Target Scenarios only started appearing between 8-10 seconds after the sequence started, to give drivers time to get started with the sequential check of the doors. Static Target Scenarios, when present, were apparent from the start of the sequence. The simulation test was configured so that a participant response could only be made 12 seconds after starting each sequence, to ensure that they could not give the all clear before the Emerging Targets appeared.

CCD conducted in-house pilot testing of the system to prove that the software sequencing and control were functioning properly, to verify that the imagery was correct and of suitable quality, and to validate the test script and data output. A run-through of the full test was undertaken by Malcolm Cook of GWR under experimental conditions, on 6th March 2018, for final operational validation prior to starting the experimental runs with test subjects.

³ CCTV clip length had to be limited as cycled replay (as used in previous tests) was not possible, since Emerging Targets would be seen to disappear and then reappear on each re-run. The 30 second limit was chosen on the basis that this was the 95th percentile task time limit determined in the previous study [Ref 2]; only 5% or less of participant responses could be expected to require more than 30 seconds.
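The bespoke sequencing software itself is not documented in this report. Purely as an illustration, the Python sketch below approximates the randomisation rules described above (roughly 25% target clips split between Static and Emerging types, distinct randomised image positions for Emerging targets, and an 8-10 second onset for Emerging targets). All names and the exact proportions hard-coded here are assumptions for the example, not the actual implementation.

```python
# Illustrative sketch only: one way to generate a randomised clip schedule that
# roughly matches the target proportions described above. Hypothetical names.
import random

STATIC_TARGETS = ["Trapped Aware", "Trapped Unaware", "Unattended Pushchair", "Small Child"]
EMERGING_TARGETS = ["Attempt to Board", "Run Alongside", "Collapsed",
                    "Retrieving dropped item", "Fallen in", "Erratic passenger behaviour"]

def build_schedule(n_clips: int, n_image_positions: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    n_emerging = round(0.16 * n_clips)   # ~15-17% of clips contain an Emerging target
    n_static = round(0.09 * n_clips)     # ~8-10% of clips contain a Static target
    # Distinct image positions for the Emerging targets (approximates "only once
    # in each available image position").
    positions = rng.sample(range(n_image_positions), k=min(n_emerging, n_image_positions))
    clips = []
    for i in range(n_emerging):
        clips.append({"target": rng.choice(EMERGING_TARGETS), "type": "Emerging",
                      "image_position": positions[i % len(positions)],
                      "onset_s": rng.uniform(8, 10)})   # appears 8-10 s into the clip
    for _ in range(n_static):
        clips.append({"target": rng.choice(STATIC_TARGETS), "type": "Static",
                      "image_position": rng.randrange(n_image_positions),
                      "onset_s": 0.0})                  # visible from the start
    clips += [{"target": None, "type": "Clear", "image_position": None, "onset_s": None}
              for _ in range(n_clips - len(clips))]     # remaining ~75% clear clips
    rng.shuffle(clips)                                  # randomise presentation order
    return clips

# Example: a 10 car IET sequence of 40 clips with 10 image positions (pairs).
schedule = build_schedule(n_clips=40, n_image_positions=10)
```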
2.5 Experimental Conditions

A pragmatic desk-top exercise using portable computer-based equipment was employed for the simulation testing. The set-up consisted of two synchronised computer monitors, reproducing imagery at the same size as in the trains, and arranged and positioned to accurately simulate their physical arrangement in the cabs, i.e. at the correct visual distance, vertical / horizontal location and orientation (see Figure 9). The image presentation was controlled by a bespoke software system, which managed the balancing and randomisation of image sequences and the position and sequence of target scenarios, and recorded participant responses and task time.

The driver was seated at a desk, with two TFT monitors mounted in front. A keyboard and mouse were used by the CCD test team to set up the simulation test. The control / response device (see Figure 10) was placed on the desk in front of the participant. This was used to start and stop each video clip, and each response was recorded by the system. Two members of the CCD team were present during each test. One member was the lead and ensured each participant received the same test instructions. The second member was responsible for noting each identified target and for data collection. The lead also conducted a post-test interview to capture participant feedback.
Figure 9 – Equipment Set Up
Each participant was presented with the four CCTV image sequences in a randomised order controlled by the computer software:
• Sequence 1A – 20 CCTV images x 40 videos
• Sequence 1B – 24 CCTV images x 48 videos
• Sequence 2A – 10 CCTV images x 40 videos
• Sequence 2B – 12 CCTV images x 48 videos
Each participant had a control pad available to them with a red, green and blue button.
Figure 10 – Control Pad
The participant was told to select green if they decided it was safe to move the train out of the station, and to select red if they would not leave the station due to an identified target. The participant was told to select blue when prompted to move to the next video. If the participant selected the red button, they were asked to inform the CCD team of their identified target. If a decision had not been made after 30 seconds, the system automatically assigned a red selection and the participant had to indicate to the lead facilitator why they did not make a decision.
At the start of each test session the CCD team ran through a demonstration of 6 videos to familiarise each participant with the test. The demonstration videos consisted of two different target types and one with no target, for both train lengths. Participants were encouraged to ask questions during the demonstration; once the test commenced, the CCD team were not able to answer any questions. Once the demonstration was complete, the CCD team started the test, with the computer software running the 4 sequences in a random order.
On completion of the test, participants were asked a set of 4 questions, as follows:
1. Did you have a scanning technique?
2. Which of the 4 sequences did you prefer and why?
3. You also saw examples of footage from a 10 car and a 12 car train; how did these compare?
4. Any comments or difficulties?
Each full test lasted for a maximum of 1 hour and 30 minutes, with each of the 4 main sequences lasting approximately 20 minutes. All data collection was anonymous.
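The clip timing rules described above (no response accepted in the first 12 seconds, an automatic "red" recorded at 30 seconds) can be summarised in the following hedged Python sketch. It is illustrative only; get_button_press is an assumed helper standing in for the control pad interface and is not part of the actual CCD test software.

```python
import time

CLIP_TIMEOUT_S = 30      # automatic 'red' (not clear) recorded after 30 seconds
RESPONSE_LOCKOUT_S = 12  # no response accepted in the first 12 seconds of a clip

def run_clip(get_button_press):
    """get_button_press() is assumed to return 'green', 'red' or None.
    Returns (decision, response_time_s, timed_out)."""
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if elapsed >= CLIP_TIMEOUT_S:
            return ("red", elapsed, True)          # timeout counted as 'not clear'
        press = get_button_press()
        if press in ("green", "red") and elapsed >= RESPONSE_LOCKOUT_S:
            return (press, elapsed, False)         # valid driver decision
        time.sleep(0.05)                           # poll the control pad
```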
2.6 Participants
To provide sufficient data for a robust statistical analysis the experiment required a minimum of 30 current train drivers as participants. Participants were both male (n=36) and female (n=3) and covered a range of ages (28-64 years), with a mix of experience of DCO (0-28 years). All participants were current drivers, not driver trainers or instructors. The details of each participant involved in the experiment are captured in Table 5.
Table 5 – Participant Details
Participant Gender Age no. 1. Male 47 2.
Male
61
3.
Male
57
4.
Male
35
5.
Male
43
6.
Male
41
7.
Male
27
8.
Male
51
9.
Male
50
Job Title
TOC
DOO Experience No
Last Shift 06/03/18
Glasses
VTEC
Driver Experience 13
Driver (VTEC) Driver (VTEC) Driver Manager (VTEC) Driver (VTEC) Driver (VTEC) Driver Manager (VTEC) Driver (VTEC) Driver Manager (VTEC) Driver Manager (VTEC)
VTEC
36
No
07/03/18
No
VTEC
37
No
07/03/18
No
VTEC
5
No
Training
No
VTEC
15
No
11/03/18
No
VTEC
8
No
08/03/18
No
VTEC
1
No
11/03/18
No
VTEC
34
No
1600
No
VTEC
34
No
11/02/18
No
No
Participant no. 10.
Gender
Age
Job Title
TOC
Male
45
11.
Male
52
12.
Male
60
13.
Male
35
14.
Male
58
15.
Male
37
16.
Male
55
17.
Male
32
18.
Male
55
19.
Male
28
20.
Male
31
21.
Male
58
22.
Male
48
23.
Male
44
24.
Female
38
25.
Male
49
26.
Male
44
27.
Male
54
28.
Male
47
29.
Male
53
30.
Male
56
31.
Male
44
32.
Male
42
33.
Male
50
34.
Male
63
Driver Manager (VTEC) Driver (VTEC) Driver (VTEC) Driver (VTEC) Driver (VTEC) Driver (GWR) Driver Manager (GWR) Driver (GWR) Driver (GWR) Driver (GWR) Driver (GWR) Driver (GWR) Driver Manager (GWR) Trainee Driver (GWR) Driver (GWR) Driver (GWR) Driver (GWR) Driver (GWR) Driver Manager (GWR) Driver (GWR) Driver (GWR) Driver Manager (GWR) Driver (GWR) Driver Manager (GWR) Driver (GWR)
DOO Experience No
Last Shift Yesterday
Glasses
VTEC
Driver Experience 12
VTEC
27
No
Yesterday
No
VTEC
40
No
5 months
No
VTEC
10
No
Today
No
VTEC
37
No
22/02/18
No
VTEC
1
No
Yesterday
No
VTEC
25
No
Yesterday
Yes
GWR
3
Yes – 1 year
18/03/18
No
GWR
15
15
17/03/18
Yes
GWR
5
5
Today
No
GWR
7
7
19/03/18
No
GWR
28
No
19/03/18
Yes
GWR
18
6
20/03/18
Yes
GWR
8 months
1 month
21/03/18
No
GWR
4
4
6 days
Yes
GWR
25
25
Yesterday
No
GWR
2.5
3.5
10 days
No
GWR
16
16
Yesterday
Yes
GWR
14
4
Yesterday
Yes
GWR
10
10
20/03/18
No
GWR
17
1.5
Yesterday
Yes
GWR
16
16
GWR
1 month
1 month
23/03/18
No
GWR
29
26
26/03/18
Yes
GWR
30
28
Yesterday
Yes
No
Yes
Participant no. 35.
Gender
Age
Job Title
TOC
Male
50
36.
Female
Did not provide age
37.
Male
44
38.
Male
55
39.
Female
34
Driver Manager (GWR) Driver Manager (GWR) Driver (GWR) Driver (GWR) Driver (GWR)
DOO Experience 15
Last Shift 26/03/18
Glasses
GWR
Driver Experience 33
GWR
21
2
Office
No
GWR
3
3
Yesterday
No
GWR
2
No
Long sick
Yes
GWR
1.5
1.5
1 week
No
No
3 Results
Statistical comparisons were made between the different conditions and participants' performance (number of errors made, and time taken) using the SPSS statistical analysis package. Both parametric and non-parametric statistics have been used to analyse the results, depending on whether the assumptions of parametric tests were met (e.g. data must be normally distributed and have homogeneous variances).
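The report does not list the specific assumption checks run in SPSS; as an illustration only, the kind of decision described above could be sketched in Python with scipy as follows (the function and variable names are assumptions, not part of the study's analysis):

```python
from scipy import stats

def choose_comparison_test(*condition_scores, alpha=0.05):
    """condition_scores: one array of per-participant scores for each condition.
    Returns a label for the family of test suggested by the assumption checks."""
    # Normality of each condition's scores (Shapiro-Wilk)
    normal = all(stats.shapiro(scores).pvalue > alpha for scores in condition_scores)
    # Homogeneity of variances across conditions (Levene)
    equal_var = stats.levene(*condition_scores).pvalue > alpha
    return "parametric (ANOVA family)" if (normal and equal_var) else "non-parametric (e.g. Friedman)"
```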
3.1 Target Detection Rates
The numbers of each target type that participants failed to detect for each train type and length are shown in Table 6.

Table 6 – Overall Target Detection Failures
Target Type                    IET 10 Car   DCO 10 Car   IET 12 Car   DCO 12 Car   Totals
Child                               1            1            1            1          4
Pushchair                           2            2            1            4          9
Trapped Unaware                     2            1            2            4          9
Trapped Aware                       0            1            2            1          4
Attempt to board                    2            2            2            3          9
Run alongside                       0            3            0            7         10
Collapsed                           2            1            1            3          7
Retrieving dropped item             3            5            5            3         16
Fallen in                           0            2            1            2          5
Erratic                             0            2            3            2          7
Total                              12           20           18           30         80
Total No. of Targets Shown        390          390          468          468       1716
Overall the 39 participants failed to detect a total of 80 targets across both lengths of both trains, out of a total of 1716 targets shown in the experiment. This gives an overall rate of 95.33% for correct detection of targets. The "Retrieving dropped item" target was the most frequently missed. Table 7 shows the overall rates at which participants correctly identified targets from the total number of targets shown for each of the four conditions.

Table 7 – Overall rates of correct target detection
Train length    IET       DCO
10 Car          96.92%    94.87%
12 Car          96.15%    93.59%
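For reference, the rates in Table 7 follow directly from Table 6 as the complement of the failure proportion; for example, for the IET 10 car condition:

$$\text{correct detection rate} = 1 - \frac{\text{targets not detected}}{\text{targets shown}} = 1 - \frac{12}{390} = 0.9692 \approx 96.92\%$$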
The data did not meet the necessary criteria for an ANOVA test, and hence a non-parametric Friedman test was used to compare the four testing conditions (N = 39). The Friedman test indicated a significant difference (p < 0.05) in errors across the four conditions, χ2(3) = 9.20, p = 0.027. A Wilcoxon signed-rank post-hoc test was therefore conducted, which showed a significant difference in errors between the IET 10 car and DCO 10 car trains (p = 0.021).
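The reported statistics were produced in SPSS; the sketch below shows, for illustration only, how the same comparison could be run in Python with scipy. The array name and CSV file are assumptions, standing in for a 39 x 4 table of per-participant error counts (columns IET 10, DCO 10, IET 12, DCO 12).

```python
import numpy as np
from scipy import stats

# Hypothetical input: per-participant error counts, one column per condition.
errors = np.loadtxt("per_participant_errors.csv", delimiter=",")  # shape (39, 4), assumed

# Omnibus non-parametric comparison across the four repeated-measures conditions.
chi2, p = stats.friedmanchisquare(errors[:, 0], errors[:, 1], errors[:, 2], errors[:, 3])
print(f"Friedman: chi2(3) = {chi2:.2f}, p = {p:.3f}")

# Post-hoc pairwise comparison reported in the text: IET 10 car vs DCO 10 car.
if p < 0.05:
    stat, p_pair = stats.wilcoxon(errors[:, 0], errors[:, 1])
    print(f"Wilcoxon signed-rank (IET 10 vs DCO 10): p = {p_pair:.3f}")
```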
The results indicate that performance at target detection is slightly lower for the DCO train for both train lengths, although this is only statistically significant between the two 10 car arrangements (i.e. performance was worse with the 10 Car DCO compared to the 10 Car IET). Note that for every positive detection of a target the participants were asked to identify which target they had seen in which image. Post-hoc analysis showed that none of the participants made correct detections for the wrong reason; i.e. made a correct response for a non-existent target in one image while missing an actual target in a different image. There were a small number of incidents where participants correctly identified the right target scenario in the right image but misnamed it (i.e. reported it using the wrong target type – typically "run alongside" was mixed up with "attempt to board").
3.2 Detection Rates by Participant
The number of targets missed by the different participants is illustrated in Figure 11.
Figure 11 – Total number of targets not detected per participant (N = 39) [bar chart; participants grouped by role: Driver / Driver Manager, VTEC / GWR, Trainee Driver]

It shows that 31% of participants (n=12) missed no targets in any of the four conditions, i.e. just under a third of them had a 100% detection rate. 85% of participants (n=33) failed to detect three or fewer targets. Over half of the failures to detect (43 instances) were recorded by just 6 (15%) of the participants. Of the 12 participants who achieved 100% reliability (i.e. made no failures to detect targets in all four conditions), 4 had previous experience of on-board DCO CCTV operation. Of the other 8 participants who achieved 100% reliability, 4 had experience of platform-mounted DCO CCTV operation, and 4 had no prior experience of using DCO CCTV at all. The worst individual performance was 14 targets not detected, which was nearly twice as many as the next worst performing participant (8 targets not detected). On average participants failed to detect 2 targets.
3.3 Emerging Target Detection Rates
However, performance at detection of Static targets is not the focus of this study (see note 4 below); the Static targets were only included in this experiment to influence driver behaviour, i.e. to induce drivers to make the required systematic check of the train doors in all images (as explained in Section 2.4.2).

4. Rates of detection for Static Targets are extensively detailed in the previous study [Ref 2].
For the purposes of this study, then, it is more instructive to concentrate on the results for the Emerging targets. The results with the Static targets removed are shown in Table 8.

Table 8 – Emerging Target Detection Failures
Target Type                    IET 10 Car   DCO 10 Car   IET 12 Car   DCO 12 Car   Total
Attempt to board                    2            2            2            3          9
Run alongside                       0            3            0            7         10
Collapsed                           2            1            1            3          7
Retrieving dropped item             3            5            5            3         16
Fallen in                           0            2            1            2          5
Erratic                             0            2            3            2          7
Total                               7           15           12           20         54
Total No. of Targets Shown        234          234          282          280       1030
This shows that participants failed to detect a total of 54 Emerging targets out of a total of 1030 shown, equating to an overall rate of 94.76% for correct detection across all conditions. On average participants failed to detect 1.4 Emerging targets. Table 9 shows the rates of correct detection of Emerging targets for each of the four conditions.

Table 9 – Overall rates of correct Emerging target detection
Train length    IET       DCO
10 Car          97.01%    93.59%
12 Car          95.74%    92.86%
With the Static targets removed, the results are only marginally different; reliability at Emerging target detection shows marginal improvement for the 10 Car IET, and small decreases for the other conditions (the largest change being -1.28% for the 10 Car DCO). The number of Emerging targets not detected by the different participants is illustrated in Figure 12.
Figure 12 – Number of Emerging targets not detected per participant (N = 39) [bar chart; participants grouped by role: Driver / Driver Manager, VTEC / GWR, Trainee Driver]
It shows that 18 participants (46.15%) detected all Emerging targets, and the worst individual performance was 10 Emerging targets not detected.
3.4 Detection Failures from Timeouts
As explained in Section 2.4.2, if participants took longer than 30 seconds to complete the train safety check the video clip timed out, and the system automatically recorded a "not clear" result. The numbers of timeouts recorded for each sequence are shown in Table 10.

Table 10 – Numbers of video clip timeouts
Sequence    Total per Session
IET10       74
IET12       75
DCO10       26
DCO12       47
Total       222

A total of 222 instances of timeout were recorded (3.23% of total responses), mostly for the IET train (both train lengths). As can be seen from Table 11, a total of 9 of the 222 instances of timeout included a target, 3 Static and 6 Emerging; the remaining 213 video clips had no targets in them.
Table 11 – Targets missed through timeouts
Target type                                              Number of timeouts
Static targets (Child, Pushchair, Unaware Person,
Aware Person) combined                                    3
Attempt to Board                                          1
Run Alongside                                             2
Collapsed                                                 0
Retrieving Dropped Item                                   3
Fallen in                                                 0
Erratic                                                   0
Total                                                     9
For one instance of timeout, the participant made a definite decision to select the "green" option to move the train (i.e. recorded that they thought there were no targets), when in fact there had been an Emerging target present (in this instance a person trying to retrieve a dropped item). As this was therefore recorded as a definite failure to detect, it is already included in the results previously described. However, for the other 5 timeout instances where Emerging targets were shown, the participants all selected the "red" option (i.e. decided not to move the train). While these do not represent failures in the sense that the train would have been moved when it was not safe to do so, they nevertheless constitute situations where a target was present but not detected. As such, these can be added to the results, as illustrated in Table 12 (values that have increased, in each case by one instance of failure to detect, are marked with an asterisk).

Table 12 – Emerging Target Detection Failures including instances of timeout
Target Type                    IET 10 Car   DCO 10 Car   IET 12 Car   DCO 12 Car   Total
Attempt to board                    2            2            3*           3         10
Run alongside                       0            4*           1*           7         12
Collapsed                           2            1            1            3          7
Retrieving dropped item             3            5            6*           4*        18
Fallen in                           0            2            1            2          5
Erratic                             0            2            3            2          7
Total                               7           16           15           21         59
Total No. of Targets Shown        234          234          282          280       1030
The results now show that participants failed to detect a total of 59 Emerging targets out of a total of 1030 shown, which equates to an overall correct detection rate of 94.27% across all conditions (i.e. a 0.49% reduction as a result of including timeout failures). Table 13 shows the revised rates of correct detection of Emerging targets, including instances where timeout occurred, for each of the four conditions.

Table 13 – Overall rates of correct Emerging target detection including timeouts
Train length    IET                 DCO
10 Car          97.01%              93.16% (-0.43%)
12 Car          94.68% (-1.06%)     92.50% (-0.36%)
The results for the IET 10 Car train remain unchanged, while the other conditions are marginally lower (with the reduction indicated by the percentages in brackets). A non-parametric Friedman test was re-run comparing the four test conditions including the undetected targets from timeouts, and this showed a reduced significance value, χ2(3) = 7.918, p = 0.048. A Wilcoxon signed-rank post-hoc test indicated that, with the addition of the failures of detection from timeouts, the pairwise differences between conditions were no longer statistically significant.
With the inclusion of all failures of detection, these results should be considered the definitive set. While performance with the DCO train was found to be marginally worse, this was not found to be statistically significant. Additionally, while performance at the 12 car trains was found to be marginally worse than the respective 10 car trains, this was also not found to be statistically significant. The results show that target detection performance is similar between the IET and DCO systems. They further indicate that performance is similar for both 10 and 12 car trains.
3.5 Results by DOO Experience
Table 14 shows the percentage target detection rates based on participants' experience of DCO operation.

Table 14 – Target detection rates by DCO experience
DCO experience            Targets detected
On-board CCTV (n=3)       93.94%
On-platform CCTV (n=9)    92.93%
Both (n=11)               96.69%
None (n=17)               95.02%
The results show no clear trend; those with experience of both on-board and on-platform DCO systems performed best, but those with no experience performed better than those with only on-board or only on-platform DCO experience. The differences were not found to be statistically significant.
3.6 Results by Driving Experience
The time subjects had been driving ranged from one month to 40 years. A graph showing the number of Emerging targets not detected against years of driving experience is shown in Figure 13.
Figure 13 – Numbers of failures to detect Emerging Targets compared with driving experience [scatter plot; x-axis: years of driver experience (0-35), y-axis: number of missed Emerging targets (0-12)]
The graph illustrates an even spread with no statistically significant trend. A couple of individuals made proportionately more errors than the others, but not enough to indicate any kind of important trend.
3.7 Results by Age
The youngest participant was 28 and the oldest was 63. The summary results by age group are shown in Table 15.

Table 15 – Target detection failures by age (Emerging targets not detected)
Age group    IET10   DCO10   IET12   DCO12   Totals
20-29          0       0       0       0        0
30-39          0       1       1       2        4
40-49          2       3       6       6       17
50-59          1       6       7      10       24
60+            4       6       1       3       14
Total          7      16      15      21       59
The results show a slight overall trend for performance to be lower for older subjects, although this did not prove to be statistically significant. Older subjects tended to perform slightly worse on the DCO train for each train length group, though again, this is not statistically significant.
3.8 False Target Detection
The percentage rates and total numbers of false positive results are shown in Table 16.

Table 16 – False positives per condition
Condition   Percentage of false positives   Total number
IET 10      4.7%                            73
DCO 10      3.5%                            54
IET 12      10.2%                           191
DCO 12      3.9%                            73
The results indicate that false positives were more common with the two IET train configurations.
3.9 Response Time
The percentile task times for correct "clear" responses (i.e. clips where no target was shown and the participant correctly decided it was safe to move the train) are shown in Table 17 (see note 5 below).

Table 17 – Percentile task times in seconds for correct clear responses (N=39)
Condition   5th   10th   25th   50th   75th   90th   95th
IET 10      12    12     13     16     19     24     28
DCO 10      12    12     13     15     18     22     24
IET 12      12    12     14     16     20     25     29
DCO 12      12    12     13     15     18     22     25
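As an aside, percentile task times of the kind shown in Table 17 can be computed directly from the raw response times; the sketch below is illustrative only, and the values in times_s are placeholders rather than data from the study.

```python
import numpy as np

# Placeholder response times (seconds) for correct 'clear' responses in one condition.
times_s = np.array([12.3, 12.8, 13.4, 15.1, 16.0, 18.7, 19.4, 22.5, 24.1, 27.9])

percentiles = [5, 10, 25, 50, 75, 90, 95]
table_row = {p: round(float(np.percentile(times_s, p)), 1) for p in percentiles}
print(table_row)  # one row of a Table 17-style summary
```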
3.10 Timeouts
As noted in Section 3.4, there were 222 instances where the video clip timed out before the participant made a decision. Analysis indicated that one participant was responsible for a clear majority of the timeouts (n=109), as illustrated in Figure 14 (note that years of experience is used merely to categorise the data for presentation). This participant had recently returned from a period of several months off work and had no previous DCO experience. When asked about the reasons for the timeouts, the participant stated that they were making deliberate efforts to scan and re-scan the passengers on the platform, especially those close to the yellow line, waiting to see if they would do anything unexpected, in effect trying to second-guess when the experiment might throw in an incident. The subject's average response time to each video clip was between 25 and 37 seconds across the four conditions, compared to an overall average response time of 15 to 17 seconds for the other participants, indicating that this person was taking significantly more time to double-check the images.
5. Correct target detections or false target detections halt the timer, regardless of whether all images have been checked, and therefore give curtailed task durations, and so are excluded from the analysis. Results for missed targets are also excluded on the basis that the task time for failures is not relevant.
Note that the participant did not make any target detection failures because of timeout; they were always searching for a target that was not going to appear.
Figure 14 – Numbers of timeouts against years of driving experience [scatter plot; x-axis: years of driver experience (0-35), y-axis: number of timeouts (0-120)]
The effect of removing this participant's timeouts is shown in Table 18.

Table 18 – Numbers of video clip timeouts (revised totals exclude this participant)
Sequence    Total per Session    Revised Total
IET10       74                   44
IET12       75                   39
DCO10       26                   12
DCO12       47                   18
Total       222                  113
This reduces the number of timeouts by roughly half, though the general trend across the four conditions remains similar, with the IET train experiencing more timeouts than the DCO train. This pattern correlates with the response time analysis, which suggested that the increased number of images on the IET train requires a little more task time. However, timeouts only represent 3.23% of responses in total, or 1.65% with the worst participant excluded, both of which lie beyond the 95th percentile value for response time; the effect is therefore relatively minor.
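For reference, the percentages quoted above follow from the total number of clip responses in the experiment (39 participants, each seeing 40 + 48 + 40 + 48 clips):

$$39 \times (40 + 48 + 40 + 48) = 6864, \qquad \frac{222}{6864} \approx 3.23\%, \qquad \frac{113}{6864} \approx 1.65\%$$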
3.11 Scanning Techniques
Participants were not instructed to use a specific scanning technique. It was suggested that they could use any method they had been trained in and that they would find a systematic scan useful, but that, whichever option they chose, they should be sure to check all images. The proportions of people using the different scanning techniques are shown in Figure 15. Illustrations of the various techniques are shown in the subsequent sections.
Scanning Technique                 Frequency   Percentage
Zulu                               23          59%
Top to Bottom, Left to Right       7           18%
Random / Changed throughout        1           3%
Yellow Zones                       1           3%
C C Check                          2           5%
Other                              4           10%
Bottom to top, right to left       1           2%
Total                              39          100%
Figure 15 Percentages and Frequencies for the different scanning techniques used
Zulu The Zulu technique is what drivers with DCO training referred to as a “Z” shaped search pattern, which they had been taught to use.
Figure 16 – Zulu
Top to bottom – left to right
Figure 17 – Top to bottom – left to right
Bottom to top – right to left
Figure 18 – Bottom to top – right to left
Other / Random
Some participants focused on trying to detect actions and/or situations that might easily develop into an incident, rather than making a systematic car-by-car check (typically those with no prior DCO training or experience). Others reported swapping between several of the techniques during and between the different sequences; this was typically drivers testing different techniques to find their own preference, or changing technique between the two trains.
C C Check
Figure 19 – C C Check
Yellow Zones
Figure 20 – Yellow Zones – a vertical scan along the yellow line, with a Z shaped pattern across images
The relative proportions of failures to detect Emerging targets for each of the scanning techniques are shown in Table 19. The Zulu technique was the most popular (largely because it is taught to DCO drivers) and while the numbers show it to be marginally more reliable than the Top to Bottom, Left to Right technique, this is not statistically significant. No technique appears to be any more successful for either the IET or DCO trains. There were too few examples of the other techniques to produce any meaningful comparative results.
Table 19 – The relative proportions of Emerging Targets not detected, by scanning technique
Scanning technique               No. using technique   IET10     DCO10     IET12    DCO12
Zulu                             23                    0.87%     3.48%     2.54%    3.26%
Top to Bottom, Left to Right     7                     4.29%     7.14%     3.57%    7.14%
Random / Changed throughout      1                     0.00%     0.00%     8.33%    8.33%
Yellow Zones                     1                     10.00%    10.00%    0.00%    8.33%
C C Check                        2                     0.00%     5.00%     0.00%    8.33%
Other                            4                     2.50%     2.50%     6.25%    4.17%
Bottom to top, right to left     1                     0.00%     0.00%     8.33%    0.00%
3.12 Post-test Interviews
On completion of the test, participants were asked a set of 4 questions. Responses are summarised in the following sections.

1. Which of the 4 sequences did you prefer?
Of the 39 participants, 74% (n=29) preferred the DCO single camera view, 16% (n=6) preferred the IET train split camera view and 10% (n=4) had no preference, as illustrated in Figure 21.
Figure 21 – Post-test train type preference [pie chart: DCO 74%, IET 16%, no preference 10%]
Reasons participants liked the DCO:
• Easy to understand the orientation of the train
• Wider view of the dispatch corridor, including a clear view of the train and the platform beyond the yellow line – liked having the full image to spot emerging situations
• Felt like less information to look at
• Platforms felt less busy
• Familiarity: people are used to and trained to use this format
• Easier to work out which carriage the target would be on and which camera would show the closest image
Reasons people disliked the DCO:
• Too much unnecessary information with the wide view – easy to become concerned by things that are a safe distance from the train

Reasons people liked the IET:
• Liked seeing the carriage from both sides when trying to understand what a passenger was doing, or to see a closer shot
• The footage looked clearer and you could see further down the train
• Quality of the footage better than the DCO
• Focused your check just on the dispatch corridor
• Liked focusing just on the triangle zones as one image – anything in that area is a hazard

Reasons people disliked the IET:
• Found both confusing but the IET was slightly worse, particularly working out the orientation of the train
• Looked like an optical illusion seeing people from both sides – not like what you would see in reality
• Felt like it took longer to do the full check
• Confused about which side of the image the train was on, because the image is clipped to show only a small section of train side
• Difficult to understand the layout of the train – saw the far target and then was unsure which carriage the target was closest to
• Difficult to work out where to look – end up treating the triangle area marked by the yellow lines as where to look rather than the side of the train

Reasons people had no preference:
• Not a huge difference – the task itself is still the same
• Would rather just look outside the train
• Dislike DOO in general

General thoughts:
• Thought the IET was doable but preferred the DCO
• Reflections on the side of the DCO train were confusing and made the task more difficult; had to double check.
2. You also saw examples of footage from a 10 car and a 12 car train; how did these compare?
A summary of results is shown in Figure 22.
Figure 22 – Post-test comparison of length of train [pie chart: 10 car 49%, 12 car 41%, no preference 10%]
Reasons people said they found the 10 car easier:
• The less information to look at the better

Reasons people found the 10 car harder:
• Didn't like having missing slots; it felt like there was an error and it affected scanning rhythm

Reasons people found the 12 car easier:
• Preferred having all slots filled; better for scanning technique / rhythm

Reasons people found the 12 car harder:
• More information, especially on the IET
• Feels like a lot of time elapses between checking car 1 and car 12

Reasons people had no preference:
• They appear more or less the same for each train, just a few more images.
3. Any comments or difficulties?
• None of the participants reported any difficulties with conducting the experiment or found the task too demanding
• Some drivers commented that the testing as a whole was quite repetitive and demanding, but understood the requirement for testing
• Drivers with experience of DCO commented that the 15 to 30 second task timing seemed reasonable and quite realistic compared to their own experience
4 Discussion

4.1 Target Detection Reliability
Overall, participants demonstrated a high level of reliability at detecting targets under all conditions. Participants missed a total of only 88 targets (including timeouts) out of 1716 presented in 6,864 video clips, an overall success rate of approximately 95%. However, these results include Static targets, which were only placed in the footage to induce drivers to make the normal sequential check of the train doors and are not really of interest here. The aim of this experiment was to make a comparative assessment between the IET and an existing, compliant DCO train of whether drivers can reliably detect emerging / developing incidents during (or after) their check of the doors. It is therefore more instructive to consider the performance at detecting just the Emerging targets.

For Emerging targets, including a small number of detection failures occasioned by video clip timeouts, the results show a total of 59 failures to detect from a total of 1030 Emerging targets presented; an overall success rate of 94.3%. Interestingly, nearly half of the subjects (18 out of 39) missed no Emerging targets at all (i.e. were 100% reliable). The separate rates of detection of Emerging targets for the various conditions are summarised below (see also Section 3.4).

Train length    IET        DCO
10 Car          97.01%     93.16%
12 Car          94.68%     92.50%
The results show that driver performance at detecting Emerging targets was marginally better with the IET train than with the DCO train for the respective train lengths. Each train's 10 car result was also marginally better than its respective 12 car result. However, none of these differences were found to be statistically significant. Given that the differences in results are marginal, it is reasonable to conclude that the answer to the agreed research question (see Section 1.2) is: there is no appreciable difference in driver reliability between the IET CCTV configuration and a current operational DCO CCTV system for detection of developing hazards. Furthermore, performance was not shown to deteriorate appreciably with more images from longer trains, i.e. target detection rate is no worse for 12 car trains than for 10 car trains.

The experimental results compare favourably to those from the previous experiment [Ref 2], which showed an overall target detection success rate of 94% (for the IET train). While those results were only for Static targets, the similar level of performance suggests that driver reliability at detection of Emerging targets is not significantly different to that for detection of Static targets. In other words, drivers are as capable of detecting Emerging targets with a holistic PTI check as they are of detecting Static targets by a sequential / systematic check of the train doors.
This is an interesting finding, since it was expected that the holistic PTI check would prove to be less reliable for Emerging targets than the sequential check of the doors is for the Static target types, for either type of train CCTV system, based on the assumption that developing targets could be more easily missed if they occur in images a driver has already checked. Indeed, it was thought that this would be harder still for the IET train due to the increased number of images. The experimental results demonstrate that these expectations were not borne out: drivers have been found to be able to detect developing targets with a high degree of reliability (i.e. with reliability comparable to tests with Static targets). This is possibly because the incident scenarios involve moving targets, which tend to attract the driver's attention more easily, even during a more general holistic final check.

In general, the drivers tended to prefer the DCO train configuration, as well as preferring the shorter trains, largely due to their lack of familiarity with the paired image configuration of the IET train. While some drivers noted that the IET imagery could appear confusing and worried about understanding where the target was in relation to the train carriage, these factors did not seem to appreciably affect their performance at target detection.
4.2 Individual Differences
The number of targets that participants missed, and the time taken to correctly identify non-target scenarios, showed no systematic relationship with the subjects' age, general train driving experience, experience of DCO, or their eyesight. Some participants made proportionately more errors than others, but this was more a function of their individual capability, the care / effort they took with the test, or some other individual characteristic rather than any discernible causative factor. Older drivers tended to perform slightly worse than the younger ones, more frequently with the DCO train, though this was not statistically significant. It appears likely that the slight differences noted in performance are merely a modest example of the typical effect of age on human performance.
4.3 Target Type
No particular issues were identified with any of the target types; there was no evidence that any of the targets caused participants systematic problems, or were too difficult to detect or too ambiguous to interpret as dangerous situations. The "retrieving the dropped item" target was missed most often (18 instances), with the "run alongside" target next (12 instances), although these were not missed significantly more frequently than the other targets, nor with either train. Both target types tend to be highly visible, particularly the running alongside scenario, which always involved a lot of obvious movement. It seems likely that those few that were not detected were probably instances where the participant did not interpret the situation as dangerous or felt there was some level of ambiguity.

This is more difficult to comprehend for the dropped item, since all examples involved a person first moving toward the train and then crouching / kneeling down very close to the edge of the platform and moving / crawling about in what should appear a dangerous way. It is possible that on some occasions drivers did not see the target, or on others thought a person in such a position would most likely move back if the train were to move, rather than continuing their behaviour. It is a little easier to understand why running alongside targets might not always be considered dangerous, since the actor was not constantly in contact with nor always very close to the train, although always inside the yellow line. Drivers may on occasion have assumed that the person would give up once the train moved.
But whatever the cause of the small number of detection failures, neither target type proved significantly more difficult to reliably detect for most participants on most occasions.
4.4 Response Time
The most interesting task response times are those achieved for correctly identified non-target scenarios (i.e. clips where no target was shown), since this indicates the length of time required to make the correct decision to safely move the train. The results indicate that participants tended to take slightly longer with the IET train. However, over 95% of the time drivers correctly cleared the train to move in 29-30 seconds or less. In the worst case, on only 3.23% of occasions did participants require longer than 30 seconds. Nearly half of these occasions were recorded by a single individual; if their data is removed, the other 38 participants only took longer than 30 seconds on 1.65% of occasions. In either case, the results demonstrate that on most occasions drivers can be expected to be reliable with up to 30 seconds available to complete their train safety check.
4.5 False Target Detection
Participants made significantly more false positive responses (i.e. detecting a target where none was present) with the IET train. They also made more with the longer 12 car IET train than with the 10 car IET. While false positives are technically performance errors, they are not dangerous / unsafe (i.e. the driver decided NOT to move the train). Nevertheless, the results showed a propensity for drivers to think they saw more unsafe situations with the IET train than with the DCO train. This may have an implication for operations, in that more IET departures could be delayed while the driver satisfies themselves that what looked like a potential incident was in fact a false alarm. Note that the experiment was not designed to quantify the duration of this sort of situation; the test ceased when the participant made a response (right or wrong). Any such delay to investigate an incident (imagined or real) would be separate to, and in addition to, the task response times discussed above.

However, it is worth remembering that the IET image arrangement is unique and novel, quite unlike anything the participants have experienced before. Several reported that it looked complicated and confusing. It is thus possible that its very unfamiliarity was instrumental in drivers being particularly cautious or more zealous about deciding not to move the train in the event they saw anything that looked like a potential incident. It is probable that drivers who have been trained on and have experience with the IET system would make fewer false positive responses. Nevertheless, this issue might usefully be considered when deciding on suitable station dwell times / train timing patterns for IET operation, at least in the initial phases of DCO introduction.
4.6 Scanning Techniques
Most participants adopted the Zulu scanning technique, zig-zagging from left to right, top to bottom, in conformance with their typical DCO training (for those who had been trained). This was not found to be significantly better than the other techniques used, although many of those who were 100% reliable used it. Indeed, the numbers of people using alternative techniques were too small to make any meaningful comparisons. It appears that the Zulu technique is as effective as any other.
5 Conclusions
• The test participants demonstrated similar performance at detecting Emerging targets with both train systems; it can therefore be concluded that there is no significant difference in driver reliability at detection of developing hazards between the IET CCTV configuration and a current compliant DCO CCTV configuration.
• It is further concluded that increased numbers of images, up to 24 in 12 pairs, does not appear to affect the reliability with which drivers can detect Emerging Target incident scenarios under DCO operation.
• Drivers can be very reliable at making a holistic final check of the PTI to complete the Train Safety Check, with both existing DCO systems and with the proposed IET train system; performance rates compare favourably with previous experiments for Static targets.
• Nearly half of participants were 100% reliable at detecting Emerging targets with the IET train.
• Individual differences do not appear to have any systematic impact on performance with either the IET train or the DCO train.
• Scanning technique was not shown to have a particularly significant effect; the Zulu arrangement is at least as effective as any other.
• The IET system was shown to give rise to more false positive responses; this may have been due to drivers being more cautious with an unfamiliar system, but operators should consider and / or monitor the issue during introduction of DCO operation with the IET to determine whether it could lead to a significant number of delays.
• Drivers may require up to 30 seconds to achieve 95% reliability at correctly determining that a train is safe to move. The IET train was found to require longer than the DCO train, typically in the order of 4 to 5 seconds. Operators should take this into account when determining station dwell times for IET operation.
CCD Design & Ergonomics Second Home Spitalfields 68-80 Hanbury Street London E1 5JL United Kingdom +44 (0) 20 7593 2900
hello@designbyccd.com www.designbyccd.com
+44 (0) 7515 287128
Appendix 4
Trial specification for CCTV DOO Wide Dynamic Range Parameter Change
Engineering Trial Report
HRE-000-TEC-REP-30002 Issue 04
Author / Preparer Takuma Karasawa
15 November 2019
Stakeholder 1 Paul Scrimshaw
15 November 2019
Stakeholder 2 Paul Pryke
15 November 2019
COPYRIGHT
The information in this document is confidential to Hitachi Rail Limited ("the Company") and the copyright, design right and/or other intellectual property rights in this document belong to the Company. All rights conferred by law and by virtue of international copyright and other conventions are reserved to the Company. This document and the information contained therein, or any part thereof, must not be reproduced, disclosed or used for purposes other than those for which the prior written consent of the Company has been given.
© Hitachi Rail Limited, 2019
eDOCs:
Revision History
Issue   Date         Detail         Author
00      2/10/2019    First Issue    Takuma Karasawa
01      4/10/2019    Second Issue   Takuma Karasawa
02      31/10/2019   Third Issue    Takuma Karasawa
03      15/11/2019   Fourth Issue   Takuma Karasawa

Stakeholder Review(s)
Stakeholder Name   Organisation   Comment(s)   Date
None
Distribution
Organisation     Recipients
ERG              Tom Lendhill
GWR              Andrew Skinner
GWR              Jose Miguel Frade
GWR              Stewart Player
GWR              Jamie Baggott
Agility Trains   Jonathan Waters
Abellio          Dominic Juner
DfT              Dale Shuttleworth
Attached Document(s)
Document Reference   Issue   Document Title
IEP19-0011-E         0       DOO Visibility Issue Verification Report
CHG-000-1082         1.1     CCTV Wide Dynamic Range Parameter

Scan of Authorising Signatures
Contents
1   INTRODUCTION ................................................. 4
2   TRIAL PROPOSAL PLAN .......................................... 5
    2.1  Trial Proposal ........................................... 5
    2.2  Train Routes ............................................. 5
    2.3  Train Condition .......................................... 5
    2.4  Access and Data Request .................................. 5
    2.5  Trial Methodology ........................................ 6
3   TRIAL ANALYSIS ............................................... 7
    3.1  London Paddington to Swansea: Unit 800034 ................ 7
    3.2  London Paddington to Swansea: Unit 800034 ................ 10
    3.3  Analysis by Recorded Data ................................ 12
    3.4  London Paddington to Swansea: Unit 800034 ................ 13
    3.5  Continued Analysis by Recorded Data ...................... 15
4   CONCLUSION ................................................... 20
5   REMARKS ...................................................... 21
    5.1  DOO Camera Cleaning Area ................................. 21
    5.2  Example of lens condition ................................ 21
    5.3  Remedial Actions by HRUK ................................. 22
1 Introduction
GWR have expressed concerns regarding the safe dispatch of trains in low sunlight conditions, as the images presented to the driver exhibit the whiteout phenomenon. Whiteout occurs when the light source increases above a level with which the camera system cannot cope and the expected image is washed out. This can be evidenced in Figure 1, where the DOO image in the top right is washed out. Following an investigation conducted by the OEM, a technical solution has been proposed as per document IEP19-0011-E DOO Visibility Issue Verification Report. This trial involves adjusting the electronic shutter and gain control, enabling a wider dynamic range and reducing the effect of direct sunlight into the cameras. During the trial, it is proposed to optimise the settings and work with GWR to assess the improvements.
2 Trial Proposal Plan

2.1 Trial Proposal
1 x Class 800 5-car unit will be made available for the trial. The software for this trial is not compatible with Class 802. All cameras in all 5 cars will be loaded with software with variable parameters; the parameters in each vehicle will be set as per Section 3.1.1. Comparisons will be made during live monitoring, and fog correction may be switched for analysis purposes. The WDR parameter setting of 64 was identified as the most effective at combatting whiteout following testing on depot. The OEM have also developed a new fog correction setting of FAINT; this setting will be trialled along with the existing settings of OFF and WEAK. The WDR parameter, camera aperture setting and fog correction setting will be adjusted by Kokusai engineers on depot during the trial as necessary. HRUK and Kokusai engineers will conduct monitoring from the rear driving cab during this trial.
2.2 Train Routes
There were no specified routes required for this trial.
2.3 Train Condition
The trial unit will be required to be powered up to allow installation of the software. The trial unit can be stabled in sidings or in the shed for the software load. Software version 1.017 will be applied to ALL cameras. Please note this software is for cameras only and will not be loaded to any other component within the system. Prior to release into service, HRUK will conduct backwards compatibility testing and functionality testing. Any detriment in system functionality will be reported immediately and an assessment made as to the suitability of the trial software to remain installed.
2.4 Access and Data Request
Permission is requested for Hitachi Rail and Kokusai engineers to access the rear cab for the duration of this trial to conduct live monitoring. HRUK request permission from GWR to download any recorded footage during this trial to provide as evidence on completion.
2.5 Trial Methodology
This trial will rely on observational assessments to analyse the success of the Wide Dynamic Range function. The software will have a variable WDR parameter setting and camera aperture setting; in addition, a new fog correction setting has been developed. The Faint fog correction setting will be monitored along with 'Off' and 'Weak' to ascertain which combination of variable settings provides the best visibility without detriment to image quality or clarity. During the trial, it may be necessary to adjust these settings and parameters to find the most suitable overall setting. This will be recommended in the trial report, with supporting evidence, should the trial be deemed successful.

As identified during previous investigations, HRUK will specifically target diagrams that run in levels of low sunlight, particularly sunrise and sunset on clear days. Pre- and post-sunrise operations will also be monitored to demonstrate that image quality is not reduced in these conditions. Evidence will be collated from on-board monitoring. Where possible, drivers are encouraged to flag occurrences of the 'whiteout' phenomenon.

HRUK will attempt to provide conclusive evidence demonstrating that this function will overcome the levels of intense light that have previously produced the 'whiteout' phenomenon. The evidence gathered will be used to demonstrate that the addition of the Wide Dynamic Range function performs as expected in neutralising excessive light conditions and provides the driver with the ability to make a safe dispatch from a platform. On completion of the trial, the software will be reverted to the pre-trial version. A trial report will be compiled detailing findings, evidence and recommendations.
3 Trial Analysis

3.1 London Paddington to Swansea: Unit 800034
10/9/19, London Paddington - Swansea
Paddington 13:45 - Swansea 16:41; Swansea 17:29 - Paddington 20:32
800022 cars 1-5 & 800034 cars 6-10
800022: old software unit; 800034: new software unit for WDR
3.1.1 Software and Parameter Settings
T22 / 800022                       811022    812022    813022    814022    815022
Software (WDR)                     Old       Old       Old       Old       Old
WDR parameter (aperture value)     1/10000   1/10000   1/10000   1/10000   1/10000
Fog correction parameter           Weak      Weak      Weak      Weak      Weak

T34 / 800034                       811034    812034    813034    814034    815034
Software (WDR)                     New       New       Old       New       New
WDR parameter (aperture value)     1/10000   1/10000   1/10000   1/10000   1/10000
Fog correction parameter           Faint     Weak      OFF       Faint     Weak
3.1.2 Route Observations
Unit 800034 (WDR trial unit) ran with 800022 leading. The unit departed London Paddington at 13:45 and headed west. Observations were made for about 3 hours from Paddington to Swansea and for about 3 hours from Swansea to Paddington, a total journey time of approximately 6 hours. Sunlight was observed shining into the cameras while running between Port Talbot Parkway and Bridgend (Figure 1), and between Newport and Bristol Parkway (Figure 2). The upper images on the monitor, with the exception of 3-B, were trialling the new software function for WDR; the lower images on the monitor, and 3-B, are from old software cameras. Cameras loaded with the original software were found to be afflicted by whiteout, with no visibility of the train body side. The cameras loaded with the WDR software showed that the WDR function could successfully suppress the sunlight and provide improved visibility of the train body side and the passenger doors.
Figure 1: Between Port Talbot Parkway and Bridgend
Figure 2: Between Newport and Bristol Parkway
Figures 3 to 5 show images taken at Cardiff. Although these are new software cameras, some whiteout occurred in the upper right of the image. Passenger boarding / alighting can be confirmed, but visibility has deteriorated slightly due to sunlight. It will be verified in a future trial whether the visibility can be further improved by adjusting the parameters of WDR and fog correction.
Figure 3: Car 6 DOO camera at Cardiff (Fog Correction = Faint)
Figure 4: Car 7 DOO camera at Cardiff (Fog Correction = Weak)
Figure 5: Car 9 DOO camera at Cardiff (Fog Correction = Faint)
3.2 London Paddington to Swansea: Unit 800034
11/9/19, London Paddington - Swindon
Paddington 18:14 - Swindon 19:13
800022 cars 1-5 & 800034 cars 6-10
800022: old software unit; 800034: new software unit for WDR
3.2.1 Software and Parameter Settings
T22 / 800022                       811022    812022    813022    814022    815022
Software (WDR)                     Old       Old       Old       Old       Old
WDR parameter (aperture value)     1/10000   1/10000   1/10000   1/10000   1/10000
Fog correction parameter           Weak      Weak      Weak      Weak      Weak

T34 / 800034                       811034    812034    813034    814034    815034
Software (WDR)                     New       New       Old       New       New
WDR parameter (aperture value)     1/10000   1/10000   1/10000   1/10000   1/10000
Fog correction parameter           Faint     OFF       OFF       Faint     Weak
3.2.2 Route Observations
Unit 800034 was coupled with 800022 leading. The unit departed London Paddington at 18:14 and headed west. Observations were made for the journey between Paddington and Swindon. The images while stopped at each station showed no problems, but there was visible whiteout due to sunlight in the cameras while the unit was in transit. Figure 6 shows the image while running between Paddington and Reading, and Figure 7 shows the image while running between Didcot Parkway and Swindon. The upper images on each monitor are from the old software cameras, and the lower images are from the new WDR software cameras. In the upper images, part or most of the screen is whited out and the image cannot be confirmed. In the lower images, on the other hand, the WDR function works and whiteout is suppressed to some extent; however, in some images it is not completely suppressed, and the upper part of the camera image is partially whited out.
Figure 6: Paddington and Reading
Figure 7: Didcot Parkway and Swindon
3.3 Analysis by Recorded Data
3.3.1 Software and Parameter Settings
T34 / 800034                       811034    812034    813034    814034    815034
Software (WDR)                     New       New       Old       New       New
WDR parameter (aperture value)     1/10000   1/10000   1/10000   1/10000   1/10000
Fog correction parameter           Faint     OFF       OFF       Faint     Weak
3.3.2 Analysis by recorded data
The recorded footage between 11/9 and 26/9 was analysed. To compare the new WDR software with the old software, the car 3 cameras had the old software installed. The whiteout phenomenon was witnessed while stopped at stations, allowing a comparison. Figure 8 shows the image at Chippenham station at 16:29 on 18/9 and Figure 9 shows the image at Bristol Parkway station at 18:10 on 22/9. Figure 8 shows that the WDR function works well and whiteout is suppressed. In Figure 9, however, the whiteout is not completely suppressed, and the upper part of the camera image is partially whited out. From this result, Kokusai engineers considered that the camera aperture value was not sufficient for the level and intensity of sunlight in the UK. In the next stage of the trial, Kokusai will adjust the aperture value of the camera and continue the trial.
Figure 8: Chippenham
Figure 9: Bristol Parkway
3.4 London Paddington to Swansea: Unit 800034
27/9/19, London Paddington - Swansea
Paddington 5:02 - Swansea 8:58; Swansea 9:29 - Paddington 12:30
800006 cars 1-5 & 800034 cars 6-10
800006: old software unit; 800034: new software unit for WDR
3.4.1 Software and Parameter Settings
T6 / 800006                        811022    812022    813022    814022    815022
Software (WDR)                     Old       Old       Old       Old       Old
WDR parameter (aperture value)     1/10000   1/10000   1/10000   1/10000   1/10000
Fog correction parameter           Weak      Weak      Weak      Weak      Weak

T34 / 800034                       811034    812034    813034    814034    815034
Software (WDR)                     New       New       Old       New       New
WDR parameter (aperture value)     1/50000   1/50000   1/50000   1/99990   1/99990
Fog correction parameter           Faint     Faint     Faint     Weak      Weak
3.4.2 Route Observations
Unit 800034 (WDR trial unit) ran with 800006 leading. The unit departed London Paddington at 5:02 and headed west. Observations were made for about 4 hours from Paddington to Swansea and for about 3 hours from Swansea to Paddington, a total journey time of approximately 7 hours. Some degradation of visibility was identified during the journey. Figure 10 shows images at Bristol Parkway station. Of the cameras on the right side of each car, the images from 1-B and 10-D are good, but the other cameras produce foggy images. Regardless of the camera software and parameters, there were differences in how the images looked; this is due to dirt on the camera lenses. The 1-B and 10-D lenses were relatively clean, as shown in Figure 11, but the other camera lenses were very dirty, as shown in Figure 12. Since the lenses were already dirty before the run, this is not contamination that accumulates in a single day; the lenses appear not to have been cleaned for several days and dirt has built up over many days. Regular cleaning is required.
Figure 10: Bristol Parkway
Figure 11: Lens with partial dirt accumulation
Figure 12: Lens with excessive dirt accumulation

3.5 Continued Analysis by Recorded Data
3.5.1 Software and Parameter Settings
T34 / 800034                       811034     812034     813034     814034     815034
Software (WDR)                     New        New        Old        New        New
WDR parameter (aperture value)     1/50000    1/50000    1/50000    1/99990    1/99990
Fog correction parameter           Faint_R1   Weak       Faint_R1   Faint_R1   Weak
3.5.2 Analysis by recorded data
The recorded footage between 27/9 and 2/10 was analysed. To compare the new WDR software with the old software, the car 3 cameras had the old software installed. Figure 13 shows footage at Didcot Parkway station at 18:11 on 29/9. Although sunlight can be confirmed by the whiteout on car 3, whiteout has not occurred on the other cameras; however, the car 4 and car 5 images are foggy, which is due to dirt on the lens. Figure 14 shows footage at Didcot Parkway station at 08:30 on 2/10. Although quite strong sunlight can be confirmed by the whiteout on car 3, whiteout has not occurred on the other cameras; however, the car 2 and car 4 images are foggy, which is again due to dirt on the lens. From these results, it was confirmed that a WDR aperture value of 1/50000 can sufficiently suppress whiteout; however, the lenses need to be kept clear by cleaning.
Figure 13: Didcot Parkway
Figure 14: Didcot Parkway
Figure 15 shows the comparison between the 'Faint' and 'Weak' fog correction settings. Cars 1 and 4 are set to 'Faint'; car 2 (new software) and car 3 (old software) are set to 'Weak'. Car 5 is excluded from the comparison because its lens condition was better than that of the other cameras. It can be confirmed that 'Faint' improves visibility compared to 'Weak'.
Figure 15: Cardiff

3.6
Figure 16 shows the image from car 1 at Reading station at 05:41 on 27/9 and Figure 17 shows the image from car 1 at Swansea station at 20:45 on 27/9; both are set to a WDR aperture of 1/50000 and fog correction 'Faint'. The images show no degradation or reduced performance when operating in conditions of reduced light / night time.
Figure 16: Reading
Figure 17: Swansea
4 Conclusion

This trial confirmed that whiteout can be sufficiently suppressed using the WDR function coupled with adjustments to the camera aperture and fog correction settings. During the trial it was necessary to adjust the WDR aperture value in order to establish the ideal parameter to suppress whiteout. Following in-depth monitoring, it has been established that the best setting is 1/50000. This evidence is shown in Figures 13 and 14 (car 3 has no new WDR software function installed). Refer to the table in 3.5.1, which details each car configuration; car 1 shows the optimum setting configuration that HRUK are proposing.

Built in to the new software is an additional fog correction setting, developed by the supplier to provide better picture resolution and clarity of image. This new setting has been defined as 'Faint' by the supplier. Adjusting the fog correction to 'Faint' improves misty/cloudy images slightly more than the conventional 'Weak' setting. In the next software update, the new WDR software (aperture value 1/50000) and the fog correction 'Faint' setting, along with the WDR parameter set at 64, will be installed on all DOO cameras. With fog correction set to ON there are no adverse effects on images from cleaned lenses or on night-time images, so the always-ON setting is recommended. HRUK have deemed it unnecessary to add a fog correction ON/OFF switching button to the Monitor PC as a new function, due to reservations regarding the addition of a new driver interface function.

To summarise, HRUK are satisfied that the new wide dynamic range function and parameter, the aperture setting and the fog correction function will provide the most effective technical solution for overcoming whiteout in conditions where the cameras are exposed to excessive light sources. However, one negative aspect that could not be resolved technically within this trial scope is the level of cleanliness of the lenses. It has been identified that the current cleaning regime is not robust enough to ensure that DOO lenses are cleaned to the standard required to aid visibility. Even with the WDR and fog correction functions in use, it is impossible to completely prevent the deterioration of visibility due to dirt. It is therefore necessary to review the cleaning method and system.
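For reference, the proposed fleet-wide parameter set described above can be summarised as a simple key/value structure. This is an illustrative sketch only: the field names are assumptions for readability and are not Kokusai or Hitachi configuration keys; the values are those stated in this conclusion.

```python
# Illustrative summary of the proposed fleet-wide DOO camera settings described
# in this conclusion. Field names are hypothetical; values are from the report.
PROPOSED_DOO_CAMERA_SETTINGS = {
    "wdr_software": "new",            # new WDR software to be rolled out to all DOO cameras
    "wdr_aperture_value": "1/50000",  # aperture value found to suppress whiteout
    "wdr_parameter": 64,              # WDR parameter stated for the next software update
    "fog_correction": "Faint",        # supplier-defined setting, preferred over 'Weak'
    "fog_correction_always_on": True, # always-ON recommended; no driver ON/OFF button added
}
```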
5 Remarks

5.1 DOO Camera Cleaning Area
The lens window must be cleaned over the area shown in Figure 18, as dirt within this area affects the image.
Figure 18: DOO Camera Lens Cleaning Area
5.2 Example of lens condition

Figure 19 - Although the lens is relatively clean, the cleaned area is a little narrow, which may affect the outer parts of the image.
Figure 20 - There is a little dirt. In normal conditions the image can be corrected with fog correction, but when exposed to lighting or sunlight it may become foggy.
Figures 21 and 22 - The condition of the lens is quite bad. Fog correction and WDR will not work well and visibility cannot be secured.
Figure 19: Lens 1
Figure 20: Lens 2
Figure 21: Lens 3
Figure 22: Lens 4
5.3 Remedial Actions by HRUK

Following on from the trial report, HRUK has identified that the current cleaning frequency and standard are not suitable for supporting effective and safe DOO operations. The following has therefore been actioned and is recommended for full implementation:
• Camera cleaning is now treated as a maintenance task to be carried out every 10 days. This maintenance regime has been published as VMP task XCH023.
• If units being prepared for service are found to have degraded images, this is to be reported to the Production team, actioned and recorded accordingly.
• Additional platform-side cleaning is to resume, with a more detailed work instruction for platform cleaning staff to ensure that the standard of cleaning is improved and consistently upheld.
Appendix 5/6
Appendix 7
DOO Combined Solution Trial Report
Engineering Trial Report
HRE-AT3-TST-REP-30001 Issue 00
Author / Preparer Paul Scrimshaw System Integration Engineer
13th March 2019
Checked Paul Pryke Head of Systems Engineering
13th March 2019
Approved Koji Agatsuma Head of Engineering
14th March 2019
COPYRIGHT
The information in this document is confidential to Hitachi Rail Europe ("the Company") and the copyright, design right and/or other intellectual property rights in this document belongs to the Company. All rights conferred by law and by virtue of international copyright and other conventions are reserved to the company. This document and the information contained therein or any part thereof must not be reproduced, disclosed or used for purposes other than those for which the prior written consent of the Company has been given.
© Hitachi Rail Europe Limited, 2019
eDOCs:
Revision History
Issue   Date         Detail        Author
00      12/03/2019   First Issue   Paul Scrimshaw

Stakeholder Review(s)
Stakeholder Name   Organisation   Comment(s)   Date
None
Distribution
Organisation   Recipients
GWR            J. Baggott
ERG            T. Lendhill
ATW            D. Juner
Attached Document(s)
Document Reference      Issue   Document Title
HRE-AT3-TST-SPE-30001   01      DOO Camera Deflector Trial
OPE-ETR-2019-0001       00      DOO Camera Coating Trial
HRE-IEP-PRJ-CHR-00348   00      DOO Camera Deflector Change
HRE-WOE-PRJ-CHR-00348   00      DOO Camera Deflector Change
RIS-2703-RST            02      Railway Industry Standard
Contents
1 INTRODUCTION 4
  1.1 Scope 4
  1.2 Purpose 4
  1.3 Background 4
2 DEFLECTOR DESIGN 5
  2.1 Design Concept 5
3 TRIAL PROPOSAL PLAN 7
  3.1 Trial Proposal 7
  3.2 Deflector Installation 7
  3.3 Trial Methodology 8
  3.4 Post-Trial Analysis 8
4 TRIAL ANALYSIS 9
  4.1 London Paddington to Plymouth: Unit 802010 9
  4.2 London Paddington - Bristol Temple Meads - London Paddington Unit 800011 12
  4.3 London Paddington - Cardiff Central - London Paddington: Unit 800011 14
5 COMBINED SOLUTION TRIAL 17
  5.1 Background 17
  5.2 Trial Run 1 - London Paddington - Bedwyn - London Paddington 17
  5.3 Pre-sunrise Performance - Oxford to London Paddington 21
  5.4 Fog Correction and Enhanced Cleaning Standalone Trial - London Paddington to Cardiff on 800026 22
6 SUMMARY 24
  6.1 Deflector Type D Trial Overview 24
  6.2 Deflector Type F 24
  6.3 Combined Solution Conclusion 25
  6.4 Short term containment recommendations 25
  6.5 Long term design and operational improvements 26
1 Introduction

1.1 Scope

This document applies to the AT300 platform with respect to the DOO camera system.
1.2 Purpose

This document will detail the technical analysis from data gathered following the deflector trial and the combined solution trial.
1.3 Background

Due to concerns raised by ASLEF and GWR regarding visibility, particularly in low sun conditions, HRE have produced a variety of work packages to alleviate concerns and restore drivers' confidence whilst running on DOO specified routes. HRE has undertaken a detailed investigation and design review alongside Kasado and Hitachi Kokusai Electric (HiKE) to develop a deflector with the dual-purpose aim of providing protection for the cameras in low sun conditions as well as reducing the accumulation of dirt on the surface of the DOO housing lens. This report will provide analysis and evidence gathered from the DOO Camera Deflector trial undertaken as detailed in HRE-AT3-TST-SPE-30001 Issue 01.
2 Deflector design

2.1 Design Concept
In Driver Controlled Operation (DCO) of passenger trains drivers use in-cab CCTV monitors to conduct the train safety check before leaving the station. The current Rail Industry Standard RIS-2703-RST is an industry document that highlights the expectations of a DCO system. The IEP unit uses two cameras to enable DCO due to the length of each carriage.
Figure 1: Camera Layout
Figure 2: Type D Deflector
Figure 3: Type F Deflector
Figure 4: Deflector Type D
Figure 5: Deflector Type F
3 Trial Proposal Plan

3.1 Trial Proposal

The aim of this trial is to install 2 variant types of manufactured deflectors on 2 IEP or WoE trains. The primary objective of this trial is to demonstrate that the design of the deflector provides adequate protection and minimises DOO camera lens exposure to low sunlight. Results will consist of driver feedback following a period of usage, compared against units without deflectors installed. The secondary objective of this trial is to show that the deflectors reduce dirt accumulation when in service; this issue becomes more prevalent when the train is transiting through the Severn Tunnel.

It is proposed that 2 Class 800 service units are made available for trial:
• Unit 1 to have the Type D deflector installed, with both left and right variants.
• Unit 2 to have the Type F deflector installed, with both left and right variants.
The deflectors are to be installed on Cars 1, 2, 4 & 5. This will allow HRE to monitor real-time comparison, data footage review and analysis of data. HRE staff request permission to access modified units in traffic to conduct an initial external noise assessment of the deflectors. HRE also request access to data footage acquired following any reports of DOO visibility issues on modified units. This data will be viewed to ascertain whether cars with deflectors provided better clarity on the CCTV display.
3.2 Deflector Installation

The deflectors for this trial were installed on units 800011 & 802010 to provide a deeper range of comparative analysis for the suitability of the deflectors designed. Deflectors have been located to allow comparison between the following:
• Cars 1 & 2, Side A & B (Cars 1 & 2 leading direction of travel)
• Cars 1 & 2 Side A against Cars 3, 4 & 5 Side A
• Car 2 Side A against Car 3 Side A
• Car 4 Side B against Car 3 Side B
Figure 6: Fitment Layout (deflector types fitted: Type D, Type D, Type F, Type F)
3.3 Trial Methodology

Deflectors are fitted to cars 1, 2, 4 & 5 of units 800011 & 802010. The trial was conducted primarily on the following routes:
• London Paddington to Plymouth (WoE)
• London Paddington to Cardiff Central
• London Paddington to Bristol Temple Meads
• London Paddington to Bedwyn
HRE will conduct live monitoring on board trial AT300s for the duration of the trial. GWR have given HRE express permission to ride these units to gather evidence and, where necessary, download footage to provide as evidence purely for the interest of this trial.
3.4 Post-Trial Analysis

The success of the trial will rely on subjective analysis; however, where possible, evidence will be provided to demonstrate the effectiveness of design solutions. Live monitoring was conducted across multiple diagrams on trial units in order to gather suitable comparative data until conclusive recommendations can be provided by HRE. Recommendations are provided in section 6 of this document and, if agreed by all stakeholders, will be the basis for the strategy to manage DOO visibility on AT300s going forward.
4 Trial Analysis

4.1 London Paddington to Plymouth: Unit 802010

11/2/19 London Paddington – Penzance. Headcode: 1C04
802010 cars 1-5 & 802005 cars 6-10; 835010 leading with Type F deflector
Current cleaning regime – daily and at terminal stations
Weather conditions at start of journey - slight coverage with outbreaks of sunshine
Cameras cleaned at London North Pole (both sides)
Sunrise - 06:59. Unit departed London Paddington at 07:30
4.1.1 Route Observations
Unit 802010 was the first unit to be observed with deflectors fitted. The unit was coupled to 802005 with 802010 leading. Prior to boarding the unit at London Paddington, the cleanliness of the cameras was observed platform side. All cameras were clear of any dirt or obstructive debris. Non-platform-side cameras were observed from the Monitor display PC with clear visibility throughout the 10 car consist. The unit departed London Paddington at 07:30 and headed west. During this portion of the journey, weather conditions remained overcast with very limited outbreaks of sunshine. The unit arrived at Reading Station at 07:57 with excellent visibility platform side, then departed Reading Station and continued west towards Swindon. On approach to Didcot Parkway ATP initiated and the unit switched from OHLE supply to running on diesel. The unit approached Swindon at 08:31 under cloud coverage; the station infrastructure also offers good protection from sunlight. During transit between Swindon and Chippenham the cloud coverage cleared to bright sunshine. The unit arrived at Chippenham with clear visibility along 802010 and continued to Bath Spa, during which transit it was observed that visibility was starting to degrade. A snapshot of the reducing visibility is captured in Figure 7.
Figure 7: Visibility 835010 Type F Deflector between Chippenham & Bath Spa
As the route continued from Bath Spa to Bristol Temple Meads, the images started to show signs of slight degradation due to sunlight hitting the cameras. This was witnessed at Bristol Temple Meads, particularly on 835010 & 834010 (shown as 1-D and 2-D in Figure 8). During the platform dwell at Bristol Temple Meads it was observed that all cameras facing the direction of travel were showing signs of visibility reduction whilst all rear-facing cameras were clear.
Figure 8: Monitor images at Bristol Temple Meads
Unit departed Bristol Temple Meads and arrived at Taunton at 09:49. Once at platform, car 835010 was stationary past the point of station infrastructure and exposed to sunlight. Images from camera 3 (Type F Deflector) were found to have degraded visibility at this station as shown in Figures 9 & 10.
Figure 9: 835010 Type F Deflector at Taunton
Figure 10: 833010 no deflector fitted
Unit arrived at Exeter St David's at 10:12 in bright, clear sunshine. Images at this station were similar to those at Taunton, with no obvious signs of further degradation discernible on the Monitor PC.
Figure 11: 835010 Type F Deflector at Exeter St David's
Figure 12: 833010 no deflector fitted
Exeter St David's to Newton Abbott via Dawlish Line (potential for sea spray) with clear low sunlight. Orientation of sunlight transposed from left hand side facing to right hand side facing. High Tide with sea conditions calm. Unit transits through tunnels at 10:27. Arrive at Newton Abbott at 10:34 with leading car exposed to low sunlight with no cloud coverage. Images starting to degrade even with platform infrastructure on vehicles without deflectors.
Figure 13: 835010 Type F Deflector at Newton Abbott
Figure 14: 831010 no deflector
Unit arrived at Totnes where visibility was seen to neither improve nor degrade since departing Newton Abbott. All images appear consistently misty with platform infrastructure at Totnes and Plymouth providing adequate protection from sunlight.
4.1.2 Deflector performance summary
Neither the Type D nor the Type F deflector has prevented excessive sunlight from affecting visibility. This was particularly prevalent at stations with limited infrastructure (e.g. Newton Abbott) or natural environmental protection against sunlight. Misting of lenses was also still a factor on this route. Due to this observation it is difficult to provide a definitive deflector type that is best suited to dealing with low sun conditions at this point of the trial.
4.2 London Paddington - Bristol Temple Meads - London Paddington Unit 800011

14/2/19 London Paddington – Cardiff Central. Headcode: 1C02
Cardiff Central - London Paddington. Headcode: 1A10
835011 = Car 6, 831011 = Car 10 for London Paddington - Bristol Temple Meads
831011 = Car 1, 835011 = Car 5 for Bristol Temple Meads - London Paddington
Current cleaning regime – daily and at terminal stations
Weather conditions at start of journey - heavy fog
Cameras cleaned at London North Pole (both sides)
Sunrise - 06:57. Unit departed London Paddington at 06:30
4.2.1 Route Observations
Unit departed London Paddington at 06:30 with 800035 leading and unit 800011 trailing, with 835011 coupled. All camera lenses platform side appeared clear of dirt and debris, and images from the Monitor PC display showed excellent visibility non-platform side. Unit arrived at Reading Station at Platform 9 with all images along the 10 car consist providing good visibility for the driver, despite heavy fog potentially affecting the first 2 carriages of 800035.
Figure 15: Visibility for 800035 in transit to Didcot Parkway
Figure 16: Visibility for 800011 in transit to Didcot Parkway
Unit arrives at Didcot Parkway at 07:08; however, TMS time is displaying 07:10:34. The Monitor PC display is slow to respond to requests to display the variant sides to check visibility during transit as conditions worsen. Images taken during transit between Reading and Didcot Parkway show the front-facing cameras, with the exception of the leading car, appearing heavily misted and reduced in visibility. Despite this, the visibility when the unit comes to a halt at Didcot Parkway is clear along the entirety of the train.
Unit approaches Swindon at 07:27 (07:29 TMS time) with conditions still very foggy and a light drizzle. The leading camera from unit 800035 is showing signs of degrading visibility. Cameras from cars 6 & 7 are seen to be displaying the best visibility; car 6 has the Type F deflector fitted and car 7 the Type D deflector. During the holdover at Swindon Station the unit switches from OHLE to diesel running.
Figure 17: Visibility at Swindon Station - 800035
Figure 18: Visibility at Swindon Station - 800011
Route continued to Chippenham where the platform release was on the right hand side, with all images along the train starting to show degradation in the quality of the image displayed on the Monitor PC. Weather conditions were still heavy fog. Images from cars with deflectors fitted are no better or worse than cars without deflectors. During transit between Chippenham and Bath Spa conditions started to improve, with fog lifting to overcast conditions. All images at Bath Spa appear clear with no obvious signs of restriction of visibility of the passenger doors. The journey continued to Bristol Temple Meads, whereby weather conditions remained overcast. Images along the 10 car at Bristol Temple Meads remain clear with good visibility. During the holdover at Bristol Temple Meads no camera lenses were cleaned, to allow monitoring of the cameras and the effects of dirt accumulation for the entirety of the journey.

On the return journey from Bristol Temple Meads the train configuration now has 831011 leading, with the Type D deflector fitted to this vehicle and Type F fitted to 832011. Conditions remained overcast when arriving at Chippenham, where it was observed that all rear facing cameras on this unit had reduced visibility as opposed to the images that were now forward facing. The journey continued to London Paddington with visibility slowly degrading despite conditions improving to light cloud coverage. All images on both sides were reviewed at London Paddington, where no clear improvement in visibility was observed for cars with deflectors compared with cars without.
4.2.2 Deflector performance summary
• A slight improvement in visibility on cars with deflectors fitted was initially observed at Swindon; this improvement was negligible following the return journey to London Paddington.
• The cleaning regime has been amended to see if dirt accumulation is reduced on cameras with deflectors fitted.
• Visibility in fog conditions does not deteriorate.
4.3 London Paddington - Cardiff Central - London Paddington: Unit 800011

London Paddington – Cardiff Central. Headcode: 1B27
Cardiff Central - London Paddington. Headcode:
835011 = Car 6, 831011 = Car 10 for London Paddington - Cardiff Central
831011 = Car 1, 835011 = Car 5 for Cardiff Central - London Paddington
Current cleaning regime – daily and at terminal stations
Weather conditions at start of journey - bright sunshine
Cameras cleaned at London North Pole (both sides)
4.3.1 Route Observations: London Paddington to Cardiff Central
Unit departed London Paddington with clarity of images on both sides consistent along the 10 car consist. Weather conditions have improved since the morning route to Bristol Temple Meads (as described in 4.2.1) in heavy fog. The unit is operating on 25kV with 2 pantographs raised. Unit arrives at Reading Station at 11:42 with a left hand door release. All images are clear, although there was a noticeable amount of misting starting to appear across all images on the left hand (platform) side of the unit. Unit switches to diesel traction prior to approaching Didcot Parkway with weather conditions becoming very bright sunshine. Images remain consistently clear along the train at Bristol Parkway prior to transiting the Severn Tunnel.

Following transit through the tunnel the image quality began to deteriorate significantly. The first station stop after the Severn Tunnel is Newport, where images along the non-platform side (left hand side) were found to be heavily misted, whereas on the platform side there was reduced visibility but not to the extent of the opposite side. One observation made is that the front carriage camera images were exceptionally clear on both sides, with the forward facing cameras on the trailing 9 carriages all displaying restricted visibility. Unit continued to Cardiff Central and on departure from Newport transited Newport Tunnel, where images began to degrade even further, including on all cars with both variant types of deflectors fitted. As conditions brightened, this appeared to have a detrimental effect on platform visibility, particularly for all forward facing cameras. Cardiff Central is the final calling point for this route, with the unit stabled at sidings prior to return to London Paddington.
4.3.2 Route Observations: Cardiff Central to London Paddington
800011 coupled with 800035 departs Cardiff from Platform 1 with 815011 (Type F deflector fitted) leading. Conditions on departure from Cardiff Central are clear with no cloud cover. Rear facing camera images are showing signs of heavy misting with visibility obscured due to sunlight conditions. 814011 (Type D) & 815011 (Type F) are showing reduced visibility in line with all other vehicles with exception of 815035 (car 10) which has very clear images in both front facing and rear facing cameras. During run to Newport, the images during transit displayed signs of heavy misting and severely reduced visibility due to position of sun at time of observation. On arrival at Newport Station at 14:12, however, this eased to a slight obscuration of image. The infrastructure at the platform of Newport provided significant protection for the first 6 cars of the 10 car consist with cars 7-9 (no deflectors fitted) being completely obscured by sunlight.
Figure 19: Rear facing camera poor visibility - 800011
Figure 20: 800011 camera visibility at Newport Station
Figure 21: Symptoms of whiteout on 800035 at Newport
Following transit of the Severn Tunnel, the front facing cameras on both sides were seen to be misting up heavily, meaning visibility during transit was severely reduced across all cameras. The first station stop post Severn Tunnel was Bristol Parkway, although the visibility had improved by the time doors were set to release, with the bodyside of the train visible, albeit slightly impaired due to the misting effect.
As the journey progressed to Swindon Station there was a consistency in the misting across all cameras but with the bodyside of the train still clearly identifiable. Unit switched to OHLE post Didcot Parkway with no further degradation of cameras observed as unit came to a stop at London Paddington.
• This was the second scheduled diagram of the day (see 4.2 for the first diagram report).
• Cameras with deflectors fitted did not significantly abate build-up of dirt on lenses.
• As a result, images started to degrade rapidly, with excessive reduction of visibility noted when transiting the Severn Tunnel and Newport Tunnel.
• As weather conditions improved and dirt build-up on lenses increased, visibility quickly deteriorated across all cameras on both sides.
5 Combined Solution Trial

5.1 Background

In order to progress the trial in line with GWR's proposed timeline, a decision was made to combine a multitude of design and operational improvements to see if this had a positive impact as a whole, as opposed to individual changes. Kokusai had also provided a report detailing that an adjustment of a fog correction setting could assist in overcoming visibility issues, particularly at stations where there is no natural or platform infrastructure protection against low sun conditions. This information was detailed within document HRE-AT3-TEC-PLN-00001 issue 00.

The combined solution trial incorporated the following elements:
• Enhanced cleaning regime - to be conducted at every terminal station where possible and at depot daily
• Aqueous Guard coating applied to all lenses
• 2 variant types of deflectors fitted
• Adjustment of fog correction setting with all variables, i.e. Off, Weak, Strong
5.2 Trial Run 1 - London Paddington - Bedwyn - London Paddington

Trial unit 802010
London Paddington – Bedwyn. Headcode: 3Z33 (5 Car only)
Aqueous Guard fitted to all cameras
Deflectors fitted to 831010 (Type D), 832010 (Type F), 834010 (Type D), 835010 (Type F)
Fog correction settings: 831010 (Weak), 832010 (Strong), 833010 (Off), 834010 (Weak), 835010 (Strong)
Cleaning conducted at LNP prior to entering service
Sunrise - 06:33. Unit departed London Paddington at 07:33
Fog correction settings on this 5 car were configured as described above, with clear visibility between the platform and passenger doors observed for all cars. Unit departed London Paddington with conditions on departure clear sunshine and operating on OHLE. 800011 approached Reading Station at 07:55 with a left hand release. 831010 (Weak) was exposed, however it was not subjected to an intense light source due to the current position of the sun. Unit arrived at Theale, left hand side, at 08:04 with station infrastructure shielding against excessive sunlight. Approach Aldermaston at 08:11; the tree-lined platform helps to provide natural protection to reduce sunlight glare, and the good natural protection at this station helps to protect against sun exposure.
Approach Midgham at 08:15 with no protection from sunlight. The Monitor display PC shows that visibility on car 833010 is severely reduced by sunlight, whereas 811010, with the fog correction setting set to weak, has overcome the conditions and provided excellent visibility. 815010, which has the fog correction setting set to strong, also provides good visibility despite excessive sunlight exposure, although the quality of the image is affected.
Figure 22: 813010 Fog Correction = OFF at Midgham
Figure 23: 811010 Fog Correction = WEAK
Figure 24: 815010 Fog Correction = STRONG
Unit continued to Thatcham at 08:20 where conditions were similar to Midgham with platform infrastructure unable to provide adequate protection against sunlight. This was again evident on 833010 with no adjustment of fog correction. As before, the 2 cars with fog correction settings set to weak and strong provided clear visibility of the train bodyside.
Figure 25: 813010 at Thatcham = OFF
Figure 26: 811010 at Thatcham = WEAK
Figure 27: 815010 at Thatcham = Strong
Route continued to Newbury Racecourse Station (arrived at 08:34) with a continuation of sunlight conditions. In contrast, the cameras with fog correction settings set to weak and strong provided better visibility than 833010, where the setting is off. This theme continued through to Bedwyn, where the adjustment of the fog correction setting to weak provided the best clarity when sunlight was shining directly at the DOO cameras, while the misting effect on 833010 continued to degrade visibility further until the sunlight abated as the angle of the sun increased.
Figure 28: 813010 at Newbury Racecourse
Figure 29: 811010 at Newbury Racecourse
Figure 30: 815010 at Newbury Racecourse
The unit continued in service to Newbury where the combination of station infrastructure and natural surroundings provided good conditions to aid visibility, although there was evidence of slight misting on car 3, with all other cameras providing excellent visibility. This theme continued until the unit reached Bedwyn Station prior to entering Bedwyn sidings, with car 3 showing slight misting (as shown in fig. 31).
Figure 31: 813010 at Bedwyn Station
Figure 32: 811010 at Bedwyn Station
Figure 33: 815010 at Bedwyn Station
5.2.1 End of route summary
• Adjustment of the fog correction setting to WEAK has proven to provide an increase in visibility in adverse conditions whilst retaining good quality of image.
• Fog correction set to STRONG also aids in increasing visibility; however, the intensity of colour contrast was determined to be too intense after consulting drivers.
• Neither deflector type can be assessed to provide an obvious positive advantage in deflecting sunlight.
• Stations with minimal station infrastructure are prone to exposure to sunlight, especially at times of sunrise and sunset. This has an adverse effect on units operating scheduled Driver Only Operation (DOO) diagrams.
• There was no obvious advantage to the application of Aqueous Guard during this trial.
5.3 Pre-sunrise Performance - Oxford to London Paddington

A further concern communicated by driver representatives was the visibility of the cameras in pre-sunrise/post-sunset conditions. With the camera fog correction configured as per the Bedwyn trial run and the enhanced cleaning routine in effect, monitoring of the cameras took place at Oxford Station at 05:55, prior to sunrise. The following was observed on review of the pre-sunrise conditions and performance of the cameras:
• 813010 - With the fog correction setting set to OFF and the cameras platform side cleaned, the visibility was very clear with no inhibiting factors.
• 811010 - This produced an almost identical result to the fog correction set to off with cameras cleaned.
• 815010 - The intensity of the adjustment was far more evident when reviewing the footage. The contrast of the signal aspect in fig. 36 shows the intensity of the setting when it is adjusted to strong.
Figure 34: 813010 Fog correction = OFF
Figure 35: 811010 Fog correction = WEAK
Figure 36: 815010 Fog correction = STRONG
5.4 Fog Correction and Enhanced Cleaning Standalone Trial - London Paddington to Cardiff on 800026

5.4.1 Pre-conditions for trial
• No deflectors fitted
• No Aqueous Guard or Rain-X applied to DOO housing lens
• Fog correction settings configured as per Bedwyn trial
• Enhanced cleaning regime to be in effect
5.4.2 Route Observations
Unit departed London Paddington with conditions initially sunny, but these soon deteriorated to overcast with outbreaks of rain. All cameras were cleaned prior to entering service, with excellent visibility observed at all doors.
Unit arrived at Reading at 10:49 with platform infrastructure providing excellent cover against adverse weather conditions. Visibility was good at all camera positions with no inhibiting factors affecting the field of view. Unit continued to Didcot Parkway, where conditions improved to overcast. The camera image from 813028 shows signs of misting occurring at this station, as seen below in fig. 38. The fog correction settings applied to the other cars are seen to provide clarity along the bodyside with no inhibiting factors. Conditions continued to vary between rain and outbreaks of sunshine during transit to Bristol Parkway. No further degradation was observed on cameras from 813010 during this run, but better visibility is clearly observed with cameras with fog correction set to weak. Unit transited the Severn Tunnel after departing Bristol Parkway; previous runs have shown evidence that the concentration of dirt has a significant detrimental impact on visibility when arriving at stations post tunnel transit. Conditions of rain are not seen, in this instance, to have a negative impact on visibility on this diagram. The images on arrival at Cardiff Central were generally of a high quality with clear visibility of the guiding yellow line platform side.
Diagram: London Paddington => Reading => Didcot Parkway => Swindon => Bristol Parkway => Cardiff C
Figure 37: Top row - Fog Correction = Weak; Middle row = Off; Bottom row = Strong
5.4.3 End of route summary
• Adjustment of the fog correction setting does not have a negative impact when conditions deteriorate from sunlight to overcast or rain.
• There is no evidence to suggest that Aqueous Guard or Rain-X provide a positive improvement against the accumulation of dirt, especially when transiting the Severn Tunnel, as opposed to enhancing the cleaning regime.
• Visibility was assessed to be good across the entirety of this diagram.
6 Summary

6.1 Deflector Type D Trial Overview
6.1.1 Sunlight Protection No substantive evidence was observed or acquired which proves that this design provides significant protection against low sun conditions. Observations were made during live monitoring that cameras with this design of deflector fitted still suffered from effects of low sun and dirt resulting in conditions of 'whiteout'.
6.1.2 Dirt Accumulation Similar to above, there was no discernible evidence that this type of deflector helps to reduce the build-up of dirt. Multiple runs through Severn Tunnel showed similar effects of misting to vehicles without deflectors fitted.
6.1.3 Design Summary HRE does not recommend a fleet wide implementation of this design solution.
6.2 Deflector Type F
6.2.1 Sunlight Protection There was no substantive evidence observed or acquired which proves that this design provides significant protection against low sun conditions. There were observations made during this trial that show evidence of whiteout even with this type of deflector fitted.
6.2.2 Dirt Accumulation There was no conclusive evidence obtained in the duration of this trial that this design of deflector provides a significant improvement against the inhibiting factors currently afflicting the DOO cameras. Multiple runs through Severn Tunnel showed similar effects of misting to vehicles without deflectors fitted.
6.2.3 Trial Assessment HRE does not recommend fleet wide implementation of this design solution.
6.3 Combined Solution Conclusion
6.3.1 Fog Correction Setting There was a significant improvement observed with the adjustment of the fog correction setting to weak. There was a minor improvement observed when adjusting the setting to strong although the contrast was reported to be too intense in certain conditions i.e. pre-sunrise. There were instances where visibility was further improved with this set to off.
6.3.2 Enhanced Cleaning Enhancing the cleaning regime has been a contributing factor in aiding visibility; this was observed repeatedly during the trial. A cleaning periodicity of at least every 24 hours was found to be beneficial. When the cleaning routine was extended to 48 hours or more, there was a noticeable degradation in visibility observed across all cameras, particularly when operating on routes with multiple tunnel transits, i.e. the Cardiff/Swansea and Dawlish Line diagrams. Driver representatives also reported that the enhancement of cleaning across all AT300s operating on the GWML had proven to be an improvement in aiding visibility.
6.3.3 Deflectors Addition of deflectors did not provide adequate evidence of protection from low sun conditions or prevention of dirt build-up.
6.3.4 Aqueous Guard Application of Aqueous Guard did not provide evidence of significant improvement to accumulation of dirt.
6.4 Short term containment recommendations

• HRE propose to adjust the fog correction setting on all AT300s operating on GWML/WoE diagrams. The setting is to be adjusted to WEAK, as this has consistently proven to overcome low sun conditions and provide good visibility in all conditions. The adjustment of this setting will be controlled via a campaign raised and controlled on HRE's SAP system.
• Current periodicity of camera cleaning at depot to be reduced from a 10-day cycle to every 24 hours where reasonably practicable.
• HRE recommend that cleaning of DOO lenses be conducted at all terminal stations pre-diagram and post-diagram, as well as daily when stabled at depots. This will be managed at operational level.
6.5 Long term design and operational improvements

HRE is committed to continuously improving the design of this system to eradicate, as far as reasonably practicable, symptoms of camera 'whiteout'.
• HRE, in conjunction with Kokusai, will look to implement a design solution for the Monitor Display PC with the addition of a soft key function allowing the driver to manually select the level of fog correction to be applied, depending on the driver's visual assessment when operating on a DOO diagram. The implementation of this solution will be controlled using the HRE Change Process.
• HRE will also continue to look at potential improvements for the deflector concept alongside Kasado. If a design is identified which will provide drivers with a clear advantage, then this option will be considered and trialled.
• HRE will review the full DOO network/current diagrams to identify locations susceptible to rapid degradation of visibility due to dirt accumulation, working alongside operational teams to optimise the enhanced cleaning regime.
Appendix 8
Engineering Technical Report
Document Title: CHR-0576 DOO Camera Coating Trial
Document Number: OPE-ETR-2019-0001
Issue: 01
Number of Pages: 15
            Name & Role                                          Signature   Date
Author      Kate Worsley, Graduate Engineer                                  14/03/2019
Reviewer    Josh Moore, Mechanical Engineer                                  14/03/2019
Approver*   Andrew Millner, Senior Mechanical Systems Engineer               15/03/2019
*Please see document HRE-OPE-POL-0012 which details approval requirements

COPYRIGHT
The information in this document is confidential to Hitachi Rail Europe ("the Company") and the copyright, design right and/or other intellectual property rights in this document belongs to the Company. All rights conferred by law and by virtue of international copyright and other conventions are reserved to the company. This document and the information contained therein or any part thereof must not be reproduced, disclosed or used for purposes other than those for which the prior written consent of the Company has been given. This Hitachi Rail Europe document is reviewed, approved and released from Singlepoint electronic document management system's approval process.
© Hitachi Rail Europe Limited, 2016
Revision History
Issue   Author       Date         Comments
01      K. Worsley   14/01/2019   First issue
Referenced Document List
Document No.           Document Title                                     Issue   Total Pages   Issued by      Date issued
OPE-300-ZZZ-ETR00056   DOO CCTV Dirt Build-Up Preliminary Investigation   01      6             Kate Worsley   23/11/18

Attached Document List
Document No.   Document Title   Issue   Total Pages   Issued by   Date issued
Distribution List
Internal Recipients: Ops Rolling Stock Engineering; Ops Support Western Region; Ops Support Eastern Region; Ops Support Craigentinny; Ops Support Ashford; Depot Managers; Depot Production; Systems Integration Engineering (Holborn)
External Recipients: Operator(s) - please specify; Owner(s) - please specify; Hitachi Kasado; Hitachi Mito; Hitachi Tokyo; Others - please specify
Abbreviation List
DOO   Driver Only Operation
ETR   Engineering Technical Report
LNP   London North Pole
LCC   Life Cycle Cost
Contents
1 Executive Summary 4
2 Introduction 5
3 Investigation 6
  Camera cleaning 7
  DOO footage 7
  Cost 9
4 Discussion 10
5 Conclusion 11
6 Appendix A 12
1 Executive Summary
A trial was conducted using two hydrophobic coatings to identify their ability to reduce the rate of dirt accrual and improve the visual quality of the Driver Only Operation (DOO) body side camera images. The trial used Rain-X and Aqueous Guard coatings on the DOO cameras of two Units. The following conclusions were made:
• The camera coatings had no effect on the amount of dirt accrued over the trial period
• The coatings had no effect on the quality of the DOO CCTV camera footage
The report concluded that, based on this trial, there is no evidence to support the application of these coatings for the improvement of camera image quality, or ease of camera cleaning.
2 Introduction
Following a preliminary investigation into poor quality Driver Only Operation (DOO) camera images (see ETR-00056), two causes of poor image quality were identified. The first issue related to dirt build-up, and the second to the overexposure of the camera images due to sunlight. One of the report recommendations was to investigate the effect of a hydrophobic coating on the build-up of dirt, and subsequently on the DOO camera image quality. The coatings that were chosen for investigation were Rain-X Rain Repellent and Aqueous Guard Clear Coat. Rain-X represented an 'off the shelf' solution, whereas Aqueous Guard was a specialist product. The effectiveness of the coatings was to be determined by the amount of dirt removed from the cameras at the end of the trial, the ease of dirt removal, and the effect the build-up of dirt had on the camera image quality. This Engineering Technical Report (ETR) reviews the effectiveness of both solutions compared to each other and to a camera with no coating, and gives recommendations on solution selection based on performance and cost.
3 Investigation
Rain-X Rain Repellent and Aqueous Guard Clear Coat coatings were applied to two trains, 800005 and 800013, in the configuration shown in Figure 1 and Figure 2. The configuration allowed both Units to carry both coatings, meaning that the cameras would have completed the same diagrams and should have the same amount of dirt build-up. This allowed direct comparison between the coatings and cameras with no coating applied.
Figure 1: Coating Layout on 800005

Figure 2: Coating Layout on 800013
The trial ran from the 14th February to 6th March, during which time the cameras were not to be cleaned. At the end of the trial, the cameras were cleaned with a cotton pad and water. Water was used as this would have no effect on the camera coatings. The DOO footage from the final day of the trial, before and after cleaning, was reviewed to identify the effect of the coatings on the camera images, and to then identify any improvement made. 800013 was unavailable for cleaning and CCTV download at the end of the trial, and as such, this report is based on the findings on 800005. The cost of both products was then calculated, looking at the cost to apply the product to every AT300 and AT200 camera, broken down by fleet, for their respective maintenance contract lengths.
Camera cleaning

Upon the arrival of 800005 into the building at London North Pole (LNP), the DOO cameras had already been cleaned as part of the current enhanced cleaning regime. The dirt removed from the cameras can be seen in Figure 3.
Figure 3: Dirt removed from DOO cameras
Some of the cotton pads have dirt on them; however, this was from the housing and not the lens itself. The images shown in Figure 3 were taken as soon as the cameras had been cleaned to avoid contamination; the pads show slight discolouration, but this was due to the water used for cleaning. Due to the enhanced cleaning regime, there was minimal dirt removed from the cameras. This demonstrates that the presence of a coating does not reduce the amount of dirt left on the cameras, nor improve the ease of dirt removal from the cameras. Note: 800013 was unavailable for camera cleaning at the end of this trial.
DOO footage

Figure 4 and Figure 5 show the DOO camera images from cameras with no coating applied, with Aqueous Guard applied, and with Rain-X applied, before the cameras were cleaned at LNP that evening.
Figure 4: DOO images from cameras with no coating applied before cleaning
Figure 5: DOO images from cameras with Aqueous Guard and Rain-X applied before cleaning
Figure 6 and Figure 7 show the DOO camera images of the same cameras after they were cleaned at LNP.
Figure 6: DOO images from cameras with no coating applied after cleaning
Figure 7: DOO images from cameras with Aqueous Guard and Rain-X applied after cleaning
These images show no clear difference between the image quality of the cameras with coatings on, and the cameras without. It can also be seen that the cleaning of the cameras made no difference to the image quality of any camera, coated or not. Note: 800013 was unavailable for DOO CCTV download at the end of this trial.
Cost

The whole life cycle cost (LCC) of the application of both products on all DOO cameras on the AT300 and AT200 fleets was calculated:
• Aqueous Guard quoted £46.95 (exc VAT) per camera application for Year 1, followed by an annual maintenance application of £36.95 (exc VAT) per camera.
• Rain-X recommends that Rain Repellent be reapplied every two weeks. When the product was applied for the trial, a 5 Car Unit used approximately half a bottle of product.
Based on this information, Table 1 shows the estimated cost of application of each product over the various maintenance contract lengths. The full calculation and all assumptions made are in Appendix A.

Table 1: Cost of Aqueous Guard and Rain-X over maintenance contract term (in GBP)
Fleet        Aqueous Guard (exc VAT)   Rain-X (inc VAT)
AT300 GW     1,515,000                 132,000
AT300 WoE    969,000                   85,000
AT300 EC     2,040,000                 178,000
AT300 TPE    74,000                    6,000
AT300 Hull   19,000                    2,000
AT200 ASR    240,000                   21,000
Total Cost   4,857,000                 424,000
4 Discussion
At the end of the trial, 800013 was unavailable for camera cleaning and CCTV download, and as such the findings of this report are based on the findings from 800005. In addition to this, the enhanced cleaning regime that has recently been put in place on the GWR fleet was performed during the trial. This was in direct contradiction to what was requested, and was caused by a communication breakdown within the process.

The amount of dirt removed from the cameras at the end of the trial was negligible. This was due to the fact that the Unit had been cleaned just prior to entering the shed at LNP as part of the current enhanced cleaning regime. Additionally, the enhanced cleaning regime saw the cameras being cleaned at all terminal stations throughout the trial, which will have been a contributing factor to the small amount of dirt that was removed from the cameras. Despite cleaning of the cameras throughout the trial, the amount of dirt removed and the quality of the DOO footage are still directly comparable between the cameras on the Unit that had Aqueous Guard, Rain-X and no coating applied. It can be seen from Figures 4 – 7 that the quality of the DOO camera footage is consistent across all cameras, regardless of whether a coating was applied or not. Additionally, despite the cameras having been cleaned prior to further cleaning at LNP, the amount of dirt that came off all cameras was close to nothing, which again implies that the current cleaning methods are not causing deterioration of the camera image quality.

The total cost of application of Aqueous Guard over the duration of the maintenance contracts of the AT300 and AT200 fleets would be approximately £3.8m (excluding VAT), and over the same period Rain-X would cost approximately £0.3m (excluding labour). Rain-X, however, would require application every two weeks according to the manufacturer's specification, meaning the application of the product would be much more labour-intensive over the lifetime of the camera.
5 Conclusion
Based on the evidence presented, this report concludes that the application of Aqueous Guard or Rain-X does not improve the quality of the DOO camera images, or reduce the amount of dirt removed at the end of the trial whilst under the enhanced cleaning regime. It was calculated that the whole LCC of application of Aqueous Guard to all AT200 and AT300 cameras would be approximately £3.8m, and the same cost for Rain-X would be £0.3m. Based on this evidence, this report cannot recommend fitment of Aqueous Guard or Rain-X to the DOO cameras.
6 Appendix A
Below are the calculations performed to estimate the cost of product application on all AT300 and AT200 DOO cameras. All costings are in GBP. The cost of Rain-X was based on using 0.5 bottles per 5 Car Unit. This calculation did not account for:
• Changes in the cost of Rain-X Rain Repellent over the 27.5 year maintenance contract.
• The cost of labour to apply Rain-X 26 times per camera per year.
• The quoted price for Aqueous Guard application being based on coating 80 cameras per shift; this calculation did not consider whether more or fewer than this number was completed.
• Replacement cameras and cameras in Stores that would need product application.
The calculated cost of Aqueous Guard excludes VAT.

Table 2: Calculation of total cameras in each fleet
AT300
Fleet   Number of 5 Car   Number of 9 Car   Total number of cars   Total number of cameras
GW      36                21                369                    1476
WoE     22                14                236                    944
EC      22                43                497                    1988
TPE     19                -                 95                     380
HULL    5                 -                 25                     100

AT200
Fleet   Number of 3 Car   Number of 4 Car   Total number of cars   Total number of cameras
ASR     46                24                234                    234
Table 3: Calculation of cost of fitment to all cameras over maintenance contracts
Aqueous Guard
Fleet   Cost (Year 1, exc VAT)   Cost (year 2+, exc VAT)   Maintenance Contract Length (years)   Cost over contract length (exc VAT)
GW      69298.2                  54538.2                   27.5                                  1514561
WoE     44320.8                  34880.8                   3                                     114082
EC      93336.6                  73456.6                   27.5                                  2039937
TPE     17841                    14041                     5                                     74005
HULL    4695                     3695                      5                                     19475
ASR     10986.3                  8646.3                    7                                     62864
TOTAL COST                                                                                       £3824924

Rain-X
Fleet   Cost (to apply to each Car, inc VAT)   Cost (annual, inc VAT)   Maintenance Contract Length (years)   Cost over contract length (inc VAT)
GW      184.5                                  4810.179                 27.5                                  132279.9
WoE     118                                    3076.429                 3                                     84601.79
EC      248.5                                  6478.75                  27.5                                  178165.6
TPE     47.5                                   1238.393                 5                                     6191.964
HULL    12.5                                   325.8929                 5                                     1629.464
ASR     29.25                                  762.5893                 7                                     5388
TOTAL COST                                                                                                    £332834
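To make the derivation of the Aqueous Guard column in Table 3 easier to follow, the short sketch below reproduces the per-fleet arithmetic using the camera counts from Table 2 and the quoted prices (£46.95 per camera for the Year 1 application and £36.95 per camera for each subsequent annual application). The formula of Year 1 cost plus (contract years minus one) annual applications is inferred from the tabulated figures rather than stated explicitly in the source, and the names used below are illustrative only.

```python
# Illustrative check of the Aqueous Guard costs in Table 3 (exc VAT).
# Camera counts are taken from Table 2; contract lengths (years) from Table 3.
CAMERAS = {"GW": 1476, "WoE": 944, "EC": 1988, "TPE": 380, "HULL": 100, "ASR": 234}
CONTRACT_YEARS = {"GW": 27.5, "WoE": 3, "EC": 27.5, "TPE": 5, "HULL": 5, "ASR": 7}
YEAR_1_PRICE = 46.95   # quoted per-camera price for the first application
ANNUAL_PRICE = 36.95   # quoted per-camera price for each annual maintenance application

def aqueous_guard_cost(fleet: str) -> float:
    """Year 1 application plus annual maintenance for the remaining contract years."""
    cameras = CAMERAS[fleet]
    years = CONTRACT_YEARS[fleet]
    return cameras * YEAR_1_PRICE + (years - 1) * cameras * ANNUAL_PRICE

for fleet in CAMERAS:
    print(f"{fleet}: £{aqueous_guard_cost(fleet):,.0f}")
print(f"Total: £{sum(aqueous_guard_cost(f) for f in CAMERAS):,.0f}")  # ~£3,824,924, matching Table 3
```

The Rain-X column is built on a different basis (a per-application cost re-applied approximately every two weeks, i.e. around 26 times per year), so it is not reproduced here.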
Appendix 9
Review of DOO (P) camera cleaning, 3rd and 4th Feb 2020

3rd Feb, Paddington (attendees: S. Player, Mark Prescott, Nigel King)
Process used: Wet wipe
  Positives: Removed surface dirt and image from cab was OK
  Negatives: Ground in dirt on outer parts of lens still present

Process used: Dry cloth
  Positives: None
  Negatives: Did not remove any dirt

Process used: Glass cleaner and dry cloth
  Positives: Removed surface dirt and image from cab was OK
  Negatives: Ground in dirt on outer parts of lens still present

Action: Further work is needed to establish a cleaning process from a station platform where the lens has deep ground-in dirt, i.e. using the deep clean solution in a spray with a brush and cloth - HITACHI
4th Feb North Pole (attendees S. Player, Geoff Markey, + external supplier)
Process used: Self cleaning pole with ionised water
  Positives: None
  Negatives: Did not remove dirt
  Action: N/A

Process used: Day clean solution used with large brush
  Positives: Removed surface dirt
  Negatives: Ground in dirt on outer parts of lens still present
  Action: N/A

Process used: Heavy clean solution used with large brush and then washed down with water
  Positives: Removed all dirt
  Negatives: Solution and water dripped down the train. Brush was too big for the fine detail of the lens. (After hand bash the train goes through the washer to further remove residue)
  Action: Only use fine brush that is properly attached to a pole - HITACHI

Process used: Heavy clean solution used with fine brush then washed with water
  Positives: Removed all dirt and able to get into the detail of the lens
  Negatives: Solution and water dripped down the train. (After hand bash the train goes through the washer to further remove residue)
  Action: Pole to be adapted for permanent solution to take the fine brush (not tape) – HITACHI

Process used: Heavy clean solution used with self cleaning pole and washed down with ionised water from self cleaning pole
  Positives: Used less water so less mess
  Negatives: Did not remove the ground in dirt as the pads are not abrasive enough
  Action: Adapt pole head to have fine brush attachment in addition to the wash pad - EXTERNAL MANUFACTURER

Process used: Heavy clean solution used with fine brush then washed with water using the self cleaning pole
  Positives: Used less water so less mess
  Negatives: Had to swap poles
  Action: Adapt pole head to have fine brush attachment in addition to the wash pad - EXTERNAL MANUFACTURER

Results after using heavy cleaning product: before and after photographs.
Additional actions:
• Undertake swab analysis of the residue on the camera lens to determine what the material is – this might better determine the best solution for cleaning – Hitachi
• Review the work procedure for the cleaning supervisor on checking the condition of the lenses post cleaning – Hitachi
• Review the work procedure of the train preparer so that the daily exam checks that the images in the driving cab are working and clean on both sides of the train – Hitachi
Appendix 10 177
178
179
180
181
XC-038-00
182
XC-035-00
183
XC-036-00
184
XC-037-00
185
186
Appendix 11 187
188
189
190
1
2
XC-002-00
191
192
193
Appendix 12 194
Class 800/801/802
XCH023
OPE-XXX-XXX-XXX-NNNN
CCTV – DOO Camera – Clean
Issue: 1.00
Date: 05-Nov-19
1 Introduction
This Vehicle Maintenance Procedure (VMP) gives instructions to clean the Driver Only Operation (DOO) camera.
2 Revision History
Rev. | Pages | Date | Initials | Revision Detail
1.00 | All | 05/11/2019 | AB | First Issue. Ref MMC-00441. Draft D1.02 approved by Mark Crane for Engineering and Tony Tull for SSHEQ.
Refer to Section 9 for full Revision History.
3 Health & Safety
3.1 Safety Conditions
Safety Conditions – See Section 6 of Maintenance Plan: SC1
Safety Critical: No
Min. Staff: 1
SC1 – Local Depot Protection (including wheel scotching and isolation of the ETCS system)
3.2 Personal Protective Equipment (PPE)
In addition to wearing site-specific PPE, the following task-specific PPE must be worn: Eye Protection, Bump Hat, Hand Protection.
3.3 COSHH, Risk Assessments, and Manual Handling
Make sure that you are familiar with all the COSHH assessments, risk assessments, and manual handling procedures before you start this task.
195
4 Vehicle Applicability
5 Car Unit: DPTS, MeS, MeC | MS, MeS | MS, DPTF
9 Car Unit: DPTS, MeS, MeS | MS, TpS, MeS | MS, TS, MeC | MC, MeF | MF, DPTF
196
5 Materials and Special Tools

Materials: None

Consumables:
Item | Description | Part Number | Qty.
C1 | Microfiber Cloth (P/N 301314 or equivalent) | Local Supply | A/R
C2 | Cleaning Fluid (Superfine: Speed Clean 21) | Local Supply | A/R

Tools: None
Special Tools: None
Test Equipment: None
6 References
Document | Title | Remarks
Section 6 of Maintenance Plan | Safety Conditions Procedure | Mandatory
HRE-IND-REQ-002 | Personal Protective Equipment (PPE) HRE Requirements | Mandatory
XCH002 | CCTV – DOO Camera – Replace | Mandatory
XCH021 | CCTV – DOO Camera – Adjust | Mandatory
XCX002 | CCTV – Internal / External Check | Mandatory
197
7 Work Procedure

WARNING – RISK OF DANGER
Do not reach up above the cant line during DOO cleaning. Risk of electric shock and death.

WARNING – WORKING AT HEIGHT
Make sure that you use the correct maintenance platform and obey the depot procedures when you work above floor level.

7.1 CCTV – DOO – Clean

WARNING – RISK OF ELECTRIC SHOCK
Do not use a spray near to the DOO. Risk of electric shock and death.

7.1.1 At the rail side, apply cleaning fluid [C2] to a microfiber cloth [C1] before you get access to clean the DOO. Do not use a spray near to the DOO.
198
Figure 1: CCTV – DOO Camera – Location

7.1.2 Refer to Figure 1. At the vehicle bodyside, locate each DOO camera (1) adjacent to the passenger doors.
199
Figure 2: DOO Camera

WARNING – RISK OF ELECTRIC SHOCK
Do not reach up above the cant line (3) during DOO cleaning. Do not use a spray near to the DOO. Risk of electric shock and death.

WARNING – WORKING AT HEIGHT
Make sure that you use the correct maintenance platform and obey the depot procedures when you work above floor level.

7.1.3 Refer to Figure 2. At each DOO (1) use an appropriate access platform to reach up and get access to clean the DOO. Do not go above the cant line (3).
7.1.4 Use a microfiber cloth [C1] soaked in cleaning fluid [C2]. Apply the cleaning fluid to completely wet the DOO lens (2) and the surrounding area of the DOO body and leave for at least 30 seconds to soak, to loosen any dirt or debris and prevent the lens (2) from being scratched.
7.1.5 Wipe the DOO lens (2) and the surrounding area of the DOO body with the wet microfiber cloth [C1] to remove all dirt.
7.1.6 Use a second clean dry microfiber cloth [C1] to polish the DOO lens (2) only, to remove any residue of cleaning fluid and dirt.
7.1.7 Check the DOO lens (2) for any damage, scratch or distortion.
200
7.1.7.1 If any damage, scratch or distortion is found on the DOO lens (2), report this to your team leader and raise a defect notification in SAP. If necessary, replace the DOO in accordance with XCH002.
7.1.8 Complete cleaning of the DOO in all locations.
7.1.9 In the cab, carry out XCX002 to check the image quality at each DOO display.
7.1.9.1 If the image quality of any DOO is not acceptable, repeat steps 7.1.4 to 7.1.6 to clean that DOO again and repeat the check of the image quality.
7.1.9.2 If the image quality of any DOO is not acceptable after a second clean, report this to your team leader and raise a defect notification in SAP.
7.1.9.2.1 If necessary, adjust the DOO in accordance with XCH021.
7.1.9.2.2 If necessary, replace the DOO in accordance with XCH002.
201
8 Completion
8.1 Housekeeping
8.1.1 Remove all tools and materials from the work area, and return to Stores as required.
8.1.2 Make sure the work area is left clean and tidy, and dispose of any waste material in accordance with Local Depot Procedures.
8.1.3 Inform the Team Leader that the work procedure is complete.
8.2 Report Work Deviations and Test Results
8.2.1 Complete all Task Documentation and mark as complete.
8.2.2 Make a SAP Defect Notification for defects identified during the completion of work.
END OF TASK
202
9 Revision History (Complete)
Rev. | Pages | Date | Initials | Revision Detail
D1.00 | ALL | 24/10/2019 | AB | First Draft issue: New VMP ref OPE-MMC-00441. V18 Template. Draft Issue without SC2 pending approval by production / SSHEQ.
D1.01 | ALL | 01/11/2019 | AB | Corrected "DDO" ref to be DOO. Draft Issue without SC2. Added working at height warning, in 2 locations, per Tony Tull – SSHEQ review.
D1.02 | 3, 4 & 6 | 04/11/2019 | AB | Added warning about use of sprays. Added Step 7.1.1. Updated C1 and C2 with information from OPE-DOC-00022 Issue 1.0 and BOM. Removed ref OPE-DOC-00022 and related note.
1.00 | All | 05/11/2019 | AB | First Issue. Ref MMC-00441. Draft D1.02 approved by Mark Crane for Engineering and Tony Tull for SSHEQ.
Appendix 13 203
RISK ASSESSMENT

Task: DOO Camera Cleaning
The following method statement(s) & series are covered by this assessment: MSGWR403
Risk Assessment Ref.: RAGWR403
Version: 1
Location: Various
Issue Date: 01/07/2019
Frequency: As required
People affected: Operative, Client colleagues, Customers and sub-contractors (collectively known as 'All')
Review: Annually or on change of significant Hazards / Incident

Significant Hazard 1: Overhead Line Equipment (OLE)
Person(s) at Risk: Operative
Consequence (Risk): Fire, Burns, Electrical shock, Death
Initial potential for harm (no controls), L x S = R: 4 x 5 = 20
Existing Control Measures – Operative must:
• Have received and signed the OLE awareness briefing to undertake this task at Paddington.
• Assume the OLE is live at all times (including any cables attached to OLE lines).
• Ensure they or anything being held does not come any closer than 2.75 meters (9 foot) of any live OLE.
• Not extend any equipment or hand above the orange cant rail.
• Ensure equipment handles are constructed of non-conductive materials.
Residual potential for harm (final risk), L x S = R: 1 x 5 = 5

Significant Hazard 2: Culture (eg accident behaviours, accents, human factors, railway terminology, experience)
Person(s) at Risk: All
Consequence (Risk): Major injury; injuries, strains, sprains, fractures and / or head injuries
Initial potential for harm (no controls), L x S = R: 4 x 4 = 16
Existing Control Measures:
• Effective and visible leadership
• Effective and collaborative communications
• Employees fully trained to complete task
• Employees fully site inducted
• Competence management system
• Familiarisation training
Residual potential for harm (final risk), L x S = R: 1 x 4 = 4

Further Information (all hazards): Managers must ensure the following for all activities: all operatives are to receive initial, on the job and task specific training; this training shall be identified and recorded on the Colleague Task Assessment. Monthly spot checks are undertaken to ensure compliance with RAMS.
204
Significant Hazard 3: Platform / Train Interface
Person(s) at Risk: Operative
Consequence (Risk): Impact with moving rail vehicle / Fatality / Major injury
Initial potential for harm (no controls), L x S = R: 3 x 5 = 15
Existing Control Measures – Operative must:
• Stand back from the yellow line until the train has come to a complete stop.
• Not attempt to retrieve any items or equipment dropped between the platform edge and train. Any items dropped onto the track must be reported to GWR, station management or Servest management immediately.
Residual potential for harm (final risk), L x S = R: 1 x 5 = 5

Significant Hazard 4: Falls from Height
Person(s) at Risk: Operative
Consequence (Risk): Fatality / major injury accident
Initial potential for harm (no controls), L x S = R: 3 x 4 = 12
Existing Control Measures – Operative must:
• Be aware of other people accessing / egressing and allow them to access / egress before commencing task.
• Ensure a secure footing and be correctly balanced before commencing task.
• Not step over the Platform Edge indicator line whilst completing task.
• Be aware of the differing gap between the train and the platform edge.
• Not attempt to retrieve any items or equipment dropped between the platform edge and train. Any items dropped onto the track must be reported to GWR, station management or Servest management immediately.
• Wear safety footwear at all times.
Residual potential for harm (final risk), L x S = R: 1 x 4 = 4

Significant Hazard 5: Slips & trips, whilst carrying out task
Person(s) at Risk: Operative
Consequence (Risk): Contact injuries, strains, sprains, fractures, cuts, scratches
Initial potential for harm (no controls), L x S = R: 4 x 3 = 12
Existing Control Measures – Operative must:
• Be aware that platforms may be slippery during wet or cold weather.
• Be aware of debris / litter and / or left passenger property that may create a trip or slip hazard.
• Wear safety footwear at all times.
Residual potential for harm (final risk), L x S = R: 1 x 3 = 3
Further Information: Periodic site inspections to ensure the environment is kept clean & tidy and free from slip / trip hazards.

Significant Hazard 6: Manual Handling
Person(s) at Risk: Operative
Consequence (Risk): Muscular-skeletal disorders (MSDs) or work related upper limb disorders
Initial potential for harm (no controls), L x S = R: 3 x 3 = 9
Existing Control Measures:
• Employees identified as being at increased risk (pregnancy) from manual handling, or with any underlying medical conditions that prevent them from undertaking manual handling, should inform their supervisor. Line manager to arrange to carry out an individual assessment – RA022.
Operative must:
• Have current manual handling training.
• Use correct manual handling techniques as per training and Servest H&S booklet.
• NEVER over extend or stretch beyond a safe point or beyond their capability.
Residual potential for harm (final risk), L x S = R: 1 x 3 = 3
Further Information: MH Training Program in place. Use the HSE Brief guide to Manual Handling.

Further Information (continued): Regular performance review and buddy system to support new and inexperienced staff. First Aid provisions and training.

PPE: Safety Footwear (EN345), Overall / Uniform, Safety Glasses EN 166, Vinyl / Nitrile Gloves
Risk matrix (Likelihood x Severity / Consequence):

Likelihood: 1 Very Unlikely, 2 Unlikely, 3 Likely, 4 Very Likely, 5 Certain
Severity / Consequence: 1 Minor Injury / No time off, 2 Significant injury up to 3 days off, 3 Temporary disability > 3 days off, 4 Permanent disability, long term absence, 5 Death

Likelihood \ Severity | 1 | 2 | 3 | 4 | 5
1 | 1 | 2 | 3 | 4 | 5
2 | 2 | 4 | 6 | 8 | 10
3 | 3 | 6 | 9 | 12 | 15
4 | 4 | 8 | 12 | 16 | 20
5 | 5 | 10 | 15 | 20 | 25
NB: The red area is high risk and the task will have to be reviewed before the operation is allowed to continue. Status values: 1 - 9 = Low, 10 - 15 = Medium, 16 - 25 = High.
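The initial and residual figures throughout this assessment are likelihood multiplied by severity (L x S = R), banded as set out in the note above. For illustration only, a minimal sketch of that scoring scheme follows; the band boundaries are taken from this assessment, while the function and variable names are illustrative.

```python
# Minimal sketch of the L x S = R scoring used in this risk assessment.
# The band boundaries (1-9 Low, 10-15 Medium, 16-25 High) are taken from the
# note above; everything else is illustrative.

def risk_score(likelihood: int, severity: int) -> int:
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be between 1 and 5")
    return likelihood * severity

def risk_band(score: int) -> str:
    if score <= 9:
        return "Low"
    if score <= 15:
        return "Medium"
    return "High"

# Example: the OLE hazard from this assessment
initial = risk_score(4, 5)   # 20 -> High, before controls
residual = risk_score(1, 5)  # 5  -> Low, with controls in place
print(initial, risk_band(initial), residual, risk_band(residual))
```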
206
ASSESSOR DETAILS
Print: Victoria Shepherd-Garner   Sign: Victoria Shepherd-Garner   Date: 01/07/2019
Sign:   Date:
Sign:   Date:

SIGNED BY PERSONS UNDERTAKING WORK / TASK (The persons signing below confirm that they will follow the Risk Assessment).
If a situation arises that represents a serious or imminent danger to ourselves or others I will immediately cease works, inform line management / Compliance Manager and not recommence works until suitable / sufficient control measures are implemented.

Name (print) | Name (sign) | Job Title | Date
Appendix 14 207
METHOD STATEMENT
Method Statement Title: DOO Camera Cleaning
MS Ref. No. & Issue: MSGWR401, Issue 2
Produced By: Victoria Shepherd-Garner
Date Produced: 01/02/2020
PPE to be Used (insert ✓):
• Bump Cap EN 812
• Goggles EN 166
• Safety Glasses EN 166
• Dust Mask EN 149 FFP3
• Hi-Viz EN 471
• Overall / Uniform – Standard
• Apron / Tabard – Standard
• Vinyl / Nitrile Gloves
• Protective Gloves EN374
• Safety Footwear EN345
• Hi-Vis waterproof jacket & trousers EN 471
• Face Visor EN166 Class A

PPE, such as safety glasses, may differ from task to task. Please refer to each task to ensure the correct PPE is worn / used.
INDEX
1 Before you start your shift
2 Your safety & that of your colleagues
3 Movement around site
4 Task Method
5 What to do in an emergency
TASK: 1 BEFORE YOU START YOUR SHIFT
What you need to know / observe and follow at all times:
• Ensure you have had adequate rest / sleep prior to your shift.
• Ensure you are not under the influence of alcohol or drugs (for the avoidance of doubt, the use of drugs and/or consumption of alcohol is prohibited under the drugs & alcohol policy).
• You should inform your manager or supervisor that you are taking prescribed and/or over-the-counter drugs, and pass on any information provided by your doctor or a pharmacist on likely effects on work performance.
208
• Any illness or conditions that may affect your ability to undertake your job safely should, in the first instance, be reported to your supervisor prior to starting any tasks or going on or near the railway.

TASK: 2 YOUR SAFETY & THAT OF YOUR COLLEAGUES
What you need to know / observe and follow at all times:
• This Method Statement must be read in conjunction with Risk Assessment RAGWR401.
• Required PPE, as shown above, must be worn at all times.
• Manual Handling awareness training must be current.
• Awareness of the changing environment and hazards in the location where this procedure is to be carried out.
• Knowledge of the Contract Health & Safety site folder and its location.
• Knowledge of the First Aid facilities and first aider arrangements.
• Operatives must have had local induction training and authorisation to carry out this procedure.
• Where OLE exists, only operatives who have received and signed for the OLE awareness briefing may undertake this task.
• Be aware of the potential trip hazard caused by platform furniture, passengers and fellow operatives.
• Awareness of the varying gap between the platform and train.
• Mobile phones are not to be used whilst carrying out this task.
• Do not extend any equipment or hand above the orange cant rail.
• Only cameras on the platform side are to be cleaned.
• Do not use any chemicals on the cleaning tool – use only clean water.
• Do not use buckets of water on the platform – clean and dampen equipment away from the platform.
• Do not attempt to retrieve any items or equipment dropped between the platform edge and train. If any items are dropped onto the track, inform GWR and Servest management immediately.

TASK: 3 MOVEMENT AROUND SITE
1. Operatives must stand in a place of safety until the train has stopped, the driver has confirmed it is safe to work on, and the train is under possession by the operative attaching a NTBMB to the driver's cab door at the departing end.
Where OLE exists:
1. Always assume the OLE is live at all times (including any cables attached to OLE lines).
2. Ensure you and anything you hold does not come any closer than 2.75 meters (9 foot) of any live OLE.
209
TASK: 4 METHOD
1. Ensure you have a secure footing and are correctly balanced.
Using a high level cleaning tool or clean damp cloth:
2. Damp wipe the IET DOO camera lens to remove dust and debris, leaving it clean and smear free.
Using a magic sponge and clean cloth:
3. If within your own capability, ensuring no over stretching at any time, damp wipe the IET DOO camera lens to remove dust and debris, leaving it clean of any build-up and smear free.

Tools and equipment: clean, dampened high level cleaning tool / flick duster (with electrification sign attached and constructed of non-conductive materials at OLE locations); clean, dampened blue microfiber cloth; magic sponge.

END OF TASK
1. Remove NTBMB.
2. Report completion of task to supervisor.

TASK: 5 WHAT TO DO IN AN EMERGENCY
1. Advise your site manager, supervisor or a GWR member of staff immediately by the quickest means.
ASSESSOR DETAILS
Print: Victoria Shepherd-Garner   Sign: Victoria Shepherd-Garner   Date: 01/07/2019
Print: Denise Dempsey   Sign:   Date: 01/02/2020
Sign:   Date:
210
SIGNED BY PERSONS UNDERTAKING WORK / TASK (The persons signing below confirm that they will follow the Method Statement).
If a situation arises that represents a serious or imminent danger to ourselves or others I will immediately cease works, inform line management / Compliance Manager and not re-commence works until suitable / sufficient control measures are implemented.

Name (print) | Name (sign) | Job Title | Date
Appendix 15 211
Sue Mundy

From: Stewart Player
Sent: 07 May 2020 08:33
To: Mark Prescott
Subject: FW: DOO Cleaning update
Attachments: DOO brush 1.JPG; DOO brush 2.JPG
Stewart Player | Head of Operations | Great Western Railway
4th Floor | Milford House | 1 Milford Street | Swindon | SN1 1HL
e: stewart.player@gwr.com | m: 07786 338755
From: Markey, Geoff <Geoff.Markey@hitachirail.com>
Sent: 07 May 2020 08:25
To: Stewart Player <Stewart.Player@gwr.com>
Subject: DOO Cleaning update

Good morning Stewart

I have to attend an OPS call this morning at 08.30, so please accept my apologies for not being on the call. The AT300 GWML document is now updated and the items regarding DOO cleaning are listed below. The DOO cleaning brush is undergoing a front line trial tonight, providing we get the correct brush-to-pole coupler this morning. Photos of the brush are attached. The edge bristles are 10mm wide, just about the right width to concentrate on just the DOO lens.

Daily Clean Specification
212
PHC specification
Platform Specification
Kind regards
Geoff
213
214