DRIVING IMPROVEMENT WITH OPERATIONAL RESEARCH AND DECISION ANALYTICS
SPRING 2017
MEETING THE NATIONAL POLICE AIR SERVICE’S CHALLENGES Simulation helps improve efficiency
IMPROVING FLOW THROUGH THE KIEL CANAL Sophisticated modelling reduces ship waiting times
RESPONDING TO THE EBOLA CRISIS Planning capacity for the treatment and evacuation of health care workers
© National Police Air Service
THE JOURNALS OF THE OR SOCIETY
Linking research, practice, opinion and ideas

The journals of The OR Society span a variety of subject areas within operational research and the management sciences, providing an essential resource library for OR scholars and professionals. In addition to the Journal of the Operational Research Society (JORS) – the flagship publication of The OR Society and the world’s longest-established OR journal – the portfolio includes leading research on the theory and practice of information systems, practical case studies of OR and managing knowledge, and journals taking a focused look at discrete-event simulation and applying a systems approach to health and healthcare delivery.
OR Society members receive complete online access to these publications. To learn more about member benefits please visit the membership section of The OR Society website: theorsociety.com
palgrave.com/journals
EDITORIAL

Welcome to another issue of Impact, packed with interesting stories of the benefits that analytical work has brought to a wide range of organisations. Electronic copies of earlier issues are available at https://issuu.com/orsimpact. For future issues of this free magazine, please subscribe at http://www.getimpactmagazine.co.uk/.

It’s a cliché that a week is a long time in politics. It is becoming apparent that a couple of months is a long time in the gestation of an issue of Impact. The work of the BT team featured in this issue won the President’s Medal at the last OR Society conference. Well deserved it was, too. I was on the jury! The article describes their efforts to support BT’s ‘Digital Britain’ vision with an efficient, responsive mobile field service operation. Increasing the operational efficiency of around 23,000 BT field engineers has significantly improved customer experience, travel impacts and CO2 emissions. In March, BT announced that Openreach will become a distinct company. The work done by the BT team will continue to provide O.R. capabilities for managing field engineers across the BT Group.

When I heard that the prestigious INFORMS Prize had been awarded in 2016 to General Motors for its sustained track record of innovative and impactful applied O.R. and advanced analytics, I asked them to contribute an article to describe the work of their group, including reference to that which involved their operations in the UK. In March the news broke that a tentative agreement was reached to sell the UK plants at Ellesmere Port and Luton to France’s PSA Group, who would then become the beneficiaries of the work of GM’s O.R. group.

Our columnists, Mike Pidd and Geoff Royston, are concerned about small data: both inspired by a comment of the chief economist of the Bank of England, Andy Haldane, concerning the ‘Michael Fish moment’ for economists of failing to predict the bank crash of 2008. Mr Haldane said that economists could learn from meteorologists, who now use much more data to understand how weather patterns develop. Mike argues that we should not get carried away by big data, but welcome its availability and add value by creating models that use it, whilst Geoff makes the point that big data is not going to lead to the demise of small data – and that O.R. will continue to use both. Great minds – with slightly different thoughts!
The OR Society is the trading name of the Operational Research Society, which is a registered charity and a company limited by guarantee.
Seymour House, 12 Edward Street, Birmingham, B1 2RX, UK Tel: + 44 (0)121 233 9300, Fax: + 44 (0)121 233 0321 Email: email@theorsociety.com Secretary and General Manager: Gavin Blackett President: Ruth Kaufmann FORS, OBE (Independent Consultant) Editor: Graham Rand g.rand@lancaster.ac.uk
Print ISSN: 2058-802X Online ISSN: 2058-8038 Copyright © 2017 Operational Research Society Ltd Published by Palgrave Macmillan Printed by Latimer Trend This issue is now available at: www.issuu.com/orsimpact
Graham Rand
OPERATIONAL RESEARCH AND DECISION ANALYTICS
Operational Research (O.R.) is the discipline of applying appropriate analytical methods to help those who run organisations make better decisions. It’s a ‘real world’ discipline with a focus on improving the complex systems and processes that underpin everyone’s daily life – O.R. is an improvement science.

For over 70 years, O.R. has focussed on supporting decision making in a wide range of organisations. It is a major contributor to the development of decision analytics, which has come to prominence because of the availability of big data. Work under the O.R. label continues, though some prefer names such as business analysis, decision analysis, analytics or management science. Whatever the name, O.R. analysts seek to work in partnership with managers and decision makers to achieve desirable outcomes that are informed and evidence-based.

As the world has become more complex, problems tougher to solve using gut-feel alone, and computers increasingly powerful, O.R. continues to develop new techniques to guide decision making. The methods used are typically quantitative, tempered with problem structuring methods to resolve problems that have multiple stakeholders and conflicting objectives.

Impact aims to encourage further use of O.R. by demonstrating the value of these techniques in every kind of organisation – large and small, private and public, for-profit and not-for-profit. To find out more about how decision analytics could help your organisation make more informed decisions see www.scienceofbetter.co.uk. O.R. is the ‘science of better’.
Annual Analytics Summit Thursday 15 June 2017
The Annual Analytics Summit delivers a one-day learning and networking event about how big data and analytics are shaping organisational decision-making. Filled with case studies, innovations and strategies on turning data into decisions, the Annual Analytics Summit is the event for practitioners and decision-makers alike. The summit brings together experts from government, industry and academia, as well as exhibitors from software providers, consultancies and specialist recruitment agencies.
Location: IET Savoy Place, London WC2R 0BL
www.analytics-events.co.uk #TAAS17
CONTENTS

7 OPTIMISING EFFICIENCY IN THE NATIONAL POLICE AIR SERVICE
Gail Ludlam describes how simulation modelling has helped the National Police Air Service meet its various challenges

13 UNTANGLING THE KIEL CANAL
Brian Clegg reports on how Rolf Möhring and his team helped improve the flow of ships through the Kiel canal

18 LEVERAGING O.R. TECHNIQUES FOR SMARTER FIELD OPERATIONS
Gilbert Owusu and colleagues tell us about their work to help improve BT’s field operations

22 REDUCING CANINE GENETIC DISEASE
Ian Seath and Sophie Carr explain how simple analysis has helped achieve a substantial reduction in the number of Mini Wire puppies born with a rare and incurable form of epilepsy

26 OPERATIONAL RESEARCH AT GENERAL MOTORS
Jonathan Owen and Robert Inman record the work of the large O.R. team at General Motors

32 OPTIMISING THE WIND
Brian Clegg describes how a University of Strathclyde team provide O.R. support for planning and implementation of offshore wind farm installations

41 SUPPORTING THE UK’S RESPONSE TO AN INTERNATIONAL PUBLIC HEALTH CRISIS
Phillippa Spencer and colleagues report dstl analysts’ contribution to the UK Government’s response to the 2014 ebola crisis in Sierra Leone

46 PROBLEM SOLVED
Neil Robinson describes Dr Karin Thörnblad’s work to optimize heat treatment schedules at GKN Aerospace’s plant in Sweden

4 Seen elsewhere
Analytics making an impact

11 Making an impact: in praise of small data and big thinking
Mike Pidd tells us not to get carried away by big data, but welcome its availability and add value by creating models that use it

17 Universities making an impact
A brief report of a postgraduate student project

36 Forecasting: what an O.R. approach can do for you
John Boylan explains how organisations can improve their forecasting performance

51 Small data
Geoff Royston argues that big data is not going to lead to the demise of small data – and that O.R. will continue to use both

© WSV Kiel-Holtenau. Accessible at: http://www.wsa-kiel.wsv.de/Service/gallery.php.html
SEEN ELSEWHERE
© Design Pics Inc / Alamy Stock Photo
SEPSIS STINKS
A predictive analytics tool developed at the Mayo Clinic, the Sepsis “Sniffer” Algorithm (SSA), promises to help clinicians identify high-risk patients more quickly and accurately than manual methodologies. An article in the Journal of Nursing Care Quality (32, pp. 25–31) reports on a multihospital health system study that found that the SSA reduced the chances of incorrectly categorizing patients at low risk for sepsis while detecting high-risk situations in half the time it typically takes for clinicians to recognize symptoms and begin treatment. The tool also reduced redundant nursing staff screenings by 70% and cut manual screening hours by up to 72%. The SSA electronically monitors patients to assign a risk score and triggers an alert requesting nurses to perform a manual Nurse Screening Tool (NST) whenever a patient’s risk increases. Nurses were expected to complete the NST within fifteen minutes of the algorithm’s alert. The predictive analytics tool halved the time between initial symptoms and detection while also decreasing average patient length of stay by about one day, though differences in mortality rates were not statistically significant. “Leveraging digital alert technology, such as the SSA, may identify sepsis risk earlier and reduce manual surveillance efforts, leading to more efficient distribution of existing nurse resources and improved patient outcomes,” the authors concluded.
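The article does not give the SSA’s actual scoring rules, but the general pattern it describes (continuous monitoring, a computed risk score, and a timed request for a manual screen) can be sketched in a few lines. Everything below, including the vital-sign thresholds, the alert threshold and the 15-minute window, is invented for illustration; it is not the Mayo Clinic algorithm.

```python
# Toy illustration of a threshold-based alert monitor, not the Mayo Clinic SSA.
ALERT_THRESHOLD = 3
NST_DEADLINE_MINUTES = 15

def risk_score(vitals):
    """Crude additive score from a few routinely recorded observations (illustrative cut-offs)."""
    score = 0
    if vitals["heart_rate"] > 90:
        score += 1
    if vitals["resp_rate"] > 20:
        score += 1
    if vitals["temperature"] > 38.3:
        score += 1
    if vitals["wbc_count"] > 12:
        score += 1
    return score

def monitor(patient_id, vitals, now_minutes):
    """Return an alert asking for a manual Nurse Screening Tool if the score crosses the threshold."""
    if risk_score(vitals) >= ALERT_THRESHOLD:
        return {"patient": patient_id,
                "alert": "perform Nurse Screening Tool",
                "due_by": now_minutes + NST_DEADLINE_MINUTES}
    return None

print(monitor("P-001",
              {"heart_rate": 104, "resp_rate": 24, "temperature": 38.6, "wbc_count": 13.1},
              now_minutes=600))
```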
WOLF! WOLF!

An increasing wolf population in the Umbria region of central Italy has caused worries and protests from local communities, alarmed at possible predation against domestic livestock and the economic impact on their livelihoods. Researchers from the University of Portsmouth’s Centre for Operational Research and Logistics have used a new risk classification method, based on the analytic hierarchy process, to show that a high number of municipalities are at risk. Understanding the risk of wolf attacks on livestock allows authorities and landowners to properly manage and prevent any potential human-wolf conflict and provide adequate damages for lost livestock.

Lead author of the study, Professor Alessio Ishizaka, said: “The design of effective conservation and management plans needs to be informed by an effective decision support system. Our study shows the hotspots that are at risk, which can help the local government in planning conflict mitigation strategies. The local government should focus their efforts and resources on these hotspots when implementing methods to control the wolf.” (See Ecological Indicators (2017), 73, 741-755).

COMPETING IN A DATA-DRIVEN WORLD

Apparently, big data’s potential just keeps growing! According to a new report, taking full advantage means companies must incorporate analytics into their strategic vision and use it to make better, faster decisions. The report from the McKinsey Global Institute (MGI), The age of analytics: Competing in a data-driven world, suggests that the range of applications and opportunities has grown and will continue to expand. Given rapid technological advances, the question for companies now is how to integrate new capabilities into their operations and strategies, and position themselves in a world where analytics can upend entire industries.

The report’s authors argue that the convergence of several technology trends is accelerating progress. The volume of data continues to double every three years as information pours in from digital platforms, wireless sensors, virtual-reality applications, and billions of mobile phones. Data-storage capacity has increased, while its cost has plummeted. Data scientists now have unprecedented computing power at their disposal, and they are devising algorithms that are ever more sophisticated.

The authors conclude that data and analytics are already shaking up many industries, and the effects will only become more pronounced as adoption reaches critical mass, and as machines gain unprecedented capabilities to solve problems and understand language. Organizations that can harness these capabilities effectively will be able to create significant value and differentiate themselves, while others will find themselves increasingly at a disadvantage.
WATCH IT
Scientists at MIT have developed an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vital signs. This new deep-learning system could someday serve as a ‘social coach’ for people with anxiety or Asperger’s, they say. Tuka AlHanai and Mohammad Mahdi Ghassemi have built an algorithm that can analyse speech and tone from data captured by smart watches. This data can be ‘analysed’ to detect what emotion a person is roughly feeling for every five-second block of conversation. In one example, a person recalled a memory of their first day in school, and the algorithm identified the moment the tone shifted from positive, through neutral, down to negative.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” said graduate student AlHanai, who worked with PhD candidate Ghassemi. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.” Ghassemi commented: “As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions. Our results show that it is possible to classify the emotional tone of conversations in real-time.”

For analysts, the capability of tapping into emotion data which could be streamed from individuals via their wearables to IoT-connected tech signifies a whole new phase of retail and predictive analytics. Individuals could be tracked, their emotional state and propensity for buying goods judged, and tailored incentivisation strategies to propel them toward retail ‘investment’ could be applied more effectively than ever before. See: http://news.mit.edu/2017/wearable-ai-can-detect-tone-conversation-0201

SEATING PLANS

Rhyd Lewis and Fiona Carroll, researchers in South Wales, are concerned with designing seating plans for large events such as weddings and gala dinners so that guests are seated at the same tables as friends and family, and, perhaps more importantly, are kept away from those they dislike. This is a difficult mathematical problem, but their heuristic algorithm is used on the commercial website www.weddingseatplanner.com, which currently receives approximately 2000 hits per month. They discuss their work in the Journal of the Operational Research Society (67, 1353–1362). They try to strike the right balance between being useful to users, while also being easy to understand. To this end, in their online application they allow only three different choices: ‘Rather Together’, ‘Rather Apart’, and ‘Definitely Apart’.
THE INTERNET OF THINGS
Gartner, Inc. forecasts that 8.4 billion connected things will be in use worldwide in 2017, up 31% from 2016, and will reach 20.4 billion by 2020. Not surprisingly, China, North America and Western Europe are driving the use of connected things, and the three regions together will represent 67% of the overall Internet of Things (IoT) installed base in 2017. The consumer segment is the largest user of connected things with 5.2 billion units in 2017, which represents 63% of the overall number of applications in use.

“Aside from automotive systems, the applications that will be most in use by consumers will be smart TVs and digital set-top boxes, while smart electric meters and commercial security cameras will be most in use by businesses,” says Peter Middleton, research director at Gartner. In addition to smart meters, applications tailored to specific industry verticals (including manufacturing field devices, process sensors for electrical generating plants and real-time location devices for healthcare) will drive the use of connected things among businesses through 2017, with 1.6 billion units deployed. However, from 2018 onwards, cross-industry devices, such as those targeted at smart buildings (including LED lighting, and physical security systems) will take the lead as connectivity is driven into higher-volume, lower cost devices.

While consumers purchase more devices, businesses spend more. In 2017, in terms of hardware spending, the use of connected things among businesses will drive $964 billion. Consumer applications will amount to $725 billion in 2017.

“IoT services are central to the rise in IoT devices,” says Denise Rueb, research director at Gartner. “Services are dominated by the professional IoT-operational technology category in which providers assist businesses in designing, implementing and operating IoT systems,” Rueb adds. “However, connectivity services and consumer services will grow at a faster pace. Consumer IoT services are newer and growing off a small base. Similarly, connectivity services are growing robustly as costs drop, and new applications emerge.”
SUPPLY CHAIN ANALYTICS TOOLS CAN REDUCE BLOOD PRESSURE

Blood bank transfusion bags. © blickwinkel / Alamy Stock Photo

The future for blood supply chains is fraught with uncertainty, says Anna Nagurney, Professor of Operations Management, University of Massachusetts. Demand from hospitals has fallen due to changing practices in surgery. Coupled with a relatively strong supply, this has given US hospitals the upper hand while negotiating with the suppliers. However, with significantly less blood being collected, there is a danger that there could be a shortage if there was a major disaster. Furthermore, there could be another rise in demand in coming years due to population increases, and changing demographics, such as baby boomers’ aging. The unpredictability of natural and man-made disasters requires all blood banks to stay alert and be responsive to fluctuating demand and supply.

It is therefore imperative to apply supply chain analytics tools derived from industry to assist in both supply side and demand management to make for the best utilization of a perishable lifesaving product that cannot be manufactured – human blood. Anna and her colleagues have researched blood supply chains, from enhancing their operations with collection, testing and distribution to hospitals, to minimize costs as well as risk and waste and to optimize the supply chain network design. More recently, they have turned to the assessment of mergers and acquisitions in the blood supply chain, since some of its evolving features have taken on the characteristics of corporate supply chains, from which lessons can be learned. (More at: http://bit.ly/2kKSlFE)

O.R. IN THE ENERGY INDUSTRY

A special issue of Interfaces (Vol. 46, No. 6, 2016) presents a tutorial and five applications of O.R. in the energy industry. The applications include:

• optimizing plant dispatch decisions for Mexico’s power system operator. Over 10 years, it is expected that there will be more than $20 million in total savings;
• managing oil and gas pipelines in China, which is expected to generate an extra 2.1 billion Chinese yuan (£0.25bn) for China National Petroleum Corporation’s natural gas usage between 2016 and 2020; and
• securing cost-effective coal supplies for Tampa Electric Company’s power plants. It is estimated that the implementation of this model can provide annual fuel-cost savings of 2-3%, which translate to millions of dollars of savings in total fuel costs.
INCREASING HOTEL PROFITS
Standby upgrade programs are an innovative way for hotels to increase annual revenue while also filling frequently unused premium rooms, creating awareness for unique room features, and improving guest satisfaction and loyalty. However, an article in Manufacturing & Service Operations Management (19, 1-18) finds that the success of a standby upgrade program is directly tied to the type of guests who frequent the hotel, and the types and quantity of rooms available. The researchers found that standby upgrades are especially lucrative when the hotel has a high ratio of premium-to-standard rooms and guests may not be able to predict the chances of being awarded a standby upgrade, and can lead to a 30-35% revenue increase. Furthermore, the authors found that upmarket hotels whose rooms have unique features (e.g., city vs. ocean views) can benefit more from standby upgrades than those with mostly standard rooms.

MINING THE PANAMA PAPERS
The leak of 11.5m files from the law firm Mossack Fonseca, the so-called Panama Papers, allowed us to see the offshore tax accounts of the rich, famous and powerful. At 2.6 terabytes of data, it was the biggest leak in history. How did the journalists sift through the files and pick out meaningful data? In an article in Analytics Magazine, Emil Eifrem, CEO of Neo Technology, explains that the answer is found in graph database technology. This enables relationships between data to be found and understood. According to Mar Cabra, head of the data and research unit at the International Consortium of Investigative Journalists, graph database technology is “a revolutionary discovery tool that’s transformed our investigative journalism process”. Unlike relational databases, which store information in rigid tables, graph databases utilise structures made up of nodes, properties and edges to store data, and then map the links between required entities.
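As a rough illustration of that node-and-edge idea, the toy sketch below builds a tiny graph in plain Python and walks it to find connected entities. It is not Neo4j or any other graph database engine, and the entities are invented rather than drawn from the Panama Papers data.

```python
# Toy graph of entities and relationships, stored as an adjacency list.
from collections import defaultdict

edges = defaultdict(list)   # node -> [(relationship, other node), ...]

def add_edge(source, relationship, target):
    edges[source].append((relationship, target))
    edges[target].append((relationship, source))   # store both directions so we can traverse either way

add_edge("Person A", "OFFICER_OF", "Shell Co 1")
add_edge("Person B", "OFFICER_OF", "Shell Co 1")
add_edge("Shell Co 1", "REGISTERED_BY", "Law Firm X")

def connected_to(start):
    """Walk the graph to find every entity reachable from a starting entity."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for _rel, other in edges[node]:
            if other not in seen and other != start:
                seen.add(other)
                stack.append(other)
    return seen

print(connected_to("Law Firm X"))   # -> {'Shell Co 1', 'Person A', 'Person B'}
```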
OPTIMISING EFFICIENCY IN THE NATIONAL POLICE AIR SERVICE

GAIL LUDLAM

THE NATIONAL POLICE AIR SERVICE (NPAS) was a ground breaking concept and the first truly national police collaboration. In 2009 a review of the capacity and capability of police aviation was conducted, with professional agreement that the concept of a ‘national police air service’ was required. The 2010 Comprehensive Spending Review provided further imperative to ensure that any solution was affordable whilst delivering an acceptable level of operational capacity. NPAS was formed in 2012 to provide a national airborne response capability with borderless tasking, with the aim to deliver a cost effective service which yields actual savings for the Police Service. West Yorkshire Police (WYP) volunteered to act as the lead Force for the development and delivery of NPAS. Before NPAS, there were 30 bases and 31 aircraft serving either single Forces or a group of Forces. By 2014 there were 23 bases and 24 aircraft across England and Wales, including the Forces that were still to join NPAS. Even though the implementation of NPAS resulted in substantial savings, 23% (£11m) between 2012 and 2015, there was still a requirement to identify and secure further savings.

NPAS have governance arrangements to provide reassurance that the service being delivered is fit for purpose and demonstrates value for money. One aspect of this is the National Strategic Board, comprising diverse stakeholders such as: Chief Officers and Police and Crime Commissioners giving, respectively, an operational and financial perspective across the six NPAS regions; a Home Office representative; the Chief Officer lead for Police Aviation from the National Police Chiefs Council (NPCC); other specialist policing representatives; and representatives from NPAS covering operational delivery, safety and continuing airworthiness. The Board’s primary focus is to hold the lead Force to account, agree the operational model for NPAS each year and set the budget from which the model must be delivered.
THE NEED FOR MODELLING
NPAS is interested in using quantitative methods to identify how the savings from the Comprehensive Spending Review can be achieved while still delivering the level of service required. In early 2014, WYP worked alongside the Home Office to produce a spreadsheet model looking at the location of bases, focusing on the benefit covered within a certain response time. This benefit was based on socio-economic analysis of the cost of crimes or the cost of not providing support. However, this model didn’t look at the level of service provided. Therefore, in September 2014, NPAS commissioned WYP to provide independent analysis and an evidence-based view on a new operating structure, testing the indicative performance and viability of alternative base numbers and locations. The analysis also had to appeal to the diverse members of the board covering the operational and financial risks, and to take a logical approach to avoid any emotional bias in the decision-making process.
THE MODEL
A simulation model was built in Witness to look at the operational ability of NPAS responding to calls for support. This included modelling:
• All 23 bases and 24 helicopters in England and Wales, and the new concept of operating fixed wing aircraft in NPAS.
• Over 300 operating areas for 43 Forces, where these areas are the local authorities or police districts within a Force.
• Different types of support, e.g. searching for a suspect or pursuit of a vehicle.
• Characteristics for these calls for support, where the priority and duration change for each type of support.

The analysis had to appeal to the diverse board covering the operational and financial risks, and avoid any emotional bias in the decision-making process

The process of an aircraft responding to a call starts with generating and allocating a task to an operating area within a Force, then identifying which type of support is required along with the duration and priority of the task. The task is then allocated to the operating area which prompts the model to find the nearest aircraft that can respond. This takes into account whether it is currently responding to a task and when it will be free to respond, and also the remaining operating time of the aircraft based on the fuel, including how long it would take to respond to the task and return to base to refuel. Once the nearest available aircraft is found, it attends and completes the task.
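The dispatch logic described above (find the aircraft that can reach the task soonest, allowing for when it will be free and for whether it has enough fuel to attend the task and return to base) can be sketched roughly as follows. This is an illustrative simplification, not the Witness model itself; the positions, speeds and endurance figures are invented.

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_aircraft(task, aircraft, now):
    """Pick the aircraft that can reach the task soonest, subject to having enough
    remaining endurance to fly out, complete the task and return to its base."""
    best, best_arrival = None, float("inf")
    for ac in aircraft:
        d_task = distance(ac["position"], task["location"])
        d_home = distance(task["location"], ac["base"])
        flight_time = (d_task + d_home) / ac["speed"]
        if flight_time + task["duration"] > ac["endurance_left"]:
            continue  # not enough fuel to attend and return to base
        arrival = max(now, ac["free_at"]) + d_task / ac["speed"]
        if arrival < best_arrival:
            best, best_arrival = ac, arrival
    return best

# Toy example: two helicopters, one task (made-up coordinates and times)
aircraft = [
    {"position": (0, 0), "base": (0, 0), "speed": 2.0, "endurance_left": 90, "free_at": 0},
    {"position": (30, 40), "base": (30, 40), "speed": 2.0, "endurance_left": 25, "free_at": 10},
]
task = {"location": (27, 36), "duration": 30}
print(choose_aircraft(task, aircraft, now=5))
```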
Inputs

The inputs used in the model can be broken into three categories – data, engagement and experiments. The inputs gathered through analysing the data include profiles for the demand covering the different characteristics and types of support requested along with when and where support was requested. These inputs also included the chance of a task being abandoned due to poor weather, as this was an important element NPAS wanted capturing. There are some aspects of air support that couldn’t be captured through analysing the data and these required engaging with NPAS and specialists, e.g. pilots. These inputs include information on base operating hours, the number of hours a pilot can fly in a shift and the maintenance schedules for each type of aircraft. The last set of inputs are those that are mostly used in the experiments and these include the number of bases and their locations, and the type of aircraft located at each base.

Outputs

The outputs collected from the model and saved in Excel were categorised by aircraft, Force, time period and base, including details on hours flown, response time, number of calls for support and number of tasks completed.

Assumptions

As the process of providing air support is complex, with various factors affecting the response (e.g. weather, maintenance, the aircraft being interrupted to attend another task, available fuel), some assumptions and simplifications had to be made. These simplifications underestimate the performance that could actually be achieved. The key assumptions and simplifications are:

• Refuelling – in the model, refuelling has been simplified so the aircraft only lands at the allocated base for that aircraft. To simplify the model further, the aircraft will take time refuelling each time it lands at a base.
• Multiple aircraft responding to one task – when analysing the data there were some tasks where an aircraft would take over from the first responding aircraft to allow it to refuel. In this model, it is simplified to only one aircraft responding to a task, so if the task is going to take longer than the endurance of the aircraft it won’t receive air support and will be lost.
• Shifts – no overtime is considered in this model. If a task will take an aircraft beyond the end of the shift, it won’t respond to the task.
• Travelling to a task – once an aircraft has been allocated a task to respond to, it can’t be interrupted to respond to a different task in the model. In reality, NPAS prioritises tasks on threat, risk and harm which might mean redeploying aircraft from one task to another.
• Daily aircraft checks – as there isn’t a set time to complete the aircraft checks each day, the shifts have been modified to provide 30 minutes at the beginning of each shift to check the aircraft. This is a mandatory requirement and must be done in each shift.
• Fixed wing patrol areas – fixed wing aircraft return to a central point called their operating location between tasks. In reality they will operate where necessary.
The model provides a visual representation of the service, showing a map of the bases, tasks, and aircraft responding to tasks. There is also an information feed of when tasks are generated.
HOW THE MODEL HAS BEEN USED
In December 2014, the model was introduced to the National Strategic Board to engage the stakeholders and to build acceptance and buy-in before any results were presented. This involved running through the details of the model and showing a video of the model running so they could see aircraft responding to tasks. The December minutes for the National Strategic Board state ‘It was agreed by the Board that this was an excellent piece of work and it was confirmed that the modelling would be flexible enough for other emergency services to use.’
It was agreed that the modelling would be flexible enough for other emergency services to use
A range of saving options between 7% and 28% were presented to the Board in January 2015 with the anticipation of further budget cuts in the 2015 Spending Review. These options included changing the:
• Number of bases and locations.
• Mix of fleet of fixed wing and helicopters, including the number of resilience aircraft.
• Type of aircraft at each base.
• Locations of fixed wing patrol areas.
• Base operating hours.
• Distribution of the priority of the call for support.

Details of which bases were selected were removed from the options to avoid any emotional bias caused by stakeholders having to make a decision on the future of their local base. The Board members considered the indicative service level and the cost of each option to select one for further development. This developed option was presented to the Board in February 2015, when the Board agreed on a new 15 base operating model with a fleet of 19 helicopters and 4 fixed wing aircraft, with each base operating 24/7. This option achieved an approximate saving of 14% of the current budget, in addition to the 23% originally saved from nationalising police air support.

A series of road shows were held across the country to promote the new operating model and the simulation model to the Forces in England and Wales.
These roadshows opened the simulation model to further scrutiny and challenges, and provided the chance to build further acceptance in the model. Since this agreement has been made, NPAS has requested further variations to the simulation model to see how different conditions affect performance.
HOW THE MODEL HAS MADE AN IMPACT
The simulation model has made an impact throughout the process of identifying and agreeing a new operating model. It has secured diverse stakeholder buy-in, enabling agreement across the National Strategic Board where the operational and financial perspectives can come to an agreement on which level of service and financial savings are suitable for each perspective. The model also removes any local emotional bias from the decision making process by having the ability to remove the details of which bases have been selected.

Along with defining the future operating model, the simulation model also provides what-if analysis on any other decisions that can be modelled. The results of the model have informed a review on the funding formula which has been agreed. Through the use of the model and creating an independent methodology, this work has also prompted the National Strategic Board to consider a new fleet plan and estates plan; the National Police Chiefs Council has adopted a new deployment model with three different priorities of calls; and it has informed a discussion around developing a balanced scorecard with the outcomes expected to assess NPAS.
the simulation model provides what-if analysis on any other decisions that can be modelled
As the main aspect of the simulation model is how aircraft respond to tasks, the model has various possible future uses. With the range of inputs that can be modified, the model can be used to run additional options if further savings are required, and different support tasks that NPAS provide can be added along with requests for support from other organisations. There is also the possibility that the model could be used by other air support providers wishing to look at a national service.
COMMENTS FROM NPAS

Tyron Joyce, Chief Operating Officer for NPAS, said that ‘the NPAS National Board and colleagues across the country support NPAS delivering a safe and effective service. I was searching for a methodology of evidential mapping combined with professional judgement to identify the potential cost and service delivery of various operational models. The early introduction of indicative mapping was essential in securing and maintaining the trust and confidence of all of our many stakeholders. There were many additional benefits including the active consideration of the value of air support to policing and how it positively supports successful outcomes. As a direct result of this work we were able to develop the new operating model and base locations and latterly a completely new funding model. I have no hesitation in describing its use as essential during this process.’
Gail Ludlam joined WYP as a Business Change Specialist in 2013 after completing both a BSc and MSc in O.R. and Management Science at Lancaster University. Gail has worked on cross-organisational projects and continues to look at applications for O.R. to help management decision making across WYP.
MAKING AN IMPACT: IN PRAISE OF SMALL DATA AND BIG THINKING
Mike Pidd

A REASON TO RUN FOR THE HILLS
I recently found myself in a formal meeting which included an agenda item headed something like ‘Organisational transformation through data analytics.’ When I saw this I wanted to run for the hills. I’m not saying that organisational transformation is impossible. Nor am I saying that applying data analytics isn’t useful. I do worry, though, when a fashion (in this case data analytics; aka ‘big data’) appears as a guarantor of a brave new world for a complex and, inevitably, messy organisation.

I’ll spare you the details, but the conversation that followed brought back memories of another, back in the late-1970s. At the time I was visiting one of Britain’s motor manufacturers. I’d gone to discuss some possible simulation modelling, but the person I met was much more interested in telling me about a large computer system project he was leading. I asked him how much it would cost and he quoted a number that seemed very large, at least to this O.R. person at the time. Gently, or so I thought, I asked how he could justify the cost. “Simple,” he said, “it’ll save at least 10% of our stockholding costs.” Apparently this was a very large number. Even more gently, I asked how he could be sure about this. “Of course it will,” he said, “we have millions tied up in stock, it can’t fail to cover its costs.”

You will search in vain for this motor company now. I doubt this investment was solely responsible for its demise, but it probably characterised its way of doing things.
TO A POINT, LORD COPPER
I’m no fan of the term big data. Does it imply a few very large numbers, lots of small ones, or some vague idea that a bigger bag of numbers is always better than a small one? Some people may insist that investing in large data sets must surely lead to improvement, though they may allow some concern about moral rights to privacy. I hope that anyone making such a claim has at least one finger crossed, preferably more than one. I’m much less sure that this improvement, let alone transformation, will follow. OR/MS types know better than this; or at least I hope so. Ginormous raw data sets guarantee nothing. The largest and fastest computers in the world can do nothing with these data sets unless some intelligence is applied.

So, what use are large data sets? Should we always go big on big data? As ever, this is not a straightforward question to answer. As I type this, Andrew Haldane, currently the Bank of England’s Chief Economist, has apologised for errors in the Bank’s economic forecasts. He issued his mea culpa on the failure of most economists to predict the crash of 2008 and the Bank’s problematic forecasts for life after Brexit. Now, he said, was the Michael Fish moment for economists. I don’t think he was referring to the interesting suit worn by Mr Fish on his famously inaccurate weather forecast in 1987.

As I listened to the radio I heard one commentator claim that weather forecasts are now much better than in Mr Fish’s day because they are based on much larger data sets than in the past. Let’s assume that the forecasts really have improved, though I suspect there is much more to this improvement than bigger data sets. Ergo, he argued, if economists had much bigger data sets they would produce admirably accurate forecasts and the world would be a much better place. When I heard this comment I didn’t know whether to laugh or cry. It’s probably good news for data storage companies, but it seems to me to be a very silly claim to make – assuming that the BBC hadn’t edited out any caveats uttered by this commentator.
ALGORITHMS, MODELS AND DATA
Curiously, a word that’s been very common in OR/MS for a long time crept into widespread use in the last year. I’ve seen it much used in the press, which may tell you which type of newspaper I read, and heard it discussed many times on TV and the radio. The word is algorithm. Algorithms seem to be praised and blamed in equal measure. They help reduce costs by ensuring consistent decisions and actions, but when they go wrong… Readers will, I hope, agree with me that algorithms are more important than ginormous data sets.
Because algorithm is a hard word to spell and to say, I’ll use the term model, instead. Model building and model use, often based on mathematics and statistics, form the technical core of what OR/MS people do. Models, aka algorithms, are what add value to data, whether lovingly collected in small amounts by hand, or harvested in large volumes by machines. It seems to me that data is used in two different ways in O.R. It is used to develop a model, determining the form it will take and its parameters. Data is also used to produce results from that model. Thus some historical data may be used to parameterise a time series model for forecasting and freshly collected data may be run through the model each week to develop forecasts and to update parameters (yes, I do know it’s a bit more complicated than that).
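As a minimal sketch of that two-stage use of data, the snippet below parameterises a simple exponential smoothing model from historical data and then runs freshly collected weekly observations through it, updating the level as it goes. The smoothing constant and the numbers are illustrative only, not from any real series.

```python
def fit_level(history, alpha=0.3):
    """Initialise the smoothed level from historical, purpose-collected data."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def weekly_update(level, new_obs, alpha=0.3):
    """Each week: issue a one-step-ahead forecast, then fold in the new observation."""
    forecast = level
    level = alpha * new_obs + (1 - alpha) * level
    return forecast, level

history = [102, 98, 105, 110, 101, 99]    # past demand used to parameterise the model
level = fit_level(history)
for week, obs in enumerate([104, 97, 112], start=1):
    forecast, level = weekly_update(level, obs)
    print(f"week {week}: forecast {forecast:.1f}, actual {obs}")
```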
TABLE D’HÔTE OR À LA CARTE?
Model development requires big thinking but may not need big data. Years ago, I wrote a book about modelling in OR/MS that has sold rather well and probably caused many students to suffer. In it, I wrote that analysts should prefer their data à la carte rather than table d’hôte when model building. In writing this, I was no Mystic Meg foreseeing the current, rather curious popularity of food programmes on TV. Indeed, now I think about it, I should have chosen other metaphors, because my aim was to compare specially-collected data with those that are routinely harvested. Few restaurants will allow their customers to specify all the ingredients in a meal. I was trying to stress that knowing the provenance of data used in model building is absolutely crucial and is much, much more important than using whatever data happens to be lying around at the time.

I am, though, well aware that asking for specially-collected data is a counsel of perfection. Collecting data costs money and time and these are usually in shorter supply than we wish. Thus, compromises are inevitable. However, even if we cannot have purpose-collected data for building and parameterising our models, we can make every effort to check its provenance. That is, we can rigorously check whether it is fit for the purpose in hand.

BIG THINKING ON SMALL DATA
I find it helpful to think of a spectrum of intended model use and have written about this elsewhere. I say intended model use, but once a model is released into the wild it may occasionally be used in ways that were never intended. This is a gross simplification, but it may be helpful to think of two extremes of model use: those used to support routine decision making and those used to help think about complex issues. Both can be used as tools for thinking.

By models that are used to support, or even to replace, routine decision making, I mean those that are run many times and may even run without much or any human intervention. Once developed and in use, they are supplied with data and use this to either propose or enact a course of action. Such models need to be thoroughly and regularly validated and verified. Thus it is crucial that they are built and parameterised with as complete and accurate a data set as possible. Likewise, the data stream on which the recommendations and actions that stem from the models’ frequent use depends, needs to be complete and accurate. The case for expensive, big data for such modelling is clear. Harvesting such data from a published, set menu, that is, table d’hôte, is the way to go.

However, not all models are used in this way. Others are used to support decision making and thinking and may only be used once, or a few times. Examples may be one-off investments or the support of a group aiming to establish a strategic direction and commitment. Here the case for big data is less clear. Indeed, there is unlikely to be big data available in many such cases. In many such cases there is not even a reliable small data set available and it may have to be specially-collected or even estimated. That is, it cannot be ordered table d’hôte, and any data supplied ready to eat on a plate may not be what is needed. In such cases, we may have to make do with small data and big thinking.

So, let’s not get carried away by big data. Instead, let’s welcome its availability but continue to argue that OR/MS people add value by creating models that use it. Also, let’s not be ashamed of applying big thinking to small data.

Mike Pidd is Professor Emeritus of Management Science at Lancaster University.
UNTANGLING THE KIEL CANAL

BRIAN CLEGG

THE KIEL CANAL may be less famous than Panama or Suez, but like its more glamorous cousins, it has a vital role in providing a short cut for shipping traffic. First fully opened in 1895 and 98 kilometres in length, the canal cuts across from the North Sea to the Baltic, saving ships from taking a detour of over 500 kilometres around Denmark. Arguably Kiel should have a little more of the fame than it does, as it is the canal with the highest traffic level in the world, with more shipping passing through than Panama and Suez combined. This reflects a significant rise in usage, nearly tripling from around 60 million tonnes in 1996 to over 170 million tonnes in 2008. At any one time, between 40 and 50 vessels take the 8 to 10-hour trip through the canal, in one direction or the other. And using both directions is a problem – because much of the canal is not wide enough for the larger ships that are travelling through to safely pass each other. Smaller vessels are fine, but above a certain size passing becomes unsafe in vessels that were designed for navigation at sea, rather than the much tighter confines of inland waters. Taking terminology from the railways, some canals have wider sections called sidings, where ships can be held at the edge of the waterway while others pass. The Kiel canal depends on a series of twelve sidings along its length to provide these passing places.

When it was decided to improve the flow through the canal, the obvious solution might seem to be to make the whole canal wide enough to avoid the need for sidings at all, but this would be too costly, particularly where the canal passes through the middle of a city such as Rendsburg. Instead, the plan was to expand existing sidings and incorporate additional ones to improve the flow. But deciding on the best layout using the current manual scheduling method, where experienced planners decide which ships should wait at which points, was likely to result in failure. It was time to bring in operational research in the form of Professor Rolf Möhring and his team.

Möhring studied mathematics at the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, a German research university, and became interested in operational research after reading Eugene Lawler’s 1976 book Combinatorial Optimization, which introduced many to the approach of using mathematical networks to solve optimisation problems. A network consists of a set of nodes – points on the network – and links between those nodes. The network can represent anything from a physical network such as a telecommunications network or road network to a virtual network of possible states for a system, where traversing a link takes a system from one state to another.
Kiel is the canal with the highest traffic level in the world, with more shipping passing through than Panama and Suez combined
Möhring started to work on scheduling problems with engineers at RWTH Aachen, expanding his role into routeing. A breakthrough to real world applications came with the foundation in 2002 of Matheon (http://www.matheon.de), a funded research centre for applied mathematics. Through the centre, Möhring has run numerous projects for industrial companies covering scheduling in production and traffic, routing in traffic, logistics and telecommunication. It was Möhring’s team’s work on the routing of automated guided vehicles (AGV) in Hamburg harbour that brought them to the attention of WSV, Germany’s federal waterways and shipping administration. The high-tech Hamburg container terminal uses around 70 automated vehicles to carry containers from place to place in its 1.4 kilometre-long collection of storage spaces.

Spotting the details of the Hamburg project on Matheon’s website, the Kiel canal administrators felt that the ability to optimise the routes in the complex dance of the loading vehicles was an ideal background to deal with the routeing of ships through the canal to minimise the overall waiting time in sidings, a mix of Möhring’s specialities, routing and scheduling.

For the AGV problem, Möhring and his team had developed an algorithm that built up a picture sequentially, journey by journey. For each new journey, the system would plan a route which treated the vehicles already in motion as temporary road blocks, a process that in mathematical terms was like finding a route through a graph – a collection of nodes and links – where some of the edges were blocked for part of the time.

Möhring: ‘Let’s translate that to car traffic. You want to drive from A to B in a city and have for every street a list of time intervals in which that street is closed, say because there is a garbage collection going on. Now you need to find the quickest route for yourself. But there is one essential and computationally important difference from car traffic. You may now wait at any street along your route for streets ahead to become free again. This waiting will create additional blocks. In car traffic, you would wait behind the garbage collection, but here it can be in any street. We could show that this can be done efficiently by a variant of the famous algorithm by Dijkstra for computing a shortest path in a network, which then had to be adapted to all the real world conditions such as acceleration, precise positioning under a crane, turning behaviour of AGVs, and so on.’
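A heavily simplified toy version of that idea is sketched below: a Dijkstra-style search for the quickest route through a network whose edges are closed during given time intervals, with waiting allowed at any node. It ignores the real-world conditions Möhring mentions, and the fact that waiting itself creates new blocks; the network and closure times are invented, and this is not the team’s code.

```python
import heapq

def earliest_departure(t, travel, closures):
    """Earliest time >= t at which the edge can be entered so that the whole
    traversal avoids every closed interval (from_t, to_t)."""
    dep, changed = t, True
    while changed:
        changed = False
        for a, b in closures:
            if not (dep + travel <= a or dep >= b):   # traversal would overlap a closure
                dep, changed = b, True                # wait until the edge reopens
    return dep

def quickest_route(graph, blocked, start, goal, t0=0):
    """graph[u] = [(v, travel_time), ...]; blocked[(u, v)] = list of closed intervals.
    Because waiting at a node is always allowed here, earliest arrival dominates."""
    best = {start: t0}
    heap = [(t0, start)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == goal:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v, travel in graph[u]:
            dep = earliest_departure(t, travel, blocked.get((u, v), []))
            arrive = dep + travel
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return None

graph = {"A": [("B", 4)], "B": [("C", 3)], "C": []}
blocked = {("B", "C"): [(0, 10)]}                  # B-C closed from time 0 to 10
print(quickest_route(graph, blocked, "A", "C"))    # -> 13: wait at B until time 10
```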
When dealing with Hamburg harbour it was possible to plan each vehicle’s movement sequentially, allowing container vehicles to wait at pre-specified stops along the route should this be required. However, untangling the scheduling and routing on the canal brought in a whole new level of complexity. Möhring describes this as ‘NP-hard.’ Let’s unpick that.

‘NP-hard’ comes out of a classic problem in computing theory. ‘NP’ stands for ‘non-deterministic polynomial time’, and contrasts with ‘P’ for polynomial time. A problem that fits in class P can be solved by a computer in polynomial time, which means the maximum time taken to find a solution is proportional to a power of the number of components in the problem. So, for instance, it might take a maximum of n² time units where n things are involved. By contrast, the solution to an NP problem can only definitively be verified in polynomial time – if you have a solution, it will take a maximum polynomial time to check it. Taking one step further, solving an NP-hard problem requires an algorithm that could deal with any NP problem in polynomial time. It is thought (though not proved) that there is no algorithm that can deal with an NP-hard problem in polynomial time, typically taking instead exponential time – so, for example, it could take 2ⁿ time units to reach a solution. This means that as the number of possibilities to be considered increases, the time taken shoots up beyond practical values, making such problems only amenable to approximate solutions. The best-known NP-hard problem is the travelling salesman problem, which looks for the most efficient route for a salesman to take when visiting a number of cities. With more than a handful of cities, this rapidly becomes impossible to optimise, but methods are available that will come sufficiently close to optimisation that the approximation is acceptable – and this also applies to the canal problem.

Taking the approach used in the Hamburg docks of setting up routes sequentially would not have been effective with this kind of problem, as the geometry of the canal and the need to have multiple ships in play in both directions, with limited capacities in sidings, is beyond its capabilities. Similarly, though lessons could be learned from existing problems, such as the scheduling of trains on single track lines with passing places, the Kiel canal was an unusually complex environment and required ideas from both types of solution. The team combined the basic routing algorithm from Hamburg with a scheduling approach called ‘local search’, also employing a rolling time horizon, which planned the immediate two hours based on the ships in the canal plus those entering in those two hours. This was then reworked two hours later to bring in newly arrived ships and so on. This approach was necessary because ships don’t book their passage well in advance, only giving around 2 hours’ notice.
The local search aspect provides decisions on which ships must wait while others pass, and in which siding. Möhring: ‘Local search is a technical term meaning that we explore short sequences of successive scheduling decisions and take a sequence that reduces the total waiting time the most. It is roughly a clever move analysis like in chess. If ship A waits in siding X for ship B, then ship C can also pass but must wait in siding Y for ship D etc. Here “good” means that we save waiting time by a combination of such decisions.’

The team had plenty of data to work on, though the existing database only allowed for the production of printouts, so contractors had to be brought in to provide software access to the data. When the model was applied to the historical data it produced a similar shaped distribution of waiting times to that from the professionals operating the manual system, but managed a 25 per cent reduction in waiting times. This was all from an algorithm which typically took around 2 minutes to run. By looking at different options for the enlargement of the canal, in 2011 the model was able to provide recommendations for changes that would have the best impact on waiting times.
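The toy sketch below illustrates the local-search idea in Möhring’s description: each conflict between two ships is resolved by deciding which of them waits in a siding, and short sequences of these decisions are flipped, keeping any change that reduces total waiting time. The ships, conflicts and waiting-cost model are invented placeholders; in the real system the consequences of a set of decisions were evaluated by the routing model itself.

```python
import itertools, random

ships = {"A": 12, "B": 8, "C": 15, "D": 6}        # toy waiting cost per conflict (minutes)
conflicts = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("A", "D")]

def total_waiting(decisions):
    """decisions[i] in {0, 1}: which ship of conflicts[i] waits in a siding."""
    cost, last_waiter = 0, None
    for pair, d in zip(conflicts, decisions):
        waiter = pair[d]
        cost += ships[waiter]
        if waiter == last_waiter:
            cost += 5              # a ship held back twice in a row loses extra time
        last_waiter = waiter
    return cost

def local_search(decisions, rounds=100):
    best = total_waiting(decisions)
    for _ in range(rounds):
        improved = False
        # explore short sequences of decision changes (here: flip one or two decisions)
        for idxs in itertools.chain(itertools.combinations(range(len(conflicts)), 1),
                                    itertools.combinations(range(len(conflicts)), 2)):
            trial = list(decisions)
            for i in idxs:
                trial[i] = 1 - trial[i]
            c = total_waiting(trial)
            if c < best:
                decisions, best, improved = trial, c, True
        if not improved:
            break
    return decisions, best

start = [random.randint(0, 1) for _ in conflicts]
print(local_search(start))
```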
By this time, pressure had been somewhat reduced. The recession period from 2008, and particularly the impact of the financial crisis in Greece, saw a significant reduction in traffic through the canal, which has yet to return to pre-crash levels. At the same time, the German government has provided €265 million for improvements, reshaping some of the bottleneck bends, enlarging some sidings and providing a new lock chamber in Kiel. With guidance from the model, this work can be best applied to ensure that the canal operates efficiently into the future.

It was also hoped to provide automated guidance during the construction phase, when scheduling through the canal is likely to be particularly fraught. The team provided an option to maintain the code during this phase, but the contract was limited to a 3-year horizon, which because of the impact of recession has elapsed without construction beginning.

An obvious question, if the model is so much more effective than manual planning, is why it is not being adopted for ordinary, day-to-day running of the canal. Rolf Möhring commented:
‘We discussed that service with WSV, but it would have required to provide maintenance of the code for many years (adaption to new ships, changes of the canal’s topology etc.). This is not possible for a research team but would require a software company. Neither WSV nor my co-workers and I liked that idea much when the project started. Also, manual planning would still be needed in the presence of our algorithm, since we neglect nautical fine tuning (influence of strong winds, ships that are limited in their manoeuvring etc.). Nevertheless, WSV might still realize that possibility. They bought a well-documented copy of our source code with all the IP rights.’

Each of the 256 combinations changes the topology of the canal, and we ran our routing algorithm for every one of them
There is no doubt that such a model could have been incorporated in a turnkey system which would have enabled the operators of the canal to make use of it on a day-to-day basis, and it is possible that this may happen in the future. For the moment, though, the model still proved its worth in enabling planners to explore different options for the development of the canal and to look forward as far as 2025, taking in alternative scenarios for the changes in trade in this region.

Möhring: ‘We were given eight different local enlargement components (new sidings in predefined places, lengthening of certain sidings, a new lock chamber, widening certain parts etc). We studied all 256 combinations of these. Each combination changes the topology of the canal, and we ran our routing algorithm for every one of them, for a number of different future traffic scenarios provided by WSV. We then ranked these combinations by their average waiting time.’

And the work has the potential for extension to other similar problems like train timetabling, where again a combination of scheduling and routing is involved. The Kiel canal is a good example of the kind of problem that seems simple, but where the interaction of different components rapidly produces unmanageable complexity. This typifies the real world problems where O.R. excels, bringing together mathematics and an understanding of the mechanisms involved to provide solutions that keep trade flowing.

Brian Clegg is a science journalist and author who runs the www.popularscience.co.uk and his own www.brianclegg.net websites. After graduating with a Lancaster University MA in Operational Research in 1977, Brian joined the O.R. Department at British Airways, where his work was focussed on computing, as information technology became central to all the O.R. work he did. He left BA in 1994 to set up a creativity training business.
U N I V E R S I T I E S M A K I N G A N I M PAC T EACH YEAR STUDENTS on MSc programmes in analytical subjects at several UK universities spend their last few months undertaking a project, often for an organisation. These projects can make a significant impact. This issue features a report of a project recently carried out at one of our universities: Southampton. If you are interested in availing yourself of such an opportunity, please contact the Operational Research Society at email@theorsociety.com OPTIMISING BAGGAGE RECLAIM ASSIGNMENTS AT HEATHROW AIRPORT (Julie Stanzl, Southampton University, MSc Operational Research)
Heathrow Airport is a major European hub with approximately 200,000 passengers arriving or departing each day, and this number is likely to increase. It has been recognised for high service standards, being named the ‘Best Airport in Western Europe’ for the second consecutive year at the Skytrax World Airport Awards in 2016. To help maintain its competitive position and high passenger service, the airport ground operations have to run efficiently to cope with the predicted increase in demand. Julie’s project focused on the efficient use of baggage reclaim facilities. As a flight approaches Heathrow, a decision is made as to which baggage reclaim is to be used for that flight. This decision is especially important because customer surveys suggest that baggage reclaim is one of the main drivers of overall satisfaction with the arrival experience. However, there are other stakeholders to be considered. Airlines often prefer a reclaim close to their desk. There are also preferred assignments for baggage handlers that allow luggage to be unloaded speedily and thus help to avoid delays in unloading subsequent flights. Julie’s objective was to design models and algorithms for the Baggage Reclaim Assignment and Scheduling Problem (BRASP) in which flights are to be
assigned to baggage reclaims and the times at which the bags are loaded onto the carousels are to be determined. BRASP is inherently a dynamic/online problem since decisions have to be made before full information about future arriving flights is known. However, the corresponding static/offline problem with full information available at the outset is also of interest, since it provides a baseline against which solutions of the dynamic/ online problem can be evaluated. Julie developed an integer programming model for BRASP that considers the varied objectives of the different stakeholders such as: passenger crowding around and between reclaims; passenger waiting time; walking distance of passengers to reclaims; time after the flight arrival that the first and last bags are loaded onto the reclaim relative to targets; and delays in loading bags onto reclaims due to carousels reaching their capacity. These were weighted to form a single objective function. There are constraints that account for the rate at which passengers pass through immigration control and arrive into the baggage hall, and the rate at which bags are loaded onto the baggage carousel. Several heuristics were also developed since the integer program required long computation times in some cases.
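As a rough illustration of how such a weighted, single-objective assignment model can be written down, the sketch below uses the open-source PuLP library with invented flights, reclaims, weights and a simplified capacity rule; the actual BRASP model also schedules bag-loading times and handles many more constraints.

```python
# A deliberately small assignment model in the spirit of BRASP (all data
# hypothetical). Requires the open-source PuLP package: pip install pulp
import pulp

flights = ["BA123", "LH456", "EI789"]             # hypothetical flights
reclaims = ["R1", "R2"]                            # hypothetical carousels
walk = {("BA123", "R1"): 3, ("BA123", "R2"): 7,    # walking-distance scores
        ("LH456", "R1"): 5, ("LH456", "R2"): 2,
        ("EI789", "R1"): 6, ("EI789", "R2"): 4}
crowd = {"R1": 1.0, "R2": 1.5}                     # crowding penalty per flight
w_walk, w_crowd = 1.0, 2.0                         # stakeholder weights

prob = pulp.LpProblem("BRASP_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign",
                          [(f, r) for f in flights for r in reclaims],
                          cat="Binary")

# Weighted single objective: walking distance plus a crowding surrogate.
prob += pulp.lpSum((w_walk * walk[f, r] + w_crowd * crowd[r]) * x[f, r]
                   for f in flights for r in reclaims)

# Every flight gets exactly one reclaim.
for f in flights:
    prob += pulp.lpSum(x[f, r] for r in reclaims) == 1
# Capacity surrogate: at most two of these overlapping flights per reclaim.
for r in reclaims:
    prob += pulp.lpSum(x[f, r] for f in flights) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in flights:
    chosen = next(r for r in reclaims if x[f, r].value() > 0.5)
    print(f, "->", chosen)
```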
When Julie tested the integer program and the heuristics on real arrivals data for the off-line problem, average improvements in the objective function value relative to the solutions actually used at Heathrow were over 30% for the integer program and over 20% for the best of the heuristics. For the dynamic/online problem, a simulation model was created that exhibits similar characteristics to the real operating environment. A rolling horizon approach was used with solution updates produced by one of the heuristics. Results confirmed the improvements over the solutions that were actually used. The models developed in this project for BRASP capture the aspirations of the various stakeholders much more closely than previous models in the literature. Mark Powell, the Head of Performance and Planning for Heathrow Baggage Operations and sponsor of the project, stated that the project “has helped us make decisions on how we operate our baggage service, to keep improving passenger experience and baggage handling efficiency. The work has also provided insight to inform one of our multi-million IT development projects, saving time and money on requirements definition”.
© BT
L E V E R AG I N G O. R . TECHNIQUES FOR SMARTER FIELD O P E R AT I O N S GILBERT OWUSU, SID SHAKYA, ANNE LIRET AND ALI MCCORMICK
THERE IS AN ADAGE which says that when you fail to plan, you are planning to fail. Indeed, an organisation is only as good as the resources it has. Operational Research (O.R.) provides the enabling technologies to optimise resource utilisation, planning and scheduling. Service organisations, such as BT, employ field and office-based engineers, call centre agents and other personnel to install, deliver or terminate their services, as well as to upgrade, repair
or maintain their assets. A common scenario involves a customer requesting a service/reporting a fault which cannot be configured/resolved automatically and requires one or more engineering or other activities to take place either at the office, the customer’s premises and/or other facilities (e.g. telephone exchange building, a store to collect spares, etc.). The optimum servicing of customer requests is of prime importance to service companies as it both improves customer satisfaction and also reduces the workforce costs by, for instance, optimising the number of required engineers and the distances they travel to reach customers’ premises. The effective planning and scheduling of resources is thus critical to optimal service delivery in service organisations. At the heart of effective planning and scheduling is the optimal use of resources. Time cannot be stored, and thus every hour that a resource is not utilised is lost forever. Resource planning provides the mechanism to match available manpower resource capability and capacity to the demand forecast. Resources tend to be multi-skilled, mobile (i.e. have multiple areas of work), and have different attendance patterns. The procedure for matching supply to demand involves either (i) flexing capacity to meet the demand or (ii) constraining the demand. Typically, capacity can be flexed along three dimensions: skill, geography and availability. Flexing capacity requires making decisions. Decisions related to skill involve skill selection (i.e. type of work for the day, e.g. maintenance or service provisioning), retraining to optimise skill mix, etc. Those along the geography dimension include permanent/temporary (re-)deployment, recruitment, and retention. When it comes to availability, decisions such as productivity improvements, shift patterns and overtime allowance can be made. Accurate resource planning is sine qua non for optimal resource scheduling. O.R. technologies have been extensively used in the decision-making process of resource planning and scheduling. The motivation for utilising such technologies in resource planning and scheduling stems from the belief that automation will not just speed up the decision-making process of resource managers, but
rather produce near-optimal solutions. Scheduling tasks to resources is in general a complex decision-making problem. It requires optimising the personnel schedules/routes against travel times while respecting skill preferences, time windows and other soft or hard constraints that may apply in the specific problem context. O.R. techniques such as constraint programming allow for efficient handling of the variety of requirements that real-life schedules must comply with, such as skill-matching, routing, due dates, working shifts, staff breaks, regulatory constraints, functional dependencies between tasks, and so on. Resource planning and scheduling are complex processes, usually involving the analyses of large amounts of information. The complexity increases when more than one objective is being evaluated and the number of variables to consider is huge. For example, consider the task of ascertaining how best to deploy a number of engineers of a mobile multiskilled workforce, each with a number of skills (j), a number of availability types (k), and a number of areas (l), in order to meet customer commitments and improve quality of service. Two main characteristics of resource planning and scheduling are noted. First, the two processes are combinatorial, since as the number of resources increases, the number of profiles to be considered increases by a factor of j × k × l. Second, the goal of resource planning is to optimise resource deployment in order to service as many jobs as possible, improve quality of service, and reduce cost. The scheduling process is about assigning a job to the right resource in the right place (for a mobile workforce) and at the correct time. Given the combinatorial and optimisation nature of resource planning and scheduling, manual processes are slow, tedious, and sub-optimal. The need for optimally generating resource plans and schedules is well recognised, and this has been the subject of continuous research.
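A quick back-of-the-envelope calculation, with assumed values for j, k and l, shows how quickly this profile space outgrows manual planning:

```python
# Back-of-the-envelope illustration with assumed values for j, k and l:
# each engineer can be deployed under j x k x l different profiles, and the
# joint space across a workforce grows far beyond manual comparison.
skills, availability_types, areas = 6, 3, 10        # j, k, l (assumed)
engineers = 100

profiles_per_engineer = skills * availability_types * areas
joint_space = profiles_per_engineer ** engineers     # all workforce-wide plans

print("profiles per engineer:", profiles_per_engineer)
print("digits in the count of workforce-wide plans:", len(str(joint_space)))
```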
APPLYING O.R. TECHNIQUES IN BT
BT has a long history of applying O.R. techniques to managing its field force. Attempts to automate work allocation date back to the late 1980s. In the mid-1990s, BT was the first European telco to automate workforce management. The company is continually investing in making the Digital Britain vision a reality by creating a faster, more flexible network and associated Internet services. To support this investment and large scale network infrastructure programmes, BT requires an efficient, responsive mobile field service operation. BT’s 23K field engineers serve geographically dispersed and diverse customers including ISPs and end users. The deployment of our engineers has a significant impact on customer experience, travel and CO2 emissions. The question is: how can we send the right engineer with the right skill to the right location to deliver the right service? Addressing this challenge will lead to reductions in CO2 emissions, improvements in engineer utilisation, and better customer experience.
© BT
To address the aforementioned challenge, BT Research exploited O.R. techniques such as simulation, resource planning and scheduling in developing two systems for (i) resource planning, and (ii) resource scheduling and allocation. These systems are in operational use and underpin the management of BT’s field engineering teams with a view to minimising travel and improving resource utilisation. The resource planning system identifies the optimal changes in supply to meet the demand. These changes in supply include people moving between areas, focusing on different skills, working overtime, taking more annual leave, increasing the number of third-party contractors, etc. Due to the combinatorial nature of the planning problem, the resource planning system uses fuzzy logic and evolutionary algorithm techniques to optimise the daily deployment of the right engineer with the right skill on the right day so as to reduce travel and maximise engineer productivity. The resource scheduling and allocation system uses Guided Local Search and a rules-based engine to generate optimised schedules for our exchange-based engineers. In building the systems, BT Research teams worked closely with the operational teams to elicit requirements and engage all potential stakeholders. Three approaches were employed: workshops, simulation and rapid trialling. We used the workshops to realise a shared understanding of the challenges.
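As a purely generic illustration of the evolutionary side of such a planner (this is not BT's system, and all travel times and demands below are invented), a minimal genetic algorithm that evolves an engineer-to-area deployment might look like this:

```python
# Generic genetic-algorithm sketch (not BT's planner): evolve an assignment
# of engineers to work areas that reduces total travel while roughly
# matching area demand. All data are made up for illustration.
import random

random.seed(1)
n_engineers, n_areas = 8, 4
# Hypothetical travel time (hours) from each engineer's base to each area.
travel = [[random.uniform(0.2, 2.0) for _ in range(n_areas)]
          for _ in range(n_engineers)]
demand = [2, 2, 2, 2]                     # engineers needed per area (assumed)

def cost(assignment):
    """Total travel plus a penalty for under/over-staffing an area."""
    total = sum(travel[e][a] for e, a in enumerate(assignment))
    staffed = [assignment.count(a) for a in range(n_areas)]
    penalty = sum(abs(s - d) for s, d in zip(staffed, demand))
    return total + 5.0 * penalty

def random_individual():
    return [random.randrange(n_areas) for _ in range(n_engineers)]

def crossover(p1, p2):
    cut = random.randrange(1, n_engineers)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.1):
    return [random.randrange(n_areas) if random.random() < rate else g
            for g in ind]

population = [random_individual() for _ in range(30)]
for generation in range(200):
    population.sort(key=cost)
    parents = population[:10]             # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=cost)
print("best assignment:", best, "cost:", round(cost(best), 2))
```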
© BT
BT Research exploited O.R. techniques in developing systems for resource planning and resource scheduling and allocation
Simulation allowed the modelling and experimentation of scenarios without involving end users in endless field trials. This was used to narrow down options and support key decisions in determining the “best” solution for operations. Simulations provided the environment for scenarios to be tested out before committing to costly trials or development budgets. Rapid prototyping and trialling helped to garner support for using the tools in operations. By engaging end-users in prototyping of solutions, they quickly identified with what was being developed and were more inclined to use the tools. Rapid prototyping and trialling were used as part of assessing the viability of the impact of the solutions. These approaches enabled us to maximise stakeholder engagement. One of the key innovations was an interactive portal that provides field managers with insight so they can help their teams improve their driving behaviours, saving money through better and more efficient driving styles. By providing this portal we have combined several sources of data into one web-based system and now have the ability for operational managers to encourage best practice. The operational managers use the portal to monitor the improvements through coaching and training (where required). We issued the following: • Driver Best practice cards • Driver Aide Memoire safety check cards • Driver reminder labels fitted to vehicles (sun visor) Furthermore, with the O.R.-based models, our engineers are no longer constrained by their preferred work areas (PWAs). Previously, they did not move out of their PWAs to resolve tasks even when those tasks were closer to
them. This new way of allocating work to engineers has led to engineer empowerment. When at a site, engineers are presented with a list of recommended tasks. The objective is to encourage engineers to be proactive in fixing faults. We conducted a number of trials and surveyed the engineers and operational managers to elicit their experience with the planning and scheduling systems before deployment. This approach enabled us to co-innovate with the engineers.
feedback from the engineers and the Gold User calls are becoming increasingly more positive
THE BENEFITS AND OPERATIONAL IMPACT
A highlight for us was when our resource planning and scheduling systems won BT’s 2015 Chairman’s Award for “Sustainable business environmental innovation of the year” (https://www.btchairmansawards.com/Public/annual_award_winners). The systems have transformed our field operations and are in operational use. We have seen success across the user community including: • productivity gains in our planning communities. • our field engineers empowered in the way they work. This has led to improvements in employee satisfaction. • operational cost savings. • improvements in the way we deliver service. We also received very positive feedback from BT operational teams. Quotes from some of the operational teams are as follows: • “Our engineers are travelling less,” Matt Walker (Director, Field Dynamics). • “It’s one of a range of initiatives that’s contributing to better service performance,” Karen Giles (Change Architect). • “Fantastic achievement, we have been using it for some time now and feedback from the engineers and the Gold User calls I chair are becoming increasingly more positive with great feedback from the field”, Gavin Ashkettle (Operational Manager).
The quality of the resource plans generated by the planning system has reduced the complexity of the resource planning process. The planning system automatically analyses and processes large amounts of information and variables to optimise resource deployment. This has resulted in better resource schedules being generated with the ability to optimally allocate tasks to resources. The benefits of resource planning, scheduling and allocation are in three main areas: (i) Better service delivery for customers as part of the Digital Britain initiative. BT’s field engineers are at the heart of rolling out fibre across the UK. Managing the engineers is a complex undertaking especially in light of increasing demand volatility and volumes. Getting it right on the day impacts customer experience and has sustainability implications. Reduced travel time due to better planning means more time on site with customers and more chance to complete the jobs while they are there. Engineers can then be sent within their geographical areas of preference to an area where there is a potential backlog of need for their
particular skill set. About 400K extra jobs are being delivered per year with savings of 12,500 hours of driving every month through better resource planning. (ii) Reduction in carbon emissions. Our scheduling tool is enabling reductions in travel of up to 17%. Through intelligent bundling of tasks using O.R. technologies, we are seeing a productivity uplift of approximately 10.1%. We developed a fuel utilisation dashboard for providing insights into driver behaviour. This highlighted a reduction of 36,000 tonnes of emissions. (iii) Operational cost savings. The O.R. models have underpinned operational savings of ~£25M.
THE FUTURE
We are also actively researching and developing technologies to improve the utilisation of field engineering teams. One of our research projects is to optimise the deployment of network spares to minimise travel and improve customer experience. We are taking lessons learned in resource (human) optimisation and applying them to other resource types such as fixed inventory, spares, office based resources, vehicles– to get a holistic plan and schedule involving all moving parts. We are also exploring the use of smart glasses to improve the delivery of service. Gilbert Owusu (gilbert.owusu@bt.com) is a Chief Researcher responsible for R&D technologies for optimising operations, Sid Shakya, Anne Liret and Ali McCormick are Principal Researchers. Raphael Dorne, Ahmed Mohamed and Andrew Starkey were also part of the team that carried out this work.
Image courtesy of the author
REDUCING CANINE GENETIC DISEASE IAN J SEATH AND SOPHIE CARR
MINIATURE WIREHAIRED DACHSHUNDS can suffer from a rare and incurable form of Epilepsy called Lafora Disease. Back in 2010 it was estimated that between 5 and 10% of UK Mini Wires were affected, possibly amounting to 500 dogs with a further 50-100 being born with the disease each year. Seven years later, the number of affected puppies being born has been reduced to less than 10 per year. A Pro Bono O.R. project played a part in this success story. We collaborated to produce an Excel tool that could be used to help inform dog breeders’ decisions about
which pairs of dogs could be bred at minimum risk of the resulting puppies having Lafora disease.
the number of affected puppies born has been reduced to less than 10 per year
DNA TESTING FOR GENETIC DISEASES
There are a growing number of DNA tests to help dog breeders identify potential breeding pairs that could be affected by inherited diseases. Lafora
disease is inherited as an autosomal recessive condition; in other words, both parents must carry one copy of the Lafora mutation for the disease to be present in any puppies. Every Mini Wire is either CLEAR of the mutation (carries no copies), is a CARRIER (one copy) or is AFFECTED (two copies). The genetic mutation causing Lafora Disease was identified by a team of researchers based at the Sick Kids Hospital in Toronto. The condition exists in people as well as dogs and a groundbreaking collaboration of human and veterinary medicine identified the gene in 2005. In 2010, the Wirehaired Dachshund Club began working with the Toronto Lab to develop a DNA test that breeders could use to screen their dogs and therefore prevent affected puppies being born. Two tests were developed: a “simple” one that could identify dogs carrying two copies of the mutation and dogs carrying one or no copies, and a “complex” test that could differentiate between clear, carrier and affected dogs. The Wirehaired Dachshund Club with support from the Dachshund Breed Council began a concerted programme of education for breeders and owners to raise awareness of the disease and the importance of screening dogs before breeding from them. They also raised around £40,000 to help further develop the DNA tests and to subsidise a UK-wide screening programme.
BUILDING MOMENTUM IN THE SCREENING PROGRAMME
Whilst there were two different tests available to determine if a Dachshund carries the autosomal
FIGURE 1 PROBABILITY TREE FOR A LAFORA CARRIER BREEDING WITH A LAFORA CLEAR DOG
recessive mutation, not every dog was tested. Consequently this created four populations: tested; untested; clinically affected (i.e. showing symptoms) and clinically not affected. An Excel tool was developed to evaluate the risk factors associated with the mutation status of DNA tested and untested dogs. The tool was to be used as part of the education programme to help breeders understand why DNA testing for Lafora Disease is so important. To develop the tool, the analysts spent a couple of days examining the problem and available data before agreeing upon the statistical approach to be taken. Although there are many complex statistical techniques used within the field of genetics testing, what was required was a simple, robust solution presented in Excel. Given the available volume and quality of data, a probability tree was used to generate the overall risk of puppies being bred with Lafora disease.
Figure 1 shows the possible outcomes for puppies being bred from parents who are clear, carriers or affected by Lafora. It shows a randomly selected dog and the associated probabilities of being clear, a carrier or affected. These values were derived from the underpinning data set which showed 10% of screened dogs were affected, with the remaining dogs having an equal chance of being clear (45%) or a carrier (45%). The overall calculated theoretical probabilities from the screening data available at the time are seen in Table 1. The spreadsheet allowed these baseline values to be revised as more dogs were screened and additional information became available, thus improving the value of the tool. Screening sessions were typically carried out twice a year with batches of 50-60 dogs and the proportion of affected dogs remained around the 10% level through to 2016. The proportion of clear dogs increased slightly and carriers reduced.
Parents’ DNA status                 CLEAR    CARRIER   AFFECTED
Clear and clear                     1.000    0.000     0.000
Clear and affected                  0.000    1.000     0.000
Clear and carrier                   0.750    0.250     0.000
Clear and not affected              0.848    0.106     0.000
Clear and not tested                0.823    0.178     0.000
Carrier and not affected            0.538    0.356     0.106
Not affected and not affected       0.743    0.212     0.045
Carrier and not tested              0.495    0.368     0.138
Not affected and not tested         0.684    0.258     0.058
Not tested and not tested           0.629    0.295     0.076
Affected and not tested             0.000    0.725     0.275
Carrier and carrier                 0.250    0.500     0.250
TABLE 1 THEORETICAL PROBABILITIES FOR PUPPIES WITH EACH PARENT’S DNA STATUS
[Chart: Mini Wire Litters - Lafora Status. Vertical axis: % of Litters Registered (0 to 100); horizontal axis: half-yearly periods (Winter 2012 onwards); series: Safe % and Unsafe %. Source: KC Breed Records Supplement 2012-16.]
FIGURE 2 PROPORTIONS OF “SAFE” AND “UNSAFE” LITTERS BEING BRED
Each quarter, the UK Kennel Club publishes details of new litters of dogs that have been bred so the Breed Council was able to look at the
test status of the parents of each litter and count the number of litters and puppies in each of the ten probability categories. The number of puppies
in each category was entered into the model, which then estimated the number of affected puppies likely to have been produced, even if the parents had not been tested.
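The calculation behind that prediction is straightforward, as the sketch below illustrates with invented quarterly puppy counts and the affected-puppy probabilities from Table 1:

```python
# Sketch of the quarterly calculation described above: multiply the number
# of puppies in each parent-status category by that category's probability
# of an affected puppy (Table 1) and sum. The counts here are invented.
p_affected = {
    "clear x clear": 0.000,
    "carrier x carrier": 0.250,
    "carrier x not tested": 0.138,
    "not tested x not tested": 0.076,
    "affected x not tested": 0.275,
}
puppies_registered = {            # hypothetical quarterly counts per category
    "clear x clear": 60,
    "carrier x carrier": 8,
    "carrier x not tested": 30,
    "not tested x not tested": 90,
    "affected x not tested": 12,
}
expected_affected = sum(puppies_registered[c] * p_affected[c]
                        for c in puppies_registered)
print(round(expected_affected, 1))   # expected number of affected puppies
```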
Every affected puppy is an owner’s much-loved pet and they might have to live with the epileptic fits associated with Lafora disease for 6-10 years
That number was a key input to the Breed Council’s quarterly communication updates to breeders and owners on the screening programme. It enabled them to say, for example, “this quarter, 200 Mini Wire Dachshund puppies were bred and we would expect 30 of them to be affected by Lafora Disease because of breeding from so many untested parents”. Telling people about the number of puppies likely to be affected was a much more powerful message than simply saying “we need more people to be DNA screening their Dachshunds”. Every affected puppy is an owner’s much-loved pet and they might have to live with the epileptic fits associated with Lafora disease for 6-10 years. The disease is late-onset which means it typically doesn’t appear as clinical symptoms (epileptic fits) until the dog is 5 or 6 years old. So, owners of at risk puppies are essentially sitting on a time bomb, waiting to see when their dog will develop the disease and how bad the symptoms will be. By the end of 2016, the uptake of testing and the adoption of responsible breeding practices meant that the number of at risk puppies had been drastically reduced. Figure 2 shows the quarterly proportions of “safe” and “unsafe” litters being bred.
An “unsafe” litter is one where there is a risk of an affected puppy being born. Predictions such as those shown above were used to inform decision-making on a range of different breeding strategies, thereby supporting the overall welfare of the dogs.
OPPORTUNITIES FOR O.R. TO MAKE AN IMPACT
Those responsible for many breeds, like the Dachshund clubs discussed here, have lots of data but perhaps lack the capability or capacity to analyse it and draw out the insights. Sometimes the data is locked away in spreadsheets, sometimes it’s in an online Health Survey database. One of the great things about most of these breed clubs is that they are so passionate about their breed, its health and its development. There’s no lack of enthusiasm and motivation to make a difference, but there’s never enough funding or resource to match these. Passion without data will take you a long way, but in an increasingly complex and uncertain world, the danger is that breed clubs simply won’t be able to demonstrate the impact they are making. If they can’t do that, they won’t be able to persuade owners and breeders to participate in future initiatives to improve breed health and welfare. That’s why a pro bono O.R. project such as this one can be so valuable. Inevitably, there’s a 4-sector matrix combining Passion and Data (see Figure 3). This project was a great example of bringing together the passion of a voluntary organisation with some specific O.R. expertise and really
FIGURE 3 PASSION AND DATA: FOUR PERSPECTIVES
making a difference for the health of the dogs.
This project was a great example of bringing together the passion of a voluntary organisation with some specific O.R. expertise and really making a difference for the health of the dogs
Ian Seath is an independent consultant who serves on the OR Society’s Pro Bono Committee, and can be contacted at ian.seath@improvement-skills. co.uk. Ian has more than 25 years’ experience of working with private, public and third sector clients. His work includes strategy development, process improvement and project management. Prior to becoming a consultant he
worked in R&D, Marketing and HR in manufacturing. He is also Chairman of the UK Dachshund Breed Council, a not-for profit organisation working for the benefit of Dachshunds and their owners. Sophie Carr is an independent consultant (sophie@baysconsulting.co.uk) and a member of the OR Society. With experience across all industry sectors, Sophie’s work includes analytics, statistics and data science. Before becoming a consultant, Sophie worked in engineering and analysis with the defence industry. She is a chartered scientist and chartered mathematician. For more information about Pro Bono O.R. contact project manager Felicity McLeister at felicity.mcleister@ theorsociety.com. Alternatively, please visit http://www.theorsociety.com/ Probono.
© Photo courtesy of General Motors
O P E R AT I O N A L R E S E A R C H AT G E N E R A L M OTO R S JONATHAN OWEN AND ROBERT INMAN
IN TODAY’S WORLD, where uncertainty is a fact of business life, where new disruptive business models continually challenge the status quo, where competition is fiercely intense, where global trade policies are in flux and where concerns such as security and climate change are global in scope, a science-based approach to decision-making and problem-solving is indispensable. At General Motors (GM), operations research (O.R.) provides that framework, particularly for complex issues and systems that involve multiple objectives, many alternatives, trade-offs between competing effects, large amounts
of data and situations involving uncertainty or risk. In truth, for an entity the size of General Motors, these are the only kinds of challenges the company faces because GM is huge and no issue is simple! With products that range from electric and mini-cars to heavy-duty full-size trucks, monocabs and convertibles, GM offers a comprehensive range of vehicles in more than 120 countries around the world. Along with its strategic partners, GM sells and services vehicles under the Chevrolet, Buick, GMC, Cadillac, Opel, Vauxhall, Holden, Baojun, Wuling and Jiefang brand names. GM also has significant equity stakes in major joint ventures in Asia, including SAIC-GM, SAIC-GM-Wuling, FAW-GM and GM Korea. GM has 212,000 employees located in nearly 400 facilities across six continents. Its employees speak more than 50 languages and touch 23 time zones. The work they do demonstrates the depth and breadth of the auto business – from developing new vehicles and product technologies to designing and engineering state-of-the-art plants, organizing and managing the company’s vast global supply chain and logistics networks, building new markets and creating new business opportunities. The work is multifaceted, but whether in Detroit, Frankfurt, São Paulo, Shanghai or Ellesmere Port, the goal is the same: offer products and services that establish a deep connection with customers around the world while simultaneously generating revenue and profit for the company. Considering the complexity of the challenges in the auto business and the speed at which change is occurring in every arena – technology, business, materials and resources, governmental policies and regulations – it is critical to employ a scientific approach in thinking about and attempting to understand problems and implement viable solutions. Today, no area of GM is untouched by analytical methods. In recognition of GM’s integration of O.R. into its business, General Motors was awarded the 2016 INFORMS Prize. General Motors R&D operational researchers are seen in the leading image with the Prize.
THE EARLY YEARS
Even before the industry entered the current period of globalization and
profound technological change, O.R. was valued within GM. As early as the 1960s and 1970s, GM employed analytical techniques for transportation science and traffic flow analyses. In the 1980s, GM used mathematical optimization methods to reduce logistics costs and to improve assembly line job sequencing. In the 1990s, it patterned warranty cost reduction analyses after epidemiology studies from the health care field. Near the turn of the century, GM applied decision analysis to determine the best business model for its OnStarTM technology and service. Another notable O.R. project was optimizing the scheduling of vehicle cold weather testing. This decision support tool simultaneously improved test throughput, reduced test mileage, improved employee enthusiasm, and reduced warranty cost.
In recognition of GM’s integration of O.R. into its business, General Motors was awarded the 2016 INFORMS Prize
A notable long term analytics effort was GM’s work on production throughput analysis and optimization. Even when overall industry production capacity is above demand, it is usually the case that demand for certain “hot” vehicles exceeds planned plant capacities. In such cases, an increase in production capacity will generate larger profits via more sales revenue and/or overtime cost avoidance. GM’s O.R. team analysed production throughput using math models and simulation, identified cost drivers and bottlenecks, and developed a throughput improvement process
MANUFACTURING THE VAUXHALL ASTRA SPORTS TOURER AT ELLESMERE PORT UK 2016
to increase productivity and reduce costs. The resulting software has been enhanced over a 20-year period to extend GM’s capabilities, enabling it to accommodate product and manufacturing flexibility, variable control policies and more complex routing. The software is used globally in GM plants, as well as to design new production systems and processes. These and other O.R. applications are now standard processes in our manufacturing facilities such as Ellesmere Port.
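To give a flavour of how simulation exposes where a line loses throughput, the toy model below runs a two-station serial line with a small buffer and random station failures; it is a deliberately simplified sketch with invented parameters, not GM's software.

```python
# Toy throughput simulation (not GM's software): a two-station serial line
# with a small buffer and random station failures, used to estimate jobs
# completed per shift. All parameters are invented for illustration.
import random

random.seed(42)

def simulate(shift_minutes=480, buffer_size=3,
             cycle=(1.0, 1.2), availability=(0.95, 0.90)):
    """Advance the line minute by minute; each station completes a unit
    roughly every `cycle` minutes when it is up and not starved/blocked."""
    buffer_level, finished = 0, 0
    progress = [0.0, 0.0]
    for _ in range(int(shift_minutes)):
        up = [random.random() < a for a in availability]
        # Station 2 works on a unit waiting in the buffer.
        if up[1] and buffer_level > 0:
            progress[1] += 1.0
            if progress[1] >= cycle[1]:
                progress[1] -= cycle[1]
                buffer_level -= 1
                finished += 1
        # Station 1 pushes a new unit into the buffer unless blocked.
        if up[0] and buffer_level < buffer_size:
            progress[0] += 1.0
            if progress[0] >= cycle[0]:
                progress[0] -= cycle[0]
                buffer_level += 1
    return finished

runs = [simulate() for _ in range(50)]
print("mean jobs per shift:", sum(runs) / len(runs))
```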
O.R. AT GM TODAY
Ten years ago, the R&D O.R. team broadened its mission and today provides a research capability within the company focused on tackling long-term strategic challenges. With the wide-ranging scope of potential assignments, the O.R. team is composed of Ph.D. and master’s-level technical experts, along with subject-matter
© Photo courtesy of General Motors
© Photo courtesy of General Motors
ONSTAR DATA STREAMS FROM VEHICLE SENSORS WILL ENABLE PROACTIVE ALERTS
experts with hands-on and executive leadership experience in key areas of the business, such as manufacturing, supply chain, engineering, quality, planning, marketing, and research and development. Projects are aligned with top company priorities, which are based on a combination of business performance drivers and senior leadership input. The team’s implementation model comprises a mix of: • analysis by internal consultants to understand the issue, • capability development, including analytical principles, math models and tools, and • partnering with stakeholders and decision-makers early to scope and maximize the potential impact of implemented solutions. The work may start with targeted questions, e.g., what’s the opportunity of (fill in the blank), or it can focus on improving operational effectiveness through process improvements in areas such as manufacturing productivity,
capital or supply chain management. Many opportunities to improve revenue management exist through the application of tools and systems that help decision-makers optimize portfolio planning and reduce complexity. In addition, given the large new data streams coming from the intelligence available in today’s vehicles, new emphasis is being put on improving vehicle efficiency and quality, as well as more deeply understanding customers so GM can provide differentiated value through new automotive products and services. The four recent projects below are examples of GM applying O.R. and management science to the most complex issues the company faces.
OnStar TM PROACTIVE ALERT AT GM
Everything wears out over time. But when something wears out unexpectedly, it can shift from a minor annoyance to major distress. Unexpected vehicle repairs can cause major disruptions. That’s why General Motors developed OnStarTM Proactive
Alert. This technology provides an alert before a failure occurs, transforming an emergency repair into planned maintenance. This industry-leading prognostic technology can predict and notify drivers when certain components need attention – in many cases before vehicle performance is impacted. Proactive Alert uses big-data analytics of vehicle performance data from millions of vehicles to overcome the challenges of uncertainty due to usage variations, part to part variations, and environment factors. These prognostics use learning technologies to accelerate algorithm development. These prognostic algorithms predict impending failures and then trigger a communication to the driver.
Prognostic algorithms predict impending failures and then trigger a communication to the driver
When a customer has enrolled their properly equipped vehicle in this service, the data is sent to OnStar’s secure servers and proprietary algorithms are applied to assess whether certain conditions could impact vehicle performance. When indicated, notifications are sent to the customer via email, text message, in-vehicle alerts or through the OnStar RemoteLink smartphone app. These analytics are implemented and have already proved their worth by detecting a quality issue due to a supplier shipping out of spec components. Not only did the tool detect the issue, it was able to identify which vehicles had the faulty component. This early detection drastically reduced the number of
customers impacted, and the detail provided enabled much tighter identification of the specific vehicles involved (as opposed to broad service action covering a large number of vehicles that might be impacted). Most importantly it protected our customers from unexpected repairs. It is a current example of applying O.R. and management science to the most complex issues the company faces.
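The proprietary OnStar algorithms are not public; purely as a generic sketch of the prognostic idea, the snippet below smooths a hypothetical battery-voltage series with an exponentially weighted moving average and raises an alert when the smoothed value drifts below a threshold, well before an outright failure.

```python
# Generic prognostic-style sketch (not OnStar's algorithms): watch a
# hypothetical battery-voltage reading with an exponentially weighted
# moving average and flag the vehicle when the smoothed value drifts
# below a threshold. Readings and threshold are invented.
def proactive_alert(readings, threshold=12.0, alpha=0.3):
    ewma = readings[0]
    for day, v in enumerate(readings[1:], start=1):
        ewma = alpha * v + (1 - alpha) * ewma
        if ewma < threshold:
            return day           # first day the smoothed signal breaches
    return None                  # no alert needed

healthy = [12.6, 12.5, 12.6, 12.5, 12.6, 12.5]
degrading = [12.6, 12.4, 12.2, 12.0, 11.8, 11.6, 11.4]
print(proactive_alert(healthy))     # None: no alert
print(proactive_alert(degrading))   # an early-warning day index
```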
GM uses O.R. to answer two complementary questions; “how many vehicles” and “which ones?”
VEHICLE CONTENT OPTIMIZATION
CONFIGURING AN ASTRA SPORTS TOURER AT WWW.VAUXHALL.COM
Automakers must decide what models to produce and in what varieties (including optional features and packages of options), and how to price each variant. For example, the adjacent figure shows the engine choice screen from Vauxhall’s online configurator for the Astra Sports Tourer built in Ellesmere Port. GM uses O.R. to optimize these contenting and pricing decisions. GM would like every customer to find exactly what they are looking for, and all dealerships to order exactly what customers want and be satisfied with their profit margins. Fortunately, higher customer satisfaction does not necessarily require a greater number of product varieties. Research has shown that, if choices or selection processes are too complex, consumers may choose to simplify or delay decision-making to reduce stress and avoid regret. It turns out that offering the right alternatives to the customer is more important than offering the most. How to determine the best alternatives is the core of Vehicle Content Optimization. The challenges are twofold: building a consumer choice model
to capture the dynamics of consumer choice and solving an unconventional optimization problem, embedded with a nonlinear choice model. The objective function is non-linear,
© Photo courtesy of General Motors
© Photo courtesy of General Motors
VAUXHALL ASTRA SPORTS TOURER ON HOLIDAY
non-convex, and non-smooth. The constraints are nonlinear and complex. And given every possible combination of every possible feature, the problem suffers from colossal combinatorial complexity. GM has developed in-house capabilities to solve these extremely complex problems and helped reduce the design and validation costs by getting the content right early in the vehicle development – thus avoiding the cost of engineering variants that customers don’t really want.
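As a toy illustration of embedding a choice model in such a decision (this is not GM's model; the utilities, margin and variant cost are invented), a multinomial logit can be used to compare two candidate engine line-ups on predicted purchase share net of the cost of carrying extra variants:

```python
# Toy multinomial-logit sketch (not GM's model): compare two candidate
# engine line-ups by predicted purchase share and by share net of the
# cost of engineering extra variants. All numbers are invented.
import math

def choice_shares(utilities, outside_utility=0.0):
    """Logit probabilities over the offered variants plus a 'no purchase'
    outside option (returned last)."""
    expu = [math.exp(u) for u in utilities] + [math.exp(outside_utility)]
    total = sum(expu)
    return [e / total for e in expu]

# Hypothetical net utilities (appeal minus price sensitivity) per variant.
lineup_small = [1.2, 0.8]              # two variants
lineup_large = [1.2, 0.8, 0.7, 0.6]    # four variants, two of them marginal

margin, variant_cost = 5000.0, 800.0   # invented per-sale margin, per-variant cost
for name, lineup in [("small", lineup_small), ("large", lineup_large)]:
    shares = choice_shares(lineup)
    sold = 1.0 - shares[-1]            # everything except the outside option
    value = sold * margin - variant_cost * len(lineup)
    print(f"{name}: purchase share {sold:.2f}, net value {value:,.0f}")
```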
OPTIMIZING NEW VEHICLE INVENTORY
In some countries, the USA for example, the vast majority of vehicles are sold off dealers’ lots. Although some automakers sell a limited number of vehicles to customers directly, franchise laws generally prohibit direct-to-customer sales
for brands with existing franchise contracts. The autonomous and independently-owned dealer partners complicate the inventory management decisions. This project succeeds within this context by developing and implementing O.R. solutions that respect the business processes of both
Portfolios of offers that have broad appeal allow Vauxhall to focus on a few impactful offers that best align with customer preferences, reducing the confusion created by having too many offers in the market simultaneously
GM and its dealers, are understood and trusted by both GM and its dealers, and benefit GM, its dealers, and customers.
Managing the amount and model/option mix of this inventory is a challenge; optimizing it is an opportunity. GM uses O.R. to answer two complementary questions; “how many vehicles” and “which ones?” To answer the first question, we find the inventory that maximizes profit – which differs from the standard approach of finding the inventory needed to satisfy a given fill rate. To answer the second question we provide a decision support tool to help dealers order the best variations of each vehicle model using a setcovering philosophy – which differs from the standard approach of recommending the highest selling variants. Since these decisions cross organizational boundaries, these tools were developed jointly by operational researchers and a cross-functional team of business stakeholders. We used the Pareto principle to simplify overwhelming complexity
to what matters most, and enforced simplicity and transparency to foster ongoing use.
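For the “how many vehicles” question, a minimal profit-maximising sketch, with invented margins, holding costs and a small discrete demand distribution, looks like this:

```python
# Sketch of the "how many vehicles" question (invented numbers, not GM's
# tool): pick the dealer stock level that maximises expected profit rather
# than the level that hits a target fill rate.
margin_per_sale = 2000.0        # profit if a stocked vehicle sells
holding_cost = 600.0            # cost of a vehicle that sits unsold
demand_dist = {0: 0.10, 1: 0.25, 2: 0.30, 3: 0.20, 4: 0.15}  # monthly demand

def expected_profit(stock):
    profit = 0.0
    for demand, p in demand_dist.items():
        sold = min(stock, demand)
        unsold = stock - sold
        profit += p * (sold * margin_per_sale - unsold * holding_cost)
    return profit

for s in range(6):
    print(s, round(expected_profit(s)))
best = max(range(6), key=expected_profit)
print("profit-maximising stock level:", best)
```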
OPTIMIZING SALES INCENTIVES IN THE UNITED KINGDOM
One recent O.R. application specific to the U.K. automotive market is optimizing sales offers at Vauxhall. All manufacturers use offers to promote new vehicle sales. The U.K. market is especially interesting due to an abundant variety of offer types. In addition to cash discount and financing offers, manufacturers also make available many noncash offers such as complimentary equipment, extended warranty, free fuel or insurance, and others. Which of these offers are most preferred by customers? One of the challenges in analysing offers is that individual customer preferences can vary significantly so that a “one size fits all” approach is inefficient. For instance, customers that finance their vehicle would be attracted to a reduced rate financing offer while others that pay cash would have little interest. Simply averaging each customer’s preference would under or overstate their actual preference. We used Bayesian estimation methods to extract preferences for individual customers, based on primary market research data. These individual preferences can then be used within a discrete choice model to construct portfolios of offers that have broad appeal. This allows Vauxhall to focus on a few impactful offers that best align with customer preferences, reducing the confusion created by having too many offers in the market simultaneously. Additionally, the team
can perform “what-if ” simulations to evaluate the impact of any changes to freshen the portfolio or react to competitors.
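A toy version of that portfolio comparison, with simulated individual-level appeal scores standing in for the Bayesian estimates, might look like the following:

```python
# Toy offer-portfolio comparison (invented preferences standing in for the
# Bayesian individual-level estimates): a customer responds to a portfolio
# if at least one offer in it exceeds their personal appeal threshold.
import random

random.seed(7)
OFFERS = ["cash discount", "0% finance", "free insurance", "extended warranty"]

# Simulated individual-level appeal scores for each offer (0 to 1).
customers = [{o: random.random() for o in OFFERS} for _ in range(5000)]

def take_up_rate(portfolio, threshold=0.8):
    taken = sum(1 for c in customers
                if max(c[o] for o in portfolio) >= threshold)
    return taken / len(customers)

candidates = [
    ["cash discount", "0% finance"],
    ["cash discount", "free insurance"],
    ["0% finance", "free insurance", "extended warranty"],
]
for portfolio in candidates:
    print(f"{take_up_rate(portfolio):.2%}  {portfolio}")
```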
THE FUTURE OF O.R. AT GENERAL MOTORS
GM’s commitment to O.R. and Analytics has grown in recent years with senior leadership support from our CEO and Chair Mary Barra and our CTO Jon Lauckner. We continue building core expertise at R&D, while we are strengthening analytical capabilities in critical functions across the enterprise, including Product Development, Purchasing, Manufacturing, Sales and Marketing, Finance, and IT.
O.R. practitioners can have a profound impact on their organization, help their company rise above the competition, and most importantly provide increased value to customers
With the exponential growth in data, the ever-expanding digital connection to customers and the introduction of exciting new vehicle technologies, this is an exciting time for O.R. at GM. With so many research-rich opportunities, the team is always mindful of the characteristics that are key to successfully applying O.R. methods and achieving organizational excellence, including the ability to: • choose the right problem to address; • convince others that a complicated problem is important and solvable;
• work as part of a team toward a common and well-defined goal; • have tenacity in chasing down details and data, and then equal tenacity in implementing a solution; • build models at the right level of detail for the purpose at hand so it is not too complex nor too data intensive, but sufficiently detailed to capture the salient characteristics and trade-offs; • engage the key stakeholders in the process of development and implementation, in order to gain joint ownership; and • deliver an O.R. solution to decision makers in a form that they understand and act upon. O.R. practitioners who embody these characteristics can have a profound impact on their organization, help their company rise above the competition, and most importantly provide increased value to customers. But just as important as the economic benefits is the mindset – the scientific approach to problem solving, decisionmaking, structuring the business, and identifying new opportunities. As the world goes global – as innovation strives to create more, faster, better and at less cost; as new business and technology paradigms emerge – endless opportunities abound to take advantage of O.R. and reap the substantial good that can be realized from its practice. Jonathan Owen is the director of Operations Research at General Motors and Robert Inman is a Technical Fellow. David Vander Veen, Michael Harbaugh, Peiling Wu-Smith, Yilu Zhang, Steven Holland and Lerinda Frost also made substantial contributions to the preparation of this article.
By NHD-INFO - Havvindparken Sheringham Shoal (Foto: Harald Pettersen/Statoil, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=22778112
OPTIMISING THE WIND BRIAN CLEGG
WIND FARMS WERE ONCE THE PINUPS OF THE GREEN ENERGY WORLD, but recently they have suffered attacks on environmental and political fronts. By putting them offshore it is possible to make bigger turbines practical and reduce their visual impact, but it significantly increases the cost. Operational Research (O.R.) has found a role in ensuring that the budget can be trimmed for one of the most expensive parts of the operation: the installation of the turbines. Offshore farms are being sited increasingly far from shore – some planned sites are over 100 kilometres out at sea, and in water that can be 60 metres deep. Installation in such difficult conditions accounts for around
26 per cent of the capital cost of the build – so any chance to reduce this can have a big impact on profitability. The University of Strathclyde’s Kerem Akartunali: ‘These are massive projects, with costs often exceeding £100 million and involving many complex sequences of operations. Even if you can achieve a slight improvement in the decision-making process, that directly translates into significant amounts of money saved. This is not only welcome by the industry in times of economic uncertainties, but is also a big boost to encouragement when climate change and green energy are becoming more central to our government policies and our daily lives.’ The University of Strathclyde has a strong focus on ‘knowledge exchange’, which involves translating research into action with impact on industry. From its Technology and Innovation Centre, as part of its Low Carbon Power and Energy research, the university has strengthened its long-term collaborations with Scottish Power Renewables (SPR) and SSE (formerly Scottish & Southern Energy), providing O.R. support to the planning and implementation of offshore wind farm installations. The project was driven by a team comprising six participants from the energy companies and six from Strathclyde: Kerem Akartunali and Matthew Revie from the department of Management Science, Sandy Day and Evangelos Boulougouris from the department of Naval Architecture, Ocean and Marine Engineering and two full-time postdoctoral research associates, Euan Barlow and Diclehan Tezcaner-Öztürk. Euan Barlow described how he became involved in O.R. projects: ‘I’ve been at Strathclyde since about 2000. I started with an undergraduate degree in maths, followed by a PhD in applied maths and then my first job was as a postdoc in civil and environmental engineering. A large part of that project was working on an applied optimisation problem and after that finished I found that some of the skills that I’d picked up fitted well with a job that was going in the management science department; I’ve been here for about four years now. The projects I’ve been working on have all involved working closely with industry to solve real world problems using O.R. techniques and methodologies.’ The team became involved because of the changing nature of offshore wind projects. Euan Barlow again: ‘Wind farms are being developed in more extreme environments, which are challenging to operate in. They’re
moving further offshore into deeper water so that they can build bigger wind farms. The industry is still maturing, however, and offshore wind is comparatively more expensive than other more-established energy sources, so reducing costs would position offshore wind as a more attractive investment for a clean energy source, which would increase the likelihood of the industry becoming self-sufficient.’
Reducing costs would position offshore wind as a more attractive investment for a clean energy source
Matthew Revie added: ‘This aspect of being self-sufficient is really important. Even though wind farms have been around for many years, the industry is immature in many ways. As it’s rapidly expanded, there has always been an assumption that overall costs will come down as they learn from previous installations. However, this hasn’t always been the case as they have continuously stretched themselves. We have supported them by structuring some of the
problems they face and helped them solve challenging problems’. After exploring the options, there seemed two prime opportunities to make use of O.R. to enhance installation. The first was in planning, where simulation techniques would be used to compare different logistical options to get the required parts in place at the right time. This would be followed up during implementation with a mechanism to minimise the impact of disruption. For this, the team decided to use a rolling-horizon optimisation tool, which would constantly evaluate the status quo, and would recommend the best way forward should conditions change. This process also required robust optimisation – specific optimisation methods able to cope with a considerable degree of uncertainty in, for instance, the weather conditions. With a choice of different ports to sail from, different fleet makeup and a wide range of options for scheduling, the planning model made use of Monte Carlo simulation to deal with the uncertainties of the impact of varying task duration, equipment reliability and weather. The term ‘Monte Carlo’,
By NHD-INFO Foto: Harald Pettersen/Statoil - Flickr: Havvindparken Sheringham Shoal (Foto: Harald Pettersen/Statoil, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=26793451
By SteKrueBe - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=17009450
based on the casino, was used as a code name during the Second World War, when the technique of taking repeated samples from random values was first used to make simulations during the Manhattan Project to construct the first nuclear bomb. At its simplest, a Monte Carlo simulation might be replicating the possible outcomes of tossing a fair coin by sampling random numbers between 0 and 1, with those up to 0.5 representing heads and larger values tails. When predicting the potential impact of weather, for example, the model takes existing weather patterns and makes repeated predictions using appropriate random inputs which come together to give a feel for the uncertainty in the impact of the weather on the simulated process.
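A stripped-down version of that idea, with invented workability probabilities and task durations standing in for the project's weather and engineering data, is sketched below:

```python
# Stripped-down Monte Carlo sketch (not the Strathclyde model): sample
# daily weather workability and per-turbine installation times to get a
# distribution of total campaign duration. All probabilities are invented.
import random

random.seed(0)

def simulate_campaign(n_turbines=20, p_workable=0.6):
    """Count the days needed to install all turbines when work can only
    proceed on randomly occurring workable-weather days."""
    days, installed, remaining_work = 0, 0, 0.0
    while installed < n_turbines:
        days += 1
        if random.random() < p_workable:                   # a workable day
            if remaining_work <= 0:
                remaining_work = random.uniform(1.5, 3.5)  # workable days per turbine
            remaining_work -= 1.0
            if remaining_work <= 0:
                installed += 1
    return days

durations = sorted(simulate_campaign() for _ in range(2000))
print("median campaign length (days):", durations[len(durations) // 2])
print("90th percentile (days):", durations[int(0.9 * len(durations))])
```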
This part of the project is already in use in the planning stage for the installation of the 588 megawatt, £2.6 billion Beatrice Offshore Windfarm in
The planning model made use of Monte Carlo simulation to deal with the uncertainties of the impact of varying task duration, equipment reliability and weather
the Outer Moray Firth, to be constructed by SSE with partners Copenhagen Infrastructure Partners and SDIC Power. Construction is due to begin in 2017 with completion in 2019. Similarly, it is hoped that SPR will use the models in
the construction of its comparably priced East Anglia One farm, in conjunction with Vattenfall Wind Power, a 714 megawatt installation off the Suffolk coast, due to begin construction a year later. Between them, the two farms will generate enough electricity to serve nearly 1 million homes. The initial simulation work identified the parts of the process that are most sensitive to weather delays, notably the installation of the turbines and turbine jackets. Having the simulation process available made it possible to test out new ways to undertake the installation, for example with different numbers of vessels involved, and get a feel for their impact on times and costs. The simulations identified that relatively small changes in procedure could result in large shifts in the predicted time to undertake installation. For example, offshore wind turbines are installed from specialised ‘jack-up’ vessels, which have retractable legs that are lowered to the seabed to jack the ship up above the water level to keep it stable. The jacking procedure can only be undertaken in relatively calm waters, and the simulation showed that if it were possible to increase the acceptable wave height for jacking from 1.5 to 2 metres, it would result in a typical reduction of the installation duration of 20 days – making a major impact on costs. As the construction part of the project begins, the second phase of the O.R. input will come into use with the application of optimisation techniques. As disruptions occur, from weather, breakdown of vessels or other causes, the model will establish the impact on the schedule in real time, see if a new schedule is required and devise changes that minimise the cost of the disruption to keep the installation as close as possible to schedule and budget. This part of the process was
more unfamiliar to the client. Matthew Revie: ‘It’s a tool they will find more difficult to implement as they won’t understand why it’s giving them a particular answer, but we felt that these are some tools that they should be getting exposed to and learning about, and understanding the benefits they can bring. If you don’t do that then ultimately all we’re doing here is solving problems with simulation. To an extent what the project was about was getting our industry collaborators to step outside their comfort zone and be exposed to new techniques.’ The aim is to have a model that can largely be run directly by energy industry clients, but continued O.R. input is likely, as Kerem Akartunali explains: ‘We have run various workshops and training events with the industry partners so the end users can use the tools built in the project independently from us. However, as in any practical O.R. problem, the buck doesn’t just stop there, as extending and improving models is always a great opportunity for both sides: we are still collaborating with one of the partners to expand the models beyond the original scope to address some further related decision problems. When you develop a model that works well, you can still later find finer details of the problem and can improve the model further.’ This doesn’t mean, though, that Strathclyde provided a black box that the clients must take on trust. Matthew Revie: ‘SSE don’t view the model as a black box. They understand the qualitative structure to the model but they also understand at a technical level what’s going on in the tool that delivers the output. They understand the way it has been set up. They understand that if they want to change one part, say how weather is modelled, they can take that part out and replace it with something else… ’
This has been an academic exercise with a strong focus on the practical benefits. Matthew Revie commented: ‘At Strathclyde, engineers and operational researchers regularly work collaboratively for a range of reasons. Typically, operational researchers will have a broader modelling toolkit, focused more on methodology than context. The engineers live and breathe the problem and know everything about the context, but by bringing our toolkit to the conversation, we can do things like develop emulators, have more sophisticated approaches to simulation, better tools for optimization and so forth. We typically have more depth in terms of these tools: we can ultimately bring more sophisticated approaches to bear. This is the unique part operational researchers bring to these collaborations.’
Ken Scott, Head of Innovation, SSE Renewables: ‘The tool developed provided the means to evaluate our marine construction programme and weather risks to support optimisation of the business case. This was through supporting efficient allocation of financial contingency and management of risks to achieve investment contract key dates. The tool is now being used to manage contingency, assess contractor performance and provide evidence to support any claims.’ Matthew Revie summed up why this work involved more than reducing costs, important though that might
be: ‘A lot of people find it rewarding to feel they’ve had an impact on big societal challenges. I care that potentially through the work we’ve done, the cost of energy to an average person will have come down – and that a technology that is at risk of becoming dead before it gets started, a technology that’s good for the environment and UK PLC, giving us something that we can export to other countries, will be saved. It’s these big societal challenges where O.R. should be playing a key role, because O.R. analysts can understand the complexity of the problem and often can do something quantitative that provides real decision support. The reason I stay here at the university is not because I can earn more money, but it’s because I can work on projects that can have a real impact.’ As well as the benefits this work can bring to the offshore wind industry, it has the potential to contribute to offshore wind developing as a cheap renewable energy source, which would bring much wider benefits – to the environment, the UK economy, and to the cost of energy for an average household. Because O.R. techniques can be used to support decision making on a wide variety of problems, working in this area can give opportunities to work on problems which have the potential to have a real impact on these big societal challenges. Brian Clegg is a science journalist and author who runs the www.popularscience.co.uk and his own www.brianclegg.net websites. After graduating with a Lancaster University MA in Operational Research in 1977, Brian joined the O.R. Department at British Airways, where his work was focussed on computing, as information technology became central to all the O.R. work he did. He left BA in 1994 to set up a creativity training business.
FORECASTING: WHAT AN O.R. APPROACH CAN DO FOR YOU
JOHN BOYLAN
ALL ORGANISATIONS FORECAST. The forecasting may be ad hoc. Even managers who say “I don’t forecast - I simply plan on next year being the same as this year” are relying on a forecast. For them, of course, the forecast is implicit rather than explicit. Although all organisations forecast, few are satisfied with their forecasting performance, and many are unsure about how it should be improved. I believe that an O.R. approach can help and, in this short article, I shall explain how. One way that an O.R. approach to forecasting can help is by evaluating the scope for improvement. A good evaluation will challenge fatalistic attitudes such as “Forecasts will always be wrong – why spend time and money on trying to improve them?” It will also counter an over-optimistic expectation of gains in forecast accuracy which will never be fulfilled. This will lead only to disillusionment and lack of sustained support for forecast improvement.
UNDERSTANDING CURRENT FORECASTING PRACTICE
To understand the scope for improvement, it is first necessary to understand the current position. A ‘forecasting audit’, such as those conducted by the Lancaster Centre for Forecasting, is often a useful exercise. Our approach is to undertake a systemic audit, in the true spirit of O.R. This focuses
on forecasting processes as well as forecasting systems and methods. The importance of process has long been appreciated by practitioners of Sales and Operations Planning (S&OP). The S&OP approach seeks to integrate the forecasting activities of the Operations and Marketing functions. However, process is important not only in a supply-chain context; it is vital in all forecasting activities which are not fully automated. To appreciate the effect of forecasting process on forecasting performance, it is necessary to understand which forecasts are purely computer-generated, which are modified by human judgment and which are generated purely
judgmentally. There are two reasons for emphasising this. Firstly, survey studies (and my own experience) show that many forecasts are judgmental or judgmentally adjusted in practice. Organisations relying purely on computer-based forecasts are the exception rather than the rule. Secondly, empirical studies in a range of industries show that, although judgmental adjustments may improve forecast accuracy, they may also be harmful to it. If the records are available, then an audit will be able to reveal how many forecasts are judgmental or judgmentally adjusted and how accurate these forecasts have been. In particular, it can show what gain in accuracy (if any) has been achieved by judgmentally adjusting a statistical forecast. Research conducted by Robert Fildes and colleagues (see panel at end) showed that it is useful to distinguish between upward adjustments (where the person thinks the statistical forecast is too low) and downward adjustments (where the statistical forecast is thought to be too high). This is because upward adjustments of quantities such as sales forecasts are sometimes affected by ‘optimism bias’, a systematic tendency to forecast too high. To identify this issue, upward and downward adjustments need to be analysed separately. It is also useful to distinguish between smaller adjustments and larger adjustments to check if the smaller adjustments are necessary. Empirical evidence shows that much time is wasted making small adjustments to forecasts that make no difference to forecast accuracy. If forecasts are generated by a computer-based system, and later adjusted judgmentally, then an O.R.
approach will focus not only on the forecasting methods and algorithms themselves but also on the users’ understanding of them. This should identify the users’ understanding of such issues as how the methods are initialised and how the parameters are chosen. For example, many commercial software packages use methods based on exponential smoothing. The simplest form of exponential smoothing works like this: New Forecast = (a x Last Actual) + ((1 – a) x Last Forecast)
As well as understanding that this is how Simple Exponential Smoothing forecasts are updated, users also need to appreciate: i) other forms of exponential smoothing are needed if the data are trended or seasonal; ii) how the smoothing constant (‘a’ in the above equation) is determined by the system; and iii) how the first forecast is calculated by the system. This is an example of a univariate method, where the forecast relies only on the past history of the variable of interest. Alternatively, a multivariate method may be used, which also relies on the histories of explanatory variables. The most commonly employed approach to multivariate modelling is regression analysis. There are many pitfalls in regression analysis, particularly in a forecasting context, and the issue of user understanding is just as important as in univariate modelling, if not more so.
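For readers who prefer to see the arithmetic, the updating rule above takes only a few lines of Python. The sketch below makes explicit the two choices the article highlights: the example’s smoothing constant and the seeding of the first forecast with the first observation are illustrative assumptions, which a commercial package would normally make for you.

```python
def simple_exponential_smoothing(history, a=0.2, initial_forecast=None):
    """One-step-ahead forecasts using the rule:
    new forecast = (a x last actual) + ((1 - a) x last forecast).

    The first forecast is seeded with the first observation unless an
    explicit initial_forecast is supplied.
    """
    forecast = history[0] if initial_forecast is None else initial_forecast
    forecasts = []
    for actual in history:
        forecasts.append(forecast)        # forecast made before seeing this actual
        forecast = a * actual + (1 - a) * forecast
    return forecasts, forecast            # in-sample forecasts and the next-period forecast

# Hypothetical five-period demand history, smoothing constant 0.3
in_sample, next_period = simple_exponential_smoothing([112, 98, 105, 121, 110], a=0.3)
```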
EVALUATING SCOPE FOR IMPROVEMENT
Evaluation of the scope for improvement in forecasting requires some care. Understandably, managers are often keen to assess their forecasting performance against an industry ‘benchmark’. Benchmark figures, by industry sector, are occasionally published by the Institute of Business Forecasting (IBF). However, comparing forecasting accuracy between an organisation and its industry sector may be like comparing apples and oranges. If your data are more variable than is typical for your sector, then your forecast accuracy will almost certainly be worse than the sector average. If so, the scope for improvement is unclear. An O.R. approach to this evaluation needs to be more detailed. The specifics depend on organisational context, but often a strong case can be made for internal benchmarking. This may take a number of forms:
1. Comparison of current methods with simpler methods.
2. Comparison against the current suite of methods but with a more sophisticated approach to the selection and implementation of those methods.
3. Comparison of current methods with more advanced methods, including multivariate modelling.
The purpose of the first comparison is to provide a ‘reality check’. Results from forecasting competitions show that simpler methods may often be competitive with more complex methods. Even if the current methods are already quite simple, it is almost always possible to find a simpler benchmark, such as the naïve forecast (e.g. Forecast for April = Actual for March) or the seasonal naïve forecast (e.g. Forecast for this April = Actual for
last April). Alternatively, if the current methods are more sophisticated, then benchmarks such as the Holt-Winters method may be used. This method is a variant of exponential smoothing that addresses data which is both trended and seasonal (see Goodwin’s article for an explanation and discussion of the Holt-Winters method). Suppose that the comparison shows the current method to be less accurate than a simpler method. This may be due to inappropriate judgemental adjustments, especially if this leads to biased forecasts. Alternatively, it may be due to the method being unsuitable for the data or, perhaps, being inappropriately applied. To determine whether the problem lies in the method itself or in its implementation, another type of comparison is needed. The purpose of the second comparison is to assess if the current methods could yield more accurate forecasts if applied more intelligently. There are a number of ways of achieving this. Firstly, better categorisation rules may help to determine the best choice of method. For example, in my own
work with Aris Syntetos, I have found that simple rules to determine whether a series should be classified as ‘intermittent’ can help to improve forecast accuracy. Secondly, better ‘tuning’ of the methods can improve performance. Many forecasting methods depend on tuning parameters (known as
‘smoothing constants’ in exponential smoothing methods, as described above). In practice, these parameters are often adjusted by demand planners with little understanding of the forecasting methods. There is a strong case for better training here. Whilst many organisations are prepared to invest large sums in forecasting software, fewer invest
adequately in the training of the staff who use these systems. Vendors will usually provide training in the use of their systems but not in the forecasting methods employed by their software. Good training will enable demand planners to become more skilled in knowing how and when to intervene in the operation of a forecasting system, and when to desist. For the selection of parameters, automatic optimisation should normally be the default setting, with intervention needed only in exceptional circumstances. Thirdly, current methods may be enhanced by applying them at different levels of aggregation. For example, the Holt-Winters method may be modified as follows: i) allocate an individual series to a group of series with similar seasonal patterns; ii) for each time period calculate the total across these series; and iii) use the usual smoothing equation to update the seasonal indices. Provided that the indices are multiplicative (e.g. an index of 110% indicating a seasonal 10% above the average), then the group seasonal indices can be applied at the level of the individual series. This may be beneficial if the data is noisy or if relatively short histories are available (as is often the case in practice). An alternative type of aggregation is across time, also known as temporal aggregation; e.g. quarterly data is more highly aggregated than monthly data. Just as series aggregated to a group may become more forecastable, so more highly temporally aggregated data may also become more forecastable. This is particularly useful in any application where we can match the aggregation level to the forecast horizon. For example, if data is collected weekly and a four-week lead-time forecast is required, then we have two options. The usual approach is to use the weekly data to generate forecasts for one week, two weeks, three
weeks and four weeks ahead and then to sum these forecasts to give the lead-time forecast. An alternative approach is to aggregate the history into four-week blocks and then forecast demand for the next four weeks directly. Temporal aggregation is an area of current interest to O.R. researchers; the current state of the art is summarised in a recent article by Aris Syntetos and colleagues (see panel at end).
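The two options can be sketched as follows. The weekly history and smoothing constant are hypothetical, and a genuine comparison would be run over a hold-out sample and across many series rather than eyeballed on one.

```python
def ses_forecast(history, a=0.2):
    """Next-period forecast from Simple Exponential Smoothing, seeded with the first value."""
    forecast = history[0]
    for actual in history:
        forecast = a * actual + (1 - a) * forecast
    return forecast

def lead_time_forecast_summed(weekly, a=0.2, lead=4):
    """Option 1: forecast each of the next `lead` weeks and sum. With SES the
    h-step-ahead forecast is flat, so the sum is simply lead x the one-step forecast."""
    return lead * ses_forecast(weekly, a)

def lead_time_forecast_aggregated(weekly, a=0.2, lead=4):
    """Option 2: aggregate the history into non-overlapping `lead`-week blocks
    (dropping any incomplete oldest block) and forecast the next block directly."""
    start = len(weekly) % lead
    blocks = [sum(weekly[i:i + lead]) for i in range(start, len(weekly), lead)]
    return ses_forecast(blocks, a)

weekly_demand = [30, 0, 45, 20, 0, 60, 25, 10, 0, 40, 35, 15]   # hypothetical weekly history
print(lead_time_forecast_summed(weekly_demand))
print(lead_time_forecast_aggregated(weekly_demand))
```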
FORECASTING INTERMITTENT MEAN DEMAND RATE USING SIMPLE EXPONENTIAL SMOOTHING (graph: demand, forecast and average demand plotted by issue point)
IMPLEMENTING CHANGE
A number of avenues for improvement may open up, but it is important that these are properly evaluated before proceeding to implementation. Not only should we identify the best forecasting approach, but we should also provide a reasonable estimate of the expected level of improvement in forecasting accuracy, and its impact on the business. Assuming that we have sufficient historical data available, it makes sense to adopt a strategy whereby certain data is ‘held out’ for evaluation. To follow this strategy, we divide the history into in-sample and out-of-sample (or hold-out) sets, with the out-of-sample set including the most recent observations. The in-sample set is used for finding the best parameters for the methods and for selecting a method from a family of methods. The comparison across methods needs to take into account the number of parameters that have been estimated. This will avoid “over-fitting”: selecting a model which performs well in-sample but badly out-of-sample. Information Criteria such as the AIC (Akaike’s Information Criterion) may be used to make this in-sample comparison for exponential smoothing models. Then, the out-of-sample (test) dataset can be used to evaluate the accuracy of a method. If comparing methods from different families of methods (e.g. comparing
exponential smoothing with ARIMA), then a variant on the strategy is needed, dividing the data into estimation, validation and test sets. The estimation set is used to find the best parameters, as previously. Then the models from different families are compared in the validation set. This circumvents the difficulties which arise in comparing Information Criteria between different families of models. Finally, accuracy is evaluated on the test set, as before. In the validation set, it is possible to compare methods not just on accuracy but also on performance using other criteria, such as financial criteria.
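A bare-bones sketch of that splitting strategy is shown below. The function names, the naïve benchmark and the use of mean absolute error are illustrative choices, not a prescription; the point is simply that parameters are fitted on the oldest data, candidate methods are compared on the validation set, and the test set is touched once, at the end.

```python
def split_series(history, n_validation, n_test):
    """Split a time-ordered series into estimation, validation and test sets,
    keeping the most recent observations for the test set."""
    estimation = history[:-(n_validation + n_test)]
    validation = history[-(n_validation + n_test):-n_test]
    test = history[-n_test:]
    return estimation, validation, test

def mean_absolute_error(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def rolling_error(method, fit_set, eval_set):
    """One-step-ahead rolling evaluation: forecast, observe, extend the history."""
    history = list(fit_set)
    forecasts = []
    for actual in eval_set:
        forecasts.append(method(history))
        history.append(actual)
    return mean_absolute_error(eval_set, forecasts)

naive = lambda history: history[-1]          # naive benchmark: forecast = last actual

# Hypothetical usage: compare candidate methods on the validation set,
# then report the winner's accuracy once on the untouched test set.
series = [23, 27, 31, 26, 30, 35, 33, 38, 36, 41, 39, 45, 44, 48, 47, 52]
estimation, validation, test = split_series(series, n_validation=4, n_test=4)
print(rolling_error(naive, estimation, validation))
print(rolling_error(naive, estimation + validation, test))
```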
WHAT CAN AN O.R. APPROACH DO FOR YOU? THE BOTTOM LINE
I began this article by saying that organisations sometimes over-estimate
the gains in accuracy that may be achieved by improving their forecasting processes or methods. However, many organisations under-estimate the financial benefits of doing so. Intermittent demand forecasting, an area in which I have specialised, is an example of this. ‘Intermittent’ demand refers to infrequent low volume demand, with some periods showing no demand at all. It often arises when managing the stocks of service parts, for example in the automotive, aerospace and military sectors. The stocks for these parts may constitute a considerable portion of an organisation’s inventory investment. However, forecasting the mean demand rate for these items is not straightforward. The graph shows forecasts using Simple Exponential Smoothing. These forecasts are too high (on average) immediately after a demand incidence (called an ‘issue point’), which biases the forecasts and results in excessive stock. A whole stream of research in recent decades has focussed on ways of addressing this problem and of identifying which items need an alternative method to Simple Exponential Smoothing.
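Much of that research builds on Croston-type methods, which forecast the non-zero demand sizes and the intervals between demands separately, and only update at issue points. The sketch below is a bare-bones version with an invented series and smoothing constant; the variant associated with Aris Syntetos and the author multiplies the resulting ratio by an approximate debiasing factor (commonly given as 1 − a/2), and a production system would also need initialisation and classification rules.

```python
def croston(demand, a=0.1):
    """Croston-type estimate of the mean demand rate for an intermittent series.

    Non-zero demand sizes and the intervals between them are smoothed
    separately (both with smoothing constant a); the estimated rate is their
    ratio and, unlike SES, it is only updated at issue points.
    """
    size = interval = None
    periods_since_demand = 1
    for d in demand:
        if d > 0:
            if size is None:                  # initialise at the first issue point
                size, interval = float(d), float(periods_since_demand)
            else:
                size = a * d + (1 - a) * size
                interval = a * periods_since_demand + (1 - a) * interval
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    return 0.0 if size is None else size / interval

print(croston([0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0]))   # hypothetical intermittent demand history
```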
An independent study, by Eaves and Kingsman (see panel), of improvements in categorisation and forecasting methods proposed by Aris Syntetos and myself showed that savings of 13.6% of inventory were achievable, compared with standard methods. These approaches have since been adopted by two large software companies, which have client bases with a combined turnover of over £200 billion per annum and combined slow-moving inventories of approximately £10 billion. Even if only a fraction of the 13.6% saving were to be achieved, this would amount to significant financial and environmental benefits. The latter arise from reduction in obsolescence and the waste of resources in making goods that are never used. In summary, I would encourage you to think in these terms and to work with O.R. and analytics specialists to achieve the benefits that are waiting to be realised. I would also urge you to have your forecasters trained in the best practices of forecasting to ensure that these benefits are sustained.
FOR FURTHER READING
Eaves, A.H.C. and B.G. Kingsman (2004). Forecasting for the ordering and stock holding of spare parts. Journal of the Operational Research Society 55: 431-437.
Fildes, R., P. Goodwin, M. Lawrence and K. Nikolopoulos (2009). Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting 25: 3-23.
Goodwin, P. (2010). The Holt-Winters approach to exponential smoothing: 50 years old and still going strong. Foresight 19: 30-33.
Makridakis, S. and M. Hibon (2000). The M3-Competition: results, conclusions and implications. International Journal of Forecasting 16: 451-476.
Syntetos, A.A., M.Z. Babai, J.E. Boylan, S. Kolassa and K. Nikolopoulos (2015). Supply chain forecasting: Theory, practice, their gap and the future. European Journal of Operational Research 252: 1-26.
John Boylan is Professor of Business Analytics at Lancaster University. He has written extensively on business forecasting, for both academic and practitioner audiences. John is committed to training and consulting on forecasting matters as part of the Lancaster Centre for Forecasting’s drive to bridge the gap between academic and real-life forecasting.
ADVERTISE IN IMPACT MAGAZINE
The OR Society are delighted to offer the opportunity to advertise in the pages of Impact. The magazine is freely available at www.theorsociety.com and reaches a large audience of practitioners and researchers across the O.R. field, and potential users of O.R. If you would like further information, please email: advertising@palgrave.com.
Rates
Inside full page: £1000
Inside half page: £700
Outside back cover, full page: £1250
Inside back cover, full page: £1200
Inside front cover, full page: £1200
Inside full page 2, opposite contents: £1150
Inside full page, designated/preferred placing: £1100
Full Page Adverts must be 216x286mm to account for bleed, with 300dpi minimum.
SUPPORTING THE UK’S RESPONSE TO AN INTERNATIONAL PUBLIC HEALTH CRISIS
PHILLIPPA SPENCER, CHARLOTTE VALLILY, JORDAN LOW AND DAVID THOMAS
IN MAY 2014, a respected tribal healer working in Guinea returned to her home in Sierra Leone. She had been treating patients crossing the border with the deadly viral illness, Ebola Virus Disease (EVD), and travelled back to Sokoma, a remote village in the Kailahun district, where she died from the virus only days later. Hers was the first EVD-related case in Sierra Leone, and her funeral sparked a chain reaction that started the country-wide
outbreak. Just two months later, by July 2014, the number of confirmed EVD cases in Sierra Leone had surpassed that of neighbouring Liberia and Guinea. The EVD epidemic in West Africa triggered an international response. The British Government needed to consider options for the provision of military and scientific support as the UK’s contribution to the international Ebola response.
Ministers and planning groups began to formulate scientific and analytical questions which would enable them to robustly plan this intervention. The UK Ministry of Defence (MOD) Operations Directorate, responsible for the strategic management of the UK’s enduring and short-notice military commitments at home and overseas, took the planning lead. Operational analysis is utilised by government to plan a variety of operations at home and abroad. Defence Science and Technology Laboratory (Dstl) has been privileged to be called upon in the most critical situations. Dstl provides impartial scientific and analytical advice to the MOD, so was the first port of call to respond to questions in the short timescales required. Dstl Support to Operations group was asked to assist the UK MOD’s Operations Directorate by providing analytical evidence to support their planning. The main focuses of the planning questions for the operational analysis were the health care workers (HCW) and their support staff (SS). It was
recognised early on that Healthcare Workers (HCW), who put themselves at risk of EVD, are one of the most critical groups and required separate care. So in conjunction with Save the Children (StC) and the Department for International Development (DfID), the Operations Directorate planned to construct a facility to treat and evacuate HCWs, co-located alongside a larger treatment centre in Kerrytown (see the lead photograph).
This decision raised two major planning considerations: Does the UK own sufficient equipment to evacuate personnel, and how big should this facility be?
EVACUATION OF PERSONNEL
A critical aspect of this operation was how personnel could be evacuated should they become infected. This duty of care requirement was one of transport and repatriation of patients (both military and civilian). This responsibility fell to the Royal Air Force Infection Prevention Team. EVD patients would require complete isolation throughout the process, which necessitates specialist teams and equipment; one of the most important and constraining of these pieces of equipment is the Air Transport Isolator (ATI). An ATI provides the closed environment necessary to protect the medical team and prevent aircraft contamination. Each ATI is escorted by a trained team (the Deployable ATI Team, DAIT) and the limited number of both constrains the maximum number of concurrent evacuations the UK can perform. And so operational modelling began. A simulation of the planned end-to-end (E2E) medical evacuation (MEDEVAC) was built based on input from Defence and other government departments. The key outcome of the simulation was to provide a firm, evidence-based assessment of the capacity of the UK MEDEVAC chain. Based on assumptions agreed with stakeholders, analysis showed that the MEDEVAC chain could provide the capacity to support an average of 1.8 evacuations per week. Dstl’s modelling also showed that the planned purchase of 18 ATIs would be sufficient for up to five evacuations per week, again based on key assumptions agreed with stakeholders, such as that the ATI remains with the patient for the duration of their treatment and how long it would
take to decontaminate an ATI between uses. A final key resource limitation was likely to be UK bed spaces, with significant increases required to support any growth in MEDEVAC capacity. Overall, the modelling allowed an appropriate balance to be selected between ATI numbers, UK bed requirements and the capacity of the MEDEVAC chain. This balance catered for key sensitivities such as the ability of each DAIT team to surge for a period in the case of a spike in demand.
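The flavour of that capacity question can be captured in a few lines of Python. The sketch below is emphatically not Dstl’s end-to-end MEDEVAC simulation: the number of isolators matches the planned purchase of 18 mentioned above, but the weekly demand rate, the number of escort teams, the three-week period for which an evacuation ties up an isolator and the one-mission-per-team-per-week rule are invented placeholders, and the real model represented the whole evacuation chain, including UK bed spaces.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small weekly rates used here."""
    if lam <= 0:
        return 0
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def medevac_fill_rate(weeks=520, mean_weekly_demand=2.0, n_atis=18, n_teams=3,
                      weeks_ati_occupied=3, seed=0):
    """Toy discrete-time model of an isolator-constrained MEDEVAC chain.

    Each evacuation needs a free Air Transport Isolator, which is then tied up
    for weeks_ati_occupied (treatment plus decontamination), and a free escort
    team in the week it flies. Returns the fraction of requested evacuations
    that could be flown.
    """
    rng = random.Random(seed)
    ati_free_from = [0] * n_atis           # week from which each ATI is available again
    flown = requested = 0
    for week in range(weeks):
        cases = poisson(rng, mean_weekly_demand)
        requested += cases
        teams_left = n_teams               # assumed: one mission per team per week
        for _ in range(cases):
            idle = [i for i, w in enumerate(ati_free_from) if w <= week]
            if idle and teams_left:
                ati_free_from[idle[0]] = week + weeks_ati_occupied
                teams_left -= 1
                flown += 1
    return flown / max(requested, 1)

print(medevac_fill_rate())
```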
CALCULATING THE INFECTION RATE
The infection rate is a simple calculation and a monthly rate was deemed appropriate:
Infection Rate = (Number infected ÷ (Total at risk × Months at risk)) × 100
Equation 1: Infection Rate calculation
However, despite access to WHO data, understanding the population at risk (PAR), the ‘total at risk’ in Equation 1, was very difficult. With staff turnover, and the exact numbers of those being deployed to help in the area unknown to the international community, it became increasingly complex to understand the PAR. But an estimate needed to be found. The WHO published guidelines on how many health care workers there should be per EVD emergency facility. As we had no better estimates,
we utilised this to provide an estimate of PAR. Because this value is a specific subsection of the wider response, its fraction (P_Sub) of the overall PAR is calculated:
P_Sub = PAR ÷ WHO EVD Response PAR
Equation 2: Proportion of the HCW PAR that is UK supported
This proportion is then applied to the overall HCW cases reported to provide the likely number of cases that will present in the UK supported PAR:
Likely HCW cases per month = P_Sub × Total confirmed HCW cases ÷ Number of months
Equation 3: Rate of HCW cases over the course of the epidemic in months
This then provided a number of infected that can be used in Equation 1. Where time-distributed data is available, a distribution can be imposed. This allows a representative mean rate (such as a moving average) and confidence intervals based on the most representative distributions to be calculated. An exponential binomial model was then applied to provide the confidence intervals in the estimates and provide a range of possible scenarios.
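Chained together, the three equations in this panel amount to no more than the following; the figures in the worked example are invented, not the WHO or Dstl values.

```python
def infection_rate(number_infected, total_at_risk, months_at_risk):
    """Equation 1: monthly infection rate per 100 people at risk."""
    return 100.0 * number_infected / (total_at_risk * months_at_risk)

def uk_supported_fraction(uk_par, who_response_par):
    """Equation 2: proportion of the overall response PAR that is UK supported."""
    return uk_par / who_response_par

def likely_hcw_cases_per_month(p_sub, total_confirmed_hcw_cases, months):
    """Equation 3: expected monthly HCW cases within the UK-supported PAR."""
    return p_sub * total_confirmed_hcw_cases / months

# Worked example with invented figures
p_sub = uk_supported_fraction(uk_par=700, who_response_par=7000)
monthly_cases = likely_hcw_cases_per_month(p_sub, total_confirmed_hcw_cases=300, months=10)
print(infection_rate(monthly_cases, total_at_risk=700, months_at_risk=1))
```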
SIZE OF FACILITY
A key driver of any medical requirement is the rate at which HCWs get taken ill. Though not the nicest statistic to calculate, this is an essential parameter to drive medical operational analysis modelling. The EVD epidemic was a highly unusual planning and decision support task for Dstl and models had to be generated and adapted in conjunction with Public Health England (PHE) and other collaborators to provide advice. The deployment of UK HCW and support staff and military staff carried with it a serious risk. The rate of infection of HCWs was also needed to understand what levels of risk the UK may be taking in deploying its people to help in West Africa. With these two very strong reasons in mind, the hunt was on for data to calculate the HCW EVD infection
rate. It soon became clear that data was sparse, although there were a lot of opinions about – some better informed than others! Dstl staff engaged in a Go Science-supported, Ministry of Health-chaired
modelling group. This allowed Dstl analysts to be part of a group that included the most knowledgeable academics and government health experts in the field. Access to data was difficult even for them, and so
with the help of Go Science and Imperial College, the World Health Organisation (WHO) provided data streams of casualties and their information. This was, of course, anonymised; however it could be data mined to find the HCWs in the datasets.
We also looked to open source information. This was often incomplete, inconsistent or unreliable. But it enabled us to gain a greater understanding of
the desperate situation that was happening in Sierra Leone, Guinea and Liberia. In October 2014, we produced our first estimates; however, these were still extremely sensitive to some of the assumptions and uncertainties in the data. It was necessary to calculate the infection rate in the way shown in the side panel. These estimates were updated every two to three months throughout the crisis.
KERRYTOWN TREATMENT UNIT CAPACITY
The calculation of infection rates is nearly always a stepping stone to making predictions about future patient numbers. At a very basic level, the occupancy of any hospital is governed by the rate of arrivals and the length of stay, meaning that having an infection rate is already halfway to knowing how many beds were needed. How long each patient stays in bed (the time between initial presentation at the facility and their ultimate outcome) naturally varies between patients. Therefore it is convenient to talk about the mean time spent in each stage of the medical chain and to fit an appropriate distribution around this when necessary. The analysis highlighted that bed stay length was heavily dependent on the Case Fatality Rate, i.e. whether the patient survives the Ebola virus or not. In the initial analysis conducted, a 62% fatality rate was used. Lower rates were later considered to determine the magnitude of impact this would have on required bed numbers.
Analysis of WHO Sierra Leone data for HCWs initially indicated that the mean time between hospitalisation and death was 6 days, and the mean time between hospitalisation and discharge (survival) was 12 days. As more data became available, the mean time from symptom onset to hospitalisation was also considered. This does not directly affect hospital occupancy, but could potentially inform the bed stay length as it is a fraction of the total disease duration. Analysis of WHO data indicated that the mean time was 3.3 days but this was considered unrepresentative of HCWs working within UK facilities who were assumed to be more aware of
the early symptoms and who did not have to travel far to a facility. Separate analysis showed this had reduced to a mean of 2.4 days which was considered more representative of our target population.
THE ARTHUR MODEL
In order to estimate bed occupancy requirements for the Kerrytown healthcare worker facility, we adapted an existing medical operational analysis model that has been used for previous operational support in Afghanistan and Iraq. The Analysis of Requirements Tool for Hospital Utilisation and Resources (ARTHUR) is a stochastic model, developed by deployed analysts whilst on operation and used by Dstl to assist with medical planning for deployed medical facilities in operations such as Afghanistan. This existing model was modified to take into account the parameters specific to the EVD situation in Sierra Leone. Parameters such as time-varying population sizes, case fatality rates, infection rates and the time a patient spends in the facility were included.
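A stripped-down illustration of the occupancy logic is sketched below. It is not ARTHUR: arrivals follow a plain Poisson process with an invented daily rate, whereas the real model used the estimated infection rates, time-varying populations and agreed planning assumptions. The 62% case fatality rate and the six- and twelve-day mean stays are the figures quoted in the text. Repeating the simulation over many seeds gives the spread of occupancy, and it is the mean and 95th percentile of that spread which feed into bed-number decisions of the kind reported in the outcome below.

```python
import random

def simulate_bed_occupancy(days=365, daily_arrival_rate=0.3, case_fatality_rate=0.62,
                           stay_if_dies=6, stay_if_survives=12, seed=0):
    """Toy stochastic bed-occupancy model for a single treatment facility.

    Patients arrive as a Poisson process (exponential inter-arrival gaps) and
    occupy a bed for stay_if_dies or stay_if_survives days depending on outcome.
    Returns the mean and 95th percentile of daily bed occupancy.
    """
    rng = random.Random(seed)
    arrivals = []
    t = rng.expovariate(daily_arrival_rate)
    while t < days:
        arrivals.append(t)
        t += rng.expovariate(daily_arrival_rate)
    departures = [a + (stay_if_dies if rng.random() < case_fatality_rate else stay_if_survives)
                  for a in arrivals]
    occupancy = [sum(1 for a, d in zip(arrivals, departures) if a <= day < d)
                 for day in range(days)]
    occupancy.sort()
    return sum(occupancy) / days, occupancy[int(0.95 * days)]

print(simulate_bed_occupancy())
```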
OUTCOME
In October 2014, output from the model showed that the number of beds initially planned to be built would be more than sufficient for both the mean and 95th percentile estimated bed occupancy levels. This prompted a decision to reduce the number of beds; a decision well received by the deployed military staff, who found manning the facility a challenge. As the situation changed, the Operations Directorate began asking for updates to the analysis:
what was the maximum number of workers the facility could support, what would happen if the infection rate increased, and what was the capacity to open the beds up to other critical staff? This required careful and diligent re-testing of the operational analysis (OA) models Dstl had generated. It also required us to add levels of complexity to the analysis. This added to the uncertainty as more assumptions were required, and as such Dstl sought to provide the operational planners with a range of worst and best case scenarios in order to manage the expectations of senior stakeholders in government.
The results of Dstl’s analysis were reported at senior levels and used to shape the decisions that were made regarding the capacity of the UK supported facilities in Sierra Leone. The analysis also informed the number of diagnostic tools to be provided, along with a number of other logistical requirements. In addition to the analysis conducted, Dstl also deployed scientific staff in order to facilitate speedy testing of patients who were suspected to have EVD. Dstl also provided scientific advice on the disinfection of various military platforms and correct PPE policy, and helped the planners understand the burden on the deployed workers (in full
PPE, in high temperatures, in stressful environments). Following the hard work of all Dstl staff involved in the Ebola crisis support, the Defence Chief Scientific Adviser (Vernon Gibson) awarded the whole Ebola response team a commendation in recognition of their contribution. The citation said: “It was an excellent example of OA and science working together to help shape a response to a crisis.” On 7 November 2015, eighteen months after the first death, the World Health Organization declared Sierra Leone Ebola-free. Phillippa Spencer (pspencer@mail.dstl.gov.uk) is the Dstl principal technical authority for support to operations and a principal statistician. Charlotte Vallily and David Thomas are senior operational analysts and Jordan Low is an operational analyst who came to Dstl recently as a graduate. Jake Geer is acknowledged for his support in this work. We are grateful to Flexiplastics for allowing their images to be reproduced. Flexiplastics manufacture a diverse range of high quality products from flexible plastics. Their specialism is high frequency plastic welding. Flexiplastics can be found at www.flexiplastics.co.uk.
© Crown copyright (2017), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.
PROBLEM solvED NEIL ROBINSON
THE JOB-SHOP PROBLEM is one of the classic conundrums of computer science. Faced with an especially complex version of it at their heat-treatment facility in Trollhättan, Sweden, managers at GKN Aerospace turned to the O.R. community for help. Heat treatment has come a long way in the seven thousand years since man first recognised fire’s ability to alter a metal’s properties. The earliest evidence of its use dates back to the end of the fifth millennium BC, when Chalcolithic smiths relied on primitive hearths to restore ductility to forged copper, so allowing the thin edges and sharp tips essential for effective tools and weapons to be fashioned more easily. For the vast majority of its history, from its origins during the transition
from Stone to Bronze Ages until the mid-19th century, the process was essentially seen as an art. It arguably became a science only when Russian metallurgist Dmitry Churnov, in seeking to explain why the barrels of early steel cannons routinely exploded, carried out a series of microscopic analyses and realised that iron-carbon alloys grow stronger as their structure becomes finer. Churnov’s discovery, immortalised in his formative phase diagram charting the approximate points at which significant structural changes occur, paved the way for ever-increasing understanding and sophistication. The strength of treated metals has since risen in tandem, as has the intricacy of the procedures used. Nowadays many alloys undergo a complex cycle
of different heat-treatment operations designed to impart precisely the desired properties. The aerospace industry offers perhaps the ultimate illustration of both progress and complication. Here various components – most notably those made from so-called “superalloys”, which must be unusually resistant to corrosion, oxidisation, high temperatures and extreme stress – may require five or more treatments before fully developing the attributes that set them apart.
Not least at a busy facility dealing with an assortment of products and processes, this can present a considerable challenge in terms of scheduling. Such was the problem Dr Karin Thörnblad was asked to address at GKN Aerospace’s heat-treatment department in Trollhättan, in her native Sweden, after completing her PhD in the field of applied optimisation at the department of mathematical sciences at Chalmers University of Technology, Gothenburg.
GKN Aerospace is involved in almost every major civil and military aviation programme currently in development or production. The company’s technology is said to play a role in 100,000 flights a day, including 90% of all those made by commercial aircraft.
Such ubiquity translates into a lot of heat treatment. By extension, it also translates into a lot of scheduling dilemmas. In early 2014, with workloads expected to escalate even further, managers at Trollhättan – at the time home to half a dozen furnaces of varying sizes – reasoned that O.R. might be able to help conceive a more efficient approach.
Their thinking was influenced by Dr Thörnblad’s previous work elsewhere at the site. During her PhD studies she had devised an iterative scheduling procedure for Trollhättan’s multi-task cell, a flexible job-shop containing 10 resources and capable of accommodating a wide range of parts and processes.
THE HEAT IS ON
ACCESSIBLE VIA A USER-FRIENDLY INTERFACE, THE SCHEDULE IS PUBLISHED AS BOTH A SORTED LIST AND A GRAPHICAL CALENDAR ON A WEBSITE AND THE COMPANY’S INTRANET
THE SOLV DATABASE DRAWS INFORMATION FROM A VARIETY OF SOURCES TO CALCULATE AN OPTIMAL OR NEAR-OPTIMAL 24-HOUR SCHEDULE WITHIN A MATTER OF MINUTES
“Our model used ever-shorter time steps to solve a mathematical formulation of the scheduling problem with ever-greater accuracy,” she says. “As far as I know, it was the first time-indexed model applied to a flexible job-shop and the first mathematical optimisation to include side constraints regarding preventive maintenance, unmanned shifts and fixture availability.”
Novel it may have been, yet for a while the idea had seemed doomed to come to nothing – save, of course, for earning Dr Thörnblad her PhD. The project was postponed when the cost of retrieving data for ongoing jobs from the multi-task cell’s control system proved prohibitive. Fortunately, the value of what the model was able to achieve – creating a 24-hour production schedule in a matter of
minutes – did not go unnoticed, and Dr Thörnblad was soon given the chance to transfer the same principles to the heat-treatment (HT) division. Working with planners, operators and other experts at the facility, she first set about gathering all the information needed to generate a schedule. As well as starting to build a database in Access 2007, she interviewed staff to get a better idea of the day-to-day difficulties involved. It quickly became obvious that the issues she had encountered at the multi-task cell had been straightforward by comparison. She learned that products would arrive at the HT facility not just from throughout the company but from external customers. Many would need to be treated several times – some with a minimum recommended wait between treatments, some with a maximum recommended wait – and each was subject to a due date specified by GKN’s enterprise resource planning system (ERP). Other factors were more familiar. Preventive maintenance – in this instance the regular “bake-outs” and vacuum tests necessary to keep the furnaces in top condition – would have to be taken into account, as would an unmanned shift on Sundays and the need to mount certain parts on special fixtures. A crucial goal, particularly in light of the lengthy processing times involved, would be to identify means of batching jobs together. “At the time it was hard to get an overview of the planning situation,” says Dr Thörnblad. “There were often queues, the lead times could be quite long, and the delivery times couldn’t be trusted. Production flows were frequently disturbed by the variable output from the HT department.” All of these concerns were considered during a nine-month concept study that finished in January 2015. “By the end, through tests on real data, we were able to show it would be possible to implement the scheduling procedure in the HT department,” says Dr Thörnblad. “The longest computation time in the first round of tests was less than a minute, so the business case was promising. We got the go-ahead and were tasked with improving utilisation of the furnaces by 2% a year.” Expressed simply, that 2% equated to 120 extra HT operations annually. With this key objective established, a genuine “go live” date in prospect and the team bolstered by the addition of two IT engineers who would deliver a user-friendly interface, the project was given not only the green light but an official name: SOLV.
HOT PROPERTY
SOLV is an acronym of Schema Optimalt Lagt i Värmebehandlingen, which translates as Optimal Schedules in Heat Treatment. The term has come to be well known since Dr Thörnblad’s model was fully implemented in late 2015, with the HT division now unquestionably enjoying the hoped-for benefits of what has been dubbed the “SOLV effect”.
The system is used to produce a new schedule every day. An operator, normally one with a planning role and working the morning shift, accesses the SOLV database and imports or inputs relevant data about the jobs in the HT queue. As with the original
model for the multi-task cell, SOLV uses ever-smaller time steps to calculate an optimal (or near-optimal) 24-hour schedule – in this case to begin the following morning. The results are available within minutes and are published on the company’s intranet as both a sorted list and a graphical calendar. “The HT department’s planners are happy to generate just a single schedule every weekday morning,” says Dr Thörnblad. “Most HT processes take between 10 and 20 hours, so this makes sense. The only exception would be when there’s an event that’s likely to have a big impact on the planning situation – say, if a furnace breaks down. Even on a Friday, when a schedule for the next 72 hours is produced, the model should complete its calculations in five to 20 minutes.” It is important to remember that Dr Thörnblad was invited to apply the model to the HT facility in anticipation of an upsurge in workload.
THE FLEXIBLE JOB-SHOP PROBLEM
The flexible job-shop problem is an extension of the classic job-shop problem, in which the challenge is to assign jobs to available resources as efficiently as possible. At its most basic, the problem involves scheduling a given number of jobs on a given number of machines. The most common objective in research is to minimise the overall processing time – known as the “makespan” – but this is not a suitable objective for a real-world application, which, according to Dr Thörnblad, should involve the sum of weighted tardiness or at least the sum of weighted job completion times. Many of the problem’s common variations could be found in GKN Aerospace’s heat-treatment department. These include delays between processes, sequence-dependent set-ups and assorted side constraints.
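SOLV itself is a time-indexed mathematical optimisation model; by way of contrast, the snippet below is only a toy greedy dispatching rule on a hypothetical five-job, three-furnace instance, included to make the weighted-tardiness objective concrete. All job data are invented.

```python
def greedy_schedule(jobs, n_machines):
    """Earliest-due-date dispatch for a toy flexible job-shop.

    Each job dict carries 'proc' (processing time), 'due' (due date), 'weight'
    and 'machines' (indices of eligible machines). Jobs are dispatched in due
    date order onto the eligible machine that frees up first. Returns the
    schedule and the total weighted tardiness.
    """
    machine_free = [0] * n_machines
    schedule, weighted_tardiness = [], 0
    for job in sorted(jobs, key=lambda j: j["due"]):
        m = min(job["machines"], key=lambda i: machine_free[i])
        start = machine_free[m]
        finish = start + job["proc"]
        machine_free[m] = finish
        schedule.append((job["name"], m, start, finish))
        weighted_tardiness += job["weight"] * max(0, finish - job["due"])
    return schedule, weighted_tardiness

# Hypothetical instance: three furnaces, five heat-treatment jobs
jobs = [
    {"name": "J1", "proc": 14, "due": 20, "weight": 2, "machines": [0, 1]},
    {"name": "J2", "proc": 10, "due": 16, "weight": 1, "machines": [1, 2]},
    {"name": "J3", "proc": 18, "due": 40, "weight": 3, "machines": [0]},
    {"name": "J4", "proc": 12, "due": 30, "weight": 1, "machines": [0, 1, 2]},
    {"name": "J5", "proc": 16, "due": 36, "weight": 2, "machines": [2]},
]
print(greedy_schedule(jobs, n_machines=3))
```

A time-indexed formulation instead introduces decision variables indexed by job, machine and time step, which is what allows side constraints such as maintenance windows and unmanned shifts to be expressed exactly, at the cost of a much larger model.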
That upsurge turned out to be substantial. For example, the number of jobs involving titanium products – which demand both a bake-out and a vacuum test, leading to especially tricky scheduling – soared by 238%. Despite these extra challenges, the 2% target was met. In fact, utilisation of four of the six furnaces went up by 12% on weekdays over the three months to the end of December 2015, while on Sundays the “SOLV effect” led to a striking 45% rise. Queuing times decreased by 4%, delays by 8%, the number of non-productive bake-outs needed per special job by 32% and the number of vacuum tests performed per titanium job by 55%. The reduction of non-productive jobs results in yearly energy savings of 250 megawatt hours. The qualitative consequences have also been manifest. Operators have spoken of a newfound ability to “work smarter” and the advantages of “a living system updated on a continuous basis”. “This is great for us,” says Jan Dippe, the HT department’s manager. “We gain control over what components
to work on and when, and we have a whole other level of predictability compared to previously. This means we can increase volumes and bring in external customers, reducing our hourly costs. Put simply: SOLV helps us to plan our work and fill the business.”
Writing to Dr Thörnblad in early 2016, Martin Lindström, head of logistics and master production scheduling at Trollhättan, indicated the story would not end there. He acknowledged SOLV’s “positive effects” and suggested the model be introduced elsewhere at the site. Sure enough, Dr Thörnblad has recently been investigating another scheduling conundrum.
“The logic behind the algorithm is generic to many types of production cells with complex planning situations,” says Dr Thörnblad. “The tool is currently being adapted and implemented at the coordinate-measuring machine workshop at Trollhättan, but it could be considered for use at many other production cells at other GKN plants.” Thus a project whose future briefly appeared in doubt continues to go from strength to strength, as further underlined by its place among the finalists for the European Association of Operational Research Societies’ EURO 2016 Excellence in Practice Award. Small wonder that Lindström saw fit to conclude his letter to Dr Thörnblad with a markedly optimistic sign-off: “To be continued...” As the O.R. community knows only too well, there is always some problem to SOLV. Neil Robinson is the managing editor of Bulletin Academic, a communications consultancy that specialises in helping academic research have the greatest economic, cultural or social impact.
SMALL DATA Geoff Royston
In analytical and management circles there is much talk nowadays about ‘big data’. And rightly so: the opportunities are enormous. The landscape of the digital world features vast ranges of data mountains thrown up by business transactions, public services and social communications. Computers and analytical techniques allow these to be mixed, matched and mined rapidly and extensively, searching for connections, patterns and trends in such areas as consumer purchases, population health, or popular culture. But are more data always the answer? The chief economist of the Bank of England, Andy Haldane, seems to think so, at least in his field. He was recently reported saying that the ‘Michael Fish moment’ of failing to predict the bank crash of 2008 highlighted a crisis in economics but that big data could bring about a transformation in economic forecasting in the same way as it has in improving weather forecasting. But could it? A key analytical error behind the US banking crisis was the assumption that pooling many small mortgage accounts reduced the aggregate risk from defaults. True in times where the risk of one mortgage account default was independent of others. False in times where there was a single underlying risk driver affecting them all – in this case the fragility of the US housing price boom. This was not a data problem. What was needed to gauge the risk of a financial storm was a better understanding of some basic statistical concepts and an accompanying realistic financial model. And, as in any financial bubble, there were behavioural factors at play too, and these will always make economic forecasting an even more uncertain business than predicting the weather. Big data, valuable though it undoubtedly is, will not be enough.
THE END OF STATISTICS?
In some ways the story of big data is an inversion of the story of statistics. A key concept in statistics is that it is not necessary
to measure all of a large population in order to establish its key features - a sample will generally suffice. Some of the key advances in statistics have been about how to make good use of very small, cost-effective, samples. The advent of big data has sometimes been taken to indicate that, as huge volumes of data of all varieties can now be so easily and cheaply collected and so quickly analysed, small data - perhaps even the discipline of statistics itself - are no longer important. Not so. As the statistician David Spiegelhalter (Winton Professor of the Public Understanding of Risk at Cambridge University - and OR Society Blackett lecturer) has said “There are a lot of small data problems that occur in big data. They do not disappear because you have got lots of the stuff. They get worse.” Big data can suffer just as much – if not more – as small data from:
• irrelevance (much big data is passively ‘found’ whereas small data is often actively ‘sought’ – selected with a view to understanding a problem or finding a solution);
• errors (of collection or recording);
• noise (finding a needle in a haystack);
• sampling bias (another issue arising from the ‘found’ nature of much big data – even if your data comes from the usage records of 50 million smartphones you are still sampling only smartphone users);
• false positives (while all car owners might buy things at garages, not everybody who buys at a garage owns a car);
• historical bias (the past is not necessarily a good basis on which to predict the future, especially in turbulent times);
• multiple-comparisons hazard (test a big data set for enough relationships and some spurious association will come up eventually);
• risk of confusing correlation with causation (infamously, increases in autism correlated with increases in vaccination – but there is no causal link).
So much for the essay in Wired magazine that suggested “with enough data, the numbers speak for themselves”. If numbers speak, it can be in an unfamiliar tongue, requiring some expert translation. Big data is not going to see the death of statistics or the demise of small data – and operational research is going to continue to draw upon both.
THE BLACK SWAN
One of the arguments in favour of small data is more a matter of logic than of statistics. You see a swan, it is white. Then another, also white. And another. If you had never seen or heard of swans before, how many would you want to see to be confident that
swans are a white-feathered bird? 100, 1000, 10,000? How many black swans would it take to prove you wrong? As demonstrated by the Dutch explorer Willem de Vlamingh (who in 1697 in Western Australia made the first European record of sighting a black swan), sometimes a very few occurrences – or even a single example – of something can provide an insight into reality that a huge volume of data may not. That is the basis of a recent book Small Data: The Tiny Clues That Uncover Huge Trends by Martin Lindstrom. This Danish author, a global branding consultant who describes himself as “a forensic investigator of small data”, was (like Andy Haldane) selected a while back by Time Magazine as “one of the 100 Most Influential People in the World”.
TINY SIGNALS, DEEP INSIGHTS
For Lindstrom, small data are little nuggets of often qualitative information – from habits, décor, gestures, tweets and so on – tiny signals that can yield deep insights into people’s wants and desires. His book contains fascinating accounts of his hunting and gathering of small data and his business application of the results, for example how:
• seeing the wear in an old pair of training shoes worn by a young German skateboarder produced a transformation in LEGO’s business strategy;
• observing that Russian households had many more fridge magnets than most other countries led to setting up an online shopping site run by and for women;
• identifying the typical patterns of wear on toothbrushes guided the design of cars for the Chinese market.
Lindstrom’s philosophy is that “a lone piece of small data is almost never meaningful enough to build a case or create a hypothesis, but blended with other insights and observations ... comes together to create a solution that forms the foundation of a future brand or business”. This resonates with another book on a very different topic that I happen to have been re-reading recently – The Double Helix. This tells the gripping story of how James Watson and Francis Crick, while other more cautious investigators were calling for more data, had the audacity to use a model-building approach that sought and pieced together small fragments of data of various kinds – qualitative and quantitative – that they gleaned from a variety of sources. That combination of modelling and ‘small data’ won them the race to solve the mystery of the structure of DNA. Both these stories should ring bells with O.R. analysts and others working to use data and models to improve systems and processes in organisations.
Lindstrom remarks “Most illuminating to me is combining small data with big data by spending time in homes watching, listening, noticing and teasing out clues to what consumers really want”. That principle clearly can be extended, beyond observing domestic behaviour, to looking for clues in how people behave – and indeed in how anything actually functions – in their working or other environments. Lindstrom’s stance – “If you want to learn how the lion hunts, don’t go to the zoo, go to the jungle” – chimes well with O.R.’s focus on tackling “real world” problems.
BEWARE FOOL’S GOLD
I am certainly not gainsaying the power and potential of big data. After all, big data has, as Andy Haldane noted, transformed weather forecasting, and is proving its worth in many other important areas. Indeed, work with big data and associated analytics is proving so successful that its future challenges may be less about computing, analysis and modelling than about ethical and political issues around privacy, transparency and ownership. However, those working in or using the products of the digital data mines need to be on the look-out for fool’s gold. Lindstrom recounts the tale of a bank that used a big data analytics model to identify customers whose accounts showed signs, such as abnormal transfers of money, associated with people on the verge of exiting their banks (‘churn’). It was about to send out letters asking them to reconsider – when an executive happened to find out that the unusual activity was not because the customers were dissatisfied with the bank, it was because most of them were getting a divorce. A small data study could have found that out in a day. Managers thinking of investing in big data mining should ensure they have advice from the mineralogists, geologists and mining engineers of the data world, who will know what precious data look like, where they are most likely to be found and that the best way to extract them is not necessarily to excavate and sieve the entire mountain. Fragments of data, whether they are mainly qualitative (as for Lindstrom) or mainly quantitative (as for Watson and Crick) can have big impact, especially when combined with modelling to integrate them into a coherent whole. Let’s not overlook the power of small data. Dr Geoff Royston is a former president of the O.R. Society and a former chair of the UK Government Operational Research Service. He was head of strategic analysis and operational research in the Department of Health for England, where for almost two decades he was the professional lead for a large group of health analysts.
OR ESSENTIALS Series Editor: Simon J E Taylor, Reader in the Department of Computer Science at Brunel University, UK The OR Essentials series presents a unique cross-section of high quality research work fundamental to understanding contemporary issues and research across a range of operational research (OR) topics. It brings together some of the best research papers from the highly respected journals of The OR Society.
ACCESS THESE TITLES AT: palgrave.com/series/14725
OR59:
The OR Society Annual Conference
This year’s OR Society conference is designed to support everyone – analytics professionals, academics and practitioners – in making an impact.
12-14 September 2017, Loughborough University
What you can look forward to: Hosted by Loughborough University, The OR Society’s #OR59 will help you present your work, network with colleagues, develop your professional skills and ‘Make an Impact’. You will be immersed in a programme featuring:
• Superb Plenary and Keynote speakers
• 200+ paper presentations
• An excellent choice of streams
• #SpeedNetworking and social networking events
• ‘Making an Impact’ day #MAI59
• Practitioner/Academic collaboration sessions
• And much, much more.
This is an Operational Research event not to be missed and we look forward to seeing you there!
This year’s conference is three days of:
• An eclectic mix of presentations from 20+ streams
• Stimulating plenary and keynote speakers
• Academic-practitioner bazaars
• Speed networking
• One-to-one mentoring clinics
• The Big Debate
• Workshops, exhibits, social events and more!
Great opportunities also exist for Sponsors and Exhibitors: The conference is a great place to meet both academics and practitioners and provides a great opportunity to help Operational Research analysts solve their problems. A range of sponsorship and exhibitor options are available, from exhibition stands and conference bags to dinners and ticketed events. For more information, contact hilary.wilkes@theorsociety.com
www.theorsociety.com/OR59 Look out for the #OR59 hashtag on social media.