JULY/AUGUST 2012

Virtualisation – not all smoke and mirrors
Getting to grips with what’s unreal

System Platform and Virtualisation
The Cloud: here to stay or fading drizzle?
The Economics of the Cloud
Virtual Reality in the Process Industry
Cyber Security and the Power Industry

EVENTS | TECH TIPS | TRAINING | SUPPORT
Award-winning innovation in HMI / SCADA implementation – upgrade to Advansys

Innovation Award 2009 / Best EMI Application 2009 / Top SI Award 2010 / Best HMI Application 2010 / Top SI Award 2011

Advansys provides specialised Industrial Control and Automation Engineering and Consulting services to the manufacturing and utilities sectors throughout Southern Africa. Our services include:
▪ Process analysis and performance optimisation
▪ Control system design, specification and project management
▪ Instrumentation specification and installation
▪ PLC solutions
▪ SCADA / HMI solutions
▪ S88 Batch solutions
▪ S95 MES and “vertical” integration solutions
▪ Reporting solutions
▪ Manufacturing Intelligence solutions
▪ Software Applications solutions

Advansys invites you to contact us for Wonderware related project initiatives and software licensing as follows:
1. HMI / SCADA Standards Design and Development
2. ArchestrA System Platform Implementations
3. Production / KPI Reporting Solutions
4. Wonderware Licensing
5. Wonderware aligned Customer First Support package with your annual renewal

011 510 0340 | information@advansys.co.za | www.advansys.co.za
Contents JULY/AUGUST 2012

2  Editor’s notes
3  Virtualisation and the ArchestrA System Platform
4  Invensys first to support virtualisation technologies for High Availability and Disaster Recovery
5  About System Platform - your base for operational excellence
7  Getting started with System Platform and virtualisation
10 Virtualising the ArchestrA System Platform
12 Availability
14 Disaster Recovery
15 High Availability with Disaster Recovery
16 The economics of the cloud
30 The cloud ... here to stay or fading drizzle?
33 An African cloud
34 The role of virtual reality in the process industry
42 Cyber security in the power industry
43 Power industry locks down
48 Control and Automation security: Fort Knox or swing-door?
50 Eskom conforms to legal emission limits with help from Wonderware
55 Thin client computing for virtualisation and SCADA
59 Virtualisation needs high availability functionality
60 Virtualisation dictionary
63 Events - MESA SA “Adapt or Die” 2012 Conference
65 Use Protocol Magazine to generate business opportunities
68 2012 Training Schedule (Johannesburg)
69 Support – Customer FIRST
70 On the lighter side
72 Protocol crossword #55

Protocol Magazine
Owner and Publisher: Invensys Operations Management Southern Africa
Marketing Manager: Jaco Markwat, jaco.markwat@invensys.co.za
Editor: Denis du Buisson, GMT Services, ddb@iafrica.com
Advertising Sales: Heather Simpkins, The Marketing Suite, heather@marketingsuite.co.za
Distribution: Nikita Wagner, nikita.wagner@invensys.co.za

Facebook: Wonderware Southern Africa | Twitter: WonderwareSA | YouTube: WonderwareSA
www.protocolmag.co.za

Contributors
Many thanks to the following for their contributions to this issue of the magazine:
• Rolf Harms, Director of Corporate Strategy, and Michael Yamartino, Manager, Corporate Strategy, both of Microsoft Corporation, for the highly informative article on the economics of the cloud
• John Coetzee of Aristotle Consulting for the article titled: Control and Automation security: Fort Knox or swing-door?
• Gerhard Greeff, Divisional Manager at Bytes Systems Integration, for the article titled: The cloud ... here to stay or fading drizzle?
• Maurizio Rovaglio and Tobias Scheele of Invensys for the article dealing with the role of virtual reality in the process industry
• Ernest Rakaczky and Thomas Szudajski of Invensys for the article on cyber security in the power industry

July/August 2012 | 1
Editor’s Notes

Virtualisation – not all smoke and mirrors

While this magazine is real, what it talks about doesn’t necessarily exist in the same reality that we’re used to. No one has ever seen most of the servers in today’s industrial and business systems, for example, because they don’t really exist – at least not as individual box-like entities. Yet they do all the work that their “real” counterparts do. Welcome to the world of virtualisation, which is rapidly becoming the de facto computational environment of today and tomorrow. This becomes obvious when looking at the market landscape for virtualisation as shown in the diagram, where the installed base of physical units is dwarfed by that of logical units.

When I first came across virtualisation and cloud computing, I asked myself a simple question: Why? Why must I reprogram my intellectually-challenged technology brain cell to wrap itself around what sounds like yet another (and somewhat far-fetched) vendor-driven fad designed to separate people from their wallets?

I discovered that what’s driven virtualisation is getting the most from the unused power in existing computer technology so that end-users could access a greater set of solutions at a lower cost. This came as a huge blow to my home-cooked cynicism about a vendor-driven fad but gave my technology brain cell a new purpose and a reason to live.

With computing power doubling every 18 months according to Moore’s law, the PCs on our desks and the servers in our server rooms are doing nothing most of the time – so why not put them to work? Powerful desktop PCs can be replaced with thin clients, while a single server can now host many “virtual” servers, each operating in its own environment and making the most of the power of multi-core CPUs. What’s more, these servers need not all be in one box but can be “somewhere out there” in the cloud of Internet- and intranet-linked computing resources and services. The operational and cost benefits to be derived from this arrangement are numerous, but that led me to another question: while this may sound like the ultimate “outsourcing” solution, what about security and proprietary processes (e.g. real-time manufacturing)?

Quite simply, there are some applications which are suited to the cloud environment while others (e.g. real-time industrial processes) are not. Further, as outlined in the article “The economics of the cloud”, there is no reason why cloud security should be any more lax than any other. To quote: “In fact, they [public clouds] are likely to become more secure than on premises [private clouds] due to the intense scrutiny providers must place on security and the deep level of expertise they are developing.”

As organisations continue to implement virtualisation and converge their data centre environments, client architectures also continue to evolve in order to take advantage of the predictability, continuity and quality of service delivered by their converged infrastructure. Selected client environments move workloads from PCs and other devices to data centre servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data centre. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralised, users moving between work locations can still access the same client environment with their applications and data. For IT administrators, this means a more centralised, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of users and businesses.

So, it’s not all smoke and mirrors after all.

Until next time,
Denis du Buisson
ddb@iafrica.com

Acknowledgement: Some of this material is sourced from Wikipedia.
Virtualisation and the ArchestrA System Platform

Launched in 2003, Invensys Wonderware’s System Platform is the unique unifying force behind industrial applications and their implementation. This ArchestrA-based industrial “operating system” preserves past investments, helps save buckets of money on current engineering efforts and is ready to do the same in the future - because that’s what it was designed to do.
Invensys first to support virtualisation technologies for High Availability and Disaster Recovery

Company expands certification for VMware and Microsoft Hyper-V virtualisation platforms

In February 2012, Invensys Operations Management announced that it had expanded its certification for virtualisation technology, making it the first industrial automation provider to be certified for high availability, disaster recovery and fault tolerance in supervisory control applications leveraging both the VMware and Microsoft Hyper-V virtualisation platforms. The company’s ArchestrA® System Platform 2012 and Wonderware® InTouch® 2012 software are now certified for the latest VMware solutions, including VMware vSphere version 5.0 and ESXi version 5.0 for mission-critical applications.

“Historically, high-availability and disaster-recovery solutions in supervisory control systems were expensive to implement, not only because of hardware and software costs, but also because of additional administrative burdens,” said Deon van Aardt, Divisional Director, Invensys Operations Management Southern Africa. “Along with many other benefits, when ArchestrA System Platform 2012 and InTouch 2012 software are deployed, they support high-availability and disaster recovery implementations using Windows Server Hyper-V virtualisation from Microsoft, as well as the latest remote desktop services that are part of Windows Server 2008 R2. Now, after a rigorous validation period, our ArchestrA System Platform 2012 and Wonderware InTouch 2012 software are also certified for disaster recovery and high availability using VMware virtualisation. All this is possible on commercial operating systems using off-the-shelf hardware, further reducing cost and easing implementation of mission-critical applications.”

Virtualisation software, like that offered by Microsoft and VMware, transforms or “virtualises” a computer’s hardware, such as the CPU, hard drive and network controller, to create a virtual computer that can run its own operating system and applications just like a “standard” computer. By sharing hardware resources with each other, multiple operating systems can run simultaneously on a single physical computer. And because it has the CPU, memory and network devices of the “host,” a virtual machine is completely compatible with all standard operating systems, applications and drivers. With virtualisation, users can safely run several operating systems and applications at the same time on a single computer, with each having access to the resources it needs when it needs them. And it’s all possible with commercial off-the-shelf hardware and operating systems.

“End-users are rapidly deploying virtualisation solutions to reduce the number of physical servers needed for their plants in order to lower their hardware costs, IT costs and energy bills,” said Craig Resnick, vice president, ARC Advisory Group. “Virtualisation technology also helps end-users with system deployment of high-availability, disaster-recovery and fault-tolerance solutions as it is used to quickly get plants back up and running when computers fail, regardless of location. Invensys Operations Management’s certification for both VMware and Microsoft Hyper-V ensures that its customers are covered and protected regardless of their choice of platform.”

“While the underlying technology is sophisticated, virtualisation can deliver benefits that are much simpler to understand and achieve,” added van Aardt. “By eliminating dependencies between the physical hardware and the software, customers have more choices to improve the management of their applications, servers and equipment. One of the many benefits is the ability to move virtual machines between host computers, which enables a variety of different fail-safe scenarios to be implemented, each providing options for different levels of redundancy that make systems more resilient, less prone to equipment or site failure and simpler to upgrade.”

At the OpsManage’11 conference last year, Invensys Operations Management set up and demonstrated a scenario where an entire primary system in Florida failed over to a backup disaster recovery system in California.

“Customers were impressed with the ease and speed with which the backup system took over control, using only commercial hardware,” said van Aardt. “That’s the flexibility they are looking for to modernise and optimise their businesses. As the first industrial automation provider to be certified for high availability, disaster recovery and fault tolerance on the two major virtualisation platforms, we look forward to continuing to offer the products our customers need to achieve the highest levels of system availability, reliability and operational efficiency.”
About System Platform - your base for operational excellence

In a nutshell ...
The Wonderware System Platform provides a versatile application server, a powerful historian server, an easy-to-use information server and unparalleled connectivity. This de-facto information-capturing and application-unifying standard is designed with flexibility and power for a wide range of applications and industries, from geo-SCADA to real-time environments.

The Wonderware® System Platform is a strategic industrial software application platform that’s built on ArchestrA® technology for supervisory control, Geo-SCADA as well as production and performance management solutions.

Designed to suit the needs of industrial automation and information personnel, the Wonderware System Platform is the backbone and manager for all functional capabilities required for industrial software solutions. With the System Platform, Wonderware provides an industrialised application server, a powerful historian server, an easy-to-use information server as well as unparalleled connectivity, all specifically built for demanding real-time industrial environments.

ArchestrA System Platform 2012 supports new high-availability disaster recovery implementations using Windows Server Hyper-V virtualisation from Microsoft. In addition, ArchestrA System Platform 2012 software supports all the latest remote desktop services that are part of Windows Server 2008 R2. System Platform 2012 features (1) improved process control, (2) tighter integration of automation applications and (3) a virtual computer infrastructure.

With System Platform, implementation topology and architecture can be modified and redeployed without any need for re-engineering, giving you the advantage of freedom and speed.

Many Windows Server Hyper-V customers in manufacturing and processing need to integrate legacy automation, monitoring and reporting systems across different locations. By supporting the full spectrum of Windows Server Hyper-V capabilities, Invensys Operations Management is enabling the flexibility and technology their customers need to achieve real-time business optimisation.

Key benefits
• Integrate people, information and processes
• Empower and motivate people to collaborate
• Drive, enforce and adapt processes
• Drive, enforce and adapt standards
• Achieve consistent high quality
• Enable innovation and adaptability to change
• Reduce risk
• Shorten the time to value/time to market
• Retain highly valued people
• Increase profits

Key capabilities
• Common plant model reduces complexity
• Remote software deployment and maintenance
• Extensible and easily maintained using template-based and object-oriented structures
• Powerful role-based security model
• “Optimised for SCADA” network and communication features
• Historical data collection and advanced trending
• Web-based reporting capabilities
• Support of Microsoft Remote Desktop Services, Smart Card authentication and Hyper-V virtualisation allows highly economic, secure and available systems

The Wonderware System Platform provides cost-effective communication to virtually any plant information source, including historians, relational databases, quality and maintenance systems, enterprise business systems and manufacturing execution systems.

Today’s manufacturers and processing companies typically operate a range of facilities, all with different automation, monitoring and reporting systems. Managing these legacy systems can be costly and time consuming. ArchestrA System Platform solves this problem by providing a common, highly efficient infrastructure to easily develop, manage and maintain industrial applications with exceptional scalability and openness.

These solutions allow users, anywhere on the network, to design, build and deploy industrial workflow, HMI as well as automation and information applications, while leveraging a powerful combination of re-usable application templates, along with easy and transparent management of physical and virtualised computers. Not only does this reduce engineering costs, shorten project timetables and enforce automation and operational procedures, it really empowers the workforce and strengthens the drive toward real-time business optimisation.

What follows is a brief look at what you need to consider for the implementation of Virtualisation, High Availability and Disaster Recovery on your existing ArchestrA System Platform. These are extracts from the comprehensive 700-page document titled: “Wonderware ArchestrA System Platform in a Virtualised Environment - Implementation Guide”. The full guide on implementing the ArchestrA System Platform in a Virtualised Environment using Microsoft Hyper-V technology, failover clustering and other strategies to create High Availability, Disaster Recovery and High Availability with Disaster Recovery capabilities can be viewed on-line and you can also download it and print it in part or in whole.
Getting started with System Platform and virtualisation

Overview of implementing ArchestrA System Platform in a virtualised environment

This is done using Microsoft Hyper-V technology, failover clustering and other strategies to create High Availability (HA), Disaster Recovery (DR) as well as High Availability integrated with Disaster Recovery capabilities.

Virtualisation technologies are becoming a high priority for IT administrators and managers, software and systems engineers, plant managers, software developers and system integrators. Mission-critical operations in both small and large-scale organisations demand availability—defined as the ability of the user community to access the system—along with dependable recovery from natural or man-made disasters. Virtualisation technologies provide a platform for High Availability and Disaster Recovery solutions.

Let’s assume that you and your organisation have done the necessary research and analysis and have made the decision to implement ArchestrA System Platform in a virtualised environment that will replace the need for physical computers. Such an environment can take advantage of advanced virtualisation features including High Availability and Disaster Recovery. In that context, we’ll define the terms as follows:

• Virtualisation is a concept in which access to a single underlying piece of hardware (like a server) is coordinated so that multiple guest operating systems can share that single piece of hardware, with no guest operating system being aware that it is actually sharing anything. So, instead of using, say, four servers all running at between 10% and 35% utilisation, one could rather run the applications as four virtual machines on two physical servers – and these two servers may now run at, say, 70% utilisation.

In short, virtualisation allows for two or more virtual computing environments on a single piece of hardware that may be running different operating systems, and decouples users, operating systems and applications from physical hardware. Virtualisation creates a virtual (as opposed to “real”) version of ArchestrA System Platform or one of its components, including servers, nodes, databases, storage devices and network resources.

• High Availability (HA) means having the physical servers on which the community depends available for use the great majority of the time (e.g. 99% or greater). So, if a physical server fails, the system will ensure a rapid switchover to a standby backup unit to ensure continuity of service. A failover condition may result in some loss of data.

• Disaster Recovery (DR) is intended for use when localised, on-line servers are unable to function or have been destroyed. Because of this, systems able to cope with Disaster Recovery are normally quite geographically remote from those they will be asked to back up. For example, at the OpsManage’11 conference, Invensys Operations Management set up and demonstrated a scenario where an entire primary system in Florida failed over to a backup disaster recovery system in California. Because these systems are not on-line like the local servers, a failover condition may result in the loss of more data than would be the case in a local switch-over condition.

While these definitions are general and allow for a variety of HA and DR designs, what follows focuses on virtualisation, an indispensable element in creating the redundancy necessary for HA and DR solutions.

Types of Virtualisation

There are eight types of virtualisation:

• Hardware: A software execution environment separated from underlying hardware resources. Includes hardware-assisted virtualisation, full and partial virtualisation and paravirtualisation.

• Memory: An application operates as though it has sole access to memory resources, which have been virtualised and aggregated into one memory pool. Includes virtual memory and memory virtualisation.

• Storage: Complete abstraction of logical storage from physical storage.

• Software: Multiple virtualised environments hosted within a single operating system instance. Related is a virtual machine (VM), which is a software implementation of a computer, possibly hardware-assisted, which behaves like a real computer.

• Mobile: Uses virtualisation technology in mobile phones and other types of wireless devices.

• Data: Presentation of data as an abstract layer, independent of underlying databases, structures and storage. Related is database virtualisation, which is the decoupling of the database layer within the application stack.

• Desktop: Remote display, hosting or management of a graphical computer environment—a desktop.
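The consolidation arithmetic in the virtualisation definition above (four servers at 10–35% utilisation becoming four virtual machines on two hosts running at roughly 70%) can be sketched as a quick estimate. This is an illustrative sketch only; the function name and the 70% target are assumptions for the example, not part of any Wonderware tooling.

```python
# Rough server-consolidation estimate, illustrating the arithmetic in the
# text: several lightly loaded physical servers become VMs on fewer hosts.

def hosts_needed(utilisations, target_utilisation=0.70):
    """Estimate how many equally sized hosts are needed to consolidate
    servers whose current CPU utilisations are given as fractions."""
    total_load = sum(utilisations)        # total load in "whole servers"
    hosts = -(-total_load // target_utilisation)  # ceiling division
    return max(1, int(hosts))

# Four servers running between 10% and 35% busy:
current = [0.10, 0.20, 0.30, 0.35]
print(hosts_needed(current))  # 2
```

With a combined load of 0.95 "server equivalents" and a 70% ceiling per host, two hosts suffice, matching the 4-into-2 example in the text.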
• Network: Implementation of a virtualised network address space within or across network subnets.

Virtualisation using a Hypervisor

Microsoft Hyper-V technology implements a type of hardware virtualisation using a hypervisor, permitting a number of guest operating systems (virtual machines) to run concurrently on a host computer. The hypervisor, also known as a Virtual Machine Monitor (VMM), is so named because it exists above the usual supervisory portion of the operating system. There are two classifications of hypervisor:

• Type 1: Also known as a bare-metal hypervisor, runs directly on the host hardware to control it and to monitor the guest operating systems. Guest operating systems run as a second level above the hypervisor.

• Type 2: Also known as a hosted hypervisor, runs within a conventional operating system environment as a second software level. Guest operating systems run as a third level above the hypervisor.

Hyper-V architecture

Hyper-V implements Type 1 hypervisor virtualisation, in which the hypervisor is primarily responsible for managing the physical CPU and memory resources among the virtual machines. This basic architecture is illustrated in figure 1.

Figure 1: Hyper-V architecture

VM and Hyper-V Limits in Windows Server 2008 R2

Tables 1 and 2 show maximum values for VMs and for a server running Hyper-V in Windows Server 2008 R2 Standard and Enterprise editions, respectively. By understanding the limits of the hardware, software and virtual machines, you can better plan your ArchestrA System Platform virtualised environment.

Virtual processors: 4
Memory: 64 GB
Virtual IDE disks: 4. The boot disk must be attached to one of the IDE devices and can be either a virtual hard disk or a physical disk attached directly to the virtual machine.
Virtual SCSI controllers: 4. Use of virtual SCSI devices requires integration services to be installed in the guest operating system.
Virtual SCSI discs: 256. Each SCSI controller supports up to 64 SCSI discs.
Virtual hard disk capacity: 2040 GB. Each virtual hard disk is stored as a .vhd file on physical media.
Size of physical discs attached to a VM: Varies. Maximum size is determined by the guest operating system.
Checkpoints (snapshots): 50. The actual number depends on the available storage and may be lower. Each snapshot is stored as an .avhd file that consumes physical storage.
Virtual network adapters: 12. Eight of these can be the “network adapter” type, which provides better performance and requires a virtual machine driver that is included in the integration services packages. The remaining four can be the “legacy network adapter” type, which emulates a specific physical network adapter and supports the Pre-boot Execution Environment (PXE) to perform a network-based installation of an operating system.
Virtual floppy drives: 1
Serial (COM) ports: 2

Table 1: Hyper-V Server maxima - Windows 2008 R2 Standard Edition

Logical processors: 64
Virtual processors per logical processor: 8
Virtual machines per server: 384 (running)
Virtual processors per server: 512
Memory: 1 TB
Storage: Varies – no limit imposed by Hyper-V. Limited by the support capability of the management operating system.
Physical network adapters: No limit imposed by Hyper-V.
Virtual networks (switches): Varies – no limit imposed by Hyper-V. Limited by available computing resources.
Virtual network switch ports per server: Varies – no limit imposed by Hyper-V. Limited by available computing resources.

Table 2: Hyper-V Server maxima - Windows 2008 R2 Enterprise Edition
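When sizing VMs against the per-VM maxima in Table 1, a simple pre-flight check can catch over-specified machines before deployment. This is a hypothetical sketch: the limit values are taken from Table 1, while the function and dictionary names are our own illustration, not part of Hyper-V or the System Platform tooling.

```python
# Check a planned Hyper-V VM specification against the Windows Server
# 2008 R2 per-VM maxima listed in Table 1 (illustrative sketch).

VM_LIMITS = {
    "virtual_processors": 4,
    "memory_gb": 64,
    "ide_disks": 4,
    "scsi_controllers": 4,
    "scsi_disks": 256,        # up to 64 per SCSI controller
    "network_adapters": 12,   # 8 synthetic plus 4 legacy
}

def over_limit(vm_spec):
    """Return (component, requested, maximum) for every entry in the
    planned spec that exceeds the per-VM limit."""
    return [(name, requested, VM_LIMITS[name])
            for name, requested in vm_spec.items()
            if name in VM_LIMITS and requested > VM_LIMITS[name]]

planned = {"virtual_processors": 8, "memory_gb": 32, "network_adapters": 12}
print(over_limit(planned))  # [('virtual_processors', 8, 4)]
```

A plan asking for eight virtual processors is flagged immediately, since a 2008 R2 VM supports at most four.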
imagine a better future

Cost Reduction • Security • Resources Optimization • Building Intelligence • Better Services • Public Utilities Management • Integrated Management of the City

Traffic that flows. Intelligent buildings that save energy. A public infrastructure that delivers whole new levels of service at lower cost. Systems, assets, people and the environment living in harmony. You have imagined ArchestrA, the Wonderware technology that lets you manage your infrastructure as you like in an integrated way. Open, scalable, affordable. Turn imagination into reality with Wonderware. Visit wonderware.com/Infrastructure for more info.

Facility Management • Environment • Power • Smart Cities • Transportation • Waste • Water & Wastewater

Avantis | Eurotherm | Foxboro | IMServ | InFusion | SimSci-Esscor | Skelta | Triconex | Wonderware

© Copyright 2012. All rights reserved. Invensys, the Invensys logo, Avantis, Eurotherm, Foxboro, IMServ, InFusion, Skelta, SimSci-Esscor, Triconex and Wonderware are trademarks of Invensys plc, its subsidiaries or affiliates. All other brands and product names may be trademarks of their respective owners.
Virtualising the ArchestrA System Platform Abstraction versus Isolation
Sizing recommendations for virtualisation
Note: An abstraction layer is a layer with
With the release of InTouch 10.0, supporting the VMWare ESX platform, Wonderware
The following provides sizing guidelines and
drivers that make it possible for the
became one of the first companies to
recommended minima for ArchestrA System
virtual machine (VM) to communicate
support virtual machine operation of
Platform installations.
with hardware (VMware).
industrial software. VMware ESX is referred to as a “bare metal” virtualisation
For a virtualisation-only implementation, you
In this scenario, the drivers
system. The virtualisation is run in an
can use these minima and guidelines to size
need to be present for proper
abstraction layer, rather than in a standard
the virtualisation server or servers that will
communication with the hardware.
operating system.
host your System Platform configuration.
With an isolation layer, the VM uses the operating system, its functionality
Microsoft takes a different approach
Cores and Memory
hypervisor-based virtualisation system. The
and its installed drivers. This scenario does not require special drivers.
to virtualisation. Microsoft Hyper-V is a • Spare Resources - The host server should
hypervisor is essentially an isolation layer
always have spare resources of 25% above
between the hardware and partitions which
what the guest machines require. For
layer in VMware is 32MB and in
contain guest systems. This requires at least
example, if a configuration with five nodes
Hyper-V it is 256kb.
one parent partition, which runs Windows
requires 20GB of RAM and 10 CPUs, the
Server 2008.
host system should have 25GB of RAM and 13 CPUs. If this is not feasible, choose
As a comparison, the abstraction
• Hyper-Threading - Hyper-Threading
Figures 2 and 3 show non-virtualised and
the alternative closest to the 25% figure,
Technology can be used to extend the
virtualised ArchestrA System Platform
but round up so that the host server has
amount of cores, but it does impact
topologies respectively.
32GB of RAM and 16 cores.
performance. An 8-core CPU will perform better than a 4-core CPU that is HyperThreading.
Storage It is always important to plan for proper Storage. A best practice is to dedicate a local drive or virtual drive on a Logical Unit Number (LUN) to each of the VMs being hosted. We recommend SATA or higher interfaces.
Recommended storage topology

To gain maximum performance, the host OS should also have a dedicated storage drive. A basic storage topology would include:

• Host storage
• VM storage for each VM
• A general disc, large enough to hold snapshots, backups and other content. It should not be used by the host or by a VM.

Figure 2: A common, non-virtualised ArchestrA System Platform topology

10 | www.protocolmag.co.za

Virtualising the ArchestrA System Platform

Figure 3: The same environment as figure 2 but virtualised using Hyper-V

Recommended storage speed

Boot times and VM performance are impacted both by storage bandwidth and storage speed. Faster is always better. Drives rated at 7200 rpm perform better than those rated at 5400 rpm, and solid-state drives (SSDs) perform better than 7200-rpm drives.

Keep in mind that multiple VMs attempting to boot from one hard drive will be slow and performance will degrade significantly. Attempting to save on storage could well become more costly in the end.

Networks

Networking is as important as any other component for the overall performance of the system. A best practice is to establish, on every node, an internal-only static virtual network. In the event that the host and the guest VMs become disconnected from the outside world, you will still be able to communicate through an RDP session, independent of external network connectivity.

Recommended networking for virtualisation

If virtualisation is your only requirement, your network topology could include the following elements:

• Plant network
• Storage network
• Hyper-V network

Table 3 shows recommended minima for System Platform configurations.

Item                   | Cores   | RAM (GB) | Storage (GB)
Galaxy Repository node | 2-4     | 2-4      | 100/250
Historian              | 2-4     | 2-4      | 250/500
Application Server     | 2-2/4   | 2-4      | 100/100
RDS Servers            | 2-4/8   | 2-4/8    | 100/100
Information Servers    | 2-4     | 2-4      | 100/100
Historian Clients      | 2-2     | 2-4      | 100/100

Table 3: Minimum System Platform configurations. In each cell the first figure applies to small systems and the remaining figure(s) to medium and large systems (printed in black and red respectively in the original table)

July/August 2012 | 11
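The minima in Table 3 lend themselves to a quick scripted sanity check of a planned VM layout. The sketch below is illustrative: the small/medium/large split is our reading of the table's notation, and the helper name is our own.

```python
# Minimum System Platform VM configurations from Table 3, under our reading of
# the notation: first figure = small systems, remaining figures = medium/large.
# Each tuple: (cores, ram_gb, storage_gb).
TABLE_3 = {
    "Galaxy Repository node": {"small": (2, 2, 100), "medium": (4, 4, 250), "large": (4, 4, 250)},
    "Historian":              {"small": (2, 2, 250), "medium": (4, 4, 500), "large": (4, 4, 500)},
    "Application Server":     {"small": (2, 2, 100), "medium": (2, 4, 100), "large": (4, 4, 100)},
    "RDS Servers":            {"small": (2, 2, 100), "medium": (4, 4, 100), "large": (8, 8, 100)},
    "Information Servers":    {"small": (2, 2, 100), "medium": (4, 4, 100), "large": (4, 4, 100)},
    "Historian Clients":      {"small": (2, 2, 100), "medium": (2, 4, 100), "large": (2, 4, 100)},
}

def meets_minimum(node, size, cores, ram_gb, storage_gb):
    """True if a planned VM meets the Table 3 minimum for its role and system size."""
    min_cores, min_ram, min_storage = TABLE_3[node][size]
    return cores >= min_cores and ram_gb >= min_ram and storage_gb >= min_storage

# Example: a 4-core, 8 GB, 500 GB VM destined to be a large system's Historian
print(meets_minimum("Historian", "large", 4, 8, 500))  # True
```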
Availability

Levels of availability

When planning a virtualisation implementation (for High Availability, Disaster Recovery, Fault Tolerance and Redundancy), it is helpful to consider levels or degrees of redundancy and availability, as shown in table 4.

Level 0: No redundancy
Description: No redundancy built into the architecture for safeguarding critical architectural components.
Comments: Expected failover: none.

Level 1: Cold stand-by redundancy
Description: Redundancy at the Application Object level. Safeguards single points of failure at the DAServer level or through AOS redundancy.
Comments: Expected failover: 10 to 60 seconds.

Level 2: High Availability (HA)
Description: • With provision to synchronise in real time • Uses virtualisation techniques • Can be 1-n levels of hot standby • Can be geographically diverse (DR) • Uses standard OS and non-proprietary hardware.
Comments: Expected failover: uncontrolled, 30 seconds to 2 minutes; Disaster Recovery: 2 to 7 minutes.

Level 3: Hot redundancy
Description: Redundancy at the application level, typically provided by Invensys controllers; for example, hot backup of Invensys software such as the Alarm System.
Comments: Expected failover: next cycle or single-digit seconds.

Level 4: Lock-step fault tolerance (FT)
Description: Provides lock-step failover. For ArchestrA System Platform, this would be a Marathon-type solution, which can also be a virtualised system.
Comments: Expected failover: next cycle or without loss of data.

Table 4: Degrees of redundancy and availability that should be considered
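Table 4 can be read as a decision aid: given the worst-case failover time a process can tolerate, pick the lowest level whose expected failover fits. The sketch below encodes the table's bounds as single numbers in seconds; the numeric readings of phrases such as "next cycle" are our own simplification.

```python
# Redundancy/availability levels from Table 4, with a rough upper bound on
# expected failover time in seconds (None = no failover is expected at all).
LEVELS = [
    ("Level 0: No redundancy", None),
    ("Level 1: Cold stand-by redundancy", 60),       # 10 to 60 seconds
    ("Level 2: High Availability (HA)", 120),        # uncontrolled, 30 s to 2 min
    ("Level 3: Hot redundancy", 10),                 # next cycle or single-digit seconds
    ("Level 4: Lock-step fault tolerance (FT)", 0),  # next cycle, no loss of data
]

def minimum_level(max_failover_seconds):
    """Return the lowest (least elaborate) level whose worst-case failover fits."""
    for name, bound in LEVELS[1:]:  # Level 0 offers no failover at all
        if bound is not None and bound <= max_failover_seconds:
            return name
    return LEVELS[-1][0]  # nothing simpler fits; lock-step FT is the fallback

print(minimum_level(90))  # a 90-second budget is already met by Level 1
```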
High Availability

About HA

High Availability refers to the availability of resources in a computer system following the failure or shutdown of one or more components of that system.

At one end of the spectrum, traditional HA has been achieved through custom-designed and redundant hardware. This solution produces High Availability, but has proven to be very expensive. At the other end of the spectrum are software solutions designed to function with off-the-shelf hardware. This type of solution typically results in significant cost reduction and has proven to survive single points of failure in the system.

High Availability scenarios

The basic HA architecture implementation described here consists of an on-line system including a Hyper-V server and a number of virtual PCs, linked by a LAN to an offline duplicate system. The LAN accommodates a number of networks including a plant floor network linked to plant operations, an I/O network linked to field devices and a replication network linked to storage.

The basic architecture shown in figure 4 permits a number of common scenarios:

IT maintains a virtual server
• A system engineer fails over all virtual nodes hosting ArchestrA System Platform software to back up the virtualisation server over the LAN.
• For a distributed system, the system engineer fails over all virtual nodes to back up the virtualisation server over a WAN.
• IT performs the required maintenance, requiring a restart of the primary virtualisation server.

Virtualisation server hardware fails (Note: this scenario is a hardware failure, not software. A program that crashes or hangs is a failure of software within a given OS.)
• The primary virtualisation server hardware fails with a backup virtualisation server on the same LAN.
• For a distributed system, the virtualisation server hardware fails with a backup virtualisation server over WAN.

A network fails on a virtual server
• Any of the primary virtualisation server network components fail with a backup virtualisation server on the same LAN, triggering a backup of virtual nodes to the backup virtualisation server.
• Any of the primary virtualisation server network components fail with a backup virtualisation server connected via WAN, triggering a backup of virtual nodes to the backup virtualisation server over WAN.

For these scenarios, the following expectations apply:
• For the maintenance scenario, all virtual images are up and running from the last state of execution prior to failover.
• For the hardware and network failure scenarios, the virtual images restart following failover.
• For LAN operations, you should see operational disruptions for approximately 2-15 seconds (LAN operations assume recommended speeds and bandwidth).
• For WAN operations, you should see operational disruptions for approximately 2 minutes (WAN operations assume recommended speeds and bandwidth).

Note: The disruption spans described here are general and approximate.

Figure 4: High-Availability architecture
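All of the failure scenarios above follow the same pattern: detect loss of the primary virtualisation server, then restart the virtual nodes on the backup. The toy sketch below illustrates just that detection-and-trigger logic; the callables are hypothetical stand-ins, and a real Hyper-V deployment would use the platform's own clustering and failover tooling rather than hand-rolled monitoring.

```python
# Toy heartbeat monitor illustrating the detection half of a failover.
class FailoverMonitor:
    def __init__(self, is_primary_alive, start_backup, missed_limit=3):
        self.is_primary_alive = is_primary_alive   # callable: ping the primary host
        self.start_backup = start_backup           # callable: restart VMs on the backup
        self.missed_limit = missed_limit
        self.missed = 0

    def tick(self):
        """One heartbeat interval: reset on success, fail over after too many misses."""
        if self.is_primary_alive():
            self.missed = 0
            return "primary-ok"
        self.missed += 1
        if self.missed >= self.missed_limit:
            self.start_backup()
            return "failed-over"
        return "degraded"

# Simulate a primary virtualisation server that dies after two good heartbeats.
alive = iter([True, True, False, False, False])
events = []
mon = FailoverMonitor(lambda: next(alive), lambda: events.append("backup started"))
states = [mon.tick() for _ in range(5)]
print(states)  # ['primary-ok', 'primary-ok', 'degraded', 'degraded', 'failed-over']
```

Requiring several consecutive missed heartbeats before acting is what keeps a transient network hiccup from triggering the full restart disruption described above.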
Disaster Recovery

About DR

Disaster Recovery planning typically involves policies, processes, and planning at the enterprise level, which is well outside the scope of this article.

DR, at its most basic, is all about data protection. The most common strategies for data protection include the following:
• Backups made to tape and sent off-site at regular intervals, typically daily.
• Backups made to disk on-site, automatically copied to an off-site disc, or made directly to an off-site disk.
• Replication of data to an off-site location, making use of storage area network (SAN) technology. This strategy eliminates the need to restore the data; only the systems need to be restored or synchronised.
• High Availability systems which replicate both data and system off-site. This strategy enables continuous access to systems and data.

The ArchestrA System Platform virtualised environment implements the fourth strategy: building DR on an HA implementation.

Disaster Recovery scenarios

The basic DR architecture implementation described here builds on the HA architecture by moving storage to each Hyper-V server and moving the offline system to an off-site location.

The DR scenarios duplicate those described in "High Availability scenarios" above, with the variation that all failovers and backups occur over a WAN as shown in figure 5. For the hardware and network failure scenarios, the virtual images restart following failover.

Figure 5: Disaster Recovery architecture
High Availability with Disaster Recovery

About HADR

The goal of a High Availability and Disaster Recovery (HADR) solution is to provide a means to shift data processing and retrieval to a standby system in the event of a primary system failure.

Typically, HA and DR are considered as individual architectures. HA and DR combined treat these concepts as a continuum. If your system is geographically distributed, for example, HA combined with DR can make it both highly available and able to recover from a disaster quickly.

HADR scenarios

The basic HADR architecture implementation described in this guide builds on both the HA and DR architectures, adding an offline system plus storage at "Site A". This creates a complete basic HA implementation at "Site A" plus a DR implementation at "Site B" when combined with distributed storage.

The scenarios and basic performance metrics described in "High Availability scenarios" above also apply to HADR.

Figure 6: Combined DR and HA architecture
The economics of the cloud

Cloud computing is neither an invention nor a discovery. It is the natural next step in an information delivery evolution that started with mainframes and which is now poised to provide an unprecedented level of decision support to all levels of the enterprise because, whatever their size, they can at last all afford it. Never before have information and access to computing resources seen such a breakthrough. Rolf Harms, Director of Corporate Strategy, and Michael Yamartino, Manager of Corporate Strategy, both from Microsoft Corporation, explain the economics of the cloud.
1. Introduction

Computing is undergoing a seismic shift from client/server to the cloud, a shift similar in importance and impact to the transition from mainframe to client/server. Speculation abounds on how this new era will evolve in the coming years, and IT leaders have a critical need for a clear vision of where the industry is heading. We believe the best way to form this vision is to understand the underlying economics driving the long-term trend. In this paper, we will assess the economics of the cloud by using in-depth modelling. We then use this framework to better understand the long-term IT landscape.

When cars emerged in the early 20th century, they were initially called "horseless carriages". Understandably, people were sceptical at first, and they viewed the invention through the lens of the paradigm that had been dominant for centuries: the horse and carriage. The first cars also looked very similar to the horse and carriage (just without the horse), as engineers initially failed to understand the new possibilities of the new paradigm, such as building for higher speeds or greater safety. Incredibly, engineers kept designing the whip holder into the early models before realising that it wasn't necessary anymore. Initially there was a broad failure to fully comprehend the new paradigm. Banks claimed that, "The horse is here to stay but the automobile is only a novelty, a fad".

Figure 1: Horseless carriage syndrome

Even the early pioneers of the car didn't fully grasp the potential impact their work could have on the world. When Daimler, arguably the inventor of the automobile, attempted to estimate the long-term auto market opportunity, he concluded there could never be more than 1 million cars, because of their high cost and the shortage of capable chauffeurs1. By the 1920s the number of cars had already reached 8 million and today there are over 600 million cars – proving Daimler wrong hundreds of times over. What the early pioneers failed to realise was that profound reductions in both the cost and complexity of operating cars, and a dramatic increase in their importance in daily life, would overwhelm prior constraints and bring cars to the masses.

Today, IT is going through a similar change: the shift from client/server to the cloud. The cloud promises not just cheaper IT, but also faster, easier, more flexible and more effective IT.

Just as in the early days of the car industry, it's currently difficult to see where this new paradigm will take us. The goal of this whitepaper is to help build a framework that allows IT leaders to plan for the cloud transition2. We take a long-term view in our analysis, as this is a prerequisite when evaluating decisions and investments that could last for decades. As a result, we focus on the economics of the cloud rather than on specific technologies or other driving factors like organisational change, as economics often provide a clearer understanding of transformations of this nature.

In Section 2, we outline the underlying economics of the cloud, focusing on what makes it truly different from client/server. In Section 3, we will assess the implications of these economics for the future of IT. We will discuss the positive impact cloud will have but will also discuss the obstacles that still exist today. Finally, in Section 4, we will discuss what's important to consider as IT leaders embark on the journey to the cloud.

2. Economics of the cloud

Economics are a powerful force in shaping industry transformations. Today's discussions on the cloud focus a great deal on technical complexities and adoption hurdles. While we acknowledge that such concerns exist and are important, historically, underlying economics have a much stronger impact on the direction and speed of disruptions, as technological challenges are resolved or overcome through the rapid innovation we've grown accustomed to (Figure 2). During the mainframe era, client/server was initially viewed as a "toy" technology, not viable as a mainframe replacement. Yet, over time, the client/server technology found its way into the enterprise (Figure 3). Similarly, when virtualisation technology was first proposed, application compatibility concerns and potential vendor lock-in were cited as barriers to adoption. Yet underlying economics of 20 to 30 percent savings3 compelled CIOs to overcome these concerns, and adoption quickly accelerated.

Figure 2: Cloud opportunity. Source: Microsoft

1 Source: Horseless Carriage Thinking, William Horton Consulting.
2 Cloud in this context refers to cloud computing architecture, encompassing both public and private clouds.
The emergence of cloud services is again fundamentally shifting the economics of IT. Cloud technology standardises and pools IT resources and automates many of the maintenance tasks done manually today. Cloud architectures facilitate elastic consumption, self-service, and pay-as-you-go pricing.

Cloud also allows core IT infrastructure to be brought into large data centres that take advantage of significant economies of scale in three areas:
• Supply-side savings. Large-scale data centres (DCs) lower costs per server.
• Demand-side aggregation. Aggregating demand for computing smoothes overall variability, allowing server utilisation rates to increase.
• Multi-tenancy efficiency. When changing to a multi-tenant application model, increasing the number of tenants (i.e., customers or users) lowers the application management and server cost per tenant.

2.1 Supply-side economies of scale

Cloud computing combines the best economic properties of mainframe and client/server computing. The mainframe era was characterised by significant economies of scale due to the high up-front costs of mainframes and the need to hire sophisticated personnel to manage the systems. As required computing power – measured in MIPS (million instructions per second) – increased, cost declined rapidly at first (Figure 4), but only large central IT organisations had the resources and the aggregate demand to justify the investment. Due to the high cost, resource utilisation was prioritised over end-user agility. Users' requests were put in a queue and processed only when needed resources were available.

With the advent of minicomputers and later client/server technology, the minimum unit of purchase was greatly reduced, and the resources became easier to operate and maintain. This modularisation significantly lowered the entry barriers to providing IT services, radically improving end-user agility. However, there was a significant utilisation trade-off, resulting in the current state of affairs: data centres sprawling with servers purchased for whatever need existed at the time, but running at just 5%-10% utilisation4.

Cloud computing is not a return to the mainframe era, as is sometimes suggested, but in fact offers users economies of scale and efficiency that exceed those of a mainframe, coupled with modularity and agility beyond what client/server technology offered, thus eliminating the trade-off.

The economies of scale emanate from the following areas:

• Cost of power. Electricity cost is rapidly rising to become the largest element of total cost of ownership (TCO)5, currently representing 15%-20%. Power Usage Effectiveness (PUE)6 tends to be significantly lower in large facilities than in smaller ones. While the operators of small data centres must pay the prevailing local rate for electricity, large providers can pay less than one-fourth of the national average rate by locating their data centres in locations with inexpensive electricity supply and through bulk purchase agreements7. In addition, research has shown that operators of multiple data centres are able to take advantage of geographical variability in electricity rates, which can further reduce energy cost.

Figure 3: Beginning the transition to client/server technology. Source: "How convention shapes our market" longitudinal survey, Shana Greenstein, 1997
Figure 4: Economies of scale (illustrative)

3 Source: "Dataquest Insight: Many Midsize Businesses Looking Toward 100% Server Virtualisation". Gartner, May 8, 2009.
4 Source: The Economics of Virtualisation: Moving Toward an Application-Based Cost Model, IDC, November 2009.
5 Not including app labour. Studies suggest that for low-efficiency data centres, three-year spending on power and cooling, including infrastructure, already outstrips three-year server hardware spending.
6 Power Usage Effectiveness equals total power delivered into a data centre divided by critical power – the power needed to actually run the servers. Thus, it measures the efficiency of the data centre in turning electricity into computation. The best theoretical value is 1.0, with higher numbers being worse.
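The power argument is easy to make concrete. The facility bill is the critical (server) load scaled up by PUE and priced per kWh, and footnote 7 quotes a 10.15 c/kWh average U.S. commercial rate against 2.2 c/kWh in cheap-power locations. The per-server wattage and PUE values in this sketch are our own assumptions for illustration:

```python
def annual_power_cost(servers, watts_per_server, pue, rate_cents_per_kwh):
    """Annual electricity bill in dollars: critical load scaled by PUE, priced per kWh."""
    kwh = servers * watts_per_server / 1000 * pue * 24 * 365
    return kwh * rate_cents_per_kwh / 100

# Small enterprise DC: assumed 200 W servers, assumed PUE 2.0, average U.S. rate.
small = annual_power_cost(1000, 200, 2.0, 10.15)
# Mega DC: same fleet, assumed PUE 1.2, cheap-power location (2.2 c/kWh, footnote 7).
large = annual_power_cost(1000, 200, 1.2, 2.2)

print(f"enterprise-style: ${small:,.0f}/yr; mega-DC-style: ${large:,.0f}/yr; "
      f"ratio {small / large:.1f}x")
```

Under these assumptions the same 1,000 servers cost roughly 7-8 times more to power in the small facility, which is the combined effect of the PUE gap and the electricity-rate gap described above.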
• Buying power. Operators of large data centres can get discounts on hardware purchases of up to 30 percent over smaller buyers. This is enabled by standardising on a limited number of hardware and software architectures. Recall that for the majority of the mainframe era, more than 10 different architectures coexisted. Even client/server included nearly a dozen UNIX variants and the Windows Server OS, and x86 and a handful of RISC architectures. Large-scale buying power was difficult in this heterogeneous environment. With cloud, infrastructure homogeneity enables scale economies.

• Infrastructure labour costs. While cloud computing significantly lowers labour costs at any scale by automating many repetitive management tasks, larger facilities are able to lower them further than smaller ones. While a single system administrator can service approximately 140 servers in a traditional enterprise8, in a cloud data centre the same administrator can service thousands of servers. This allows IT employees to focus on higher value-add activities like building new capabilities and working through the long queue of user requests every IT department contends with.

• Security and reliability. While often cited as a potential hurdle to public cloud adoption, increased need for security and reliability leads to economies of scale due to the largely fixed level of investment required to achieve operational security and reliability. Large commercial cloud providers are often better able to bring deep expertise to bear on this problem than a typical corporate IT department, thus actually making cloud systems more secure and reliable.

Going forward, there will likely be many additional economies of scale that we cannot yet foresee. The industry is at the early stages of building data centres at a scale we've never seen before (Figure 5). The massive aggregate scale of these mega DCs will bring considerable and ongoing R&D to bear on running them more efficiently, and make them more efficient for their customers. Providers of large-scale DCs, for which running them is a primary business goal, are likely to benefit more from this than smaller DCs which are run inside enterprises.

Figure 5: Relatively recent large data centre projects. Source: Press releases

2.2 Demand-side economies of scale

The overall cost of IT is determined not just by the cost of capacity, but also by the degree to which the capacity is efficiently utilised. We need to assess the impact that demand aggregation will have on the costs of actually utilised resources (CPU, network, and storage)9.

In the non-virtualised data centre, each application/workload typically runs on its own physical server10. This means the number of servers scales linearly with the number of server workloads. In this model, utilisation of servers has traditionally been extremely low, around 5 to 10 percent11.

Figure 6: Random variability (exchange server). Source: Microsoft

7 Source: U.S. Energy Information Administration (July 2010) and Microsoft. While the average U.S. commercial rate is 10.15 cents per kilowatt hour, some locations offer power for as little as 2.2 cents per kilowatt hour.
8 Source: James Hamilton, Microsoft Research, 2006.
9 In this paper, we talk generally about "resource" utilisation. We acknowledge there are important differences among resources. For example, because storage has fewer usage spikes compared with CPU and I/O resources, the impact of some of what we discuss here will affect storage to a smaller degree.
10 Multiple applications can run on a single server, of course, but this is not common practice. It is very challenging to move a running application from one server to another without also moving the operating system, so running multiple applications on one operating system instance can create bottlenecks that are difficult to remedy while maintaining service, thereby limiting agility. Virtualisation allows the application plus operating system to be moved at will.
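The labour claim above is simple arithmetic. Dividing an administrator's annual cost across 140 servers versus thousands of servers shows how the per-server labour line collapses at cloud scale; the $100,000 fully loaded salary and the 5,000-server figure are our own assumptions, while the 140-server ratio comes from footnote 8.

```python
def admin_cost_per_server(annual_admin_cost, servers_per_admin):
    """Annual administration labour attributed to a single server."""
    return annual_admin_cost / servers_per_admin

# Assumed fully loaded cost of one administrator: $100,000/year (our assumption).
traditional = admin_cost_per_server(100_000, 140)    # the enterprise ratio cited above
cloud_scale = admin_cost_per_server(100_000, 5_000)  # "thousands of servers" per admin

print(f"traditional: ${traditional:,.0f} per server-year; "
      f"cloud-scale: ${cloud_scale:,.0f} per server-year")
```

Under these assumptions the labour charge drops from roughly $714 to $20 per server-year, a factor of about 35.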
Virtualisation enables multiple applications to run on a single physical server, each within its optimised operating system instance, so the primary benefit of virtualisation is that fewer servers are needed to carry the same number of workloads. But how will this affect economies of scale? If all workloads had constant utilisation, this would entail a simple unit compression without impacting economies of scale. In reality, however, workloads are highly variable over time, often demanding large amounts of resources one minute and virtually none the next. This opens up opportunities for utilisation improvement via demand-side aggregation and diversification.

We analysed the different sources of utilisation variability and then looked at the ability of the cloud to diversify it away and thus reduce costs. We distinguish five sources of variability and assess how they might be reduced:

1. Randomness. End-user access patterns contain a certain degree of randomness. For example, people check their email at different times (Figure 6). To meet service level agreements, capacity buffers have to be built in to account for a certain probability that many people will undertake particular tasks at the same time. If servers are pooled, this variability can be reduced.

2. Time-of-day patterns. There are daily recurring cycles in people's behaviour: consumer services tend to peak in the evening, while workplace services tend to peak during the workday. Capacity has to be built to account for these daily peaks but will go unused during other parts of the day, causing low utilisation. This variability can be countered by running the same workload for multiple time zones on the same servers (Figure 7) or by running workloads with complementary time-of-day patterns (for example, consumer services and enterprise services) on the same servers.

3. Industry-specific variability. Some variability is driven by industry dynamics. Retail firms see a spike during the holiday shopping season while U.S. tax firms will see a peak before April 15 (Figure 8). There are multiple kinds of industry variability: some recurring and predictable (such as the tax season or the Olympic Games), and others unpredictable (such as major news stories). The common result is that capacity has to be built for the expected peak (plus a margin of error). Most of this capacity will sit idle the rest of the time. Strong diversification benefits exist for industry variability.

4. Multi-resource variability. Compute, storage, and input/output (I/O) resources are generally bought in bundles: a server contains a certain amount of computing power (CPU), storage, and I/O (e.g., networking or disk access). Some workloads like search use a lot of CPU but relatively little storage or I/O, while other workloads like email tend to use a lot of storage but little CPU (Figure 9). While it's possible to adjust capacity by buying servers optimised for CPU or storage, this addresses the issue only to a limited degree because it will reduce flexibility and may not be economical from a capacity perspective. This variability will lead to resources going unutilised unless workload diversification is employed by running workloads with complementary resource profiles.

5. Uncertain growth patterns. The difficulty of predicting future need for computing resources and the long lead-time for bringing capacity online is another source of low utilisation (Figure 10). For start-ups, this is sometimes referred to as the "TechCrunch effect". Enterprises and small businesses all need to secure approval for IT investments well in advance of actually knowing their demand for infrastructure.

Figure 7: Time-of-day patterns for search. Source: Bing search volume over 24-hour period
Figure 8: Industry-specific variability. Source: Alexa Internet
Figure 9: Multi-resource variability (illustrative). Source: Microsoft
Figure 10: Uncertain growth patterns. Source: Microsoft

11 Source: The Economics of Virtualisation: Moving Toward an Application-Based Cost Model, IDC, November 2009.
Even large private companies face this challenge, with firms planning their purchases six to twelve months in advance (Figure 10). By diversifying among workloads across multiple customers, cloud providers can reduce this variability, as higher-than-anticipated demand for some workloads is cancelled out by lower-than-anticipated demand for others.

A key economic advantage of the cloud is its ability to address variability in resource utilisation brought on by these factors. By pooling resources, variability is diversified away, evening out utilisation patterns. The larger the pool of resources, the smoother the aggregate demand profile, the higher the overall utilisation rate, and the cheaper and more efficiently the IT organisation can meet its end-user demands.

We modelled the theoretical impact of random variability of demand on server utilisation rates as we increased the number of servers12. Figure 11 indicates that a theoretical pool of 1,000 servers could be run at approximately 90% utilisation without violating its SLA. This only holds true in the hypothetical situation where random variability is the only source of variability and workloads can be migrated between physical servers instantly without interruption. Note that higher levels of uptime (as defined in a service level agreement or SLA) become much easier to deliver as scale increases.

Clouds will be able to reduce time-of-day variability to the extent that they are diversified amongst geographies and workload types. Within an average organisation, peak IT usage can be twice as high as the daily average. Even in large, multi-geography organisations, the majority of employees and users will live in similar time zones, bringing their daily cycles close to synchrony. Also, most organisations do not tend to have workload patterns that offset one another: for example, the email, network and transaction processing activity that takes place during business hours is not replaced by an equally active stream of work in the middle of the night. Pooling organisations and workloads of different types allows these peaks and troughs to be offset.

Industry variability results in highly correlated peaks and troughs throughout each firm; that is, most of the systems in a retail firm (e.g., web servers, transaction processing, payment processing, databases) will be at peak capacity around the holiday season13. Figure 12 shows industry variability for a number of different industries, with peaks ranging from 1.5x to 10x average usage.

Microsoft services such as Windows Live Hotmail and Bing take advantage of multi-resource diversification by layering different subservices to optimise workloads with different resource profiles (such as CPU bound or storage bound). It is difficult to quantify these benefits, so we have not included multi-resource diversification in our model.

Some uncertain growth pattern variability can be reduced by hardware standardisation and just-in-time procurement, although likely not completely. Based on our modelling, the impact of growth uncertainty for enterprises with up to 1,000 servers is 30 to 40 percent over-provisioning of servers relative to a public cloud service. For smaller companies (for example, Internet start-ups), the impact is far greater.

So far we have made the implicit assumption that the degree of variability will stay the same as we move to the cloud. In reality, it is likely that the variability will significantly increase, which will further increase economies of scale. There are two reasons why this may happen:

• Higher expectation of performance. Today, users have become accustomed to resource constraints and have learned to live with them. For example, users will schedule complex calculations to run overnight, avoid multiple model iterations, or decide to forgo time-consuming and costly supply chain optimisations. The business model of cloud allows a user to pay the same for 1 machine running for 1,000 hours as he would for 1,000 machines running for 1 hour. Today, the user would likely wait 1,000 hours or abandon the project. In the cloud, there is virtually no additional cost to choosing 1,000 machines and accelerating such processes. This will have a dramatic impact on variability. Pixar Animation Studios, for example, runs its computer-animation rendering process on Windows Azure because every frame of their movies takes eight hours to render today on a single processor, meaning it would take 272 years to render an entire movie. As they said, "We are not that patient." With Azure, they can get the job done as fast as they need. The result is huge spikes in Pixar's usage of Azure as they render on-demand.

• Batch processes will become real time. Many processes that were previously batch driven (for example, accurate stock availability for online retailers) will move to real time. Thus, multi-stage processes that were once sequential will now occur simultaneously, such as a manufacturing firm that can tally its inventory, check its order backlog and order new supplies at once. This will amplify utilisation variability.

Figure 11: Diversifying random variability. Source: Microsoft
Figure 12: Industry variability. Source: Microsoft, Alexa Internet, Inc.

12 To calculate economies of scale arising from diversifying random variability, we created a Monte Carlo model to simulate data centres of various sizes serving many random workloads. For each simulated DC, workloads (which are made to resemble hypothetical web usage patterns) were successively added until the expected availability of server resources dropped below a given uptime of 99.9 percent or 99.99 percent. The maximum number of workloads determines the maximum utilisation rate at which the DC's servers can operate without compromising performance.
13 Ideally, we would use the server utilisation history of a large number of customers to gain more insight into such patterns. However, this data is difficult to get and often of poor quality. We therefore used web traffic as a proxy for the industry variability.
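Footnote 12's Monte Carlo procedure can be sketched at small scale: keep admitting random workloads to a simulated pool until capacity is exceeded too often, and read off the utilisation just before that point. The demand distribution, capacities and trial counts below are our own stand-ins, so the numbers will not match Figure 11; what the sketch does reproduce is the mechanism, namely that a larger pool sustains a markedly higher utilisation at the same uptime target.

```python
import random
import statistics

random.seed(7)

def max_utilisation(capacity_servers, trials=200):
    """Admit workloads (each ~1 server of demand on average) until some simulated
    time-step exceeds capacity too often; return the last admissible utilisation."""
    workloads, utilisation = 0, 0.0
    while True:
        workloads += 1
        totals = [sum(random.gauss(1.0, 0.5) for _ in range(workloads))
                  for _ in range(trials)]
        ok = sum(t <= capacity_servers for t in totals) / trials
        if ok < 0.999:  # uptime target from footnote 12
            return utilisation
        utilisation = statistics.mean(totals) / capacity_servers

for cap in (20, 100):
    print(f"{cap}-server pool -> ~{max_utilisation(cap):.0%} max utilisation")
```

With these toy parameters the 20-server pool tops out well below the 100-server pool, the same diversification effect that lets the paper's 1,000-server pool approach 90% utilisation.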
to real-time. Thus, multi-stage processes that were once sequential will now occur simultaneously, such as a manufacturing firm that can tally its inventory, check its order backlog and order new supplies at once. This will amplify utilisation variability.

We note that even the largest public clouds will not be able to diversify away all variability; market-level variability will likely remain. To further smooth demand, sophisticated pricing can be employed. For example, similar to the electricity market (Figure 13), customers can be given the incentive to shift their demand from high-utilisation periods to low-utilisation periods. In addition, a lower price spurs additional usage from customers due to price elasticity of demand. Demand management will further increase the economic benefits of the cloud.

Figure 13: Variable electricity pricing. Source: Ameren Illinois Utilities

2.3 Multi-tenancy economies of scale

The previously described supply-side and demand-side economies of scale can be achieved independent of the application architecture, whether it be traditional scale-up or scale-out, single-tenant or multi-tenant. There is another important source of economies of scale that can be harnessed only if the application is written as a multi-tenant application. That is, rather than running an application instance for each customer – as is done for on-premises applications and most hosted applications such as dedicated instances of Microsoft Office 365 – in a multi-tenant application, multiple customers use a single instance of the application simultaneously, as in the case of shared Office 365. This has two important economic benefits:

• Fixed application labour amortised over a large number of customers. In a single-tenant instance, each customer has to pay for its own application management (that is, the labour associated with update and upgrade management and incident resolution). We've examined data from customers, as well as Office 365-D and Office 365-S, to assess the impact. In dedicated instances, the same activities, such as applying software patches, are performed multiple times – once for each instance. In a multi-tenant instance such as Office 365-S, that cost is shared across a large set of customers, driving application labour costs per customer towards zero. This can result in a meaningful reduction in overall cost, especially for complex applications.

• Fixed component of server utilisation amortised over a large number of customers. For each application instance, there is a certain amount of server overhead. Figure 14 shows an example from Microsoft's IT department in which intraday variability appears muted (only a 16 percent increase between peak and trough) compared to actual variability in user access. This is caused by application and runtime overhead, which is constant throughout the day. By moving to a multi-tenant model with a single instance, this resource overhead can be amortised across all customers. We have examined Office 365-D, Office 365-S and Microsoft Live@edu data to estimate this overhead, but so far it has proven technically challenging to isolate this effect from other variability in the data (for example, user counts and server utilisation) and from architectural differences in the applications. Therefore, we currently assume no benefit from this effect in our model.

Figure 14: Utilisation overhead. Source: Microsoft

Applications can be entirely multi-tenant by being completely written to take advantage of these benefits, or can achieve partial multi-tenancy by leveraging shared services provided by the cloud platform. The greater the use of such shared services, the greater the application will benefit from these multi-tenancy economies of scale.

2.4 Overall impact

The combination of supply-side economies of scale in server capacity (amortising costs across more servers), demand-side aggregation of workloads (reducing variability) and the multi-tenant application model (amortising costs across multiple customers) leads to powerful economies of scale. To estimate the magnitude, we built a cost scaling model which estimates the long-term behaviour of costs.

Figure 15 shows the output for a workload that utilises 10 percent of a traditional server. The model indicates that a 100,000-server data centre has an 80% lower total cost of ownership (TCO) than a 1,000-server data centre.

Figure 15: Economies of scale in the cloud. Source: Microsoft

Figure 16: IT spending breakdown. Source: Microsoft
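The combined effect of the three economies of scale can be made concrete with a deliberately crude cost model. Every constant below (the scale exponent, the utilisation curve, the labour figure) is an assumption chosen for illustration, not a calibrated value from the paper:

```python
import math

def cost_per_workload(n_servers, customers_per_app=1):
    """Toy long-run cost per workload combining the three effects:
    supply-side scale, demand-side pooling, and multi-tenancy."""
    server_tco = 2000 * n_servers ** -0.15          # supply-side: bulk discounts
    utilisation = 0.9 - 0.8 / math.sqrt(n_servers)  # demand-side: pooling
    workloads = n_servers * utilisation / 0.10      # each workload: 10% of a server
    infrastructure = server_tco * n_servers / workloads
    app_labour = 50_000 / customers_per_app         # multi-tenancy amortisation
    return infrastructure + app_labour

small = cost_per_workload(1_000, customers_per_app=10)       # near single-tenant
large = cost_per_workload(100_000, customers_per_app=10_000) # multi-tenant
```

With these assumed numbers, the per-workload cost of the large multi-tenant facility falls well below that of the small one, qualitatively echoing the 80 percent TCO gap the paper's calibrated model reports for 100,000 versus 1,000 servers.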
The economics of the cloud
This raises the question: what impact will the cloud economics we described have on the IT budget? From customer data, we know the approximate breakdown between infrastructure costs, the costs of supporting and maintaining existing applications, and new application development costs (Figure 16). The cloud impacts all three of these areas. The supply-side and demand-side savings mostly impact the infrastructure portion, which comprises over half of spending. Existing application maintenance costs include update and patching labour, end-user support, and licence fees paid to vendors. They account for roughly a third of spending and are addressed by the multi-tenancy efficiency factor.

New application development accounts for just over a tenth of spending14, even though it is seen as the way for IT to innovate. Therefore IT leaders generally want to increase spending here. The economic benefits of cloud computing described here will enable this by freeing up room in the budget to do so. We will touch more on this aspect in the next paragraph as well as in Section 3.

2.5 Harnessing cloud economics

Capturing the benefits described above is not a straightforward task with today's technology. Just as engineers had to fundamentally rethink design in the early days of the car, so too will developers have to rethink the design of applications. Multi-tenancy and demand-side aggregation are often difficult for developers or even sophisticated IT departments to implement on their own. If not done correctly, it could end up either significantly raising the costs of developing applications (thus at least partially nullifying the increased budget room for new application development) or capturing only a small subset of the savings previously described. The best approach to harnessing cloud economics is different for packaged applications than for new/custom applications.

Packaged applications: While virtualising packaged applications and moving them to cloud virtual machines (e.g., virtualised Exchange) can generate some savings, this solution is far from ideal and fails to capture the full benefits outlined in this section. The cause is twofold. First, applications designed to run on a single server will not easily scale up and down without significant additional programming to add load balancing, automatic failover, redundancy and active resource management. This limits the extent to which they can aggregate demand and increase server utilisation. Second, traditional packaged applications are not written for multi-tenancy, and simply hosting them in the cloud does not change this. For packaged applications, the best way to harness the benefits of cloud is to use SaaS offerings like Office 365, which have been structured for scale-out and multi-tenancy to capture the full benefits.

New/custom applications: Infrastructure-as-a-Service (IaaS) can help capture some of the economic benefits for existing applications. Doing so is, however, a bit of a "horseless carriage" in that the underlying platform and tools were not designed specifically for the cloud. The full advantage of cloud computing can only be properly unlocked through a significant investment in intelligent resource management. The resource manager must understand both the status of the resources (networking, storage and compute) and the activity of the applications being run. Therefore, when writing new applications, Platform as a Service (PaaS) most effectively captures the economic benefits. PaaS offers shared services and advanced management and automation features that allow developers to focus directly on application logic rather than on engineering their application to scale.

To illustrate the impact, a start-up named Animoto used Infrastructure-as-a-Service (IaaS) to enable scaling – adding over 3,500 servers to their capacity in just three days as they served over three-quarters of a million new users. Examining their application later, however, the Animoto team discovered that a large percentage of the resources they were paying for were often sitting idle – often over 50%, even in a supposedly elastic cloud. They restructured their application and eventually lowered operating costs by 20%. While Animoto is a cloud success story, it was only after an investment in intelligent resource management that they were able to harness the full benefits of the cloud. PaaS would have delivered many of these benefits "out-of-the-box" without any additional tweaking required.
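The Animoto lesson (paying for capacity that sits idle) can be illustrated with a minimal threshold-based autoscaler. This is a sketch of the general idea only, not Animoto's or any provider's actual mechanism, and the thresholds are invented:

```python
import math

def autoscale(hourly_demand, low=0.3, high=0.8, start=4):
    """Grow the fleet when average utilisation runs hot, release servers
    when they sit idle. Demand is expressed in 'server units' per hour."""
    servers, plan = start, []
    for demand in hourly_demand:
        utilisation = min(demand / servers, 1.0)
        if utilisation > high:                   # scale out to cover demand
            servers = max(servers + 1, math.ceil(demand / high))
        elif utilisation < low and servers > 1:  # decommission an idle server
            servers -= 1
        plan.append(servers)
    return plan

plan = autoscale([1, 1, 6, 9, 9, 2, 1, 1])
# the fleet grows through the traffic spike, then shrinks back down
# instead of leaving more than half the servers idle
```

Even this naive policy avoids the "over 50% idle" situation the text describes; real intelligent resource management adds prediction, migration and billing awareness on top, which is what PaaS platforms aim to supply out of the box.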
3. Implications

In this section, we will discuss the implications of the previously described economics of the cloud. We will discuss the ability of private clouds to address some of the barriers to cloud adoption and assess the cost gap between public and private clouds.
3.1 Possibilities and obstacles

Figure 17: Capturing cloud benefits. Source: Microsoft
The economics we described in section 2
14 New application development costs include only the cost of designing and writing the application, excluding the cost of hosting it on new infrastructure. Adding these costs results in the 80% / 20% split seen elsewhere.
will have a profound impact on IT. Many IT leaders today are faced with the problem that 80% of the budget is spent on "keeping the lights on" and maintaining existing services and infrastructure. This leaves few resources available for innovation or for addressing the never-ending queue of new business and user requests. Cloud computing will free up significant resources that can be redirected to innovation. Demand for general-purpose technologies like IT has historically proven to be very price elastic (Figure 18). Thus, many IT projects that previously were cost-prohibitive will now become viable thanks to cloud economics.

Figure 18: Price elasticity of storage. Source: Coughlin Associates

However, lower TCO is only one of the key drivers that will lead to a renewed level of innovation within IT:

• Elasticity is a game-changer because, as described before, renting one machine for 1,000 hours will be nearly equivalent to renting 1,000 machines for one hour in the cloud. This will enable users and organisations to rapidly accomplish complex tasks that were previously prohibited by cost or time constraints. Being able to scale resource intensity up and down nearly instantly enables a new class of experimentation and entrepreneurship.

• Elimination of capital expenditure will significantly lower the risk premium of projects, allowing for more experimentation. This both lowers the cost of starting an operation and lowers the cost of failure or exit – if an application no longer needs certain resources, they can be decommissioned with no further expense or write-off.

• Self-service. Provisioning servers through a simple web portal rather than through a complex IT procurement and approval chain can lower friction in the consumption model, enabling rapid provisioning and integration of new services. Such a system also allows projects to be completed in less time, with less risk and lower administrative overhead than previously.

• Reduction of complexity. Complexity has been a long-standing inhibitor of IT innovation. From an end-user perspective, SaaS is setting a new bar for user-friendly software. From a developer perspective, Platform as a Service (PaaS) greatly simplifies the process of writing new applications, in the same way that cars greatly reduced the complexity of transportation by eliminating, for example, the need to care for a horse.

These factors will significantly increase the value add delivered by IT. Elasticity enables applications like yield management, complex event processing, logistics optimisation and Monte Carlo simulation, as these workloads exhibit nearly infinite demand for IT resources. The result will be a massively improved experience, including scenarios like real-time business intelligence analytics and HPC for the masses.

However, many surveys show that significant concerns currently exist around cloud computing. As Figure 19 shows, security, privacy, maturity and compliance are the top concerns. Many CIOs also worry about legacy compatibility: it is often not straightforward to move existing applications to the cloud.

Figure 19: Public cloud concerns. Source: Gartner CIO survey

• Security and Privacy – CIOs must be able to report to their CEO and other executives how the company's data is being kept private and secure. Financially and strategically important data and processes are often protected by complex security requirements. Legacy systems have typically been highly customised to achieve these goals, and moving to a cloud architecture can be challenging. Furthermore, experience with the built-in, standardised security capabilities of cloud is still limited, and many CIOs still feel more confident with legacy systems in this regard.

• Maturity and Performance – The cloud requires CIOs to trust others to provide reliable and highly available services. Unlike on-premises outages, cloud outages are often highly visible and may increase concerns.

• Compliance and Data Sovereignty – Enterprises are subject to audits and oversight, both internal and external (e.g. IRS, SEC). Companies in many countries have data sovereignty requirements that severely restrict where they can host data services. CIOs ask: which clouds can comply with these requirements, and what needs to be done to make them compliant?

While many of these concerns can be addressed by the cloud today, concerns remain and are prompting IT leaders to explore private clouds as a way of achieving the benefits of cloud while solving these problems. Next, we will explore this in more detail and also assess the potential trade-offs.

3.3 Private clouds

Microsoft distinguishes between public and private clouds based on whether the IT resources are shared between many distinct organisations (public cloud) or dedicated to a single organisation (private cloud). This taxonomy is illustrated in Figure 20. Compared to traditional virtualised data centres, both private and public clouds benefit from automated management (to save on repetitive labour) and homogeneous hardware (for lower cost and increased flexibility). Due to the broadly-shared nature of public clouds, a key difference between private and public clouds is the scale and scope at which they can pool demand.

• Traditional virtualised data centres generally allow pooling of resources within existing organisational boundaries — that is, the corporate IT group virtualises its workloads, while departments may or may not do the
same. This can diversify away some of the random, time-of-day (especially if the company has offices globally) and workload-specific variability, but the size of the pool and the difficulty of moving loads from one virtual machine to another (exacerbated by the lack of homogeneity in hardware configurations) limit the ability to capture the full benefits. This is one of the reasons why even virtualised data centres still suffer from low utilisation. There is no application model change, so the complexity of building applications is not reduced.

• Private clouds move beyond virtualisation. Resources are now pooled across the company rather than by organisational unit15, and workloads are moved seamlessly between physical servers to ensure optimal efficiency and availability. This further reduces the impact of random, time-of-day and workload variability. In addition, new, cloud-optimised application models (Platform as a Service, such as Azure) enable more efficient application development and lower ongoing operations costs.

• Public clouds have all the same architectural elements as private clouds, but bring massively higher scale to bear on all sources of variability. Public clouds are also the only way to diversify away industry-specific variability and the full geographic element of time-of-day variability, and to bring multi-tenancy benefits into effect.

Figure 20: Comparing virtualisation, private cloud and public cloud. Source: Microsoft

Private clouds can address some of the previously mentioned adoption concerns. By having dedicated hardware, they are easier to bring within the corporate firewall, which may ease concerns around security and privacy. Bringing a private cloud on-premise can make it easier to address some of the regulatory, compliance and sovereignty concerns that can arise with services that cross jurisdictional boundaries. In cases where these concerns weigh heavily in an IT leader's decision, an investment in a private cloud may be the best option.

Private clouds do not really differ from public clouds regarding other concerns, such as maturity and performance. Public and private cloud technologies are developing in tandem and will mature together. A variety of performance levels will be available in both public and private form, so there is little reason to expect that one will have an advantage over the other16. While private clouds can alleviate some of the concerns, in the next paragraph we will discuss whether they will offer the same kind of savings described earlier.

3.4 Cost trade-off

While it should be clear from the prior discussion that conceptually the public cloud has the greatest ability to capture diversification benefits, we need to get a better sense of the magnitude. Figure 21 shows that while the public cloud addresses all sources of variability, the private cloud can address only a subset. For example, industry variability cannot be addressed by a private cloud, while growth variability can be addressed only to a limited degree if an organisation pools all its internal resources in a private cloud. We modelled all of these factors, and the output is shown in Figure 22.

Figure 21: Diversification benefits. Source: Microsoft

The lower curve shows the cost for a public cloud (the same as the curve shown in Figure 15). The upper curve shows the cost of a private cloud. The public cloud curve is lower at every scale due to the greater impact of demand aggregation and the multi-tenancy effect. Global-scale public clouds are likely to become extremely large, at least 100,000 servers in size, or possibly much larger, whereas the size of an organisation's private cloud will depend on its demand and budget for IT.

Figure 22 also shows that for organisations with a very small installed base of servers (<100), private clouds are prohibitively expensive compared to public cloud. The only way for these small organisations or departments to share in the benefits of at-scale cloud computing is by moving to a public cloud.

Figure 22: Cost benefit of public cloud. Source: Microsoft

15 Aggregation across organisational units is enabled by two key technologies: live migration, which moves virtual machines while they remain operational, thereby enabling more dynamic optimisation; and self-service provisioning and billing.

16 Private clouds do allow for a greater degree of customisation than public clouds, which could enhance performance for a certain computational task. Customisation requires R&D effort and expense, however, so it is difficult to make a direct price/performance comparison.
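The diversification argument in Section 3.4 can be checked with a small simulation. Using an assumed bursty workload shape (a 10-percent baseline plus occasional spikes, chosen for illustration), pooling many independent workloads pulls the peak of aggregate demand toward its mean:

```python
import random

def peak_to_mean(n_workloads, hours=1000, seed=7):
    """Simulate aggregate demand of n independent bursty workloads and
    return its peak-to-mean ratio. When provisioning for the peak, the
    achievable utilisation is roughly the reciprocal of this ratio."""
    rng = random.Random(seed)
    totals = [sum(0.1 + (0.9 if rng.random() < 0.05 else 0.0)
                  for _ in range(n_workloads))
              for _ in range(hours)]
    return max(totals) / (sum(totals) / hours)

private = peak_to_mean(50)     # departmental-scale private pool
public = peak_to_mean(5000)    # large shared public pool
# the big pool's peak sits only a few percent above its mean demand
```

The small pool must provision far more headroom above its average demand than the large one, which is why the private cloud cost curve sits above the public cloud curve at every scale.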
3.5 Finding balance today: weighing the benefits of private cloud against the costs
We've mapped a view of how public and private clouds measure up in Figure 23. The vertical axis measures the public cloud cost advantage. From the prior analysis, we know the public cloud has inherent economic advantages that will partially depend on customer size, so each bubble's vertical position depends on the size of the server installed base. The horizontal axis represents the organisation's preference for private cloud. The size of the circles reflects the total server installed base of companies of each type. The bottom-right quadrant thus represents the most attractive areas for private clouds (relatively low cost premium, high preference).

Figure 23: Cost and benefit of private clouds. Source: Microsoft

For large agencies with an installed base of approximately 1,000 servers, private clouds are feasible but come with a significant cost premium of about 10 times the cost of a public cloud for the same unit of service, due to the combined effect of scale, demand diversification and multi-tenancy.

In addition to the increase in TCO, private clouds also require an upfront investment to deploy – an investment that must accommodate peak demand requirements. This involves separate budgeting and commitment, increasing risk. Public clouds, on the other hand, can generally be provisioned entirely on a pay-as-you-go basis.

We acknowledge that Figure 23 provides a simplified view. IT is not monolithic within any of these industry segments. Each organisation's IT operation is segmented into workload types, such as email or ERP. Each of these has a different level of sensitivity and scale, and CIO surveys reveal that preference for public cloud solutions currently varies greatly across workloads (Figure 24).

Figure 24: Cloud-ready workloads (2010). Source: Microsoft survey question: "In the next 24 months, please indicate if a cloud offering would augment the on-premise offering or completely replace it."

An additional factor is that many application portfolios have been developed over the past 15-30 years and are tightly woven together. This particularly holds true for ERP and related custom applications at larger companies, which have more sizeable application portfolios. Applications that are more "isolated", such as CRM, collaboration or new custom applications, may be more easily deployed in the cloud. Some of those applications may need to be integrated back to current on-premises applications.

Before we draw final conclusions, we need to make sure we avoid the "horseless carriage syndrome" and consider the likely shift along the two axes (economics and private preference).

3.6 The long view: cloud transition over time

As we pointed out in the introduction of this paper, it is dangerous to make decisions during the early stages of a disruption without a clear vision of the end state. IT leaders need to design their architecture with a long-term vision in mind. We therefore need to consider how the long-term forces will impact the position of the bubbles in Figure 23. We expect two important shifts to take place. First, the economic benefit of public cloud will grow over time. As more and more work is done on public clouds, the economies of scale we described in Section 2 will kick in, and the cost premium on private clouds will increase over time. Customers will increasingly be able to tap into the supply-side, demand-side and multi-tenancy savings discussed previously. As shown in Figure 25, this leads to an upward shift along the vertical axis. At the same time, some of the barriers to cloud adoption will begin to fall. Many technology case studies show that, over time, concerns over issues like compatibility, security, reliability and privacy will be addressed. This will likely also happen for the cloud, which would represent a shift to the left in Figure 25. Below we explore some of the factors that cause this latter shift.

Cloud security will evolve

Public clouds are in a relatively early stage of development, so naturally critical areas like reliability and security will continue to improve. Data already suggests that public cloud e-mail is more reliable than most on-premises implementations. In PaaS, the
automatic patching and updating of cloud systems greatly improves the security of all data and applications, as the majority of exploited vulnerabilities take advantage of systems that are out of date. Many security experts argue that there are no fundamental reasons why public clouds would be less secure; in fact, they are likely to become more secure than on-premises systems due to the intense scrutiny providers must place on security and the deep level of expertise they are developing.

Clouds will become more compliant

Compliance requirements can come from within an organisation, an industry, or a government (e.g., the European Data Protection Directive) and may currently be challenging to meet in the cloud without a robust development platform designed for enterprise needs. As cloud technologies improve and as compliance requirements adapt to accommodate cloud architectures, the cloud will continue to become more compliant and therefore feasible for more organisations and workloads. This was the case, for example, with e-signatures, which were not accepted for many contracts and documents in the early days of the Internet. As authentication and encryption technology improved and as compliance requirements changed, e-signatures became more acceptable. Today, most contracts (including those for opening bank accounts and taking out loans) can be signed with an e-signature.

The large group of customers who are rapidly increasing their reliance on public clouds — small and medium businesses (SMBs) and consumers of Software as a Service (SaaS) — will be a formidable force of change in this area. This growing constituency will continue to ask governments to accommodate the shift to cloud by modernising legislation. This regulatory evolution will make the public cloud a more viable alternative for large enterprises and thus move segments along the horizontal axis toward public cloud preference.

Figure 25: Expected preference shift for public and private cloud. Source: Microsoft

Decentralised IT (also known as 'rogue IT') will continue to lead the charge

Many prior technology transitions were led not by CIOs but by departments, business decision-makers, developers and end users – often in spite of the objections of CIOs. For example, both PCs and servers were initially adopted by end users and departments before they were officially embraced by corporate IT policies. More recently, we saw this with the adoption of mobile phones, where consumer adoption is driving IT to support these devices. We're seeing a similar pattern in the cloud: developers and departments have started using cloud services, often without the knowledge of the IT group (hence the name "rogue clouds"). Many business users will not wait for their IT group to provide them with a private cloud; for these users, productivity and convenience often trump policy.

It is not just impatience that drives "rogue clouds"; ever-increasing budgetary constraints can lead users and even departments to adopt cheaper public cloud solutions that would not be affordable through traditional channels. For example, when Derek Gottfrid wanted to process all 4TB of the New York Times archives and host them online, he went to the cloud without the knowledge of the Times' IT department17. Similarly, the unprecedented pricing transparency that the public cloud offers will put further pressure from the CEO and CFO on CIOs to move to the public cloud.

CIOs should acknowledge that these behaviours are commonplace early in a disruption and either rapidly develop and implement a private cloud with the same capabilities or adopt policies which incorporate some of this behaviour, where appropriate, in IT standards.

17 http://open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/

Figure 26: Increasing adoption of software as a service (SaaS). Source: Gartner

Perceptions are rapidly changing

Strength in SaaS adoption in large enterprises serves as proof of changing perceptions (Figure 26) and indicates that even large, demanding enterprises are
moving to the left on the horizontal axis (i.e., reduced private preference). Just a few years ago, very few large companies were willing to shift their e-mail, with all the confidential data it contains, to a cloud model. Yet this is exactly what is happening today.

As positive use cases continue to spur more interest in cloud technology, this virtuous cycle will accelerate, driving greater interest in and acceptance of the cloud.

In summary, while there are real hurdles to cloud adoption today, these will likely diminish over time. While new, unforeseen hurdles to public cloud adoption may appear, the public cloud economic advantage will grow stronger with time as cloud providers unlock the benefits of the economics we discussed in Section 2. While the desire for a private cloud is mostly driven by security and compliance concerns around existing workloads, the cost-effectiveness and agility of the public cloud will enable new workloads.

Revisiting our "horseless carriage" analogy, we see that cars became a huge success not simply because they were faster and better (and eventually more affordable) than horse-drawn carriages. The entire transportation ecosystem had to change. Highway systems, driver training programmes, accurate maps and signage, targeted safety regulation and a worldwide network of fuelling infrastructure all had to be developed to enable this transition. Each successive development improved the value proposition of the car. In the end, even people's living habits changed around the automobile, resulting in the explosion of the suburbs in the middle part of the 20th century. This created "net new" demand for cars by giving rise to the commuting professional class. This behavioural change represented a massive positive feedback loop that inexorably made the automobile an essential, irreplaceable component of modern life. Similarly, we believe the cloud will be enabled and driven not just by economics and qualitative developments in technology and perception, but by a series of shifts from IT professionals, regulators, telecom operators, ISVs, systems integrators and cloud platform providers. As the cloud is embraced more thoroughly, its value will increase.

4. The journey to the cloud

Because we are in the early days of the cloud paradigm shift, there is much confusion about the direction of this ongoing transformation. In this paper, we looked beyond the current technology and focused on the underlying economics of cloud to define the destination – where all of this disruption and innovation is leading our industry. Based on our analysis, we see a long-term shift to cloud driven by three important economies of scale:

• Larger data centres can deploy computational resources at significantly lower cost than smaller ones;

• Demand pooling improves the utilisation of these resources, especially in public clouds; and

• Multi-tenancy lowers application maintenance labour costs for large public clouds.

Finally, the cloud offers unparalleled levels of elasticity and agility that will enable exciting new solutions and applications.

For businesses of all sizes, the cloud represents tremendous opportunity. It represents an opportunity to break out of the longstanding tradition of IT professionals spending 80 percent of their time and budget "keeping the lights on", with few resources left to focus on innovation. Cloud services will enable IT groups to focus more on innovation while leaving non-differentiating activities to reliable and cost-effective providers. Cloud services will enable IT leaders to offer new solutions that were previously seen as either cost-prohibitive or too difficult to implement. This is especially true of cloud platforms (Platform as a Service), which significantly reduce the time and complexity of building new applications that take advantage of all the benefits of the cloud.

Figure 27: Segmenting the IT portfolio. Source: Microsoft

This future won't materialise overnight. IT leaders need to develop a new 5- to 10-year vision of the future, recognising that they and their organisations will play a fundamentally new role in their company. They need to plot a path that connects where they are today to that future. An important first step is to segment their portfolio of existing applications (Figure 27). For some applications, the economic and agility benefits may be very strong, so they should be migrated quickly. However, barriers do exist today, and while we outlined in Section 3 that many of them will be overcome over time, the cloud may not be ready for some applications today. For tightly-integrated applications with fairly stable usage patterns, it may not make sense to move them at all, similar to how
some mainframe applications were never migrated to client/server. While new custom applications don't have the legacy problem, designing them in a scalable, robust fashion is not always an easy task. Cloud-optimised platforms (Platform as a Service) can dramatically simplify this task.

From this segmentation, IT leaders can determine what parts of their IT operation are suitable for public cloud and what might justify an investment in private cloud. Beginning in this manner takes advantage of the opportunity of cloud while striking a balance between economics and security, performance and risk.

This transition is a delicate balancing act. If the IT organisation moves too quickly in areas where the cloud is not ready, it can compromise business continuity, security and compliance. If it moves too slowly, it can put the company at a significant competitive disadvantage versus competitors who do take full advantage of cloud capabilities, giving up a cost, agility or value advantage. Moving too slowly also increases the risk that different groups or individuals within the company will each adopt their own cloud solution in a fragmented and uncontrolled manner.

Over the last three decades, Microsoft has developed strong relationships with IT organisations, their partners and their advisors. This offers us an unparalleled understanding of the challenges faced by today's IT organisations. Microsoft is both committed to the cloud vision and has the experience to help IT leaders on the journey.

…a Platform as a Service that reduces complexity for developers and IT administrators.

Microsoft also brings to the cloud the richest partner community in the world. We have over 600,000 partners in more than 200 countries servicing millions of businesses. We are working with our partners on the cloud transition. Together we are building the most secure, reliable, scalable and available cloud in the world.

To accomplish this, IT leaders need a partner who is firmly committed to the long-term vision of the cloud and its opportunities, one who is not hanging on to legacy IT architectures. At the same time, this partner needs to be firmly rooted in the realities of today's IT so that it understands current challenges and how best to navigate the journey to the cloud. IT leaders need a partner who is incentivised neither to push for change faster than is responsible nor to keep IT the same. Customers need a partner who has done the hard work of figuring out
fashion (“rogue IT”), wresting control over IT
how best to marry legacy IT with the cloud,
Microsoft has a long history of bringing to
from the CIO. IT leaders who stay ahead of
rather than placing that burden on the
life powerful visions of the future. Bill Gates
the cloud trend will be able to control and
customer by ignoring the complexities of
founded Microsoft on the vision of putting
shape this transition; those who lag behind
this transformation.
a PC in every home and on every desktop
are already collaborating with thousands of
will increasingly lose control.
in an era when only the largest corporations At Microsoft, we are “all in” on the cloud. We
could afford computers. In the journey
To lead the transition, IT leaders need to
provide both commercial SaaS (Office 365)
that followed, Microsoft and our partners
think about the long term architecture of
and a cloud computing platform (Windows
helped bring PCs to over one billion homes
their IT. Some see a new role emerging,
Azure Platform). Office 365 features the
and desktops. Millions of developers and
that of a Cloud Services Architect, who
applications customers are familiar with like
businesses make their living on PCs and we
determines which applications and services
Exchange email and SharePoint collaboration,
are fortunate to play a role in that.
move to the cloud and exactly when such
delivered through Microsoft‘s cloud. Windows
a move takes place based on a business
Azure is our cloud computing platform,
Now, we have a vision of bringing the power
case and a detailed understanding of the
which enables customers to build their own
of cloud computing to every home, every
cloud capabilities available. This should start
applications and IT operations in a secure,
office and every mobile device. The powerful
by taking inventory of the organisation’s
scalable way in the cloud. Writing scalable
economics of the cloud drive all of us towards
resources and policies. This includes an
and robust cloud applications is no easy
this vision. Join Microsoft and our partners on
application and data classification exercise
feat, so we built Windows Azure to harness
the journey to bring this vision to life.
to determine which policy or performance
Microsoft‘s expertise in building our cloud-
requirements (such as confidential or top
optimised applications like Office 365, Bing,
secret data retention requirements) apply
and Windows Live Hotmail. Rather than just
to which applications and data. Based on
moving virtual machines to the cloud, we build
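The application and data classification exercise described earlier — sorting a portfolio between public cloud, private cloud and staying on-premises — can be sketched as a simple rule set. The Python below is purely illustrative: the attribute names, categories and thresholds are assumptions for the sketch, not rules taken from the article.

```python
# Illustrative sketch of an application/data classification exercise.
# Attribute names and decision rules are assumptions, not from the article.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    data_sensitivity: str    # "public", "confidential" or "top-secret"
    usage_pattern: str       # "stable" or "variable"
    tightly_integrated: bool

def classify(app: Application) -> str:
    """Suggest a deployment target for one application."""
    # Top-secret retention requirements rule out shared infrastructure.
    if app.data_sensitivity == "top-secret":
        return "on-premises"
    # Confidential data might justify an investment in private cloud.
    if app.data_sensitivity == "confidential":
        return "private cloud"
    # Tightly-integrated apps with stable usage may not be worth moving.
    if app.tightly_integrated and app.usage_pattern == "stable":
        return "keep as-is"
    return "public cloud"

portfolio = [
    Application("payroll", "confidential", "stable", True),
    Application("marketing site", "public", "variable", False),
]
for app in portfolio:
    print(f"{app.name}: {classify(app)}")
```

In practice the inventory would carry many more attributes (compliance regimes, latency needs, licensing), but the shape of the exercise — classify first, then decide public versus private cloud — is the same.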
July/August 2012 | 29
The cloud ... here to stay or fading drizzle?

Will the cloud work for manufacturing? Will plants accept cloud-based applications? Is the cloud just a vendor-driven fad?

Gerhard Greeff – Divisional Manager at Bytes Process Management and Control

Introduction

Recent technology developments have been focused more on collaboration, optimisation, consolidation, doing more with less, treating the cloud and IT services as utilities, etc.

Within manufacturers' organisations, there has been a profound move by business, the true owners of applications, to take back ownership of the business, while the IT department is tasked with providing an infrastructure that can deliver or make available these applications securely, anywhere and on any device.

This trend is quite evident in employees bringing their own devices to the office and demanding to have their enterprise applications available on them. Employees and departments are becoming increasingly self-sufficient in meeting their own IT needs. Products and applications have become easier to use, and technology offerings are addressing an ever-widening range of business requirements in areas such as video-conferencing, digital imaging, employee collaboration, sales force support and systems back-up, to mention but a few.

Considering the global economic environment, the business capability made possible by technology and the above-mentioned changing technology-consumption trends, organisations today face an increasing challenge to maintain the existing whilst driving innovation to continuously improve business efficiency and profitability.

One of the more recent hypes punters are promoting is cloud services, but is this technology a real contributor or simply another vendor-driven fad? Certainly, there may be success stories in some industries. But do they really win, or are they just guinea pigs? I believe only time will tell. The manufacturing industry is holding out, but at the 2011 MESA Conference the theme was around cloud technology. This indicates that even within manufacturing there is at least some interest.

"What value can this technology bring to manufacturing?" is the predominant question manufacturing technologists ask themselves. The cloud, by nature, is virtual and not tied to any specific hardware or site. For manufacturing plants specifically, this sounds uncertain and non-secure. Certainly at Supply Chain and Enterprise level the cloud could add value, as these systems do not need to be on-line 24/7/365. But to what degree can manufacturing plants trust technology that absolutely, positively has to deliver all the time? What will happen if the technology fails to deliver? In the case of manufacturing plants, at best it will mean that production stops; at worst, someone may die.

In addition, there are security considerations at plant level to take into account. How secure is the cloud infrastructure, and does it suit manufacturing needs? I believe that adequate security can be applied at cloud level, but this is normally IT-level security with levels of encryption that may slow down data transfer rates, something true manufacturing facilities can ill afford. A delay of a few seconds or milliseconds at SCM or ERP level is acceptable, but makes a huge difference at real-time level on the plant floor. Security may thus be a major inhibitor that constrains acceptance at plant level.

Cloud infrastructure concerns

In South Africa, as in many other countries on the continent, bandwidth is restricted, unreliable and prone to failure. The cloud in its purest form therefore needs to be considered very carefully in terms of reliability and bandwidth needs. If, for instance, the Manufacturing Operations Management (MOM) system is in the cloud, what would happen at plant level when the communication fails? So bandwidth reliability (or the lack thereof) is another constraint on acceptance.

With regards to the application technology available within manufacturing operations, I have some concerns as well. I am not sure how ready MOM technologies are for the cloud, specifically with regards to host server (firmware) or VMware upgrades, patches and technology upgrades, and re-establishing OPC connectivity, not to mention virus and other security updates. For some of these, a hardware re-boot is often required. How would one go about re-booting a server running somewhere in the cloud in a virtual environment sharing hardware with applications owned by other companies? This would not be easy to accomplish and at best would take hours instead of minutes. At plant level, where time equates to production, getting systems up-and-running fast is key, and delays in achieving this mean loss of production, or at least the missing of a critical plant event or loss of production data.

Manufacturing solution providers are also more frequently releasing applications that run on personal devices such as smart phones and tablets. Some of these applications are specifically structured to live in the cloud. Most of these are only required to deliver Key Performance Indicators or critical event information to executives, and not to actually deliver execution ability. Some manufacturers have expressed concerns about the mechanisms of data delivery to these devices and the security of the data.

In light of the above concerns, it should be clear why the cloud is not as yet generally accepted at plant level. Does this mean that manufacturing facilities should discard the cloud as "pie in the sky"? My belief is that they can only do this at their own peril. There is much to learn from cloud infrastructure that can be applied to the benefit of manufacturing plants.

On-site architecture

The concepts of "private cloud", "on-site cloud" or "virtualisation" come to mind. Virtualising the plant manufacturing applications from a redundancy and reliability perspective can bring about great peace of mind for plant IT people. Having the infrastructure on-site will alleviate the fears of outside intrusion and remove the constraint of unreliable bandwidth.

With a good virtualisation strategy and infrastructure, manufacturing plants can save capital and add manufacturing value in the long term. With on-site infrastructure, plants will have more freedom to control their own applications and will be more secure in the knowledge that they own their hardware and software. The same applies to manufacturing enterprises that are reluctant to hand control of their applications and infrastructure to some vendor that lives in the cloud somewhere.

When one looks at the Manufacturing 2.0 architecture for larger enterprises, the cloud and virtualisation fit right into the basic concepts. On-site cloud infrastructure can deliver most of the MOM requirements (such as MOM, LIMS and WMS) at plant level in a virtualised environment. It can deliver the role-based user interaction, Operations Process Management and Enterprise Manufacturing Intelligence, as well as the basic application services and development administration management. It will also provide the infrastructure that enables the Manufacturing Service Bus and manufacturing Master Data Management. Global cloud infrastructure can deliver the same concepts at Enterprise level for SCM, CRM and ERP applications.

Figure 1: Manufacturing 2.0 architecture

Small manufacturing concerns

When we look at smaller manufacturing concerns, cloud applications and infrastructure, as well as Software as a Service (IaaS and SaaS), may be valid business models to access advanced functionality that is too expensive by other means. It does depend on the specific needs of the manufacturing facility of course, as even for small operations real-time MOM connectivity may still be required. But as a strategy it may be something to consider for smaller operations.

Conclusion

Cloud infrastructure may not be well accepted by the more conservative manufacturing concerns at this time. The technology has not proven itself to the degree required by the manufacturing industry, but it may just be a question of time before it does. The cloud also fits in with other concepts being implemented by manufacturing enterprises, such as Manufacturing 2.0, so it may just be a question of time before it is adopted.

For more information, contact:
Gerhard Greeff
Divisional Manager: Bytes PMC
Cell: (+27) 82-654-0290
E-mail: gerhard.greeff@bytes.co.za

About Bytes Process Management and Control (PMC)

PMC delivers an integrated solution strategy and implementation practice designed to help organisations leverage their installed IT environment and enable them to increase the ROI from their overall solution investment. PMC is a division of BYTES Systems Integration and operates primarily in the Manufacturing IT sector. PMC is highly regarded for its achievements in implementing integrated MES, MIS, EMI, SCADA and other plant automation solutions. PMC boasts an impressive 15-year track record of successful consulting and solution implementations. PMC specialises in assisting its customers to achieve operational excellence by combining their experience in industrial processes and information technology to deliver value-adding solutions. These solutions include all levels of industrial IT such as:

• Manufacturing application needs analysis and strategy formulation
• Enterprise Manufacturing Intelligence (EMI) solutions
• Manufacturing Execution Systems (MES)
• Process monitoring and control systems (SCADA/PLC)
• Equipment anomaly detection and failure prediction/online condition monitoring (OCM)
• Integrating these multi-level industrial information systems into the ERP
An African cloud

Thanks to access to the latest international fibre technology and South African know-how, Africa could rapidly become one of the most connected continents in the world while making cloud computing a reality at the local and international levels.

"There are many challenges in building data centres around the African continent," says Lex van Wyk, MD of Teraco Data Environments, "…but with the rapid growth in IT requirements, Africa needs a solution to keep up with the demand. With the recent growth in fibre capacity along the east and west coasts, Teraco has identified the opportunity to offer the ideal data centre environment to service providers wanting to provide IT solutions into the local and other African markets further afield."

Along with current benefits like free peering, cost-effective interconnects, access to all major carriers, resilient power, remote support and high levels of security, Teraco is the ideal space for the cloud to be established in Africa. "Our facilities in South Africa already boast connectivity to all the undersea cables offering a combined 28 landing points along the East and West coasts of the continent, as well as all major carriers operating in SA and several active cloud providers. This in effect means the Africa Cloud eXchange is already in operation," says van Wyk.

NAPAfrica was launched in March 2012, providing an open, free and public peering facility with the aim of making peering simple and available to everyone. Van Wyk says that the Africa Cloud eXchange concept is no different. "We want to provide a highly secure data centre environment with easy access to global connectivity providers," says van Wyk.

Richard Vester, Director of Cloud Services at EOH, says: "Teraco provides a world-class data centre facility which meets our unique SLA requirements and, more importantly, allows our customers to connect from any network onto their private clouds. The ability to deliver services on demand across Teraco's data centre ensures we meet the operational requirements of our customers."

Teraco is the first premier-grade data centre environment in Africa to provide access to a vendor-neutral colocation space for sharing and selling cloud services. "The Africa Cloud eXchange allows South African and African cloud providers to host their platforms and offer services from a vendor-neutral, well-connected and highly secure data centre environment, thereby opening up South Africa to the rest of the continent," concludes van Wyk.

For more information, visit www.teraco.co.za
The role of virtual reality in the process industry

Virtual reality has been with us for decades, with probably its most impressive manifestation in flight simulators. If you've ever been in one, the experience is so real that after a while, it's hard to believe that you're in an artificial environment. In fact, commercial pilots must train regularly on simulators to retain their licences. This technology is now being applied to the industrial environment to familiarise personnel with aspects of operations, maintenance and safety that would otherwise be costly, impractical or even dangerous. Maurizio Rovaglio, Vice President, Solutions Services, Invensys, and Tobias Scheele, Vice President/General Manager Advanced Applications, Invensys, explain the applications and benefits of virtual reality in industry.
Introduction

Until recently, the use of virtual reality (VR) had been limited by systems constraints. Real-time rendering of equipment views places extreme demands on processor time, with an invariable need for expensive hardware. As a result, VR solutions were largely ineffective, being unrealistically slow or oversimplified.

However, as VR technology continues to develop, ongoing advances in hardware processing power and software development will allow VR to be used as the interface with computer-based multimedia activities that include training, process design, maintenance and safety.

This paper discusses the range of multimedia VR aids that can be used economically and effectively to support computer-based multimedia activities.

Overview

Before considering the type of technology used to implement a VR system for process purposes, it is useful to consider the various forms of the virtual environment. For the purposes of this article, VR is defined as: "A three-dimensional (3D) environment generated by a computer, with a run-time that closely resembles reality to the person interacting with the environment."

This environment is further defined as:

• Immersive — extra computer peripherals (such as goggles and gloves) are used to produce the effect of being inside the computer-generated environment;
• Non-immersive — the environment is displayed in a conventional manner on a display screen and the user interacts through standard computer inputs (such as a mouse or a joystick).

The key feature that VR brings to computer-aided process engineering is the real-time rendering capability. This has been used to great effect in other areas (such as the gaming, aircraft and medical industries) and is now poised for use within the process industry.

The key to an effective virtual environment system is the close integration of the enabling hardware with software support tools. This process, known as systems integration, demands that operation and implementation — hardware and software — be dealt with together.

It is often assumed that a VR-based system can only be provided by a head-mounted display. However, a head-mounted display may be the wrong device to use for some applications as it only creates a single-user experience. Therefore, a broader definition of VR must be assumed, one that retains the key attributes of a VR system, for example the greater sense of presence and interaction the user receives when immersed in a virtual environment.

Unfortunately, there is a tendency to think in terms of head-mounted displays when considering VR systems, and this narrow perspective leads to confusion when talking about other types of VR. The solution is to use "VR" as an all-embracing term to cover all forms of VR systems, including stereoscopic auditoria and 3D localisation in conventional projection, or a mix of both. Some individuals use the term virtual environments instead of VR, adding further confusion.

However, it is better to think of a virtual environment as a computer representation of a synthetic world. This means a virtual environment can be defined irrespective of the delivery technology. But it is not simply sufficient to produce a lone virtual environment. Its component parts require controlling in such a way that the user believes they are actually immersed in a real environment. This requires a process/machinery simulation tool that interacts with the virtual environment in a tangible action/reaction mode.

Different forms of delivering VR systems are defined by their peripheral technologies. For example, the term "desktop VR" does not relate to the virtual environment, but to the delivery technology used. Today, a desktop VR system is generally based on a PC platform, which employs the latest graphics systems to provide optimum performance at a reasonable cost. The technology used to deliver the virtual environment is very important, whereas the technology used to create the virtual environment is critical.

Figure 1: Basic 3D visualisation
Figure 2: A detailed, photo-realistic environment
The route to a simple solution is usually extraordinary Get an end to end solution tailor-made for your business with Business Connexion’s Professional Services When it comes to making extraordinary connections, nothing comes close to the human brain. That’s why it’s the inspiration behind our Professional Services. With our unique understanding of your business model, value chains and strategy, we can supply you with an end to end solution that helps you make the most of your Business Processes, Applications Portfolio, Application Management and third party solutions. With our unique integrated solutions, we can help you build systems that enable you to enhance and grow your business. We call it the amplifying power of Connective Intelligence™.
www.bcx.co.za
The virtual plant

The 3D content section of the VR environment requires a CAD file as the basic source of the material. This can either be in standard 2D or in advanced formats (such as those created by COMOSFEED™, SMARTPLANT® and AUTOCAD®). These programs generate a 3D CAD file used to speed up the conversion process required for photo-realistic, real-time graphics. Initially, a basic 3D geometry is created to reflect plant specifications, and then software such as 3dStudioMax® or Maya® is used to process graphics details. This software adds the details and small adjustments needed to turn a flat CAD model into a photo-realistic product. Various other tools and applications optimise textures and illumination to further improve the effect.

Once the graphics have been created, the next step is to detach the geometries that represent the interactive actors. This is important because it separates dynamic geometries (those that can move and be interacted with) from static geometries (those that cannot).

The final step is to create a collision geometry that resembles the graphics geometry. This allows users to collide with the virtual environment rather than simply passing through it.

Unlike conventional, non-real-time rendering, a real-time program allows users to move and interact freely within the environment. Special graphics technology permits the environment to be rendered at 60 frames a second, compared to one frame per second in the traditional approach. Specific optimisation techniques are required to achieve 60 frames a second. These include the following:

• Level of detail (LOD) geometries, used where full detail is not needed
• UV map compression for the illumination data that is baked on textures
• Texture tiling to prevent pixel wastage
• BSP/portals generation for large-scale environments

VR platform and architecture

The VR interactive system is a server-centric distributed application that centralises scene updates. Therefore, it enables scene rendering to be carried out on many concurrent stations. The server synchronises directly with SimSci-Esscor's SIM4ME® simulation engine, so the properties of each plant element in the VR scene are constantly updated in time with the process simulation. Other stations have various roles within the simulation and are able to communicate with each other through a network using the standard TCP/IP protocol. The server application handles communication among the various modules and is responsible for the updated version of all scene parameters. It retains a copy of the scene graph — a hierarchical representation of the 3D scene — that is synchronous with the one present in each satellite application. The server application constantly updates scene graph data, notifying changes via the network protocol to satellite applications. These satellite applications are in command of rendering the visualised data and providing additional functionality to users. Meanwhile, the main client station reproduces the plant environment and allows users to perform actions on plant elements (for example, opening a valve), playing the role of Field Operator. All actions performed by the virtual Field Operator are tracked and synchronised with the other platform elements, including the process simulator. Outputs can be displayed on various systems, from standard desktop monitors and head-mounted displays to immersive projection systems. Both mono and stereoscopic vision can be used.

Figure 3: A schematic system architecture

The VR system requires a monitoring station that centralises all information on a running simulation. This includes the number and type of stations connected to the 3D model used, and the specific training exercises being carried out within the simulation. The monitoring station can be integrated with the Instructor Station on traditional OTS systems, giving a single point for managing a full training session. Events and training exercises are triggered by the Instructor Station and transmitted to both the SIM4ME engine and the VR platform.

IPS' DYNSIM® and FSIM Plus® interface directly with the VR system's main simulation modules. They give a fully synchronised integration between the 3D world and the process/control simulation. So any action that the Field Operator carries out in the 3D environment is immediately reflected in DYNSIM. Conversely, any value that is updated by DYNSIM is also updated in the VR platform.
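The server-centric pattern described above — a server holding the master scene graph and notifying each satellite station of changes — can be sketched in a few lines. The Python below keeps everything in one process for brevity (the real platform distributes this over TCP/IP), and every class and method name is an illustrative assumption, not the actual Invensys API.

```python
# Minimal in-process sketch of server-centric scene-graph synchronisation.
# The real platform distributes this over TCP/IP; names here are
# illustrative assumptions, not the actual Invensys platform API.

class SatelliteStation:
    """A rendering station holding a synchronised copy of the scene graph."""
    def __init__(self):
        self.scene = {}

    def notify(self, element: str, params: dict):
        # Apply the server's update to the local scene-graph copy.
        self.scene[element] = dict(params)

class SceneServer:
    """Holds the master scene graph and pushes updates to satellites."""
    def __init__(self):
        self.scene = {}
        self.satellites = []

    def register(self, station: SatelliteStation):
        # A newly connected station receives the current scene state.
        self.satellites.append(station)
        station.scene = {k: dict(v) for k, v in self.scene.items()}

    def update(self, element: str, **params):
        # e.g. the Field Operator opens a valve, or the process
        # simulator updates a value; all satellites are notified.
        self.scene.setdefault(element, {}).update(params)
        for station in self.satellites:
            station.notify(element, self.scene[element])

server = SceneServer()
hmd, desktop = SatelliteStation(), SatelliteStation()
server.register(hmd)
server.register(desktop)
server.update("valve_101", position="open")  # action by the Field Operator
```

After the update, every registered station holds the same state for `valve_101`, which is the property the article attributes to the server/satellite design: one authoritative scene graph, many synchronised renderers.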
Impact of VR on training

The main advantage that VR brings to both theoretical and conceptual training is that it allows trainees to become much more familiar with the layout and operation of the subject matter. For example, training on a specific piece of equipment not only involves 3D models that can be viewed from any angle, but also allows that equipment to be set in motion. For integrated systems, such as complex processes, VR allows trainees to walk around the 3D model and improve their spatial awareness of the plant.

In addition, when integrated with the detailed DYNSIM simulation environment, VR techniques can be used to enhance the representation of process unit behaviour. There are three main ways for this integration to be represented:

• A navigational front-end representation of continuous rather than discrete state for multi-degree-of-freedom objects. This supplies visual feedback only, with no equipment interaction
• As above, but with equipment interaction
• Complete environment emulation (a synthetic world) by a link between process simulation models and physical-spatial models for all training objectives

These are all supported by the VR platform. Note that giving users a fully interactive view can, in some cases, detract from the objectives being taught, as it can be a much more complex system to understand.

The main elements of a training session are:

1. Setting objectives;
2. Outlining contents;
3. Choice of methodology;
4. Assessment.

The VR platform should guide the user through the development of all the following elements:

1. Setting 'training objectives' highlights the different options available:
• Technical training focussed on transferring technical knowledge
• Operational training focussed on skills and procedures
• Safety training focussed on possible hazards
• Emergency response: how to react to a critical situation
• Interpersonal skills training (crew training): communication, collaborative decision-making and teamwork

2. The majority of training sessions will be structured around the learning of specific tasks. The content is normally structured in the form of a detailed account of tasks.

3. The VR platform facilitates good teaching practice by allowing trainers to individually match up trainees with the training mode most suitable for them. Therefore, some trainers might introduce tasks slowly in a progressive learning curve, while others might prefer that trainees meet the full task in all its complexity (step-by-step guidance/task-guided mode).

4. VR training allows skills transfer rather than simply knowledge transfer. Importantly, it also allows these skills to be tested, so assessing trainee performance becomes a simpler task. Note that the platform also includes alternative modes to score results from training sessions.

Some training modules may be solely devoted to process knowledge, for example a session that provides greater process understanding for operators. However, most training sessions should deal with all the elements listed above.

VR models in process design

Fig. 4 shows an example of iterative and concurrent process design based on the use of VR models. The client/EPC is responsible for the overall design process, while the design teams within construction, mechanical, control system, etc. are responsible for the design of the plant subsystems (such as process equipment, building structure and installations). All design teams are also responsible for providing correct and updated input data to the "VR database". The VR provider, working for the client, manages all the VR data and makes updated and corrected VR models accessible for everyone involved in the design process.

Figure 4: An iterative design process with specified VR models in a concurrent and multi-disciplinary design situation

The VR models from 1 to n in Fig. 4 provide the design teams with structured and easy-to-understand design information. This is done in a way that is not possible using a traditional design approach based on 2D CAD drawings. By navigating in the models, stakeholders can analyse the design from both a general and a more detailed perspective. Moreover, with VR models it is easier to explain and discuss different design solutions with a larger group of stakeholders, particularly where they have different ways of interpreting 2D design drawings. This ability to collect views from different perspectives
gives a better and more productive overall design approach. It also makes it much simpler to discover and correct collisions and design errors earlier in the design phase.

Fig. 5 shows how a VR model identified a bad design solution, in this case process water outlets hindering access. Finding a solution to this type of error after the event is often highly costly. Such an error also affects production by generating delays in the re-scheduling and re-planning of some activities.

Figure 5: A screenshot extracted from a VR model showing a design solution that would have blocked the access.

During a trial on the use of VR models, users commented that a major benefit of the technology is that it gives a far greater appreciation of other skilled areas involved in the overall project. It also saves time. Said one design manager, “I was sceptical at first. Then I realised that by studying one VR model I could save a lot of time and be more focused on important issues rather than searching through piles of drawings.”

Increasing time-pressure on projects, partnerships and/or MAC roles is likely to be the stimulus that enhances collaboration between stakeholders. This could lead to a concurrent design approach in which VR models are used to coordinate and communicate the design to the client. In addition, as well as making it easier for them to make crucial decisions, VR models can also involve the client in everyday design work. Being able to quickly sort the relevant information and present it in an easy and comprehensible way enables the client to collect opinions from a wider audience, such as the operations and maintenance staff, and to improve the organisation’s decision-making procedures.

VR in maintenance tasks

To understand the training aids required for maintenance training, the first sensible, logical step is to look at the task that the trainee is expected to perform after completing the course. In process operations, for example, the organisation of that task is heavily dependent on the industry sector, the range of equipment to be maintained and specific company culture.

Irrespective of the subject matter, however, in general a maintenance task can be broken down into the following subtasks:

• Replication - being able to reproduce the reported fault.
• Identification - being able to accurately diagnose the source of the fault.
• Rectification - correcting the fault by taking action appropriate to the policies of the maintenance establishment.
• Confirmation - checking to see that the identified fault has been cleared.

Each of the four stages described above requires a mixture of generic and specific physical and mental skills.

When using VR facilities, the usual approach is to train users to have a deep understanding of both the maintenance task itself and the science behind the involved equipment. This means that the structure of a typical training course includes training objectives that can be taken from a broader number of training categories:

1. Initial Theoretical Training
2. Instructor Led Training
3. Systems Appreciation
4. Fault Diagnostics Training
5. Recognition Training
6. Equipment Familiarisation
7. Scenarios Simulation
8. Visual Appreciation
9. Hand/Eye Coordination
10. Spatial Appreciation

For example, in the analysis of an overall plant working environment, a specially designed avatar of large size could mimic the behaviour of operational and maintenance staff. This is primarily a system analysis (3, 8 and 10) where working spaces, escape routes, risky areas and transportation routes within the plant are investigated from a logistics viewpoint. The result of such analysis allows maintenance procedures to be optimised and highlights if there is a need to ask a design team for improvements or modifications.

Figure 6: A screenshot showing the use of avatars for investigating the maintainability of the process machinery in the plant.

A second example (see Fig. 6) also refers to spatial appreciation (10), but this time to improve equipment familiarisation (6) and hand/eye coordination (9). The operation
The role of virtual reality in the process industry

of a highly automated industrial process depends largely on the maintainability of its process equipment. Because of the huge economic impact that a failure could have, preventing such events has a very high priority. Therefore, to make sure that maintenance can be properly conducted and performed on time, maintenance personnel can participate in training using avatars or in “first person” through VR models of the process machinery and layout. In this way, maintenance issues involving diagnostics, timing and procedures can be highlighted and consequently optimised.

VR in safety

Using VR, field operators feel completely immersed and perceive the virtual environment as if it was the real plant. By simply putting on their goggles, they are able to see stereoscopically the spatial depth of the surroundings, walk through the virtual plant and “feel” it. 3D spatial sound contributes to this natural feel, as does the ability to perform tasks using different hand-held devices. Once immersed in a virtual environment where everything resembles reality, all normal and abnormal situations can be experimented on and tested by operators. Any action, either in the field or in the control room, is simulated rigorously in terms of process behaviour with a clear action/reaction perception. In practice, VR allows operators to test every abnormal situation that can be thought of, alongside little-understood atypical plant behaviours. Both expected and predictable malfunctions can be tested in their entirety, up to and including the disaster that might result. After all, learning from a virtual disaster can help avoid the real thing.

Figure 7: A virtual fire.

The strength of this approach is two-fold: safety can be tested and experimented upon as a training tool; and risk-assessors are better able to identify hazardous scenarios. Together, they improve the ability of operators to make the right decisions at the right times. In other words, VR makes training, risk assessment and safety management more effective and realistic than ever before.

EYESIM for VR

To meet your virtual reality operator training needs, Invensys offers EYESIM, a comprehensive solution linking Control Room Operators to Field Operators and Maintenance Operators by means of a High-Fidelity Process Simulation and Virtual Walkthrough Plant Environment. EYESIM provides complete Plant Crew Training to improve skills that are safety-critical by enabling operators to perform tasks in a simulated environment, allowing them to react quickly and correctly, facilitating reactions in high stress conditions and instilling standards for team training and communications.

The EYESIM solution is comprised of a modelling engine powered by SimSci-Esscor’s DYNSIM, services through the SIM4ME bridge, and is coupled with a high-performing Virtual Reality Engine and a high-quality 3D Modelling/Scanning toolset.

Conclusions

VR provides a 3D computer-generated representation of a real or imaginary world in which the user experiences real-time interactions and the feeling of actually being present.

VR technology is well developed and cost-effective, even for smaller organisations or companies who might be considering its use. The flexibility of VR-based training systems means that they are simple to configure and use, and they will form an increasingly important element of new training systems.

The ability to simulate complex processes by virtual actions means that trainees experience an environment that changes over time. At the same time, using computer models of real equipment is risk free and allows endless experimentation without the need to take real equipment off-line and risk production. This allows users to learn within computer-generated environments and gives them the opportunity to make mistakes and suffer the consequences without putting themselves at risk. Overall, VR improves design procedures and is a far superior training tool to more traditional approaches. As a result, it saves both staff time and money.
Cyber security in the power industry

While Eskom is seen as the principal source of electrical power in South Africa, many large companies have their own power generation capability, to the extent that they can contribute to the national grid. Many of these facilities are subject to cyber threats that could severely disrupt production at the local and national level - and what applies to power generation applies equally well to many other industries.

Ernest Rakaczky, Director of Process Control Network Security, Invensys, and Thomas Szudajski, Director of Global Power Marketing, Invensys, explain the security challenges facing the power industry.
Power industry locks down

1. Introduction

Like it or not, the power industry is susceptible to a variety of cyber threats, which can wreak havoc on control systems. While power engineers have always taken measures to maximise the security and safety of their operations, heightened global terrorism and increased hacker activity have added a new level of urgency and concern. Many plants are convinced their networks are isolated and consequently secure, but without ongoing audits and intrusion detection, that security could be just a mirage. Moreover, the growing demand for open information sharing between business and production networks increases the need to secure transactions and data. For power generating companies, where the consequences of an attack could have widespread impact, the need for cyber security is even more pressing.

A recent U.S. General Accounting Office report, titled Critical Infrastructure Protection, Challenges and Efforts to Secure Control Systems, offered the following examples of actions that might be taken against a control system:

• Disruption of operation by delaying or blocking information flow through control networks, thereby denying network availability to control system operators
• Making unauthorised changes to programmed instructions in programmable logic controllers (PLCs), remote terminal units (RTUs) or distributed control system (DCS) controllers, changing alarm thresholds or issuing unauthorised commands to control equipment. This could potentially result in damage to equipment, premature shutdown, or disabling of control equipment.
• Sending false information to control system operators, either to disguise unauthorised changes or to initiate inappropriate actions by system operators
• Modifying control system software, producing unpredictable results
• Interfering with the operation of safety systems

Consider a couple of plausible threat scenarios.

Cyber-attack scenario 1

Using “war diallers” - simple personal computer programs that dial consecutive phone numbers looking for modems - a hacker finds modems connected to the programmable circuit breakers of the electric power control system, cracks the passwords that control access to the circuit breakers, and changes the control settings to cause local power outages and damage equipment. He lowers the settings from, for example, 500 A to 200 A on some circuit breakers, taking those lines out of service and diverting power to neighbouring lines. At the same time, he raises the settings on neighbouring lines to 900 A, preventing the circuit breakers from tripping and overloading those lines. This causes significant damage to transformers and other critical equipment, resulting in lengthy repair outages.

Cyber-attack scenario 2

A power plant serving a large metropolitan district has successfully isolated the control system from the business network of the plant, installed state-of-the-art firewalls, and implemented intrusion detection and prevention technology. An engineer innocently downloads information on a continuing education seminar at a local college, inadvertently introducing a virus into the control network. Just before the morning peak, the operator screens go blank and the system is shut down.

Although the above scenarios are hypothetical, they represent the kinds of real threats facing cyber security experts around the world. Cyber security has become as much a part of doing business in the 21st century as traditional building security was in the last. Management, engineering and IT must commit to a comprehensive approach that encompasses threat prevention, detection and elimination.

2. Open exposure

The open and interoperable nature of today’s industrial automation systems - many of which use the same computing and networking technologies as general purpose IT systems - requires engineers to pay close attention to network and cyber security issues. Not doing so can potentially lead to injury or loss of life; environmental damage; corporate liability; loss of the corporate licence to operate; loss of production; damage to equipment; and reduced quality of service.

Historically, control system vendors have dealt with such threats by focusing on meeting customer specifications within guidelines and metrics set by industry standards groups such as the Institute of Electrical and Electronics Engineers (IEEE) and the Instrument Society of America (ISA). Indeed, much of this compliance was designed into proprietary equipment and applications, which were beyond the skills of all but the most determined cyber attacker. Increasingly, however, process control networks are better equipped for gathering information about generation and distribution and sharing it with business networks using standard communications protocols such as Ethernet or IP. These open protocols are being used to communicate between dispatch, marketing, corporate headquarters and plant control rooms as well. While such sharing enables more strategic management of enterprise assets, it does increase security requirements.

Such threats can come from many sources, external and internal, ranging from terrorists and disgruntled employees to environmental groups and common criminals. Making matters worse, the technical knowledge, skills and tools required for penetrating IT and plant systems are becoming more widely available. Figure 1 shows that as the incidence of threats increases, the level of
sophistication necessary to implement an attack is decreasing, making it all the easier for intruders.

Figure 1: Attack sophistication versus intruder knowledge.

Many companies are bracing for the worst. Major power producers, for example, have begun paying greater attention to security, as manifested by active participation in industry standards groups, including the Department of Energy, the Federal Energy Regulatory Commission, and the North American Electric Reliability Council. Power producers have also been putting more pressure on automation suppliers and their partners to accelerate the development of technologies that will support compliance with emerging standards. The power industry is looking to non-traditional suppliers to improve control room design and access, operator training, and procedures affecting control system security that lie outside the domain of control system vendors.

Embracing open standards in the age of cyber-terrorism is another pressing management issue. While sticking with proprietary technologies may seem much less vulnerable to intrusion, doing so will limit the reliability, availability and efficiency improvements that could be available from the integration of digital technologies and advanced applications. In fact, proprietary technology could become even more expensive as vendors seek to recover the cost of additional hardening that may still be needed, since these systems are secure owing only to their obscurity, not to some inherent capability. Management support at the highest levels will help ensure that any technical hardening is implemented strategically and cost-effectively.

3. Not just an engineering problem

While power engineers will play a critical role in hardening power operations against intruders, the collaboration and support of both corporate management and the IT department are essential. A company-wide vulnerability audit of a large U.S. utility revealed some areas of technical vulnerability in the control system, but most of the findings had to do with organisational issues:

• Lack of plant-wide awareness of cyber security issues in general
• Inconsistent administration of systems (managed by different business units)
• Lack of a cyber security incident response plan
• Poor physical access control over some critical assets
• Lack of a management protocol for wireless devices accessing cyber resources
• Lack of a change management process
• Undocumented perimeter access points
• Lack of a disaster recovery plan
• Inability to measure known vulnerabilities

Corporate management must first acknowledge the need for secure operations. Then, because few companies will have the resources to harden all processes against all possible threats, management must guide the development of a security policy that sets organisational security priorities and goals. Finally, companies must foster collaboration among all layers of management, IT, and project and plant engineering. Project engineers need to understand the security risks and possible mitigation strategies. IT, which brings much of the security expertise, must understand the need for real-time availability to keep units online. With priorities in place, engineering and IT can work together to create a plan that should, at a minimum, address the following issues:

• An approach to the convergence of IT and plant networks
• A process for managing secure and insecure protocols on the same network
• Methods for monitoring, alerting and diagnosing plant network control systems and their integration with the corporate network
• A method for retaining forensic information to support investigation/legal litigation
• A means of securing connectivity to remote sites

Management must also recognise that investment in prevention will have a far greater payback than investment in detection and removal. Although investment in the latter areas may be necessary to ward off immediate threats, focusing on activities that prevent attacks in the first place will reduce the need for future detection and removal expenditures.
Staying competitive means continuously improving the assets of your plant throughout its productive life. Invensys provides a path to keep your automation and control system continuously current with cost-effective modernisation solutions, regardless of your system’s age or vintage.
Find out how Invensys can help secure your future: log on to www.invensys.co.za for a whitepaper, brochure, customer success stories and migration video.
4. A prevention-based cyber security architecture

One of the most effective ways to implement a prevention-based, standards-driven cyber security architecture is to segment the network into several zones, each of which has a different set of connectivity requirements and traffic patterns. Firewalls placed at strategic locations provide the segmentation. Intrusion detection and prevention systems are also deployed at key locations, with alerts reported to a monitoring centre. Figure 2 illustrates a multi-zone cyber security architecture consisting of five segments:

• The Internet Zone, which is the unprotected public Internet
• The Data Centre Zone, which may be a single zone or multiple zones that exist at the corporate data centre, dispatch and corporate engineering
• The Plant Network Zone, which carries the general business network traffic (messaging, ERP, file and print sharing, and Internet browsing). This zone may span multiple locations across a wide area network. Traffic from this zone may not directly access the Control Network Zone.
• The Control Network Zone, which has the highest level of security and carries the process control device and application communications. Traffic on this network segment must be limited to only the process control network traffic, as it is very sensitive to the volume of traffic and protocols used.
• The Field I/O Zone, where communications are typically direct hard-wired links between the I/O devices and their controllers. Security is accomplished by physical means.

Figure 2: Multi-zone cyber security architecture.

An extra level of control - commonly implemented as DMZs on the firewall - is often added for supplemental security. These supplemental zones are typically used for data acquisition, service and support, a public zone and an extranet sub-zone.

The Data Acquisition and Interface Sub-Zone is the demarcation point and interface for all communications into or out of the process control network. This sub-zone contains servers or workstations that gather data from the controls network devices and make it available to the plant network.

The Service and Support Sub-Zone is typically used by outsourcing agencies, equipment vendors or other external support providers that may be servicing the controls network. This connection point should be treated no differently than any other connection to the outside world and should therefore utilise strong authentication, encryption or secure VPN access. Modems should incorporate encryption and dial-back capability. Devices introduced to the network should use updated anti-virus software. This last item is particularly important for service providers, who will often bring a PC into the plant for analysis - an example is turbine monitoring. What’s more, power companies should audit outsourcing providers for adequate security measures.

The Public Sub-Zone is where public-facing services exist. Web servers, SMTP messaging gateways and FTP sites are examples of services found in this sub-zone.

The Extranet Sub-Zone is commonly used to connect to the company’s trading partners. Partners connect by various methods, including dialup, private lines, frame-relay and VPN. VPN connections are becoming more common due to the proliferation of the Internet and the economy of leveraging shared services. Firewall rules are used to further control where the partners are allowed access, as well as address translation.

5. Securing the business network

The two most critical components of data centre security are a perimeter firewall and an internal firewall. The perimeter firewall controls the types of traffic to and from the public Internet, while the internal firewall controls the types of internal site-to-site traffic and site-to-data centre traffic. The internal firewall is essential for controlling or containing the spread of network-borne viruses. It also restricts the types of traffic allowed between sites and protects the data centre from internal intruders.

6. Securing the plant control networks

At the plant control network level are the firewall, intrusion detection and prevention technology, modems, and wireless access points - all of which are integrated with a communications infrastructure involving equipment such as routers, bridges and switches.

Firewalls restrict the types of traffic allowed into and out of the control network zone, and can be configured with rules that permit only traffic designated as essential, triggering alarms for noncompliant traffic. Alarms should be monitored 24/7, either by an internal or a third-party group. In addition, each unit’s network should be isolated, in particular from remote sites. This is extremely important for recovery.

The firewall should use a logging server to capture all firewall events, either locally or in a central location. One can, for example, configure the firewall to allow remote telnet access to the control network, but while the firewall can monitor access to connections, it cannot provide information about what someone might be attempting to do with those connections. A hacker could be accessing the control system through telnet and the firewall would have no way of knowing whether the activity is from friend or foe. That is the job of Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS), which can detect usage patterns.
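The default-deny firewall behaviour described above - permit only traffic designated as essential and raise an alarm for everything else - can be modelled in a few lines. The following Python sketch is illustrative only: the addresses, the port and the single "essential" rule are invented for the example, not taken from the article or any real plant.

```python
# Toy model of a default-deny control-network firewall policy.
# Addresses, ports and the single permitted rule are illustrative
# assumptions, not values from any real deployment.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str    # source IP address
    dst: str    # destination IP address
    port: int   # destination TCP port

# Only traffic designated as essential is permitted, e.g. an assumed
# data-acquisition server polling a controller on an assumed port.
ALLOWED_RULES = {
    ("10.2.0.10", "10.1.0.5", 502),
}

def filter_packet(pkt: Packet, alarms: list) -> bool:
    """Forward the packet (True) if a rule permits it; otherwise log an
    alarm for the monitoring centre and drop it (False)."""
    if (pkt.src, pkt.dst, pkt.port) in ALLOWED_RULES:
        return True
    alarms.append(f"CTRL-ZONE-DENY: {pkt.src} -> {pkt.dst}:{pkt.port}")
    return False

if __name__ == "__main__":
    alarms = []
    print(filter_packet(Packet("10.2.0.10", "10.1.0.5", 502), alarms))  # permitted
    print(filter_packet(Packet("192.0.2.99", "10.1.0.5", 23), alarms))  # telnet attempt: denied
    print(alarms)
```

Note that, as the article points out, a rule set like this can only say that a connection was permitted or denied; judging what the traffic on a permitted connection is actually doing remains the job of the IDS and IPS.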
An IDS monitors packets on a network wire and determines whether the observed activity is potentially harmful. A typical example is a system that watches for a large number of TCP connection requests to many different ports on a target machine, thus discovering if someone is attempting a TCP port scan. An IDS may run either on the target machine, watching its own traffic (also referred to as a Host IDS), or on an independent machine such as an IDS appliance.

An IPS complements the IDS by blocking traffic that exhibits dangerous behaviour patterns. It prevents attacks from harming the network or control system by sitting between the connection and the devices being protected. Like an IDS, an IPS can run in host mode directly on the control system station, and the closer to the control system it is, the better the protection.

Modems connect devices asynchronously for out-of-band access. Because modems can connect to the outside directly through public carriers, they are unaffected by other security measures and represent a significant point of vulnerability. At the very least, any modem with links to the main control network should be a dial-back modem, which will not transmit data until it receives dial-back authentication from the receiving system. For sensitive data, encryption is also recommended.

Wireless access points are radio-based stations that connect to the hard-wired network. Wireless communications can be supported if implemented securely. Solutions must be capable of both preventing unauthorised access and ensuring that transmitted data is encrypted to prevent “eavesdropping”. For maximum flexibility, devices must be capable of: data encryption with dynamic or rotating keys; filtering or blocking the Media Access Control (MAC) addresses that uniquely identify each network node; disabling broadcasting of Service Set Identifiers (SSIDs), the passwords that authorise wireless LAN connections; and compliance with the 802.11 and 802.1x standards. Consumer-grade equipment is not recommended, and VPN connection with software clients is preferable to WEP or proprietary data encryption, as it allows multi-vendor wireless hardware to be supported with a common solution.

VPN concentrators are devices that encrypt the data transferred between the concentrator and another concentrator or client, based on a mutually agreed key. This technology is most widely used today to allow remote users to securely access corporate data across the public Internet. The same technology can be used to add security when accessing data across wireless links and existing corporate WANs. In lieu of a separate VPN concentrator, it is possible to utilise VPN services that are integrated with the firewall.

While the firewalls, IDS, IPS and encryption add the greatest hardening, they must work in tandem with the existing communications infrastructure. The job of the routers, hubs, bridges, switches, media converters and access units is to keep network packets flowing at the desired speed without collision. The more network traffic is routed, segmented and managed, the more easily any intrusion can be contained and eliminated.

Although some of these devices do have certain levels of security functionality built in, it is not wise to rely on that to protect mission-critical data. Routers, for example, can be configured to mimic basic firewall functionality by screening traffic based on an approved access list, but they lack the hardened operating system and other robust capabilities of a true firewall.

7. Planned-in prevention

Developing a prevention approach to plant control systems requires a new approach to network security between the plant network layer and business/external systems. It is an ongoing process that begins with awareness and assessment, continues through the creation of policy and procedures and the development of the security solution, and includes ongoing security performance management.

Some of the key activities of the awareness and assessment phase include defining security objectives, identifying system vulnerabilities, establishing the security plan and identifying the key players on the security team. In this phase, one is determining which networks are involved and how isolated they are from each other. Is there a DCS for common systems - coal handling or service water, for example? How vulnerable are remote facilities, such as landfill and water treatment?

At the policy and procedures phase, one would review the safety and security aspects of established industry standards such as ISO 17799, ISA-SP99, META and CERT, along with regulatory drivers such as those offered by FERC, NERC and DOE. Local regulatory requirements related to site security and safety must also be considered.

The security solution phase is where one would focus on technologies and processes for system access control, perimeter security and isolation, identity and encryption, intrusion detection and system management. In the security program performance and management phase, one would address continual monitoring and alerting, yearly audits, periodic testing and evaluation, and continual updating of system requirements.

Following the procedures defined above will not guarantee immunity from cyber attack, but it will ensure that the risk has been managed as strategically and cost-effectively as possible.
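The port-scan heuristic mentioned earlier - an IDS watching for a large number of TCP connection requests to many different ports on one target - can be sketched as a simple sliding-window counter. The threshold, window size and addresses below are illustrative assumptions, not values from any particular IDS product.

```python
# Sketch of a port-scan heuristic: flag a source that touches many distinct
# TCP ports on the same target within a short time window. The threshold
# and window values are illustrative assumptions.
from collections import defaultdict

SCAN_PORT_THRESHOLD = 20   # distinct ports before we flag a scan
WINDOW_SECONDS = 10.0      # sliding observation window

class PortScanDetector:
    def __init__(self):
        # (src, dst) -> list of (timestamp, destination port) events
        self._events = defaultdict(list)

    def observe(self, timestamp: float, src: str, dst: str, port: int) -> bool:
        """Record one TCP connection request and return True if (src, dst)
        now looks like a port scan."""
        events = self._events[(src, dst)]
        events.append((timestamp, port))
        # Discard events that have fallen out of the sliding window.
        events[:] = [(t, p) for (t, p) in events if timestamp - t <= WINDOW_SECONDS]
        distinct_ports = {p for _, p in events}
        return len(distinct_ports) >= SCAN_PORT_THRESHOLD

if __name__ == "__main__":
    ids = PortScanDetector()
    # A scanner probing 25 consecutive ports trips the detector...
    hits = [ids.observe(i * 0.1, "192.0.2.7", "10.1.0.5", i) for i in range(1, 26)]
    print(any(hits))   # True
    # ...while repeated connections to a single port never do.
    web = PortScanDetector()
    print(any(web.observe(i * 0.1, "10.2.0.10", "10.1.0.5", 502) for i in range(100)))  # False
```

In practice this role is played by the IDS/IPS appliances discussed above; the sketch only illustrates the kind of usage pattern such systems detect.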
Control and Automation security: Fort Knox or swing-door

John Coetzee - Business Development Director, Aristotle Consulting (Pty) Ltd.

Figure 1 - Fort Knox Bullion Depository (Wikipedia3)

Introduction

Fort Knox is a (gold) bullion repository located in Kentucky. It was built in 1936 on a military base and has walls of granite, steel and concrete to protect its exterior. The building can be completely isolated and can supply its own water and power for a limited period. The vault is sealed with a 20-ton steel door. The combination to enter the vault is split across different personnel, so no one person knows the entire combination. The building, under the control of the US Mint, is currently guarded by over 20 000 soldiers, making it one of the most secure locations on the entire planet1.

In contrast, a swing-door is defined as “A door that is opened by either pushing or pulling from either side (i.e. opens both ways) and is not normally capable of being locked2”.

Hopefully, your security is neither Fort Knox (no-one will ever be able to get anything done) nor a swing-door (nothing will ever be controlled).

Background

Events over the last decade have resulted in far greater legal and governance requirements being placed on corporates, including in the IT arena. The South African response has largely been addressed by the King III code of practice4. King III has an entire section dedicated to IT governance and management and how it relates to the corporate board: section 5 - The governance of information technology. This section prescribes that the board must take responsibility for IT governance, as well as delegate responsibility to IT management for the implementation of an IT governance framework. IT needs to form part of the risk management of the organisation. Section 5.6.1 specifically refers to information security and the need for management systems. In order to comply with these best-practice requirements, corporates often refer to international standards such as CobiT.

The CobiT5 standard references the word “security” over 250 times and has entire sections dedicated to the effective management of security within IT. CobiT goes on to define Information Technology
as the hardware and software that facilitate the input, output, storage, processing and transmission of data, with no distinction made based on the application of these elements. This implies that the standard is equally applicable to Control and Automation (C&A) equipment and software.

Security approach

The goals of IT security are to:

• Protect C&A assets
• Control access to critical / confidential information
• Ensure information exchanges between systems are trustworthy
• Resist attack from external disaster or sabotage
• Ensure failure recovery
• Maintain information and processing integrity.

So, how do you approach a security model for C&A? Below is a recommendation based on the CobiT standard.

• Define a C&A security and risk plan, implemented through policies and procedures.
• Define a security, risk and compliance responsibility model.
• Manage the physical environment by defining and implementing access control policies.
• Monitor and evaluate the internal control mechanisms periodically to identify deficiencies and continuously improve these mechanisms.
• Record, manage and address security incidents.

Conclusion

The use of “traditional IT” equipment is now pervasive in C&A systems and architecture. This should now be managed and governed according to requirements based locally on King III. To achieve this, look to international best practice on how to implement it, such as CobiT, which presents a useful model. Aim for a security model that is neither Fort Knox nor a swing-door, but one that has a suitable level of control, risk management and ease of use for your organisation.

About Aristotle Consulting

Aristotle Consulting offers consulting and training services for the systems of the manufacturing industry. Aristotle Consulting can assist in developing and implementing your security plan, and specialises in providing best-practice consulting for your entire Manufacturing Systems portfolio.

For more information contact:
John Coetzee
Business Development Director
Aristotle Consulting (Pty) Ltd
Landline: +27 79 517 5261
Email: info@aristotleconsulting.co.za
Web site: www.aristotleconsulting.co.za
Find us on LinkedIn: http://www.linkedin.com/profile/view?id=60025171&trk=tab_pro

References

1. HARGROVE, J., 2003, Fort Knox Bullion Depository, Dayton: Teaching & Learning Company.
2. WIKTIONARY, 2011 [Online] Available at www.wiktionary.org/wiki/swing_door [Accessed on 9 May 2012].
3. WIKIPEDIA, 2012 [Online] Available at http://en.wikipedia.org/wiki/Fort_Knox_
• When acquiring new technology, implement the security and audit
Bullion_Depository [Accessed on 9 May 2012].
measures as part of the installation and commissioning.
4. KING III, 2009. King III Code of Governance for South Africa 2009, Institute
• Manage changes to your environment
of Directors. [Online] Available at: http://
with a clear change management
african.ipapercms.dk/IOD/KINGIII/
procedure, including impact assessments,
kingiiicode/ [Accessed on 10 April 2012].
authorisation mechanism, change tracking and change completion.
5. COBIT, 2007, 4.1 ed., IT Governance Institute. Rolling Meadows.
July/August 2012 | 49
Eskom conforms to legal emission limits with help from Wonderware

The measurement of stack emissions at coal-fired power stations is of high importance to Eskom, as exceeding the emission limits may result in the forced shutdown of generating units. These emission levels are imposed by legislation and must therefore be monitored and alarmed continuously.
About Eskom Holdings Limited

Eskom generates approximately 95% of the electricity used in South Africa and approximately 45% of the electricity used in Africa. The company generates, transmits and distributes electricity to industrial, mining, commercial, agricultural and residential customers and redistributors. The majority of sales are in South Africa; other countries of southern Africa account for a small percentage of sales.

Additional power stations and major power lines are being built to meet rising electricity demand in South Africa. Eskom buys electricity from and sells electricity to the countries of the Southern African Development Community (SADC). The future involvement in African markets outside South Africa (that is, the SADC countries connected to the South African grid and the rest of Africa) is currently limited to those projects that have a direct impact on ensuring a secure supply of electricity for South Africa.

Figure 1: The main contributors to global fossil carbon emissions

To address the problem, system integrator Bytes Systems Integration used Wonderware solutions to implement a comprehensive emission monitoring system which is flexible enough to handle geographically-dispersed data sources while complying with various business rules. The result is a system which is helping to ensure the supply of electricity while minimising the impact on the environment.

Background

The combustion of coal produces almost as much carbon emissions as the combustion of petroleum (figure 1). What can we do about the more than two gigatons of carbon released into the atmosphere in the form of carbon dioxide every year? The answer is “not much”, unless you treat the problem at its source – which is exactly what Eskom has been doing for several decades. 95% of Eskom’s generating capacity comes from coal, and ash emissions from Eskom’s coal-fired power stations have reduced by more than 90% since the early 1980s due to the installation of efficient pollution abatement technology and the decommissioning of older plant.

“Without treatment, we would be spewing concentrations of 30 000 to 60 000 mg of ash per normal cubic metre into the atmosphere,” says Dr. Kristy Ross, senior consultant at Eskom. “But with the use of abatement technology such as electrostatic precipitators or fabric filter plants, more than 99% of ash is removed from the flue gas stream, providing a particulate emission concentration of usually less than 200 mg/normal cubic metre.”

The current Eskom power generation fleet (one unit = one boiler + one generator)

Getting legal

Every power station has an emissions licence with which it needs to comply. This ensures that the environment and the health of people in the vicinity of power stations are not affected negatively. The emissions licence specifies two limits – a ‘normal’ operating limit, below which emissions must remain for 96% of the time, and a ‘cap’ limit, which emissions must never exceed. For example, Lethabo Power Station’s licence limits are:

• Normal limit: 75 mg/Nm³
• Cap limit: 300 mg/Nm³
• Grace period: 90 hours per stack (three units) – time given to rectify malfunctions such as poor-quality coal, equipment breakdown, etc. The normal limit can be exceeded for this time, but emissions must remain below the cap limit.

“This is definitely going to form part of our Peak Hour Risk Analysis” – Vusi Shabangu, Head Office Generation Control Centre Shift Manager

Staying within these legal requirements, however, isn’t plain sailing. Because of the capacity shortage, shutdowns for maintenance or repair are reduced to a minimum, which means that equipment isn’t
necessarily operating at maximum efficiency. Varying coal qualities and high load factors also contribute to the difficulty of complying with the legal emission limits.

“Under exceptional circumstances, where taking a unit off load would result in load-shedding, we ask the authorities for short-term exemption from the emission licence rules, usually from the normal limit,” says Dr. Ross.

The problem

Control room operators at power stations must keep a constant lookout for potential emission problems that might exceed the set legal limits. In the case of such an event, it might be necessary to exercise what’s known as “load loss”, but this has a ripple effect in that another power station will be required to take up the slack by ramping up its production.

In short, the project goals were to:

• Prevent financial and production losses caused by forced outages.
• Prevent environmental degradation and fines from authorities due to exceeded emission limits.
• Deliver real-time KPI dashboards and reports.
• Give early warning alarms before emission limits are exceeded, enabling preventative measures to be implemented.

“This project proves once again that the Wonderware System Platform can be used to add tremendous value to any organisation. The flexibility of System Platform enabled us to connect to multiple source systems and applications to deliver critical decision-making information to the highest levels of the company.” – Gerhard Greeff, Bytes Process Management and Control

Solution selection

System Platform (ArchestrA)-certified system integrator Bytes Systems Integration was chosen for the project because of the company’s long-standing and successful relationship with Eskom, notably in the Enterprise Manufacturing Intelligence (EMI) field of which this project forms part.

Due to its ease of integration with other initiatives, its customisation capabilities and its scalability, Bytes would use the existing Wonderware infrastructure consisting of System Platform (ArchestrA), Historian, Historian Client (ActiveFactory), InTouch (SCADA/HMI), Information Server and Alarm Provider.

The solution

“Given the scope of the problem and Eskom’s nation-wide footprint of 13 operational coal-fired power stations, it was decided that emission status should be centrally monitored and controlled in real time from the Integrated Generation Control Centre at Megawatt Park,” adds Dr. Ross.

This would allow the information to be available remotely through a user-friendly interface so that environmental specialists could take the necessary action to control some complex processes. Top executives also needed access to this information via a web interface.

Implementation

Figure 3 shows the interaction necessary between all the players. Aggregated hourly averages from each power station are sent to Megawatt Park, which can make the necessary operational decisions.

Machiel Engelbrecht of Bytes explains: “For example, during start-up after shutdown, a unit’s emissions will be higher than during normal operation. So it’s important to know when a unit is about to come on line, how long it was off and how long the higher emission level is likely to last. This helps apply for the necessary exemption from the authorities. In addition, the supplied information places Megawatt Park in a good position to initiate preventative measures and to ensure optimal load distribution in the event of a shutdown due to excessive emissions.”

The geographically-dispersed historians are used as the base for real-time information, which is then compared to targets, plans and projections from other transactional systems, such as information supplied by National Control (Simmerpan – Germiston). Trending information is required to monitor the emission levels over certain time periods and this is done with ActiveFactory. Aspects of the Wonderware Historian Client are also used to calculate the hourly time-weighted averages.

Figure 2: Cause and effect – high load demand resulted in emissions exceeding the cap limit, which initiated “load loss” which, in turn, brought emissions to within the legal limits.
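To make the licence logic concrete, here is a minimal sketch in Python (purely illustrative; the real system is built on System Platform, the Historian and MSSQL, and these function and variable names are our own) of how hourly averages might be classified against a licence’s normal and cap limits, with an early warning raised when a value comes within 20% of a limit. The limit values follow the Lethabo example above.

```python
# Illustrative only: classify hourly stack-emission averages against an
# emissions licence with a 'normal' limit, a 'cap' limit and a grace
# allowance, and raise an early warning near a limit.

NORMAL_LIMIT = 75.0   # mg/Nm3 - may be exceeded only within the grace period
CAP_LIMIT = 300.0     # mg/Nm3 - may never be exceeded

def classify_hour(avg):
    """Classify one hourly time-weighted average (mg/Nm3)."""
    if avg > CAP_LIMIT:
        return "cap breach"
    if avg > NORMAL_LIMIT:
        return "grace"      # allowed only while the grace period lasts
    return "ok"

def early_warning(avg, limit=NORMAL_LIMIT, margin=0.20):
    """True when a value comes within `margin` of a limit (or exceeds it)."""
    return avg >= limit * (1.0 - margin)

def licence_report(hourly_averages, grace_hours=90):
    """Summarise a series of hourly averages against the licence."""
    states = [classify_hour(v) for v in hourly_averages]
    over_normal = len(states) - states.count("ok")
    return {
        "cap_breaches": states.count("cap breach"),
        "hours_over_normal": over_normal,      # counts against the grace period
        "grace_exhausted": over_normal > grace_hours,
        # the licence also requires emissions below the normal limit
        # for at least 96% of the time
        "pct_within_normal": 100.0 * states.count("ok") / len(states),
    }
```

For example, `licence_report([60, 80, 310, 50])` reports one cap breach and two hours counting against the grace period, while `early_warning(61)` is already true because 61 mg/Nm³ lies within 20% of the 75 mg/Nm³ normal limit.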
Wonderware’s Information Server is used to distribute data and information where it can be monitored from any client workstation.

“I cannot believe how quickly this system was implemented” – Dr. Kristy Ross, senior consultant, Eskom

Figure 3: System flow diagram (Eskom WAN)

The main beneficiaries of the system’s information are senior consultants from Environmental Management, who generate reports for the authorities. Other beneficiaries include top executives and head office Generation Control Centre personnel, especially those involved with the early warning system, which involves risk and strategic analysis.

The system is integrated with the legal documentation of the authorities and the client. It is also integrated with a number of other transactional and web-based systems within the business infrastructure.

Bytes enlisted the help of environmental specialists from Eskom to provide scenarios and business rules. They also interacted with various system owners within the organisation for access to data. All the development and tests were done on a live system. “The end-user was helpful by entering manual data such as exemption information, in parallel to their existing process,” says Engelbrecht. “This speeded up delivery as all values could be verified in real time. Excel was used as an input form for the operators at the stations as it is a product with which they are familiar. We used MSSQL to extract the time-weighted hourly averages from the Wonderware Historian.”

A single dashboard for each thermal power station was developed, showing hourly emission values together with their specific limits and for how long the normal limit was exceeded. Additionally, any applicable exemptions to the limits are also shown and the system adapts automatically. Early warnings are raised when the emission values get within 20% of the acceptable limits, at which point the right decision can be made to prevent penalties and unit shutdowns. A simplified robot on the dashboard gives a quick overview of the stations’ status, from where the user can drill down to more detailed information.

The first three power stations’ emission monitoring was delivered within a month. This was followed by training of head office’s control centre operators, environmental consultants, managers and station personnel.

“Due to the integration, scalability and versatility of the various Wonderware solutions used, it was possible to deliver a sophisticated system quickly for a process which was previously accomplished manually due to its complexity.” – Machiel Engelbrecht, Bytes Systems Integration

Figure 4: Desktop “widgets” alert supervisors of critical conditions and help them drill down to the cause through easy-to-understand dashboards.

Benefits

• Weekly reports are now supplemented with hourly monitoring – no more “after the fact” initiatives
• Early warnings of possible forced load losses – allows for pro-active decision-making
• Real-time KPI dashboards and reports – presents a window on reality rather than history
• 24-hour monitoring and alarming – follows the business Eskom is in
• Enables preventative measures to be implemented – early detection of trends is crucial to minimising downtimes
• Ensures compliance with the emissions licence – elimination of environmental degradation and fines as far as possible
• Real-time monitoring of plant performance – provides symptoms of potential problems before they affect service delivery

Figure 5: System topology for each coal-fired power station (only Lethabo power station shown)

Conclusion

Eskom’s vast resources literally run South Africa. Many people do not fully understand the consequences of not having them. Quite simply, Eskom is responsible for the way we run our lives, and it is encouraging to see the steps the company is taking to ensure the continuity of that way of life.

So, the next time you experience a blackout, it may not be due to insufficient generating capacity but to ... ash. Thankfully, that particular source of annoyance is being minimised rapidly through Eskom’s proactive initiatives.
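As a technical footnote: the hourly time-weighted averages mentioned in the implementation are extracted in the real system with MSSQL and the Wonderware Historian Client. The underlying idea can be sketched in a few lines of Python (illustrative only; the step-interpolation assumption and the names are ours, not the project’s):

```python
# Illustrative sketch of an hourly time-weighted average over
# irregularly spaced samples. Each value is assumed to hold until the
# next sample arrives (step interpolation), as a process historian
# typically stores values.

def time_weighted_hourly_average(samples, hour_seconds=3600):
    """samples: sorted list of (seconds_into_hour, value) pairs,
    with the first sample taken at the start of the hour (t = 0)."""
    total = 0.0
    for (t0, v0), (t1, _) in zip(samples, samples[1:]):
        total += v0 * (t1 - t0)                 # v0 held from t0 to t1
    last_t, last_v = samples[-1]
    total += last_v * (hour_seconds - last_t)   # last value holds to hour end
    return total / hour_seconds
```

With unequal sampling intervals the weighting matters: a reading of 60 mg/Nm³ held for the first 45 minutes and 120 mg/Nm³ for the last 15 gives an hourly average of 75, not the plain mean of 90.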
Thin client computing for virtualisation and SCADA

Thin clients – affordable terminals

In the 1970s and 80s, the PC revolution gained momentum because PCs freed people from the “big brother” syndrome of mainframes by putting intelligence and computing power to use on a “one-per-desk” basis. And this is still the case today. But there are those instances where going back to that earlier concept, but using today’s technology, is by far the more practical and cost-effective alternative.

Virtualisation makes the most of server power and technology, which negates the need for intelligence at the PC level, making thin clients the obvious, cost-effective replacement for traditional desktop PCs.

Thin client computers are affordable yet flexible, durable and reliable. They are usually factory-tested, certified and ready to install right out of the box. These compact, lightweight, robust industrial thin-client terminals are available as an ACP ThinManager-ready client or as a thin client for Microsoft operating systems. Rugged and reliable, thin client terminals have no moving parts. There is no need to worry about client system reconfiguration or data loss.

Replacing legacy graphical operator panels with Wonderware’s thin client computers can eliminate communication gaps between traditional operator panels and supervisory-level HMIs. Thin clients used as operator panels reduce hardware costs while increasing reliability and uniting disparate sources of information.

Features and benefits

• Thin client computers are available as ACP ThinManager-ready or with the Microsoft Windows CE operating system
• Compact, reliable and robust hardware with no moving parts
• Consistent operator interface from supervisory to operator panels
• A standardised, maintainable approach to supervisory and HMI systems
• Reduced hardware, maintenance and lifecycle costs
• Lower support costs
• Reduced software administration and management costs

Decrease your total cost of ownership

Thin client computers can decrease the total cost of ownership for plant floor systems in all of the following ways.

Reducing hardware costs

Thin-client terminals provide a low-cost alternative for system expansion while increasing reliability. They are an ideal and cost-effective solution for client/server architectures featuring InTouch for Terminal Services software.

Standardising on one software platform

InTouch software can be used as part of a thin-client/server architecture, in standalone thick clients or as the visualisation tool within the Wonderware System Platform architecture.

Applications

Thin client computers are ideal for visualising, monitoring and controlling machine or process operations. For applications already using InTouch for Terminal Services software or ACP thin-client management and configuration software, alternative thin client terminals of choice can be a drop-in replacement. Thin client computers are also excellent for SCADA/HMI applications requiring remote, secure and locked-down operations, protecting against potential system tampering.

Decreasing administration and maintenance costs

System programmes and applications are hosted on a centralised server, so software enhancements, migration, upgrades and deployment are simplified. No programmes or system configurations are required at the client level whenever application changes occur.

Increasing security and reliability

Since all applications and critical data are stored on a centralised server, it is much easier to secure information at the client level. Thin client computers are ideal for locked-down local applications and are less vulnerable to unauthorised system data modifications and virus infections. These thin-client terminals also provide exceptional reliability because there are no moving parts such as hard drives and fans, making them ideal for environments that are too harsh for conventional PCs.
Thin Clients make more sense than PCs
Thin clients have an obvious price and
There are several issues with maintaining
reliability advantage over their (fat) PC
PCs in plant floor environments but two that
counterparts only if they can be properly
are constant irritations are hard drives and
supported, provide the same functionality
fans. Hard drives are susceptible to magnetic
and meet operational requirements. Thin
interference and the vibration of heavy
clients have found a natural niche in the
machinery and assembly lines and computer
virtualisation and SCADA / HMI environment
fans are always pulling dust and debris into
because of their cost-effectiveness,
the box that eventually causes the PC to
robustness, lack of moving parts such as
overheat.
protecting against potential system
disc drives and ability to work in harsh environments.
It was these facts among others that led ACP to begin looking at thin clients as
access to Windows applications, without
Their support is also much simpler than
an alternative to the expensive PC-based
necessarily exposing the full desktop and
going around and upgrading possibly
systems that were literally failing every day.
Start menu to the operator. Applications
dozens of PCs because their “intelligence”
Thin clients have no hard drives and no fans.
only need to be installed once on the
comes from one source.
Thin client support - ACP ThinManager Platform 5
terminal server but can be deployed to
A better software solution
multiple users. This is still one of on the greatest benefits of terminal services and the
While thin client hardware was an excellent
time-savings it offers reduces system setup
(and cheap) replacement for PCs, setting
time and costs right out of the box.
The ThinManager Platform takes control of
up and administrating a terminal services
resources in the modern factory
environment proved difficult and time
ThinManager also offers some very powerful
consuming. The existing management tools
features at the server side of the thin client
The number of software companies that
were also severely lacking. Available tools
system. Instant Failover allows for terminal
offer solutions for the modern workplace
do not offer enough functionality to manage
servers configured with ThinManager to
is staggering. Browse any trade magazine
a thin client system properly—and so the
switch back and forth between servers in the
and the ads are plentiful with promises
creation of ThinManager began.
event of a server failure or if a server needs
to make you and your business more
to be taken down for routine maintenance.
efficient and profitable. Granted, many offer
The goal was a management tool that would
The Instant Failover feature is so robust that
viable solutions ranging from application
offer control over the thin client terminals
the thin client users are not even aware when
development to PLC programming, but in
but also the terminal server systems
a server goes offline. This feature is part of
practice, many fall short of their promised
themselves. By fully managing both ends
ThinManager’s “zero” downtime goal.
ideals and still fail to fully address the real
of the thin client system, administrators
needs of the modern plant floor.
would have total control of the thin client
High availability is of course a necessity for
environment and therefore total control of
any plant solution and usually, if a PC fails,
Automation Control Products (ACP) was
the plant floor. ACP also knew that industrial
your process naturally becomes unavailable.
created by a group of Integrators working
customers would need extensibility, support
However because of the nature of terminal
in process control environments every
for touch-screens as well as sound and
services architectures, should a thin client
day. The years spent repairing, replacing
video cards. ThinManager was designed to
fail, your process software continues to run,
and upgrading PCs on the factory floor
provide such extensibility and has always
uninterrupted. Replacement of a thin client
led to the idea that there should be a
provided support for more of these than any
can be done by an unskilled operator and
better way to manage all of the computer
other solution available.
within a few minutes the operator station is
resources—hardware and software—in these compromising and harsh environments. This simple idea drove ACP to create
fully functional once more.
Reducing management costs and consolidating applications
More core functionality
Since saving time and money was the main
After ThinManager was installed into several
reason for creating ThinManager, these same
large customer sites and proved itself as a
two goals have driven the functionality of
viable alternative to traditional PCs, ACP’s
the software. An easy-to-use user interface,
customers began requesting even more
allowing simple, wizard-based configuration
features from ThinManager. One of the
of terminals and servers was one of the
first was the ability to see more than one
first things ThinManager performed.
session at each terminal. ACP addressed
ThinManager-Ready thin clients allow you to
the problem and the result is something we
easily connect to a terminal server to gain
call MultiSession. Now ThinManager-Ready
ThinManager.
thin clients can see multiple sessions by accessing a simple drop-down menu.

ThinManager’s ability to see anything, anywhere is what makes it such a powerful tool for the factory floor. Built on top of the MultiSession feature are other powerful features like SessionTiling, where you can tile up to 25 sessions on a single screen. There is also SmartSession, which can balance the load on a collection of terminal servers without the need to install a clustered system.

Perhaps the most used feature to emerge from MultiSession is MultiMonitor. Just as the name states, MultiMonitor allows multiple monitors to be attached to a single terminal and display multiple sessions. The ability to arrange sessions on multiple monitors is virtually unlimited. Currently ThinManager supports thin clients that allow up to five monitors to run through a single terminal.

Increased visualisation and enhanced security

ACP customers were finding MultiSession so useful that ACP decided to add even more functionality. IP cameras were the next natural progression. ThinManager allows a user to not only see an image coming from an IP camera, but lets them overlay that image on top of their HMI. Now a worker could be at one end of a baking oven and view the other end from the terminal they are standing at, without having to walk back across the floor.

The IP camera feature can be combined with ThinManager’s security module, TermSecure, to allow administrators to view users when they log into a terminal. With TermSecure, administrators can also deploy programmes to specific terminals or users and manage access through keyboard logins, USB dongles or RFID cards that allow users to “swipe in” instead of logging in. This is an additional layer of security on top of Windows usernames and passwords.

The inherent security of thin client hardware also allows factories to lock down their production environments even further. Thin clients have no CD/DVD drives for loading malicious content, and ThinManager-Ready thin clients have their USB drives disabled by default. This means that employees are not loading viruses, games or music onto company hardware – and since there is no hard drive present in the thin clients, there is nothing to gain by theft of the unit, nor is there any data lost if a unit is taken.

New functionality will deliver virtualisation and mobile applications

Another technology ACP developers see gaining traction in the IT world is virtualisation. They believe this is another resource that ThinManager can make easier to manage for system administrators. Now, just as it did with terminal services, ACP is moving management of virtual resources into the ThinManager tree, giving ThinManager administrators access to many of the same features that are available in VMware’s vSphere Client. ACP’s development team is currently working on adding built-in support for virtualisation. This will allow ThinManager users to manage virtualisation and terminal services at a much lower cost than previously available.

If that were not enough, ACP has also developed an application for Apple’s iOS devices. Most of ThinManager’s capabilities are now available on a mobile platform for Apple’s iPhone and iPad. You can look for this technology to expand into making mobile devices become actual thin clients themselves in the near future.

The final word

You would be hard-pressed to find as robust a solution for managing your plant floor environment as ThinManager. No other management tool takes control of all the resources used in today’s modern factories as completely as ACP’s ThinManager. With more than 12 years in business, ThinManager is a proven technology. Thousands of companies in 30 countries, including one in ten Fortune 500 companies, use ACP ThinManager and ThinManager-Ready thin clients for their daily operations.
Virtualisation needs high availability functionality
Whether you’re using thin clients or not, server failures in critical production areas – and especially in a virtual computing environment – don’t only lead to costly downtimes but can also have immediate and serious consequences. Data loss, regulatory non-compliance and health risks are just a few of the hazards that can arise from even a brief outage.

That’s why manufacturers running critical Wonderware solutions rely on fault-tolerant Stratus servers for continuous 24/7/365 uptime assurance. For nearly a decade, Stratus ftServer systems have delivered unsurpassed levels of reliability – for both the server and the operating system. Today, ftServer uptime averages six nines (i.e. 99.9999%), a number that translates to less than 32 seconds of downtime per year.

Engineered to prevent failure

The ftServer architecture eliminates single points of failure. Every system comes equipped with replicated hardware components, built-in Automated Uptime technology and proactive availability monitoring and management features. These capabilities automatically diagnose and address hardware and software errors that would quickly halt processing in other x86 servers. Even in the event of a hard component failure, the duplicate component simply continues normal processing. As a result, your Wonderware solutions run uninterrupted and your critical data is fully protected from loss or corruption – even data not yet written to disk.

Worry-free solutions that are simple to maintain

Real-time solutions place high demands on hardware. When business needs dictate continuous trouble-free performance, the fault-tolerant ftServer system is the right choice. Equipped with Intel® Xeon® quad-core processors, QuickPath Interconnect technology, up to 96 GB of memory and 8 TB of physical storage, fifth-generation ftServer systems have never been so right for critical heavy-duty workloads. A choice of operating systems adds to the versatility of ftServer solutions and offers virtual environments the simplicity of automatic “load-and-go” availability.

In any setting, Stratus’ advanced technology makes these servers quick to deploy, simple to manage and cost-effective to own. Built-in self-diagnostics and “call-home” capabilities automatically report issues and, when necessary, the system even orders its own hot-swappable replacement parts. No special IT skills are required to replace hardware components, saving time, effort and expense.

Why Invensys Wonderware customers choose Stratus

• Delivers industry-leading uptime >99.999%
• Easy to set up and operate: load-and-go simplicity; no need for sophisticated IT skills
• Simple to maintain: special features include remote monitoring, diagnostics and alerts; hot-swappable components; no need for failover testing or scripts
• Provides comprehensive uptime protection for Wonderware solutions in Microsoft® Windows Server® and VMware® vSphere™ environments
• Mission-critical services offer comprehensive, 24/7 support and long-term coverage plans
• Server consolidation: Wonderware System Platform runs on a single, virtualised ftServer system

Product offerings

ftServer systems ensure uptime assurance and operational simplicity for:

• Wonderware System Platform
• Wonderware Application Server
• Wonderware InTouch® HMI for terminal services
• Wonderware Historian
• Wonderware InBatch™ Software
• Wonderware Device Integration I/O Servers
• Wonderware MES software
• Kepware KEPServer for Wonderware
• ACP ThinManager® Platform
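The “six nines” figure quoted above is easy to sanity-check, since an availability percentage maps directly to an allowance of downtime per year. A short illustrative calculation (ours, not from Stratus):

```python
# Sanity-checking the availability figures quoted above: the downtime
# per year implied by an availability percentage.

SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years

def downtime_seconds_per_year(availability_pct):
    """Seconds of downtime per year allowed at a given availability %."""
    return (1.0 - availability_pct / 100.0) * SECONDS_PER_YEAR

# Six nines (99.9999%) allows roughly 31.5 seconds per year, matching
# the "less than 32 seconds" quoted above; five nines (99.999%)
# already allows over five minutes.
```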
Virtualisation dictionary

Acronyms

COBIT – Control OBjectives for Information and related Technology
DR – Disaster Recovery
HA – High Availability
IDE – Integrated Development Environment (ArchestrA)
IaaS – Infrastructure as a Service
LAN – Local Area Network
PaaS – Platform as a Service
SaaS – Software as a Service
SCSI – Small Computer System Interface
SSD – Solid State Drive
VM – Virtual Machine or Virtual Memory
VMM – Virtual Machine Monitor
WAN – Wide Area Network

Cores – Two or more independent actual processors that are part of a single computing environment.
Cloud computing – Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).

COBIT – First released in 1996, COBIT is a framework created by ISACA (Information Systems Audit and Control Association) for information technology (IT) management and IT governance. It is a supporting toolset that allows managers to bridge the gap between control requirements, technical issues and business risks.

Disaster Recovery (DR) – The organisational, hardware and software preparations for system recovery or continuation of critical infrastructure after a natural or human-induced disaster.

High Availability (HA) – A primarily automated implementation which ensures that a pre-defined level of operational performance will be met during a specified, limited time frame.

Hypervisor – In computing, a hypervisor, also called a virtual machine monitor (VMM), is one of many hardware virtualisation techniques allowing multiple operating systems, termed guests, to run concurrently on a host computer. It is so named because it is conceptually one level higher than a supervisory programme. The hypervisor presents to the guest operating systems a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualised hardware resources. Hypervisors are very commonly installed on server hardware, with the function of running guest operating systems that themselves act as servers.

Infrastructure as a Service (IaaS) – In this most basic cloud service model, cloud providers offer computers (as physical or, more often, as virtual machines), raw (block) storage, firewalls, load balancers and networks. IaaS providers supply these resources on demand from their large pools installed in data centres. Local area networks, including IP addresses, are part of the offer. For wide area connectivity, the Internet can be used or, in carrier clouds, dedicated virtual private networks can be configured. To deploy their applications, cloud users then install operating system images on the machines as well as their application software. In this model, it is the cloud user who is responsible for patching and maintaining the operating systems and application software. Cloud providers typically bill IaaS services on a utility computing basis; that is, cost will reflect the amount of resources allocated and consumed.

Local Area Network (LAN) – A local area network is a computer network that interconnects computers in a limited area such as a home, school, computer laboratory or office building. The defining characteristics of LANs, in contrast to wide area networks (WANs), include their usually higher data-transfer rates, smaller geographic area and lack of a need for leased telecommunication lines.

Paravirtualisation – A case where a hardware environment is not simulated; however, the guest programmes are executed in their own isolated domains, as if they were running on separate systems. Guest programmes need to be specifically modified to run in this environment.

Platform as a Service (PaaS) – In the PaaS model, cloud providers deliver a computing platform and/or solution stack, typically including an operating system, programming language execution environment, database and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying computation and storage resources scale automatically to match application demand so that the cloud user does not have to allocate resources manually.

Private cloud – Private cloud is cloud infrastructure operated solely for a single organisation, whether managed internally or by a third party and hosted internally or externally. Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

Public cloud – Public cloud applications, storage and other resources are made available to the general public by a service provider. These services are free or offered on a pay-per-use model. Generally, public cloud service providers like Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered).

Software as a Service (SaaS) – In this model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. The cloud users do not manage the cloud infrastructure and platform on which the application is running. This eliminates the need to install and run the application on the cloud user's own computers, simplifying maintenance and support. What makes a cloud application different from other applications is its elasticity. This can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant; that is, any machine serves more than one cloud user organisation. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service. The pricing model for SaaS applications is typically a monthly or yearly flat fee per user.

Virtualisation – In computing, this is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, storage device or network resource. It is a concept in which access to a single underlying piece of hardware (like a server) is coordinated so that multiple guest operating systems can share that single piece of hardware, with no guest operating system being aware that it is actually sharing anything. In short, virtualisation allows for two or more virtual computing environments on a single piece of hardware that may be running different operating systems, and decouples users, operating systems and applications from the physical hardware.

Virtualisation types:

Hardware – Hardware virtualisation or platform virtualisation refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system, with the result that Ubuntu-based software can be run on the virtual machine. In hardware virtualisation, the host machine is the actual machine on which the virtualisation takes place and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Monitor.

Different types of hardware virtualisation include:

• Full virtualisation – Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.

• Partial virtualisation – Some but not all of the target environment is simulated. Some guest programmes, therefore, may need modifications to run in this virtual environment.

• Paravirtualisation – A hardware environment is not simulated; however, the guest programmes are executed in their own isolated domains, as if they were running on separate systems. Guest programmes need to be specifically modified to run in this environment.

1. Desktop – Desktop virtualisation is the concept of separating the logical desktop from the physical machine. One form of desktop virtualisation, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualisation: instead of directly interacting with a host computer via a keyboard, mouse and monitor connected to it, the user interacts with the host computer over a network connection (such as a LAN, wireless LAN or even the Internet) using another desktop computer or a mobile device. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users.

Another form of desktop virtualisation, session virtualisation, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each is given a desktop and a personal folder in which they store their files. With multi-seat configuration, session virtualisation can be accomplished using a single PC with multiple monitors, keyboards and mice connected.

Thin clients, which are seen in desktop virtualisation, are simple and/or cheap computers that are primarily designed to connect to the network; they may lack significant hard disk storage space, RAM or even processing power but, in this environment, this matters little.

Using desktop virtualisation allows companies to stay more flexible in an ever-changing market. Having virtual desktops allows development to be implemented more quickly and expertly, and proper testing can be done without the need to disturb the end user. Moving the desktop environment to the cloud also allows for fewer single points of failure when a third party is allowed to control the company's security and infrastructure.

2. Software – Software virtualisation includes the following:

• Operating system-level virtualisation – The hosting of multiple virtualised environments within a single OS instance.

• Application virtualisation and workspace virtualisation – The hosting of individual applications in an environment separated from the underlying OS. Application virtualisation is closely associated with the concept of portable applications.

• Service virtualisation – This involves emulating the behaviour of dependent (e.g. third-party, evolving or not implemented) system components that are needed to exercise an application under test (AUT) for development or testing purposes. Rather than virtualising entire components, it virtualises only specific slices of dependent behaviour critical to the execution of development and testing tasks.

3. Memory – Memory virtualisation means aggregating RAM resources from networked systems into a single memory pool. This leads to the concept of virtual memory, which gives an application programme the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation.

4. Storage – Storage virtualisation is the process of completely abstracting logical storage from physical storage.

5. Data – Data virtualisation is the presentation of data as an abstract layer, independent of underlying database systems, structures and storage. Database virtualisation is the decoupling of the database layer, which lies between the storage and application layers within the application stack.

6. Network – Network virtualisation is the creation of a virtualised network addressing space within or across network subnets.

Virtual Machine (VM) – A virtual machine is a software implementation of a machine (i.e. a computer) that executes programmes like a physical machine. Virtual machines are separated into two major categories, based on their use and degree of correspondence to any actual machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single programme, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine: it cannot break out of its virtual world.

Virtual Memory – The concept whereby an application programme has the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation.

Wide Area Network (WAN) – A WAN is a telecommunication network that covers a broad area (i.e. any network that links across metropolitan, regional or national boundaries). Business and government entities use WANs to relay data among employees, clients, buyers and suppliers from various geographical locations. In essence, this mode of telecommunication allows a business to effectively carry out its daily functions regardless of location.

Acknowledgement: Most of the definitions in this dictionary are sourced from Wikipedia.
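The SaaS entry above notes that elasticity is achieved by cloning tasks onto multiple virtual machines, with load balancers distributing the work transparently. A minimal sketch of the simplest such policy, round-robin, is shown below; the names are illustrative and not taken from any product, and real balancers also weigh server health, current load and session affinity:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hands each incoming request to the next virtual machine in turn."""

    def __init__(self, servers):
        self._rotation = cycle(list(servers))  # endless repeating sequence

    def route(self, request):
        """Return the server chosen to handle this request."""
        return next(self._rotation)

# Three cloned VMs behind one access point, as in the SaaS definition above.
balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
assignments = [balancer.route(f"req-{i}") for i in range(6)]
print(assignments)  # each VM receives every third request
```

Because the rotation is transparent to the caller, the cloud user still sees a single access point while the work spreads evenly across the pool.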
Events - MESA SA "Adapt or Die" 2012 Conference

The MESA SA Executive Committee announces that the 2012 MESA Conference is scheduled for the 13th and 14th of November 2012 at the Indaba Hotel, Fourways. The format of the conference has changed considerably as a result of user requests.

On the first morning of the conference, MESA SA will run the first Executive Education Programme session in South Africa, designed by the MESA Global Education Programme (GEP) members. This half-day session is specifically aimed at the busy executive who cannot afford to be out of the office for a full day or more. The session provides an overview of MES/MOM, where it fits in the organisation, the difference between MES/MOM and ERP systems, where and how to deploy MES/MOM and the benefits and pitfalls of MES/MOM deployments.

After lunch on the 13th, the conference proceedings will kick off with an international speaker followed by user case studies, concluding with a networking session for the delegates. As per requests from our vendors and sponsors, MESA SA also plans to have an exhibition hall available this year where teas and lunches will be served. This will give the exhibitors more time to interact with users and more space than in previous years.

On the 14th, the conference will kick off with a motivational speaker followed by more user case studies. The conference will conclude at afternoon tea so that travellers can catch their flights home.

In addition to the conference, MESA SA is also holding a post-conference training programme on the 15th and 16th of November to present the two-day Certificate of Awareness (CoA) GEP training. This training is aimed at operational and project managers responsible for MES/MOM systems, as well as sales and marketing people from vendors. 32 MES/MOM professionals in South Africa did this training last year and it received great reviews. Registration for this training can be done directly on the MESA International website (www.mesa.org).

Please keep these dates open and watch the press and social media for further details.

The planned agenda is as follows:

13th November
08:30 – 12:00  MESA Global Executive Education Programme presented by Jan Snoeij from Logica and MESA EMEA
12:00 – 13:00  Lunch in Exhibition Hall
13:00 – 15:30  International speaker and user case studies
15:30 – 17:00  Networking session

14th November
08:30 – 12:00  Motivational speaker and user case studies
12:00 – 13:00  Lunch in Exhibition Hall
13:00 – 15:30  User case studies
15:30 – 16:00  Afternoon tea
16:00  Delegates depart

15th November
08:30 – 17:00  Post-Conference Training – MESA Global Education Programme Certificate of Awareness

16th November
08:30 – 15:00  Post-Conference Training – MESA Global Education Programme Certificate of Awareness

MESA SA "Adapt or Die" 2012 call for papers

MESA Southern Africa is sending out a Call for Papers for the conference scheduled for the 13th and 14th of November this year. The theme of the conference is "Adapt or Die".

Suitable papers and presentations will explain how your company adapted its manufacturing operations to:

• Changing market conditions,
• Changes in the skills make-up of your company,
• Changes in the systems landscape,
• Changes in the market, or
• Changes within your suppliers or raw materials.

Presentations are also welcome that discuss any new or different way your company is looking at any aspect within the Operations Management environment (Maintenance, Production, Quality, Inventory) and has implemented a project to support, for instance:

• People,
• Equipment,
• Materials,
• Products,
• Capacity,
• Production Schedule, or
• Operations Performance.

The MESA SA Executive Committee will be looking for real-life case studies that indicate how your company changed its way of working in order to adapt to the changing environment or to increase the cost-effectiveness of its operations. The best speaker at the conference (as decided by the MESA SA Executive Committee) will get an iPad as a prize.

If you have a good story to tell, please note the following dates and send in your submission:

• Abstract due (120 – 250 words) – 17 August 2012
• Evaluation complete – 31 August 2012
• Presentation due – 19 October 2012

Please send your abstracts to gerhard.greeff@bytes.co.za and daniel.spies@sappi.com.

MESA SA "Adapt or Die" 2012 call for sponsors and exhibitors

Various exhibition and sponsorship opportunities will be available for interested vendors and integrators. We are looking for the following sponsors:

• Platinum sponsor – This sponsor will be allowed to erect banners in the actual conference/speaker venue as well as get a free exhibitor space in the Exhibitor Hall. The Platinum Sponsor will also be recognised as such on the programme and on printed and electronic media going out before and on the conference day. This sponsor will also get special mention during the conference proceedings and will have the opportunity to add brochures and literature to the delegate bags.

• International Speaker sponsorship – This sponsor will be recognised as the GOLD SPONSOR and will get a free exhibitor space in the Exhibitor Hall. The Gold Sponsor will also be recognised as such on the programme and on printed and electronic media going out before and on the conference day. This sponsor will also get special mention during the conference proceedings.

• Speaker gifts – This sponsor will be recognised on the programme and printed media.

• Networking session – This sponsor will be recognised on the programme and printed media.

• Best Speaker prize (iPad) – This sponsor will be recognised on the programme and printed media.

• Name tags and delegate bags – This sponsor will be recognised on the programme and printed media.

• Exhibitors – The exhibitors will be recognised on the programme and printed media.

If you are interested in any of the above sponsorships and related costs, please contact gerhard.greeff@bytes.co.za and daniel.spies@sappi.com.
Use Protocol Magazine to generate business opportunities

Protocol magazine continues to be well received on a bi-monthly basis by 6500 industry professionals like you, at every level of the country's leading mining and manufacturing companies. You can leverage this highly-qualified readership to be heard.

How do you promote yourself right now?

Some of the things you might be doing could include inserting opinion pieces, adverts, editorials and other material into South Africa's leading manufacturing and mining magazines. A good choice, since these are excellent and professional publications that land on decision-makers' desks every month.

What Protocol offers is all the advantages of a professional magazine with a large circulation, but the cherry on the cake is that all the readers of Protocol have one thing in common – Wonderware solutions in the areas of SCADA, MES, EMI, BPM and enterprise integration – in fact, anything to do with industrial and corporate production IT. Everything in Protocol is aimed at helping end users get more from their Wonderware investment and triggering them to look at new possibilities. Nobody wants to reinvent a costly development or investigation wheel, and what you have to offer will go a long way to stopping that happening.

What medium will work best for you?

Let's think for a minute about your perfect promotion vehicle and what it should do for you:

• It must convey your message in a professional manner to a large, targeted and qualified audience
• It must generate incremental business (if you're a solution supplier) or recognition (if you're an end-user)
• It must generate market awareness of your capabilities
• It must do all that at a reasonable cost

Protocol magazine meets all these criteria.

If you're an end-user, your stakeholders are most interested to know how well you're looking after their interests by lowering costs and improving efficiency. Your colleagues in the industry are keen to see how you've implemented Wonderware solutions so that they can evaluate whether these will have the same benefits in their environments.

If you're a system integrator, end-users want to know what you've done so that they can consider you as a solution supplier for their next project.

If you're a hardware or software vendor, end-users and system integrators want to know how well your offerings work in the Wonderware environment and how they can help them do a better and more cost-effective job.

Success stories:

They won't cost you a cent and you don't have to write them. Simply send an e-mail to your account manager stating that you have the makings of a good story and why you think it is so. You will then be sent a Guideline and a Permission to Publish form to complete and return.

The Guideline is in the form of prompts to which you supply the answers to the best of your ability. This, together with the graphical information required, will be used to write the article, which will be sent back to you for editing, approval, etc.

The Permission to Publish form must be signed by the end-user of the installation and the system integrator / solution vendor (if applicable) before work on the article is started. This ensures that all the work that goes into compiling the story will not be wasted.

You are free to use the completed success story in any marketing sense you wish, and you have hundreds of examples on our web site and in past issues of A2ware and FutureLinx.

Opinion pieces:

Once again, there's no cost involved and you don't have to worry about probably not having majored in English. Decide on a central theme and the idea(s) you want to put across, then jot down all the reinforcing arguments you can think of (as well as references, if applicable). Also include any supporting graphics you feel will better illustrate the point. Send your draft article to your account manager and, if necessary, we'll make the necessary edits before returning it to you for approval.

Comments to the editor, Q&As, product and/or service information:

Send your submissions to Denis or your account manager and they (as well as the answers) will be published in the next issue (if interesting and relevant).

So what are we really saying?

As an end-user or supplier of Invensys Wonderware and associated solutions, you form part of the world's largest ecosystem of professionals in the fields of industrial automation and the delivery of actionable intelligence from the shop floor to the top floor.

That makes you pretty special. That makes what you have to say significant and important. You will be talking to people with the same reality as you and who have the same problems and concerns.

So, what we're really saying is, use Protocol magazine to say what you believe needs to be said. In other words, what you have to say matters and we have made it as easy as possible for you to say it!

Material formats

Text – in Microsoft Word format
Graphics – in PowerPoint, Bitmap or JPEG format (the last two in the highest possible resolution you have)

Advertising:

For all your advertising requirements – including the drafting of effective adverts from scratch – contact Heather Simpkins at The Marketing Suite.

Did you know that if you don't talk to anyone, they're not likely to talk to you or send orders?
Training and Support Services
2012 Training Schedule (Johannesburg)

Did you know that your bottom line is directly proportional to the effectiveness of your workforce?

NOTE: The dates shown apply to training at our offices in Bedfordview, Johannesburg. Regional training is presented on demand; a minimum of six delegates is required to arrange a course.

Regional training venues:
Durban: Khaya Lembali, Morningside.
Cape Town: Durbanville Conference Centre.
Port Elizabeth: Pickering Park Conference Centre, Newton Park.

InTouch Part 1 Fundamentals (includes New Graphics)
• 7 – 11 May
• 25 – 29 June
• 23 – 27 July
• 27 – 31 August
• 1 – 5 October
• 29 October – 2 November
• 26 – 30 November

InTouch Part 2 Advanced (includes New Graphics)
• 4 – 8 June
• 9 – 13 July
• 10 – 14 September
• 5 – 9 November

System Platform – Application Server (includes New Graphics)
• 2 – 6 July
• 30 July – 3 August
• 3 – 7 September
• 8 – 12 October
• 19 – 23 November

Historian (includes ActiveFactory and Wonderware Information Server)
• 14 – 18 May
• 18 – 22 June
• 20 – 24 August
• 17 – 21 September
• 15 – 19 October
• 12 – 16 November
• 3 – 7 December

As the owner of some of the world's most popular, advanced and versatile industrial automation, information and MES software solutions, you'll want to get the most from your investment, and that includes getting the best training in the business. We routinely train about 600 professionals like you every year, not only on how to use our solutions but on how to turn our product features into real business benefits. So, let us suggest a training curriculum best suited to your needs.

For all your training requirements, contact Emmi du Preez at emmi.dupreez@wonderware.co.za or call her on 011 607 8286.
Support – Customer FIRST

Maximise asset performance

Downtime costs businesses millions of Rand. Customer FIRST support gives you options to maximise productivity by keeping your operations running smoothly.

Outages, both planned and unplanned, are costly; businesses increasingly need to employ effective pre-emptive strategies to reduce risks, and efficient, effective resourcing strategies to ensure that non-productive time is kept to a minimum.

Customer FIRST is not just technical support, it's a comprehensive programme to help you manage your systems and maximise the performance of your assets.

Downtime hurts – Customer FIRST can help

Even the most reliable equipment requires downtime, perhaps for routine maintenance, preventative maintenance, upgrades or replacement. You need to ensure that downtime is kept to a minimum and that there is minimal production loss as a result. Recovery time is critical, and any delays in acquiring either replacement parts or the expertise required to quickly resolve problems can have a significant financial impact on your business. What's more, extended downtime presents other risks to your business, such as failing to meet contractual obligations to your customers and the loss of business that may ensue.

Asset performance is not just about maximising availability, though; you need to ensure that your assets are working to their maximum potential. You also need to minimise the risk to your business of missed schedules, poor quality or regulatory violations, with the business consequences that may follow.

Customer FIRST – our mission: your success

• Customer FIRST provides you with access to great hardware maintenance, software maintenance and comprehensive lifecycle management services to help you optimise your planned downtime and minimise unplanned downtime events.

• Customer FIRST provides you with timely access to critical spare parts, with the ability to manage spares more easily and ensure the reliability of your systems.

• Customer FIRST also gives you access to Invensys technical resources to help you ensure that your system is back to capacity in as short a time as possible. Our world-class global service organisation is available locally, so the help you need is never far away.

• Customer FIRST gives you proactive remote health monitoring services to spot warning signs before problems occur, and advanced consulting services to tune your systems to maximum performance.

Customer FIRST membership gives you access to award-winning technical support, hardware and software maintenance services, lifecycle management and remote services, training and consulting services and much more. The programme provides you with comprehensive services and flexible options to choose exactly the right kind of programme to suit your business needs and help you to maximise asset performance.

In a nutshell ...

Customer FIRST is not just technical support, it's a comprehensive programme to help you manage your systems and protect your investments.

Comprehensive Services
• Responsive services
• Depth of expertise
• Proactive planning
• Continuous performance monitoring
• Emergency contingency provisioning
• Deep discounts on hardware, software and services

Real Value
Customer FIRST members enjoy the many benefits of a closer collaborative relationship with Invensys. These important elements make Customer FIRST membership an essential part of your business success.

Contact information
Support telephone number: 0861-WONDER (0861-966337) or 0800-INVENSYS (Toll Free)
E-mail: support@wonderware.co.za or support@invensys.co.za
On the lighter side

The world’s greatest philosophers could learn a thing or two from this lot:

• Snowmen fall from Heaven unassembled.
• To steal ideas from one person is plagiarism. To steal from many is research.
• If all else fails, immortality can always be assured by spectacular error.
• If life gives you lemons, stick them down your shirt and make your boobs look bigger.
• A bus station is where a bus stops. A train station is where a train stops. My desk is a work station.
• That all men are equal is a proposition which, at ordinary times, no sane individual has ever given his assent.
• Just because nobody complains doesn’t mean all parachutes are perfect.
• How is it one careless match can start a forest fire, but it takes a whole box to start a campfire?
• A marriage is always made up of two people who are prepared to swear that only the other one snores.
• When the pin is pulled, Mr. Grenade is not our friend.
• Dolphins are so smart that within a few weeks of captivity, they can train people to stand on the very edge of the pool and throw them fish.
• Marriage is an institution consisting of a master, a mistress and two slaves, making in total, two.
• Great minds discuss ideas. Average minds discuss events. Small minds discuss people.
• A bank is a place that will lend you money if you can prove that you don’t need it.
• One of my pet peeves is women who don’t put the toilet seat back up when they’re finished.
• Whenever I fill out an application, in the part that says “In an emergency, notify:” I put “A DOCTOR.”
• I don’t have a sense of decency. That way, all my other senses are enhanced.
• What happens if a big asteroid hits Earth? Judging from realistic simulations involving a sledge hammer and a common laboratory frog, we can assume it will be pretty bad.
• I thought I wanted a career; turns out I just wanted paycheques.
• In some cultures, what I do would be considered normal.
• If things get better with age, I’m approaching magnificent!
• Suppose you were an idiot. And suppose you were a member of parliament. But I repeat myself.
• I didn’t say it was your fault, I said I was blaming you.
• Anatidaephobia: The fear that somewhere, somehow, a duck is watching you.
• Why does someone believe you when you say there are four billion stars, but check when you say the paint is wet?
• We don’t see things as they are; we see things as we are.
• Do not argue with an idiot. He will drag you down to his level and beat you with experience.
• My wife submits and I obey; she always lets me have her way.
• Behind every successful man is his woman. Behind the fall of a successful man is usually another woman.
• The last thing I want to do is hurt you. But it’s still on the list.
• Diarrhoea is hereditary... it runs in your jeans.
• I named my dog ‘Herpes’ because he won’t heel.
• Save the trees, wipe your butt with an owl.
• A clear conscience is usually the sign of a bad memory.
• If I agreed with you, we’d both be wrong.
• We never really grow up; we only learn how to act in public.
• Knowledge is knowing a tomato is a fruit; wisdom is not putting it in a fruit salad.
• I went to my doctor and asked for something for persistent wind. He gave me a kite.
• The voices in my head may not be real, but they have some good ideas!
• I discovered I scream the same way whether I’m about to be devoured by a great white shark or if a piece of seaweed touches my foot.
• The early bird might get the worm, but the second mouse gets the cheese.
• Some cause happiness wherever they go. Others, whenever they go.
• The great thing about democracy is that it gives every voter a chance to do something stupid.
• Evening news is where they begin with ‘Good evening,’ and then proceed to tell you why it isn’t.
• There’s a fine line between cuddling and holding someone down so they can’t get away.
• I used to be indecisive. Now I’m not sure.
• I always take life with a grain of salt... plus a slice of lemon... and a shot of tequila.
• You’re never too old to learn something stupid.
• To be sure of hitting the target, shoot first and call whatever you hit the target.
• Nostalgia isn’t what it used to be.
• A bus is a vehicle that runs twice as fast when you are after it as when you are in it.
• Change is inevitable, except from a vending machine.
• Friday, I was in a bookstore and I started talking to a French-looking girl. She was a bilingual illiterate - she couldn’t read in two different languages.
• If toast always lands butter-side down, and cats always land on their feet, what happens if you strap toast on the back of a cat and drop it?
• The other day, I was walking my dog around my building... on the ledge. Some people are afraid of heights. Not me, I’m afraid of widths.
• When I have a kid, I want to buy one of those strollers for twins. Then put the kid in and run around, looking frantic. When he gets older, I’d tell him he used to have a brother, but he didn’t obey.
• Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us.
• If women are so bloody perfect at multi-tasking, how come they can’t have a headache and sex at the same time?
Protocol Crossword #55 When you’ve completed the crossword, the letters in the coloured boxes spell out the name of the Invensys foundation for all applications that now supports virtualisation. Note: This magazine contains the answers to a number of the clues. E-mail your answer to: editor@wonderware.co.za. The sender of the first correct answer received will get a hamper of Invensys Wonderware goodies.
Clues across:
1. The creation of something other than the real thing (14)
13. Object-Oriented Programming (3)
14. Small willow used for basket work (5)
15. Is it a car? Is it a skirt? (4)
17. Part of a train (7)
20. Not Miss or Mrs (2)
21. Cunning animals (5)
22. Cash register (4)
23. Difficult and tiring (7)
25. Eggs (3)
26. User Requirement Specification (3)
28. A dog will let out one if you hurt him (4)
29. Skeletons are made of them (5)
31. Aluminium symbol (2)
32. Ocean (3)
33. Volcanic rock (2)
35. Real Time (2)
36. Many can be saved with proper safety measures (5)
37. The world-wide web (8)
41. Royal Navy (2)
42. Places where you might pitch a tent (9)
44. It’s all around us (3)
46. Electrical Engineering (2)
47. Prefix attached to everyday words to add a computer, electronic or online connotation (especially to 32 down) (5)
49. 7 down greatly improves the quality of this (8)
53. Doctors’ prescriptions are notoriously not this (7)
54. Paper sat-nav? (3)
55. Compass point (2)
57. Used in Iraq with shock (3)
60. Most volcanically active body in the solar system (2)
61. What’s brown and sounds like a bell? (4)
63. Girl’s name (4)
65. Imperial unit of measurement (4)
68. Slippery fish (3)
69. Cry uncontrollably (3)
71. Regional System Integrator (3)
73. Expression of surprise (2)
74. Gallium symbol (2)
76. One who is the ultimate beneficiary of Invensys solutions (4)
77. Measures to ensure that a pre-defined level of operational performance will be met during a specified, limited time frame (4,12)

Clues down:
1. SimSci-Esscor has elegant solutions in this area (7,7)
2. Makes liquid cloudy by stirring up sediment (5)
3. The GFIP introduced this highly unpopular way of getting extra revenue (4)
4. Universal Product Code (3)
5. Truck (5)
6. Exists (2)
7. The technique of representing the real world by a computer programme (10)
8. Fable guy (5)
9. Transmit Ready (data coms.) (2)
10. Operational Management Office or washing powder (3)
11. Tricky Dicky (U.S. president) (5)
12. The infrastructure for system recovery after a natural or human-induced catastrophe (8,8)
16. Not ever (5)
18. It owns and flies passenger aircraft (7)
19. Car body (2)
21. Full Service Banking (3)
24. University officials (5)
27. Flat-topped hill or mountain (Mexico) (4)
30. Small number (3)
32. The tighter these measures, the less likely systems are to be hacked (8)
34. Raw material before metal is extracted (3)
37. Intellectual Property (2)
38. Movie alien (2)
39. National Senior Certificate (3)
40. You might start running one in a bar, for example (3)
45. Id est (in other words) (2)
48. Young in appearance and manner (8)
49. Old Germiston registration (2)
50. American Bar Association (3)
51. More recent (5)
52. Depart (2)
54. Melodious sound (5)
56. Industrious island (6)
58. Friend (U.K. and Australia) (4)
59. Messy with “un” in front (4)
62. Not old (3)
64. Range of numbers expressing the relative acidity or alkalinity of a solution (2)
65. Future Value (of an investment) (2)
66. Not me (3)
67. They’re good for you (6)
70. Elsa was born this way (4)
72. Old horse (3)
75. Commercial stomach settler (3)
Answer to Protocol crossword #54: Question: What is Invensys Operations Management’s name for its all-inclusive enterprise control system? Answer: InFusion
Measurement Under Control
Achieving competitive and efficient process plant operation is an increasingly tough challenge in today’s fast-moving business environment. Selecting the most reliable and longest-life measurement instrumentation is more important than ever. Invensys Foxboro offers time-proven, innovative measurement solutions that make this possible, leading the way with longer-life pH, redox and conductivity measurement sensors and instrumentation.
View our full range of measurement tools and instrumentation at: www.invensys.co.za or call 0800 INVENSYS for more information
Get InTouch with your artistic side that craves performance
Creating powerful visualisation and supervisory apps is a snap with the world’s number one HMI software. iom.invensys.com/InTouch For more information, call us on 0800 INVENSYS
Wonderware InTouch® - An HMI and way more.

Avantis
Eurotherm
Foxboro
InFusion
SimSci-Esscor
Skelta
Triconex
Wonderware
© Copyright 2012. All rights reserved. Invensys, the Invensys logo, Avantis, Eurotherm, Foxboro, IMServ, InFusion, Skelta, SimSci-Esscor, Triconex and Wonderware are trademarks of Invensys plc, its subsidiaries or affiliates. All other brands and product names may be trademarks of their respective owners.