Computer Vision & Deep Learning | MVPro 5 | September 2017


MACHINE VISION PROFESSIONAL

DEEP LEARNING POSSIBILITIES

BUSY CONFERENCE SEASON

ISSUE 5 - SEPTEMBER 2017

SENSORS: WHAT LIES AHEAD

mvpromedia.eu

THE LATEST MACHINE VISION NEWS AND VIEWS


The applications are endless...

MACHINE VISION | AERIAL IMAGING | AEROSPACE | INDUSTRIAL | MICROSCOPY | INSPECTION | ASTRONOMY | QUALITY CONTROL | MILITARY | TEST & MEASUREMENT

New!

TIGER & CHEETAH CCD, CMOS

Up to 47 Megapixel

For all your imaging needs... www.imperx.com


CONTENTS mvpromedia.eu

Visit our website for daily updates

www.mvpromedia.eu

MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0) 1179 089686 © 2017. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies. Designed by The Wow Factory www.thewowfactory.co.uk


4 Welcome to MVPro

5 NEWS: A round-up of what's been happening in the machine vision sector

19 DEEP LEARNING: MVPro interviews Jeff Bier, founder of the Embedded Vision Alliance, about the possibilities of deep learning

24 SENSORS: Editor Neil Martin hears about the sector's future prospects and takes a look at SICK, one of the leading players

30 SCORPION: From epic bike ride to new distributorship, Scorpion Vision's boss Paul Wilson has had a busy summer

32 GARDASOFT: We caught up with Gardasoft over the summer to ask how business was doing, and Managing Director Hilary Briggs gave us the details

33 BOAZ ARAD: Winner of the EMVA Young Professional of 2017 award

34 TPL VISION: The future's bright

37 WOMEN IN ENGINEERING: Urgent action from the top is needed to address the shortage of women in UK engineering

38 BITFLOW: What the future holds for the frame grabber in the vision market

41 CALL FOR PAPERS: Computers in Industry

42 R&D TAX CREDITS: You need to know about these

45 TELEDYNE DALSA: Christopher Chalifoux takes us through the four basic machine vision applications

48 XILINX: reVISION accelerates your surveillance application

54 CONFERENCES: It's a busy time for exhibitions and conferences from now until the end of the year, and Editor Neil Martin casts his eye over the main events

56 VISION BUSINESS: Wilhelm Stemmer bows out in style and Basler acquires mycable

59 PUBLIC VISION: 10 years on from the financial crisis

62 CALLY'S SIGNOFF: Well, that was a nice summer


SEPTEMBER, A NEW YEAR STARTS

September usually brings a sense of a new year in the corporate world, possibly more than January itself. People come back off holiday and start thinking about the remainder of the year and the early part of the next. It's a good time to examine everything within the company, check on existing strategies across all departments and instigate new ones. We're certainly finding that in the MVPro office as we talk to companies about what they are planning for the remainder of 2017.

And the mood is optimistic. There's a buzz out there that is hard not to notice. Companies within the sector seem well set to take advantage of a continuing healthy market, although there are things that management teams will need to consider as they plan ahead. Much of the growth in machine vision has come from the consumer electronics market, and this is still in the driving seat. But is this set to slow as the big economies see pressures on the horizon? Will the rise in personal debt and the prospect of higher interest rates, as the age of cheap credit starts to come to a close, push down spending on main street and thereby curtail demand?

We've got a busy issue for you. In Public Vision I take a look at what two key players in the industry, Cognex and Basler, are saying. Both have recently announced results and both, as public companies, have to issue forward guidance so investors don't get caught out. The statements of both companies make interesting reading. In Vision Business, I take my hat off to Stemmer Imaging, which appears to have handled its trade sale in a very solid fashion. And happy retirement to Wilhelm Stemmer. And if you're a UK SME that thinks it does a bit of research and development, take a look at the article which focuses on how to claim R&D tax credits.
There is a fairly busy conference season over the autumn and winter, and the MVPro team will be at a number of the shows and, as always, happy to meet you and discuss business. And don't forget we now have a sister publication, RoboPro. It's an industry-wide platform just like MVPro, and its website has been live for some time now. Within the magazine we focus on precision and collaborative robots, as well as automated vehicles and next-generation automation. It's a lively marketplace and one where things move very quickly indeed. There's also a fair amount of crossover between it and machine vision, so if you step across both sectors, please let me know; I'm always interested in hearing your news. I look forward to meeting you over the coming months.

Neil

Neil Martin Editor, MVPro

Neil Martin Editor neil.martin@mvpromedia.eu Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB MVPro B2B digital platform and print magazine for the global machine vision industry RoboPro B2B digital platform and print magazine for the global robotics industry www.mvpromedia.eu



NEWS

BASLER STARTS ACE L CAMERA LINE PRODUCTION

Basler (Ahrensburg, Germany) has begun production of all camera models in its new ace L product line. The 12 new ace L models are equipped with the latest-generation high-resolution CMOS sensors from Sony's Pregius line: the IMX253, IMX255, IMX267 and IMX304. They offer resolutions of 9 and 12 megapixels and frame rates of up to 40 fps, with Pregius image quality, a dynamic range of more than 70 dB and high quantum efficiency across a broad spectrum of visible light into the NIR.

Thanks to these properties, says Basler, the cameras are also an excellent choice for highly demanding applications, such as high-resolution AOI systems for display inspection or enforcement systems in the ITS market. The ace L models have a compact design, with a footprint of 30 mm x 40 mm to accommodate the new large-format sensors, ensuring simple integration. All are available with the proven GigE or USB 3.0 interface and conform to the GigE Vision 2.0 or USB3 Vision standard. The GigE models excel with high-performance GigE Vision 2.0 features such as PTP, and the ace L color models also include the unique PGI feature set for in-camera image optimization. The twelve ace L models supplement Basler's product range with additional high resolutions in the C-mount portfolio, using a global shutter with a pixel size of 3.45 µm.
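The "more than 70 dB" figure follows the usual image-sensor convention of quoting dynamic range as 20·log10 of the ratio between the largest and smallest usable signal. As a rough sanity check (generic sensor math, not Basler's specification method, which the article doesn't detail), the conversion looks like this:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Sensor dynamic range in dB: 20 * log10(full-well capacity / noise floor)
    return 20 * math.log10(full_well_e / read_noise_e)

def db_to_ratio(db):
    # Contrast ratio implied by a dB figure; 70 dB works out to about 3162:1
    return 10 ** (db / 20)
```

On these conventions, a 70 dB sensor spans a roughly 3162:1 ratio between saturation and noise floor.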

MIDOPT LAUNCHES NEW NEUTRAL DENSITY FILTER SWATCH KIT

Midwest Optical Systems (Palatine, IL, US), a leader in machine vision filters, has launched the NS100 Neutral Density Filter Swatch Kit. The company says testing is now easier than ever with the kit, which includes all of the most popular ND filters and allows a user to stack multiple ND filters to achieve a custom optical density. Created as a handy tool for testing the effects of ND filters in the field or in a laboratory, it aims to solve applications quickly and improve image quality. Recognized in the industry as 'sunglasses for your system', neutral density filters are designed to reduce light intensity neutrally over a specific wavelength range without affecting image color or contrast. They also serve, says MidOpt, as a great solution for lens aperture control and reducing depth of field. Available in both absorptive and reflective styles, they can be used with monochrome or color cameras. The NS100 ND Filter Swatch Kit includes 43 mm filters and no mounting is required. Each kit includes: ND030 (OD 0.3), ND060 (OD 0.6), ND090 (OD 0.9), ND120 (OD 1.2), ND200 (OD 2.0), ND300 (OD 3.0) and ND400 (OD 4.0).
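Stacking works because optical densities simply add, while transmitted light falls off as 10 raised to the negative OD; that is why a handful of filters covers many custom densities. A short sketch of the arithmetic (generic optics math, not anything MidOpt-specific):

```python
def stacked_od(*ods):
    # Optical densities of stacked ND filters add
    return sum(ods)

def transmittance(od):
    # Fraction of light transmitted at a given optical density
    return 10 ** -od
```

For example, stacking the ND030 (OD 0.3) and ND060 (OD 0.6) gives OD 0.9, the same density as the ND090, passing roughly 12.6% of the incident light.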




»The Mitsubishi Electric LINE SCAN BAR solution offers high quality image acquisition in the smallest footprint ever!« Hans Gut VP Marketing & Sales, Hunkeler AG

Share our passion for vision. www.stemmer-imaging.de/CIS

THE PERFECT COMBINATION OF FAST DATA RATES AND LOW EFFORT

Authorized Distributor of LINE SCAN BAR


NEWS

FRAMOS AND PYXALIS EXTEND FRIENDSHIP INTO FORMAL AGREEMENT

FRAMOS (Munich, Germany) and PYXALIS (Moirans, France) are extending their collaboration by entering into a formal agreement. The pair have worked together for a number of years and wish to extend their relationship. FRAMOS is a leading global image sensor distributor and PYXALIS is an advanced custom image sensor supplier. FRAMOS said that the partnership provides it with fully customized, high-performance sensors, including support in elaborating sensor specifications, sensor architecture, design, prototyping, validation, industrialization and manufacturing using the best silicon imaging technologies worldwide, and that the technology partnership significantly enhances FRAMOS' imaging solutions portfolio, which includes Sony, ON Semiconductor and Teledyne e2v. FRAMOS explained that image sensor technology is a highly developed field in which a great number of manufacturers provide the market with

thousands of sensor varieties for most off-the-shelf applications. Yet for high-performance systems in the automotive, consumer, medical and security industries, a custom design is sometimes required, and this is where PYXALIS steps in. It has already created custom sensors in these industries and represents an interesting alternative to currently established sensors, with capabilities in time-of-flight, super-low-noise pixels, processor-based SoCs and very-high-speed or high-dynamic-range architectures. What's more, camera manufacturers aiming to create premium products with more added value strive for customization too. In addition, for consumer-grade products, customization is a must for companies who want to differentiate themselves from competitors with an exclusive sensor. President of FRAMOS Technologies Sebastien Dignard said: "In this day and age, as technology is rapidly

changing in the sensor space, we are happy to partner with PYXALIS to offer our customers more options to suit their sensor needs. Our business innovation and coverage make this agreement a great complement to the FRAMOS portfolio to support our customers along the complete imaging value chain, from customized sensors to complete systems." PYXALIS's President and CEO Philippe Rommeveaux added: "We're delighted to work with FRAMOS Technologies in Europe and North America. As a 7-year-old company supplying custom image sensors, we've built successful partnerships with customers in many applications, from niche markets (aerospace, scientific, defense) to medium volume (industrial, medical) and consumer markets (biometrics, automotive). Thanks to this cooperation with FRAMOS, it is now time to reach a larger market and to provide our capabilities and technologies to a greater number of customers."

WORLD'S FIRST MEGAPIXEL SHORTWAVE-INFRARED CAMERA WITH NO ITAR EXPORT RESTRICTIONS

Princeton Infrared Technologies (Monmouth Junction, NJ, US) has introduced the world's first and only megapixel (MP) shortwave-infrared (SWIR) camera with no ITAR export restrictions. The 1280SciCam is the newest camera in Princeton Infrared's family of SWIR imaging products to fall under the no-ITAR-restrictions umbrella. It has a 1280 x 1024 image sensor on a 12 µm pitch and features long exposure times, extremely low read noise, 14-bit digital output and full frame rates up to 95 Hz. The camera has been designed for advanced scientific and astronomy applications. It


detects light from the visible to the SWIR (0.4 to 1.7 microns) and is available with a variety of lens formats. President of Princeton Infrared Technologies Dr Martin Ettenberg said: "After an exhaustive Commodity Jurisdiction process, which occurred at the very same time as the new U.S. export reform rules went into place, we are thrilled to have our entire product line defined in the EAR. We are now ideally positioned to serve the scientific and astronomical communities, in addition to machine vision and spectroscopy, with our non-ITAR SWIR imaging products."

Sales Director at PIRT Bob Struthers adds: “Our 1280SciCam has already generated sales and applications with leading research entities overseas. An EAR export classification will propel our ability to serve these customers promptly and efficiently. This will be very valuable to their upcoming projects and equally beneficial to the growth of our young company.”



NEWS

MVTEC HALCON GOES DEEPER

MVTec is to release a new version of its standard software HALCON which aims to set new standards for the use of deep learning. Available at the end of the year, the new solution, the company said, offers a large selection of deep learning functions out of the box. What's more, it paves the way for the wide use of self-learning machine vision technology based on artificial intelligence, meaning users can achieve more robust classification results faster and more easily. For the first time, customers will be able to train convolutional neural networks (CNNs) based on deep learning algorithms themselves.

The trained networks can then be used to automatically classify image data according to the pre-defined classes. Highlighted by the company is the fact that the future-proof deep learning features are seamlessly integrated into a professional, established standard machine vision library.

Product Manager HALCON at MVTec Johannes Hiltner said: "With the new HALCON version, we are specifically addressing a current trend and strong market need for machine vision. By using their self-trained networks, customers save a great deal of effort, time, and money. For example, defect classes can be identified solely through reference images. Tedious programming for identifying different defect classes is therefore no longer necessary. In the industrial machine vision environment, deep learning is mainly used for classification tasks which appear in many application areas, e.g., in the inspection of industrial goods or the recognition of components."
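HALCON's deep learning API is proprietary and not shown in the article, but the workflow it describes (train a classifier on labelled reference images, then assign new images to pre-defined classes) can be illustrated with a deliberately trivial stand-in. The nearest-mean "classifier" below is NOT a CNN and not HALCON code; it only sketches the train-then-classify loop, with images represented as plain lists of pixel intensities:

```python
def mean(values):
    # Average of a list of numbers (here: pixel intensities of an image)
    return sum(values) / len(values)

def train(images_by_class):
    # Learn one feature per class: the average of each training image's
    # mean intensity. A CNN would instead learn millions of filter weights.
    return {cls: mean([mean(img) for img in imgs])
            for cls, imgs in images_by_class.items()}

def classify(model, image):
    # Assign the pre-defined class whose learned feature is closest
    return min(model, key=lambda cls: abs(model[cls] - mean(image)))
```

The structure mirrors what the article describes: labelled reference images go in, a trained model comes out, and new images are then sorted into the defect classes automatically.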

UAV TEAMS ACHIEVE SUCCESS WITH HELP FROM TELEDYNE DALSA CAMERAS




NEWS

MIKROTRON LAUNCHES ITS FIRST HIGH-SPEED MACHINE VISION CAMERA WITH FULLY INTEGRATED FIBER SOLUTION

Mikrotron (Unterschleissheim, Munich, Germany) has launched its first high-speed machine vision camera with a fully integrated fiber solution. Called the EoSens 3FIBER (pictured above), it's a fanless 3 megapixel camera capable of transmitting data over distances of up to 300 meters. It runs at up to 566 frames per second at a full resolution of 1,696 x 1,710 pixels, transmitted through the fiber interface; the frame can be reduced continuously, allowing frame rates of up to 225,000 fps at smaller ROIs. The compact and robust MTP/MPO connector ensures, says Mikrotron, that the camera does not disconnect even during fast and sudden movements. What's more, the thinness of each individual fiber allows all fibers to be bundled into one cable that transmits the entire data stream.

Strategic Marketing Manager at Mikrotron Max Scholz said: "Fiber transmitters have long been standard in the telecommunication and IT industries, where the technology has proved reliable. Costs for fiber solutions have also gone down significantly in recent years, and we believe it is the right time to finally introduce this communication technology to high-speed machine vision applications. Another great advantage of the inexpensive, thin high-speed fiber cables over existing copper cabling is that, since they solely transmit light, they are entirely immune to interference and can be applied even in critical environments where interfering impulses occur."

The camera's ultra-slim design (80 x 80 x 58 mm), with C-mount adapter, means it can be integrated easily into existing production lines, machinery or moving equipment.
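The jump from 566 fps at full resolution to 225,000 fps at small ROIs reflects a roughly constant interface bandwidth: fewer pixels per frame means more frames per second through the same pipe. A back-of-the-envelope data-rate calculation using the article's figures (the 8 bits per pixel is an assumption; the article doesn't state the bit depth):

```python
def datarate_gbps(width, height, fps, bits_per_pixel=8):
    # Raw video data rate in Gbit/s: pixels per frame x frames/s x bits/pixel
    return width * height * fps * bits_per_pixel / 1e9

# Full resolution, full speed: 1,696 x 1,710 pixels at 566 fps
full = datarate_gbps(1696, 1710, 566)   # about 13.1 Gbit/s at 8 bits/pixel
```

Holding that same ~13 Gbit/s constant, 225,000 fps would only fit if the ROI shrinks to roughly 7,300 pixels per frame (under the same 8-bit assumption), which is consistent with "smaller ROIs" in the article.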

BAUMER RELEASES CMOS CAMERAS WITH EXPOSURE TIMES OF JUST 1 µs

Baumer has released, for the first time in the mainstream digital industrial market, CMOS cameras with an exposure time of just 1 µs. The CX models include the second generation of Sony Pregius sensors, which feature exposure times ranging from 1 µs to 60 s.

mvpromedia.eu

Baumer says that the cameras, available with up to 12 megapixel resolution, are ideal for tasks at high light intensity such as laser welding, and they minimize blur in high-speed applications like pick and place. The CX cameras, with a 29 x 29 mm housing design, perform perfectly, says Baumer, in hot environments thanks to a high operating-temperature capability of up to 65°C. What's more, they feature 1000 fps with ROI (Region of Interest) and an excellent dynamic range of 71 dB.
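The appeal of a 1 µs exposure is motion blur: blur is simply the distance an object travels while the shutter is open. A quick illustration of that arithmetic (generic imaging math, not a Baumer specification; image-side blur additionally scales with the lens magnification):

```python
def motion_blur_um(speed_m_per_s, exposure_s):
    # Object-side blur in micrometres: distance travelled during the exposure
    return speed_m_per_s * exposure_s * 1e6

# A part moving at 2 m/s on a pick-and-place line:
fast = motion_blur_um(2, 1e-6)   # 1 us exposure -> ~2 um of blur
slow = motion_blur_um(2, 1e-3)   # 1 ms exposure -> ~2000 um (2 mm) of blur
```

At a typical pixel pitch of a few micrometres, the 1 µs exposure keeps blur near a single pixel, while a 1 ms exposure smears the part across hundreds of pixels.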



SMART VISION LIGHTS

The Best Need the Brightest Smart Vision Lights develops the BEST and the BRIGHTEST LED lights for machine vision. Smart Vision Lights was the first to offer OverDrive™ strobe, the first to offer Multi-Drive™ internal drivers capable of continuous or strobe operation, and the first to use silicone optics for ultra-high-power line lights producing in excess of 5 million lux. When you need a compact, rugged LED light in any configuration — spot lights, bar lights, front lights, ring lights, and structured lights — be sure to call the technology light leaders. Call Smart Vision Lights — The BEST and the BRIGHTEST.

tony@smartvisionlights.com | +44 1295 768080

smartvisionlights.com


NEWS

BASLER ADDS 20 TO ITS ACE CAMERA SERIES Basler (Ahrensburg, Germany) has added 20 new high-resolution cameras to its ace range. The new cameras feature IMX sensors from Sony’s Pregius and STARVIS lines, with the latest global and rolling shutter technology. The ace range is now the largest camera series in the industrial processing market, with 120 models. And for convenience, these have now been categorized into three product lines: ace classic, ace U and ace L. Of the 20 new models, 12 are equipped with the IMX253, IMX255, IMX267 and IMX304 sensors from Sony’s Pregius line, and form the Basler ace L product line.


The other eight new cameras come with the IMX178 and IMX226 sensors from Sony's STARVIS line, and join the Basler ace U product line. The 12 are especially well suited to applications in highly automated 3D inspection systems, or in traffic monitoring, for example in tolling, said Basler. They offer resolutions of 9 and 12 megapixels and frame rates of up to 40 fps. State-of-the-art global shutter technology ensures distortion-free images, even at high speeds. The eight new cameras are targeted at microscopy applications and less complex automation tasks in the electronics industry. They are equipped with the latest rolling shutter technology and feature high resolutions of 6 and 12 megapixels at up to 59 frames per second. The innovative BI (back-illuminated) sensor technology provides outstanding sensitivity at a small pixel size of 2.4 µm (IMX178) or 1.85 µm (IMX226). With these particularly light-sensitive sensors, said Basler, the eight new ace cameras offer excellent image quality even in low-light conditions. All new ace models are available with the GigE or USB 3.0 interface, and conform to the GigE Vision 2.0 or USB3 Vision standard. Basler's pylon Camera Software Suite also ensures fast and simple integration of the ace cameras. All 20 models are available as design-in samples.



NEWS

SICK LAUNCHES ITS FIRST INDUSTRIAL IMAGING CAMERA TO CAPTURE HIGH-RESOLUTION 3D DATA

DEEP LEARNING FOR COMPUTER VISION TAKES CENTRE STAGE

Deep learning for computer vision took centre stage in September when the Embedded Vision Alliance (Walnut Creek, California, US) offered a training course. The full-day course, based on Google's TensorFlow framework, took place at the Steigenberger Hotel in Hamburg on 7 September 2017. Experts say that deep neural network techniques are showing excellent results for a wide range of visual perception tasks, from face and object recognition to optical flow; even very difficult problems like lip reading are yielding to these algorithms. For these reasons, developers trying to solve challenging visual perception tasks will want to carefully consider deep neural network techniques.

SICK has launched its first robust, industrial imaging camera to capture high-resolution 3D data with a single snapshot, whether the object is stationary or moving. Called the SICK Visionary-T, the camera uses high-resolution Time-of-Flight (ToF) technology to achieve superior-quality 3D imaging for vision applications. SICK highlights that, unlike 3D vision systems based on laser triangulation, the 3D image is captured with one shot of light, without the need to profile a moving object. Whilst this technology has already been introduced for consumer applications, this new SICK camera is designed for 24/7 industrial use in rugged conditions. This means, says SICK, that the camera offers an affordable alternative to high-end 3D vision systems, so that manufacturers and machine builders can integrate 3D imaging into a wide range of vision applications. Examples include obstacle recognition for automated vehicle or robot navigation, intrusion detection, parcel quality checking and gesture recognition. Images of moving or stationary objects are captured within a range of up to seven metres.

What’s more, Google´s open source TensorFlow has rapidly become one of the most popular software frameworks for designing, training, evaluating and deploying deep neural networks.

SICK’s National Product Manager for Imaging, Measurement, Ranging and Systems Neil Sandhu said: “The Visionary T builds up a detailed and accurate real-time 3D image of fixed or moving objects with excellent results regardless of angle, surface finish, material or shape of object. The Snapshot technology means it is not necessary to design a system in which either the camera or the object must move across a laser line to create a triangulated image.

The training, which will be held in English, is targeted at engineers creating algorithms and software for visual machine perception in all types of applications (in the industrial, medical, consumer, retail, public safety or automotive area) who want to quickly come up to speed on using TensorFlow for these applications. It’s also useful for managers who want to get a flavour for creating deep neural networks and using TensorFlow.

"In a single shot, the Visionary-T combines different aspects of the light scattered by the object to build up a detailed picture of shape, distance, reflectivity and object depth. Our trials have shown that the single-shot method performs well, with less false imaging than can occur with some of the other commonly used methods, and leads to far more reliable results over a wide range of conditions."
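Pulsed time-of-flight imaging, the general principle behind cameras such as the Visionary-T, measures the round-trip travel time of emitted light; the distance is half the round trip multiplied by the speed of light. (This sketches the textbook principle only; SICK's actual implementation details aren't given in the article.)

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s):
    # Light travels out to the object and back, so halve the round trip
    return C * round_trip_s / 2

def round_trip_s(distance_m):
    # Round-trip time for a given object distance
    return 2 * distance_m / C
```

The timing involved is demanding: the camera's seven-metre maximum range corresponds to a round trip of only about 47 nanoseconds, which is why ToF imagers need very fast, per-pixel timing circuitry.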



NEWS

TATTILE CHOOSES FRAMOS

Tattile has chosen FRAMOS to distribute its products in Europe and North America. The new distribution agreement means that all Tattile hardware and software products are now available in the FRAMOS sales network in the two key territories. Tattile CEO Corrado Franchi said: "With FRAMOS we now cooperate with one of the strongest distribution partners in the machine vision industry. Both Tattile and FRAMOS are very dynamic companies with impressive success stories in recent years. "Following the FRAMOS approach 'From Sensors to Systems', the Tattile product line is a great match, enabling synergies for

customers to bundle Tattile cameras, software and/or vision controllers with third party lenses and lighting from the FRAMOS portfolio. This is why we are looking forward to a long and prosperous cooperation with this distribution partner. Our endeavour is to respond to the needs of our customers by offering the most suitable technology for each individual application.” FRAMOS’ Head of Sales in Europe Lorenzo Cassano said: “The new partnership is a next step in growing both FRAMOS and Tattile. The FRAMOS sales strategy matches the Tattile products line, providing state-of-the-art linescan technology, fully open machine vision controllers and the power of embedded imaging with the

S-Line Smart Cameras. This agreement is the natural evolution of a consolidated relationship between our two companies as we have always had similar values: to provide high quality products and services to our customers. With their innovative products, Tattile is now an important part of our imaging portfolio.” Tattile was created in 1988 and develops and produces sophisticated vision systems for different applications in the three divisions: industrial, traffic and railways. It offers ANPR solutions for ITS (Intelligent Transport Systems) applications. It also offers a totally renewed catalogue of smart cameras, line scan cameras, digital cameras and multi-camera vision controllers for high performing applications.

PIXELINK LAUNCHES INTERACTIVE, MULTI-CAMERA APPLICATION

PixeLINK (Rochester, New York) has released a real-time, interactive, multi-camera application which comes fully integrated with PixeLINK SDK R10.2. Called PixeLINK Capture, the software is compatible with all of PixeLINK's PL-B and PL-D lines of cameras. It has been developed, said PixeLINK, using the most advanced software development tools in the market to provide an unmatched multi-camera user experience. High-quality video is streamed by PixeLINK Capture in real time. It can be viewed in a multi-window environment which includes a preview window, a configuration window and a real-time graphical histogram on a monitor. This allows image size, colour and exposure to be adjusted interactively through an easy-to-use control interface prior to image or video clip capture.

At the time of the launch, PixeLINK said: "Users now have the ability to drag and drop or arrange windows as they like. As a multi-camera application with a built-in autofocus application, PixeLINK Capture offers tremendous flexibility and power, allowing vision engineers the ability to configure and test multi-camera vision applications.

"For advanced users, PixeLINK Capture offers options of more complex image enhancements for exposure control, filtering, frame-by-frame property changes, multi-camera application testing and configuration, all viewable in the preview window prior to capture."



NEWS

AT OPENS NEW PRODUCTION HALL

SICK TO SHOWCASE FLOW MEASUREMENT SENSOR AT DRINKTEC 2017

SICK, a leading producer of sensor solutions for industrial applications, will use drinktec 2017 to showcase a sensor created for automated production processes in the food and beverage industry.

Automation Technology, the vision sensors and systems developer, has opened its new production hall at its Bad Oldesloe base. The company has expanded its operations due, it said, to strong worldwide demand for thermography systems and 3D sensors. The new facility has increased AT's production area by more than 1,000 m². CTO of AT André Kasper said: "Due to the increasing order

volume over the past several years, this expansion was only a logical consequence. The new production hall allows us to adapt our manufacturing capabilities to the increasing market demand.” The roof of the new building features a solar system with a peak capacity of up to 60 kilowatts and, in combination with a new energy storage unit, enables a nearly self-sufficient power supply of the entire company.

HALCON NOW OFFERS STANDARD SUPPORT FOR ARM-BASED ARCHITECTURES

For the first time, MVTec's HALCON machine vision software now offers standard support for ARM-based architectures. The new HALCON version, 13.0.1, can be used on ARM-based platforms running the Linux operating system. This latest version, says MVTec, provides a straightforward way to use HALCON's powerful functions on the widely distributed ARM processing technology. The ability to easily integrate HALCON software into ARM-based computing platforms significantly expands the range of devices on which customers can deploy their machine vision applications. It also opens up new opportunities and use cases for solution providers. The new release also offers a number of improved features, including better grading of data codes and optimizations for developers using the Visual Studio extension.


Johannes Hiltner, Product Manager HALCON at MVTec said: “ARM is a widely distributed and popular architecture in the area of embedded systems. Now that HALCON is available to run on ARM-based platforms by default, customers can seamlessly use the accustomed robust functions of their standard machine vision software in an extremely diverse landscape of embedded systems.”

drinktec is the world's leading trade fair for the beverage and liquid food industry, taking place from September 11 to 15 at Messe München. The SICK DOSIC stainless steel sensor for flow measurement communicates not only at the controller level, but also at the higher data level. The additional interface to the data or software system enables new analyses and functions to be performed. These, said SICK, increase flexibility, quality, efficiency and transparency in production. The compact and rugged sensor detects the flow volume of conductive and non-conductive liquids based on non-contact ultrasonic technology. With its measurement channel and stainless-steel housing, the ultrasonic flowmeter is suitable for a wide range of applications, including measuring tasks in hygienic environments. The company said that installation is quick and easy, and does not require medium calibration.

Dr Olaf Munkelt, Managing Director of MVTec Software GmbH, added: “With HALCON 13.0.1, we address the needs of the rapidly growing embedded vision market. Driven by the emergence and growth of the Internet of Things, sensors are becoming smarter and smaller, even in the non-industrial environment. Many of these sensors and systems are using ARM processors. By addressing this trend in the new HALCON version, we are enabling customers to place their image processing algorithms on a vastly wider range of devices with little effort.”

mvpromedia.eu


Reports of Camera Link’s Demise Have Been Greatly Exaggerated

BitFlow's Camera Link family
BitFlow has been making Camera Link frame grabbers since the standard was released in 2000. Many other vision standards have come along since then – BitFlow even makes frame grabbers for some of these newer, faster standards – yet Camera Link has been, and will continue to be for some time, the workhorse of the machine vision industry. Camera Link is low cost, reliable and predictable, and provides high enough data rates and long enough cables for many applications. BitFlow's offerings have continued to evolve as well, the latest being the very sophisticated yet remarkably affordable Axion-CL family, which can acquire from any CL camera made – in fact, from two of them at the same time.

BitFlow CL frame grabber features:
• Half-size PCIe cards
• All connectors are PoCL
• Supports CL clock up to 85 MHz
• Support on Windows and Linux
• DMA direct to GPU support
• Industry-leading StreamSync DMA engine
• Virtual Frame Grabber for support of multiple cameras on the same hardware
• Triggers and encoders for every camera, or all cameras synchronized from one source

Camera Link models:
• Neon-CLB – one Base camera
• Neon-CLD – two Base cameras
• Neon-CLQ – four Base cameras
• Axion-1xE – one Base/Medium/Full/80-bit camera
• Axion-2xE – two Base/Medium/Full/80-bit cameras

Frame grabbers | Machine vision software support | Application development software

400 West Cummings Park, Suite 5050, Woburn, MA 01801 USA tel +1-781-932-2900 www.bitflow.com


NEWS

TERARECON ACQUIRES McCOY MEDICAL TECHNOLOGIES
Advanced visualization and enterprise medical image viewing solutions company TeraRecon has acquired McCoy Medical Technologies and spun out a new AI platform company. The new company, called WIA Corporation, aims to simplify access to, and use of, third-party computer vision and artificial intelligence applications. WIA focuses on integrations that connect the work of individual end users, machine learning researchers, open source organizations and diagnostic imaging companies. Its products include a developer platform and a vendor-neutral API for integration partners. This is designed to streamline the distribution and hospital implementation of evidence-based practices and trained machine learning algorithms. As part of the completed deal, WIA retains the McCoy Medical advisory board, including three world-leading imaging informatics experts and serial entrepreneurs: Dr Eliot Siegel, Dr Paul Chang and Dr Khan Siddiqui.

Dr Siegel said: "This decade has seen a proliferation of extremely impressive applications leveraging machine learning, especially for computer vision. Today, there is a real need for simpler, standards-based channels to socialize, access and apply these technologies. The TeraRecon and McCoy venture holds great potential to be among the first to develop and commercialize their offerings in the form of a truly open platform community. This kind of approach is exactly what is needed for the amazing innovations in AI to achieve widespread utilization."

TeraRecon President and CEO Jeff Sorenson said: "The new company's platform is open to everyone, from individual physician-inventors, to research institutions, and the world's largest PACS vendors alike. Together, this new company becomes a catalyst to join the various AI communities together." He continued: "Our goal is to incubate and accelerate a new kind of AI platform that allows a proven algorithm to be productized in 20 minutes."

McCoy CEO Misha Herscu said: "This transaction results in a company with a unique combination of technology, healthcare-specific expertise and commercial reach. We look forward to meeting with potential collaborators and partners at SIIM17 and introducing these new possibilities."

NEW UV CAMERA FROM MATRIX VISION
Matrix Vision (Oppenweiler, Germany) has introduced a new UV camera with a GigE interface. The mvBlueCOUGAR-X104b UV comes equipped with a special version of the established CMV4000 image sensor from CMOSIS: the micro-lenses have been removed and a sensor cover glass of special UV-permeable quartz is used. This makes the camera ideal for laser technology, semiconductor inspection and food product quality inspection applications. The default configuration is offered with a standard GigE interface. In common with all of the company's products, the camera also serves as a platform from which a customised variant can be created to meet the specific needs of camera users. In terms of specification, the mvBlueCOUGAR-X104b UV image sensor can run at 18.5 fps and offers a 2048 x 2048 format of 5.5 µm square pixels. The new UV camera, as with other mvBlueCOUGARs, is fully compliant with the GenICam and GigE Vision standards. Drivers are available for Windows and Linux. What's more, the camera supports all third-party image processing libraries that comply with the GigE Vision standard.
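As a back-of-the-envelope check (our arithmetic, not a Matrix Vision figure), the quoted resolution and frame rate sit comfortably within a Gigabit Ethernet link, assuming 8-bit monochrome pixels and ignoring protocol overhead:

```python
# Rough GigE bandwidth check for a 2048 x 2048 sensor running at 18.5 fps
# (editorial estimate; assumes 8-bit pixels, ignores GVSP/UDP overhead).

width, height = 2048, 2048   # pixels
fps = 18.5                   # frames per second
bytes_per_pixel = 1          # 8-bit mono

payload = width * height * fps * bytes_per_pixel   # bytes per second
gige_limit = 125_000_000                           # 1 Gbit/s = 125 MB/s raw

print(f"Required: {payload / 1e6:.1f} MB/s of ~{gige_limit / 1e6:.0f} MB/s")
```

At roughly 77.6 MB/s, the sensor leaves headroom below the nominal GigE link rate, which is consistent with the standard GigE configuration being the default.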


WENGLOR ACQUIRES SHAPEDRIVE
Wenglor sensoric (Munich, Germany) has acquired 3D specialist ShapeDrive. Also based in Munich, ShapeDrive is a leading manufacturer of components and systems in the field of 3D coordinate measuring technology for industrial, medical and scientific applications. Wenglor did the deal to expand its innovative portfolio for 3D measuring technology and offer its worldwide customer base new solutions for automated manufacturing and quality assurance. The backdrop is advancing automation and digitalization in the age of Industry 4.0, in which three-dimensional image processing systems are expanding the opportunities provided by process automation and quality assurance many times over. Wenglor managing director Rafael Baur said: "We're very pleased to once again extend our expertise in 3D measuring technology with ShapeDrive, and to thus broaden our options for the future. A mutual fascination for imaging technologies unites us and results in ideal prospects for combining the best of both worlds into something new and innovative. We see tremendous potential in this constellation!" The ShapeDrive product line will be integrated into Wenglor's portfolio and further developed at the WenglorMEL facility in Eching near Munich. The ShapeDrive research and development department will also move to Eching. ShapeDrive managing director Dr Matthias Rottenkolber said: "We can hardly wait to start working together with Wenglor. The company has an outstanding, internationally aligned sales network and well developed production capacity. In the years to come, the trend towards Industry 4.0 will change the industrial landscape to a great extent – in particular due to increasing degrees of automation, big data and advancing levels of networking in production. Together, we will actively shape this transformation."

MORE GENIE NANO CAMERAS FROM TELEDYNE DALSA
Teledyne DALSA has expanded its high-value Genie Nano camera series. The ten new Genie Nano-CL models set out to offer system integrators reliable high-resolution inspection for existing Camera Link vision systems. They are built around top-performing CMOS image sensors from Sony and ON Semiconductor. The first monochrome and colour Camera Link models feature ON Semiconductor's 16 and 25 megapixel Python image sensors; more will be introduced shortly. The new Nano models are engineered for industrial imaging applications that require high-speed data transfer. Teledyne DALSA said the new Nano models are an easy replacement for cameras built into current vision systems that rely on the AIA's Camera Link interface standard.

The new colour and monochrome models are offered in a compact form and are available in multiple resolutions with fast frame rates. With a broad feature set, wide operating temperature range and GenICam GenCP 1.1 compliance, the new Genie Nano-CL models will, said Teledyne DALSA, extend the life of existing systems with improved overall functionality and performance.



Machine vision: key technology for automated production. Experience how robots react flexibly to their environment. Meet industry visionaries and innovators, discuss important topics such as embedded vision, and discover the path that non-industrial machine vision is taking. At VISION, the world’s leading trade fair for machine vision.

06 – 08 November 2018 Messe Stuttgart, Germany www.vision-fair.de


DEEP LEARNING TAKES COMPUTER VISION TO THE NEXT LEVEL: AN INTERVIEW
MVPro interviews Jeff Bier, Founder of the Embedded Vision Alliance, about the possibilities of Deep Learning, the influence of this technology on computer vision, and the first Deep Learning training event in Germany, based on Google's open source framework, TensorFlow

MVPro: Deep Learning seems to be the latest magic word in the computer vision industry. How would you describe this technology in short?

Jeff Bier: Classical visual perception algorithms are hand-crafted by engineers for very specific tasks. For example, to identify certain types of objects, algorithm designers typically specify small features like edges or corners for the algorithm to detect. Then the algorithm designer specifies how groups of these small features may be used to identify larger features, and so on. Such approaches can work very well when the objects of interest are uniform and the imaging conditions are favorable, for example when inspecting bottles on an assembly line to ensure the correct labels are properly affixed. But these approaches often struggle when conditions are more challenging, such as when the objects of interest are deformable, when there can be significant variation in appearance from individual to individual, and when illumination is poor. With major recent improvements in processors and sensors, a case can be made that good algorithms are now the bottleneck in creating effective "machines that see."

Deep neural networks are a very different approach to visual perception – and not only to visual perception, as they are used in many other fields as well. In essence, instead of "telling" our machines how to recognize objects ("first look for edges, then look for edges that might make circles", etc.), with artificial neural networks it is possible to "train" algorithms by showing them large numbers of examples and using a feedback procedure that automatically adapts the functionality of the algorithm based on the examples. More specifically, convolutional neural networks are massively parallel algorithms made up of layers of simple computation nodes, or "neurons". Such networks do not execute programs. Instead, their behavior is governed by their structure (what is connected to what), the choice of simple computations that each node performs, and coefficients or "weights" determined via a training procedure. So rather than trying to distinguish dogs from cats via a recipe-like series of steps, for example, a convolutional neural network is taught how to categorize images by being shown a large set of example images.

Three things make this approach very exciting right now:
1) For many visual perception tasks, deep neural networks are outperforming the accuracy of the best previously known techniques by significant margins.
2) The rate of improvement in accuracy of deep neural network algorithms for visual perception tasks is significantly faster than the rate of improvement we had previously seen with classical techniques.
3) With deep neural networks, we are able to use a common set of techniques to solve a wide range of visual perception problems. This is a big breakthrough compared to classical techniques, where very different types of algorithms are typically used for different tasks.

MVPro: How can computer vision developers benefit from this technology?

Jeff Bier: Deep neural network techniques are showing excellent results for a wide range of visual perception tasks, from face and object recognition to optical flow. Even very difficult problems like lip reading are yielding to these algorithms. So, developers who are trying to solve challenging visual perception tasks will want to carefully consider deep neural network techniques.

MVPro: What are the applications or systems where the use of Deep Learning technologies opens up new markets for computer vision?
Jeff Bier: Previously, computer vision has mainly been successful in applications such as the inspection of manufactured items, where imaging conditions can be controlled and pass/fail criteria can easily be quantified. But there are numerous opportunities for machine perception where imaging conditions can't be controlled, and where there is big variation in the objects of interest. Deep neural network techniques are particularly helpful in these cases. For example, it's quite simple for a human to distinguish a strawberry from other kinds of fruit, but not so simple for an algorithm, considering the variations in size and shape of strawberries, which can be exacerbated by variations in camera angle, lighting, surrounding objects, etc. Similarly, for an automotive safety system, detecting pedestrians is very challenging because people come in different sizes, wear different clothing, have infinitely variable poses, etc.

MVPro: Google's open source framework TensorFlow is based on Deep Learning. According to the latest survey of the Embedded Vision Alliance, it is currently the most popular deep learning framework for computer vision, having left behind Caffe, OpenCV and others in popularity. What do you think are the reasons for this success?

Jeff Bier: I think that one reason for TensorFlow's popularity is that Google is a leading technology company, and Google uses TensorFlow extensively itself. Engineers in other companies are eager to use the same technology that one of the industry leaders is using. The fact that TensorFlow is open source is also a big factor – there's no cost to use it. In addition, TensorFlow is the first deep learning framework to emphasize efficient deployment of deep neural networks not only in data centers, but also in embedded and mobile devices.

MVPro: The Embedded Vision Alliance is offering the first TensorFlow training event in Germany, in Hamburg on the 7th of September 2017. Who should attend this training and what is on the schedule?

Jeff Bier: This training is ideal for engineers working on all types of vision applications – creating algorithms and software for visual machine perception in, for example, the industrial, medical, consumer, retail, public safety or automotive area – who want to quickly come up to speed on using TensorFlow for these applications. It's also appropriate for managers who want to get a flavor for deep neural networks and TensorFlow. More generally, the training will be applicable to people working on all forms of "machines that see", whether they are implementing visual perception in the cloud, in a PC, on mobile devices or in an embedded system.
The course will provide a hands-on introduction to the TensorFlow framework, with particular emphasis on using TensorFlow to create, train, evaluate and deploy deep neural networks for visual perception tasks. For more details about the agenda, I recommend visiting https://tensorflow.embedded-vision.com.

MVPro: Who will be the trainer at the Hamburg event?

Jeff Bier: The training will be presented by Douglas Perry, who is uniquely qualified for this role. He has presented dozens of professional training classes to engineers in the electronics industry over the past five years, and he has hands-on experience with TensorFlow deep neural networks. In preparing the training content and hands-on exercises, Douglas is assisted by my colleagues at BDTI, who contributed to the creation of an earlier deep learning training class that was very well received by attendees.

MVPro: How will attendees benefit from the Hamburg event?

Jeff Bier: Attendees will benefit from accelerated learning of practical techniques using TensorFlow for visual perception applications. After the training, attendees will be ready to begin using TensorFlow productively in their work.

MVPro: How can people interested in attending register for the training?

Jeff Bier: We have prepared a web page with all the information about the Hamburg and other training events at https://tensorflow.embedded-vision.com.

MVPro: Are attendees expected to already have an understanding of deep neural networks before attending the training?

Jeff Bier: Attendees will get the most out of the training if they are familiar with the basic concepts and terminology of deep neural networks. For attendees who require an introduction to deep neural network algorithms, the Embedded Vision Alliance will make available a two-hour video tutorial presentation online prior to the TensorFlow class at no additional cost.

MVPro: In 2011 you founded the Embedded Vision Alliance. What are the main tasks of that organization and why is it so actively driving Deep Learning technologies and the TensorFlow framework?

Jeff Bier: The Embedded Vision Alliance exists to facilitate the practical use of vision technology in all kinds of applications. We do this primarily through providing training and other educational resources for engineers and companies who are incorporating, or want to incorporate, visual perception into their devices, systems and applications. The Alliance also helps technology supplier companies – for example, suppliers of processors and sensors – to get the information and insights they need in order to succeed in vision markets.
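The "feedback procedure" Bier describes can be illustrated without any framework at all. The sketch below (an editorial toy example, not course material) trains a single artificial neuron by gradient descent to reproduce a rule purely from labelled examples – the same principle that, scaled up to millions of weights, underlies the convolutional networks discussed above:

```python
import math

# Toy "training by examples": a single sigmoid neuron learns the AND rule
# purely from labelled samples, never from hand-written logic.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adapted by the feedback procedure
b = 0.0          # bias
lr = 0.5         # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid "neuron"

for _ in range(2000):                   # repeated feedback over the examples
    for x, target in examples:
        error = predict(x) - target     # compare the output to the example
        w[0] -= lr * error * x[0]       # adapt the weights...
        w[1] -= lr * error * x[1]
        b -= lr * error                 # ...and the bias

print([round(predict(x)) for x, _ in examples])
```

A framework such as TensorFlow automates exactly this loop – computing errors and weight updates – for networks with millions of such neurons.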

Jeff Bier is founder of the Embedded Vision Alliance, an industry partnership formed to enable the market for embedded vision technology by inspiring and empowering design engineers to create more capable and responsive products through the integration of vision capabilities.

Jeff is also co-founder and president of Berkeley Design Technology, Inc. (BDTI), a trusted resource for independent analysis and specialized engineering services in the realm of embedded digital signal processing technology. Jeff oversees BDTI's benchmarking and analysis of chips, tools, and other technology. He is also a key contributor to BDTI's consulting services, which focus on product development, marketing, and strategic advice for companies using and developing embedded digital signal processing technologies.



H.265 HEVC High Efficiency Video Coding

Get the most out of your video with HEVC

Picolo.net HD1
HEVC (H.265) 1080p60 IP video encoder

AT A GLANCE
• High-quality HEVC (H.265) / AVC (H.264) encoder, up to 6 encoded streams
• Video streaming from one full HD (up to 1080p60/1080i60) HDMI or SDI source
• ONVIF Profile S and T interface
• Video encryption
• Hi-Fi AAC or uncompressed audio
• USB edge storage / USB GPS support
• Serial connection for PTZ cameras
• PoE+ Power over Ethernet
• Fanless aluminum housing

LEARN MORE

www.euresys.com - sales@euresys.com


IMAGE SENSORS ARE KEY GROWTH SEGMENT FOR MACHINE VISION MARKET
In a quick look at sensors, Editor Neil Martin hears about the sector's future prospects and looks at SICK, one of the leading players

Sensors are an integral part of the machine vision industry and according to a recent research report, they are a key driver of future growth.

New Report
Recently Dan Rogers, Head of Publishing at Smithers Apex, told MVPro about the company's new report on the prospects of image sensors for machine vision. Machine vision is one of the strongest innovation drivers of the 21st century, and Smithers Apex's new report – The Future of Image Sensors for Machine Vision to 2022 – states that the key growth segment for this market will be image sensors. Industry experts agree that the market growth of image sensors is driven by performance, technologies and applications rather than by price, since the image sensor itself is often not the cost driver of machine vision systems. The report finds that the volume of image sensors used in machine vision applications shows a strong compound annual growth rate (CAGR) of 12.3 per cent, while the CAGR of the value is 6.9 per cent. The difference derives from decreasing average unit prices. At the end of the forecast period, the price drop results in a small decrease in market size that cannot be offset by the growth in volume. The trend in image sensor prices is driven by the shift away from CCD sensors to CMOS imagers. In 2014 and shortly after, both technologies coexisted in the machine vision market: CMOS with a focus on low-cost rolling-shutter devices; CCD with a focus on higher-priced low-end variants, alongside costly high-end versions. The trend is clearly towards CMOS global-shutter sensors, some of which are now
higher priced than comparable CCDs. Yet in the future they will most likely drive low-end CCDs from the market, thanks to their better performance and the potential for even lower pricing. Traffic and transportation is maintaining its market share, as numerous highways, parking lots, toll gates, red lights, speed control points, railway inspection vehicles and other units in the field will be equipped with cameras for automatic monitoring, access control, quality inspection, number plate recognition and so on. The electronics and automotive manufacturing industries already operate at a high degree of automation, so growth rates there are not expected to keep up with other areas, especially as higher-resolution sensors are set to replace multiple low-resolution sensors in new electronics manufacturing lines. Overall, The Future of Image Sensors for Machine Vision to 2022 provides a detailed capture of the status quo and a solid data-driven forecast of market volumes (over 12% CAGR) and values (close to 7% CAGR). The victory march of CMOS global-shutter sensors has already begun. Multiple factors will influence how manufacturing automation develops, and which innovations will leverage machine vision in this area. As the field of manufacturing is considered well served, many machine vision OEMs are striving to find fast-growing applications beyond it. They will likely be successful in sectors like medical and life sciences, entertainment and sports, as well as in airborne and ground-based unmanned vehicles. Price pressure and declining production costs will challenge image sensor OEMs of machine vision cameras in the near future.
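Taking the report's two headline CAGRs at face value, the implied trend in average unit price can be derived with a line of arithmetic (our calculation, not a figure from the report): price is value divided by volume, so its annual growth factor is the ratio of the two growth factors:

```python
# Implied average unit-price CAGR from the report's volume and value CAGRs
# (editorial arithmetic based on the figures quoted above).

volume_cagr = 0.123   # 12.3% per year (units shipped)
value_cagr = 0.069    # 6.9% per year (market value)

# price = value / volume, so price growth compounds as the ratio:
price_cagr = (1 + value_cagr) / (1 + volume_cagr) - 1

print(f"Implied unit-price CAGR: {price_cagr:.1%}")
```

This works out to roughly a 4.8 per cent fall in average unit price per year, consistent with the report's observation that falling prices partly offset volume growth.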



SICK – a major player in the sensor sector
Based in Waldkirch, Germany, SICK is one of the leading sensor manufacturers. The group considers itself a technology and market leader. It provides sensors and application solutions that create the basis for controlling processes securely and efficiently, protecting individuals from accidents, and preventing damage to the environment. Since its creation in 1946, it has expanded to become a global player, with 50 subsidiaries and equity investments as well as numerous agencies around the globe, and over 8,000 employees.

Financial figures
At the time of its last financial statement, for the fiscal year 2016, the company said that it remained on the path to success, and that the record figures achieved in its core business reflected the consistent growth of its innovation leadership in the field of sensor intelligence. For 2016, SICK generated sales of €1.4bn, with earnings before interest and tax (EBIT) of €148m. In the company's official financial statement, it said: "In the last fiscal year, the SICK Group made outstanding progress. After a cautious start,
both sales and orders received grew consistently during the course of the year. Sales increased by 7.4 percent to EUR 1,361.2 million. Orders received also achieved record increases, growing by 10.1 percent to EUR 1,398.9 million.
" 'Despite the challenging economic conditions, political uncertainties, and negative currency effects, we were able to achieve above-average sales growth,' explained Markus Vatter, member of the Executive Board in charge of Finance, Controlling & IT at SICK AG, when the balance sheet ratios were published. 'Our presence across the globe has contributed significantly to this outcome: We have achieved growth in all sales regions. Gains in sales were made primarily in the factory and logistics automation business fields.'
"The continued high demand for increased productivity in factory and logistics processes as well as in process flows was palpable all over the world. In Europe, America, and Asia, customers showed considerable interest in sensor solutions for automation in the factory environment. In the field of logistics automation in these regions, demand for automated systems in parcel-based services was also high. In the process automation business field, however, the difficult market environment prevailing in the steel, cement, mining, oil, and gas industries had a noticeable effect. Sales in this business field remained below expectations."



Markus Vatter, member of the Executive Board in charge of Finance, Controlling & IT at SICK AG

Key financial results, SICK Group (IFRS), in EUR million:

                          2015       2016    Change
Orders received         1,270.5    1,398.9   +10.1%
Sales                   1,267.6    1,361.2    +7.4%
EBIT                      129.1      147.9   +14.6%
Net income                 90.8      104.0   +14.5%
R&D expenditure           129.0      143.4   +11.2%
Employees on Dec 31       7,417      8,044    +8.5%

Q&A session
I asked SICK a number of questions; the replies are printed below:

Q: Where would SICK position itself within the sensor market in terms of sales, number of customers etc?

A: SICK sees itself as one of the world's leading producers of sensors and sensor solutions for industrial applications. In the fiscal year 2016, SICK had more than 8,000 employees worldwide and achieved Group sales of just under EUR 1.4 billion.

Q: Who would you consider your main competitors?

A: Due to our wide portfolio of 40,000 sensors covering areas from factory automation to logistics and process automation, we see no competitor with such a wide range of sensor solutions. But in specific application areas there are, of course, many competitors focusing on a very narrow area.

Q: How do you see the machine vision sector developing over the coming years? For example, continuing to expand, or being slowly absorbed into other industries such as robotics?

A: Due to the increasing need for automation in the context of Industry 4.0, camera technology will be a very important data generator for the transparency of production. Camera technology will expand into new application fields where cameras have not been used before. The established camera application fields – e.g. quality inspection – will also grow due to the need for precise data to improve production processes.

Q: What are the biggest trends within the machine vision market?
A: The main trend we see is that there will be more intelligence and more application software within the sensor. There will be special functionalities which will be handled by the sensor itself. The camera no longer just takes a picture; it also provides information that creates added value within the application and leads to greater customer satisfaction.

Q: What are the prospects for the sensors market?

A: Sensor intelligence has already become successfully established within the field of automation technology and is now a key part of Industry 4.0. The possibility of using a multitude
of data to produce and supply goods in a more efficient and flexible way, while also saving resources and achieving better quality, ultimately depends on the reliability of the data which forms the input of many process chains. This represents the fundamental starting point for complex systems to be able to make autonomous decisions. To put it in a nutshell: transparent data evaluation would be completely impossible without sensor technology. In the future, SICK will be able to use intelligent sensor technology within the context of Industry 4.0 not only to ensure the safety of personnel, but to meet ever-increasing production requirements as well. Aspects such as transparency and traceability are also playing an increasingly essential role for manufacturers. Vertical integration is the keyword for track-and-trace: the traceability of products during complex manufacturing and logistics processes stands at the forefront of this field.

Q: What are the biggest trends within the sensors market?

A: We see automation shifting into a higher gear, and Industry 4.0 will have the biggest impact on all businesses in the future. The increasing trend towards e-commerce, centred around logistics automation, will be a potential area of growth too. One of the main trends is so-called "batch size 1", which requires format changeovers and frequent modification of machine parameters. In order to minimize errors during changeover and to parametrize the machine optimally, you have to automate your processes. Another trend we see is the protection of resources. Customers want to reduce their material use, protect the environment and produce less waste. With sensors from SICK, for example, labels can be positioned accurately without a print mark, reducing material consumption because no print marks have to be cut off.

All in all, there is a general trend towards faster processes with more flexibility and quick set-up, plus avoiding errors during format change. With our "Inside the Industries" approach, SICK delivers a customer- and market-orientated product, systems and service portfolio which enables us to offer added-value sensor solutions to our customers and to strengthen the business areas in all growth regions.

Q: What is the ownership structure of SICK? Who are the major shareholders?

A: 95 percent of the shares are held by the SICK family; 5 percent are held by employees and friends of the SICK family.

Q: As a private company, has it considered going public?

A: Due to the high equity ratio and the good cash situation, we do not see any need for such a step. Therefore an initial public offering (IPO) will not take place.

Q: 'With more than 50 subsidiaries and equity investments as well as many agencies' – what are the main equity investments and what is their purpose?

A: Competence and production centers are located all over the world. The sales function is generally performed by the Group's own sales and service companies in all key industrial countries. The product-generating entities are controlled from the German locations. Products for regional markets are developed and manufactured at the Savage / Eagle Creek and Stoughton locations in the USA; there are also production facilities in Singapore and Johor Bahru (Malaysia). At the same time, these locations also have competence and application organizations for the respective region. This results in the following breakdown of the operating business: A total of four sales

regions, namely Germany; Europe, Middle East and Africa (EMEA); Asia-Pacific; and North, Central and South America (Americas), form the structure in which the Group operates. The largest manufacturing and development location is the Group's headquarters in Waldkirch near Freiburg. It is from this head office that SICK AG carries out the tasks of group management.

(Pictured: SICK's KTS/KTX family of contrast sensors)

Q: Many deals are being done in the machine vision sector – do you anticipate buying companies over the short to medium term?

A: Please understand that we do not talk about possible strategic investments in public.

Q: Would further consolidation within the machine vision sector be a problem?

A: From our point of view, it wouldn't be desirable for users and for customers if there was only one provider.

Q: When is the company's next financial statement?

A: SICK isn't listed on a stock exchange, so we are not legally obliged to publish our financial figures regularly. But we publish our annual figures once a year, usually at the end of April.

Q: Looking forward, how would you describe the company's prospects?

A: The pressure towards rationalization of production, logistics and process flows worldwide remains high. The discussion about Industry 4.0 adds a new facet to this subject, which offers great possibilities for SICK. The intelligent factory can be implemented only if robust and intelligent sensor technology detects the data required for Industry 4.0 concepts. The future of SICK as an innovation and technology leader, and as a multilayered and global organization, will be significantly influenced by the success of Industry 4.0 and IoT. In all respects, mutability is crucial – towards new solutions, new markets and new business models. SICK has solid roots, the suitable technologies, the right company culture and the requisite willingness to change in order to shape the change successfully.

Q: Which current product would you like to showcase?
A: SICK AppSpace represents a major step toward the digital future. The SICK AppSpace eco-system offers system integrators and original equipment manufacturers (OEMs) the freedom and flexibility to develop application software for their specific tasks directly on the programmable SICK sensors. This makes it possible to create customized solutions to meet customers’ individual requirements. SICK AppSpace combines software and hardware and consists of two elements: the programmable SICK sensors and SICK AppStudio, a development system for applications. The flexible architecture and the programmable devices allow data to be generated for cloud services as part of Industry 4.0. The software is installed on the sensor and can transmit information directly. This provides users with the best possible support for quality control, traceability and predictive maintenance.

Q: Which future product would you like to showcase?

A: Networked production and control processes in complex machine environments determine the industrial future and make Industry 4.0 possible in the first place. SICK is therefore continuously expanding its product portfolio. In order to exploit the potential of Industry 4.0, it is essential that SICK’s products are compliant with as many automation systems as possible and that they have the ability to communicate with higher cloud levels. Consequently, two of SICK’s current focal areas of development are connectivity and data sovereignty.

Q: How important is R&D to the company; are you seen as an innovator?

A: In view of the considerable competitive pressure, continuous investment in research and development (R&D) is needed in order to secure and strengthen our leading market position. The innovation process at SICK therefore has one objective: we want to offer solutions in the form of intelligent serial products, systems, or services that help our customers master problems and thus contribute to raising productivity, increasing flexibility, or saving resources. To meet this high demand, the SICK Group expanded the area of R&D once again in the fiscal year 2016 and invested EUR 143.4 million, equivalent to 10.5 percent of sales. Thanks to these intensive R&D activities, we have a highly diversified product portfolio that meets the requirements of completely different industries and serves markets ranging from those that respond quickly to cyclical fluctuations to those that are slower to respond.

Q: What events/conferences do you rate as the best?
A: Given the political dimension of the topic of Industry 4.0, we consider it important to present our thinking at the Hannover trade fair. This show provides an excellent opportunity to present and discuss industry trends, as it brings together decision-makers from industry, politics and the customer side, as well as representatives of associations and universities. Equally important is the annual industry meeting SPS IPC Drives, which is a fixed date in our calendar. Every year, exciting trends and innovations are presented there. In particular, digitization and networking in production processes have emerged as a major theme in recent years.



Designed to outshine

You won’t miss a thing with our smart, speedy, sensitive new camera modules. The exceptional resolution and sensitivity of our latest digital camera modules make sure you’ll see finer details while spending less on ambient lighting. There’s a 2/3-type 5MP Global Shutter CMOS sensor that eliminates focal plane distortion, capturing accurate images on fast-moving inspection lines. High speed shooting at a blazing 150fps shortens takt time. And powerful processing boosts dynamic range while ensuring accurate exposure of selected areas of interest. So now you’ll make light work of the toughest conditions. image-sensing-solutions.eu

Digital Interface Camera Link

‘Sony’ is a registered trademark of Sony Corporation.


SPONSORED

WILSON PEDALS TO NEW DISTRIBUTOR DEAL WITH HIKVISION From epic bike ride to new distributorship – Scorpion Vision’s boss Paul Wilson has had a busy summer

The fact that Scorpion Vision managing director Paul Wilson had just arranged a sole distributorship with Hikvision would no doubt have kept him in high spirits as he completed a charity bike ride, which saw him and his fellow riders pedal from Basingstoke to Paris. The group covered 255 miles in four days in what he described as a “ride of a lifetime.” Wilson raised £2,500 for the Ark Centre Charity and says he was inspired by Gary Livingstone, managing director of LG Motion, who was also part of the group.

Congratulations to both of them for a good effort, and here’s a quote from Wilson that I don’t think we’ll ever hear from Chris Froome: “So a great result, but I had a very, very sore bum when we got to the Eiffel Tower. I don’t mind saying.”

Once dismounted, Wilson got straight to the serious work of announcing his deal, one that had taken some time to set up.

Scorpion Vision has become the sole distributor in the UK and Ireland of the Hikvision machine vision product range. The new partnership agreement sees Scorpion Vision distribute Hikvision’s full range of products including industrial cameras, vision controllers, lenses, algorithm software platforms and industrial smart cameras, as well as solutions. The Hikvision range is promoted as offering fast and accurate identification, measurement, detection, positioning and code reading, which are widely used in such fields as 3C, metal processing, industrial automation and logistics. The range consists of:

 GigE and USB3 area scan cameras
 Linescan cameras
 Very high resolution 29 MP cameras
 High quality FA lenses
 Telecentric lenses
 Solid state industrial PCs for control systems

Hikvision is based in China and is committed to being a global leader in providing machine vision products and solutions.

Wilson said: “We are delighted to be part of Hikvision’s global expansion here in the UK and Ireland. Their range of products is available via our comprehensive eCommerce site, offering our customers quick and easy access to high quality hardware and software solutions for a wide variety of machine vision applications.”

Wilson added: “What is exciting about Hikvision is that they are obviously very serious about taking market share from the current incumbents. They are making progress in this area by building strategic relationships, and this year started work on their European distributor channel.

Email: sales@scorpionvision.co.uk Tel: +44 (0)1590 679333 www.scorpionvision.co.uk




“They obviously have very competitive pricing that, until now, the smaller competitors of the big industrial camera manufacturers have not been able to achieve. This fact, and the knowledge that these cameras are of a very high quality, with very good GenICam-compatible software drivers and SDKs bundled with them, means they are a very convincing alternative from a company that is here to stay.

“The GenICam compatibility means that, in theory, the cameras can be dropped in as a replacement without any adjustment to the host software.”

Wilson explained that Hikvision is already one of the biggest manufacturers of CCTV equipment globally and, because of that, it has significant buying power for the latest image sensors, which leads to very competitive pricing.

Hikvision goes by the official title of Hangzhou Hikvision Digital Technology. Based in China, it was established in November 2001, founded with 49% foreign capital. The company was officially listed on the Small and Medium Enterprise Board (SME Board) of the Shenzhen Stock Exchange on May 28, 2010.

Hikvision places a great emphasis on research and development. Of its 20,000 employees, nearly half are R&D engineers. The company annually invests 7–8% of its sales revenue in research and development for continued product innovation. It has created a complete, multi-level R&D system that covers every operation from research to design, development, testing, technical support and service. Centred at its Hangzhou headquarters, the R&D teams operate globally, including R&D centres in Montreal, Canada and Silicon Valley, California in North America, as well as Beijing, Shanghai, Chongqing and Wuhan in China.

This approach has led to notable advances in video image processing, video and audio codecs, video content analysis, streaming media network transmission and control, video and audio data storage, cloud computing, big data, deep learning and video structuralization. It has achieved a competitive advantage in both technology and cost by establishing a systematic technology and product platform that enhances product development efficiency. Over the years, it has established partnerships with world technology leaders including Intel, Texas Instruments, Ambarella, Sony, HiSilicon, Western Digital and Seagate.

EU Sales Manager at Hikvision Kane Luo said: “Scorpion Vision, as a product and solution provider, has significant knowledge and experience in the machine vision market. We’re delighted to work with Scorpion Vision on a partnership level, to provide value-added vision solutions for customers in the UK and Ireland.”

Scorpion Vision is part of Tordivel and provides innovative machine vision software and systems, as well as vision-guided robot applications for factory automation, across a range of industry sectors including automotive, food and beverage, logistics and aerospace.



UPDATE FROM GARDASOFT’S HILIARY BRIGGS We caught up with Gardasoft over the summer, asking how business was doing, and Managing Director Hiliary Briggs gave us the details

Gardasoft designs and manufactures high intensity LED illuminators and high performance pulse/strobe controllers for LED lighting, providing unique solutions to the global machine vision, intelligent traffic and security markets. It operates from offices in Cambridge, UK, and New Hampshire, US.

In 2016 it became a wholly-owned subsidiary of the OPTEX Group of Japan. Optex is a global solutions provider with Group sales of £180m (¥28bn) and a strong position in the worldwide market. It looks to grow its business and expand its product and technology offering to its global customer base.

Briggs told us: “We are experiencing greater interest in improved lighting control and in high-power lighting systems. The increased use of lighting controllers enables designers to save cost through better performance, improved system reliability and reduced maintenance costs.

“For example, pulsing LEDs improves lighting lifetime and flexibility, and allows you to safely overdrive to more than 100% brightness. Gardasoft is currently leading a working group defining a standard for GenICam registers for lighting controllers, which will standardise the use of lighting controllers within GenICam and GigE Vision and make them easier to use.

“Growth in high-power systems is driven by the demand for larger lights, particularly high-power linelight products (typically up to 150W) which enable linescan cameras to operate at higher line rates. For example, at Gardasoft we are seeing a lot more applications where systems save significant cost by using a single linescan camera with two lighting schemes.

“White light and infra-red can be interlaced on alternate lines without losing resolution on either scheme. We have achieved pulse rates of over 500kHz under ideal conditions, with up to 50A per channel, on our recently-released high power TR-HT 220 controller series.

“Similarly, with area scan systems, sequences of light output can create multiple lighting schemes to obtain multiple images at a single camera station. This reduces cost and speeds up throughput. The new, fast liquid lenses such as those supplied by Optotune open up the possibility of dynamic inspection where the product remains in motion while imaging takes place.

“Combined with Gardasoft’s Lens Controller, speeds of up to 6ms between focal points are possible: if the conveyor doesn’t need to be stopped, the system is faster and the mechanics more reliable. The rapid focusing of liquid lenses also allows swift focusing at multiple distances, so that the camera can examine several features on a product using multiple switched lights.”
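As a rough sketch of the throughput argument, the Python below estimates the cycle time for a multi-focal-plane inspection. The 6 ms refocus figure is the one quoted above; the 0.1 ms strobed exposure and the timing model itself are illustrative assumptions, not Gardasoft specifications.

```python
def inspection_time_ms(n_planes, refocus_ms=6.0, exposure_ms=0.1):
    """Time to image n focal planes on one product: one exposure per plane
    plus a liquid-lens refocus between consecutive planes (assumed model)."""
    return n_planes * exposure_ms + (n_planes - 1) * refocus_ms

# Three features at different heights, imaged without stopping the conveyor:
cycle_ms = inspection_time_ms(3)        # 3 * 0.1 + 2 * 6.0 = 12.3 ms
parts_per_minute = 60_000 / cycle_ms
```

Even with three focal planes, the product only needs to be in the field of view for roughly 12 ms, which is why the conveyor can keep moving.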



BOAZ ARAD IS THE EMVA YOUNG PROFESSIONAL OF 2017 Boaz Arad has won the EMVA Young Professional of 2017 award

The award was given for his work “Sparse Recovery of Hyperspectral Signal from Natural RGB Images.” It was announced on June 24 during the 15th EMVA Business Conference in Prague, Czech Republic, where Arad also presented his work as part of the regular conference program.

Arad, aged 32, is CTO of the startup company HC-Vision. He obtained his Computer Science BSc (cum laude) from Ben-Gurion University of the Negev (BGU) in 2012. Continuing towards a fast-tracked PhD at the BGU Interdisciplinary Computational Vision Laboratory (ICVL), he received an MSc in 2014. He plans to complete his PhD in 2018.

In a statement, EMVA said: “Hyperspectral (HS) imaging systems are capable of collecting the complete spectral signature reflected from each point in a given scene, producing a much more spectrally detailed image than that provided by RGB cameras. To date, scientific and industrial applications that require hyperspectral information have relied almost exclusively on traditional scanning hyperspectral imaging systems. These systems are expensive, bulky, and often require close to one minute to acquire an entire scene. Replacing these systems with low-cost, compact, and rapid RGB cameras could provide exciting opportunities in many fields.


“The awarded work addresses the task of recovering 31-channel hyperspectral information from 3-channel RGB images of natural scenes. Despite the severe 31-to-3 dimensionality reduction that occurs when hyperspectral information is projected to RGB, the methodology used is able to recover the former from the latter with 90–95 percent accuracy over a wide variety of scenes.

“In addition to producing state-of-the-art results at the time of publication, this approach produced results comparable to previous methodologies which relied on hybrid HS/RGB input. The methodology often surpassed the performance of the latter despite a significant information disadvantage.

“The achievements above were made possible by a new natural hyperspectral image database. This database, collected over the course of the last four years of Boaz’s graduate studies, contains over 200 high spatial-spectral resolution natural hyperspectral images and is the largest, most varied, and highest resolution collection of natural hyperspectral images collected to date.”

The EMVA Young Professional Award is an annual award honouring the outstanding and innovative work of a student or young professional in the machine vision or image processing industry.
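To give a flavour of what “31 channels from 3” means, here is a deliberately tiny NumPy sketch of dictionary-based spectral recovery. Everything in it (the random “spectral atoms”, the made-up camera response, the plain least-squares solve) is an illustrative stand-in; Arad’s actual method relies on a large sparse dictionary learned from his natural hyperspectral image database.

```python
import numpy as np

rng = np.random.default_rng(0)

BANDS, ATOMS = 31, 3
D = np.abs(rng.normal(size=(BANDS, ATOMS)))   # toy hyperspectral dictionary
R = np.abs(rng.normal(size=(3, BANDS)))       # toy RGB camera response

def recover_spectrum(rgb, D, R):
    """Estimate a 31-band spectrum from one RGB triplet by finding
    dictionary weights w that minimise ||(R @ D) w - rgb||."""
    w, *_ = np.linalg.lstsq(R @ D, rgb, rcond=None)
    return D @ w

# A scene spectrum built from the dictionary, projected down to 3 numbers...
true_spectrum = D @ np.array([0.5, 0.2, 0.3])
rgb = R @ true_spectrum

# ...and lifted back to 31 bands.
estimate = recover_spectrum(rgb, D, R)
rel_error = np.linalg.norm(estimate - true_spectrum) / np.linalg.norm(true_spectrum)
```

With only three atoms the 3×3 system is solvable essentially exactly; the hard (and interesting) part of the real method is making an overcomplete dictionary work with a sparse solver on arbitrary natural scenes.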



SPONSORED

TPL VISION - THE FUTURE’S BRIGHT TPL Vision gives MVPro access to its latest product and corporate news

TPL Vision designs and manufactures high-brightness LED illumination solutions for machine vision systems. It provides various solutions to help vision integrators build their applications in robotics, whatever the aim, whether sorting, pick and place, quality control, code reading and more.

The company has recently introduced a number of new products, including its MODULAR RINGLIGHT, the EBAR CURVE, which corrects homogeneity, and the OPTICAL TRIGGER, which simplifies light integration. The Modular Ringlight (fig. 1) is made up of a controllable, bi-colour LED lighting ring and a range of interchangeable accessories, and is designed to let customers make their illumination choices quicker than ever before.

The product centres on a cleverly designed LED ‘square ring’ which is used in conjunction with either a Low Angle, Dark Field, or Dome Diffusor accessory to give drastically different illumination results. TPL explained that varying the light effects is quick and simple with manual push-button control and ‘easy clip-on’ accessories. Once the most suitable accessory has been selected, the same robust products can then be installed onto a production line using conveniently located securing screws and engaging the remote-control capability.

The ringlight comes in three sizes. A user can control the four independent sectors (2×2), as well as the colour and overdrive (×4). And as with all TPL Vision’s products, it is “Plug & Light”: a user can simply connect their 24VDC power supply, then choose between the membrane keyboard or a PLC/SPS connection for full remote control. Application examples include code reading (1D, 2D laser, dot-pin laser, or engraved); quality checking; sorting; and presence/absence verification.

Fig 1.

TPL believes that having sophisticated modular ringlight solutions is a good platform from which it can introduce its products to new markets, including microscopy, fluorescence and multi-spectral analysis.

Another new member of TPL’s product stable is the new Optical Trigger, a very ingenious tool that will save users time with their integration by making it easy to add external illumination to a smart camera (fig. 2). With the Optical Trigger, any camera with built-in illumination can become a powerful tool without the need for additional wiring or complicated settings. All a user has to do is place the Optical Trigger in front of the camera, and as soon as the built-in illumination of the camera is on, the Optical Trigger automatically switches the external lighting on synchronously. The Optical Trigger is very responsive, with very short rising and falling times (4 µs).

Fig 2.


Whenever the embedded light is on, the Optical Trigger automatically sends a signal to the external illumination. The OPTICAL TRIGGER features can also be embedded


into a vertical illumination solution, transforming a smart camera into a dedicated tool for use within vertical markets such as logistics, traceability, food & beverage, and measurement, to name a few!

A third new product is a new bar light, the EBAR CURVE (fig. 3), which has been designed to correct the lack of homogeneity experienced with standard bar lights. Up to now, when you had to select a bar light you had to choose a product larger than the field of view (FoV). With the EBAR CURVE this is no longer required, as you select a bar with the same dimensions as the FoV. This saves money and space in your applications.

Fig 3.

Our fourth new product, the EBAR LOGISTIX (fig. 4), is a high brightness lighting solution which integrates the EBAR CURVE technology, optimised for use in logistics identification applications. Using the EBAR Logistix, a user can close the lens aperture and consequently increase the depth of field from one to 2.5 metres. This new lighting solution benefits from the Curve Concept, offering full homogeneity on the illuminated surface, including at the edges. With the EBAR Logistix, a user can optimise the number of cameras and lights they are using. The EBAR Logistix is equipped with very specific high-power LEDs, either blue or red, which help to increase label contrast on a cardboard box and code contrast on the label.

But TPL hasn’t just been busy on the product front. It opened the doors to TPL Vision USA in June and, from its base in Dover, New Hampshire, it is already receiving a lot of interest from the US, Canada and South America. The new direct sales team is on hand to discuss its range of illumination solutions. The company plans to develop a strong partner network within the USA for TPL Vision. The advantage of TPL’s US presence is that it can build on the many end-users it has already gained on the continent through its French and UK offices. It can now extend its partner networks


Fig 4.

throughout the US, spear-headed by TPL CEO Guillaume Mazeaud, Executive Sales Manager Chris Dolan and Kimberly Kennedy, who heads up the US sales department. Dolan’s move is an internal promotion which sees him go from a sales role for TPL Vision UK & Europe to overseeing the Global Sales Team.

With its push into the US, TPL has not lost sight of its aim to continue its successful growth into Asia. As such, it is working on its global roots and credentials so that it can continue to spread its influence throughout the region.

Another important growth area for the company is the robotics market. TPL has noted continuous growth of approximately 40% a year within this market, so it intends to continue to target its products at this industry.

As for how the machine vision sector generally is doing, TPL is upbeat. The company’s financial year starts in April, so it is now halfway through its second quarter. It has noted fast growth within the European machine vision sector with no

signs of this slowing, a trend which is seen on a worldwide basis.

TPL finished off by telling us which conferences it is attending in the remainder of 2017 and up to April 2018:

 September 2017 – EMVA, Vienna
 September 2017 – Pack EXPO, Las Vegas
 September 2017 – IUVA, Croatia
 October 2017 – MOTEK, Stuttgart
 April 2018 – AIA Vision, Boston

Make sure to say hello when you see them.
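The EBAR Logistix point about closing the aperture to extend depth of field can be sketched with the standard thin-lens depth-of-field formulas. The numbers below (a 16 mm lens, 0.01 mm circle of confusion, focus at 1.5 m) are illustrative assumptions, not TPL’s figures.

```python
def depth_of_field(f_mm, f_number, focus_mm, coc_mm=0.01):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    H = f_mm ** 2 / (f_number * coc_mm) + f_mm          # hyperfocal distance
    near = focus_mm * (H - f_mm) / (H + focus_mm - 2 * f_mm)
    far = focus_mm * (H - f_mm) / (H - focus_mm) if focus_mm < H else float("inf")
    return near, far

# 16 mm lens focused at 1.5 m, wide open vs stopped down:
wide_open = depth_of_field(16, 2.8, 1500)     # f/2.8: roughly 1.29-1.79 m
stopped_down = depth_of_field(16, 8, 1500)    # f/8:   roughly 1.02-2.80 m
```

Stopping down from f/2.8 to f/8 more than triples the in-focus range in this toy setup, which is exactly the trade-off that extra illumination makes affordable.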

TPL Vision UK

Tel: +44 (0)1738 450 504

web www.tpl-vision.com email contact@tpl-vision.co.uk



[Advert: TPL Vision Modular Ringlight – store 10 components, build 16 illuminations (co-axial, dark field, strobe, low angle, dome, bar light, ring, parallel, diffuse, fast square light, short exposure time), solve an infinity of projects. Dimensions: 80 mm or 130 mm. Colours: red-cyan or white-IR. Control: manual or remote; 4 independent sectors. Benefits: reduced direct costs, lower maintenance costs, fewer spare parts to handle, reduced downtime, process standardization, more flexibility within the production process, reduced training costs, re-use of products from one machine to another, maximized efficiency (low set-up time). Watch the video! Meet the TPL Vision team at Motek, 09-12 October 2017, Stuttgart: Hall 8, Stand 8016. Tel +44 (0)1738 450 504, contact@tpl-vision.co.uk, www.tpl-vision.com | Boston | Perth | Nantes | Hong Kong]


URGENT ACTION FROM THE TOP NEEDED TO ADDRESS THE SHORTAGE OF WOMEN IN UK ENGINEERING Recently it was International Women in Engineering Day and, to mark the date, CEOs and senior leaders from some of the UK’s top engineering companies joined the Institution of Engineering and Technology (IET) in calling for urgent action from the top to address the shortage of women in UK engineering

The current situation is that women account for only 9% of the UK’s engineering workforce, yet 63% of UK engineering employers do not have gender diversity initiatives in place. Making the call are:

 Peter Flint, CEO Building + Places EMIA, AECOM
 Sir Michael Arthur, President, Boeing Europe and Managing Director, Boeing UK and Ireland
 Mark Elborne, CEO & President, GE UK & Ireland
 Elizabeth Hill, Chief Product Engineer, Jaguar Land Rover
 Norman Bone, Chairman and Managing Director, Leonardo MW Ltd
 Dawn Elson, Group Engineering Director, Merlin Entertainments
 James Harris, Managing Director UK and Europe, Mott MacDonald
 Steve Hollingshead, Chief Executive Officer, J. Murphy and Sons Ltd
 Mark Carne, Chief Executive, Network Rail
 Nadia Savage, Director for High Speed Rail, Laing O’Rourke
 Sharon White, CEO, Ofcom
 Ian Ritchey, Group Chief Engineer, Rolls-Royce
 Dr. Paul Gosling, VP Engineering, Thales UK
 Marguerite Ulrich, Chief Human Resources Officer, Veolia UK and Ireland.

Further concrete action The industry is committing to take further concrete action within their own organisations to improve the 9% figure. Their commitment involves taking one or more of the following actions:

 Formal gender diversity programme to measure and report on female recruitment and retention
 New approach to advertising jobs in order to attract more women
 ‘Work returner’ programmes
 Career planning and flexible working
 Mentoring and sponsorship programmes
 Affinity groups and networking opportunities for women
 Promote apprenticeship and work experience programmes to girls
 Awards and initiatives to celebrate female engineering role models (such as the IET Young Woman Engineer of the Year Awards).

The engineering leaders made the call during a panel discussion at the start of the IET’s #9percentisnotenough conference. This took place in Birmingham and is named after the IET’s multi-award winning social media campaign to highlight the gender diversity issue in UK engineering. The industry leaders were joined at the event by senior HR professionals, as well as other representatives from industry, academia and the professional engineering institutions.



SPONSORED

WHAT THE FUTURE HOLDS FOR THE FRAME GRABBER IN THE VISION MARKET Donal Waide, Sales Director at BitFlow, continues his story about the indispensable frame grabber

As we have mentioned in previous articles, the frame grabber has found its niche in high speed machine vision applications. But is this enough to sustain a market for the component? In short, the answer is yes. Currently the frame grabber is required for Camera Link and CoaXPress, two of the fastest interfaces widely used in the vision industry. Two other high data rate interfaces, CLHS and Fiber, are available to a lesser extent, as most machine vision component manufacturers have yet to widely adopt them. For the purposes of this article, we will focus on CL and CXP.
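For readers who like to sanity-check the bandwidth claims in what follows, the sketch below works out nominal CoaXPress payload rates. The 8b/10b line coding (8 payload bits per 10 line bits) and four-link aggregation are standard CXP features; the figures are the standard’s nominal line rates, not BitFlow product specifications.

```python
def link_payload_gbytes(line_rate_gbps, links=4, coding_efficiency=0.8):
    """Nominal usable payload (GB/s) for an aggregated CoaXPress connection.

    CoaXPress uses 8b/10b line coding, so only 8 of every 10 transmitted
    bits carry payload (efficiency 0.8). Dividing by 8 converts bits to bytes.
    """
    return line_rate_gbps * coding_efficiency * links / 8

cxp6_4links = link_payload_gbytes(6.25)    # CXP-6 over 4 links -> 2.5 GB/s
cxp12_4links = link_payload_gbytes(12.5)   # proposed CXP 2.0 speed -> 5.0 GB/s
```

At 12.5 Gb/s per link, four links deliver the 5.0 GB/s figure discussed later in this article; the proposed 10 Gb/s speed works out at 4.0 GB/s on the same basis.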


The Nouveau Vision Market

Customers love CoaXPress As newer players enter the machine vision market, especially from the non-traditional machine vision world, the common questions we hear are how fast and how far. This has been aided by the new wave of sensors on the market. Ten years ago the idea of a 12MP, 300 fps sensor was a dream; now, it is a mid-sized sensor. These new-to-machine-vision companies, in the life science industries and other non-traditional inspection markets, don’t know any vision boundaries. A potential new customer will call and ask about the newest and fastest vision technology. These customers are brand new to the market and aren’t interested in Camera Link, as to them it is a 20-year-old standard with a max data rate of less than 1 GB/s. However, when CXP and the idea of a 25MP sensor come up, their eyes light up, as this is new and exciting and also has a road map to greater heights. The next version of CXP has a lot of potential, from Forward Error Correction (improving data integrity) to Multi Destinations (where the camera sends data to two frame grabbers, potentially located in different

mvpromedia.eu


SPONSORED

systems), GenICam 3-D Data support, and the potential introduction of a fiber transport layer. These next generation features are being discussed at the regular standards meetings held around the world every six months. The meetings are attended by representatives of 10-15 companies (each of which has a CXP product of some type), representing a truly democratic process and a global view of the direction of the standard. One thing that everyone agrees on for CXP 2.0 is the data rate: there will be two new speeds introduced, 10 and 12.5 Gb/s. This is potentially up to 5.0 GB/s of data.

A Whole Lot of Data to be Processed

Of course, the issue here will be the processing of this data. While today’s CPUs are faster, multi-core and more adept at processing, an easier solution is a GPU card. This shifts the data processing to a dedicated GPU while the CPU is freed up for other tasks. The need for GPU integration is one of the reasons BitFlow has been developing on both NVIDIA and AMD platforms. BitFlow first launched an NVIDIA solution almost five years ago, and the AMD solution is already over two years old. Being the only frame grabber company that offers direct GPU support for both options has led to customers trusting BitFlow to have its finger on the pulse of the future of CoaXPress. This confidence has led to growth in CXP sales at a higher rate than normal.

For those who don’t need the Maximum

While CXP has focused initially on the larger and quicker sensors, there is a demand in the market for an almost direct replacement for Camera Link. These customers don’t need the blazing speeds of CXP, but do need the control, minimal latency, multi-camera solutions and cable lengths not offered by USB3 Vision and GigE. The camera manufacturers are coming out with single link cameras capable of up to 600 MB/s data rates (USB3 × 1.7, ~CL Medium). To aid this, BitFlow offers a low cost single link CXP solution, the Aon CXP, which is a fully functional CXP grabber priced competitively to give these companies the edge over their competition. This product expands the use of CXP in the vision market.

Conclusion

The future of the frame grabber is always said to be under threat, with embedded and other non-frame-grabber solutions emerging, but there is always a need for the frame grabber in vision. As other interfaces evolve, so too does the frame grabber. As a frame grabber manufacturer, BitFlow ensures that it stays ahead of the curve.




THE EVENT WHERE LIGHT TECHNOLOGIES COME ALIVE!

90 EXHIBITORS | LIVE DEMOS | 120+ EXPERT TALKS

Experience Innovations! The event that connects the supply chain with users of light technologies within industry & research

Wednesday 11th & Thursday 12th October 2017 · Ricoh Arena Coventry UK

‘‘A very good event, rich in networking opportunities’’
‘‘Excellent - Photonex never fails to deliver’’

FOCUS ON MACHINE VISION: Industrial Imaging | Machine Vision | Factory Automation | Inspection

ATTEND FREE MEETINGS
WEDNESDAY: Hyperspectral Imaging – Technologies and Application-ready Solutions. Dispelling myths and providing a pan-technology review, introducing applications and uses.
THURSDAY: Optical Metrology for Manufacturing. Discussion on the latest developments and new applications for optical metrology.
THURSDAY: Vision & Imaging Programme. The Vision & Imaging Programme will be fast moving, informative and inspiring.

FREE TO REGISTER – All you need to know at www.photonex.org


CALL FOR PAPERS: COMPUTERS IN INDUSTRY, ELSEVIER JOURNAL Special Issue on: Machine Vision for Outdoor Environments Professor Melvyn Smith and Professor Lyndon Smith of Centre for Machine Vision, Bristol Robotics Laboratory, UK, were recently on the look-out for papers. The first deadline was end of August, but we’re only just past that, so if you have something special, maybe it’s worth a try! Read on…

Context

Recent developments in machine vision technologies have enabled opportunities for automatic outdoor scene analysis to provide useful and advanced capabilities in a number of important sectors, including transport, construction and agriculture.

For example, in the latter case, new precision agricultural techniques can particularly benefit from information generated in real-time by machine vision in the field. Here two- and three-dimensional texture and shape analysis can now be employed for improved segmentation in automated weeding, thereby increasing efficiency in weed eradication, leading to improved crop yield and so reducing the environmental impact of herbicides and operational costs for farmers. Similar practical benefits can potentially be obtained in other sectors, and so outdoor imaging research is receiving growing attention from researchers and practitioners within academia and industry, as well as from government organisations seeking to support new and exciting research and development via collaborative project grants. In contrast to the laboratory or the relatively structured setting of a conventional indoor industrial process, outdoor machine vision requires rugged hardware and novel algorithms to address the formidable and challenging issues associated with withstanding the elements and variations in the environment. Common themes when applying machine vision solutions to real-world outdoor problems, or in complex settings with only limited or a complete absence of any form of environmental structuring, often include a need to tolerate harsh environmental conditions (for example heat, vibration, water and dust), an ability to cope with and adapt to uncertainty and change (for example in lighting and in the nature and position of objects), and a capacity to handle and interpret unprecedented quantities of noisy or incomplete data. Solutions may therefore call upon cutting-edge aspects of novel hardware design, together with state-of-the-art developments in innovative imaging techniques, including 3D and multispectral, and in data processing, including those techniques that have been rapidly gaining traction in computer vision, such as the internet of things, big data analysis and deep learning.

Aim and scope

Reliable and robust operation of machine vision systems in unstructured and outdoor environments remains a significant and highly topical challenge. This special issue aims to report on the theoretical foundations, novel science and engineering solutions required for the application of machine vision to outdoor or highly unstructured industrial applications. We welcome original, high quality and unpublished manuscripts from academia and industry concerning recent advances in different aspects of outdoor vision research and its application. We expect proposed solutions to be innovative, with a particular focus on new and exciting developments within the aims and scope of the journal.

mvpromedia.eu

Topics of interest include, but are not limited to:

• Agri-technology – including forestry and timber; crop monitoring/inspection; automated harvesting; weed control; animal monitoring, e.g. cattle, pigs and poultry.
• Security and surveillance – including traffic monitoring; facial/demographic recognition; biometrics and directed advertising.
• Transport – including autonomous/self-drive vehicles (on and off road) and drones/UAVs; applications in mass transport passenger assistance/experience; road inspection.
• Scene recognition/interpretation – including object detection/recognition/tracking; outdoor navigation and localisation/positioning.
• Remote sensing and aerial imaging – including for building construction/architecture; applications in the construction industry and construction equipment.
• Assistive technology – including outdoor augmented reality applications; wearable technology; human-computer interaction; embedded computer vision outside the factory; aids for the visually impaired.



R&D TAX CREDITS – YOU NEED TO KNOW ABOUT THESE Editor Neil Martin talks to the founder of RandDTax, a consultancy which handles research and development tax credits, mostly for UK SMEs. If you don't claim these credits, or have made your own claims in the past, then you need to read this piece – it could be a valuable use of your time! I'm going to let RandDTax founder Terry Toms give it to you from the horse's mouth, see below, but first I had a quick chat with Toms and an RandDTax consultant, Dr David Barkel. The firm was founded in the summer of 2012 and has since, they told me, helped over 800 clients claim back around £63m from HMRC. Now, don't forget, this is a UK scheme, so if you're looking on from Europe, or the US, you can't get involved – unless of course you have a subsidiary based here, which should help. Toms reckons that RandDTax has been one of the fastest growing players in the market over the last five years, and now has 12 shareholders, six directors and around 25 consultants. Certainly not the biggest, they admit, but one that has carved a healthy niche for itself, helping the UK SME community get what they deserve when it comes to R&D tax credits. Around 60% of the companies they deal with are in the IT sector, but a wide range of companies in almost all sectors also apply for R&D tax credits. Big companies are well served, but SMEs often miss out, or underclaim, mainly because they, or their accountants, don't know how to claim. And that might be because they simply don't see themselves as doing R&D.

Bear in mind that just 18,000 SMEs claimed, out of some 4.5m SMEs in the country. Toms says that the reason for the lack of claims is not only that companies are unaware they can claim, but also a certain self-deprecating attitude from some technical staff, who feel they can't possibly be doing R&D as they are "not smart enough." In fact, they most probably are doing R&D within the definition used by HMRC. Says Toms: "There is a specific definition within the HMRC Guidelines, which I would say is broader than most people's understanding of what R&D is, where they tend to think of a world first; you don't have to


be doing a world first to qualify for R&D tax credits, otherwise there would not be 18,000 SMEs claiming. "Any company which is applying science or technology to develop anything, be it a new product, a new process, a new service, a new device, or material, could potentially qualify." Toms explains that the firm charges a percentage of the claim, although they do charge a minimum fee once they have completed the scoping exercise for the first claim, which can cover the last two years, and can be paid within 28 days. But RandDTax only take forward what they consider to be genuine claims – in other words, claims with an exceedingly high probability of succeeding. Their success rate on claims is 100%. I finished our chat by asking Toms the obvious question: isn't it nice walking into companies and telling them they could be due money back from the Government? He replied: "I cannot imagine a business which is more pleasant to work in, because you are getting chunks of money for companies that they were not expecting; they do get quite enthusiastic." And when you consider that the average claim for the firm's clients is around £76,000, and some have got back over £1m, you can see why they are so enthusiastic. So, over to Toms, to give you the basic facts.

R&D Tax Credits – A truly valuable contribution to business innovation! – By Terry Toms of RandDTax

What is it worth to businesses? At a UK level the government invested about £2.45 billion in Corporation Tax relief or cash credits in the year to April 2015 – the last year for which figures are available from the National Audit Office. This included just over £1 billion going to about 18,000 SME claimants, an increase of over 30% on the previous year, at an average claim value of over £50,000 per claiming SME. In rough terms, this means that the scheme paid for up to around 30% of all the qualifying R&D costs claimed by those SMEs. The Large Company scheme is much less generous.



I often hear small business owners complaining that our governments do not do much for them. Having spent a large part of my life in or around science and technology based SMEs, and working directly with the R&D Tax Credit scheme since 2002, I beg to differ. Innovation is at the heart of business success, and R&D Tax Credits help fund innovation in a very big way. Our 825 clients have benefitted by over £63m since September 2012. That works out at an average of over £76,000 per company. I would describe that as a very significant direct contribution to the capital needed for sustained business success.

What are the challenges, if any, in making successful claims? Given the growth in companies claiming R&D Tax Credits, it is not surprising that a growing consultancy industry/profession has emerged. Some of our clients tell us that they receive more telesales calls on R&D Tax Credits than on any other topic – and, as tends to happen in any new, booming and unregulated market, there is a temptation for sales people to make exaggerated claims in order to attract customers. Added to this is the simple fact that we live in a "self-assessment" world in relation to all taxes. This presents a potential trap that companies should take steps to avoid. The biggest challenge we find companies have is around understanding and interpreting the HMRC Guidelines on what constitutes R&D for Corporation Tax purposes, and where that R&D begins and ends, especially in their own industry sector and specific type of R&D. Almost any company can be investing in advancing the sciences and/or technologies to develop new or enhanced products, new services, improved processes (operational or manufacturing related), increased efficiency, better devices and materials, etc.

When selecting R&D consultancy services to help with claims there are at least three issues to consider:

• Does what is being offered sound too good to be true? It may be just that.

• Consider costs versus the service offered. The easiest way to reduce costs is to do a lot less. In a self-assessment environment where HMRC can question claims and go back up to six years, the financial dangers in cutting corners with claims will mount as the claim years go by.

• Ask to look, and do so very closely, at the Terms, Conditions and Disclaimers used by R&D consultancies and accountancy firms offering R&D consultancy services. There can be many potential issues in the small print. Disclaimers will often sum up the extent of the service. For example, the following was used by an otherwise very well respected large international accountancy firm: "Our procedures did not include any verification work on the information provided, and we express no opinion on the accuracy of any financial data or other information referred to in this report."

As I said earlier, the easiest way to cut costs is to do less. But that can dramatically increase the risk for claiming companies.




Don’t miss the UK’s leading exhibition for robotics and automation solutions

Meet robot manufacturers, system integrators and experts in automation and machine vision, and discover new ways to optimise your operations. REGISTER FREE NOW AT ROBOTICSANDAUTOMATION.CO.UK


CONTRIBUTION

THE FOUR BASICS OF MACHINE VISION APPLICATIONS Teledyne DALSA's Christopher Chalifoux takes us through the four basic machine vision applications. "All I need to do is…" – as a machine vision applications engineer, the sound of those words puts me in "Danger, Will Robinson!" mode. If a prospective customer says that, there's a chance he hasn't thought through the entire application; this is especially true if the customer is new to machine vision. "All I need to do is make a couple of measurements on a shiny metal part and kick the bad parts off the assembly line." Can the shape and orientation of the metal part cause reflections? "All I need to do is make sure the already-filled bottle of soda doesn't contain any foreign material." Will the bottle always be oriented so that its contents can be seen by the camera, or can it rotate such


that its label gets in the way? "All I need to do is read a barcode." And do what with the code? There are four basic steps in a machine vision application. Each one must be carefully considered to avoid disappointment, frustration and heartbreak for the customer, and nasty surprises for the application developer.

• Acquire an image
• Extract information from the image
• Analyze the information
• Communicate the results to the outside world
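As a rough illustration, the four steps can be sketched as a toy pipeline. Everything below – the fake "image", the threshold, the expected blob area – is invented for illustration; it is not Teledyne DALSA code, just the shape of a typical application.

```python
# Illustrative four-step machine vision pipeline (all values hypothetical).
def acquire_image():
    # Step 1: in a real system this comes from a camera driver; here we
    # fake an 8-bit greyscale image as a 2D list, with a bright "part".
    return [
        [10, 12, 11, 10],
        [11, 200, 210, 12],
        [10, 205, 198, 11],
        [12, 10, 11, 10],
    ]

def extract_features(image, threshold=128):
    # Step 2: segment bright pixels and measure the blob area.
    return sum(1 for row in image for px in row if px > threshold)

def analyze(area, expected=4, tolerance=1):
    # Step 3: pass/fail against a (hypothetical) dimensional criterion.
    return abs(area - expected) <= tolerance

def communicate(passed):
    # Step 4: report to the outside world (a PLC, a log, a reject gate).
    return "PASS" if passed else "FAIL"

image = acquire_image()
area = extract_features(image)
print(communicate(analyze(area)))  # the fake part has 4 bright pixels -> PASS
```

In a real application each step hides the hard work the article goes on to describe: step 1 is lighting and optics, step 2 is robust segmentation, step 3 is agreeing on what "bad" means, and step 4 is factory communications.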




1

Acquire an Image

We urge customers to spend the time and money getting the right camera, lighting and lens to acquire the best image possible of real parts under real working conditions. Image preprocessing can sometimes be used to turn a not-great image into a good-enough image, but it’s much more difficult to turn a terrible image into a usable image. As the saying goes, you can’t make a silk purse out of a sow’s ear. But why you’d want to is beyond me.

The same part under three different lighting setups. While we can all agree that the first setup is useless, the inexperienced customer may think that the second setup is "good enough, because I can see the holes." With just a little work, we can get to the third image, with a much higher chance of success. I'd like to emphasize a phrase in the previous paragraph: "real parts under real working conditions." A colleague recently spent several weeks developing an application to distinguish among four similar stamped metal parts for a customer outside the United States. (Remote application development can be a challenge!) The differences among the parts were small but detectable in the 100+ images the customer had acquired using the lighting and lens he had configured himself and sent to my colleague. (The customer was knowledgeable about machine vision, having formerly used a certain yellow product.) A few weeks later, the customer sent a batch of images in which all of the parts were slightly blurry. When my colleague asked why this was, the customer explained that the first images were taken while the parts were stationary, and the new ones while the parts were moving down the conveyor belt, as they would be in the final deployment. The blurriness made distinguishing the small differences impossible. (The customer is investigating how to remove the blurriness, most likely with a shorter exposure time.)

2

Extract information from the image

“I can see the feature/difference/flaw, why can’t your software?” is a not-uncommon question. We humans have had tens of thousands of years to hone our pattern-matching skills and develop our neural networks. Presented with an image, even one we’ve never seen before, and told to perform


a visual task (“Find the widget that’s different from all the other widgets”), we’re good at ignoring the background noise and zeroing in on what’s important. A computer is, by comparison, stupid. (Although I’m suspicious of my smartphone – I think it’s been taking my car for joyrides at night while I’m sleeping.)

The customer wants to measure the circle, which could be anywhere in the field of view. Easy for a human, but tricky for machine vision: The uneven lighting and “noise” make thresholding and blob analysis difficult at best. Other methods may take too long for the customer’s required throughput. A machine vision system must be programmed to explicitly measure the size of an object, locate a feature with a high degree of reliability, or determine whether a color is what it should be. A typical machine vision system is not configured to handle unexpected changes to the scene – deterioration of focus, change of scale of the objects to be measured, a loading dock bay door being opened and flooding the setup with outside light. (Yes, that was a real case!) Back to my colleague and the stamped metal parts… Even before the blurriness problem, the customer had sent a second batch of images of what he considered the same parts. But in fact, they had features that were not present in the first batch of images, and these features interfered with the software tools my colleague had successfully applied to the original images. This made it necessary to modify the software tools to ignore those features. The customer didn’t understand why these new images had caused a problem since the new features had “nothing to do” with the features that were being analyzed to distinguish the parts. There’s a lot of excitement these days about the coming wave of artificial intelligence – this is at least the 3rd time since I graduated college more years ago than I care to think about. At VISION 2016 in Stuttgart, a few companies were showcasing their “deep learning” software. It was impressive, but our customers are trying to solve problems in industrial environments, usually within a few


hundred milliseconds, for as little money as possible. It’s not clear yet how easily or economically deep learning will port to the factory floor.
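To make the circle-measurement difficulty above concrete, here is a toy sketch (invented numbers, not any vendor's tool) of how uneven lighting defeats a fixed global threshold, while a simple lighting-compensated threshold still finds the feature:

```python
# A dark "circle" sits on a background whose brightness ramps left to right.
image = [
    [40, 80, 120, 160],
    [40, 20, 120, 90],   # the two "circle" pixels: 20 and 90
    [40, 80, 120, 160],
]
circle = {(1, 1), (1, 3)}

# Global threshold at 60: catches the whole dark left column and misses
# the circle pixel at (1, 3), whose value 90 exceeds the threshold.
global_hits = {(r, c) for r, row in enumerate(image)
               for c, v in enumerate(row) if v < 60}

# Lighting-compensated threshold: compare each pixel to the brightest
# value in its own column, i.e. the local background level.
col_bg = [max(image[r][c] for r in range(3)) for c in range(4)]
local_hits = {(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v < 0.6 * col_bg[c]}

print(global_hits == circle)  # False: global threshold fails
print(local_hits == circle)   # True: compensated threshold works
```

Real tools use far more robust techniques, but the failure mode is the same one the author describes: the human eye compensates for the lighting gradient effortlessly; a naïve algorithm does not.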

3

Analyze the information

This step is sometimes optional, for example when the information extracted from the image in Step 2 just has to be logged to a file. But more often, the information has to be analyzed to make a pass/fail decision. In most applications this is quantifiable and straightforward: if the part doesn't meet the dimensional criteria, or if the label is crooked, or if there's a mouse corpse in the bottle, it's a reject. Other times, the pass/fail criteria are fuzzier. What's the largest scratch allowed on a smartphone case before the case is a reject? Is a 0.05 mm straight-line scratch worse than a 0.05 mm spiral scratch? Customers sometimes have difficulty providing quantitative answers. Two humans who have performed the same visual inspection for years may disagree over the same part, because their somewhat subjective ideas of good and bad may not be quite the same. Years ago I was sent a box of 40 metal cylinders to determine whether our software could distinguish the bad from the good. The cylinders bore scratches and pits of various sizes, but were not marked to indicate which were good and which were bad. I called the customer to ask him which were which. "Oh, I forgot to label them," he answered. "Can't your software figure it out?"

4

Communicate the results to the outside world

By “outside world”, I mean beyond the analysis part of the application. Our products generally don’t work in isolation; there is usually other hardware and software with which they must interact. We provide interfaces to a lot of this – TCP/IP and serial communication, PLC protocols, etc. but we can’t vouch for the reliability of another company’s product. I’m not saying it’s never our fault when there’s a communication problem – we’re good, but we are not perfect – but there have been more than a few occasions when, after beating our heads against the wall trying to figure out why our software can’t pass data to a PLC in a factory in The Middle of Nowhere, North Dakota – where we can’t visit, and can’t access remotely — it’s turned out that the two products weren’t configured properly to communicate with each other. A critical metric for the vast majority of machine vision applications is processing time – the number of parts that can be acquired and analyzed per second or per minute. For many applications, the selling point for machine vision is not its ability to perform a task that a human can’t, but rather to perform it many times a second for 24 hours without its mind wandering off into… Oooh, look, shiny object!

Where was I? As Alexander Pope wrote, “A little learning is a dangerous thing.” And developing a machine vision solution presents challenges that require learning from both sides of the equation – from the application developer and from the customer side. A good system result can only occur when both parties have a good understanding of the challenge of the problem and the desired result and that, my friend, takes a certain amount of dialogue. At the end of the day, my goal is to ensure an understanding is in place for the delivery and continued operation of a stable, reliable system.
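Step 4, communicating results, is often the simplest code and the trickiest integration. A minimal sketch of the idea (the message format, part ID and use of a plain socket pair are all invented here; real systems use an agreed PLC protocol or TCP/IP framing negotiated with the other vendor):

```python
import socket

def send_result(sock, part_id, passed):
    # Frame the pass/fail verdict as a simple ASCII message. The framing
    # (comma-separated, newline-terminated) is a hypothetical convention;
    # the point is that BOTH ends must agree on it in advance.
    msg = f"{part_id},{'PASS' if passed else 'FAIL'}\n".encode("ascii")
    sock.sendall(msg)

# A local socket pair stands in for the link to a PLC or line controller.
vision_side, plc_side = socket.socketpair()
send_result(vision_side, "part-0042", passed=True)
print(plc_side.recv(64).decode("ascii"))  # part-0042,PASS
```

Most of the "communication problems" described above come down to the two ends disagreeing about exactly this kind of detail: byte order, terminators, timeouts, who connects to whom.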

Too many scratches? You tell me.



CONTRIBUTION

reVISION ACCELERATES YOUR SURVEILLANCE APPLICATION A guest white paper by Nick Ni and Adam Taylor from Xilinx

Surveillance systems rely heavily upon the capability provided by embedded vision systems to enable deployment across a wide range of markets and applications. These surveillance systems are used for numerous applications, from event and traffic monitoring, safety and security applications, to ISR and business intelligence. This diversity brings with it several driving challenges which need to be addressed by the system designers in their solution. These are:

• Multi Camera Vision – the ability to interface with multiple homogeneous or heterogeneous sensor types.
• Computer Vision Techniques – the ability to develop using high-level libraries and frameworks like OpenCV and OpenVX.
• Machine Learning Techniques – the ability to use frameworks like Caffe to implement machine learning inference engines.
• Increasing Resolutions and Frame Rates – increasing the data processing required for each frame of the image.

Depending upon the application, surveillance systems will implement algorithms such as optical flow to detect motion within the image. Stereo vision provides depth perception within the image, while machine learning techniques are also used to detect and classify objects within an image.


Figure 1 – Example Applications Top: facial detection and classification Bottom: optical flow


Figure 2 – Traditional CPU/GPU approach compared with Zynq-7000 / Zynq UltraScale+ MPSoC

Heterogeneous System on Chip devices like the All Programmable Zynq®-7000 and the Zynq® UltraScale+™ MPSoC are increasingly being used for the development of surveillance applications. These devices combine high performance ARM® cores, forming a Processing System (PS), with Programmable Logic (PL) fabric. This tight coupling of PL and PS allows for the creation of a system which is more responsive, reconfigurable and power efficient when compared to a traditional approach. Traditional CPU/GPU based SoC approaches require the use of system memory to transfer images from one stage of processing to the next. This reduces determinism, and increases power dissipation and latency of the system response, as multiple


resources will be accessing the same memory creating a bottleneck in the processing algorithm. This bottleneck increases as the frame rate and resolution of the image increases. This bottleneck is removed when the solution is implemented using a Zynq-7000 or Zynq UltraScale+ MPSoC device. These devices allow the designer to implement the image processing pipeline within the PL of the device creating a true image pipeline in parallel within the PL where the output of one stage is passed to the input of another. This allows for a deterministic response time with a reduced latency and power optimal solution. The use of the PL to implement the image processing pipeline also brings with it a wider interfacing capability than

traditional CPU/GPU SoC approaches, which come with fixed interfaces. The flexible nature of PL IO structures allows for any to any connectivity, enabling industry standard interfaces such as MIPI, Camera Link, HDMI, etc. The flexible nature also enables bespoke legacy interfaces to be implemented along with the ability to upgrade to support the latest interface standards. Use of the PL also enables the system to be able to interface with multiple cameras in parallel. What is critical however is the ability to implement the application algorithms without the need to rewrite all the high level algorithms in a hardware description language like Verilog or VHDL. This is where the reVISION Stack comes into play.




Figure 3 – reVISION Stack

reVISION Stack

The reVISION stack enables developers to implement computer vision and machine learning techniques. This is possible using the same high-level frameworks and libraries when targeting the Zynq-7000 and Zynq UltraScale+ MPSoC. To enable this, reVISION combines a wide range of resources enabling platform, application and algorithm development. As such, the stack is aligned into three distinct levels:

Platform Development – This is the lowest level of the stack and is the one on which the remaining layers of the stack are built. This layer provides the platform definition for the SDSoC™ environment.

Algorithm Development – The middle layer of the stack provides support for implementing the algorithms required. This layer also provides support for acceleration of both image processing and machine learning inference engines into the programmable logic.

Application Development – The highest layer of the stack provides support for industry standard frameworks. These allow for the development of the application


which leverages the platform and algorithm development layers.

Accelerating OpenCV in reVISION

Both the algorithm and application levels of the stack are designed to support both a traditional image processing flow and a machine learning flow. Within the algorithm layer, support is provided for the development of image processing algorithms using the OpenCV library. This includes the ability to accelerate into the programmable logic a significant number of OpenCV functions (including the OpenVX core subset). To support machine learning, the algorithm development layer provides several predefined hardware functions which can be placed within the PL to implement a machine learning inference engine. These image processing algorithms and machine learning inference engines are then accessed and used by the application development layer to create the final application, and provide support for high-level frameworks like OpenVX and Caffe.

One of the most exciting aspects of the algorithm development layer is the ability to accelerate a wide range of OpenCV functions within it. Within this layer, the OpenCV functions capable of being accelerated can be grouped into one of four high-level categories.

Computation – Includes functions such as absolute difference between two frames, pixel-wise operations (addition, subtraction and multiplication), gradient and integral operations.

Input Processing – Provides support for bit depth conversions, channel operations, histogram equalisation, remapping and resizing.

Filtering – Provides support for a wide range of filters, including Sobel, custom convolution and Gaussian filters.

Other – Provides a wide range of functions, including Canny/FAST/Harris edge detection, thresholding, and SVM and HoG classifiers.

These functions also form the core functions of the OpenVX subset, providing tight integration with the application development layer's support for OpenVX. The capability provided by the reVISION stack thus supplies all the necessary elements needed to implement the algorithms required for high performance surveillance systems.
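To show what the "Computation" category does in software terms, here is a pure-Python stand-in for the absolute-difference function at the heart of simple motion detection (reVISION accelerates the equivalent OpenCV function in programmable logic; the frames and threshold below are toy values):

```python
# Pixel-wise absolute difference between two frames, plus a count of
# changed pixels - the core of naive frame-differencing motion detection.
def absdiff(frame_a, frame_b):
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def motion_pixels(diff, threshold=30):
    # Count pixels whose change exceeds the (hypothetical) noise threshold.
    return sum(1 for row in diff for v in row if v > threshold)

prev_frame = [[10, 10, 10], [10, 10, 10]]
curr_frame = [[10, 200, 10], [10, 10, 180]]  # two pixels changed

diff = absdiff(prev_frame, curr_frame)
print(motion_pixels(diff))  # 2
```

Operations like this are embarrassingly parallel per pixel, which is exactly why they map so well onto a pipelined implementation in the PL.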


The development team can use these functions to create an algorithmic pipeline within the programmable logic. Being able to implement functions in the logic in this way significantly increases the performance of the algorithm implementation.

Machine learning in reVISION

reVISION provides integration with Caffe, providing the ability to implement machine learning inference engines. This integration with Caffe takes place at both the algorithm development and application development layers. The Caffe framework provides developers with a range of libraries, models and pre-trained weights within a C++ library, along with Python™ and MATLAB® bindings. This framework enables the user to create networks and train them to perform the operations desired, without the need to start from scratch. To aid reuse, Caffe users can share their models via the model zoo, which provides several network models that can be implemented and updated for a specialised task if desired. These networks and weights are defined within a prototxt file; when deployed in the machine learning environment, it is this file which is used to define the inference engine.
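A deploy-style prototxt is just a plain-text description of the layer graph. A minimal, purely illustrative fragment (the network name, dimensions and layer parameters are invented) looks like this:

```prototxt
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 16 kernel_size: 3 stride: 1 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
```

Each `layer` block names an operation (convolution, ReLU, pooling and so on) and wires its `bottom` inputs to its `top` outputs; it is this graph, together with the trained weights, that the deployment environment turns into an inference engine.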

Figure 4 – Caffe Flow Integration


reVISION provides integration with Caffe, which makes implementing machine learning inference engines as easy as providing a prototxt file; the framework handles the rest. This prototxt file is then used to configure the processing system and the hardware-optimised libraries within the programmable logic. The programmable logic is used to implement the inference engine and contains such functions as Conv, ReLU, Pooling and more. The number representation systems used within machine learning inference engine implementations also play a significant role in performance. Machine learning applications are increasingly using more efficient, reduced precision fixed-point number systems, such as INT8 representation. The use of fixed-point, reduced precision number systems comes without a significant loss in accuracy when compared with a traditional 32-bit floating-point (FP32) approach. As fixed-point mathematics is also considerably easier to implement than floating point, this move to INT8 provides for more efficient, faster solutions in some implementations. This use of fixed-point number systems is ideal for implementation

within a programmable logic solution, and reVISION provides the ability to work with INT8 representations in the PL. These INT8 representations enable the use of dedicated DSP blocks within the PL. The architecture of these DSP blocks enables up to two concurrent INT8 multiply-accumulate operations to be performed when using the same kernel weights. This provides not only a high-performance implementation, but also one with reduced power dissipation. The flexible nature of programmable logic also enables easy implementation of further reduced-precision fixed-point number representation systems as they are adopted.

Conclusion

reVISION provides developers with the ability to leverage the capability provided by Zynq-7000 and Zynq UltraScale+ MPSoC devices. This is especially true as there is no need to be a specialist to implement the algorithms using programmable logic. These algorithms and machine learning applications can be implemented using high-level, industry-standard frameworks, reducing the development time of the system. This allows the developer to deliver a system which is more responsive, reconfigurable, and power optimal.



ONE PASS. TWO CONFERENCE TRACKS.

30+ Exciting Presentations
50 Exhibiting Tabletops
9 Networking Sessions
4 Informative Keynotes

NOVEMBER 15-16, 2017 DOUBLETREE BY HILTON SAN JOSE SAN JOSE, CA Customize your learning experience with topics including: • Vision Guided Collaborative Robots • Embedded Vision • 3D Vision Techniques • Collaborative Factory of the Future • Robot Safety • Industrial Internet of Things and Big Data • AND MUCH MORE!

REGISTER TODAY AT CRAV17.ORG


CONFERENCES It’s a busy time for exhibitions and conferences from now until the end of the year and Editor Neil Martin casts his eye over the main events

We get started with drinktec, the world's leading trade fair for the beverage and liquid food industry, which takes place from 11 to 15 September in Munich. This is an event that attracts a number of machine vision industry companies. I love the catch-line for this show, 'Go with the flow.' Brilliant. We leave September behind and, rather bizarrely, start a busy week in October in which three industry shows come almost at the same time.

Kicking off the week, on Wednesday 11 October and Thursday 12 October at the Ricoh Arena, Coventry, is Photonex 2017. This covers effectively five shows, including Photonex itself (billed as The Technology of Light); The Enlighten Conference; Vision UK (machine vision and factory automation); the 2017 IEEE Conference (high power diode lasers and systems); and Vacuum Expo.

Vision UK is setting itself up as a key exhibition for the machine vision industry. The focus is on cameras, components and systems for applications in industry. It is targeted not only at component manufacturers, but is also a forum for system suppliers and integrators.

Imaging technology, machine vision and factory automation will all feature at the event. As for the benefits of attending, the organizers told MVPro Magazine: "VISION UK provides you with the opportunity to meet top suppliers that can help design your systems. They will be able to show you technologies that can automate elements of your production line to improve reliability and productivity. Meeting like-minded individuals and experts can open your eyes to a wealth of possibilities. Great components and effective system integration is essential to the development of a successful vision system for your process."

The event is supported by the UK Industrial Vision Association, which has organised a strong technical and application-orientated seminar called Industrial Vision Works! It takes place on the second day and shows visitors how to adopt industrial vision solutions to their processes, increasing reliability and product quality.

There will also be a one-day programme of talks entitled Hyperspectral Imaging – Technologies & Application-ready Solutions, which will address the applications for hyperspectral imaging. It plans to dispel myths and provide a pan-technology review, introducing applications and uses for this developing technique.



Next up is the new UK robotics and automation event. Called Robotics and Automation Exhibition 2017, it will be staged at Arena MK, Milton Keynes, on 11-12 October, and show highlights include:

• 40-plus exhibitors demonstrating their technology, its practical application and competitive advantage;
• 1,000-plus visitors from the retail, manufacturing, automotive, distribution, warehousing and pharmaceutical sectors discovering how the latest solutions could be applied to their business for massive operational benefit;
• 15-plus seminar sessions where the minds behind the latest technology will share their insights and give real-world examples of how they have transformed their customers' operations;
• live demonstrations of solutions, giving first-hand insight into how they work in practice – source the right solution first time.

The busy week finishes with the EMVA-organised Embedded Vision Europe Conference, which takes place in Stuttgart on 12 and 13 October. The key speakers have been announced. David Moloney, Director of machine vision technology, NTG, at Intel Corporation, will talk about 'Low-cost Edge-based Deep Learning Inference and Computer Vision in Consumer and Industrial Devices'. Also speaking is expert technologist at AMD, Dr Harris Gasparakis, who will focus his talk on the question 'How to get the best out of heterogeneous system architectures for vision applications'. Paul Maria Zalewski from Allied Vision Technologies speaks about 'Bringing machine vision performance to embedded systems – Camera Modules with advanced image pre-processing for Embedded Vision'. Marco Jacobs, Vice President of Marketing at Videantis, will talk about 'Demystifying embedded vision processing architectures'. Giles Peckham from Xilinx dedicates his presentation to the topic of 'reVISION – Accelerating Embedded Vision and Machine Learning applications at the Edge'.

The list of confirmed speakers further includes Martin Wäny, CEO at Awaiba; Alexander Schreiber, Principal Application Engineer at The MathWorks; Dr Hans Ebinger, Head of Sales and Marketing at ESPROS; Jochem Herrmann, President of the EMVA; Olivier Despont from Cognex; and Dr Thomas Däubler, CTO at NET.

Florian Niethammer, team leader of VISION at Messe Stuttgart, said: "We have increasingly observed in recent months how important the 'embedded' issue has become. We are dealing with an overarching technology here, which is of significant relevance in industrial as well as in non-industrial sectors. It is a logical step to organize a conference for developers and users of embedded vision systems together with our longstanding partner, the European Machine Vision Association. The topic is booming, which is why we are creating a professional platform with a strong partner."

The conference takes place at the ICS International Congress Center Stuttgart, next to Stuttgart Airport.

For November, it's a pop across the pond for the Collaborative Robots & Advanced Vision Conference, which is scheduled for 15-16 November at the Doubletree by Hilton San Jose, San Jose, California. The event is billed as one pass, two conference tracks. The organizers say: "Collaborative Robots and Advanced Vision are two of the most cutting-edge topics in automation today. At this two-day conference you will explore a range of current advancements in both fields focusing on technology, applications, safety implications, and human impacts.

"Whether you're looking to implement your first ever automation system, take your current system past its limitations, grow your understanding of the available technology, or learn more about the market in general, this conference is right for you!"

The year wraps up with a trip to Nuremberg, Germany, for SPS IPC Drives. It's a three-day event (28-30 November) and claims to be Europe's leading exhibition for electric automation. Organizers say it covers everything from components to complete systems and integrated automation solutions; the ideal platform for comprehensive information on products, innovations and current trends within the electric automation industry.

And, finally, for those that like forward planning, the dates for Vision 2018 have been confirmed. It's the same venue again of course (Stuttgart, Germany) and the 2018 show will have a lot to live up to. The 2016 show was a great success, with record numbers of exhibitors and visitors.

And if Vision 2018 seems like a long way away, the UK still won't have left the EU by then!

FORTHCOMING CONFERENCE DIARY CHECKER

DRINKTEC
11-15 September 2017, Messe München, Munich, Germany

2017 2nd INTERNATIONAL CONFERENCE ON ROBOTICS AND MACHINE VISION (ICRMV 2017), EI COMPENDEX AND SCOPUS
15-18 September 2017, Kitakyushu, Japan

PHOTONEX 2017 (VISION UK)
11-12 October 2017, Ricoh Arena, Coventry, UK

ROBOTICS AND AUTOMATION EXHIBITION 2017
11-12 October 2017, Arena MK, Milton Keynes, UK

1st EUROPEAN EMBEDDED VISION CONFERENCE
12-13 October 2017, ICS Stuttgart, Germany

COLLABORATIVE ROBOTS & ADVANCED VISION CONFERENCE
15-16 November 2017, Doubletree by Hilton San Jose, San Jose, California, US

SPS IPC DRIVES
28-30 November 2017, Messe Nuremberg, Nuremberg, Germany



VISION BUSINESS

WILHELM STEMMER BOWS OUT IN STYLE AND BASLER ACQUIRES MYCABLE

Editor Neil Martin looks at how a company trade sale should be handled, and at the latest acquisition by Basler

The big news when it comes to the business of machine vision was, of course, the sale of STEMMER IMAGING. And if you want to plan your company sale, then this might be just the template you should follow.

But, getting to the beach via a trade sale has to be handled with a certain amount of delicacy, unless the owner wants to leave with former colleagues sticking pins in their image.

When founder-managers reach that certain age and the beach, or wherever, beckons, conducting a trade sale that pleases everyone, including the staff who are left, is a hard act to pull off. Some go down the flotation route, but few businessmen with enough entrepreneurial spirit to get a company started want to stick around after the public listing and subject themselves to a ritual bullying every three or six months when the results come out. That's not something you'd wish on your worst enemy.

Wilhelm Stemmer with managing directors Christof Zollitsch and Martin Kersting




I have to say, on the surface at least, and I've yet to hear any moaning, STEMMER IMAGING appears to have pulled it off. The only criticism I have, and it's a small point, is that the announcement came out on a Friday afternoon. This is when many journalists, whilst keen to sample their local brew and complain about how tough a week they've just had, are not at their desks. It is not the best time to choose. In fact, it's a given that public companies select Friday afternoon as the best time to present bad news (because said journos are down the pub). But, let us push those thoughts aside.

STEMMER IMAGING is a major player in the sector and has successfully carved out a market for itself. As it waves goodbye to its independent status after 44 years, its latest results showed some decent growth. For the year ended 30 June 2017, turnover was €88.7m which, once exchange rates were adjusted, represented growth of 6%. Exchange rates worked against the group, which operates in 19 countries.

Managing Director Christof Zollitsch said: "The biggest increase as a percentage was realised by our branches in Finland, the Netherlands and Switzerland. It was unfortunate that substantial growth, especially in the UK but also in Sweden, was offset by the depreciation of the local currencies against the Euro."

He added: "We have extensive resources in this area that enable us to significantly reduce the effort that our customers have to put in to solve their image processing tasks. Our range of services increases our customers' profitability and thus represents a major criterion for co-operating with STEMMER IMAGING as a preferred partner for all aspects of image processing."

The results were interesting, but of course the big news was that Wilhelm Stemmer had announced his retirement from the operating business.
He sold his shares, but did it in such a way that he handed over ownership to another company – in this case, Munich-based AL-KO AG, parent company of the international AL-KO KOBER SE – which got 70.04% of the shares; he also sold a further chunk of stock, 24.96%, to his existing management team. It seems a great compromise – you sell your company, but give the senior management a sense of ownership as well. Now, of course, we are not privy to the details, or the actual price paid for the shares, but let's call it a good deal for all concerned.

Stemmer explained it like this: "I have been an entrepreneur for 44 years and am now 73 years old. It was time for me to find a sustainable succession plan to ensure the continuation of my life's work. With AL-KO AG as the majority owner, STEMMER IMAGING will continue to be able to tap into future markets and expand them further. From now on Martin Kersting, who has been a partner in the business until now, will control the fortunes of STEMMER IMAGING as managing director together with Christof Zollitsch, whom I also appointed managing director back in 2001. So I know that the company is in the best hands for the future and I wish both of them, and indeed the entire workforce, all the best for the future."

Basler

Basler concluded its takeover of mycable on 1 June 2017. mycable is based in Neumünster, and founder and Managing Director Michael Carstens-Behrens will continue to work for the company and Basler in the future.

mycable GmbH is a highly specialized consulting company in the area of embedded computing systems which was founded in 2001. Its customers come mainly from the automotive and computer vision industries. It has 13 employees who support their customers in the selection of embedded computing architectures. Consulting and conceptual designs, as well as prototypes and serial products, are developed within the framework of a customer order.

The deal is a tactical move by Basler, which intends to increase its market penetration in the rapidly growing field of embedded vision technology. It also wants to significantly reduce the integration effort of embedded vision technologies for its customers. With its dart camera module series, the company has been addressing a wide range of applications in the field of embedded vision for more than two years.

Basler said that mycable is an experienced provider in the field of consulting and development of embedded computing architectures. Those who want to use Basler camera modules, but are hesitant due to the time-consuming integration of embedded processing platforms, will benefit from the merger and the associated expansion of competence, said the company.
CMO Arndt Bake said: "Together, we have great potential to make embedded vision technology usable for a broad range of users."

For mycable, the vision market is already its strongest market segment, and the Basler deal offers it an ideal platform for the marketing of its products and services. Carstens-Behrens said: "We will benefit from Basler's global sales and service network and its strong brand to provide our know-how and achieve the best possible growth in the embedded vision market."



PUBLIC VISION

TEN YEARS ON FROM THE FINANCIAL CRISIS

Editor Neil Martin takes a quick look at the UK and US financial markets, before turning the spotlight on the recent figures from Cognex and Basler

Ironically, I started writing this article on the tenth anniversary of the very day that most agree signalled the beginning of the 2007/2008 financial crisis. Ten years ago a French bank came out with the news that it had been sold a dud and that the package of mortgages it had bought was almost worthless. That sent alarm bells around the financial system and we all know what happened then. The financial house of cards nearly came crashing down. And ten years on we are still feeling the pinch, with many countries having to face austerity programmes and the knowledge that, like it or not, interest rates are going to rise again, and what will happen to the system then? The system may have been wheeled out of intensive care, but can we say the patient is fully recovered yet?

Our banks are in a better state than ten years ago; regulators have forced them to be better prepared for the next financial shock. But personal debt is on the rise and central bankers are nervous that a credit bubble is building. And as the growth of many economies is so reliant on ever-increasing consumer spending, if that tap is switched off, then what happens to growth?

So, the big investment houses are busy figuring out what is around the corner, and the one area of common worry is that the UK and US equity markets are running on too hot a setting.

The disadvantage of writing for a long-lead-time magazine such as MVPro Magazine is that things in the financial system can change in a matter of minutes. Here I can worry about the height of the markets, the froth that we all know is there, but by the time that we publish the markets could literally be anywhere, on their knees, or reaching new heights.

Everyone knows of course that when markets are at record highs they will correct, not just because of the prevailing financial sentiment, but because a bout of profit taking can be very rewarding. The markets may look over-valued at the moment, but much will depend on how the recent earnings season was received by investors.

In terms of a global equity outlook, Blake Hutchins, co-manager of the Investec Global Quality Equity Income Fund, said: "At the beginning of the year global equity investors were hoping that the inauguration of Donald Trump in the US would lead to significant tax cuts, fiscal reform, deregulation and infrastructure spending. This in aggregate would lead to a huge fiscal boost to the US and global economy, driving global developed equity markets even higher.

"What we have seen since is an unwinding of that reflation trade, as lower quality, more economically sensitive sectors such as energy and financials, the primary beneficiaries of the reflation trade, have seen their strong performance fade. Just as the market underestimated the support for Trump before the November US election, it then overestimated the speed and extent to which the 45th US president could enact his legislative agenda.

"Instead corporate earnings have had to pick up the slack, in areas of the market not solely reliant on the cycle or an external boost to growth. Year-to-date global equity market performance has been driven primarily by the technology sector as investor focus has shifted back from macro events and sentiment to company fundamentals. Earnings growth, driven by structural not cyclical forces, is what is currently being rewarded by the market, and this is an important backdrop to the global equity outlook from here."

Paolini, of Pictet Asset Management, reckons that the hour of reckoning is here, and that the equity rally and economic growth are losing steam.

He said: "Riskier asset classes, particularly equities, continue to draw support from reasonable global economic growth and continued monetary stimulus. Investors should be wary for several reasons, however.

"First, the equities rally is losing steam. Second, economic growth appears to have plateaued. Third, central banks are slowly but steadily preparing for a winding down of some monetary stimulus measures.

"The onus is now on equities to justify their strong performance after a spectacular run, with suitably strong earnings numbers. Expectations are running high, particularly in the US, in turn raising the risk of disappointment.




"The consensus view on US earnings implies real GDP growth in excess of 3 per cent, which has not been seen in over a decade. This is in stark contrast with economic realities."

So, in short, the markets are going to be an interesting spectacle from hereon in, for the remainder of the year.

Cognex

Having a significant player in the machine vision sector as a public company, which has to bare its soul every three months, allows us to take a good look not only at how it is doing, but at how the rest of the market might be doing. And Cognex's figures were good.

The machine vision giant, which is based in Natick, Massachusetts, US, announced an impressive set of second quarter figures, a record for the NASDAQ-quoted company.

Its financial results for the second quarter of 2017 (ended 2 July 2017) showed revenue of $173m for Q2-17, up 17% from Q2-16 and 28% from Q1-17. Net income for the quarter, from continuing operations, was $56m, compared with $43m in the prior year's quarter, Q2-16.

Growth year-on-year across a number of industries was partially offset by lower revenue from the consumer electronics industry. On a sequential basis, the largest contributions came from consumer electronics and logistics.

Gross margin was 78% for Q2-17, 76% for Q2-16 and 79% for Q1-17. It increased year-on-year due to cost efficiencies related to higher sales volume, and an inventory charge in Q2-16 that did not repeat.

RD&E expenses increased 19% from Q2-16 and 3% from Q1-17. They increased both year-on-year and sequentially due to higher employee-related costs, including the addition of new engineering personnel from the company's recent acquisitions.

As for the balance sheet, as of 2 July Cognex had $765m in cash and investments, and no debt. For a public company in a market which demands a sizeable commitment in terms of forward planning and R&D, the company's cash position will be the envy of many.

Founder and Chairman of Cognex Dr Robert J Shillman said: "What a great quarter! The highest quarterly revenue in Cognex's 36-year history came from growth across the broad factory automation market. Equally important is that we also set a new, and ridiculously high, level of profit."

Chief Executive Officer of Cognex Robert J Willett said: "Activity at Cognex is at a higher level now than ever before. We are seeing strong demand across a broad range of geographies and markets. It is very gratifying to see that our investments in engineering and sales continue to pay off."

And with public companies it's just as important to consider future guidance as past results, and here Cognex didn't disappoint either. Revenue for Q3-17 is expected to be between $250m and $260m.

In its official statement, the company said regarding its guidance: "This range represents a substantial increase both year-on-year and sequentially due to higher anticipated revenue from the consumer electronics industry. Cognex believes that the majority of larger consumer electronics orders in 2017 will be recognized as revenue in Q3, as compared to 2016, when they were more evenly split between Q2 and Q3.

"Gross margin is expected to be in the mid-to-high 70% range, closer to the midpoint of the range as compared to the higher end reported in Q2-17. Operating expenses are expected to increase by approximately 10% on a sequential basis due to continued investments in growth activities and costs associated with the company's recent acquisitions."

Observers see the third quarter as an exception, not a trend, with the fourth quarter likely to be quieter. Traditionally, Cognex makes hay in the second and third quarters, and has quieter first and fourth quarters.

So, Cognex is confident about the future, but as people point out, it's not a cheap stock, so investors do expect to see healthy levels of income generation. And although the third quarter is meant to be a "monster", to quote the management, investors will really want to wait for the fourth quarter guidance before they make up their minds.




Basler

Not to be outdone by Cognex, Basler has also brought in some very decent figures, with strong growth and sound profitability. The second quarter results of 2017 show a very successful first half year for the leading global manufacturer of industrial cameras, with incoming orders up 100%, sales up 62% and a pre-tax result up 243%.

In the first six months of the fiscal year 2017, the group's incoming orders amounted to €100.4m, compared with a previous-year figure of €50.2m, up 100%. This was just below the total incoming orders for the whole of the previous year. The group's sales of €78.5m were 62% above the previous year's level of €48.5m.

The gross profit margin slightly increased and amounted to 50.3% (previous year: 49.7%). The earnings before taxes (EBT) for the group were positively impacted by economies of scale and amounted to €18.2m (previous year: €5.3m). The pre-tax return rate amounted to 23% (11%). At a slightly increased tax ratio, the result per share went up from €1.19 to €4.03.

CFO Hardy Mehl said in a statement: "In a very dynamic market environment, Basler AG closed the first half-year of 2017 with new record values in incoming orders and sales. For the first six months of 2017, the VDMA (Verband Deutscher Maschinen- und Anlagenbau, the German engineering association) reported the strongest growth for image processing components in 15 years. For German manufacturers of image processing components this meant order growth of 47% and sales growth of 43%; in the same period Basler's incoming orders grew by 100% and sales by 62%.

"With these tailwinds we are very well prepared for the second half-year of 2017 and will continue to forge ahead with our growth strategy. Regarding the market situation, after a very dynamic start to the year, we expect a slowdown in the second half-year that can already be seen in our incoming orders."

Note Mehl's last sentence, which signals some slightly heavier weather on the horizon. Basler recently raised its 2017 forecast, though, and now plans for group sales in a corridor of €140-150m at a pre-tax margin of 15-18%.

Basler's share price was off slightly following the results which, given the sentiment regarding future sales, was to be expected. It will be very interesting to see how the company is doing when it next updates the markets on its financial progress.



CUSTOMAXIMIZED! Sensor? Housing? Lens holder? Plug orientation? It's your choice! The uEye LE USB 3.1 Gen 1

BOARDLEVEL VERSIONS

SINGLE-BOARD OPTION: PLUG ORIENTATION CAMERA

MIC OPTION

USB TYPE-C

USB POWER DELIVERY

1 SOFTWARE FOR ALL

OPTION: LENS HOLDER

WIDE RANGE OF SENSORS

Learn more about the possibilities with the uEye LE camera: www.ids-imaging.com/usb3.1

It's so easy!

www.ids-imaging.com


WELL, THAT WAS A NICE SUMMER

Almost before you know it the summer is over, and we're back to shorter days and colder weather. But it's set to be a very exciting time for the machine vision industry and at MVPro we're looking forward to the remainder of the year.

I had the good fortune to spend three weeks in the US over the summer and I really enjoyed it. The country has an infectious buzz and I'm looking forward to working more with companies and management teams out there. The same sense of excitement can be felt in Europe as well. The companies I'm talking to on a regular basis sound optimistic about the future and are looking forward to growing their businesses. From where I sit, the machine vision industry looks in good health and is confident of having a good second half of the year.

We're also entering another busy conference season, with the EMVA Forum in early September and the PPMA show at the end of the month, before we head off to Coventry for Vision UK in mid-October and, just afterwards, attend the Embedded Vision Conference in Stuttgart. We finish our 2017 show schedule in Nuremberg near the end of November for the SPS IPC Drives event. We hope to see you at these events and, as always, I'm happy to meet and talk about machine vision at every opportunity.

And don't forget that we've now launched our sister platform RoboPro. The website is up and running and has already attracted a lot of attention. The first issue of the magazine is out later this month and we hope you'll take a look. There is a huge cross-over between machine vision and robotics of course, and many of my conversations with clients now involve discussions about joint marketing across the two platforms. I'm always happy to explore these opportunities and find the best solution to fit your business.

Cally Bennett Group Business Manager MVPro | RoboPro Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB Email: cally.bennett@mvpromedia.eu Tel: 0117 3705807 Mob: 07713 035 270 www.mvpromedia.eu

So, there we are, let’s have a great few months and as ever, I look forward to our future conversations.

Cally

Cally Bennett
Group Business Manager
MVPro



