EUROPEAN EMBEDDED VISION CONFERENCE 2017
TALKING TO START-UP HC-VISION
SPS IPC DRIVES 2017 NUREMBERG
ISSUE 6 - DECEMBER 2017
EMBEDDED VISION – MACHINE VISION ON STEROIDS
mvpromedia.eu
MACHINE VISION PROFESSIONAL
WIN WITH THE
IMPERX
3G-SDI CAMERA
SERIES NOW AVAILABLE IN ½” AND 2/3” OPTICAL FORMAT ● MIL-STD-810F ● 1.5G: -40°C to +80°C, 3.0G: -30°C to +80°C ● HD-SDI (SMPTE 292M), 3G-SDI (SMPTE-424-1) ● Tri-level sync ● 1080P @ 60fps ● 37mm (W) x 37mm (H) x 48.6mm (L) sales@imperx.com +1-561-989-0006 www.imperx.com
CONTENTS
MVPRO TEAM Neil Martin
4
Welcome to MVPRO
5
NEWS a round-up of what’s been happening in the Machine Vision sector
20
HC-VISION We catch up with Boaz Arad, winner of the EMVA Young Professional Award 2017 and ask him about his new start-up company HC-Vision
22
XILINX The future is Embedded, says Giles Peckham, Regional Marketing Director at Xilinx, and his company is not about to miss the boat
24
FUJI The focus for Germany-based FUJIFILM Optical Devices Europe is growing the market for its main machine vision brand FUJINON
26
MATROX As Matrox enters its fourth decade in business, it’s looking forward to the challenges ahead, especially in the direction of machine learning
28
MILITARY IMAGING PLEORA A manufacturing floor and battlefield may seem worlds apart, but the two worlds are coming ever closer
32
EURESYS A pioneering force behind the CoaXPress standard, Euresys tells us more about its new 8-connection CoaXPress frame grabber
34
TELEDYNE DALSA When and How to Use Multiple Smart Cameras in Complex Vision Systems
36
ADVANCED ILLUMINATION From its pioneering start in 1993, Advanced Illumination has managed to keep its nose in front
38
MACHINE VISION ON STEROIDS A close look at the first ever European Embedded Vision Conference
48
CONFERENCES Editor Neil Martin highlights SPS IPC Drives and also looks ahead to the 2018 conference season
50
BUSINESS STORIES Recent news and stories from the heart of businesses within the Machine Vision Industry
56
PUBLIC VISION Editor Neil Martin takes a look at the current economic situation, and then examines the latest figures from Xilinx and Cognex
Editor-in-Chief neil.martin@mvpromedia.eu
Alex Sullivan Publishing Director alex.sullivan@mvpromedia.eu
Cally Bennett Group Business Manager cally.bennett@mvpromedia.eu
Paige Haughton Sales and Marketing Executive Paige.haughton@cliftonmedialab.com
Visit our website for daily updates
www.mvpromedia.eu
MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0) 1179 089686 © 2017. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies. Designed by The Wow Factory www.thewowfactory.co.uk
THAT WAS 2017!

Well, as I write this, 2017 is drawing to a close and, as I point out in our quick look at the conference scene later in the magazine, the team here are looking forward to topping off a great 12 months with a visit to Nuremberg for SPS IPC Drives 2017. Many machine vision companies will be out in force and it will be a good time to reflect on how the industry has fared over the year and what 2018 holds for all of us.

For the MVPro platform we have had a great year and it’s been great fun working with some of the best companies in the industry. We have established some great relationships and these will be enhanced further as we work through 2018.

We also launched our sister publication, RoboPro magazine. This is also supported by a media platform including a website and highly developed databases, following the same model as MVPro. The focus of RoboPro magazine is precision robotics, collaborative robots and next-generation automation. It allows us to range far and wide across one of today’s most exciting industrial sectors.

Both publications now reach out into the US, as well as covering the UK and mainland Europe. In 2018, we hope to extend our reach to Asia.
In this issue our focus is on the first ever European Embedded Vision Conference, which took place last month.

For a first show it was a success in terms of a nice attentive audience and some good speakers. Those who attended and invested flight, hotel and time costs will of course judge it on what they learnt for their businesses and how it could help their growth. That calculation will take a matter of time; suffice it to say that embedded is generally seen as a huge catalyst for growth, one that will kick machine vision into a new stage of its development. It was billed by those at the show as a shot in the arm for an industry which has increasingly been labelled as ‘mature.’ Embedded, coupled with deep learning, is just the tonic that the industry needs, and you got the impression from the attendees that they were at the show to understand how they can get the ticket to ride the rocket. We all know that there are riches to be made; what we want is to find the key to unlock those riches.

Embedded vision is a topic we’ll no doubt be returning to many times in 2018.

Okay, I know it’s slightly early, but have a great end of year, however you celebrate it, and see you in 2018 for another exciting ride.
Neil
Neil Martin, Editor, MVPro
neil.martin@mvpromedia.eu
Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB
MVPro: B2B digital platform and print magazine for the global machine vision industry
RoboPro: B2B digital platform and print magazine for the global robotics industry
www.mvpromedia.eu
NEWS
SONY STRENGTHENS GS CMOS PORTFOLIO, AND SIMPLIFIES SWITCH TO DIGITAL WITH 75FPS SXGA XCG-CG160 MODULE

Sony Europe’s Image Sensing Solutions has introduced the first in a new series of SXGA modules – the Sony XCG-CG160. The camera, which features a 1/3-type IMX273 sensor, is a low-disruption way to move from CCD to GS CMOS. The company says it’s an ideal replacement for cameras using the highly-regarded Sony ICX445 CCD sensor. The first modules to be announced use the GigE standard, running at 75 frames per second in SXGA resolution. Black and white modules are available immediately and colour modules will enter mass production in December 2017.

The XCG-CG160 provides a simple migration path from CCD to GS CMOS without necessarily having to upgrade or change architecture. The IMX273 shares comparable sensor and pixel size characteristics with the well-regarded Sony ICX445, but offers huge technological improvements in sensitivity, dynamic range, noise reduction and frame rate capability. The camera’s performance is further enhanced through Sony’s world-renowned know-how in camera design.
Senior Marketing Manager at Sony Image Sensing Solutions Matt Swinney said: “These cameras again bring together Sony’s renowned module engineering with the best of Sony’s sensor technology, extracting the best possible image from the IMX273 with additional performance enhancing unique features. With it, we believe, we have set the industry standard for an SXGA module.”
A USB 3.0 module, Sony’s first, is also scheduled to enter mass production by Q1 2018. The modules have been designed to lead the market in terms of image quality, and are targeted at a wide array of markets; from print, robotics and inspection to ITS, medical and logistics, as well as being suitable for general imaging.
NEW BASLER VIDEO RECORDING SOFTWARE AVAILABLE

Basler (Ahrensburg, Germany) is now offering a software solution to enhance the possibilities of microscopic imaging. The software aims to make taking single images, recording videos and capturing image or video sequences simple and intuitive. It also offers camera control features to improve image quality, to set up different options for recording and to use hardware trigger signals. When using a Basler Microscopy ace camera, the software can even take images or videos automatically using hardware trigger signals. This is useful in many cases and can, for example, support hands-free documentation during material inspection when using a foot-operated switch connected to the camera.

Comprehensive software features at a glance:
• Live view and camera control
• Image adjustments and automated settings
• Videos in modern MPEG-4 format
• High-speed recordings for slow-motion analysis
• Image and video sequences for time-lapse microscopy
• Image capturing with hardware trigger signal
• Easy installation and intuitive user interface
• Supported operating systems: Windows 7, Windows 8.1, Windows 10 – 32 bit and 64 bit
• User-friendly software design for ease of use

The Basler Video Recording Software comes with each Basler PowerPack for Microscopy, and all Basler USB 3.0 cameras can be connected.
WAGNER HEADS UP EMBEDDED VISION BUSINESS AT MVTEC

MVTec Software (Munich, Germany) has made Christoph Wagner responsible for growing the company’s embedded vision business.

Wagner, 34, has become the company’s new Product and Business Development Manager, and will work to further increase MVTec’s presence in the international embedded vision market. The mechanical engineer and certified business administrator has extensive expertise in machine vision. He has also gained more than ten years of experience in product management, technical support, as well as research and development.

Previously, Wagner worked primarily on the hardware side, in particular in the area of 3D vision. Before joining MVTec, he was employed at wenglorMEL as a technical support manager and product manager for 2D and 3D profile sensors (optoelectronic sensors). Before that, Wagner worked at SmartRay GmbH, where he held various positions in customer support, order processing, project management of customer-specific applications, as well as in research and development.

With HALCON Embedded, the company has been building expertise in porting its leading machine vision software, MVTec HALCON, to a wide range of embedded platforms since 2005. The latest version 13.0.1 of HALCON also supports Arm®-based platforms running the Linux operating system. In addition, MVTec implements well thought-out embedded vision applications in close collaboration with a strong network of hardware partners.

Wagner will hold a presentation on “Embedded Vision – Efficient Development of Applications Using Professional Vision Software” at the embedded world Conference 2018 in Nuremberg, Germany.

Wagner said: “I very much look forward to joining the leading manufacturer in the market for standard machine vision software. As part of this work, I will be able to purposefully combine my machine vision expertise with my knowledge in the area of hardware development. I am eager to help actively and profitably strengthen the embedded vision technology field – which is equally important for both customers and MVTec – and to broaden MVTec’s presence in this international growth market.”
STEMMER IMAGING GETS INTEL 400 SERIES OF REALSENSE IMAGING TECHNOLOGY STEMMER IMAGING has been appointed as an approved supplier for Intel RealSense technology. It will promote and provide support services to accelerate the integration of the latest 400 series of Intel RealSense 3D imaging technology into real world applications. The new RealSense camera 400 series from Intel is a modular concept that provides depth information and full colour HD images in both indoor and outdoor applications. It utilises proven Intel depth
sensor technology and includes a range of cameras and board level modules with depth computation via a next generation Intel ASIC. The technology supports both PC and embedded applications. Mark Williamson, Director – Corporate Market Development at STEMMER IMAGING, said: “This exciting new technology represents a major development for 3D depth imaging and sets new levels of performance at a price point that opens up the use of 3D imaging to high volume embedded applications.
It will have many applications in a wide variety of industries including retail, robotics and drones. The modular approach provides the flexibility to address the needs of many different market segments. In particular it brings 3D perception capabilities to the growing embedded vision sector, with modules optimised for embedded power and performance.” Intel engineers are introducing the technology with talks and live demonstrations at the STEMMER IMAGING Machine Vision Forum Tour 2017.
FLIR WINS LARGE MILITARY CONTRACTS FLIR Systems (Wilsonville, Oregon, US) has won large military contracts worth a combined total of $90.3m. Firstly, it has been awarded a $74.7 million firm-fixed-price order to deliver TacFLIR surveillance cameras in support of the US Army EO/IR-Force Protection (FP) program. The US Army will purchase the systems through Army Contracting Command, Redstone. The units delivered under this contract will support the ongoing US Army EO/IR-FP program, which provides enhanced perimeter security and force protection for US troops stationed around the world. As part of the same program, FLIR was also awarded an $8.8 million contract in the third quarter of 2017 to deliver other FLIR Ranger radars.
CEO and President of FLIR Jim Cannon said: “We are honored to continue our long-standing support of the US Army. This program highlights our ability to rapidly deploy our technology for critical missions and underscores FLIR’s commercially developed, military qualified approach.” Secondly, the NASDAQ quoted company has been awarded a $6.8m firm-fixed-price order to deliver Black Hornet Personal Reconnaissance Systems (PRS) in support of the Australian Army. The units delivered under this contract will support platoon and troop level organic surveillance and reconnaissance capabilities. The Australian Army previously purchased the Black Hornet PRS for test and evaluation purposes, leading to the awarded contract for full operational deployment after a re-competed tender.
Cannon again: “We are pleased to be selected by the Australian Army to provide this previously non-existent personal reconnaissance technology. This recent contract highlights the increasing demand for the Black Hornet to be incorporated within the operational capability of the world’s leading militaries, providing immediate deployable security.”
FLARE 48MP SERIES NOW INCLUDES NEW CMV50000 SENSORS

The Flare 48MP series, part of IO Industries’ family of industrial video cameras designed with advanced CMOS image sensors, now comes with the all-new CMV50000 sensor. The sensor features a high pixel count, producing highly detailed images with a 7920 x 6004 resolution. It also features very fast readout, allowing frame rates up to 30 frames per second. Multiple region-of-interest settings (up to ten windows) allow lower-resolution output formats at much higher frame rates. This flexibility makes the Flare series, says IO Industries, effective in applications including automated optical inspection systems, space launch imaging and wide area surveillance.

The CMV50000 sensors feature pipelined global shutter technology, which allows the sensor to expose a frame during readout of the previous frame, whereas sensors without this feature must either use very short exposure times or be otherwise unable to achieve fast frame rates. Global electronic shutter exposes each pixel in the frame at the same time, as opposed to rolling shutter designs, which sequentially expose rows of pixels. The latter approach leads to motion artefacts as well as uneven frame illumination if any bright flashes occur during the exposure period.
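To make the benefit of pipelined readout concrete, here is a small illustrative calculation. It is only a sketch: the exposure and readout times below are assumed values for the example, not published CMV50000 figures.

```python
# Illustrative frame-rate arithmetic for pipelined vs. non-pipelined global shutter.
# The exposure and readout times are assumptions for this example, not
# published CMV50000 specifications.

exposure_ms = 5.0    # assumed exposure time per frame
readout_ms = 33.0    # assumed time to read a full frame off the sensor

# Non-pipelined: exposure and readout happen one after the other.
sequential_fps = 1000.0 / (exposure_ms + readout_ms)

# Pipelined: the next frame is exposed while the previous one is read out,
# so the frame period is limited by whichever step takes longer.
pipelined_fps = 1000.0 / max(exposure_ms, readout_ms)

print(f"sequential: {sequential_fps:.1f} fps, pipelined: {pipelined_fps:.1f} fps")
# With these assumed numbers: roughly 26 fps sequential vs 30 fps pipelined.
```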
NEW GENERATION OF PHOTOELECTRONIC SENSORS FROM WENGLOR

Under the tag line “Making Industries Smarter”, intelligent sensors with five different functional principles were introduced to the market. Wenglor says that, consistently equipped with IO-Link 1.1, the sensors stand for thinking, networked, learning sensors for the future of Industry 4.0. The 1K miniature design was presented as the first representative of the product range.

All of the sensors included in this product range have a unique combination of outstanding communication capabilities and photoelectronic performance values. For example, sensor settings can be easily saved and duplicated for other applications thanks to the IO-Link interface. This simplifies initial start-up and saves time. Condition monitoring permits predictive maintenance. Should replacement become necessary, the sensor’s configuration is automatically transferred to the new device by means of the data storage function. Beyond this, clear-cut visualization provided by standard wTeach2 software makes it possible to match the sensors precisely to complex applications. Switching thresholds and functional reserve are rendered visible, thus permitting a qualitative assessment.

The sensors work with laser, red or blue light, and offer outstanding optical characteristics as well. They’re adjusted at the factory such that each sensor has the same switching distance when the settings are identical (potentiometer/IO-Link). Thanks to the balanced spot with aligned optical axis, no subsequent realignment of the spot is required. This simplifies initial start-up as well as replacement. And the sensors don’t influence each other when they’re mounted directly next to or opposite each other. Furthermore, they’re insensitive to interfering light and are consistently available in laser class 1.

First Representative of the New Generation: The 1K Miniature Design

The first group of the new product range is based on the 1K miniature design. All five functional principles (reflex sensor with or without background suppression, retro-reflex sensor with or without clear glass recognition and through-beam sensor) are available in one of the world’s smallest housing formats, namely 32 x 16 x 12 mm. Sensors with 1K housing can be connected either via plug connector, pigtail or cable, and can be configured by means of potentiometer, or via teach-in with the high-end variant. Thanks to a minimal weight of just 4 grams, they’re also very well suited for applications on robot arms. IP67/IP68 protection, LEDs which are visible all the way around and an extended temperature range of -40 to +60 °C round out the portfolio.

Highlights of the PNG// smart series:
• all sensors consistently equipped with IO-Link 1.1;
• condition monitoring;
• data storage, high-end variants;
• balanced switching point and spot;
• highly insensitive to extraneous light;
• no reciprocal influence;
• all laser sensors also in laser class 1;
• diverse measuring tasks;
• wTeach2 software.
SCORPION VISION APPOINTS NEW SALES DIRECTOR

Julian Parfitt has joined Scorpion Vision as Sales Director. At Scorpion Vision, Parfitt is heading up the components division, responsible for sales and marketing of the full range of imaging brands. He has over 18 years’ experience in the global imaging and machine vision industry. Managing Director at Scorpion Vision Paul Wilson said: “We are delighted to welcome Julian to the growing team at Scorpion Vision. Julian is well known and respected in the industry and brings his extensive experience to the role.” Julian: “I am very pleased to be on board with the highly professional Scorpion Vision team here in the UK. We are at an exciting point in time in the machine vision and imaging industry with the rapid expansion of new applications and innovations which Scorpion Vision is well placed to serve. The recent partnership with Hikvision offers yet more potential to further expand our global customer base in both the integrated solutions and component markets and I look forward to driving this growth.”

BAUMER QX CAMERA SERIES AIMS TO RAISE THE BAR

The QX series cameras from Baumer set out to achieve new standards for high-speed image processing. With 12 megapixel resolution at 335 fps in Burst Mode, the cameras capture fine details and defects extremely precisely in high-dynamic processes. The 10 GigE interface continuously transfers data with a high bandwidth of 1.1 GB/s. This, says Baumer, makes the cameras ideal for applications that require very high frame rates for short sequences: for example, for process analysis in industrial applications, for living cell analysis in medical applications, for scientific research or for motion analysis in sports. A first model, equipped with the CMOS global shutter CMV12000 sensor by ams (CMOSIS), is available in the fourth quarter of 2017. The high speed of 335 fps in burst mode is made possible by an internal image memory of 2 GB. Up to 169 images can thus be buffered at full resolution. At maximum speed, this corresponds to a recording time of 0.5 s. If an ROI (Region of Interest) with 2 megapixels is used, for example, flexible storage management permits recording at almost 1000 fps, thus extending the recording time to 1 s at the highest speed. To transfer the buffered images quickly and to reduce the evaluation time, the QX cameras are equipped with the innovative 10 GigE Vision compliant interface. This makes them ten times faster than GigE Vision and 35 percent faster than Camera Link Full.
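The burst-mode figures quoted above can be sanity-checked with simple arithmetic: the number of buffered frames is roughly the image memory divided by the size of one frame, and the recording time is that count divided by the frame rate. The sketch below assumes 8 bits per pixel and the CMV12000's 4096 x 3072 resolution; actual capacity depends on pixel format and internal overhead.

```python
# Rough burst-buffer arithmetic for the figures quoted above.
# Assumes 8 bits (1 byte) per pixel; real capacity depends on pixel format
# and any internal memory overhead.

memory_bytes = 2 * 1024**3           # 2 GB internal image memory
full_frame_pixels = 4096 * 3072      # 12 megapixel full frame (CMV12000)
roi_pixels = 1920 * 1080             # an example 2 megapixel region of interest

frames_full = memory_bytes // full_frame_pixels   # ~170 frames (Baumer quotes up to 169)
time_full_s = frames_full / 335                   # at 335 fps -> roughly 0.5 s

frames_roi = memory_bytes // roi_pixels           # ~1035 frames
time_roi_s = frames_roi / 1000                    # at ~1000 fps -> roughly 1 s

print(frames_full, round(time_full_s, 2), frames_roi, round(time_roi_s, 2))
```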
BASLER STARTS PRODUCTION OF ACE U STARVIS

Basler (Ahrensburg, Germany) has started production of its ace U cameras with Sony STARVIS sensors. The models have back-illuminated rolling shutter CMOS sensors from Sony’s STARVIS line. The eight new models feature resolutions of 6 and 12 megapixels, are equipped with the IMX178 and IMX226 sensors from Sony’s STARVIS line, and deliver up to 59 frames per second. Basler said that the cameras stand out with their extremely high sensitivity at small pixel sizes of 2.4 µm (IMX178) or 1.85 µm (IMX226). What’s more, a special aspect of these sensors is the very low dark noise of only three electrons, combined with a quantum efficiency of over 80%.

The company also said that the new ace U cameras are particularly suitable for applications in the area of manual light or fluorescence microscopy as well as in the electronics industry for less complex assembly systems.

All new ace U models are available with the proven GigE or USB 3.0 interface, and conform to the GigE Vision 2.0 or USB3 Vision standard. The ace U color models also include the PGI feature set, the unique combination of 5×5 debayering, color-anti-aliasing, denoising and improved sharpness.
PIXELINK ROLLS OUT USB 3.1 GEN 2 INTERFACE Pixelink (Ottawa, Canada) says it will support the USB 3.1 Gen 2 interface on all machine vision and microscopy cameras. The company, which is a global provider of industrial cameras for the machine vision and microscopy markets, explained that as the Universal Serial Bus was fast becoming the most popular digital interface used in imaging today, the newly released USB 3.1 Gen 2 specifications will ensure that the growth of USB in imaging continues into the future.
Pixelink’s M-Series and PL-D line of cameras will be built to integrate with USB 3.1 Gen 2 specifications starting in early 2018. With a bus speed of 10 gigabits per second, the USB 3.1 Gen 2 interface can deliver more than double the throughput of the existing USB 3.0. All Pixelink 3.1 Gen 2 cameras will be fully backward compatible with existing USB 3.0 and USB 2.0 applications.
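The "more than double" claim holds up once line encoding is taken into account; the quick comparison below uses the nominal figures for the two USB generations and is independent of any particular Pixelink product.

```python
# Nominal USB signalling rates and line-encoding efficiency.
usb30_payload_gbps = 5.0 * (8 / 10)        # USB 3.0: 5 Gbit/s with 8b/10b encoding  -> 4.0 Gbit/s
usb31g2_payload_gbps = 10.0 * (128 / 132)  # USB 3.1 Gen 2: 10 Gbit/s with 128b/132b -> ~9.7 Gbit/s

print(usb31g2_payload_gbps / usb30_payload_gbps)   # ~2.4x the raw payload rate
```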
President of Pixelink Paul Saunders said: “We have embraced USB 3.1 Gen 2 and its technological innovations early on in R&D with the aim to provide our customers the opportunity to build and deploy vision applications with this new technology. Pixelink takes pride in leading the industry by implementing the most advanced technology available in the market and we are excited for our customers to experience the improved system performance that will allow them to deploy more sophisticated multi-camera vision applications.”
SP1 STEREO VISION SENSOR SUCCESSORS Two years since the launch of the Nerian Vision Technologies SP1 stereo vision sensor, two successors have been introduced: SceneScan and SceneScan Pro.
The new hardware allows processing rates of up to 100 frames per second at resolutions of up to 3 megapixels, which surpasses the SP1.
The table below illustrates the differences. Whilst SceneScan Pro has the highest possible computing power and is designed for the most demanding applications, SceneScan has been cost-reduced for applications with lower requirements. The customer can thus optimize his embedded vision solution both in terms of costs and technology. Nerian said that this technology opens up new possibilities for three-dimensional perception, even under difficult conditions. SceneScan will enable completely new applications in the fields of robotics, automation technology, medical technology, autonomous driving and other domains.
SMARTEK VISION LAUNCHES NEW CAMERAS Smartek Vision (Munich, Germany) has launched six new models of its twentynine camera series with the latest SONY Pregius 9 Megapixel and 12 Megapixel CMOS Global Shutter image sensors. FRAMOS is the distributor. The new cameras feature the SONY IMX253, IMX255, IMX267 and IMX304 sensors. Smartek said the cameras combine high-resolution with the benefits of the second-generation SONY Pregius technology in a very small form factor. They open new possibilities to fast and affordable Machine Vision applications and Intelligent Traffic Systems (ITS). The SMARTEK Vision’s UCC and GCC models support USB3 and GigE Vision.
Product Manager for the SMARTEK Vision line at FRAMOS André Brela said: “With the 9 and 12MP Pregius sensors, SONY extends the range of applications, significantly benefiting from excellent 2nd generation Pregius technology, into the high-resolution range, including smooth 4K video capturing, e.g. for measuring tasks. “With the twentynine series, SMARTEK Vision now integrates these sensors in their very small 29 × 29 mm footprint. This enables clients to significantly extend their overall system performance at minimal effort by using existing mechanical concepts, or to increase the level of miniaturization. In an optimal combination, SMARTEK Vision offers the camera family as an easy-to-adapt platform enabling quick and cost-effective customer-specific solutions.”
NEW STEREO 3D VISION SOLUTION A new stereo 3D vision solution has been launched by The Imaging Source. The company is an international manufacturer of machine vision cameras and has just announced that its new stereo 3D vision system, IC 3D, is available. The company claims it is a practical and flexible solution for many machine vision applications, and ideal for getting started in the field of 3D depth sensing. From image capture and rectification to precise depth estimation and visualization, the 3D system bundles application-specific industrial cameras with powerful 3D vision software to offer a complete 3D imaging solution.
The IC 3D software allows for easy calibration via the user-friendly interface and ensures that the system can be quickly adapted to a large range of working distances and fields of view. Depth maps and 3D point clouds can be rapidly visualized via the interactive 3D viewer. Calibration datasets and stereo image sequences can be exported and imported for later use.

Established in 1990, The Imaging Source has branches in the US, Taiwan and Germany. It is one of the leading manufacturers of imaging products for scientific, industrial and medical applications. The industrial cameras, converters and frame grabbers manufactured by The Imaging Source are robust and are designed to run maintenance-free for years in many applications: machine vision, AOI (automated optical inspection), visual inspection, factory automation, quality control, medical, life science, microscopy and amateur astronomy.
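The depth-from-stereo principle behind a system like IC 3D can be illustrated with a generic OpenCV sketch. This is not The Imaging Source's SDK: it assumes a pair of already-rectified grayscale images on disk and example calibration values.

```python
import cv2
import numpy as np

# Assumed inputs: a rectified grayscale stereo pair plus example calibration values.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
focal_px = 1200.0     # focal length in pixels (example value)
baseline_m = 0.10     # distance between the two cameras in metres (example value)

# Block matching finds, for each pixel, how far it shifts between the two views.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

# Triangulation: depth = focal length x baseline / disparity.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```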
THE SMALLEST SMART CAMERA ON THE MARKET EVT has introduced the EyeCheck ZQ smart camera, which it claims to be the smallest intelligent camera on the market. The company said that due to the speed, size and light weight of the EyeCheck ZQ smart camera, it’s a perfect solution for any robot arm in the production line in the automotive or electronic industry. Four LEDs provide onboard illumination. Also included in the EyeCheck ZQ is the EyeVision image processing software.
This is suitable for applications such as code reading (bar code, DMC, QR), OCR/OCV, pattern matching, recognizing errors and counting and measuring objects. The EyeVision software can be programmed via
drag-and-drop function, meaning that inspection programs are easily created. The commands are wrapped up in icons, which can be lined up into an inspection program. The EyeCheck ZQ is not only very small but, thanks to real-time evaluation based on the combination of the EyeVision software and the ZYNQ board, also really fast. The camera can process the application much faster than any other camera of the EyeCheck series, because of the powerful ZYNQ and the dual-core ARM processor.
»The Mitsubishi Electric LINE SCAN BAR solution offers high quality image acquisition in the smallest footprint ever!« Hans Gut VP Marketing & Sales, Hunkeler AG
Share our passion for vision. www.stemmer-imaging.de/CIS
THE PERFECT COMBINATION OF FAST DATA RATES AND LOW EFFORT
Authorized Distributor of LINE SCAN BAR
NEW GENIE NANO CAMERAS BUILT AROUND SONY’S 1/3” CMOS IMAGE SENSORS

Teledyne DALSA has launched two new models in its high-value, high-speed Genie Nano camera series.

Led by the latest IMX273 sensors from the Sony Pregius CMOS family, these new Genie Nanos are designed around 1/3” CMOS image sensors that replace Sony’s soon-to-be discontinued CCD sensors. The new models are offered in 1.6 MP (1456 x 1088) resolution with a GigE Vision interface in either color or monochrome.

The IMX273-based models feature a full frame electronic global shutter and a 3.45 µm pixel. The company said that users can expect high image quality, high resolution, and high-speed performance without distortion, and even faster throughput with Teledyne’s award-winning TurboDrive technology.

The Nano 1.6 MP models will be followed by two new additional 1/3” VGA Nano models to replace the ICX424 CCD-based cameras in Q4 2017. The new cameras are drop-in replacements for the original CCD-based models, at the same low cost, but with a frame rate almost seven times faster, up to 160 fps in 1.6 MP and up to 400 fps in VGA.

Key features:
• TurboDrive for fast frame rates and full image quality;
• full frame electronic global shutter function;
• small footprint / light weight: 21.2 mm x 29 mm x 44 mm / 47 g;
• wide temperature range (-20 to 60°C) for imaging in harsh environments.
SIERRA-OLYMPIC LAUNCHES THE VAYU HD Sierra-Olympic Technologies (Hood River, Oregon, US) has introduced the Vayu HD, an uncooled thermal camera with true high definition (1920 x 1200 pixels) and capable of 1080p output. The company supplies infrared (IR) and thermal imaging
components, cameras, and systems solutions for innovative imaging applications. It showcased the new Vayu HD at ASIS Security Expo in Dallas, Texas in September of this year. The new thermal imager replaces the recently introduced Viento HD and Viento HD IP67 cameras. It features the same longwave infrared (LWIR) spectral response from 8 to 14 µm. Applications include security, border protection, military imaging, airborne recon and wide area surveillance.
The Vayu HD provides what the company calls unprecedented image resolution, utilizing a vanadium oxide (VOx) microbolometer sensor with a capacity of over 2.3 million pixels on a 12-micron pixel pitch, in a commercially-designed, IP67-environmentally rated, stand-alone camera. Other features include an athermalized 24 mm F1.1 custom-designed optic, 30 Hz frame rate, and three video formats: HD-SDI, h.264 IP-Video, or 16-bit Camera Link output.
FRAMOS APPOINTED AS A GLOBAL APPROVED SUPPLIER FOR THE INTEL REALSENSE FRAMOS Group (Munich, Germany) has been appointed as a global approved supplier for the Intel RealSense technology product line.
The technology adds human-like sensing and intelligence to devices and machines. It enables applications in the growing fields of virtual and augmented reality, mobile products and other segments, to benefit from adding sense to make the experience better.

Under the agreement, FRAMOS will offer the entire Intel RealSense technology portfolio including its Intel RealSense Vision Processor, Depth Modules and Depth Cameras – all supported with the latest Intel RealSense SDK 2.0. FRAMOS will serve its global network of imaging clients, OEMs and camera manufacturers with the technology.

CEO of FRAMOS Dr Andreas Franz said: “We enable our customers to make their machines see and think. The Intel RealSense technology product line is an excellent suite of components which lets machines make situational decisions. When integrated, any device will have the ability to naturally and intuitively interact with its environment.

“We are extremely proud to offer Intel’s RealSense technology products and to enable our customers to benefit better and faster from the major shift into intelligent devices and machines. When it comes to drones, robots, VR, automotive, surveillance and 3D applications, the Intel RealSense line is an optimal completion to our portfolio of imaging solutions ranging from sensors to systems.”

The Intel RealSense cameras contain a vision processor and depth module in a small form factor. They provide stereo depth sensing capability, giving devices and machines a more realistic view of the world. Intel RealSense tracking modules comprehend device position and orientation, providing the ability to navigate, map and learn the environment. Intel hardware and software platforms, said FRAMOS, enable the next evolutionary step of ‘smart’ devices and machines.
BASLER EXPANDS SERIES WITH 2 MEGAPIXEL STANDARD LENSES FOR SENSORS UP TO 2/3 Basler (Ahrensburg, Germany) has expanded its Basler Lenses series with 2 megapixel standard lenses for sensors up to 2/3". It’s two years since the launch of the Basler Original lens series 1/2.5”, followed by the Basler Original Equipment lenses with a resolution of 5 megapixels for sensors smaller than 1/2".
The new 2/3" lenses are suitable for sensors with a resolution of up to 2 megapixels and suit applications where low resolutions are sufficient. The new C-mount lenses are optimized for use with Basler ace, dart and pulse cameras with sensors between 1/2" and 2/3" and have, said Basler, an
excellent price/performance ratio. With their optimized, frugal design and standard resolution of 2 megapixels, the Basler Lenses 2/3" suit cost-efficient image processing applications. The lenses are available in six different focal lengths (8, 12, 15, 25, 35, 50 mm) and can be used in the visible wavelength range of 400 – 700 nm.
EVT NOW SUPPORTS HIKVISION CAMERAS EVT software now supports Hikvision cameras. Thanks to EyeVision image processing software, users of Hikvision cameras can now make use of a number of extra functions, particularly increased possibilities in the field of quality control. The drag-and-drop function and the extended command set allow solutions for applications including: • code reading (bar code, DMC, QR) and OCR/OCV; • pattern matching and metrology (matching, as well as measuring of distances, angles and diameters); • counting objects and error detection (such as scratches, cracks, holes, etc); • weld seam and surface inspection; • as well as 3D and thermal imaging applications. The software also supports different platforms such as smart cameras, vision sensors, embedded systems and PC systems (interfaces include USB, GigE, RS232, FireWire, CoaXPress, Camera Link).
NEW CAMERA LOOKS THROUGH THE BODY

A new camera is able to see through the body. A team from Edinburgh University and Heriot-Watt University has designed the camera to help doctors track medical tools such as endoscopes that are used to investigate a range of internal conditions.

The camera is able to detect sources of light inside the body, such as the illuminated tip of the endoscope’s long flexible tube. Before now, it has been nearly impossible to get a clear picture of where an endoscope is situated within the human body without using X-rays or other expensive methods. This is because light from the endoscope can pass through the body, but it usually scatters or bounces off tissues and organs rather than travelling straight through.

The new camera employs a different technique. It takes advantage of advanced technology that can detect individual particles of light, called photons. The team have integrated thousands of single photon detectors onto a silicon chip, similar to that found in a digital camera. The technology is so sensitive that it can detect the tiny traces of light that pass through the body’s tissue from the light of the endoscope. It can also record the time taken for light to pass through the body, allowing the device to detect the scattered light as well. By taking into account both the scattered light and the light that travels straight to the camera, the device is able to work out exactly where the endoscope is located in the body. The new camera can be used at the patient’s bedside.

The project is part of the Proteus Interdisciplinary Research Collaboration, which is developing a range of revolutionary new technologies for diagnosing and treating lung diseases. Proteus is funded by the Engineering and Physical Sciences Research Council.

Kev Dhaliwal, Professor of Molecular Imaging and Healthcare Technology, University of Edinburgh and Project Lead, Proteus, said: “The ability to see a device’s location is crucial for many applications in healthcare, as we move forwards with minimally invasive approaches to treating disease.”

Dr Michael Tanner of Heriot-Watt University added: “My favourite element of this work was the ability to work with clinicians to understand a practical healthcare challenge, then tailor advanced technologies and principles that would not normally make it out of a physics lab to solve real problems. I hope we can continue this interdisciplinary approach to make a real difference in healthcare technology.”
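The timing principle described above can be reduced to a toy calculation: the earliest photons to arrive have travelled the most direct path, so their flight time gives a rough straight-line distance to the source. This is only an illustration of the idea, with assumed numbers, and not the reconstruction method used by the Edinburgh and Heriot-Watt team.

```python
# Toy illustration of estimating source depth from photon arrival times.
# All numbers are assumptions for the example, not values from the study.

C = 299_792_458.0     # speed of light in a vacuum, m/s
N_TISSUE = 1.4        # assumed effective refractive index of tissue

def distance_from_first_arrival(arrival_times_s):
    """Estimate source distance from the earliest (least-scattered) photons."""
    t_first = min(arrival_times_s)
    return t_first * C / N_TISSUE

# Example arrival times in seconds; scattered photons arrive later than the
# near-ballistic ones that travelled almost straight to the detector.
arrivals = [0.47e-9, 0.48e-9, 0.55e-9, 0.70e-9, 0.91e-9]
print(f"estimated source depth: {distance_from_first_arrival(arrivals) * 100:.1f} cm")
# Roughly 10 cm with the assumed first arrival of 0.47 ns.
```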
MTC INVESTS IN TECH PHD PROGRAMMES The Manufacturing Technology Centre (MTC) is to co-fund nine PhD programmes with three universities as it looks to secure tomorrow’s technologies for the UK’s manufacturing industry, including machine vision. MTC has invested over £700,000 in the project and has created a partnership with three academic partners: the University of Birmingham, the University of Nottingham and Loughborough University. The MTC will contribute £80,000 per student undertaking the three-and-a-half year research programmes which start this autumn. There will be three students at each university. This will be repeated next year as the MTC aims to create a constant stream of innovation ideas from academia, which it can develop and transfer into industry. Technology Director of the Coventry-based R&D facility Ken Young (pictured below)
said: “It is about enhancing our relationships and making sure that the work done on the research comes through to the MTC, so we can mature it and get it into industry. “We want to be at the forefront of getting the ideas and innovations from academia and then advancing them into UK industry. For us, it is also about getting the right standards and the right sort of supervision inside and outside of university, so the students will get access to the MTC’s facilities and experts. “This will help make sure the PhD is appropriate to industry’s needs. This is a very positive move as it ensures that the research landscape joins up.” The PhDs will focus on a number of key areas including next generation robotics, additive manufacturing, simulation and modelling and artificial intelligence. They will also be
split based on the academic specialisms of each university. Young added: “As we are jointly funding these PhDs with the universities, it means we can make sure the research areas are relevant to us and our industrial members. As new developments are made, this enables us to bring them into the MTC environment and for us to see what is capable for commercial exploitation. “The universities focus on the low technology readiness levels, where they can solve problems that are currently too far into the future to transfer into industry. Supporting this, and the individuals, will also help to deliver the future thought leaders for industry. Helping to develop these people is also very much part of our corporate responsibility.”
FEATURED
EMVA YOUNG PROFESSIONAL AWARD WINNER BOAZ ARAD TALKS ABOUT HIS START-UP, HC-VISION We caught up with Boaz Arad, winner of the EMVA Young Professional Award 2017, and asked him about his position as CTO of the start-up company HC-Vision
Arad won the award for his work “Sparse Recovery of Hyperspectral Signal from Natural RGB Images.” Aged 32, he obtained his Computer Science BSc (cum laude) from Ben-Gurion University of the Negev (BGU) in 2012. Continuing towards a fast-tracked PhD at the BGU Interdisciplinary Computational Vision Laboratory (ICVL), he received an MSc in 2014 and expects to complete his PhD by 2018. As well as studying, he’s a stakeholder in HC-Vision, alongside his fellow co-founder Professor Ben-Shahar. The other stakeholder is the university commercialization company, BGN Technologies. The HC-Vision proposition is quite clear. As it points out, conventional cameras capture images using only three frequency bands (red, blue, green), but there is much more information hiding within the visual spectrum. They say that their technology allows conventional cameras to increase their spectral resolution, capturing information over a wide range of wavelengths without the need for specialized equipment or controlled lighting. We asked Arad first, what did the winning of the award mean to him? He replied: “It was quite exciting to receive this type of recognition for our work and I was pleasantly surprised by the amount of interest the award garnered in both academia
and industry. While I’ve spent the past year expanding my focus from academia to industry with HC-Vision, this award afforded me a unique and fascinating window to the European vision industry. I’m certain that this will result in some interesting collaborations.” Does he think winning the award will raise the profile of HC-Vision? “Definitely, our company has already seen a surge in interest. Both investors and potential industry partners have reached out to us following the award announcement.” Getting onto talking about HC-Vision itself, we asked at what stage is the company in its development? “At the moment we are operating within the Ben-Gurion University commercialization company, but expect to be spun out as an independent company very soon, perhaps before the year’s end.” Is the company looking for external funding? “Yes, we are currently looking for investors, as well as industry partners interested in incorporating HC-Vision technology into their products. Since HC-Vision capabilities can be added to existing or future image sensors with little to no hardware modifications and no assembly line retooling, we believe it can provide our partners with a significant technological edge.”
How many people are involved with HC-Vision? “At this early stage, most of our development is handled by the founders, while the university technological transfer company assists us with business development, IP and legal services. So I’d say about 5-10 people, depending on who you’d consider an employee. As part of our move towards full independence, we’ve already located some promising developers and engineers who are eager to join the HC-Vision team.”

At what stage is the product and do you have customers? “At the moment we have a working prototype, and most of the R&D hurdles have been cleared, but some engineering work remains before we can produce a final product for the general market. In the meantime, we are already working with several companies in order to test the performance of our technology when integrated into their systems. While we currently cannot disclose the identity of these companies - they include some very recognizable names in both consumer electronics and the defense industry.”

What do you hope the product will achieve over the coming years? “As our technology can provide significant gains in sensor light sensitivity (quantum efficiency) as well as material-sensing capabilities with little added cost to manufacturers - we’re really hoping to see it implemented at a large scale. Perhaps as the distinguishing “killer” feature of a flagship cellular device, or professional camera. Once our platform becomes ubiquitous it will allow 3rd party developers to really take advantage of its material sensing capabilities.”

How big is your potential marketplace with the product? “The image sensor market is fast approaching the 20 billion dollar mark, with image sensors for mobile devices accounting for about a third of that volume. Since our technology is relevant to nearly any type of image sensor, the potential market is truly huge. Even when we narrow our focus to smartphones or cameras produced by a single company, the production scales range from millions to hundreds of millions of devices per year.”

Ben-Gurion University of the Negev is a startup hotbed that has spawned many successful projects such as MobilEye.
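For readers curious about the mechanics behind the award-winning topic, “Sparse Recovery of Hyperspectral Signal from Natural RGB Images”, the general idea can be sketched as follows: a three-value RGB measurement is explained as a sparse combination of known spectral prototypes, and that combination is then used to reconstruct a full spectrum. This is a toy illustration only; the dictionary and camera response below are random stand-ins, and it is not HC-Vision's algorithm.

```python
import numpy as np

def omp(P, c, k):
    """Greedy orthogonal matching pursuit: find a roughly k-sparse alpha with P @ alpha ~ c."""
    residual, support = c.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(P.T @ residual)))   # atom most correlated with the residual
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(P[:, support], c, rcond=None)  # re-fit on the chosen support
        residual = c - P[:, support] @ coeffs
    alpha = np.zeros(P.shape[1])
    alpha[support] = coeffs
    return alpha

rng = np.random.default_rng(0)
bands, atoms = 31, 200                          # assumed: 31 spectral bands, 200 dictionary atoms
D = np.abs(rng.normal(size=(bands, atoms)))     # stand-in dictionary of hyperspectral prototypes
R = np.abs(rng.normal(size=(3, bands)))         # stand-in RGB camera response curves
P = R @ D                                       # the dictionary as the RGB camera sees it

true_spectrum = D[:, 17] + 0.5 * D[:, 42]       # synthetic scene spectrum for the demo
rgb_pixel = R @ true_spectrum                   # the three values a conventional camera records

alpha = omp(P, rgb_pixel, k=3)                  # sparse code explaining the RGB measurement
recovered_spectrum = D @ alpha                  # estimated 31-band spectrum for that pixel
```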
SPONSORED
EMBEDDED IN THE FUTURE The future is Embedded, says Giles Peckham, Regional Marketing Director at Xilinx, and his company is not about to miss the boat Xilinx sees embedded vision as a key and pervasive megatrend that is shaping the future of the electronics industry. Providing machines with the ability to see, sense, and immediately respond to the world creates unique opportunities for system differentiation; however, this also creates challenges in how designers create next-generation architectures and bring them to market. Integrating disparate subsystems including video and vision I/O with multiple image processing pipelines, and enabling these embedded-vision systems to perform vision-based analytics in real time is a complex task that requires tight coordination between hardware and software teams. Timely and relevant To remain timely and relevant in the market, leading development teams are exploiting Xilinx’s All Programmable devices in their next-generation systems to take advantage of the devices’ programmable hardware, software, and I/O capabilities. Xilinx provides embedded vision developers with a suite of technologies that support both hardware and software design. Xilinx offers devices and complete acceleration stacks (the Reconfigurable Acceleration Stack and reVISION, https://www.xilinx.com/products/design-tools/embeddedvision-zone.html)
providing better performance, flexibility and system optimising compilation for both Computer Vision and Machine Learning, and supporting multiple levels of design abstraction. These speed the development of embedded vision applications in markets where systems must be highly differentiated, extremely responsive, and able immediately to adapt to the latest algorithms and image sensors.
Not all embedded The Industrial Internet of Things (IIoT) is another key industry for Xilinx as it is driving the fourth wave of the industrial revolution. It is dramatically altering manufacturing, energy, transportation, cities, medical, and other industrial sectors. Most experts believe that IIoT is happening now and with very tangible, measurable business impact. The IIoT enables companies to collect, aggregate, and analyze data from sensors to maximize the efficiency of machines and the throughput of an entire operation.
Only Xilinx provides a flexible, standards-based solution that combines software programmability, real-time processing, hardware optimization and any-to-any connectivity with the security and safety needed for Industrial IoT systems, including low cycle times, deterministic performance, and low latency. Xilinx SDAccel, SDSoC, and Vivado High-Level Synthesis enable customers to quickly develop their smarter, connected, and differentiated applications.
As for Xilinx itself, it’s an American semiconductor company founded in 1984. The company is well known for inventing the Field Programmable Gate Array (FPGA) and as the first semiconductor company with a fabless manufacturing model. Founded in Silicon Valley, the company is headquartered in San Jose, California. The current CEO and President is Moshe Gavrielov. The NASDAQ quoted company announced $2.35bn of revenues for the last fiscal year. Six years ago the company started its corporate transformation from the Programmable Logic Company to an All Programmable company delivering system-level solutions. Today Xilinx’s portfolio combines All Programmable devices in the categories of FPGAs, SoCs, MPSoCs, RFSoCs and 3DICs, as well as All Programming models, including software-defined development environments.
SPONSORED
FUJINON EYES GROWTH After a successful restructuring, the focus for Germany-based FUJIFILM Optical Devices Europe is growing the market for its main machine vision brand FUJINON. Here we take a look at how this influential company is positioning itself for 2018
It was just last month that FUJIFILM Corporation and FUJIFILM Europe decided to concentrate their European Optical Devices division into FUJIFILM Optical Devices Europe, which is based in Kleve, Germany.
FUJIFILM will manage the area of optical devices Europe-wide from this company. The main lens brand known within the machine vision sector is FUJINON. FUJIFILM Optical Devices Europe is responsible at its Kleve location for the sales, marketing and servicing of professional FUJINON lenses for TV and film productions, CCTV and Machine Vision, as well as FUJINON Binoculars in Europe, the Middle East and Africa. The management of FUJIFILM Optical Devices Europe has been taken over by the existing Senior Vice President Homare Kai and Christopher Brawley, managing director of FUJIFILM Electronic Imaging Europe. The other managing directors are Toshihisa Iida and Koichi Tamai. From 2012 until October this year, the optical business and the electronic imaging business had been carried out in the FUJIFILM company headquarters in Japan by a single business area. So that they can take full advantage of these synergies in Europe as well, the two business areas are now concentrated at the Kleve site.

The latest changes were made to help maintain the growth of the company. At the time of the re-structuring, Brawley told the media: “The new structure enables FUJIFILM to respond more efficiently to the requirements in this market segment. A high speed and flexible adaptation to market requirements are required for success in this demanding field of business.”

FUJIFILM Optical Devices Europe is one of the leading lens manufacturers worldwide. Not only for machine vision, but also a wide range of lenses for broadcast, cine, security, automotive industry and digital cameras. Within the machine vision sector Fujinon lenses have a reputation for well-known, high-quality lenses, which are robust and reliable, and come in a wide range that fits almost every application, including print control,
robot vision, 3D scanner and factory automation. The main products the company will be promoting next year are the latest two lens series. The HF-12M Series, high resolution (12MP, 2.1µm pixel pitch) lenses with a small size, offers the smallest lenses currently available compared to other high resolution lenses. The HF-5M Series, 5 MP resolution (3.45µm pixel pitch) with focal lengths from 8mm to 50mm, is applicable on sensor sizes from 2/3” to 1.1”. Kenji Hirata, Sales Manager of CCTV & Machine Vision, explained: “The HF-12M series especially is highly robust against shocks and vibrations! Even under tough conditions, like vibrations, the lenses are extremely stable with minimized shift of the optical axis.” For anyone wishing to check out the company’s products in 2018, they will be at the following trade fairs: SPS IPC Drives Parma, May 2018; BIEMH Bilbao, June 2018; and Vision Stuttgart, November 2018.
Fujifilm Optical Devices Europe GmbH Fujistr. 1 DE – 47533 Kleve cctv_eu@fujifilm.com Phone +49 2821 7115 400
Track and trace with pinpoint accuracy Production line variables and packaging material affect the way text is printed. Deformed, degraded or obscured text? Not to worry. Matrox® SureDotOCR™ is the most comprehensive solution for reading dot-matrix text. Available in
Matrox Imaging Library (MIL) & Matrox Design Assistant software
With its unique reading capabilities and improved speed1, Matrox SureDotOCR ensures accurate, reliable reading of product information written as dot-matrix text by inkjet printers. Track and trace items from beginning to end— with utmost peace of mind.
Be sure with SureDot™
www.matrox.com/trackandtrace/mvpro 1. Speed of over 2,000 ppm when using an Intel® Core™ i5-6500TE platform.
SPONSORED
MATROX IMAGING LOOKS FORWARD TO MACHINE LEARNING As Canadian company Matrox enters its fourth decade in business, it’s looking forward to the challenges ahead, especially in the direction of machine learning

Matrox Imaging, which was founded back in 1976, is an industry-leading developer of component-level solutions, and an established and trusted supplier to top-tier OEMs and integrators. Its technology occupies a key role in automotive, electronics, flat panel display, semiconductor, and medical device manufacturing; packaging; medical imaging; logistics; surveillance; transportation; and pharmaceutical and food and beverage production. The company is a global player in the imaging market, and has branches in the UK, Ireland, Hong Kong, Germany, and the US. If you include its sales representatives, it has a presence in more than 25 countries worldwide.

Choice

Matrox Imaging offers developers a choice of software based on their preference for development environments. For example, Matrox Imaging Library (MIL) is a complete software development toolkit (SDK) to design machine vision systems with unprecedented levels of performance and functionality. It’s ideal for those willing and able to invest in traditional programming. The most recent additions include:
• SureDotOCR™, a dedicated OCR tool for reading challenging dot-matrix text;
• 3D vision tools for alignment and cross-section analysis;
• Circle and ellipse shape-finding tools;
• Color relative calibration to ensure consistent color imaging;
• MIL CoPilot, an interactive prototype environment for experimenting, which generates functional code when the time comes.

Matrox Design Assistant is an integrated development environment (IDE) providing a flowchart-based design methodology. It’s great for those who need to move quickly between machine vision projects by not having to write traditional program code. And the most recent additions include:
• Project Change Validator, a client-server architecture for ensuring that changes made to a deployed project are not detrimental to the functioning of that project;
• SureDotOCR;
• Project templates, which serve as application frameworks to get started and allow for run-time tweaking of project steps.

Pair readily

These software products pair readily with Matrox Imaging hardware offerings, including the latest smart cameras, vision controllers, I/O cards, and frame grabbers, all designed to provide optimum price-performance within a common software environment.

As for the company’s future focus, the management team believes that machine learning is quite the story now. Matrox Imaging already makes use of it in its OCR tools, with the full intention to broaden its use and accessibility within its vision software. Matrox Imaging continues to expand other aspects of its vision software, notably in shape finding, ID mark reading, and 3D analysis. Sam Lopez, Director of Sales and Marketing, said: “Matrox Imaging products remain cornerstones of machine vision installations, with applications ranging from image capture, processing, and compression to machine and robot guidance and visual inspection. We work closely with our OEMs and integrators to develop effective software tools needed to address their machine vision challenges. That, coupled with Matrox Vision Academy—our new online learning platform dedicated to MIL and Matrox Design Assistant software— makes me excited to take on new challenges as we enter our fourth decade in business.”
Matrox Imaging Imaging.info@matrox.com 1-800-602-6243 (514-822-6020) www.matrox.com/imaging https://www.linkedin.com/company/matrox-imaging-machine-vision
CONTRIBUTION
VISION ADVANTAGES FOR MILITARY IMAGING A manufacturing floor and battlefield may seem worlds apart, but technologies perfected for machine vision applications are increasingly finding a place in military imaging systems
Industrial vision evolved into viable commercial technology when it adopted Gigabit Ethernet (GigE) for real-time video delivery. Ethernet brought flexible networking, wider computing platform choice, and cabling advantages to real-time imaging. For military designers facing the same networking, computing, and cost challenges – plus an increasing demand for commercial versus proprietary technologies – GigE is a natural choice for video transmission.
Design Benefits of GigE Vision GigE allows military manufacturers to easily upgrade or design vision systems that integrate different types of cameras, displays, and processing computers into a real-time video network. In a local situational awareness (LSA) system for ground-based vehicles, crew members rely on real-time video to navigate the windowless vehicle and survey surroundings. (Figure 1) In these sophisticated systems, video and data from various imaging sources must be shared across multiple endpoints – including computers used for automated analysis and display panels for human observation. Real-time video from analog cameras is converted to an uncompressed GigE video stream and delivered over the multicast Ethernet network to displays and processing equipment at various points within the vehicle.
Figure 1: Video is converted to Ethernet packets by an external frame grabber and streamed over the multicast Ethernet network to displays and processing equipment at various points within the vehicle
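To make the multicast distribution model described above concrete, here is a minimal, purely illustrative sketch of how an additional display or processing endpoint can subscribe to a video stream already on the vehicle network. It is a generic UDP multicast receiver in Python, not an implementation of the GigE Vision streaming protocol, and the group address and port are hypothetical placeholders.

# Generic multicast receiver sketch: any new endpoint on the vehicle network
# subscribes to an existing stream without cabling or sender-side changes.
# Not GigE Vision (GVSP) itself; address and port are hypothetical.
import socket
import struct

GROUP = "239.192.0.1"   # hypothetical multicast group carrying one camera feed
PORT = 5000             # hypothetical stream port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is all a new display or processing endpoint needs to do.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, _ = sock.recvfrom(65535)
    # hand the payload to a decoder, display, or analysis pipeline here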
Freed from the need for a peripheral card slot to receive video, designers can meet size, weight, power and cost (SWaP-C) objectives by choosing from a selection of small form-factor and low-power computing platforms for image processing and control. Video, control data, and power are transmitted over a single cable, reducing “cable clutter” in the vehicle. The flexible, lighter, field-terminated Ethernet cables cost less and are simpler to install than the bulky cabling and connectors of legacy interfaces. With all devices connected to a common infrastructure and straightforward network switching, multiple streams of video can be transmitted to any combination of mission computers and displays. In an Ethernet-based system built with quality components, end-to-end (or glass-to-glass) latency is less than 80 milliseconds.
Troops can decide which video streams they need to see without any changes to cabling or software configurations, and video from multiple sources can be combined for use by others in the vehicle. For example, the video feed from visible light cameras can be converted to Ethernet packets and blended with video from a native GigE thermal camera to provide more detail on a region of interest. (Figure 2)
Figure 2: Images from an analog camera are converted into Ethernet packets and blended with video from a native GigE thermal camera.
GigE and Military Standards
A complex set of military standards outlines the mechanisms and protocols for distributing digital video in vehicle electronics (vetronics) systems. While these standards are intricate, design approaches adopted by the machine vision industry help designers meet their overarching goal: to simplify design and lower costs, ensure interoperability, and support easy scalability. The British Ministry of Defence (MoD) Vetronics Infrastructure for Video Over Ethernet (VIVOE) Defence Standard (Def Stan 00-82) outlines how multiple cameras, image sources, displays, and processing platforms should distribute and receive information using the same network infrastructure. The NATO Generic Vehicle Architecture (NGVA) is a further extension of this standardization effort to meet a broader set of requirements, including unmanned systems.
Def Stan 00-82 defines an architecture that helps lower costs and improve performance for end users. With an architecture based on open standards and protocols, multi-vendor solutions can be integrated into a system, the same technology is easily redeployed across multiple platforms (for example, different vehicle types), and the system can be upgraded with more advanced sensors, displays, or processing systems. Video sources and display endpoints may have an integrated Ethernet interface, or interface modules can be used to convert legacy connections into Ethernet. (Figure 3)
Similarly, the U.S. DoD Vehicular Integration for C4ISR/EW Interoperability (VICTORY) initiative was first introduced to avoid interoperability and scalability issues during “bolt-on” retrofit upgrades of land-based vehicles. Today, VICTORY guidelines also encompass new situational awareness systems. In addition, the DoD Motion Imagery Standards Board (MISB) has selected GigE Vision as the recommended method to transmit and receive compressed and uncompressed video and associated metadata.
Developing imaging systems based on Ethernet technologies helps designers meet two critical objectives of the VICTORY initiative: the use of commercial off-the-shelf (COTS) products and technologies to enable multi-vendor integration and avoid vendor lock-in, and component selection to help reduce SWaP-C in space-constrained vehicles.
Considering the basic aims of military vetronics standards, Ethernet-based imaging systems ensure interoperability in a fully networked, multi-vendor environment. Designers can leverage the performance benefits of Ethernet, including lighter and longer-reach cabling, networking flexibility, full-duplex connections, and support for uncompressed and compressed video. Imaging systems can be designed based on COTS technologies, including networking equipment, PCs, laptops, and displays, with a scalable architecture that enables future addition of new imaging sources, displays, and processing technologies. For military applications, real-time video is enabling new generations of vision systems that cost less, weigh less, and are easier to use than systems based on legacy point-to-point standards. Beyond LSA systems, Ethernet-based video solutions are ideal for vision systems for sighting, threat detection, weapons targeting, and surveillance in naval vessels, manned and unmanned airframes, and standalone systems for persistent surveillance.
Ed Goffin is marketing manager at Pleora Technologies (www.pleora.com), a leading provider of video interfaces for military, medical, and machine vision imaging systems.
Figure 3: Def Stan 00-82 compliant video streamed over a GigE architecture for a navigation and targeting system.
SPONSORED
EURESYS MAKES IT EIGHT
Euresys is a pioneering force behind the CoaXPress standard and likes to keep up a fast pace when it comes to innovation. It has now introduced an 8-connection CoaXPress frame grabber.
Belgium-based Euresys is a leader in the development of the CoaXPress standard and a key member of the technical committee that drives its evolution. The company is playing an active role in the introduction of the next version of the standard. The first cards of the Coaxlink series (Euresys CoaXPress frame grabbers) were announced four years ago, in December 2013. Today, Euresys provides the widest CoaXPress frame grabber range on the market.
Eight models
Eight models of Coaxlink cards are available, with one to four CXP inputs, in PCIe and PCIe/104 form factors, and with innovative features such as Data Forwarding and FPGA-based Laser Line Extraction for 3D analysis.
Sales growth
CoaXPress sales have grown tremendously and in 2017 the Coaxlink series accounted for 27% of Euresys' frame grabber sales. Overall, 2017 has been a great year for the company: alongside machine vision camera manufacturers reporting a 20% to 30% increase in sales, Euresys has seen a 40% boost in frame grabber sales. The only downside to such growth was that it was not forecast, which made this a challenging year for electronic component procurement and manufacturing.
Euresys has now introduced an 8-connection CXP-6 frame grabber, appropriately called the Coaxlink Octo. Octo is Latin for eight, and the name follows the Mono, Duo and Quad variants. The camera data transfer rate of the Coaxlink Octo is 5 GByte/s; its PCIe Gen 3 x8 bus gives a peak delivery bandwidth of 7.8 GByte/s and an effective delivery bandwidth of 6.7 GByte/s. It is also compatible with the Memento Event Logging Tool. The main target for the new Octo is multi-camera applications, with support for up to eight cameras on a single frame grabber in a single slot. Successful Coaxlink applications include 3D AOI, FPD inspection, printing inspection and in-vehicle video transfer.
Marc Damhaut, CEO of Euresys, said: "We'll probably stop at eight, as there is no more space for connectors on the bracket, unless we included a second bracket. The next generation will be focused on increasing the connection speed to CXP-12 (12 Gbit/s). We have just met with our distributors and have listened to their feedback as to what the market wants over the coming months and years.
"The great thing about the Coaxlink Octo is that it now allows eight cameras to work together and customers can vary the configuration. The request for a frame grabber with eight connections came from our customers."
Damhaut added: "We're happy with how CoaXPress is evolving and the technical committee is now working on version 2.0. We originally launched our CoaXPress products three years ago and it was really at the right time. CoaXPress is really picking up and demand is good from customers."
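As a rough cross-check of the bandwidth figures quoted above, the following sketch assumes the nominal CXP-6 link rate of 6.25 Gbit/s per connection with 8b/10b encoding, and a PCIe Gen 3 x8 host interface with 128b/130b encoding; real-world throughput also depends on packet and DMA overheads, so treat it as back-of-the-envelope arithmetic only.

# Back-of-envelope check of the Coaxlink Octo figures quoted in the article.
CXP6_LINE_RATE_GBPS = 6.25                          # per coax connection
CXP6_PAYLOAD_GBPS = CXP6_LINE_RATE_GBPS * 8 / 10    # 8b/10b encoding -> 5 Gbit/s
connections = 8
camera_data_gbytes = connections * CXP6_PAYLOAD_GBPS / 8
print(f"Aggregate camera data: {camera_data_gbytes:.1f} GByte/s")   # ~5.0, matching the quoted 5 GByte/s

PCIE3_RATE_GTPS = 8.0                               # per lane
lanes = 8
pcie_peak_gbytes = lanes * PCIE3_RATE_GTPS * 128 / 130 / 8
print(f"PCIe Gen 3 x8 peak: {pcie_peak_gbytes:.1f} GByte/s")        # ~7.9, in line with the quoted 7.8 GByte/s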
CONTRIBUTION
THE MULTIPLE SMART CAMERA MYTH When and How to Use Multiple Smart Cameras in Complex Vision Systems – Teledyne DALSA
When smart cameras were introduced about a decade ago, the machine vision industry declared them ideal for simple, straightforward applications such as presence/absence verification or reading bar codes. The idea was that a smart camera best served applications that required just one operation. They were and still are perceived as stand-alone vision systems. But many machine vision systems are complex and require several operations. So, can multiple smart cameras be used for more complex applications? Or is a traditional PC-based frame grabber and multi-camera system the only solution? The typical machine vision answer is, of course, it depends on your application. In some situations, multiple smart camera systems can do very well when compared to a traditional PC-based, multi-camera vision system.
Start with the cars
The automotive industry was one of the first to embrace machine vision in an effort to improve product quality and its production process. The focus was on vision-guided robots for the manufacturing process. Before smart cameras made their debut, PC-based systems with multiple cameras would have been used for inspection of the final product or verification of a sub-assembly. Today's smart technology allows companies to include vision at every step of the manufacturing process. So an automotive customer might start with just one smart camera for a specific task, see the benefit and results, and find themselves adding more because the platform made it a practical and cost-effective solution.
The assembly line for an engine block is a good example of how multiple smart cameras can be used together. In this case, multiple manufacturing cells are configured to assemble the product as it moves along a line. Starting from a tray of parts (valves, bolts, cast assemblies, gaskets, and the like), smart cameras in each cell direct robots to pick and place the parts onto the assembly, with each camera configured to pick one part. When the partial assembly is complete, additional cameras verify that the operation was performed correctly and the assembly moves to the next cell.
So in the first part of the cell, the smart cameras are configured to perform part identification using high-level pattern matching or 2D Matrix reading functions, while the other cameras in the cell are programmed to perform post-assembly verification.
80 smart cameras can’t be wrong A typical engine assembly line can have eight to ten cells and include from four to eight cameras per cell. That’s a system with as many as 80 cameras – which sounds very complex and very expensive! So why are smart cameras a good alternative for an application like this? Because each smart camera is its own entity, it is completely separate from the other cameras. If, after a system runs for a few months, a customer wants to add more vision or a new component for inspection, adding a new smart camera is simply a matter of configuring it for the required task and adding to the network. With a traditional PC-based frame grabber system, a systems integrator would have to change the source code to add an additional camera at the very least, or even add a new frame grabber. And more often than not, it’s much easier to program one smart camera for one part of an application than it is to take one large program – which takes the input from two or three cameras – and develop new code and ensure everything is synchronized so that the cameras all run at the same time. The automobile assembly plant needs a flexible system that can be easily reconfigured when specifications change. The modular nature of smart cameras also makes them ideal for complex applications. Adding them to an existing network is generally easier to do than writing new code. Plus, when new functionality needs to be added to the system, the camera can be programmed offline and connected so it can be tested live without disturbing ongoing inspections. Smart cameras make it easy for integrators to add new functionality to an existing system. BOA smart cameras (from Teledyne DALSA) can perform part identification and ensure part traceability throughout the product lifecycle. Developing a system with multiple smart cameras can also help to lower development costs. Because smart cameras are configured rather than programmed,
vision applications can be developed very quickly – as little as half a day for a simple application. Sherlock or iNspect, Teledyne DALSA’s smart camera environments, use graphical interfaces where a user ‘builds’ an application instead of writing lines of code. Again for an application that requires multiple cameras, your engineers can save a lot of time by configuring half a dozen smart cameras instead of cranking out C++ code for the acquisition and processing of those six cameras. In general, the ease of programming means it’s fairly easy to develop an application; the sooner the application is ready, the sooner you can deploy it and the customer can enjoy the return on the investment.
So how to choose?
In the end, the processing and resolution requirements of a specific application will determine if multiple smart cameras are a good investment. Even when used in complex systems, smart cameras are great for verification, reading bar codes, point-to-point measurements, pattern matching, counting blobs in an image, or presence/absence; in short, any process that can give a pass/fail result, or an expected measurement. As illustrated by the automobile assembly line application, it's easy to see how a smart camera can be assigned a single task in the overall process. Remember, though, that due to the nature of embedded components, smart cameras will never have the same performance as PC-based systems. A smart camera can perform at 60 or more frames/second, but that rate will drop if a low-level function such as bar code reading is combined with high-level processing such as geometric pattern matching or OCR. Finally, if the application needs parallel or round-robin processing, a traditional PC-based multi-camera system, with a frame grabber or vision processor, is the way to go. While a smart camera has a CPU, it is not a PC and is not meant to run the complex, multi-threaded code behind those applications.
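As a back-of-the-envelope way to apply those rules of thumb, here is a small, purely illustrative Python sketch. The categories, argument names and thresholds are assumptions drawn from the guidance above, not a Teledyne DALSA tool.

# Toy decision helper encoding the rules of thumb above: smart cameras suit
# single-task, pass/fail-style inspections; PC-based multi-camera systems suit
# parallel or round-robin processing and heavy multi-threaded workloads.
def suggest_architecture(tasks_per_station: int,
                         needs_parallel_or_round_robin: bool,
                         heavy_processing: bool) -> str:
    if needs_parallel_or_round_robin or heavy_processing:
        return "PC-based multi-camera system (frame grabber or vision processor)"
    if tasks_per_station == 1:
        return "single smart camera"
    # Several independent tasks can still map to one smart camera each.
    return "multiple smart cameras, one configured per task"

print(suggest_architecture(1, False, False))   # single smart camera
print(suggest_architecture(4, False, False))   # multiple smart cameras, one configured per task
print(suggest_architecture(2, True, True))     # PC-based multi-camera system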
Future thinking A multiple smart camera “system” works very well for certain types of applications, and the following table can help gauge your application’s requirements. Choosing the smart camera route takes some forethought and good communication with your customer. Ask them how they envision the application and consider how it could evolve in the future. Will they want to inspect more features or incorporate more inspection stations on an assembly line? Will they want to keep a record of all the image data, or have the cameras communicate results? Their answers will help you find the best solution and keep them happy.
SPONSORED
ADVANCING A LIGHTING SOLUTION
From its pioneering start in 1993, Advanced Illumination has managed to keep its nose in front in what is a very competitive area within the machine vision industry.
Boston-based Advanced Illumination (Ai) has the distinction of being the first company of its kind worldwide, delivering its first products in 1993. It was founded that year by John Thrailkill and his father. John was born in Hammond, Indiana in 1958, and from 1982 to 1986 he studied at Northeastern University and the University of Lowell in Massachusetts, majoring in Mechanical Engineering. He was the President of Ai from 1996 to 2014, when he took the position of CEO. Prior to this, John was the Director of Engineering at Bleck Design Group, an industrial design firm located in the Boston area. From the very beginning, sophisticated strobe and continuous-mode controllers have been an integral part of the company's product offering. Its controllers range from miniature in-line devices to very high-output, high-precision strobes that offer up to 100 amps at 100 volts DC.
Demanding factory automation
With a long history of serving the demanding factory automation and machine vision industries in North America, Ai has specialized in the development of flexible lighting and control systems. Tens of thousands of individual SKUs can be delivered in one to two weeks. As an ISO 9001:2015 company, quality assurance and customer satisfaction are at the core of everything the company does. Ai is proud of the automated systems it has developed and deployed throughout the company. These include a paperless manufacturing e-doc system, a barcode-based WIP tracking system, automated controller programming and
test systems, online product configurators and many others. The Ai management team makes the point that Ai is fully compliant with all of the latest requirements in the markets it serves.
Latest innovation
The latest innovation at Ai involves its Line Light product family. It has developed an actively cooled line light, the LL230. Both forced-convection and water-cooled models generate 2.1 million lux at a 75mm standoff. The line light is expandable up to 2400mm and requires 230 watts of input power for each individually controlled 150mm segment. The LL230 has been designed to be compatible with low-cost, commercially available AC-DC current sources, making this line light the most cost-effective, highest-performance product in its class.
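From those figures, a quick bit of arithmetic (illustrative only; actual configurations and ratings are defined by Ai) gives the implied maximum configuration:

# Implied maximum LL230 configuration from the figures above.
segment_length_mm = 150
segment_power_w = 230
max_length_mm = 2400

segments = max_length_mm // segment_length_mm   # 16 individually controlled segments
total_power_w = segments * segment_power_w      # 3,680 W of input power at full length
print(segments, total_power_w)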
Flexible systems
As machine vision technology is applied to ever more complex problems, the need for these flexible systems has increased. At the same time, the complexity of deploying these systems has also increased. While the company was the first to develop a plug-and-play lighting system back in the 1990s, it has made extensive refinements to its proprietary SignaTech (Signature Technology) to alleviate much of this complexity. The SignaTech system utilizes a coded memory chip for each lighting device to provide perfect compatibility with its controllers. This allows just a few controllers to drive tens of thousands of different SKUs while providing the highest levels of performance in both strobe and continuous modes. To further simplify the adoption of its products, Ai will be increasing its development of controller/vision-appliance interfaces, technical application notes and videos, user-friendly SDKs and HMI control applets.
Ai has always endeavoured to offer the widest array of products in the industry. Every product is designed as part of a wider, flexible lighting and control system. This approach allows the company to tailor a solution quickly and inexpensively when it makes the most sense for the customer.
www.advancedillumination.com info@advancedillumination.com 802-767-3830
FEATURED
EMBEDDED VISION – MACHINE VISION ON STEROIDS Editor Neil Martin kicks off coverage of the European Embedded Vision Conference which was held in Stuttgart on 12 and 13 October, 2017, with some personal observations, before asking sector companies what they think of it all On the eve of EVE I arrived late at night in Stuttgart, having flown from the UK where I had meant to attend an industry show with my colleague. That was a first for me – a trade show where I was almost not allowed to enter. No, I was not causing trouble, but on asking for our badges (bear in mind we were a media sponsor), we were told that strictly speaking, we weren’t allowed to attend. In fact, no journalists were allowed in – it was all going to happen ‘in-camera’ as we journos like to say, blown up with our own self-importance. I’m not going to name names, but in all my years of attending trade shows, across varied industries and across the globe, it was the first time I had been persona non-grata. In fact, mainly because of forced
expressions of bonhomie, we were given permission to wander through the show for ten minutes as long as we did not 'pester' the companies. Please be discreet, the organisers said. And indeed, we were so discreet that we stared at a few stands, ventured to have a quick chat with a company we knew (the organisers were following us around at a distance and they didn't look happy at that) and we were gone, but not before noticing that a huge area of the hall was empty, suggesting that a little more media help might have increased the number of exhibitors. Actually, the whole thing gave us a good laugh when we got back in the car. Because, on arriving at the badge area, one of the junior organisers, sensing trouble, had called over the head honcho. The junior backed away, leaving the senior man to handle the delicate refusal. But he stood for what seemed like a lifetime, staring at us, unable to articulate that we were simply not welcome. It was so long, I worried that he'd had some sort of seizure. It's at times like that you instantly look for the first aid post, in case you need back-up. But my colleague had him sussed – he just couldn't bring himself to utter the words that we were not welcome. From what I can remember, my colleague stepped in, asking a question that eventually broke the spell. Not his fault of course, and our bad for not checking that a media sponsor is actually allowed to attend the show, but it gave us one of the most bizarre experiences of the year.
Anyway, back to Stuttgart.
Stuttgart
With the memories of my rejection uppermost in my mind, I landed in Stuttgart and as usual, having booked my hotel late, and because our chairman believes a park bench for the night is an unnecessary expenses indulgence, I used a hotel in the north of the city.
I got there in a Mercedes taxi which was so large, and opulent, that I could have used the back seat area for my morning jog. For anyone from the UK, where Mercedes are synonymous with luxury driving, it's always odd to see them being used as everyday taxis.
When I told him the hotel's name, the driver gave me a weird look, giving me the once over. Then he took a call whilst trying to find the place (he kept the leviathan's speed up at around warp factor six, so I was unable to find my bearings that easily) and mentioned the hotel's name to his mate – a laugh came down the line. By now somewhat unsettled as to the prospect of the hotel, I arrived to find a place which had all the charm of a detention centre. No problem, I thought, girding my loins, we are made of stern stuff here at MVPro, so I entered the establishment (no reception after 4pm, keys available in a box after typing in the correct number) and found my cell (oops, room). Single bed (one of), wardrobe (one of), desk and chair (one of), and mattress and duvet (one of). But, to be fair, it was clean, warm and the buffet breakfast in the morning very generous, and it was close to a Metro station. So, I think I'll be going back! And, next morning, well slept and full of continental hams and cheeses, I was ready to attend the first embedded vision show.
Contrast my reception at the UK show with the team at Messe Stuttgart. Nice big smiles, welcome pack, there's the coffee and biscuits, help yourself, and have a great time. Anything you need, let us know.
Give me Stuttgart any day.
Now it's always a risk, organising a new conference for an industry which has its fair share of events throughout the year, but it's fair to say that EMVA and Messe Stuttgart pulled off the first ever European Embedded Vision Conference with a certain degree of aplomb, as you would expect.
It was billed as the only industrial conference in Europe for embedded vision and its aim was to bridge the gap between machine vision and embedded devices. It set out to provide a networking platform for computer vision, machine vision and embedded vision ecosystems. Organisers were EMVA and hosts were LMS/VISION show. On the eve of the show the official figures suggested that 25 companies were going to be there to show off their wares. So, it was gratifying to see a reception area nicely lined with small company stands and a full auditorium of people listening to a long list of industry speakers. And it was even more gratifying to see that the lecture hall, where the talks were given, was nicely full. The official estimate said that 200 people had attended, and I wouldn’t argue with that. The event was kicked off by Thomas Walter, Divisional Head of Industry & Technology at Messe Stuttgart and the floor was given to Gabriele Jansen, Managing Director of Vision Ventures, who took us through a good place to start, what is embedded vision.
The definitions appear below:
Computer Vision: fundamental approaches and technologies to extract meaning from images and image sequences.
Machine Vision: application of computer vision and other imaging techniques to automate the analysis and interpretation of image information as a basis for automated decisions.
Embedded Vision: implementation of computer vision and other imaging techniques on any computing platform that is not a general-purpose computer.
She went on to explain that machine vision is a €10+ billion industry with annual growth rates of more than 10%. What's more, embedded vision provides the tools to boost this growth far beyond traditional rates, and that is happening right now.
She also said that Europe is the technology leader in machine vision with a rich ecosystem of vision product manufacturers, vision machine builders and vision solution providers, and that machine vision and embedded vision are currently two different worlds that need to get together.
Keynote speakers
Keynote speakers are meant to excite of course – that's why they get top billing – and Raj Talluri, Senior Vice President of Product Management at Qualcomm, and Alex Myakov, Chief Computer Vision Advocate at Intel, did the business. They had the air of men who have already achieved some success and were there to impress us all. There was a reverential hush when both stepped up on stage and started to speak to an audience that was keen to learn from industry titans. The only sense of frustration from the audience was articulated in the first question, when a delegate said yes, great talk, and it's all very exciting, but how do we apply what you're doing in the everyday applications of industry?
As Raj Talluri walked out of the hall, having warmed up the audience nicely with his talk entitled 'Innovations in Embedded Vision Processing', to a raft of well-dones and good speech, I heard him say to one of the organisers, "… that's what I do these days, give talks." I couldn't decide if that was a mark of modesty from a man who knew how to work his audience, or a lament that he wished he was doing real work back in the laboratory and that speaking dates took him away from his engineering roots.
Alex Myakov was equally a big hit and had the calmness of a man who has not only made his money (he sold his company to Intel), but now has the confidence to range far and wide. I particularly liked the way he referenced his children when it came to saying how far we are with facial recognition, for example, saying his young children could recognise the difference between animal species after being shown just a few photographs, whereas a machine would have to be shown thousands of images, or more.
As someone pointed out to me later, EVE was very much a 'VISION lite' and it was timed perfectly to fill the gap that the two-year scheduling of the industry's largest show creates. The organisers said that the event had been a great success, adding
that it achieved a fully booked attendance of about 200 participants. Florian Niethammer, Team Leader of hosts VISION at Messe Stuttgart said: “With around 200 attendees the debut of the Embedded Vision Europe was a full success. In this high-profile technical conference the latest developments in embedded vision were shown. We are especially proud that together with our partner EMVA we managed to provide this trend topic in imaging with the platform it deserves and to organize the first event of this kind on a European level here in Stuttgart.” Speaker VP Marketing at Videantis Marco Jacobs added: “Computer vision will soon give the power of
sight to all our electronic devices. This conference brought together key engineers and executives that are pushing the boundaries of this technology, making it a great place to understand important trends and how vision will be embedded into powerful new products."
Managing Director at Active Silicon Colin Pearce commented: "This event has been an excellent initiative by the EMVA to provide a focus to discuss the advancement of embedded System-on-Chip technologies and how they may impact, but also give opportunities in, the machine vision industry. The speakers have been of a suitably high calibre, as I have come to expect from EMVA events, and the topics presented have provided thought-provoking material for networking discussions."
Product Line Manager at Allied Vision Technologies Paul Maria Zalewski said: "The change to Embedded Vision already started a couple of years ago and will speed up further every year. Overall, I believe that in the classic machine vision applications within factory automation, the process of shifting to embedded systems will take longer than in applications and markets which demand embedded systems from the very start because they are restricted in terms of price and size.
"On the other hand, if smart factory approaches are pushed further by the industry, PC-based systems could become less attractive faster than expected. Embedded Vision has the potential to divide our machine vision industry. Not all players might be able to adapt quickly enough and may find themselves back in niche markets, where PC-based systems will still be the dominant system in the future."
What sector companies think of embedded vision: Q&As
In the run-up to the show, we asked a number of sector companies for their opinions on embedded vision.
1. Is embedded vision a truly disruptive technology that will shake the sector?
Justin Moll, Vice President, US Market Development, Pixus Technologies and Vice President of Marketing, PICMG, said: "I believe there is a significant shift taking place towards more advanced embedded vision systems. As intelligent automation, Industry 4.0/IIoT, and real-time data analysis are required, users will require embedded systems such as CompactPCI Serial, COM Express, and other systems."
Giles Peckham, Regional Marketing Director at Xilinx: "Embedded Vision is one of the most exciting fields in technology today. Providing machines the ability to see, sense, and immediately respond to the world creates unique opportunities for system
differentiation; however, this also creates challenges in how designers create next-generation architectures and bring them to market. Integrating disparate subsystems, including video and vision I/O with multiple image processing pipelines, and enabling these embedded vision systems to perform vision-based analytics in real time, is a complex task that requires tight coordination between hardware and software teams.
"To remain timely and relevant in the market, leading development teams are exploiting Xilinx's All Programmable devices in their next-generation systems to take advantage of the devices' programmable hardware, software, and I/O capabilities."
Dr Christopher Scheubel, FRAMOS Business Development, said: "Yes, we will see far more embedded-vision-enabled devices than stand-alone cameras. The main driver is consumer products with cognitive capabilities. An example would be the coffee machine that recognizes the user and anticipates her specific coffee preferences."
Steve Geraghty, Vice President Industrial Vision Solutions at Teledyne DALSA: "In the truest sense of the term, embedded vision has been around for more than 10 years in the form of smart cameras and vision sensors. In fact, the term 'embedded vision' implies the integration of vision technology in any form, including cameras and PCs, for applications across all market segments."
Mark Williamson, STEMMER IMAGING: "Every smart camera is effectively an embedded vision system and these have been around for a long time, so it's not new. What is new is standards and off-the-shelf products that are reducing the entry cost into embedded vision. Embedded vision will create new markets and opportunities by making machine vision viable for more higher-volume applications. I don't believe it will disrupt the existing market that much."
Oliver Senghaas, IDS Imaging Development Systems: "Embedded Vision is more of an evolution than a revolution of classical machine vision. CPU performance continues to improve as costs reduce, and this is true for embedded SoCs as well as desktop CPUs. Embedded vision therefore develops in parallel to classical machine vision. More and more systems and applications can be realized with much smaller and cheaper new embedded systems that have more power than ever before. Nevertheless, there are always applications that need more performance and therefore more powerful workstations, and this will continue in the future."
Christoph Wagner, who has taken over responsibility for MVTec's ongoing expansion in the embedded vision market: "We currently believe that embedded vision will have a high impact within the branch and will thus bring machine vision to completely new areas of application. However, we do not believe that 'conventional' PC-based image processing will be completely replaced by embedded vision. It would rather be a supplemental effect: embedded vision is the right choice wherever mobile applications are concerned and the size and cost of the components play an important role; PC-based applications show their strength, e.g. in multi-camera systems, more complex applications, as well as in research and development."
Michael Beising, CEO of EVT Eye Vision Technology: "It is an important technology which enables us to use vision systems in places where it's otherwise difficult to realize a solution, based on size or price."
With a fully booked attendance of about 200 participants, the first Embedded Vision Europe conference (EVE) in Stuttgart was considered a great success by the organisers.
2. Is it shaping the future of Europe's electronics industry?
Justin Moll: "Both the US and Europe electronics industries seem to be in the early stages of this shift on a large scale to embedded vision systems."
Giles Peckham: "Xilinx sees embedded vision as a key and pervasive megatrend that is shaping the future of the electronics industry. Europe has traditional strength in many market segments that are being impacted by the rapid developments in embedded vision. In the automotive industry, Europe has led the embedded vision revolution in ADAS and is now pioneering developments in embedded vision for
fully autonomous driving. Embedded vision is also enabling Europe’s industrial companies to develop new capabilities in applications such as smart factories, security, augmented reality and robotics. “The full impact of embedded vision on these and other market segments such as Defence and Medical Imaging is considerable. The performance and flexibility of Xilinx’s All Programmable FPGAs and SoCs is enabling many European companies to establish a lead in these areas of development.” Dr Christopher Scheubel: “For sure it does. The majority of electronic devices will incorporate vision sensing. There are three reasons that add to this development: Vision sensors and processors got very cost effective, the technology got much better and the market is demanding for vision enabled products.” Steve Geraghty: “Embedded vision is not shaping the future of the industry, but it is gaining application momentum in many consumer electronics based products. On the manufacturing side, embedded vision is deployed extensively in the electronics industry for quality control, robotic guidance, inventory management and more. To support production increases and shrinking geometries, suppliers of embedded vision are continually improving sensor, processing and software capabilities to ensure highly accurate and repeatable results.” Mark Williamson: “Embedded Vision will form part of the next wave of innovation in the electronics market. We are already seeing companies that are willing to adapt reference PCB designs for mid volume applications. For the mass volume market the biggest impact will be Autonomous driving. Every car would have 8 cameras – that’s a lot of cameras, however there are many other areas of evolution in the electronics market.” Oliver Senghaas: “Yes, but in a complementary way to the way Machine Vision develops today. This means that the systems will not be mutually exclusive, but rather work hand in hand.” Christoph Wagner: “- Embedded vision enables new applications for a great variety of industries – of course, also In the electronics industry, it enables remote maintenance with mobile devices as well as augmented reality scenarios. For example, specific components in control cabinets may be reliably identified for replacement purposes using embedded vision software. Service technicians benefit from a multimedia manual, too. They can easily recognize assemblies and components with a mobile device and use this information to prepare instructions for certain maintenance processes or access additional information online.
- With all this said, please keep in mind that the impact of embedded vision is definitely not limited to the electronics industry."
Michael Beising: "It's a shaping factor. Depending on the success of vision in the different sectors, e.g. NPR (number plate reading) or people tracking, it might also be big enough to shape the electronics industry."
3. What embedded vision products/services do you currently have or are about to launch?
Justin Moll: "Pixus provides both enclosures for other developers' vision systems to be housed as well as full backplane-based chassis platforms. Our modular enclosures allow a wide range of sizes to house the electronics. Pixus offers CompactPCI/CompactPCI Serial enclosures for 3U cards in rackmount and desktop configurations."
Giles Peckham: "Xilinx provides embedded vision developers with a suite of technologies that support both hardware and software design. Xilinx All Programmable devices include FPGAs, SoCs and MPSoCs. The Xilinx Vivado HLx design environment supports both hardware and platform developers developing the latest embedded vision hardware. These tools include support for the industry's latest high-bandwidth sensor interfaces. Xilinx SDx tools, including SDSoC, allow software and algorithm developers to develop in familiar Eclipse-based environments in familiar languages like C, C++ and OpenCL.
"The Xilinx reVISION Stack, launched in March this year, builds upon the SDx concept to include support for OpenCV and machine learning inference, including support for the most popular neural networks such as AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN, as well as the functional elements required to build custom neural networks (CNNs/DNNs), while permitting design teams to leverage pre-defined and optimized CNN implementations for network layers. This is complemented by a broad set of acceleration-enabled OpenCV functions for computer vision processing."
Dr Christopher Scheubel: "We also offer Image Sensor Boards (ISBs) with NVIDIA Jetson integration for robotics and traffic infrastructure applications. Our latest portfolio addition is the Intel® RealSense™ line. This technology enables a whole range of applications such as drones for precision farming, self-driving vacuum cleaners and VR/MR."
Steve Geraghty: "We manufacture a range of embedded vision technologies and products which we advance continuously.
"For embedded system designers we develop and manufacture sensor and camera technologies for
area and line scan imaging applications. We also develop technologies and products for the non-visible spectrum. Our cameras comply with industry interface standards and support internal image pre-processing capabilities, effectively offering a complete embedded vision solution in its simplest form.
"For industrial end users we manufacture a variety of embedded vision solutions that can be set up and deployed on the factory floor. Our BOA vision sensors and smart systems are low-cost, fully integrated solutions for single-point inspection, guidance or identification applications. BOA products are easy to set up and integrate in any factory environment. For multi-point applications, our embedded GEVA systems combine with our camera technologies to offer high-performance processing at a low cost per camera.
"Our end user solutions are offered with a choice of embedded software: the wizard-based iNspect for users requiring easy setup, and Sherlock, for advanced users who need flexibility to support challenging or changing application needs. We also offer a library of image processing functions to designers who are looking to develop their own embedded vision solution."
Mark Williamson: "The challenge for many wanting to embark on developing an embedded vision application is the lack of imaging knowledge. The first step in our services offering is that STEMMER IMAGING is providing embedded vision consultancy to help the customer specify the right level of component to realise these applications. Products in this space include many of the existing products as well as embedded versions of our Common Vision Blox software, including GEVision server and GenICam acquisition platforms, which support all existing and future vision standards. Camera technology includes premium-grade MIPI and USB camera modules from Allied Vision in the form of the 1 product line, and for 3D we are partnering with Intel to support the supply and integration of the RealSense D400 depth camera modules."
Oliver Senghaas:
Existing products:
• Embedded uEye camera driver for ARMv7-A-compatible Linux platforms
• Platform-independent uEye Python interface
About to be launched:
• IDS NXT Vision App-based sensors (presented for the first time at the Embedded Vision Conference)
Christoph Wagner: "We have been offering HALCON Embedded since 2005. With this, we have ported our powerful machine vision software to a wide variety of embedded hardware and operating systems. The great thing about HALCON Embedded: develop your machine vision application on a PC and run it on embedded hardware; with the HDevEngine, we offer a powerful image processing engine on which you can directly execute image processing scripts on different embedded hardware.
"An executable version without the need for porting has also been available for download since June (HALCON for Arm®-based platforms)."
Michael Beising: "We have different VisionSensor (EyeSens) systems for the replacement of normal sensors like light gauges, but also SmartCamera (EyeCheck) systems as the entry-level system for solutions, as well as embedded systems based on ARM and x86 as the connection to high-end PC-based vision systems. On all the platforms we have a general-purpose tool to make vision solutions, as well as specialized solutions for people tracking, number plate reading and some others."
The organisation team of the first Embedded VISION Europe, from left to right: Thomas Walter, Vice President and Member of the Board of Management at Messe Stuttgart; Gabriele Jansen, CEO of Vision Ventures and Member of the EMVA Board; Thomas Lübkemeier, EMVA General Secretary; Florian Niethammer, Team Leader VISION at Messe Stuttgart.
4. Are you committing R&D towards embedded vision?
Justin Moll: "Yes, Pixus plans to expand our CompactPCI Serial backplane/chassis offering, which is an enabler of embedded vision."
Giles Peckham: "Xilinx, with its All Programmable FPGAs and SoC devices, is ideally positioned to enable the embedded vision revolution across multiple applications and so is investing in an extensive roadmap for both the reVISION Stack and silicon products to remain at the forefront of this exciting megatrend."
Dr Christopher Scheubel: "Yes, with the above-named products and applications we do. Part of our research team is working solely on application integration with Intel's RealSense™ technology right now. In a few days, we start two R&D collaborations with renowned Munich universities LMU (Ludwig-Maximilian University) and TUM (Technical University Munich) for additional R&D projects in this field."
Steve Geraghty: "Yes."
Mark Williamson: "Very much so. Our Common Vision Blox team are targeting embedded platforms in all areas of future development, and ongoing validation of new embedded processor boards continues, enabling us to deliver impartial advice on what hardware architectures offer the right fit for each application."
Oliver Senghaas: "Yes, we have a new separate team working on our NXT vision sensor platform that will come with many new innovations. In parallel we continue to offer our platform-independent camera drivers with many third-party interfaces."
Christoph Wagner: "Definitely! Embedded vision is a major focus in our research, but also in our collaborations with our numerous partner companies, in our standardization contributions, and in our memberships, e.g. with the Embedded Vision Alliance, the Basler Partner Network/Imaginghub, or the SICK AppSpace."
Michael Beising: "We have different R&D efforts toward embedded vision solutions, e.g. a smart thermal camera, but also specialized hardware for deep learning solutions on embedded hardware. We combine these special SoCs with state-of-the-art line scan sensors, e.g. to build high-speed inspection for woven and nonwoven materials, and others."
5. Will your company attend the Embedded Vision Conference?
Justin Moll: "We could not attend this year, but in visiting other industrial automation shows this year, we can see a trend towards embedded systems in the making."
Giles Peckham: "Yes, Xilinx will attend the first year of the Embedded Vision Conference. Xilinx has demonstrated responsive and reconfigurable vision-guided intelligent systems through conference presentations and demonstrations. The company has showcased
how its tools, libraries and methodologies infuse computer vision, sensor fusion, and connectivity into vision-guided intelligent systems, enabling users to innovate, differentiate, and reduce time-to-market.” Dr Christopher Scheubel: “We did visit the summit in Santa Clara and we will visit the conference in Stuttgart. The conferences give a good view on what can be done from a technology perspective and which applications are currently developing.” Steve Geraghty: “No.” Mark Williamson: “Yes our Common Vision Blox product manager will attend.” Oliver Senghaas: “Yes, next to our new NXT sensor platform, we will show some embedded board camera demos running HALCON embedded or our new PyuEye interface with OpenCV image processing.
Christoph Wagner: "Yes, we will attend this event. We think these are interesting new formats to meet many known contacts, as well as to build new ones. The future will show whether they prevail. They could become important meeting places for the growing embedded market."
Michael Beising: "We exhibit at the embedded show in Nürnberg, so we didn't see an additional need for a specialised conference in Stuttgart."
6. Anything else you would like to add on the subject?
Justin Moll: "As an officer of the PICMG open specification group, I can share that we are working on concepts for Industrial IoT that leverage our existing specifications such as COM Express. These are smaller mezzanine-based systems that allow scalability and upgrades over long periods of time. Keep an eye out for some exciting revelations in 2018."
Giles Peckham: "Xilinx is a gold sponsor of the first Embedded Vision Europe conference."
Dr Christopher Scheubel: "We see a lot of movement in the industry and we are proud to be a part of it."
Steve Geraghty: "While the term 'embedded vision' is not new, the proliferation of embedded vision outside of the manufacturing sector is. Vision in automotive, security, sports, medical, social and other applications is driving technical advances and innovations that will continue to expand adoption. For embedded system designers who have the means to develop their own solutions, this will result in greater component selection and lower cost. Industrial end users will also benefit from ongoing advances in software to make smarter, easier-to-use systems that can be monitored from anywhere in the world."
Mark Williamson: "Bringing together the machine vision industry and the new embedded vision ecosystem has a way to go; however, standards are being created to enable ease of development in the mid-volume embedded market. This will have the biggest impact on the traditional machine vision market."
Oliver Senghaas: "With our new Tech Tip 'Embedded vision kit' we show how easily and quickly customers can build their own embedded vision project using our platform-independent camera drivers and our new pyuEye interface on a low-cost standard embedded platform (Raspberry Pi 3). We always consider the need for scalability of the application software for different performance levels, to demonstrate the peaceful coexistence of classical MV and embedded vision."
Christoph Wagner: "Embedded vision also plays an important role in IIoT. Such environments typically have many sensors and cameras, mounted in different locations, which continuously monitor practically every production step. Most of these sensors work with embedded vision technologies. They can then be used for a wide range of tasks, such as recognizing and identifying objects, positioning and handling workpieces, as well as defect inspections.
"We expect machine vision to be used in additional links along the industrial production chain in the future. The technology will thus encompass ever more user groups, which will present new challenges. The aspect of interoperability, i.e. communication with the overall system, is very important in this context. Standardized communication protocols, such as OPC UA, will be needed for this purpose.
"Another important current trend in the machine vision market is self-learning technology based on artificial intelligence, like deep learning. By using self-trained networks, customers save a great deal of effort, time, and money. For example, defect classes can be identified solely through reference images. Tedious programming for identifying different defect classes is therefore no longer necessary. In the industrial machine vision environment, deep learning is mainly used for classification tasks, which appear in many application areas, e.g. in the inspection of industrial goods or the recognition of components."
Michael Beising: "Embedded systems would of course give us access to new markets where standard vision was not easy to integrate. Especially the new SoCs, either with built-in FPGA (like the Zynq SoC from Xilinx) or cores for deep learning, open up fields which we can currently support only with PC-based systems."
Pleora
Ed Goffin, marketing manager at Pleora, told us: "On embedded, I'd say we encounter the move towards embedded in vision in two areas.
"In our traditional video interface business, we're seeing a move towards the use of embedded or single-board platforms for processing and analysis, versus a more traditional PC. Machine vision tasks or processes that are automated and repeated, including image and video processing within the vision system, are good candidates to be handled by an embedded processor.
"We've seen this with GigE, and maybe more so with USB 3.0, and especially in robotics where there are cost, maneuverability, and weight advantages to moving to an embedded computing platform. It also allows designers to better locate intelligence and processing for the vision system at different points in the network; in a roadside cabinet, up a gantry or mounted with a camera. In addition, power efficiencies help lower operating costs and reduce heat output to prevent the premature failure of other electronic components and increase reliability.
"Overall, embedded is playing an important role in helping move vision in two important directions. It has the potential to move the technology into a whole range of new markets that would previously have been inaccessible, primarily due to cost. There's obviously demand for lower-cost, easier-to-use vision technologies that can be used by non-vision experts. (In our case, vision is moving into a number of applications where it's not necessarily a good fit for our technology, but
I can see it as an appealing opportunity for camera manufacturers.) At the same time, embedded technologies will also be key as machine vision moves up the value chain into machine learning, IoT-based systems and applications. Just based on marketing and trade show activity, the traditional embedded suppliers see vision as a growth opportunity.
"The other area where we're intersecting with the traditional embedded world is in markets like security and defense, where system designers are trying to locate some image processing capabilities farther towards the edges of the network to help offset the burden on the CPU. Basic image processing functions, like image stitching or overlay, are performed as part of the video interface before the imaging data is passed on to a primary processing unit."
Gardasoft Final word goes to Peter Bhagat of Gardasoft Vision who said: “Gardasoft manufactures products which provide integration between the major components of a vision system. These include products for lighting control, lens control and trigger timing, and we now have products with lighting control and trigger timing combined for even higher levels of integration. Just as traditional machine vision systems benefit from higher levels of integration, there is also a need to integrate such capabilities for embedded vision applications. “For all machine vision implementations it is essential to fully understand the requirements of the application in order to determine the optimum vision solution. The implementation of board-level embedded vision solutions has, to date, been the remit of vision specialists, with the economies of scale offered by low-cost embedded components only being realized through volume manufacture of the pieces of equipment containing the vision system. This is certainly reflected in Gardasoft’s experience, where most of our activity in embedded vision applications has been for OEM customers. We have had a number of projects where size or close integration with other devices has been a key requirement. “The continued high level of interest in embedded systems is reflected as part of our R&D program. We are currently working on further miniaturisation of our control technology, which will be achieved in forthcoming products. This will open up new applications and opportunities and will enable systems to use faster timing and advanced techniques where size or other constraints were previously a limiting factor.
So, did EVE work? In a word, yes. And it is good news that it is intended to become an annual event, although how that will work alongside VISION in the same years remains to be seen. The other question is how long embedded vision will be able to keep a true, separate identity within the machine vision sector.
SPS IPC DRIVES AND 2018 CONFERENCE SEASON Editor Neil Martin highlights SPS IPC Drives and also looks ahead to the 2018 conference season
With the 2017 conference season winding down, we look to be finishing on a high note with SPS IPC Drives, the trade fair for electrical automation which is held from 28 to 30 November 2017 in Nuremberg.
It’s been a long year, with many good shows in the UK, Europe and the US, so it will be good to hear what SPS IPC Drives has to offer. After that, we will have to start looking ahead to 2018.

SPS IPC Drives is Europe’s leading exhibition for electric automation and it’s a chance for attendees to meet suppliers of electric automation technology from around the world. A key focus of the exhibition will be how Industry 4.0 is developing from a vision to reality. The organisers say that in the age of digital transformation, IT and automation are increasingly merging, and this will be reflected at this year’s event as never before. Alongside topic-based special display areas, there will be presentations, products and examples of applications, all devoted to digital transformation. And Hall 6 has a new thematic focus, which will address these new challenges in production technologies.

VDMA points out that in the field of electrical automation, machine vision is steadily gaining in importance. That is why, in addition to numerous individual machine vision exhibitors, the VDMA Machine Vision group is organizing a joint booth at SPS IPC Drives with eight companies (hall 3A, stand 151). Director of the VDMA Machine Vision Group Anne Wendel said: “Trade fair visitors have the opportunity to explore the entire spectrum of image processing solutions at the machine vision pavilion, ranging from software, optical systems and lighting to frame grabbers, sensors and cameras.”

Exhibitors participating in the VDMA Machine Vision Pavilion (hall 3A, stand 151) are:
• iiM AG
• MVTec Software GmbH
• BECOM BLUETECHNIX GmbH
• Rauscher GmbH
• Silicon Software GmbH
• The Imaging Source Europe GmbH
• SVS-VISTEK GmbH
• MBJ Imaging GmbH
• VDMA Machine Vision
• VISION / Messe Stuttgart
There are also going to be a number of forums and discussion points.

VDMA Machine Vision panel discussion: “Are customizable sensors and machine vision systems the answer to Industrie 4.0?”
When: Wednesday, November 29, 2017, 3:00 p.m.
Where: VDMA forum (hall 3, stand 668)
Participants:
• BAUMER Optronic, Dr Albert Schmidt
• STEMMER, Peter Keppler
• MVTec Software, Dr Olaf Munkelt
• Silicon Software, Dr Klaus-Henning Noffz
• Sick AG, Andreas Behrens

Vision Expert Huddles (hall 3A, stand 151)
VDMA Machine Vision will be organizing a new presentation format at SPS IPC Drives 2017. The goal is to bring experts from the machine vision industry together with trade fair visitors, customers and users. The experts will each give a presentation, followed by discussion and exchange (see the program below).
Vision Expert Huddles – Program

Tuesday, November 28, 2017

Automation & machine vision
10:30 a.m. Balluff & Matrix Vision: Standardized industrial networks for optimized image processing
11:00 a.m. MVTec: Successful business with state-of-the-art machine vision software
11:30 a.m. Silicon Software: Machine vision meets automation: inline image processing on the FPGA
12:00 p.m. Sick: Providing greater values of 3D vision to a broader customer group

Embedded vision
1:00 p.m. Imago Technologies: Accelerating product development thanks to cutting-edge technologies
1:30 p.m. Stemmer: Is embedded vision set to revolutionize machine vision?
2:00 p.m. Xilinx: Accelerating MV applications with an Acceleration Stack

Wednesday, November 29, 2017

Vision sensors & intelligent cameras
10:30 a.m. Sick: Customized image processing solutions with AppSpace
11:00 a.m. Rauscher: Smart cameras at the heart of Industrie 4.0 machine vision systems
11:30 a.m. Baumer: Intelligent sensors and cameras for quality assurance

Machine vision
1:00 p.m. Flir: Cooled thermal cameras for machine vision
2018 Summary & Events Calender Of course, 2018 will have as its highlight Vision 2018, which is scheduled for 6,7,8 November at Stuttgart. This is Europe’s largest machine vision trade fair and will of course be again a celebration of all that’s great in the sector. I had one leading industry expert say to me a few weeks back that Vision should now be a yearly show, not least because things are moving so fast, that two years is too large a gap. That night be the case, but it’s more likely to be the larger companies, with better resources, who might be arguing for such a change. But, let’s not get ahead of ourselves and Vision 2018 is still a year away. Before that we have a number of great shows that will be grabbing our attention. Here’s a selection of the one’s we’ll be looking out for: • A3 BUSINESS FORUM 17th - 18th January 2018 / Orlando, US
• SPS IPC DRIVES ITALIA 22nd - 24th May 2018 / Parma, Italy
• VISION CHINA 14th - 16th March 2018 / Shanghai New International Expo Center (SNIEC), Shanghai, China
• AUTOMATICA 2018 19th - 22nd June 2018 / Messe Munich, Munich, Germany
• EMBEDDED WORLD 2018 27th February - 1st March 2018 / Nuremberg, Germany
• THIRD EUROPEAN MACHINE VISION FORUM 5th - 7th September 2018 / Bologna Business School, Bologna, Italy
• THE VISION SHOW 10th - 12th April 2018 / Hynes Convention Center, Boston, US
• VISION 2018 6th - 8th November 2018 / Messe Stuttgart, Stuttgart, Germany
• UKIVA MACHINE VISION CONFERENCE 16th May 2018 / Arena MK, Milton Keynes, UK
BUSINESS STORIES
BALLUFF ACQUIRES MATRIX VISION

Sensor and automation specialist Balluff Group (head office in Neuhausen auf den Fildern, Germany) has acquired 75% of Matrix Vision. The deal was agreed by management last month and 25% of Matrix Vision is being retained by the previous shareholders. A price was not disclosed in the announcement.

Employing over 100, Matrix Vision will operate independently and with an unchanged brand in the field of image processing. The managing directors Uwe Furtner and Erhard Meier will continue to manage the company at its site in Oppenweiler.

A statement said: “Machine Vision will grow in importance in the future against the backdrop of Industry 4.0. As part of the Balluff Group, we will be able to service this growing market better, both nationally and internationally. The resulting long-term opportunities for our company are enormous, giving our workforce a clear perspective for the future.”

Uwe Furtner, Technical Director of Matrix Vision, said: “In the future, our product portfolio will complement the range of the Balluff Group in a field in which Balluff has played a relatively small role till now: Machine Vision.”

Balluff Managing Director Florian Hermle added: “For four years, we have been closely cooperating with Matrix Vision in a development partnership. One of our two camera-based product lines is from Matrix Vision. So we already know that we are well-suited to one another and our product lines are optimally matched.

“Matrix Vision lets us extend both our product portfolio in the area of camera-based sensor systems and our software development capacities.”
Furtner agreed: “Balluff and we are an excellent match. We already know each other very well from several years of collaboration in the development partnership. Our product ranges do not overlap, but rather complement each other perfectly.

“Many of our customers have production sites worldwide and expect local support from their suppliers. At some point Matrix Vision would have run into difficulties on that score, simply due to our size, but with the Balluff sales and service network, we now have access to a whole new range of possibilities.”

Founded in 1921 in Neuhausen auf den Fildern, Balluff, with its 3,550 employees, is a leading sensor and automation specialist. It is a family-owned company in its 4th generation and offers a wide portfolio of innovative sensor, identification and network technologies and software for integrated system solutions. In 2016 the Balluff Group reported revenues of around €378m. In addition to its headquarters in Neuhausen, Balluff has sales, production and development sites around the globe, as well as 37 wholly owned subsidiaries worldwide.

Founded in 1986 in Oppenweiler near Backnang, Matrix Vision is one of the major providers of image processing components in the German-speaking market. The company offers a wide range of frame grabbers, industrial cameras, intelligent cameras, video sensors, embedded systems and software in the industrial image processing sector. It also develops customized solutions for special requirements ranging from individual components to fully rounded functional units. Matrix Vision employs 100 people and generated revenues of €15m in 2016.

Picture shows, from left to right: Michael Unger, Florian Hermle, Katrin Stegmaier-Hermle, Uwe Furtner, Erhard Meier, Gerhard Thullner, Werner Armingeon
GERMAN MACHINE VISION FORECASTS CORRECTED UPWARDS

According to a new forecast from VDMA, machine vision will achieve an 18% increase in turnover instead of the initially expected 10%. This corresponds to an industry turnover of €2.6bn. The forecasts were part of an overall report which said that the growth forecast for the German robotics and automation industry has risen from 7% to 11%. Chair of the Board of VDMA Robotics + Automation Dr Norbert Stein said: “Both incoming orders and turnover development for the current year have greatly exceeded our expectations. For the first time, we will hit the mark of EUR 14 billion in sector turnover.”
VDMA said that all three segments of German robotics and automation are on a strong growth trajectory for 2017. As well as machine vision, German robotics is also proving to be much more dynamic than forecast: an initial projection of 8% growth in sales has been raised to 15%, with industry turnover pegged at €4.2bn. These results confirm statistics from the International Federation of Robotics (IFR), which indicate a global boom in robotics. Germany is the fifth largest robotics market in the world and Europe’s biggest by far. The largest subsector of German robotics and automation remains integrated assembly solutions, meaning intelligent assembly and production solutions. For 2017, VDMA forecasts a turnover increase of 6% to reach a new record of €7.4bn.
US MACHINE VISION MARKET POSTS BEST FIRST-HALF PERFORMANCE OF ANY YEAR

The latest industry figures show that the machine vision market in North America posted its best first-half performance of any year. The Association for Advancing Automation (Ann Arbor, Michigan, US) revealed that a total of $1.241 billion was sold in the first six months of the year, an increase of 11% over the same period in 2016. Machine vision component markets were up 11% in total to $177 million and systems increased 10% to $1.058 billion. Some notable growth rates were: Lighting (20% to $35 million), Smart Cameras (16% to $183 million), and Optics (16% to $20 million).

Experts expect software to trend up; cameras, lighting and imaging boards to be flat; and optics to trend down over the next six months. Additionally, expectations are for Application Specific Machine Vision (ASMV) systems to increase and smart cameras to remain flat in the next two quarters.

President of A3 Jeff Burnstein (pictured above) said: “Year over year, our membership has been on a steady growth trajectory, the result of more companies understanding, and embracing, the direct impact automation can have on their bottom line. We look forward to the continued advancement of our industry and helping companies of all sizes access the connections, information, and training they need to succeed with automation.”
REORGANISATION OF FUJIFILM OPTICAL DEVICES EUROPE

FUJIFILM Corporation and FUJIFILM Europe have decided to concentrate their European Optical Devices division into FUJIFILM Optical Devices Europe, Kleve, Germany. The change took effect on 1 October 2017. FUJIFILM will manage the optical devices business Europe-wide from this company. FUJIFILM Optical Devices Europe is responsible at its Kleve location for the sales, marketing and servicing of professional FUJINON lenses for TV and film productions, CCTV and machine vision, as well as FUJINON binoculars, in Europe, the Middle East and Africa. The management of FUJIFILM Optical Devices Europe GmbH is being taken over by the existing Senior Vice President Homare Kai and Christopher Brawley, managing director of FUJIFILM Electronic Imaging Europe.
Brawley explained: “The new structure enables FUJIFILM to respond more efficiently to the requirements in this market segment. High speed and flexible adaptation to market requirements are required for success in this demanding field of business.”

Since 2012, the optical business and the electronic imaging business have been carried out by a single business area at the FUJIFILM company headquarters in Japan. So that these synergies can be fully exploited in Europe as well, the two business areas are now being concentrated at the Kleve site. A company statement said: “The changes are intended to contribute to maintaining the growth of the company. In addition, synergies are expected through the closer connection to FUJIFILM Recording Media GmbH and FUJIFILM Electronic Imaging Europe GmbH, which have already been operating successfully from Kleve for many years.”
TECHNOLOGY M&A COOLS DOWN IN THE FIRST HALF, BUT LOOK DEEPER SAYS FIRM Technology M&A activity might have cooled down in the first half of 2017 says a new report, but there are nuances to be considered. The view comes from Hampleton Partners, an international mergers and acquisitions and corporate advisory firm for technology companies, today issued 11 technology M&A market reports for 2H 2017 in the key business segments of artificial intelligence (AI), augmented reality/virtual reality (AR/VR), automotive technology, cybersecurity, digital marketing, e-commerce, enterprise software, financial technology, internet of things (IoT), IT services, and SaaS & cloud services. Principal Partner of Hampleton Partners Miro Parizek said: “Overall, our research shows that technology M&A cooled down in the first half of 2017. “However, it is critical to be more nuanced and to look deeper into specific sectors and the related data when assessing deal activity and planning strategy.” Parizek adds: “M&A and funding is accelerating in select sectors, as more ‘non-technology’ or traditional companies and private equity firms move to acquire and invest in technology and innovation. Artificial intelligence, augmented reality/ virtual reality, and cybersecurity are three of the most promising sectors for technology M&A right now.”
Key findings in the technology M&A market reports for 2H 2017 include the following:

• Digital Marketing: Deal sizes in marketing application software M&A grew at the start of the year, with total transaction values for 1H 2017 up 20% to $1.7 billion versus the previous half-year period.

• E-Commerce: The global e-commerce industry is rapidly evolving, with European investors dominating another half-year period of regional dealmaking. In the last 30 months, European buyers acquired 63% of regional targets, compared to 32% acquired by North American investors.

• Enterprise Software: Global M&A volume increased 12% in 1H 2017 versus the previous half-year period, with earnings-based valuation metrics (EV/EBITDA) remaining stable at 14.5x. Political uncertainty in Britain had little impact on deal flow in enterprise software, as the number of UK deals grew by a modest 5% from the previous half year and accounted for 35% of all deals in Europe.
• Financial Technology: M&A deal activity in Fintech is up 8% in 1H 2017 and beginning to recover from its sharp drop in 2H 2016; however, it is still not at the levels registered from mid-2014 to mid-2016. Within the Online Financial Services sub-sector of Fintech, valuations are increasing as private equity purchasers focus on acquiring payment providers.
• Artificial Intelligence: Acquisitions of AI-related targets speed up dramatically, with deal volume increasing 179% versus the previous year. Total M&A activity relating to artificial intelligence now exceeds 100 transactions in the 24 months to June 2017, in concert with the growing media attention dedicated to the sector.
• Internet of Things: Intel, Verizon and ARM head up the list of top acquirers in IoT. 198 buyers were active, snapping up 239 IoT assets from 2015 through 1H 2017. While the median revenue multiple paid on disclosed transactions came in at 3.5x during that period, some deals were disclosed with EV/S ratios as high as 21x.
• AR/VR: Investment in augmented reality and virtual reality has shot up in recent years, with a majority of related M&A activity occurring in the US. In the last 12 months, nearly 80% of the $620+ million worth of deals in AR/VR were related to hardware development.
• IT Services: Of the top 50 highest-valued deals during 1H 2017, more than half were cross-border deals, building on the second half of 2016, when 40% of the top 50 transactions crossed national borders. Additionally, global private equity deal flow showed a marked turnaround: 48 private equity deals were announced in 1H 2017, doubling the number of deals private equity buyers closed during the previous six months.
• Cybersecurity: Headline-grabbing data thefts and government and corporate breaches underpin high growth in spending on cybersecurity, driving investment and M&A activity in the sector. Deal volumes remain high, with 80 security-related acquisitions tracked in 1H 2017. Valuations maintain a healthy level as well, with EV/S (i.e. revenue multiples) clocking in at 4.7x on disclosed transactions for the period 2015 through 1H 2017.
• Automotive Technology: European investors are setting the pace in automotive technology, with 59% of automotive technology companies acquired by European buyers compared with 37% purchased by North American investors.
• SaaS & Cloud: SaaS and Cloud sector picked up in 1H 2017 as deal flow in the information management and enterprise applications/networking sub sectors increased this year by 7%, as interest from buyout funds drove to the total value of $5.22 billion across the SaaS & Cloud sector.
HYPERSPECTRAL IMAGING PIONEER GETS €3.5M INVESTMENT

Specim Imaging (Oulu, Finland), a global pioneer in hyperspectral imaging technology and related products, has received a €3.5m investment from Bocap SME Achievers Fund II Ky. The hope is that Specim will double its current €10m revenues by 2020.

The company’s products are used in industrial, environmental and security applications. Its solutions serve the mining industry in drill core scanning and the production of mineral maps, and agriculture in screening crops for disease and infestation. They are also key components in state-of-the-art waste sorting robotics. Specim’s products also help with the authenticity inspection of artwork, the detection of explosives, and forensic investigation.

Over the last two years, Specim has invested around €6m in developing new-generation cameras. It plans to complete the rollout of a revolutionary new product family by the end of 2017. The products are designed particularly for industrial and professional needs, and offer easy access to hyperspectral imaging technology for industrial OEM clients for their own applications. The company will invest the new funding in further strengthening its position in the global markets.

Founded in 2012, Bocap is an independent private equity company dedicated to established, entrepreneur-led, high-growth SMEs. It prefers the role of active minority investor and champions its portfolio companies into successful growth. It is funded mainly by Nordic institutional investors such as life and pension insurance companies, foundations, pension funds and selected private investors.

Specim’s Chairman Risto Kalske said: “I am proud that as a high-tech company Specim will invest nearly as much in sales and marketing as in R&D. With this move it will take full commercial advantage of its superior knowledge, the entire R&D investment, and the new camera family.”

One of the founders of Specim, Timo Hyvärinen, added: “I have every reason to believe that with miniaturization, hyperspectral imaging will step into people’s everyday lives. Usability will undergo a revolution, and soon mobile equipment can easily be brought to the measurement sites for real-time use. Collecting samples, transporting them to the lab for analysis, and experiencing delays in getting the results will become history.”

Bocap partner Vilma Torstila said: “Specim is an excellent company which combines superior technological knowledge with the ability to commercialize it to meet sophisticated and ever-demanding customer needs. We are happy to enhance Specim’s commercial success, and accelerate future growth via this investment.”
PUBLIC VISION Editor Neil Martin takes a look at the current economic situation, and then examines the latest figures from Xilinx and Cognex
Despite a few wobbles, the main stock markets seem to be holding their own and resisting the temptation to take a dive. Naysayers are still predicting a crash, which doesn’t require much foresight, given that there’s always a next crash; it will come, it’s just a question of when. However, the market optimists see no reason for a crash anytime soon, as the same conditions that preceded the last major fall are not around today.
US markets are still near record highs and the economy, despite grumblings about the current occupant of the White House and who helped whom in the presidential election, appears to be in a healthy state. Interestingly, the US military appears to be busy ordering kit from machine vision companies as it gears up for a possible new age of fighting that will see ever more reliance on electronic kit.
Market observers are mixed on the prospects for the US market. The fact is that investors are experiencing the second longest economic expansion in the country’s history. And should the current economic expansion last for another 18 months, then it will be the longest on record. Which means we might not see the next bear market until mid-2019.

As for the UK market, it’s still worried about being left out in the cold as Brexit discussions in Europe seem to have all the excitement of a wet winter weekend in Scarborough. Inflation could start to climb over the 3% mark and that’s a major worry for the economy.
Whilst the UK is feeling the strain, the Europeans are showing that Brexit is not going to hold them back and on the contrary, growth is at its best since 2007. The latest statistics from Eurostat show that the eurozone’s economy grew by 0.6% in the three months to September, 2017.
What’s more, many individual economies, once sources of mirth, are showing some decent growth. QE is being geared back and the main problem seems to be low inflation, the opposite of what worries the Bank of England. But although Europe might seem to be at full steam ahead, there are still worrying signs of populism, which seems to argue that going back 200 years and creating separate states and principalities is the answer. Catalonia is doing its best to become a separate entity and there’s talk of two wealthy Italian regions who also want to go it alone. This trend must worry the EU leaders, so they must be hoping for a quick resolution to the ‘Spanish problem’.

But we were given a timely reminder of past events, as this period under review also saw the 30th anniversary of Black Monday.

Nick Dixon, Investment Director at Aegon, commented: “On 19 October 1987 financial markets across the world started to plummet in what became known as Black Monday. The UK market was down 25% and the US down 33% by the end of the subsequent week. The crash reflected a sudden change in market sentiment and the prospect of rising inflation. While the crash itself was short lived, it left a deep impression on many investors. Thirty years on and financial experts are again questioning whether markets are overvalued. Investors have experienced over eight years of strong growth, even in the face of unexpected political events like Brexit and Trump.
“In light of rich valuations Aegon has de-risked asset allocations in its Core Portfolios. Three key changes are worth highlighting: i) we have removed UK Property, as future rental growth looks subdued and could constrain capital values and aggregate returns; ii) we have reduced US equity exposure owing to elevated risks in US equity valuations versus other geographies, which now look more attractive; iii) we have raised cash weightings as a defensive measure in light of the aggregate balance of risks.

“While we do not expect dramatic falls of the nature we saw 30 years ago, elevated valuations have shifted the market environment considerably in recent months. Now is therefore the time to take a more cautious and circumspect approach.”

As always, we have to keep open minds, but as we sit here now, with the end of Q4 in sight, the prospects for 2018 currently look good. For the machine vision industry, it’s perhaps time to make hay whilst the sun shines.

Second quarter results from Xilinx

Xilinx is a US semiconductor company founded in 1984 and is perhaps best known for inventing the Field Programmable Gate Array (FPGA). Headquartered in San Jose, California, Xilinx is a public company, quoted on NASDAQ. On October 25 it announced its second quarter fiscal year 2018 results, marking the eighth consecutive quarter of revenue growth.

Sales of $620m for the second quarter of fiscal year 2018 represented an approximately 1% rise from the prior quarter and a rise of 7% from the second quarter of the prior fiscal year. This marks the eighth consecutive quarter of sales increases for the company, said management.

September quarter net income was $168 million, or $0.65 per diluted share. September quarter comparisons are shown in the table below:
GAAP Results (in millions, except EPS)

                                 Q2 FY 2018   Q1 FY 2018   Q2 FY 2017   Q-T-Q   Y-T-Y
Net revenues                     $620         $615         $579         1%      7%
Operating income                 $185         $180         $177         3%      5%
Net income                       $168         $167         $164         0%      2%
Diluted earnings per share       $0.65        $0.63        $0.61        3%      7%
Xilinx President and Chief Executive Officer Moshe Gavrielov told Wall Street: “Our multimarket diversification, technology leadership, and consistent execution are yielding sustainable results as we delivered our eighth consecutive quarter of revenue growth with strong profitability.
“Revenues from Advanced Products continued to be solid, increasing 21% from the same quarter a year ago, supported by accelerated growth from our highly innovative Zynq SoC platform, as well as from our industry-leading 20nm and 16nm technology nodes. At the 16nm technology node, we have shipped 34 unique products to well over 900 customers, denoting a sequential increase of 50% in the number of products and a 75% increase in the number of customers.”

The highlights of the company’s statement are:

• the Industrial, Aerospace & Defense end market set a quarterly record with revenues of $278 million, an increase of 17% from the same quarter a year ago, and constituted 45% of total Xilinx revenues. The success of this end market is a clear illustration of the robustness and diversification of the Xilinx product portfolio;

• the Advanced Products category continues to deliver solid revenue growth, posting an increase of 21% from the year-ago quarter. Broad-based growth was driven from the Zynq SoC platform and from the industry-leading 20nm and 16nm technology nodes. Zynq revenue increased 65% from the same quarter a year ago, with growth driven largely by applications in Advanced Driver Assist (ADAS), Industrial, and Aerospace and Defense. Revenues from the 20nm node increased more than 40% from the year-ago quarter and revenues from the 16nm node nearly quadrupled during the same period, reflecting broader customer and multi-market adoption;

• Xilinx made several significant announcements highlighting strong momentum in its Cloud Computing market expansion opportunity. Xilinx announced availability of its software-defined development environment, SDAccel, on Amazon Web Services (AWS) for use with F1 instances. In addition, Xilinx announced that Huawei has chosen the company’s high-performance Virtex UltraScale+ FPGAs to power its first FaaS instance as part of a new accelerated cloud service. Lastly, Alibaba Cloud, the largest cloud service provider in China, recently announced its next-generation FaaS F2 and F3 instances, based on Xilinx FPGAs;

• Xilinx, ARM, Cadence and TSMC announced a collaboration to build the first Cache Coherent Interconnect for Accelerators (CCIX) test chip in TSMC 7nm FinFET process technology for delivery in 2018. The test chip aims to provide a silicon proof point to demonstrate the capabilities of CCIX in enabling multi-core high-performance ARM® CPUs working via a coherent fabric to off-chip FPGA accelerators;
• Xilinx announced delivery of its Zynq UltraScale+ RFSoC family, a disruptive integration and architectural breakthrough for applications including 5G, cable and wireless backhaul. Based on Xilinx’s 16nm technology, the RFSoCs integrate RF data converters for up to 50-75% system power and footprint reduction. With silicon samples already shipping to multiple customers, the early access program for this product family is now available.

As for the business outlook for the December quarter of fiscal year 2018, the key points are:

• sales are expected to be approximately $615 - $645 million;
• gross margin is expected to be 69% to 71%;
• operating expenses are expected to increase to approximately $260 million;
• other income is expected to be approximately $4 million;
• December quarter tax rate is expected to be approximately 11 - 14%.
COGNEX - Table 1* (Dollars in thousands, except per share amounts)

                                         Revenue     Net Income from          Net Income from Continuing
                                                     Continuing Operations    Operations per Diluted Share
Quarterly Comparisons
Current quarter: Q3-17                   $259,739    $102,348                 $1.14
Prior year’s quarter: Q3-16              $147,952    $53,675                  $0.61
Change from Q3-16 to Q3-17               76%         91%                      87%
Prior quarter: Q2-17                     $172,904    $56,072                  $0.63
Change from Q2-17 to Q3-17               50%         83%                      81%

Year-to-Date Comparisons
Nine months ended Oct. 1, 2017           $567,585    $204,075                 $2.28
Nine months ended Oct. 2, 2016           $391,431    $111,574                 $1.29
Change from first nine months of 2016
to first nine months of 2017             45%         83%                      77%

*Table 1 excludes the results of discontinued operations, which relate to the company’s Surface Inspection Systems Division (SISD) that was sold on July 6, 2015.
Cognex

NASDAQ-quoted Cognex is maintaining momentum, reporting record quarterly revenue, net income and earnings per share from continuing operations for the third quarter of 2017 (ended October 1, 2017).

Founder and Chairman of Cognex Dr Robert J Shillman told investors: “What a spectacular quarter! Cognex reported record-breaking revenue, net income and earnings per share that far exceeded the prior records set just last quarter. And we were extremely profitable, with operating margin expanding to a record 42%, driven by significant high-margin revenue growth. I am proud of Cognoids everywhere for delivering such impressive results.”

CEO of Cognex Robert J Willett said: “Cognex’s remarkable performance is due to perseverance. Our ability to capitalize on the widespread adoption of machine vision is the result of many years of hard work by Cognoids around the world. It’s gratifying to see our efforts deliver such exceptional results.

“Our view of the future continues to be positive as more automation processes that include machine vision are needed to perform an increasing array of complex manufacturing tasks. We are investing to take advantage of the substantial potential that we see for our company going forward.”
Statement of Operations Highlights - Third Quarter of 2017
Quarter details (from the official statement)

• Revenue for Q3-17 grew 76% from Q3-16 and 50% from Q2-17. Revenue from the consumer electronics industry was a substantial contributor to growth both year-on-year and sequentially. Outside of electronics, revenue growth was strong in all geographic regions and in many industries, including automotive and logistics, when compared to Q3-16. Revenue outside of electronics declined slightly on a sequential basis because of the seasonal softness that Cognex typically experiences during the summer months.

• Gross margin was 76% for Q3-17 compared to 78% for both Q3-16 and Q2-17. Higher revenue from a material customer in Q3-17 was somewhat dilutive to the overall margin.

• Research, Development & Engineering (RD&E) expenses increased 40% from Q3-16 and 12% from Q2-17 as Cognex continued to invest in both current and new products. RD&E increased year-on-year due to additional engineering resources (including employees added from recent acquisitions), stock option expense and higher expenses for materials and supplies. RD&E increased on a sequential basis due to the bonus accrual as well as increased spending on materials and supplies.

• Selling, General & Administrative (SG&A) expenses increased 45% from Q3-16 and 16% from Q2-17. SG&A increased both year-on-year and sequentially largely due to higher personnel-related costs. Investments were primarily in the sales organization, but also included additions to G&A to support future growth. Commissions, demonstration equipment, bonuses and travel costs increased as a result of higher headcount and growth in the business. Expenses related to the company’s new ERP system and stock option expenses also contributed to the increase year-on-year.

• Investment and other income was $2,030,000 in Q3-17, $2,421,000 in Q3-16 and $1,969,000 in Q2-17. Investment income increased both year-on-year and sequentially as a result of higher yields and a higher average invested balance. Offsetting that increase in Q3-17 and Q2-17 is an expense associated with changes to the fair value of contingent consideration related to recent acquisitions. In Q3-16, the change in fair value generated income.

• The effective tax rate was 9% in both Q3-17 and Q2-17, and 5% in Q3-16. The rate was 18% in all periods presented, excluding discrete tax benefits related to the exercise of employee stock options and other items.
Revenue by geography and market (dollars in thousands)

                                      Three Months Ended                           Nine Months Ended
                                      Oct 1, 2017   Jul 2, 2017   Oct 2, 2016      Oct 1, 2017   Oct 2, 2016
Revenue                               $259,739      $172,902      $147,952         $567,585      $391,431

Revenue by geography:
Europe                                56%           36%           50%              44%           46%
Americas                              20%           33%           25%              26%           29%
Greater China                         13%           14%           13%              14%           13%
Other Asia                            11%           17%           12%              16%           12%
Total                                 100%          100%          100%             100%          100%

Revenue by market:
Factory automation                    97%           96%           96%              96%           95%
Semiconductor and electronics
capital equipment                     3%            4%            4%               4%            5%
Total                                 100%          100%          100%             100%          100%
Financial Outlook - Fourth Quarter of 2017

• Revenue for Q4-17 is expected to be between $170 million and $180 million. While this range represents a decline from Q3-17 due to the timing of large orders from the consumer electronics industry, it nevertheless represents expected growth exceeding 30% year-on-year.
• Gross margin is expected to be in the mid-to-high 70% range.
• Operating expenses are expected to decline by low single digits on a sequential basis.
• The effective tax rate is expected to be 18% before discrete tax items.
Cast a bigger shadow. We plan. We create. We write. We design. We develop. But best of all, we get you noticed. Give us a call if you need some Wow in your business.
thewowfactory.co.uk
01622 851639
THE WOW FACTORY
looks familiar. performs better.
Our brand new SXGA digital cameras are worthy of everyone’s attention. Available with a choice of GigE and USB 3.0 interfaces, they’re the clear choice for a wide range of imaging applications, from general inspection and alignment to robotics, medical and ITS. With a resolution of 1.6 megapixels using our latest next-generation Pregius sensor the new XCU-CG160 & XCG-CG160 ensure crisply detailed SXGA colour or monochrome images from 75fps to over 100fps. They’re an ideal upgrade from your current CCD cameras, with a familiar form factor – plus all the accuracy and long-term durability you’d expect from Sony. Take a closer look today at image-sensing-solutions.eu
Digital Interface GigE Vision Digital Interface USB3.0 Vision