CEI - Meet the real cable guy | MVPRO 16 | September 19


GOOD VIBRATIONS SHAKE UP PICK AND PLACE

ENHANCING INDUSTRIAL CYBER SECURITY

20 BEST ROBOTIC STARTUPS UNVEILED

ISSUE 16 - SEPTEMBER 2019

CEI – MEET THE REAL CABLE GUY

mvpromedia.eu

MACHINE VISION & AUTOMATION



MVPRO TEAM

Lee McLaughlan, Editor-in-Chief
lee.mclaughlan@mvpromedia.eu

Alex Sullivan, Publishing Director
alex.sullivan@mvpromedia.eu

CONTENTS

4  ED'S WELCOME - Opportunity Knocks
6  INDUSTRY NEWS - Who is making the headlines
10 PRODUCT NEWS - What's new on the market
14 SVS-VISTEK - See, Recognize, Grasp - case study with Asyril
18 FUJIFILM - Universal machine vision lens series for 1.1" sensors - without vignetting
20 BASLER - MED ace adds USB 3.0 9MP and 12MP to range
22 CEI - CEO Ray Berst reveals what it takes to be the best
26 XILINX - Industrial cyber-security starts at the embedded hardware platform
28 ALRAD - Ian Alderton shares the current market trends in machine vision
30 FLIR - The heat is on - thermal imaging cameras
32 LMI - Measuring circular edges with smart 3D technology
34 GARDASOFT - Making the most of line scan imaging
36 INSPEKTO - How autonomous machine vision saves time and money
38 SONY - How Sony's IMX250MZR has polarised the machine vision industry
44 MVTec - A new era beckons
46 EMVA - Dr Johannes Meyer on his future after winning the 2019 EMVA Young Professional Award
48 ROBOT UNION - The 20 best European robotics startups unveiled

Cally Bennett, Group Business Manager
cally.bennett@mvpromedia.eu

Rachel Bray, Head of Design
rachel.bray@cliftonmedialab.com

Georgie Davey, Junior Designer
georgie.davey@cliftonmedialab.com

Visit our website for daily updates

www.mvpromedia.eu


MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2019. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.



OPPORTUNITY KNOCKS

A very good friend of mine uses the mantra 'Always say yes'. It doesn't matter what the offer is: take it. He has two reasons for doing so. One, the opportunity may never come along again; two, you'll never think 'what if...'. So, when the opportunity came along to become the new editor of MVPro Machine Vision and Automation, it was a very simple decision.

It's great to be part of a team that is passionate about the industry and proactively promotes the excellent work and innovations happening within the sector. I want to share your successes, products and news through our various platforms. I also want to get to know the people making the news, those who shape the industry, and I'm looking forward to crossing paths at the various shows, forums and events over the coming months. You can also drop me a message by email: lee.mclaughlan@mvpromedia.eu

It has been straight in at the deep end bringing you this issue, which has seen the full integration of RoboPro. Rather than two magazines fused together, they have been blended to reflect how the two sectors themselves are integrating.

There is a lot crammed into this issue, covering the broad spectrum of news and views from across the industry. We bring an in-depth, informative and engaging interview with CEI president Ray Berst; we discover what the future holds for MVTec's Dr Wolfgang Eckstein; we meet the latest winner of the EMVA Young Professional Award, Dr Johannes Meyer; and we grab a quick 10 minutes with Alrad's Ian Alderton. There are also some fascinating insights into new developments and products across the MV and automation sector that are well worth a read.

Until next time...

Lee McLaughlan

Lee McLaughlan, Editor
lee.mclaughlan@mvpromedia.eu
Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB

MVPro: B2B digital platform and print magazine for the global machine vision industry
www.mvpromedia.eu



HIGH END IMAGING CAMERAS | GRABBERS | PROCESSING

T: +49 (0) 8142 44841-0 | W: www.rauscher.de | E: info@rauscher.de


INDUSTRY NEWS

APPLIED MOTION PRODUCTS TO CONSTRUCT NEW HEADQUARTERS IN SILICON VALLEY

Motion control and automation solutions provider Applied Motion Products is to build a new corporate headquarters in Morgan Hill, on the southern tip of Silicon Valley. The new headquarters will consolidate all US operations, currently conducted in two separate buildings in Watsonville, bringing together production, warehouse, operations, engineering, customer service and executive management in one location for greater efficiency and synergy among the different departments. The larger facility will accommodate growth in the design and manufacture of motion control products, including step motors, servo motors, drives and controls, to meet growing demand while developing new innovations for different industries.

"The new corporate headquarters marks a milestone in Applied Motion Products' growth," said Don Macleod, President and CEO. "The size, layout and location will allow us to optimise operations, better serve customers, accommodate business expansion and proactively collaborate with other tech companies in Silicon Valley." Construction is expected to be completed by December 2020.

MTC APPOINTED LEAD AUDITOR FOR UK ROBOTICS

The Manufacturing Technology Centre has been appointed a lead auditor by the British Automation and Robot Association (BARA), the trade body which promotes the use of robotics in British industry. The MTC, based in Coventry, will have lead responsibility for auditing UK robot integrators seeking certification from BARA, and engineers from across the High Value Manufacturing Catapult centres will be available to support the scheme.

The audit will ensure that UK companies are qualified to integrate robotics into a production line. Certification will demonstrate that they have appropriate skills and procedures, following a rigorous audit and on-site checks.

Jeremy Hadall, chief engineer for intelligent automation at the MTC, said the appointment was an important step in growing the capability and capacity of UK robotic integrators. "Over the past decade there has been a three-fold increase in the number of robots sold in the UK, with some analysts predicting that the global market for industrial robots could become a £30 billion industry by 2025. This has prompted the need for an industry benchmark to evaluate integrators' technical knowledge and safety practices," he said. "As things currently stand, anyone can claim to provide systems integration services, but that doesn't necessarily mean they are trained or competent to integrate a robot into a production line. The new audit system will offer end-users peace of mind that the company and integrator they have appointed to carry out the work has been independently verified."



Six Essential Considerations for Machine Vision Lighting

1. Optimise the image at source

Image processing software generates intelligence from the camera image. But software cannot deliver information that has not been captured by the camera sensor. Good management of illumination is essential to creating an optimal image and maximising system performance. Gardasoft Vision has used its specialist knowledge to help machine builders achieve innovative solutions for over 20 years.

To read more about the Six Essential Considerations for Machine Vision Lighting see www.gardasoft.com/six-essential-considerations

Semiconductor | PCB Inspection | Pharmaceuticals | Food Inspection

Telephone: +44 (0) 1954 234970 | +1 603 657 9026
Email: vision@gardasoft.com
www.gardasoft.com



NEW EUROPEAN DATACENTRE SURVEY HIGHLIGHTS KEY INDUSTRY ISSUES

Business Critical Solutions (BCS) has identified the key areas of concern of more than 300 of Europe's senior datacentre professionals in its 10th annual Summer Report. Continuing unprecedented demand for new datacentres, fears around the shortage of skilled professionals, concerns about the future disruption of 5G, and the limited impact of Brexit are some of the key findings BCS has uncovered.

The report highlights the rising demand for datacentres, with almost two thirds of users exceeding 80 per cent of their capacity today, 70 per cent having increased capacity in the last six months and almost 60 per cent planning to increase capacity next year. This demand is currently being driven by cloud computing, with over three quarters of respondents identifying 5G and Artificial Intelligence (AI) as disruptors for the future. With industry predictions that edge computing will have 10 times the impact of cloud computing, half of respondents believe it will be the biggest driver of new datacentres.

On the supply side, there are concerns that a shortage of sufficiently qualified professionals at the design and build stages will cause a bottleneck, with 64 per cent of datacentre users and experts believing there is a lack of skilled design resource in the UK. AI and machine learning may help to mitigate these issues, with nearly two thirds of respondents confident that datacentres will use them to simplify operations and drive efficiency.

James Hart, CEO at BCS, said: "As always this report makes for fascinating reading and I was encouraged by the overwhelmingly positive sentiment to forecast growth and the limited impact of Brexit. The fact that half of our respondents believe that edge computing will be the biggest driver of new datacentres tallies with our own convictions.

"We believe that the edge of the network will continue to be at the epicentre of innovation in the datacentre space and we are seeing a strong increase in the number of clients coming to us for help with the development of their edge strategy and rollouts."

To read the full report go to http://www.bcs.uk.com/summer-report-2019/

FANUC TO HOST OPEN HOUSE

FANUC is to host its first ever Manufacturing, Automation and Digital Transformation Open House to answer questions on automation in UK manufacturing. Taking place at its state-of-the-art UK headquarters near Coventry on 29-31 October 2019, the event is open to UK industry to understand how automation can improve productivity across manufacturing. Displaying a number of demonstrations, as well as hosting industry debate, manufacturers can sign up to attend the event via the FANUC website.

Recent figures suggest that UK automation is currently falling behind other leading manufacturing nations, which is having an impact on its ability to keep pace in terms of productivity. To illustrate the point: there are just 71 industrial robots per 10,000 workers in the UK, positioning it behind 14 other European countries. In contrast, Germany has 309 units, contributing to a production rate which is 30 per cent higher per hour than the UK's.

Tom Bouchier, managing director of FANUC UK, said: "Many in UK manufacturing remain suspicious of automation, with a common misconception being that it equates to a high up-front cost. However, automation solutions are accessible to everyone, from family-run subcontracting businesses through to multi-national organisations.

"FANUC's inaugural Open House will showcase a range of technologies, seeking to address misunderstanding within UK industry, and highlight how turning to automation can drive significant and tangible productivity."

One area of automation which will be demonstrated is collaborative robotics (cobots). With built-in sensors and vision systems, cobots are capable of working alongside human operators in a variety of different operations. Cobots are already in use in a number of processes in the UK, across a range of industries. A team of experts will also be on hand to talk visitors through how these various pieces of technology can improve factory productivity, highlighting how the manufacturing process can be optimised.

To attend FANUC's inaugural open house event please visit: https://one.fanuc.eu/fanucoh2019



USB3 LONG DISTANCE CABLES FOR MACHINE VISION

Active USB3 long-distance cables for USB3 Vision. CEI's USB3 BitMaxx cables offer the industry's first stable plug-and-play active cable solution for USB3 Vision, supporting full 5 Gbps USB3 throughput and power delivery up to 20 meters in length, with full USB2 backward compatibility.

1-630-257-0605 www.componentsexpress.com sales@componentsexpress.com


PRODUCT NEWS

FIRST ALVIUM CAMERA SERIES MODELS COME TO MARKET Allied Vision has started production of Alvium 1500 C and 1800 U - the first series powered by ALVIUM® Technology. The company has released three 1500 series models with a MIPI CSI-2 interface and one 1800 series model with USB3 Vision interface. In the following months, the range of Alvium cameras will be continuously expanded with further models and sensors.

UNIQUE ASIC FOR INDUSTRIAL EMBEDDED VISION

The Alvium camera series is an innovative camera platform that combines the advantages of embedded sensor modules with the performance of industrial cameras for image processing: extensive functions for image correction and optimization, a large selection of state-of-the-art sensors, intelligent energy management, and a cost-optimized and compact design. The camera series is based on ALVIUM® Technology, an Application-Specific Integrated Circuit (ASIC) with integrated Image Signal Processor (ISP) and Image Processing Library (IPL).

THE ALVIUM 1500 SERIES

The Alvium 1500 series is the perfect camera for easy hardware and software integration in embedded applications. All models are equipped with a MIPI CSI-2 interface. The Alvium 1500 series offers a basic feature set. Software integration can be done via Video4Linux2 (V4L2) into GStreamer and OpenCV, or via direct register access. Open-source drivers for selected processor architectures are provided for V4L2 support.
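As an illustration of that integration path (the device node and capture mode below are assumptions for the sketch, not Allied Vision specifics), a V4L2 camera can be handed to OpenCV through a GStreamer pipeline string:

```python
# Sketch: build a GStreamer pipeline string for a V4L2 camera device and
# hand it to OpenCV. The device node (/dev/video0) and the 2592x1944@67
# capture mode are illustrative assumptions, not Allied Vision specifics.
def v4l2_pipeline(device="/dev/video0", width=2592, height=1944, fps=67):
    """Return a GStreamer pipeline string usable with cv2.VideoCapture."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! appsink"
    )

pipeline = v4l2_pipeline()
print(pipeline)

# With an OpenCV build that includes GStreamer support, capture would be
# opened as:
#   import cv2
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```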

1500 SERIES MODELS AT A GLANCE

Camera model     | 1500 C-050         | 1500 C-120        | 1500 C-500
Sensor           | ON Semi PYTHON 480 | ON Semi AR0135CS  | ON Semi AR0521
Resolution       | 0.5 Megapixel      | 1.2 Megapixel     | 5.0 Megapixel
Pixels           | 800 × 600          | 1280 × 960        | 2592 × 1944
Pixel size (µm)  | 4.8 × 4.8          | 3.75 × 3.75       | 2.2 × 2.2
Optical format   | Type 1/3.6         | Type 1/3          | Type 1/2.5
Shutter          | Global Shutter     | Global Shutter    | Rolling Shutter
Framerate        | 116 fps            | 50 fps            | 67 fps
Interface (all models): MIPI CSI-2 D-PHY with 1, 2 or 4 lanes at 1.5 Gbps per lane
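As a back-of-the-envelope check on the interface figures in the table (a sketch that assumes raw 8-bit pixels and ignores CSI-2 protocol overhead and blanking), the per-model data rate can be compared against the 1.5 Gbps per-lane line rate:

```python
import math

# Back-of-the-envelope lane check for the table above: raw 8-bit pixel
# data rate per model versus the quoted 1.5 Gbps per MIPI CSI-2 lane.
# Protocol overhead and blanking are ignored, so this is only a sketch.
models = {
    "1500 C-050": (800, 600, 116),
    "1500 C-120": (1280, 960, 50),
    "1500 C-500": (2592, 1944, 67),
}

LANE_GBPS = 1.5  # per-lane line rate from the table

lanes_needed = {}
for name, (w, h, fps) in models.items():
    gbps = w * h * fps * 8 / 1e9  # raw bits per second
    lanes_needed[name] = math.ceil(gbps / LANE_GBPS)
    print(f"{name}: {gbps:.2f} Gbps raw -> {lanes_needed[name]} lane(s)")
```

On these assumptions the two global-shutter models fit on a single lane, while the 5-megapixel model needs at least two of the four available lanes.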




THE ALVIUM 1800 SERIES The first model of the 1800 series is equipped with USB3 Vision interface. Models with MIPI CSI-2 are planned. The 1800 series can be used for both industrial embedded vision and machine vision applications. With an extended range of functions for image correction and optimization as well as various trigger functions, the camera series combines the advantages of classic industrial cameras with the advantages of integrated sensor modules. It opens new possibilities for the user to switch from PC-based image processing applications to embedded systems.

1800 SERIES MODELS AT A GLANCE

Camera model    | 1800 U-500
Sensor          | ON Semi AR0521
Resolution      | 5.0 Megapixel
Pixels          | 2592 × 1944
Pixel size (µm) | 2.2 × 2.2
Optical format  | Type 1/2.5
Shutter         | Rolling Shutter
Framerate       | 67 fps
Interface       | USB3 Vision


EASY INTEGRATION

The ALVIUM® Technology ASIC supports all common sensor interfaces and is designed for a wide range of current and future image sensors with resolutions from VGA to 21 megapixels. For the CSI-2 models, a single driver directly supports all camera models. With minimal development effort, different cameras can be tested with different sensors, different resolution variants of a system can be developed, or existing systems can be converted to new sensors.

In close partnerships, Allied Vision develops Vimba CSI-2 drivers for Alvium cameras. A number of embedded boards, such as NXP i.MX 6/8-based boards and the Nvidia Jetson boards, are initially supported. The USB variants can be easily integrated with the established Vimba Suite on Windows, Linux and Linux ARM platforms. MV

https://www.alliedvision.com/en/products/embedded-vision-cameras.html

NEW CA Series Fixed Focal Length Lenses

TECHSPEC® CA Series Fixed Focal Length Lenses are designed for high-resolution, large-format sensors. Covering APS-C format sensors with a 28 mm diagonal image circle, these lenses feature a TFL mount. TFL mounts use an M35 x 0.75 thread with a 17.5 mm flange distance, and offer the same flange distance, robustness and ease of use as a C-mount. Find out more at

www.edmundoptics.eu/CAseries


UK: +44 (0) 1904 788600 I GERMANY: +49 (0) 6131 5700-0 FRANCE: +33 (0) 820 207 555 I sales@edmundoptics.eu



EYEVISION 3D NATIVE SUPPORT FOR INTEL REALSENSE

Eye Vision Technology has launched 3D software to provide an easy-to-use solution for industrial 3D and robot vision applications. EyeVision 3D supports the Intel RealSense sensor for 3D image processing applications such as robot vision or bin-picking. Thanks to native support for the RealSense sensor, less powerful CPUs can be used for complex 3D applications. Thanks to drag-and-drop programming, the EyeVision 3D software can solve 3D image processing applications even without previous knowledge of the subject. Several ready-made projects are available as guidelines and are adaptable to your existing project's needs. The EyeVision software can be downloaded as a trial version, allowing the user to experiment and gain experience in the field of 3D as well as 2D image processing.

STARTER KIT FOR THE EYEVISION SOFTWARE

With the trial version of the EyeVision software you can test the complete command set of the software, with either industrial image processing cameras or just a webcam. You can create inspection programs to see if you can solve your applications with the software tools at hand. There are options to test 1D, 2D, 3D and thermal imaging applications with a professional image processing tool, plus the possibility to test the complete command set for the solution of your application. With its deep learning and machine learning algorithms, it also offers an intelligent tool for solving complex tasks. For an optimal image processing solution, the camera is as important as the fitting lens and illumination for a stable measurement and inspection result. MV

www.evt-web.com

FUJIFILM LAUNCH HIGHLY INTEGRATED LONG-RANGE SURVEILLANCE SYSTEM

FUJIFILM presents the new long-range camera module FUJINON SX800 to the surveillance systems market - its first foray into the sector.

With a powerful 1/1.8" image sensor, a long focal length range of 20 mm to 800 mm and state-of-the-art image processing technology, the FUJINON SX800 is ideally suited for aerial surveillance. Long-range surveillance systems must cope with special challenges: for optics with long focal lengths, the smallest vibrations are enough to compromise the image information; heat haze or fog also impair image quality; and a focus drive that is too slow leads to the loss of safety-relevant information. With the new FUJINON SX800, FUJIFILM has found a way to minimize the impact of these challenges. Instead of developing a surveillance camera and a separate matching lens, the concept of a fully integrated system consisting of camera and lens has been realized.

With a full HD camera and an optically stabilized 40x zoom lens from FUJINON, two high-performance components are integrated into one system.

In addition to the high-quality zoom optics, the FUJINON SX800 has a powerful combined optical and electronic image stabilization mechanism that provides angle correction of up to ±0.22 degrees. The integrated high-speed autofocus delivers a sharply focused image in less than a second, while a fog filter and heat haze reduction technology help prevent weather interference.

The system provides consistently sharp images, even of objects several kilometres away. In addition, the integrated design of the new FUJINON SX800 reduces the high adjustment effort normally required for camera installation. As of Q3 2019, the FUJINON SX800 will be available both as a mobile stand-alone device and as a system that can be integrated into a pan-tilt head. MV



Cognex Robotic Guidance Solution for UR Robots Cognex® has developed a URCap™ solution with Universal Robots to guide users through communication and hand eye calibration with UR Robots. The solution is designed for Cognex In-Sight® 2D machine vision systems and leverages Cognex’s world class vision tools for any robot application. The system can be used to: ▪ GUIDE the robot to specific coordinates ▪ INSPECT the part after manipulation ▪ OUTPUT valuable data for downstream analysis

www.cognex.com/URCap

Visit us at PPMA, Booth C46


SPONSORED

SEE, RECOGNIZE, GRASP

The goal of the Swiss company Asyril is to improve the performance of assembly robots. It uses sophisticated image processing systems to facilitate the gripping of bulk material components by robots in production, relying on an innovative idea and on industrial cameras from SVS-Vistek, writes Stefan Waizmann.

At almost every automation trade fair in recent years, various companies' attempts to realise the famous "reach into the box", i.e. the gripping of unordered components by a robot, can be seen. Despite enormous progress in the field of robotics and image processing, this task still poses a great challenge. The reasons are obvious: before a robot can grip a component, an image processing system must first reliably recognise it, calculate its orientation, and then communicate the position and orientation of the gripping points to the robot. In conventional technology this is still a slow, multi-stage process (recognition, gripping, correct depositing, gripping with correct orientation). If the components to be gripped are chaotically mixed up and partially concealed, the safe and fast gripping of individual parts often becomes a complex and slow process.

Asyril has taken a new approach to this task, which is frequently encountered in industry. The Swiss company builds fast, highly efficient feeding systems for 'pick and place' robots using a trick that is simple at first glance but very innovative in detail. The bulk material objects lying next to and on top of each other in a box are guided via a feeding hopper to a vibration platform, where they are separated and placed in a position that allows easy access by the robot.

Asyril's flexible Asycube feeders, in combination with SVS-Vistek's EXO series cameras, provide a powerful alternative to "reaching into the box" and increase the productivity of the robots used. (Image source: Asyril)

VIBRATION IN THREE AXES

The basic idea goes far beyond conventional mechanical systems such as vibrating pots, explains Asyril product manager Aymeric Simonin: "The special feature of our high-performance feeding systems is that the results of an integrated image processing system are used to control the vibrations of the platform in such a way that the objects are separated. The specialised vision system delivers the necessary data almost in real time, ensuring that the parts are isolated in a controlled manner and brought into a gripping position that is optimal for the robot. After separation by 'intelligent vibrations', the image processing system communicates the position and orientation of the components to be optimally gripped to the 'pick and place' robot, for which access is child's play. In order to optimize the speed of object detection, the system sends the information about the first detected, well-placed components to the robot before the entire image is evaluated."

The core of the image processing system used in the current Asycube feeders is a camera from SVS-Vistek's EXO series. (Image source: SVS-Vistek)
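The "report early" behaviour Simonin describes, sending the first well-placed parts to the robot before the whole image is evaluated, can be sketched as a generator that yields candidates as soon as they pass a grip check. All names, fields and thresholds below are illustrative, not Asyril's actual interface:

```python
# Sketch of the "report early" pattern described above: candidate parts
# are handed to the robot as soon as they pass a grip check, instead of
# waiting for the entire image to be evaluated. All names, fields and
# the clearance threshold are illustrative.
def grippable_parts(detections, min_clearance=5.0):
    """Yield (x, y, angle) for well-placed parts as soon as they are found."""
    for x, y, angle, clearance in detections:
        if clearance >= min_clearance:  # enough free space around the part
            yield (x, y, angle)

# Synthetic detections: (x_mm, y_mm, angle_deg, clearance_mm)
detections = [
    (12.0, 40.0, 90.0, 7.5),  # well separated: grippable
    (18.0, 42.0, 15.0, 1.2),  # too close to a neighbour: skipped
    (55.0, 10.0, 0.0, 6.1),   # well separated: grippable
]

first = next(grippable_parts(detections))  # robot can start moving now
print(first)
```

Because the generator is lazy, the first grippable pose is available before the remaining detections are examined, which is the point of the optimisation described in the quote.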




The technical basis for this procedure is a flexible feeder called Asycube. Asyril developed this innovative, patent-protected 3-axis vibration technology itself, manufactures it in-house and uses it in its high-performance feeding systems. The high-quality actuators cause a vibration platform to vibrate, which can be controlled in terms of strength, frequency and duration, allowing the components to move quickly and accurately on the vibration platform.

ECONOMICAL IMAGE PROCESSING

The second core element of Asyril's flexible feeder solution is the integrated SmartSight vision system, which assesses the quality of the separation and determines the positions of the next optimally positioned parts with knowledge of the robot gripper's capabilities. "An economical design was also important to us for this part of the overall system," stresses Simonin.

After initial systems based on SVS-Vistek's ECO cameras, Asyril opted for several camera models from the EXO series with resolutions between 1.6 and 12 megapixels. In addition to image acquisition, these cameras also take over control of the light and thus make an additional strobe controller obsolete. "This enabled us to reduce the hardware costs for the entire system and to operate incident and transmitted light with short flash times directly from the camera's power outputs," says Simonin, describing the image processing setup. The timings for light and exposure come directly from the camera, which controls the electrical processes and the integrated four-channel LED driver with its sequencer. Light, sequencer and camera are controlled via a single programming interface.

"Our technology is very flexible and is suitable for loose parts and components of all geometries with sizes ranging from less than 0.1 mm to 150 mm," says Simonin. The feeders enable extremely gentle feeding of parts, which can be a decisive criterion depending on the application.


Asycube feeders are very flexible and are suitable for extremely gentle feeding of loose parts and components of all geometries with sizes from less than 0.1 mm up to 150 mm. (Image source: Asyril)

Thanks to their modular design, Asycube feeders can be flexibly and quickly adapted to the properties of the objects. In addition to easily exchangeable hardware modules, this configuration flexibility is also ensured by easy-to-use, PC-based image processing. "When switching to other products, the advantages of a programmable feeder become particularly obvious: configuration is carried out very quickly via software and saves expensive hardware set-up times. This is a big advantage, especially in markets with very short product life cycles or small series," added Simonin.




EXCELLENT PARTNERS

For the realisation of the SmartSight vision system integrated into the Asycube feeders, Asyril works together with Fabrimex from Volketswil, Switzerland, who, as partners of SVS-Vistek, complete the innovative camera technology into tailor-made optical solutions from a single source. Asyril's development enables robots to access individual parts or bulk materials more quickly, which leads to considerable increases in efficiency.

"We are rooted in the Swiss watch industry with its high demands, but the advantages of our technology have now also proven themselves in many other markets such as the automotive, medical and electronics industries," says a delighted Simonin. "With Asycube SmartSight, we can offer users a fast alternative to the still slow, complex reach into the box and thus increase the productivity of the robots used."

The advantages of material feeding through the innovative Asycube solutions are paying off in other ways, too: at the MOTEK trade fair in Stuttgart at the end of 2018, Asyril was awarded a prize in the "Components for Handling and Assembly" category. MV

The vibration platform of the Asycube feeders enables controlled movement and separation of bulk material objects. (Image source: Asyril)

CONTACT DETAILS W: www.svs-vistek.com | W: www.asyril.com

SVS-Vistek has recently expanded its range of industrial cameras by 10 new USB3 models of the EXO camera series (exo342, exo367, exo387) with resolutions of 31, 19 and 17 megapixels. They are based on the innovative Pregius 2 CMOS sensors from Sony, which are highly light-sensitive with large, square pixels of 3.45 µm edge length and deliver an extremely high dynamic range. The new products cover sensor formats up to APS-C and Four-Thirds. Due to their pixel size, Pregius 2 sensors can be operated with many cost-effective lenses even at high resolutions. For the high resolutions, the EXO series offers variants with M42 mount and MFT mount for focusable lenses. SVS-Vistek thus offers its customers an optimum selection from a wide range of lenses for every task and enables economical solutions from a single source.

Despite their high resolutions, the new EXO cameras allow frame rates of 11.5, 18.5 and 21.5 frames/s with a USB3 bandwidth of maximum 360 MB/s net. This leaves enough room for the subsequent image evaluation until the next production cycle or the next object. Even higher frame rates will be possible in the HR series with the high-performance 10GigE and CoaXPress interfaces.

The EXO cameras have an integrated 4-channel LED flash controller, which saves users an additional device. Extensive sequencer functions, the milled housing with extraordinary sensor and adjustment quality, as well as excellent temperature management, ensure constant results over a wide temperature range. In special tracer versions of the EXO cameras, low-cost MFT (Micro Four Thirds) lenses can be controlled via GenICam commands, allowing complete adjustment of focus, zoom and aperture to new tasks within milliseconds. All timings for sensor, illumination and lens come from a single source and are controlled via a single GenICam interface. The compact footprint and image quality qualify these new EXO cameras for many high-resolution applications, among others in apparatus engineering, traffic engineering, photogrammetry, surveying, aerial mapping, high-end security technology, as well as solar, wafer and display inspection.
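The quoted frame rates line up with the stated net bandwidth. As a quick cross-check, assuming 8-bit pixels (1 byte per pixel), the bandwidth-limited frame rate is simply the net throughput divided by the pixel count:

```python
# Rough cross-check of the quoted EXO frame rates against the stated
# 360 MB/s net USB3 bandwidth, assuming 8-bit pixels (1 byte per pixel).
NET_MB_PER_S = 360  # net USB3 throughput quoted above

resolutions_mp = {"exo342": 31, "exo367": 19, "exo387": 17}

max_fps = {name: NET_MB_PER_S / mp for name, mp in resolutions_mp.items()}
for name, fps in max_fps.items():
    print(f"{name}: ~{fps:.1f} fps")
```

This yields roughly 11.6, 18.9 and 21.2 fps, consistent with the quoted 11.5, 18.5 and 21.5 frames/s, so the cameras are running close to the interface limit.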



24.-25.10. EUROPEAN EMBEDDED VISION CONFERENCE ICS Stuttgart, Germany

EVE 2019 will give insights into the capabilities of hardware and software platforms; will present applications and markets for embedded vision and will create a platform for the exchange of information between designers and users.

Organiser

www.embedded-vision-emva.org


SPONSORED

FUJIFILM UNVEIL NEW SERIES OF VIGNETTE-FREE MACHINE VISION LENSES FOR 1.1" SENSORS

The FUJINON CF-ZA-1S series with six fixed focal length lenses is the universal solution for industrial cameras with C-mounts and modern image sensors with an optical format of up to 1.1" and small pixels from 2.5 µm.

FUJINON CF-ZA: ONE-FOR-ALL LENS SERIES

The technical advancement of image sensors for machine vision brings higher spatial resolution, resulting in larger sensors and smaller pixels. This development created big challenges for vision system designers, as it was difficult to find the right lenses for such sensors: high resolution power, strong light transmission, a short minimum object distance (MOD), a small Chief Ray Angle (CRA), and all this from the center to the edges of the large image circle - and these were just the optical requirements. Equally important are the compact build of the lenses and their robustness against shocks and vibrations.

FUJIFILM listened closely to the needs and pain points of its machine vision customers and created the CF-ZA-1S series to meet all the above-mentioned requirements, and even go beyond them. The CF-ZA-1S lens series is designed for use with all standard machine vision cameras that use a C-mount, an image sensor of up to 1.1" optical format, and pixel pitches from 2.5 µm, which equals up to 23 megapixels.
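The "2.5 µm equals up to 23 megapixels" figure can be sanity-checked from the sensor geometry. The 14.1 × 10.3 mm active area used below is a nominal assumption for the 1.1" optical format; exact active dimensions vary from sensor to sensor:

```python
# Sanity check: how many 2.5 um pixels fit on a 1.1" format sensor?
# The 14.1 x 10.3 mm active area is a nominal assumption for the 1.1"
# optical format; exact active dimensions vary from sensor to sensor.
SENSOR_W_MM, SENSOR_H_MM = 14.1, 10.3
PIXEL_PITCH_UM = 2.5

px_w = SENSOR_W_MM * 1000 / PIXEL_PITCH_UM   # pixels across the width
px_h = SENSOR_H_MM * 1000 / PIXEL_PITCH_UM   # pixels across the height
megapixels = px_w * px_h / 1e6
print(f"{px_w:.0f} x {px_h:.0f} pixels ~= {megapixels:.1f} MP")
```

With these nominal dimensions the result is about 23 megapixels, in line with the figure quoted for the series.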


The series provides a lineup of six lens models with focal lengths from 8 to 50 mm, allowing customers to choose the lens most appropriate for their application. FUJINON's patented anti-shock and vibration design limits shifts of the optical axis to 10 µm and ensures constant resolving power in typically rough manufacturing environments. The MOD is just 100 to 200 mm (even at 50 mm focal length), and the maximum CRA is 4.9°, as required by the latest high-resolution sensors. This guarantees high relative illumination across the entire image without vignetting. The lenses also come with captive knurled screws that cannot accidentally fall out, get lost or damage sensitive manufacturing machines during installation or maintenance. All these features come in a compact build with the world's smallest external diameter for the 1.1" optical format: just 39 mm.




INNOVATION: HIGH RELATIVE ILLUMINATION DESIGN The new CF-ZA-1S lens series fulfills an essential requirement placed on machine vision lenses: image illumination that is as uniform as possible right down to the corners, i.e. with minimum vignetting. Some of the sensors currently used in cameras (e.g. the Sony IMX253/255) are prone to such vignetting due to their design. This effect can be minimized with a suitable optical design: it is important to keep the angle of incidence of the light as small as possible in relation to the optical axis. This angle should be less than 5°. Thanks to their innovative High Relative Illumination design, all lenses of the CF-ZA series meet this demanding requirement. They deliver uniform illumination across the entire image, right down to the corners, even on the "critical" IMX253/255 sensors and at focal lengths as short as 8 mm. The relative illumination is clearly above 90 per cent in the corners.
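Why a small ray angle matters can be illustrated with the classic cos⁴ falloff law, under which relative illumination drops with the fourth power of the cosine of the incidence angle. This is a simplified textbook model (real designs, including this series, also depend on pupil aberrations and sensor microlens behaviour), but it shows how quickly illumination degrades once angles grow:

```python
import math

def relative_illumination(angle_deg):
    """Approximate relative illumination via the cos^4 law (simplified model)."""
    return math.cos(math.radians(angle_deg)) ** 4

for angle in (5, 15, 25):
    print(f"{angle:2d} deg -> {relative_illumination(angle):.1%}")
# At 5 deg the falloff is under 2%, while 25 deg already loses roughly a third.
```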

DISTORTION The lens design of the Fujinon CF-ZA series focused on high resolution and high relative illumination, and both requirements were clearly accomplished. Since physical limitations prevent building a "perfect lens" that delivers a "perfect picture", a compromise had to be made during the design process. To incorporate both high resolution and high relative illumination into the smallest possible lens body, this compromise was made on distortion. The distortion values of the Fujinon CF-ZA series are comparatively high, especially for the 8 mm and 12 mm models. But what sounds like a big disadvantage turns out to be an advantage in the end, because the distortion of these lenses is a simple curve and can be corrected easily by commonly used software tools with just one or two correction parameters. By contrast, neither a loss of resolution nor a lack of relative illumination caused by a poor lens design can be corrected afterwards. So the distortion is indeed high for the short focal lengths, but it can be corrected easily, and it is the best possible compromise for keeping resolution and relative illumination high. The Fujinon CF-ZA series is therefore the universal one-for-all solution for modern machine vision systems. MV
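The "simple curve" correctable with one or two parameters is typically a low-order radial polynomial. A minimal sketch of the idea (the coefficient here is invented for illustration, not Fujinon calibration data):

```python
def undistort_point(x, y, k1, k2=0.0):
    """Map a distorted, normalised image point (x, y) to its corrected
    position using a one- or two-parameter radial model:
        r_corrected = r * (1 + k1*r^2 + k2*r^4)
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# With a positive k1 the correction pushes an off-axis point outward,
# compensating barrel distortion. Illustrative coefficient only:
x, y = undistort_point(0.8, 0.6, k1=0.05)
print(x, y)  # (0.84, 0.63)
```

Calibration tools recover k1 (and optionally k2) once per lens; after that the correction is a cheap per-pixel remap.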

CONTACT DETAILS T: +49 211 5089 0 | W: www.fujifilm.eu/fujinon A: FUJIFILM Europe GmbH Heesenstrasse 31 D-40549 Düsseldorf Germany




BASLER MED ACE ADDS USB 3.0 9 MP AND 12 MP MODELS TO RANGE Camera manufacturer Basler is to produce eight more colour and monochrome models of the Basler MED ace series for use in the Medical & Life Sciences market.

The new models, which take the total to 18, are USB 3.0 cameras with 9 MP and 12 MP resolution. They are equipped with the best CMOS sensor technology and offer frame rates of up to 42 images per second. The Basler MED ace colour and monochrome cameras also impress with their unique Basler MED Feature Sets: easy compliance, brilliant image, perfect colour, low light imaging, high speed and industrial excellence. These sophisticated feature sets, which Basler configured specifically for the Medical & Life Sciences market, combine hardware, firmware and software functions. Their leading benefit is that they help customers reduce their development effort: they enable images of the highest quality in the shortest possible time while offering full flexibility for individual requirements.

“In addition to the excellent performance, all models have two stand-out features,” said Peter Behringer, team leader of product market management medical at Basler. “First, they are produced, distributed and serviced according to ISO 13485:2016. Second, they excel with our Basler MED Feature Sets, which combine high-performance hardware, firmware and software functions especially for applications in Medical & Life Sciences.” With certification according to ISO 13485:2016, Basler offers additional, higher quality standards for the production, distribution and service of digital cameras. Manufacturers of medical or in-vitro diagnostic (IVD) products can thus benefit from an effective quality management system with clearly defined standards. Extensive documentation, reliable product quality thanks to validated and monitored production, traceability and comprehensive change management all support conformity with international standards and reduce the customer effort required for quality management audits and product documentation. The clearly regulated change management plays a particularly important role for customers. This includes the so-called design freeze, the fixing of hardware and firmware versions, which enables Basler to guarantee the product continuity of the Basler MED ace over a long period of time. MV For more information go to www.baslerweb.com




1-3 OCTOBER 2019, NEC, BIRMINGHAM

THE COMPLETE PRODUCTION LINE EVENT

FREE TO ATTEND - REGISTER NOW

FOR MANUFACTURING IN FOOD, BEVERAGES, PHARMACEUTICALS, TOILETRIES, COSMETICS AND MORE… ppmatotalshow.co.uk

OWNED & ORGANISED BY


SPONSORED

TO BE THE BEST IN THE WORLD
The quest to be the market leader in MV cabling has been there from day one for Components Express Inc (CEI). Ray Berst, who founded the company with his father 27 years ago, explains how CEI became the world's No 1 in cabling, how they maintain that position, and the decision to expand into enclosures. HOW DID CEI START? Its origins go back to 1992, when my father and I left our jobs at a cable company. We started two companies: Components Express, to make cables, and Dynamic Sources, which imported components from Asia. We later merged the two, giving us a manufacturing operation in Components Express and the ability to import products and build cables both domestically and overseas. That was really the start of the company as it is today.

HOW DID YOU GET INTO THE MACHINE VISION SECTOR? Our initial focus was telecommunications cables, data distribution devices and patch panels. We worked for major telecom providers such as Cisco Systems and Hewlett Packard, but the events of 9/11 saw the market crash and we needed a new direction. We had to regroup and pick one thing we were going to be great at, and be the best in the world at. We formed a group committee, and each of us had our own suggestion: I was advocating building patch panels, my father wanted to strictly import products from Asia, Steve Mott, CEI vice president, wanted to build cables for MV, and another group wanted to be a contract manufacturer. We all agreed Steve had made the best case, both market-wise and capability-wise. At that time we chose to be the best in the world at machine vision cables.


HOW HAVE YOU GROWN THE COMPANY TO WHERE IT IS NOW? Having entered the industry, we realised quickly that cables were under-performing and it was a major problem in our industry. We pressed for a standard for testing cables, while other, more established companies were reluctant to adopt a test standard. Rick Ragnini, who was an outside contractor at the time, developed a bit error rate test - B.E.R.T. - for Camera Link. When we started testing the cables, we realised our product wasn't working very well, but neither was anybody else's. The test opened our eyes and made us improve our product. We were then able to work with the Camera Link committee and show them there was a benefit to the testing, and at that point we really gained a lot of market share in the industry. We were able to give our distribution channels and our partners the confidence that we could test the cables and make consistently reliable cables. We also developed a measurement tool and provided great cables, and that gave them the confidence to buy our products. I came from an aviation background where everything is mission critical, and I see building cables the same way. When CEI sells cables, we sell them through distribution channels, so we don't know if our cables are being used for brain surgery or to inspect a label on a package. Our assumption is that it is always mission critical and someone's life could depend on it. We continue that philosophy now: last year we invested over $60,000 in new testing equipment for further R&D to study the performance of



new higher-speed cable assemblies (CXP-12). That constant testing assures our customers that every cable we build is tested to an industry standard or beyond.
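The principle behind a bit error rate test is easy to sketch: compare a known transmitted pattern against what was actually received and report the fraction of bits that differ. This is a toy illustration of the concept, not CEI's actual B.E.R.T. procedure:

```python
def bit_error_rate(sent: bytes, received: bytes) -> float:
    """Fraction of bits that differ between a sent and a received pattern."""
    assert len(sent) == len(received)
    # XOR exposes differing bits; count the set bits in each byte.
    errors = sum(bin(a ^ b).count("1") for a, b in zip(sent, received))
    return errors / (len(sent) * 8)

# One flipped bit in a 4-byte pattern -> BER of 1/32.
ber = bit_error_rate(b"\x00\x00\x00\x00", b"\x00\x00\x00\x01")
print(ber)  # 0.03125
```

Real link testers stream pseudo-random patterns for long periods so that even very low error rates become statistically measurable.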

IS THAT HOW YOU BECAME THE MARKET LEADER? I believe so, because at that time we didn't have a distribution channel. We wanted top-tier distributors and we were able to go after them. We went to those distributors and they gave us the opportunity to be their cable supplier, as we were able to demonstrate the ability to validate cables, which was a major problem for them.

WHERE ARE YOUR PRIMARY MARKETS? We have approximately 30 distributors located all over the world. The market for us is larger in Europe and Asia than it is in the United States. Machine vision is very large in Europe and Asia, which is where we focus our efforts, but it’s growing for us in North America. I really believe Europe will always be number one in machine vision sales.

WHICH SECTORS ARE USING THE PRODUCTS? In North America we're really focused on automotive and military. In Europe, the market varies, as we supply everything needed for machine vision cabling and power supplies, plus we are now offering industrial camera enclosures. We are very strong with Camera Link and CoaXPress. We've always focused on the high end of the market. We have a new product coming out for USB 3 that should be exciting for the marketplace.

HOW HAVE YOU MAINTAINED YOUR POSITION IN THE MARKET? A major element of CEI's market position is our robust online cable configuration tool. Years ago, we recognized through our ISO program that most of our returns were due to what we characterised as “sales errors”. Our marketing manager, Jay Radcliff, cured this issue by developing a website where our customers can easily configure a very complex cable assembly and download 3D models to fit in their engineering drawings. I think that's been a major key to our success.

IS IT POSSIBLE TO CONFIGURE AND ORDER JUST ONE CABLE? Absolutely. Our mantra is that our minimum order quantity is one. If you configure the product on our website, we provide a price to the distributor and they contact you. The distribution partners we have are very technical and knowledgeable so they may ask questions and look at the configuration and potentially come back with alternative solutions. There is a bit of hand holding in this industry and you do need technical partners to assist the integrators and end customers. One of our criteria was that our distributors have the technical resources necessary to assist our customers. It is important they know how to fit the pieces together, know what is required to have the best implementation possible. Our distribution partners are our greatest asset. They also stock the product, which means if you’re 10,000 miles away, you don’t need to pay freight for a $25 cable.

“ We sell through distribution channels so we don't know if our cables are used for brain surgery or to inspect a label on a package, so our assumption is, it is always mission critical and someone's life could depend on it.” WHY HAVE YOU EXPANDED INTO ENCLOSURES? We wanted to see substantial growth in our sales. We didn't feel we could do that by solely providing machine vision cables to an already saturated market; we either had to build cables for a different market or offer additional products to the vision industry. We feel that enclosures are something that really needs help in our industry. There are good enclosures in the market, but we felt we could offer something different and much better. What we have done is add connectors to the enclosures, which avoids the use of cord grips that ultimately leak. Given our knowledge of machining and cabling, we have the unique ability to provide the best enclosures in the industry.




WHAT HAS BEEN THE PROCESS? Building an enclosure is quite a bit different to building a cable but we have a fully capable CNC shop. It took some time to get up and running because we wanted only brand-new equipment and the latest technology. As for the development and the designs for the enclosures, they have evolved based on specific design input from our customers, which is mostly from the automotive sector.

WHAT HAS THE IMPACT BEEN FOR THE BUSINESS? Enclosures are still very new and, revenue-wise, they have added about five per cent, but we would expect that within five years it will probably be 50 per cent of our revenue. Most of the work so far has been in R&D. We will also be working very hard to produce an enclosure configurator. It is more difficult than building the cable configurator because there are more variables, but this is something that will be a real benefit not just to the industry but also to our distributors.


“ I really believe Europe will always be number one in machine vision sales.” WHAT ARE THE CHALLENGES YOU ARE FACING? The trade war is probably our biggest challenge right now. It's a real storm. The components we import from overseas have seen tariffs go up by 25%. It's a tremendous problem for us; almost everything has been impacted. I hope the trade war ends as quickly as it started, which seems to have been with a tweet. We also see opportunity in the slowdown, as it's a very good time to finish programs and create new designs because people have time to discuss them. We have found over the last seven months that a lot of projects we've been working on are closing and going to market. Right now, we are launching new products for CXP-12, M12, Ethernet and USB 3 cables, plus many new enclosures. It has given us the breathing room we needed to finish a lot of projects.




WHAT TYPE OF BOSS ARE YOU? I’m pretty hands on. I try to be in the office most of the time and if I’m not in the office, I try to be in front of a customer, especially with new product launches. I did relax for a few years and our team ran it well enough that I only had to set metrics for existing programs. Over the past three years, I decided to be more involved. Launching new products is fun for me and it has allowed me the opportunity to get in front of customers and not just with enclosures but also with cables. I think it’s important for customers to see me again as for several years they really didn’t.

“ Launching new products is fun for me and it has allowed me the opportunity to get in front of customers.”

WHAT KEEPS YOU MOTIVATED AFTER 27 YEARS? It's an adrenaline rush for me to visit a customer and solve a problem. During a recent customer visit, I was invited to my customer's conference room, which they called the granite room. The owner of the company, who is an impressive character, said ‘This guy is here to sell you something'. He then said ‘Ray…go'. It was almost like a game show. I simply asked ‘What is your biggest headache with cabling?' and they said ‘USB 3. We must have a 20m USB 3 cable that is reliable'. I said, ‘OK, we'll give it to you. It's already in the works'. Within eight weeks we delivered a working solution that they are overjoyed with, because it solved their biggest problem. It was fun, but the greatest joy is that we solved a major problem for a very important customer. I love that part of my job and I take endless pride when things like that happen.


HOW DO YOU SEE THE FUTURE FOR CEI? Right now, I see us as a company that is offering the best cables in the world, but we want to make sure moving forward that we also manufacture the best enclosures in the world, which is a real team effort. When we feel we are the best in the world at enclosures, then we may go on to another accessory. So far, our growth has always been organic but the next thing may be organic or it may not. It may be acquiring another accessory company, who knows? For now, we want to be focused and make sure we have the best cables and enclosures in the world and then think about the next challenges and next hurdles. There is still room for us to grow in our existing markets. MV

CONTACT DETAILS E: email@componentsexpress.com W: www.componentsexpress.com T: 1-630-257-0605



SPONSORED

INDUSTRIAL CYBER-SECURITY STARTS AT THE EMBEDDED HARDWARE PLATFORM By Wesley Skeffington, Xilinx Inc. Any asset connected to the industrial internet of things (IIoT) without proper security is at risk of cyber attack. In addition to causing financial losses or inconvenience, tampering with industrial systems can potentially cause injury or fatalities among workers or members of the public. A safety-critical system cannot be considered truly safe without adequate cyber protection.

Producers of connected cyber-physical systems are moving quickly to build robust protection into their products. But while all would agree that cyber security cannot be left to chance, how can developers know how much security to provide? When is a device secure enough? The publication of IEC 62443 provides a great deal of help: it defines a spectrum of threat models and the counter-measures required for each defined security level, giving industrial security engineers a shared figure of merit. As more and more companies look for verifiably secure solutions, these certifications are becoming customer-defined requirements. The IEC 62443-4-2 part of the standard defines component-level platform requirements that can be either enabled or accelerated by configurable hardware such as FPGA-based systems on chip (SoCs).

HARDWARE SECURITY FOR LIFE Given that cyber attacks often try to modify the behavior of a system, it is not possible to have a functionally safe system that is not also cyber-secure (although it is possible to have a cyber-secure system that is not a functional safety system as defined by IEC 61508). To be sure of considering all potential threats against connected industrial devices, it is helpful to consider the security lifecycle in four stages (figure 1): protect (defense in depth), detect (attestation), resilience (isolate and operate) and remediate (fix and improve).

Figure 1. Four-stage security lifecycle.

Strong protection is a pre-requisite and - for newly deployed devices - should be as strong as current technology permits. Hackers will eventually launch a successful attack as the security stance degrades with age, so detection is the next stage of the lifecycle. The third stage, resilience, refers to the device's ability to fall back to a safe operating mode and alert the operator to the situation. The fourth stage makes provision for devices to report back details of security attacks for remediation. Xilinx® configurable hardware has features to support a complete security lifecycle, including a strong hardware root of trust, integrated cryptographic accelerators, physically unclonable functions (PUFs), integrated secure storage and key-management functions. In addition, Xilinx IP partners such as MicroArx offer FPGA-based security monitors that can enhance the detection capabilities and resilience of critical software applications.

PROTECTING EMBEDDED DEVICES Whatever technologies are used to secure the system, they must be built on a strong foundation. To establish that foundation at the hardware and boot-time software level, Xilinx's Zynq™ UltraScale+™ systems on chip (SoCs) provide features including an immutable device identity and boot ROM, anti-tamper functions, integrated secure key storage in eFuses, and bitstream

[Figure 2. Mapping of IEC 62443-4-2 requirements to the platform layers (application, run-time, boot/configuration, silicon hardware and supply chain): embedded device requirements (EDR) and component requirements (CR) are divided between Xilinx responsibility (e.g. asymmetric/symmetric authentication, AES crypto, DPA protections, JTAG protections, environmental monitors, tamper detection and penalties, authorized suppliers and anti-counterfeit supply-chain practices), shared responsibility (hypervisors, microkernels, TrustZone, isolation design flow, security monitor) and customer responsibility (digital signatures, user passwords, tokens, biometrics, role-based accounts).]

authentication and encryption for secure hardware loading. The protected boot firmware then enforces secure boot and execution of the first-stage bootloader, and will stop the process if it detects that the integrity of the software has been compromised, indicating tampering. At the higher levels, only an authenticated, digitally signed OS image will be loaded.

Once a system is up and running, communications with any other devices should be protected using authenticated communication channels, as well as encryption if protecting the data in flight is deemed necessary. Xilinx FPGAs feature integrated hardware accelerators for industry-standard cryptographic algorithms such as RSA, SHA and AES to support secure, encrypted communications. Data exchanges with other ICs in the system, such as non-volatile memory (NVM) chips, can also be protected using device-unique keys that are unreadable to the user.

Finally, system monitoring functions such as measured boot, measured application launches and the use of a TPM (Trusted Platform Module) are supported. These links in the chain are all necessary to protect the operation and integrity of each device in the end-to-end security architecture. As well as protecting the operational state of the device, these interconnected layers of security also protect the intellectual property associated with the FPGA hardware design and the SoC software running on it.

IMPROVING SECURITY BEST PRACTICE Since the publication of the international industrial control system security standard IEC 62443, equipment designers are better equipped to understand and implement best security practices for embedded systems. Recognizing the risks of connecting industrial equipment to the Internet, the Trusted Computing Group (TCG) has set up the Industrial Sub Group to develop relevant security guidance. The Sub Group liaises with the Industrial Internet Consortium (IIC) and (among many contributors) has helped create the IIC's Industrial Internet Security Framework (IISF), which recognizes the need for higher levels of safety, reliability and resiliency in IIoT systems, over and above the needs of traditional IT environments.

Xilinx has helped to author IEC 62443 and is an active member of the TCG and IIC, including participating in the TCG Industrial Sub Group. Important security functions supported in FPGA SoC silicon and design tools enable users to create industrial control platforms that are compliant with IEC 62443-4-2 and help accelerate their time to market. In addition, new mechanisms are being introduced to enable customer keys and unique device identifiers to be installed securely within the supply chain. A mapping of some of the important security features identified by IEC 62443-4-2, and how Xilinx supports them, is shown in figure 2.


CONCLUSION Today’s industrial control systems face ever-increasing threats from cyber attacks. An effective security solution must start with the embedded hardware platform. Strong hardware authentication features, together with support for secure boot, software measurement and encryption, provide the foundation for minimizing attack surfaces and improving the ability to attest to the integrity of each device. This foundation holds the key to keeping industrial systems safe and secure. MV

CONTACT DETAILS A: Xilinx, 2100 Logic Drive, San Jose, CA 95124, U.S.A. E: cmoraga@xilinx.com W: www.xilinx.com T: (408) 559-7778



SPONSORED

FINGER ON THE

PULSE Ian Alderton, Technical Sales Director at Alrad Imaging, lifts the lid on the company, its impact on machine vision, its role in the market and the latest trends to hit the sector

WHAT IS ALRAD IMAGING’S BACKGROUND? We started in radiation detection and moved from there into optics and machine vision. Our main product areas cover machine vision components, optical detectors and laser products. We have been trading since 1970 and will celebrate 50 years in business in June 2020. We have three trading divisions, Alrad Imaging, Alrad Electronics and Alrad Photonics and offer thousands of product items from the world’s leading manufacturers. We also provide consultancy, demonstration and product support services.

HOW DO YOU ASSESS YOUR IMPACT ON THE MV MARKET? We were instrumental in growing it from nothing in the UK. We started off by promoting frame grabbers and later line scan cameras. We helped introduce people to the idea of using vision as part of their process control. What started off as high-end devices has now become more widespread, to the point where part or all of the control and measurement is done using a vision system. There are many production lines that are now 100 per cent inspected using this method. This has also led to people being upskilled and has created new jobs.


HOW DO YOU DETERMINE WHAT PRODUCTS AND SUPPLIERS TO USE? We keep our eyes on the market to look for trends, we follow what is happening in the trade press, we visit relevant shows like Vision in Stuttgart to see what new products manufacturers are bringing out, and we listen to word of mouth. We also have contact with other independent distributors like us across Europe, and we regularly talk to each other to find the next new areas to be in. We also talk to our customers and listen closely to their requirements. In terms of who we work with, we partner with companies that feel they can work with, and benefit from being represented by, a distributor. We started off with a lot of North American suppliers; the newer ones were primarily European, and now we also have Asian suppliers, so it is a good mixture. We have also been selected by businesses in specialist or niche markets. We have seen businesses merging or being bought out. For example, SenTech, which we have represented for nearly 20 years, was recently taken over by Omron. However, they continue to sell cameras via us because we have existing links into the machine vision market and specialist knowledge of the Camera Link and GigE interfaces.




WHAT ARE THE ADVANTAGES OF WORKING WITH A DISTRIBUTOR? Being independent means we can offer impartial advice and recommend what’s best for the application. We can, and often do, provide end users with a mix of product from two or three suppliers and bring it together in an optimum package that will solve their problems. We can also test it first ourselves, or with the customer, and work together to get the best solution for each application. If a customer has two or three applications and they need a slightly different product mix for each of them, then we can do that with products from a variety of different suppliers.

WHAT ARE THE CURRENT TRENDS IN THE MARKET? Smart cameras and embedded systems with GPUs. There's now a trend towards the use of multiple GPUs in a system, each performing an individual dedicated task, thus reducing the data going back to the host PC. There's a lot more smart tasking. With embedded PCs, the processing power has gone up and the cost has come down. Typically, starting at about £500 you can have a camera with a lens and a GPU system, and run a process on that kit that will allow significant control.


“ We work with companies that feel they can work with and benefit from being represented by a distributor.” The need for a stand-alone PC in process systems is reducing. As things get smaller, faster and more powerful, combined with the move from CCD to CMOS cameras, more applications come within the realms of possibility. This underlines the need for us to talk to new customers and to be aware when new applications and opportunities open up, as traditional control and inspection methods can increasingly be replaced by automated machine vision technology. MV

CONTACT DETAILS E: sales@alrad.co.uk W: www.alrad.co.uk T: 01635 30345



FLIR THERMAL IMAGING CAMERAS OVERCOME PICK AND PLACE CHALLENGE

Recognition issues can be a challenge when the item and background are similar. FLIR, working with MAPLAN, has proven that thermal imaging can solve the problem.

Machine vision inspections using visual cameras can occasionally cause recognition problems if the product and background have too little colour contrast. In such cases, thermal imaging cameras can be a practical solution - especially if the product has a different temperature than the transport medium. In many cases such temperature differences are caused by the production process. Injection moulding applications are ideal candidates as manufactured parts come out of the machine at a relatively high temperature.
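The detection principle is straightforward: if the parts leave the machine warmer than their transport medium, a temperature threshold separates them regardless of colour contrast. A minimal sketch with NumPy (the temperatures and threshold are invented for illustration):

```python
import numpy as np

def locate_warm_part(thermal_frame, threshold_c):
    """Return the centroid (row, col) of pixels above the temperature
    threshold, or None if no part is present in the frame."""
    mask = thermal_frame > threshold_c
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Conveyor belt at ~25 C, a freshly moulded part at ~60 C.
frame = np.full((120, 160), 25.0)
frame[40:60, 70:90] = 60.0
print(locate_warm_part(frame, threshold_c=40.0))  # (49.5, 79.5)
```

The centroid can then be handed to a robot for pick-up, exactly the role the visual camera failed to fill when the grey-on-grey contrast was too weak.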

The injection moulding machine manufacturer MAPLAN provides a good example of how thermal imaging can be applied. The company decided to make customisable rubber luggage tags as giveaways on an extrusion line at a trade fair; the tags would then be re-positioned by a robot and labelled using an inkjet printer. However, the weak colour contrast between the conveyor belt and product proved to be a challenge. The conventional approach would have been for a visual camera to guide the robot to remove the luggage tags from the conveyor belt and position them for personalising with inkjet printing. But the light grey luggage tags on a light grey conveyor belt provided insufficient colour contrast for the vision system to work effectively, so the visual camera was replaced by a fixed-mount FLIR A615 thermal imaging camera. This way, heat radiation from the extrusion process could be used for reliable product detection. “The solution was very simple and worked right from the start,” explained MAPLAN’s technical manager Rudolf Eisenhuber. “The high thermal imaging resolution of the FLIR A615 also enables quality analysis, which we would like to demonstrate with more complex injection moulded parts in the future.”

MAPLAN is also considering the possibility of additional evaluation of thermal information for rubber injection moulding machines. The use of a thermal imaging camera could provide additional information about the quality of a product. This method is particularly interesting for complex shaped components, and in this regard FLIR thermal imaging technology has the potential to significantly optimise the injection moulding process.


FLIR A615 thermal imaging camera

The thermal image shows not only the position but also temperature of the luggage tags (1)

mvpromedia.eu


FLIR A615 - THE LOWDOWN The FLIR A615 is widely used for thermal monitoring and quality assurance of production processes. The compact thermal imaging camera can be fully controlled from a PC and, thanks to its compliance with a wide range of standards, is suitable as a plug-and-play device with third-party machine vision software such as National Instruments’ and Cognex’s tools or MVTec’s HALCON. It is also compatible with the GigE Vision standard and supports the GenICam protocol. The thermal camera’s high-resolution detector with 640 x 480 pixels enables high-speed IR windowing. With its high thermal sensitivity of 50 mK, it captures and visualises the smallest image details and the slightest temperature differences. Its Gigabit Ethernet port allows 16-bit image streaming to the computer in real time. MV

The FLIR A615 measures the exact position of the newly-produced rubber luggage tags

8 12 16 25 35 50 No lottery numbers, but the focal lengths of the new CF-ZA series for 1.1" sensors

Always six right numbers: especially developed for 1.1" sensors, the new CF-ZA series offers a high resolving power of 2.5µm pixel size and consistent brightness from the image centre to the corners, without vignetting - and all that with an extremely small diameter of only 39mm. More at www.fujifilm.eu/fujinon. Fujinon. To see more is to know more.


SPONSORED

MEASURING CIRCULAR EDGES WITH SMART 3D TECHNOLOGY

LMI’s new measurement tool - Surface Circular Edge in Gocator - offers an easier way to achieve high-precision measurements. Circular parts and features are one of the most common geometric shapes found in manufacturing today. The ability to accurately and efficiently measure their diameter and roundness deviation has a direct impact on part performance and lifetime. Roundness contributes to function and performance in a variety of ways, not least of which is maintaining a lubricating film between mating components (e.g., between automotive cylinders and pistons). This is why quality control engineers need a robust automated inspection method with standard measurement algorithms that meet the requirements for diameter and roundness inspection.

SURFACE CIRCULAR EDGE TOOL With Surface Circular Edge in Gocator®, the user can easily achieve high-precision measurements on different types of circular features with a single powerful measurement tool––regardless of part orientation, and without the complex component setups and unreliable results that come with using traditional methods (e.g., rotational drum or vee-block).


By scanning the circular object with a laser profiler or snapshot sensor, the built-in Surface Circular Edge tool can then measure the inner and outer edges of a circle. For example, it can measure the inside or outside diameter of tubes or pipes, disk-like features, or the X/Y position of their center points.

HOW IT WORKS The Circular Edge tool in Gocator® fits a circle to a circular edge in the scan data, using either height map or intensity data. The edge can be the outer edge of a disc-like feature or the inner edge of a hole. The tool allows engineers to measure the position and radius of circular features and determine roundness error. In the following sample images, the tool is used to measure the outer edge of a circular feature. MV
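The fit itself can be illustrated with a standard algebraic least-squares (Kasa) circle fit - a generic sketch of the mathematics, not LMI's implementation; the function names and synthetic edge points are invented for the example:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c
    for the centre (a, b); the radius is sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

def roundness_error(x, y, cx, cy):
    """Range of radial deviation about the fitted centre."""
    d = np.hypot(x - cx, y - cy)
    return d.max() - d.min()

# Synthetic edge points: a perfect circle, centre (10, -3), radius 4
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = 10.0 + 4.0 * np.cos(t), -3.0 + 4.0 * np.sin(t)
cx, cy, r = fit_circle(x, y)
print(round(cx, 3), round(cy, 3), round(r, 3))   # → 10.0 -3.0 4.0
```

With real scan data the edge points would come from the height map or intensity image, and the roundness error would be non-zero.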

CONTACT DETAILS W: www.lmi3d.com T: +49 (0)3328 9360 0



30 & 31 October 2019 | NEC, Birmingham

The UK’s largest annual gathering of engineering supply chain professionals, with 15,000 engineering professionals in attendance

"I found the event a great networking opportunity to meet industrial professionals from different backgrounds with different products." Kat Clarke, Wing Manufacturing Engineer, Airbus

500+ exhibitors showcasing their products/services

Over 200 hours of free-to-attend industry content

Benefit from co-location with: AERO ENGINEERING

AUTOMOTIVE ENGINEERING

PERFORMANCE METALS ENGINEERING

COMPOSITES ENGINEERING

CONNECTED MANUFACTURING

NUCLEAR ENGINEERING

MEDICAL DEVICE ENGINEERING NEW FOR 2019

HEADLINE PARTNERS
NetComposites

SUPPORTING ASSOCIATIONS

HEADLINE MEDIA PARTNERS

SUPPORTING MEDIA

REGISTER TODAY

T: +44 (0)20 3196 4300 | E: aeuk@easyfairs.com www.advancedengineeringuk.com


SPONSORED

MAKING THE MOST OF LINE SCAN IMAGING

Line scan imaging systems are becoming faster and more powerful. Jools Hudson from Gardasoft explains how using dedicated lighting controllers and timing controllers can improve your line scan operations and increase versatility.

Line scan imaging is used extensively for high-speed, continuous inspection of materials such as sheet steel, paper, textiles and packaging film. Line scan systems lend themselves to versatile imaging techniques by utilising high line resolutions, multi-line configurations, or dedicated controllers to provide timing and lighting control. A typical system might look like figure 1, which utilises a lighting controller and a timing controller. Lighting controllers can help overcome many of the shortcomings of traditional line scan imaging systems and provide some significant benefits, including:

• Reduced cost
• Higher performance
• More reliable inspections
• Novel imaging configurations
• More compact systems

Line scan cameras require precise control in most applications. For example, the camera line rate must be synchronised to the speed of the moving object to avoid image distortion, and precise control of lighting intensity is essential since the fast line rates permit each pixel very little light.
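The line-rate/transport-speed synchronisation just mentioned is easy to sketch. A hypothetical helper (not Gardasoft code) that gives the line rate needed for square, undistorted pixels:

```python
def required_line_rate(belt_speed_mm_s, fov_width_mm, sensor_pixels):
    """Line rate (lines/s) at which each acquired line covers the same
    distance along the web as one pixel covers across it - the condition
    for undistorted (square-pixel) line scan images."""
    object_pixel_mm = fov_width_mm / sensor_pixels   # object-plane pixel size
    return belt_speed_mm_s / object_pixel_mm

# e.g. a 2 m/s web, 500 mm field of view, 4096-pixel line sensor
rate = required_line_rate(2000.0, 500.0, 4096)
print(round(rate))   # → 16384 lines per second
```

At 16,384 lines per second, each pixel's exposure window is about 61 µs at most, which is why lighting intensity control matters so much.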

OPTIMISING LIGHTING INTENSITY Stable lighting is a crucial factor in all machine vision systems which need reliable, reproducible inspections. All LED lighting deteriorates in efficiency over time due to ageing, so the ability of lighting controllers to compensate for this is of critical importance. Most line scan applications use continuous illumination, and ageing will be most problematic if the light is being run continuously at 100% of its rating. The solution is to use an LED with a higher rating than necessary and drive it via a lighting controller. When the system is first set up, the controller can be used to set the brightness to 70% or 80% of maximum. As the light gets older, the brightness can be adjusted upwards to compensate for the loss of efficiency and extend the useful lifetime of the light. With Ethernet connectivity, these adjustments can be carried out remotely. Lighting controllers can also be used to ensure uniform illumination over the full length of the line sensor to compensate for geometric variations. For line lights with a uniform brightness, images tend to be darker at the ends for geometrical reasons. This can be overcome by splitting the light into segments and using a lighting controller to modify the brightness of each segment to create uniform illumination across the length of the sensor. This gives true flat field correction, which will perform better than a software-based compromise.
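The segment-balancing idea can be sketched numerically. This hypothetical helper (not a Gardasoft API) assumes LED output scales roughly linearly with drive level near the set point:

```python
def segment_drive_levels(measured, current_drive_pct=70.0, max_drive_pct=100.0):
    """Per-segment drive levels that flatten a measured intensity profile.

    measured : mean image intensity per light segment from a reference scan
    Scales each segment up to match the brightest one; raises if a segment
    would need to be driven beyond max_drive_pct (no headroom left).
    """
    target = max(measured)
    levels = [current_drive_pct * target / m for m in measured]
    if any(level > max_drive_pct for level in levels):
        raise ValueError("insufficient headroom - use a higher-rated light")
    return levels

# A line light that images darker at the ends for geometric reasons:
levels = segment_drive_levels([180.0, 200.0, 180.0])
print([round(level, 1) for level in levels])   # → [77.8, 70.0, 77.8]
```

Starting at 70% drive leaves exactly the kind of headroom described above, both for flattening the profile and for compensating ageing later.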

REDUCING THE NUMBER OF CAMERA STATIONS Since line scan cameras run at increasingly high line frequencies, there are more opportunities to be creative in their use. Applications that previously required two or more camera stations can now often be combined into a single camera station. This can lower costs as well as reducing the overall size of the system. Typically, a combined station will involve topologies where a controller drives multiple lights in a sequence. Images from each light are acquired on consecutive lines using a single camera and the separate images are extracted using software. The example in figure 2 shows a two-light system, where the object is illuminated by both dark field and bright field light sources. Bright field is often used to generate high contrast to reveal topographic detail, whereas dark field lighting can emphasise surface imperfections. The controller drives the bright field and dark field illumination sources in synchronisation with alternate lines on the camera sensor. In the resulting interleaved image, the even lines of the sensor correspond to a dark field image of the object and the odd lines to the bright field image. The two individual images can then be extracted with software. Accurate timing control is necessary to ensure that the lines don’t get out of step. Different combinations of light sources could be used in this approach, with different geometries or wavelengths, making this a very versatile technique. Accurate trigger timing can also be used to eliminate the possibility of random noise disrupting multi-light sequences. The technique can also be used with three or more lights to save considerable cost without losing resolution, by driving the lights hard at a high enough frequency - which makes a high-quality controller essential to do this accurately and safely. For example, an eight-light system can be used for the inspection of printed film. Four independent images can be created using a single line scan camera to check for print quality, surface defects from different directions, and for holes.
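Extracting the two component images from the interleaved acquisition is a pair of stride slices. A NumPy sketch (the line-to-light assignment follows the two-light example above and is otherwise an assumption):

```python
import numpy as np

def split_interleaved(frame):
    """Split a line-interleaved frame into its two component images.
    Even sensor lines are assumed to have been exposed under dark field
    light and odd lines under bright field, as in the two-light sequence."""
    return frame[0::2, :], frame[1::2, :]

# Tiny synthetic frame: even lines of 1s (dark field), odd lines of 2s
frame = np.zeros((8, 4), dtype=np.uint8)
frame[0::2] = 1
frame[1::2] = 2
dark, bright = split_interleaved(frame)
print(dark.shape, bright.shape)   # → (4, 4) (4, 4)
```

Each extracted image has half the vertical resolution of the raw acquisition, which is why the camera must run at twice the line rate a single-light system would need.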

HIGH PERFORMANCE TRIGGERING Line scan cameras require accurate, high-speed trigger signals. Many machine vision systems use a frame grabber to generate these signals. However, not all systems feature a frame grabber, and if they do it may be located a long way from the camera, especially if GigE Vision or CoaXPress connectivity is being used. In these cases, a very fast and accurate trigger timing controller is needed. For example, Gardasoft’s CC320 Machine Vision Timing Controller offers the functionality of a high-performance PLC in a format ideal for machine vision applications, and provides an easy and complete solution for precision triggering not only of the camera but also of the lighting controller. This can be particularly useful when an encoder is used to generate the basic timing, since it is generally not possible for the encoder to directly trigger each line of the camera.

LIGHTING CONTROL FOR LINE SCAN IMAGING High-frequency line scan cameras require consistent and accurate pulsing of the LED output in close synchronisation with the line rate, particularly in multi-light systems. It is important for the lighting to be pulsed with a fast and consistent rise and fall time. Poor lighting control can give an inconsistent pulse shape, which may have overlapping pulses or fail to utilise the full exposure time, compromising the resulting images. Timing repeatability is also an essential requirement.

CHOOSING THE BEST CONTROLLER Using dedicated lighting controllers and a specialist trigger timing controller can significantly improve both the performance of a line scan machine vision system and the scope of applications. However, the best controller for a particular system depends very much on the application. Gardasoft has a reputation for technology innovation in lighting control and has been solving lighting problems for over 20 years. MV

AUTHOR: Jools Hudson, Gardasoft Vision Ltd, UK T: +44 1954 234970 | E: vision@gardasoft.com W: www.gardasoft.com A: Trinity Court, Buckingway Business Park, Swavesey Cambridge, CB24 4UQ



HOW AUTONOMOUS MACHINE VISION SAVES TIME AND MONEY Traditional machine vision solutions wreak havoc on operational efficiency, and the industry is ready for change. As the late Japanese industrial engineer Shigeo Shingo once said, “Improvement usually means doing something that we have never done before.” Here Miki Gotlieb, VP of Operations at Inspekto, explains how Autonomous Machine Vision impacts operational efficiency in a way that has never been seen before.

Traditional machine vision solutions, which have been around since the 1980s, are implemented in a project-based approach. The process requires the selection and acquisition of numerous components - including cameras, lenses, lighting, filters, computing platform and software - from a range of suppliers. The QA manager is therefore subject to the lead times of the vendor of each piece of equipment, while the resulting overall lead time is governed by the long lead item (LLI).

Once the integrator receives the components, they can then integrate them with one another, alongside the tedious task of preparing the selected software, before building a hard-engineered solution on the shop floor and beginning the software training process. For a simple application, implementation can take several weeks to a few months. For a more complex solution, typical time frames are several months and, in some cases, up to an entire year. In addition to these lengthy waits, since traditional machine vision solutions are specifically designed for one location on the production line, it is nearly impossible to move the solution from one location to another. Alongside this, the diagnostic tools for a traditional machine vision solution are often unique too, which makes them an expensive investment. In most cases the manufacturer is unable to diagnose a fault themselves, due to a lack of machine-vision-specific knowledge, and must instead rely on the integrator for assistance. This inability to address the issue immediately in-house delays manufacturing, which may have to be halted while the fault is addressed.




One way to reduce the downtime should a machine vision solution become faulty is to stockpile spare components. A factory with several system configurations will keep spares of all unique system parts in order to achieve a decent mean time to repair (MTTR) and minimise the downtime in case of a malfunction. This increases costs and effort for the parts which must be maintained and managed.

A NEW WAY OF WORKING Since the launch of the first Autonomous Machine Vision (AMV) products at VISION 2018 in Stuttgart, these challenges can be completely overcome. AMV systems can be purchased for a tenth of the cost of a traditional solution and installed - without the help of an integrator - 1,000 times faster, and they also bring a vast array of operational benefits to the manufacturer.

COMPLETE AUTONOMY

The Inspekto S70 is the world’s first Autonomous Machine Vision system. Designed with the manufacturer in mind, the system is suitable for any quality assurance, gating or sorting application. The system’s arm is flexible enough to be installed on any machine, at any angle and position, to get the required field of view (FOV). The system’s visual sensor, comprising all camera, lens and lighting components, has a wide range of capabilities to suit different applications. In addition, the fact that AMV systems are an off-the-shelf product - and not a lengthy integrator project - removes the lead time of ordering the components for traditional machine vision solutions. The Inspekto S70 can be delivered as soon as the purchase order is received, and it can then be implemented upon arrival in the plant. Furthermore, Inspekto, as a single vendor, represents the cost structure of one company. Purchasing a system from Inspekto is therefore unlike purchasing components from the many vendors involved in a traditional integrated solution, and the resulting savings are passed on to the customer in the form of very affordable prices.

Finally, thanks to the several artificial intelligence (AI) engines underlying the S70’s ‘brains’, very minimal configuration is required of the manufacturer’s employees, and the system will self-adapt and self-adjust to its environment. For example, no proof of concept (POC) creation nor any solution development is required from an external integrator to set up the system. Once installed, the system will self-adjust to any changes on the line or in the lighting environment. Furthermore, once the manufacturer has set up an Inspekto S70 system, it is not fixed forever to that point on the production line, unlike a traditional solution. If at any time in the future the QA manager identifies another point on the line that would benefit more from visual QA, the system can be moved from its original location to the new one. Additionally, if the system is not required for a period of time, it can be stored and then re-installed as necessary in a couple of hours. This state of affairs is drastically different from that of machine vision solutions of the past, where moving a solution from one location to another would in most cases be either impossible, due to the new location’s specific physical and component requirements, or require very significant waiting time and cost for an integrator to make the necessary changes.


If any adjustment is required with an installed Autonomous Machine Vision system, it will perform self-diagnosis to identify the issue, significantly reducing the MTTR. It can set up its own parameters and use its intelligent capabilities to test itself and report back to the QA manager. The QA manager requires no specific training to achieve this; it is all built into the system, and it is easy to use. AMV offers very fast MTTR. For example, if the system’s visual sensor apparatus is faulty, it can be efficiently fixed or replaced by a member of the factory QA team. This is in stark contrast to traditional solutions, which would require the return of the integrator to suggest tests, figure out the problem and attend the site to correct it - a time-consuming and costly process. The improvement in operational efficiency offered by AMV represents, as Shigeo Shingo said, something we have never done before. Advancements in machine learning, artificial intelligence and optics have made AMV possible, paving the way for the Inspekto S70. Manufacturers can now finally benefit from powerful visual QA systems with no downtime, very short implementation lead times, rapid diagnostics and far less testing, training and spare parts. MV



HOW SONY’S IMX250MZR HAS POLARISED THE MACHINE VISION INDUSTRY

Stephane Clauss of Sony’s Image Sensing Solutions Division examines on-chip polarisation sensors and cameras - from how the new technology works to the applications it enables, and why the new skills required in application development present both a barrier to uptake and an opportunity for camera developers.

Reflective glare can significantly hinder the accuracy and usability of an industrial camera. This is true in controlled visual inspection environments used for the quality-control analysis of components or packaging, where glare can hide a fault and increase the chance of recalls. It is especially true in outdoor applications such as Intelligent Transportation Systems (ITS), where glare caused by highly variable, unpredictable lighting can not only slow down systems such as automated motorway tolling barriers - for example by failing to capture a licence plate - it can also prevent enforcement cameras from working altogether, reducing the risk of offenders being caught when, for example, running a red light.

POLARISED CAMERA SET-UPS While the use of polarisation filters has enabled glare to be removed or reduced in image capture and machine vision analysis, each filter could traditionally only act in a single plane. Therefore, a system requires either:

1. A multi-camera set-up: this not only adds complexity and cost when developing and maintaining an application, but using multiple cameras also creates false-positive readings through perspective distortion.

2. A single-camera system with multiple filters switched at high speed: this eliminates the perspective distortion but creates a time delay with the changing of each polarisation filter; it also creates a point of system failure through the use of mechanical moving parts.

The second half of 2018 saw a third option introduced, when Sony’s semiconductor division launched the IMX250MZR sensor - the first mass-production sensor to incorporate on-chip polarisation, integrating this functionality at the pixel level. The sensor houses a layer of wire-grid polarisers between the microlens and the photodiodes. The wire grid is set for each photodiode to filter light in one of four planes - 0°, 90°, 45° and 135° - with pixels assigned a plane in a 2x2 calculation unit (see fig 1).

Fig 1 - sensor design and pixel/calculation unit layout
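In software, the four per-angle sub-images are recovered from the raw mosaic with 2x2 stride slicing. A NumPy sketch - the unit layout below (90° and 45° on the top row; 135° and 0° beneath) is the arrangement commonly documented for the IMX250MZR, but verify it against your camera's data sheet:

```python
import numpy as np

def split_polar_mosaic(raw):
    """Extract the four per-angle images from a polarisation mosaic.
    Assumed 2x2 calculation-unit layout:   90   45
                                          135    0
    Each returned image is half the raw frame's width and height."""
    return {
        90:  raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        135: raw[1::2, 0::2],
        0:   raw[1::2, 1::2],
    }

# 4x4 toy frame so the pixel-to-channel assignment is easy to follow
raw = np.arange(16).reshape(4, 4)
channels = split_polar_mosaic(raw)
print(channels[90][0, 0], channels[45][0, 0])   # → 0 1
```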

mvpromedia.eu


CALCULATING THE PRECISE ANGLE OF LIGHT There is no such thing as perfect polarisation. Some loss will occur even when light is perpendicular to the wire grid (the maximum transmission point), and some unwanted angles of light will come through when light is parallel to the wire grid (the minimum transmission point). The relationship between the minimum and maximum light that enters through the wire-grid polariser is the extinction ratio; high extinction ratios allow more precise detection of a specific angle. This imperfect nature of a polariser allows you to calculate the precise angle of light coming through, not just for the four specified angles. Comparing the rise and fall in intensities transmitted between each of the four pixels in the calculation unit allows you to use Stokes parameters and determine both the precise degree and direction of polarisation in any plane.
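The Stokes computation described here can be written down directly. A sketch using the standard linear Stokes formulae (generic textbook maths, not Sony's implementation; ideal polarisers are assumed, so no extinction-ratio correction is applied):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters plus degree and angle of linear
    polarisation (DoLP, AoLP) from the four calculation-unit intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90                           # 0-vs-90 preference
    s2 = i45 - i135                         # 45-vs-135 preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)         # radians
    return s0, s1, s2, dolp, aolp

# Fully polarised light at 0 degrees: all of I0, none of I90,
# and half intensity through each of the 45/135 polarisers
s0, s1, s2, dolp, aolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
print(dolp, aolp)   # → 1.0 0.0
```

Applied to whole sub-images instead of scalars, the same function yields per-pixel DoLP and AoLP maps.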

WHAT THE ANGLE OF LIGHT CAN TELL YOU As well as filtering unwanted reflection and glare, application engineers developing vision applications can also use colours for each plane of light to create a graphical representation of how the angle of light changes as it passes through or reflects off a surface. This allows the following to be detected:

Weakness detection - stress analysis
The level of stress in a transparent object - such as a piece of perspex or a glass smartphone screen - will slightly transform the angle of the refracted light. By shining a light through the glass and assigning a colour to the output for each polarisation angle (eg red for 0°, blue for 45°), the altered path of light can be graphically represented, allowing regions under stress that would otherwise have gone unnoticed to be identified and quality control processes improved - see fig 2 and part 4 below.

Scratch identification
Like stress, an otherwise hidden scratch or surface defect will also alter the angle of reflection, meaning reflection enhancement can be undertaken to more easily perform surface inspection and scratch detection.

Fig 2 - by colouring the output from each of the four polarised light sensors it’s possible to create a visualisation that shows both hard-to-spot faults, such as scratches, and hidden defects, such as stresses and weaknesses in a transparent panel.
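The colour-coding described for the stress visualisation can be sketched as a per-pixel lookup: each pixel takes the colour of its dominant polarisation plane. The red-for-0° and blue-for-45° assignments follow the article's example; green and yellow for the other planes are an assumption of this sketch:

```python
import numpy as np

# Colour per polarisation plane (RGB); 0 -> red and 45 -> blue as in the
# article, 90 -> green and 135 -> yellow chosen arbitrarily here.
COLOURS = {0: (255, 0, 0), 45: (0, 0, 255), 90: (0, 255, 0), 135: (255, 255, 0)}

def colour_code(channels):
    """RGB visualisation in which each pixel takes the colour of its
    dominant polarisation plane. `channels` maps angle -> 2-D image."""
    angles = sorted(channels)                               # [0, 45, 90, 135]
    stack = np.stack([channels[a] for a in angles])         # (4, H, W)
    dominant = np.argmax(stack, axis=0)                     # winner per pixel
    lut = np.array([COLOURS[a] for a in angles], np.uint8)  # angle -> RGB
    return lut[dominant]                                    # (H, W, 3)

# 1x2 toy image: left pixel dominated by the 0 plane, right by the 45 plane
chans = {0: np.array([[9, 0]]), 45: np.array([[0, 9]]),
         90: np.array([[1, 1]]), 135: np.array([[2, 2]])}
rgb = colour_code(chans)
print(rgb[0, 0].tolist(), rgb[0, 1].tolist())   # → [255, 0, 0] [0, 0, 255]
```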

mvpromedia.eu

Low-light analysis In low-light situations the outline of objects cannot be easily determined. Polarisation can help improve the contrast of objects by measuring the angle of light reflecting off them. This also allows more accurate identification of hidden and camouflaged items - whether man-made or animal - that would normally blend into the background under visual-spectrum or thermal imaging cameras.



The technique has proven to be particularly effective in capturing images underwater, where the refractive properties have been shown to vary according to location, depth, time of day and direction. Indeed, several sea creatures rely on evolved polarised light sensors both to perform local and long-distance navigation and to render their prey’s camouflage ineffective.

Glare elimination
The accurate analysis of complex objects (from electronic components and mechanical objects to fruit and vegetables) and reflective materials (from pharmaceutical packaging to a car windscreen) can often be restricted by reflections. Additionally, for outdoor applications - such as drone-based imaging and traffic inspection systems - the variation in light intensity and angle, which alters (often unpredictably) throughout the day, means glare can significantly affect the quality of captured images. Removing this glare will improve accuracy. For example, the contents of rounded drug blister packs can be observed using a single, static camera; and traffic monitoring cameras can record not just that a dangerous road violation took place (eg speeding or red-light running), but more reliably capture the number plate, the driver, and whether they’re using a mobile phone.

Fig 3 - elimination of glare from a windshield

POLARISED VISION CAMERAS AND APPLICATION DEVELOPMENT The sensor is already being incorporated into cameras, such as Sony’s XCG-CP510. However, industry surveys suggest there is a significant barrier to adoption: the skillsets of system developers, and therefore their ability to work with the new sensor technology easily, with development time for a typical application being between six and 24 months (depending on the application and team).

mvpromedia.eu


At November’s Vision 2018 trade show, Sony previewed an SDK for its polarised camera, which was made available to customers earlier this year. At the show, the camera and SDK were used to demonstrate stress analysis (see below) and glare elimination for ITS applications (see fig 4). The SDK provides support functions including demosaic and raw extraction. Additional features include a ‘Cosine fit’ that allows a developer to define a virtual polariser angle for the image, and an ‘Average’ function that enables the creation of a non-polarised image from the raw data for a simultaneous comparison of the polarised and standard-camera images.

“One of the biggest challenges in the polarised camera technology’s adoption will be the time taken to create applications when it requires new skill sets. The SDK approach will address a significant part of this.”

Pre-processing functions allow the calculation of various polarisation-specific information, such as the ‘Degree of Polarisation’, ‘Stokes Vector’ and the ‘Surface Normal Vector’. At the higher-end level, ‘Applications-Oriented’ functions have been implemented to manage reflections and measure stress. Crucially, the SDK gives reference library applications that, with the above features, help cut development to 6-12 weeks (again based on the application developed and the team).
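The virtual-polariser idea behind the 'Cosine fit' maps directly onto Malus-law algebra over the Stokes components: an ideal polariser at angle θ transmits I(θ) = (S0 + S1·cos 2θ + S2·sin 2θ)/2. A sketch of that principle (generic maths, not the SDK's API):

```python
import numpy as np

def virtual_polariser(s0, s1, s2, angle_deg):
    """Intensity an ideal linear polariser at angle_deg would transmit,
    computed from the linear Stokes components - the principle behind a
    'virtual polariser angle' feature."""
    t = np.deg2rad(2.0 * angle_deg)
    return 0.5 * (s0 + s1 * np.cos(t) + s2 * np.sin(t))

# Fully polarised light along 0 degrees: s0 = 1, s1 = 1, s2 = 0.
# An aligned polariser passes everything; a crossed one passes nothing.
aligned = virtual_polariser(1.0, 1.0, 0.0, 0.0)
crossed = virtual_polariser(1.0, 1.0, 0.0, 90.0)
print(round(float(aligned), 6), round(float(crossed), 6))   # → 1.0 0.0
```

Because the angle is just a parameter, an image can be re-rendered at any polariser orientation after capture, with no mechanical filter changes.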

CONCLUSION Reflective glare can significantly hinder the accuracy and usability of industrial cameras, hiding faults and increasing the chance of recalls. Polarised cameras will play a major role in eliminating this, not only capturing better images but spotting where failure may occur before it is an issue. However, one of the biggest challenges in the polarised camera technology’s adoption will be the time taken to create applications when it requires new skill sets. The SDK approach, we believe, will address a significant part of this and it will be interesting to see new applications reach the market. MV



MEASURING STRESS

As part of a demonstration for its SDK at Vision 2018, Sony showed a stress measurement application, with its polarised XCG-CP510 camera set up 50cm from a PET block, which was back-lit using a monochromatic light and polariser. The camera was set to output an averaged (non-polarised) image alongside a stress map image, where each plane of light was coloured (red, blue, green, yellow), with the software calculating the phase retardation correlated to the stress applied to the PET block.

Above the block was a screw that users could turn to change the stress placed on the block. As the images show, in the averaged version - with or without stress - it is impossible to see the effect with a normal machine vision camera. However, using the SDK’s algorithms it is possible to detect rapid changes.

Functions such as stress detection and automatic reflection removal require intensive internal software development and optimisation work to run at the fastest frame rate and deliver high resolution, and are particularly demanding of the GPU.

Fig 4a/b: Polarised camera examining a Perspex block under stress at Vision 2018. The black screw on top of the block adds/removes stress. Where a standard image cannot detect these changes, the changed angle of light induced by the stress can be visualised using a polarised camera, with a heat map created.

www.image-sensing-solutions.eu

Reliably picky: 100% quality control. Keeping an eye on quality. You spotted the square tomato right away? Us too! It should always be so simple. With VeriSens® vision sensors, it’s possible. Thanks to 100% image-based inline quality control, they offer real added value for better quality and greater profitability. Learn more at: www.baumer.com/verisens


ONE EVENT. TWO DAYS. THREE GAME-CHANGING TECHNOLOGIES.

NOVEMBER 12-13, 2019

SAN JOSE, CALIFORNIA, USA

Learn about exciting new technologies, best practices and connect with 500+ industry leaders and potential business partners.

Register today at www.crav.ai


MVTEC ENTERS A NEW ERA

When MVTec Software announced that Dr Wolfgang Eckstein was retiring as managing director of the company he co-founded, it brought a planning process of several years to a conclusion - and an end to his 23 years at the top. But Dr Eckstein isn’t heading for the beach just yet: he still intends to play an active role in the business. Machine Vision and Automation Magazine found out more.

As a major player in the machine vision world, Dr Wolfgang Eckstein’s ‘presentation tour’ is going global before he changes his role within MVTec. Having taken the decision to step down from day-to-day management of MVTec Software, it was the final paragraph of the release announcing the news that gave an insight into how he would remain involved in the company. The release concluded: “Dr Eckstein will be visiting longstanding partners and will be holding presentations in different countries throughout 2019. He will be sharing his wide-ranging experience from more than 30 years in the machine vision community, including insights into MVTec’s future strategy.”

“ There are a lot of smart, intelligent people working in the company to continue what we started 23 years ago.” Dr Wolfgang Eckstein

Since the announcement in May, he has already visited Japan; the USA is on the list, as are a number of locations in Europe where he will engage with MVTec’s major customers. He will also be giving public lectures, speaking at LinX Days and at smaller events such as those organised by the German IHK (Industrie- und Handelskammer).


AMBASSADOR Dr Eckstein is placing great importance on his new role, which enables him to be an ambassador and figurehead for the company, a mentor to the business alongside his fellow co-founders Dr Olaf Munkelt and Professor Dr Carsten Steger, and a shareholder, which allows the business to remain solidly independent. The decision to stand aside and leave Dr Munkelt as the sole managing director, has been carefully and strategically planned. “I am getting older and so there are two considerations, me as a person and also the impact on the business. Being a shareholder, I have to consider both aspects,” explained Dr Eckstein. “It is very important that the company is running well for the next years and decades. I had this in mind to slowly step down and so we started preparing about five years ago. Hiring new people to fill in the gaps, to teach them further improvement structures, so that when we made the official announcement it meant everything was already in place. “Things are already working very well and nobody has realised that there are changes behind scenes. I am very happy that the company is working smoothly. There are a lot of smart, intelligent people working in the company to continue what we started 23 years ago.”

mvpromedia.eu


MILESTONES

The co-founders have established MVTec as a formidable company in the world of machine vision software since launching it in 1996, perfectly combining their research and business skills. The company has grown organically and now has 160 employees, a figure that has doubled in the five years the retirement planning has been in progress. There has also been a great deal of change in the industry, on which Dr Eckstein reflected.

"On the technical side there have been two milestones that have really changed the industry," he said.

"I view my role as more of a moderator rather than the conductor in front of the orchestra telling people what to do." Dr Olaf Munkelt, MD, MVTec Software

"The first one was a technology we now call matching. It is still a crucial technology, as 70 to 80 per cent of customers still use it in their applications, whether they use a library or not.

"The other is AI, or deep learning. It is a new push in our business and has become very powerful in machine vision. The expectations for this technology may be too high, but it is still amazing what it can do. It is remarkable what kind of results you can get that had not been possible using classical machine vision."

VISION

The business is now moving forward under Dr Munkelt and continues to grow. An extended partnership with STEMMER IMAGING in the UK and Ireland has recently been announced, and Dr Munkelt is enthusiastic about further opportunities.

"I continue what I have been doing these past years; there's not too much change in my role," said Dr Munkelt. "I will step back from the day-to-day operations, concentrate more on the company's strategy and bring together different dimensions.

"We have our own research department and a lot of valuable feedback from our customers on the way forward. I see it as my responsibility to bring all of this together with my colleagues. I view my role as more of a moderator rather than the conductor in front of the orchestra telling people what to do."

The vision for the company also includes developing a more 'youthful' looking management team, something that has been in progress for a while. It reflects the operational ethos of MVTec, which embraces collaboration and free thinking across the business.

Dr Munkelt added: "We have a very strong second level of management; we have strong people for research, as well as for marketing and sales.

"As for our wider strategies, we have taken ideas used in HR which we have found very useful across all of our departments. We want to be able to act and integrate new technologies very quickly, as well as to react to ideas coming in via the research team or from customers.

"This is the change we are making right now: even strategic topics, which are traditionally top-management issues, are now handled in an agile manner. This way, knowledge can be used quickly to adapt our behaviour, our development and the structure of the company. It is not restricted to the top management, and it is something that is working very well and is very fruitful." MV


Dr Olaf Munkelt with Dr Wolfgang Eckstein



A NEW LEADING LIGHT

This year's EMVA Young Professional Award went to Dr Johannes Meyer for his thesis 'Light Field Methods for the Visual Inspection of Transparent Objects'. Machine Vision and Automation Magazine caught up with Dr Meyer.

Winning one of the industry's biggest awards, the EMVA Young Professional Award, is something only a few will achieve. Its magnitude cannot be overstated, and it demonstrates the talent and knowledge coming into the machine vision industry. This year's winner, Dr Johannes Meyer, is no exception, the judges recognizing his outstanding thesis, 'Light Field Methods for the Visual Inspection of Transparent Objects'.

EMVA NOTED: “His thesis introduces methods based on the concept of light fields for all main components of a visual inspection system, the illumination source, the sensor device and the signal processing algorithms. A novel sensor system, the laser deflection scanner, allows to acquire high resolution light fields of transparent objects. By means of suitable processing algorithms, material defects can be extracted out of these light fields in real time. Furthermore, a method for inverse light field illumination has been developed, that suppresses all intended structures of the test objects and reveals material defects with high contrast. A thorough experimental evaluation stated the superiority of the introduced methods over the state of the art with respect to several criteria.” The EMVA Young Professional Award is an annual award to honour the outstanding and innovative work of a student or a young professional in the field of machine vision or image processing. We spoke to Johannes, who is a lead engineer at ITK Engineering.


WHAT DOES WINNING THE AWARD MEAN TO YOU?

Having successfully defended my PhD, I had the official academic confirmation that my research was scientifically novel and sound. However, I could not be sure whether my methods would really be applied in a real system. Winning the EMVA Young Professional Award showed me that my work also has industrial relevance and that the machine vision industry acknowledges it, so I'm really glad and happy about the jury's decision. Having won such an award could also be beneficial for future applications.



DESCRIBE YOUR ROLE AT ITK ENGINEERING

I'm currently working as a lead engineer in the field of computer vision. The role is twofold: on the one hand I deal with practical engineering tasks, and on the other I'm a senior consultant on vision questions. I try to keep an eye on emerging topics that will gain practical relevance in the coming years, like deep learning, which is already relevant, or computational imaging and compressed sensing. Most of my current projects relate to highly automated driving and to sophisticated visual inspection applications.

HAVING COMPLETED YOUR DOCTORATE LAST YEAR, DO YOU HAVE ANY FURTHER PLANS FOR ACADEMIC RESEARCH, OR TO STAY IN ACADEMIA?

I still have some interesting ideas in mind that I'd like to try out as soon as I find the time for them. Returning to academia in the future, maybe on a part-time basis, is a definite option for me. I do not want to lose contact with the research sector, and as I really enjoyed teaching during my PhD, I will carry out a teaching assignment at the university this winter term.

HOW DOES THE AWARD FIT WITH YOUR ROLE AT ITK?

The award relates to the work I did in my PhD thesis and the projects I worked on during that time at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB.

One of the unique selling points of ITK is that they usually develop white-box software solutions for their customers, including a transfer of the intellectual property if wanted. Hence, we deal with challenging new problems every day, and in finding the right solutions I benefit greatly from everything I learned at Fraunhofer.

WHAT WAS THE APPEAL OF THE OPTICS/VISION SECTOR?

I've always been interested in machines, robots and automated systems that are capable of perceiving their environment and making sensible decisions based on their sensor data. Besides, I think optics is a really fascinating field, which you can think of as a bag of tricks when it comes to developing an inspection system. By designing the optics right and using the proper sensor technology, e.g. light field optics, you might even be able to see differences in the index of refraction.

YOU WERE PREVIOUSLY AT FRAUNHOFER, WHICH IS WORLD-RENOWNED FOR DRIVING TECHNOLOGICAL ADVANCES. WHAT DID YOUR ROLE THERE INVOLVE, AND WHAT IMPACT HAS IT HAD ON YOUR CAREER?

The Fraunhofer IOSB is about finding solutions for even the hardest visual inspection problems by combining novel optical setups with the appropriate processing algorithms. It looks to solve problems for which there is no ready-to-use solution available on the market. My role in fulfilling this mission was to support the whole development chain: coming up with suitable ideas, performing an experimental proof of concept, making the algorithms robust and capable of meeting real-time requirements, and finally installing the first system at the customer's site. Hence, I learned quite a lot of different skills: getting in touch with customers, understanding their needs, sketching first concepts, building prototypes, implementing efficient algorithms, and connecting the right wires to get the final system up and running.


WHAT IS THE FUTURE FOR COMPUTER VISION, AND WHAT IMPACT WILL IT HAVE?

In my opinion, computer vision will continue to gain importance in various fields, including highly automated driving, automation and the medical sector. Computer vision will also help to reduce the amount of pesticides and herbicides needed in farming by precisely locating weeds among the crop plants. Furthermore, we might be able to quickly assess whether the food in our fridges is still consumable just by taking a picture of it, possibly a hyperspectral one, using our smartphones, which will reduce the overall waste of food.

WHAT ARE YOUR FUTURE CAREER PLANS?

Since I have only just finished my PhD and started working in industry, I haven't thought too intensely about the next steps. For now it is about gaining more experience in the industrial sector, and then maybe returning to a research institute.

MV



THE 20 BEST EUROPEAN ROBOTICS STARTUPS UNVEILED

From an initial 206 applications from 33 countries, RobotUnion had a tough task to find the best startups to compete for up to €223,000 in equity-free funding, alongside acceleration and mentoring services.

RobotUnion, the first pan-European acceleration program fully focused on robotics, has announced the names of the 20 startups from its second open call who will enter this unique acceleration program. From more than 200 initial applications, 44 startups and SMEs reached the Jury Day, each pitching four times in front of a different panel of experts. The event took place in July in Warsaw, at the headquarters of PIAP, one of the partners providing technical mentoring for the accelerated startups.

The jury was composed of experienced technical experts, representatives of venture capital firms and members of large corporations from the RobotUnion consortium: Fundingbox (PL) as the coordinator of the program; ISDI (ES), responsible for business mentoring; ODENSE Seed and Venture (DK), Blumorpho (FR) and Chrysalix Venture Capital (CA), in charge of fundraising mentoring; technical support and access to 'premier-class' technology provided by VTT Technical Research Centre of Finland (FI), the Danish Technological Institute - DTI (DK), TU Delft (NL), Tecnalia (ES) and PIAP (PL); and direct access to top industry leaders facilitated by MADE (DK), Ferrovial Servicios (ES) and Fenin (ES), with Mobile World Capital Barcelona (ES) as the dissemination partner. Additionally, independent external advisors were invited to help the consortium with the final decision.

Guido Boehm, senior business developer at Dematic, shared some vital tips he has picked up over the years in his talk, 'Ten Rules For Making Corporate Partnerships With Startups Work'. He highlighted the importance of staying focused on developing a useful product, the advantages of partnerships with corporations, and how to overcome problems while building a startup.
Andrzej Garbacki, an expert in Industry 4.0 and Member of the Board and Solutions Department Coordinator at Astor, explained the importance of introducing technology to industry to facilitate production processes. Startups attending the event discovered how automation affects industrial plants. Mr Garbacki shared several success stories from his company, which proposes solutions for adapting traditional industry to a new operational structure based on human-machine cooperation.

THE PROGRAM: FROM RESEARCH TO THE SET-UP

The 20 startups and SMEs selected will begin an acceleration process lasting up to 14 months. It starts with a two-month feasibility phase, in which the startups will define a plan specifying the technical and market potential of their robotics solutions. This plan will be presented during a two-day Welcome Camp in Odense in early October 2019.

In addition to equity-free funding, the companies will receive extensive technical and technological support from leading robotics R&D institutes in Europe. They will also gain access to ISDI's international network of recognized mentors from Google, Airbnb, Ikea, Yahoo, Prisa and Microsoft, among others. The program builds on the expertise of the IMPACT Accelerator, which invested over €20 million in equity-free funding between 2014 and 2018. IMPACT has been named among the top ten seed accelerators in the world, and the second best in Europe, by Gust's Global Accelerator Report.

The entrepreneurs will also have direct contact with leaders in the agri-food, healthcare, civil engineering and manufacturing sectors through the participation of organizations including MADE, Ferrovial Servicios, ARLA Foods and FENIN.



*NEW VENUE

A showcase of the latest technology

You can't afford to delay any longer: make sure you join us at Robotics & Automation 2019 to discover more about this vital area of innovation. *Due to rapid growth, the 2019 exhibition is moving to the Ricoh Arena, Coventry.

ROBOTICSANDAUTOMATION.CO.UK


ABOUT ROBOTUNION

RobotUnion is designed to increase the number of unicorns in Europe by leveraging European uniqueness and expertise in robotics-related fields through an ambitious programme. RobotUnion will invest €4 million in 40 companies over two open calls between 2018 and 2020, selecting 20 companies and investing €2 million in each call.

THE 20 START UPS

The 20 startups and SMEs selected to enter the acceleration program of RobotUnion's second call are:

Rigitech (Switzerland): Drone delivery to integrate supply chains through hybrid drone hardware and cloud-based logistics.

IM Systems (Netherlands): The Archimedes Drive is a planetary transmission that uses friction instead of gear teeth to transmit torque.

Aether Biomedical (Poland): Zeus is a low-cost, high-efficacy bionic prosthesis that can multi-articulate 14 grip modes.

BLITAB (Austria): The first-ever Braille tablet, creating tactile text and graphics in real time.

MX3D (Netherlands): Software development for large-scale robotic 3D metal printing.

Rebartek (Norway): A standardized robotic cell to assemble reinforcement bar (rebar) pieces into rebar cages.

Robotical (United Kingdom): Marty the Robot is a real robot that can be used to teach robotics and STEM.

Rovenso (Switzerland): Agile robots that perform security and safety monitoring of industrial sites.

Scaled Robotics (Spain): Robots that navigate construction sites autonomously to collect 3D maps and upload them to an AI-powered software platform for analysis.

Axiles Bionics (Belgium): AMP-Foot is an ankle-foot prosthesis that restores a natural gait and posture during daily activities, being flexible and highly responsive to the wearer's intentions and environment.

Automato Robotics (Israel): Robots that work in soil, greenhouses and high tunnels, detecting and harvesting ripe tomatoes.

Cyber Surgery (Spain): A robotic assistant for spine surgery.

Proxima Centauri (Denmark): Automation of the picking and sorting of natural casings.

Kinfinity (Germany): The Kinfinity Glove is a new generation of multi-modal input device for use in virtual reality applications, robotics, gaming and more.

LuxAI (Luxembourg): An expressive and engaging robot designed for autism.

INTSITE (Israel): Autonomous and connected tower cranes.

Moi Composites (Italy): Manufacturing of one-off parts and small series, combining a proprietary process with other additive or traditional manufacturing technologies.

Life Science Robotics (Denmark): ROBERT is a rehabilitation robot focusing on active resistive and assistive mobilization of the lower extremities.

Subsea Mechatronics (Spain): Agile design and fabrication of prototypes involving mechanics, electronics and software, finding solutions within time and cost requirements to meet the specifications.

Formhand (Germany): Granulate-based vacuum grippers that can adapt to and handle objects of different shapes.

Each selected company will be eligible to receive up to €223,000 in equity-free funding upon reaching the milestones of the programme. Ten startups will also receive access to technical support from European robotics experts and business acceleration services. The best-performing companies in the program may obtain an additional €1 million of private investment in funding rounds led by Blumorpho and supported by Odense Seed and Venture and the VC Chrysalix Venture Capital. MV





