INVESTING IN AUTOMATION
XDF EUROPE: THE LOWDOWN
NEW MARKETS: RUSSIA IS OPEN FOR BUSINESS
ACCESSORISE: CHOOSING THE RIGHT PRODUCTS
ISSUE 18 - DECEMBER/JANUARY 2020
mvpromedia.eu MACHINE VISION & AUTOMATION
MVPRO TEAM
Lee McLaughlan Editor-in-Chief lee.mclaughlan@mvpromedia.eu
Cally Bennett Group Business Manager cally.bennett@mvpromedia.eu
Alex Sullivan Publishing Director alex.sullivan@mvpromedia.eu
Georgie Davey Designer georgie.davey@cliftonmedialab.com

Visit our website for daily updates: www.mvpromedia.eu

MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2019. All rights reserved. ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.

CONTENTS
4 EDITOR’S WELCOME - The pace of innovation
6 INDUSTRY NEWS - Who is making the headlines
10 PRODUCT NEWS - What’s new on the market
14 DENIS BULGIN - The 3D Revolution
16 ORIGIN - How to break through the marketing noise
18 XILINX - Building the adaptable intelligent world
20 BAUMER - Easier than ever before
22 EMVA - Will you be the 2020 EMVA Young Professional?
24 ALYSIUM - Choosing the right machine vision accessories
26 FRAMOS - Russian market is open for business
28 OPTOTUNE - Human eye inspired liquid lenses
31 MATRIX - Expansion of embedded vision modular kit
32 EURESYS - Core values
34 OAL - APRIL Eye tackles £60m food waste issue with AI
36 SIEMENS - Investing in industrial automation
38 HORIZON 2020 - TULIPP Project finally blossoms
40 OPTO-ENGINEERING - Powering vision solutions through simplicity
42 PHOTONEO - Move over AGV...the Autonomous Mobile Robot is here
44 ROBOTICS - Dr John Bates explores the benefits of Robotic Process Automation
46 EU AUTOMATE - Advances in gripper technology
48 FESTO - The automated chameleon tongue
50 LEADERSHIP - Do you possess emotional and technical intelligence?
THE PACE OF INNOVATION REVEALED

Inevitably, the end of another year is always a time for reflection – and for looking ahead to the future. However, 2019 gives ‘reflectionists’ – yes, I have probably namechecked some 1980s European electropunk trio – the opportunity to look back not just one year but an entire decade. While each decade has its innovations, surely this past ten years has delivered more for the world in which we live. Ten years ago we didn’t have iPads, there was no Uber, Alexa wasn’t on call 24/7, and we didn’t have smartwatches, contactless payments, Instagram or Bitcoin. The world really was a different place back then.

Reflecting more on this past year, one of the defining moments of 2019 was the very first picture of a black hole – in galaxy M87 – captured by a connected world of telescopes.

Technology always drives the future, but the pace is definitely quickening with each passing year. At the recent European leg of the Xilinx Developers Forum (XDF), which you can read more about in this issue, CEO Victor Peng highlighted the company’s role in delivering 5G to the world. Futurist and business and technology advisor Bernard Marr has identified 5G as one of his seven technology trends for 2020; it will quicken the pace of the Internet of Things, ‘smart’ manufacturing and the rise of ‘smart’ homes and cars.

Marr also predicts that autonomous vehicles – taxis, trucks and shipping – will start to make waves, with a significant breakthrough over the coming 12 months. He also expects computer vision to have a greater impact, noting its necessity for autonomous vehicles as one example, but also a rise in face recognition applications – despite the great debate that surrounds their use.

Businesses within these spaces – and relevant to the machine vision, automation and robotics sectors – have to be riding this technological wave as we prepare for a much more connected world.

Talking of connections, I took the opportunity to meet more industry players at both the XDF event in Holland and the Stemmer Imaging Technology Forum in Birmingham, gaining additional insight into what lies ahead for the industry. It was an intriguing few days, and the aim will be to reflect that over the coming months across the MVPro channels.

Finally, thank you all for your support throughout this year from myself and the MVPro team. Enjoy the festivities and we’ll see you all again in 2020!
Enjoy the read!

Lee McLaughlan
Editor
lee.mclaughlan@mvpromedia.eu

Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB
MVPro: B2B digital platform and print magazine for the global machine vision industry
www.mvpromedia.eu
USB3 LONG DISTANCE CABLES FOR MACHINE VISION
Active USB3 long distance cables for USB3 Vision. CEI’s USB3 BitMaxx cables offer the industry’s first STABLE plug & play active cable solution for USB3 Vision, supporting full 5 gig USB3 throughput and power delivery up to 20 meters in length with full USB2 backward compatibility.
1-630-257-0605 www.componentsexpress.com sales@componentsexpress.com
INDUSTRY NEWS
2020 UKIVA MACHINE VISION CONFERENCE ANNOUNCED

The 2020 UKIVA Machine Vision Conference and Exhibition will take place on 14 May 2020 at the Marshall Arena in Milton Keynes, UK. While the event will follow a similar format to previous years, with a comprehensive program of technical seminars supported by an exhibition from leading companies in the industry, a new panel discussion session is being included for the first time.

UKIVA chairman Allan Anderson explained: “This event has always been characterised by providing a platform where visitors can learn about many different facets of machine vision. In 2020 we plan to take this a stage further by opening the afternoon session with a discussion forum, where a panel of leading experts will answer questions about any aspect of machine vision.”

The Conference will cover key issues such as: Deep Learning & Embedded Vision, Vision & Robotics, 3D Vision, Optics & Illumination, Vision Innovation, Camera Technology and Systems & Applications, as well as Understanding Vision Technology. Details of the 2020 Conference program, as well as information about the panel discussion forum and the exhibition, will be published on the Conference website www.machinevisionconference.co.uk as they are finalised. MV
PROPHESEE SECURES $28M IN FUNDING

Prophesee, inventor of the world’s most advanced neuromorphic vision system, has announced it has closed $28 million in funding, bringing its total funding to date to $68 million. Led by the European Investment Bank, the round also includes staged investments from commercial backers, including existing investors iBionext, 360 Capital Partners, Intel Capital, Robert Bosch Venture Capital and Supernova Invest.

The latest funding builds on the $40 million Prophesee has successfully raised since its creation and will allow it to accelerate the industrialisation of the company’s revolutionary bio-inspired technology. The fundraising follows Prophesee’s successful launch of the first off-the-shelf, production-ready event-based sensor, built on three previous generations of the architecture that the company has developed with commercial partners.

This new investment will be used to drive the further development and commercialisation of this unique Metavision® sensor and the underlying neuromorphic algorithm innovations that unlock high-performance and predictive maintenance applications in Industry 4.0. A next-generation version will be aimed at vision-enabled opportunities in automotive and consumer markets, including autonomous driving and ADAS as well as uses in VR/AR and IoT.

“Our event-based approach to vision sensing and processing has resonated well with our customers in the automotive, industrial and IoT sectors, and the technology continues to achieve impressive results in benchmarking and initial industrialization engagements,” said Luca Verre, co-founder and CEO of Prophesee.

He added: “Our agreement with the EIB gives us a flexible and practical way to access the capital we need, and having the backing of our original investors further strengthens our ability to take advantage of the market opportunities we see in key sectors.”

Bernard Gilly, chairman of existing investor iBionext, said: “Prophesee continues to execute on its strategy to deliver a truly disruptive and game-changing innovation to the world of machine vision.” MV
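For readers new to event-based sensing, the short NumPy simulation below illustrates the principle: instead of whole frames, each pixel reports an event only when its brightness changes beyond a threshold, yielding a sparse stream of (x, y, t, polarity) tuples. This is an illustrative toy, not Prophesee’s Metavision interface, and the threshold and synthetic frames are arbitrary assumptions.

```python
import numpy as np

def frames_to_events(prev, curr, t, thresh=15):
    """Simulate an event-based sensor: emit (x, y, t, polarity)
    only where the brightness change exceeds a threshold."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) >= thresh)
    polarity = (diff[ys, xs] > 0).astype(np.int8) * 2 - 1  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# Two synthetic 8-bit frames: a bright square moves one pixel to the right.
prev = np.zeros((4, 6), dtype=np.uint8); prev[1:3, 1:3] = 200
curr = np.zeros((4, 6), dtype=np.uint8); curr[1:3, 2:4] = 200
print(frames_to_events(prev, curr, t=0))  # events appear only at the moving edges
```

The unchanged interior of the square generates no data at all, which is exactly why event streams are so sparse and fast compared with full frames.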
BRITISH AIRWAYS TRIALS GREEN DRIVERLESS VEHICLES

British Airways is trialling autonomous, emissions-free baggage vehicles at Heathrow to help the airline further improve punctuality and depart every flight on time. Part of the airline’s ongoing £6.5bn investment for customers, British Airways currently operates up to 800 flights a day to and from Heathrow, transporting around 75,000 bags back and forth between its baggage halls and aircraft.

Now, in what is believed to be a world first, the airline, in conjunction with Heathrow Airport and autonomous vehicle specialist Aurrigo, is trialling driverless baggage vehicles, known as dollies.

Carrying up to 40 bags in one journey, the driverless dollies use the latest navigating technology to memorise the airfield and determine the shortest route to transport luggage. Unlike the current vehicles, the new autonomous dollies will depart for the aircraft as soon as each one is full, speeding up the aircraft loading process. In addition to improving operational efficiency, the trial also forms part of the airline’s wider environmental commitment to run an emissions-free airside operation.

British Airways’ director of airports, Raghbir Pattar, said: “We are always looking at ways to improve efficiency and modernise our operation to ensure that we are delivering bags to and from our aircraft on time and without delay.”

David Keene, Chief Executive Officer of Aurrigo, added: “This is another fantastic example of British innovation and engineering. Our driverless pods are now in operation all around the world and the work with IAG, BA and Heathrow Airport shows how similar technology can be used in a completely different industry to deliver significant results.”

If successful, the dollies could transport customers’ baggage to and from the aircraft by 2021. MV
HIGH FLYERS
Smart industrial cameras for perfect images plus real added value for your applications. Get inspired at: www.mv-highflyers.com
MATRIX VISION GmbH · Talstr. 16 · 71570 Oppenweiler, Germany · Phone: +49 7191 9432-0
LMI TECHNOLOGIES ACQUIRES FOCALSPEC

LMI Technologies (LMI), the global leader in 3D inline scanning and inspection, has bought Finland-based FocalSpec. The innovative optical metrology company designs and manufactures patented Line Confocal Imaging (LCI) products. LMI’s parent company, The TKH Group, will acquire 100 per cent of the shares of FocalSpec. The company will be integrated into the LMI group of companies and the LCI products will continue to be sold under the FocalSpec product brand.

The acquisition of FocalSpec expands LMI Technologies’ smart sensor portfolio of laser profilers and structured light snapshot sensors with patented confocal technologies. Together, LMI’s scanning and inspection solutions lead the industry in solving challenging applications across a variety of markets such as consumer electronics (CE), battery, pharma, semiconductor and medical.

“Line confocal sensors offer a leap in technological performance for scanning opaque, transparent and curved materials, such as hybrid glass assemblies common in cell phone manufacturing. By combining this game-changing optical approach with our proven Gocator inspection software and volume manufacturing know-how, customers will be able to solve challenging inline metrology applications at a price/performance and ease of use never seen in the market today,” said Terry Arden, CEO, LMI Technologies.

Sauli Törmälä, Chairman of FocalSpec, said: “The addition of LCI technology to the 3D product portfolio of LMI Technologies builds a highly complementary set of solutions for metrology applications in critical assembly processes. Along with their leading inspection software, we believe FocalSpec and LMI will be a powerhouse of metrology in the years to come and look forward to joining forces.” MV
GARDASOFT APPOINTS CCS AMERICA AS MASTER DISTRIBUTOR IN NORTH AMERICA

Gardasoft has predicted increased sales potential across North America after appointing CCS America (CCSA) as the master distributor of Gardasoft controllers and accessories. From October 1st 2019, CCSA became the master distributor of Gardasoft controllers and accessories for the USA, Canada and Mexico. Gardasoft, which has been in the US since May 2013, has worked closely with the US machine vision industry to enhance the capabilities of customers. This is a very exciting time for machine vision in the US and Gardasoft anticipates significant potential for increased US sales.

CCS is a leading supplier to machine vision markets, and the existing network of Gardasoft resellers and end users alike in North America will benefit from its large and experienced sales force and extensive technical support capabilities. Since Gardasoft and CCS are sister companies within the Optex group, the two companies are perfectly aligned to work together.

Gardasoft will continue to support the market for its traffic strobe lights directly from its US office in Weare, New Hampshire. The Gardasoft US operation, led by John Merva, will focus on providing excellent sales and technical support for all Gardasoft distributors and end users in the US. MV
FRAMOS’ AI VISION SPIN-OFF RENAMED CUBEMOS

The FRAMOS® Group, a global supplier of imaging products, custom vision solutions, and OEM services, has announced that its spin-off FRAMOS AI will become cubemos. The cubemos entity focuses on deep learning and AI development, helping industrial customers to integrate leading-edge AI vision solutions. cubemos designs and implements proximal edge applications with AI functionality, providing innovative software solutions and products around imaging and AI.

The Munich-based company, part of FRAMOS Holdings, helps customers to benefit from AI implementations in real-time applications across retail, infrastructure, transport, mobility and industrial automation. cubemos has firmly established itself in the market and has launched the Skeleton SDK as its first in-house software product, in addition to running many successful industry projects. cubemos aims to create its own new branding to enable further growth.

“The re-naming of the AI spin-off to cubemos is the starting point to seed our own culture and footprint in the market, and to provide continuous growth,” says cubemos’ CEO Dr. Christopher Scheubel.

cubemos was founded in 2018 after Dr. Andreas Franz and Dr. Christopher Scheubel from FRAMOS had identified a strong industry need to analyse imaging data with AI. Together with CTO Patrick Bilic they started their own business model, completely independent from FRAMOS. After successfully running several AI projects, the team launched its first product, an AI skeleton tracking SDK optimised for Intel® RealSense™ cameras. With a strong track record, industry leader support, and venture capital partners behind them, cubemos is now prepared for continued growth. MV

Versatile laser lighting solutions for high-speed imaging and machine vision systems. See what you have missed.
+358 3447 9330 | info@cavitar.com | www.cavitar.com
PRODUCT NEWS
HARRIER USB/HDMI BOARD DELIVERS MORE

Active Silicon has added a new product to its innovative Harrier range: the USB/HDMI Camera Interface Board. This interface solution provides simultaneous HDMI and USB Video Class (UVC) v1.1 output for autofocus zoom cameras including the Tamron MP1110M-VC, MP2030M-GS and Sony EV series.

The board serves as both video output and UVC/USB control input port, and supports modes up to 1080p60. USB video output is enabled when the board is connected to a SuperSpeed USB 3.x host. On power-up, the camera video mode may be selected by the DIP switch settings on the board. Camera video modes, along with other camera and interface board functions, may also be controlled by serial communications over RS-485/RS-232/TTL, or by using UVC/USB commands over the USB connection.

For developers who are new to creating UVC-based applications, there is a working example UVC application and software API in the Harrier USB Software Development Kit. MV
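Because the board enumerates as a standard UVC device, frames can also be grabbed with generic host software rather than only through the SDK. The minimal sketch below uses OpenCV from Python; the device index and the request for a 1080p mode are illustrative assumptions, and camera control over the serial/UVC command channel described above is not covered.

```python
import cv2

# The Harrier board presents itself as a UVC camera, so any UVC-capable
# host API can open it. Device index 0 is an assumption; on a machine
# with several cameras the correct index may differ.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # request a 1080p mode
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()
if ok:
    print("Got frame:", frame.shape)       # e.g. (1080, 1920, 3)
cap.release()
```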
NEW MEMBER OF THE EXO FAMILY

SVS-Vistek has introduced the EXO428xU3 industrial camera, which has a resolution of 7.1 megapixels and a USB3 interface. With this resolution and an aspect ratio of almost 3:2 (3208 x 2200 pixels), the camera is a modern, powerful CMOS alternative to the old ICX695 CCD sensor. At full resolution, it provides a frame rate of 51.4 fps. Based on the low readout noise and the excellent quantum efficiency of the Sony Pregius series (3rd generation), the sensitive 4.5 x 4.5 µm pixels provide superior image quality. The pixel size ensures a high saturation capacity, and the large pixels also simplify the selection of lenses fitting the 1.1” sensor size.

The EXO428xU3 camera uses a global shutter and is available in monochrome or colour. The new camera comes with a high-quality industrial feature set that includes 8 and 12-bit colour depth, ROI, binning, offset and look-up tables. A unique feature is the integrated multi-channel strobe controller, which makes an external controller for lighting control redundant in most cases. Other features include a highly flexible integrated sequencer, a logic module and PLC-compatible 24V inputs and outputs.

The EXO428xU3 is suitable for use in metrology, microscopy, automation, food and biometrics, where resolutions between six and 12 megapixels are state of the art. MV

THE FUTURE DEPENDS ON OPTICS™

NEW CA Series Fixed Focal Length Lenses
TECHSPEC® CA Series Fixed Focal Length Lenses are designed for high-resolution, large-format sensors. Covering APS-C format sensors with a 28 mm diagonal image circle, these lenses feature a TFL mount. TFL mounts feature an M35 x 0.75 thread with a 17.5 mm flange distance, and offer the same flange distance, robustness, and ease of use as a C-mount. Find out more at www.edmundoptics.eu/CAseries
UK: +44 (0) 1904 788600 | GERMANY: +49 (0) 6131 5700-0 | FRANCE: +33 (0) 820 207 555 | sales@edmundoptics.eu
EMERGENT VISION TECHNOLOGIES 25GigE CAMERAS

Emergent Vision Technologies has added three new models featuring the Sony Pregius S IMX530, IMX531, and IMX532 CMOS sensors to its BOLT camera series.

BOLT cameras are ultra-high-speed cameras with a 25GigE SFP28 interface. Combined with Sony Pregius S stacked CMOS image sensor technology, which delivers global shutter functionality through Sony’s proprietary back-illuminated pixel structure, these new BOLT camera models offer increased sensitivity and quantum efficiency, as well as double the frame rate of previous generations.

HB-16000-SB is a 16.13 megapixel camera equipped with the Sony Pregius S IMX532 CMOS sensor. At full resolution (5320 x 3032) it delivers 145 frames per second. HB-20000-SB is a 20.28 megapixel camera featuring the Sony Pregius S IMX531 CMOS sensor, which provides up to 100 frames per second at full resolution (4504 x 4504). HB-25000-SB is a 24.47 megapixel camera with the Sony Pregius S IMX530 CMOS sensor, offering up to 98 frames per second at full (5320 x 4600) resolution.

Other benefits of the BOLT series include low-cost accessories, low CPU overhead, low latency, low jitter, and accurate multi-camera synchronization using IEEE 1588.

All three cameras feature a C-mount and are ideal for high-speed applications that require excellent image quality and fast frame rates. These include industrial inspection, automation, ITS (Intelligent Transportation Systems), logistics, virtual reality, volumetric capture, and referee assist. The HB-16000-SB, HB-20000-SB and HB-25000-SB will be shipping Q4 2019. MV
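As a back-of-envelope check on those numbers, the raw data rate is simply resolution × frame rate × bit depth. Assuming 8-bit output (our assumption, for illustration), the fastest model sits comfortably within the 25 Gbps SFP28 link:

```python
# Back-of-envelope bandwidth check for the HB-16000-SB (8-bit pixels assumed).
width, height, fps, bits_per_px = 5320, 3032, 145, 8

gbps = width * height * fps * bits_per_px / 1e9
print(f"Raw data rate: {gbps:.1f} Gbps of the 25 Gbps link")  # ~18.7 Gbps
```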
FLIR LAUNCHES INDUSTRY-FIRST DEEP LEARNING-ENABLED CAMERA

FLIR Systems has launched the FLIR Firefly® DL, the industry’s first deep learning, inference-enabled machine vision camera with FLIR neuro technology. With its small size, low weight, minimal power consumption and deep learning capabilities, the FLIR Firefly DL camera is ideal for embedding into mobile, desktop and handheld systems.

The Firefly DL enables original equipment manufacturers, engineers, and makers to quickly develop and deploy solutions to challenging automation tasks. Additionally, system makers can reduce the cost and complexity of their work by deploying a trained neural network directly onto the camera, eliminating the need for a host system to complete the tasks of classification, object detection and localisation.

Firefly DL combines machine vision performance with the power of deep learning to address complex and subjective problems, such as recognising faces or determining the quality of a solar panel. The Firefly DL camera is the first FLIR camera to use FLIR neuro technology, which enables users to deploy their trained neural network directly onto the camera, making inference on the edge and on-camera decision-making possible. FLIR neuro provides an open platform and supports popular frameworks, including TensorFlow and Caffe, for maximum flexibility. Neuro is ideal for classification, localisation and detection functionalities.

FLIR Firefly DL is available through authorised FLIR distributors globally and online. MV
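To give a flavour of the workflow, the sketch below defines and saves a deliberately tiny Keras classifier of the sort that might be trained offline for a pass/fail inspection task. The architecture, input size, class labels and file name are our own placeholders, and the subsequent conversion and deployment onto the camera would go through FLIR’s neuro toolchain, which is not shown here.

```python
import tensorflow as tf

# A deliberately tiny image classifier of the kind that might be trained
# off-line (with model.fit on labelled inspection images) and then handed
# to the vendor toolchain for on-camera inference. All sizes are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. pass / fail
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.save("solar_panel_classifier.keras")  # hypothetical file name
```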
JAI REVEALS IMPROVED MULTI-SPECTRAL PRISM CAMERAS

JAI has launched a new generation of prism cameras in its Fusion Series of multi-spectral imaging solutions. This unique prism camera design, which JAI introduced more than 10 years ago, enables multi-spectral analysis to be easily applied to a wide range of machine vision inspection tasks, without the cost and complexity of two separate camera/lighting setups and without the added mechanical systems and motion challenges created by filter-wheel types of multi-spectral cameras. The new FS-3200D-10GE and FS-1600D-10GE cameras have two-channel dichroic prisms that divide the incoming light to two precision-aligned CMOS area scan imagers. One channel captures light from the visible spectrum (approximately 400 nm to 670 nm) and delivers it to a Bayer colour sensor, while the second channel directs light from the near infrared (NIR) portion of the spectrum (approximately 740 nm to 1000 nm) to a monochrome, NIR-sensitive sensor.

The FS-3200D-10GE model features Bayer and monochrome versions of the Sony Pregius IMX252 CMOS sensor, offering 3.2-megapixel resolution and a maximum full frame rate of 123 fps for 8-bit output. Meanwhile, the FS-1600D-10GE features Bayer and monochrome versions of the Sony Pregius IMX273 CMOS sensor with 1.6-megapixel resolution and a maximum full frame rate of 226 fps for 8-bit output. This performance represents a substantial gain over the original Fusion Series multi-spectral cameras.

The high throughput is supported by a 10GBASE-T (10 GigE) interface equipped with integrated auto-negotiation technology, providing automatic backwards compatibility to NBASE-T (5 Gbps and 2.5 Gbps) and traditional 1000BASE-T (1 Gbps) output. In addition to 8-bit output, the cameras can provide 10-bit and 12-bit output, independently selectable for the Bayer colour and NIR channels. The 10 GigE interface complies with the GigE Vision 2.0 standard and uses dual streams. The interface also supports the Precision Time Protocol to enable network-level synchronisation in multi-camera systems. MV
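One classic use of co-registered visible and NIR channels of this kind is a normalised-difference index, familiar from vegetation and material analysis, which highlights surfaces whose reflectance differs sharply between the two bands. The NumPy sketch below assumes two already-aligned frames (the red plane of the Bayer image and the NIR image); frame acquisition itself, via GigE Vision tooling, is omitted, and the resolutions and random data are placeholders.

```python
import numpy as np

def normalised_difference(nir, red, eps=1e-6):
    """Per-pixel (NIR - red) / (NIR + red), an NDVI-style contrast measure."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

# Synthetic stand-ins for the two prism-aligned sensor outputs (12-bit data).
nir = np.random.randint(0, 4096, (1536, 2048)).astype(np.uint16)  # NIR channel
red = np.random.randint(0, 4096, (1536, 2048)).astype(np.uint16)  # red plane of the Bayer image
index = normalised_difference(nir, red)
print(index.min(), index.max())  # values in roughly [-1, 1]
```

Because the prism keeps both sensors optically aligned, no software registration step is needed before computing such an index, which is precisely the advantage over dual-camera setups.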
Precision Perfect images at high speed Precision at high speed. Precision skydiving is a perfect match for extreme athletes – and precision inspections at high speed are a perfect match for the LXT cameras. Thanks to Sony® Pregius™ sensors and 10 GigE interface, you benefit from high resolution, excellent image quality, high bandwidth and cost-efficient integration. Learn more at: www.baumer.com/cameras/LXT
THE 3D REVOLUTION

Ideas may be nothing new, but technological advances are delivering on them, according to Technical Marketing Services writer Denis Bulgin.
The machine vision industry is extraordinarily dynamic, and new techniques and methods are developed on a regular basis. Or are they? Mark Twain wrote in his autobiography: “There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations. We keep on turning and making new combinations indefinitely; but they are the same old pieces of coloured glass that have been in use through all the ages.” Of course, new ideas do evolve at some point, but Twain’s basic premise holds good for most of the recent emerging technologies in machine vision. If we look at some of the latest key topics, such as deep learning and embedded vision, the basic concepts have been around for many years, but advances in technology are allowing them to be turned into practical solutions for real-world applications. In recent years, probably the best illustration of a maturing vision technology is 3D imaging.
3D machine vision is another technique that has been possible for many years. However, creating complex 3D images is computationally intensive, and it was only with the emergence of processors capable of handling the computational overhead required for 3D point cloud datasets at production line speeds that 3D vision technology became established. Early in 2012, I attended a 3D presentation event run by a leading supplier of machine vision components and systems. This featured a full day of presentations that covered all of the major 3D imaging techniques, and although it was very well attended it was clear that the use of 3D was not yet widespread. Fast forward to 2017 and, at the first UKIVA Machine Vision Conference and Exhibition, the presentations on 3D vision were by far and away the best attended.
During this intervening period, technological developments had allowed much improved performance and accessibility. We saw faster PCs, more sophisticated software for 3D point cloud handling and metrology, and the emergence of 3D smart cameras with on-board processing and measurement. Improvements in sensor technology and lighting yielded better resolution, but this in turn led to even larger 3D data sets, further increasing the demands on the PC. Today’s FPGA and multicore embedded processor architectures provide faster processing speeds, but now we are also seeing camera manufacturers starting to provide fast, direct memory access between image acquisition and processing on a dedicated FPGA processor before transfer to a PC for further processing. Most importantly, however, over the same period we have seen an explosion in both the actual usage of 3D imaging and the range of possible application areas.
These application areas include general volumetric measurements, completeness checks, part manufacturing inspection, portioning, OCR, distance measurements, packaging integrity and filling inspection, surface finish and many, many more. In particular, there has been extensive use of 3D imaging in robot guidance applications such as pick and place, random bin picking, palletisation and depalletisation, and optimising space usage in warehouses.

Coming right up to date, at the 2019 UKIVA Machine Vision Conference and Exhibition, over 25 per cent of the presentations involved some aspect of 3D vision. In addition, the winner of the PPMA’s 2019 ‘Innovative Machine Vision Project’ award had developed a 3D robotic solution for the application of labels to wedges of cheese. Speaking to a major machine vision supplier at a recent engineering exhibition, it transpired that every single enquiry they had received on the first day of the show was related to 3D imaging, even though they had many other techniques on show. Quite clearly, over the last seven years we have seen a true maturing of 3D machine vision.
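To make the volumetric measurement use case mentioned above concrete, here is a minimal sketch of how a calibrated, top-down depth image can be reduced to a volume estimate. It assumes metric units, a known reference plane and a fixed pixel footprint, all illustrative; production systems add point cloud registration and filtering on top of this basic idea.

```python
import numpy as np

def volume_from_depth(depth_m, plane_m, pixel_area_m2):
    """Integrate object height above a reference plane over all pixels.

    depth_m       -- per-pixel distance from the camera (metres)
    plane_m       -- distance from the camera to the conveyor/reference plane
    pixel_area_m2 -- area on the plane covered by one pixel
    """
    height = np.clip(plane_m - depth_m, 0.0, None)  # ignore points below plane
    return height.sum() * pixel_area_m2

# A synthetic 100 x 100 mm box, 50 mm tall, seen top-down at 1 mm/pixel.
depth = np.full((400, 400), 1.0)         # reference plane 1 m away
depth[150:250, 150:250] = 0.95           # box top is 50 mm closer
print(volume_from_depth(depth, 1.0, 1e-6))  # ~0.0005 m^3 = 0.5 litres
```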
WILL THE NEWER TECHNOLOGIES EMULATE 3D?

There are a number of parallels between 3D imaging and the current hot topics such as deep learning and embedded vision. As with 3D imaging, these techniques are not new, but have come to the fore thanks to advances in technology which make them viable for more general use. For deep learning, massive parallel processing at affordable cost through GPUs, large data storage capabilities and the availability of huge data sets for training have made it a reality. However, in a newer development, inference cameras are now emerging where a trained neural network can be implemented on the camera itself. We are still at the early stages of implementation of deep learning applications, but there is every likelihood that the technique will follow a similar maturation curve to that of 3D, although the timescales may be shorter.

Embedded vision systems can be viewed in a slightly different way, since they mean different things to different people. Embedded systems are generally considered to be the direct integration of cameras or camera modules into machines or devices using bespoke computer platforms for image processing instead of a classic industrial PC. The best-known example of embedded systems is the smart camera, where all of the image processing takes place in the camera itself, and this is very mature.

Probably the most exciting development in the field of embedded vision is that of SoC (System on Chip) ARM-based computer technology. This makes it possible to create bespoke systems utilising a wide range of image sensors, standard interfaces and various hardware. With compact designs, simple integration and low power consumption, and the increasing move towards integration and connectivity within machine vision and the wider arena of Industry 4.0, the use of SoCs has huge potential, and it will be interesting to see how this develops.

Wherever the latest ideas for machine vision originate, there seems to be a steady pipeline of new technology. It is fascinating watching them mature into established techniques. MV
HOW TECHNOLOGY BUSINESSES CAN BREAK THROUGH THE MARKETING NOISE

Chloe Hill, content marketing manager at digital marketing agency Origin, discusses how technology businesses can move away from traditional tactics and utilise content marketing.

Gone are the days of messenger pigeons and people relying on mass marketing to get in front of their customers. Today, consumers are more aware of “traditional” marketing and they’ve become immune to tactics such as billboard and television advertisements.

Take these statistics for example: 86 per cent of people skip TV ads, 44 per cent of direct mail is never opened, and a large proportion of email users have unsubscribed from an email they previously opted into. Businesses need to abandon outdated methods and look at more optimised digital methods to reach customers.

BREAKING THROUGH THE NOISE…

The technology market can be incredibly noisy, with new products and services being launched and advancements being discovered nearly every day. This can make it difficult for you to break through the noise and have your brand voice heard by your target audience. A dedicated content marketing strategy can help you attract prospects and customers to your website, and is an alternative approach to mass marketing, which relies on pushing generic messaging out to a large number of people in an effort to get noticed.

SO, HOW DOES CONTENT MARKETING WORK?

The Content Marketing Institute (CMI) defines content marketing as “a strategic marketing approach focused on creating and distributing valuable, relevant and consistent content to attract and retain a clearly defined audience, and ultimately, to drive profitable customer action”. And it’s been proven to work: small businesses with blogs get 126 per cent more lead growth than those without (impactbnd), content marketing generates over three times as many leads as outbound marketing (demandmetric), and 93 per cent of B2B marketers are using it (CMI).

HOW CAN YOU TAKE ADVANTAGE?

To achieve content marketing success, it’s essential to do the following:

• Research

Begin by conducting research into your audience. Identify what online and offline channels (e.g. social channels, media outlets, online news sites) they use, what topics they are interested in, and what content formats they prefer (e.g. written articles, videos, memes/GIFs etc), and understand what their challenges are. This will enable you to create content that provides the solution to their problems, in the formats they prefer, so you can tailor your content to suit your audience’s preferences.

• Create promotional and knowledge-sharing content pieces

It’s good practice to ensure your content features a mix of promotional-led content that highlights the key features and benefits of your products, as well as informative, knowledge-sharing content that’ll teach your audiences something they may not already know.

• Analyse your activity

Throughout your content marketing activity, you should track and monitor all levels of audience engagement. Look at your social media analytics and the number of likes, comments and shares to identify what types of content are performing best. Look at your website analytics and which pieces of content are achieving the most clicks. Armed with this knowledge, you know what works and what doesn’t.

For more information on how to boost your content marketing efforts, visit www.origingrowth.co.uk, or connect with me on LinkedIn: www.linkedin.com/in/chloemchughhill. MV
Six Essential Considerations for Machine Vision Lighting

3. Build in flexibility

You cannot predict the future, but you can design flexibility into your system to cope with the unexpected. Changing requirements and environments are common, and if your system includes a dedicated lighting controller with constant current and safe overdrive you can adapt easily to changing specifications and maximize the return on your investment.

Gardasoft Vision has used our specialist knowledge to help Machine Builders achieve innovative solutions for over 20 years. To read more about the Six Essential Considerations for Machine Vision Lighting see www.gardasoft.com/six-essential-considerations

Semiconductor | PCB Inspection | Pharmaceuticals | Food Inspection
Telephone: +44 (0) 1954 234970 | +1 603 657 9026
Email: vision@gardasoft.com
www.gardasoft.com
XILINX
BUILDING THE ADAPTABLE INTELLIGENT WORLD

It was under the grey skies of The Hague, home to a wealth of international organisations, the Dutch government and the Dutch Royal Family, that Xilinx delivered its latest high-profile announcements. This was the second leg of the Xilinx Developers Forum 2019 tour. San Jose had preceded it, with Beijing up next. The slick showcase is an opportunity for Xilinx’s top brass not only to share the latest developments but to put a spotlight on the future.

For Xilinx’s Californian CEO Victor Peng, despite the wet autumnal conditions of The Netherlands, the rain did not fall on his parade. In a slick presentation, Peng took the opportunity to address the key audience Xilinx wants to reach: developers. They are, he claims, the game-changers of the next decade and beyond, and will achieve their purpose using the range of products that Xilinx now offers.

The focus was on ‘adaptability’ and ‘transformation’ and how Xilinx and developers will together deliver this technologically connected future world. “It’s a really exciting time,” said Peng, acknowledging the room full of innovators. “You’re building communications and applications and infrastructure, in medical, in life sciences, transportation and of course computing at the edge and the cloud.

“The intelligent connected world is becoming a reality and you are making that happen. It’s exciting to see you making this future world happen. Our mission is to help you build that better future and to make sure it is not only intelligently connected but also adaptable.”

Engaging with developers is crucial, according to Liam Madden, Xilinx executive vice president of the Wired & Wireless Group. This is the second year of XDF, which underwent a transformation to reach out to this new audience.

“Most of our traditional customers know us and how to find out about us,” said Madden. “We were considered for a long time as primarily a hardware company. We did have our software tools but now we view ourselves as a platform company. If you have a platform then you want people building on it.

“Therefore, we need developers and that is why it’s a developer forum. We want people developing on our products. We want them to know we don’t just offer chips, but we offer the boards and a software stack that helps the developer to get access.”

That was perfectly illustrated on stage, as Peng used this European leg of the XDF tour to announce the free download of the recently launched Vitis software platform and the latest ‘chips’ that will deliver specifically for the automotive market, and to revisit its Alveo and Versal products. With a three-pronged focus on the Data Centre, accelerating core markets and driving adaptive computing, Peng’s assessment was that Xilinx had made ‘phenomenal progress executing this strategy’ over the past 12 months. There was talk of ‘new business models’, of ‘disruptive technology’ and of Xilinx ‘transforming from a component company to a platform company’ as it seeks to deliver its mission of ‘Building the adaptable intelligent world’.

Xilinx, which is working with the likes of IBM, Microsoft, Amazon and many more in delivering this new world, will nonetheless remain true to its core business of microchips and devices, even as it transforms and seeks new avenues. “In the end we’re still a chip company,” said Madden. “It is what we do, but who can use that chip is getting broader and we’re delivering it in a much more palatable way than before. Our hope is that this will allow us to continue to grow as well.”

All eyes now on the XDF 2020 tour. MV
XDF EUROPE: THE BIG ANNOUNCEMENTS

XILINX VITIS NOW DOWNLOADABLE

Xilinx used XDF to announce that its Vitis™ unified software platform and open source libraries are available for free download.

Vitis is targeted at a broad range of developers, from software engineers to AI scientists, and enables them to work with and benefit from the power of Xilinx’s adaptable hardware, using software tools and frameworks they already know and understand.

Software developers can accelerate their applications with Xilinx® adaptive hardware, without the need for hardware expertise. The Vitis platform plugs into common software developer tools and has a rich set of open source libraries optimised for Xilinx hardware.

A Xilinx developer site provides easy access to examples, tutorials and documentation, as well as a space to connect the Vitis developer community. It is managed by Xilinx Vitis experts and enthusiasts, providing valuable information on the latest Vitis updates, tips and tricks.

Key links:
https://www.xilinx.com/products/design-tools/vitis.html
https://developer.xilinx.com/
https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/vitis.html
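Since Vitis pitches FPGA acceleration at software developers, it may help to see the general shape of accelerator host code. The sketch below uses generic PyOpenCL, not Xilinx’s tooling: a real Vitis application loads a precompiled FPGA binary rather than compiling kernel source at runtime as done here, so treat this purely as an illustration of the host-side pattern (context, buffers, kernel launch) under that assumption.

```python
import numpy as np
import pyopencl as cl

# Generic OpenCL host pattern: create a context and queue, move data to
# device buffers, launch a kernel, and copy the result back.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a + b))  # True
```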
HIGHEST PERFORMANCE ADAPTIVE DEVICES FOR ADVANCED ADAS AND AD APPLICATIONS UNVEILED

Xilinx unveiled the expansion of its automotive-qualified 16 nanometer (nm) family with two new devices: the Xilinx® Automotive (XA) Zynq® UltraScale+™ MPSoC 7EV and 11EG. Designed exclusively for the automotive industry, they deliver the highest programmable capacity, performance and I/O capabilities, enabling high-speed data aggregation, pre-processing, and distribution (DAPD), as well as compute acceleration for L2+ to L4 advanced driver-assistance systems (ADAS) and autonomous driving (AD) applications.

They deliver the world’s highest level of silicon integration that meets the safety, quality and reliability requirements for automotive, with a comprehensive line of products scaling from small devices powering edge sensors to new high-performance devices for centralised domain controllers. The devices offer over 650,000 programmable logic cells and nearly 3,000 DSP slices, a 2.5X increase versus the previous largest device. The XA 7EV contains a video codec unit for h.264/h.265 encode and decode, while the XA 11EG includes 32 12.5Gb/s transceivers and provides four PCIe® Gen3x16 blocks.

The devices enable robotaxi developers and Tier-1 suppliers to perform DAPD and compute acceleration in a power envelope that allows for scalable production deployments for AD vehicles.

Full technical details on the XA family: https://www.xilinx.com/products/silicon-devices/soc/xa-zynq-ultrascale-mpsoc.html
For more information, visit: https://www.xilinx.com/products/silicon-devices/soc.html
BAUMER
EASIER THAN EVER BEFORE: VERISENS VISION SENSORS CONTROL UNIVERSAL ROBOTS

The new smart VeriSens vision sensors XF900 and XC900 can control the collaborative robots (cobots) of Universal Robots within only a few minutes of setup. The robot-compatible vision sensors are mounted directly on the cobot or above it. Thanks to the SmartGrid (patent pending), calibration in terms of image distortion, conversion into world coordinates, and coordinate alignment between the vision sensor and robot takes place automatically and extremely easily. This eliminates the elaborate manual “hand-eye” calibration of robot and vision sensor that is conventionally required. It is not only more precise but also reduces setup to a few minutes.

The installation and configuration of the vision sensors are transparent and easy to understand: via the specifically developed VeriSens URCap interface for robot control, only a few steps are needed to benefit from the diverse VeriSens image processing options. In the programming of the robot itself, only two additional commands (nodes) are necessary, allowing a great number of applications across various industries to benefit from the advantages of Vision Guided Robotics. Instead of taught-in waypoints, free positions are used, at which objects are then recognised visually. The established functions can also check object overlaps and gripper clearance. In addition, VeriSens vision sensors can, for example, verify a free storage area, carry out quality controls of objects variably positioned in the provided space, and identify and measure objects.

Learn more at: www.baumer.com/verisens-ur
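Conceptually, the coordinate alignment that VeriSens automates amounts to a plane-to-plane mapping between image pixels and robot workspace coordinates. The OpenCV sketch below illustrates that underlying idea with a homography computed from four reference points; the values are invented for illustration, and this is a conceptual stand-in, not Baumer’s SmartGrid implementation or the URCap interface.

```python
import cv2
import numpy as np

# Four reference points seen in the image (pixels) and their known
# positions in the robot's base frame (mm) -- illustrative values only.
img_pts = np.array([[100, 80], [540, 90], [530, 420], [110, 410]], dtype=np.float32)
robot_pts = np.array([[0, 0], [300, 0], [300, 200], [0, 200]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, robot_pts)

# Map a detected object centre from pixels into robot coordinates.
obj_px = np.array([[[320, 240]]], dtype=np.float32)
obj_mm = cv2.perspectiveTransform(obj_px, H)
print("Grasp target in robot frame (mm):", obj_mm.ravel())
```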
MV
CONTACT DETAILS
N: Nicole Marofsky | W: https://www.baumer.com | E: nmarofsky@baumer.com
ACHIEVE 100% GLUE BEAD INSPECTION
With 3D Smart Sensors

Smart 3D scanning provides all the built-in measurement and decision-making you need. Generate critical 3D shape data to quickly and accurately determine the width, position, height, and volume of the glue bead. Easily scan transparent and translucent glues. Leverage the built-in Surface Track tool to automatically inspect each location along the bead path, with no need for you to configure individual measurement algorithms. Accurately identify adhesive material overflow and breaks with onboard pass/fail decision-making logic.

Gocator® makes glue bead inspection accurate (and easy). Visit www.lmi3D.com/CE
WILL YOU BE THE 2020 EMVA YOUNG PROFESSIONAL?

The winner of this prestigious industry award will be announced in Sofia in June.

The European Machine Vision Association has launched its annual EMVA Young Professional Award. This prestigious industry award honours the outstanding and innovative work of a student or a young professional in the field of machine vision or computer vision. The 2020 winner will receive their award at the 18th EMVA Business Conference, which is being held in Sofia, Bulgaria, June 25th-27th. The winner will also receive 1,500 euros and the opportunity to present their work to machine vision industry leaders from Europe and further afield. In addition, the winner will secure a free delegate pass to the European Machine Vision Forum 2020 being held in Cork, Ireland, over September 11-12. The EMVA award is designed to support further innovation in our industry, to contribute to the important aspect of dedicated machine vision education and to provide a bridge between research and industry. Applications are invited from students and young scientists from European institutions that focus on challenges in the field of vision technology and that apply the latest research results and findings in computer vision to the practical needs of the machine vision industry.

The criteria for the work to be presented for the EMVA Award are:

(1) Outstanding innovative work in the field of vision technology. Industrial relevance and collaboration with a company during the work are required. The targeted industry is free of choice.

(2) The work (master’s thesis or PhD thesis) has to have been completed within the last 12 months at (or in collaboration with) a European institution. The student may meanwhile have entered the professional field.

To enter, a short abstract of one to two pages in English has to be submitted to the EMVA Secretariat, Ms. Nadine Kubitschek, at ypa@emva.org by May 11th, 2020.

The most recent winner was Dr Johannes Meyer for his research “Light Field Methods for the Visual Inspection of Transparent Objects”. In an interview with MVPro he said: “Winning the EMVA Young Professional Award showed me that my work has an industrial importance and that the machine vision industry acknowledges it – so I’m really glad and happy for the decision of the jury.”

For more information go to: https://www.emva.org/news-media/news/ MV
MACHINE VISION ACCESSORIES: WHAT YOU SHOULD EXPECT AND MORE

Alysium’s Thomas Detjen explains how to choose the right accessories for your technology and applications.

Most of the time, accessories are the very last item on the bill of materials for a new project. However, depending on the source, ignoring these assemblies can produce a lot of headaches further down the process. So what are the key indicators that will help you choose well instead of producing additional headaches?

The most common interfaces in the machine vision world are based on consumer standards, such as RJ45 and USB, or, for the high-speed area, on interfaces used inside data centres, such as CX4, SFP+ or LC fibre connectors. The advantages of using these are clear: they are always readily available, they are produced in high quantities and they are low cost.

However, have you tried to use consumer USB3 cables? It just isn’t feasible, and manufacturers will not take long to notice the difficulties. The cables are built for short distances, which for industrial applications just isn’t sufficient. At Alysium, we have spent more than 15 years focusing on industrial-quality cables, specifically for the machine vision market. Our products are designed to fit perfectly for industrial applications and with the most reliable interface technology you can find on the market. But what is so special about those accessories? Let’s take a deeper look into the details, so that you can identify the perfect accessories for your application more easily in the future.

First of all, mechanical robustness is important. The unique die-cast design of Alysium’s A+ product family (pictured below) offers several improvements compared with regular, consumer-based moulded assemblies. Next to improved 360° shielding, the die-cast design is perfect for reducing stress from the screw locking. With moulded assemblies, the mould material within the assembly can be too soft: if you tighten the screws with too much force, the plug applies too much stress to the mechanical connection and, in a worst-case scenario, you can destroy the receptacle inside the camera. This can’t happen with the A+ die-cast housing.

Secondly, the USB3 and RJ45 A+ product group has a patented screw-locking bracket design. For the RJ45 connector and the USB A, Type C and USB B connectors, you can decide after buying the assembly whether there is a requirement for screw locking, and if so, you can opt for the horizontal or vertical version. The screw-locking bracket makes easy adaptation to your application possible and reduces the possibility of having the wrong assembly in stock.

Finally, always keep an eye on the raw cable used. A thin cable, whether for USB3 or Camera Link, might improve handling but reduces the cable length that is reliably possible in a passive assembly. If you wonder why some companies can support USB3 for machine vision at between three and five metres in length and others up to eight metres, it will come down to the experience and focus of the manufacturer. Some manufacturers focus more on the consumer market and don’t understand the unique demands of the machine vision market. Alysium, on the other hand, produces passive USB3 assemblies up to eight metres and, depending on the application, even up to 10 metres. How? Those assemblies are based on a unique raw cable design, which uses a twinax construction for the SuperSpeed signals. Above those lengths, Alysium uses a hybrid fibre cable, which is capable of transferring power and USB3 signals up to 50 metres.

What has worked for USB3 also works for Camera Link: cable lengths up to 14 metres for Full-configuration applications are possible, as confirmed by several camera and frame grabber manufacturers.

Alysium is also working on the next generation of interface technology. Camera Link HS™ is already using fibre cables for the latest camera models; other interfaces will follow in the near future. For your long-term strategy, it is crucial that you choose a reliable supplier who understands not only today’s demands but those of tomorrow, and who has innovated rather than relying on outdated techniques. Alysium is in a position to support businesses today, tomorrow and further into the future. Approved and recommended by many camera manufacturers, our experience enables us to provide advice for your applications.

Currently, the A+ USB3 assemblies are literally flying in 2020, as they are being used in the next space rover destined for Mars. So, if they are good enough for Mars, shouldn’t they be good enough for your application? MV
RUSSIAN MARKET IS OPEN FOR BUSINESS

The vision solutions market is growing. FRAMOS® discovered just how much when it attended the recent All-over-IP tradeshow in Moscow, which connects the IP industry with companies in surveillance and security, the Internet of Things (IoT), embedded vision, biometrics, artificial intelligence, digital cities, and smart factories. FRAMOS Russian sales representative Daria Scheel provides an insight into the specific opportunities and challenges of the Russian vision market.
HOW DO YOU RATE THE RUSSIAN MARKET FOR VISION TECHNOLOGY?

Despite the many sanctions and political tensions, the Russian economy is on a very good trajectory. Russian companies are extremely motivated to develop technical innovations and, above all, their own progressive solutions. Fortunately, technical education is very good in Russia; there are many experienced and well-trained developers and engineers. In addition, the Russian urban infrastructure is comparatively far ahead in terms of digitisation. These conditions are optimal for the growth of imaging and vision technologies.

However, in Russia we see that the focus is less on pure components and more on solution packages that consist of hardware and open source systems. Usually, companies want to develop a primary hardware technology background first, and subsequently their own software stacks. Therefore, project cycles sometimes proceed more slowly; Russia is at the edge of growth, and we can see that the importance and intensity of image processing is increasing significantly.

WHAT STRATEGY DO YOU FOLLOW ON THE RUSSIAN MARKET?

FRAMOS has been active in Russia since 1981, but we are only beginning to get a foothold on the market. I approach the market in a completely new way, both as a native speaker and as a FRAMOS sales representative. Inherently, this situation shows the potential and importance that image processing has in Russia from our point of view. My goal is to strengthen existing connections and to establish and expand new customers and partners in Russia.

Embedded vision is a very important and key sales driver, providing customers with a platform and easy access to sensor technology. We then support customers with integration and customised software and firmware, with their own IP; for example, with the Cyrillic notation. Russian customers want to increase their competencies and rely on individual solutions and quality. Access to the market and technology is crucial for most companies, as is time-to-market, and we can help them to be self-sufficient.

WHAT OPPORTUNITIES DO YOU SEE, AND WHICH INDUSTRIES DOES FRAMOS TARGET?

With Sony sensors and our proprietary sensor module ecosystem, our focus is primarily on camera developers and OEMs in the smart industry and production, life sciences and medical, transportation, and infrastructure sectors. The modules in particular, which provide high-quality recognition systems for security and surveillance in the Smart City and ITS areas, are in high demand. In addition, they are found in applications like facial recognition and people counting in public spaces or local traffic, and in traffic light control or parking management. Industrial end-customers for digital and smart production are also very important. The potential in these areas is immense, whereby the focus is less on classic machine vision and more on digital automation workflows.

The first customers of our 3D GigE camera D435e, based on Intel® RealSense™ technology, come from the areas of robotics, drones, logistics, and warehouse management; and smart farming and livestock. A Southern German automobile manufacturer is using the new 3D cameras from FRAMOS in a pilot project to increase the level of automation in its production facilities, with precise pick & place robots. Customers in all industries benefit from the fact that we offer technical consulting, software, and custom solutions in addition to classic vision components. For example, we are currently working on a major security project in the Russian financial sector, where our software expertise is crucial. For our Russian customers, we can be a technical vision partner who supports them throughout the entire course of a project from a single source.

WHAT ARE THE ADVANTAGES OF THE FRAMOS D435E OVER CONVENTIONAL 3D CAMERAS AND CLASSIC INDUSTRIAL IMAGE PROCESSING CAMERAS?

The FRAMOS D435e is based on Intel®’s RealSense™ technology. The industrial GigE Vision connection enables standardised and fast Ethernet transmission over long cable lengths, and without performance latencies. The camera is compatible with all GigE Vision software and all Intel® RealSense™ drivers. All connectors, in M8 and M12 format, can be screwed together to ensure permanent fixation on mobile systems, or on robots for stable calibration and image quality. Power over Ethernet is possible, and the dust- and waterproof IP66 housing is designed for use in harsh environments. Customisations of the camera are also possible; for example, in environments containing aggressive chemicals, an IP67-class housing with special sealing is required. This modification can be achieved with a minimum of additional effort.
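Given the stated compatibility with Intel RealSense drivers, a depth read-out from the D435e could in principle follow the standard pyrealsense2 flow sketched below. Whether the Ethernet-attached camera is reached through this API or through GigE Vision tooling depends on the driver setup, so treat this as an assumption-laden illustration rather than FRAMOS guidance.

```python
import pyrealsense2 as rs

# Standard RealSense capture loop; with the D435e the transport is GigE,
# but the stated driver compatibility suggests the same API shape applies.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    print("Distance at image centre (m):", depth.get_distance(320, 240))
finally:
    pipeline.stop()
```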
WHAT DO RUSSIAN CUSTOMERS PARTICULARLY VALUE?

Russian customers particularly appreciate a high degree of competence and experience. Until now, Russian companies were more price-driven, but with an increase in their technical competence and their need for individual solutions, this scenario is undergoing change. In the future they will focus more on quality, but will need additional support, which is very important. Our engineers and developers are available to our customers for technical guidance and support. On the one hand, this guarantees high quality and fast time-to-market; at the same time, it guarantees high efficiency and the conservation of resources, which is just as important in Russia as anywhere else in the world.

It is encouraging to see how the Russian market is opening up, starting positive discussions about how vision technology can help create cutting-edge devices and solutions to drive digitalisation and automation in every industry. MV
HUMAN EYE INSPIRED LIQUID LENSES ENABLE FAST FOCUSING

Fast-growing Swiss lens manufacturer Optotune explains how its tunable liquid lenses address the issue of focal depth.
Optotune, a spinoff from ETH Zurich based in Switzerland, is the market leader in focus-tunable liquid lenses. Thanks to their high repeatability and accurate performance, the lenses are beneficial in a variety of optical applications such as laser processing, augmented reality (AR), MedTech and machine vision. With significant market growth and the development of innovative products, Optotune has grown rapidly, buoyed by being able to offer customised solutions alongside its off-the-shelf imaging systems. “We enable reliable high-speed imaging with a dynamic working distance,” says Mark Ventura, co-founder. “It is great to build cutting-edge technologies together with our partners and customers.” Optotune ensures that each lens is compatible with standard optical elements for plug-and-play integration. There is a great deal behind the working principle of the Optotune lens and its integration into conventional imaging systems. In a wide range of applications such as logistics, quality control and sorting, machines now automate processes previously completed by humans. To image their targets, they use optical sensors. Computers process the acquired images and feed them into sophisticated decision algorithms.
Imaging systems often need to combine high resolution with a large field of view, which makes it possible to identify small features on large objects. Conventional fixed-focus optical systems have a limited depth of field and therefore need to be adjusted or replaced when the working distance changes. Optotune offers innovative products to address this focal depth issue. A tunable lens added to an imaging lens allows focusing on different z-positions without compromising image quality. By eliminating the need for manual adjustments or multiple objectives and cameras, cost and system complexity decrease drastically. Optotune tunable lenses can be added to standard imaging systems comprising a camera and an imaging objective, enabling rapid, electrical tuning of the working distance within milliseconds whilst preserving the resolution and field of view of the imaging system. Tunable lenses conceptually follow the working principle of the human eye. An elastic polymer membrane covers a reservoir containing an optical liquid. A voice-coil-actuated system exerts pressure on the membrane outside of the clear aperture. The increasing pressure changes the deflection of the membrane in the optical path, as Figure 1 illustrates, thereby changing the optical power. Consequently, the current flowing through the actuator is directly proportional to the resulting focal power of the lens.
In package sorting, objects of varying heights pass the optical scanning system at speed. The imaging system must rapidly adapt to a large field of view at different working distances. To track barcodes and addresses, high resolution is required. Placing the liquid lens in front of the objective leads to a large range of achievable working distances: vision systems can image over a wide focal range from infinity (tunable lens at 0 diopters) down to about 200 mm (tunable lens at 5 diopters).

Figure 1: Working principle of a typical liquid lens. Left: A parallel beam is transmitted through the liquid lens. Right: When actuated, the curvature of the lens and the focal power change simultaneously.

Optotune offers clear aperture sizes of 3, 10 or 16 mm. The largest allows for a combination with large sensors and enables the highest resolution. Depending on the size of the lens, focal power can be changed from one value to another within 4 ms. Voice-coil actuation allows the liquid lenses to focus for billions of cycles, which equates to a very long lifetime. Figure 2 depicts the EL-16-40-TC liquid lens in both industrial and OEM configurations. As a core technology in machine vision, Optotune lenses are either used in combination with off-the-shelf imaging optics or are integrated into complete imaging systems with optimised optical designs.
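The relationship quoted above (0 diopters for infinity, about 5 diopters for a 200 mm working distance) follows from working distance being roughly the reciprocal of the added optical power, and the drive current being proportional to that power. A minimal sketch, with an invented calibration slope rather than any Optotune specification:

```python
# Minimal sketch: mapping a target working distance to a tunable-lens
# set-point, assuming the linear current-to-focal-power behaviour
# described above. Calibration numbers are illustrative, not Optotune specs.

def required_optical_power(working_distance_m: float) -> float:
    """Extra optical power (diopters) the tunable lens must add so the
    system focuses at the given working distance (thin-lens approximation)."""
    if working_distance_m <= 0:
        raise ValueError("working distance must be positive")
    return 1.0 / working_distance_m  # 0 dpt = infinity, 5 dpt = 0.2 m

def drive_current_ma(power_dpt: float,
                     ma_per_dpt: float = 40.0,   # hypothetical slope
                     offset_ma: float = 0.0) -> float:
    """Convert optical power to coil current using a linear calibration."""
    return offset_ma + ma_per_dpt * power_dpt

for wd in (float("inf"), 1.0, 0.5, 0.2):  # metres
    p = 0.0 if wd == float("inf") else required_optical_power(wd)
    print(f"working distance {wd} m -> {p:.1f} dpt -> {drive_current_ma(p):.0f} mA")
```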
Figure 2: EL-16-40-TC liquid lens with 16 mm aperture. Different mechanical configurations are available based on the requirements of the optical design. Left: Industrial version. Right: OEM version.
Electronics inspection runs at high speed, requires relatively large fields of view for the inspection of multiple PCBs in parallel and, most importantly, high resolution to detect tiny production errors. By placing the focus-tunable lens at the back of an imaging lens it is possible to achieve high resolution and image circles of up to 30 mm. Robotics in precise manufacturing often relies on accurate measurements of features on objects at variable distances from the sensor. Placing the Optotune liquid lens right after the aperture stop of a telecentric lens creates an imaging system with variable focus at constant magnification. Designing the liquid lens into the imaging objective leads to optimal optical performance for a given application. An objective with the liquid lens close to the aperture stop achieves larger fields of view and lower F-numbers and can accommodate large image sensors. In addition, integrated systems are compact and straightforward to implement in the production line. Recently, new lens models for 12-megapixel 1.1" sensors with 12 mm and 50 mm focal lengths have been introduced by Optotune together with its partners VST and Chance for Change. In conclusion, Optotune tunable lenses have proven useful in a wide range of machine vision applications. Imaging systems with an Optotune liquid lens feature high temporal and spatial resolution and a long lifetime of up to billions of cycles. Replacing multiple imaging systems with one single tunable solution reduces the space and cost of the application. This is how Optotune has been defining the future of machine vision optics since 2008 and continues to do so. MV
SPONSORED
MATRIX VISION EXPANDS EMBEDDED VISION MODULAR KIT

The board-level mvBlueFOX3-5M camera series is the latest component in the Embedded Vision modular kit from MATRIX VISION. It meets the demand in many projects for modular solutions that can be individually adapted to a wide variety of installation situations and computer connections. In contrast to the recently introduced mvBlueFOX3-3M single-board camera with a 6.4 MPixel rolling shutter sensor, the mvBlueFOX3-5M has a modular board design. While the sensor board can be equipped with a wide variety of suitable Sony Pregius global shutter and Starvis sensors from 0.4 to 12.4 MPixels, the connector board with the BFembedded interface provides access to the Embedded Vision modular kit.
Customer-specific connector boards can be developed as needed - and the design possibilities are truly unlimited. For example, the options include connector boards for GPU boards, other plugs or different plug orientations.
The Embedded Vision modular kit from MATRIX VISION
As part of the Embedded Vision modular kit, many combinations are possible involving the mvBlueFOX3-5M. This camera range has an integrated 256 MByte image memory for lossless image transmission. The large FPGA offers many smart features for image processing, and the four digital inputs and four digital outputs give users free rein in process integration. There is also flexibility regarding further combination options: you can select from a range of different filters, lens holders and lenses. The camera is compatible with the GenICam™ and USB3 Vision® standards. Drivers are available for Windows and Linux. Moreover, the camera supports all third-party image processing libraries that are compatible with USB3 Vision®. The Embedded Vision modular kit with the BFembedded interface enables combination with various USB3 connector boards, which can additionally be separated from the camera thanks to flexible cable extensions.
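Because the camera follows the GenICam™ and USB3 Vision® standards, any standards-based acquisition stack should be able to drive it. As a minimal sketch (not MATRIX VISION's own SDK), the open-source Python library Harvester can grab a frame from any GenICam-compliant device given a GenTL producer file; the .cti path below is a placeholder, and exact method names vary slightly between Harvester versions:

```python
# Minimal sketch using the open-source Harvester library to acquire one
# frame from a GenICam-compliant camera. The producer path is hypothetical;
# substitute the GenTL .cti file shipped with your camera's SDK.
from harvesters.core import Harvester

h = Harvester()
h.add_file('/opt/vendor/lib/producer.cti')  # hypothetical GenTL producer
h.update()                                   # enumerate connected devices

ia = h.create_image_acquirer(0)              # first camera found
# ia.remote_device.node_map.ExposureTime.value = 10000.0  # optional, in µs
ia.start_acquisition()

with ia.fetch_buffer() as buffer:            # one frame, zero-copy view
    component = buffer.payload.components[0]
    print('frame:', component.width, 'x', component.height)

ia.stop_acquisition()
ia.destroy()
h.reset()
```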
ABOUT MATRIX VISION

Founded in 1986 in Oppenweiler near Backnang, Germany, MATRIX VISION is one of the leading providers of image processing components. As a pioneer in vision, the company offers a wide range of frame grabbers, industrial cameras, smart cameras, video sensors, embedded systems and software for industrial image processing. For special requirements, MATRIX VISION also develops customer-specific solutions, ranging from individual components to complete functional units. MATRIX VISION has been a subsidiary of the Balluff group since 2017. MV
CONTACT DETAILS N: Karin Ehinger | W: https://www.matrix-vision.com | E: karin.ehinger@matrix-vision.de
CORE VALUES

Euresys demonstrates how machine vision camera suppliers reduce time to market with transport layer IP cores.
The fundamental competency of engineers designing machine vision cameras and systems is usually configuring the core camera features to provide the best possible image while meeting size, weight, power budget and other requirements. But they also have to devote considerable time and effort to successfully streaming the image from the camera to the host. Leading-edge vision transport layer standards such as GigE Vision, USB3 Vision and CoaXPress (CXP) are complex and still evolving, so several months of work by experienced protocol engineers is typically required to design the interface. A number of manufacturers of leading-edge machine vision cameras, such as Ozray (formerly NIP), Crevis and Sick, are addressing this challenge by purchasing transport layer interfaces in the form of intellectual property (IP) which is provided ready to incorporate into field programmable gate arrays (FPGAs) along with other camera features. “Use of IP cores enables us to develop more cameras at the same time while reducing time to market,” said Keith Ahn, executive director and chief technology officer for Korean camera provider Ozray. “The biggest advantage of using IP cores is that we can create a reliable standard transmission interface in a fraction of the time previously required,” said June Hwang, CEO of Crevis, also based in Korea. A decade ago, Camera Link was the most widely used machine vision transport layer interface. The streaming part of Camera Link was well defined, but the control path was not specified, so every camera implemented its own configuration protocol, requiring individual tweaks on the host side to fully support the camera.
Fast forward to today and machine vision communications between the camera and host computer have been largely standardised, primarily using CXP, GigE and USB interfaces. The new vision standards are more complex and require tighter timing margins than earlier generations. Further complications are provided by the fact that the standards themselves are evolving, requiring review of the standard and sometimes an upgrade of the transport layer implementation. The emergence of machine vision transport layer IP cores reduces the time required to develop camera-host interfaces. For example, Sensor to Image (S2I), a unit of Euresys, a leading frame grabber supplier, provides IP cores that meet the latest CXP, GigE Vision and USB3 Vision interface standards. These IP cores secure the interoperability of the camera and host while ensuring compliance with the latest version of the interface layer.
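To give a feel for what these transport layers involve, the sketch below broadcasts the very first message a GigE Vision host sends - a GVCP discovery command on UDP port 3956 - and listens for device acknowledgements. The header constants follow the published GigE Vision specification, but this toy covers only a sliver of the control channel; streaming, resend and heartbeat logic are where the IP cores above earn their keep. It is an illustration, not S2I's implementation:

```python
# Toy GVCP discovery broadcast (GigE Vision control protocol, UDP 3956).
# Illustrative only: a real transport layer also implements register access,
# streaming (GVSP), packet resend, heartbeats and standard compliance tests.
import socket
import struct

GVCP_PORT = 3956
DISCOVERY_CMD, DISCOVERY_ACK = 0x0002, 0x0003

# 8-byte GVCP header: magic 0x42, flags (0x01 = ack required),
# command, payload length, request id.
packet = struct.pack('>BBHHH', 0x42, 0x01, DISCOVERY_CMD, 0x0000, 0x1234)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(1.0)
sock.sendto(packet, ('255.255.255.255', GVCP_PORT))

try:
    while True:
        data, addr = sock.recvfrom(1024)
        status, answer, length, ack_id = struct.unpack('>HHHH', data[:8])
        if answer == DISCOVERY_ACK:
            print(f'GigE Vision device answered from {addr[0]}')
except socket.timeout:
    pass
finally:
    sock.close()
```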
MVDK evaluation board with MIPI CSI-2 sensor and CXP interface board
S2I’s Vision Standard IP Core solutions are delivered as a working reference design along with FPGA IP cores that have been fully tested against a wide range of popular frame grabbers and image acquisition libraries. The IP cores are compact, leaving plenty of room for additional vision functionality. They are compatible with Xilinx 7-series and newer, and Intel/Altera Cyclone V and more recent devices.
The top-level design, consisting of the interface between external hardware such as the image sensor and the transport layer PHY, is delivered as VHDL source code that can be adapted to custom hardware beyond the leading FPGA platforms supported by the IP cores. The Video Acquisition Module of the reference design simulates a camera with a test pattern generator. This module is delivered as VHDL source code, which is replaced by a sensor interface and pixel processing logic in the camera design. An FPGA-integrated CPU (either MicroBlaze, NIOS or ARM) is used for several non-time-critical control and configuration tasks on the Vision Standard IP Cores. This software is written in C and can be extended by the customer. “By reusing IP cores, machine vision companies can focus on how to make the best image while maintaining full freedom to use any hardware needed to meet size, weight and power budget issues,” said Jean Caron, vice president sales and support, EMEA for Euresys. “We work closely with the CoaXPress, USB3 Vision and GigE Vision committees to ensure that our IP cores comply with the latest revisions of the standards,” said Matthias Schaffland, IP product specialist at S2I. S2I has recently introduced an IMX Pregius IP core providing an interface to Sony Pregius Sub-LVDS image sensors. The company will also soon introduce an interface to MIPI sensors, primarily used in embedded vision systems and mobile devices. S2I offers a volume license best suited for companies with a large product line as well as a single-piece license, which is the best option for companies with smaller product lines. Training and support are offered with either licensing arrangement. Ozray has implemented IP cores in its Pollux and Pamina area and line scan cameras and its Deneb thermal camera. In-house development of CXP and GigE transport layer interfaces would have been considerably more expensive than purchasing IP. “By purchasing IP cores, we can focus internal engineering resources on image processing and controlling sensor functions to a degree that wasn’t possible in the past when so many resources were devoted to the camera-host interface,” Ahn said. “We are also now able to address new markets by expanding our interface offerings from Camera Link alone in the past to now offering CXP and GigE as well. We are 100 per cent satisfied with the IP cores and services provided by S2I.”
Crevis’ Hwang said in the past it took a considerable amount of engineering manpower to develop the internal transmission logic, device drivers and Tx/Rx library for transport layer interfaces for its area scan cameras. “Now we purchase IP cores for GigE, CXP and USB interfaces from S2I while our engineers focus on developing the sensor interface and camera functionality,” Hwang said. “S2I provides the reference design, training and technical support. This approach makes it possible to develop a reliable standard transmission interface in a fraction of the time required in the past. By incorporating IP cores into an FPGA that replaces many other parts, we have also reduced the size and manufacturing cost of our cameras.” Sick’s Ranger 3 3D streaming camera offers a greater number of 3D profiles per second in combination with a large height range and high image quality. “Previous generations of the Ranger 3 used a proprietary Gigabit Ethernet interface in order to provide capabilities that could not be delivered by following the standard,” said Mattias Johannesson, senior expert, Software 3D Camera for SICK IVP AB. “When the standard grew to include the features we needed, we wanted to adopt it but didn’t want to divert the engineering resources that would have been required to do the job internally. S2I offered a proven standard IP core together with new custom modules to cover the extensions of the standard. “We had very good communications with S2I throughout the implementation process, including several face-to-face meetings. Our engineering team was able to focus on our imager and signal processor, making it possible to get the latest Ranger 3 version to market in considerably less time than would have been required if we had developed the interface in-house.” “IP cores enable machine vision companies to build FPGA-based products using the GigE Vision, USB3 Vision and CoaXPress standards, delivering the highest possible performance in a small footprint while minimizing development time,” Schaffland concluded. MV
AI VISION SOLUTION TACKLES £60M FOOD WASTE ISSUE

Harry Norman, managing director of OAL, explains the logic behind APRIL Eye: the first artificial intelligence-based solution for label and date code verification.
A product recall due to incorrect labelling on food packages can have a devastating impact on a business, both financially and reputationally, and can result in vast quantities of waste. For food manufacturers, label and date code verification systems exist to ensure product recalls or withdrawals are avoided by checking the correct dates are printed on the correct packages. These systems can take a variety of forms - from a human eye reading dates to an automated system - but all have been historically prone to error for a variety of reasons. OAL is laser-focused on innovation and dedicated to helping manufacturers overcome the difficulties associated with label and date code errors, so the company set about exploring ways in which it could eliminate the product recall for good.
THE PROBLEM Label and date code verification began its journey with operators checking the date codes against a pre-generated sheet containing the date codes for that product run. However, asking humans to check date codes for hours at a time meant that distraction and tiredness set in, leading to errors. As technology developed, some retailers began to insist that suppliers
installed either a vision system or an automated label and date code verification system on the line. Both systems utilised cameras and printers in the checking of date codes, removing the existing problems of tiredness or distraction. However, humans were still able to intervene with the systems and cause errors – whether this was by adding an extra printer to speed up production but not connecting it to the system, or changing the print information for that run. A solution was needed that could provide an independent check and take decisions away from the operators. This was certainly the thought process of a large retailer who approached its food manufacturing suppliers at the beginning of 2017 with a view to solving the waste problem that sits at the heart of food manufacturing. Incorrect date codes and packaging are one of the largest sources of food waste; an estimated £60-£80 million problem (excluding energy and environmental impact). Two of the retailer’s suppliers, leading global food manufacturers, contacted OAL to ask them to work alongside them to develop a failsafe solution to combat the issue of incorrect date codes entering the supply chain. The solution would need to meet the retailer’s expectations, and in the process eliminate waste, product recalls and cut the manufacturers’ costs.
THE TEAM OAL undertook the project as part of its Food Manufacturing Digitalisation Strategy. The company had already been awarded an Innovate UK grant in 2017 and used part of the funds in its partnership with the University of Lincoln to investigate how artificial intelligence could revolutionise this fundamental area of the food manufacturing process. The University of Lincoln put together a team of global experts in AI, including Professor Stefanos Kollias, the founding professor of machine learning, and Professor Xujiong Ye, who led the Computational Vision research group at the university, to work alongside OAL to develop a solution.
Professor Kollias, who headed the team, had been brought to the University of Lincoln to spearhead the machine learning division of the faculty. He has produced world-leading research in the field of machine learning and is an IEEE Fellow (2015, nominated by the IEEE Computational Intelligence Society). The university attracted more talent from as far afield as Greece, Iceland and China.
THE SOLUTION The project began in November 2017 and led to the launch of APRIL™ Eye, the world’s first artificial intelligence-based vision system for label and date code verification. By combining machine learning with artificial intelligence, it is able to read anything that is legible to the naked eye, with just a basic camera and scanner backed by a ‘brain’, making it a much more cost-effective solution. The system takes a photo of each date code, then reads each back using scanners to ensure it matches the programmed date code for that product run. The production line comes to
a complete stop if a date code doesn’t match, ensuring that no incorrect labels can be released into the supply chain. What’s more, manufacturers benefit from comprehensive reporting and full traceability without relying on paper checks carried out by operators. Vision systems have previously relied on the kind of standardised printing that can be read reliably by optical character recognition (OCR). Unfortunately, due to retailers’ need to adapt fonts and formats, OCR-friendly printing isn’t suitable for the industry. Inkjet printers have the flexibility to meet these needs, but their output is extremely variable, which means it can be difficult for the date code to be read by a traditional vision system. OAL has used over half a million photos of date code variations to ‘teach’ the system to recognise numbers and letters whatever the format, and with a large amount of variability in terms of different fonts and sizes, font distortion and packaging changes when switching product runs, as well as lighting and heat sensitivity. In this way, food manufacturers can rely on the system to offer the level of security vision systems achieve in other industries.
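A hedged sketch of the verification loop described above - not OAL's code; the recogniser is a stand-in for a trained model such as APRIL Eye's, and all names are hypothetical:

```python
# Sketch of an AI date-code verification loop: read each printed code with
# a trained recogniser and halt the line on the first mismatch.
from typing import Callable, Iterable

def verify_run(frames: Iterable[bytes],
               expected_code: str,
               read_code: Callable[[bytes], str],
               stop_line: Callable[[str], None]) -> int:
    """Check every pack; halt the line at the first mismatching date code.
    Returns the number of packs verified before any stop."""
    verified = 0
    for frame in frames:
        predicted = read_code(frame)   # trained recogniser, e.g. a CNN
        if predicted != expected_code:
            stop_line(f"read {predicted!r}, expected {expected_code!r}")
            break                      # no mislabelled pack leaves the line
        verified += 1
    return verified

# toy usage with stand-in callables
if __name__ == "__main__":
    frames = [b"frame1", b"frame2"]
    n = verify_run(frames, "2020-01-31",
                   read_code=lambda f: "2020-01-31",
                   stop_line=lambda why: print("LINE STOPPED:", why))
    print(n, "packs verified")
```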
THE IMPACT Where APRIL Eye is installed, it verifies the date codes on over 1,000 products a minute, ranging from fresh produce to ready meals, for a number of large, multinational food manufacturers. Following its installation on packaging lines, those with APRIL Eye are yet to experience a withdrawal. By incorporating APRIL Eye onto a packaging line, manufacturers can remove operators and achieve full automation and unmanned operation, eliminating product recalls caused by human error. MV
INVESTING IN INDUSTRIAL AUTOMATION

Digitalisation is boosting the market for increasingly sophisticated manufacturing automation technology. Neli Ivanova, sales manager, Industrial Equipment at Siemens Financial Services in the UK, examines how integrated finance helps OEMs and their customers capture the benefits of automation in a financially sustainable way.
Automation has been commonplace in the manufacturing sector for decades and can now be found in nearly every sector of industry [1]. Automated systems that reliably perform repetitive, standardised tasks continue to enable manufacturers to operate with greater efficiency.
One example of digitalisation in the manufacturing sector is the introduction of cloud platforms. By using cloud-based, open IoT operating systems such as Siemens MindSphere, manufacturers can connect their products, plants, systems and machines to collect, analyse and harness data from every area of the factory floor.
This is evidenced not only by speedier production rates but also by aspects such as reduced factory lead times, more efficient use of materials, and increased control over product quality and consistency [2]. And yet, compared to other advanced economies, the UK invests relatively little in industrial automation and robotics [3]. This is surprising, as around 92 per cent of UK manufacturers are convinced that ‘Smart Factory’ technologies will help them increase their productivity levels [4]. But manufacturers are often unsure where to begin when modernising their production processes, concerned about ongoing costs, and worried that their products and processes are too bespoke to automate [5]. Private sector finance can help relieve some of these pressures when investing in new technology by offering flexible financing solutions that are tailored to the needs of manufacturers.
Cloud-based systems can also allow manufacturers to connect to customers and suppliers in order to understand supply and demand, and to tailor production processes to the requirements of the entire supply chain. Moreover, these operating systems enable manufacturers to analyse real-time digital data such as vibration indicators and quality analysis, making them aware of alerts and impending faults that cannot be identified by humans. This kind of predictive maintenance allows manufacturers to spot warning signs of problems before they occur, preventing damage to the machine and saving the cost of repair and machine downtime.
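As a hedged illustration of the idea - not MindSphere's API; the thresholds, window and data stream are invented - a predictive-maintenance rule can be as simple as flagging vibration readings that drift several standard deviations from their recent baseline:

```python
# Toy predictive-maintenance check: flag a machine when a new vibration
# reading drifts more than 3 sigma from its rolling baseline. Thresholds,
# window size and the data stream are illustrative assumptions.
import random
from collections import deque
from statistics import mean, stdev

def watch(readings, window=50, n_sigma=3.0):
    """Yield (index, value) for readings that look anomalous."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > n_sigma * sigma:
                yield i, value  # raise an alert before the fault develops
        history.append(value)

# toy usage: steady vibration with one developing fault
random.seed(1)
stream = [1.0 + random.gauss(0, 0.02) for _ in range(200)]
stream[150:] = [v + 0.5 for v in stream[150:]]  # simulated bearing wear
for idx, v in watch(stream):
    print(f"alert at sample {idx}: vibration {v:.2f}")
    break
```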
Another advantage of digitalisation, and particularly of cloud systems, in manufacturing automation is predictive quality. Through real-time data analysis, defects in a production batch can be detected before they actually happen. Sensors analyse the quality of every product and warn of the tiniest changes. Crucially, these changes are flagged while they are still within the range of acceptable quality and are not yet considered defects. Being alerted to these marginal changes allows manufacturers to solve the problem before an entire batch of products is more seriously affected and has to be discarded. Predictive quality is especially useful when it comes to mass customisation, a growing segment in manufacturing. With customers demanding bespoke products, predictive quality allows manufacturers to cater to the needs of the client and produce product variations on a mass scale without allowing errors to creep in.
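A minimal sketch of that warning-band idea, with invented limits: a measurement can sit inside the specification yet outside a tighter warning band, which is exactly the moment to tune the line:

```python
# Toy predictive-quality check: warn while a measurement is still within
# spec but drifting toward a limit. All limits are illustrative.
SPEC_LOW, SPEC_HIGH = 9.80, 10.20      # reject outside this range (mm)
WARN_LOW, WARN_HIGH = 9.90, 10.10      # tune the process outside this band

def classify(measurement_mm: float) -> str:
    if not SPEC_LOW <= measurement_mm <= SPEC_HIGH:
        return "defect"        # too late: part must be discarded
    if not WARN_LOW <= measurement_mm <= WARN_HIGH:
        return "warning"       # still good, but the process is drifting
    return "ok"

for m in (10.00, 10.12, 10.25):
    print(f"{m:.2f} mm -> {classify(m)}")
```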
As customers as well as potential providers of digitalised technology, OEMs play a crucial role in industry-wide adoption of new equipment. Not only can they capture the benefits of digitalisation for their own production processes, but they can create new product ranges that include the machinery as well as the digitalised technology, and create new business opportunities by including sustainable financing in their offering. This creates new business models for OEMs and allows their customers to invest sustainably in new technology and equipment with the help of providers that understand the demands of their industry. OEMs engaged in the manufacture of machinery can leverage these benefits to drive sales, by integrating Finance 4.0 into their overall offering and helping their customers invest in new technology. Such finance arrangements tend to be offered by specialist finance providers that have a deep understanding of how digitalised technology and the manufacturing industry work. Such financiers are able to work with OEMs to demonstrate how that technology can be practically implemented to deliver efficiencies to the manufacturing sector. As the financing arrangement can be an embedded component of the value proposition, OEMs are able to introduce customers to the latest equipment and technology and simultaneously present them with a financially sustainable method to invest in digitalisation. OEMs offering an integrated financing solution to their own customers have the potential to enhance their offering and remain competitive. In other cases, the technology provider will refer its customer to one or more finance providers to fund a sale.
The advantages of investing in digitalised technologies in manufacturing are clear and manifold, but companies need the tools, the trust and the support to invest sustainably. With new technology being introduced into the sector, new opportunities for cooperation and business appear - opportunities that OEMs and their customers can exploit to leverage the benefits of Industry 4.0. MV
[1] Ibis World, AI and Automation: How technology is shaping UK industries: https://www.ibisworld.com/industry-insider/analyst-insights/ai-and-automation-how-technology-is-shaping-uk-industries/
[2] Ibis World, AI and Automation: How technology is shaping UK industries: https://www.ibisworld.com/industry-insider/analyst-insights/ai-and-automation-how-technology-is-shaping-uk-industries/
[3] Robotics Business Review, Are U.K. Manufacturing and Industrial Automation Turning a Corner?: https://www.roboticsbusinessreview.com/manufacturing/are_u_k_manufacturing_and_industrial_automation_turning_a_corner/
[4] The Manufacturer, Annual Manufacturing Report 2018: https://www.themanufacturer.com/reports-whitepapers/annual-manufacturing-report-2018/
[5] The Manufacturer, What are UK manufacturers’ views on automation?: https://www.themanufacturer.com/articles/uk-manufacturers-views-automation/
TULIPP PROJECT INTO EMBEDDED IMAGE PROCESSING AND VISION APPLICATIONS BLOSSOMS

After three years of research at a cost of nearly €4 million, TULIPP, the EU-backed Horizon 2020 project, has been hailed a huge success after ‘achieving all its objectives’. The TULIPP (Towards Ubiquitous Low-power Image Processing Platforms) Consortium has announced a highly successful conclusion to the EU’s three-year project. Beginning in January 2016, the TULIPP project targeted the improved development of high-performance, energy-efficient systems for the growing range of complex, vision-based image processing applications. The TULIPP project was funded with nearly €4 million from Horizon 2020, the European Union’s biggest research and innovation programme to date. The conclusion of the TULIPP project sees the release of a comprehensive reference platform for vision-based embedded system designers, enabling computer vision product designers to readily address the combined challenges of low-power, low-latency, high-performance and real-time image processing design constraints. The TULIPP reference platform includes a full development kit, comprising an FPGA-based embedded multicore computing board, a parallel real-time operating system and a development tool chain with guidelines, coupled with ‘real-world’ Use Cases focusing on diverse applications such as medical X-ray imaging, driver assistance and autonomous drones with obstacle avoidance. The complete TULIPP ecosystem was demonstrated earlier in the year to vision-based system designers in a series of hands-on tutorials. “The TULIPP project has achieved all of its objectives,” said Philippe Millet of Thales, TULIPP’s project coordinator. “By taking a diverse range of application domains as the basis for defining a common reference processing platform that captures the commonality of real-time, high-performance image processing and vision applications, it has successfully addressed the fundamental challenges facing today’s embedded vision-based system designers.”
POWERFUL MULTICORE Developed by Sundance Multiprocessor Technology, each instance of the TULIPP processing platform is 40mm x 50mm and is compliant with the PC/104 embedded processor board standard. The hardware platform utilises the powerful multicore Xilinx Zynq Ultrascale+ MPSoC which contains, along with the Xilinx FinFET+ FPGA, an
ARM Cortex-A53 quad-core CPU, an ARM Mali-400 MP2 graphics processing unit (GPU), and a real-time processing unit (RPU) containing a dual-core ARM Cortex-R5 32-bit real-time processor based on the ARM-v7R architecture. A separate expansion module (VITA57.1 FMC) allows application-specific boards with different flavours of input and output interfaces to be created while keeping the interfaces with the processing module consistent. Coupled with the TULIPP hardware platform is a parallel, low-latency embedded real-time operating system developed by Hipperos specifically to manage complex multi-threaded embedded applications in a predictable manner. Precise real-time co-ordination ensures a high frame rate without missing any deadlines or data. Additionally, to facilitate the efficient development of image processing applications on the TULIPP hardware, and to help vision-based system designers understand the impact of their functional mapping and scheduling choices on the available resources, the TULIPP reference platform has been extended with performance analysis and power measurement features developed by Norges Teknisk-Naturvitenskapelige Universitet (NTNU) and Technische Universität Dresden (TUD) and implemented in the TULIPP STHEM toolchain.
USE CASES Also, the insights of the TULIPP Consortium’s experts have been captured in a set of guidelines, consisting of practical advice, best-practice approaches and recommended implementation methods, to help vision-based system designers select the optimal implementation strategy for their own applications. These will become a TULIPP book, to be published by Springer by the end of 2019 and supported by endorsements from the growing ecosystem of developers currently testing the concept. To further demonstrate the applicability of defining a common reference processing platform - comprising the hardware, operating system and a programming environment that captures the commonality of real-time, high-performance image processing and vision applications - TULIPP has also developed three ‘real-world’ Use Cases in distinctly diverse application domains: medical X-ray imaging, automotive Advanced Driver Assistance Systems (ADAS) and Unmanned Aerial Vehicles (UAVs).
The Tulipp project team at the HIPEAC 2019 event
TULIPP’s medical X-ray imaging Use Case demonstrates advanced image enhancement algorithms for X-ray images running at high frame rates. It focuses on improving the performance of mobile C-arm X-ray imaging systems, which provide an internal view of a patient’s body in real time during an operation. The improvements deliver increases in surgeon efficiency and accuracy with minimal incision sizes, aid faster patient recovery, lower nosocomial disease risks and reduce by 75 per cent the radiation doses to which patients and staff are exposed. ADAS adoption is dependent on the implementation of vision systems, or combinations of vision and radar, and the algorithms must be capable of integration into a small, energy-efficient electronic control unit (ECU). An ADAS algorithm should be able to process a video stream with a frame size of 640x480 at a full 30 Hz, or at least at half that rate. The TULIPP ADAS Use Case demonstrates pedestrian recognition in real time based on the Viola-Jones algorithm. Using the TULIPP reference platform, the ADAS Use Case achieves a processing time of 66 ms per frame, which means that the algorithm reaches the target of running on every second image when the camera runs at 30 Hz. TULIPP’s UAV Use Case demonstrates a real-time obstacle avoidance system for UAVs based on a stereo camera setup with cameras orientated in the direction of flight. Even though we talk about autonomous drones, most current systems are still remotely piloted by humans. The Use Case uses disparity maps, computed from the camera images, to locate obstacles in the flight path and to automatically steer the UAV around them. This is the necessary key towards fully autonomous drones.
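For readers who want to experiment, the Viola-Jones detector used in the ADAS Use Case is available off the shelf in OpenCV as a Haar cascade. The sketch below also mimics the every-second-frame budget described above; it is an illustration on commodity hardware, not the TULIPP implementation:

```python
# Viola-Jones-style pedestrian detection on every second frame, mimicking
# the 30 Hz camera / 66 ms-per-frame budget described above.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

cap = cv2.VideoCapture(0)          # any 640x480 camera or video file
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_index += 1
    if frame_index % 2:            # skip every other frame: 15 Hz effective
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pedestrians = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=3,
                                           minSize=(48, 96))
    for (x, y, w, h) in pedestrians:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("ADAS sketch", frame)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```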
LEGACY “As image processing and vision applications grow in complexity and diversity, and become increasingly embedded by their very nature, vision-based system designers need to know that they can simply and easily solve the design constraint challenges of low power, low latency, high performance and reliable real-time image processing that face them,” concluded Millet. “The EU’s TULIPP project has delivered just that. Moreover, the ecosystem of stakeholders that we have created along the way will ensure that it will continue to deliver in the future. TULIPP will truly leave a legacy.” MV
ABOUT TULIPP AND ITS PARTNERS

TULIPP (Towards Ubiquitous Low-power Image Processing Platforms) is funded by the European Union’s Horizon 2020 programme. It began its work in 2016 and was completed in 2019. Its focus was on the development of high-performance, energy-efficient embedded systems for the growing range of increasingly complex image processing applications that are emerging across a broad range of industry sectors. TULIPP focused on providing vision-based system designers with a reference platform that defines implementation rules and interfaces designed to tackle power consumption issues while delivering guaranteed, high-performance computing power. For more information on TULIPP, please visit: http://www.tulipp.eu. For further information on the TULIPP consortium members see:
Thales – www.thalesgroup.com
Efficient Innovation SAS – www.efficient-innovation.fr
Fraunhofer IOSB – www.iosb.fraunhofer.de
Hipperos – www.hipperos.com
Norges Teknisk-Naturvitenskapelige Universitet – www.ntnu.no
Technische Universität Dresden – tu-dresden.de
Sundance Multiprocessor Technology – www.sundance.com
Synective Labs – www.synective.se
The Tulipp Starter Kit with Lynsyn PDM on a PC/104 carrier board.
POWERING VISION SOLUTIONS THROUGH SIMPLICITY
Luca Bonato, product manager at Opto Engineering, explains how the company searches for simple solutions to complex challenges. He draws on the work done for baseClass Automation’s ShakePicker.
In today’s industry, one of the main features we look for is simplicity. Vision engineers are faced every day with many different tasks and challenges, but the beauty of a bright idea, an exceptional moment of genius, a simple trick which solves everything - that’s what we look for. As we say at Opto Engineering®, “simple works better”. We can work towards a simple and efficient solution from many sides. Choosing the right hardware is of course critical - and here’s where your trusted supplier of components comes into play. But it is with software that things usually come to a halt. Many software products on the market are either too complex or too simple, and don’t provide the optimal combination of capabilities and ease of use needed to move things forward smoothly and save precious time.
That’s the spirit behind an innovative solution developed by baseClass Automation kft (Hungary), which recently came up with ShakePicker, an innovative system for part handling powered by two special products from Opto Engineering® – PENSO®, an artificial intelligence-based computational unit for imaging applications, and FabImage, a data-flow based software designed for machine vision engineers. ShakePicker is a vibrating, shaking solution for flexible part feeding. The parts lying on its flat, backlit surface can be moved and distributed as needed by its 4D shake engine, with an adjustable frequency in the 0.5-100 Hz range and a customisable wave shape. Using either one of the 32 shaking pattern program banks, or a custom program, parts distribution can be precisely controlled. According to the
actual parts distribution, directional shake programs can be executed to redistribute parts equally. Also, the system integrates digital I/O based program selection. This module offers several advantages:
• It is suitable for parts which cannot be handled by bowl feeders
• Fine tuning is done via software rather than mechanically, and can thus be done remotely
• No part jamming
• Part changeover doesn’t require hardware changes
Of course, to exploit all its potential, the module must be governed by a vision system and here’s where the collaboration with Opto Engineering® comes into play.
We helped baseClass finalize the first version of “ShakePicker – System version”, which was presented at Vision 2018 in Stuttgart. ShakePicker – System version is a complete solution, which in its full form includes the ShakePicker module, the Opto Engineering PENSO® AI-based computational unit equipped with a camera and fixed focal length lens, a robot arm with gripper (NACHI, OTC-Daihen, Fanuc, ABB or other), a
conveyor-based refilling system and a complete guarding and safety system. PENSO® is an artificial intelligence-based computational unit for imaging applications. PENSO® self-learns the expected features of an object by simply looking at a small series of samples, regardless of the possible presence of defective products among them. The information related to defective parts can be sent through the TCP/IP protocol to the robot arm, which then moves or discards the part. More recently, Opto Engineering® cooperated again with baseClass, providing components for new projects with ShakePicker. This time, instead of using PENSO®, the robot was controlled using a vision application developed with Opto Engineering FabImage Studio Professional. FabImage is a data-flow based software designed for machine vision engineers. Its graphical design allows for fast software prototyping, while the easy “export to code” function provides developers with the freedom needed for the most advanced applications. Its architecture is highly flexible, ensuring that users can easily adapt the product to the way they work and to the specific requirements of any project – and this was the case for baseClass ShakePicker – System version. The right choice of components, the support provided by the vendor to integrate them and the innovative ideas of the integrator are a sure recipe for success. MV
AUTONOMOUS MOBILE ROBOT = NUMEROUS APPLICATIONS

The times when AGVs (automated guided vehicles) were considered super-modern tools boosting the automation of repetitive material delivery in factories and warehouses are gone for good, according to Photoneo.
Today, AGVs simply cannot keep up and compete any more with a much more sophisticated approach - AMRs (autonomous mobile robots). Photoneo, based in Slovakia, decided to extend its global portfolio by introducing its first AMR, which features the company’s state-of-the-art technology and know-how.
Phollower 100 has been designed as a universal mobile platform for the transport and delivery of materials in warehouses, factories, hospitals, hotels and other large spaces. The robot is able to carry up to 100 kg and pull up to 350 kg of payload. It aims to relieve human workers of monotonous tasks and heavy material handling, as well as to save time and increase efficiency. Employees’ skills and potential can instead be applied in areas which genuinely require a human workforce. As the term “autonomous mobile robot” suggests, Phollower 100 does not require any wires or magnetic tapes attached to the floor. These and other infrastructure adjustments produce solutions that are susceptible to damage or that need to be rebuilt in case
of changes in the trajectory. Being able to understand its surroundings, Phollower 100 can operate very flexibly and reliably. The robot navigates using a lidar, a 3D camera and a virtual map. The laser scanner covers 360° and the body of the robot has an interchangeable front and rear with a zero turning radius, which allows reversible movement. The robot uses odometry and allows trajectory creation with custom curves and instant map redrawing. The fact that human workers tire during long shifts leads not only to decreased efficiency but also to impaired concentration and alertness. Phollower 100 is very fast yet absolutely safe, meeting the requirements of safety class SIL2 PL.d Category 3. It checks its surrounding environment 33 times per second and the system is able to detect obstacles with a minimum width of 30 mm every 30 ms. It also enables adaptive safety zones. The laser scanner prevents collisions with objects up to 200 mm above the surface and the 3D camera does so significantly above that safety layer.
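As a hedged sketch of the kind of check such a robot runs many times a second (the 30 mm minimum obstacle width follows the description above; the rest of the numbers are invented, and this is not Photoneo's software), each lidar scan can be tested against the current safety zone:

```python
# Toy safety-zone check over a 2D lidar scan: stop if any sufficiently
# wide obstacle intrudes into the current safety zone. The zone radius
# would adapt to speed on a real robot; all numbers are illustrative.
import math

def obstacle_in_zone(scan, zone_radius_m: float,
                     min_width_m: float = 0.03,
                     angle_step_rad: float = math.radians(0.5)) -> bool:
    """scan: list of ranges (metres) at fixed angular steps over 360 deg.
    Returns True when consecutive in-zone returns span >= min_width_m."""
    run = 0
    for r in scan:
        if r < zone_radius_m:
            run += 1
            # arc length approximates the obstacle's width at range r
            if run * angle_step_rad * r >= min_width_m:
                return True
        else:
            run = 0
    return False

# toy usage at a 33 Hz check rate: shrink the zone as the robot slows
scan = [2.0] * 720                  # clear 360-degree scan, 0.5-degree steps
scan[100:108] = [0.6] * 8           # ~4 cm wide object 0.6 m ahead
print(obstacle_in_zone(scan, zone_radius_m=1.0))  # True -> stop or replan
```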
The variability of Phollower 100 offers many use cases - it can work with collaborative robots, carry boxes or pallets, or pull payloads of any kind. Besides the usual transport of materials in factories and warehouses, Photoneo’s AMR helps the staff of a Slovak hospital with the distribution of pharmaceuticals, medical equipment and other supplies from one floor to another, saving the employees’ time and improving overall efficiency. Photoneo is already working on a new generation of the robot, further improving its performance and adding new features, so there is a lot to look forward to. MV

www.photoneo.com | Tel: +421 948 766 479 | Email: sales@photoneo.com
HEAVY INDUSTRY AT THE HEART OF ROBOTIC PROCESS AUTOMATION

Dr John Bates, CEO of Eggplant, explores the impact Robotic Process Automation is having across the industrial landscape.
In a world dominated by high-volume production, machine-led production line processes are commonplace in every large-scale operation. Factories around the world have grown exponentially, riding the wave of robots when it comes to repetitive tasks on the production floor. The cycle has become so sophisticated that it is now infiltrating back-office operations through a mechanism commonly referred to as Robotic Process Automation (RPA). This is hardware and/or software that helps augment or replace human workers in repetitive, mundane processes. It often takes over fairly rudimentary tasks such as data entry, completing them more quickly, more efficiently and with fewer errors. This is why RPA is so compelling.
RPA IN ACTION Renault, for example, is deploying RPA to support everything from design to the physical manufacturing process, while the same technology is also being used to automate and test systems like product lifecycle
management (PLM). This requires the creation of robotic users, which can automate standard calibration tasks and then measure whether the PLM system is performing correctly. This ensures smooth running, with cost and time savings, and can anticipate expensive slow-downs or production issues for large manufacturers. Another area is automated in-vehicle systems such as satnav or even elements of self-driving. This sort of control system can apply to anything - testing an aircraft, a tank or a drone - in many heavy industries, such as defence. These industries are incredibly mission-critical, so the automation and testing of these processes is a highly complex (and repetitive) business. But getting RPA right extends way beyond the realm of classified military operations. Underlying software systems in scenarios like a tube train have dedicated time windows for processing certain tasks - a so-called “hard real-time system” - with numerous fail-safes attached, which guarantee they have enough bandwidth to handle any task. For instance, a train won’t fail to stop at a station because
it is ‘too busy’ running the air conditioning. That’s never going to happen because it has been statistically analysed to make sure it has enough cycles to handle the peak of whatever tasks it has to do at any given time.
RPA BECOMING LIFE CRITICAL So, when automating with RPA, businesses need to assess how “life critical” a particular process is and, if necessary, anticipate every circumstance so that it won’t run out of bandwidth at a critical time. In a factory, there may be an incident on a production line where one out-of-control bot leads to tragedy. Not to belittle it by any means, but an RPA failure in this instance might have a limited impact, confined to people in its vicinity. Now consider the potential impact of a bot running a line of code or an algorithm linked to drug production, or to control systems in the military, which has far wider-reaching potential for harm to society at scale. If it is a life-critical system, businesses need to think about applying lessons from hard real-time systems so it is scaled to the maximum peak bandwidth. On another level entirely, if you take the ultimate in automated tasks, such as self-driving vehicles, then all the stakeholders involved need to consider the ethics of automating critical decisions. This is taking it to the extreme, of course, where it’s not just the automation of manual tasks, but actually being able to adapt those tasks with some intelligent thinking. This brings in a whole level of ethics in terms of how a machine decides in these circumstances, and raises a number of questions. Who will regulate RPA for heavy industry? When is the industry able to use AI, and who will be liable?
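That "maximum peak bandwidth" sizing has a classical formalisation in real-time scheduling theory. As a hedged sketch (the task set is invented, in the spirit of the train example above), the Liu and Layland bound guarantees that n periodic tasks meet their deadlines under rate-monotonic scheduling when total utilisation stays below n(2^(1/n) - 1):

```python
# Liu & Layland schedulability check for rate-monotonic scheduling:
# n periodic tasks are guaranteed to meet deadlines if total utilisation
# does not exceed n * (2**(1/n) - 1). The task set below is invented.
def rm_schedulable(tasks) -> bool:
    """tasks: iterable of (worst_case_exec_time, period), same time unit."""
    tasks = list(tasks)
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    print(f"U = {utilisation:.3f}, bound = {bound:.3f}")
    return utilisation <= bound

# e.g. braking check every 10 ms, doors every 50 ms, HVAC every 200 ms
tasks = [(2, 10), (10, 50), (40, 200)]
print("guaranteed schedulable:", rm_schedulable(tasks))
```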
THE GROWING ROLE OF IOT This is where we’re seeing major developments in RPA, particularly as products become more connected, which completely changes the commercials of heavy industries. IoT will present new revenue opportunities, with companies able to keep in touch with devices built for all manner of applications and services. Manufacturing will be less about the product sale and more about assets-as-a-service, and the revenue opportunity created by big data. We will see a significant increase in value-added services throughout the lifecycle of products in heavy industries, requiring a high level of automation. Consider how much intelligence the connected tyre might provide to any number of third parties, from the automotive sector to highway construction companies and billboard advertisers. It won’t come as a surprise to hear that some of the world’s most forward-thinking organisations are doing this already. NASA is automating various processes in the Orion space vehicle instead of needing to have astronauts on hand at all times. Orion is equipped with three main displays to monitor and control the spacecraft, and to ensure the software behind the glass displays operates without faults, rigorous automated testing is needed.
THE FINAL FRONTIER RPA is supporting NASA’s mission to take humans deeper into space, though it remains to be seen if it is the final frontier. What is for sure is that RPA is here and, if it makes anything like the impact automation made on the factory floor all those years ago, we’re in for quite a period of change - for the better. MV
Dr John Bates
ADVANCES IN GRIPPER TECHNOLOGY
The first autonomous, untethered and entirely soft robot was inspired by the octopus. The design, known as Octobot, includes no electronics, which enables it to be entirely compliant. The Harvard University researchers behind it hope their work in soft robotics inspires those working in the advanced manufacturing industry. They’re hopeful that it will pave the way for soft robots becoming more common in real-world tasks that require human interaction. Sophie Hand, UK country manager at automation parts supplier EU Automation, discusses how new gripper technology, including soft robotics, is making human-robot collaboration easier.
A gripper is a kind of end-of-arm tooling that is mounted onto a robot so that it can grip workpieces. Gripper technology consists of fingers that open and close to pick up and put down items. Both electric and pneumatically powered grippers are used in manufacturing for pick-and-place applications, handling hazardous materials, or for repetitive tasks to improve employee comfort. While grippers are a vital tool for many applications, there have historically been tasks that are more difficult for a robot gripper to complete - for example, holding more than one item at once, or adapting the applied force to suit the item being handled. The gripper may need to be gentle with soft fruits and vegetables, such as a head of lettuce, while being firm with a tin of tomatoes.
GENTLY DOES IT Typical grippers are made from rigid materials and are better suited to handling robust items. Food products, such as bakery items or fruit, have presented a huge challenge for gripper technology to handle without causing damage and have therefore required human pickers and packers. However, developments in soft robotics — the field devoted to making robots from compliant materials, often modelled on biological functions, like Harvard University’s Octobot — have changed the status quo.
Soft robotic grippers are now used across industry to handle delicate and fragile items, such as glass. They are still limited, however, by the weight of items — it is incredibly challenging to design a gripper that is gentle, compliant and strong. One recent development, achieved by researchers at the Massachusetts Institute of Technology, is a soft gripper with an origami design, capable of lifting objects more than 100 times its own weight. Incorporating additional sensors into robots, so that gripper technology can adjust its grip according to the object, as a human would, is another area of interest. The Polyskin Tactile Gripper, which is compatible with robots from Universal Robots and Kawasaki, includes tactile sensors in its two fingertips to achieve this.
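A hedged sketch of the control idea behind such tactile fingertips - not Polyskin's actual interface, and the callables are stand-ins for a real gripper's API: squeeze in small increments until the sensors stop reporting slip:

```python
# Toy slip-based grip controller: increase force in small steps until the
# tactile sensors no longer report slip. Callables are stand-ins for a
# real gripper's API; limits are illustrative.
def grip(set_force_n, slip_detected, start_n=1.0, step_n=0.5,
         max_n=20.0) -> float:
    """Returns the force (N) at which the object is held without slip."""
    force = start_n
    set_force_n(force)
    while slip_detected() and force < max_n:
        force = min(force + step_n, max_n)   # squeeze a little harder
        set_force_n(force)
    return force

# toy usage: the object stops slipping once force reaches 3 N
state = {"force": 0.0}
held = grip(set_force_n=lambda f: state.update(force=f),
            slip_detected=lambda: state["force"] < 3.0)
print(f"holding at {held:.1f} N")
```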
GETTING COLLABORATIVE Another challenging area for designers of gripper technology has been developing strong and safe grippers for collaborative robots, to work directly alongside humans. This growing market is predicted to be worth $12,303 million by 2025 according to Markets and Markets, with a whopping 50.31 per cent compound annual growth rate between 2017 and 2025.
To improve safety, in 2016 the International Organization for Standardization introduced the TS 15066 technical specification, adding safety requirements for collaborative robots. The new rules require that grippers should not have sharp corners or pinch points, and they limit the gripping force to 140 Newtons to prevent injury if the technology were to grip a person rather than the object it is meant to be handling. “End users want a collaborative robot application. You can’t make that if only the robot is collaborative,” explains Lasse Kieffer, CEO and co-founder of innovative gripper company Purple Robotics. “Everything in the system needs to be collaborative, including the gripper. Until now, everyone’s been focused solely on the robot being collaborative. This is the trend in the market, but ISO standardization is also going from looking at features on the robots to the grippers because they might come into contact with the people.” The 140 Newton limit also means that the maximum payload is only about 1.5 kilograms, which limits collaborative robots to tasks such as assembly and handling. Any heavy lifting must be done by a traditional industrial robot, separated from human employees.
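One way to see why a 140 Newton cap translates to roughly a 1.5 kilogram payload is the standard friction-grip sizing formula, F = m(g + a)S/u, where u is the friction coefficient and S a safety factor. The coefficient and factor below are illustrative assumptions of ours, not values from the specification:

```python
# Friction-grip sizing: force needed to hold a part against gravity plus
# acceleration, with a safety factor. mu and safety are illustrative
# choices; the TS 15066 figure of 140 N is from the article.
G = 9.81  # m/s^2

def required_grip_force_n(mass_kg: float, accel_ms2: float = 5.0,
                          mu: float = 0.4, safety: float = 2.0) -> float:
    return mass_kg * (G + accel_ms2) * safety / mu

TS15066_LIMIT_N = 140.0
for m in (0.5, 1.0, 1.5, 2.0):
    f = required_grip_force_n(m)
    ok = "within" if f <= TS15066_LIMIT_N else "exceeds"
    print(f"{m:.1f} kg needs ~{f:.0f} N -> {ok} the 140 N limit")
```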
For a collaborative robot to carry heavier payloads safely at high forces, the industry must focus on improving the senses of the robot, for example by adding vision systems or other additional capabilities. Partnerships between gripper companies and collaborative robot manufacturers, such as Universal Robots, are making it easier for manufacturers to purchase an entirely collaborative solution for their application. Plug-and-play technology, which manufacturers can set up and program without a system integrator, is also growing in popularity. Manufacturers can choose their own automation equipment suppliers, whether that means purchasing entire robot systems from a robot manufacturer, or individual components from businesses like EU Automation that supply new and refurbished industrial parts. Not everyone has access to soft octopus robots in their factories. However, developments in soft robots and in gripper technology for collaborative robots are making robots suitable for a broader range of applications, including working closely alongside humans. For more information on developments in automation equipment, visit: www.euautomation.com. MV
Sophie Hand UK country manager at automation parts supplier EU Automation
47
THE AUTOMATED CHAMELEON TONGUE Drawing on the insect-eating chameleon, the adaptive shape gripper DHEF by Festo can pick up anything.
Gripping workpieces just as a chameleon’s tongue grips insects – that is the operating principle of the adaptive shape gripper DHEF from Festo. This unusual gripper can pick up, gather and set back down objects of many different shapes without the need for manual adjustment. The silicone cap of the adaptive shape gripper DHEF can fold itself over and grip objects of virtually any shape, creating a firm, form-fitting hold. The elastic silicone enables the gripper to adapt precisely to a wide range of geometries. When combined with a pneumatic drive, the adaptive shape gripper requires little energy for a secure grip.
FORMLESS, ROUND, SENSITIVE Unlike the mechanical grippers currently available on the market that can only grip specific components, the adaptive shape gripper is extremely flexible. It can even manage components with freely formed shapes and round geometries. The absence of sharp edges makes it ideal for gripping sensitive objects such as air nozzles or trim strips. In principle, the gripper can pick up several parts in one movement, for example nuts from a bowl.
This means that the bionic gripper can be used to handle small parts in classic machine building, in the electronics or automotive industries, in supply units for packaging installations, for human-robot interaction during assembly tasks, or for prosthetic extensions in medical technology.
PRACTICAL PRODUCT CHARACTERISTICS
The gripper has an elastic silicone membrane that is flexible and pliable. Once it is supplied with compressed air and fitted with the standardised robot interface with integrated air connections, it is ready for use as a practical automation component. A standard sensor slot for position sensing and a bayonet lock for easy replacement of the cap are further useful features.
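On the control side, a pneumatic gripper of this kind needs little more than a valve output and the position-sensor input mentioned above: pressurise to grip, vent to release, and confirm the state via the sensor. The sketch below is a hypothetical illustration of that grip-and-confirm cycle; the I/O interface, signal names and timeout are invented for the example and are not taken from Festo documentation.

```python
import time


class PneumaticGripper:
    """Hypothetical wrapper around a valve output and a position-sensor
    input, illustrating a grip/release cycle for a pneumatic shape gripper."""

    def __init__(self, io):
        self.io = io  # assumed digital I/O interface, e.g. a PLC or GPIO shim

    def grip(self, timeout_s: float = 1.0) -> bool:
        """Pressurise the gripper, then wait for the position sensor to
        confirm the cap has closed around the part."""
        self.io.set_output("valve_pressurise", True)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if self.io.get_input("position_sensor"):
                return True  # part detected in grip
            time.sleep(0.01)
        self.io.set_output("valve_pressurise", False)
        return False  # grip not confirmed; vented again for safety

    def release(self) -> None:
        """Vent the gripper so the silicone cap relaxes and frees the part."""
        self.io.set_output("valve_pressurise", False)
```

In a real cell the same pattern would sit behind whatever fieldbus or PLC interface actually drives the valve and reads the sensor slot.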
NATURE AS A MODEL
The unique combination of force and form in the chameleon’s tongue can be observed when it hunts insects. Once the chameleon has its prey in its sights, its tongue shoots out like a rubber band. Just before the tip of the tongue reaches the insect, it retracts in the middle whilst the edges continue to move forwards. This allows the tongue to adapt to the shape and size of the prey and enclose it firmly. The prey sticks to the tongue and is pulled in as though caught on a fishing line. The Festo Bionic Learning Network, working with researchers from the University of Oslo, used these observations to develop a prototype named the “FlexShapeGripper”. MV
LEADERS MUST HAVE EMOTIONAL AND TECHNOLOGICAL INTELLIGENCE
A second insight from the thought-provoking ‘World Class Leader Report’ highlights the crucial ‘soft skills’ leaders need to succeed.
If business leaders in the manufacturing and industrial sectors are unable to tap into their emotional intelligence and communicate better, there will be no place for them in the board room.

That’s one of the key findings from the ‘World Class Leader Report’, produced by executive search and recruitment specialist TS Grale. It also reveals that future leaders must be adept at networking, able to embrace technological change, and value corporate social responsibility.

Jason Saunders, co-founder and director at TS Grale, said: “It’s clear from our research that the old models of leadership skills and roles are no longer fit for purpose. Over the next decade we will undoubtedly see huge shifts in the personality traits and leadership skills required to succeed, and this report shines a light on what they are.

“Good leaders need to be self-aware and have excellent clarity of thinking in order to understand themselves and others. They must have strong emotional intelligence, a trait often associated with women, and also genuinely live and breathe all of the core values of genuine corporate social responsibility.

“Importantly, they must listen to their teams and peers from across all industry sectors, in order to maximise opportunities. Good listeners who network extensively will be able to grasp new ideas quickly, and great communicators can implement visions effectively.”

The findings from the ‘World Class Leader Report’ are based on interviews with C-suite executives and directors across private, listed and private equity backed businesses, with turnovers ranging from less than £20m to more than £1bn.

Saunders added: “The emotional side of leadership will continue to grow in importance – some of the best ideas will evolve from the bottom up – and it takes a leader that can listen, place value on new ideas, evaluate and accept elements of risk, whilst using their own experience to commercialise them.”

Technology, and the ability to embrace it, is another vital quality that the leaders of the future must have, according to the report.

Saunders explained: “World class leaders have to relish what the younger, tech-savvy generation bring to a business. It’s not something that can simply be delegated; the best future leaders must be open minded to new technological solutions, but, crucially, understand them too.

“An appreciation of what the next generation can bring to a business - both through their ideas and technological advances - is critical for future leaders.

“Great leaders need to appreciate that they don’t know it all. They must be hungry to keep learning and emotionally intelligent enough to understand that other people will only share their ideas if there is a real open platform for them to do that within a business – and this is something that has to come from the top.

“The best leaders must engage with all their employees, industry associates and professionals from the wider business environment, and really listen to them, if they want to see long-term value and commercial success.”

The ‘World Class Leader Report’ can be downloaded from TS Grale’s website at https://tsgrale.com/leaderreport MV