In the fast lane - Enhancing automobile performance | MVPro 22 | August 2020


IN THE FAST LANE: ENHANCING AUTOMOBILE PERFORMANCE

BIN PICKING: AN AUTOMATED SOLUTION

FAST FOCUS: LIQUID LENS TECHNOLOGY

ROBOTICS: FIVE YEARS OF YUMI

ISSUE 22 - AUGUST 2020

mvpromedia.eu MACHINE VISION & AUTOMATION



MVPRO TEAM Lee McLaughlan Editor-in-Chief lee.mclaughlan@mvpromedia.eu

Alex Sullivan Publishing Director alex.sullivan@mvpromedia.eu

Cally Bennett Group Business Manager cally.bennett@mvpromedia.eu

Spencer Freitas Campaign Delivery spencer.freitas@cliftonmedialab.com

Becky Oliver Graphic Designer

Contributors: Lotte De Kam, Przemyslaw Falek, Hatim Haloul, Jools Hudson, Mike G John, Adrian Kratky, Alberto Moel, Andrea Pufflerova, Andie Zhang

Visit our website for daily updates

www.mvpromedia.eu


CONTENTS

4 EDITOR'S WELCOME - Loss of Vision a huge blow
6 INDUSTRY NEWS - Who is making the headlines?
8 PRODUCT NEWS - What's new on the market?
10 MIKROTRON - Hit the fast lane
12 SONY - Polarised cameras leave no hiding place
15 ALYSIUM - Real life - it's nothing like the movies
16 CAELESTE - Imaging technology bridges space and medicine
18 IDS - New dimensions in machine vision
19 DISTRIBUTORS - Adding Value
20 SEMPRE - A different approach to metrology
23 EURESYS - 3D for 2D vision engineers
24 LIQUID LENSES - Gardasoft's fast-focus solution
26 XILINX - The power of AI in healthcare
27 MIDOPT - Not just a machine vision filter
28 SVS VISTEK - Avoid shading with CMOS cameras
30 ADVANCED ILLUMINATION - Semiconductor chip inspection
31 BAUMER - Software efficient camera integration
32 EXPRESSWORKS - Could coronavirus make VR mainstream?
34 COHERENT - Solving 5G antennae micromachining
36 GARDASOFT - What's the point of strobe controllers?
38 PHOTONEO - Smart bin picking
40 EU AUTOMATION - Central Europe: The next Industry 4.0 leader?
42 COMAU - Exoskeleton delivers weighty benefits
44 VEO ROBOTICS - FreeMove enables manufacturing flexibility
46 RARUK AUTOMATION - Autonomous mobile robots
48 INFLUENCER Q&A - ABB's Andie Zhang on five years of YuMi

MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB. Tel: +44 (0)117 3258328

© 2020. All rights reserved

‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.



LOSS OF VISION IS A HUGE BLOW

In the course of pulling together this latest issue of MV Pro, news came through that the industry's biggest event of the year – Vision – was being cancelled. Yet another commercial victim of coronavirus. There were just too many complications to overcome to make machine vision's biennial showcase a viable proposition.

For an event of this magnitude, which brings the industry together from across the globe, pulling the plug was no easy decision and is a huge blow for the sector.

What will the impact be for those businesses in the machine vision sector who purposely pinpoint Vision to launch their 'headline grabbing' innovations? Two years ago, Inspekto did just that. Vision 2018 was their moment in the sun – and the ideal launchpad for their S70. The rest, as they say, is history. How many innovations and new products has the industry now lost sight of as a result of Vision having to close its doors?

Businesses are having to re-evaluate their next steps, while event organisers are also assessing how they can overcome these challenges. When the next Vision will be held is unclear. The organisers are, wisely, consulting exhibitors and would-be visitors before taking the next step, while looking at how they can help and deliver for the machine vision industry.

Across the pond, the Association for Advancing Automation has responded to the ongoing impact of the coronavirus by announcing FIVE virtual events over the autumn (see page six for details). Following the success of the AIA's week-long Vision Week, it is no surprise to see their sister organisation use a similar format. For now, we must all accept that the 'new normal' remains digital.

Talking of normal, MV Pro brings you the usual feast of articles and insight. Mikrotron and Sony both share how they are impacting the automotive sector. Gardasoft look at liquid lenses and Photoneo explore automated bin picking. Robotics and automation are well covered in this issue, with a case study into the benefits of exoskeletons, a look at how the Veo FreeMove is improving collaboration between humans and robots, and a celebration of five years of ABB's YuMi.

Enjoy the read and enjoy the summer.

Lee McLaughlan
Editor
lee.mclaughlan@mvpromedia.eu

Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB
MVPro: B2B digital platform and print magazine for the global machine vision industry
www.mvpromedia.eu




INDUSTRY NEWS

A3 LAUNCHES FIVE NEW VIRTUAL EVENTS

The Association for Advancing Automation (A3) has announced five new virtual events taking place in the autumn of 2020. A3 is transitioning the following 2020 in-person conferences to virtual events as a result of the COVID-19 pandemic, as well as adding two new events:

• RIA Robotics Week: September 8-11 (new)
• International Robot Safety Conference: October 6-8
• Autonomous Mobile Robot Conference: October 26-27
• AI & Smart Automation Conference: October 28-29 (new)
• MCMA TechCon: November 9-10

The in-person Collaborative Robots, Advanced Vision & AI Conference is postponed until autumn 2021. However, A3's Robotic Grinding and Finishing Conference, slated for December 2-3 in St. Paul, Minnesota, will be held in-person if possible, and online if not.

"As the adoption of automation is expected to increase dramatically coming out of the COVID-19 pandemic, it's more important than ever that A3 provide high-quality education on how to successfully apply robotics, AI, machine vision, motion control, and related automation technologies," said Jeff Burnstein, A3 president. "We're very excited to reach thousands of people virtually this year while planning our return to live events such as Automate 2021 next year."

Agendas, keynote speakers, and specific conference session topics will be released for each event within the coming weeks.

RIA Robotics Week
RIA Robotics Week, September 8-11, will be the signature virtual autumn event for the robotics industry. Join A3's 20+ expert speakers, four high-level keynote presenters, and thousands of attendees for this free four-day virtual conference and exhibitor showcase.

International Robot Safety Conference
The Virtual International Robot Safety Conference (IRSC), October 6-8, will examine key issues in robot safety and provide an in-depth overview of current industry standards related to industrial and collaborative robot systems.

Autonomous Mobile Robot Conference
At the AMR Conference, October 26-27, attendees will learn how to deploy AMRs in dynamic environments across multiple sectors while getting a full briefing on the new industrial mobile robot safety standard.

AI & Smart Automation Conference
The AI & Smart Automation Conference, October 28-29, will help unlock the power of AI by featuring discussions on data strategy, advances in AI robotics and machine vision, and AI-powered optimization and prediction.

MCMA Technical Conference
MCMA TechCon, November 9-10, will provide the latest updates on motion control, motors, and related automation technologies. MV





OEM AUTOMATIC BECOMES UK AND IRELAND DISTRIBUTOR FOR BASLER

Basler has appointed OEM Automatic as its new official distribution partner for the United Kingdom and Ireland.

The UK and Ireland are important business regions for Basler, and the company is confident it will strengthen its presence in these markets with the new partnership.

John Jennings, chief commercial officer at Basler, said: "We are confident that with OEM Automatic, we can further leverage our coverage in the UK market. We strongly feel that the combination of a Basler sales organisation, along with OEM as a distributor, provides our customers the best support and most cost-effective solutions."

Richard Armstrong, managing director of OEM Automatic, said: "This is an important and strategic addition to our existing range of industrial automation products. We already carry approximately 8,000 product lines.

"We will be adding substantial Basler stock to this to support our customers. The OEM group has worked successfully with Basler for over 10 years and we look forward to extending this success to the UK and Ireland market." MV

EMVA LAUNCHES 'VISION KNOWLEDGE CENTER'

The vision industry has been given an online hub of expertise after the EMVA announced the launch of its Vision Knowledge Center, intended as the meeting point for relevant technical and environmental topics in the vision industry. The new platform, accessible on the EMVA website, benefits from the enormous amount of valuable contributions collected over the years at the numerous EMVA events.

The EMVA Vision Knowledge Center provides access to high-quality presentations on technology, applications and vision ecosystems, webinars, videos and technical papers which are of great interest to the vision tech network. The hub features a wealth of information across six distinct themes:

• Vision Technology - Benefit from expert knowledge
• Expert Panels - The best of EMVA panel discussions
• Markets & Applications - Vision solutions for specific sectors and industrial applications
• Vision Ecosystems - Knowledge on economy and social life
• Education - Become a certified expert of vision standards
• Young Professionals - Innovative work by students and young professionals

Discover the hub at https://www.emva.org/vision-insights/vision-knowledge/ MV



PRODUCT NEWS

SICK DEVELOPS SENSORAPPS FOR SOCIAL DISTANCING

SICK has responded to the need for distancing technologies to slow the spread of Covid-19 with the launch of two specifically designed products. It has developed the PeopleCounter and DistanceGuard SensorApps, which help to ensure that people keep to the recommended distance apart in working and public spaces.


The SICK PeopleCounter is a SensorApp based on a machine learning algorithm running on SICK's MRS1000 3D LiDAR sensor. One or more sensors can be easily set up to count people at entry and exit points, or to allow users to control the number of people occupying a pre-defined area in real time. Because the system can reliably detect human contours, while ignoring other objects, customers or workers are counted accurately as they enter or leave buildings or other pre-defined spaces.

The PeopleCounter works by using a specially developed on-board algorithm to evaluate the point-cloud data generated by the MRS1000. Because the SensorApp can use data from the MRS1000's four 275° scanning layers to determine direction of movement, the number of people in a monitored area can be updated in real time. The system can identify more than one person in parallel, independent of their direction (in or out), within a range of up to 3.5 m. As it only sees the human outline, the SICK PeopleCounter is able to process data at high speed and is completely anonymous, with no need to detect or record any personal location or identification data.

A master/slave mode makes it possible to combine several LiDAR scanners to track larger areas with multiple entry and exit points, such as shopping centres, airports or stations. With eight multifunctional digital input/output connections on the device, the system can easily be set up to keep track of the number of people via mobile, PC and cloud-based systems, as well as integrating with wider controls or security systems where necessary.
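The counting step itself can be pictured with a minimal sketch. The code below is purely illustrative (it is not SICK's implementation, and the trajectory representation is an assumption): each tracked human contour is reduced to its positions along the axis through the doorway, and the occupancy count is updated whenever a trajectory crosses a virtual line, with the crossing direction deciding entry or exit.

```python
# Illustrative sketch of direction-aware people counting -- not SICK's
# implementation. Each tracked contour is reduced to its position along
# the axis perpendicular to the doorway, sampled once per scan.

ENTRY_LINE = 0.0  # virtual counting line, in metres

def update_occupancy(occupancy: int, trajectory: list) -> int:
    """Update the occupancy count from one tracked trajectory."""
    for before, after in zip(trajectory, trajectory[1:]):
        if before < ENTRY_LINE <= after:     # crossed the line inwards
            occupancy += 1
        elif after < ENTRY_LINE <= before:   # crossed the line outwards
            occupancy = max(0, occupancy - 1)
    return occupancy

count = update_occupancy(0, [-1.2, -0.4, 0.3, 1.0])  # one person enters
count = update_occupancy(count, [0.8, 0.2, -0.5])    # one person leaves
print(count)  # -> 0
```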


SICK's DistanceGuard SensorApp works on the SICK TiM 2D LiDAR sensor to monitor areas where the recommended distance between people must be upheld, e.g. in a queue, and provide an alert if they are too close. An alert, in the form of a light, audible alarm or visual signal, is triggered as soon as the distance between two people falls short of the minimum. The on-device settings can be used to input or change the required distance at any time. The presence of people can be detected and evaluated thanks to the SICK TiM's 270° scanning capability and range of up to 25 metres. MV


HIGH SPEED 3D SCANNING AND INSPECTION FOR SHINY AND CHALLENGING SURFACES

Gocator® 2530 3D Smart Sensor Blue Laser

Gocator® 2530 provides 3D blue laser profiling, measurement, and inspection at speeds up to 10 kHz. Easily integrate this all-in-one smart sensor for robust inline inspection of shiny materials (e.g., batteries, consumer electronics), low contrast targets (e.g., tire sidewall), and for general factory automation applications.

Discover Gocator 2530: visit www.lmi3D.com/2530



TELEDYNE'S NEW SWIR LINE SCAN CAMERA SEES BEYOND THE VISIBLE

Teledyne DALSA has announced its first shortwave infrared (SWIR) line scan camera for machine vision. The new Linea SWIR features a cutting-edge InGaAs sensor in a compact package suitable for a variety of applications including food and packaged goods inspection, recycling, mineral sorting and solar and silicon wafer inspection. With exceptional responsivity and low noise, this newest Linea SWIR line scan camera allows customers to see their products in a new light.

Linea SWIR is a 1k resolution camera with highly responsive 12.5 µm pixels, 40 kHz line rate, cycling mode, programmable I/Os, power over Ethernet (PoE), precision time protocol (PTP), and more.

"The new Linea SWIR will help customers greatly improve the quality of their output," said Mike Grodzki, product manager for the new Linea SWIR.

"With the ability to differentiate materials and detect moisture, Linea SWIR will allow customers to more easily identify foreign contaminants in their product stream.

"And its capacity to image beyond the visible spectrum makes the camera ideal for applications such as food sorting, solar wafer inspection, and consumer packaged goods inspection."

Key features:
• High responsivity, low noise 1k sensor
• GigE interface
• HDR and cycling modes
• High dynamic range
• Programmable I/Os
• Selectable 8 or 12 bit output
• Flat field correction
• ROI support

For more information about the Linea SWIR visit www.teledynedalsa.com MV

MATROX IMAGING LASERS IN WITH 3D PROFILE SENSORS

Matrox® Imaging has taken the bold step of launching a whole new product line: the Matrox AltiZ, a series of integrated high-fidelity 3D profile sensors featuring a dual-camera, single-laser design.

These fully integrated 3D profile sensors boast an optimised design that greatly lessens scanning gaps. Simultaneous viewing of the laser line by the Matrox AltiZ's two opposed optical sensors reduces optical occlusions - frequently encountered at critical surface junctures - caused by the laser line being obstructed from the view of a single image sensor because of a surface's orientation. The Matrox AltiZ's dual optical-sensor design and data-fusion capability offer higher 3D reproduction fidelity through their ability to combat occlusion and outlier data. The Matrox AltiZ also delivers incredibly high levels of control over spurious data, providing more robust reproductions than achievable with other models on the market.

The integrated image sensors of the Matrox AltiZ can work in either synchronised or alternate fashion. In synchronised mode, this 3D profile sensor achieves maximum reproduction quality and robustness; configured in alternate mode, the Matrox AltiZ's scanning rate is almost twice that of the synchronised configuration while still providing key defence against occlusion.

Regardless of the configuration chosen, the Matrox AltiZ is ideally suited for inspection tasks, delivering two powerful cameras within a single compact enclosure, with an IP67 rating ensuring its performance in tight spaces and harsh industrial environments. The Matrox AltiZ profile sensors output 3D data over a standard GigE Vision® interface with standard GenICam™ extensions, and interoperate seamlessly with Matrox Imaging branded and/or suitable third-party machine vision software. Matrox AltiZ will be officially released in Q3 2020. MV
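The benefit of the dual-sensor arrangement can be sketched in a few lines of code. This is a generic illustration of dual-profile fusion under stated assumptions (per-pixel height profiles, with NaN marking occlusion), not Matrox's actual data-fusion algorithm:

```python
import numpy as np

def fuse_profiles(z_a, z_b, tol=0.05):
    """Fuse two laser-line height profiles taken from opposed viewpoints.

    z_a, z_b: heights in mm, NaN where that sensor's view was occluded.
    tol: maximum disagreement (mm) before a point is treated as spurious.
    Illustrative only -- not Matrox's actual algorithm.
    """
    z_a, z_b = np.asarray(z_a, float), np.asarray(z_b, float)
    fused = np.full_like(z_a, np.nan)

    only_a = ~np.isnan(z_a) & np.isnan(z_b)   # B occluded: take A
    only_b = np.isnan(z_a) & ~np.isnan(z_b)   # A occluded: take B
    fused[only_a] = z_a[only_a]
    fused[only_b] = z_b[only_b]

    # Where both sensors see the surface, cross-check and average;
    # large disagreements are left as NaN (outlier rejection).
    both = ~np.isnan(z_a) & ~np.isnan(z_b)
    agree = both & (np.abs(z_a - z_b) <= tol)
    fused[agree] = 0.5 * (z_a[agree] + z_b[agree])
    return fused

print(fuse_profiles([1.00, np.nan, 2.0, 3.0],
                    [1.02, 1.50, np.nan, 9.0]))  # -> [1.01 1.5 2. nan]
```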



MIKROTRON: HIT THE FAST LANE

Capturing images at driving speeds of more than 150 mph, Mikrotron's WheelWatch aids automobile performance, as this case study examines.

By incorporating high-speed EoSens® GE GigE Vision® cameras from Mikrotron into its WheelWatch™ optical measurement system, AICON 3D Systems has set a new standard for non-contact monitoring of wheel motion in vehicle dynamics testing.

As part of this process, careful testing examines exactly how wheels behave in the event of extreme manoeuvres. How do they handle bumps and wet roads? Do they remain stable during evasive manoeuvres or when travelling at high speed? Do the wheel wells still provide enough room even when tackling sharp bends? These are the types of dynamic parameters that the WheelWatch captures for analysis. During testing, Mikrotron EoSens® GE cameras are exposed to extreme forces like sudden accelerations, sharp steering manoeuvres, and abrupt braking that would damage a standard camera. Equally challenging, the camera is expected to function during lengthy driving tests lasting several hours. Mikrotron EoSens® cameras - featuring a fanless, shock and vibration-proof design within a robust metal housing - proved up to the task, achieving long-term stability over hundreds of tests.

With a maximum speed of almost 500 frames per second, Mikrotron EoSens GE cameras feature a 1.3 megapixel CMOS sensor that captures 1280 x 1024-pixel images with 90 dB dynamic range. As a result, the WheelWatch system is able to capture sharp, high-contrast images for critical analysis, even at driving speeds of up to 155 mph, including the accurate measurement of wheel parameters such as track, camber, inclination, spring travel, clearance and steering angle.

The WheelWatch is used at the prototype stage in automobile development. Before a car gets anywhere close to serial production, prototypes have already covered millions of miles during testing.


“We found the Mikrotron EoSens® camera operates reliably, despite sometimes facing tough operating conditions,” confirms Robert Godding, managing director of AICON 3D Systems. “The camera always provided high-quality image data.”

SIMPLE INSTALLATION

System set-up requires one Mikrotron EoSens® GE camera be attached per wheel to enable it to fully capture the wheel and mudguard. Specially coded measurement targets are placed on the mudguard to identify the vehicle coordinate system. An adapter is also mounted on the wheel, featuring a unique pattern of dots.



Once installed, the Mikrotron EoSens® GE camera acquires images of the wheel while simultaneously aligning itself with the surrounding mudguard, so it does not have to be kept stable. Vibrations and bumps do not have any effect on the data measured. The system recalculates its position continuously, achieving positional accuracy of approx. ± 0.1 mm and angular accuracy of approx. ± 0.015°. Consequently, the driving characteristics of the vehicle are not influenced by the measurement equipment, nor is steering motion restricted in any way.

The Mikrotron EoSens® GE triggering function allows multiple cameras to be fully synchronised for simultaneous monitoring of several wheels or axles. Special optical targets applied to the prototype's fender define the vehicle coordinate system, while a lightweight carbon fibre wheel adapter is fixed to the wheel. The driver manually triggers the beginning and end of a measurement session. No other interaction with WheelWatch is required. Up to four cameras can be synchronised with each other, as well as with other measuring sensors. For example, movements in the engine block can also be detected using additional cameras. Getting into tight spots within the engine block is made possible by the Mikrotron camera housing measuring only 63 x 63 x 47 mm.

EVALUATING IMAGES IN THE CAMERA


WheelWatch measurement images are assessed in the Mikrotron camera sensor using an FPGA image analysis processor before they are sent to a laptop inside the car via a GigE interface. Wheel target positions and target trajectories are available shortly after the image acquisition. In addition, WheelWatch computes all six degrees of freedom of the wheel in the vehicle coordinate system.
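The underlying measurement step - recovering a rigid body's six degrees of freedom from imaged markers whose geometry is known - can be sketched with standard tools. The snippet below is a generic illustration using OpenCV's perspective-n-point solver, not AICON's proprietary algorithm, and every numeric value is invented for the example:

```python
import numpy as np
import cv2  # OpenCV, standing in for the photogrammetry involved

# Known dot positions on the wheel adapter in its own frame (mm), and
# their detected centres in the camera image (pixels) -- invented values.
object_points = np.array([[0, 0, 0], [60, 0, 0],
                          [60, 60, 0], [0, 60, 0]], dtype=np.float64)
image_points = np.array([[512, 384], [640, 380],
                         [644, 508], [516, 512]], dtype=np.float64)

# Camera intrinsics from a prior calibration; distortion ignored here.
K = np.array([[1400, 0, 640],
              [0, 1400, 512],
              [0, 0, 1]], dtype=np.float64)

# Solving the perspective-n-point problem gives the full 6-DoF pose
# (3 rotations + 3 translations) of the adapter relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from axis-angle
    print("adapter position (mm):", tvec.ravel())
    print("adapter orientation:\n", R)
```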

Godding considers the integrated FPGA technology to be a key camera performance feature. He said: "Being able to use the built-in FPGA for our own image processing purposes was one of the reasons we opted for a Mikrotron camera."

The WheelWatch system can be deployed both on a test station and during drives on a test track. However, it is also ideal for other movement analyses such as:

• Vibration analysis of components

• Door slam testing
• Examining the opening and closing behaviour of doors, covers, windows

• Robot rail measurement
• Machine control
• Error analysis in the production line (e.g. welding processes)
• Component behaviour in wind tunnels or climatic chambers
• Collision analyses
• Material testing, structural analysis
• 6D positioning and alignment of individual points and rigid bodies

Hexagon AB, a leading global provider of information technologies that drive productivity and quality across geospatial and industrial enterprise applications, acquired AICON 3D Systems in 2016. MV



NO HIDING PLACE

Sony demonstrates the case for polarised cameras in enhancing road safety and preventing driver infringements by giving law enforcers greater visual capability.

Polarised cameras enhance image quality and usability in the prosecution of dangerous driving offences. The data is now glaringly clear: this significantly improves road safety.

According to data from the German state of Baden-Württemberg, two thirds of driving violations captured by its automated camera systems stall due to an inability to identify the driver. While the technical capabilities of cameras have improved significantly in recent years, proving who is driving is labour intensive and therefore costly. ANPR cameras can now automatically capture the number plate, the colour, the make and the type of vehicle and then cross-reference this against vehicle and criminal databases. But seeing past the windscreen has presented a challenge.

The limitation, however, is not caused by low resolution or pixel counts: for many ITS applications, modern digital cameras produce many times more pixels than are needed to see a face clearly (as you can see in the image below). Instead, the issue is typically glare from the sun reflecting off the windscreen, in effect preventing accurate identification.

As the paper presented by a Dutch team at 2013's Beijing Four Continents Conference on road safety put it: "[D]ue to e.g. lighting conditions, glare, coatings on car windows or intentional obstruction, the recognisability of the face of the driver on an evidence photo may be difficult. This reduces the prosecuting rate of the registered violations. Processing rejections if the owner was not driving also adds to the operational workload."

The paper also highlighted that analysing the photo is both time-consuming and costly, meaning authorities would spend more to police it than they could make in fines. One approach taken in the Australian state of New South Wales, when it rolled out an AI-based system to identify mobile phone use, was to put the onus of proof on the driver. However, this has been criticised by The Law Society of NSW as setting a "dangerous precedent" of being guilty until proved innocent, and predictions suggest that the number of challenges would overwhelm the local courts.

And with fewer resources available to the police (both following the 2008 financial crisis and what will follow the expected 2020 Covid-19 global recession), there is a strong need to automate the policing of motoring offences - not just for speeding and jumping red lights, but for dangerous behaviours such as handheld mobile phone use and failure to use a seatbelt, as well as other illegal actions, from the misuse of multiple-occupancy lanes to smoking in vehicles with minors.

POLARISED CAMERAS NEGATE GLARE

Ambient lighting limits the ability to reliably see past the windscreen and prove who is driving.


In 2018, Sony launched a new class of imaging sensor that brought polarisation onto the chip itself: the IMX250MZR. This is a global-shutter CMOS sensor which uses monochrome quad polarised filters to capture polarised light in four pre-set planes - 45°, 90°, 135° and 180° - with pixels arranged in a 2x2 calculation unit (see image below). Using this approach, ITS vision cameras are able to completely negate glare. The sensor is already being built into camera modules, with Sony officially launching its first, the XCG-CP510, at the 2018 Vision Show in Stuttgart.
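What the 2x2 calculation unit enables can be shown with the standard polarimetry maths. The sketch below is illustrative only - the macro-pixel layout is an assumption, and camera SDKs ship optimised equivalents - but the principle is as described: the four analyser angles yield the Stokes parameters of the incoming light, and the per-pixel minimum across the four planes suppresses strongly polarised windscreen glare:

```python
import numpy as np

def polarisation_products(raw):
    """Derive a glare-suppressed image and the degree of linear
    polarisation from a raw polarised-sensor frame.

    raw: 2D array where each 2x2 'calculation unit' holds the four
    analyser orientations. The layout used here is an assumption --
    consult the sensor documentation for the real arrangement.
    """
    # De-interleave the macro-pixels into four half-resolution planes
    # (a 180-degree analyser passes the same plane of light as 0 degrees).
    i000 = raw[0::2, 0::2].astype(float)   # 0/180 degrees
    i045 = raw[0::2, 1::2].astype(float)   # 45 degrees
    i090 = raw[1::2, 1::2].astype(float)   # 90 degrees
    i135 = raw[1::2, 0::2].astype(float)   # 135 degrees

    # Stokes parameters of partially linearly polarised light.
    s0 = 0.5 * (i000 + i045 + i090 + i135)          # total intensity
    s1 = i000 - i090
    s2 = i045 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)

    # Glare reflected off a windscreen is strongly polarised, so the
    # per-pixel minimum over the analyser angles largely removes it.
    deglared = np.minimum(np.minimum(i000, i045), np.minimum(i090, i135))
    return deglared, dolp
```

Each plane uses a quarter of the sensor's pixels, which is why the XCG-CP510 described below delivers 5 MP raw but 1.3 MP per plane of light.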

TRANSPORT FOR NSW SPEEDING APP DEVELOPMENT

Of course, developing applications that use sensor-level polarisation filters adds a level of complexity to any system build – development time for a typical application being between six and 24 months (depending on the application being developed, and the team doing it). To counteract this, Sony has launched the first (and to date, only) polarised camera SDK – with glare elimination in ITS applications among the first applications added to its reference library and suite of highly optimised algorithms. With the SDK, development can be undertaken more quickly and simply, with lead times for a typical app going from six to 24 months down to six to 12 weeks, based on the application and the team.

This camera delivers 2448x2048 (5 MP raw / 1.3 MP per plane of light) images and outputs them at 23 fps over the GigE Vision interface (v1.2 and v2.0 supported). This not only fits well with ITS applications (the ITS sector has almost exclusively adopted the GigE standard in all territories bar some countries in Asia), it also allows synchronisation via the IEEE 1588 precision time protocol, with Sony GigE cameras able to act as both the timing master and slave in a multi-camera system. This multi-camera approach allows both a high-res colour image and a polarised image to be captured, with triggers simultaneously activated to within a fraction of a millisecond.

If we take a look at Australia's AI-based system to identify mobile phone use, this takes two images to try to mitigate glare: one from above, which is less susceptible to glare but in which the face is not visible, and one from the front, which can show who is driving, but only if glare doesn't block it. The addition of a polarised camera such as the XCG-CP510 solves this issue.

INCREASED ACCURACY LEADS TO FEWER INFRINGEMENTS

According to US government statistics, 3,700 people are killed per day in vehicle crashes around the world. Policing has been proven to affect driver behaviour, with one study from California Polytechnic State University, which examined red-light jumping on Qatari roads, recording around a 60 per cent drop in violations when a camera was present. And while the paper noted Qatar was likely an outlier due to its crippling fines, it noted fear of capture was a key factor in the behaviour change.

But it's not just about identifying when drivers have been caught speeding; it's about identifying dangerous behaviour taking place inside the vehicle. World Health Organisation data states the use of mobile phones while driving brings a four-fold increase in the risk of being involved in a crash. A separate 2018 literature review of 4,907 articles found studies (on average) "underestimate the actual prevalence of road traffic injuries related to mobile device use". It also found one (admittedly outlying) paper that attributed 44.7 per cent of road traffic injuries and fatalities to mobile-phone-based distraction.

More than 30 countries have made it illegal to use a handheld device while driving, and some have enacted significant punishments: Oman, for example, can give 10 days in jail and a fine of 300 OMR (c.$780). But if the risk of getting caught is low - due to fewer police resources being available and traditional camera systems at the mercy of glare - this behaviour will continue. MV

New South Wales has launched an automatic detection system for mobile phone use. During a six-month trial it caught 100,000 drivers illegally using a mobile phone while driving. Source: Transport for NSW



Effective? Here’s how. Perfect images at high speed

Precise inspection of fast processes. No limits when using the LXT cameras integrating latest Sony® Pregius™ sensors and 10 GigE interface. You benefit from high resolution, excellent image quality, high bandwidth and cost-efficient integration.

Learn more at: www.baumer.com/cameras/LXT


SPONSORED

REAL LIFE: IT'S NOTHING LIKE THE MOVIES!

I love the technical simplicity in movies. The protagonist makes some herculean effort to reseat a component, insert the drive or plug in a cable, and the whole system springs to life and saves the day. I think that is cool. But in the real world, life is more complicated, and details matter. A lot.

At Alysium we make cable assemblies for various demanding applications and want to ensure that the Hollywood ending is part of your everyday. To achieve this, we need to ensure that the challenges encountered in demanding applications resolve themselves to the same "it just works" outcome as in the movies. The method is to rely on non-subjective, durable and documented engineering and testing strategies, which tie in with agreed standards, rather than a simple assertion of good product quality.

A highlight underscoring this is the excitement of the upcoming launch of the Mars rover Perseverance, currently scheduled for the end of July 2020. Alysium USB3 assemblies connect its cameras. We have elevated this predominantly consumer interface to the more durable levels demanded by industrial, automotive or aerospace applications, not only through "care of assembly" but with a product realisation process that considers DFMEA and a defined testing specification commensurate to the agreed requirements. By having structure in the process and clear, documented goals, we can make sure that all the requirements are considered during every step of the product realisation process (creation, sampling, validation and mass-production).


In the Rover's example, our die-cast terminations guarantee optimal connection geometry, hence reliability and ruggedness in hostile environments, and also diminish the scope for samples to be contaminated by avoiding the use of moulding plastic. The launch will also be a further celebration of how far machine vision has come (and indeed can go!).

Defining all the requirements is therefore key. In the field of machine vision interfaces, we are proud to have worked with many of the leading players in the markets for many years, together exploring requirements and solutions for obvious and subtle challenges. These experiences have been funnelled into the requirements of our core products, to ensure that their use in your application meets your expectations.

A follow-on benefit is that a clear test specification can allow boundaries to be explored in greater technical detail, in terms of "what is the maximum assembly length that can be used in this application?", or "what is the minimum height of this termination?". It allows CameraLink™ assemblies to be qualified by camera makers as robust beyond 14M (Full Config / 85MHz / 10tap), or Gen1 (5G) USB3 assemblies to be common in 5M and above, which can open up further applications for existing machine vision systems. These are solely copper solutions, without the cost or complexities (risk factors) of re-drivers or additional electronics.

In the past couple of years, optical assemblies, such as industrial USB or CameraLinkHS® assemblies, have been included in the portfolio, to ensure the market needs of ever-increasing data, longer lengths and/or high-flex requirements can continue to be supported in the same fashion.

When you launch your next project, Alysium is here to contribute to its effortless and successful implementation.

MV

Contact us: https://www.alysium.com/



IMAGING TECHNOLOGY: A BRIDGE BETWEEN SPACE AND MEDICINE

Performance is crucial for technology in many fields, including space and medicine. So, when advances are made in imaging technology for space, it presents opportunities for applications in the medical field, and vice versa. Caeleste, which has been at the forefront of CMOS evolution in recent years, expands on its work with the European Space Agency (ESA), which benefits both sectors.

Caeleste specialises in the custom design and manufacturing of CMOS image sensors. One area that Caeleste has focused on is the development of "radiation hard" pixels for such sensors, driven by the needs of space missions and medical X-rays. They are also used in particle physics applications, including particle detectors. Space and medical applications now each account for around one-third of the company's activities. Its partners are world-class leaders in their respective domains, such as top-tier medical companies and space agencies including the European Southern Observatory (ESO), ESA and others outside of Europe.

ESA Technology Transfer Network Broker Verhaert has worked with Caeleste since 2014, initially publishing a Technology Description of Caeleste's work on the ESA Space Solutions database. Verhaert has since supported Caeleste in its successful submission to take part in an ESA Demonstrator Project, initiated by the Technology Transfer and Patent Office, in which Caeleste derived and tested a specific radiation-hard pixel and sensor for a client, based on its work for space applications.

The expertise Caeleste gained from engaging in multiple ESA projects significantly contributed to the company's competence and enabled the team to apply their new-found knowledge in a variety of innovative medical applications.

Space can be considered 'empty', or effectively a deep vacuum, but it is not a benign environment. Instead, it is permeated with high-energy electromagnetic and particle radiation. This radiation is harmful to humans and also destroys electronic devices, due to either 'total dose' or 'single event' effects.

Caeleste has developed a way of designing electronics to be resilient to the effects of such radiation, known as 'rad-hard'. Following many cycles of innovation, its image sensors and readout integrated circuit (ROIC) designs have been proven in space missions and are now being deployed in the medical domain.

SPACE TECHNOLOGY FOR MEDICAL APPLICATIONS



No lenses are used in X-ray imaging, so the size of an image sensor must match the size of the target area, which in dentistry is the patient’s jaw area. Caeleste’s experience in the space domain enabled it to produce rad-hard wafer scale devices of this size with a high production yield. CREDIT: Caeleste

X-rays are used in the most common forms of medical imaging, including dentistry. Working with Carestream Dental, Caeleste developed a new generation of X-ray sensors for 3D reconstruction in dental imaging, a process known as computed tomography. Unlike regular cameras, no lenses are used in X-ray imaging. Consequently, the size of an image sensor must match the size of the target area, in this case the lower jaw area of the patient. Caeleste's IP and experience in the space domain made it possible to produce wafer-scale devices of this size with a high production yield and simultaneous rad-hardness.

In another collaboration, Caeleste's expertise in low-noise charge sensing circuits, originally acquired in the development of deep cryogenic ROICs for long wavelength infrared imaging, led it to work with Californian medical technology start-up Paradromics. The collaboration led to a fundamental shift in the field of brain-computer interfaces with the development of a sensor that allows electrodes to be implanted into the brain at higher density than previously possible without causing thermal damage to the neural tissue. In addition, the sensor successfully deals with an extremely challenging noise requirement to be able to read the neuron potentials. Solutions of this type cannot be purchased off the shelf but instead need to be developed by a team with the expertise and confidence to pursue challenging goals. This is where craftsmanship and passion for finding technological solutions were crucial for the project's success.

Sensors have to be designed to complement the overall system. Often that system benefits from being optimised, as its design impacts the amount and nature of image processing, on-chip, within the system and downstream. Design choices will need to take account of any operational requirements such as low noise, low power, high speed and/or high or cryogenic temperatures. Caeleste's know-how and IP enable it to balance these requirements and design sensors for a variety of medical applications such as ophthalmology, dental and mammographic imaging. MV

This article was originally published on the ESA website.



SPONSORED

OPEN UP NEW DIMENSIONS IN MACHINE VISION

Whether for laboratory analyses, quality assurance or process optimization: image processing plays an important role in many different business sectors. IDS is one of the industry leaders and offers a wide variety of practical USB, GigE and 3D cameras. The company shows that machine vision technology is constantly evolving. With artificial intelligence, for example, it is now possible to realise tasks that were previously considered impossible or could only be achieved with great effort.

IDS NXT ocean is a complete solution especially for AI-based image processing. Users do not need to be experts in deep learning or image processing to create a neural network. With the help of the IDS NXT lighthouse cloud software, which is part of the all-in-one system IDS NXT ocean, even non-experts can train an AI classifier with their own image data. Users do not have to set up their own development environment first, but can start training their own neural network right away. This involves three basic steps: upload sample images, label the images and then start the fully automatic training. The generated network can then be executed directly on the IDS NXT industrial cameras, turning them into powerful inference cameras.

An inference camera can apply the "knowledge" acquired through deep learning to new data. This makes it possible to automatically solve tasks that would either not be possible with rule-based image processing, or would require great effort. Since IDS NXT industrial cameras have a special AI core, neural networks are hardware-accelerated and run directly on the devices – enabling inference times of just a few milliseconds. With features such as C-mount, robust housing, GigE network connection with RJ45 or M12 connectors, RS232 interface and REST web interface, they are also fully-fledged industrial cameras. The IDS NXT rio and rome models are now available as serial cameras with different sensors and protection classes.

IDS also offers an IDS NXT ocean design-in kit which is particularly useful for anyone who wants to test the potential of AI for individual vision tasks. It provides all the components a user needs to create, train and run a neural network in a productive environment. In addition to an IDS NXT industrial camera with 1.6 MP Sony sensor, lens and cable, the package includes six months of access to the AI training software. The use of deep learning-based image processing for individual applications can thus be realised in a short time. More information: www.ids-nxt.com

In addition to the AI-based cameras already presented, IDS has many more cameras in its portfolio. uEye stands for high-performance, easy-to-use USB and GigE industrial cameras with a wide range of sensors and variants. Whether users choose the uEye FA (especially robust thanks to IP65/67 protection), uEye CP (only 29x29x29 mm in size), uEye SE (extremely versatile) or uEye LE series (perfect as a cost-effective project camera), whether they need models with or without autofocus or prefer individually configurable board-level cameras – the broad portfolio leaves nothing to be desired. Visit www.ids-imaging.com and discover the possibilities for your application! MV



ADDING VALUE

The key role of distributors to the machine vision sector

For machine vision companies looking to reach the global market backed with local knowledge, there is only one way to achieve this: via a distributor.

Distributors are a vital cog in the entire system. They are not just a 'shop window' for products; they deliver much more than that. They bring product expertise, open up new markets and deliver customer service benefits to their local market - which can be beyond the reach of machine vision manufacturers. As Gardasoft's Jools Hudson says: "a good machine vision distributor is more than just a box shifter".

She adds: "A good machine vision distributor will add enormous value to both the end user and the equipment supplier.

"The value of the distributor to the engineer in the field is clear: the distributor will ensure that they choose the most suitable components for their application.

"The machine vision distributor network allows equipment manufacturers to concentrate on their primary skill while the distributors provide the specialist support. The distributor will take care of smaller customers and standard applications, leaving the equipment manufacturer free to work closely with the customer on bespoke and larger projects, often in collaboration with a distributor."

For Belgium-based Euresys and Slovakia-based Photoneo, one of the major benefits of working with distribution partners is the ability to reach new markets and avoid the costs involved with setting up a new office and hiring staff. Working with distributors has also been part of their success. Both companies work with more than 40 distributors. Photoneo acknowledges the network provides instant lead generation, while Euresys feel the benefit particularly with its partners in Asia.

"They act as advisors, helping customers test, evaluate and integrate our products," said Euresys CEO Marc Damhaut. "Working with distributors has been a key ingredient to Euresys' growth for the past 25 years."

A defined geographical location is also important to Ximea, along with a comprehensive knowledge of the product and its benefits. This is crucial for any distributor working with Ximea. Henning von der Forst said: "Distributors are 'individually specialised on different technologies and applications', therefore our distributors need to be experienced and knowledgeable in their specific area.

"Our distributors should be able to spend a certain amount of time investing in information sharing and demos. We trust distributors who can provide customers with a high level of service."

*MV Pro is going to examine the importance of the distributors' role in greater detail. This will include an analysis of their online presence and services, plus additional insight via a consumer survey. MV



A DIFFERENT APPROACH TO METROLOGY

Quality is often seen as a policing mechanism, rather than a vehicle for improvement. What manufacturers may not realise is that by using metrology to police, rather than improve, they are missing out on a myriad of benefits. Mike G John, head of engineering at industrial metrology specialist The Sempre Group, shares insight on how to incorporate metrology into production.

The underlying culture in British industry has always been to treat production and quality as two separate entities. Both have their individual targets and the two are often at loggerheads, despite being part of the same process. As a result, coordinate and other essential measurement solutions aren't used until late in the manufacturing process, when components are taken off the production line to be measured.

As well as being a much slower approach, by this stage it could be too late to rectify any issues. If the product is found to be defective, time and energy has already been wasted on a product that could have been scrapped or saved earlier in the production process. The manufacturer has also lost the opportunity to identify the root cause of any problems. Consequently, it becomes unclear why parts, such as sheet metal components, are not up to specification or aren't industry compliant - the manufacturer does not know at what stage things went wrong, let alone why. Armed with little information, manufacturers face broad sweeping reworks that can delay production and pile on added costs. A more integrated process would allow them to quickly isolate the compliance issues and rectify them with tool changes, probe realignments or other quick improvements, before product quality is affected.

Without streamlined communication between metrology and production, it can also take anywhere up to two weeks to get a product from the shop floor into the quality department - creating a huge bottleneck in the process. By incorporating metrology equipment to perform continuous inspection directly on the line, manufacturers can reduce scrap rates, minimise rework and remove delays. Quality also increases as parts are inspected at multiple stages, so it is easy to identify - and rectify - issues as they occur.

Let's put this in context. A scrap rate of just five per cent can wipe out up to 95 per cent of your profit: if a part sells for £100 with £5 of profit built in, scrapping five parts in every hundred consumes almost all of that margin. The bottom-line benefits of bringing metrology and production together can be huge.

BABY STEPS

Shifting approach doesn't involve an entire overhaul of a production line, but the incorporation of a series of stepping-stone technologies. A good first step is to choose smaller parts that are easy to measure and purchase technology to inspect them on the production line using a basic solution. For example, data loggers offer a compact, battery-powered technology to sample temperature data from a range of parts and could provide valuable insights.

The manufacturer can then move on to larger components while incorporating a wider array of measuring systems. These may include non-contact solutions like Opticline for shaft parts, Micro-Vu for small prismatic components or Planar for flat parts. Many of these have the option to be automated with robots and other processes to further improve productivity in the age of Industry 4.0.

For instance, a Scottish manufacturer recently approached us because it needed help improving its grinding productivity. We developed a bespoke optical shaft measurement program and ran some trials, proving it was repeatable across its processes. The result was a zero per cent scrap rate, and the manufacturer could spend more time manufacturing, rather than reworking, its parts.

The third and most important step is to embrace a change of culture around quality and production at every level in your business. Clearly communicating the benefits of treating metrology as a vehicle for change and for improvement will help to bring both teams together for a common goal. Quality should not be treated as a policing mechanism. By bringing metrology into your production process, you can decrease scrap, increase productivity and generate more profit. MV



CLIMB HIGHER WITH 3D

Easy3DLaserLine - 3D laser line extraction and calibration library
AT A GLANCE
• Single and Dual Laser Line Extraction into a depth map
• Convenient and powerful 3D calibration for laser triangulation setups
• Compatible with the Coaxlink Quad 3D-LLE frame grabber

Easy3D - 3D image processing library
AT A GLANCE
• Point cloud processing and management
• Flexible ZMap generation
• 3D processing functions for cropping, decimating, fitting and aligning point clouds
• Compatible with many 3D sensors
• Interactive 3D display with the 3D Viewer

Easy3DObject - 3D object extraction and measurement library
AT A GLANCE
• Detection of 3D objects in point clouds or ZMaps
• Metric detection criteria
• Compatible with arbitrary regions
• Computation of precise 3D measurements, like size, orientation, area, volume…
• Automatic extraction of object local support plane
• 2D and 3D graphical display of the results
• Full-featured interactive demo application

www.euresys.com


SPONSORED

3D FOR 2D VISION ENGINEERS

The job of conducting industrial inspection tasks, such as taking highly accurate height measurements of an object or measuring the protrusion of embossed patterns, is usually given to expensive and time-consuming 3D systems.

But Euresys has a cheaper and more flexible solution which is available right now. Yuzairee Tahir, VP sales and support APAC at Euresys, takes a look at how 2D vision engineers can have 3D at their fingertips.

The machine vision fraternity knows that it is a complex and computationally intensive job to use a 3D point cloud to solve industrial inspection tasks. Yet what the team at Euresys understands is that most 3D inspection problems can be solved using a 2.5D representation. What's more, 3D sensors usually only generate a 2D array of heights and distances.

This is where the Easy3D library product from Euresys steps in. Easy3D is a set of software tools which enables the development of 3D machine vision inspection applications. In this role, it provides functions to generate ZMaps, which are an effective and very flexible way to deal with 2.5D data. A ZMap is the projection of a point cloud on a reference plane, where distances are stored as pixel grey-scale values. Importantly, ZMaps are distortion free, with a metric coordinate system. Easy3D exposes functions to convert arbitrary 3D data like point clouds to 2D representations, which form the ZMaps. A ZMap is an array of real-world distances generated from a point cloud. The data in the ZMap are calibrated, so distances, heights and angles can be measured in metric units or pixels. Each value of a ZMap represents the distance from a 3D point to a reference plane (see illustration one).

The reference plane can be set explicitly, or calculated from the 3D point cloud, giving an object-levelling ability (see illustration below).

ZMaps are also compatible with Open eVision 2D operators - for example, pattern matching with EasyFind or EasyMatch, object segmentation with EasyObject, or sub-pixel measurements with EasyGauge. By using ZMaps with well-known and fast 2D operators, the user has access to an effective 2.5D processing pipeline. Open eVision can be used alongside several 3D computer vision libraries, including Easy3D, Easy3DLaserLine and Easy3DObject.

To provide an example of how it works, the illustrations below show how the various methods are used to measure hole diameters: 3D levelling is conducted via EPlaneFinder, ZMaps are generated, pattern matching is used for hole detection, and hole diameter metric measurement is carried out with EasyGauge.
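The ZMap idea is compact enough to sketch. The function below is a simplified illustration of the concept, not the Easy3D API: each point is expressed in an orthonormal basis attached to the reference plane, two coordinates select a pixel, and the third - the metric distance to the plane - becomes the stored grey value:

```python
import numpy as np

def make_zmap(points, origin, ux, uy, normal,
              resolution_mm=0.1, shape=(480, 640)):
    """Project a point cloud onto a reference plane to build a ZMap.

    points: (N, 3) array in mm; ux, uy, normal: orthonormal basis of the
    reference plane. Simplified illustration -- not the Easy3D API.
    """
    rel = np.asarray(points, float) - origin
    cols = np.round(rel @ ux / resolution_mm).astype(int)  # pixel x
    rows = np.round(rel @ uy / resolution_mm).astype(int)  # pixel y
    height = rel @ normal                # metric distance to the plane

    zmap = np.full(shape, np.nan)        # NaN marks undefined pixels
    ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
    # Keep the highest point that lands in each pixel (fmax skips NaN).
    np.fmax.at(zmap, (rows[ok], cols[ok]), height[ok])
    return zmap
```

Because the pixel pitch is metric, 2D measurements taken on the resulting image - diameters, areas, template matches - translate directly into millimetres, which is what makes the 2D operators above usable on 2.5D data.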

The applications for such technology are wide and include the electronic manufacturing and general manufacturing industries. The result is a more cost-effective solution which can, depending on application, be up to 20 times faster than conventional 3D systems. And it is a system which is finding many fans in the machine vision industry. MV Please contact sales@euresys.com for more information. Author: Virginie AndrĂŠ - Senior Communication Manager www.euresys.com



LIQUID LENSES: THE IDEAL FAST-FOCUS SOLUTION

Liquid lens technology mimics the human eye to achieve very rapid focus change. Liquid lenses can be easily combined with conventional lenses and mounted on equipment such as robot arms, as Gardasoft's Jools Hudson explains.

In the machine vision environment, there is constant demand for increased camera performance. Camera manufacturers are always searching for ways to improve parameters such as field of view, scanning speed, and speed of focusing so that the vision system can process more items in the same amount of time. A conventional camera setup uses a fixed-focus optical system, but these have a limited depth of field, which means that different object distances must be accommodated using multiple static imaging systems. The result is increased cost and complexity of the system.

To address the challenge of fast focus change, Optotune has developed a range of robust focus-tunable lenses that can be added to a standard imaging system comprising a camera and imaging objective. These tunable lenses can change focus in under 10 ms and enable rapid, electrical tuning of working distance while preserving the resolution and field of view of the original system. Using this cutting-edge technology allows end users to eliminate the need for multiple objectives and cameras, or the need to manually adjust focus. The result is a simpler and cheaper solution.

HOW TUNABLE LENSES WORK

Tunable lenses are conceptually like the human eye. An elastic polymer membrane covers a reservoir of optical liquid, much like the lens of the eye. A voice-coil actuated system exerts pressure on the outside of the membrane, which alters the shape of the reservoir, thereby changing the optical power of the tunable lens.


The optical path through the reservoir is shown in Figure 1. Optotune currently offers liquid lenses with clear aperture sizes of 3, 10 or 16 mm which suit a wide range of applications.

Figure 1: Working principle of a typical liquid lens. Left: A parallel beam is transmitted through the liquid lens. The arrows indicate where the actuator exerts pressure. Right: When actuated, the curvature of the lens and the optical power change.

IMPLEMENTATION OF TUNABLE LENSES

Tunable lenses can be mounted either in front of or behind the conventional lens. Where the vision system has a focal length of between 8 and 50 mm, the Optotune liquid lens is best mounted in front of the conventional lens, which provides a wide focus range from infinity down to 100 mm. For extremely compact systems it is possible to combine the Optotune liquid lenses with M12 board lenses directly on a C-mount or even an S-mount camera, achieving an extremely compact and cost-effective design. A common application for this would be package sorting, where barcodes on packages of varying sizes are read and tracked.



Where short working distances are needed, the liquid lens should be placed between the camera and the imaging lens, which provides high-quality macro imaging with image circles up to 30 mm. Compared to the front-lens configuration, the back-lens configuration offers better resolution and reproducibility of the focal plane, with a smaller working distance range. A common application for this type of configuration would be high-speed inspection of PCBs.

In the special case of a telecentric conventional lens, the liquid lens will perform best when it is placed directly after the aperture stop, and a variety of designs are available with magnification ranges from 0.13X to 4X. The advantages of this configuration are that there will be no image distortion, resolution decrement, vignetting or orientation dependence in the system. This configuration is frequently used in robotics, where precise manufacturing relies on measurement of features at variable working distance from the sensor.

THE GAME CHANGER: INTEGRATED LIQUID LENSES

The combination of a liquid lens and a standard imaging objective already provides great imaging results in many applications. However, the next step in the evolution of deformable lenses is to place the liquid lens inside the imaging objective. Optotune and VS Technology presented the first wide-angle liquid lens solution for high-resolution, 1.1" sensors. With the ability to focus from 100 mm to infinity in 20 ms, this new 12 mm lens is the most reliable and fastest focusing solution for today's logistics and robotics applications.

INTERFACING LIQUID LENSES WITH CAMERAS

The optical power of the liquid lens is directly proportional to the current flowing through the actuator at any time, and this requires careful control of the drive current. A variety of options are available, ranging from Optotune's own USB-based drivers to the industrial and embedded controllers from Gardasoft. The Gardasoft solution is compliant with GigE Vision standards, meaning easy communication with the rest of the machine vision system. Gardasoft also produces integrated controllers which allow the liquid lens to interface directly with the camera through UART or I2C.
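In practice, that linearity reduces focus control to inverting a calibration. The sketch below is hypothetical: the constants are invented, and real Optotune or Gardasoft controllers expose their own interfaces. It simply shows the principle that refocusing by roughly 1/d dioptres, for an object at distance d, maps straight onto a drive current:

```python
# Invented calibration for one lens: optical power (dioptres) as a linear
# function of drive current (mA). Real values would come from the lens
# datasheet or a two-point calibration.
GAIN_DPT_PER_MA = 0.01
OFFSET_DPT = -2.0

def current_for_power(target_dpt: float) -> float:
    """Invert the linear calibration: drive current for a focal power."""
    return (target_dpt - OFFSET_DPT) / GAIN_DPT_PER_MA

# With the base objective focused at infinity, focusing on an object at
# distance d metres needs roughly 1/d dioptres of added optical power.
for distance_m in (2.0, 1.0, 0.5):
    print(f"{distance_m} m -> {current_for_power(1.0 / distance_m):.0f} mA")
```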

AN EYE TO THE FUTURE

Liquid lens technology is becoming increasingly popular in machine vision applications, where its ability to change focus in just a few milliseconds is enabling a host of new applications. Liquid lenses have a very long working life and provide a flexible, compact, and reliable solution for any system where the working distance may vary. Both system integrators and end users benefit from the high modularity and adaptability of liquid lenses and the saving in space, time and cost. The advance of integration between many components in the machine vision world is sure to lead to many interesting new solutions incorporating liquid lenses. MV

Figure 2: The Optotune EL-16-40 liquid lens – a smart and simple way to increase the resolution, speed and overall performance of vision systems while remaining compact and durable.




SPONSORED

XILINX UNLEASHES THE POWER OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE Subh Bhattacharya Lead, Healthcare, Medical Devices & Sciences at Xilinx

The use of artificial intelligence (AI) – including machine learning (ML) and deep learning (DL) techniques – is poised to become a transformational force in healthcare. From anatomical geometric measurements, to cancer detection, to radiology, surgery, drug discovery and genomics, the possibilities are endless. In these scenarios, ML can lead to increased operational efficiencies, extremely positive outcomes and significant cost reduction. There’s a broad spectrum of ways that ML can be used to solve critical healthcare problems. For example, digital pathology, radiology, dermatology, vascular diagnostics and ophthalmology all use standard image processing techniques. Chest x-rays are the most common radiological procedure, with over two billion scans performed worldwide every year – around 5.5 million scans a day. Such a huge quantity of scans imposes a heavy load on radiologists and taxes the efficiency of the workflow. Often ML, Deep Neural Network (DNN) and Convolutional Neural Network (CNN) methods outperform radiologists in speed and accuracy, but the expertise of a radiologist is still of paramount importance. However, under stressful conditions during fast decision-making, the human error rate can be as high as 30 per cent. Aiding the decision-making process with ML methods can improve the quality of results, giving radiologists and other specialists an additional tool. Many procedures within radiology, pathology, dermatology, vascular diagnostics and ophthalmology involve large images, sometimes five megapixels or larger, requiring complex image processing. The ML workflow can also be computing- and memory-intensive. The predominant computation is linear algebra, which demands many operations and a multitude of parameters.

Figure 1: Radiology (chest x-ray) application


This results in billions of multiply-accumulate (MAC) operations, hundreds of megabytes of parameter data, a multitude of operators and a highly distributed memory subsystem. Performing accurate image inference efficiently for tissue detection or classification using traditional computational methods on PCs and GPUs is therefore inefficient, and healthcare companies are looking for alternative techniques to address this problem. Xilinx technology offers a heterogeneous and highly distributed architecture to solve it. The Xilinx Versal™ Adaptive Compute Acceleration Platform (ACAP) family of Systems-on-Chip (SoCs) – with its adaptable Field Programmable Gate Array (FPGA) fabric, integrated digital signal processors (DSPs), integrated accelerators for deep learning, SIMD VLIW engines with a highly distributed local memory architecture and multi-processor systems – is known for its ability to perform massively parallel signal processing of high-speed data in close to real time. Additionally, Versal ACAP has multi-terabit-per-second Network on Chip (NoC) interconnect capability and an advanced AI Engine containing hundreds of tightly integrated VLIW SIMD processors. This means computing capacity can be pushed beyond 100 tera-operations per second (TOPS). These device capabilities dramatically improve the efficiency with which complex healthcare ML algorithms are executed and help to significantly accelerate healthcare applications at the edge, all with fewer resources, lower cost and less power. Xilinx has an innovative ecosystem for algorithm and application developers. Unified software platforms, such as Vitis™ for application development and Vitis AI™ for optimising and deploying accelerated ML inference, mean developers can use advanced devices – such as ACAPs – in their projects. MV
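To see where the billions of MAC operations come from, consider a single convolutional layer: every output pixel of every output channel needs one multiply-accumulate per input channel per kernel tap. A quick back-of-the-envelope calculation in Python, with layer dimensions chosen purely for illustration:

# MACs for one conv layer = H_out * W_out * C_out * C_in * K * K
def conv_macs(h_out, w_out, c_in, c_out, k):
    return h_out * w_out * c_out * c_in * k * k

# Illustrative layer on a ~5-megapixel medical image, 3x3 kernels,
# 3 input channels, 32 output channels:
macs = conv_macs(h_out=2048, w_out=2448, c_in=3, c_out=32, k=3)
print(f"{macs / 1e9:.1f} billion MACs for a single layer")  # ~4.3 billion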

Figure 2 – Xilinx Vitis Unified Software Platform



SPONSORED

THE MIDOPT DIFFERENCE

A MidOpt machine vision filter is not just a machine vision filter.

Machine vision filters maximise contrast, improve colour, enhance subject recognition and control the light that’s reflected from the object being inspected. However, a MidOpt machine vision filter is not just a machine vision filter. Here are some of the key features to look for when selecting a filter for your system:

PERFORMANCE Most standard filters offer only a full dichroic, or interference, coating, which blocks both the short and long wavelengths. If the coating fails, the filter is useless. And unfortunately, these types of filters are prone to batch-by-batch variations during production. MidOpt machine vision filters utilise absorptive filter glass to block the shorter wavelengths and a dichroic coating to block the longer wavelengths. This design is highly reproducible and consistent, resulting in no variation during production. All MidOpt filters are also hard-coated for durability and come standard with an anti-reflection coating to maximise transmission. While standard filters are marketed to seem similar in performance, they often cause problems later. A full dichroic filter is far more sensitive to angle-of-incidence shifting than the hybrid filter offered by MidOpt. The shorter the focal length of your lens, the more issues this can cause.
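The angle-of-incidence sensitivity of an interference coating can be estimated with the standard thin-film shift formula: the centre wavelength moves to λ(θ) = λ₀·√(1 − (sin θ / n_eff)²), where n_eff is the coating’s effective index. A short Python sketch, assuming an n_eff of 2.0 (typical values vary by coating design):

import math

def bandpass_centre_shift(lambda0_nm, theta_deg, n_eff=2.0):
    """Centre wavelength of an interference filter at oblique incidence."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lambda0_nm * math.sqrt(1.0 - s * s)

# A 660nm bandpass tilted 20 degrees (e.g. edge-of-field rays from a
# short focal length lens):
print(f"{bandpass_centre_shift(660, 20):.1f} nm")  # ~650.3 nm, a 10nm shift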

DESIGN MidOpt specifically designed all of their BP Series Bandpass Filters to emulate the output of the most common LEDs used in machine vision. The shape of the curve is extremely important. You may have noticed that the curve of a standard filter has sharp sides and a flat top.


A MidOpt filter has more of a Gaussian (bell-shaped) curve. A Gaussian curve is ideal in machine vision because it allows the desired emission of the LED to pass and doesn’t allow high transmission outside of the desired wavelength.
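The match between a Gaussian filter curve and an LED’s emission can be made concrete by modelling both as Gaussians and integrating their product. The sketch below uses illustrative numbers (a 625nm LED with 20nm FWHM and a matching bandpass filter), not measured MidOpt data:

import math

def gaussian(x, centre, fwhm, peak=1.0):
    """Gaussian profile parameterised by centre and FWHM."""
    sigma = fwhm / 2.3548  # FWHM = 2*sqrt(2*ln 2)*sigma
    return peak * math.exp(-0.5 * ((x - centre) / sigma) ** 2)

# Fraction of LED output passed by the filter (simple numeric integral).
led_total, passed = 0.0, 0.0
for nm10 in range(5500, 7500):  # 550.0 to 750.0 nm in 0.1nm steps
    nm = nm10 / 10.0
    led = gaussian(nm, centre=625, fwhm=20)
    t = gaussian(nm, centre=625, fwhm=30, peak=0.9)  # filter transmission
    led_total += led
    passed += led * t
print(f"~{100 * passed / led_total:.0f}% of LED output transmitted")  # ~75%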

REPEATABILITY All MidOpt Bandpass Filters are 100 per cent inspected to guarantee repeatable performance. Unlike most companies that buy and resell filters from third-party manufacturers and distribute them as a secondary focus, our primary focus is on filters. To ensure optimal quality every time, we fully inspect every single filter for surface imperfections and transmission performance. We have very strict guidelines to ensure consistent results every time. And all our filters are lot numbered and easily traceable for full accountability.

MOUNTING SOLUTIONS MidOpt offers the largest variety of in-stock mounted filters in the industry, from M13.25 to M105, along with our exclusive C-mount M25.4, which mounts behind the lens, in front of the sensor. This matters most for new designs, because we are able to quickly size any of our standard filters to fit any new camera. MV



AVOID SHADING WITH CMOS CAMERAS

SVS-Vistek explains the relationship between CMOS sensors, microlenses and shading – and how to combat it.

The combination of an expensive lens with excellent specifications and a modern high-resolution camera with a large CMOS sensor can lead to surprising and undesirable shading effects. With the necessary background knowledge and a suitable selection of camera and optics, this can be prevented. Shading is a well-known optics problem: in simple terms, pixels are displayed darker with increasing distance from the sensor’s optical axis, so recorded images become darker towards the image edges, symmetrically around the optical axis. Closing the aperture increases the usable image area.

However, the unwanted shading phenomenon can also be triggered by image sensors. The reason can be found in the sensor architecture of current CMOS cameras: a CMOS pixel consists of a light-sensitive segment and a light-insensitive amplifier area, viewed from the incoming light side. Modern CMOS sensors also have so-called microlenses. A microlens is placed above each individual CMOS cell and directs incident light onto the light-sensitive part of the pixel. This construction significantly increases the sensitivity of the pixel and reduces pixel noise caused by the object structure.

Figure: Microlens shading with an oblique light path. Microlenses in modern CMOS sensors have the effect that incoming light has to arrive from within a certain angular range, since otherwise it would be increasingly directed onto the light-insensitive areas of the pixel. (Image source: SVS-Vistek)

Figure: Lens and CMOS sensor shading – grey value plotted against distance from the optical axis in pixels, comparing pure lens shading, lens shading plus CRA shading, and the increased shading of the light-sensitive pixel area when using microlenses with an oblique incident light path. In addition to the lens, modern CMOS sensors have a major influence on the shading behaviour of modern, high-resolution cameras. (Image source: SVS-Vistek)


However, this procedure also has a disadvantage: the use of microlenses means the incoming light has to come from a certain angular range. Light from outside what is called the chief ray angle (CRA) is increasingly directed to the light-insensitive areas of the pixel. The result is a significant reduction in intensity and thus a shading effect on this pixel. When using modern, high-quality CMOS industrial camera sensors, such as a Sony IMX342 or Canon 120MXSM, in combination with good optics, shading can also occur – which may seem surprising. The additional shading of a CMOS sensor results from the combination of the lens with the sensor and is added to the normal shading behaviour of the lens.

Sensor-side telecentric lenses do not have this disadvantage, because their beam path runs perpendicular to the sensor over the entire sensor surface. The problem also does not occur with Micro Four Thirds (MFT) lenses, which almost exclusively work telecentrically on the sensor side.

MATCH CRA AND LENS

However, an exception to this are CMOS sensors with a so-called pixel shift: here, the microlenses near the sensor edge are offset from their pixels to compensate for the light arriving obliquely from the lens at the edge of the sensor. Such sensors do not work correctly when combined with telecentric lenses. It is therefore important that the lens and sensor harmonise perfectly to minimise the loss of dynamic range in the image associated with shading correction.

Figure: Chief ray angle (CRA) diagram – rays blocked by the physical aperture, intensity loss due to a small CRA, and wide-angle rays reduced by the CRA, shown relative to the optical axis. The losses in dynamic range associated with shading correction can be minimised with a suitable coordination of lens and sensor. (Image source: SVS-Vistek)

On the lens axis, the incoming light is perpendicular to the sensor. In most sensors, the microlenses are arranged in such a way that the light should enter perfectly perpendicular. With entocentric lenses, however, the further an individual pixel sits from the lens axis towards the sensor edge, the more obliquely the light enters – and the shorter the focal length, the stronger this effect. If the angle of the incoming light is larger than the CRA of the sensor, shading is caused by the sensor.
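For a rough feasibility check, the incidence angle at the sensor edge can be approximated from the image height and the distance of the exit pupil from the sensor, then compared against the sensor’s CRA specification. A minimal sketch, assuming for simplicity that the exit pupil sits roughly one focal length from the sensor (real lenses vary widely, as the next paragraph notes):

import math

def edge_incidence_angle_deg(image_height_mm, exit_pupil_dist_mm):
    """Approximate chief ray angle at the sensor edge."""
    return math.degrees(math.atan(image_height_mm / exit_pupil_dist_mm))

# Sensor with ~14mm half-diagonal behind a lens whose exit pupil is
# assumed to sit 16mm from the sensor:
angle = edge_incidence_angle_deg(14.0, 16.0)
sensor_cra_deg = 10.0  # assumed CRA limit from a sensor datasheet
print(f"edge ray at {angle:.1f} deg "
      f"{'exceeds' if angle > sensor_cra_deg else 'is within'} CRA spec")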


The CRA range and the sensor-side beam path of a lens do not depend on its focal length but on its design. The beam path on the sensor side is not a qualitative but a constructive feature and depends mainly on the position and size of the exit pupil. However, there is no right or wrong here: there are good reasons for every lens design, and even lenses of the same focal length from the same manufacturer can be designed very differently. For this reason, special care must be taken when choosing a combination of CMOS camera and lens, in order to avoid the shading problem and to obtain high-quality images that also facilitate the subsequent evaluation. For decades, SVS-Vistek has included the lens when advising on camera solutions and uses intensive testing to ensure a perfect solution for every individual task when optics and camera interact. MV



SPONSORED

SEMICONDUCTOR CHIP INSPECTION WITH

ADVANCED ILLUMINATION AND CYTH SYSTEMS

THE CHALLENGE A customer specialising in the handling and testing of high-throughput semiconductor chips was looking to partner with an American vision specialist to help bring AI and deep learning technologies into a next-generation system for the semiconductor inspection marketplace. Seeking a disruptive technology with intuitive inspection, they partnered with systems integration company Cyth Systems to find a solution. One major inspection challenge was the size of the semiconductor chips; they were so small that the cameras needed to resolve to the single-micron level, so the selection of optical components posed a significant hurdle. It would also be a challenge to find a camera-lens combination that could capture micron-level imagery while mechanically fitting into the necessary footprint and capturing images in the needed time frame. The customer also wanted the ability to selectively apply different solutions or criteria to their unique inspection needs at will.

THE INSPECTION SOLUTION They worked with Cyth Systems to develop a solution that inspects the client’s product by utilising High Intensity Line Lights from Advanced illumination along with high-resolution line scan cameras. The lights and cameras combined to build up a product image, singulate unique components, and then run those images through Cyth’s Neural Vision software. The software then determined good or bad parts, classifying defects for over 50 inspections.


Cyth Systems integrated a vision sub-assembly into the loading mechanism along with a Festo motor with linear stage, provided by the client. To create the system, Cyth’s team used LabVIEW software with two 12k-resolution Basler line scan cameras, lenses from Edmund Optics, and specialised custom optical spacers for precision light control. Two Advanced illumination High Intensity Red Line Lights were used, which provided the ideal image quality when inspecting a reflective part. Every product image required 1GB of data, so the team implemented a powerful PC to handle the needed processing power. To utilise the final solution, the client will load a cassette of multiple lead frames, which will be indexed through the system for the inspection of individual parts. Each frame contains over 100 unique parts with over 50 inspections each and a target inspection time of 25 seconds. The output is a visual report detailing which components have been identified as rejects, with a detailed breakdown of defects based on client criteria, resulting in fast and accurate processing of frames. The fully integrated solution gives the customer greater control over their inspections, accommodating existing spatial requirements while increasing speed and decreasing human error in the customer’s high-resolution inspection system. MV
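Those figures imply a demanding sustained rate, which is worth spelling out. A quick check in Python, using only the numbers quoted above (the per-frame values are lower bounds, since the article says “over” 100 parts and “over” 50 inspections):

# Per-frame workload quoted in the article (lower bounds).
parts_per_frame = 100
inspections_per_part = 50
target_seconds = 25
image_gb = 1.0  # data volume of one product image

rate = parts_per_frame * inspections_per_part / target_seconds
print(f"{rate:.0f} inspections per second sustained")            # 200/s
print(f"{image_gb / target_seconds * 1000:.0f} MB/s of image data")  # 40 MB/s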



SPONSORED

FOR FASTER AND BETTER PROGRAMMING AND CONFIGURATION

NEW SOFTWARE FOR EFFICIENT CAMERA INTEGRATION

With neoAPI and Camera Explorer, Baumer offers two new free-of-charge software packages for fast, easy and efficient camera integration. The software is available for PC- and ARM-based systems under Windows 7, Windows 10 and Linux.

Baumer neoAPI offers efficient camera integration in Python, C++ and C#. The new, powerful and user-friendly GenICam camera application programming interface (API) allows quick familiarisation thanks to its modern design. This allows Baumer cameras to be integrated into various applications with just a few lines of code – even by software developers with limited experience in image processing. Integrated automatisms reduce the necessary code to a minimum, e.g. to six lines for image acquisition and storage. Thanks to auto-complete support, not only code segments but also GenICam features of the camera are prompted and completed, while help options are also displayed. This makes evaluation and integration of the cameras more efficient and facilitates subsequent software updates.
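As an illustration of how compact such an acquisition can be, here is a hypothetical six-line Python sketch in the spirit of neoAPI. The class and method names below are assumptions for illustration – consult Baumer’s neoAPI documentation for the actual interface.

# Hypothetical sketch of a minimal acquire-and-save with a neoAPI-style
# interface; names are illustrative, not the documented Baumer API.
import neoapi

camera = neoapi.Cam()
camera.Connect()             # connect to the first camera found
img = camera.GetImage()      # grab one frame
img.Save("frame.bmp")        # write it to disk
print("saved one frame")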

The intuitive Baumer Camera Explorer GUI application allows easy evaluation and configuration of Baumer cameras in no time. Familiarisation with, testing and configuration of the multi-faceted camera features are optimally supported by the clearly structured user interface. Thanks to its flexibly customisable graphic interface, the Baumer Camera Explorer can be used for a wide range of tasks, from camera configuration through process monitoring to recording and documentation tasks. Camera support in the field benefits from this, as do laboratory workstations. MV

Contact Details: W: www.baumer.com/neoAPI W: www.baumer.com/Camera-Explorer E: sales@baumer.com T: +41 527281122





COULD CORONAVIRUS MAKE VR MAINSTREAM?

While experiencing work in lockdown, Jonathan Berry, European practice director at Expressworks, turned his thoughts to the opportunities virtual reality (VR) offers for connecting with colleagues.

“In recent weeks we’ve all become used to holding meetings and connecting with colleagues via video conferencing tools, such as Zoom and Teams,” said Berry.

“But we haven’t always found the experience to be very satisfactory. As humans we are social animals and require the stimulus of other humans. We rely on more than words to communicate and many of the cues we are used to picking up on are lost through video conferencing, as useful as it has been during this time.”

COULD THERE BE A BETTER WAY?

More commonly thought of as a science fiction plot device or gaming accessory, VR has been gaining ground in real-life applications. In a trial conducted by Oxford academics in 2018, they discovered that nearly three out of four patients with a serious phobia of heights could overcome it using VR. The study led them to believe that VR could be the way forward for treating a number of mental health issues.

Expressworks decided to hold a few key company meetings in VR to see if there could be potential benefits to the workplace too. “It was agreed that the experiment would be a success if the VR enhanced rather than distracted us from the meeting, if it allowed us to easily share the materials we needed to discuss, if we had better focus during the meeting and if we had better recall after the meeting,” explained Berry.

“We had meetings in space stations, mountain-top retreats, beach resorts and futuristic offices. Initially the novelty of each new setting was exciting; however, by the end of an hour-long meeting the tiny lag between movement and result had us feeling physically sick.”

Having your colleagues or employees feel sick is never a good idea when trying to hold a meeting, so the technology has some way to go before mainstream adoption is possible. One of the reasons why its full potential has still to be realised may be that VR for meetings hasn’t been properly commercialised yet.

There are a number of different platforms available, but none are specifically focused on meetings. AltspaceVR offers an amazing social experience, but it doesn’t have the professional feel necessary for the corporate world. Rumii is great for training and education, but it isn’t versatile enough for large meetings with input required from multiple sources. MeetinVR looks the most promising solution at the moment, but it isn’t on general release yet.


“In our experiment there was a point, between the initial location excitement and the onset of movement-lag-induced nausea, at which the enormous potential of this technology was clear. You are immersed in a virtual world which allows for almost complete concentration and focus. The illusion is in fact so complete that you hear the sound from a speaker who appears to be on your left coming from that direction. In addition, because it feels like a game, it is fun as well as serious.”

Berry concluded: “The technology isn’t there yet, but it feels close. Maybe our ‘new normal’ will provide the catalyst needed to encourage the investment required to make this possibility mainstream.” MV

This lockdown came too early for VR meetings, but with social distancing likely to continue for some time and future lockdowns a distinct possibility, VR meetings look certain to have their day.




ULTRASHORT PULSE LASERS CONQUER 5G PHONE ANTENNAE MICROMACHINING

Coherent’s Hatim Haloui explains how 5G phone antennae are critical analogue components with complex shapes that are challenging to machine with lasers.

Key components in 5G phones are the miniaturised antennae that are smaller and physically more complex than earlier devices for several reasons, including the shift to higher frequency (e.g., microwave) operation necessary for 5G. Plus, a key part of 5G will be the ability of mobile devices to simultaneously exploit signals from different transmitters. In a smartphone, this requires multiple miniaturised antennae with complex 2D (and even 3D) shapes. These shapes must also support so-called MIMO operation: multiple signals in, multiple signals out for the same antenna. Existing antennae already support 2x2 and even 4x4 MIMO function, but 5G is looking to increase this type of multiplexing further. The antennae are fabricated from laminated substrates, with a layer of copper supported on an insulator (e.g., LCP, modified PI), often including a bonding (adhesive) layer.


During the fabrication process they have to be mounted on some type of sacrificial tape or other carrier. Laser micromachining is the obvious choice to perform the necessary cutting/scribing (scribing is also called kiss cutting, and it involves a selective removal of layers without damaging the under layers). Nanosecond (Q-switched) lasers could readily provide the required spatial resolution, but not in a single process. The problem is that copper and polymer have very different ablation thresholds. Optimised micromachining requires a laser fluence of ~7X the ablation threshold. Increasing the fluence toward 10X over threshold and beyond does not improve process speed, it just increases the width of the cut and the extent of the heat affected zone (HAZ). This is the material adjacent to the cut, scribe or hole that is degraded by thermal effects, for example charring in paper and plastics, creation of a glassy phase in ceramics, or melting in the case of semiconductors. With a small electromagnetic device like a phone antenna, the HAZ must be minimised to avoid functional damage, e.g., melting, which could lead to a short circuit. HAZ can also reduce device reliability and lifetime. However, if a nanosecond laser process were optimised for ablating the copper, it would be very difficult to prevent significant HAZ damage in the polymer. As a consequence, with nanosecond lasers the antennae would have to be patterned in two separate processes with two different laser setups, adding to the process cost and requiring that tight registration is maintained throughout. Moreover, registration would be difficult because of material shrinkage after the first step.
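Fluence here is pulse energy per unit area; for a Gaussian beam the peak fluence is commonly taken as F = 2E/(πw²), with E the pulse energy and w the 1/e² beam radius. A small sketch that checks whether a given setup reaches the ~7X-over-threshold operating point (the threshold and spot values are illustrative, not specific to any material in the article):

import math

def peak_fluence_j_cm2(pulse_energy_uj, spot_radius_um):
    """Peak fluence of a Gaussian beam: F = 2E / (pi * w^2)."""
    e_j = pulse_energy_uj * 1e-6
    w_cm = spot_radius_um * 1e-4
    return 2.0 * e_j / (math.pi * w_cm ** 2)

threshold = 0.3  # J/cm^2, illustrative ablation threshold
f = peak_fluence_j_cm2(pulse_energy_uj=10.0, spot_radius_um=15.0)
print(f"fluence {f:.2f} J/cm^2 = {f / threshold:.1f}x threshold")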



Two well-proven ways to minimise HAZ are to use shorter (e.g., picosecond) pulse widths and/or shorter wavelengths. With ultrashort pulse (USP) lasers, much of the pulse energy is carried away in the ejected material before it has time to spread and cause a HAZ. Moreover, although the pulse energy is typically lower in USP lasers, they offer much higher pulse repetition rates, which supports processing in fast multiple passes, further minimising HAZ issues. The use of shorter wavelengths, i.e. ultraviolet, is also well known to reduce HAZ effects. That’s because the high-energy photons can directly break interatomic bonds in most materials, so that some of the material is removed in a photolytic process rather than a thermal one. The use of a shorter wavelength also supports a larger depth of focus, thereby increasing the process window. The combination of short pulse width and short wavelength therefore makes the picosecond UV laser an ideal candidate for micromachining the copper/mPI or copper/LCP laminates in this antenna application. Recently, industrial USP ultraviolet lasers have increased in average power, which is necessary for high process throughput in applications like 5G antenna cutting. An example is the HyperRapid NX, which is available with up to 30 watts of output at a wavelength of 355 nm. This enables scan speeds of several metres per second, with typically about 10 passes needed to process the latest antenna designs.

The HyperRapid NX includes a novel pulse control feature called Pulse EQ, which further enhances its capabilities for complex shape cutting or scribing where the beam is rapidly scanned across the substrate. This inevitably involves finite acceleration and deceleration rates, so that the motion in straight lines is faster than the motion around tight curves and corners.


Figure 1: Demonstration of the benefit of active pulse-rate control by real-time feedback, with a single pass over a sample of thin SiN on silicon.

This is potentially problematic, since excessive pulse-topulse overlap can lead to thermal accumulation and a HAZ, even with the small thermal load created by USP ultraviolet lasers. Instead, this new pulse control feature allows the pulse rate to be controlled in real time: in this case by slaving the pulsing to position/velocity feedback synchronisation signals from the scanners. This ensures that the pulse-to-pulse overlap stays at the constant amount that has been determined to be optimum for each application. Just as important, the pulse control includes active stabilisation of the pulse energy; with older pulsed lasers, changing the pulse repetition rate usually causes variations in the pulse energy. Figure 1 illustrates how this works with a single pass with a 30 watt ultraviolet USP laser (Coherent HyperRapid NX) on a SiN on Si sample chosen to highlight the pulse ablation pattern in these microscope images. MV
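The overlap logic can be made concrete: at scan speed v, pulse repetition rate f and spot diameter d, the pulse-to-pulse overlap is 1 − v/(f·d), so holding overlap constant as the scanner decelerates means scaling f with v. A minimal sketch with illustrative numbers:

def required_rep_rate_khz(scan_speed_mm_s, spot_um, overlap=0.5):
    """Repetition rate that keeps pulse-to-pulse overlap constant:
    overlap = 1 - v / (f * d)  =>  f = v / (d * (1 - overlap))."""
    spot_mm = spot_um / 1000.0
    return scan_speed_mm_s / (spot_mm * (1.0 - overlap)) / 1000.0

# Straight-line motion vs. a tight corner, same 50% overlap target:
for v in (2000.0, 400.0):  # mm/s, illustrative scanner speeds
    print(f"{v:.0f} mm/s -> {required_rep_rate_khz(v, spot_um=20):.0f} kHz")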



SPONSORED

WHAT’S THE POINT OF STROBE CONTROLLERS?

A good image is the core requirement for image processing, and a strobe controller will drive machine vision lights so the camera always finds a well-illuminated target. For automated image processing, shifting illumination conditions can create massive problems, and consistent illumination is a core requirement. Strobe controllers guarantee perfect illumination of the object and help improve the speed and reliability of the process. They can also save costs.

ACHIEVING CORRECT ILLUMINATION Perfect illumination of the camera image should be our goal and requires the following principles:

1. No shadows or reflections
2. All important areas visible at medium intensity
3. The highest brightness value just below the maximum pixel value, e.g. around 245 in an 8-bit system
4. The lowest brightness just above zero, e.g. around five in an 8-bit system

To avoid motion blur with fast-moving objects, choose a short exposure time, in the range of a few milliseconds or less.
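A quick way to verify these bounds in practice is to inspect the image histogram. A minimal NumPy sketch (the 245/5 limits are the guideline values above):

import numpy as np

def illumination_ok(image_u8, low=5, high=245):
    """Check an 8-bit image against the exposure guidelines above:
    darkest pixels just above zero, brightest just below saturation."""
    lo, hi = int(image_u8.min()), int(image_u8.max())
    return lo >= low and hi <= high, (lo, hi)

# Example with a synthetic test frame:
frame = np.clip(np.random.normal(128, 40, (480, 640)), 0, 255).astype(np.uint8)
ok, (lo, hi) = illumination_ok(frame)
print(f"min={lo} max={hi} -> {'OK' if ok else 'adjust lighting'}")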

CHARACTERISTICS OF LED LIGHTING These days, nearly all machine vision illumination is based on LEDs. They reach maximum luminous flux very quickly after power-on and provide long life, high efficiency, and small form-factor. In addition, the emitted light has a consistent spectral curve which does not vary with power or during its lifespan. The brightness of LED sources is proportional to electrical current over a wide range of operating conditions. However, LEDs do have significant differences in maximum parameters and low-power switching thresholds. Therefore, it makes sense to regulate LEDs via current and not via voltage. This is where the strobe controller shows its real value.

MAIN FEATURES OF A STROBE CONTROLLER The principal job of a strobe controller is to switch power to the light and to regulate its brightness. The light must be powered synchronously with the camera exposure, and the exposure signal from the camera can be used for this. Some lighting controllers include sophisticated timing capabilities which enable the controller to become the timing hub for the machine vision system and trigger devices such as reject gates. Some controllers also provide features such as adjustable current ramps and recognition of the connected LED head.

OVERDRIVING LED LIGHTING An important benefit of a strobe controller is the ability to overdrive to achieve much more brightness than the manufacturer’s specification. LEDs are sensitive to heat and if power dissipation causes them to overheat, they may be damaged. However, the dissipated thermal power is an integral over time so the LED can be safely driven with a significantly higher current than the manufacturer’s specification so long as it’s done in a controlled way. While an exposure is not occurring, the LED can be turned off and allowed to cool. The result is higher light intensity during the exposure when the light is needed. Overdrive is particularly useful when exposure times are short because brighter intensity will be available.
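Since the damaging quantity is time-averaged heat rather than instantaneous current, a simple duty-cycle model shows why strobing permits overdrive. A naive sketch, assuming the time-averaged current must stay at or below the continuous rating – real overdrive limits come from the controller and lamp vendors’ specifications:

def max_overdrive_factor(exposure_us, period_us):
    """Naive thermal model: keep the time-averaged current at or below
    the continuous rating, so allowed pulse current scales with 1/duty."""
    duty = exposure_us / period_us
    return 1.0 / duty

# A 100us strobe repeated every 10ms is a 1% duty cycle:
print(f"~{max_overdrive_factor(100, 10_000):.0f}x the rated continuous current")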

CURRENT DRIVE CONTROLLERS AND PWM CONTROLLERS Since LEDs produce an illumination that is proportional to current, the most logical method of controlling LED lighting is via a variable-current lighting controller. Current control has many benefits but sometimes requires careful management of power dissipation. An alternative is to use a Pulse Width Modulation (PWM) controller, which uses an entirely different principle based on voltage drive. With PWM strobe controllers, the maximum current stays constant and the intensity is managed by pulsing the LED on and off several times during the exposure. The choice of current control or PWM control for a strobe controller is an important one. You can read more about the differences between current drive and PWM drive at www.gardasoft.com/voltage-drive-current-drive/. MV www.gardasoft.com





SMART BIN PICKING SETS AUTOMATION PACE Automated bin picking has surpassed its previous limits, and customers can now easily enter the front line of automation. Photoneo, a Slovakia-based company developing robotic intelligence software and industrial 3D vision, has released Bin Picking Studio 1.4, which makes the process simpler, as Andrea Pufflerova and Adrian Kratky explain.

Advanced automation that features sophisticated systems requiring minimum intervention is a great way to respond to rising labour costs, manpower shortages and the risk of workplace injuries. To a significant degree, it also helps boost productivity and cut production costs. Bin picking is often described as the key challenge in computer vision and robotics – “the holy grail in sight”. It refers to the robotic detection and picking of randomly arranged objects from within a bin using a suction gripper, parallel gripper or other end effector. A prerequisite for successful bin picking is a robot equipped with high-quality machine vision and sophisticated software. The Bin Picking Studio from Photoneo is a powerful bin picking solution combining 3D robot vision enabled by PhoXi 3D Scanners and smart, in-house developed software. Supporting a large database of major robot models, the solution enables customers to pick objects that generally pose challenges for most bin picking applications. These include parts that are very small (with a scanning range between 161 and 205mm), made of metal or reflective material, overlapping, or in random poses. To increase productivity and effectiveness, the solution allows customers to pick up to four object types and use up to four scanners in a single scenario.


The new Bin Picking Studio 1.4 was developed with the aim of pushing the world of automated bin picking to an entirely new level. A team of programmers, robotic engineers and other experts set out to tackle the challenges of bin picking and introduce a game-changing, advanced system with even stronger performance and easier use.

DEFINING THE ROBOT’S ENVIRONMENT The robot’s environment comprises any construction, equipment or other objects placed within the robot’s reach. To prevent collisions, it is important to first define these potential obstacles with CAD models so that the robot “understands” its surroundings. One of the major tools introduced with Bin Picking Studio 1.4 is the new Environment Builder. To build a 3D model of the robot’s environment and the whole working cell, users do not need CAD files but can instead draw simple collision objects directly in the Bin Picking Studio. The tool enables them to draw a 3D model, test the robot’s movements, and verify the new model by comparing it to a virtual point cloud gained from connected and calibrated vision systems.


CALIBRATION As already suggested, calibration of the deployed vision systems is crucial - not only for correct verification of the created 3D model but for the whole bin picking process. In fact, it is the key to precise and successful bin picking. Proper calibration translates an object’s coordinates from the camera space to the robot space and thus ensures perfect alignment of the scanning volume with the robot’s working volume. Because it is so important to perform this step correctly, it is essential to make the process easy and user-friendly so that everyone can manage it without professional help. Photoneo provides a pre-configured calibration ball or a special marker pattern, depending on the calibration type, to help customers with the process.
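What calibration ultimately produces is a rigid transform that maps camera-frame coordinates into the robot frame, so that a part localised by the scanner can be grasped. A minimal sketch of how such a transform is applied once it has been found (the rotation and translation values below are placeholders, not a real calibration result):

import numpy as np

# Calibration result: 4x4 homogeneous transform, camera -> robot frame.
# Values are placeholders; a real result comes from the calibration
# procedure (e.g. using a calibration ball or marker pattern).
T_cam_to_robot = np.array([
    [0.0, -1.0, 0.0, 450.0],   # rotation part (3x3) plus
    [1.0,  0.0, 0.0, -80.0],   # translation in mm
    [0.0,  0.0, 1.0, 620.0],
    [0.0,  0.0, 0.0,   1.0],
])

def to_robot_frame(p_cam_mm):
    """Map a 3D point localised in the scanner frame into robot coordinates."""
    p = np.append(np.asarray(p_cam_mm, dtype=float), 1.0)  # homogeneous
    return (T_cam_to_robot @ p)[:3]

print(to_robot_frame([100.0, 25.0, 300.0]))  # pick point in robot frame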

MOUNTING THE “EYES” DIRECTLY ONTO THE ROBOTIC ARM Besides the classic construction with a scanner mounted in the robotic cell, usually above the bin, Bin Picking Studio 1.4 also allows mounting it directly onto the robotic arm. In this case, it needs to be positioned behind the very last joint – on the gripper, for instance. This approach is a perfect option if one needs to scan a large bin with a smaller scanner to get great image detail and high resolution. It also enables scanning the scene from custom angles and variable viewpoints or making detailed scans of the bin corners. In contrast, if the scanner is mounted above the bin, it needs to be larger, depending on how much room is needed for robotic manipulation.


FAST AND SAFE PERFORMANCE

For fast and steady localisation of objects in the bin, Bin Picking Studio uses Photoneo’s Localisation SDK 1.3 software. After localising a part, the system plans the robot’s path and calculates its trajectory. The path planning process can be optimised with the new debugging Inspector tool. Users can examine the status of all localised objects, visualise the robot motion and computed trajectory, see the picking path stages as well as the grasping positions, and make changes to enhance the path planning performance.

Smooth bin picking without collisions is ensured by advanced collision-checking algorithms. These compute collisions even for the de-approach stage, when the robot moves with the grasped part to a predefined place, and also collisions with the rest of the objects localised in the bin. These features of Bin Picking Studio 1.4 are only a fraction of the smart enhancements made across the entire system. Bin picking has reached completely new spheres, enabling users to automate applications that were inconceivable a few years ago. The means to boost the productivity of production lines are all at hand – one just needs to press the “automation” button. MV



CAN CENTRAL EUROPE BECOME THE NEXT INDUSTRY 4.0 LEADER?

Central and Eastern European countries rarely top the lists of leaders in innovation or digitalisation, but they are responsible for some important inventions. Insulin, modern contact lenses, parachutes and Skype were all developed in this region, highlighting the potential for Eastern Europe to make valuable contributions to modern-day industry. So, is this area ready for the innovations of Industry 4.0? Przemyslaw Falek, sales manager for Eastern Europe at EU Automation, examines the current initiatives.

Advances in technology such as digital platforms, automation and smart materials are currently disrupting the manufacturing industry. Adoption of these technologies can also significantly impact globalisation. Manufacturers can invest in artificial intelligence, internet-connected devices and other forms of automation so that they can communicate and collaborate with businesses across the world. Adopting Industry 4.0 technologies can also benefit a country’s economy by introducing new jobs, better services and new export opportunities. Some countries, such as Germany, Japan and the United States, have already benefitted from advanced technologies. These countries have invested heavily in robotics, sophisticated software and other forms of automation to establish themselves as leaders in Industry 4.0 and secure their economies for the future. Countries that do not innovate risk falling behind, to the detriment of their economies.

LATE TO INNOVATE Central and Eastern European countries such as Slovenia, Slovakia, Hungary and Poland have been slow to adopt Industry 4.0. The region transitioned from socialism to the free market in the 1990s, which led these countries to significantly restructure their infrastructure to introduce new industries, jobs and other opportunities. Since this major restructure, the group has not invested significant funds in automation, which could be detrimental to the economy, employability and reputation of the area. If each country does more to promote digitalisation in manufacturing, improve education in the new skills required for future manufacturing and encourage more businesses to innovate, they can increase the GDP of the area and remain competitive in the European market.



So, what are these countries doing to keep up with Industry 4.0 leaders?

SMART INDUSTRY Industry currently represents about 25 per cent of Slovakia’s GDP and one in three jobs in the country are in manufacturing. Despite its importance to the economy, a report by the World Economic Forum suggested that the country is not prepared for digital transformation. The Slovakian Ministry of Economy created the Smart Industry Platform in 2016 to change its position and become an innovator. The platform was inspired by German Industry 4.0 initiatives and acts as a central authority that coordinates the promotion and adoption of new infrastructure and emerging technologies. Industry, academia and the Slovakian Government have come together to share their expertise and increase digital awareness of companies. The platform focuses on collaboration, research and development and digital transformation to strengthen the economy.

NATIONAL TECHNOLOGY PLATFORM In the 2017 Digital Economy and Society Index (DESI), Hungary ranked 21st out of the 28 EU member states for digital readiness. The Industry 4.0 National Technology Platform was created to change this and aims to boost reindustrialisation in the country. The initiative aims to prepare the industrial sector for Industry 4.0 and increase the global competitiveness of the country. In 2016, Mihály Varga, the Minister of National Economy, introduced the Irinyi Plan, which aims to make Hungarian industry one of the most highly developed in the EU by 2020. The main objectives of the plan are to foster information exchange, accelerate digitisation and respond to any challenges that act as barriers to innovation.

A DIGITAL COALITION In 2016, key stakeholders in trade, research and development and the public sector across Slovenia came together to establish the Digital Coalition. The initiative aims to accelerate the digital transformation of the country by 2020, focusing both on industry and on improving education to train the next generation of innovative workers. The coalition has invested in improving online skills, developing digital services and improving internet infrastructure. The project will also focus on developing infrastructure by investing in big data, cloud computing and mobile technologies that can support cross-sector business collaborations and build smart communities.

THE MORAWIECKI PLAN Poland is the sixth-largest manufacturing country in the EU, and industry contributes 27 per cent of the country’s GDP. The country is already regarded as innovative because of its dedication to research and development and the number of high-tech companies in large industries such as automotive and food manufacturing. To help industry develop further and become a leader in future technologies, the Polish Government launched its Industry 4.0 platform. This platform is part of the Morawiecki Plan, developed in 2016 to invest 235 billion euros over 25 years to improve quality of life in Poland. The publicly funded platform aims to raise awareness of the opportunities Industry 4.0 can bring to Polish businesses and provide the infrastructure required to adopt these technologies. Manufacturers wanting to adopt advanced technologies should research how ready their country is for Industry 4.0. A lack of infrastructure, skills or support from local initiatives or government can be a barrier to digitalisation. By making these bodies aware of their needs, manufacturers can get the support they require and, at the same time, help their region remain competitive. While it might not seem important to know where contact lenses or insulin were invented, manufacturers should take notice of how these countries are developing. Slovakia, Hungary and other countries in Central Europe are investing in infrastructure, knowledge and technology to support their economies and ensure their businesses remain competitive as more countries embrace Industry 4.0. Businesses across the world can look to this region for inspiration on how to invest for the future. MV



A WEIGHT OFF YOUR SHOULDERS

IVECO reaps the benefits of Comau’s MATE exoskeleton as it powers the workforce to new levels and reduces fatigue with robotic technology and an assessment app.

The technology used in the industrial world should always be considered a tool to support people’s activities, improving productivity, efficiency and working conditions while maintaining employee health. More and more manufacturers are investing resources in the search for solutions to improve the ergonomics of all workstations – in particular, those characterised by physically stressful tasks. The recent introduction of Comau’s MATE exoskeleton on one of the production lines at IVECO’s Brescia plant is part of the company’s desire to preserve the health of its employees while improving their comfort and, consequently, the quality of their work.

OVER 100 YEARS OF AUTOMOTIVE HISTORY The Brescia plant opened in the early 1900s and has a long and proud history in the automotive industry. Having been part of the Fiat Group, IVECO was formed in 1975 with the merger of companies based in France, Italy and Germany. Since then, IVECO has always played a strategic role, while evolving and adopting new production strategies and technologies. Good evidence of this is the Eurocargo, one of IVECO’s best-selling vehicles, which was created in Brescia in 1991 and is still being produced at the plant, even after being renovated and restyled several times. The Eurocargo line is one of the highlights of the Brescia plant. It is a very complex system, as everything starts in the bodywork department, where stamped panels are welded together to build the cabin of the vehicle. It then moves to the painting area, a fully automated line where six robots apply enamel to the body. The cabin is moved to the upholstering line, where the seats and dashboard are assembled. Meanwhile, the chassis is built in another department before being paired with the cabin. The frame enters another assembly line where the engine, gearbox, power pack, suspension system and mechanical components are mounted. The cabin and chassis complete their cycles at the same time, so that they can be paired and the Eurocargo vehicle is ready to hit the road.

TOTAL EFFICIENCY


“The production of Eurocargo makes us proud, because its huge range of configurations allows us to offer our customers about 13,000 different versions, with a projected repeatability index of about 2.85 for 2019,” explained Marco Colonna, the Brescia plant manager. “It’s very rare that two identical vehicles are delivered at the same time. Obviously, keeping production efficiency high with such significant product variability is not simple, and requires a structured logistics organisation, because many materials are delivered to the various stations on a just-in-time basis and according to the production cycle.”

Such accurate and efficient organisation is the result of applying the World Class Manufacturing (WCM) philosophy, which incorporates Total Productive Maintenance (TPM), Lean Manufacturing and Total Quality Management. As Paolo Gozzoli, WCM Plant Support at IVECO, explained: “WCM is a production approach that involves the company at every level and function, from production to safety, logistics and maintenance activities. The goal is to achieve efficiency in every department in an integrated way, by means of tests and tools designed to manage specific inefficiencies.” The natural consequence of applying WCM was the redeployment of some employees to the interior construction of minibuses with Daily engines. It is an almost artisanal production that IVECO is standardising as much as possible and, like any other department, it is subject to constant analysis and research aimed at the continuous improvement of processes. Gozzoli added: “The aspects examined also include the working conditions of our operators. This department features many activities that must be carried out with the arms raised – a demanding condition which requires greater attention, considering that the average age of IVECO employees is around 49. To make the task less burdensome, we started a collaboration with the Ideal Production System (IPS) division, whose task is to search for new ideas and tools from an Industry 4.0 perspective to guarantee the best operating conditions at all times for the people and the plant as a whole.”

A FORESEEN SUCCESS

IVECO’s research was completed in 2018 on the occasion of the Automatica fair in Munich, where Comau’s MATE exoskeleton showed its capabilities in a number of tasks.

Gozzoli confirmed: “During the event, we considered different types of exoskeletons, but Comau’s MATE immediately stood out as the ideal solution for our needs. First of all, we required a tool which could help our operators in activities involving the upper limbs without reducing their mobility through its structure or size. Another crucial feature was our conviction that a wearable device should be easy to wear and lightweight, considering summer heat as a detrimental factor in terms of comfort. Comau’s exoskeleton met all these requirements.”

IVECO then assessed the potential of MATE, identifying the most suitable tasks for an exoskeleton – an extremely complex operation, made simpler by a specific app developed by Comau, which quickly and objectively identified how the exoskeleton could help in carrying out a given task.

REDUCED FATIGUE IVECO provided MATE to some of the operators involved in the construction of minibuses. They used the exoskeleton for a few hours and immediately noticed a clear improvement in their working conditions. In one station, the operator is assisted by MATE in the placement of reinforcements and accessories in the upper part of the minibus. For these tasks, the operators have to keep their arms raised overhead, resulting in trapezius and deltoid muscle fatigue. This was immediately reduced with the introduction of the exoskeleton.

The improvement is witnessed every day by the operators who count on the support of MATE, including Antonio Maccarinelli, the team leader of Section 1 of the minibus line.


He said: “I have used the MATE exoskeleton for a few months now, and I must say that I immediately found relief, especially for my shoulders. At the end of my shift I always noticed that this apparently minimal effort took its toll on my body. Now, my shoulders are in an excellent condition. The device is really easy to wear, and can be adjusted to different builds as I share it with other operators.” MV




ENABLING MANUFACTURING FLEXIBILITY:

THE VEO FREEMOVE Veo Robotics’ FreeMove™ system makes close collaboration between humans and industrial robots possible. The system introduces safe and flexible manufacturing processes that reduce production and retooling costs, while granting manufacturers the ability to respond to all kinds of demand fluctuations. Alberto Moel, vice president of strategy and partnerships at Veo Robotics, explains.

At Veo Robotics, our fundamental assertion is that full-on automation is inflexible and fragile, and that high levels of automation can be terribly uneconomic. Conversely, an all-human manufacturing approach is also suboptimal under many reasonable conditions – the best outcome is a mix of humans and machines safely working together. The underlying economic reason is that combining the complementary strengths of humans and machines gives the entire system valuable flexibility to respond to changing conditions and uncertainty. If a manufacturing process is fully flexible, it is easy to quickly ramp production up or down depending on demand, and that adjustment can be done without expense, so unit economics are not affected. If, on the other hand, the process is inflexible, adjusting production volumes up or down will entail costly fixturing and reprogramming, and unit economics will suffer. The takeaway is that production flexibility has value, and its value is highest when process requirements are uncertain and lowest when they are certain. The need for flexibility is increasing as product variability increases and production runs get shorter. And as humans are infinitely flexible, one of the easiest ways to incorporate flexibility is to add more (not less) human input into the manufacturing process. Building flexibility into production processes by making them safe for human and machine collaboration is almost always going to be the most cost-effective choice. Using tools like the Veo FreeMove™ system, manufacturers will be able to automate the tasks that are most efficiently done by machines while retaining the flexibility of human workers to safely manage tasks that require adaptability, dexterity and judgment. Essentially, FreeMove™ provides many of the benefits of fully automated and fully manual approaches, without many of the costs each of those approaches entails. The Veo FreeMove™ system provides three very specific and quantifiable sources of value:

• Lower overall workcell design and time costs, and lower overall capex, with the side benefit of faster and lower-cost reconfiguration and redesign;
• Faster fault recovery;
• New forms of working and human-machine collaboration not possible before, such as dual fixturing or in-cycle human-machine interaction.

LOWER WORKCELL CAPEX AND LOWER DESIGN AND TIME COSTS In collaboration with Advanced Robotics for Manufacturing (ARM) and a major manufacturer of consumer-packaged goods, we developed detailed models of four palletising alternatives: a fully manual approach, a fully automated approach, a PFL robot-based palletiser, and the Veo FreeMove™ solution. After examining the capital expenditures and commissioning metrics required to get the four different palletising solutions up and running, we concluded that the Veo solution is 40 per cent less expensive to install than the other automated solutions, while retaining a short process cycle time, a much shorter payback time, and quicker design, development and implementation times. The speed and lower costs of the FreeMove™ solution also had benefits when it came to reconfiguring the palletising workcells.

FASTER FAULT RECOVERY We looked at the impact the frequency and duration of faults have on per-unit economics in this palletising case study. When a fault occurs with a traditional fully automated palletiser, the workers monitoring the system must complete a series of steps that could take over 10 minutes. First the workers must stop the system, then they must find the person with the key, open the door of the robot’s cage, reset the fault, exit the cage, verify that no one is in the cage, lock the door, write the fault up in the logbook, and then restart the system. With the Veo solution, the robot is not caged and human workers can quickly and safely step in to correct faults in just a couple of minutes. Quicker fault recovery enables some serious savings, both in per-pallet costs and in overall factory throughput. Because both the fully automated palletiser and the Veo palletiser are “driven” off the same robot palletiser arm and therefore produce the same throughput, we can see the stark impact shorter or longer fault recovery times have on productivity.


Every time the palletiser is down, it becomes a bottleneck for the rest of the system. Assuming a 10-minute fault recovery time for the fully automated palletiser, the decline in number of pallets per shift as a function of faults per hour is quite steep. That lost throughput could result in a big revenue loss. On the other hand, assuming a one-minute fault recovery time with the Veo solution, although the number of pallets per shift necessarily declines as the number of faults per hour increases, the lost throughput is minimal.
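The throughput sensitivity described here is easy to model: each fault steals its recovery time from the shift, so pallets per shift scale with the remaining uptime. A minimal sketch – the shift length, base rate and fault frequencies are illustrative assumptions, while the 10-minute and one-minute recovery times come from the case study:

def pallets_per_shift(faults_per_hour, recovery_min, shift_hours=8.0,
                      pallets_per_hour=30.0):
    """Pallets produced once fault-recovery downtime is subtracted."""
    downtime_h = shift_hours * faults_per_hour * recovery_min / 60.0
    return max(0.0, (shift_hours - downtime_h) * pallets_per_hour)

for faults in (1, 2, 4):
    caged = pallets_per_shift(faults, recovery_min=10.0)
    veo = pallets_per_shift(faults, recovery_min=1.0)
    print(f"{faults} faults/h: caged {caged:.0f} vs uncaged {veo:.0f} pallets")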

NEW FORMS OF WORKING AND HUMAN-MACHINE COLLABORATION When humans and high-speed, high-payload robots are able to work in close proximity, they can complete tasks in parallel that would otherwise need to be done in isolation, improving the efficiency of production lines. For example, in our palletising case study, humans handle some of the tasks the robot palletiser cannot, such as installing bumpers. This in-cycle human-machine interaction saves design time and effort, and eliminates a needlessly complicated $175,000 custom piece of machinery whose sole purpose is to put bumpers on pallet corners. Currently, commercial state-of-the-art safety systems do not allow for this kind of safe human-robot interaction. The Veo FreeMove™ solution, once certified and widely available, will introduce flexibility in manufacturing processes and reduce costs across the board, while granting manufacturers the ability to respond to all kinds of demand fluctuations. Close collaboration between humans and machines will define the future of the manufacturing industry. MV



OPTIMISING WAREHOUSE LOGISTICS WITH AUTONOMOUS MOBILE ROBOTS

Use robots where it makes sense and leave people to do higher-value tasks. That’s the philosophy behind the design of the entire logistics flow of ICM, one of Scandinavia’s leading suppliers of personal protective and technical equipment and work environment solutions. RARUK Automation explains the benefits of using autonomous mobile robots (AMRs).

The ICM high-tech warehouse, with its myriad of pallet transport operations running from 7am to 10pm, is located in Odense in Denmark. Every year, tonnes of goods arrive at the logistics centre on a total of 31,000 pallets, and ICM staff manage 100,000 orders, most of which are next-day deliveries. Space is limited, customers are impatient and competition is fierce, so time, personnel and space must be utilised optimally. Thanks to an investment in three MiR1000 AMRs, three employees now each save several hours a day. They no longer have to spend time manually moving pallets from a stacker to the aisles in the high-rise warehouse. Instead, they can place the pallets on special MiR racks, from which the AMRs collect the pallets and transport them to the aisles inside the high-rise warehouse. The MiR robots leave the pallets at the end of the aisles to be collected by high-reach trucks that place them in the relevant racks.



“The high-reach truck operators automatically report when they have taken a pallet from a rack, so I can just press a button on the tablet screen and send one more MiR robot on a mission. This way the robots ensure the high-reach trucks are always supplied with pallets,” says Jesper Lorenzen, warehouse assistant and responsible for goods reception at ICM A/S.
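The one-button dispatch Lorenzen describes can also be scripted. MiR robots expose a REST interface for queuing missions; the sketch below is only an illustration, and the host address, credentials and mission GUID are placeholder assumptions that should be checked against MiR’s API reference for the installed firmware.

```python
# Hedged sketch of queuing a predefined MiR mission over the robot's REST
# interface. Host, token and mission GUID are placeholders; verify the
# endpoint and payload against the MiR API reference for your firmware.
import requests

ROBOT_API = "http://mir-robot.local/api/v2.0.0"        # placeholder address
HEADERS = {
    "Authorization": "Basic <base64-credentials>",     # placeholder credentials
    "Content-Type": "application/json",
}
PALLET_PICKUP_MISSION = "<guid-of-mission-defined-in-mir-ui>"  # placeholder

def send_robot_on_mission(mission_id: str) -> None:
    """Queue a predefined mission, e.g. collect a pallet from a MiR rack."""
    resp = requests.post(f"{ROBOT_API}/mission_queue",
                         json={"mission_id": mission_id},
                         headers=HEADERS, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    send_robot_on_mission(PALLET_PICKUP_MISSION)
```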

40 HOURS A WEEK SAVED
Using a map on a tablet in the truck, the truck operators can see at all times where the small robots are, and the AMRs make themselves noticeable with audio signals and lights in the busiest areas. This close collaboration between the trucks and the AMRs matters: in a company with constant internal traffic, communication between vehicles is vital to stop different machines blocking each other’s paths. ICM has made a dedicated route for the AMRs, freeing space for other traffic in the logistics centre. Previously, space was very cramped because of the many operations with manual stackers on the main traffic routes, which have now been replaced by the MiR robots. MiR’s fleet management software, MiR Fleet, also ensures that tasks are optimally distributed between the AMRs: the robot that can carry out a given task in the shortest time is the one chosen. It also makes sure that the three MiR1000s automatically move to a charging station and charge up between tasks, so that downtime is minimised.
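The dispatch rule described for MiR Fleet, giving each new task to the robot that can finish it soonest, can be sketched as follows. This is a simplified illustration of the idea, not MiR’s actual scheduling algorithm, and the completion-time estimate is a deliberately crude placeholder.

```python
# Simplified sketch of a shortest-completion-time dispatch rule, the idea
# MiR Fleet is described as applying. Not MiR's actual algorithm; the
# completion-time estimate below is a deliberately crude placeholder.
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    busy_minutes: float            # time until its current task finishes
    distance_m: float              # travel distance to the new task
    speed_m_per_min: float = 60.0  # ~1 m/s, roughly an AMR's walking pace

    def estimated_completion(self) -> float:
        return self.busy_minutes + self.distance_m / self.speed_m_per_min

def assign(task: str, fleet: list[Robot]) -> Robot:
    """Give the task to the robot with the earliest estimated completion."""
    best = min(fleet, key=Robot.estimated_completion)
    print(f"{task} -> {best.name} ({best.estimated_completion():.1f} min)")
    return best

fleet = [Robot("MiR1000-A", 0.0, 120.0),
         Robot("MiR1000-B", 4.0, 30.0),
         Robot("MiR1000-C", 1.0, 90.0)]
assign("pallet to aisle 7", fleet)   # -> MiR1000-A (2.0 min)
```

An idle robot further away can still win over a busy robot nearby, which is why fleet-level allocation beats simply picking the closest vehicle.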

The AMRs have not just increased efficiency at ICM. According to Brian Brandt, warehouse manager at ICM, they have also improved the working environment, and he sees great potential in investing in technology that increases both capacity and job satisfaction.

Overall, the AMRs have saved approximately 40 hours a week at ICM, time that staff previously spent on internal transport and moving goods between the receiving area and the positioning areas. These employees can now focus on higher-value activities such as planning and optimisation. Assessing, handling and prioritising the pallets and their contents is a complicated task that requires insight and experience, because many parameters must be taken into consideration, so these tasks are still done by people. “The robots have saved time, which we can now use to optimise the warehouse and fine-tune flow. We have become used to the new technology and have learned to work in a completely different way. The more we apply it, the more time we save through automation using AMRs,” says Lorenzen.

USER-FRIENDLY ROBOTS MAKE THE JOB MORE ATTRACTIVE
Logistics at the high-rise warehouse now proceed smoothly, using a modern mix of employees, AMRs and trucks. ICM’s setup consists of three AMRs, four manned high-reach trucks, 10 manual stackers and 26 dedicated employees.

“It’s just so much fun working with AMRs. Being able to move something from A to B without even touching it, that’s really cool. The design of the MiR robots is so simple and user-friendly that I could take a new colleague in off the street, and they too would find them logical to use,” says Brandt as he observes a MiR1000 robot moving past, carrying a 600 kg load of cleaning cloths on a pallet.

ROBOTS MAKE WORK MORE ATTRACTIVE
Søren Jepsen, supply chain director at ICM A/S, explains that devising the optimal workflow for the entire flow of traffic and transport of goods in the logistics centre has been a learning process. “Our warehouse uses the chaotic storage principle, managed by a warehouse management system. We must be geared to drop everything in order to deliver within 24 hours to our customers in Denmark. This means it’s about using our resources shrewdly. We’re investing in new technology in order to safeguard our staff and to attract new, talented people,” says Jepsen. At ICM, the management sees clear potential in automating more processes. Right now the focus is on the flow from the goods receiving area to the storage aisles but, in the long term, ICM will also automate transport from picking to the delivery of goods and get even more benefit from the robots. MV




FIVE YEARS OF YUMI

How ABB’s small robot is answering some big questions around collaboration



Launched in 2015, ABB’s YuMi has helped to transform the face of robotic automation, opening new avenues over and above even ABB’s original expectations. To mark five years of YuMi, Andie Zhang, global product manager – collaborative robots for ABB Robotics, shares five things about the story behind YuMi and the impact that YuMi is having on people’s perceptions of robotic automation.

MVPRO: WHERE DOES THE YUMI STORY BEGIN?

Andie Zhang (AZ): ABB’s quest for a collaborative robot began with our researchers being tasked with developing a robot that could be easily deployed alongside human workers in an industrial environment. Robots are simply a tool that can be used to help people do their jobs better. The best possible scenario is to combine the ability of humans to think around a task with the dexterity and consistency of robots to achieve enhanced levels of productivity, efficiency and performance. It was this that inspired us to start our quest to create a collaborative robot that could be safely used alongside human workers to help them do their jobs more effectively.

MVPRO: WHAT WAS THE ORIGINAL CONCEPT BEHIND THE DEVELOPMENT OF YUMI?

AZ: Our vision was to create a collaborative robot that could work hand in hand with humans without further protective measures or barriers, that would be simple to install and operate, and whose design would mimic the proportions and movements of a human operator to maximise user acceptance. It was obvious at the outset of the project that a successful collaborative robot needed to have the human factor built in from the start. For this reason, we paid a lot of attention to factors like size, safety, speed, movement, simplicity and design to develop something that could work alongside people without harming, intimidating or confusing them. Over the eight years between 2007 and 2015, we developed and trialled various designs, eventually arriving at the one that would become YuMi.





The design of YuMi has a lot of similarities to the structure of a human torso. A great example of this is the design of YuMi’s arms. With their structure and padding, they were deliberately designed to mimic the capabilities of a human arm, from moveable wrists through to flexible elbows, allowing seven axes of movement. When it comes to picking up and gripping objects, YuMi’s ‘hands’ feature flexible grippers that work in a similar way to fingers, allowing it to pick up a variety of items weighing up to half a kilogram. The size of YuMi has also been designed to match the upper part of a human body as closely as possible. This allows it not only to work next to a person without overwhelming them, but also to be moved easily between locations, so one robot can quickly be transferred between different production lines if necessary.

MVPRO: WAS THERE A GAP IN THE MARKET FOR A ROBOT LIKE YUMI?

AZ: From the success we’ve enjoyed over the last five years, and the range of industries and applications YuMi has found a role in, there clearly was a gap in the market. Our original intention to create a robot that could be used for small-parts assembly in the electronics industry has been massively surpassed. Since 2015, we have seen YuMi being used to do a range of things that even we never expected, from spray-painting customised headphones to testing ATM functionality. There are even YuMi robots being used to scoop gelato and hand it to eager customers in an ice-cream shop in Italy. Many companies aren’t aware that they have tasks that could be automated, either because they take it for granted that those tasks have always been handled by humans, or because they think automation is better suited to things such as moving heavy objects or performing complex tasks, in which case they tend to be put off by the perceived costs of implementing robotic solutions. Both assumptions are wrong. Automation is suited to a range of tasks, large or small, and can help companies get more out of their people by taking them away from dull, repetitive or potentially unsafe tasks and putting them onto higher-level duties that make use of their intelligence and decision-making abilities. Ultimately, I think what we have created with YuMi is a robot that finds its own niches in the market, inspiring people to keep pushing the boundaries of what can be achieved with robotic automation while also finding ways to make people’s jobs more rewarding.

MVPRO: LEADING ON FROM THIS, DO YOU THINK THAT YUMI IS HELPING PEOPLE TO BECOME MORE RECEPTIVE TO THE IDEA OF COLLABORATIVE ROBOTS?

AZ: Definitely. The great thing about YuMi is that it quite literally removes the barriers that have traditionally separated robots from people. The extensive safety features incorporated in YuMi, from its padded, rounded design through to its reduced speed and collision detection technology, generally eliminate the need for safety fences, allowing it to work in close proximity to people, whether on the factory floor, in a medical lab, or making coffee in a department store.



This safe and friendly design helps to remove the psychological barriers too. If people feel more comfortable around robots, they are more likely to accept them. Far from feeling threatened by having a robotic colleague, people who work with YuMi robots tend to recognise the value they bring and view them affectionately; I have heard of many of our robots even being given nicknames! In short, YuMi has proven to be a great way of encouraging companies to look into robotic automation. Experience has shown that YuMi robots attract attention from anyone visiting the factory or passing the store window, which in turn has had a positive impact on YuMi users and often leads to more robotic automation being implemented in a workplace.

MVPRO: FINALLY, CAN YOU GIVE US FIVE INTERESTING FACTS THAT PEOPLE MAY NOT KNOW ABOUT YUMI?

AZ: The first is that YuMi wasn’t always the robot’s working name. Over its eight years from concept to final product, the robot we now call YuMi had a range of different names, including Quasimodo, Esmeralda and Frida. The name YuMi is in fact an abbreviation of ‘You and Me’, which we felt really reflected the sort of close working relationship we wanted to build between our robot and its human colleagues. A second fact that many people may not know is that YuMi’s initial design came from an external designer from a famous art school in Stockholm. This was a deliberate decision by ABB to get away from conventional robot design and create something truly different to anything that had gone before. The success of this approach speaks for itself, with YuMi winning the Red Dot design award for its friendly and functional industrial design.

A third fact is that the design of YuMi’s elbows gives it the freedom of movement to work in tight spaces. Able to bend and rotate in the same way as a human elbow, YuMi’s elbow can help to reduce the amount of space needed – it also allows YuMi to do the chicken dance if required! A fourth thing that people may not know about YuMi is the precision of its movement. One application that proves this is a YuMi robot being used to thread a bead bracelet. The hole in each bead had a diameter of 1mm and the transparent plastic thread was just 0.5mm across. The precise movement of the arms, coupled with the vision system built into YuMi’s wrist, enabled the beads to be picked up one by one and accurately threaded, even though the thread was non-rigid; this is something that even the most skilled worker would struggle to do repetitively. A final fact that many people might not know about YuMi is its growing role in healthcare applications. We have YuMi robots being used in everything from smearing test samples on petri dishes through to assisting in pharmaceutical discovery in an application at the Houston Medical Centre in Texas. For more about YuMi, visit the ABB website and search collaborative robots. MV








