HOW HYPERSPECTRAL IMAGING IS CONQUERING THE INDUSTRY
INSPEKTO - WHY QA MANAGERS LOVE THE S70
FRAMOS - THE FOOD FIGHTERS
GREEN VISION - ECO-DRIVEN INITIATIVES
ISSUE 20 - APRIL 2020
MACHINE VISION & AUTOMATION | mvpromedia.eu
Six Essential Considerations for Machine Vision Lighting
5. Make Setup Easy
A well-designed lighting controller brings significant benefits to machine vision systems. Isolated trigger inputs make connection to signal sources easy, and a front panel provides quick configuration. A quality controller has minimal delay between trigger signal and light pulse, and should provide full Ethernet compatibility with access to live performance metrics. Gardasoft Vision has used its specialist knowledge to help machine builders achieve innovative solutions for over 20 years.
To read more about the Six Essential Considerations for Machine Vision Lighting see www.gardasoft.com/six-essential-considerations
Semiconductor | PCB Inspection | Pharmaceuticals | Food Inspection
Telephone: +44 (0) 1954 234970 | +1 603 657 9026 Email: vision@gardasoft.com
www.gardasoft.com
MVPRO TEAM
Lee McLaughlan, Editor-in-Chief - lee.mclaughlan@mvpromedia.eu
Cally Bennett, Group Business Manager - cally.bennett@mvpromedia.eu
Alex Sullivan, Publishing Director - alex.sullivan@mvpromedia.eu
Spencer Freitas, Campaign Delivery - spencer.freitas@cliftonmedialab.com
Becky Oliver, Graphic Designer
Contributors: Glen Ahearn, Neil Ballinger, Subh Bhattacharya, Harel Boren, Markus Kohnle, Nigel Smith, Jonathan Wilkins, Martyn Williams

Visit our website for daily updates: www.mvpromedia.eu

CONTENTS
4 EDITOR’S WELCOME - Out of darkness cometh light
6 INDUSTRY NEWS - Who is making the headlines
9 PRODUCT NEWS - What’s new on the market
14 FRAMOS - The Food Fighters
17 TELEDYNE DALSA - A perfect vision of food production
18 IDS - Industrial intelligent camera creation
22 XILINX - The future of medical imaging
24 INSPEKTO - Why QA managers love their jobs
28 BASLER - More than just cameras
30 GARDASOFT - Machine Vision: An enabling technology
32 ADVANCED ILLUMINATION - Custom configurations
34 HYPERSPECTRAL IMAGING - Conquering the industry
38 AUGMENTED REALITY - The $20B industry
40 MVTec - Innovation Day
42 COPA DATA - Factory connectivity
44 TM ROBOTICS - Calculating robot ROI
46 COHERENT - Tracking down climate change
47 ELECTRONICS - The green revolution
48 EU AUTOMATION - Environmental manufacturing measures
50 ROBOTIC TRENDS - Top trends in 2020
MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2020. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.
OUT OF DARKNESS COMETH LIGHT

Lockdown. Self-isolation. Coronavirus. Not words, but statements of the situation the majority of us find ourselves in right now. I write these words in lockdown. I have interviewed people in self-isolation, and I cannot imagine anyone who has not been affected by the coronavirus outbreak.

In my last column, at the start of the year, I spoke positively of the year ahead – of looking forward to attending key industry events – only to see them disappear from the schedules as a result of the pandemic. The impact is far-reaching but, where possible, daily life continues. Importantly, judging by the emails and statements being communicated, the industry remains active and is playing a key role in fighting this crisis, whether that is supplying crucial equipment, knowledge or physical support.

No-one can accurately predict how long we will all be enveloped by this crisis. It is inevitable that we – as individuals and businesses – won’t come out of this unscathed. However, one thing is for sure, and taking the motto of my football club: out of darkness cometh light.

Resilience. Optimism. Opportunity. Three more words that should reflect our post-coronavirus world.

Turning to this issue of Machine Vision & Automation, there is plenty of positivity and plenty of examples of opportunity and, as always, I am grateful to the individuals and businesses that contribute. There is an in-depth look at the growth and influence of hyperspectral imaging, Inspekto CEO Harel Boren shares how the business is going Stateside, and we delve into the world of AR. Plus we discover more about how businesses are tackling environmental issues and recycling, FRAMOS and Teledyne DALSA share case studies on their impact in the food and beverage sector, and Xilinx’s Subh Bhattacharya provides the answers to our latest Influencer Q&A.

Enjoy the read and stay safe.

Lee McLaughlan
Editor
Lee McLaughlan Editor lee.mclaughlan@mvpromedia.eu Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB MVPro B2B digital platform and print magazine for the global machine vision industry www.mvpromedia.eu
THINK INFERENT.
WITH IDS NXT ocean - THE ALL-IN-ONE INFERENCE CAMERA SOLUTION
grab. label. train. run AI.
www.ids-nxt.com
INDUSTRY NEWS
EMVA SHARES LATEST STANDARDISATION ON GENICAM
The EMVA-hosted GenICam standard has become the backbone of all machine vision standardisation activities over recent years. Practically all popular hardware interface standards in the machine vision industry refer to the GenICam standard and, in particular, its generic programming interface. The latest GenICam release 2019.11 is available for download.
Part of the current GenICam release is the new version 3.2 of the reference implementation (RI). The RI offers hardware and software vendors an efficient method to ensure compatibility of a camera with the standard in their image acquisition software. In addition to many minor improvements, the new version of the GenICam reference implementation now includes support for GenDC ChunkData. Furthermore, the RI includes modular logging and bindings for Python and Java. The current release also includes the latest versions of the GenICam modules SFNC 2.5 and GenTL 1.6.
Furthermore, the new module GenDC has been added to the GenICam release version 2019.11. The abbreviation stands for “Generic Data Container” and defines a transport media-independent data description that enables devices to transfer almost any form of image and metadata between camera and host system, using a uniform and standardised data description. GenDC thus completes the GenICam family of transport media independent modules that define the control and data exchange between imaging devices and the host.
The complete GenICam Release Package 2019.11 can be found here: https://www.emva.org/standards-technology/genicam/genicam-news/ MV
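For readers who want to try the new Python bindings against a GenICam-compliant camera, one route is the open-source Harvester consumer, which sits on top of any GenTL producer. A minimal sketch, assuming a recent harvesters release; the .cti producer path and the ExposureTime feature are illustrative, not part of the GenICam release itself:

```python
from harvesters.core import Harvester

h = Harvester()
h.add_file('/opt/vendor/producer.cti')  # hypothetical GenTL producer path
h.update()                              # enumerate connected devices

ia = h.create_image_acquirer(0)         # attach to the first camera found
# Standard GenICam feature access through the node map:
ia.remote_device.node_map.ExposureTime.value = 5000.0

ia.start_acquisition()
with ia.fetch_buffer() as buf:          # scoped access to one frame
    component = buf.payload.components[0]
    print(component.width, component.height)
ia.stop_acquisition()

ia.destroy()
h.reset()
```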
JAN HARTMANN JOINS IDS MANAGEMENT
Daniel Seiler, managing director of IDS since 2015, has chosen to leave the company after 14 years. He has made a significant contribution to the company’s success, including the development of the North American business. Jürgen Hartmann, founder and owner of the leading industrial camera manufacturer, will remain managing director and will take over Seiler’s main operational tasks from March 1, 2020.
At the same time, Jan Hartmann, eldest son of the company founder, joins the IDS management board as the second family generation. This underlines the independence of the family business. In addition to his existing tasks in the recently founded sister company IDS Innovation GmbH with its associated “b39 Academy”, Jan will be responsible for the human resources, finance and IT divisions. Alexander Lewinsky, who has been head of operations since 2018, joins the extended management team and will be given procuration authority.
“We would like to thank Daniel Seiler for his successful work. In particular, he had great success in the internationalisation of the core business. We regret his decision but we would like to wish him all the best,” said Jürgen Hartmann. IDS has more than 300 employees. It is headquartered in Obersulm, Baden-Württemberg, and has subsidiaries in the USA, Japan, South Korea and UK. MV
ISRA VISION STRIKES PARTNERSHIP WITH ATLAS COPCO
The future of ISRA VISION has been settled as CEO Enis Ersü has found a strategic industrial partner in Atlas Copco. The Darmstadt-based SDAX company, one of the world’s leading providers of surface inspection for web materials and of 3D machine vision applications, has now reached a long-term and future-oriented succession agreement that will enable the company to continue to pursue its growth strategy and its innovation roadmap.
A strategic partnership provides the framework for ISRA to continue to realise its visions for the future, to exploit market potential, to ensure the continuity of its business and to pursue its growth strategy. With a strategic industrial partner, ISRA can further develop the main focuses that the company has successfully set for its markets. Atlas Copco, which has 37,000 employees in more than 180 countries, has identified machine vision as a key technology and now intends to develop a new machine vision division with ISRA VISION as its nucleus.
Ersü said: “Our two segments, surface vision and industrial automation, as well as our global presence offer enormous potential for growth and synergies with Atlas Copco’s business activities.” ISRA, which employs 900 people at 25 sites around the world, will operate and continue to develop from Darmstadt as an independent pillar in the Atlas Copco structure. The plans for the new building for the company headquarters in Darmstadt, which is designed to accommodate growth and an increase in the number of employees, are also being continued. The ISRA team plans to move into the new building in 2021. MV
DISCOVER THE S70 FOR FREE!
Inspekto, the founder of the Autonomous Machine Vision (AMV) inspection category, is offering free demonstrations on the end user’s production line. The INSPEKTO S70 can be installed in 30-45 minutes at any point on any line to inspect any product. It can be set up by a plant’s own personnel, removing the need for any vision systems integrator or machine vision expertise.
The S70 can even adapt its own settings in response to environmental changes – including lighting, handling methods and line vibrations. Set up requires only 20-30 good samples – and no defective ones. In addition, the device can inspect multiple products and multiple models of the same product at a single inspection location, something not possible with any other product. The Inspekto S70 is already bringing huge profitability and productivity gains to manufacturers in all industry sectors, including Bosch, BSH, Daimler, Mahle, Schneider Electric and BMW.
“Inspekto has proved that AMV is possible, forever changing the machine vision ecosystem,” explained Harel Boren, CEO of Inspekto. “However, decades of long wait times and expensive, ineffective integrated solutions mean some people in the industry still believe AMV is too good to be true.” To book a demonstration visit https://inspekto.com/free-demo. MV
UK’S ‘FIRST’ DEDICATED UNIVERSAL ROBOTS TRAINING CENTRE
RARUK Automation has opened the UK’s first Universal Robots Authorised Training Centre at its headquarters in Shefford. This development will allow the company’s certified UR trainers to provide tuition in collaborative robot programming, empowering UK customers to get the very best return from their UR investment. Universal Robots is central to RARUK Automation providing easy and flexible automation solutions for its customers. Its range of cobots is lightweight, space-saving and easy to re-deploy to multiple applications with the help of a wide variety of UR-approved apps and accessories. One of the main objectives in RARUK Automation moving to its new premises was to extend its applications engineering facilities to assist customers in the development of bespoke solutions based on off-the-shelf automation elements.
To meet the need for a skills development pathway, the Universal Robots Academy created online training modules to provide the necessary tuition. The establishment of its Authorised Training Centre Network, of which RARUK Automation is now part, allows these modules to be delivered in a local, classroom environment. They cover core to advanced cobot programming, including cobot scripting, preventative maintenance, system troubleshooting and parts replacement. “Universal Robots has helped companies address the automation skills gap by providing online training modules through its Academy for a couple of years now,” explains Mark Gray, UK Sales Manager at Universal Robots. “Through our partnership with application specialists, such as RARUK Automation in the UK, we can now expand that training to hands-on classes.” MV
1.1"
For sensors and still only 39mm in diameter
The new Fujinon CF-ZA series. Small size, great ideas Especially developed for 1.1" sensors, the new CF-ZA series offers a high resolving power of 2.5µm pixel size and consistent brightness from the image center to the corners without vignetting – for all six models with focal lengths from 8mm to 50mm. More at www.fujifilm.eu/fujinon. Fujinon. To see more is to know more.
PRODUCT NEWS
BAUMER REDEFINES PERFORMANCE
Six new LXT cameras with resolutions from 0.5 to 7.1 megapixels and third-generation Sony Pregius CMOS sensors with 10 GigE have been launched by Baumer. With greater sensitivity, improved image quality and frame rates of more than 1,500 fps, Baumer’s new series has been given considerable performance improvements.
With a pixel size of 4.5 or 9 µm, the cameras offer very high sensitivity, enabling them to provide better support for applications with a short exposure time or NIR illumination. The exceptionally high image quality, with an SNR (signal-to-noise ratio) of 44 dB (pixel size 4.5 µm), facilitates stable image evaluation even under difficult conditions, particularly where applications have a very high light intensity, such as laser welding, or fluctuating illumination conditions, as experienced in sports and motion analysis.
An additional feature is the new Dual Conversion Gain, which allows flexible setting of the gain directly in the sensor. The “High” setting is suitable for applications with a low light intensity or short exposure times. The alternative “Low” setting optimises image capture with respect to SNR and dynamic range at a higher light intensity. The camera also features an integrated HDR function that calculates images with a dynamic range of over 82 dB (pixel size 4.5 µm). This facilitates image evaluation for applications with light and dark areas in a scene and does not create CPU load on the PC.
Serial production of the new LXT cameras starts in the second quarter of 2020. MV
CAMBRIDGE ELECTRONIC INDUSTRIES LAUNCHES CXP-12 CXP REPEATER Cambridge Electronic Industries has launched the world’s first CoaXPress Repeater capable of working at CXP-12. This device supports all speeds up to CXP-12 and removes the problem with the potentially prohibitive short connection lengths of CXP-12. This small lightweight qualified device is available in either a single, dual or quad connection format and has been approved by the JIIA as compliant with the CoaXPress V2.0 standard.
The CXP Repeater is a plug and play device which provides a cost-effective copper alternative to a fibre system, whilst significantly increasing the connection length at a given bit rate between the host and device. The repeater requires no external power supply as it uses Power over CoaXPress (PoCXP) technology. Cambridge Electronic Industries technical director, Peter Fayers, who developed the CXP Repeater, said: “This is an evolutionary product, which has emerged from the potential requirement for a longer connection length than would otherwise be possible at the high-speed data rates of CXP-12.”
The CXP Repeater is available with Micro BNC, BNC or DIN 1.0/2.3 connections. There is also a wall mounting kit for the CXP Repeater, available separately. Cambridge Electronic Industries can also offer bespoke manufactured cables to complete the system, using their precision components, to provide the highest image quality available. MV
OLED MICRODISPLAY FOR AR APPLICATIONS FRAMOS has taken a leap forward to enhance AR applications by offering a new microdisplay series from Sony Semiconductor Solutions. Featuring outstanding brightness, contrast and resolution plus a wide viewing angle, the new ECX335S is suited to the fast-growing market of augmented reality (AR) devices including head-mounted displays (HMDs), electronic viewfinders (VFs), and small monitors.
The ECX335S is an OLED panel module with active matrix colour design, exceptional brightness of up to 3,000 cd/m², Full HD resolution with 1,920 x 1,080 RGB pixels, and a diagonal of 1.8 cm (0.71 inch). Its power consumption is low even at a frame rate of 60 fps. AR applications require brightness in excess of 1,000 cd/m². This means the OLED microdisplay must have a nominal value of at least 3,000 cd/m², to allow for transmission losses during projection. Through a clever combination of enhancements, Sony Semiconductor Solutions has increased the brightness of the new ECX335S microdisplays by a factor of three compared to previous models, while maintaining the same operational life. With its brightness characteristics, extremely small form factor (21.44 mm x 15.62 mm) and a contrast ratio of 100,000:1, this module will continue to spur innovative AR solutions. Information levels in the HMD or VF are rich in contrast and blend seamlessly into the real world, creating a “real” AR experience. The ECX335S microdisplay can be pre-ordered from FRAMOS. Availability of the product series will be announced in Q2/Q3. MV
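The factor of three quoted above follows from simple headroom arithmetic; a sketch with an assumed optical efficiency figure (the exact transmission loss depends on the HMD optics, and this value is illustrative only):

```python
# If the combiner optics deliver only ~1/3 of the panel's luminance to
# the eye, meeting the 1,000 cd/m2 requirement at the eye needs a panel
# with a nominal luminance of at least 3,000 cd/m2.
required_at_eye = 1000.0        # cd/m2, per the article
optical_efficiency = 1.0 / 3.0  # assumed transmission figure
panel_nominal = required_at_eye / optical_efficiency
print(round(panel_nominal))     # 3000
```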
KASSOW ROBOTS LAUNCHES PRODUCTION OF 7-AXIS COBOTS
Danish firm Kassow Robots has added two 7-axis cobot models to its product portfolio. With its 1.80-metre robot arm – giving it an unrivalled reach in the cobot market – the KR 1805 opens up completely new applications for customers in industry. Thanks in part to its seventh axis, which allows it to reach around corners, a single cobot can now look after all the stacking and labelling tasks for Euro-pallets. The KR 1410 has a reach of 1.40 metres and a payload of 10 kilograms – a strong combination in the lightweight robotics market. This opens up a wide range of applications for human-robot collaboration (HRC), including sectors such as the metal industry where high payloads are essential.
It takes the total range to four models, with the latest models joining the KR 810 (reach 850 mm/payload 10 kg) and KR 1205 (1,200 mm/5 kg).
Kristian Kassow, founder and CEO of Kassow Robots, said: “After first presenting our company in 2018 and introducing the first two models in 2019, we can now offer a strong product family of four cobots. For small and medium-sized enterprises, they are a strong, cost-efficient cobot package with almost infinite potential applications.” The cobots all have three ports, including a data/Ethernet connection to enable easy working. The data/Ethernet port enables wireless connection between the end-of-arm tool and the robot. MV
LMI LAUNCHES GOCATOR 2530 BLUE LASER PROFILER
LMI Technologies (LMI) has officially launched the Gocator 2530 smart 3D laser line profiler. This sensor achieves inspection speeds up to 10 kHz at high lateral resolution, with a field of view up to 100 mm. A custom 2MP high speed imager, advanced optical design, and blue laser light allow the 2530 to generate high quality 3D data with highly repeatable results on shiny metal or black materials common in battery, rubber and tyre, and consumer electronics inspection and factory automation applications.
This sensor is ideal for customers who need to perform:
• Surface inspection of battery electrodes, cells, and packs
• Dimensional gauging of battery cells
• Mobile phone midplate inspection
• Tyre sidewall and inner wall inspection
• Tyre uniformity inspection
• Tyre layer control inspection
Multiple exposures can be enabled to measure specular and low contrast surfaces simultaneously (e.g. shiny metal of battery cells, mobile phone midplates, rubber). The sensor’s speed is also a key advantage in achieving high Y resolution (spacing in the direction of travel) – see the short sketch below. Submillimetre X and Z resolutions deliver detailed inspection of small assembly features such as edges or gaps and accurate 3D height measurement of surface geometry and defects (such as scratches and pits).
The 2530’s wider field of view allows engineers to scan complete targets with a single sensor (e.g. a mobile phone midplate). Large field of view and measurement range allow the sensor to handle a wider variety of scan targets. The Gocator 2530 has one of the smallest footprints in the industry while maintaining an IP67 rating. This allows the sensor to be mounted in virtually any machine environment. MV
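Profile spacing in the direction of travel falls directly out of line rate and transport speed; a quick sketch with illustrative numbers (the conveyor speed here is made up, not an LMI specification):

```python
def y_spacing_mm(conveyor_speed_mm_s: float, line_rate_hz: float) -> float:
    """Distance between consecutive profiles in the direction of travel."""
    return conveyor_speed_mm_s / line_rate_hz

# At the sensor's top rate of 10 kHz, a target moving at 500 mm/s is
# profiled every 0.05 mm; halving the speed halves the spacing again.
print(y_spacing_mm(500.0, 10_000.0))  # 0.05
```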
INDUSTRY-LEADING STACKED EVENT-BASED VISION SENSOR DEVELOPED Prophesee and Sony Corporation have announced they have jointly developed a stacked Event-based vision sensor with the industry’s smallest 4.86μm pixel size and the industry’s highest 124dB (or more) HDR performance. The new sensor and its performance results were announced at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in the United States.
The new stacked Event-based vision sensor detects changes in the luminance of each pixel asynchronously and outputs data including coordinates and time only for the pixels where a change is detected, thereby enabling high efficiency, high speed, low latency data output. This vision sensor achieves high resolution, high speed, and high time resolution despite its small size and low power consumption. This accomplishment was made possible by combining technical features of Sony’s stacked CMOS image sensor, resulting in small pixel size and excellent low light performance that are achieved by the use of Cu-Cu connection, with Prophesee’s Metavision® Event-based vision sensing technologies leading to fast pixel response, high temporal resolution and high throughput data readout. The newly developed sensor is suitable for various machine vision applications, such as detecting fast moving objects in a wide range of environments and conditions. MV
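Event-based sensors are typically consumed as sparse streams of (x, y, timestamp, polarity) tuples rather than dense frames. A minimal sketch of accumulating such a stream into an image for visualisation; this is a generic illustration with assumed sensor geometry, not Prophesee’s Metavision API:

```python
import numpy as np

# One record per event: pixel coordinates, microsecond timestamp and
# polarity (+1 luminance increase, -1 decrease).
events = np.array(
    [(10, 5, 100, 1), (11, 5, 103, -1), (10, 6, 110, 1)],
    dtype=[('x', 'u2'), ('y', 'u2'), ('t', 'u8'), ('p', 'i1')],
)

# Accumulate a 50 ms time slice into a dense frame for display.
frame = np.zeros((480, 640), dtype=np.int32)   # assumed resolution
window = events[events['t'] < 50_000]
np.add.at(frame, (window['y'], window['x']), window['p'])
```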
THE WATERPROOF MVBLUECOUGAR-X
MATRIX VISION has launched a cost-effective and waterproof variant of the mvBlueCOUGAR-X industrial camera to work in the earth’s harshest conditions. The name IP67C (the C representing “compact”) illustrates its most important advantage: waterproofing has been integrated into the standard housing to ensure that only a small amount of installation space is required. The plug connections can be screwed, and there are two options for leak-tightness on the lens. Either a standard lens is used in connection with a protective tube available in various lengths (40 mm, 71 mm, 100 mm), or waterproof IP67 lenses are used. MATRIX VISION has added the latter product, from Kowa, to its portfolio as the waterproof BAM LS-VS-008 lens series.
Thanks to the various equipment options, the family of mvBlueCOUGAR-X Gigabit Ethernet cameras covers the majority of possible application areas. The camera has a large number of smart features such as flat field correction, colour correction, white balance, etc., which can be performed directly on the camera, thus relieving the host system. The image memory within the camera ensures lossless image transmission and additionally serves as a buffer, which saves and outputs images in the camera with flexibility. The camera is compatible with the GenICam and GigE Vision standards. Drivers are available for Windows and Linux. Moreover, the camera supports all third-party image processing libraries that are compatible with GigE Vision. MV
XILINX VERSAL PREMIUM SERIES DELIVERS FOR NETWORK AND CLOUD ACCELERATION
Xilinx has announced the development of Versal™ Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimised cores and the industry’s highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.
Versal is the industry’s first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC’s 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market.
“The Versal Premium series takes ACAPs to the next level, delivering breakthrough networked hard IP integration enabling the development of single-chip 400G and 800G solutions,” said Kirk Saban, vice president of product and platform marketing at Xilinx.
The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
The Versal Premium series is built on a foundation of the currently shipping Versal AI Core and Versal Prime ACAP series. New and unique to Versal Premium are 112Gbps PAM4 transceivers, multi-hundred gigabit Ethernet and Interlaken connectivity, high speed cryptography, and PCIe® Gen5 with built-in DMA, supporting both CCIX and CXL. MV
OMNIVISION’S NEW 12 INCH WAFER-BASED SENSOR OmniVision Technologies has announced the latest member of its two megapixel (MP) image sensor family—the OV02B. OmniVision is using 12 inch wafers to produce this sensor, instead of the 8 inch wafers that are in tight supply but are typically used for 2MP sensors. This enables the company to better address the increasing demand for 2MP in the entry level and mainstream smartphone and tablet markets.
“The strong growth trend in smartphone multi-cameras that we saw in 2019, for both main and front-facing cameras, is continuing to accelerate. In the entry level and mainstream markets, smartphone camera designers favour 2MP image sensors,” said Parson Li, OmniVision’s senior mobile product marketing manager.
The OV02B is designed for the main and front-facing bokeh cameras in entry level and mainstream smartphones, where 2MP has become the industry standard. It also provides a cost-effective solution for the main tablet and notebook cameras. Output formats include 1600x1200 at 30 frames per second (fps) and 800x600 at 60 fps.
The OmniPixel®3-HS pixel technology provides the OV02B with a 1.75 micron pixel pitch in a 1/5" optical format. Building on the success of its predecessor, the OV02A, while maintaining the same cost, the OV02B has an added SCCB ID (SID) pin, which provides two available hardware I2C addresses to meet the requirements of multi-camera applications. It also adds a hardware strobe pin to sync LED flash photography, along with 32 bytes of on-chip OTP memory for storing automatic white balance (AWB) and manufacturer production information. Using a Bayer pattern, it supports both colour and monochrome, while also providing a chief ray angle (CRA) of up to 30.69˚.
Samples of the OV02B image sensor are available now.
MV
32K TDI CAMERA DELIVERS HIGHEST RESOLUTION IN LINE SCAN IMAGING
Teledyne DALSA has announced the release of its newest charge-domain CMOS TDI camera – the Linea HS 32k TDI camera using patent-pending pixel offset technology.
“One of the greatest challenges in machine vision today is to increase resolution while maintaining or even reducing system-level costs. Our new Linea HS 32k TDI camera provides an innovative solution to meet such contradictory requirements. OEMs can readily integrate the new camera into existing systems to achieve much higher performance without needing to change any components,” said Xing-Fei He, senior product manager for Teledyne DALSA’s line scan portfolio.
The Linea HS 32k uses two 16k/5μm TDI arrays with pixel offset. Two 16k/5μm images are captured in real time, then reconstructed to achieve a higher resolution image of 32k/2.5μm (see the sketch after the feature list). This upconversion significantly enhances detectability for subpixel defects. One advantage of the patent-pending pixel offset technology is that existing lighting and 16k/5μm lenses can be used without sacrificing responsivity and MTF at the smaller effective pixel size.
Combined with Teledyne’s Xtium™2 CLHS series of high-performance frame grabbers, these new products represent a breakthrough in data throughput. Built on field-proven technology, the next generation CLHS fibre optic interface provides reliable and high throughput data transmission. Fibre optic cables lower system costs, offer longer cable lengths (up to 300 m) and are immune to electromagnetic radiation in industrial environments. Teledyne DALSA’s Xtium2 family of high-performance frame grabbers features the PCI Express Gen 3.0 x8 platform.
Key features:
• Up to 150 kHz line rate at 32k/2.5µm resolution, or 5 Gpix/sec
• Compatible with existing lighting and lenses for 16k/5μm
• Very low noise and high sensitivity
• Active pixel assisted alignment
• Camera Link HS fibre optic interface for high reliability and long cable data transmission
• Lowers system costs
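The reconstruction itself is patent-pending and Teledyne DALSA’s own; as a rough sketch of the geometry only, a naive interleave of two half-pixel-offset sample streams already doubles the sample count per line:

```python
import numpy as np

# Stand-ins for one scene line sampled by the two 16k TDI arrays on a
# 5 um pitch; array B is physically offset by half a pixel (2.5 um).
line_a = np.random.rand(16384)
line_b = np.random.rand(16384)

# Naive reconstruction: interleave the two streams into a single 32k
# line on an effective 2.5 um pitch (the real algorithm is proprietary).
line_32k = np.empty(2 * line_a.size)
line_32k[0::2] = line_a
line_32k[1::2] = line_b
assert line_32k.size == 32768
```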
MV
THE FOOD FIGHTERS
FoodPhone™ uses advanced vision technology to turn mobile phones into precise food and nutrition analysis devices using AI, 3D and hyperspectral approaches. FRAMOS reveal how they took the calorie counting application to a new level.
Over 200 million people deal with Type-1 or Type-2 diabetes and obesity worldwide. Now, vision technology helps in fighting these diseases: by taking a smartphone picture of a meal on a plate, users can instantly know its nutritional contents with scientific-like precision – as easily as sharing on social media. FoodPhone™ provides a mobile solution using Intel®’s RealSense™ (RS) 3D technology to determine the volumes, texture, and shapes of all types of food, with seemingly only one image capture. Besides identifying carbohydrates and counting calories, this new phone case, equipped with embedded vision, detects the ingredients of commercially prepared food based on their chemical composition and instantly displays FDA-formatted (US Food & Drug Administration) nutrition labels. The FoodPhone™ device’s NIR (near IR) capabilities also recognise any natural imperfections, both visible and non-visible, and help in detecting food quality and freshness at the grocery store. Selecting the freshest fruits or vegetables, like avocados, is made simple by instantly displaying “freshness” levels on the user’s smartphone.
WATCHING YOUR NUTRITIONAL INTAKE
Diabetics, athletes, fitness-lovers, and many others struggling with their weight need to watch what they are eating. In the US alone, over 100 million people use smartphones and smart devices to monitor their weight, fitness and diet each day. In all modern western and rising eastern societies, diabetes is rapidly increasing in number. Especially for diabetics, counting carbs is key to managing their disease and, for them, it is a matter of life and death. Their carbohydrate intake influences their insulin levels. The carbohydrate total is the key meal information needed by a diabetic for using emerging technologies like CGM (Continuous Glucose Monitoring) and automated tubeless insulin pumps. Therein lies the problem: these health-saving monitoring tools are only successful with the right user input. Manual input and users’ food estimations tend to be very inaccurate, causing incorrect insulin dosages which can be very dangerous and even life-threatening for the diabetic.
ONLY ONE SMARTPHONE IMAGE NEEDED
Today, smartphones have cameras, access to the internet, include modern, powerful Artificial Intelligence (AI) algorithms, and are used to take millions of food images every minute. The FoodPhone idea is to turn these smartphones into diet-helpers by analysing the food directly on the plate. By snapping what seemingly is one image of the meal, the FoodPhone app, with its SpectraPixel™ technology, connects with the company’s proprietary cloud-based AI to recognise the meal’s content, specifically its chemical composition, quantity in ounces/grams and quality, while also segmenting mixed meals. With the help of the multispectral cameras and NIR sensors embedded in the FoodPhone’s new smartphone case design, the user instantly gets a scientific analysis of their meal. This one-shot analysis provides details on the carbohydrates, fats, proteins and other nutritional contents along with the true portion size of the prepared meal. Combining different imaging technologies with AI-based intelligence, FoodPhone precisely identifies the amount and composition of food. There is no need to input any information into the application, nor touch or probe the food, or guess its volume. The analysis is fast. In a very user-friendly and efficient way, the FoodPhone case delivers exact Nutrition Fact sheets with accuracies beyond 90 per cent. With a mix of AI and AR, the technology will impact the control of diabetes and support food-watchers in reaching their goals.

IDEA, INNOVATION AND IMPLEMENTATION
The first prototype was built with off-the-shelf components and had dimensions of 8" x 7" x 3". This is what Christopher M. Mutti, CEO & Founder of FoodPhone, refers to as the “Million Dollar Blue Box”. At the time it was the smallest available solution to merge 3D, RGB and NIR, and the cost was about $3,000. It took more than five years for technological advancements to reach the level of performance and affordability to make FoodPhone a practical solution. Mutti often refers to this perfectly timed “perfect technological storm” as the basis for bringing his idea into reality. Nowadays, Intel®’s RealSense™ cameras are the size of a little finger, enabling new devices, like smartphones, to enter into a new world of 3D data collection and processing. With this, Mutti and his team have now found the perfect product, both in size and price, to provide the information needed for food recognition and analysis. They embed the Intel® RealSense™ cameras into a normal-looking phone case, maintaining a similar form factor to standard phone cases. With all the advancements in the technology used in their application that now come at a lower cost, the product can now be offered for just a few hundred dollars. Users simply swap their existing phone case with the FoodPhone’s embedded phone case and download the app. From there, they can start capturing images of their actual meals and get nutritional information within seconds.
COMBINING MULTIPLE VISION DATA TYPES WITH AI?
The FoodPhone solution uses multi-spectral imaging to precisely identify the macronutrients and volume or portion sizes. Mutti developed FoodPhone’s measurement IP without using a “fiducial object” on the plate of food or within the FOV (field of view) of the imaging system. The engineers decided to use the Intel® D435 RealSense depth camera because it is a USB-powered depth camera consisting of a pair of depth sensors, an RGB sensor and an infrared projector. Mutti holds a patent for producing hyperspectral images by merging the output of multiple cameras. This vision technology closely emulates the way humans identify their food. Colour is the first element that people look at in a meal and, for this reason, FoodPhone uses the RGB camera to identify colours in the captured image. The 3D stereo pair generates the data needed to identify the shape, outline and texture of the elements in a similar way to how people experience it. The raw 3D image data gives the dimensions and total volume or portion size of the food on the plate. By using the NIR data in the images captured by the multiple cameras and sensors, FoodPhone’s image processing (IP) algorithms are able to interpret the chemical composition of the food, rather like how people taste and smell the aromas of the food they eat. An overlay of more than ten images and raw data is categorised into components of visible light, colour, spectral data and 3D information. The optical, spectral and physical information retrieved from these images is used to find the specific and individual characteristics of each morsel. The spectral profiles captured from the images are used to compare and classify against the different food types, as each one has a unique spectral fingerprint.
“Millions of images are used to train this powerful AI machine,” says Mutti. “To reach accuracy levels beyond 90 per cent it was a lot of hard work performing tens of thousands of food classifications, confusion tables and other processing steps.”
In order to calculate the individual food labels and weight correctly, the colour, texture, spectral signature and volume have to match. Raw image data are first processed by an Intel® Edison, a very small computer on a module, to identify carbohydrates, proteins, fats, and water content. All the information that is collected is then sent to the cloud and processed by FoodPhone’s AI-driven database. The smartphone receives the results and displays the nutrition fact label.

ADDITIONAL TECHNOLOGICAL BENEFITS
Grocery shopping can be optimised by wasting less time looking for the freshest and healthiest products while saving money in the process. FoodPhone’s technology can also be used to detect food’s quality and freshness, with consumers receiving the food’s freshness in realtime, displayed on their smartphone. People with food allergies can check the stated ingredients of their food through a single image capture and not by attempting to decipher “cryptic” ingredient lists on the packages. A simple scan of the food provides a more detailed list of the ingredients along with a more exact shelf-life. By using information in the NIR spectrum, FoodPhone’s technology helps detect imperfections, like the ripeness and presence of bacteria, regardless of the product’s “Best Before” date. Picking the freshest and ripest avocado is just a click away. The innovative FoodPhone technology is a perfect example how vision-based solutions, in a very small footprint, with cutting-edge new applications, can be used in everyday devices, like smartphones and smart home appliances. MV
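For readers curious about the depth side, a minimal sketch of grabbing one aligned depth/colour frame pair from a RealSense D435 with Intel’s pyrealsense2 bindings; this is standard library usage, not FoodPhone’s proprietary pipeline:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    # Align depth to the colour viewpoint so the pixels correspond 1:1.
    aligned = rs.align(rs.stream.color).process(frames)
    depth = np.asanyarray(aligned.get_depth_frame().get_data())
    color = np.asanyarray(aligned.get_color_frame().get_data())
    # depth holds raw z16 values; multiply by the device's depth scale
    # to get metres. Volume estimation would integrate these heights
    # over a segmented food region.
finally:
    pipeline.stop()
```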
A PERFECT VISION OF FOOD PRODUCTION
Bizerba brings digital imaging to the massive food production industry, with a view toward more perfect products. Glen Ahearn, sales and application support manager at Teledyne DALSA, explains.
“Our customers didn’t ask for vision systems,” says Martin Taube, global product manager of Inspection Systems at Bizerba, “but they are under more pressure than ever to produce perfect products as a result of tighter regulations and higher standards. Adding vision was the way to give them what they needed.” With 144 years of history in the food production business and customers around the world, the company has seen the rise of many technological innovations and process improvements.
The goal here was to put an end to costly product claims from customers. Bizerba uses line scan cameras in its Bizerba Vision System (BVS) to inspect production line food products, like packaged meats – the kind you see in the supermarket. The system inspects the bar code on the label, the text, and even the product itself.
FOOD PRODUCTION THROUGH THE MACHINE’S LENS
When integrated into labelling lines, the BVS will automatically inspect the label position and the text printed on it, as well as the colour of the foil. Packaging sizes and bar codes can also be verified using the system. “Having the right label applied to the intended package is not something we leave to chance,” says James Farmer, head of product development at Bizerba North America. With a background in both mechanical and electrical engineering, he isn’t blind to his customers’ needs. The inspection system keeps a sharp eye out for imperfections in packaging and labelling, performing crucial checks that prevent recalls and product claims. Any packaging that fails a producer’s standards is identified quickly and segregated from production.
CUSTOMER FOCUS
For Bizerba, customer service stays high on the priority list. To continue to deliver that meaningful product experience, and make it better and longer lasting, the company is constantly pushing forward. Keeping things easy and convenient for their customers is part of that list and is reflected in the systems they develop. “We’ve created a standard tool set that was tailored to our industry,” said Farmer.
Previously, Bizerba didn’t have this standard toolset and their product was application based. “Today, our customers can buy a standard piece of equipment, bolt it up, train it, and go. That’s another thing that really sets us apart,” says Taube. This approach ensures that training requirements for new machinery or a new employee is as short as possible. It also makes the technology Bizerba packs into their systems more accessible. The user interface makes it easier to take advantage of innovations in machine vision. Machine vision still has a long runway for food production and inspection. It’s a critical industry with a lot of room to explore and it’s not yet clear how far the technology can go. MV
SPONSORED
CREATE AN INTELLIGENT INDUSTRIAL CAMERA WITHOUT PRIOR KNOWLEDGE
Where rule-based machine vision has not been attempted or has reached its limits, there is high potential for deep learning algorithms to support employees and drive forward automation. AI solutions usually require specialist knowledge, development effort, and investment in computing and data storage hardware. With the rise of cloud-based computing and dedicated training services, however, deep learning is becoming more and more accessible. Lowering this threshold is the main focus of the new all-in-one AI vision system IDS NXT ocean. It requires neither special knowledge in deep learning nor camera programming in order to create and execute individual neural networks. Camera hardware, software, infrastructure and support come from a single company: IDS. Users only need to provide sample images and knowledge of how to evaluate them (e.g. “good”/“bad”). This makes the start into AI-based image processing particularly quick, easy and user-friendly.
TRAIN AND EXECUTE INDIVIDUAL NEURAL NETWORKS
Thanks to the IDS NXT lighthouse training software, even non-experts without prior knowledge of artificial intelligence or camera programming can train an AI classifier with their own image data. It only requires three essential steps: upload training images, label these images and train the desired net. Access is easy: users simply need to call up the web application, log in – and are able to start training a neural network right away. Instead of first having to set up an individual development environment, they have immediate access to all functions as well as the required infrastructure. The generated network can then be executed directly on the IDS NXT industrial cameras, turning them into powerful inference cameras.
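As a rough analogue of what those three steps amount to under the hood, here is a generic transfer-learning loop in PyTorch; the folder layout and hyperparameters are illustrative only, and this is not the IDS NXT lighthouse implementation:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])

# Steps 1 and 2, "upload" and "label": images sorted into class
# subfolders, e.g. samples/good/ and samples/bad/ (hypothetical path).
data = datasets.ImageFolder('samples/', transform=tfm)
loader = DataLoader(data, batch_size=16, shuffle=True)

# Step 3, "train": fine-tune a small pretrained CNN as a classifier.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()
```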
GET STARTED WITH THE IDS NXT OCEAN DESIGN-IN KIT
Anyone wishing to test the potential of AI for their applications should take a closer look at the IDS NXT ocean Design-In Kit. It provides all the components a user needs to create, train and run a neural network in their productive environment. In addition to an IDS NXT industrial camera with a 1.6 MP Sony sensor, lens and cable, the package includes six months of access to the AI training software. The use of deep-learning-based image processing for individual applications can thus be realised in a short time. More information: www.ids-nxt.com MV
INFLUENCER Q&A THE FUTURE OF MEDICAL IMAGING Continuing our series of key insights with leading figures across the machine vision, automation and robotics sectors, Subh Bhattacharya, Lead, healthcare, medical devices & sciences at Xilinx, examines how machine learning and artificial intelligence are impacting on medical imaging.
WHAT IS THE OUTLOOK FOR ARTIFICIAL INTELLIGENCE IN MEDICAL IMAGING? The use of artificial intelligence (AI) – including machine learning (ML) and deep learning techniques (DL) - is poised to become a transformational force in medical imaging. Patients, healthcare service providers, hospitals, medical equipment makers, pharmaceutical companies, professionals, and various stakeholders in the ecosystem all stand to benefit from ML driven tools. From anatomical geometric measurements, to cancer detection, to radiology, the possibilities are endless. In these scenarios, ML can lead to increased operational efficiencies, extremely positive outcomes and significant cost reduction.
WHAT ARE SOME OF THE OPPORTUNITIES FOR MACHINE LEARNING IN MEDICAL IMAGING?
There’s a broad spectrum of ways that ML can be used in medical imaging. For example, digital pathology, radiology, dermatology, vascular diagnostics and ophthalmology all use standard image processing techniques. Chest x-rays are the most common radiological procedure, with over two billion scans performed worldwide every year – around 5.5 million scans a day. Such a huge quantity of scans imposes a heavy load on radiologists and taxes the efficiency of the workflow. Often ML, Deep Neural Network (DNN) and Convolutional Neural Network (CNN) methods outperform radiologists in speed and accuracy, but the expertise of a radiologist is still of paramount importance. However, under stressful conditions during a fast decision-making process, the human error rate can be as high as 30 per cent. Aiding the decision-making process with ML methods can improve the quality of results, providing radiologists and other specialists with an additional tool.
WHAT IS THE REGULATORY ATTITUDE TOWARDS MACHINE LEARNING IN MEDICAL IMAGING? Regulatory support is steadily increasing and the US Federal Drug Administration (FDA) is approving more and more ML methods for diagnostic assistance and other applications. The FDA has also created a new regulatory framework for ML based products. This new framework refers to ML techniques as “Software as a Medical Device” (SaMD) and envisions significant benefits to quality and efficiency of care. To support this initiative, the FDA introduced a “predetermined change control plan” in premarket submissions which would include the types of anticipated modifications and the associated methodology to be used to implement those changes in a controlled manner. The FDA expects commitments from medical device manufacturers on transparency and real-world performance monitoring for SaMD, as well as periodic updates on changes that were implemented as part of the approved pre-specifications and the algorithm change protocol. This framework enables the FDA and the manufacturers to monitor a product from its premarket development to post market performance and allows the regulatory oversight to embrace the iterative improvement power of an SaMD, while assuring patient safety.
WHAT ARE THE CHALLENGES WITH USING MACHINE LEARNING IN MEDICAL IMAGING?
Many procedures within radiology, pathology, dermatology, vascular diagnostics and ophthalmology can involve large images, sometimes five megapixels or larger, requiring complex image processing. Also, the ML workflow can be computing and memory intensive. The predominant computation is linear algebra, which demands many computations and a multitude of parameters. This results in billions of multiply-accumulate (MAC) operations and hundreds of megabytes of parameter data, and requires a multitude of operators and a highly distributed memory subsystem. So, performing accurate image inference efficiently for tissue detection or classification using traditional computational methods on PCs and GPUs is inefficient, and healthcare companies are looking for alternative techniques to address this problem.
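A back-of-envelope count shows how quickly those MAC numbers grow; the layer dimensions below are made up for illustration:

```python
# MACs for one convolution layer:
# out_height * out_width * out_channels * kernel_h * kernel_w * in_channels
def conv_macs(out_h, out_w, out_c, k_h, k_w, in_c):
    return out_h * out_w * out_c * k_h * k_w * in_c

# A single 3x3 convolution with 64 input and 64 output channels on a
# 1024x1024 feature map already costs ~38.7 billion MACs - and a deep
# network stacks dozens of such layers.
print(conv_macs(1024, 1024, 64, 3, 3, 64))  # 38654705664
```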
WHAT DOES XILINX OFFER FOR MACHINE LEARNING IN MEDICAL IMAGING?
Xilinx technology offers a heterogeneous and highly distributed architecture to solve this problem for medical imaging companies. The Xilinx Versal™ Adaptive Compute Acceleration Platform (ACAP) family of System-on-Chips (SoCs), with its adaptable Field Programmable Gate Arrays (FPGAs), integrated digital signal processors (DSPs), integrated accelerators for deep learning, SIMD VLIW engines with a highly distributed local memory architecture and multi-processor systems, is known for its ability to perform massively parallel signal processing of high-speed data in close to real time.
I AM NOT A HARDWARE DEVELOPER; HOW DOES THIS HELP ME?
Xilinx has an innovative ecosystem for algorithm and application developers. Unified software platforms, such as Vitis™ for application development and Vitis AI™ for optimising and deploying accelerated ML inference, mean developers can use advanced devices, such as ACAPs, in their projects. MV
THE SMART 3D LASER PROFILER WITH A 2 METER FIELD OF VIEW
Scan Large Targets. At Production Speed. The Gocator 2490 is able to scan a 1 m x 1 m area at a rate of 800 Hz, delivering 2.5 mm XYZ resolutions even at conveyor speeds of 2 m/s. The sensor also delivers robust quality inspection of surface defects such as punctures, dents, and folds.
Discover FactorySmart®: visit www.lmi3D.com/2490
SPONSORED
WE MAKE QA MANAGERS LOVE THEIR JOBS
The ‘well directed arrow of development’ continues to show no sign of stopping for Inspekto and the ‘game changing’ S70 autonomous machine vision product. Harel Boren, Inspekto CEO, reveals the company’s expansion into the USA, future product developments and the ‘disruptive’ impact they are having across industry in this exclusive MVPro interview.
MVPro: On the first anniversary of launching the S70 in November 2019, you announced a further investment into the business of $15m. What has happened since then?
Harel Boren: Since November, we have added more investment. We have invested about $24 million in total. Given the current worldwide COVID-19 crisis, which is a chance event we view as a Black Swan, we are in a position where we are able to stay afloat for a very long period regardless of how or when the market picks up. We continually allow for a Black Swan in our budgeting in each year of business, and our budget discipline has led to us ending every year of our existence better than our budgets, in both cash flow and budget versus actual figures. Bottom line, the COVID-19 crisis finds us well prepared to continue supplying and supporting our many industrial customers in the USA, Europe and Asia for years to come.
MVPro: In your recent MODEX press conference, which you conducted while in self-isolation, you mentioned Inspekto were opening an office in the United States. Where and what will the benefits be?
HB: Firstly, some background. We initiated our activities with Europe, as we decided we must operate at the world-level market which is closest to us. This is why Heilbronn in Baden-Württemberg was chosen as our European headquarters. The region holds 52 per cent of the world’s automotive industry, and many other industries as well. Using this model, we have chosen to go to the United States in parallel with Asia, but with different tools. In Asia, we are initiating operations with some very powerful distributors who are leaders in their respective fields. In the United States, we have decided to take a very hands-on approach, similar to what has already been so successful in Germany.
We decided that Detroit is the right place to be, as opposed to starting on the West Coast or the East Coast, as many startups would have done. Detroit is a major centre of industrial operations. The very intense presence of the automotive industry and other industries in the area, or within an hour’s drive from Detroit, pointed clearly in that direction. We also considered the prices of real estate, the comparative price of labour and the existence of talent.
We were embraced warmly by some of the associations in Detroit. There’s one association, the Michigan Israel Business Accelerator, which I hadn’t been aware of until going to Detroit. It has been very helpful to us with connections and its broad and deep understanding of Detroit, how we can operate more successfully in the city, and many other aspects. This has been very successful, and we already have our first local sales engineer and support on board, able to support our customers in the region.
The office in Detroit will do many things. It’s a replication of what’s happening in our European headquarters. It will do pre-sales and sales engineering, it will serve our sales executives, and it will run training courses in vision inspection and the other areas applicable to the INSPEKTO S70. It will also handle post-sale support and make sure assembly is done close to our customers’ premises. Our objective is for North America to be able to carry itself on its own on anything and everything which is customer-facing. This will pave the way for us to access the US market quickly and effectively, which is exactly what we want. We have to assess and wait to see what happens with regards to the coronavirus, but I would expect our normal growth to be up to 10 to 15 people in the office by the end of this year.
MVPro: Again, during your MODEX presentation you said that Inspekto was on a ‘well directed arrow of development’. Can you expand on this?
HB: Even before we founded Inspekto, we understood that we combine several key issues in our preparation and our development. This led us to put in place several pre-set phases of development for the company. First and foremost, we decided very early on that we were not going to make the same mistakes as the many tens of startups that we have viewed, consulted for or helped manage over the past three decades. The first key difference: we were going to develop our product “across the chasm”. This was our mantra before the company was even founded. So, we developed our product with pre-alphas, alphas and betas installed on tens of manufacturing lines of actual industrial manufacturers. In a field where ‘data is king’, our ‘arrow of development’ had to first brew on actual manufacturing lines, with actual products on tens of lines, and by that we accomplished three premeditated objectives.
1) Our product has a brain that is divided into three AI engines working in tandem. During that period we logged more than four million images of actual products on those manufacturing lines. This helped us tremendously to develop our algorithms and tie them together into a tightly bound, running product. Three AI engines working in tandem, in millisecond takt times, is no walk in the park – not even for the group of ex-military, world-class AI scientists running our labs.
2) To establish a very good understanding of the software level of our product; we had weeks where we had three different versions tested in the field on those lines by customers. So, this not only aided our algorithm engines, but showed that the product was much more useable than anything else out there, and ensured our product was launched past teething problems – and directly usable
by real production personnel on real production lines. Therefore, we had a much better product than if it had come into production straight from the development lab.
3) When we went live on 6th November 2018 at VISION, we already had world-class customers that were accomplished Tier 1 and OEM world leaders. They had been working with us for a year and a half or more, so the level of trust and ability to provide references was already there from the get-go.
Our second stage is what we call the hilltop strategy. We have between four and six years of technology head start compared to anyone else on the market in the field of autonomous vision inspection, as technologies we have been using since late 2015 haven’t even reached academia yet. Those six years will pass, and until then we have to consider how best to utilise, for the sake of our customers and the company, the unique opportunity of early entrance into the market, being the
first autonomous vision inspection product on the market. We have decided to work very, very fast and make a mark where it matters: to think of our progress not only in terms of growth and sales, but also of the strategic impact of those sales in the very long run. So, when we become the vision inspection product of choice for the Tier 1 automotive manufacturer, the car OEM, the home appliances manufacturer or the automation provider, when we become the certified first choice for internal use, we will have accomplished the strategic objective on our arrow of development.

MVPro: You have said 'we make QA managers love their jobs' because of the applications you have developed for the S70. Can you tell us more?

HB: Since its launch in November 2018, the product has advanced in leaps and bounds. The AI engines making up the product are far more powerful today. It's an Everest, and every time we reach a peak on the way, we discover the
grand peaks ahead. Today we can successfully address the great majority of mid-complex cases on the shop floor, very quickly and in an autonomous way. But on top of the vision inspection that comes built in with every INSPEKTO S70, the QA manager can install many more useful applications. For example, Inspekto-Tracks is an application that saves the images of all products inspected – good or defective – alongside their metadata: time of day, integration ID, bar code and defect details. It enables claim rejection and optimisation, and people love it. Inspekto-Types is another application, one that enables the quality assurance manager to support any number of different products or product models at the same location on the production line. The one S70 installed on the line can inspect even a thousand completely different products, switching from one to another in less than 1.8 seconds. This is a world
revolution. From the moment we launched the Tracks and Types applications, literally every S70 has been ordered with them. We have since advanced Tracks with 'on edge' backup capacity. From our own experience, the S70 is being applied to many more use cases than we could ever have imagined. People are also fitting S70s on cobots and robots, and more besides. One customer has excelled, identifying 1,672 immediate locations across the firm's 50 plants. We also know first hand that we are making a huge difference to quality assurance managers. To look one in the eye after we have installed the S70 in 38 minutes and solved a black-on-black issue for which they had taken 80,000 different images without success gives you a very personal sense of what you are actually doing, and of the satisfaction you are bringing to the shop floor.
MVPro: You are currently a one-product company. Do you plan diversification?

HB: As the INSPEKTO S70 already covers a multi-billion-dollar yearly market, we are very focused on the S70 and the applications being built to install on it. Many people like that the S70 is their "eyes" in the manufacturing plant – it proves to be the natural path from Industry 3.0 to Industry 4.0. We want the S70 to be able to cope with the full array of mid-complex issues on the manufacturing line and to have the full array of applications needed to cope with any manufacturing line in the world. We are in the process of developing quite a few new surprises, which we will launch at VISION later this year. We are planning future applications which will complete, on a broad and deep level, the 'Appleisation' of vision inspection. In addition, we do have an arsenal of products in development, but these will be launched according to the competitive situation, ensuring we are always years ahead of the competition.

MVPro: Finally, you have said "the monarchs are doing nothing". Has Inspekto, as an industry disrupter, usurped the crown?

HB: The S70 is being used by many world-leading firms, replacing tedious, expensive projects with this one simple-to-install and affordable vision inspection product. It is now clear that vision inspection is finally making its way back to where it belongs: the industrial automation family. This was never done before. It will have a dramatic impact on the bottom line of world industry, in so many good ways. At the same time, we see the major players in this field failing to acknowledge that the industry has changed, and that the way things are done has changed. Forever. We have seen it when other industries have been turned upside down: there is an oblivious manner in which the 'monarchs' fail to address such revolutions. Those occupying their comfortable palaces are not rising while the revolution gathers unstoppable momentum. The 'monarchs' are still tied to a business model which for nearly four decades was successful and the right way to do things – until now. However, so many elements of the previous systems of manufacturing and QA have been made redundant; the landscape has changed, and the product now being utilised is independent, affordable, quicker and adaptable. Industry as we know it will be changed tremendously by this one single product. It has happened so many times before. It is happening now to vision inspection. MV
BASLER'S VISION COMPONENTS: MORE THAN JUST CAMERAS
An image processing system needs more than just a camera. Only a camera in combination with a lens, a light source, reliable data transfer and additional components such as frame grabbers, trigger cables, PC cards and power supplies turns a vision system into a functioning unit. Basler explains how it offers computer vision system components that match perfectly.
Basler is particularly careful when selecting vision components because vision systems must meet high requirements for reliability and maintenance. This is because they are installed in machines or systems that are designed for long operating periods and must receive the right support throughout their entire lifecycle. For this reason, Basler applies the same rigour to all vision components that customers already receive with Basler's high-quality industrial cameras. The priority in the careful selection of the portfolio is the compatibility and reliability of the components, as Basler strives to provide the right needs-oriented setup for complex, efficient systems as well as for cost-effective solutions. As a technology leader, Basler is substantially involved in the development of new standards and offers all the necessary, perfectly matched vision components from a single source. As a result, Basler's customers benefit from the superior reliability of their entire vision system.
NEW LENSES EXPAND THE PRODUCT PORTFOLIO
In February, Basler enhanced its lens portfolio and now offers the right lens for every Basler camera. The Basler Lens series comprises two product lines: Standard and Premium. Combined with a camera and lighting, lenses are instrumental in determining image quality. Of interest when choosing the right lens is the balance between price and the required imaging performance, i.e. high resolution with optimal image quality. Basler offers the right lens for both scenarios.
The Standard product line
The lenses in the Standard product line are suitable for standard vision applications, with an excellent price/performance ratio. These lenses have a needs-oriented design and correspond to the lower requirements of many cost-sensitive applications. Thanks to their solid basic performance, they are ideal for fast cameras with a lower resolution.

The Premium product line
The lenses in the Premium product line are designed and tested for more demanding applications. Thanks to very high resolution, low distortion and low vignetting,
they offer the best image quality. This makes them optimal for cameras with very high resolutions for the analysis of even the smallest structures. The cost aspect was also taken into account for lenses in this product line. Both new product lines support the popular image circles of sensors available in Basler cameras, from 1/2.5" to 1.1", as well as all conventional focal lengths. The lenses are equipped with a C-mount and can also be conveniently used with CS-mount cameras via an adapter. Basler helps users choose the appropriate lens with its convenient Lens Selector. This tool makes it easy to search for the right lens for Basler area scan cameras. Visitors to http://www.baslerweb.com/lens-selector can enter their application data (such as required angle of view, working distance and object size). The Lens Selector then calculates the necessary focal length and proposes suitable lenses for the size and resolution of the sensor.
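The optics behind such a tool reduce, to a first approximation, to the thin-lens relation between magnification, working distance and focal length. The sketch below shows the idea in Python; the function name and the example figures are ours, for illustration only, and a real selector also accounts for distortion, image circle and mount constraints.

```python
# A minimal sketch of the kind of calculation a lens selector performs.
# Thin-lens approximation; names and figures are illustrative.

def required_focal_length(working_distance_mm, object_size_mm, sensor_size_mm):
    """Estimate the focal length at which the object just fills the sensor."""
    m = sensor_size_mm / object_size_mm          # required magnification
    return working_distance_mm * m / (1.0 + m)   # thin-lens relation

# Example: a 35 mm wide part, 300 mm away, on a 1/2.5" sensor (~5.76 mm wide)
print(round(required_focal_length(300, 35, 5.76), 1), "mm")  # -> 42.4 mm
```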
NEW BASLER LIGHTING SYSTEM OFFERS EASY INTEGRATION
Illumination plays a major role in a vision system, as it provides the light that ensures the best possible and repeatable image quality in a wide range of applications. It is therefore important to think about a lighting solution as early as possible. That is the reason why clever, innovative lighting solutions, such as the Basler SLP Controller and the Basler Camera Light series, are also part of Basler's vision components portfolio. Basler Lighting offers an easy lighting approach for industrial cameras, which greatly reduces the time and cost of implementing light sources. The entire process, from the selection of the lights to their installation, is made simpler and shorter by the Basler SLP feature, which enables direct communication between camera and light. It allows easy synchronisation between camera and lighting, gives non-expert users easy access to the popular strobing operation and simplifies the setup of vision applications. This patent-pending feature significantly reduces the complexity of a vision system and enables any user to easily integrate lighting early on, thus shortening the time spent on design and installation. The new lighting concepts are designed for Basler ace U and L cameras equipped with the Basler SLP feature. Depending on the application requirements, customers can choose between two approaches.

THE FLEXIBLE APPROACH – THE BASLER SLP CONTROLLER
The flexible approach is to use the Basler SLP Controller, which is deployed wherever special demands are made on the lighting of a vision system. In this case, users select the lighting best suited to their vision application; it is controlled through the Basler SLP Controller and physically connected to the camera, which enables communication via the SLP feature. This solution suits users with lighting experience whose applications have specific lighting requirements.

THE EASY APPROACH – THE BASLER CAMERA LIGHT SERIES
The easy approach is to add lighting to a vision system by using a Basler ace U or L camera with the SLP feature and a Basler Camera Light with an integrated controller. The Basler Camera Light series offers a distinctive combination of high quality and cost-effectiveness that meets the strong demand for a standard lighting series for cost-oriented system designs. This approach suits users with less complicated applications who want to achieve results quickly.

In both setups, the Basler pylon Camera Software Suite supports easy integration, installation and operation, as it is the single software interface for both camera and light in the system. The perfectly matched components – Basler cameras with the SLP feature, the Basler SLP Controller and the Basler Camera Light series – allow an easy setup via plug and play. Easily accessible lighting functions such as strobing and overdrive can significantly increase the luminous efficacy and the lifetime of the LED lights used. In the overall package, both solutions offer high savings potential in material costs and valuable time across the entire process from acquisition to installation. MV
MACHINE VISION: AN ENABLING TECHNOLOGY
As Gardasoft celebrates 20 years of producing lighting controllers for machine vision, we present the second in a series of articles that examine how our fast-moving and exciting discipline evolved from its modest beginnings and what we can expect from future developments.
Forty years after the first embryonic machine vision systems saw the light of day, the machine vision industry has grown into a mature core technology. Established vision techniques are key to a diverse range of applications. Industrial inspection systems improve the quality and efficiency of manufacturing processes by decreasing the likelihood of a defective component progressing down the line. Video technology in sport can uphold or correct an official's decision and allow the watching public to see the outcome (even if the VAR system in football's English Premier League is experiencing some teething troubles!). Vision systems and robotics are being used to replace manual labour in numerous applications such as pick and place, palletisation and even picking and trimming vegetables. Automatic number plate recognition (ANPR) at car park security barriers is now commonplace, and the list goes on. In short, machine vision technology has become an integral part of our daily lives.

KEEPING PACE WITH DEVELOPMENTS
Machine vision is a fast-moving technology. Developments in processing power, CMOS camera sensors, illumination, optics, software capabilities and data handling constantly push at the boundaries of what's possible. Well-established techniques continually evolve and a constant stream of new technologies arrives in the industry. Higher-resolution CMOS sensors for both area scan and line scan cameras, faster operation, smaller physical size and ever more powerful image processing systems enable increasingly complex inspections to be carried out. There is increasing use of imaging systems at wavelengths outside the visible spectrum, such as short-wave infrared and long-wave infrared (thermal imaging), to reveal information not normally visible. Newer polarisation cameras can also reveal otherwise invisible information, including physical properties such as stress or birefringence. These cameras feature CMOS sensors with on-chip nanowire micropolarisers which allow on-sensor detection of the plane of light in four directions.
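For readers who want to see how those four directions are used, the standard Stokes-parameter arithmetic is short enough to sketch. This is the generic textbook formulation, not any particular camera vendor's API; the function and array names are ours.

```python
import numpy as np

# Sketch: degree and angle of linear polarisation from a four-direction
# polarisation sensor (per-pixel intensities at 0, 45, 90 and 135 degrees).

def linear_polarisation(i0, i45, i90, i135):
    s0 = (i0 + i45 + i90 + i135) / 2.0     # total intensity
    s1 = i0 - i90                          # horizontal vs vertical component
    s2 = i45 - i135                        # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree (0..1)
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of the plane
    return dolp, aolp
```

Stress or birefringence then shows up as spatial structure in the degree-of-polarisation image that is invisible in a plain intensity image.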
DATA HANDLING AND PROCESSING
The enormous volumes of image data generated by higher-resolution cameras make it important to choose the best transmission standard. This choice is usually made on the basis of the speed, cable length and configuration needed. GenICam provides a generic programming interface for all common interface technologies, and a standard feature naming convention for lighting devices has also recently been agreed. Data transmission is just part of the challenge, however, since many of the imaging techniques currently in use are particularly computationally intensive and have become front-line techniques thanks to faster PCs with FPGA and multicore embedded processor architectures. One such technique is hyperspectral imaging, which combines infrared spectroscopy with machine vision to allow the chemical composition of organic materials being
imaged to be determined. This has opened up major new possibilities for detecting impurities, notably in the food, pharmaceutical and agriculture industries. Deep learning, a form of machine learning, is now available as part of commercial software suites running on PCs with GPUs. Using sets of 'training' images, deep learning systems learn to recognise features or defects for classification purposes. We are also seeing neural networks that can be trained and then run directly on a dedicated camera with on-board processing power, opening up even more possibilities. Deep learning is particularly useful in classifying organic objects, where there are lots of natural variations.
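As a hedged illustration of what such a training workflow looks like in code, here is a generic PyTorch sketch of fine-tuning a small network on 'good' versus 'defect' images. The folder layout and hyperparameters are ours, and commercial machine vision suites wrap this kind of loop in their own tooling.

```python
# Minimal transfer-learning sketch: fine-tune a small CNN to classify
# 'good' vs 'defect' images. Paths and figures are illustrative only.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("training_images/", tfm)   # subfolders: good/, defect/
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

net = models.resnet18(weights="IMAGENET1K_V1")          # pre-trained backbone
net.fc = nn.Linear(net.fc.in_features, 2)               # two output classes
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                                  # a few passes suffice here
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(net(images), labels)
        loss.backward()
        opt.step()
```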
ILLUMINATION AND CONTROL
Optimising illumination is a critical requirement in machine vision in order to get the best possible image of the features to be measured, and this includes lighting control. Running LEDs in excess of their maximum rating for short periods for increased light output, pulsing the light to allow imaging of objects at high speeds, and maintaining a consistent light output are just some of the ways dedicated lighting controllers can help. Gardasoft's FP200 series of high-speed lighting controllers has extended trigger frequencies to up to 10 kHz, for high-speed pulsing applications such as line scan imaging. OLED panel lighting together with full lighting control capabilities opens up exciting new opportunities, thanks to the exceptionally stable and uniform light intensity across the width of the panel and the small form factor. Lighting controllers can also provide a wide range of illumination sequencing options involving multiple lights and/or multiple cameras. Multi-lighting schemes involve multiple lights being triggered individually at different intensities and durations in a predefined sequence from one trigger signal, allowing multiple measurements to be made using a single camera station. This reduces mechanical complexity and saves money because less equipment is required. Multi-light imaging using line scan cameras involves acquiring multiple views of an object during a single scan by capturing information from different illumination sources on sequential lines. Individual images are extracted using image-processing software. Sequential multi-shot imaging is also the principle used in computational imaging. Here a programmable lighting control system is used to generate a sequence of
images of an object using different illumination directions or wavelengths. Key information is extracted from each captured image in software and combined to form a composite image that contains information that cannot be seen in any individual image. The most popular use of this technique is photometric stereo, where an object is sequentially illuminated with light from four or more directions. Combining these images allows the shape and texture components to be separated.
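For the curious, the separation step has a compact classical form. The sketch below is the textbook Lambertian photometric stereo solution, not Gardasoft's or any vendor's implementation; the function name and array shapes are ours.

```python
import numpy as np

# Classical Lambertian photometric stereo: recover per-pixel surface
# normals (shape) and albedo (texture) from images of the same scene
# lit from known directions. Model: I = albedo * (N . L).

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) image stack; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # G = albedo * N, (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                   # texture component
    normals = G / np.maximum(albedo, 1e-9)               # shape component
    return albedo.reshape(h, w), normals.reshape(3, h, w)
```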
EMBEDDED SYSTEMS
Embedded vision generally refers to the integration of vision systems into machines or devices without the use of a PC for control and image processing. Smart cameras have the camera sensor, image capture, processor for image evaluation, vision software and the I/O interfaces, as well as, in some cases, the lighting and the lens, combined in the camera housing. The camera can be set up for a particular inspection with the results delivered directly to the process control system. Other embedded solutions include compact vision systems where multiple cameras can be connected to a dedicated image-processing controller, deep embedded vision and system on chip (SoC). Deep embedded systems have extremely low unit production costs, but high development costs, since they are developed for specific tasks and cannot be easily reprogrammed. SoC is an extremely flexible ARM-based embedded computer technology that enables bespoke systems with low investment and system costs using standard cameras and board-level cameras with standard interfaces. Integrating hardware such as FPGAs, GPUs or DSPs makes local pre-processing and data reduction possible. MV
[Figure: a typical line scan system with bright field and dark field inspection, showing two camera stations]
CUSTOM CONFIGURATIONS FROM ADVANCED ILLUMINATION

Machine vision systems are often as diverse as the objects they inspect, and the lighting required to illuminate these applications should be flexible enough to accommodate them. Whether you're inspecting specular, curved, textured or other unique surfaces, it's important to understand that the illumination solution used in your inspection system can have a significant impact on the application's success.

CHALLENGES IN SOURCING ILLUMINATION
Sourcing appropriate lighting systems can pose a challenge for some inspection applications, particularly in situations with tight spatial constraints or when lighting was not considered early in the application build process. At Advanced illumination (Ai), we believe the flexibility and degree of customisation provided by your lighting vendor in these circumstances can alleviate these added pressures and result in a more robust, successful application. After all, we built a company around it.

OUR MISSION
In 1993, we founded Advanced illumination on a model of delivering expandable LED lighting for machine vision – a vision that has since grown into an offering of hundreds of thousands of lighting configurations available within one to three weeks. We understand the unique challenges of machine vision, including the sheer variety of inspection applications, and we believe custom-built, quickly available solutions provide the greatest opportunity for success.

Ai's objective is to deliver solutions for these challenges by providing Build-to-Order, Semi-Custom and Custom lights that allow for complete flexibility. Our multi-tier approach to customisation results in unique lighting solutions with reduced engineering, design and documentation costs, often at no additional cost to the customer. Our adaptable build-to-order system delivers lighting configured from a large set of predetermined parameters within short lead times. And our customers agree; according to one lead engineer: "Being able to talk out the unique lighting challenges with the Ai team and get expert thoughts on wavelength, colour and light type is the best part about working with Ai."

UNIQUE VALUE-ADDED TOOLS FROM AI
To support users, we've developed an online configurator which places the power to customise lights in the hands of our customers. Through the innovative Ai Configurator, users can build each of our LED lights to their specifications by selecting their desired light size, wavelength, power source, cable length and more to best fit their inspection application. This is all accessible through the Advanced illumination website: simply visit a product page, click "Configure This Light" on the right-hand side of the page, and explore the thousands of configurations our expandable solutions offer. Should a customer have an application that requires customisation beyond the options in the configurator, our engineering team is just a phone call away. Our knowledgeable representatives can consult with you and recommend a solution designed to best meet your unique challenges, whether it's a Semi-Custom or Fully Custom product. Says one customer: "Advanced illumination's ability to solve our specific application challenges has made them stand out from the competition. Ai has some unique lights not available elsewhere, and we have only received exceptional support from their technical team." At Ai, we take pride in our ability to deliver our customers any degree of custom lighting to ensure precise, robust illumination for their unique inspection applications. MV
www.inspekto.com
WORLD'S FIRST AUTONOMOUS VISION INSPECTION SYSTEM
ANY INDUSTRY
ANY PROCESS
ANY PRODUCT
• 1,000 times faster to install – set up in 45 minutes
• No experts needed – plant QA personnel can install it themselves
• 1/10 the cost of a traditional solution – major savings, come rain or come shine
• 30 good parts only – no defective parts needed
Inspekto has reinvented industrial vision inspection. Our product – the INSPEKTO S70 – has disrupted the industrial vision inspection experience forever. Plug & Inspect™ runs three AI engines in tandem. It merges computer vision, deep learning and real-time software optimization technologies to achieve true plug-and-play vision inspection.
TO ARRANGE FOR A FREE VIRTUAL DEMONSTRATION VISIT WWW.INSPEKTO.COM OR EMAIL INFO@INSPEKTO.COM
HYPERSPECTRAL IMAGING IS CONQUERING THE INDUSTRY
Hyperspectral-based imaging is establishing itself in ever more industrial fields of application. How does this technology work and where can it be used? Conventional image processing systems have become established in industrial automation in recent years. Machines that can SEE can do more than systems without image processing, and this knowledge has now become established among developers and designers. While conventional 2D and 3D vision systems check the quality of objects by recognising certain error features on the surface, hyperspectral imaging (HSI) goes one step further: with the help of this technology, spectroscopic analysis of objects can be carried out in order to detect organic or inorganic impurities - not only on the surface, but in some cases also inside the inspected materials. HSI systems usually utilise 100 or more different wavelengths and a spectrograph that splits the light reflected from the object into its spectrum and reproduces it on the camera sensor. An HSI system assembles the resulting images into a three-dimensional hyperspectral data cube that can contain very large amounts of data. The result is a "chemical fingerprint" of the object under consideration, which enables the analysed material properties to be determined precisely. With the help of special evaluation software, each identified chemical component can then be flagged with its own colour in the images taken, in order to visualise the existing substances in a simple way for the user. This technology is called Chemical Colour Imaging (CCI).
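The core idea is easy to sketch. Below is a generic spectral angle mapper, one common way to match each pixel of the data cube against reference spectra; this is an illustrative formulation, not Perception Park's proprietary method, and all names are ours.

```python
import numpy as np

# Sketch of the CCI principle: compare each pixel's spectrum in the
# hyperspectral cube against reference spectra and label the best match.

def spectral_angle(spectrum, reference):
    """Angle between two spectra; smaller means more similar."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_cube(cube, references):
    """cube: (h, w, bands) array; references: dict of name -> (bands,) spectrum."""
    h, w, _ = cube.shape
    labels = np.empty((h, w), dtype=object)
    for y in range(h):
        for x in range(w):
            labels[y, x] = min(
                references,
                key=lambda name: spectral_angle(cube[y, x], references[name]))
    return labels  # map each label to a display colour to form the CCI image
```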
DIVERSE APPLICATIONS “Hyperspectral imaging can be used in a wide variety of industrial application areas and in certain cases offers
solutions for tasks where conventional image processing systems fail," explains Markus Burgstaller, managing director of the Graz-based company Perception Park, which specialised in this innovative technology a few years ago. As an example application, Burgstaller mentions the classification of substances that show no visual differences but are not chemically identical: "Plastics of various compositions can look very similar and can therefore hardly be classified by conventional image processing. HSI systems, on the other hand, are able to analyse chemical properties and therefore recognise materials very reliably. With this technology, the concentration and distribution of ingredients can largely be recorded in real time." A special feature of HSI systems makes them particularly attractive for certain applications: some substances are not transparent to visible light, but can be penetrated by infrared light. This makes it possible to check the chemical composition of packaged content through suitably designed packaging. According to Burgstaller, applications in which this property comes into play are found primarily in the pharmaceutical and food industries, but also in numerous other industrial segments.
FAULT DETECTION IN THE PHARMACEUTICAL INDUSTRY As in many other areas, production speeds in the pharmaceutical industry are increasing rapidly worldwide. In order to reduce the risk of product recalls and to protect consumers from contaminated drugs, particularly strict safety regulations apply in this industry. Image processing systems have therefore been state of the art in the manufacturing processes of pharmaceutical
products for some time, in order to evaluate products in real time according to criteria such as shape, size or weight. However, the use of HSI and CCI systems can further optimise the monitoring of pharmaceutical production processes: pharmaceuticals can be examined 100 per cent for their molecular properties. A typical application of HSI systems in the pharmaceutical industry is checking retard tablets for correct coating. This form of medication releases the active ingredient after its administration over a longer period of time, or to a specific target in the body. The sustained-release coating of the tablet is decisive for this controlled release of the active ingredient: if it is damaged or missing completely, the medication gets into the body faster than desired and fails to deliver its long-term effect.

With a combination of HSI and CCI technology, the quality of retard-based medication can be reliably controlled, explains Burgstaller: "With a hyperspectral camera working in the NIR range and the use of chemical colour imaging technology with our software suite Perception STUDIO, we were able to clearly demonstrate that previously artificially generated coating defects can be recognised with 100 per cent certainty, and in real time in high-speed production." This quality check is possible even through blister packs, provided the blister material is not made of aluminium, which would reflect the NIR radiation.
According to Burgstaller, testing retard coatings is just one of many possible use cases for HSI technology in the pharmaceutical industry. It can also be used to reliably check whether the correct number of tablets are packed in blisters, whether they are undamaged and free from any contamination, whether the correct ingredients are contained in drug capsules or whether they are completely sealed. “Hyperspectral imaging offers numerous application options in the pharmaceutical sector and thus increases safety for patients and manufacturers.”
MORE SECURITY IN FOOD PRODUCTION
Guidelines similar to those in the pharmaceutical industry apply to the production of food: to exclude health risks to consumers, no contamination may remain undetected in the products. The food must also contain exactly the ingredients that the manufacturer intends and that are defined in the product descriptions for the consumer. HSI and CCI also offer numerous application options for this industry. These techniques make it easier to find any contamination in food and to identify unwanted objects
such as stones or earth when sorting potatoes, carrots or other vegetables, as well as shell fragments or other substances in the production of nuts, even on high-speed production lines. If food is stored for too long, maggots can nest in it or fresh fruit can start to rot, notes Burgstaller: "In food production, the task very often is not only to detect any contamination, but also to identify rotten, immature, or pest- or mould-infested goods. These and many other quality defects can be safely ruled out by the use of hyperspectral imaging systems."
Industrially manufactured foods such as sausage and cheese are usually offered for sale shrink-wrapped. As in the pharmaceutical industry, HSI systems in many cases also allow quality checks through the packaging. A special task here is the inspection of heat-sealed joints, which are supposed to guarantee absolutely tight packaging of the food. Even the smallest contamination of or damage to these sealed joints can lead to leaky packaging and spoilage of the goods before the calculated best-before date. Unsaleable products or expensive product recalls would then be possible consequences for manufacturers, which can in many cases be avoided by using hyperspectral imaging.
HSI IN WOOD PROCESSING
In recent decades, woodworking technologies have made extraordinary strides. There are also many options in this field of application to check the quality of products such as sawn timber, wood-based materials, wood chips, and paper and paper products with HSI systems. Even features that are invisible to humans, and possible defects, can be reliably detected. The detection of defects such as resin pockets or knotholes is a common task in this industry, which can be reliably solved with the help of a hyperspectral system in combination with a near-infrared hyperspectral camera. Resin in wood can be identified reliably even if it is covered by a thin layer of wood. Even adhesives, which are often used in the production process to fill small holes, can be easily recognised using chemical colour imaging - a task at which conventional image processing cameras often fail, because the adhesive is usually transparent. Another important characteristic of wood is its moisture. With HSI analyses, damp spots in the wood can be clearly identified and displayed as a CCI image.
It is even possible to calibrate a perception system to measure the fraction of water. By adapting a hyperspectral camera to such a calibrated perception system, Chemical Colour Imaging transforms the camera system into an easy-to-understand “moisture camera” for wood and can be implemented into any image processing system. “For high-quality wood products, the reliable detection of such defects is an imperative,” emphasises Burgstaller. “HSI systems offer a powerful tool for this industry to avoid undesirable quality defects in time.”
HSI SYSTEMS FOR PLASTICS SORTING
Even at the end of their lifespan, plastics are still too valuable to simply throw away. If the full potential of the currently deposited plastic waste were used in an environmentally friendly way, applying the best recycling and energy recovery methods and technologies, many millions of tonnes of plastic could be recycled. This would also make it possible to generate large amounts of heat and electricity. Appropriate measures are required for such improvements, in order to stop the disposal of plastics and to set up recovery-oriented collection systems. These must be reconciled with a modern sorting infrastructure and improved recycling and recovery processes in order to exploit the full potential of this valuable resource. Recycled plastics can be reused in many everyday products, e.g. in clothing, vehicle parts and packaging, and for many other purposes. However, too little plastic is currently being recycled, although innovative technology such as Perception Park's Perception STUDIO offers the needed potential.

"Distinguishing, for example, between polypropylene and polyethylene, or other materials that look very similar at first glance, is made easy by hyperspectral imaging. With our Perception STUDIO, appropriate systems can be developed even by people who have little or no experience with spectroscopy. The technical possibilities for a significant expansion of recycling quotas for plastics are therefore already available, and for environmental reasons they should be used substantially," said Burgstaller, expressing his hopes for the future. MV
EXPERIENCE HYPERSPECTRAL TECHNOLOGY LIVE
The fourth conference on Hyperspectral Imaging in Industry (chii) will be held in Graz, Austria on 28-29 October 2020. The focus of the meeting is the use and application of hyperspectral systems, through short lectures by leading technology providers and an exhibition. Matchmaking meetings are also planned to answer individual questions. Programme details and registration for chii2020 can be found at www.chii2020.com
THE $20B INDUSTRY Augmented and mixed reality: what is it, and where is it going?
Augmented Reality (AR) and Mixed Reality (MR) are two technologies which have become more prominent in the past 10 years. AR is the use of computer technology to superimpose digital objects and data on top of a real-world environment. MR is similar to AR, but the digital objects interact spatially with the real-world objects, rather than being superimposed as "floating images" on top of the real-world objects.
XR is a term that has become more prominent in the last few years. It encapsulates virtual, augmented, and mixed reality topics. The definition of each of these has become saturated in the past decade, with companies using their own definitions for each to describe their products. The latest IDTechEx Report, Augmented, Mixed and Virtual Reality 2020-2030, distils this range of terms and products, compares the technologies used in them, and produces a forecast for the market’s next decade. The report examines 83 different companies and 175 products in VR (virtual reality), AR (augmented reality) and MR (mixed reality) markets.
AR and MR are also closely related to VR. There is a crossover in application and technology, as some VR headsets simulate the real space and then add extra artificial content for the user in VR. For the purposes of this article, however, AR and MR products are considered those which allow the user in some way to directly see the real world around them. The main target sectors of AR and MR appear to be in industry and enterprise markets. With the high costs of individual products, there appears to be less penetration into the consumer space. AR and MR products are being used in a variety of settings. One way they are being used is to solve a problem called "the skills gap". This describes the large portion of the skilled workforce who are expected to retire in the next 10 years, leading to a loss of the knowledge and skills of this workforce. This knowledge needs to be passed on to new, unskilled employees. Some companies propose that AR/VR technology can fill this skills gap and pass on
this knowledge. This was one of the key areas discussed at events IDTechEx analysts attended in 2019 while researching this report. AR use in manufacturing and remote assistance has also grown in the past 10 years, leading some AR companies to target primarily enterprise spaces over the consumer space. Although there have been fewer direct needs or problems which AR can solve for a consumer market, smartphone AR can provide an excellent starting point for technology-driven generations to create, develop and use an XR-enabled smartphone for entertainment, marketing and advertising purposes. One example of smartphone AR mentioned in the report is IKEA Place, an application with which a user can place a piece of IKEA furniture in their room to compare against their current furniture. It gives users a window into how AR can be used to supplement their environment and be used in day-to-day activities such as purchasing and visualising products bought from an internet marketplace. AR and MR companies have historically received higher funding per round than VR - e.g. Magic Leap, which has had $2.6Bn in funding since its launch in 2017, but only released a creator's edition of its headset in 2019. AR and MR products tend to be more expensive than VR products as they are marketed to niche use cases.
The report compares both augmented and mixed reality products and splits them into three categories: PC AR/MR, standalone AR/MR and smartphone/mobile AR/MR - PC products, which need a physical PC attachment; standalone products, which do not require a PC; and smartphone products, which use a smartphone's capabilities to implement the immersive experience. Standalone AR/MR has had more distinct product types in the past decade, and this influences the decisions made when forecasting the decade to come. The report predicts an AR/MR market worth over $20Bn in 2030, reflecting the high interest around this technology. It also provides a complete overview of the companies, technologies and products in augmented, virtual and mixed reality, allowing the reader to gain a deeper understanding of this exciting technology. In conclusion, VR, AR and MR, as with nearly any technology area, must build on what has come before. The technology is heavily invested in, targeting the future potential of XR headsets. For a deeper understanding of the technology, the companies and the products, see the report Augmented, Mixed and Virtual Reality 2020-2030.
MV
MVTEC INNOVATION DAY
The third MVTec Innovation Day earned high praise as more than 200 delegates attended the event in Munich to discover the latest insights into machine vision software. Targeted at developers, programmers and experienced machine vision users, MVTec experts illustrated trends, technologies and solutions in a variety of presentations. The one-day event, on February 20, covered a range of topics aimed at meeting industry needs. The focus was on the latest trends and technologies as well as their application in daily practice. One of these was efficient anomaly detection in deep-learning-based inspection tasks using the standard machine vision software MVTec HALCON. Other topics included deep-learning-based optical character recognition (OCR), examples of the latest embedded vision applications, HALCON's generic box finder for pick-and-place applications, and identification via a subpixel bar code reader. MVPro Media partnered with MVTec to profile one of the businesses attending, plucked at random. That prestigious honour goes to AKU Automation. Managing director Markus Kohnle provides an insight into the company and the benefits of attending the MVTec Innovation Day.
WHAT IS THE BACKGROUND OF THE COMPANY?
Thomas Abt and I founded aku.automation in 2007 in Aalen, where the headquarters are still located today. We have developed from a local high-tech forge in Aalen into a global specialist for image processing in the field of industrial automation. Many demanding customers around the world rely on aku.automation. We began with five employees, but over the past 13 years the premises have expanded, new systems have been developed, a branch in Linden in Hesse has been established and sales offices have opened across Germany. The Wernberg office opened in November 2018 and serves as a technical base in the Upper Palatinate. We now have more than 50 employees, and that number is growing. At the start of March, Boris Gierszewski was appointed commercial director.
WHAT SERVICES/PRODUCTS DO YOU OFFER?
We are a supplier of systems and components in the field of industrial image processing. Our core competence is system solutions, which are manufacturer-independent and individually adapted. So far, we have implemented over 3,000 systems and successfully placed them in the market. The components we offer are image sources, cameras, lenses, illumination and appropriate accessories. In addition, we run an academy for training courses and workshops, where you can choose between basic training and application-specific training.
WHAT IS YOUR RELATIONSHIP WITH MVTEC AND HOW DO THEY HELP YOUR BUSINESS?
As a CIP (Certified Integration Partner) we have a very close relationship with MVTec. The benefits of this are that we are aware of what is on their agenda, we can influence the roadmap and we have a good exchange of ideas with the developers.

WHAT WERE THE BENEFITS OF BEING AT THE MVTEC INNOVATION DAY?
It is very important to learn which technologies, trends, methods and algorithms are showing up in machine vision. We offer state-of-the-art, cutting-edge machine vision systems, so this is an opportunity to meet partners as well as potential customers, and we get some ideas about what's going on to the left and right of our own business. If there was one takeaway from the event, it is that in deep learning, and also in the field of 3D, there are a lot of new applications solving what wasn't previously possible. MV

THE FUTURE DEPENDS ON OPTICS™
NEW: High Resolution Lenses for Large Format Sensors
A new range of lenses specifically designed for large format high resolution sensors.
CA Series: Optimized for APS-C sensors featuring TFL mounts for improved stability and performance.
LH Series: An ultra-high resolution design for 120 MP sensors in APS-H format. Also compatible with 35 mm full frame sensors.
LS Series: Designed to support up to 16K 5 micron 82 mm line scan cameras with minimal distortion.
Get the best out of your sensor and see more with an imaging lens from Edmund Optics. Find out more at: www.edmundoptics.eu
UK: +44 (0) 1904 788600 | GERMANY: +49 (0) 6131 5700-0 | FRANCE: +33 (0) 820 207 555 | sales@edmundoptics.eu
FACTORY CONNECTIVITY: EASY AS ONE, TWO... TEA
How many steps does it take to make a teabag? Before it makes its way to your mug, the leaves have been rolled, fermented, sieved, inspected and bagged in a factory, and then strings, tags and packaging have been added. Without automation, which has done wonders for streamlining production, this would be a slow process. Today, many facilities are striving for complete plant automation, but how feasible is it? Martyn Williams, UK managing director of automation software provider COPA-DATA, explains.
The media is awash with recommendations for manufacturers on how to invest in automation, with warnings of dire consequences for those that fail to keep up. Despite the pressure, a study by Cisco suggests that just 26 per cent of Internet of Things (IoT) projects can be considered a complete success. Perhaps more worryingly, approximately 15 per cent of projects currently in progress will either be delayed or will fail completely. Considering these statistics, achieving complete automation sounds like a daunting task. Thankfully, it needn't be. Complete plant automation doesn't require manufacturers to reach "lights out" production standards – a term used to describe factories that can operate automatically with no need for human intervention. In reality, complete plant automation describes a manufacturing facility that is entirely connected and can therefore be optimised and automated to its full potential.
A SUSTAINABLE ROUTE TO AUTOMATION
Achieving total automation in a facility should be an end goal for manufacturers with digitalisation plans, but
there is no reason why this cannot be achieved through incremental changes. This can mean starting with a much smaller automation project, such as one focussed on improving a small area of production. Keeping the project manageable allows plant managers to determine whether the investment is worthy of site-wide deployment. Focussing on a specific metric, like energy consumption, allows manufacturers to determine whether the project has been a success and how the technology would perform if deployed site-wide. That's providing an energy data management system is used to measure efficiency before, during and after deployment. Only after careful evaluation can manufacturers begin to consider scaling up the project and making a larger investment towards complete plant automation. That being said, this method does not necessarily connect the entire facility. Moreover, this incremental process of deploying new automation risks communication issues due to technology incompatibility. When attempting complete automation, a common objection from manufacturers is that a high level of connectivity is not possible in facilities that use legacy equipment or machinery from several original equipment manufacturers (OEMs). Many believe that an entire system
overhaul is necessary to get the plant to a homogeneous standard. However, that is not always the case.
HARDWARE AGNOSTIC SOFTWARE A method of achieving complete plant automation is to consolidate existing production technology onto a single platform. While this doesn’t homogenise communication standards, it can provide operators with a complete view of their production, allowing them to optimise it accordingly. This can be achieved by deploying software that can communicate across several standards, including those of new and legacy equipment.
To achieve complete automation, software platforms must be hardware agnostic. This means that, regardless of the equipment manufacturer or the communication standard of the equipment, data can be seamlessly acquired and fed to one central tool. With over 300 native drivers and communication protocols, COPA-DATA's zenon platform can provide this. The software can seamlessly connect a facility with a mixture of old, new and proprietary programmable logic controllers (PLCs). Using open standards, such as OPC Unified Architecture (OPC UA), and standardised industrial protocols, such as Modbus and BACnet, the technology makes it possible to converge all equipment in a facility regardless of age, brand or communication standard. From here, manufacturers can turn data into insights.
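To make the idea concrete, here is a generic OPC UA read in Python using the open-source opcua package. This illustrates hardware-agnostic acquisition in general, not zenon's internals, and the endpoint URL and node identifier are placeholders.

```python
# Generic OPC UA read: the same client call works whatever PLC brand
# sits behind the server. Endpoint and node id are hypothetical.
from opcua import Client

client = Client("opc.tcp://plc.example.local:4840")
try:
    client.connect()
    node = client.get_node("ns=2;s=Fermentation.Temperature")
    print("Fermentation temperature:", node.get_value())
finally:
    client.disconnect()
```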
Creating a central platform means that manufacturers use data to identify where production can be improved. Again, consider tea bag manufacturing as an example. Industrial fermentation of tea leaves requires high levels of humidity for oxidation, which means temperatures must be regulated with controlled timing. Software to visualise the entire production line could be used to detect delays in earlier processes that could postpone production. Using this knowledge, operators can reschedule the time in which the fermentation area needs to be preheated, to save energy and reduce waste. As an automated software platform, zenon also allows operators to set parameters that ensure processes are always optimised in this data-informed manner. This includes integration with operations typically associated with the IT realm, such as enterprise resource planning (ERP). Should an unavoidable production error result in a defective batch of tea bags, zenon can automatically alert the purchasing team of this problem, to ensure the inventory is restocked and production is rescheduled to replace the unfulfilled order.
CREATING A UNIFORM SYSTEM ENVIRONMENT Plant automation is not necessarily limited to industrial and IT processes. Taking it a step further, software platforms can also integrate building technology and HVAC control to give manufacturers control of every aspect of their facility. Automation has long been used to enhance production and efforts to achieve complete plant automation are taking these advantages a step further. This is allowing manufacturers to strip out inefficiencies and reap more benefits. Achieving this level of connectivity may seem daunting, but it doesn’t have to be. Taking incremental steps, by experimenting with small automation projects and connecting existing equipment to centralised software, is a more feasible option. Despite the media buzz — with its recommendations or warnings of dire consequences — complete plant automation can be achieved through methods that are sustainable and risk-free. Indeed, achieving these levels of connectivity could be as simple as making a brew. MV
CALCULATING ROBOT ROI
Determining the total cost and return on investment (ROI) of a robot isn't straightforward, as Nigel Smith, managing director of industrial robot supplier TM Robotics, explains.
When you factor in the robot's engineering and maintenance costs, budgeting isn't always as easy as requesting a quote. As well as installation costs, factories may need to build segregated work areas or additional backup power units before a robot can be deployed. That's not to mention peripheral technology such as sensors, variable robot grippers and any necessary mounting apparatus. A report by the Boston Consulting Group suggested that, in order to arrive at a solid cost estimate for robots, customers should multiply the machine's price tag by a minimum of three. If a six-axis robot costs £65,000, for example, customers should therefore budget £195,000 for the investment. That said, should the robot require a more extensive equipment overhaul, a multiplication of four or five times the cost of the robot may be required. Then, of course, there are variable costs to contemplate. These include the labour, energy, materials, ongoing maintenance and production supplies required to deploy a robot successfully. Due to the varying nature of manufacturing facilities, these costs can fluctuate dramatically depending on the industry sector and the size of the operation. Manufacturers can only calculate the ROI of an investment after establishing the robot's total purchasing cost. Even
then, manufacturers must consider several other elements, starting with robot use. Consider the following example. A food manufacturer plans to use two SCARA robots to automate pick-and-place processes. The robots will run for three shifts a day, six days a week, 48 weeks of the year. The equivalent labour usually requires two operators per shift, equating to six operators to complete the same throughput over a working week. Using the average salary of a UK production operative as an example, at £25,000 per annum, removing these roles would reduce labour costs by £150,000 a year. However, human labour is not eliminated. A good rule of thumb for labour estimations alongside a robotic system is 25 per cent of current costs, reducing the total labour budget to an impressive £37,500 per year. Subtract this remaining labour cost from the £150,000 to get the annual saving, set that against the total robot purchasing cost determined earlier, and manufacturers have an estimated return for the first year (a worked sketch follows below).
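To make the arithmetic concrete, here is a minimal sketch of the calculation just described. The figures are the article's illustrative ones, and we reuse the earlier £65,000 six-axis price for the SCARA robots purely as a placeholder; a real analysis would add energy, maintenance, materials and risk.

```python
# Worked sketch of the simple ROI arithmetic described above.
# All figures are illustrative; the robot price is a placeholder.

robot_list_price = 65_000            # GBP per robot (placeholder figure)
cost_multiplier = 3                  # Boston Consulting Group rule of thumb
num_robots = 2

total_cost = robot_list_price * cost_multiplier * num_robots   # 390,000

displaced_labour = 6 * 25_000        # six operators at 25,000 GBP each
remaining_labour = 0.25 * displaced_labour                     # 25% rule of thumb
annual_saving = displaced_labour - remaining_labour            # 112,500

print(f"Total cost: £{total_cost:,}")
print(f"Annual labour saving: £{annual_saving:,.0f}")
print(f"Simple payback: {total_cost / annual_saving:.1f} years")
```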
That said, there are some flaws in this method of ROI calculation. Many of these figures are estimates. For a true reflection of ROI, manufacturers should conduct a thorough cost analysis based on the operations of their facility, as well as a risk assessment. But what about the complementary benefits of robots that aren't considered in this calculation? Robots offer peace of mind by delivering productivity gains that improve a factory's bottom line. For instance, eliminating the likelihood of human error in manufacturing processes can reduce scrap material, minimise reworks and improve the consistency of products. According to the Annual Manufacturing Report 2019, over three quarters of manufacturers are ready to invest
in new technologies to boost productivity. No doubt, robots will be among these investments. However, as manufacturers consider investing in robotic technology for productivity, it's vital they have clarity on ROI to validate purchasing decisions. MV
GREEN VISION
TRACKING DOWN CLIMATE CHANGE
To investigate long-term temperature changes, climate researchers on Germany's highest mountain, the Zugspitze, can for the first time measure the distribution of the most important greenhouse gas in climate-relevant air layers using a high-performance laser system. Coherent explains how its excimer laser is used to measure the stratosphere.
Powerful global networks of lidar systems for remote sensing of natural and man-made atmospheric trace gases in our atmosphere are becoming increasingly important against a background of global warming. A new generation of lidar systems is the prerequisite for predicting the climatic influence of greenhouse gases and their transport processes more accurately in future, by creating meaningful concentration and temperature profiles over a wide range of altitudes. The Raman lidar method is used to explore the important greenhouse gas water vapour. A UV laser pulse is emitted into the atmosphere and the resulting backscattered signal, which is influenced by the water molecules, is captured by a collecting mirror. The signal is measured in a time-resolved manner, so that the height from which the signal originates can be determined. The Raman scattering intensity decreases strongly with increasing height, which is why the laser technologies in use up to now, delivering considerably lower UV output, could only view the troposphere - the atmospheric layer which determines our weather.
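The time-of-flight arithmetic behind that height resolution is standard lidar geometry and easy to sketch; this is the generic relation, not Coherent's processing chain, and the figures are ours for illustration.

```python
# Lidar time of flight: backscatter arriving t seconds after the pulse
# was emitted comes from height h = c * t / 2 (vertically pointing system).
C = 299_792_458.0  # speed of light, m/s

def height_from_delay(delay_s):
    return C * delay_s / 2.0

# e.g. a return delayed by 147 microseconds scattered from roughly the
# 22 km stratospheric altitude quoted below:
print(round(height_from_delay(147e-6) / 1000, 1), "km")  # -> 22.0 km
```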
CLIMATE RESEARCH BENEFITS FROM OPTICAL HIGH TECHNOLOGY
With the aim of extending the height profile of the water vapour concentration into the stratosphere for the first time, and thus investigating its possible influence on global warming, the scientists of the Schneefernerhaus environmental research station on the Zugspitze in the German Alps opted for a particularly powerful 350 W UV excimer laser from Coherent. They modified the excimer laser to generate linearly polarised UV pulses with a small line width and reduced beam divergence. The remaining narrow-band laser emission, with an average power of 180 W, is still 10 times that of a powerful UV Nd:YAG laser system. In combination with four times larger collection optics, a 40 times better signal-to-noise ratio could be achieved compared with the Raman lidar systems available so far. This means that, for the first time, the greenhouse gas water vapour can now be detected more accurately, quantitatively, a factor of 10 faster and further into the atmosphere than ever before, namely up to a height of over 22 km.

The unique Zugspitze lidar thus makes an important contribution to climate research. Additional high-performance lidar stations in other parts of the world are needed to understand possible transport processes of water vapour into the stratosphere and the resulting temperature-increasing feedback mechanisms on a global scale. MV
BRINGING THE GREEN REVOLUTION TO ELECTRONICS
From biomemory to implants, researchers are looking for ways to make more eco-friendly electronic components.
Researchers are investigating how to make electronic components from eco-friendly, biodegradable materials to help address a growing public health and environmental problem: around 50 million tonnes of electronic waste are produced every year. Less than 20 per cent of the e-waste we produce is formally recycled. Much of the rest ends up in landfills, contaminating soil and groundwater, or is informally recycled, exposing workers to hazardous substances like mercury, lead and cadmium. Improper e-waste management also leads to a significant loss of scarce and valuable raw materials, like gold, platinum and cobalt. According to a UN report, there is 100 times more gold in a tonne of e-waste than in a tonne of gold ore.
While natural biomaterials are flexible, cheap and biocompatible, they do not conduct an electric current very well. Researchers are exploring combinations with other materials to form viable biocomposite electronics, explained Ye Zhou of China's Shenzhen University and colleagues in the journal Science and Technology of Advanced Materials.
The scientists expect that including biocomposite materials in the design of electronic devices could lead to vast cost savings, open the door for new types of electronics thanks to the materials' unique properties, and find applications in implantable electronics due to their biodegradability. For example, there is widespread interest in developing organic field-effect transistors (FETs), which use an electric field to control the flow of electric current and could be used in sensors and flexible flat-panel displays.
Flash memory devices and biosensor components made with biocomposites are also being studied. For example, one FET biosensor incorporated a calmodulin-modified nanowire transistor. Calmodulin is an acidic protein that can bind to different molecules, so the biosensor could be used for detecting calcium ions.
Researchers are especially keen to find biocomposite materials that work well in resistive random access memory (RRAM) devices. These devices have non-volatile memory: they can continue to store data even after the power is switched off. Biocomposite materials are used for the insulating layer sandwiched between two conductive layers. Researchers have experimented with dispersing different types of nanoparticles and quantum dots within natural materials, such as silk, gelatin and chitosan, to improve electron transfer. An RRAM made with cetyltrimethylammonium-treated DNA embedded with silver nanoparticles has also shown excellent performance.
"We believe that functional devices made with these fascinating materials will become promising candidates for commercial applications in the near future with the development of materials science and advances in device manufacturing and optimization technology," the researchers conclude. MV
THE FUTURE LOOKS GREEN
REDUCING THE ENVIRONMENTAL IMPACT OF A MANUFACTURING FACILITY
People are doing what they can to help the environment. However, creating real impact takes more than installing a solar panel or swapping to energy-efficient lightbulbs. Manufacturers should consider how small but meaningful changes in their facilities can make a genuine difference. Neil Ballinger, head of EMEA sales at industrial automation parts supplier EU Automation, explains how we can create the green factories of the future.
Demand for resources is growing. According to the World Business Council for Sustainable Development (WBCSD), the world is currently on track to consume four Earths' worth of resources by 2050. Governments across the world have warned that everyone, from homeowners to large, global manufacturers, must consider how they can reduce demand for resources such as energy and raw materials to cut carbon emissions and safeguard the planet.
MAKE IT CIRCULAR
Most facilities currently follow a linear model of make, use and dispose, which creates a lot of waste: each product has only one life, and left-over energy and material are discarded. The circular model differs in that it encourages manufacturers to keep resources in use for as long as possible. Manufacturers should consider how they can design waste out of the production process, the goods manufactured and the everyday running of the facility. For example, powering large facilities requires huge amounts of energy and water, which can be very costly, and some of this energy will be wasted during production. Manufacturers can look at recovering waste streams, such as wastewater or excess heat, to help power the facility. Facilities with high levels of automation can also save energy by reducing lighting and heating in areas where no human workers are present, as sketched below.
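As a toy illustration of that last point (not a description of any specific vendor's system; the zone names and the 15-minute timeout are assumptions), an occupancy-based lighting rule might look like this:

```python
import time

# Illustrative sketch: dim lighting in zones with no recent human presence.
# Zone names and the 15-minute timeout are assumed values for illustration.
UNOCCUPIED_TIMEOUT_S = 15 * 60

# Timestamp of the last motion detected in each zone.
last_motion = {"assembly": time.time(), "warehouse": time.time() - 3600}

def lighting_level(zone: str, now: float) -> int:
    """Return a lighting level in percent based on recent occupancy."""
    idle = now - last_motion[zone]
    return 100 if idle < UNOCCUPIED_TIMEOUT_S else 10  # keep a safety minimum

now = time.time()
for zone in last_motion:
    print(zone, lighting_level(zone, now), "%")
```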
MACHINERY
Sophisticated assembly lines require automation and equipment that use a lot of energy and must be regularly maintained to run efficiently. If any of the equipment breaks down, manufacturers must make quick decisions to return to production and avoid financial losses due to downtime. When a machine breaks down, manufacturers can choose to repair or replace it. To extend the lifetime of the machine, reduce costs and reduce environmental impact, manufacturers should consider repairing it. Industrial automation parts suppliers can source the broken part, whether new, reconditioned or obsolete, and deliver it to the facility quickly so it can return to production.
ARE YOU SURE YOU WANT TO PRINT THAT?
Manufacturers are introducing more automation to their assembly lines to improve productivity and efficiency. However, some manufacturers do not realise the full potential of connected devices. By transferring internal protocols from paper to digital mobile devices, facility workers can reduce their reliance on paper, in turn reducing their carbon emissions. Going paperless can also improve the logistics of everyday activity. The ability to access real-time information about inventory, orders and administration from anywhere on or off site gives manufacturers the visibility they need to improve productivity.
Manufacturers cannot ignore the importance of reducing carbon emissions in production facilities. Investing in renewable energy and sustainable materials is important, but it is not the only way to improve sustainability. By improving visibility of data and operations, extending equipment lifecycles and following a circular model, manufacturers can improve productivity without negatively impacting the environment. MV
RETROFITTING LEGACY EQUIPMENT
After her mother took away her devices, a teen girl went viral after allegedly tweeting from her fridge. The reason this was so entertaining is that it's unexpected: you don't typically associate fridges with communication, just as you don't with legacy equipment. Here Jonathan Wilkins, director at automation equipment supplier EU Automation, discusses the issues and solutions surrounding retrofitting legacy equipment with smart technology.
ISSUES WITH LEGACY EQUIPMENT
Functional legacy equipment, such as drives, sensors and PLCs, is often the backbone of a factory. As technology progresses, these machines may need to be integrated with newer machines, which come equipped with data collection and communication capabilities. This can cause connectivity and interoperability issues for manufacturers. New machinery is saturating the market at an accelerated rate, even though older models still have life left in them. We don't want legacy machinery to be left behind; replacing the backbone of the facility would be costly and time-consuming. However, manufacturers don't want to be held back from collecting data on their processes and equipment that could hold valuable insights. So, aside from ripping entire systems out and replacing them with new versions, what can we do?
FINDING A SOLUTION
Design engineers should aim to draw up a roadmap of the factory's existing digital capabilities and focus on aims, targets and prioritised actions that will effectively increase business value. Some machinery may need replacing with new technology; however, retrofitting viable legacy equipment with smart technology can be far more cost effective than replacing the entire production line and can extend the lifespan of equipment.
The ultimate dream for many manufacturers is full digitisation, vertically and horizontally, across the company as well as its suppliers and distributors. Thankfully, manufacturers working towards this do not have to invest in a haul of new equipment. One step when upgrading systems is improving human-machine interaction by retrofitting a human machine interface (HMI) with an easier-to-use graphical interface, such as a touch screen, or with additional capabilities. For example, an HMI can often be integrated into the system simply by connecting a USB, RS-232 or RS-485 cable between the HMI and the PLC; if the units have wireless capabilities, it can be even easier. Smart sensors, which can measure vibrations, temperature and pressure, can be fitted onto legacy machinery, allowing data to be collected and made available across the whole factory network. This can feed into a predictive maintenance approach that gleans insights on machine performance and upcoming maintenance needs: if a smart sensor detects that a piece of equipment may break down, the manufacturer can take steps to order a replacement, as sketched below.
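As a toy illustration of that retrofit data path (a sketch only; the readings, baseline, threshold and suggested action are invented for the example, not taken from any real system):

```python
from statistics import mean

# Illustrative sketch: flag a machine for a replacement part when its
# vibration readings drift above a healthy baseline. All values are
# invented for the example.
BASELINE_MM_S = 2.8    # assumed healthy RMS vibration velocity, mm/s
ALERT_FACTOR = 1.5     # assumed alert threshold relative to baseline

def needs_replacement_part(recent_readings_mm_s: list[float]) -> bool:
    """True when the average recent vibration exceeds the alert threshold."""
    return mean(recent_readings_mm_s) > BASELINE_MM_S * ALERT_FACTOR

readings = [2.9, 3.1, 4.6, 4.8, 5.0]  # drifting upwards
if needs_replacement_part(readings):
    print("Order replacement part before failure")  # hypothetical action
```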
While these simple steps might not meet visions of tapping into every piece of equipment from your smart fridge, they do provide a good starting point for accessing, monitoring and controlling information remotely in factories. MV
THE TOP ROBOTICS TRENDS IN 2020
From 2020 to 2022, almost two million new units of industrial robots are expected to be installed in factories around the world. New technology trends and market developments enable companies to react to changing requirements. The International Federation of Robotics (IFR) highlights the top trends driving innovation. “Smart robotics and automation are vital to deal with new consumer trends, demand for product variety or challenges from trade barriers”, says Dr. Susanne Bieller, general secretary of the IFR. “New technological solutions pave the way for more flexibility in production.” Simplification, collaboration and digitalisation are key drivers that will benefit robot implementation.
ROBOTS GET SMARTER
Programming and installation of robots has become much easier. In practice, digital sensors combined with smart software allow direct teaching methods, so-called “Programming by Demonstration”. The task the robot arm is to perform is first executed by a human, who literally takes the robot arm and hand-guides it through the movements. The software then transforms this data into the digital program of the robot arm. In future, machine learning tools will further enable robots to learn by trial and error or by video demonstration, and to self-optimise their movements.
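In essence, the software records the hand-guided trajectory as a series of waypoints and replays it as the robot's program. A minimal sketch of that record-and-replay idea (generic illustrative Python, not any vendor's teaching API; the joint values are invented):

```python
import json

# Illustrative sketch of programming by demonstration: record joint positions
# while the arm is hand-guided, then replay them as the robot's program.
# Joint values below are invented; a real system samples them from encoders.

def record_demonstration(samples: list[list[float]]) -> str:
    """Store the hand-guided trajectory (one joint-angle list per sample)."""
    return json.dumps({"waypoints": samples})

def replay(program: str) -> None:
    """Send each recorded waypoint back to the arm (here: just print it)."""
    for wp in json.loads(program)["waypoints"]:
        print("move_to joints:", wp)  # a real controller would command the arm

demo = [[0.0, -1.2, 1.5], [0.1, -1.1, 1.4], [0.2, -0.9, 1.3]]
replay(record_demonstration(demo))
```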
ROBOTS COLLABORATE WITH WORKERS
Human-robot collaboration is another important trend in robotics. With the ability to work in tandem with humans, modern robotic systems are able to adapt to a rapidly changing environment. The range of collaborative applications offered by robot manufacturers continues to expand. Currently, shared workspace applications are most common: robot and worker operate alongside each other, completing tasks sequentially. Applications in which the human and the robot work at the same time on the same part are even more challenging. Research and development (R&D) focuses on methods to enable robots to respond in real time. Just as two human workers would collaborate, the R&D teams want robots to adjust their motion to their environment, allowing for truly responsive collaboration. These solutions include voice, gesture and recognition of intent from human motion. With the technology of today, human-robot collaboration has already shown huge potential for companies of all sizes and sectors. Collaborative operations will complement investments in traditional industrial robots.
[Image: Robots collaborate with workers on different levels © IFR]
ROBOTS GO DIGITAL
Industrial robots are central components of the digital, networked production used in Industry 4.0. This makes it all the more important for them to be able to communicate with each other, regardless of manufacturer. The so-called “OPC Robotics Companion Specification”, developed by a joint working group of the VDMA and the Open Platform Communications Foundation (OPC), defines a standardised generic interface for industrial robots and enables them to connect into the Industrial Internet of Things (IIoT).
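In practice, a standardised interface means any OPC UA client can read a robot's state without vendor-specific drivers. A minimal sketch using one common open-source client library, python-opcua (the endpoint URL and node ID below are placeholders for illustration, not values defined by the Companion Specification):

```python
from opcua import Client  # python-opcua; an assumed library choice

# Illustrative sketch of reading robot state over OPC UA. The endpoint URL
# and node ID are placeholders; a controller implementing the Robotics
# Companion Specification exposes robot state in a standardised address
# space, which a client can browse to find the real nodes.
ENDPOINT = "opc.tcp://robot-controller:4840"  # placeholder address

client = Client(ENDPOINT)
client.connect()
try:
    node = client.get_node("ns=2;i=1001")  # placeholder node ID
    print("robot state:", node.get_value())
finally:
    client.disconnect()
```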
The digital connectivity of robots with, for example, cloud technology is also an enabler for new business models. Robot leasing, known as Robots-as-a-Service, offers advantages that might be especially attractive for small and medium-sized enterprises (SMEs): no committed capital, fixed costs, automatic upgrades and no need for highly qualified robot operators. MV
FILTERS: A NECESSITY, NOT AN ACCESSORY.
INNOVATIVE FILTER DESIGNS FOR INDUSTRIAL IMAGING
MIDOPT.COM