Stemmer Imaging: The Three Pillars Of Success | MVPro 19 | March 2020


STEMMER IMAGING: THE THREE PILLARS OF SUCCESS

THE IMPACT OF SMART CAMERAS

HOW SAFE ARE COBOTS?

AN INNOVATION IN EDUCATION

ISSUE 19 - MARCH 2020

mvpromedia.eu MACHINE VISION & AUTOMATION



MVPRO TEAM

Lee McLaughlan, Editor-in-Chief - lee.mclaughlan@mvpromedia.eu
Alex Sullivan, Publishing Director - alex.sullivan@mvpromedia.eu
Cally Bennett, Group Business Manager - cally.bennett@mvpromedia.eu
Becky Oliver, Graphic Designer
Spencer Freitas, Campaign Delivery - spencer.freitas@cliftonmedialab.com

Contributors: Denis Bulgin, Pascal Echt, Jim Heppelmann, Dan McCarthy, Sean Robinson, Nigel Smith

CONTENTS

4 EDITOR’S WELCOME - Planes, trains and automobiles
6 INDUSTRY NEWS - Who is making the headlines
10 PRODUCT NEWS - What’s new on the market
14 STEMMER IMAGING - UK MD Mark Williamson on the three pillars to success
18 DENIS BULGIN - It’s good to talk
20 FACIAL RECOGNITION - What is the future?
21 HYPERSPECTRAL IMAGING - A $25b market
22 SMART CAMERAS - Dan McCarthy on their impact
24 BASLER - Dive into AI with NXT Ocean
26 GARDASOFT - A machine vision success story
28 CEI - Making a case for a case
32 AUTOMATION TECHNOLOGY - Modular 3D laser triangulation
34 ADVANCED ILLUMINATION - High-intensity LEDs
36 TELEDYNE IMAGING - Sherlock has the answers
38 BAUMER - VeriSens vision sensors
39 MVTEC - Using AI
40 NOVOTEK - Fail to prepare, prepare to fail
43 SCHAFFHAUSEN INSTITUTE - Innovation in education
44 FROM THE TOP - Dr ‘Can do’: Serguei Beloussov
46 PTC - The Connected Worker
48 TM ROBOTICS - Are cobots safe?
50 AUTOMATION - The global impact

Visit our website for daily updates: www.mvpromedia.eu


MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2020. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.



PLANES, TRAINS AND AUTOMOBILES

Whenever I am asked to pick my top films of all time, the Steve Martin and John Candy classic Planes, Trains and Automobiles is always there: a comedic tale of two mismatched travellers and their logistical nightmare getting home for the holidays – long before we had the internet and mobile phones!

Even with technology at my disposal, most of the year so far seems to have been spent looking at flight schedules and hotel availability as 2020 goes big on events. I’m sure I’m not the only one with several windows open at once on their laptop, working out how to get from A to B to C and checking that there is a hotel room available – preferably one with fluffy pillows and within walking distance of the venue.

There isn’t a month in the calendar without a trade show or conference delivering incredible insight and product launches. Of course, for those in machine vision all eyes are on VISION 2020 in Stuttgart. Did the organisers foresee that eventuality when they launched it all those years ago? Brilliant planning if they did. It is Europe’s flagship event and hugely anticipated, so if you haven’t booked anything yet, I suggest you don’t delay any longer. There is also Automatica, which is also biennial and brings together so much from automation, robotics and all their interconnected industries.

Events play a huge part in how we network, showcase products and share our knowledge. In this issue of Machine Vision and Automation, Denis Bulgin looks at what events are happening this year and why they are so beneficial to the industry. We also get an insight from Stemmer Imaging’s top man in the UK, Mark Williamson, who discusses the company’s three pillars of success. There is also an interview with Dr Serguei Beloussov, who has delivered his dream of a new technology-focused university in the Swiss city of Schaffhausen. Add in a look at the impact of smart cameras and an insightful article on just how safe cobots are, and it is another packed issue with plenty to read – maybe while you’re on a plane to your next event.

You can find me next at UKIVA in May – feel free to get in touch and let’s meet up (shower curtain rings not required).

Enjoy the read!

Lee McLaughlan
Editor
lee.mclaughlan@mvpromedia.eu


Matrox 4Sight EV6 with Matrox Design Assistant X

Single vision controller + multiple cameras = one powerful, flexible solution

Manage multiple independent projects with the robust combination of the Matrox 4Sight EV6 vision controller and Matrox Design Assistant X software. Integrating a capable Intel® Core™ embedded processor, the controller can capture from multiple GigE Vision® and USB3 Vision® cameras, and interfaces directly to factory automation equipment and enterprise systems through additional communication ports as well as discrete real-time I/Os. Vision application development and deployment are accomplished with relative ease using the pre-installed Matrox Design Assistant X design-time and run-time environments. Together, this combination delivers the processing power and expansive connectivity to drive demanding multi-camera vision applications on the factory floor. Configured using Matrox Design Assistant flowchart-based software.

Learn more

www.matrox.com/imaging/4sight_ev6_with_design_assistant/mvpro


INDUSTRY NEWS

EMVA ANNOUNCES NEW PRESIDENT

Chris Yates has been appointed president of the European Machine Vision Association (EMVA). Yates, who took over at the start of 2020, succeeds Jochem Herrmann, who had held the role since 2015. The change comes after an outstanding term in office, during which Herrmann provided strong leadership and guidance over a period of significant growth for the association and rapid change in the industry.

The new president is a director of advanced technology within the Safety, Sensing & Connectivity business of Rockwell Automation, having previously been CEO and founder of Odos Imaging prior to the company’s acquisition by Rockwell Automation in 2017. He was elected to the EMVA Board of Directors in 2018 and has focused on greater engagement with startup companies, together with operational direction of the association.

In accepting the role, Yates said: “Vision systems remain one of the most important and widely used automation technologies in the continued evolution of industry, and the EMVA represents many significant organisations active in the sector. I appreciate the responsibility the Board of Directors has placed in me to lead the association over the coming period, and I am wholly grateful for their confidence and ongoing support.

“I believe that the EMVA must continue to advocate and promote the use of vision technology across all sectors, and is well placed to provide a focal point for dissemination, education and collaboration within the market. I must also thank Jochem for his remarkable leadership and contribution over the past years, leaving behind a legacy which places the association on an excellent foundation for the future.” MV

IDS 2019 SALES GROWTH BOOSTED BY US MARKET

IDS Imaging Development Systems GmbH exceeded industry expectations in 2019 with increased sales. The company held its ground in the market with a sales increase in the high single-digit percentage range, in contrast to the seven per cent sales decline forecast by the VDMA for machine vision. The increase in sales was particularly strong in North America compared to the previous year.

The German-based camera manufacturer is again aiming for double-digit growth in 2020, based on a positive order situation – a 16 per cent increase over the previous year – and the strong development of its foreign business. The growth forecast for the VISION year 2020 rests primarily on the new IDS NXT cameras with artificial intelligence and the high demand in the 3D segment. Delivery of the company’s 1,500,000th camera is expected in the second quarter.

Overall, the foreign presence at almost all locations was further strengthened by new hires in sales and machine vision consulting. The relocation of the subsidiaries in the USA, Japan, UK and Korea to larger premises reflects these developments and paves the way for further growth. New products will complement the IDS product range, especially in these areas. The IDS NXT ocean all-in-one solution has already been on the market for a short time and promises the user AI-based image processing without specialist knowledge.

“We continue to grow, develop innovative products and therefore remain a reliable and powerful partner for the future,” explains managing director Daniel Seiler. “Thanks to high investments in our infrastructure in 2019, we are ideally positioned for the increased order intake and a further increase in demand.”

The new Innovation and Technology Centre “b39” will be ready for occupancy in the second half of the year. In addition to IDS Imaging Development Systems GmbH, the main user of the 4,500 sqm complex, in which jobs for 200 employees will be created, will be the sister company IDS Innovation GmbH, which will operate the “b39 Academy”, specialising in the teaching of digital professional, methodological and technical skills. MV




ALL EYES ON VISION 2020

This year’s much-anticipated VISION show takes place for the 29th time in Stuttgart from 10-12 November. The leading world trade fair for machine vision is characterised more than ever by the dynamic changes of the industry: the long-forecast consolidation of the machine vision market has gathered pace in recent months, leading to significant structural change. As a result of these developments, a number of new names now enhance the trade fair in addition to established VISION exhibitors such as Basler, Cognex, IDS, HIK Vision, MVTec, Sony, Stemmer Imaging, Teledyne Dalsa and many other international market leaders. New exhibitors include young companies such as Autosensic, DeeDiim Sensors and Photolitics, who will present their innovative approaches to various machine vision applications.

There has also been major technological development since the last VISION in 2018. “Since then significant advances have been made in the areas of Artificial Intelligence, Deep Learning, Embedded Vision, Polarisation and Hyperspectral Imaging, among others, thus making possible numerous ground-breaking innovations,” says VISION Project Manager Florian Niethammer. “Many experts see machine vision as an essential element for the profitable use of Industry 4.0 concepts in automated production,” adds Niethammer. Over 300 exhibitors have already booked their stand areas, and Niethammer is anticipating around 500 exhibiting companies by the start of the event.

Niethammer is delighted that the current key industry topics and technological advances will be competently presented anew each time at VISION: “At VISION 2020 visitors can once again expect the who’s who of the global market leaders in all technical disciplines of machine vision, as well as a comprehensive accompanying programme on all aspects of this technology. There is no better opportunity to obtain information about the current trends and developments in this industry than at VISION.”

For more information go to: www.vision-messe.de. MV




EURESYS DELIVERS NEW DEEP LEARNING LIBRARY: EASYSEGMENT

Euresys has announced the availability of a new Deep Learning library: EasySegment.

EasySegment works in unsupervised mode. After being trained with “good” images only, it can detect and segment anomalies and defects in new images. EasySegment works with any image resolution, supports data augmentation and masks, and is compatible with CPU and GPU processing.

The new EasySegment complements EasyClassify as Open eVision’s Deep Learning libraries. EasyClassify is a classification tool that can detect defective products and sort them into various classes. EasyClassify and EasySegment are easy to use and have been tailored, parametrised and optimised for analysing images, particularly for machine vision applications.

EasyClassify – Deep Learning classification library AT A GLANCE
- Includes functions for classifier training and image classification
- Able to detect defective products or sort products into various classes
- Supports data augmentation, works with as few as one hundred training images per class
- Compatible with CPU and GPU processing
- Includes the free Deep Learning Studio application for dataset creation, training and evaluation
- Only available as part of the Deep Learning Bundle

EasySegment – Deep Learning segmentation library AT A GLANCE
- Unsupervised mode: train only with “good” images to detect and segment anomalies and defects in new images
- Works with any image resolution
- Supports data augmentation and masks
- Compatible with CPU and GPU processing
- Includes the free Deep Learning Studio application for dataset creation, training and evaluation
- Only available as part of the Deep Learning Bundle

MORE: https://www.euresys.com/en/Products/MachineVision-Software/Open-eVision-Libraries MV
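Euresys does not publish EasySegment’s internal algorithm, but the unsupervised idea described above – learn what “good” looks like, then flag deviations – can be illustrated with a deliberately minimal statistical sketch (hypothetical names, not the Open eVision API): fit per-pixel statistics on defect-free images and mark pixels that stray far from them.

```python
import numpy as np

def fit_good_model(good_images):
    """Learn per-pixel mean/std from defect-free ("good") images only."""
    stack = np.stack(good_images).astype(float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def segment_anomalies(image, mean, std, k=4.0):
    """Flag pixels deviating more than k standard deviations from the good model."""
    z = np.abs(image.astype(float) - mean) / std
    return z > k  # boolean defect mask

# Toy example: flat "good" textures, plus one test image with a bright defect blob.
rng = np.random.default_rng(0)
good = [100 + rng.normal(0, 2, (32, 32)) for _ in range(50)]
mean, std = fit_good_model(good)
test = 100 + rng.normal(0, 2, (32, 32))
test[10:14, 10:14] += 60          # simulated defect
mask = segment_anomalies(test, mean, std)
```

A real deep-learning approach replaces the per-pixel statistics with a learned model, but the training contract is the same: only “good” images go in, and anomalies are whatever the model cannot explain.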

KEYNOTE FOR 2020 UKIVA REVEALED

UKIVA has announced that the keynote address at the 2020 UKIVA Machine Vision Conference and Exhibition will be given by Richard Love from NVIDIA. Love is EMEA marketing manager for NVIDIA’s Jetson™ Embedded Processor Family. The keynote will be the first presentation of the conference, at 10am on 14 May 2020 at the Marshall Arena, Milton Keynes, UK.

UKIVA chairman Allan Anderson said: “With embedded vision currently one of the major development areas in machine vision, this will be an excellent opportunity for visitors to find out about the use of embedded platforms in a range of applications that include vision, robotics and more. This promises to be a really interesting presentation.”

Love has over 25 years’ experience working with developers of 3D graphics, cloud software and AI hardware technology solutions, including Autodesk, Microsoft and, of course, NVIDIA.

Further details on the content and the rest of the 2020 conference programme, as well as the exhibition, will be published on the event website – www.machinevisionconference.co.uk – as they are finalised. MV




EMVA YOUNG PROFESSIONAL AWARD 2020 LAUNCHED

The European Machine Vision Association has launched its annual EMVA Young Professional Award. This prestigious industry award honours the outstanding and innovative work of a student or a young professional in the field of machine vision or computer vision. The 2020 winner will receive their award at the 18th EMVA Business Conference, being held in Sofia, Bulgaria, June 25th-27th. The EMVA award is designed to support further innovation in the industry, to contribute to the important aspect of dedicated machine vision education and to provide a bridge between research and industry.

Applications are invited from students and young scientists at European institutions whose work focuses on challenges in the field of vision technology and applies the latest research results and findings in computer vision to the practical needs of the machine vision industry. The criteria for works presented for the EMVA Award are:

(1) Outstanding innovative work in the field of vision technology. Industrial relevance and collaboration with a company during the work is required. The targeted industry is free of choice.

(2) The work (master’s thesis or PhD thesis) must have been completed within the last 12 months at (or in collaboration with) a European institution. The student may meanwhile have entered the professional field.

To enter, a short abstract of one to two pages in English must be submitted to the EMVA Secretariat, Ms Nadine Kubitschek, at ypa@emva.org by May 11th, 2020. MV

CHII 2020: THE MECCA OF HYPERSPECTRAL IMAGING

The chii 2020 Conference on Hyperspectral Imaging in Industry will take place in Graz, Austria, from 27 to 28 May 2020. Leading international experts and companies will attend to discuss the latest developments in this innovative technology. The capabilities and application possibilities of hyperspectral systems will be given greater prominence than at previous events.

“At chii 2020, we want to show hyperspectral imaging system developers the exciting opportunities of this technology and convey ideas and experiences for their economic use,” say the organisers. “Important components of the conference are therefore numerous short presentations of successful applications as well as detailed presentations by leading technology providers. Those presentations will provide insight for interested visitors into the current state of the art as well as into future options for applications.”

Accompanying the conference, leading technology suppliers from around the world will exhibit their latest hyperspectral imaging developments and products. chii is the only conference worldwide focusing on the industrial use of hyperspectral image processing. The target audience for this event is application engineers, hardware manufacturers, research institutions, plant operators, international distributors and the major manufacturers of hyperspectral sensors, optics, lighting and software.

Programme details and registration for chii 2020 will be available soon at www.chii2020.com. MV



PRODUCT NEWS

LMI TECHNOLOGIES LAUNCHES GOCATOR 2490 3D LASER PROFILER

LMI Technologies (LMI) has launched the Gocator 2490 smart 3D laser line profiler. This sensor achieves a two-metre field of view, a large measurement range and a 1 m x 1 m scan area for measurement and inspection of large targets in packaging and logistics and other applications where wide coverage is required.

Gocator 2490 has been designed to give engineers an ‘out-of-the-box’ all-in-one, pre-calibrated 3D vision solution, ready to scan and measure. In packaging and logistics applications, Gocator 2490 is able to scan 1 m x 1 m packages at 800 Hz, with resolutions of 2.5 mm in all three dimensions (X, Y, Z) – even at conveyor speeds of 2 m/s. The sensor is also used for robust surface inspection and pass/fail control of defects such as packaging dents, tears, punctures and folds.

Gocator 2490’s combination of wide field of view and large measurement range allows engineers to cover a very large scan area with a single sensor, and is suitable for applications such as volume dimensioning of packages in warehouse automation, automotive body frame inspection, monitoring loading levels on wide conveyor belt systems, sawmill board optimisation, and high-volume food processing inspection. MV
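The quoted figures are self-consistent: for any line profiler, the along-conveyor (Y) spacing between successive profiles is simply conveyor speed divided by profile rate, so 2 m/s at 800 Hz gives 2.5 mm. A one-line sanity check (hypothetical helper name, not LMI’s API):

```python
def y_resolution_mm(conveyor_speed_mm_s: float, profile_rate_hz: float) -> float:
    """Spacing between successive laser profiles along the direction of travel."""
    return conveyor_speed_mm_s / profile_rate_hz

# 2 m/s conveyor (2000 mm/s) scanned at 800 profiles per second
print(y_resolution_mm(2000.0, 800.0))  # → 2.5 (mm), matching the quoted Y resolution
```

Equivalently, a slower conveyor or a higher profile rate would buy finer Y resolution from the same sensor.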

WORLD’S SMALLEST HI-RES INTEL REALSENSE LIDAR DEPTH CAMERA

FRAMOS, a global partner for vision technologies, has added Intel’s first LiDAR device – the new L515 depth camera – to its product range. The L515 is the world’s smallest and most power-efficient solid-state LiDAR depth camera with XGA resolution and a large range of 0.25 m to 9 m. It generates 30 frames per second at a depth resolution of 1024 x 768, has a field of view of 70° x 55° (±2°) and can generate 23 million depth points per second.

Under controlled indoor lighting, the L515 depth camera achieves unparalleled depth quality, with a Z error of less than 20 mm at maximum range. The short pixel exposure time of less than 100 ns minimises motion blur artefacts even with fast-moving objects. Its millimetre accuracy is retained throughout the depth camera’s lifespan, without the need for calibration.

Consuming less than 3.5 watts of power, the tiny LiDAR depth camera enables easy mounting on handheld devices. It weighs around 100 g and is smaller than a tennis ball, with a diameter of 61 mm and a height of 26 mm. This makes for easier integration into mobile devices such as portable scanners for volumetric measurement. Logistics is one market that can benefit, while other applications can be found in industry and robotics, as well as 3D (body) scanning, healthcare and retail. The camera will also be of interest to end users in the maker space and 3D enthusiasts. MV
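The throughput figure follows from the resolution and frame rate quoted above: one depth value per pixel per frame.

```python
# Cross-check of the L515's quoted depth throughput:
# XGA depth resolution (1024 x 768) at 30 frames per second.
width, height, fps = 1024, 768, 30
points_per_second = width * height * fps
print(points_per_second)  # → 23592960, i.e. the "23 million depth points per second" quoted
```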




ABB DELIVERS ROBOT TO COPE WITH HARSH CONDITIONS

ABB is introducing a new harsh-environment version of its IRB 1100 robot and OmniCore controller, designed with enhanced protection against water and dust. The entire body of the new IRB 1100 model has an IP67 rating, with all electrical components sealed against contaminants. This makes it resistant to water and provides the robot with complete protection from dust ingress.

For manufacturers, the new IP67 protection rating will enable the robot to be used in applications generating substantial dust, water and debris, including 3C polishing, wet grinding, buffing and deburring. The IRB 1100 is ABB’s most compact and fastest robot, offering best-in-class repeatability. It is available in two variants – one with a 4 kg payload and 475 mm reach, and the second with a 580 mm reach.

The new OmniCore C90XT – XT stands for ‘Extra Tough’ – extends the comprehensive OmniCore controller family, bringing all the benefits of its best-in-class motion control and path accuracy to harsh environments. The C90XT is a rugged yet compact controller with full IP54-rated protection, enabling it to be installed in close proximity to dirty, wet and dusty processes. The controller also offers extra internal space to support process-related equipment for communication, conveyor tracking and external axes, while its lean format enables it to be installed in tight spaces. The C90XT is the smallest high-protection-class robot controller in the industry. MV

BASLER’S NEW AI VISION SOLUTION KIT WITH CLOUD CONNECTION

Embedded vision systems solutions provider Basler has introduced its new AI Vision Solution Kit with Cloud Connection. Together with the AI accelerator on the processing board, the kit forms the basis for lean prototyping of AI-based IoT applications on powerful, integrated vision systems.

The AI Vision Solution Kit enables developers to use, train and deploy machine learning models provided in the cloud on the edge device. For this purpose, pre-trained neural networks are available in the cloud as software containers designed for direct use. Users also have the option of expanding these networks as required.

The software containers with the selected machine learning models can be loaded onto the edge device – the embedded system – for prototyping of application examples. Inference and the actual image processing are thus performed on the edge device. In this way, cloud-specific application examples can be tested easily and with little programming effort, and metadata can be generated. In the next step, users can send the metadata to the cloud via a defined interface and, for example, store it in a database or visualise it with the help of a dashboard using the appropriate tools.

The kit’s embedded hardware consists of the robust and industry-proven Basler dart BCON for MIPI camera with a resolution of 13 MP. The processing board is based on SolidRun’s new Hummingboard Ripple, an AI-optimised board that houses an NXP i.MX 8M Mini SoC and an AI accelerator. MV




OMNIVISION LAUNCHES 48MP SENSOR FOR MOBILE PHONES

OmniVision Technologies, a leading developer of advanced digital imaging solutions, has launched the OV48C sensor. The OV48C is a 48 megapixel (MP) image sensor with a large 1.2 micron pixel size, enabling high resolution and excellent low-light performance for flagship smartphone cameras. It is the industry’s first image sensor for high-resolution mobile cameras with on-chip dual conversion gain HDR, which eliminates motion artefacts and produces an excellent signal-to-noise ratio (SNR). The sensor also offers a staggered HDR option with on-chip combination, providing smartphone designers with the maximum flexibility to select the best HDR method for a given scene.

“The combination of high resolution, large pixel size and high dynamic range is essential to providing the image quality required by flagship mobile phone designers for features such as night mode,” said Arun Jayaseelan, staff marketing manager at OmniVision. “The OV48C is the only flagship mobile image sensor in the industry to offer the combination of high 48MP resolution, a large 1.2 micron pixel, high speed, and on-chip high dynamic range, which provides superior SNR, unparalleled low-light performance and high-quality 4K video.”

Built on OmniVision’s PureCel® Plus stacked die technology, this 1/1.3” optical format sensor provides leading-edge still image capture and video performance for flagship smartphones. The OV48C also integrates an on-chip, 4-cell colour filter array and hardware remosaic, which provides high-quality 48MP Bayer output, or 8K video, in real time. In low-light conditions, the sensor can use near-pixel binning to output a 12MP image for 4K2K video with four times the sensitivity, yielding 2.4 micron-equivalent performance. MV

THE FUTURE DEPENDS ON OPTICS™

MercuryTL™ Liquid Lens Telecentric Lenses

The new TECHSPEC® MercuryTL™ Liquid Lens Telecentric Lenses combine the performance of a telecentric lens with the flexibility of a liquid lens to eliminate parallax error and provide quick working distance adjustment while maintaining telecentricity, distortion and image quality. Find out more at www.edmundoptics.eu/Mercury

Visit us at Optatec, May 12-14, 2020, Hall 3.0 – Booth C14

UK: +44 (0) 1904 788600 | GERMANY: +49 (0) 6131 5700-0 | FRANCE: +33 (0) 820 207 555 | sales@edmundoptics.eu
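The near-pixel binning described in the OV48C piece above is easy to picture: combining each 2x2 group of 1.2 micron pixels quarters the pixel count (48MP to 12MP), doubles the effective pixel pitch (to 2.4 micron) and collects four pixels’ worth of signal. This is only a conceptual sketch – the OV48C does this in hardware alongside its remosaic logic, whose details are not published here.

```python
import numpy as np

def bin2x2(raw):
    """Sum each 2x2 block of pixels into one output pixel (4x the signal)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A 48MP frame is roughly 8000 x 6000 pixels; 2x2 binning yields 4000 x 3000 (12MP).
# Demonstrated on a small array for brevity:
raw = np.full((8, 6), 10.0)
binned = bin2x2(raw)
print(binned.shape, binned[0, 0])  # → (4, 3) 40.0 : quarter the pixels, 4x the signal each
```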


PRINCETON INFRARED TECHNOLOGIES LAUNCHES MEGAPIXEL SWIR MVCAM

Princeton Infrared Technologies (PIRT) has launched its compact MVCam series shortwave-infrared (SWIR) camera. The megapixel indium gallium arsenide (InGaAs) camera provides 1280 x 1024 resolution SWIR imagery at up to 95 frames per second (fps), with higher frame rates for user-selectable regions of interest (ROI). At 12 µm pixel pitch, the MVCam InGaAs image sensor yields extremely low dark current and high quantum efficiency, providing sensitivity across the SWIR and visible wavelength bands from 0.4 to 1.7 µm.

The standard camera configuration uses a single-stage thermoelectric cooler (with no moving parts), integrated in a sealed package to stabilise the image sensor at 20°C. MVCam’s advanced digital array generates 14-bit digital image data with no image lag and read noise of less than 45 e-, which is lower than all other industrial SWIR cameras. The camera uses Medium-configuration Camera Link to output the video imagery at the full data rate of 95 fps; Base Camera Link can also be used at lower frame rates. Princeton Infrared Technologies’ MVCam is ideal for high-resolution machine vision and microscopy applications. MV

THE SMART 3D LASER PROFILER WITH A 2 METER FIELD OF VIEW

Scan Large Targets. At Production Speed. Gocator 2490 is able to scan a 1 m x 1 m area at a rate of 800 Hz, delivering 2.5 mm XYZ resolutions even at conveyor speeds of 2 m/s. The sensor also delivers robust quality inspection of surface defects such as punctures, dents, and folds.

Discover FactorySmart®: visit www.lmi3D.com/2490


THE THREE PILLARS TO SUCCESS

After a significant 2019, which saw changes at the top, acquisitions and a review of the company’s business strategy, Stemmer Imaging has much to develop and deliver in 2020. In an exclusive interview with MVPro Magazine, Mark Williamson, managing director of operations in UK & Ireland, explained how Stemmer Imaging is enhancing its offering to the machine vision sector.

MVPRO: FIRSTLY, CAN YOU REFLECT ON 2019 AND ITS IMPACT ON THE COMPANY?

MW: It was a very big year. We floated the year before, but last year we moved to the Prime Standard of the Frankfurt stock market, which was the first significant thing we did. We gained a new CEO as Arne Dehn joined us, and that kick-started a review of the strategy and everything that we are doing.

We then made our biggest acquisition with Infaimon, which took us into Spain, Portugal, Brazil and Mexico. That gives us a first step into the Americas, which is a major move forward, and in the process we acquired technology from them for bin picking. This effectively introduces one of the things that we will do in the future, which is having more ready-to-go subsystems. At the moment we concentrate on components and services; in the future we will bring more components together and deliver them as a more complete solution. We’re still not going to provide end-user systems – we’re still selling to integrators and OEMs – but we will reduce their pain by providing more for them.

Then we had our first capital markets day, where we presented our strategy to the financial market, outlining the three pillars of the organisation.

MVPRO: CAN YOU EXPLAIN MORE ABOUT THE THREE PILLARS?

MW: A lot of this had been happening already, but the message wasn’t clear. Historically, people would look at us and see that we are a distributor, which of course we are, but we’re actually far more than a distributor. We sell components and we will continue to do so. That will continue to be our biggest market, and we are investing very heavily in making machine vision technology more easily accessible. We are also trying to add more tools and services to our website to make it easier for people to work with us. We want to be best of breed.

However, a significant amount of our customers buy services from us as well, and that differentiates us from ordinary distributors. I would say probably 70 per cent of our customers buy value-added services, and that could be anything from pre-configuring the lens on the camera in a clean room to doing custom software development. Effectively we’ve always done these things, but we are now going out to customers and the wider world and telling them about the three pillars of the organisation. The biggest pillar is the components business, and we offer lots of services to support people who want to buy the components.



Then we have our subsystem business, where we are trying to create solutions to solve common problems. For example, if someone wants a redesign and needs help to work out which cameras they need and which software to use to solve particular problems, then we can do that. Inpicker is the first example. We know that if a business wants to do random bin picking, they can go and buy Halcon, which we sell, buy a 3D camera and write the software, but there is a lot of software to write. What we are saying is, 'We'll write 80 or 90 per cent of that.' The customer pays a little bit more, but by doing so they save four months' worth of development work. The subsystem business, while it is still relatively small, complements the components business.

The third pillar is projects, which again is something we have always done but never promoted. It's never been something that is spoken about or on our website. So, what we are now doing is making that a little bit more public. Typically, we have lots of OEM customers who come to us with no machine vision experience at all, but they might be the world leader in, let's say, meat slicing, and they've built big slicers. They sell hundreds of these all round the world, and machine vision can help them massively improve the slicing accuracy and weights. So, the customers we've been writing the software for just need to buy the specialist subsystems for their production machines.

Our project business is now quite significant and is intended for companies that have their own products. First, we go into their factory, work with them and get the first system integrated into their machine. Then, every time they build a machine, they just install the components and sell it. We are not doing installations and we don't have a lot of installation engineers and mechanical engineers.

To summarise, all of this was happening before, but we are just making it clearer to differentiate what we are doing and what our added value is compared to other distributors, most of whom are purely box shifting. Finally, there's the financials. The VDMA indicated that the market last year would shrink. We probably won't know for a few months, but the prediction is that a downturn has started and the market is shrinking. We are beating that trend, so we still achieved organic growth last year, which in this climate is pretty good.

MVPRO: YOU'VE GOT THE THREE PILLARS STRATEGY. LOOKING AT 2020, WHAT DOES STEMMER WANT TO ACHIEVE AND HOW IS THE COMPANY GOING TO DO THAT?

MW: We've got a strategy and we are investing very heavily in our internal systems. What that effectively means is that our goal is to make machine vision easy and accessible. We are trying to put more tools onto the website, and more internal tools to make our performance quicker, faster and with even better delivery. The first thing we are doing is investing very heavily in our existing traditional business, making it as easy as possible for people to deal with us. At the system design level that means choosing the right product. Customers can now go on our website and literally design the system by inputting what they want, and we will then suggest the right camera.



We are putting all our knowledge into a platform within the website to guide people to the right products. We are also investing quite heavily internally. Our Vision PC business, which builds optimised PCs for machine vision, has been upgraded, and we are refocusing and restructuring the way that people work with us to be more efficient. We did a survey a little while ago and 95 per cent of our customers rated us as very good or excellent for customer satisfaction. What we now have to do is scale that up. Inpicker is going to be rolled out across the whole company. At the moment it's an Infaimon product and they have been very successful with it, so we are now deploying it across all of Europe. Finally, we are still looking to acquire companies and expand into new territories, so I expect you are going to see more of that happening.

MVPRO: WHAT ABOUT NEW TECHNOLOGIES AND MARKETS?

MW: Machine vision is a cross-sector technology and we've got big customers in medical, automotive and electronics. Historically, machine vision has been perceived as industrial automation. If I look at our German company, they probably have a higher percentage of industrial customers than other countries. In France and the UK, where perhaps our manufacturing bases are not as strong, we've ended up finding other markets that aren't industrial.

What we are saying is: 'Yes, we know machine vision and factory automation very well, but we believe significant growth is also going to come from non-industrial applications.' So the likes of IoT, autonomous driving, traffic systems, retail systems and checkout-less supermarkets are where I think the market growth will come from. I wouldn't say it will be big growth in 2020, but certainly there is significant interest, and things such as embedded vision are enabling these new applications. What embedded vision is doing is making it easier to create small, low-cost, high-volume systems. I think the main market for that is not industrial; it's creating appliances that can be used in many different markets where vision acts as an aid. The existing industrial business is still there and embedded vision will have an effect on it, but I don't think it's going to be as ground-breaking in industrial as in non-industrial, mainly because the volumes aren't high enough. We recognise that we are very good in that area, which is why we have introduced the concept of artificial vision.

MVPRO: HOW DO YOU SEE YOUR ROLE IN THE ORGANISATION?

MW: I'm working closely with our CEO and Director of Corporate Sales to ensure our product management and marketing teams, who report to me, are aligned with our strategy. Product management is being reorganised and we have created a new position to lead the portfolio strategy. Marketing has had changes to focus further on new digital platforms and on ensuring the customer experience in selecting products and services continues to lead the market - plus I continue to lead the UK team. MV



Precision. Perfect images at high speed.

Precision at high speed. Precision skydiving is a perfect match for extreme athletes – and precision inspections at high speed are a perfect match for the LXT cameras. Thanks to Sony® Pregius™ sensors and 10 GigE interface, you benefit from high resolution, excellent image quality, high bandwidth and cost-efficient integration.

Learn more at: www.baumer.com/cameras/LXT


IT’S GOOD TO TALK! Trade shows have become a vital source of knowledge sharing, product awareness and networking. Denis Bulgin of Technical Marketing Services previews the array of events on offer across the globe this year.

In 2012, the decision was taken to move VISION Stuttgart, the flagship machine vision exhibition in Europe, to a two-year cycle. It was suggested in some quarters that this was the beginning of the death knell for trade shows; however, this has proved to be far from the case. Not only has VISION itself gone from strength to strength, but newer shows such as the UKIVA Machine Vision Conference and Exhibition have also emerged in recent years to reaffirm the important role that face-to-face engagement still has in a technology-driven world.

With vast amounts of product and application information readily available online, combined with leaner workforces in many organisations, it may seem surprising that huge numbers of people are prepared to take anything from one day to four out of their never-enough-time schedules to attend a trade show. However, one of the most useful benefits of a face-to-face meeting at an exhibition is the opportunity to discuss vision problems informally, get to know people and to develop


networking opportunities. These types of interactions can lead to greater understanding of ideas and technology. In addition, there is also the opportunity to expand knowledge and understanding by attending some of the technical seminars that accompany most exhibitions.

MACHINE VISION FOCUS The new decade gets under way with machine vision events across three continents this year. These include:

• The Korea Vision Show, Seoul, Korea, 4-6 March

• UKIVA Machine Vision Conference and Exhibition, Milton Keynes, UK, 14 May

• The Vision Show, Boston, USA, 9-11 June

• VISION Stuttgart, Germany, 10-12 November.

All of the events mentioned above are attractive since they cover the full range of machine vision technologies, but



there are also plenty of more specialised meetings that concentrate on specific aspects of machine vision and will attract a different audience profile. Examples of these include:

• Image Sensors Europe 2020, London, 10-12 March

• Embedded Vision Summit, Santa Clara, California, USA, 18-21 May

• Chii 2020, Graz, Austria, 27-28 May.

The latter is an event devoted to hyperspectral imaging. In addition, there is the European Machine Vision Forum 2020 in Cork, Ireland, in September, where the machine vision industry and academic researchers come together to discuss a multitude of machine vision topics, including research cooperation between them.

A SIGNIFICANT INVESTMENT

There are plenty of other opportunities to see vision in action at robotics and control and automation events, as well as at exhibitions in vertical markets such as food, engineering, processing and packaging. These include shows such as the European Robotics Forum, Control Stuttgart, Hannover Messe, Interpack, Mach, the International Conference on Robotics and Automation, Automatica and the PPMA Show. Of course, participating in a major trade show requires a significant investment by the vision companies, and it is only practicable to exhibit at a finite number of them.

Nevertheless, trade shows provide a unique two-way benefit. Visitors get the chance to see most or all of the major vendors under one roof; to see different products in action and compare them; and to easily return to a stand if another question arises as a result of discussions with someone else. Most importantly, however, they can engage with experts in their field who can offer valuable advice on how an application problem might be solved.

For the exhibitors, there is the opportunity to meet new potential customers, reinforce relationships with existing customers and demonstrate their expertise in the field. So overall, it is a win-win situation, and the continued investment in these events is testament to their importance in providing a platform that no digital technology has yet been able to replicate. In short, it's good to talk!

Denis Bulgin, Technical Marketing Services
E-mail: denis@technicalmarketingservices.co.uk
Web: http://www.technicalmarketingservices.co.uk

MV


THE FUTURE OF FACIAL RECOGNITION

Facial recognition technology is becoming increasingly prevalent in our everyday lives, with many of us using the technology every time we use our face to unlock our smartphone - a study found that we use our phones around 52 times per day. Whilst it has transformed how we access our phones, facial recognition technology is also being used in a number of industries outside of tech to improve the service that companies provide their customers. If you're a company that isn't adopting facial recognition, it's time to start researching it before you get left behind.

Devices recognise their users by scanning facial features and shapes - specific contours and unique individual features help the likes of smartphones recognise users and open certain settings on phones. For example, many banking apps now allow users to log in to their internet banking through the use of their face - this, in some ways, is far safer than the previous ways of using online banking, which would involve either an individual code or a series of questions that only the user would know the answers to. Not only has facial recognition made online banking easier for its users, but it has also made it safer.

However, banking isn't the only sector in which facial recognition is making waves. Hotels have now begun using facial recognition to check their guests in, let them enter their rooms and give them more personalised stays. This is useful if guests are checking in at late hours, as companies do not have to pay a receptionist or front of house to stay working all night.

This, however, raises the conversation of technology taking over. Many people are concerned at the rate at which technology is adapting and progressing - a new study reveals that robots could take over 20 million jobs by 2030. While researchers believe the rise of robots will have many benefits in terms of economic growth, they also acknowledge the drawbacks that will arise too:

“As a result of robotisation, tens of millions of jobs will be lost, especially in poorer local economies that rely on lower-skilled workers. This will therefore translate to an increase in income inequality,” according to British-based research and consulting firm Oxford Economics.

Facial recognition is also having a positive effect, though - facial recognition check-ins have already been trialled by some airlines, and ePassport gates are now more prominent in many airports to reduce queues at busy periods.

There is the discussion that we are constantly being watched - a little like Big Brother. As soon as we wake up and check our phones, our faces are being watched and scanned. If we pop to the shops, cameras scan our faces at self-checkout tills; if we go on holiday, cameras and facial recognition catch our every move. We now live in a switched-on society where technology is taking over.

Facial recognition is here to help us, though - it is being used in many schools to help identify whether a student or individual is allowed on campus. In a busy and hectic world, this is actually very useful, as it would be physically impossible for a human to have eyes on this 24/7.

New research conducted by RS Components reveals how other industries, such as consumer electronics, food, automobiles, and marketing, are using facial recognition. So, how do you think facial recognition will adapt in years to come?

See the research findings at https://uk.rs-online.com/web/generalDisplay.html?id=did-you-know/future-facial-recognition MV



HYPERSPECTRAL IMAGING MARKET EXPECTED TO HIT $25B BY 2024

The global hyperspectral imaging (HSI) systems market is projected to more than double to $25.2 billion by 2024, from $11.1 billion in 2019, according to the latest insight from ResearchandMarkets.com. This equates to a compound annual growth rate (CAGR) of 17.8% over the five-year period.
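As a rough sanity check on the reported figure, the implied CAGR can be computed from the two market values above (the calculation is ours, using only the figures quoted in the report):

```python
# Implied compound annual growth rate (CAGR):
# the market grows from $11.1bn (2019) to $25.2bn (2024), i.e. over five years.
start, end, years = 11.1, 25.2, 5

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~17.8%, consistent with the reported rate
```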

Image: Photon

Market growth can largely be attributed to factors such as increasing funding and investments in this field and the growing industrial applications of HSI. The untapped market opportunities in emerging markets are also expected to provide growth opportunities for players in the market. On the other hand, data storage issues and the high costs associated with HSI systems are expected to limit adoption, thereby restricting the market growth during the forecast period. Attaining super-resolution in a cost-effective manner is a major challenge faced by the industry, which may hamper the market growth to a certain extent.

THE CAMERAS SEGMENT IS EXPECTED TO DOMINATE THE MARKET DURING THE FORECAST PERIOD.

On the basis of product, the hyperspectral imaging systems market is segmented into cameras and accessories. Cameras are expected to command a larger share of the hyperspectral imaging systems market, by product, in 2019. Technological advancements, the development of affordable hyperspectral imaging cameras, and the increasing adoption of hyperspectral technology for defence and industrial applications are driving the growth of the hyperspectral cameras segment.

THE LIFE SCIENCES AND MEDICAL DIAGNOSTICS SEGMENT WILL WITNESS THE HIGHEST GROWTH IN THE HYPERSPECTRAL IMAGING SYSTEMS MARKET.

Based on application, the hyperspectral imaging systems market is segmented into military surveillance, remote sensing, machine vision & optical sorting, life sciences & medical diagnostics, and other applications (including colourimetry, meteorology, thin-film manufacturing, and night vision). The military surveillance segment is estimated to account for the largest share of the hyperspectral imaging systems market in 2019, while the life sciences & medical diagnostics segment is projected to register the highest CAGR during the forecast period. This is attributed to the increasing use of hyperspectral imaging in medical diagnosis and image-guided surgery.

NORTH AMERICA WILL DOMINATE THE MARKET DURING THE FORECAST PERIOD.

Geographically, the hyperspectral imaging systems market is segmented into North America, Europe, Asia Pacific, and the Rest of the World. In 2018, North America accounted for the largest share of the hyperspectral imaging systems market. The large share of this region can be attributed to the presence of highly developed research infrastructure, the availability of technologically advanced imaging products, and the growing adoption of hyperspectral imaging in military surveillance, environmental monitoring, mining, machine vision, and life sciences and diagnostics. MV



SMART CAMERAS CONTINUE TO OPTIMISE FORM, FUNCTIONALITY AND FLEXIBILITY

Dan McCarthy, contributing editor to AIA, explores the continued impact of smart cameras.

Smart cameras continue to be a major growth engine for the machine vision industry, contributing a compound annual growth rate of 10.5% since 2010 — nearly twice that of the overall market — according to AIA market research. Compact, highly integrated, and easily programmed, smart cameras offer an attractive alternative to more complex PC-based vision systems. While these qualities help to define and differentiate smart cameras, their growing adoption of late is due more to steady advances in the size, resolution, and sensitivity of imaging sensors.

DEFINING “SMART”

Within the realm of industrial machine vision, embedded vision systems are often conflated with smart cameras, as both essentially package a sensor, sensor interface, and some level of processing in a self-contained unit. Smart cameras have traditionally offered more robust processing, but embedded systems are blurring the line with the emergence of highly functional embedded cameras with MIPI interfaces, image pre-processing, and even IP cores for decoding video streams in an onboard FPGA. Smart cameras still pack more processing power as a rule and are further distinguished by incorporating system storage, digital I/O, and common industrial communication interfaces within an often rugged stand-alone housing.

As self-contained vision solutions, smart cameras offer speedier integration and simpler programming than PC-based systems. They also tend to offer a lower price point, though there are occasions when the latter might provide a more cost-effective solution.

“Smart cameras are a good choice for applications that require point inspection as they are easy to set up and easy to maintain,” said Steve Geraghty, general manager at Teledyne DALSA Industrial Products. “Multiple smart cameras can coexist on the same production line, inspecting different features.”

However, Geraghty adds, conventional vision systems tend to be favoured for applications that require faster processing speeds, broader flexibility over sensor choice, or multiple camera inputs. “While the performance gap between smart cameras and vision systems is closing fast, there may still be a cost and integration benefit for using vision systems in multi-camera applications,” he explains. The rugged enclosures that house all-in-one smart camera systems also make them better suited for harsh industrial environments, according to Fabio Perelli, product manager of smart cameras and vision controllers at Matrox Imaging. “Any industry with messy factory floors, regularly sanitized workspaces, or typified by a dirty, dusty environment is a natural fit for a smart camera,” he adds. “Pharmaceutical manufacturing and food and beverage production were early industry adopters of smart camera technologies. As the range of available IP67-rated smart cameras continues to grow, we are seeing much more adoption within these same industries.”

SENSOR SIZES EXPAND

For the past couple of years, the benchmark resolution for smart camera sensors centered around 2 MP. Now, 5-MP sensors are quickly becoming the new normal. Not only are such resolutions available today from Cognex, Datalogic, JADAK, Matrox Imaging, Omron Microscan, Teledyne DALSA, and Vision Components, to name a few, but many of these suppliers and others, such as Germany’s Eye Vision Technology and Italy’s Tattile, further offer smart cameras incorporating 12-, 16-, and even 20-MP sensors.

Two converging forces are driving this trend. One is the constant competition among sensor manufacturers to deliver ever higher resolution and frame rates to their customers. “This translates to lower-cost cameras and larger fields of view that can reduce the number of cameras required in an application,” explains Geraghty. “Sensor manufacturers are also integrating smart features, such as specialised filters for hyperspectral imaging, or image processing able to automatically locate features of interest.”

The other factor driving sensor resolutions higher is the growing number of users and end markets seeking to leverage the benefits of smart cameras — while minimising compromises in performance. “We see this as a symbiotic relationship,” says Perelli. “Technological capabilities are growing each day, and consumers and manufacturers are very aware of what technology can do. In turn, their expectations are driving up market demand, which is then fuelling even more technological innovations.”

The impetus from these complementary forces is unlikely to fade, Perelli adds. From smartphones to watches to pharmaceutical pills, many of the products that smart cameras are called on to inspect are increasingly small and detailed. It is only natural for suppliers to continue sharpening sensor resolutions to capture details with the expected level of precision.

FLEXIBILITY VS. EASE OF USE

Richer image data requires greater processing performance. Smart cameras have evolved accordingly, limited more by power consumption, heat dissipation, and maximum package size than by the necessary computing power. The two most common processor architectures — x86 and ARM — undergo frequent upgrades to increase performance while lowering power consumption. x86 processors require fewer commands to perform more complicated tasks, while ARM processors typically execute simpler commands within a much faster clock cycle.
Manufacturers of smart cameras adopt either architecture depending on their target customer and performance criteria, though processor architecture can also influence the predominant software on which a supplier relies. This is an important consideration, as many suppliers bundle proprietary software with their smart cameras to help users with minimal programming expertise control imaging parameters. Software, in short, is an important value-add for some users.

“A smart camera is essentially a software application, and the feature set of this application will change or adapt with new deployment opportunities,” says Geraghty. “Many smart cameras on the market now include some level of image processing and custom programmability that make them more adaptable to applications that were traditionally aligned with PC solutions.”

“Speaking from our end, Matrox Imaging made a point of developing its Matrox Iris GTR smart cameras using an open architecture, which allows our customers to customise the camera functionality to match their exact unique needs,” Perelli says. “We also apply this flexibility to the Matrox Design Assistant software we pair with our smart cameras. This flowchart-based IDE is hardware-independent and can be used on our smart cameras, industrial vision controllers, or any computer with GigE Vision or USB3 Vision cameras. We believe that flexibility is a tremendous value-add, and that type of customisation is certainly well received by our markets.”

Despite their simplicity and self-contained packaging, smart cameras have demonstrated a tremendous ability to incorporate more functionality and performance, and to penetrate further into new and current markets, with no end in sight. As sensor and processing capabilities continue to march ahead, deep learning, greater connectivity, and other key industry advances in machine vision promise to make smart cameras even more intelligent. MV

Article reproduced courtesy of AIA.



DIVE INTO AI WITH THE NXT OCEAN

Deep learning opens up new fields of application for industrial image processing, applications that previously could be solved only with great effort or not at all. This fundamentally different approach to classical image processing also creates new challenges for users: a rethink is necessary. IDS presents a user-friendly all-in-one embedded vision solution for implementing AI-based image processing.

IDS combines deep learning experience and camera technology in an all-in-one inference camera solution. This enables every user to start immediately with AI-based image processing. With IDS NXT ocean, IDS lowers the entry barrier and provides easy-to-use tools for creating inference tasks in a few minutes, without much prior knowledge, and executing them immediately on a camera.

The concept is based on three important components:

• easy-to-use training software for neural networks,

• an intelligent camera platform,

• and an AI accelerator that executes the neural networks in hardware.

All components have been developed directly by IDS and are designed to work together seamlessly.

The cloud-based training software, IDS NXT lighthouse, leads the user step by step from data preparation through to training an artificial intelligence in the form of a neural network. The user never has to touch low-level tools or deal with installing development environments. As a web application, IDS NXT lighthouse is immediately ready for use, with sufficient storage space and training performance for all projects in an easy-to-use workflow: log in, upload training images, label them, and then train the desired network. With a few configuration settings, users specify the speed and accuracy requirements for their application in simple dialogs. IDS NXT lighthouse then selects the network and sets up the necessary training parameters completely



independently. The training results give the user a good indication of the quality of the trained intelligence, enabling quick modification and repetition of the training process. The system is continuously improved and upgraded, and the latest version of the software is always available. Users can concentrate completely on solving their application without having to build up knowledge of learning methods and artificial intelligence.

IDS NXT lighthouse uses supervised learning to train neural networks. The deep learning algorithms learn from predefined pairs of inputs and outputs: the teacher, in this case the user, provides the correct output for each input during learning by assigning the correct class to an example picture. The network is trained to make associations independently, making predictions about image data in the form of percentages; the higher the value, the more accurate and reliable the prediction.

The seamless interaction of the software with the IDS NXT camera families Rio and Rome ensures quick success, because fully trained neural networks can be uploaded and executed directly, without programming effort, on one of these cameras. The user thus immediately has a complete working embedded vision system that sees, recognises and derives results from captured image data. With its digital interfaces, it can even control machines directly.
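The idea of a network reporting its predictions "in the form of percentages" can be illustrated with a softmax over raw class scores (a generic sketch, not IDS code; the class names and score values here are invented for illustration):

```python
import math

def softmax(scores):
    """Convert raw class scores into percentage-like confidences that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw network scores for three classes: "good part", "scratch", "dent"
scores = [4.0, 1.0, 0.5]
probs = softmax(scores)
print([f"{p:.0%}" for p in probs])  # the highest percentage is the most reliable prediction
```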

EMBEDDED VISION HYBRID SYSTEM

IDS has developed its own AI core, called “deep ocean core”, for the FPGA of the intelligent IDS NXT camera platform; it executes pre-trained neural networks with hardware acceleration. This turns industrial cameras into high-performance inference cameras that make artificial intelligence useful in industrial environments. Image analysis is performed decentrally, avoiding bandwidth bottlenecks during transmission. Cameras based on the IDS NXT platform can thus keep pace with modern desktop CPUs in terms of accuracy and speed of results, with significantly lower space and energy consumption. The reprogrammability of the FPGA offers additional advantages in terms of future-proofing, low recurring costs and time-to-market.

The close matching of IDS’s own software and hardware allows the user to choose the target inference time before


training. IDS NXT lighthouse then ensures optimal training settings while taking the AI core performance of the camera into account. The user therefore faces no surprises during the subsequent execution of the inference, eliminating the need for time-consuming re-adjustment and re-training. Once integrated, the IDS NXT system remains 100% compatible and consistent in its behaviour, which is a significant advantage, especially for industrially certified applications.

Thanks to its powerful hardware, the embedded vision platform is much more than just an inference camera for executing neural networks. In the next development step, the feature set of the CPU-FPGA combination will be extendable as needed using vision apps. Recurring vision tasks can be set up and changed quickly, and even a completely flexible image processing sequence can be realised. Captured images are first pre-processed, for example, before a simple, fast classification sorts good and bad parts. If errors occur, a much more complex neural network can be reloaded in milliseconds to determine the error class in more detail and transfer the results to a database.

Customised solutions can then be easily implemented using an app development kit. Users can create their own individual vision apps in just a few steps and install and run them on IDS NXT cameras. IDS NXT cameras are designed as hybrid systems, enabling both pre-processing of image data with classical image processing and feature extraction using neural networks side by side, so that image processing applications run efficiently on a single device.
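The two-stage flow described above, a cheap pass/fail check first and a heavier analysis only on failures, can be sketched generically. This is an illustrative outline only, not the IDS API; all function names and the threshold logic are invented stand-ins:

```python
# Stand-in stages (hypothetical; a real system would run neural networks here)
def preprocess(image):
    return [p / 255 for p in image]          # e.g. normalise 8-bit pixel values

def fast_classify(img):
    return "good" if sum(img) / len(img) > 0.5 else "bad"  # cheap threshold check

def detailed_network(img):
    return "scratch" if min(img) < 0.1 else "dent"         # finer defect classes

def inspect(image):
    """Two-stage inspection: quick pass/fail first, detailed analysis only on failures."""
    img = preprocess(image)
    if fast_classify(img) == "good":
        return {"result": "good"}
    # Escalate: run the more complex (here, stand-in) network only for bad parts
    return {"result": "bad", "defect_class": detailed_network(img)}

print(inspect([200, 210, 190]))  # bright image passes the fast check
print(inspect([10, 20, 15]))     # dark image is escalated to detailed analysis
```

The design point is that most parts exit at the cheap first stage, so the expensive model runs only on the small fraction of suspect images.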

SUMMARY

IDS NXT ocean makes deep learning meaningful and user-friendly for everyone. IDS presents a hardware-software combination in which the components are perfectly matched to each other. Intelligent detection tasks and automation in many new application fields are enormously simplified or made possible for the first time. In just a few steps and without programming knowledge, AI-based image processing solutions can be created and executed.

With the IDS NXT lighthouse training software, the manufacturer has deliberately moved into the cloud in order to scale storage space and training performance to user requirements. In addition, no update and maintenance windows need to be scheduled to benefit from continuous improvements, ensuring that the latest version of the software is always available to every user.

The manufacturer also offers an inference starter package with all the necessary components as a first step into AI-based image processing: a camera with power supply and lens, plus a training licence for IDS NXT lighthouse. Everything you need to get started right away. MV



SPONSORED

MACHINE VISION: A MAJOR SUCCESS STORY As Gardasoft celebrates 20 years of producing dedicated lighting controllers, we present the first in a series that reviews the history of machine vision and explores thrilling possibilities for the future.

MUFFINS AND SPRINGS Machine vision evolved from experimental systems that were first created in the late 1970s. The automotive industry and its supply chain were some of the early adopters of machine vision and General Motors used imaging to check that collet holding springs were properly seated long before vision systems could be bought “off the shelf”. The components making up these early systems would be familiar to us now but in those days each individual item would have been sourced from a different application. Just as today, the first vision systems consisted of a camera, a lens and an illumination device which functioned together to produce an image of the item to be inspected. The image would then undergo some kind of digital image processing or analysis to produce the required information. Minicomputers, such as the Digital Equipment Corporation’s PDP series, or even custom-designed hardware, were often used to process image data. The early cameras were always TV-type until smart cameras emerged in the mid-1980s. Smart cameras, which feature onboard image processing, eliminate the need for external computer processing so that just a result signal is transmitted back to the factory control system. At that time, if the ambient illumination was insufficient for the application, light sources such as quartz halogen or


even fluorescent lights were used. Every machine vision system would have been custom-built for its specific application, and systems were very expensive. Yet, despite their primitive construction, these systems produced some very impressive results. One early industrial system was developed for muffin inspection and featured an image resolution of just 30 x 32 pixels. However, it proved highly effective at rejecting rogue, oversized muffins that would otherwise have jammed up the packaging machines.

CHIPS AND PROCESSORS The capabilities of machine vision have been transformed since the 1970s by ever-increasing computing power, memory capacity and image sensor performance. Improvements in PC processing power and memory paved the way for the first PC-based image-processing toolkits and libraries. From these came the very first self-contained machine-vision applications, which had a simple interface framework and offered a plug-and-play approach for PC-based systems. The introduction of the PCI bus in 1993 enabled image data to be transferred within a PC




and the advent of Windows 95 in 1995 made ‘point and click’ programming easier to implement. Image-processing boards with integrated processors were also developed to take advantage of the new Field Programmable Gate Arrays (FPGAs).

The advance of the PC clearly had a huge influence on image handling and processing capabilities, but the development of new semiconductor fabrication methods was also vital because it enabled the creation of new generations of CCD and CMOS image sensors. By the end of the 20th century, a machine vision system would typically feature analogue output cameras using TV standards (eg NTSC, EIA, CCIR, PAL), frame grabbers with relatively long cables (up to 30m), a Pentium II PC with PCI bus, and lighting consisting of halogen, discharge or fluorescent components. However, the 21st century was to herald a massive change in machine-vision technology, and by 2010 a typical system would be quite different.

21ST CENTURY REVOLUTION During the first decade of the 21st century, new camera technology yielded more sophisticated cameras with vastly improved frame or line rates, resolution and form factor. The faster frame and line rates facilitated higher-speed inspections and made it possible to achieve multi-light inspections at a single camera station. Successive frames or lines could now be rapidly captured with different light configurations. The smaller camera form factors made it far easier to integrate cameras into the industrial process and machine vision systems into crowded production lines. Further advances in microprocessors generated a huge rise in processing speeds while computing costs fell. This advance in camera and computing capability was accompanied by the advent of highly sophisticated image-processing software which offered an extraordinarily versatile array of tools for image analysis. Systems became used routinely for quality assessment, metrology, error/fault detection, sorting and process control. Product tracking and traceability were enabled by dedicated software for 1D and 2D code reading, pattern matching and optical character recognition. Simplified user interfaces with ‘point and click’ or ‘drag and drop’ capability made vision much more accessible to non-specialists. The advent of dedicated machine-vision data transmission standards simplified connectivity in machine vision systems and made component exchange easy.

NEW POSSIBILITIES BROUGHT TO LIGHT

A major breakthrough at this time came from research into super-bright LEDs. Experimentation with different semiconductor materials and phosphor coatings yielded lights with much higher intensity than was previously possible and a choice of wavelengths. LEDs offer very significant benefits over older illumination methods because of their much longer operational lifetime (typically up to 50,000 hours), better stability and low cost. The small size of LED lights allows them to be arranged in various geometries, making possible many different lighting configurations such as front and back lighting, on-axis, diffuse, bright field and dark field lighting. The ability to rapidly pulse LEDs has made them the first choice for illumination in most machine-vision applications from this point forward.
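The headroom gained by pulsing can be illustrated with a simple average-power model (the linear duty-cycle relationship, the safety cap and the function below are illustrative assumptions, not a manufacturer's drive specification):

```python
def max_pulse_current(i_continuous_amps, duty_cycle, overdrive_cap=10.0):
    """Simplified average-power model of LED overdrive: a lower duty
    cycle allows a proportionally higher pulse current, up to a safety
    cap, so the average thermal load never exceeds continuous operation."""
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in (0, 1]")
    return min(i_continuous_amps / duty_cycle, i_continuous_amps * overdrive_cap)

# A light rated 0.5 A continuous, strobed at 2% duty cycle, could in this
# model be pulsed at the capped 5.0 A; at 20% duty cycle, at 2.5 A.
print(max_pulse_current(0.5, 0.02), max_pulse_current(0.5, 0.2))
```

Real controllers use characterised safe-operating limits for each light rather than a single cap, so treat this purely as intuition for why strobing permits much brighter pulses.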

In 2000, Gardasoft introduced the world’s first dedicated lighting controller for machine vision. It was now possible to precisely control the drive current to LED lighting and prevent illumination intensity from fluctuating. The new lighting controllers made pulsing, or strobing, the light easy to achieve and also allowed lights to be briefly driven at much higher intensity than is possible for continuous lighting. Pulsing the light also extends the lifetime of the light and can facilitate high-speed inspections. The new, dedicated machine vision timing controllers at this time also created the flexibility to sequence cameras, lighting and other components in a wide range of lighting and timing schemes. MV Author: Jools Hudson, Gardasoft Vision Ltd, Swavesey, Cambridge, UK.

CONTACT DETAILS T: +44 1954 234970 W: www.gardasoft.com E: vision@gardasoft.com



SPONSORED

MAKING A “CASE” FOR A CASE CEI managing director Ray Berst shares his anecdotes and the trials and tribulations of developing camera cases for the machine vision industry. Components Express, Inc. (CEI) is in its 27th year of trading, and the company has attained a position as the largest manufacturer of machine vision cables in the industry. In 2008, amidst the world financial collapse, CEI made a bold move in a concerted effort to improve its business: it decided that it would strive to be the best in the world at producing machine vision cabling. But that was over 10 years ago, and it was time for another great product. We decided that we either had to make cabling for markets other than machine vision or expand our offering by developing a brand new product line for the vision industry. CEI would have to assess both its manufacturing capabilities and technical abilities. CEI had a small machine shop for making its tools and modifying moulding tools. The shop had become a playground for engineering and upper management (all gear heads at heart). This, combined with what management believed was a lacklustre product offering for industrial camera enclosures (“cases”), led to the development of CEI’s first camera case. The design input for the first unit was limited. It was:
• Robust
• Small
• IP67 / IP69
• Versatile mounting
• NO CORD GRIP
Why no cord grips? To put it plainly, cord grips suck… We are a cable manufacturer and I cringe every time I


find out one of my customers has put a “cord grip” on one of my cables. Cord grips do exactly what they say: they “grip” a cable, very tightly. This is very bad for twisted pairs, or any sort of high-speed data cabling. I could go on for hours here talking about reflection, signal degradation or outright cable breakage. They leak! Just like installing a skylight in your new house: it looks cool, but eventually it’s going to leak and cause the need for a major repair or, in our industry, a destroyed camera. Try getting that camera replaced after you drown it. It’s a pain in the butt to install! The customer has to ideally mill, or realistically drill, the cord grip and install the grip on the cable, and then they can pray that it doesn’t leak. Also, it’s time-consuming for the customer to replace the failed cable when it finally succumbs to the forces of the cord grip. The only thing truly unique about this first enclosure was the cable integrated as a part of the case. The problem was that it was too complicated, as tooling would have to be built for every camera model, and that was expensive and time-consuming. We showed it to our first customer and they weren’t impressed. They wanted a case that was small, inexpensive and IP67. Our last two months of work were met with a yawn and a sigh. Fortunately, I had a couple of other prototypes in my bag that I honestly wasn’t excited about but engineering was. I presented them with a small square and a small round enclosure. My dreams were yet to be shattered, as my customer was impressed with a very small, simple round enclosure that I had in my briefcase. I hadn’t presented it because it was engineering’s idea for a good case and not mine.




My customer thought the case that I hated was really cool. What was so cool about it?
• It weighs almost nothing: 120g, and just 320g assembled with mount and camera
• It had a built-in connector (no cord grip)
• It was half the size of the competitors’ case
• It was half the price of their current solution
• The customer could mount it in any direction he wanted
All that was great, and I rudely asked my customer for a commitment based on the prototype rolling around in my sample case. Fortunately, our customer turned out to be more business consultant than customer. I think the man felt bad for me, which was my luck… Here was the next list of features that the customer wanted us to add:
• An easy way to mount a light to it
• An alignment mark, because it’s round!
• A QR code so that his technicians could find technical data on the product
With this I said, “no problem for me”, which is code for “this is a big problem for engineering”. Within a week, our engineering team had developed a unique mounting system for a variety of lights, added the QR code and the alignment marks. Voila, we had a finished product! Or so I thought. My good customer had another great idea. They needed to mount the camera onto a collaborative robot. Previously, they would go to their own garages or design pieces to send to their own machinist. My customer wanted an “end effector erector

set”. Fortunately, I am 50 years old and, having grown up with erector sets and bleeding over their sharp edges, I understood what he wanted. My 26-year-old engineer had never seen an erector set, so this time it was good that I was there. Engineering quickly modelled up an erector set and emailed me the kit about five minutes before my meeting started. I’m not sure who was more amazed, my customer or me, but either way, we were about to build another accessory and possibly another business division. Because of our relationship with our customers, we were quickly realising that every enclosure opportunity is different. The only thing that customers have in common is their need for a one-stop shop for a complete solution, and we were ready to deliver on multiple fronts. After months of development, it was time to visit the system integrators. The integrators needed a repeatable way to mount the cameras on a production line. They needed a 4-axis mount with precision alignment markings. Time for another new product: the M4 mount. They also wanted a way to mount their lights, to be able to remove the light easily, and not always to have the light mounted at a 90-degree angle to the object.

Not genius ideas by any stretch, but that was the state of our industry. By this time, the development was in high gear and the small machine shop had become a CNC operation with fully automated multi-axis machines and the staff to run them.

In parallel to the efforts to make a small case for a 29mm camera, we had largely ignored a very popular camera that almost all of our distributors worldwide were selling: the Genie Nano, made by Teledyne Dalsa. The problem: the Nano is built to accept a larger sensor than typically found in its 29mm square predecessors. In short, we were designing enclosures for square cameras, but our customers wanted an enclosure for a rectangle.

Fortunately, engineering had a solution. We went back to our original extruded design and made it our platform for the Nano. But the customers wanted more. More options:
• A mounting bracket
• Air curtain
• Retractable wash-down flap
• NO CORD GRIP
• IP67 / IP69
• Lightweight
• Small form factor

THIS ENCLOSURE NEEDED TO BE THE SWISS ARMY KNIFE OF THE ENCLOSURE INDUSTRY The variety of lenses in the industry has forced customers into some very large enclosures in the past, since that was all that was offered. Our enclosures had to have another key difference: every enclosure must be built to the exact requirements of the customer. In short, just as my customers don’t want a three-metre cable for a one-metre application, they also don’t want a 300mm enclosure for a 100mm application. We decided to let the customer’s camera and lens selection dictate the length so that we could provide the smallest possible enclosure.

As our customers saw our willingness to adapt to them instead of having them adapt to us, we found more opportunities. The next was the Matrox GT-R. The Matrox GT-R is a complete vision system in itself, but you still have to mount a lens and perhaps a light. We wanted to make it easy for our customers and created multiple models to fit all of their light-mounting needs, coining the result a “light box”. It seems that once word gets out that you’re willing to take on the odd project, another camera lands in your hands. The Imperx Bobcat:

We needed to find a way to house the Bobcat for the food and beverage industry. One small problem: the camera is powerful and, as all cameras do, it generates heat. We developed a stainless enclosure around the original Bobcat design which included adaptation to the Bobcat’s cooling fins. This was a very time-consuming machining process, so the OEM, Imperx, helpfully offered us the same camera in a different frame, without the fins. This made it much easier for us to transmit the heat from the camera and out of the case. A set of movable shims was installed that serve two purposes: they mount the camera, and they transmit the heat from the camera to the outside of the case. After only a short time of flying around the Midwest United States and having great success with the cases, it occurred to me that I was showing off our cases without our cables. In all of my effort to be a good case salesman, I had not shown our own cases with our own custom cables. With CEI’s ability to produce custom right-angle cables in any orientation, we are able to reduce the overall profile of the case/cable combination, solving many of our customers’ biggest headaches in one package. Look for more product innovation from Components Express, Inc. and visit us online at www.componentsexpress.com MV


CONTACT DETAILS T: 1-630-257-0605 W: www.componentsexpress.com E: email@componentsexpress.com



USB3 LONG DISTANCE CABLES FOR MACHINE VISION

Active USB3 long-distance cables for USB3 Vision. CEI’s USB3 BitMaxx cables offer the industry’s first STABLE plug-and-play active cable solution for USB3 Vision, supporting full 5 Gbps USB3 throughput and power delivery up to 20 metres in length, with full USB2 backward compatibility.

1-630-257-0605 www.componentsexpress.com sales@componentsexpress.com


MODULAR 3D LASER TRIANGULATION SENSORS – TAILOR-MADE INSPECTION AND AUTOMATION SOLUTIONS

More and more industrial companies are using 3D imaging based on laser triangulation for inspection and automation. A new modular concept for 3D laser triangulation sensors now enables tailor-made sensor solutions at no extra cost, as well as maximum productivity and quality gains. Pascal Echt, marketing manager of Automation Technology, explains.

With the modular 3D laser triangulation sensors of the MCS series, users can configure the solution required for their application themselves and receive a perfectly tailored sensor – at no extra cost

The areas of application for 3D imaging based on laser triangulation are extremely diverse. Whether inspection of printed circuit boards, ball grid arrays, smartphones, glue beads, welding seams, packaging, wood, tyres, or train bogies and chassis – 3D laser triangulation sensors enable high-precision quality control. Equipped with all necessary industrial interfaces, they are easy to integrate, communicate directly with control units and allow the automation of numerous production processes. Prerequisites for this are suitable characteristics of the sensor components, camera and laser, in terms of field of view and resolution or wavelength and power, respectively, as well as correct parameters of the triangulation setup, such as triangulation angle, working distance, and scan width (x-FOV). Components with matching characteristics are always available; the challenge so far has been the optimal application-specific triangulation setup.
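The interplay between these setup parameters can be made concrete with a first-order estimate of height resolution (the formula below is a common textbook approximation for laser triangulation; the function name and the example numbers are illustrative assumptions, not AT specifications):

```python
import math

def height_resolution(pixel_size_mm, magnification, triangulation_angle_deg):
    """First-order estimate of height (z) resolution in laser triangulation:
    a height change dz shifts the laser line on the sensor by roughly
    magnification * dz * sin(angle), so one pixel corresponds to about
    pixel_size / (magnification * sin(angle)) of height."""
    angle = math.radians(triangulation_angle_deg)
    return pixel_size_mm / (magnification * math.sin(angle))

# Example: 5 um pixels, 0.1x optical magnification, 30 degree triangulation angle
dz = height_resolution(0.005, 0.1, 30.0)
print(round(dz, 3))  # height resolution in mm
```

Raising the triangulation angle or the magnification improves height resolution at the cost of occlusion and field of view, which is exactly the trade-off a configurable triangulation setup lets the user tune.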

DISADVANTAGES OF PREVIOUS SOLUTIONS Classic setups with separate components (discrete setups) offer maximum flexibility, and the user can use


components with exactly matching characteristics. However, the procurement, setup, integration and maintenance effort is high, and support is required in many cases. The components are not protected from contamination and moisture from the outset and camera calibration is often far below optimum. With the C5-CS series of 3D laser triangulation sensors launched by AT - Automation Technology - in 2015, these disadvantages were a thing of the past. C5-CS sensors combine 3D technology and laser electronics in a compact IP67 housing. The laser triangulation setup, optimised according to the Scheimpflug principle, guarantees high-precision measurement results for every area of the measurement object. C5-CS sensors are factory calibrated and equipped with everything needed for industrial applications, from GigE Vision to digital I/Os to an encoder interface. This reduces the procurement, setup, integration, support, and maintenance effort to a minimum. However, there are limitations in terms of flexibility. With its 45 models, the C5-CS series offers solutions for many applications, but not for all. And since the sensor components are permanently installed,



application-specific adaptation is only possible for the working distance.

THE SOLUTION OF THE FUTURE With the MCS series of modular 3D laser triangulation sensors, AT takes the next development step. With this series, users can configure the solution required for their application themselves. They specify the desired data, such as height resolution, working distance, scan width (x-FOV), points per profile, laser wavelength and safety class, and receive a perfectly tailored sensor composed of corresponding sensor, laser and link modules. Apart from the modular design, MCS sensors are performance-identical with C5-CS devices and combine the advantages of the latter with those of discrete setups, thus offering maximum flexibility with minimum effort.

The modular concept of the MCS series

Such application-specific solutions have so far been associated with considerable design and manufacturing costs and have therefore only been available to OEMs who purchased very large quantities. However, the modular concept of the MCS series eliminates these extra costs, so that every user can get exactly the right sensor, even as a single piece, since there is no minimum order quantity.

AN APPLICATION EXAMPLE FROM THE WOOD INDUSTRY Wood processing companies want to obtain the maximum quantity of quality products from each tree. Therefore they need reliable inspection and automation solutions for round timber sorting and optimisation (separation of main and side timber), optimised cutting of beams to boards, board sorting, optimisation of boards (trimming) and detection of surface defects such as knots, cracks, rotting and stains. An application example shows the advantages of the modular concept of the MCS series. A sawmill was looking for a solution for board sorting. A full contour measurement of the boards in the longitudinal run and the determination of characteristics such as length, width, height and volume were to be carried out. The aim of the application was to optimise the squared timber yield.

The customer’s requirements were:

• max. squared timber width/height: 200/180 mm

• measuring resolution for squared timber width/height: 0.25/0.1 mm

• measuring resolution for transport direction: 6 mm

• transport speed: 1,200 m/min

Since the C5-CS series did not offer a suitable model, AT produced a tailor-made MCS sensor for the application at no extra cost. Its characteristics were:

• sensor module cx1280 with 1,280 measuring points per profile

• measuring width (near/far): 250/400 mm

• measuring resolution x (near/far): 0.2/0.3 mm

• measuring resolution z (near/far): 0.01/0.02 mm

• z-range: 240 mm

• triangulation angle: 30°

• profile speed: 5,600 Hz with z-range 120 mm (max. 200 kHz)

• working distance: 400 mm

• laser module: 660 nm, 60°, 130 mW, class 3R

360° scan of squared timber using MCS sensors

The customer uses four of these sensors in a 360° setup and was able to significantly increase the squared timber yield and quality.

OPTIONAL DUAL HEAD SENSOR FOR “DUAL PERFORMANCE” Thanks to the modular concept of the MCS series, all configurations can also be implemented with two sensor modules. This enables even higher measurement quality due to occlusion-free 3D scans, or the combination of different sensor modules for parallel execution of different measurement tasks. A sawmill could use such a dual head sensor, for example, for occlusion-free scans during round timber sorting.

Dual head MCS sensor for occlusion-free 3D scans or parallel execution of different measurement tasks

The currently available sensor modules of the MCS series support an output of up to 4,096 points per profile and achieve a profile speed of up to 200 kHz. They have a scan width (x-FOV) of 70 to 1,800 mm, a z-range of up to 1,200 mm and a triangulation angle of 15 to 45°. Depending on the configuration, a resolution x of up to 17 µm and a resolution z of up to 1 µm can be achieved. The laser is available in red or blue and three classes: 2M, 3R and 3B. The MCS series is continuously being expanded with additional sensor and laser modules. MV

Source of all images: AT – Automation Technology GmbH
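As a cross-check, the quoted x-resolutions follow directly from dividing the measuring width by the 1,280 points per profile (plain arithmetic, shown for illustration):

```python
points_per_profile = 1280  # sensor module cx1280

# near / far measuring widths in mm, from the sensor characteristics
for label, width_mm in (("near", 250.0), ("far", 400.0)):
    print(label, round(width_mm / points_per_profile, 1), "mm")
```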




SPONSORED

HIGH-INTENSITY LEDs AND AUTOMOTIVE INSPECTION

THE CHALLENGE

Historically, lighting options for inspecting vehicle doors and body panels in automotive manufacturing were often limited to fluorescent tube or fiber optic lights. Although both light types occasionally provided adequate illumination, there were shortcomings: fluorescent lights were unstable and offered a relatively short life span, whereas fiber optic bundles and sources were bulky, expensive to install, and the light source had to be replaced approximately every 500 hours. In both cases, relatively frequent light changes decreased inspection efficiency and increased the actual per-hour cost of the light.

THE AI SOLUTION

Able to light larger areas or project high-intensity illumination over greater distances than standard LEDs, high-current LEDs combine high-intensity illumination with many of the features that make LEDs a desirable light source – structured output, long life, low power consumption, and solid-state performance. Taking advantage of the benefits of the new LED technology in the mid-1990s, Advanced illumination created a line of standard and custom-length Bar Lights. A pair of custom-length high-current red (625nm) Bar Lights was tested for suitability in a B&W camera automotive body panel inspection system. Other light options were tested and rejected, including a large-area flood light, diffuse lights, a robotic arm mounted with a vision system, and a smaller light source requiring multiple passes. The high-intensity Bar Lights, placed opposite each other at a very low angle of incidence and a large working distance, produced dark field illumination, allowing the camera, mounted directly above the production line, to detect very minor surface defects over a large area in a single pass as parts passed below.

THE RESULT

The use of red LEDs allowed the integrator to create a more efficient inspection using a matched 635 nm red band-pass filter fitted to the camera lens, effectively blocking ambient light wavelengths not emitted by the Bar Lights. This technique is not possible using white-only fluorescent or fiber optic lighting without severely diminishing the amount of light available for the inspections.


One distinct advantage of Bar Lights is their flexibility. Integrators have successfully used high-current Bar Lights in robotic work cells where targeted illumination replaced area fluorescent lighting. The lights also deploy well in hazardous work areas, such as welding sites, heavy stamping plants, or large robotic work cells. Conditions in these applications often require the lights to be placed at a safe distance while producing intense, stable, targeted illumination. High-intensity Bar Lights are also effective in applications where higher light intensity is needed at the smaller fields of view and working distances typical of standard LED light applications. Ai’s offering of high-current Bar Lights includes the AL295 MicroBrite™ Series, the AL247 UltraSeal Washdown Bar Lights, the LL174 High Intensity Bar Lights, and the AL126 and AL116 diffuse, close-work Bar Lights. Every Bar Light from Advanced illumination is available in multiple wavelengths for flexible customization to unique inspection applications. Model-dependent options may include washdown, integral heat sink, different dispersion cone angle lenses, and a variety of light conditioning filters. MV

CONTACT DETAILS - Katie Barnes W: www.advancedillumination.com E: marketing@advancedillumination.com T: 802.767.3830





CASE STUDY

SHERLOCK SOLVES AUTOMOTIVE INDUSTRY ISSUE

WITH MACHINE VISION Hand-counting piston rings from 0.29 mm to 0.79 mm in width and packaging them in various batch counts was not only adding to labour costs, but also proving unreliable in the highly competitive automotive sector. A far more reliable answer came in the form of a custom machine vision solution from Teledyne Imaging.

Pundits predict that India’s automotive sector will emerge as the world’s third-largest passenger-vehicle market in the next decade. One of the peripheral industries that supplies this burgeoning market is the piston and piston ring market — a market that owes its success mainly to the automotive industry and its demand for higher-powered engines.

IP Rings, based in Chennai, India, is a major player in the piston and piston ring market.

With the continued growth of the automotive industry, there is a constant search for new efficiencies and innovative ways to enhance the associated production, packaging and distribution processes. Piston rings are primarily used to seal the combustion chamber, thereby avoiding the leakage of gas during the combustion process.

ONE RING AT A TIME IP Rings manufactures piston rings for Tier 1 and Tier 2 original equipment manufacturers (OEMs), and it’s imperative that each batch meet the exact number of rings specified for installation in each vehicle. At the IP Rings manufacturing facility, manual inspection had been used for sorting, counting and packaging each batch of rings before shipping them to the customer, which was time-consuming and labour-intensive. Each batch contains 100 or more rings varying in size from a minimum width of 0.29 mm to a width of up to 0.79 mm. As the rings come off the production line, they are counted and packaged as per specifications before being dispatched. Counting the precise number of rings is critical and avoids confusion during piston set assembly. Having extra rings remaining, or fewer rings than specified, at the end of assembly requires recounting the rings manually to ensure there aren’t any missing piston rings. While the customer hadn’t received a high number of complaints about product quality, IP Rings needed a more reliable system to decrease cycle time and lower production costs.

CHOOSING THE RIGHT TECHNOLOGY


With expertise in this manufacturing environment, Qualitas Technologies, a provider of industrial automation solutions in India, approached IP Rings about automating their production process by implementing a semi-automatic machine tailored to handle their ring-counting process. Qualitas has successfully used machine vision technology to streamline automation for its clients, and it was able to demonstrate how vision technology could help IP Rings improve efficiency, lower overhead costs, and increase ROI. After learning about the benefits of vision technology, IP Rings was ready to work with Qualitas to automate their production process. For this application, Qualitas designed a vision system using Teledyne DALSA’s Sherlock machine vision software configured as a single-camera solution, with a red light for illumination.

THE SHERLOCK SOLUTION “Our experience using Teledyne DALSA’s solutions made them an ideal partner for this application,” said Vinay Arabatti, Solutions Architect at Qualitas Technologies. “Sherlock’s vision tools are able to ensure a high degree of accuracy by recognizing minor variations with the rings that could alter the final counting process.”




For this inspection the operator loads a batch of rings onto a specially constructed jig for the application and triggers the camera to capture the image of the rings. The camera is positioned vertically to capture images with 2592 x 1944 resolution to cover a field of view of 145mm.
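From these figures, the sampling density across the field of view works out to roughly 0.056 mm per pixel, comfortably finer than the 0.29 mm minimum ring width (simple arithmetic, shown for illustration):

```python
fov_mm = 145.0   # field of view covered by the camera
pixels = 2592    # horizontal image resolution
print(round(fov_mm / pixels, 4))  # mm sampled by each pixel
```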

“Our main objective here was to decrease the cycle time and increase accuracy by automating the tedious task.”

A red diffused bar light is mounted at a 45° angle to get a properly illuminated area and to avoid the effect of ambient lighting. Two stoppers, one fixed and the other movable, are provided to hold the rings within the given region-of-interest, where Teledyne DALSA’s Sherlock software “counts” the rings by tracking the edges. Sherlock’s robust Edge Count Tool is able to recognise slight discrepancies, such as larger-than-expected gaps between the rings or substances on a ring that alter its appearance. For edges that appear to be missing, Sherlock generates a warning and highlights the problem so the operator can verify the count and adjust the stack to resolve the issue. “Our main objective here was to decrease the cycle time and increase accuracy by automating the tedious task of counting the piston rings, with the help of a highly accurate vision system,” said Arabatti. “Arresting the ambient light was a major challenge. We used a customised jig to overcome the issue. A diffused red light was used to properly illuminate the rings of all sizes and textures.”

To complete the application, Qualitas developed a fully customised user interface for the packing and shipping process using Sherlock’s advanced imaging capabilities and extensive software development library. Sherlock easily integrates with an industrial PC to share images and report statistical data. Operators start by choosing the particular model name along with the predefined number of rings for a bundle to trace the inspection. Based on the selected model name, the solution is loaded in Sherlock, where the image is captured and processed, and results are displayed as “pass” or “fail”. The “pass” batch is then sent for packing and shipping to the end customer. Cloud-based image storage provides operators with convenient access to monitor inspection results and overall performance so they can adjust inspection parameters as needed. The piston ring counting machine has been operational for several months at IP Rings. The manual counting of 100+ rings that previously took five minutes is now completed in 10 seconds, including the time needed to feed the rings to the jig, and the company is planning to deploy more of these jigs for further efficiency. IP Rings is thrilled with its new, low-maintenance vision system and the Sherlock software, which has introduced major productivity gains to the ring counting process, with time savings and increased accuracy improving overall production efficiency. MV
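Sherlock's Edge Count Tool is proprietary, but the underlying idea of counting rings by tracking edge transitions along a line across the stack can be sketched in a few lines (an illustrative toy on synthetic data, not the Sherlock implementation):

```python
import numpy as np

def count_rings(profile, threshold=50):
    """Count rings along a 1-D intensity profile taken across the stack:
    each ring shows up as one bright band, so counting dark-to-bright
    (rising) edge transitions gives the ring count."""
    bright = profile > threshold
    rising = np.sum(~bright[:-1] & bright[1:])
    return int(rising)

# Synthetic profile: 5 bright rings (value 200) on a dark background (value 10)
profile = np.full(100, 10)
for start in range(5, 100, 20):
    profile[start:start + 8] = 200

print(count_rings(profile))  # → 5
```

A production tool additionally validates the spacing between edges, which is how unexpectedly large gaps or contaminated rings can be flagged for operator review.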



SPONSORED

EASIER THAN EVER BEFORE: VERISENS VISION SENSORS CONTROL UNIVERSAL ROBOTS The smart VeriSens vision sensors XF900 and XC900 can control the collaborative robots (cobots) of Universal Robots within only a few minutes of setup. The robot-compatible vision sensors are mounted directly on the cobot or above it. Thanks to the SmartGrid (patent pending), calibration for image distortion, conversion into world coordinates, and coordinate alignment between the vision sensor and robot take place automatically and with minimal effort. This eliminates the elaborate manual “hand-eye” calibration of robot and vision sensor that is conventionally required. The result is not only more precise; it also reduces setup to a few minutes. The installation and configuration of the vision sensors are transparent and easy to understand: via the specifically developed VeriSens URCap interface for robot control, only a few steps are needed to benefit from the diverse VeriSens image processing options. In the programming of the robot itself, only two additional commands (nodes) are necessary to allow a great number of applications across various industries to benefit from the advantages of Vision Guided Robotics. Instead of taught-in waypoints, free positions are used, on which objects are then recognised visually. In addition, the established functions can check object overlaps and gripper clearance. Furthermore, VeriSens vision sensors can, for example, verify a free storage area, carry out quality controls of objects variably positioned in the provided space, and identify and measure objects. Learn more at: www.baumer.com/verisens-ur
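The SmartGrid calibration itself is patented and proprietary, but the general principle — fitting a pixel-to-world transform from matched calibration points so a detected object position becomes a robot waypoint — can be illustrated with a simple least-squares affine fit. The grid points and the 0.5 mm/pixel scale below are assumed numbers for illustration, not Baumer’s method or data.

```python
import numpy as np

def fit_affine(pixel_pts, world_pts):
    """Least-squares fit of a 2-D affine map world = [x, y, 1] @ M
    from matched calibration points (e.g. grid intersections)."""
    px = np.asarray(pixel_pts, dtype=float)
    wd = np.asarray(world_pts, dtype=float)
    X = np.hstack([px, np.ones((len(px), 1))])   # homogeneous design matrix
    M, *_ = np.linalg.lstsq(X, wd, rcond=None)   # 3x2 parameter matrix
    return M

def pixel_to_world(M, pt):
    x, y = pt
    return np.array([x, y, 1.0]) @ M

# Assumed calibration: 0.5 mm per pixel, world origin offset (10, 20) mm.
pixel_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
world_pts = [(10, 20), (60, 20), (10, 70), (60, 70)]
M = fit_affine(pixel_pts, world_pts)
target = pixel_to_world(M, (50, 50))  # object detected at pixel (50, 50)
```

An affine fit handles scale, rotation and offset; correcting lens distortion, as the vision sensor does, requires a richer (non-linear) model on top of this.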

MV

CONTACT DETAILS N: Nicole Marofsky W: https://www.baumer.com E: nmarofsky@baumer.com



USING AI TO DETECT ANOMALIES EASILY AND PRECISELY Machine vision plays a key role in defect inspection for product quality assurance purposes. Rule-based systems as well as modern technologies based on artificial intelligence (AI) are used here. In particular, this includes deep learning based on convolutional neural networks (CNNs). Rule-based solutions must cover a wide range of anomaly manifestations, which means that they require an equally great amount of programming effort. The enormous advantage of AI systems, on the other hand, is that they learn new information independently through training. Defects are identified in multiple steps. First, a sufficiently large number of “training images” of all the defects to be detected must be collected. These images are then labelled and used to train the underlying CNN.

DIVIDING IMAGES INTO CLASSES VIA A LABELLING PROCESS In this context, deep learning algorithms are used in different detection processes. Classification involves dividing objects or defects into specific classes based solely on image data. For object detection, the labelling process is carried out by drawing rectangles around the objects to be recognised in each individual image and then indicating the object class according to the particular application. In this way, the deep learning algorithm learns which features fit each particular class. As a result, objects or defects can be located automatically and assigned to a special class. Finally, each individual pixel of an image is allocated to a specific class during semantic segmentation. This results in regions that can be assigned to a class. The challenge facing all deep-learning-based detection methods is that they often require a relatively large number of training images, all of which must be labelled for assignment to a class. Images displaying objects with the defects to be detected are also needed for the training process. Depending on the application, 300 or more images need to be captured showing various versions of the corresponding object with a specific defect, such as a scratch or deformation. This involves a significant outlay, which many companies prefer to avoid. Moreover, there are applications that do not provide a sufficient number of these “bad” images.
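The three labelling schemes described above can be pictured as progressively richer annotations for the same training image. The class ids, box coordinates and image size below are hypothetical, chosen only to make the distinction concrete.

```python
import numpy as np

# Illustrative class ids: 0 = background, 1 = scratch, 2 = deformation.

# Classification: one label for the whole image.
classification_label = 1  # "this image shows a scratch"

# Object detection: one axis-aligned box per defect, plus its class.
detection_labels = [
    {"bbox": (34, 12, 58, 40), "class_id": 1},  # (x_min, y_min, x_max, y_max)
    {"bbox": (70, 65, 90, 80), "class_id": 2},
]

# Semantic segmentation: every pixel is assigned a class id.
mask = np.zeros((100, 100), dtype=np.uint8)  # background everywhere...
mask[12:40, 34:58] = 1                       # ...scratch region
mask[65:80, 70:90] = 2                       # ...deformation region

defect_pixels = int((mask > 0).sum())        # per-pixel labels allow area measurement
```

The jump in labelling effort from one integer per image, to a few boxes, to a full per-pixel mask is exactly the outlay the article goes on to discuss.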

SIMPLIFYING DEEP-LEARNING-BASED INSPECTION TASKS MVTec offers a practical solution to this problem: HALCON 19.11, the latest version of the company’s standard machine vision software. A feature called “Anomaly Detection” is integrated into this software, raising the detection of anomalies to a whole new level. What is special about this tool is that it requires very few training images: as few as 20, and a maximum of about 100, are sufficient for training the deep learning network. In addition, “bad” images are no longer required. The system is able to carry out the training process based exclusively on defect-free images. Following the training process, many types of deviations are precisely located in all additional images. For this type of defect detection, it is therefore no longer necessary to first label training images of objects with defects. Consequently, deep-learning-based inspection tasks can be implemented even more efficiently and with far less effort.

The new feature makes it possible to detect anomalies even when their appearance is not known ahead of time. These deviations may relate to colour, structure or contamination. For example, a beverage bottler can reliably locate small scratches, cracks or fissures in the neck of a bottle when inspecting the containers. As part of the training process, an “anomaly map” is created, on which a grey value is assigned to areas where an anomaly is likely to be present. The regions that are extremely likely to contain a defect, as well as the size of that defect, can then be determined with pixel accuracy by segmenting the image. In tests involving only 20 training images, it was possible to implement this process in just six minutes using MVTec HALCON 19.11.

[Figure: The new HALCON feature allows anomalies to be precisely located.]
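The final segmentation step — turning a grey-value anomaly map into a pixel-accurate defect region — is easy to illustrate outside HALCON. The sketch below is a generic NumPy version with an assumed score map and threshold; it is not the HALCON API or its trained model, only the thresholding idea.

```python
import numpy as np

def segment_anomalies(anomaly_map, threshold=0.5):
    """Segment an anomaly map (per-pixel defect scores in [0, 1])
    into a binary defect mask, and report the defect area in pixels."""
    mask = anomaly_map >= threshold
    return mask, int(mask.sum())

# Assumed 6x6 anomaly map: low scores everywhere, one high-scoring patch.
scores = np.full((6, 6), 0.05)
scores[2:4, 2:5] = 0.9                     # region likely to contain a defect
mask, area = segment_anomalies(scores)     # area = size of the defect region
```

In the real workflow the score map comes from the network trained on defect-free images; everything the model cannot reconstruct well scores high and survives the threshold.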

SUMMARY Deep-learning-based defect detection methods generally require a large number of training images that display the object with the particular defect. The new Anomaly Detection feature allows the number of images needed for training to be reduced to as few as 20 and a maximum of 100. These images can also be defect-free, meaning that they do not have to display the anomaly to be detected on the respective object. This eliminates the need for labelling the images, which saves companies a great deal of time and money. MV



FAIL TO PREPARE,

PREPARE TO FAIL Could investing in supporting software save headaches? Sean Robinson, service leader at industrial automation expert Novotek UK & Ireland, explains how a little forethought and investment can sidestep looming future problems.

Benjamin Franklin enjoyed this old proverb: “A little neglect may breed mischief. For want of a nail, the shoe was lost. For want of a shoe the horse was lost, and for want of a horse the rider was lost.” The message is easily ignored: seemingly unimportant factors have dramatic knock-on effects, and omitting them entirely invites grave consequences. Sadly, the famous proverb ends with the battle being lost, for want of a rider. While military equipment logistics might be somewhat beyond our remit here at Novotek, the analogy has clear parallels in many industries. Instead of losing battles, however, industries face unplanned downtime. A recent study from ServiceMax shows that a staggering 369 of the 450 companies surveyed, over 80 per cent, had suffered unplanned downtime within the three years in question. There are many ways that unplanned downtime can negatively impact a business. In continuous process industries — which range from butchery and water treatment to steelmaking — downtime causes product to be lost. The costs of this further impact the bottom line, beyond just lost profits. When producing less process-sensitive products, the negatives are still profound. Imagine if one of your upstream suppliers experienced an unplanned fault and your business depended on their supply. How long would your company put up with it? The tighter logistical timings get, the more of an issue it becomes — particularly in cases of just-in-time production.


This is a major concern that’s shared by a large portion of businesses. More than half feared they would lose their customers’ trust if they were to suffer a high-profile incident. One-in-ten said that such a failure would be unrecoverable, and ultimately spell the end for their business.

PREVENTING POOR PERFORMANCE Another military analogy comes to mind, here. Specifically, the old British army saying known as the six Ps: prior planning and preparation prevents poor performance. Preparation in these contexts refers to investments in, and the thorough integration of, supporting oversight software, but also in ongoing technical support of that software.



One approach is to work with the concept of the digital twin. These systems emulate physical processes in software, meaning failure modes can be safely simulated, processes can be tested against changes in products or ingredients and even entire line redesigns can be simulated, all without having to make any material changes. However, as with most things in industry, these schemes must be well supported, otherwise they can easily cause trouble. Just as unmanaged physical equipment can lead to inefficiencies and failures creeping in through malfunction and wear, implementing incorrect decisions derived from poorly managed digital twin software can be equally detrimental.

MAKE MAINTENANCE A SURE THING This is a pervasive, perennial issue that technology-reliant businesses face: the mismatch in perceived value between active equipment, the infrastructure supporting it and lifetime technical support. An upgrade to faster lineside robotics, for example, is easy for budgeters to rationalise because the results are reflected immediately in the finances. However, investing in seemingly invisible support networks is a much harder argument to make without the numbers to back up your point. So, what do the numbers say? Within the companies ServiceMax surveyed, losses from unplanned downtime averaged more than half a million pounds per company, per year. That’s not an insignificant sum and, while many companies might have the fortitude and flexibility to shoulder such an impact, there’s no reason why they should have to.

Businesses are waking up to these facts. Over 60 per cent of respondents in the previously mentioned study agreed that digital twins would prevent unplanned failures, and more than half said they plan to invest in support infrastructure in the coming years. Similarly, technical integration and support for this infrastructure is entering the minds of business leaders. In 2013, only 15 per cent of businesses deemed new software implementations ‘very successful’, according to Wood, Hewlin and Lah in their book B4B. This is why industrial IT is best deployed by experts with ongoing support. For example, we encourage businesses to join programmes such as our Accelerated Plan for GE Digital software, which provides ongoing technical guidance and support on a rolling basis. If a software deployment goes awry or an issue arises during operation, it is better to nip the problem in the bud with prompt technical support than to wait and become mired in it. Neglect does indeed breed mischief. As industries and businesses worldwide become more complicated and interdependent, plenty of details can slip through the cracks only to come storming right back as major problems. MV




AN INNOVATION IN EDUCATION MVPro gets the inside track on the newly-launched Schaffhausen Institute of Technology and its founder, Dr Serguei Beloussov.

For more than a century the Swiss Canton of Schaffhausen has provided the energy to drive change and innovation. Harnessing the power of the River Rhine, which flows through the city, the development of the Moser dam back in 1863 put it at the heart of industry. Industrial pioneers of that age – still thriving today – include Georg Fischer and the International Watch Company (IWC). Fast forward to today and it also boasts the likes of ABB, Johnson & Johnson and Garmin amongst a plethora of businesses drawn to this small but charming city, which sits north of Zurich and south of the German border. It has, through its economic policies, become an appealing location for businesses at the forefront of developing the technologies of the future. But what the city didn’t possess was a university to play an active role in supporting the city and its businesses with additional research, knowledge and education. That has all changed with the arrival of the Schaffhausen Institute of Technology, the creation of Dr Serguei Beloussov, founder and CEO of Acronis, which has headquarters in the city. Very much in its early stages, SIT has set up educational links with the National University of Singapore, where Acronis also has headquarters, and Carnegie Mellon University in the USA to begin tutoring its first students. Educationally, it will focus on three areas of advanced technology: software, physics and digital transformation. Christian Wipf, SIT board member, said: “The whole idea is that for an industrial technology ecosystem you need three things: industry, people and educational facilities. Schaffhausen has industry, and a very long-standing history with industry; lots of technology companies are based here and there are also people with manufacturing and R&D facilities.


“What they are lacking is a local education institution at a tertiary level that supplies them with educated graduates. It seems very logical because it is the missing piece.” SIT has secured three million Swiss francs from the Schaffhausen Canton to enable its development, as it fills the vital missing link in the city’s infrastructure. Christian Amsler, State Councillor of the Department for Education of Schaffhausen, said: “Schaffhausen could strongly benefit from SIT’s education, research and innovation offering. It will provide the region with new experts in the field of information and communication technology, technology transfer between the university and industry, and the establishment of start-ups in the SIT Tech Park.” SIT has identified a site next to the famous Rhinefalls on which to establish a permanent base for its students and also develop the SIT Tech Park. Wipf added: “We’re only a few months old. We are looking to rent an old industrial area of Schaffhausen by the Rhinefalls. Heavy industry was situated there but with globalisation, that has changed, and so the site is now being redeveloped and we’re looking to become a significant part of that redevelopment. Its location is attractive, overlooking the falls, while having the old building renovated to modern standards gives it a special campus feeling.” MV



DR ‘CAN DO’

It was 25 years ago that Dr Serguei Beloussov first had the idea of creating what he calls a deeply fundamental university: a scientific research centre and university that would be able to produce business applications and applied technology. He discussed the idea with friends in the United States, but the belief then was that it would take billions of dollars to realise the dream. Dr Beloussov is a very smart and well-educated individual, with degrees and PhDs in Computer Science, Physics and Electrical Engineering – two of them cum laude. He has throughout his life been a technology entrepreneur and investor, and has more than two decades’ experience of building and growing tech companies across the globe. Today he is the CEO and chair of the board of directors of Acronis, a global leader in cyber protection, which he founded in 2003 and which now has more than 5.5m customers worldwide. The success of the company has given Dr Beloussov the opportunity to realise his dream and create the Schaffhausen Institute of Technology (SIT).

In an intriguing one-on-one interview, which included mentions of Einstein, wine production and even the role of Premier League football managers, MVPro got to discover more about his views on SIT and the impact it will have.

MVPRO: WHY IS SIT IMPORTANT TO YOU?


SB: I try to do things which are fun and good business and it fits. When people ask me this question 15 years ago or 10 years ago I used to answer ‘I have no choice’ but I don’t answer this way now because people find it funny. I mean why not? When I was building schools for example in different countries with Acronis people kept asking ‘Why are you building schools?’ and I’d answer, ‘Why not? It’s a good thing to do, it’s good business because it is a good way of engaging partners, customers and employees. Everybody is very happy and it’s fun, so why not do it’. That is the way I think because I am an entrepreneur. I do what I can do. The university I can do. It is something I want to do, I have to do and it fits my aims but it starts from ‘I can do it’.

MVPRO: IT’S NOT A TRADITIONAL UNIVERSITY THAT YOU ARE SETTING UP, IS IT? SB: It will have all the elements of a traditional university in terms of the qualifications. It will have a Master’s programme, an undergraduate programme and a postdoctoral programme. It will provide degrees, and do education and science. But it is a different model, a different approach and a different method; the ultimate goal is the same. That is, to put knowledge in the brains of people so that they can do incredible things on computers.

MVPRO: WHAT IS THE CURRENT STATUS OF THE UNIVERSITY? SB: We currently have five students who were based at the National University of Singapore, one of our educational partners. They are now at Carnegie Mellon University in the United States, another partner. The educational model we have is that in the first year we give a joint degree, so the students study there and then come back and work. The second year they study two thirds, the third year one third and the fourth year they study full time. So, from the third year we will start an undergraduate programme here and we have a ramp-up plan for that. What they study is controlled by us and we give them all the materials. We will also look to partner with probably one more school in Switzerland, which we haven’t announced yet, so that we’ll have three educational partners.

MVPRO: WHAT BUSINESSES OR WHAT INDUSTRIES WILL YOU TARGET? SB: Any. That’s why we want to build it. In every industry right now you need science, you understand that every industry in the world can be scientifically understood and improved. We can engage with any industry and that’s what is exciting and that is where there is a huge demand for science as a service and education as a service. It is the demand which allows us to be able to succeed.

MVPRO: CAN YOU EXPLAIN MORE ABOUT SCIENCE AS A SERVICE? SB: In Europe many universities could do science as a service, but they are not in a situation to do that because of how they are funded. If they start doing science as a service they make a little bit of extra money, but it is a very different type of work, and government funding can potentially be decreased because they are getting income from industry, so they can’t make a business out of science. We don’t have those same restrictions. So, delivering science as a service can be done in two ways. One is that companies approach you with a specific problem they have, and you try to find the solution by doing scientific research in one or other area, which is why we need interdisciplinary research. The second way is that you approach companies, talk to them about their business and propose what you can do for them in scientific research that is relevant to their areas.

Today everybody knows that in almost any area of human activity, if you apply science you can improve productivity, profitability and quality. MV



THE ‘CONNECTED WORKER’ CAN STOP THE UK’S SKILLS DRAIN Augmented reality and artificial intelligence can ensure knowledge is passed on to next-gen employees.

The rise of the ‘connected worker’ could help end the skills drain being accelerated by an ageing workforce, according to the boss of one of the world’s leading providers of industrial innovation technology. PTC’s President and CEO Jim Heppelmann believes the use of Augmented Reality (AR) and Artificial Intelligence (AI) can protect the knowledge and expertise of retiring workers by training next-gen and existing employees. The chief executive of the company, which has its UK offices in Farnborough, pointed to an increased uptake in the number of companies investing in AR as a way of protecting traditional skills and securing IP.

In its simplest sense, PTC’s Vuforia Expert Capture lets experts record a task as they carry it out using a wearable device, such as Microsoft’s HoloLens. The content is then turned into a step-by-step video guide with instructions for other workers to follow through the wearable tech, locking valuable skills in place forever.

“The terms Artificial Intelligence and Augmented Reality automatically conjure up images of robots taking human jobs - well the ‘connected worker’ paints a completely different picture,” explained Heppelmann, who has spoken across the globe about the importance of embracing digital technology in manufacturing. “One of the biggest threats to UK industry is an ageing workforce, with recent data from the European Labour Force Survey revealing that 16 per cent of the total EU workforce is aged 55 or older. There is a real danger that these experts will retire before the next generation has had the chance to learn from them.”


Heppelmann continued: “This no longer needs to be the case. Adoption is growing thanks to the ability to combine AR and AI to offer cost-effective solutions to manufacturers, not to mention a change in mindset from industry, which has now realised the importance of investing in business-ready software and hardware.



“We have countless examples of small, medium and large firms that are embracing ‘connected worker’ technology to protect knowledge when workers retire, to reduce the costs of onboarding new employees and even to quickly reskill and cross-train existing staff. “I can only see this trend continuing, especially as we see technology and platforms mature to meet the requirements of the modern-day manufacturer. These technologies can bring the superpower of computing into the arms and legs of the workforce. “According to the recent PwC ‘Seeing is Believing’ report, wider adoption of VR and AR is going to add £1.5 trillion to the world economy over the next ten years. It’s not something businesses can ignore any longer.”

“One of the biggest threats to UK industry is an ageing workforce. There is a real danger that these experts will retire before the next generation has had the chance to learn from them”

Augmented Reality is still a relatively new technology, with its use in industry only dating back five years. Previously, it has been used mainly to enrich static views, with information overlaid on to reality, but new functionality being developed and rolled out over the next 12 months will overlay information dynamically and, using low-cost or high-quality glasses, enable nearly every industrial application imaginable to benefit from Augmented Reality.

Heppelmann concluded: “Augmented Reality is one of the most effective user interfaces ever developed, but it isn’t that useful if it never makes it out of R&D as a true off-the-shelf business tool.

“PTC is heavily investing and working hard to ensure that organisations can leverage the additional technologies related to the Internet of Things (IoT), Product Lifecycle Management (PLM) and Generative Design, ultimately leveraging the full spectrum of what Industry 4.0 has to offer whilst also ensuring people are at the centre of Digital Transformation.” MV



ARE COBOTS INHERENTLY SAFE? Results from the Global Robotics Report 2019 identified that 79 per cent of automation distributors do not believe their customers understand the safety requirements of installing a collaborative robot. Nigel Smith, managing director of Toshiba Machine partner TM Robotics, quashes some common misconceptions about collaborative robot safety.

Collaborative robots, often referred to as cobots, have been heavily marketed as unguarded, easy-to-integrate machines that can work seamlessly alongside human workers. However, this doesn’t make these machines exempt from the safety regulations associated with regular industrial robots.

STANDARDS FOR COBOT SAFETY While there are significant differences between cobots and their industrial counterparts, the industry does not acknowledge cobots as a separate entity. As far as safety is concerned, cobots are subject to the same stringent regulations as traditional robot variations — that’s your SCARA, six-axis and Cartesian models. Robots for use in manufacturing are subject to two distinct standards: ISO 10218-1:2011 Robots and Robotic Devices - Safety Requirements for Industrial Robots, and ISO 10218-2:2011 - Part 2: Robot Systems and Integration. At present, there is no comprehensive standard that has been developed exclusively for the safety of collaborative robots, but there is plenty of guidance available. Cobot end users should adhere to the most relevant published guidance: the ISO 10218 standards and a report entitled Collision and Injury Criteria When Working with Collaborative Robots. Additionally, a technical specification, ISO/TS 15066, was released in February 2016. This specification was published to provide safety guidelines for the use of robots in collaborative applications, and sets out guidelines for force limitation and maximum allowable robot power and speed.

PERFORMING A RISK ASSESSMENT There’s plenty of literature on the safety requirements of collaborative robots, but the problem is that this information is often overlooked. Due to the way cobots have been marketed, many plant managers mistakenly assume that all cobots are automatically safe for use alongside their employees. After all, they are ‘collaborative’. However, this misconception simply isn’t true. Deploying a cobot safely requires a comprehensive risk assessment. This should consider the risks that may occur while the robot is in operation, performing the tasks required of it, as well as the potential risks when the cobot is between tasks. Unlike traditional variations, cobots are often lightweight and portable, so these machines are ideal for various tasks within a factory. In this instance, it is imperative that the plant manager assesses how safety may be compromised when the cobot is in transit, for instance when it is moved from one section of the production line to another. In addition, an assessment is required for every separate activity and task the cobot will perform.

“There’s plenty of literature on the safety requirements of collaborative robots, but the problem is that this information is often overlooked.”

Consider packaging applications as an example. A risk assessment may find that, in order to operate at full speed and meet palletising KPIs, fencing around the cobot is required to maintain worker safety. Albeit standard practice with traditional industrial robots, fencing usually isn’t considered when purchasing a cobot, so these additional safety features often aren’t budgeted for.

TO COBOT, OR NOT? The motivation for most automation investments is to increase productivity and output. Reducing a cobot’s operating speed in order to remove safety fencing therefore does not make sense from a business or manufacturing perspective. What’s more, physically separating the robot from human workers removes the entire nature of the machine. Put simply, it is no longer collaborative. In these instances, it is worth considering whether a cobot is what you really need or if a traditional robot might be more suitable. Six-axis robots, for instance, have long been used to increase productivity in packaging applications. For many of these packaging and palletising tasks, there’s no real need for human interaction with the robot. As a result, enabling this collaboration through investment in a cobot doesn’t assist productivity or output.

There’s no doubt that cobots have their place in the factory. In fact, reports suggest that the global cobot market will grow to a huge $3,811.483 million by 2021 — and we’re not surprised. This growth represents the view that cobots can be an ideal first step towards automated processes. However, as the results of the Global Robotics Report 2019 suggest, understanding of these machines and their safety requirements is lacking. To avoid hazards in the factory — and poor investments from end users — greater clarity on what makes a cobot is required. MV




THE GLOBAL IMPACT OF

AUTOMATION New research by MerchantMachine.co.uk has revealed which countries are most prepared for automation and which nations are most at risk. The research also offers insight into the amount of global revenue automation is set to generate in the next three years and which industries are at highest risk around the world. According to the research, Slovakia is the least prepared country for implementing automation and robotics, with the nation being only 29% ready despite 44% of its workforce being at risk. Russia is also one of the least prepared nations, being only 19% ready for the changeover, yet with 23% of jobs at risk. The same was true for the Czech Republic and Greece. Poland and Italy also featured among the riskiest countries, with Poland 33% ready for the change but with 33% of jobs at risk. The same applied to Italy, which is 39% ready for automation but also has 39% of its workforce in jeopardy of losing their jobs to these changes. One of the most industrialised countries in the world, the Netherlands, leads the way and is crowned the most prepared for implementing automation. The research reveals the Netherlands is 95% ready for the change, with 31% of its workforce at risk of losing their jobs.

The Scandinavian region followed closely behind, with three countries featuring in the top five. Denmark was the second-most prepared country for automation at 88%, with around 31% of jobs at risk. Sweden, home to Sony Ericsson and Spotify, followed closely behind, with the nation 83% prepared and 25% of its workforce at risk of automation replacing workers. At 82%, the United Kingdom takes fourth place in the list of countries most ready for automation and robotics, whilst 30% of UK jobs are at risk of automation. Finland rounds off the top five, with the country being 81% ready whilst having only 22% of jobs at risk - the lowest number in Europe. The research also reveals the amount of revenue the automation sector is expected to make over the next few years: worldwide revenues in 2016 amounted to around $271 million, and this year it is set to make in excess of $2 billion. As the world prepares for automation, revenue is predicted to reach an all-time high of $3 billion by 2022. To see which nations are at high risk of automation and which are the most prepared, go to https://merchantmachine.co.uk/job-automation/ MV

The Netherlands is home to companies such as ING Group and Shell but is also internationally recognised for its agriculture, materials, and technology.




Six Essential Considerations for Machine Vision Lighting 4. Maintain stable illumination The brightness of your LED lighting is very sensitive to tiny changes in supply voltage, which is why all lighting manufacturers recommend using a current drive for LED lights. The graphs show that a small change in LED drive voltage causes a large change in light intensity: just a 10% variation in supply voltage to an LED light will double the brightness. All Gardasoft LED controllers provide a stable current drive to ensure that your machine vision performance remains accurate and repeatable.
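The doubling figure follows from the exponential current-voltage characteristic of an LED junction, with light output roughly proportional to current. The sketch below uses an illustrative exponential diode model; the saturation current and the exponential slope are assumed values chosen so that a 10% rise from a 3.0 V operating point roughly doubles the current — they are not measured data for any specific LED.

```python
import math

def led_current(v, i_s=1e-12, slope=0.43):
    """Illustrative exponential diode model: I = Is * exp(V / slope).
    i_s and slope are assumed constants for demonstration only."""
    return i_s * math.exp(v / slope)

v_nom = 3.0
ratio = led_current(v_nom * 1.10) / led_current(v_nom)
# A ~10% supply-voltage rise roughly doubles the current, and since
# light output tracks current, roughly doubles the brightness.
# A current-mode driver regulates I directly, so output stays fixed.
```

This is why a regulated current drive, rather than a voltage supply, is the stable way to power machine vision lighting.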

[Graphs: Relative Output Flux vs. Forward Current and Forward Current vs. Forward Voltage (20 ms, single pulse, Tj = 25 °C). Luminus SST90R optical characteristics, acknowledgement: Luminus Inc]

To read more about the Six Essential Considerations for Machine Vision Lighting see www.gardasoft.com/six-essential-considerations

Semiconductor | PCB Inspection | Pharmaceuticals | Food Inspection

Telephone: +44 (0) 1954 234970 | +1 603 657 9026 Email: vision@gardasoft.com

www.gardasoft.com


FILTERS: A NECESSITY, NOT AN ACCESSORY.

INNOVATIVE FILTER DESIGNS FOR INDUSTRIAL IMAGING

MIDOPT.COM

