STEMMER IMAGING CEO INTERVIEW
TOP SECTOR NEWS AND FEATURES
MACHINE VISION’S GOLDEN YEARS
mvpromedia.eu
SOFTWARE: LET’S HEAR IT FOR THOSE UNSUNG LINES OF CODE
ISSUE 9 - JUNE 2018
MACHINE VISION PROFESSIONAL
CONTENTS
MVPRO TEAM
4
Welcome to MVPRO
6
LATEST NEWS - The latest and biggest stories from the Machine Vision sector
8
NEWS - a round-up of what’s been happening in the Machine Vision sector
20
MAIN FEATURE - Lines of code are not so photogenic, but without them, nothing would function. Here we take a look at some of the software packages that make our world go around
32
STEMMER - MVPro Magazine’s editor Neil Martin takes the opportunity to speak with STEMMER IMAGING CEO Christof Zollitsch to see how he’s taking it all after the recent IPO.
36
FRAMOS - Sebastien Dignard of FRAMOS Technologies is impressed with SONY
40
PHOTONEO - As Multipix Imaging begins a deal to distribute Photoneo’s 3D scanner products within the UK and Ireland, we take a look at the parameters of 3D sensing techniques, as described by Tomas Kovacovsky, CTO of Photoneo
46
BITFLOW - BitFlow has offered a Camera Link frame grabber for over 20 years. Its latest offering combines the knowledge gained from handling the high data rates of CoaXPress with the requirements of Camera Link 2.0
48
NERIAN - Efficient stereovision thanks to hardware-based image processing
50
BAUMER - Tough when it’s rough: new cameras with IP 65/67 protection ensure reliable operation from -40 °C to 70 °C
52
LMI - Canadian-based LMI Technologies is a leader in 3D smart sensor technology. MVPro Editor Neil Martin caught up with CEO Terry Arden to ask him some questions and see how things were going
55
ALLIED VISION - Sad industry news: Allied Vision’s CEO Frank Grube passes away
56
PUBLIC VISION - Many economies have been in that Goldilocks position of being neither too hot, nor too cold, and enjoying the benefits. Editor Neil Martin asks is the porridge now turning a little cooler?
58
VISION BUSINESS - Editor Neil Martin looks at the window of opportunity for machine vision companies who want to float, raise finance, or sell. But how long before the window starts to close?
60
CONFERENCES - MVPro Magazine Editor Neil Martin wends his way again to the heart of England and then takes a look at what else has been keeping the conference sector busy
Neil Martin Editor-in-Chief neil.martin@mvpromedia.eu
Alex Sullivan Publishing Director alex.sullivan@mvpromedia.eu
Cally Bennett Group Business Manager cally.bennett@mvpromedia.eu
Paige Haughton Sales and Marketing Executive Paige.haughton@cliftonmedialab.com
Visit our website for daily updates
www.mvpromedia.eu
MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2018. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies. Designed by The Wow Factory www.thewowfactory.co.uk
MILTON KEYNES SETS THE SCENE It may have been a cloudy day in Milton Keynes (despite a rare bout of sunshine for the UK), but the sun seems to be shining on the machine vision sector at the moment. This was highlighted at the second UKIVA MVC. It was good to be back in Milton Keynes for what is a flourishing event, now firmly set in the event calendar. Stand-outs for the second year were the various presentations which ran throughout the day. The show was busier than last year, which was perhaps to be expected, but nonetheless the UK needs a sound machine vision event, especially with Brexit just around the corner. With UKIVA MVC out of the way, there’s hardly time to catch our breath before it’s on to Munich for automatica 2018. This is promising to be a comprehensive show and covers a wide range of products and sectors. I had an interesting comment from one colleague in the office, who said that when they asked a marketing director if they were attending Vision 2018, the reply was that it was a little small for them. The point being that the larger shows, which see machine vision as just part of a greater industry, are for some companies now more attractive than the sector-specific shows. Is this a trend for the future, perhaps a sign that machine vision will have to work harder to maintain its own identity? Or do the various sector shows offer better value? Recently, I also managed to get some time with Christof Zollitsch, CEO of the newly floated STEMMER IMAGING. The company recently went through the IPO process and I was interested to hear how Christof found it. All is revealed in the IPO CEO feature. Software might not catch the headlines as much as hardware in the machine vision sector (a camera or a sensor is far more photogenic than a line of code), but without it, nothing would work. We take a look at a few of the key products currently doing the business. We have some great features from top industry companies, including from the CEO of LMI Technologies, Terry Arden. LMI is known for its Gocator product line and Terry takes us through his business. I also address the question as to whether there are a few storm clouds gathering on the very distant horizon. Last year and the first half of 2018 have provided great conditions for the machine vision market, but for how long will these last? Will we look back on 2017 and 2018 and call them the golden years of the industry, days when company values were high and external investors were keen to join the party?
Neil Martin Editor neil.martin@mvpromedia.eu Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB
Only time will tell!
Neil
MVPro B2B digital platform and print magazine for the global machine vision industry
RoboPro B2B digital platform and print magazine for the global robotics industry
Neil Martin Editor, MVPro
www.mvpromedia.eu
LATEST NEWS
UKIVA Machine Vision Conference & Exhibition opens The second UKIVA Machine Vision Conference & Exhibition was declared a success.
Event Organiser Chris Valdes said: “We are delighted to report that 350 visitors attended the show - a significant increase on last year. Not only that, exhibitor feedback tells us that the enquiries that they received were of very high quality, with visitors having practical applications that needed vision solutions.
“Our keynote addresses were also very well attended with both attracting well over 100 people. The Starship Technologies’ personal delivery robots that patrolled the venue throughout the day also generated a lot of interest. The sessions on Machine Learning and Deep Learning proved to be particularly popular with attendees keen to learn how these techniques could be used in real-world vision applications.” The event was officially opened by MP for Milton Keynes Iain Stewart, who welcomed exhibitors and delegates to the event. The MP highlighted the importance of Artificial Intelligence in the industry and noted how apt that was, given the fact that Bletchley Park, the former home of some of the greatest early achievements in computers and code-breaking, was just down the road.
He also said that Milton Keynes sits right in the middle of an Oxford to Cambridge tech corridor which the Government is promoting. And the $40bn machine vision sector is central to that, he said.
Optics Balzers subsidiary wins award Optics Balzers Jena, a subsidiary of Optics Balzers, has been honoured with the Science Award of the Stifterverband für die Deutsche Wissenschaft for its participation in research and development work in the field of freeform optics. The prize was awarded for research within the growth core Freiformoptik+, which is funded by the Federal Ministry of Education and Research. Optics Balzers Jena was involved in the development of optical coatings that are optimally adapted to the requirements of freeform systems. In addition to anti-reflective coatings, this included the development of microstructured optical filters used in space-based Earth observation.
With its technologies for plasma-assisted optical coating and the microstructuring of optical layers, Optics Balzers is regularly involved in research in the field of bioanalytics, laser technology or industrial process control.
MVTec at automatica MVTec Software will use the forthcoming automatica 2018 trade show to focus on the latest versions of MVTec HALCON and MERLIC; highlight deep-learning on embedded platforms; and provide live demonstrations of automation scenarios. The leading provider of standard machine vision software plans to show how machine vision can be used to optimize processes in automation and robotics. This is done with a focus on the latest versions of the software products HALCON and MERLIC. What’s more, live demonstrations enable visitors to experience the use of modern machine vision technologies in automation scenarios up close. Also, there is a chance to gain insight into new machine vision functionalities in the areas of parallelization and matching. HALCON 18.05: Deep learning on embedded boards At the MVTec booth, experts will demonstrate the use of complex deep learning algorithms on embedded platforms. Using the Jetson TX2 embedded board from NVIDIA, they show how deep learning can be used to classify a wide range of objects, such as pills or fruit, quickly and precisely. Character and number combinations are also reliably recognized in OCR applications. The live demo shows that embedded hardware is also suitable for sophisticated machine vision tasks. Another demo visualizes an application scenario in the area of robotics.
A robot arm reaches into a collection of objects and reliably locates the position of the relevant object thanks to the modern matching technologies of MVTec HALCON. The arm precisely removes this object from the crate, recognizes it with a 2D camera and innovative deep learning technology, and sets it aside. MERLIC 4 preview: Parallelization and use of Hilscher cards Another demonstration illustrates functions of the latest preview version of MERLIC 4. Various inspection tasks are performed with the aid of two cameras, demonstrating the new parallelization capabilities, i.e. the parallel execution of independent tools. This demo also shows how MERLIC uses deep-learning-based OCR technologies to precisely recognize different fonts on packaging, such as expiry dates or batch numbers, in fractions of a second. MVTec also illustrates the seamless integration of a programmable logic controller (PLC) into vision systems using MERLIC. In the future, MERLIC will be even better integrated into automation solutions via Hilscher cards, for example with the aid of Profibus. The first successful tests have already been carried out for this trend-setting development. Professor Carsten Steger, Director of Research at MVTec, will take part in the automatica Forum with a presentation about “Usage scenarios for machine learning in industrial imaging – examples of current projects in the food and pharmaceutical industries”.
NEWS
C5-CS FAMILY EXPANDS 3D LASER SENSOR RANGE WITH FOUR NEW MODELS Automation Technology has expanded its series of 3D laser sensors with four new models.
The new members of the C5-CS series, based on the world’s fastest 3D sensor, the C5-1280-GigE, have a compact design that allows the high-speed 3D sensors to play to their strengths. The latest models all support an output of up to 1280 points/profile and achieve, said the company, a unique scanning frequency of up to 200 kHz. The entire design concept is tailored to the outstanding features of the high-speed 3D sensors and combines high-end 3D technology with the latest laser electronics in a compact housing. All 3D laser sensors come pre-calibrated, with a working distance of between 31 mm and 90 mm. This results in measuring ranges of up to 40 mm in width and 46 mm in height, which means the lateral resolution achieves an accuracy of 5 μm and the height resolution an accuracy of up to 0.11 μm. Furthermore, the high-speed 3D sensors feature a linearity of +/- 0.05 % of the Z-range and a repeat accuracy of 0.5 μm, which allows very accurate measurement results. A special feature of the 3D sensors from Automation Technology has always been the internal evaluation algorithms, none of which influences the measuring speed. This, said the company, gives the 3D sensors a clear competitive edge in terms of measuring speed. The user can therefore obtain the best possible 3D measurement data, regardless of which algorithm (e.g. MAX, TRSH, COG, FIR-PEAK) is used. The quality of the profile recordings is also optimized by a blue laser projection, which enables more precise imaging of the laser line due to its physical properties. Application examples can be found in industries such as the electronics and semiconductor industries, where ball grid arrays (BGAs), pin plugs or printed circuit boards (PCBs) are checked for completeness, or for defects such as micro-cracks.
Due to the various application areas, the developers of the C5-CS sensors put special emphasis on industrial capabilities for the new housing type. Therefore, the new C5-CS models feature a robust design with protection class IP 67 and tensile and tear-resistant M12 connectors for tensile and tear-resistant cable connections. Thanks to their Gigabit Ethernet interface, the C5-CS sensors support the GigE Vision standard. This offers integrators a high level of convenience, because they have access to all common development environments (Common Vision Blox, Halcon, EyeVision, LabView, etc.) which enable a fast and easy integration of AT’s high speed 3D sensors.
KOWA LAUNCHES NEW VIBRATION-RESISTANT WIDE-ANGLE LENS Kowa has extended its vibration-resistant lens series for cameras with up to 2/3" chip size with a new 5mm focal length. By using the 2MP JCM-V series, measurements can be made without pixel deviation even when tilting the lens and in high-vibration environments. This, said the company, makes the lens ideal for robotics and 3D measurement applications.
The inner glass elements are glued and the focusing ring has a double nut thread, and for different apertures, there are variable step-up rings. The ruggedized 2/3" series is now available in seven focal lengths from 5mm to 50mm. The new focal length allows a horizontal angle of view of up to 82.4°. As well as the 2/3" series, Kowa’s HC-V Series is a shock and vibration resistant series for cameras up to 1" in size. This is available in six focal lengths from 8mm to 50mm.
SONY EXPANDS THIRD GENERATION PREGIUS SENSOR LINE WITH IMX425/432 IMAGERS FOR HIGH-SPEED FACTORY AUTOMATION FRAMOS is making available SONY’s new image sensors from within its third generation Pregius line, featuring the SLVS-EC high-speed interface. Both the CMOS Global Shutter IMX425 and the IMX432, with a 1.1” optical format, provide 1.78 Megapixel resolution and a pixel pitch of 9µm for excellent image quality. IMX425 is suited to high-speed factory automation with an 8 / 10 / 12-bit A/D converter and a throughput of up to 565 frames per second (fps) at a 10-bit pixel depth. The IMX432 is a
reduced-lanes version providing up to 98fps with a pixel depth of 12-bits, and covers ITS and traffic solutions, in addition to life sciences applications, including microscopy. Sibel Yorulmaz-Cokugur, Sensor expert at FRAMOS, said: “The excellent speed and image quality is one of the biggest advantages of SONY’s 3rd Gen Pregius devices. Maximizing the throughput of production lines with high frame rates, objects can be inspected with precise detection, and increased accuracy from short working distances. A capacity of up to
eight Regions of Interest (ROI) makes even higher frame rates possible. The SLVS-EC standard ensures fast transmission for real-time processing.” The IMX425 imager, like all third generation Pregius devices with the 8-lane high-speed SLVS-EC interface standard, provides a maximum output of 18.4Gbps. It produces excellent image quality by featuring high/low conversion gain modes, dual triggering, dual ADC, and self-triggering. These features achieve high sensitivity, low dark current and low PLS characteristics.
ALLIED VISION OPENS SILICON VALLEY OFFICE Allied Vision has opened a new sales and support office in Cupertino, California. Situated in the heart of Silicon Valley, the new site, said the company, will allow it to provide better service to the growing customer base in the high-tech industries. Michael Troiano, Senior Director, Worldwide Sales at Allied Vision, said: “Our business has experienced strong growth in Silicon Valley over the past few years. It is a global hub for technological innovation which extends into developing new imaging applications. We feel that a local office
gives us a unique time-to-market perspective and will help us prepare for closer collaboration with emerging applications and companies.” Allied Vision currently serves the North American market out of its Sales Office in Exton, Pennsylvania.
The new office will be led by Matthew Hori (pictured), who joined Allied Vision’s US team as a Sales Manager in April 2018. He has a 20-year technical and sales track record in the machine vision industry.
He said: “I am excited to join the Allied Vision team and take on this new challenge to grow our business here. Allied Vision has already been very successful in the region and I am sure that a local presence will enhance our partnerships with California-based companies.”
ALLIED VISION APPOINTS ASHIK PATEL AS NEW SALES DIRECTOR AMERICAS Allied Vision Technologies has appointed Ashik Patel as its new Sales Director Americas. He will be based in the company’s US Sales Office in Exton, PA. He heads up the North and South American Sales team, and will manage the sales activities in the Americas.
He said: “I am thrilled to join Allied Vision. The vision industry opens a lot of exciting opportunities and Allied Vision has been very successful building a strong position in North America. I look forward to growing our business even more and offering the best possible service to customers.”
Patel has a successful track record in engineering and sales in various industries such as automation and metrology, including management positions.
Michael Troiano, Senior Director, Worldwide Sales, added: “We are committed to grow our sales presence in the
Americas with a stronger team. Ashik brings very comprehensive experience in serving demanding industries with highly technical products. I am confident that he and his team will grow our business by delivering the best-in-class experience our customers can expect from Allied Vision.”
SILICON SOFTWARE INTRODUCES TWO NEW MEMBERS TO ITS MICROENABLE 5 MARATHON CXP FAMILY Silicon Software (Mannheim, Germany), the manufacturer of frame grabbers and intelligent image processing solutions, has introduced two new members to its microEnable 5 marathon CXP family.
The frame grabbers support color (RGB and Bayer) and monochrome area scan, line scan and CIS cameras across different topologies (single, dual and quad configurations) and up to 25 Gbit/s of incoming bandwidth.
The ACX-SP and ACX-DP focus on the CoaXPress standard for demanding high-speed applications. All compatible CoaXPress camera types can be connected to the image acquisition and processing boards, which are suitable for all CoaXPress configurations (CXP-1 to CXP-6) according to version 1.1.1 of the standard.
The new microEnable 5 marathon ACX-SP and ACX-DP frame grabbers are smaller versions with one or two camera ports for single-link and dual-link CXP cameras. They offer a similar feature set to the quad-port frame grabber, with an equally high bandwidth of 6.25 Gbit/s per CXP-6 connection.
The FPGA-based microEnable 5 marathon frame grabber series has been developed for Camera Link, Camera Link HS and CoaXPress cameras. Four CoaXPress boards are now part of the series: the A-Series ACX-QP with four ports, ACX-DP with two ports and ACX-SP with one port, as well as the programmable FPGA version VCX-QP (V-Series).
The four ports of the microEnable 5 marathon ACX/VCX-QP frame
grabbers can be connected to four different CoaXPress cameras at the same time with a multitude of pixel formats and bit depths. The VCX-QP version (V-Series) FPGA is graphically programmable with VisualApplets using data flow models. In a short period of time, developers can create complex image processing pipelines for real-time, deterministic and low-latency applications. Existing FPGA hardware code (created with VHDL or Verilog) can also be integrated using VisualApplets Expert. (Picture shows the microEnable 5 marathon ACX/VCX-QP frame grabber with four ports.)
COGNEX LAUNCHES NEW MX-1502 MOBILE DATA TERMINALS
Cognex (Karlsruhe) has launched its new MX-1502 mobile data terminals. These modular devices are mainly used in logistics. For example, VTL Vernetzte-Transport-Logistik, a general cargo cooperative based in Fulda, uses the Cognex MX-1502 readers for mobile logging of logistics data.
For future-proofing and flexibility, VTL focused on Android-based devices. Smartphones mounted in the shockproof housing of the MX-1502 are used as mobile data terminals together with an image-based Cognex scanning unit. VTL uses the Samsung Galaxy J3 smartphones in the MX-1502. Cognex said that due to the product’s modular design, with its special rubber inserts and a screen protector, nearly any
Android- or iOS-based smartphone can be used, and easily replaced with a new model in the event of damage or a generational change. The smartphones are housed in a shockproof, splashproof polycarbonate housing. The mobile terminal is suitable for harsh industrial environments, and even withstands 50 drops from a height of two meters onto concrete. Repairs are now much simpler than before, said Cognex, since with the MX-1502 customers only need to replace individual components, such as the mobile device or optics, and no longer have to replace the entire terminal. In addition to low operating costs, this provides high investment protection.
SPECIM APPOINTS TAPIO KALLONEN AS CEO Tapio Kallonen has been appointed CEO at Specim Spectral Imaging. He succeeds Dr Georg Meissner, who requested to leave his position after having led Specim successfully for over four years. Tapio Kallonen will join Specim to lead it after significant investments in new products for the next phase of its ongoing growth plan. Risto Kalske, Chairman of Specim Board of Directors, said: “Specim has completed its all-time largest investment in new products which are particularly fitted to meet the needs of Industry OEM clients in a variety of professional applications. Tapio brings a wealth of experience from his earlier role to further develop Specim as a close-to-customer supplier and OEM partner. “Without losing Specim’s role as the leading hyperspectral imaging developer, Tapio with his team will ensure that Specim’s products
and partnership are easily and professionally available to our rapidly growing industrial clientele. At the same time Specim will further deepen its position as a globally respected manufacturer of demanding solutions for the scientific clientele.” Kallonen added: “Specim is a global industry leader in its field with an extensive and solid track-record for over 20 years. I am proud and honored to join and lead the Specim team which possesses exceptional industry insights and understanding of both the technology and how it creates a significant value add to our clients and partners. We have an intriguing job ahead of us to extend the usage of our technical solutions to machine vision and other industrial applications at a large scale. Specim’s impressive inhouse capabilities from all of our functions provide a solid base for us to grow further together with our partners and clients.”
Kallonen, 34, holds an MSc in Technology from Aalto University, Finland. He completed studies at Universidad Politécnica de Valencia, Spain, and Waseda University, Japan. Previous positions included Sales Director at Obelux, a globally operating Finnish manufacturer of LED aviation lights for professional use.
OPEN-ARCHITECTURE ILLUMINATION SYSTEM ENHANCES COMPUTATIONAL IMAGING CCS has launched the new LSS Computational Illumination Series, a Computational Imaging (CI) lighting solution to advance machine vision imaging. CI, with its targeted feature extraction, directly outputs the image users need, allowing for more robust MV solutions. In contrast, said CCS, traditional image acquisition often requires substantial post-capture image processing and can still fall short in producing the optimal image. With Computational Imaging technology, MV systems produce better, or previously
impossible images, reducing development times and significantly lowering costs. Computational Imaging works by creating a composite image through multi-shot image capture. The new LSS Series from CCS addresses these needs with precision light controllers and a wide array of lighting, including segmented ring and bar lights – with wavelength options including full colour and multispectral. The LSS-2404 Light Sequencing Switch provides the timing at the heart of any Computational Imaging system and is easily
integrated with industry-standard imaging software and MV libraries. The LSS Series is open-architecture, so it can be used with any machine vision camera and most smart cameras. Marc Landman, VP of CCS America, said: “We regard this development in illumination as an important enabling technology. It provides users with the opportunity to greatly enhance the effectiveness of their machine vision applications – and, importantly, they can achieve this with the leading imaging software and lighting that they are essentially already familiar with.”
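CCS does not spell out its fusion algorithms here, but the "composite image through multi-shot image capture" idea can be illustrated with a deliberately simple, hypothetical example: capture one frame per lighting segment and keep, for each pixel, the spread between the brightest and darkest shot, which tends to emphasise surface relief that no single image shows. The sketch below is illustrative only and assumes all captures share the same resolution.

// Illustrative only: a toy per-pixel composite over N captures taken under different
// lighting segments. Real computational-imaging pipelines use more sophisticated fusion,
// but the structure -- several synchronised shots reduced to one output image -- is the same.
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> rangeComposite(const std::vector<std::vector<std::uint8_t>>& shots)
{
    if (shots.empty()) return {};
    const std::size_t pixels = shots.front().size();   // assumption: all shots are the same size
    std::vector<std::uint8_t> out(pixels, 0);
    for (std::size_t p = 0; p < pixels; ++p)
    {
        std::uint8_t lo = 255, hi = 0;
        for (const auto& shot : shots)                  // one shot per lighting segment
        {
            lo = std::min(lo, shot[p]);
            hi = std::max(hi, shot[p]);
        }
        out[p] = static_cast<std::uint8_t>(hi - lo);    // bright-dark spread highlights relief
    }
    return out;
}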
ARCHOVER FIRST P2P LENDER TO OFFER UK R&D TAX CLAIM FUNDING ArchOver is the first P2P lender to offer a £100,000-plus funding service for companies waiting for their UK R&D tax claims to be settled. The Research & Development Advance (RDA) was created to fund those companies who have had over two years of successful claims and require in excess of £100,000. Funds come from the ArchOver community of lenders. CEO of ArchOver Angus Dent said: “Investment in research and development is crucial not only to individual businesses, but to
the wider economy as well. While the government deserves praise for unlocking cash for R&D, the long wait for reimbursement puts this funding out of reach for many of the businesses that stand to benefit most.” ArchOver explained that only 1.67 per cent of national income is currently being spent on R&D compared to an average of over 2 per cent across the EU. Government initiatives have been put in place to encourage further investment in innovation in the UK. Under the current system, UK businesses can claim cash
repayments of up to 33 per cent of their R&D expenditure, but it can take up to six months to receive payment from HMRC. Dent continued: “ArchOver is committed to unlocking access to capital while providing new opportunities for investors. With the RDA service, businesses no longer need to worry about having to wait for months to be reimbursed for R&D, which means they can start putting their investment plans into action immediately. That’s great news for individual businesses, and for the UK economy generally.”
MVTEC TO LAUNCH HALCON 18.05
MVTec Software (Munich, Germany), the leading provider of standard machine vision software, launched the latest version of HALCON (18.05) on 22 May. With the new version, deep learning inference, for example the use of a pretrained Convolutional Neural Network (CNN), is now running on CPUs for the first time. The company said that this inference has been highly optimized for Intel-compatible x86 CPUs, which means that a standard Intel CPU can reach the performance of a mid-range graphics processor (GPU) with a runtime of approximately two milliseconds. The operational flexibility of systems can therefore be significantly increased. For example, industrial PCs, which usually do not utilize powerful GPUs, can now be used for deep-learning-based classification tasks without any problems. What’s more, the new HALCON version also offers several other improvements that
further increase the usability of machine vision processes, including enhanced functions for deflectometry. These improve the precision and robustness of error detection for objects with partially reflective surfaces. MVTec pointed out that developers in particular benefit from two other new features: first, they can now access HDevelop procedures not just in C++, but also in .NET via an exported wrapper – as easily and intuitively as a native function. This significantly facilitates the development process. Second, HALCON 18.05 makes it much more comfortable to work with handles: with the new version, they are automatically deleted once they are no longer required, which significantly reduces the risk of memory leaks because users no longer have to manually release unused memory.
Product Manager HALCON at MVTec Johannes Hiltner said: “The release of HALCON 18.05 marks another milestone for trendsetting machine vision software. In particular, the new release addresses the growing importance of AI-based technologies such as deep learning and CNNs for machine vision processes.” Managing Director of MVTec Dr Olaf Munkelt added: “With this new version, we are pleased to offer users and developers machine vision features that are well-thought-out and extremely progressive in equal measure. The features allow our customers to further simplify their machine vision processes and raise them to a whole new level with HALCON’s handy new functions.”
NEWS
BASLER’S FIRST CAMERA SERIES SPECIFICALLY DESIGNED FOR MEDICAL AND LIFE SCIENCES Basler has introduced cameras designed specifically for the medical and life sciences sectors. The Basler MED ace series offers CMOS sensor technology, provides speeds of up to 164 frames per second, and will be enhanced with cameras with resolutions of up to 20 MP. With Sony’s Pregius sensors and PYTHON sensors by ON Semiconductor, the small and light-weight cameras are equipped with the newest CMOS technology. They, said Basler, offer a compelling value proposition with pixel sizes up to 5.86 µm, low temporal dark noise down to 2e- and sensor sizes up to 1.1 inch. With certification in accordance with DIN EN ISO 13485:2016, Basler now provides additional quality standards for the development, production, distribution and service of digital cameras, as well as for placing them on the market. International
manufacturers of medical devices benefit from an effective quality management system with clearly defined standards. Reliable product quality due to validated and monitored production, traceability and comprehensive change management reduces effort required for audits, product documentation and support in complying with European standards.
Basler MED ace incorporates unique Basler MED feature sets: Easy Compliance, Brilliant Image, Perfect Color, Low Light Imaging, High Speed and Industrial Excellence. They combine market-leading hardware, firmware and software features. Basler developed unique features specifically designed to address the high imaging demands in Medical & Life Sciences and to reduce customers’ development efforts. Basler’s 6 Axis Operator and the Color Calibrator Beyond provide full control of the image’s color appearance, which is highly relevant for applications in ophthalmology or microscopy. PGI as well as other new auto image functions bring supreme image quality out-of-the-box, now also for mono cameras.
Also creating high impact, Basler added, is the newest CMOS sensor technology, which delivers even better image quality at much lower costs than the discontinued CCD sensors. With 30 years of vision experience, Basler offers the broadest camera portfolio of top-notch CMOS cameras to support the transition faced by medical device manufacturers.
EMVA ANNOUNCES KEYNOTE SPEAKER FOR BUSINESS CONFERENCE IN DUBROVNIK Jeremy White will give his keynote titled ‘The Rise of Artificial Intelligence’ at the 2018 edition of the EMVA Business Conference taking place June 7-9 in Dubrovnik, Croatia. From the Internet of Things to AI, smart homes to smart cities, flying cars to passenger drones, EMVA said that White (pictured) has first-hand experience of emerging trends as well as personal contact with the global business leaders driving them. He has been
writing about technology and design for more than 14 years. Michal Czadybon, General Manager at Adaptive Vision, will also talk about ‘Deep Learning in Industrial Quality Inspection: Experiences from the field’ in the technical part of the conference program. The megatrend of deep learning will also be covered in another speech, by Professor of EECS Jitendra Malik from UC Berkeley, who will dedicate his presentation to ‘Deep Learning for Deep Visual Understanding’.
SONY LAUNCHES ITS FIRST USB3 INDUSTRIAL VISION MODULE
Sony Europe’s Image Sensing Solutions has launched its first industrial vision module to use the USB3.0 transmission standard. The GS CMOS module, which is available in both colour and black and white variants, has a 1.6 MP resolution (1456 x 1088 pixels) and transmits data at over 100 frames per second. The XCU-CG160 has been designed to give a simple migration path from CCD to GS CMOS, allowing the switch without system upgrades or a changed architecture.
At its heart is the 1/3-type Sony Pregius IMX273 sensor, which is an ideal replacement for the highly-renowned ICX445 CCD sensor and gives huge technological improvements in sensitivity, dynamic range, noise reduction and frame rate capability.
The modules have been designed to lead the market in terms of image quality and are targeted at a wide array of industrial vision and non-manufacturing markets – from print, robotics and inspection to medical, logistics and general imaging. Sony’s Matt Swinney said: “For those who have overall responsibility for machine vision systems, the migration path from analogue to digital is front of mind. The XCU-CG160 makes this an easy process with the added advantage of superb performance. We believe this will be very favourably received by the market.”
Key image-processing features on the device include area gain and defect pixel correction, and shading correction has also been implemented. The b/w module has a minimum illumination of just 0.5 lx; the colour module requires just 12 lx and comes with manual, auto and one-push white balance settings. Both modules have a sensitivity of F5.6, a gain of 0 to +18 dB and a shutter speed of 60 s to 1/10,000 s. The C-mount modules measure 29 x 29 x 42 mm, weigh 52 g, and have an operating temperature of -5°C to +45°C. They meet UL60950-1, FCC Class A, CSA C22.2 No. 60950-1, IC Class A Digital Device, CE: EN61326 (Class A), AS EMC: EN61326-1, VCCI Class A and KCC regulations.
LUCID RELEASES NEW GENICAM3-BASED ARENA SDK LUCID Vision Labs, a designer and manufacturer of industrial vision cameras, has released its new Arena Software Development Kit (SDK). The SDK has been designed to maximize the performance of LUCID cameras and is based on the latest GenICam3 and GigE Vision image acquisition standards. The Arena SDK features a comprehensive API toolkit, providing users with easy access to the newest features and software technology compliant with current industry standards. The company said that the GenICam 3.0 based C++ API leverages GenICam’s
Reference Implementation for robustness, stability and reliability and uses the Standard Feature Naming Convention (SFNC) for camera features and control. It has been designed for forward-compatibility with new device features and enables fully-featured chunk data support, device events and triggers. The Lightweight Filter (LWF) driver improves image transfer performance and lowers CPU usage when streaming large images at small packet sizes. The Arena SDK includes an intuitive image acquisition software called ArenaView, which allows users to access
and validate camera features quickly and easily through the GenICam XML based feature tree. Its flexible user interface framework is based on HTML5, CSS3, and JavaScript, modernizing the approach, look and maintenance of user applications. Founder and President at LUCID Vision Labs Rod Barman said: “The Arena SDK has been designed from the ground up and optimized for today’s diverse range of user preferences. It features an intuitive, modern and flexible architecture that enables an easy integration and rapid development of all kinds of machine vision applications and embedded systems.”
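As a rough illustration of what "GenICam XML based feature tree" access looks like in practice, the short C++ sketch below reads and writes two SFNC-named features through the GenICam GenApi reference implementation. It is a generic GenApi example, not Arena SDK sample code: the node map is assumed to have been obtained from the camera via the vendor SDK, and the feature names used ("Width", "ExposureTime") are standard SFNC names that a compliant camera is expected to expose.

// Generic GenICam GenApi sketch (assumption: 'nodeMap' was obtained from the camera
// through the vendor SDK, for example LUCID's Arena SDK). Not vendor sample code.
#include <GenApi/GenApi.h>

void configureCamera(GenApi::INodeMap& nodeMap)
{
    // Integer feature: image width, named per the Standard Feature Naming Convention.
    GenApi::CIntegerPtr width = nodeMap.GetNode("Width");
    if (GenApi::IsWritable(width))
        width->SetValue(width->GetMax());          // use the sensor's full width

    // Float feature: exposure time in microseconds, also an SFNC name.
    GenApi::CFloatPtr exposure = nodeMap.GetNode("ExposureTime");
    if (GenApi::IsWritable(exposure))
        exposure->SetValue(10000.0);               // 10 ms, an arbitrary example value
}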
SOFTWARE: LET’S HEAR IT FOR THOSE UNSUNG LINES OF CODE
In the machine vision sector, it’s hardware which grabs most of the headlines. Shiny cameras, pouting sensors and fancy cabling grab the limelight like cosseted divas. Lines of code are not so photogenic, but without them, nothing would function. Here we take a look at some of the software packages that make our world go around
EVT We had a chat with the team at EVT Eye Vision Technology and asked them about their current software offering. Their EyeVision software has commands for metrology (measuring distances, angles and diameters), thermal imaging, OCR/OCV reading, colour inspection, object detection, pattern matching, LED and imprint inspection, and code reading (bar code, QR, DMC). The EyeVision 3D software version also has commands for measuring on a point cloud captured with 3D sensors of different technologies (such as time-of-flight, stereo vision, laser triangulation and shape-from-shading). The 3D commands cover bin picking, pin inspection and profile matching (detecting a pre-trained profile) for weld seam or adhesive bead inspection, as well as 3D commands for PCB and smartphone inspection and for embossed or engraved writing. And last, but not least, is the new deep learning function for Number Plate Reading (NPR) and Make & Model Recognition (MMR). The VECID (vehicle identification) functions can recognize the type and model of cars, motorbikes, planes, ships etc. and can read the number plate. This is especially important, said EVT, for law enforcement, toll roads and car park management. The EyeVision software has customers all around the world and is used in many industries. It runs on all platforms (PC, smart camera, vision sensor and embedded systems) and with all interfaces (GigE, USB 2.0 and 3.0, RS232, Camera Link and FireWire). What’s more, it can be used with almost any hardware on the market. Currently the software supports about 20 industrial cameras and ten smart cameras, and of course as many 3D sensors and some thermal imaging
cameras. It runs on Odroid and Raspberry Pi boards and on the “homemade” EyeVBox, as well as on the EVT-developed RazerCam LS smart line scan camera and smart thermal imaging camera. As for the software’s main applications, they are 3D inspection, 3D object matching and profile matching, and more recently the measurement of holes and the distances between them, alongside the code reader, OCR/OCV and metrology commands. Currently it is mostly used in the automotive, semiconductor, food and beverage, electronics, pharma and packaging industries. And when it comes to the future and the next versions of the software, EVT said that the deep learning VECID function should eventually work with all car models and number plates in the world. There should also be defect detection based on deep learning in the future. And
the thermal imaging solar panel defect detection combining deep learning and UAVs might be something to continue and fine-tune, they said. The EyeVision 3D commands should be improved in handling and should also increase in number and function. And soon EVT will support the Zivid 3D stereo vision sensor, Teledyne Dalsa cameras and the Hikvision smart camera.
MVTec One of the best known software companies in the machine vision sector is MVTec Software. Their Dr Maximilian Lückenhaus took us through a few trends, features and market insights. On the current product range, he said: “MVTec is the developer and vendor of the general purpose machine vision software products HALCON
and MERLIC. MVTec HALCON is the comprehensive standard software for machine vision with an integrated development environment (HDevelop) that is used worldwide. HALCON is optimized for the needs of OEMs and system integrators and allows engineers to set up their own solutions for a specific machine vision task. It enables cost savings and improved time to market. “HALCON’s flexible architecture facilitates rapid development of any kind of machine vision application. The software provides outstanding performance and a comprehensive support of multi-core platforms, special instruction sets like AVX2 and NEON, as well as GPU acceleration. It serves all industries, with a library used in hundreds of thousands of installations in all areas of imaging like blob analysis, morphology, matching, measuring, and identification.
“The software provides the latest state-of-the-art machine vision technologies, such as comprehensive 3D vision and deep learning algorithms. MERLIC is a powerful, all-in-one machine vision software product that enables users to quickly build and integrate complete solutions. Not a single line of code needs to be written whilst working with MERLIC. “The individual tools are named after the tasks concerned, describing these in a language which the user can easily understand: if the user wants to measure something, he clicks on “Measure”; if he wants to count the number of objects in an image, he needs to click on “Count”. MERLIC also grants easy access to all the elements of the machine vision periphery and offers seamless PLC connectivity – enabling ideal integration into the production environment.” As for what the future holds in terms of new features in HALCON and MERLIC, Dr Lückenhaus said: “With the latest release HALCON 18.05, the deep learning inference, i.e., the use of a pretrained Convolutional Neural Network (CNN), is now running on CPUs for the first time. In particular, this inference has been highly optimized for Intel-compatible x86 CPUs. “This means that a standard Intel CPU can reach the performance of a mid-range graphics processor (GPU) with a runtime of approximately two milliseconds. The operational flexibility of systems can therefore be significantly increased. For example, industrial PCs, which usually do not utilize powerful GPUs, can now easily be used for deep-learning-based classification tasks. In addition, the new HALCON version also offers several other improvements that further increase the usability of machine vision processes. Enhanced functions for deflectometry, for instance, improve the precision and robustness of error detection for objects with partially reflective surfaces.
“Developers in particular benefit from two other new features: first, they can now access HDevelop procedures not just in C++, but also in .NET via an exported wrapper – as easily and intuitively as a native function. This significantly facilitates the development process. “Second, HALCON 18.05 makes it much more comfortable to work with handles. With the new version, they are automatically deleted once they are no longer required. Thus, the risk of memory leaks is significantly reduced because users no longer have to manually release unused memory. “Additionally, HALCON 18.05 also features optimized edge detection, which improves the ability to reliably read bar codes with very small line widths as well as slightly blurred codes. Moreover, the quality of the bar codes is also verified in accordance with the most recent version of the ISO/IEC 15416 standard. HALCON 18.05 also offers optimized functions for surface-based 3D matching: they can be used to determine the position of objects in 3D space more reliably, making development of 3D applications easier. “Furthermore, a new camera model within HALCON makes it possible to correct distortions in images that were recorded with hyper-centric camera lenses. These lenses can depict several sides of an object simultaneously, thus enabling a convergent view of the test object. With this technology, users only need a single camera system for inspection and identification tasks, e.g. the inspection of cylindrical objects. “With MERLIC 4 Preview we offer a hands-on experience of new and optimized functions in MVTec MERLIC, the software for developing
complete machine vision solutions quickly and easily. The main new feature is parallelization, i.e., the ability to run separate tools at the same time. This makes it possible to implement multi-camera systems more effectively and to use computing capacities more efficiently. “Parallelization allows effortlessly implementing and running independent processing threads and optimizes the throughput time from the start of a cycle to its completion. MERLIC 4 Preview also contains additional practical functions, all of which further improve user-friendliness. For example, a newly developed Tool Flow window provides a clearer overview of the tools used by arranging them in a grid. In this way, connections can also be found more easily. “Using copy and paste functions, the tools in this window can be moved around effortlessly. Tool Flow can be completely restructured without any additional parametrization. This further speeds up and simplifies the creation of machine vision applications. Finally, the newly designed Branch-on-Condition tool improves handling of connections, making it possible to immediately see which tools are executed.” Moving on to how it went at Hannover Messe and what’s in store for automatica 2018, he said: “At this year’s Hannover Messe we showcased our portfolio and live demos which gave practical insights into the benefits of machine vision in Industry 4.0 (aka the Industrial Internet of Things). A special focus was on the innovative deep learning functions of HALCON based on artificial intelligence. The MERLIC demo showed the software running on an ADLINK smart camera. Here, deep-learning-based OCR functions were used to exactly recognize a wide variety of fonts on packaging in fractions of a second.
“The feedback was great and we saw a lot of interest in the market in topics like Embedded Vision as well as the growing convergence of automation and machine vision (vision integration) and the integration of programmable logic control (PLC).
“The integration into highly automated processes and the role of machine vision as the ‘eye of production’ will also be our focus at automatica 2018. We will show practice-oriented demos, e.g., robust robot bin picking powered by MVTec HALCON and modern matching as well as deep learning technologies. “Additionally, visitors can look forward to a speech by MVTec’s Director of Research, Prof. Dr. Carsten Steger, on “Usage scenarios for machine learning in industrial imaging - examples of current projects in the food and pharmaceutical industries” at the automatica forum.”
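To make the CPU inference workflow Dr Lückenhaus describes a little more concrete, here is a minimal sketch in the style of HALCON’s C++ interface. Treat it as illustrative only: the operator names (ReadDlClassifier, ApplyDlClassifier, GetDlClassifierResult) follow MVTec’s published deep-learning classification workflow for HALCON 17.12/18.05, the image and classifier file names are placeholders, and the exact C++ wrapper signatures should be verified against the HALCON reference documentation.

// Illustrative sketch only - not verbatim MVTec sample code. Operator names follow
// HALCON's documented deep-learning classification workflow; check the exact C++
// wrapper signatures against the HALCON 18.05 reference manual before use.
#include "HalconCpp.h"
using namespace HalconCpp;

int main()
{
    try
    {
        // Read the image to classify (file name is a placeholder).
        HObject image;
        ReadImage(&image, "pill_crate_01");

        // Load a (re)trained CNN classifier; as of 18.05 the inference runs on the CPU.
        HTuple dlClassifier;
        ReadDlClassifier("trained_pill_classifier.hdl", &dlClassifier);

        // Run inference and obtain a result handle.
        HTuple dlResult;
        ApplyDlClassifier(image, dlClassifier, &dlResult);

        // Query the predicted class names from the result handle.
        HTuple predictedClasses;
        GetDlClassifierResult(dlResult, "all", "predicted_classes", &predictedClasses);
    }
    catch (HException&)
    {
        // HALCON reports errors via exceptions; handle or log them here.
    }
    return 0;
}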
Silicon Software Silicon Software provided us with an overview of VisualApplets.
VisualApplets – Simple and Fast FPGA Programming
VisualApplets is the integrated development environment for real-time image processing applications on FPGA processors. Thanks to their high parallelism, the essential strength of FPGAs lies in processing large volumes of data synchronously, in real time and at high speed. A disadvantage of the technology so far has been that programming the processors requires VHDL expert knowledge. With VisualApplets, this situation has changed considerably for image processing.
The approach of representing FPGA programming as data flow models on a graphical user interface makes it easy for hardware and software developers, as well as application engineers, to create individual designs for complex image processing tasks intuitively and in a few steps, even with no hardware programming experience. Compared to classical VHDL/Verilog-based programming, the development of production systems with VisualApplets is 10 to 100 times faster. For data signals and image processing, VisualApplets makes it possible to design complex applications as applet designs and process them, to use pre-configured, directly applicable applet examples, or to further utilize existing VHDL libraries. Silicon Software’s programmable V-Series frame grabbers are pre-licensed for use with VisualApplets, and the development environment is also available for VisualApplets-compatible industrial cameras, vision sensors and image processing devices. Manufacturers, and also end users’ application development, profit from enormous time and resource advantages, accelerating the market availability of their products. Besides image processing solutions, data signals too can be programmed and processed in VisualApplets. Signal inputs and outputs, as well as connections to external digital peripherals such as PLC controls, lighting, rotary encoders and other control devices, can be programmed directly in the device using VisualApplets. VisualApplets contains an image processing library with over 200 operators in 14 sub-libraries as the base for designs. Porting designs to other frame grabbers is easy thanks to an integrated conversion function. This allows rapid prototyping on a high-performance frame grabber at
the start, followed by conversion to the most economical platform once the design is complete. Many analyses, error checks and corrections are performed automatically. Parameter changes are automatically corrected throughout the overall design by inheritance. The resources of the FPGA are recalculated after every design change. Bandwidth bottlenecks in the design are analyzed and can be solved, for example, by a graphical configuration that increases the parallelism. The high-level simulation calculates a bit-accurate visual result at each link of the design and can also be used for visual debugging, making it possible to check the created algorithms and designs. The FPGA manufacturer’s synthesis tools are included in VisualApplets and generate a hardware applet once a design has been successfully created. In parallel, an SDK example is generated which can be integrated into the user’s own application and is immediately executable. The code lists all parameters that have been defined as dynamic operators in the design; these parameters can be adapted at runtime via software. Version 3 has been published as a 64-bit version with extensions and new functionality.
VisualApplets 3 Extensions
VisualApplets Expert enables advanced users to easily import existing HDL code written in VHDL or Verilog into VisualApplets and to work with it as generic operators. Self-created operators can be added to a custom library. With Expert it is also possible to debug a design under runtime conditions and to set paths to parameters in hierarchical structures. With VisualApplets Embedder, image processing hardware from suppliers of cameras and image processing devices becomes compatible with VisualApplets. In so doing, a majority of the image pre-processing takes place on the device’s FPGA, saving resources. Users get a tailor-made camera with the greatest possible flexibility and can implement image processing applications directly in the camera, independently of the camera manufacturer. VisualApplets Libraries expands the scope of operators to encompass valuable image processing functions, such as those for segmentation, classification or compression. VisualApplets Protection protects your design and IP know-how, and binds it, encrypted, to the FPGA hardware.
New Functionalities
There are new operators that represent loops in the data flow model, whereby image sequences and comparisons, as well as image batch processing, can be calculated on the FPGA while conserving resources. All image formats supported by VisualApplets can also be used in loops, and the image formats remain unchanged in the process. Loops are used, for example, in the following applets: rolling average (movement measurement), depth of focus (restoring 3D information), pattern recognition and pyramids (Gauss, Laplace, scaling, (de-)construction, contrast enhancement, frequency domain). The FFT (Fast Fourier Transform) operator has been expanded to implement more complex filters with high computing load, such as band-pass filters, in a resource-efficient way. Moreover, Xilinx Vivado software can be used for the final creation (synthesis) of the FPGA hardware code as applets, which in many cases increases the implementation speed.
CNN Operators
New operators for deep learning have been implemented in
a VisualApplets library. They can be used both for training and for execution (inference) of neural networks (CNNs). Users are able to integrate the inference in a VisualApplets design. The CNN operators in VisualApplets allow them to create and synthesize diverse FPGA application designs in a short time, without hardware programming experience.
Since FPGAs are up to ten times more energy-efficient than GPUs, CNN-based applications can be implemented particularly well on embedded systems or mobile robots, where low heat output is required. Silicon Software will offer larger processors in its frame grabbers especially for deep learning applications with the necessary high bandwidths.
By transferring the weight and gradient parameters determined in the training process to the CNN operators, the FPGA design is configured for the application-specific task. The operators can be combined into a VisualApplets flow diagram design with digital camera sources as image input and with further image processing operators for optimizing image pre-processing.
Inspection tasks that have so far been difficult to solve, such as the determination and classification of defects on reflective metallic surfaces, can be solved very well with deep learning. Implemented on a frame grabber FPGA, the application reaches the highest bandwidths and recognition rates in real time.
VisualApplets 3.1 (new version to be released in June 2018)
The VisualApplets 3.1 version now officially supports Windows 10 64-bit and the new Camera Link frame grabber microEnable 5 marathon VCLx. The CNN-ready acquisition and processing board is equipped with a stronger FPGA offering higher processing power, so that more image pre-processing and more complex designs can be moved onto the FPGA. The new version includes TCL commands for scripting design-specific work, for example an automated simulation. New pre-configured VisualApplets examples, e.g. complex image pre-processing designs, complete this version and offer users a faster implementation:
• Normalized Cross Correlation determines the position of objects in an image, e.g. for the inspection of PCBs (a minimal CPU sketch follows this list);
• Exposure Fusion combines images with different exposure times as an alternative to HDR, using a simpler algorithm that needs no tone mapping and fewer resources, e.g. for the inspection of metallic surfaces;
• Distortion Correction corrects effects such as barrel or pincushion distortion resulting from geometric aberrations of the lens system.
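The sketch referenced in the first bullet: a minimal CPU version of normalized cross-correlation template matching, here using OpenCV. The file names are placeholders, and the FPGA example design is assumed to perform the comparable computation in hardware rather than this exact code.

```python
# Normalized cross-correlation template matching on the CPU (illustration only).
import cv2

scene = cv2.imread("pcb.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
template = cv2.imread("fiducial.png", cv2.IMREAD_GRAYSCALE) # hypothetical template

# TM_CCORR_NORMED is OpenCV's normalized cross-correlation score.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCORR_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)
print(f"best match at {best_xy} with score {best_score:.3f}")
```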
Advantages of VisualApplets
• application development simply by using a graphical interface and data flow diagrams with image processing operators and transport links;
• access to more than 200 operators for single image processing tasks; development of your own operators is possible with VisualApplets Expert, and they can be added to an existing library;
• expert knowledge of FPGA development (VHDL/Verilog) is not necessary, so the tool is also suited to software programmers and application engineers;
• more than 80 immediately deployable application examples save time and effort;
• development projects can be realized in less than 10% of the originally needed time;
• integration into third-party FPGA devices such as cameras and vision sensors with VisualApplets Embedder.
(Images: example of an application design in VisualApplets; pixel-accurate simulation and testing of the functionality; graphical programming of different FPGA devices.)
Basler
Camera developer and manufacturer Basler believes it has the best camera software suite on the market. Called pylon, it is a software package comprising an easy-to-use SDK plus drivers and tools that can be used to operate any Basler camera from a Windows or Linux PC, or a Mac. Thanks to the latest GenICam 3 technology, pylon offers unrestricted access to the latest camera models and features. The pylon programming interface works with a wide range of camera interfaces, allowing existing code to be reused without modification across Basler cameras and operating systems. Highlights (a minimal acquisition sketch in Python follows the list):
• a software package for all operating systems, such as Windows, Linux x86, Linux ARM and OS X;
• a software package for all interfaces and standards such as GigE Vision, USB3 Vision, IEEE 1394, Camera Link and BCON for LVDS;
• unique context-sensitive camera documentation and programmers' guides for easy camera evaluation and software development;
• a wealth of sample programs for all typical camera applications in all supported programming languages, such as C, C++, C# and VB.Net;
• the easy-to-use and high-performance pylon Viewer, including multi-language support for camera features, for the simultaneous activation and evaluation of multiple cameras;
• IP, USB (Windows) and CL (Windows) configuration tools for the simple use of Basler's GigE, USB 3.0 and Camera Link cameras;
• GigE and USB bandwidth managers for simple setup optimization;
• a unique camera emulator for software development without physical cameras.
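As promised above, a minimal acquisition sketch. It assumes Python and pypylon (the open-source Python wrapper around the pylon SDK) fit your toolchain; the same pattern exists in the C++ and .NET APIs. Setting PYLON_CAMEMU lets the camera emulator from the last highlight stand in for real hardware.

```python
# Minimal pypylon grab loop; an emulated camera is used if no hardware is attached.
import os
os.environ.setdefault("PYLON_CAMEMU", "1")   # enable one emulated camera

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.StartGrabbingMax(5)                   # grab five frames, then stop automatically
while camera.IsGrabbing():
    result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if result.GrabSucceeded():
        frame = result.Array                 # numpy view of the image data
        print(frame.shape, frame.dtype)
    result.Release()
camera.Close()
```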
Matrox Canadian-based Matrox Imaging is an established supplier to top OEMs and integrators involved in machine vision, image analysis, and medical imaging industries. The product range consists of smart cameras, vision controllers, I/O cards, frame grabbers, and processing platforms. All are designed to provide optimum price/ performance within a common software environment. The company recently released the Matrox Imaging Library (MIL) 10 Processing Pack 3 software update featuring a CPU-based image classification module that makes use of deep learning technology for machine vision applications. Processing Pack 3 also includes the addition of a photometric stereo tool to bring out hard-to-spot surface anomalies or features and a new dedicated tool to locate rectangular features.
Deep learning for image classification
Leveraging convolutional neural network (CNN) technology, the Classification tool categorizes images of highly textured, naturally varying, and acceptably deformed goods. The inference is performed exclusively by Matrox Imaging-written code on a mainstream CPU, eliminating the dependence on third-party neural network libraries and the need for specialized GPU hardware. The intricate design and training of the neural network is carried out by Matrox Imaging, taking advantage of the accumulated experience, knowledge, and skill of its machine learning and machine vision experts.
Registration toolkit now includes photometric stereo The photometric stereo technique is now available within the Registration module to produce a single image that emphasizes object surface irregularities such as embossed and engraved features, scratches, and indentations. The composite image is composed from a series of images taken with light coming in from different directions; the lighting is produced by illumination solutions based on the CCS Inc. Light Sequence Switch (LSS) or Smart Vision Lights LED Light Manager (LLM).
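To illustrate the principle behind the technique, the generic numpy sketch below solves a Lambertian photometric stereo problem: with images taken under known light directions, each pixel's surface normal and albedo follow from a least-squares fit. This is not MIL code, and the light directions and image array are invented for the example.

```python
# Generic photometric stereo: I_i = albedo * (L_i . n) per pixel, solved by least squares.
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray):
    """images: (k, h, w) intensities; lights: (k, 3) unit light directions."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # k intensities per pixel, one column each
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # solve  L @ G = I,  G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)          # unit surface normals, shape (3, h*w)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Four directional exposures of the same scene (synthetic stand-ins here):
lights = np.array([[0.5, 0, 1], [-0.5, 0, 1], [0, 0.5, 1], [0, -0.5, 1]], dtype=float)
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
images = np.random.rand(4, 120, 160)
normals, albedo = photometric_stereo(images, lights)
```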
Dedicated shape-finding tool for rectangles Part of the Geometric Model Finder (GMF) module, the Rectangle Finder tool is a faster, more flexible, and more robust option than generic geometric pattern matching. The tool is able to simultaneously search for multiple occurrences of rectangles with different scale and aspect ratios. “Montreal is a hotbed of artificial intelligence development, and Matrox Imaging is perfectly situated to take full advantage of this growing know-how,” said
Pierantonio Boriero, director of product management, Matrox Imaging. “Combined with our long-standing and trusted expertise in the field of machine vision applications, we continue to deliver the best machine vision solutions to our customers.”
Active Silicon
Active Silicon, founded in 1988, designs, manufactures, markets and supplies embedded vision systems and interface cards. Frame grabbers provide the interface between high-end cameras and computers in vision systems, while embedded vision systems provide the industrial-grade computer environment on which vision systems operate. As well as being a leader in the development and application of new technologies, Active Silicon says it is unique in being able to support a wide range of operating systems and a diverse range of hardware formats, going beyond traditional fixed ground environments. Its products have been used in applications from space missions to deep-sea vehicles and UAVs, and have applications in virtually all areas of science and industry, including manufacturing, life sciences, medical imaging, security and defence. The company recently announced the launch of ActiveCapture, its latest front-end software for FireBird frame grabbers. The software provides optimized image acquisition, analysis and display, giving the user access to, and control of, all installed cameras and frame grabbers in a clear and intuitive manner. ActiveCapture works with CoaXPress and Camera Link cameras and provides a simple and straightforward way to configure the system hardware, allowing control of acquisition features such as triggering and image resolution. ActiveCapture is a GenICam GenTL program that can be
used with cameras supporting GenICam, such as CoaXPress, and Camera Link cameras using the CLProtocol. It is also designed for use with non-GenICam Camera Link cameras. A device tree provides quick access to any Active Silicon frame grabber products installed in the system as well as connected cameras. ActiveCapture has several features to aid camera testing and integration. The histogram feature shows the distribution of pixels in the image - 2D and 1D functions are provided. Image sequences can be acquired and played back within ActiveCapture, or saved to disk for off-line analysis. A hardware events controller provides real-time feedback on asynchronous events that are generated by the hardware, which aids system debugging and speeds up integration time. The feature browser allows control of the GenICam features of the cameras and frame grabbers and includes simplified searching and filtering. Several integrated tools are available such as a bandwidth test, a Flash programming utility and a GenTL CL setup utility.
LUCID
Canadian-based LUCID Vision Labs designs and manufactures innovative machine vision cameras and components that aim to utilize the latest technologies to deliver exceptional value to customers. Its compact, high-performance GigE Vision cameras are suited to a wide range of industries and applications such as factory automation, medical, life sciences and logistics. It says it innovates dynamically to create products that meet the demands of machine vision for Industry 4.0. Its expertise combines deep industry experience with a passion for product quality, technology innovation and customer service excellence. LUCID was founded in January 2017. It recently announced the release of its new Arena Software
Development Kit (SDK), which has been designed to maximize the performance of LUCID cameras and is based on the latest GenICam3 and GigE Vision image acquisition standards. The Arena SDK features a comprehensive API toolkit, providing users with easy access to the newest features and software technology compliant with current industry standards. The GenICam 3.0 based C++ API leverages GenICam's Reference Implementation for robustness, stability and reliability and uses the Standard Feature Naming Convention (SFNC) for camera features and control. It has been designed for forward compatibility with new device features and enables fully featured chunk data support, device events and triggers. The Lightweight Filter (LWF) driver improves image transfer performance and lowers CPU usage when streaming large images at small packet sizes. The Arena SDK includes an intuitive image acquisition application called ArenaView, which allows users to access and validate camera features quickly and easily through the GenICam XML-based feature tree. Its flexible user interface framework is based on HTML5, CSS3 and JavaScript, modernizing the approach, look and maintenance of user applications. The initial release of the Arena SDK supports Windows, with subsequent Linux support targeted for a Q2 release. All LUCID cameras feature a built-in web interface for easy firmware updates, with additional capabilities to be added in the near future. “The Arena SDK has been designed from the ground up and optimized for today's diverse range of user preferences,” says Rod Barman, Founder and President at LUCID Vision Labs. “It features an intuitive, modern and flexible architecture that enables easy integration and rapid development of all kinds of machine vision applications and embedded systems.”
Talking with Christof Zollitsch, CEO of newly floated STEMMER IMAGING It’s just over three months since Christof Zollitsch steered STE M M E R I MAGI NG through what by all accounts is one of the most challenging processes that a company management team can undertake, an I PO. So MVPro Magazine’s editor Neil Martin thought it a good opportunity to have a chat with the CEO, to see how he’s taking it all An initial public offering (or, flotation as its better known by many), is a process by which a company and its management are laid bare. When a company decides to go public, it offers itself up to a level of scrutiny from external advisers that would make some blush. And for the senior management team - usually Chairman, CEO and Finance Director - it means a lot of work and extra worry. Whether a company takes the step into the limelight is of course dependent on the main shareholders and what they are looking for from a public quote, for example a chance to cash-in (after a purchase, or years of investment), or a route to the increased funds that a stock market profile can deliver. I started by asking Christof to describe the benefits of a public listing. “STEMMER IMAGING has grown very successfully in recent years. We are convinced that as a listed company we can push further ahead with our growth strategy as going public accesses an additional source of funds that can be used to fund investments or acquisitions. “Another advantage is an increased public awareness, which may lead to new opportunities and new customers. Our market visibility, that was already high before, has been significantly enlarged among suppliers or existing and new clients recently.” Raising the profile of a company is a fundamental reason for going public, as Christof was happy to expand on: “Going public is a significant stage in the expansion of our business, as it provides us with access to the public capital market and therefore opens up new markets and possibilities for STEMMER IMAGING that were not possible before. “The main difference in our daily business is that the company is now subject to a large number of disclosure requirements. We see this as an advantage as we have become even more transparent for customers or suppliers.”
So the time was right for the IPO and STEMMER IMAGING’s history is marked by a strong founder, Wilhelm Stemmer, who probably decided last June that for him, a trade sale was a better option that taking the company public himself, even though his ambitious management team might be anxious to take the step. With new owners, the route was open for an IPO. This was made clear when I asked Christof having been through the process, would he have done anything differently in hindsight? “Some might say that we should have taken the step earlier. However, the great success of the IPO is largely due to our shareholder PRIMEPULSE, who only last summer became the main shareholder of STEMMER IMAGING. They helped us to become really IPO-ready with their experience and contacts.” As they say in the entertainment industry, timing is everything, and who knows whether the odd six to 12 months makes that much of a difference to company’s standing for its introduction to a stock market. IPO timing is more often dictated by external issues, such as the global economy, or geo-political considerations.
I then asked Christof about how he coped with the IPO process? “Directing an IPO is one of the most challenging and rewarding tasks to undertake. We created a comprehensive IPO plan that balanced our shortterm objectives with its long-term goals and that allowed for the coming onslaught of real-time reporting required of a public company.” “Organizing this process is very time-consuming. For example, at one point I had to visit four countries within 24 hours. Essential is a wellfunctioning team in the background consisting of colleagues, the bank and the main shareholders who gave us the best support. The successful IPO shows that we have done everything right.” What interested me as well though, was whether the prospective shareholders he was meeting during the IPO were up-to-speed with the potential of the machine vision sector. “As the global machine vision industry continues to expand, many investors were very well-informed. I guess at least half of our potential investors did. Others focus more on figures such as future profitability. The successful IPO shows that STEMMER IMAGING is convincing in all aspects.” It’s always good news for an industry when those sitting in the investment space are aware of the opportunities. I then asked, looking ten years out, how did you see machine vision developing. “Machine vision is the key enabler for Industry 4.0. This means digital transformation is not possible without machine vision. Due to the high demands on sensory systems, machine vision represents a great opportunity for the industry to establish itself further in production as a key technology. For example, in the future machine vision will be used increasingly in non-industrial applications.”
With his float in the bag, and the positive reaction of investors, did he think that many other companies in the sector will choose to float? “The fact that we were the first machine vision company to go public in years shows the general restraint of the market. For STEMMER IMAGING I can say, that as a solid, promising and fast-growing company it was the right decision to go public and that this can only be of benefit to investors.” Christof makes a fair point and one which possibly reflects how the sector is starting only now to realise its own potential and how important it is becoming. Whether that leads to more IPOs, or more consolidation as smaller companies realise their value from a trade sale, remains to be seen. And talking about acquisitions, how was that playing out for STEMMER-IMAGING? I reminded him that in the company’s IPO statements, they talked of non-organic growth. I asked if he thought that acquisitions will drive future growth, as opposed to organic growth?
“In the coming years, we intend to further drive and accelerate our growth through focused expansion and to systematically improve profitability through concept and product innovations. This means organic and anorganic growth will take place simultaneously.” A careful answer but expect to see the cheque book being taken out at regular intervals over the coming years. The challenge for Christof and his team is not to pay top dollar for deals that must be earnings enhancing. Investors expect their public companies to grow, so the pressure will be on to ratchet up the deal flow and report some decent non-organic, as well as organic growth. I was also intrigued by the mention of Asia within the IPO material. They mentioned Asia as being an area for the company’s expansion, so I asked, how far are they with that strategy? “The proceeds generated by the listing will be used primarily to help expand our position in Europe, although expansion into Asia is interesting. We have just completed a very successful “Vision China” trade fair in Shanghai, where we were able to make good contacts.” I take that as Asia is going to play a key part in the company’s development. I finished the interview with a slightly unfair question. How often does Christof check his share price? “My team are constantly checking our performance and keep me updated on a regular basis. So there is no need for me to check personally all the time. This means I can focus on our core business machine vision industry and on producing gains for our shareholders.” Well said Christof, even though I don’t entirely believe him. In my experience, a CEO knows minute-by-minute just what his share price is doing, whether his team is telling him, or not. Rightly, or wrongly, it’s what keeps him attuned to what his investors might be thinking and keeping them happy is a large part of his new role.
CONTRIBUTED
Why Sony's CCD EOL shows their best-in-class 'customer first' mind-set
Sebastien Dignard of FRAMOS Technologies is impressed with Sony
A few weeks ago, on a flight from Montreal to Newark, I noticed that the business class cabin was filled with Air Canada employees, while the Premium Economy seats were taken by Super Elite clients who were audibly upset with the fact that they didn’t receive their upgrade due to staff. I sat there wondering: should I be impressed with the way Air Canada treated their employees or should I be concerned with the way Air Canada treated their best customers? My meeting in New York was with a sensor customer on the transition path from CCD to the latest Sony CMOS imaging technology. I couldn’t help but think of how Sony Semiconductor had handled the End of Life (EOL) of their CCD sensor line. For me, this was a great example of how a company truly lives a best-in-class, ‘customer first’ approach and culture, while other companies are content with it just being mentioned on websites and in annual reports but in reality, their customers take a back seat. Raw material shortage In 2014, Sony was forced to execute on the EOL of its widely successful CCD product line due to a raw material shortage. They could have simply passed on all associated cost and risks, but they decided to extend their EOL process by a full 10 years and aligned customers with a non-committal forecast. They didn’t have to do that. Financially and logistically, it made no sense. They were the market leader and already had an alternative CMOS technology available. Most electronics companies simply provide a few months’ notice for EOL’s. Can you imagine the financial cost of carrying hundreds of millions of unsecured inventories? Or even the cost and trouble associated with maintaining the technology know-how for support? It made sense for one reason, and one reason only. The customer. They did what had to be done to protect their customers, even if it would cost them a great deal. Customers
were now able to build reliable transition plans within a ten-year-window. In the high endindustrial or medical business with certification processes and long-term design-ins, this is crucial. Sony also took on the capital risk, allowing customers to use the money they normally had to spend in EOL stock, and invest it in product innovation based on cutting-edge CMOS. We see a tipping point on how new imaging applications are being developed. Focus Part of this is that the Sony EOL forced customers to focus on the future. People had to invest to push their business forward. Bankrolled by Sony, they “covered” their customers back with a non-committed forecast and sensor availability for five to ten years. Customers could now start immediately making sure to innovate with new technology and develop new markets. Thinking about autonomous vehicles or embedded vision, Sony’s decision was pushing the whole industry forward and did wonders on everything tied to hardware, innovation and cognitive systems - vision is now in places where it’s never been before. That’s a great story on how a big decision by a major corporation has enabled the industry to push forward. FRAMOS, a Sony partner for over 34 years, is very happy and proud to be aligned with a company that is truly customer focused. This culture is very important to us as we work together to make the world a better place by building machines that see and think. Sebastien Dignard is President of FRAMOS Technolgies Inc., Global Head of Sales, Marketing and Support, FRAMOS Group
ABOUT Sebastien Dignard
Sebastien Dignard brings 15 years of senior-level experience in international business, including 8 years of experience in the imaging industry. Sebastien is responsible both for the entire business in the Americas and for the management of the Global Sales, Marketing and Support teams for the FRAMOS Group. He is a recipient of the “Top Forty under 40” award presented by the Ottawa Business Journal and the Ottawa Chamber of Commerce, an annual award program that recognizes notable industry leaders under the age of 40 based on their business achievements, professional expertise, and community involvement.
CONTRIBUTED
FRAMOS 3D SYSTEM SUPPORTS VISUALLY IMPAIRED IN DAILY LIFE
Wearables with real-time 3D technology create a new way of sensing by translating visual information
In cooperation with the CDTM institute of the Technical University of Munich (TUM), FRAMOS has developed an innovative wearable using real-time 3D technology to support visually impaired people in daily life. The glasses are equipped with the latest Intel® RealSense™ stereo cameras, and intelligent algorithms translate the visual impression into haptic and audio information. While the audio information relies on object and character recognition, the haptic feedback is provided by a wristband equipped with vibration motors. This new way of sensing enables visually impaired people to understand their environment fully and gives them advanced guidance for safe navigation.
The glasses are a smart assistant helping the visually impaired to master their life and provides a new level of safety and knowledge by text and object recognition enabled with intelligent algorithms. The prototype leverages state-of-the-art vision technology and entirely reflects FRAMOS’ mission of making machines to see and think. With over 37 years of experience, FRAMOS is in tune with today’s imaging requirements and positioned to help clients innovate and remain competitive by developing cutting-edge vision solutions. As a global imaging partner, FRAMOS can support with a broad solution portfolio that ranges from sensors to systems and value-added service in every stage of the imaging solutions value chain.
The eye is probably the most important human sense. But visual information is hidden for 108 million visually impaired people worldwide. Shop names, street names, route numbers of public transport or traffic signs are invisible, navigation without this information is a true challenge. The FRAMOS developed glasses now represent a new possibility for the visually impaired to explore the surrounding benefitting from the latest technology. Dr Christopher Scheubel, FRAMOS Business Development: “We are proud having found a way to bring state-of-art technology into an application, which provides a huge impact on the daily life of the visually impaired. This project hits the sense of innovation by really supporting humans and improving their lives. The exceptional beauty of this technology is the ability to provide visual information normally given by the human eye. Our technology creates a new way of sensing.” The 3D enabled wearable creates a new way of sensing by translating visual information into haptic feedback on a wristband in real-time. The prototype includes an Intel RealSense 3D camera and speakers for audio feedback. The setup is controlled by a processing hub with a GPS sensor and a LTE module for mobile data connection. Connected via Bluetooth, a micro-processing unit translates visual data into haptic-feedback through an 2D array of vibration motors. Based on the exact location and movement of the vibrating feedback on the arm, the visually impaired are informed about the position and distance of things in the surroundings. A voice-controlled interface makes interaction easy and rechargeable batteries enable a full day of use.
CONTRIBUTION
3D sensing techniques
As Multipix Imaging begins a deal to distribute Photoneo's 3D scanner products within the UK and Ireland, we take a look at the parameters of 3D sensing techniques, as described by Tomas Kovacovsky, CTO of Photoneo
Machine vision is one of the driving forces of industrial automation. For a long time it has been pushed forward primarily by improvements in 2D image sensing, and for some applications 2D sensing is still the optimal tool to solve a problem. But the majority of the challenges machine vision faces today have a 3D character. From well-established metrology to new applications in smart robotics, 3D sensors serve as the main source of data. By a 3D sensor we mean a sensor that is able to capture the 3D features of an inspected surface; since we are talking about machine vision, we will not consider non-optical systems in this category. Nowadays the market offers a wide variety of 3D sensing solutions, most of them claiming superiority over their competition. While many of these claims are based on rational reasoning, one needs to understand the differences and the needs of individual applications. For QR code reading, a 2D smart camera can be the best solution on the market, but it will probably not guide a logistics robot from one facility to another; in that field it cannot compete with the LIDAR-based solutions currently dominating the market. Leaving aside interferometry and the nanometre range, we can list the most common technologies currently used in the industry:
• Laser triangulation (or profilometry)
• Photogrammetry
• Stereo vision (passive and active)
• Structured light (one frame, multiple frames)
• Time-of-flight (area scan or LIDAR)
A more detailed description, with the primary use cases and a categorization based on our chosen parameters, can be found at the end of the paper. It is important to realize that it is impossible to create an optimal solution that satisfies all needs. Let's concentrate on the most important parameters, the reasons why they cannot easily be extended, and the trade-offs of pushing a given parameter very high. We will define five levels in each category that will help us compare the individual technologies and the possibilities they provide.
PARAMETERS
Operating volume
A typical operating volume of a system used for a metrology application is about 100 mm x 100 mm x 20 mm, while the typical need of a bin picking solution is about 1 m3. This looks like just a simple change in parameters, but in reality different technologies excel at different operating volumes.
While increasing the range in the XY directions is mostly a matter of the system's field of view and can be extended by using a wider lens, extending the range in the Z direction brings the problem of keeping the object in focus. This is called depth of field.
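As a rough illustration of why depth of field forces small apertures, the textbook thin-lens approximation below (not taken from the article; the focal length, f-numbers and circle-of-confusion value are made-up example values, and the formula assumes the working distance is well below the hyperfocal distance) shows the usable depth range growing roughly in proportion to the f-number.

```python
# Approximate depth of field: DOF ~ 2 * N * c * s^2 / f^2
# (f = focal length, N = f-number, c = circle of confusion, s = working distance).
def depth_of_field_mm(f_mm: float, f_number: float, coc_mm: float, distance_mm: float) -> float:
    return 2.0 * f_number * coc_mm * distance_mm ** 2 / f_mm ** 2

for n in (2.8, 5.6, 11.0):
    dof = depth_of_field_mm(f_mm=16.0, f_number=n, coc_mm=0.01, distance_mm=1000.0)
    print(f"f/{n}: ~{dof:.0f} mm of depth of field")   # smaller aperture -> larger DOF
```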
The deeper the depth of field needs to be, the smaller the aperture of the camera (or projector) has to be. This strongly limits the number of photons reaching the sensor and, as a result, limits the use of some technologies at higher depth ranges. We can define five categories based on the depth of field range:
1. Very small: up to 50 mm
2. Small: up to 500 mm
3. Medium: up to 1500 mm
4. Large: up to 4 m
5. Very large: up to 100 m
While extending the depth range of the camera is achieved by shrinking the aperture, this limits the amount of captured light (both from your light source, in an active system, and from the ambient illumination). A more complex problem is extending the depth range of the active projection system, where shrinking the aperture limits only the signal without limiting the ambient illumination. Here, laser-based projection systems (as used in Photoneo's 3D sensors) excel, with their ability to achieve an almost unlimited depth of field.

Data acquisition and processing time
One of the most valuable resources in 3D scanning is light. Getting as many photons from the correct light source into the pixels is essential for a good signal-to-noise ratio of the measurement. This can be a challenge for an application with limited time for data acquisition. Another parameter that makes a difference in terms of time is the ability of the technology to capture objects in motion (on a conveyor belt, a sensor on a moving robot, etc.). When considering moving applications, only "one shot" approaches can compete (marked with a score of 5 in our data acquisition time parameter). The last aspect to consider when defining the cycle time you need to achieve is whether your application is reactive and requires an instant result (e.g. smart robotics, sorting ...) or whether a result delivered later is sufficient (e.g. offline metrology, reconstruction of a factory floor plan, crime scene digitalization ...). If your application is time limited, structured light can provide a good balance between speed (both acquisition and processing) and resolution. The combinations can be summarised as follows:
• Delayed result (reconstruction / mapping), static scene: takes minutes, e.g. photogrammetry, LIDAR
• Delayed result, scene in motion: "recording", e.g. for motion tracking
• Instant result (reactive systems, low latency), static scene: takes seconds, e.g. structured light
• Instant result, scene in motion: time-of-flight, structured patterns

Data acquisition time:
1. Very high: minutes and more
2. High: ~5 s
3. Medium: ~2 s
4. Short: ~500 ms
5. Very short: ~50 ms

Data processing time:
1. Very high: hours and more
2. High: ~5 s
3. Medium: ~2 s
4. Short: ~500 ms
5. Very short: ~50 ms

Resolution
Resolution is the ability of the system to capture details. High resolution is necessary for applications where there are small 3D features in a large operating volume. The greatest challenge in increasing the resolution of any camera-based system is the decrease in the amount of light reaching individual pixels. Imagine an application of apple sorting on a conveyor belt. Initially, only the size of an apple is the sorting parameter. However, the customer then needs to check for the presence of a stalk. The analysis shows that we need to double the object sampling resolution to get the necessary data. To double the object sampling resolution, the resolution of the image sensor has to increase by a factor of four. This is generally well known and limits the amount of light by a factor of four (the same light stream is divided among four pixels). However, the tricky part is that we also need to keep the depth of field of the original system. To do that, we need to shrink the aperture, which limits the light by a further factor of four. It means that to capture the objects at the same quality, we need to expose for sixteen times longer, or we need sixteen times stronger light sources. This strongly limits the maximum possible resolution of real-time systems.

As a rule of thumb, use the correct resolution to be able to capture scanned objects fast. You will also save some time thanks to the shorter processing time. As an alternative, some devices (e.g. Photoneo's 3D Scanner) can switch between medium and high resolution to fit the needs of the application. To categorize systems, let's define five categories by average 3D points per measurement, or XY-resolution:
1. Very small: ~100k points
2. Small: ~300k points (VGA)
3. Medium: ~1M points
4. High: ~4M points
5. Extended: ~100M points

The other part of resolution is the ability to retrieve depth information. While some technologies can be scaled to satisfy precise measurement (most triangulation systems), some cannot scale down because of physical limitations (like time-of-flight systems). We will call this Z-resolution:
1. Very small: >10 cm
2. Small: ~2 cm
3. Medium: ~2 mm
4. High: ~250 µm
5. Very high: ~50 µm

Robustness
While most of the systems could offer a reasonable lifetime of the components and can, if necessary, be enclosed in an external box with an adequate IP rating or cooling, we will draw your attention to more unavoidable challenges. For instance, some systems rely on external light (like the sun or indoor lighting) or are able to operate only within limited ambient light levels (light that is not part of the system's operation). Ambient light increases the intensity values reported by the internal sensors and increases the noise of the measurement. A lot of approaches try to achieve a higher level of resistance using mathematics (like black level subtraction), but these techniques are quite limited. The problem lies in a specific noise called "shot noise", or "quantum noise". In general, it says that if ten thousand photons reach the pixel on average, the square root of that number, one hundred, is the standard deviation of the uncertainty. So sometimes more photons are sensed, and sometimes fewer. The problem lies in the level of ambient illumination: if the shot noise it causes is similar to the signal levels from the active illumination of the system, the noise level rises. Let's define the external conditions in which the device can operate:
1. Indoor, dark room
2. Indoor, shielded operating volume
3. Indoors, strong halogen lights and open windows
4. Outdoors, indirect sunlight
5. Outdoors, direct sunlight
When we talk about robustness in scanning different materials, the decisive factor is the ability to cope with interreflections:
1. Diffuse, well-textured materials (rocks, ...)
2. Diffuse materials (a typical white wall)
3. Semi-glossy materials (anodized aluminium)
4. Glossy materials (polished steel)
5. Mirror-like surfaces (chrome)

Weight
Weight and size of the device limit its use in some applications. Having a light and compact, yet powerful solution will allow you to mount it anywhere. This is why we chose a carbon fibre body: alongside its temperature stability, it offers a light build even for longer-baseline systems.
1. Very heavy: >20 kg
2. Heavy: ~10 kg
3. Medium: ~3 kg
4. Light: ~1 kg
5. Very light: ~300 g

Budget
At the end of the day, the application you are working on needs to bring value to the customer. It can be either solving a critical issue (possibly a big-budget one) or making a step in a process more economical (budget sensitive). Some of the price aspects are related to particular technologies; others are defined by the typical production volume, services and support provided. In recent years, the consumer market has been able to bring cheap 3D sensing technologies by utilizing mass production. On the other hand, the disadvantages of such technologies are a lack of possibility for customization and upgrades, limited robustness, product line availability and limited support. Let's categorize 3D vision technologies based on their price as follows:
1. Very high: ~100k EUR
2. High: ~25k EUR
3. Medium: ~10k EUR
4. Low: ~1000 EUR
5. Very low: ~200 EUR

TECHNOLOGIES
We created a category for each major technology on the market. In most of the categories, you can find multiple companies with a similar product. We strived to choose an average representative for the evaluation. The positioning of each category is represented by a radar chart. If there are more common variants of the technology in a particular category, they are visualised in the same chart to highlight dependencies between parameters. The data we provide is indicative and serves as a rough understanding of the differences between categories. Two big groups can be formed: group one uses triangulation as the final technique to compute 3D data, and group two mainly consists of technologies utilizing the time-of-flight principle.

Triangulation based
Triangulation-based systems inspect the scene from more than one position. These positions form a baseline that has to be known. By measuring the angles of the triangle formed by the baseline and the inspected point, we can compute the exact 3D coordinate. The length of the baseline and the accuracy of retrieving the angles strongly affect the system's precision.

Laser triangulation (or profilometry)
This is one of the most popular 3D sensing methods. A line profile (or a point) is projected onto a surface. This profile is deformed when looked at from a different angle, and the deviation encodes depth information. Because it captures only one profile at a time, to form a whole snapshot either the sensor or the object needs to move, or the laser profile needs to scan through the scene.

Photogrammetry
Photogrammetry is a technique of computing a 3D reconstruction of an object from a large number of unregistered 2D images. Similar to stereo vision, it relies on the object's own texture, but it can benefit from multiple samples of the same point with a high baseline. The technique can be used as an alternative to LIDAR systems.

Stereo vision
Classical stereo vision is based on a pair of cameras imitating human depth perception. It matches the texture features between two images to retrieve depth information. Passive 3D stereo, because of its dependency on the object material, is used for applications of a non-measuring character, like people counting. To compensate for this disadvantage, active stereo vision systems were developed, which use a structured projection to create an artificial texture on the surface.

Time-of-flight (TOF)
TOF systems compute the time of light travel between a light emitter (usually near the detector), the inspected object and back to the detector. There are two distinctive techniques using the TOF approach: LIDAR and area sensing.

LIDAR
These systems sample one (or a few) 3D points at a time. During the scanning, they change the position or orientation of the sensor to scan the whole operating volume.

Area sensing
Area sensing TOF systems, on the other hand, use a special image sensor to measure the time for multiple measurements in a 2D snapshot. They cannot provide the same data quality as LIDAR but are well suited to dynamic applications where only a low resolution is needed. The other problem of area sensing TOF systems is interreflections between parts of the scene, which can easily distort the measurement. The popularity of TOF systems has expanded in recent years with the availability of cheap, consumer-based systems designed mostly for human-computer interaction.

Structured light
With the ability to capture a whole 3D snapshot of the scene without the need for moving parts, structured light provides a high level of performance and flexibility. It uses sophisticated projection techniques to create a coded structured pattern that encodes 3D information directly into the scene. By analyzing this with the camera and internal algorithms, the system can provide a high level of accuracy and resolution with a short acquisition time.

The higher-resolution structured light systems available on the market use multiple frames of the scene, each with a different structured pattern projected. This can ensure per-pixel 3D information with high accuracy, but it demands that the scene be static for the moment of acquisition. The typical technology used for structured pattern projection is based on a DMD (Digital Micromirror Device), originally used in consumer digital projectors. These systems generally use white or monochrome light (if there is a higher demand for ambient light resistance, e.g. in a production facility). Scanners optimized for offline metrology are built for the most accuracy-driven applications but are mostly suited to laboratory conditions. One of the biggest drawbacks of projection-based approaches like DMD is the depth of field (or depth range). To keep the projector focused, the system needs a narrow aperture. This is not optically efficient, as the blocked light creates additional heat and internal reflections in the projection system. In practice, it limits the use of the technology at higher depth ranges. At Photoneo, we have overcome the problem by using a laser to create the structured patterns. With a nearly unlimited depth range, it also provides the possibility to use narrow bandpass filters to block out ambient light. For a moving application, a one-frame approach has to be used. A conventional technique is to encode the distinctive features of multi-frame systems into one structured pattern, with a strong impact on XY and Z resolution. Similar to TOF systems, there are consumer-based products available in this category. As a solution to these limitations, Photoneo has developed a new technique of one-frame 3D sensing that offers the high resolution common to multiple-frame structured light systems with the fast, one-frame acquisition of TOF systems. We call it Parallel Structured Light and it runs thanks to our exotic image sensor. First pieces will be available in early 2018.
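For readers who want to see the multi-frame principle in code, below is a generic sketch of binary Gray-code structured light, one common multi-frame coding scheme. It illustrates the general approach only, not Photoneo's Parallel Structured Light; the pattern width, bit count and the assumption that captured images are already thresholded to clean bits are simplifications for the sketch.

```python
# Generic Gray-code structured light: project stripe patterns, then decode which
# projector column lit each camera pixel; depth follows by triangulation.
import numpy as np

def gray_code_patterns(width: int, n_bits: int) -> np.ndarray:
    """Return n_bits stripe patterns of length `width` (1 = bright, 0 = dark)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                               # binary-reflected Gray code
    bits = (gray[None, :] >> np.arange(n_bits)[::-1, None]) & 1
    return bits.astype(np.uint8)

def decode_columns(observed_bits: np.ndarray) -> np.ndarray:
    """observed_bits: (n_bits, h, w) thresholded camera images, MSB pattern first."""
    gray = np.zeros(observed_bits.shape[1:], dtype=np.int64)
    for plane in observed_bits:                             # rebuild the Gray-coded value
        gray = (gray << 1) | plane
    binary, shift = gray.copy(), gray >> 1
    while shift.any():                                      # Gray-to-binary conversion
        binary ^= shift
        shift >>= 1
    return binary                                           # projector column per pixel

patterns = gray_code_patterns(width=1024, n_bits=10)        # 10 frames cover 1024 columns
```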
SPONSORED
Axion-CL
BitFlow has offered a Camera Link frame grabber for over 20 years. This latest offering, our 6th generation, combines the knowledge learned from handling the high data rates of CoaXPress with the requirements of Camera Link 2.0. The Axion-CL is fully compatible with every high-speed, high-performance Camera Link camera, including base, medium, full and 80-bit (10-tap) CL configurations. It can acquire from up to two 80-bit/85 MHz cameras simultaneously, with the board appearing to application software as two independent frame grabbers, which greatly simplifies setups for multiple cameras. The Axion was built on a half-size x8 PCI Express Gen 2.0 board. The Gen 2.0 PCIe bus doubles the data rate of the Gen 1.0 bus while using the same footprint and connectors and is fully backwards compatible with Gen 1.0 motherboards. The Axion 1xE supports one base, medium, full or 80-bit camera while the Axion 2xE supports two cameras in any of those modes. Additionally, Power over Camera Link (PoCL) is available. Specifications: • Camera Link 2.0 compliant • Industry-standard SDR Camera Link connectors • PCI Express x4 Gen 2.0 interface (also works in x8 and x16 slots), also compatible with PCI
Express Gen 1.0 slots • Supports dual connector PoCL • Supported on both 32bit and 64-bit platforms • Requires BitFlow SDK 6.2 or later Like the Cyton-CXP frame grabber, the Axion-CL leverages features such as the new StreamSync system, a highly optimized DMA engine, and expanded I/O capabilities that provide unprecedented flexibility in routing. The StreamSync system consists of an Acquisition Engine and a buffer manger.
Additional features include efficient support for variable sized images with fast context switches between frames, per frame control of acquisition properties, hardware control of image sequencing, enhanced debug capabilities, efficient support of on-demand buffer allocation and graceful recovery for dropped packets. The Axion-CL is a culmination of the continuous improvements and updates BitFlow has made to Camera Link frame grabbers. It is the most powerful CL frame grabber BitFlow has ever manufactured.
CONTRIBUTED
Efficient stereovision thanks to hardwarebased image processing Nerian Vision Technologies talk about their special hardware solution for stereo image processing An accurate three-dimensional environment detection is a basic requirement for many applications in robotics and automation technology. In the past, active camera systems were mostly used for this purpose, which can determine the spatial depth - and thus the 3D position - of the captured pixels by emitting light. Under controlled conditions, these methods can provide very accurate readings. In difficult lighting conditions, however, they reach their limits. Depth perception is only possible with active camera systems if the emitted light can clearly outshine the existing ambient light. Especially outdoors, however, this is difficult to do in bright daylight. For applications such as automated logistics and mobile service robotics, where the prevailing lighting conditions often cannot be controlled, other sensors usually have to be used. Another problem for active systems is the measurement of large distances. The greater the distance, the larger the area to be illuminated. Since the light source of an active system can only supply a finite amount of light, there is usually a fixed upper limit for the maximum measurable distance. This makes active camera systems unattractive for applications such as autonomous vehicles, as they require a wide range of foresight. A possible alternative is passive stereo vision. The environment is captured by two or more cameras with different viewing positions. By means of intelligent image processing, the spatial depth and thus the three-dimensional structure of the depicted environment can be reconstructed. As no light is emitted in stereo vision, the brightness of the environment is not important and there is no fixed upper limit for the maximum measurable distance. Furthermore, only one image per camera is required, which makes stereo vision particularly suitable for the observation of moving objects. Despite these advantages, stereovision is currently rarely used in robotics and automation technology. One of the main reasons for this is the enormous computing power required for image processing.
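For comparison, the purely software baseline the article alludes to can be sketched with OpenCV's semi-global block matcher. The file names and matcher settings below are assumptions for the sketch, and this is not Nerian's algorithm; on a desktop CPU this kind of matching typically runs well below camera frame rate, which is exactly the bottleneck described above.

```python
# Software-only stereo matching baseline (illustration, not the SceneScan pipeline).
import cv2

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=112,        # disparity search range, must be a multiple of 16
    blockSize=9,
)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point output
# Depth then follows from disparity via  Z = f * B / d  (focal length f, baseline B).
```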
As an example, consider the case of two cameras with a resolution of 720 x 480 pixels and a frame rate of 30 Hz. If we limit the maximum disparity between corresponding pixels in the two camera images to 100 pixels, we have to compare more than one billion pixel pairs per second. However, if high-quality results are to be achieved, pure image comparison is not sufficient. Modern methods of stereo image processing therefore rely on optimization methods which try to find an optimal assignment of matching pixels from both camera images. This enables a drastic increase in quality, but also increases the computing load many times over. If you leave image processing to ordinary software, you have to choose between fast processing and exact results. This can be remedied by outsourcing image processing to high-end graphics cards. However, they have a high power consumption, which prevents them from being used in mobile systems. To solve these problems, we at Nerian Vision Technologies have developed a special hardware solution for stereo image processing. By means of a programmable logic module - a so-called FPGA - it is possible to map the necessary image processing algorithms directly into hardware. This results in massive parallelization, which leads to an enormous increase in performance compared to a purely software-based solution. Despite this high performance, FPGAs are still extremely energy-efficient, which allows them to be used even on mobile systems.
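A quick worked check of those figures, using only the numbers quoted in the paragraph above:

```python
# Pixel comparisons per second for the example stereo setup described in the text.
width, height = 720, 480
frame_rate = 30          # Hz
disparity_range = 100    # maximum pixel offset searched per pixel

comparisons_per_second = width * height * frame_rate * disparity_range
print(f"{comparisons_per_second:,}")   # 1,036,800,000 -> "more than one billion"
```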
Fig. 1 (above): the SceneScan stereo vision system. Fig. 2 (below): a complete system setup with cameras.
The result of this development work is the SceneScan stereo vision system shown in Figure 1: a small embedded system for stereo image processing. SceneScan is connected to two USB industrial cameras and performs all image processing steps. The calculated depth image is then transferred via Gigabit Ethernet to a PC or another embedded system. A complete system setup with cameras is shown in Figure 2.
Thanks to the FPGA used, SceneScan is able to calculate depth data of more than 30 million pixels per second. This corresponds to a resolution of 640 x 480 pixels at 100 frames per second, or 800 x 592 pixels at 65 frames per second. An example of a calculated depth image, as well as the corresponding image of the left camera, are shown in Figures 3a and 3b.
SPONSORED
BAUMER Tough when it’s rough: new cameras with I P 65/67 protection ensure reliable operation from -40C to 70C Thanks to the extended temperature range from -40 °C to 70 °C, the new CX series cameras with IP 65/67 protection are ideal for demanding applications under extreme conditions. Selected models eliminate the need for additional cooling or heating units and therefore ease thermal integration, saving time and system cost. The IP 65/67 rated housing protects all sensitive camera elements against dirt, water jets and short-term immersion and makes additional housing protection unnecessary. Furthermore, the optional modular tube system with a variable number of extension tubes allows lenses to be quickly adapted to the application with great flexibility. The new IP cameras with Sony Pregius and ON Semiconductor PYTHON CMOS sensors feature GigE Vision compliant interface and are available with six resolutions from 1.3 to 12 megapixels. Series production begins in the first quarter of 2018. Four opto-decoupled outputs with pulse width modulation and an output power of up to 120 W (max. 48 V / 2.5 A) enable control of up to four external lighting units, including adjusting brightness. Using the shape-from-shading method for example, this ensures cost-efficient and precise 3D surface inspection with detection of even smallest deviations in shape without requiring an external lighting controller. The compact 40 × 40 mm housing with M3 mount at each side endures shocks up to 100 g and vibration up to 10 g. Thanks to their light weight of only 137 g in combination with the x-encoded M12 connector, the cameras allow for reliable one-cable solutions via Power over Ethernet (PoE) and are therefore a perfect choice for applications in robotics, e.g. in the automotive industry. The hard-anodized camera surface eliminates the need for additional housing protection which makes them perfect also for the food and beverage or pharmaceutical industry.
The CX series now includes robust IP 65/67 cameras with maximum application flexibility by a selection of more than 70 industrystandard CMOS cameras with resolutions from VGA to 12 megapixels. Sony Pregius sensors of the second generation offer exposure times down to 1 μs and make the cameras ideal in light-intense applications such as laser welding or in high-speed tasks like pick and place to minimize blur. The CX cameras with ON Semiconductor PYTHON sensors enable more than 1000 frames/s by ROI (Region of Interest) selection. Where used in combination with GigE models in burst mode and a sequencer capable of taking image sequences with image-related settings, the cameras offer virtually unlimited solutions in highly-dynamic applications. More information on the new IP cameras of the CX series at: www. baumer.com/cameras/IP65-67 The Nouveau Vision Market Customers love CoaXPress
LMI senses the future Canadian based LM I Technologies, known for its flagship Gocator product line, is a leader in 3D smart sensor technology and MVPro Editor Neil Martin caught up with CEO Terry Arden to ask him some questions and see how things were going The first question was aimed at LMI’s position within the sensor market. The company focuses on inline 3D inspection applications requiring metrology grade resolution. Terry said: “We estimate this market to be around 10-15% of the overall machine vision market size (say $500M) growing at 10-15% CAGR. The larger 3D market is reported to be around $2B today with a bandwidth to grow to $5B in the coming 8 years. If we try to break it down, we have traditional 3D metrology that uses touch probes on CMMs, coordinate measurement machine. “This is rapidly moving into non-contact 3D using laser or structured light scanners by companies like Hexagon/Leica, Faro, Creaform, GOM, Zeiss, and many others. There are growing industrial 3D applications involving robot vision guidance, robot bin picking, and autonomous guided vehicles in logistics and packaging. There is disruptive consumer 3D from Intel Realsense and Apple FaceID which started with Microsoft XBox Kinect. “And there is commercial 3D LiDAR driving the highgrowth autonomous vehicle and drone markets. So 3D is an expanding and varied market space. We consider ourselves one of the top suppliers of 3D inline scanning and inspection solutions and the only company driving ease of use with web browser driven smart sensors. Main geographical markets LMI is an international company with offices serving major design and manufacturing regions around the world including Europe, Americas, and Asia (specifically China). Terry: “We serve a broad range of industrial markets. Automotive, consumer electronics (CE), factory automation, packaging, rubber & tire, road, wood, solar, and battery are some of the major industries included in that list. As far as the applications that we solve, it’s quite a diverse range. Our 3D sensor technology is built on a flexible platform that allows us to configure solutions to enable automation, inspection, and material optimization.” Main competitors As for their main competitors, Terry is clear: “Our main competitors are Keyence, Cognex, and SICK. They make great products. They understand their markets and their customers and are well established worldwide brands. “Despite their size and brand presence, we strive to innovate our products to create performance and user experiences that exceed our competitors by remaining agile to fit and adapt our technology into the challenging applications for which our customers need solutions.
Developing sectors

When asked how he sees the machine vision and robotics sectors developing over the coming years, Terry commented: "Machine vision and robots are a 'hot' technology today. Industries require automation to compete and these technologies are among the major enablers to achieving that goal.

"3D machine vision provides the 'eyes', and in many cases the 'mind', for robots. As robot technology grows, so does 3D. I believe we are entering an exponential period of growth. It's a very exciting time."

Trends

As for trends in both markets, Terry was thoughtful: "I think advancements in collaborative robot technology, deep learning software, and the role 3D smart sensors will play in that equation are major trends. This fascinating relationship between robot and smart sensor is the critical next step in realizing the smart factory. Delivering the next generation of responsive robot is going to demand a lot of brainpower from a sensor software and integration perspective. How the industry approaches this challenge will have a profound impact on the future of manufacturing."

Deals

We then turned to a business question. As many deals are being done in the machine vision sector, does he anticipate buying companies over the short to medium term?

"Our biggest challenge today is we have too many opportunities and not enough engineers. If we can identify companies with great talent and innovation that align well with our smart technology approach, we would certainly engage them in a conversation to join us so we can accelerate our growth together."

Prospects

Coming closer to home, how does he see the company's prospects? "We are very optimistic about where LMI is positioned in today's landscape. If we
continue to be true to our roots, developing high-quality, market-driven 3D solutions with first-class support and a keen eye for innovation that benefits the customer, then we can continue to learn, grow, and contribute something meaningful to the world of industrial production."

Showcase

As we come up to the halfway point of the year, what does he have in store for the market over the remaining months of 2018?

"We are finalizing an exciting set of products that will come out later this year. These products will redefine resolution, speed, size, and what it means to be 'smart'. We are achieving new levels in resolution, operating at high speed in a product size that is truly groundbreaking. This technology combines both 2D and 3D and will change the face of inline inspection. Stay tuned!"
ABOUT TERRY ARDEN, CEO

Terry Arden joined LMI Technologies in 2003 as Chief Technology Officer, where he applied his engineering and management skills to consolidate several R&D groups onto a common sensor platform. In 2009 he took over as CEO and created the company's flagship Gocator product line, offering an all-in-one sensor capability focused on ease of use and leveraging web technologies. Prior to LMI, he worked in several senior management roles, including Founder and President of Logical Vision (later sold) and Director of Operations at Coreco (now part of Teledyne DALSA). He holds a BSc in Computing Science and Mathematics from Simon Fraser University, and several patents in image processing and optical triangulation methods.
Sad industry news: Allied Vision's CEO Frank Grube passes away

A statement from Allied Vision on the sad loss of its CEO Frank Grube: the visionary manager was a respected personality in the machine vision industry and turned a small German distributor into a world-leading camera manufacturer.

Frank Grube, Allied Vision's President & CEO, suddenly passed away on April 14, 2018, while on weekend leave at his family home. "We are all shocked by this terrible loss," said Michael Cyros, Chief Commercial Officer of the company and a member of the Management Board. "Our thoughts are with his wife and his family. Frank was a passionate leader and all Allied Vision employees know how much the company owes its success to his entrepreneurial spirit."

Alexander van der Lof, Chairman & CEO of Allied Vision's parent company TKH Group, said: "It is with deep sorrow that we heard of Frank Grube's death. Frank fought for his company and the people in the company. Together with his team at Allied Vision, he was on the right track to disrupt the vision industry."

A lifetime for vision

Frank Grube spent most of his career in the computer vision industry, pioneering the rise of machine vision in the 1990s. He was appointed CEO of Manfred Sticksel CCD Kameratechnik in 2000 after the small German camera distributor was purchased by Augusta AG (now integrated into TKH Group). Grube swiftly conducted a strategic
turnaround of the company and renamed it Allied Vision Technologies in 2001. Anticipating the trend towards digital interfaces in machine vision cameras, he transformed the company into a camera manufacturer, building up R&D and production. With its FireWire cameras, Allied Vision drove the digitization of machine vision camera interfaces and quickly became one of the leading machine vision camera manufacturers worldwide.

Frank Grube grew the company into a truly global player with the acquisition of Canada-based Prosilica in 2008 and of infrared and specialty camera manufacturer VDS Vosskühler in 2011. He also expanded the footprint of the company by opening sales and support offices in the United States (2006), Singapore (2010), and China (2012). He invested a large amount of his time and energy in further building up business in Asia and in embedded vision, which he considered the key growth markets for Allied Vision.

Strong human values

Frank Grube was not only a first-class businessman, he was also driven by strong values, which made him highly appreciated and respected not only within Allied Vision, but also across the whole industry. "Frank was a demanding leader, but he also cared a lot for his employees. He was very attentive to employees' well-being and always cared to share success with his whole staff through company-funded parties, Christmas presents or extraordinary bonus payments," remembers Gerd Völpel, Chief Operations Officer.

Generously giving back was an important value for Frank Grube. He wanted Allied Vision to share its success with the communities in which it was located and donated every year to local charities supporting children and youngsters in need – something he didn't want publicity coverage for. In 2014, when Allied Vision celebrated its 25th anniversary, he made an exception to this rule and included employees, customers and partners in a fundraising campaign for the benefit of Sightsavers, a charity performing eye surgery in developing countries.

Building on his legacy

"Frank's spirit obliges us to perform and make a success out of what he has built," said Alexander van der Lof. While the succession process is already under way, Allied Vision's Management Board remains fully committed and empowered to run the company's operations, as Frank Grube had ensured when he took medical leave in December 2017. "Frank's vision, his ambition and his fighting spirit have been our inspiration for 18 years. They will be even more so in the future to make Allied Vision the leading company he wanted it to be," said Andreas Gerk, Chief Technology Officer.
PUBLIC VISION

Is Goldilocks about to exit stage left?

Stock market soothsayers are predicting that things are about to change; that we are at that part of the cycle when lots of companies are doing deals, knowing that money is cheap, and that the good times we saw in 2017 might just run out of steam at the end of this year. Many economies have been in that Goldilocks position of being neither too hot, nor too cold, and enjoying the benefits. Editor Neil Martin asks: is the porridge now turning a little cooler?

The Goldilocks term was given to economies that enjoyed a blissful state in which the business environment was about as favourable as it could get. But are things about to change? Many firms think so, not least Luca Paolini, chief strategist at Pictet Asset Management.

He said: "The global economy is changing gear and investors need to be prepared for markets to react as this unfolds.

"It's still a little bit too early to call time on the equity market rally, but the outlook for bonds is improving. We are therefore moving to neutral positions across asset classes.

"As a result, we've increased our allocation to bonds, prompted by a steady rise in US 10-year Treasury bond yields at a time when there are worrying signals about the momentum of global economic growth.

"That's not to say we expect economies to roll over – but that the gap between hitherto very bullish sentiment surveys and more moderate underlying economic data is closing.

"This could signal that the two are coming back into synch again, or simply that growth momentum has peaked.

"If the latter is the case, then there's reason to expect the Fed to rethink the pace of tightening. The rise of US 10-year Treasury yields above 3 per cent is already starting to be felt by some interest-rate sensitive sectors of the economy.

"The US government's fiscal spending programme is likely to mitigate some Fed tightening, but won't reverse it entirely. If the Fed does recalibrate, we think the five-year part of the Treasury curve is most compelling.
"The best value in the fixed income markets remains emerging market local currency debt, followed by US Treasury bonds.

"Within developed equity markets we prefer energy stocks, which should draw strength from rising oil prices.

"Whilst we are still underweight defensive stocks, this is an area we may look to add to if and when weakness in global economic growth persists.

"Some cheap cyclicals like financials and materials are currently more attractive, but are becoming less compelling based on their relative valuation to defensives.

"At a country and regional level, we prefer the Eurozone to the more expensive US market and see likely Euro weakness as a catalyst for outperformance.

"We still like emerging markets in the longer term but remain neutral on emerging market equities due to strengthening headwinds."

These comments were penned before US President Trump decided to withdraw his country from the Iran nuclear deal. This, for many, signalled the start of a new era of volatility. Tom Elliott, International Investment Strategist at deVere Group, sounded a warning: "Investors should expect an increase in market volatility following Trump's announcement that he is quitting the Iran nuclear deal.

"There will be global stock market sell-offs as the world adjusts to the news.

"Due to the severity of the U.S. President's approach, in the shorter term at least it is likely that gold and the U.S. dollar will rally on growing fears of further conflicts breaking out in the Middle East; and risk assets, namely stocks and credit markets, may weaken. Oil may rally strongly.
mvpromedia.eu
“We will need to wait for the full Iranian response. However, I expect that they will try to continue to appear the reasonable partner and work with Russia and the Europeans, playing them off against the U.S. If they take a more aggressive stance, oil, gold and the dollar will go considerably higher. “Geopolitical events such as these underscore how essential it is for investors to always ensure that they are properly diversified - this includes across asset classes, sectors and geographical regions – to mitigate potential risks to their investment returns.”
“IT’S STILL A LITTLE BIT TOO EARLY TO CALL TIME ON THE EQUITY MARKET RALLY, BUT THE OUTLOOK FOR BONDS IS IMPROVING. WE ARE THEREFORE MOVING TO NEUTRAL POSITIONS ACROSS ASSET CLASSES”
Ironically, volatility is good for many short-term investors (it's how they make a turn), and even for long-term holders the effects of sudden peaks and troughs are ironed out. For companies trying to plan their way forward, however, it can be very tricky.

Invesco Perpetual Global Equities fund managers Stephen Anness and Andrew Hall believe it is right to embrace volatility: "For the first time in a while, we are finding new ideas that stem from more defensive areas. Financials and energy are still fairly elevated but, interestingly, Pharma and Biotech spreads are much wider now than in the past.

"If we think back to early 2016, the market had shunned anything cyclical and favoured businesses with traits such as defensiveness, quality and stability. As such, the valuations of traditionally defensive businesses were pushed to very high levels; something we labelled 'stability at any price'. That led us to find opportunities in more cyclical (but very high quality) businesses.
“THAT DYNAMIC HAS NOW REVERSED SOMEWHAT AS A NUMBER OF THE MORE STABLE, DEFENSIVE STOCKS HAVE DE-RATED.”
"That dynamic has now reversed somewhat as a number of the more stable, defensive stocks have de-rated. We heard a very good soundbite from a favoured contact last week: 'the market is now giving you some reward for playing defence'. We very much agree with this sentiment. For the first time in a while, we are finding new ideas that stem from more defensive areas."

As the year moves on, a different terminology is entering the market chit-chat. Words such as volatility and defensive are coming to the fore. So, with Goldilocks retreating, are the three bears about to take centre stage? We'll have to wait and see, but keep checking the temperature of your porridge.
VISION BUSINESS
The window is open, but for how long?

Editor Neil Martin looks at the window of opportunity for machine vision companies who want to float, raise finance, or sell. But how long before the window starts to close?

Management teams in many mid-sized machine vision companies will no doubt be thinking that the time is right to make some hay whilst the sun shines. Minds will be turning to IPOs, trade sales, or getting support from private investment houses. The window of opportunity is open, but for how long?

To put it bluntly, now could be the best time to expand, or to cash in. The machine vision sector is looking healthy at the moment and that is having two effects: firstly, companies within the sector are looking to grow via non-organic deals, which starts to increase company values; and secondly, those on the outside of the sector, investors, are beginning to realise that committing money to this sector could deliver some decent returns.

Which means that many company management teams are faced with some fundamental decisions about their futures: whether to forge ahead by themselves, or complete a trade sale. And now is the ideal time to start doing corporate deals. The sector is maturing, but it's not over the hill yet – indeed, some of its best years are yet to come as general industry, and increasingly the consumer sector, embraces what the machine vision sector has to offer.

We are also in what investors call a favourable environment. The global economy is doing okay, even though there are pockets of worry about what's around the corner in terms of debt and the still-felt repercussions of the last financial crisis. All good.

However, clouds are gathering on the very distant horizon. Stock market observers are beginning to say that the latest raft of corporate deals is a sign of a late cycle, signalling
that things might not be as favourable in the second half of 2018 and in 2019 as they were in 2017. Money is set to become more expensive, slowing down M&A activity and capital expenditure, and growth in the global economy could be putting the brakes on.

Over the years there have been a number of deals, with smaller companies being snapped up by groups who know that acquisitions are not only good for widening the product range, but also good for revenue. Those companies with a public quote also know that they have to grow, otherwise they will feel the wrath of disappointed shareholders. As long as they don't do too many deals for shares rather than cash, the deals should be earnings-enhancing. The problem is that they must do good deals, and that's a lot harder when everyone has a fair idea of what they are worth and wants top dollar.

Companies with very strong products, a particular part of the marketplace sewn up and an experienced team will have offers coming in on a regular basis. Many don't want to sell, of course, so are faced with the question of how they can exploit their own success and a potentially rosy future.

Even small companies can consider an IPO, but the stress of having your company turned inside out by advisers and investors, and then having to regularly communicate your failures, as well as successes, to an unforgiving stock market, is not to everyone's taste. One company which decided to go down the IPO route was STEMMER IMAGING (see our interview with the CEO of the company on page 32). They timed their flotation to perfection, seeing the open window as the ideal chance to go to the markets for money to fund their expansion throughout existing markets, and into new ones.

If an IPO seems too onerous, then seeking help from an investment house is another option. It saves the rigours of a public listing, even though their slide rule will be just as demanding. But they have databases of institutional and private investors who would just love to invest in sound machine vision companies.

Money is still cheap and some say that Western economies are going to have to resist moving interest rates up too quickly, which means that the 'new normal' might be with us for some time yet. So expect a lot of M&A activity over the remainder of the year, as the sector's companies squeeze the most from the current favourable conditions. The trick will be to get the deals done before the window closes, and the exact timing of that is anyone's guess. But by the time we all gather in Stuttgart in November for Vision 2018, the sector might look a little different than it does now.
CONFERENCES
CONFERENCE FOCUS
Milton Keynes, so good they named it once

MVPro Magazine Editor Neil Martin wends his way again to the heart of England and then takes a look at what else has been keeping the conference sector busy.

UKIVA MVC 2018

The second iteration of an industry show is rightly a nervous time. The question was always going to be: was the inaugural event good enough to attract people back for a second showing? As it was, from the event floor so to speak, the UKIVA MVC appeared a resounding success. I've yet to receive any official word on what the organisers and UKIVA thought (they were coy about giving out numbers last time), but from an attendee's point of view, everything looked good.

Two things stand out for me. Firstly, the seven theatres which surround the booths were very busy and well attended, and the presentations were interesting. Secondly, because
each of the booths is the same size, every exhibitor, no matter how small or large, gets a fair pitch. There is none of the elitism of shows where the larger companies compete to outdo each other in floor space, gimmicks and product.

If I had any criticism, I would say that the organisers of the theatres need to give the speakers a strict 20 minutes of talk time and five minutes to answer questions, giving attendees the chance to get from one talk to another without sprinting across the venue, interrupting talks while trying to find chairs, and then doing it all over again 30 minutes later. I seemed to have chosen similar talks to a number of other people, and we made a hurried procession from theatre to theatre, desperate not to miss anything.
Overall the event looked far busier than last year, with a more diverse group of attendees. I did notice one or two companies missing from last year, but that may have been for a number of reasons. Of course, these shows owe their survival to whether the companies paying the fee to set up a booth and staff it for a day find it worth their while. What you can never see is whether business is being done, or prospect lists being expanded.

The feeling was that the industry supported the first event more out of a sense of duty than on strict business criteria, and maybe that goodwill rolled over to the second show. On that showing, it would appear that this event has a bright future, even though I suspect the business case for allocating marketing funds will now be stricter than before.

Funnily enough, the need for a UK machine vision show might be greater than ever before. Next year we have the added excitement that, come the third show (let's assume again in May 2019), the UK will have pushed off from the continental pier it has been tied to for the last few decades and begun to plot its own course across some possibly choppy waters. Brexit might be the biggest example of mass lunacy since they took to deciding whether witches were innocent by drowning them, but it's coming, and there is little we can do about it. Which might be good news for this niche trade show, as it demonstrates that machine vision, and robotics, is alive and well, and thriving, in the UK. Let's hope so.
THE VISION SHOW – RECORDS SET Also, before Milton Keynes we had, across the pond, The Vision Show and Conference. Organisers AIA said that the show broke all previous records with over 2,500 people in attendance, which was a 12% increase over the previous record set in 2016. It’s North America’s leading event devoted to machine vision and imaging, and features over 150 exhibitors and 23,000 square feet of floor space. On display are the latest machine vision systems, imaging components, deep learning vision software, embedded vision solutions, collaborative robots and more. “As a first time exhibitor, I was impressed with the amount of connections we made with new customers and the level of engagement through conversation,” said Jamie LaCouture, Tradeshow Supervisor with Thorlabs. “We will be returning for the next show in 2020!” “The Vision Show was an excellent opportunity to not only obtain the latest industry trends and information, but to get targeted and qualified leads,” said Chris Beevers, Manager of Device Connector Solutions Engineering & Market Development with Phoenix Contact. “We are looking forward to the next event!”
"The show was busy, the visitors were well informed about vision, and the networking among exhibitors was excellent!" said Laurie Partington, Senior Marketing & Communications Representative for Matrox Imaging.

The Vision Conference, which runs in parallel to the exhibition, featured five in-depth tracks exploring vision integration, embedded vision, IIoT and AI, AIA's Certified Vision Professional (CVP) Basic level training program, and CVP Advanced. Over 275 people attended the five-day conference, a 10% increase over 2016.

Photius Wins Startup Competition

The Vision Show Startup Competition gave five upstart vision companies an opportunity to generate awareness of their technology and find new sources of funding. On April 13, a panel of judges made up of venture capitalists and industry leaders announced that Photius, a 3D measurement technology company using drones, was the winner of the $5,000 grand prize. Photius' technology combines photogrammetry, laser scanning, and structured light measurements and promises the range, accuracy, and speed of a laser scanner at a tenth of the cost of current devices. Photius plans to target infrastructure and construction management applications during the initial rollout of its technology, and the $5,000 prize will allow it to accelerate its technology and business development activities.

A Record Time for the Vision Industry

The record attendance at The Vision Show and Conference mirrors the overall growth the vision industry has recently experienced. According to AIA's latest statistics, more imaging components and systems were sold in North America last year than ever before, as sales of machine vision solutions grew 15 percent over 2016 to $2.6 billion. "Interest and enthusiasm about vision technology is building, as we saw first-hand at The Vision Show. This is an exciting time for our industry," said Alex Shikany, Vice President of AIA. "The companies who exhibited at The Vision Show are leading the way with tomorrow's innovations in this space. Be it embedded vision technology, smart cameras, AI, machine learning, or 3D imaging, there is a solution to nearly every challenge customers will face."
AUTOMATICA 2018 Taking place in Munich between 19th and 22nd of June, automatica 2018 sets out its stall as including everything that revolves around optimizing a production operation. The organisers said: “The leading trade fair features innovative automation and robotics solutions with pioneering key technologies for all branches of industry. So you can manufacture higher-quality products quicker and more cost effectively.”
Discover automation technology solutions for your entire value chain

automatica claims to feature the world's largest range of robotics, assembly solutions, machine vision systems and components. It gives companies from all branches of industry access to innovations, knowledge and trends with a great deal of business relevance. As the digital shift continues, the aim of automatica is to ensure market transparency and provide orientation with a clear objective: "being able to manufacture higher-quality products with greater efficiency."

The last time the show took place was in 2016, when it was attended by 833 exhibitors from 47 countries, up 15% on the previous edition. Over 43,000 visitors from some 100 countries attended, up 25%; 35% of them were international visitors, up 50%. Some 66,000 m² of exhibition space is used to provide an experience of all relevant key technologies at a single location.

The organisers said: "automatica is the only event that brings all pioneering key technologies together at a single location. After all, intelligent industrial operations are only possible when the right hardware is combined with appropriate software and specific know-how. That is how the leading trade fair significantly advances the realization of smart production scenarios."

Companies from the automation technology sector are spread across six exhibition halls presenting
the entire range of industrial automation solutions. Visitors can see components and systems, complete solutions and services in the following sectors: • assembly and handling technology (Integrated assembly solutions); • industrial robots and professional service robotics; • machine vision; • positioning systems; • drive technology; • sensor technology; • control system technology and industrial communication; • safety technology; • supply technology; • software and cloud computing; • services and service providers; • research and technology.
VISION 2018

Also looming on the far horizon is Vision 2018, which takes place in the spiritual heartland of machine vision, Stuttgart. In the next issue of MVPro Magazine we will be taking a long look at what's in store at Europe's largest machine vision show.
3rd European Machine Vision Forum Where research meets industry
Vision for Industry 4.0 and beyond Top invited talks, panel discussion, networking and teaser sessions for all posters & demos Submit a contributed talk by June 22, 2018 Submit a poster and/or demo by August 10, 2018 Sponsored by:
September 5-7, 2018 Bologna Business School Villa Guastavillani, Bologna, Italy
More information at
www.emva-forum.org
© GoneWithTheWind/Fotolia