Illumination | MVPro 7 | March 2018


STEMMER IMAGING READIES FOR IPO

PLAN YOUR 2018 CONFERENCES

LET THERE BE LIGHT – ILLUMINATION

ISSUE 7 - FEBRUARY 2018

WHAT LIES AHEAD FOR THE MACHINE VISION SECTOR?

mvpromedia.eu

MACHINE VISION PROFESSIONAL



CONTENTS

5 WELCOME TO MVPRO

6 LATEST NEWS The latest and biggest stories from the Machine Vision sector

9 NEWS A round-up of what’s been happening in the Machine Vision sector

22 ILLUMINATION Editor Neil Martin asks two illumination companies and one filter company about their main product focus for the year

26 2018 Editor Neil Martin asks around to see what senior sector executives think 2018 will hold for the machine vision industry

38 CONFERENCES We take a look at what happened at the end of 2017 and also look forward to 2018

46 TELEDYNE DALSA Considering a Smart Camera? Keep These Five Key Features in Mind

50 MATROX How a NASA Facility is Digitizing over 90,000 Planetary Mission Images in record time and with perfect accuracy

52 PUBLIC VISION As the main stock markets continue to fluctuate, one of the sector’s leading lights is about to go public. Editor Neil Martin looks at the backdrop

54 BUSINESS STORIES A round-up of some of the biggest business stories within the machine vision sector, including Entner, North American markets and the UK budget

MVPRO TEAM

Neil Martin
Editor-in-Chief
neil.martin@mvpromedia.eu

Alex Sullivan
Publishing Director
alex.sullivan@mvpromedia.eu

Cally Bennett
Group Business Manager
cally.bennett@mvpromedia.eu

Paige Haughton
Sales and Marketing Executive
paige.haughton@cliftonmedialab.com

Visit our website for daily updates: www.mvpromedia.eu

MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0) 1179 089686 © 2018. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies. Designed by The Wow Factory www.thewowfactory.co.uk



MACHINE VISION CONFERENCE & EXHIBITION, 16 May 2018, Arena MK, Milton Keynes, UK

60 technical seminars, from newcomers to vision through to expert vision users and engineers

• 3D VISION
• CAMERA TECHNOLOGY
• OPTICS & ILLUMINATION
• SYSTEMS & APPLICATIONS
• DEEP LEARNING & EMBEDDED VISION
• UNDERSTANDING VISION TECHNOLOGY
• VISION INNOVATION

An exhibition of the latest vision technology and services from the leading companies in the world of industrial vision and imaging.

Official event publication

FREE to attend. FREE parking.

REGISTER TO ATTEND

@UKIVAconfex

machinevisionconference.co.uk


Enquiries Chris Valdes (chris.valdes@ppma.co.uk) Tel: +44 (0) 20 8773 5517



WELCOME TO 2018

Well, as this is the first MVPro issue of 2018, I can legitimately say happy new year to you all. I hope you all had a good break and are looking forward to what’s set to be a very exciting year indeed.

Sounding out the industry (see the comments in our main feature on 2018) and reading industry reports suggest that 2018 will be a rewarding year for those companies that are properly positioned to take advantage of the growing demand for machine vision. By all accounts, the macroeconomic climate is good (bar any stock market corrections, which seem inevitable) and the machine vision sector has a sense of confidence that bodes well for the future. The sector is being strengthened as it becomes integral to the development of so many industries, especially robotics. The world is now relying on machine vision in a way that is invigorating.

As I write this introduction, and as if to emphasise the sense of expectation, I received an email from Messe Stuttgart inviting me to an event. The intro reads: “Machine vision is booming. For years now, the sector has been reporting record growth and turnover. The reason is that this key technology is used not only in the worldwide automation competition in the classical industrial sectors, but is constantly conquering new areas of application.”

It will also be interesting to see how individual sector companies develop over the year, especially in terms of bottom-line growth and M&A activity. STEMMER IMAGING has kicked off the year in style, announcing that it is seeking a public listing on the Frankfurt Stock Exchange. I always find it fascinating to watch companies move from the relative obscurity of private status to the spotlight of having their shares quoted on a public exchange. Some management teams love the attention and thrive; others find that the constant explaining of their results and handling of the public arena (nervous shareholders, fund managers and investment analysts) is just too distracting.

It’s certainly good to see sector companies seek public status, as the greater profile allows them access to greater funds and should fuel growth. The flotation of STEMMER IMAGING will, I’m sure, be followed with great interest by everyone in the sector and I wish them well for what will be a very busy time.


In this issue we also take a look at the illumination sector and at the key conferences coming up this year, with automatica (Munich) and VISION 2018 (Stuttgart) underpinning a busy year for trade shows. I hope to see you at both.

Neil
Neil Martin, Editor, MVPro

Neil Martin, Editor
neil.martin@mvpromedia.eu
Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB

MVPro: B2B digital platform and print magazine for the global machine vision industry
RoboPro: B2B digital platform and print magazine for the global robotics industry
www.mvpromedia.eu



NEWS

STEMMER IMAGING PLANS IPO TO RAISE €50M

Just seven months after being sold, STEMMER IMAGING (Puchheim, Munich, Germany) is planning an IPO in the first half of the year to raise around €50m from the placement of new shares.

The public offering will only be made in Germany and the shares will be listed on the Frankfurt Stock Exchange. Currently, SI Holding (Munich, Germany), a PRIMEPULSE Group company, holds all the shares in STEMMER IMAGING AG. It will continue to hold at least 51% of the shares in the company after the IPO capital increase is carried out and the existing shares are sold in a secondary offering. The company regards itself as a pioneer of machine vision and pitches itself as one of the largest independent solution providers in the area of machine vision components and systems in Europe, driven by product innovations and targeted acquisitions.

“STEMMER IMAGING HAS GROWN VERY SUCCESSFULLY OVER THE PAST FEW YEARS. WE ARE NOW IN A VERY GOOD POSITION TO PROFIT FROM THE CONTINUED GROWTH OF THE MARKET FOR DIGITAL MACHINE VISION”

In an official statement, STEMMER IMAGING said it wanted to strengthen its current position and expand, taking advantage “…of the growing challenges of the digital world (e.g. embedded vision and hyperspectral imaging). Additionally, STEMMER IMAGING plans to continue making further company acquisitions in the future, in the area of both machine vision and non-industrial applications that use technologies complementary to machine vision – such as the entertainment, transport and food industries.”

Created in Germany in 1987, the company currently has 260 employees in 19 countries in Europe and generated sales revenues of EUR 88.3 million in 2017.

CEO of STEMMER IMAGING Christof Zollitsch said: “STEMMER IMAGING has grown very successfully over the past few years. We are now in a very good position to profit from the continued growth of the market for digital machine vision. In the coming years, we plan to boost our growth through focused international expansion, and improve our profitability in a targeted way by means of concept and product innovations. We are convinced that an initial public offering is the next step for us to take in order to drive our growth strategy.”

For further details, please turn to Public Vision.



LATEST NEWS

MACHINE VISION INDUSTRY SECTOR ON GROWTH PATH

The machine vision industry is continuing to grow and prosper. That is the confident view from the joint CEO Round Table of Messe Stuttgart and the VDMA Machine Vision Association, which has just taken place at the Stuttgart television tower. The event was held just nine months before this year’s VISION opens its doors in Stuttgart from 6 to 8 November, and preparations for the leading world trade fair for machine vision are well under way.

Team Director at Messe Stuttgart Florian Niethammer spoke at the event, saying: “The prospects are highly promising. The industry is still on a growth path. In Germany alone, growth in the machine vision industry in 2017 provisionally amounted to 18 per cent. Throughout Europe, turnover in the machine vision industry rose by between 12 and 14 per cent last year. All the forecasts point to another successful year in 2018.”

Over 450 exhibitors are again expected to take part in VISION 2018. More than 300 companies from all over the world have already registered for the trade fair. They include market leaders such as Basler, Cognex, ISRA Vision, MVTec, Teledyne DALSA, Sony and Stemmer Imaging. The first-time exhibitors include, for example, Connect Tech (Canada), Micro-Epsilon Messtechnik (Germany), Genesi Elettronica (Italy) and Neadvance Machine Vision (Portugal).

In addition to traditional industrial applications, non-industrial applications have also become much more important in the last few years. The exhibitors will present, for example, applications for the areas of traffic, food and beverage technology, and medical technology. Machine vision is also gaining ground in the agricultural sector, for example in sorting tasks during or after harvesting, in so-called precision farming or through greater use of drones. Special synergy potential and mutual exchange opportunities will be created here this year due to Intervitis Interfructa Hortitechnica, the technology trade fair for wine, fruit juice and special crops, which will be held concurrently with VISION.

Niethammer added: “We have an exciting VISION year ahead of us; all the signs still point towards growth. The industry is also preoccupied with the megatrends of embedded vision and deep learning.”

With embedded vision, machine vision intelligence is migrating from external PCs into devices. The biggest drivers of embedded vision technology at present include autonomous driving, but also integrated face and object recognition in smart cameras or surveillance cameras. In machine vision it is therefore possible, for example, to integrate intelligent camera modules in machines or robots, thus making them an indispensable component in the implementation of the smart factory.

Deep learning is a radically new approach for solving image recognition tasks. Unlike current processes, these self-learning systems are taught with a large number of images and scenarios, which they can draw on during testing in the production process to recognise even the smallest deviations. These systems are therefore being continuously optimised still further.

Niethammer: “Another topic is hyperspectral imaging. In addition to purely recording images, this technology can also generate spectral information from different wavelengths. This information is then used to make statements on the chemical properties of objects. Hyperspectral imaging has become financially affordable in the last few years and its operability has been made so easy that its results can be interpreted, for example, by colour marking of different materials even without special knowledge.”



Tough where it counts. CX series IP 65/67 cameras for applications with temperatures ranging from -40 °C to 70 °C.

Just right for applications which go beyond the ordinary due to dust, water spray or extreme temperatures! A hard-anodized housing makes the new IP 65/67 CX series cameras with up to 12 megapixel resolution suitable for the food and pharmaceutical industries. Find out more: www.baumer.com/cameras/IP65-67


NEWS

STEMMER IMAGING ACQUIRES DATA VISION

STEMMER IMAGING, a European market leader in the field of machine vision technology which has just announced plans for an IPO, has acquired Data Vision by means of an asset deal. Data Vision is a highly specialized provider of imaging and machine vision solutions in the Netherlands, and was a business unit of Batenburg Mechatronica.

Managing director of Data Vision Harm Hanekamp said: “The acquisition by STEMMER IMAGING is the logical next step in serving this market segment. By being part of the STEMMER IMAGING Group, Data Vision will reach the next level and we can now follow our customers into Europe and benefit from STEMMER IMAGING’s larger product range and superior level of competence. Our corporate cultures are very similar and we know each other very well. This will strengthen services available for customers in the Benelux countries, who will have access to the support and security they need to continue to develop their vision applications with our help.”

Pictured, from left to right: Ralph van den Broek (CEO Batenburg), Christof Zollitsch (CEO STEMMER IMAGING AG), Gert-Jan de Waard (Director of Batenburg Mechatronica B.V.) and Dietmar Serbée (Director of STEMMER IMAGING B.V.).

Director of STEMMER IMAGING BV Dietmar Serbée said: “STEMMER IMAGING’s and Data Vision’s product portfolios have a high degree of overlap in terms of suppliers represented. Thus, the businesses perfectly complement each other.”

He added: “The acquisition of Data Vision by STEMMER IMAGING is also beneficial for our suppliers, due to the doubling of the sales and support workforce through a competent partner with a high degree of market know-how and technical expertise. As one of Europe’s largest imaging technology providers, this enables us to support the technological developments of our manufacturers even better for the benefit of our customers.”

LMI TO SHOWCASE NEW RUBBER AND TIRE SCANNING PRODUCTS AT TIRE TECHNOLOGY EXPO 2018

LMI Technologies (Teltow/Berlin, Germany), a leading developer of 3D scanning and inspection solutions, will officially launch its new high-speed, high-sensitivity Gocator line profilers, designed specifically for rubber and tire applications, at Tire Technology Expo 2018. The expo is Europe’s leading international tire design and tire manufacturing exhibition and conference. It takes place in Hanover, Germany, from February 20-22, 2018.

The company said that the show will also be a great opportunity for industry professionals to immerse themselves in LMI’s innovative FactorySmart approach to inline automation, inspection and optimization. FactorySmart goes beyond the simple data acquisition of standard sensors to provide customers with a flexible, distributed and scalable solution to the real-world challenges of tire manufacturing today. Gocator delivers this capability through a complete, built-in 3D inspection platform that runs seamlessly within the factory environment to dramatically improve rubber & tire production.

Christian Benderoth, Regional Development Manager for LMI Europe, said: “LMI has been developing solutions for the rubber & tire industry for many years now, and we’ve built that knowledge and experience into our new line profiler design. Show attendees will find our Gocator 2430 and 2440 deliver the speed and sensitivity required to meet the challenges of low-contrast rubber & tire scanning. We welcome everyone to our booth to see these next-generation sensors in action.”

Visitors to the LMI booth (Hall 21, #9008) will be able to interact with live product demos of the all-new Gocator 2430 and 2440 line profile sensors in various tire scanning applications.



Experience the power of deep learning with MVTec HALCON!

Try the new version for free: www.halcon.com/now


NEWS

PIXELINK LAUNCHES NEW VERSION OF PIXELINK CAPTURE Pixelink (Ottawa, Canada), a global provider of industrial cameras for the machine vision and microscopy markets, has released a new version of their software application Pixelink Capture. The new release adds an enhanced set of callback filters; improved video and still frame capture mechanisms; the ability to flip and rotate images; and, enhanced region of interest (ROI) capability. It also supports continuous auto exposure, user interface enhancement and support for the latest Pixelink Cameras, which utilize the Sony Pregius IMX264, IMX267 and IMX304 image sensors.

Vice President of Engineering Lisanne Glavin said: “I am very pleased and proud with this new release of Pixelink Capture. We have a talented team of engineers here at Pixelink, and their commitment to quality is reflected in this new release that offers improved functionality to this best in class software application.”

President of Pixelink Paul Saunders added: “We are constantly looking to improve the user experience for our customers. We are excited about this new release. Customers using our cameras to build vision application solutions can now take advantage of this robust piece of software to better their multi-camera inspection application development and processes.”

NEW SHUTTERLESS THERMAL CAMERA MODULE

Tamron (Saitama, Japan), a leading manufacturer of optics for diverse applications, has developed a shutterless thermal camera module. It has achieved this by adopting an amorphous silicon thermal sensor that has excellent temperature reproducibility during temperature changes.

The company said that classic thermal camera modules need to acquire reference data by actuating a mechanical shutter approximately every two to three minutes to achieve accurate temperature measurements and stable thermal images. However, when the shutter closes, some noise is generated by the shutter and the video stops during that time. Furthermore, mechanical shutters are naturally prone to failure during long-term operation. Now Tamron has developed a shutterless thermal camera module by adopting an amorphous silicon thermal sensor that, the company claims, has excellent temperature reproducibility during temperature changes.



NEWS

EYEVISION SUPPORTS VISION BOX BY HIKVISION AND INTEL REALSENSE D400

As well as supporting Hikvision cameras, EyeVision now also supports the Vision Box. The company said that many more applications can be solved with the combination of EyeVision and Hikvision.

Key features of the Vision Box:
• On-board Intel 1.91 GHz CPU
• 2 Intel-chip GigE ports
• 2 independent HDMI display outputs
• Ultra-compact structural design, suitable for industrial environments
• High level of protection
• 4 GB DDR3L memory; optional SSD capacity

Features of the EyeVision software:
• Over 100 commands for 2D and 3D measurement and inspection applications
• Drag-and-drop programming
• NEW: Compilation of projects: saves all components of an inspection solution (inspection program, process mode, images) in one complete folder
• Several configurable views: development view, adjustment view, user view, as well as user administration


Intel RealSense D400

EyeVision now also supports the new Intel RealSense D400. The 3D command set of the EyeVision 3D software allows the capture of 3D images and the processing of point clouds with the Intel sensor. The new RealSense stands for high-speed stereoscopy and exact depth perception, combined in one system.

With the combination of RealSense and EyeVision software, it is possible to calculate point clouds from the depth image, which are processed on the sensor, in only a few milliseconds. The user can then run evaluations on the point clouds with the commands of the EyeVision software, for example height and depth measurements, object counting and recognition, and 3D pattern matching with the 3D Object Match.

The Intel RealSense is based on the Vision Processor D4 and uses advanced algorithms to process raw image streams from the depth cameras, computing high-resolution 3D depth maps without the need for a dedicated GPU or host processor. A variety of depth modules and housed camera devices provide an easy solution for rapid integration into industrial vision systems.

EyeVision 3D has exploited the advantages of high-speed stereoscopy for several years now and has integrated several stereo 3D sensors, for example the Ensenso by IDS or the previous model of the RealSense. The 3D commands of the EyeVision software are so easy and user-friendly to manage that programming knowledge is not necessary. Commands such as the 3D Blob – for object detection – are so straightforward in their use that they feel almost tactile. EVT’s 3D image processing emphasizes not only precision but also simplicity.

Intel RealSense technology supports a wide range of operating systems and programming languages. EyeVision 3D enables you to extract depth data from the camera and use the interpretation of this data on the platform of your choice, including Intel and ARM processors.
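For readers who want to experiment with the D400 outside of a packaged tool, the depth-to-point-cloud step described above can be reproduced with Intel’s freely available pyrealsense2 SDK. The snippet below is a minimal sketch, not EyeVision’s own code; it assumes a connected D400-series camera and that the pyrealsense2 and NumPy packages are installed.

```python
# Minimal sketch (not EyeVision code): grab one depth frame from a RealSense
# D400-series camera and convert it to an (N, 3) point cloud with pyrealsense2.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 depth stream at 30 fps; other modes depend on the camera model.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    # The point cloud is computed from the depth image by the SDK itself.
    pc = rs.pointcloud()
    points = pc.calculate(depth)

    # Vertices arrive as a structured buffer; view them as an (N, 3) float array
    # of X/Y/Z coordinates in metres, ready for height/depth measurements or
    # blob-style object counting in whatever tool sits on top.
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    print(f"{len(verts)} points; centre pixel is {depth.get_distance(320, 240):.3f} m away")
finally:
    pipeline.stop()
```

The same vertex array can then be handed to any downstream processing, whether that is a commercial package or a few lines of NumPy.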




CUSTOM PRODUCTION, MADE TO MEASURE QUALITY

I am sure that you have often seen robots “at work”. They are constantly moving. They twist and turn and they get on with things without complaining regardless of whether it is hot or cold or if they are exposed to water or chemicals. There is no doubt that a robot needs to be able to stand up to rough treatment. The same goes for the cables used in them. Particularly in the automotive industry, robots are now an indispensable part of production lines. As automation advances, the demand for perfectly adapted robot cables is constantly increasing. That is why, in addition to its well-known standard ÖLFLEX® ROBOT cables, Lapp Group also supplies special robot cables developed for individual requirements and applications. For the world-famous paint system manufacturer Dürr from Stuttgart, Lapp Group developed special robot cables for wiring the painting robots. This involved performing the required torsion and bending tests in Lapp’s in-house test laboratory, in close consultation with the customer. To guarantee 100% functionality, customer-specific test adapters were even produced and used. Dürr’s painting systems use cables that can withstand extremely high bending and torsion cycles (+/- 600 degrees per metre /10 million cycles) and are both flame retardant and oil resistant. The conductors are made of ultra fine copper braids with high quality TPE (thermoplastic elastomer) compound insulation. Teflon films keep the friction between the different elements to an absolute minimum. The special cables also employ special stranding techniques and use high quality polyurethane for the outer sheath. In the case of the Dürr painting systems, there are nine different special robot cables, from BUS lines to sensor leads to servo motor and feedback cables. The particular challenge and complexity of special robot cables lies in the fact that a tailored solution has to be developed for every application. The research and development team at Lapp Group has state of the art laboratories and testing facilities. All robot cables are developed in close consultation with the customer and all requirements in terms of cross-sections, movement and the surrounding location are incorporated into the development process. A prototype cable is normally developed first and then tested in every detail. In the case of Dürr, the first prototypes were produced in just three weeks. After a successful trial period of several weeks, the cables were approved for series production.


NEWS

HR APPOINTMENT AT MTC AS FOCUS MOVES TO WOMEN IN ENGINEERING The Manufacturing Technology Centre (MTC), which sets out to inspire Great British manufacturing globally, has appointed Vicki Sanderson as HR director. Sanderson aims to use her role to inspire more women into the engineering profession. She took her seat on the MTC board on Monday January 8. She has held similar positions at Marshall Aerospace and Defence Group and Domino Printing Sciences. Sanderson is a member of the Chartered Institute of Personnel and Development. She arrives at the MTC as it continues rapid growth having seen a significant rise in employees over the past three years from just under 300 to in excess of 600. It is expected to pass 700 in 2018.

She said: “I am really excited to be joining a rapidly expanding organisation that is full of people who are very creative and are working to ensure UK manufacturing is as efficient and effective as it possibly can be.

“The foundation of a successful business is putting people at the heart of the organisation. It is engaging those working there and planning for the expected growth it will have in the next few years.

“I hope that in my role I can play an active part in bringing in more female engineers to the MTC. I’ll be looking at ways to outreach, increase diversity and promote STEM, which I am passionate about.”

MTC chief executive Clive Hickman said: “The MTC has experienced unprecedented growth over the last seven years and we expect that growth to continue for the foreseeable future.

“Recognising that people are our most important asset, we have strengthened the MTC Board through the appointment of Vicki Sanderson as HR Director. I am delighted that Vicki has agreed to join the MTC.”



NEWS

BASLER EXPANDS EMBEDDED VISION PRESENCE

Basler (Ahrensburg, Germany), a leading manufacturer of industrial digital cameras, is expanding its presence in the embedded vision field. It plans to present a new product concept at the embedded world trade show in Nuremberg from February 27 to March 1, 2018. The company said that the new concept offers the cost efficiency required for the consumer market and meets high-end technological standards.

Head of Product Market Management at Basler Gerrit Fischer said: “Our customers’ needs in the embedded vision area are directly incorporated into our product road map and have significantly influenced the development of the new camera modules. With our new concept, we want to give our customers the highest flexibility and offer them exactly the camera module that meets their project and market requirements.”

At the show, Basler will also introduce partner products, reference designs and development kits that demonstrate the compatibility of Basler cameras with various SoMs/SoCs and different processor architectures. These products and kits were created in cooperation with various companies in the Basler Partner Network.

PIXELINK EXPANDS LINE OF SONY IMX BASED USB 3.0 MACHINE VISION CAMERAS

The global provider of industrial cameras for the machine vision and microscopy markets Pixelink (Ottawa, Canada) is expanding its line of Sony IMX based USB 3.0 machine vision cameras. This, said the company, will position it as having the most complete line of Sony IMX based USB 3.0 machine vision cameras in the marketplace.

Pixelink President Paul Saunders said: “Pixelink has long prided ourselves in providing high quality cameras to a broad range of industries and for varying applications. These new product solutions will address the demand in the marketplace from customers who are looking for the quality of our cameras, but who do not need the higher frame rates that Sony IMX image sensors are traditionally known for.”

The new cameras, all based on Sony Pregius global shutter CMOS technology, will be available in color or monochrome versions. The Pixelink PL-D795 camera, which is based on the Sony IMX264 sensor, has 5 megapixel resolution, a 2/3” lens format and dynamic range of 70 dB. The Pixelink PL-D799, based on the Sony IMX267 sensor, has 9 megapixel resolution, a 1” lens format and dynamic range of 70 dB. The Pixelink PL-D7912 camera, based on the Sony IMX304 sensor, has 12 megapixel resolution, a 1.1” lens format and dynamic range of 70 dB.

Saunders added: “These new camera offerings demonstrate our commitment to the USB 3.0 marketplace for industrial cameras. We are excited about these new camera solutions from Pixelink and we are confident that the market will embrace these additions to the Pixelink family of products.”

All cameras have global shutters and are available in board-level or enclosed configurations. They are available with an external trigger. They are USB 3.0 compliant and offer, said the company, the most discerning customer low-noise and high-resolution images, for a broad range of markets, including machine vision, microscopy and life science vision applications.



NEWS

WORLD’S FIRST MOBILE HYPERSPECTRAL CAMERA

SPECIM Spectral Imaging has launched what it claims to be the world’s first mobile hyperspectral camera that allows users to analyze material samples anywhere, in seconds. Called the SPECIM IQ, the company said that industries ranging from food and health, forensic investigation, recycling, art and agriculture, will benefit from the new camera.

SPECIM IQ is an advanced measurement and imaging solution that provides information in an instant for critical decision making and response. The camera and software are said to be easy to adapt and configure for a wide range of applications. It is also suited to the needs of OEM industry for building their own applications for their own clientele. Hyperspectral imaging, which combines spectroscopy and digital imaging, is regarded by most as the best available measurement technology for demanding measurement applications. By enabling spectral analysis down to the pixel level,

it provides unprecedented capabilities for analysing the physical and chemical make-up of both large and small samples. A founder of SPECIM Spectral Imaging Esko Herrala said: “SPECIM IQ is a truly smart design which enables users to concentrate on problem solving rather than complex data acquisition and processing. The graphical user interface is simple to use, and it provides instant measurement results and insights into the problem without requiring complex mathematics or signal processing skills. This makes SPECIM IQ an ideal OEM product for medical, cosmetics, and other industries. “We are very excited to present a next-gen device that can help solve many of the world’s pressing problems in the future.”

OPTO DIODE INTRODUCES 13.5 NM DIRECTLY-DEPOSITED, THIN-FILM FILTER PHOTODETECTORS Opto Diode (Camarillo, California, US), an ITW company, introduces the SXUV100TF135 and SXUV100TF135B photodiodes with integrated thin-film filters. The detectors each feature a 100 mm2 active area and a directly-deposited thin-film filter for detection between 12 nm and 18 nm. Both detectors have typical responsivity of 0.09 A/W at 13.5 nm and are optimized for different electrical performance. The photodiodes are ideal for use in applications such as laser power monitoring, semiconductor photolithography, and metrology systems that utilize extreme ultraviolet light.
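To put the responsivity figure in context, here is an illustrative back-of-the-envelope calculation (not a number taken from Opto Diode’s datasheets): the photocurrent I delivered by a detector of responsivity R under incident optical power P is simply I = R·P, so for these devices at 13.5 nm

\[
I \;=\; R \cdot P \;=\; 0.09\ \mathrm{A/W} \times 1\ \mathrm{mW} \;=\; 90\ \mu\mathrm{A}
\]

per milliwatt of EUV power falling on the 100 mm² active area.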


The SXUV100TF135 model is optimized for higher speed reverse bias voltage operation. The device has low capacitance, typically 260 pF, with a reverse bias voltage of 12 volts. The SXUV100TF135B is optimized for zero bias voltage operation where low dark current is of paramount importance. The detector has a high shunt resistance greater than 10 MΩ.

Opto Diode’s photodiodes with integrated thin-film filters offer superior stability and a robust design for use in extreme ultraviolet environments. Operating and storage temperatures range from -10 °C to +40 °C in ambient environments and from -20 °C to +80 °C in nitrogen or vacuum environments. Both devices are shipped with protective covers.



NEWS

NEW HI-RES CAMERA JOINS BASLER MICROSCOPY PORTFOLIO

Basler (Ahrensburg, Germany) has added a high-resolution camera model to its microscopy portfolio: the new Microscopy ace 12.2 MP features the latest rolling shutter CMOS sensor technology from Sony and delivers up to 15 frames per second. The sensor works with a new technology using back-side illuminated pixels and offers very low dark noise of only three electrons (3.2e-), combined with a quantum efficiency of over 80%. With this particularly light-sensitive sensor, the new Microscopy ace 12.2 MP delivers detailed images and at the same time high image quality, even in low-light conditions. It reaches up to 71 dB of dynamic range.

Basler said that the new camera is ideal for a wide range of microscopy applications, including material inspection and lab routine work. The Microscopy ace 12.2 MP is available within Basler’s PowerPack for Microscopy, which offers a variety of components.




NEWS

MVTEC BRINGS DEEP LEARNING TO NVIDIA PASCAL ARCHITECTURE MVTec Software, a leading provider of innovative machine vision technologies, has made extensive deep learning functions available on embedded boards with NVIDIA Pascal architecture. The deep learning inference in the new version 17.12 of the HALCON machine vision software was successfully tested on NVIDIA Jetson TX2 boards based on 64-bit Arm processors. The company said that the deep learning inference, i.e. applying the trained CNN (convolutional neural network), almost reached the speed of a conventional laptop GPU (approximately five milliseconds). It added: “This is an unusually high execution performance for an embedded device – compared to a standard PC. Users can thus enjoy all the benefits of deep learning on the popular NVIDIA Jetson TX2

embedded board. This is possible thanks to the availability of two pretrained networks that MVTec ships with HALCON 17.12. One of them (the so called “compact” network) is optimized for speed and therefore ideally suited for use on embedded boards. MVTec will provide interested customers with a software version for this architecture on request.” MVTec’s Embedded Vision Product Manager Christoph Wagner said: “We have provided successful technological proof that allows us to offer advanced deep learning functions in the embedded vision segment. This will greatly benefit users. They can now utilize the extensive new HALCON 17.12 features on standard devices with NVIDIA Pascal architecture – at an extraordinary high speed for embedded technologies.” Managing Director of MVTec Dr Olaf Munkelt added: “The rapidly

growing market for embedded systems requires corresponding high-performing technologies. At the same time, AI-based methods such as deep learning and CNNs, are becoming more and more important in highly automated industrial processes. We are specifically addressing these two market requirements by combining HALCON 17.12 with the NVIDIA Pascal architecture.”

ACTIVE SILICON SUPPORTS REAL-TIME GPU PROCESSING

All of Active Silicon’s frame grabbers are now compatible with both AMD’s DirectGMA and NVIDIA’s GPUDirect for Video. Active Silicon is a leading manufacturer of embedded systems, machine vision products and imaging solutions.

Active Silicon said that DirectGMA and GPUDirect for Video enable many filter, convolution and matrix-vector operations to be performed by the GPU directly on data from a frame grabber, without the need for the data to pass through system buffers or the CPU. This makes data acquisition very fast, with very low latency, as the GPU memory is made directly accessible to the frame grabber. Modern GPUs are extremely efficient at processing images and graphics, and their parallel structure makes them particularly well suited to uses where large blocks of data need to be processed in parallel.

All Active Silicon FireBird and Phoenix frame grabbers are compatible with GPUDirect for Video and DirectGMA. What’s more, Active Silicon’s SDK includes a comprehensive suite of C++ examples for GPUDirect for Video and DirectGMA with full source code. Their API supports CUDA, OpenCL, OpenGL and DirectX and is consistent across operating systems and hardware platforms, allowing easy migration. And with Active Silicon’s RISC-based ActiveDMA technology, their FireBird frame grabbers work virtually latency-free.



NEWS

SPECTRAL EDGE APPOINTS NEW CTO Dr Ilya Romanenko has joined Spectral Edge (Cambridge), specialists in computational photography using image fusion, as its Chief Technology Officer (CTO). He will lead an expanded Research and Development (R&D) team in its Cambridge office, which will combine Spectral Edge’s proven Phusion image processing technology with a new approach based on Deep Learning. It’s hoped that this will result in an exciting new range of imaging products designed for integration into smartphones. Previously, Romanenko played a key role in R&D leadership for 12 years at Apical, which specialised in developing next generation camera and display subsystems. After the company was acquired by ARM in 2016 he became R&D Director for ARM’s computer vision team.

Romanenko brings a proven track record of developing deployable products, based on his expertise in image processing, computer vision, Deep Learning and IP development, along with extensive experience in creating and running R&D teams. His appointment follows that of new CEO Rhodri Thomas, who joined from SwiftKey/Microsoft in February 2017, adding to Spectral Edge’s world-class senior management team.

Spectral Edge aims to improve the viewing experience of images, videos and TV content by delivering the perfect picture for each individual viewer. This is done with patented image fusion technology which can combine visible and invisible information (such as infrared and thermal) in real time on smartphones and other consumer electronics, to enhance detail, aid visual accessibility, and create ever more beautiful pictures. It helps smartphone manufacturers build devices that automatically take more vivid, natural-looking photos every time, allowing manufacturers to deliver noticeably superior standard images to end users.

Romanenko said: “Spectral Edge is built on impressive fundamental technology, which sits at the intersection of the image processing and computer vision fields, meaning I can use my knowledge and expertise in both to move the company forward. It is already delivering significant benefits to companies in the broadcast market, and I am confident that working with the team we can bring this technology to life, particularly within products in the mobile sector, improving the user experience and bringing a new quality to existing products.”

Spectral Edge CEO Rhodri Thomas said: “Ilya’s appointment is a further major step in Spectral Edge’s growth, bringing world-class R&D leadership and experience to our team. Image processing is now a vital part of differentiation in smartphone development and I’m delighted to welcome Ilya on board as we develop our IP and support our customers in delivering market-leading visual experiences to their consumers.”



AND THEN THERE WAS LIGHT! Editor Neil Martin asks two illumination companies and one filter company about their main product focus for the year. Illumination is a key sub-sector of the machine vision sector and although it does not always get the attention it deserves, without it things would literally be very gloomy indeed

Gardasoft Vision

Based in Cambridge, UK and New Hampshire, US, Gardasoft is a global leader in the design, manufacture and application of high-performance LED control technology and high-intensity LED lighting. Its products and technologies aim to meet the ever increasing demands of lighting for Vision Systems and Image Recognition. The company’s LED Controller range offers precise light intensity control of illumination, as well as fast and microsecond-accurate timing for strobe control and high-speed applications; and its specialist, high-intensity LED Lighting is a market leader for Line Scan, Web Inspection and Traffic ANPR applications. Gardasoft is a wholly-owned subsidiary of OPTEX, Japan.

Trade shows

The company is an active participant at industry shows and, for example, it will be previewing a new approach to rapid focusing for traffic applications at Intertraffic Amsterdam, held in The Netherlands on 20-23 March. The system being previewed utilises a shape-changing liquid lens from Optotune AG in combination with a traditional fixed focus lens. Driven by Gardasoft’s TR-CL180 lens controller, the focus of a liquid lens can be changed in ten milliseconds, allowing precise focusing over a wide range of distances.

22

The company explained that many ITS applications, including speed and red light enforcement, require several images to be captured at different distances. For a single camera solution using a fixed focus lens, the aperture needs to be stopped down to provide sufficient depth of field for adequately focused images to be taken at each distance. This severely reduces the amount of light reaching the camera sensor, and also means that the images cannot be in precise focus at all distances. However, the liquid lens configuration allows the aperture to be opened up to allow more light through while still obtaining precise focus at any working distance. The liquid lenses are based on a patented combination of optical fluids and a movable polymer membrane. The TR-CL180 controller drives an electrically controlled outer diaphragm which moves the membrane to change the shape of the lens. When used in conjunction with a 200mm macro lens the focus can typically be adjusted from 100mm to infinity.
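To put the aperture trade-off in rough numbers (an illustrative rule of thumb, not a Gardasoft figure): the light reaching the sensor falls with the square of the f-number N, while depth of field grows only roughly in proportion to N, where c is the acceptable circle of confusion, s the subject distance and f the focal length,

\[
E \;\propto\; \frac{1}{N^{2}},
\qquad
\mathrm{DOF} \;\approx\; \frac{2\,N\,c\,s^{2}}{f^{2}} \quad (s \ll \text{hyperfocal distance}),
\]

so reopening the aperture from, say, f/11 to f/2.8 admits roughly (11/2.8)² ≈ 15 times more light. That is the gain the liquid-lens configuration recovers by refocusing for each capture distance instead of stopping down.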



“WE PLAN TO EXTEND OUR EMBEDDED VISION PRODUCT OFFERING, INCLUDING A BOARD-LEVEL VERSION OF THE LIQUID LENS CONTROLLER…”

As for their main product focus for the year, Hudson said: “Our current product portfolio still has a lot of untapped potential. With our triniti Intelligent Machine Vision platform now including control for industrial liquid lenses, we can provide systems integrators and machine builders with seamless access and control of system cameras, lighting and lenses using conventional image processing software or via a triniti SDK. We also intend to expand the use of our CC320 Machine Vision Timing Controller, which provides an easy and complete solution for accurate triggering of cameras and lighting. The CC320 works like a very fast PLC (SPS).”

New products

And when it comes to new products: “We plan to extend our embedded vision product offering, including a board-level version of the liquid lens controller, as well as focusing on triggering over Ethernet in traditional machine vision applications. This will allow much more plug and play for users, eliminate trigger wiring and make it easier to set accurate trigger timing. We already have our triniti vision utility which helps with this.”

Sector growth

The question for any company operating in a particular sector is whether they expect growth. Jools Hudson, at Gardasoft, replied: “Various analysts predict compound growth in the global machine vision market of around 8% in coming years, with growth in traditional manufacturing sectors and most likely growth in the lighting sector as well.

“We expect Gardasoft sales to grow at a significantly higher rate as engineers gain a better understanding of the performance benefits to be obtained from lighting controllers.”

Gardasoft has clear objectives for the year: “GVL will work to enhance the ease with which our product portfolio can be integrated into machine vision systems. We will further extend the use of GenICam and promote the GenICam Standard Features Naming Convention standard that we championed, which enhances interconnectivity between machine vision components.

“We will be raising awareness of the benefits of precise lighting control and wider trigger timing control and further develop the use of triggering over Ethernet. We will also be looking to develop new products both for machine vision applications and the traffic and transport sector.”

Trend

When asked how the sector will develop over the coming years, Hudson said: “The continued trend towards cameras operating at higher resolution and faster frame rates will require larger, more powerful lights, especially line lights. These, in turn, will need controllers to drive them with faster pulsing and higher pulse power.”



Midwest Optical Systems

With its headquarters in Palatine, Illinois, Midwest Optical Systems, or MidOpt as it’s commonly known, has been around for 30 years, innovating in the fields of optical design, fabrication and inspection. The company is a worldwide leader in machine vision filters and optical solutions, and is represented in over 30 countries, offering over 3,000 products. MidOpt filters aim to ensure flawless control, dependable results and the very best image quality. Unlike traditional filters, MidOpt filters are designed with a Gaussian transmission curve to allow maximum transmission; they emulate the output of the most common LED wavelengths and are less angular dependent.

Recently it launched the MidOpt NS100 Neutral Density (ND) Filter Swatch Kit. ND Filter Swatch Kits include all of the most popular ND Filters and, said the company, allow the user to stack multiple ND Filters to achieve a custom optical density. The NS100, said the company, is a great tool to have in the field or in a laboratory to test the effects of ND Filters, solve applications quickly and improve image quality.

Sunglasses for your system

The company added that Neutral Density Filters, recognized in the industry as “sunglasses for your system,” are designed to reduce light intensity neutrally over a specific wavelength range without affecting image color or contrast. They also serve as a great solution for lens aperture control and reducing depth of field. ND Filters are available in both absorptive and reflective style options and can be used with monochrome or color cameras.

Mike Giznik, Vice President at MidOpt, outlined the company’s main product focus for 2018: “We are directing a lot of our focus to some of the up-and-coming industries, including autonomous vehicles, agricultural inspection and 3D metrology.”

As for whether the company is introducing new products, Giznik said: “We are creating a new series of bandpass filters and polarizers to meet the advancements in camera and lighting technology.”

Short wavelength infrared

When it came to the sector in general and how it should perform in 2018: “We’ve witnessed the advancements in short wavelength infrared (SWIR) LEDs and feel there will be a greater need for filters in these wavelengths in the near future.

“With the advancements in industrial vision, there will be a need for more efficient and advanced illumination.”

As for MidOpt’s main objectives in 2018: “We are putting a major focus on implementing new advanced technology to continue raising the standard for machine vision filters worldwide.”



Spectrum Illumination

Founded in 1999 by David and Naomi Muyskens, the team at Spectrum Illumination believed they could deliver a better lighting solution for the machine vision industry at a lower cost. The company has grown and now provides customers with a high level of customer service, smart lighting solutions and a complete support package. When a customer orders a light, they are supplied with everything they need to get it up and running. All of the company’s signature Monster Lights come standard with light, controller and cable.

Spectrum Illumination’s products include backlights, dome lights, linear lights, diffused axial lights, oxy lights, ring lights, spot lights and several washdown options. They also sell their patented LDM controllers, cables and other replacement parts. Many of their newer products are outfitted with a built-in driver. These include their E-Series of lights and their Xtreme Monster series. The E-Series was created as an economy, or budget-friendly, line for their customers, while the Xtreme Monster Series was created for applications that require a high-intensity light. Both offer lightweight aluminium housings, making them a robot-friendly weight. The company is also the Master Distributor of Latab Lighting in North and South America.

Launch

They recently announced the launch of a Dual Channel Ring Light, a new addition to the Monster Light family. This ring light allows the customer to have two colors in one light, with customers choosing from a number of color combinations. The current available options are 365, 395, 470, 530, 630, WHI, NWH and WWH. This light is also available in IR in limited combinations. The package includes the light, two LDM 700s and a four-meter cable.

Valerie Muyskens told us that as regards the product focus for the year: “Our focus this year is going to be on the introduction of some new products. We hope to showcase these at the Vision show in Stuttgart, Germany this November.”

The company is releasing new products: “We are very excited about some new products we plan to launch this coming year. Unfortunately, we believe these products are too valuable to hint at just yet, but we expect to launch some innovative new products that will definitely change Machine Vision LED lighting going forward.”

Ambitions

Muyskens went on to explain that as far as the company’s ambitions went for the year, they were centred on “…broadening our reach within different industries and different geographical areas.”

As for the sector itself, and the question of whether the sector will grow during the year, the reply was: “Well, I can’t speak to what it was worth this past year, but I can say that we have seen an increase in LED lighting sales during the second half of 2017. We expect this growth to continue, especially with the release of new products.”

And, finally, how will the sector grow over the coming year? “With the continued movement to automation we expect LED lighting to keep growing. Many of our distributors have been working on completely new and special applications. In addition, many of the end users are companies trying to automate and increase efficiency.

“The companies who have been ahead of the curve using automation already are continuing to improve the systems they have in place. LEDs have come a long way since Spectrum Illumination began in 1999. We are constantly upgrading our signature Monster Series to implement the newest and best LEDs on the market. And our newer lines continue to improve as well.”



2018 – MACHINE VISION Editor Neil Martin asks around to see what senior sector executives think 2018 will hold for the machine vision industry



We sounded out a wide range of sources, concentrating on three major questions:
1. What are their main objectives for 2018?
2. What, in their opinion, will be the biggest machine vision sector trends next year?
3. What are the main sector challenges?

Active Silicon

We started with Active Silicon’s CEO, Colin Pearce (pictured below), who told us that they have many opportunities they can embrace in 2018, and that they have a strong strategic direction.

“WE COMMENCE 2018 IN A STATE OF MEASURED EXCITEMENT…”

He said: “We commence 2018 in a state of measured excitement – we’ve got a lot going on at Active Silicon and several new products to bring to market. This year is a milestone for us as we celebrate our 30th anniversary, and our overall objective remains continued organic growth as we have seen over the last 25 years. In particular, this year we are reinvesting our profits into further expanding the business in the embedded vision area. We are looking to increase our team further and will recruit additional talented engineers to support our R&D department. From a wider viewpoint, we will also remain involved in the overall direction of the machine vision industry by participating in conferences and facilitating forums on industry standards.

“We are already proactive with our medical embedded products, and have passed customer audits against ISO 13485. However, we are working towards formal certification for this standard and expect to further expand and enhance our offering in this sector.

“Our products waiting in the wings to be launched include a new camera interface board, innovative front-end software and superior frame grabbers. These will help us keep at the forefront of technology and help sustain continued growth.”

Colin Pearce, CEO of Active Silicon

Main challenges He added: “We’re making sure we are embracing changes in technology, particularly those in broad computer standards, such as 10GigE and USB3. It is vital that we’re at the forefront of assimilating the latest technology to our offerings, as well as ensuring our products are backwards compatible with legacy software and hardware. “As an example, the uptake of USB3 has displaced some of the low-end demand in the frame grabber market. Addressing this, we’ve included four USB slots in our latest embedded vision processor, while maintaining compatibility with other standards. We also see the move towards multichannel 4K video, particularly in the medical market, as a challenge which brings its own opportunities, and we are working towards our first 4k video product. “The frame grabber sector is also changing to keep up with developments and requires a proactive approach. Our new ActiveCapture is a front-end, out-of-the-box software which provides enhanced support for our customers, even those with non-GenICam compliant cameras. In a highly competitive marketplace, our top-class customer support helps us to stand out from our competitors.”



Machine vision trends

He added: “2017 was the year of the rise of embedded vision, and we expect this trend to continue apace in 2018. We believe there will be a greater adoption of consumer and off-the-shelf embedded technology within the industrial and commercial sector, driven by demand for higher speeds and higher resolution. Multiprocessor system-on-chip (MPSoC) development will mature for use in video, multimedia and telecommunication applications – Xilinx are leading the way here with their Zynq family of SoCs and APSoCs, and we’re investing resource in developing and applying this technology.

“The next 12 months will see, without doubt, an increase in processing power offered by CPUs and GPUs, opening the door for more applications to benefit from AI and deep learning advancements. NVIDIA’s Jetson GPU is one example which makes the CUDA platform accessible to computer vision and robotic applications. You can see more about how we think AI is influencing the machine vision industry in our AI Series of blogs (www.activesilicon.com/artificial-intelligence).

“To meet the increasing data transfer rates required in machine vision, we’ve invested in expanding our CoaXPress frame grabber range, and will be launching our latest FireBird single, dual and quad CXP-6 boards, designed to address the lower-cost volume market along with the high-end. Retaining all the features of our current FireBird series, these new boards offer faster processing at less expense. Additionally, we’ll be working towards CoaXPress v2.0 and have CXP-10 and -12 boards under development.”


Machine vision challenges

Looking ahead, he said: “The greatest challenges will be brought by developments around us – innovations in software and hardware technology always keep us on our toes but also drive our own progression. We see the greatest challenge in machine vision as managing the expectations of customers on the scalability and adaptability of consumer technology; having to explain why, just because smart phones are getting smaller and faster, machine vision systems for use in industrial or scientific environments can’t fit in their pocket yet. And, of course, working towards a world where this is possible! The machine vision industry will certainly see unprecedented change in the size and processing capability of components in the coming year, and it’s crucial to welcome and adopt these revolutionary changes.



Sony

"We're looking forward to hearing opinions about future trends at the VISION show in Stuttgart, and hope to see many of our customers, partners and suppliers there to discuss the industry's challenges and opportunities."

Matt Swinney, of Sony Europe's Image Sensing Solutions Division, outlined their main objectives for 2018: "We are focusing on further developments in our GS CMOS-based camera line-up as we seek to enhance complete camera performance and drive the market forward in an industry already very familiar with Sony sensor technology." As for their main challengers: "The market place is crowded with camera competitors offering many varieties of cameras in terms of resolution, frame rate, size, interface and sensor type among others."

"THE MARKET PLACE IS CROWDED WITH CAMERA COMPETITORS OFFERING MANY VARIETIES OF CAMERAS IN TERMS OF RESOLUTION, FRAME RATE, SIZE, INTERFACE AND SENSOR TYPE AMONG OTHERS."

"Sony has a strong brand image and deep history in camera technology from consumer, broadcast & professional through to industrial imaging products and components which is not always visible in Machine Vision, even though for many years we have been an industry leader in this market.

"Our challenge is to continue to be the leader in innovation, not just in our sensors, but also in our Machine Vision cameras. A usable image is more than just the sensor and at Sony, our drive, determination and mission is to continue to build the best cameras."

As for the bigger machine vision trends in 2018: "Innovations in our very own sensor technology will continue to open up new application possibilities during 2018. There are also a number of developments in areas such as machine learning and AI which should be on everyone's radar in the coming years." And the challenges? "We can view this question in a number of ways.

"From a camera point of view the machine vision industry in recent years has been dominated by a mixture of dedicated MV camera companies alongside well-known brands with broader offerings such as Sony. What we have seen over the past few years are new entrants coming into the MV camera market that have originally operated successfully in other markets such as video and network security.

"I think in 2018 we will see further disruption to the activities of traditional players, which naturally
will encourage a response; for example, this may mean greater diversification into non-manufacturing environments and even other markets altogether in order to mitigate competitive pressures.

"In terms of vision systems, the barriers to entry for new players have never been higher, with system complexity driven by specialism and customisation. This creates an ecosystem with inertia against change, which has helped existing providers and system integrators operating within the vision system ecosystem to retain attractive market opportunities.

"I am also interested to see how further technological innovations in terms of Industry 4.0 and future cloud-based machine learning will develop and how this may start to challenge current conventions."

Matt Swinney, of Sony Europe’s Image Sensing Solutions Division

Gardasoft Vision Jools Hudson at Gardasoft Vision answered first the question of the biggest trends in the sector: “There will be continued development in embedded vision systems and the integration of vision into Industry 4.0. The movement towards cameras operating at higher resolution and faster frame rates will require larger, more powerful lights, especially line lights. These, in turn will need controllers to drive them with faster pulsing and higher pulse power. More sophisticated and integrated control of vision system components will allow multiple

inspections at a single camera station leading to reduced costs.”

“THE NEW OPPORTUNITIES OF EMBEDDED VISION WILL CONTINUE TO PRESENT CHALLENGES.”

When it comes to challenges: "The new opportunities of embedded vision will continue to present challenges. There are many high-volume, low-cost, commodity application opportunities and an ever-increasing number of embedded vision components. The challenge will continue to be to reduce development costs so that end users can really benefit from economies of scale."

For Gardasoft itself: "GVL intends to improve both the ease of use of the product portfolio and the overall user experience for our controllers. We intend to extend the use of GenICam and promote the GenICam Standard Features Naming Convention (SFNC), which we championed and which will help with interconnectivity between machine vision components. We will be promoting the very powerful concept of triggering over Ethernet using GigE Vision and IEEE 1588. This makes pulsing much easier to set up and get the timing right. We will also be looking to develop new products, both for machine vision applications and for the traffic and transport sector."

In terms of challenges: "Gardasoft will be raising awareness about the many advantages that precise lighting control can bring to machine vision and the enhanced capabilities that a dedicated trigger timing controller will bring. For example, precise lighting control can be used to compensate for changes in lighting and protect against degradation of performance. The benefits of precise timing control can be illustrated, for example, where multiple images can be produced by a single line scan camera using multiple illumination sources, so reducing system costs."
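By way of a concrete illustration (a minimal sketch, not tied to Gardasoft's controllers or to any particular GenICam feature set), the Python/NumPy snippet below shows how a line scan acquisition in which two lights are strobed on alternate line triggers can be de-interleaved into one image per illumination source, which is how a single camera station can serve multiple inspections.

```python
import numpy as np

def deinterleave_two_light_scan(raw):
    """Split a line scan image whose lines were exposed under two alternating
    illumination sources (light A on even line triggers, light B on odd ones)
    into one image per light source."""
    image_a = raw[0::2, :]  # lines captured while light A was strobed
    image_b = raw[1::2, :]  # lines captured while light B was strobed
    return image_a, image_b

# Toy example: 8 acquired lines of 6 pixels, alternating bright- and dark-field lighting.
raw_lines = np.vstack([np.full((1, 6), 200 if i % 2 == 0 else 40, dtype=np.uint8)
                       for i in range(8)])
bright_field, dark_field = deinterleave_two_light_scan(raw_lines)
print(bright_field.shape, dark_field.shape)  # (4, 6) (4, 6)
```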

STEMMER IMAGING With news of its imminent IPO and a recent acquisition, STEMMER IMAGING is set for a busy year. Mark Williamson, Director – Corporate Market Development, told us that, regarding main objectives for 2018: "To expand further the markets we address and enhance our services to make it easier for customers to access advanced machine vision knowledge." As for main challenges: "Managing the high level of business growth and finding the right skilled people to help deliver this growth." For machine vision trends: "3D will continue to gain traction and increase its share of the market and Embedded Vision will open up new possibilities." For sector challenges: "Embedded Vision and Traditional Vision have very different business models. The traditional market needs to learn rapidly where embedded vision does and does not make financial sense."

“EMBEDDED VISION AND TRADITIONAL VISION HAVE VERY DIFFERENT BUSINESS MODELS. “

PICMG Justin Moll is Vice President of Marketing for PICMG, the leading open specification development organization in embedded computing. He said that for the organisation, 2018 will be an important year: “As an open specifications development organization with approximately 150 member

companies, PICMG is launching a major initiative into IIoT. Machine vision is an important component of the smart factory. Embedded computing systems will play a larger part in the upcoming demands of these connected systems.” As for challenges: “An IIoT hardware specification requires participation and collaboration between the sensor, data analytics, and the IT portion of the system/ network. PICMG is working on gathering companies large and small in these areas to develop a versatile specification. The approach is to leverage proven hardware specifications in industrial automation such as COM Express and CompactPCI Serial and add these IIoT elements on top of them.” Regarding sector trends: “It appears that machine vision is relying on more powerful and complex embedded computers. Data analytics and real-time processing are growing requirements

and ubiquitous connectivity and advanced security are increasingly important too. With these trends, the need for scalability and ease of upgrades becomes very important. Therefore, we expect PICMG’s mezzanine approach with COM Express will be popular. Typically, a baseboard is designed for the machine that matches the exact I/O and other requirements of the system. Then a mezzanine with the processor, enhanced memory, etc, can be attached. As performance needs advance over time in the system, a new mezzanine can simply replace the old one.

“AS THE SHIFT TO THE SMART (OR SMART-ER) FACTORY CONTINUES, COMPANIES WILL NEED TO MAKE THE VERY TOUGH DECISION WHICH TECHNOLOGY TO CHOOSE.”

“In more advanced system requirements, we expect the backplane approach of CompactPCI Serial will be used. This allows highperformance boards to be plugged/swapped with various I/O, processing, storage, FPGA, and other specialty features.” And sector challenges: “As the shift to the smart (or smart-er) factory continues, companies will need to make the very tough decision which technology to choose. PICMG and our members can help show them how they can match current performance needs with future ones, with a technology that will be around in 20-30 years with multiple sources, and reduced risk and time-to-market.”

Xilinx Giles Peckham, Regional Marketing Director at Xilinx, said that as regards machine vision trends: “Xilinx sees embedded vision as a key and pervasive megatrend that is shaping the future of the electronics industry. Providing machines the ability to see, sense, and immediately respond to the world creates unique opportunities for system differentiation with the adoption of Machine Learning and the development of new Neural Networks. “There have been as many new neural networks developed in the last two years as in the previous forty years. These newer architectures are often deeper and employ new layer types, requiring developers to retain flexibility in their designs in order to adopt the most recent developments. This need for flexibility also extends to the field of computer vision, where new and more complex algorithms are being developed to more accurately interpret images captured by vision-based systems. With higher frame rate, higher resolution colour images being used to increase productivity and accuracy, the data processing rates are increasing often beyond the capability of traditional software approaches, requiring more of the algorithms to be hardware-accelerated.




“Sensor technology is also continually advancing and more machine vision systems are now using heterogeneous sensor fusion techniques to enhance their view of their environment, supplementing the visual image with others such as infrared. “Integrating disparate sub-systems including video and vision I/O with multiple image processing pipelines, and enabling these embedded-vision systems to perform vision-based analytics in real time is a complex task that requires tight coordination between hardware and software teams.”

“ADDITIONALLY, DESIGNERS WILL BE CHALLENGED TO CREATE HIGH PERFORMANCE, DETERMINISTIC AND LOW LATENCY SOLUTIONS TO DEAL WITH INCREASED DATA RATES IN REAL-TIME APPLICATIONS.”

Giles Peckham, Regional Marketing Director at Xilinx

Sector Challenges On the sector's challenges: "One of the biggest challenges facing designers in the machine vision sector in 2018 is creating products that employ the latest computer vision algorithms, the latest neural networks and the latest communications interfaces, and being flexible enough to adapt to future enhancements in all those areas, whilst also being compatible with legacy equipment interfacing with older or proprietary communications standards. This precludes the use of ASSPs or ASICs for companies which wish to access their markets early. To remain timely and relevant in the market, leading development

teams are exploiting Xilinx's All Programmable devices in their next-generation systems to take advantage of the devices' programmable hardware, software, and I/O capabilities.

"Additionally, designers will be challenged to create high-performance, deterministic and low-latency solutions to deal with increased data rates in real-time applications. As software developers turn to hardware acceleration to address these challenges, they want to be able to continue using familiar languages or frameworks such as C/C++, OpenCL™, OpenCV or Caffe.

"Xilinx offers devices and stacks providing better performance, flexibility, system-optimising compilation and complete acceleration for both Computer Vision and Machine Learning. Multiple levels of design abstraction are supported, which speeds the development of embedded vision applications in markets where systems must be highly differentiated, extremely responsive, and able to immediately adapt to the latest algorithms and image sensors.



"Xilinx provides embedded vision developers with a suite of technologies that support both hardware and software design. Xilinx All Programmable devices include FPGAs, SoCs and MPSoCs. The Xilinx® Vivado® HLx design environment supports both hardware and platform developers developing the latest embedded-vision hardware. These tools include support for the industry's latest high-bandwidth sensor interfaces. Xilinx SDx tools, including SDSoC™, allow software and algorithm developers to develop in familiar Eclipse-based environments in languages like C, C++ and OpenCL. In March 2017, Xilinx launched the Xilinx reVISION™ Stack, building upon the SDx concept to support computer vision and machine learning inference applications.

"The reVISION Stack supports the most popular neural networks such as AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN as well as the functional elements required to build custom neural networks (CNNs/DNNs), while permitting design teams to leverage pre-defined and optimized CNN implementations for network layers. This is complemented by a broad set of acceleration-enabled OpenCV functions for computer vision processing."

FRAMOS CEO of FRAMOS Dr Andreas Franz spelt out his company's objectives for 2018. He said: "Our focus stays on creating IP (intellectual property) for imaging solutions and embedded systems to enable our customers to move quicker in this fast-moving environment. We recently joined the Embedded Vision Alliance to strengthen our position in embedded vision. Further globalizing our business in cooperation with new partners and suppliers is one of our major goals, to provide value-added services for our customers wherever they need it."

Dr Andreas Franz, CEO of FRAMOS

"THE IMAGING MARKET AND THE MARKETS WHERE IMAGING HAS BECOME A FUNDAMENTAL TECHNOLOGY ARE VERY VOLATILE AND, LET'S SAY, VERY INTERESTING."

As for their main challenges: "The imaging market and the markets where imaging has become a fundamental technology are very volatile and, let's say, very interesting. To answer this and to achieve our objectives, it is very important to strengthen our worldwide team by attracting the best talent."

His answer to what would be the biggest machine vision trends in 2018: "Technically, the post-processing will mostly take place directly on the imaging devices. In tune with this compression and miniaturization, embedded vision will accelerate faster than expected and foster disruptive industry currents."




And the biggest sector challenges: "With Machine Vision, we have the chance to conquer new markets. Based on technologization and digitalization, it's possible to create opportunities in fields we never thought of. In addition, embedded vision will change the competitive environment completely; we can already see this in the acceleration of consolidation."

MVTec Software Dr Olaf Munkelt, Co-Owner and Managing Director of MVTec Software, detailed his company's objectives for 2018: "In 2018, we will continue to optimize our flagship products, the standard machine vision solutions HALCON and MERLIC, and add even more in-demand and improved technologies. In this context, it is very important that the main trends in the industry are implemented in our solutions, i.e., developing tailor-made features for the needs and requirements of our customers. In addition, we have been working for a number of years on making the entire professional functional range of machine vision more accessible to a wider audience.

"We are continuing our efforts in this regard, for example with MERLIC, our software for the simple, intuitive and image-centered creation of machine vision applications. In addition, we also want to continue to grow at a rate similar to previous years, and to expand our position as a market and technology


“AN IMPORTANT CHALLENGE IS TO OPTIMIZE THE INTERACTION AND COLLABORATION OF HUMANS AND MACHINES.”

leader for machine vision standard software.” When it comes to the challenges: “An important challenge is to optimize the interaction and collaboration of humans and machines. On assembly lines, new compact and mobile robots, such as collaborative robots (cobots), often work side by side with their human colleagues. State-of-the-art machine vision technologies can help to improve the safety and efficiency of these processes. Our standard machine vision software HALCON is designed to identify, allocate, and reliably handle diverse objects along the entire value chain. In the new version HALCON 17.12, which was released at the end of 2017, we have integrated many new features to improve handling and interaction processes such as bin-picking. In 2018, we will continue to add more helpful functions to the new HALCON releases. “Another challenge is to combine extensive deep learning functions with embedded vision technologies. Recently we demonstrated that HALCON’s deep learning functions also

run on low-cost embedded boards based on the NVIDIA Pascal architecture: We successfully tested the deep learning inference of HALCON 17.12 on NVIDIA Jetson boards with 64-bit Arm processors. This enables a significant acceleration of deep-learning processes on an embedded device.” Moving onto the biggest machine vision trends in 2018: “We believe that machine vision technology will remain just as relevant and fascinating for factory automation in the future. Hot topics such as the Industrial Internet of Things (IIoT), embedded vision, the convergence of automation and machine vision (vision integration), the integration of programmable logic control (PLC), as well as new technologies such as deep learning and convolutional neural networks (CNNs) will become more important. “As a leading company in the machine vision segment, we have been addressing these topics for a long time. Especially in an IIoT world, companies are faced with several new challenges concerning connected manufacturing processes. This requires more intelligent sensors and better connectivity at the same time. Additionally, new technologies like deep learning will increasingly be used for the identification and classification (e.g., traceability) of objects, particularly in conjunction with optical character recognition (OCR). “Moreover, we will see deflectometry methods for

detecting defects on objects with specular reflecting surfaces (e.g., mirrors). Using deflectometry, several “light patterns” are projected onto the surface of the object via a screen. Observing the mirror images of these patterns and their deformations on the surface allows for the reliable identification of errors, which could not be achieved with conventional surface inspection methods. This method is particularly interesting for applications in car manufacturing and the electronics industry.” And for sector challenges: “One of the biggest machine vision challenges is to simplify machine vision processes and to make the technology accessible to an ever-wider target group. Our objective is to eliminate complexity and the need for deep expert knowledge, which we address with MERLIC. With its “noprogramming” approach, the software offers users intuitive, ergonomic tools for all routine vision tasks that enable the quick, simple and robust compilation and immediate integration of end-to-end machine-vision applications. “Furthermore, both the importance and deployment of deep learning technologies will increase. Thus, we will make these technologies accessible to a larger user group. As an example, with the most-recent version 17.12 of our off-theshelf software solution MVTec HALCON, companies can train their own classifier using convolutional neural networks (CNN), without having to invest heavily in time and money: The software comes with two neural networks that are optimally pretrained for industrial use – one optimized for speed, the other for maximum recognition rates. “This allows customers to train the network for their specific task using far fewer sample images than are typically needed for training a CNN. The result is a neural network that can be precisely

adapted to the customer's unique requirements.

"And not least, we will continue to actively promote the further integration of machine vision and programmable logic controls (PLC). As one of the leading providers of machine vision standard software, we are also actively involved in the VDMA OPC Vision Initiative, and thus directly contribute to the development of the OPC UA Machine Vision Companion Standard, which is already supported by all of our products. As a result, interconnected process chains can also be seamlessly implemented between machine vision and the PLC – an important prerequisite for Smart Factory scenarios."

Dr Olaf Munkelt, Co-Owner and Managing Director of MVTec Software
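As an aside for readers new to the idea, the retraining-with-far-fewer-images approach described above is, in essence, transfer learning. The sketch below illustrates the general pattern in Python with Keras; it is not MVTec's HALCON API, and the backbone choice, class names and training arrays are hypothetical stand-ins. It shows how a small classification head on top of a pretrained network can be trained from a modest number of labelled images, because the general visual features have already been learned elsewhere.

```python
# Generic transfer-learning sketch in Keras (not MVTec's HALCON API).
# The class set and the "train_images"/"train_labels" arrays are hypothetical placeholders.
import tensorflow as tf

NUM_CLASSES = 3  # e.g. "good", "scratch", "contamination"

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
backbone.trainable = False  # keep the pretrained feature extractor fixed

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_images: float32 array of shape (N, 224, 224, 3), preprocessed for the backbone;
# train_labels: integer array of shape (N,). Because the backbone is pretrained,
# N can be far smaller than training a CNN from scratch would require.
# model.fit(train_images, train_labels, epochs=10, batch_size=8)
```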

EVT Eye Vision Technology saw their main objectives for 2018 as: "In the first quarter of 2018, EVT will release the first EyeVision version with its new Deep Learning Tools. There will be EyeVision standard networks for applications such as Number Plate Reading (NPR), OCR and Make & Model (MaM) recognition. The EyeVision Deep Learning (EV DL) will provide standard network tools for the solution of the above-mentioned applications, such as:

a) EV DL Pupil: finds single or multiple features in images by learning those features from annotated images.

b) EV DL Iris: detects defects (scratches, cracks, holes, etc.) or other damage by learning the normal appearance of an object.

c) EV DL Cornea: classifies complete scenes to identify, for example, container types or car models. Cornea learns from a collection of labelled images.

"There will be additional add-on commands such as OCR, NPR, CoreFinder, PeopleTrack and ContainerFinder. There will also be Chromasens support, which will be a fast Camera Link 3D solution with EyeVision, and in the second half of 2018 EVT will release support for multispectral cameras and the respective commands.

"We are also introducing new parallel processing possibilities with the EyeVision software. You can not only run several cameras with one license, you can run several cameras with several interpreters (depending on your processor), which means that on one PC you can have: Interpreter 1 -> 3 connected cameras -> runs inspection program A; Interpreter 2 -> 2 connected cameras -> runs inspection program B; Interpreter 3 -> 4 connected cameras -> runs inspection program C."
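For readers unfamiliar with the interpreter model, the sketch below mimics the arrangement with Python's standard multiprocessing module. It is not EVT's EyeVision API, and the camera IDs and program names are invented; one worker process stands in for each interpreter with its assigned cameras and inspection program.

```python
# Generic sketch of "one PC, several interpreters" using Python's multiprocessing
# module (not EVT's EyeVision API). Camera IDs and program names are invented.
import multiprocessing as mp

def run_interpreter(name, camera_ids, program):
    # A real interpreter would open its cameras and execute the configured
    # inspection program in a loop; here we only report the assignment.
    print(f"{name}: cameras {camera_ids} -> {program}")

if __name__ == "__main__":
    assignments = [
        ("Interpreter 1", [0, 1, 2], "inspection program A"),
        ("Interpreter 2", [3, 4], "inspection program B"),
        ("Interpreter 3", [5, 6, 7, 8], "inspection program C"),
    ]
    workers = [mp.Process(target=run_interpreter, args=a) for a in assignments]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```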



“THE MACHINE VISION COMMUNITY IS NOW INTERESTED IN DEEP LEARNING.”

When asked about their main challenges, EVT listed: processing Deep Learning in real time (at least 30 frames per second); completing the multispectral commands and their integration into EyeVision; and making the EyeVision software more customer-friendly and more accessible for inexperienced users.

As for the biggest sector trends: "The machine vision community is now interested in Deep Learning. And also the multispectral image processing is a big trend in machine vision." And for challenges: "To make a CNN feasible for industrial applications is a big challenge as well as real time processing for Deep Learning."

THE FUTURE DEPENDS ON OPTICS

OPTICS is a true enabling technology, empowering applications in advanced manufacturing, communications and storage, defense, display technologies, energy, health and medicine, and test and measurement. At Edmund Optics®, we aim to ENABLE THE FUTURE by focusing on advancing all aspects of life and overcoming technological limitations with imaging. Find out more at

www.edmundoptics.com/future Contact us today



CONFERENCE TALK We take a look at what happened at the end of 2017 and also look forward to 2018

SPS IPC Drives 2017 – The biggest ever! This was our last show of 2017 and as usual, we'd left it late to make our arrangements, discovering that even the most modest Nuremberg hotels had hiked their prices to $700 a night. Oh, the joy of supply and demand algorithms. And, with no direct flights to the city from our various locations, we decided to fly to Frankfurt, pick up a hire car and stay in an Airbnb halfway between the two cities. Great idea, we thought at the time. Except it turned into a slight comedy of errors.

Firstly, the car we'd chosen was only available at Frankfurt railway station, not the airport. Again, it seemed like everyone was in town and hire cars were mostly out, unless you wanted an executive Mercedes which required a deposit that would challenge a tech billionaire. So, okay, we booked our car from Frankfurt station, only 15 minutes from the airport, the website boasted. Yes, 15 minutes if you took the timing from the airport train station. Not if your plane landed at 7pm (at the correct time to be fair), then took 15 minutes to taxi to the terminal (I thought we were driving back from where we'd started) and, once through the terminal, taking an age to find the first bendy bus that would take us from the terminal to the railway station (in the other terminal).

We'd left two hours to get from landing to car hire office, as the crow flies, about ten miles, yet we were up against it from the minute we landed. A 15-minute wait for the bendy bus didn't help, in company with about a thousand other passengers similarly disgorged into the cold and trying to find transport to the other terminal and the train station. Eventually the bus plodded along to the station and we rushed past the numerous info boards which seemed to list every mode of transport, including skateboard, but actually provided little information. We guessed a platform, asked a fellow passenger if they thought it was the right train and made it to Frankfurt station with just 20 minutes to spare – the cleaner was already mopping the floor of the car hire office. First panic over, we'd got the car. Second panic came when the car hire man's instructions to find the car proved less than accurate. Turn left, walk out the station, turn right, look for the big car park


and your car is there. Oh great – another ***** test of our initiative. I suppose the guy said it a hundred times a day and knew exactly where the cars were parked. Whereas out-of-towners unfamiliar with the city, late, hungry and needing to find their lodgings for the night, are not so relaxed. And no matter what the city, railway stations late at night seem to attract the more dubious types of society. So, wheeling your cabin luggage around, in the cold, with predators watching your every move, is not the best start to your business trip. Eventually I nipped into a hotel and asked directions for the car park. We found it, a desolate new lump of concrete which had all the charm of a 1970s power station. We got the car started, miraculously found our way out of the city and onto the autobahn (which is a whole new story – how great is the idea of no speed limit, until, that is, you're peering through the sleet, looking for the signs, and cars are coming past at warp factor six! Yeah.)

The show itself was a great success (see the details below), but was slightly marred by the fact that every driver is expected to cough up €10 to get out of the car park. Now, this wouldn't have been so bad, had we been warned. Paying ten euros for a car parking spot near a major city cannot be seen as extortion. But, when I checked the organisers' website for such charges, it proudly told me that the surrounding access to the car parks was free. I took this to mean free car parks. What it literally meant, and fair play to them, was that the roads around were actually free. Well wow, thanks – I've actually yet to be charged to drive to a car park, but I suppose it will happen one day.



What I did spy on the bendy-bus ride back to where the hire car was taking a nap, was one small sign (I'd have missed it had I not been wearing my glasses) which said that each car would have to pay €10 to leave. I only saw that one sign. Now, fair enough, the charge was probably buried in the small print somewhere, but it did give the impression of being somewhat underhand and frankly unnecessary. It left a bad taste and luckily I had cash on me – the guys collecting the cash didn't appear to have credit card machines on them!

Still, that was my only complaint for what was a superb show, setting a record in its 28th year. It featured 1,675 exhibitors on 130,000 square meters across 16 exhibition halls this year. It was the biggest SPS IPC Drives ever and also drew a record number of visitors in 2017, over 70,000. The event was officially opened by Dr Thomas Schäfer, the Hessian Minister of Finance. He praised the SPS IPC Drives as a central platform for industrial automation, which is one of the core competencies of the German economy. Proving particularly popular was the newly aligned Hall 6, which had a focus on software and IT in manufacturing.

KEY FIGURES • Exhibitors: 1,675 (2016: 1,605) • Exhibition space: 130,000 m² (2016: 122,200 m²) • Visitors: 70,264 (2016: 63,291) Markus Sandhöfner, Managing Director, B&R Industrie-Elektronik GmbH, said: “This Fair affords us the opportunity to not only come together under one roof but also talk in detail about the requirements of the innovations, which are of particular importance to our customers, as well as plan the next steps for the upcoming months.” Thomas Höfling, Managing Director, SICK Vertriebs-GmbH, added: “The SPS IPC Drives is an absolute highlight in my annual calendar. Automation can be technical and dry but one can experience it as humane up-close at the SPS IPC Drives. At the end of the day people buy from people and it is important for us that alongside meeting technical decision-makers, we can encounter decision-makers from higher management levels. On the one hand, it’s about showing technical competence, on the other it’s all about trust which can be developed and built up with business partners.” Heinz Eisenbeiss, Head of Marketing Automation Systems, Siemens AG added: “The SPS IPC Drives has developed in leaps and bounds but has always been true to its core topic. Here we can meet the users or potential users of our products. Our Product Managers have the chance to discuss different subjects with them and in this way, customers can obtain information that they may have to spend a long time looking for. Our Product Managers, on the other hand, can find out what’s important to our customers – a real winwin situation. This really is a great Fair!” Gunther Koschnik, Managing Director, Trade Association Automation, ZVEI, added: “I am so enthusiastic about this Fair. Anyone who wants to see current technology and innovation can certainly obtain an overview here within a short amount of time.”




Teledyne DALSA and Teledyne Optech together at ITS World Congress 2017

Crowley exhibits as Canon Visionary at CES 2018

Teledyne DALSA and Teledyne Optech (Waterloo, Canada) showcased their latest imaging solutions for intelligent transportation systems via live demonstrations at the ITS World Congress.

The Crowley Company exhibited as one of Canon's visionaries at the recent Consumer Technology Association's CES expo, which is held every year in Las Vegas. It demonstrated the use of Canon EOS 5Ds R cameras in its digital imaging operations and also highlighted a concept camera utilizing Canon's 120-megapixel (MP) image sensor.

The summit took place from October 29 to November 2, at the Palais des Congrès de Montréal, in Montréal, Canada.


Teledyne offers capability for ITS applications, from mobile and terrestrial laser scanning for transportation engineering and urban planning, to uncooled, shutterless infrared and high-resolution line and area scan imaging for navigation, traffic enforcement and license plate recognition. Teledyne Optech offers powerful mobile lidar technology with its best-in-class position and orientation systems to produce 3D data with survey-grade precision.

Ross Held, senior vice president and general manager, Imaging Technologies and Communications Group, Canon, US, said: "We are excited to introduce The Crowley Company as one of Canon's visionaries at this year's CES. By leveraging our technologies to explore the development of Canon's 120-megapixel image sensor into a high-resolution camera, The Crowley Company is innovating new concepts of imaging and showcasing the true essence of being a Canon visionary."

Teledyne DALSA also displayed their revolutionary and low-cost Genie Nano, with resolutions from VGA to 25 megapixels. It offers a wide range of sensor sizes and image quality, with more than 40 color and monochrome models built around the industry's most common interface standards. At ITS World Congress, Teledyne DALSA featured a live demonstration of the Genie Nano and its unprecedented multi-exposure feature, useful for red light camera and license plate capture.

The Crowley Company president Christopher Crowley added: “We’ve successfully engineered a 71MP camera for use in the security, machine vision and cultural heritage industries. With Canon’s 120-megapixel image sensor, we will develop the MACHCAM 120, a camera that can be used in machine vision applications and to digitize archival materials at an extremely high resolution, meeting stringent imaging guidelines such as FADGI, metamorfoze and others.

Teledyne Optech also showcased their ultralight lidar/camera mobile mapper, Maverick. Weighing in at only 9 kg, the Maverick is capable of collecting data anywhere and everywhere. Mounted on a backpack, Segway or vehicle the Maverick collects dense lidar data and 360° camera imagery.

“As the preservation and records management industries rapidly move toward a consolidation of imaging specifications, it is essential that cameras and scanners are developed to meet these specs. Currently, there are very few products on the market – and none that are complete at this resolution – that can meet these specifications.”



Why walk when you can fly? Meet MIL CoPilot, the interactive environment for Matrox Imaging Library (MIL). Machine vision application developers can plan their course by readily and efficiently experimenting and prototyping with MIL—all without writing a single line of program code. With the trajectory set, MIL CoPilot accelerates the journey towards application deployment with a code generator that produces clear, functional program code in C++, C#, CPython, and Visual Basic®.

Available as part of

MIL

Developers using MIL get projects off the ground quicker and easier than ever before with MIL CoPilot and ready access to Matrox Vision Academy online training.

Get there faster with a CoPilot www.matrox.com/imaging/co-pilot/mvpro


Precision Glass & Optics to showcase products

Next UKIVA Machine Vision Conference and Exhibition date announced

Precision Glass & Optics (Santa Ana, CA, US), which provides optical thin film coatings and custom optical solutions for a wide variety of life science and biomedical applications, presented a wide range of optical products for the biomedical sector at SPIE’s BiOS ( January 27 – 28) and at Photonics West (January 30 - February 1) at Moscone Convention Center, San Francisco.

The second UKIVA Machine Vision Conference and Exhibition will be held on Wednesday 16th May 2018 at Arena MK, Milton Keynes, UK.

It showcased plano optics, beamsplitters, prisms, optical assemblies, windows, hot and cold mirrors, indium tin oxide (ITO), maximum reflectors (Max Rs), anti-reflection coatings (AR), and customized optical solutions utilizing a variety of shapes and substrate materials. With the recently-announced installation of an in-situ optical monitoring and advanced rate control system, the company produces single and multi-layered thin films with ultraprecision, up to 10x higher accuracy than previously available. Additionally, the company has an extremely large, in-house inventory of glass substrates, fabrication services and turnkey optics solutions for use in biomedical, MRI imaging, display, projection, scanning, laser, and other instrumentation applications. The company said that its cost-effective and reliable optics and advanced thin film coatings are ideal for military, aerospace, astronomy, biomedical, imaging, laser, digital cinema and solar markets.


The organisers say it will follow a similar format to the highly successful inaugural event held in 2017, which attracted more than 300 visitors. The conference is designed to provide an educational experience and will once again feature a comprehensive program of technical seminars across multiple presentation theatres. It will be supported by an exhibition of the latest vision technology and services from the leading companies in the world of vision. UKIVA Chairman Paul Wilson said: “The date for the 2018 event was only announced at the recent PPMA Show, held right at the end of September. We already have 25 companies signed up for the exhibition, including several who are not UKIVA members. This year we had 57 exhibitors, so this early take up suggests that we are on track for an even bigger exhibition component in 2018. “Key to the success of this year’s event, however, was the educational seminar program, which ran throughout the day. Visitors could choose their own timetable from 56 presentations held in 7 themed presentation theatres. The program was carefully designed to cover a wide range of topics and provide interest both for newcomers to vision and for highly experienced vision users and engineers. By co-locating the exhibition with the presentation theatre area, it was easy for attendees to switch between the seminar sessions and exhibition according to their own personal schedule. This format also provided excellent opportunities for networking between presenters, exhibitors and other visitors.”



1 & 2 November 2017 | NEC, Birmingham

The UK's largest gathering of engineering professionals – 10th anniversary

AERO ENGINEERING | COMPOSITES ENGINEERING | AUTOMOTIVE ENGINEERING | PERFORMANCE METALS ENGINEERING | CONNECTED MANUFACTURING | NUCLEAR ENGINEERING (NEW FOR 2018)

BOOK YOUR STAND TODAY – Book now for the best rates and stand location

www.advancedengineeringuk.com


automatica 2018 is going to be bigger A stand-out event in the conference diary will be automatica 2018. And it is set to be bigger than the last time it was held, back in 2016. The organizers of the event, which takes place in Munich from June 19 to 22, 2018, have already allocated more exhibition space than in 2016. They said that the increase in area from international exhibitors is 16% and the area increase for first-time exhibitors is 12%. Exhibitors include Dürr Systems, Rollon, SIASUN Robot, Siemens, Sumitomo, TÜV SÜD, Volkswagen and WAGO Kontakttechnik.

Euclid Labs, an Italian software OEM and service provider, plans to introduce a highly intuitive and rapid approach to defining sequences of movements and actions of robot arms without any programming skills. Roberto Polesel, CEO of Euclid Labs, said: "This approach will revolutionize the application of robot arms. It reduces development cycles and costs of robot systems to a fraction. Thanks to our solution, it is possible to adapt robot arms to new processes at any time. This increased flexibility fulfils the requirements of modern manufacturing lines, which are built on the principles of Industry 4.0."

Ralf-Michael Franke, CEO of Factory Automation at Siemens, said: "At automatica, we want to show in particular how industries of any size can benefit from digital transformation along the entire value chain: from product design and production planning to the engineering process all the way to new services. In addition, we will present the integration of robotics in mechanical engineering based on specific solutions."

The event will have its own topic area, IT2Industry, covering everything from robotics and automation to information technology, and cloud computing and big data. These topics will be discussed in an ICT exhibition area as well as the IT2Industry Forum. At the same time, the OPC Day Europe 2018 will again take place within the context of automatica. What's more, the world's leading robotics conference, the International Symposium on Robotics (ISR) 2018, will take place from June 20 to 21 within the context of automatica. More than 150 talks will provide insights into "state-of-the-art" robotics technologies.

And look out for a special networking event being held by robotics specialist Euclid Labs (Nervesa della Battaglia/Italy). The company is inviting operators, developers and integrators of manufacturing equipment to an evening, it says, of inspiring presentations and the chance to network with leading representatives from the industrial manufacturing field. Euclid promises attendees will be the first to experience a novel and entirely intuitive solution to controlling robot arms – without any programming. The event is called "The Future of Robot Programming" and takes place on 19 June 2018.




Key Conference Diary

A3 Business Forum | Jan 17-19 | Marriott Orlando World Center, Orlando, FL
SPIE Photonics West | Jan 27 – Feb 1 | The Moscone Center, San Francisco, California
Embedded World | Feb 27 – March 1 | Nuremberg, Germany
Vision China 2018 | March 14-16 | Shanghai, China
The Vision Show | April 10-12 | Hynes Convention Center, Boston, MA
Hannover Messe (4.0/Automation) | April 23-27 | Hannover, Germany
UKIVA Machine Vision Conference and Exhibition | May 16 | Arena MK, Milton Keynes, UK
chii2018 | June 6-7 | Graz, Austria
Automatica | June 19-22 | Messe München, Munich, Germany
Vision | Nov 6-8 | Landesmesse Stuttgart, Messepiazza 1, Stuttgart, Germany
Cobots & Advanced Vision | Nov 15-16 | San Jose, US
SPS IPC Drives | Nov 27-29 | Messezentrum 1, 90471 Nürnberg, Germany



CONTRIBUTION

CONSIDERING A SMART CAMERA? KEEP THESE FIVE KEY FEATURES IN MIND a White Paper from Teledyne DALSA

Unlike PC-based vision systems—with their distinct cameras, frame grabbers, and I/O boards—today’s smart cameras incorporate embedded lenses, processors, software, I/O capabilities, and sometimes even lighting in an “all-in-one” package that can simplify and streamline the integration and deployment of machine vision systems. Combine this with a small form factor and cost-effective price point, and it is easy to see why smart cameras are being used in many new application deployments, as well as in existing machine vision processes, from barcode reading to object recognition, and process monitoring to quality control. Before you invest in a smart camera solution, however, it is critical to understand how you’ll use the data the application provides; the environment in which your vision system will operate; the expertise level of the team that will program, use, and maintain the system; and even the budget available to invest in the system and its deployment. Some applications are better suited to one type of vision system versus another. Once the goals for the application you are planning to deploy have been clearly established, you can refine your thinking about which camera solution will be most ideally suited to achieving your goals. There are five important criteria to keep in mind as you consider whether a smart camera system is the solution you need.


1

Will a smart camera meet the processing speed and throughput requirements of your application?

It’s important to say at the outset that smart cameras, in general, do not offer the throughput and processing speeds delivered by PC-based machine vision systems. However, achieving exceptionally high processing speeds may not be what is most important with your application. Determining whether any machine vision system, even a PC-based system, can deliver the throughput you need will depend on the content and quality of the images the camera must capture, the inspection area a system’s software tools will need to process, and the types of software tools available.

2

Will the smart camera deliver the image sensor resolution needed for your application?

Like PC-based machine vision systems, smart cameras are available in a range of resolutions, and it is critical that you select a camera that can provide the right resolution to capture product details across the inspection area needed for your application. Teledyne DALSA's BOA2, for example, uses CMOS sensors with resolutions of up to five megapixels, which allows for a larger area to be inspected at once and enables the camera to capture even the smallest detail.
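A quick back-of-the-envelope check is often enough at this stage. The short Python snippet below runs the arithmetic with purely illustrative numbers (they are not the specifications of any particular camera): divide the field of view by the sensor's horizontal pixel count to get the sampling per pixel, then confirm that the smallest feature of interest still spans a few pixels.

```python
# Back-of-the-envelope resolution check with illustrative numbers only.
sensor_width_px = 2448          # horizontal pixels of a ~5 MP sensor (2448 x 2048)
field_of_view_mm = 100.0        # width of the area to be inspected
smallest_feature_mm = 0.2       # smallest defect or detail that must be resolved
pixels_per_feature_needed = 3   # common rule of thumb: a few pixels per feature

mm_per_pixel = field_of_view_mm / sensor_width_px
pixels_on_feature = smallest_feature_mm / mm_per_pixel
print(f"{mm_per_pixel:.3f} mm per pixel; {pixels_on_feature:.1f} pixels across the feature")
# -> about 0.041 mm per pixel and ~4.9 pixels across a 0.2 mm feature,
#    which clears the 3-pixel rule of thumb in this illustrative case.
```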



CONTRIBUTION

3

Does the smart camera offer the software tools required by your application, and once deployed, will the smart camera system be easy to program and maintain? Embedded vision software is a key component of every smart camera system and is essential in determining how easy it will be to set up and program the camera. Different vendors offer different tools, and the same vendor may even offer multiple collections of tools. These tools are usually targeted at the most common inspection tasks, including pattern matching, precision measurements, flaw detection, robot guidance, and optical character recognition. Determining whether a smart camera system provides the software needed for your application is a key factor in your choice of camera or system. A smart camera’s user interface minimizes the knowledge needed to program and deploy the system and should be easy to use, even by someone who is not a software engineer or a trained integrator. This is a key differentiator between smart camera and PCbased camera systems. While a smart camera system may be deployed initially by someone with expertise, making changes to accommodate new application requirements or creating a new application from scratch should ideally be accomplished by an


operator with just basic knowledge, without the need to write code. Today’s smart cameras incorporate embedded lenses, processors, software, I/O capabilities, and sometimes even lighting in an “all-in-one” package. Ideally, the smart camera you choose will have a graphical user interface that allows you to specify operations, parameters, and program flows simply. Modifying or deploying a new application on PC-based systems may require the services of a trained engineer or support from an integrator, which could be costly. With software embedded into the camera, applications can be configured, monitored, and accessed easily using a web browser on a remote PC. Then, after a smart camera is programmed, it can run on its own, without a connection to a client PC. This is particularly valuable to those users who prefer not to have PC-related hardware on the factory floor or in organizations where security policies limit remote access to systems. For those applications that can be completed with a smart camera yet require more flexible software, some vendors also offer software options that have the flexibility and tools needed to accommodate changing conditions. It is important to note, however, that software that offers greater flexibility might also require greater expertise for the developer and take more time to set up and deploy.



CONTRIBUTION

4

Does the smart camera system support your required communication protocols?

When evaluating camera systems in general and smart cameras in particular, it is easy to focus solely on the processes required to capture an image and the types of data the software tools can deliver about each image, but it is equally important to consider how and where you’ll use this information once you have it. Any smart camera you choose should provide standard communication protocols so that it can integrate seamlessly—and with minimal integration work on your part—into your network. PC-based machine vision systems transmit captured images to host computers to be analyzed, but smart cameras process images in-camera. Results can be transmitted easily using the smart camera’s low-cost Ethernet interface, which can accommodate long distances. This can speed the rate at which data is delivered to your application running on a PC, shared with other steps in a production line, or logged and archived for future analysis. It is important to remember that with their small size and fully embedded systems, smart cameras do not usually have a native display, so images are not typically output to a monitor. Images can be sent easily to a connected HMI device, laptop, or tablet, but this process should be tightly controlled to avoid potential bottlenecks.

5

Does the smart camera system make it simple to migrate my application to a more powerful camera if needed?

As your inspection process evolves, you may look for ways to enhance or change elements of the inspection process. It is possible for example, that you may be satisfied with the resolution delivered by your smart camera, but require a faster processing speed to meet new production requirements. Choosing a smart camera that is part of a “family” of cameras will simplify the migration process; in many cases, a camera with a faster processor will be “plug and play,” allowing you to transition your application seamlessly. While it isn’t as simple to migrate from a smart camera that delivers the required throughput to one with the same throughput but a higher resolution, it is still possible with a smart camera.


This type of migration is accomplished by transitioning the solution file and making some adaptations to the tool set used for the application. What if you’re forced to replace your smart camera due to damage? First of all, as you decide whether or not to choose a smart camera system and which system to choose, look for a smart camera designed for the harsh environment common in industrial deployments. If damaged, keep in mind that smart cameras are less expensive and easier to replace than traditional PC-based machine vision systems. Any smart camera you choose should provide standard communication protocols so that it can integrate seamlessly into your network. Thanks to embedded lenses, sensors, processors, software, and I/O capabilities, smart cameras simplify the deployment of machine vision systems as they lower system cost overall. Since smart cameras take some of the guesswork out of component selection and system design, they allow users to focus on what is most important: the requirements of the application itself. Keep the application goals you’ve defined top of mind, share them with your vendor or integrator partner, and test, test, test. Only with this approach will you ensure that you’re investing in the machine vision solution that is right for you. www.teledynedalsa.com




CONTRIBUTION

HOW A NASA FACILITY IS DIGITIZING OVER 90,000 PLANETARY MISSION IMAGES University of Arizona uses Matrox Imaging OCR software to read text-field data from Surveyor missions in record time and with perfect accuracy

The University of Arizona’s Lunar and Planetary Laboratory (LPL) is home to the Space Imagery Center, a NASA Regional Planetary Image Facility. Founded in 1960, LPL was one of the few places engaged in studies of the solar system at that time. In 2015, NASA partnered with the University of Arizona, providing funding to digitize the film images and data from the Surveyor moon landers that have been in storage since the 1960s. The goal is to create an archive for inclusion in the NASA Planetary Data System (PDS), a collection of data products from NASA planetary missions. As John Anderson, senior media technician at LPL, describes it, his “focus and primary area of responsibility is the digital recording of the images, extracting and decoding the encoded image data optically recorded on each film frame, and processing the pictures for viewing in a digital format.”

Raw materials Between 1966 and 1968, the five successful Surveyor missions returned over 92,000 individual images of the moon's surface. Film images were created by focusing a 70 mm film camera on a precision CRT display monitor and photographing the displayed frames onto special recording film.


In the 50 years since, the computer files and video tape records have long disappeared or become obsolete—the only existing copies of the images are the film rolls. Many frames from the Surveyor missions had seemingly legible text, which the operators initially thought could easily be read by conventional optical character recognition (OCR) software. They soon discovered that the characters in the text were dot matrix, similar to old printers using 7x9 teletype-style characters, making it a challenge to find OCR software capable of accurately reading the text fields. A comprehensive OCR solution was needed.
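To see why dot-matrix text defeats off-the-shelf OCR, and the sort of preprocessing commonly tried before recognition, consider the short Python/OpenCV sketch below. It is a generic illustration only, not the Matrox solution described in this article; it simply merges the separate dots of each glyph into solid strokes with a morphological close before the image would be handed to an OCR engine.

```python
# Generic illustration (not the Matrox solution): dot-matrix glyphs are clusters
# of separate dots, so a morphological close is often used to merge the dots into
# solid strokes before the image is passed to an OCR engine.
import cv2
import numpy as np

def prepare_dot_matrix_text(gray):
    # Binarise; the text is assumed bright against a darker film background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Close the small gaps between neighbouring dots so each character becomes
    # a connected stroke; the kernel size depends on the dot pitch in pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel, iterations=2)

# Synthetic stand-in for a cropped text field from a scanned film frame.
frame = np.zeros((60, 220), dtype=np.uint8)
cv2.putText(frame, "SURVEYOR 5", (5, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, 255, 1)
cleaned = prepare_dot_matrix_text(frame)
```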

A stellar solution This is where Matrox® comes in. Anderson notes, “Lorne Trottier, co-owner of Matrox, saw an article in Planetary Report about the NASA PDS project. He reached out to the university through Arnaud Lina, director of research and innovation at Matrox Imaging, offering assistance using Matrox’s OCR software to read LPL’s text information. [LPL] selected some cropped images to upload for a test and the results were amazing. It was very encouraging, especially with the failure of other OCR products to read the human readable text (HRT).”

Mission control The overall project involves creating a searchable archive that will outlast conventional physical media repositories. Given the possible long-term reference potential of the images and data, there is need for careful and accurate treatment of the resources. The workflow comprised an image scanning system from Stokes Imaging. The Stokes scanning system captured between four and eight frames per minute as high-resolution TIFF images. At the conclusion of the scanning phase, LPL found themselves with over 92,000 individual images. Operator interaction was intensive during the original scanning process. While the Stokes Imaging system was automated, the film itself was not uniform in spacing, indexing, exposure, or processing. Once scanned, Adobe® Photoshop® and MATLAB software were used to pick out the details and create large composite mosaics from the image files. The process also required manual error checking since the decoding of the dot-field data relied on calibration lookup tables created from the original 1966 pre-launch test data.



CONTRIBUTION

We have liftoff

Digitizing over 90,000 NASA Mission Images with Matrox Imaging OCR Software University of Arizona’s Lunar Planetary laboratory uses Matrox imaging OCR software to read text-field data from Surveyor missions

The project began in February 2015 with the assembly of the Stokes Scanner, and continues to process, catalog, and data-mine the information contained within the images. Even though there are sprocket perforations on the film stock, the original recording transport was sprocket-less, resulting in inconsistent frame spacing as well as frames drifting with respect to the edge perforations. The team at LPL were unable to determine a consistent film advance, and with each new roll of film, the spacing of the frames and lateral positioning of the image shifted. This resulted in overall images with text in different places, as well as some images tainted with artifacts. Moreover, the data fields have HRT with varying number of characters. Matrox’s solution—based on one of its efficient and accurate OCR software tools—beautifully addressed the problem of reading dot matrix characters, and reduced the time expenditure to a few minutes per roll.

The initial review of the Matrox OCR solution showed an almost perfect read from nearly 4,500 different image files. For example, for roll 1 of Mission 5, the Matrox OCR solution scanned 846 files, reading 15,191 individual fields for a staggering 99.77% accuracy. Rolls 2 and 9 of Mission 5, were even better, yielding respective 99.92% and 100% accuracy rates.

Looking to the future The University of Arizona Lunar and Planetary Laboratory Space Imagery Center, a NASA Regional Planetary Image Facility, serves as the repository for many images and resources from all NASA missions. To date, the Matrox software has helped tackle data from Surveyor 5, and will prove a valuable tool during the catalogue and error check of data from Surveyor 6 and 7, along with other mission materials from NASA projects and explorations.

Conclusion The Matrox OCR software has been an instrumental addition to the archiving project. Continued use of the system will accelerate the recording of text information from the Surveyor image files, enhance the accuracy of the metadata, and streamline what can be a very labor intensive and tedious task. Anderson notes, “Compared with accuracy rates of 75% to 85% achieved with the original approach, there is no doubt as to the better result. Our project has been greatly enhanced and the progress of reading and cataloging the data with high accuracy would not have been possible without the gracious assistance of the Matrox team.”




PUBLIC VISION

STEMMER IMAGING IPO

As the main stock markets continue to roar, one of the sector’s leading lights is about to go public. Editor Neil Martin looks at the backdrop.

As I write, all the main stock markets are at record highs as 2018 gets off to a running start. The global economy is doing well, job numbers are rising, or at least steady, and there’s a great deal of confidence in the machine vision sector. Everyone knows, of course, that what goes up must come down, but when that does happen we all hope for orderly stock market corrections and not crashes. As central bankers plot to raise interest rates in an effort to hold off inflation and wean us off quantitative easing, cash will become tighter and this will hurt equity markets. That, however, lies in the future. All in all, there could not be a better time for a STEMMER-IMAGING IPO. They have new owners, a sound track record and a market which is fizzing with growth. The management team appears up to the challenge and, when 49% of the company is offered to the public, demand should be healthy. The only thing which could put a spoke in the wheel is a stock market crash, which could turn a fair valuation into a cheap give-away, at which point the management, and its advisers, could decide the timing is not right.


“THERE COULD NOT BE A BETTER TIME FOR A STEMMER-IMAGING IPO. THEY HAVE NEW OWNERS, A SOUND TRACK RECORD AND A MARKET WHICH IS FIZZING WITH GROWTH”

I would guess that everyone behind the IPO on the Frankfurt Stock Exchange is hoping that they have got their timing just right (the official statement said first half of 2018, which I guess translates to as soon as possible) and that the shares will be priced at the top end of the range. Valuing a company’s shares during an IPO is an art form. STEMMER IMAGING CEO Christof Zollitsch summed it up nicely: “STEMMER IMAGING has grown very successfully over the past few years. We are now in a very good position to profit from the continued growth of the market for digital machine vision.” The official IPO statement outlined the key points: “At the time of publication of this announcement, SI Holding GmbH (Munich, Germany), a PRIMEPULSE Group company, holds all the shares in STEMMER IMAGING AG.

It will continue to hold at least 51 percent of the shares in the company after the IPO capital increase is carried out and the existing shares are sold in a secondary offering, and in the event that the greenshoe option is exercised. The principal stockholder will undertake not to sell any shares for a period of six months after the start of quotation, and only to sell shares with the consent of the issuing bank for a further six months. “The company expects issue proceeds of around EUR 50 million from the placement of new shares in connection with the IPO. In addition, shares held by the existing stockholder will be sold in a secondary offering. The stockholder will make further shares available in a greenshoe option.” The public offer will only be made in Germany, and Hauck & Aufhäuser Privatbankiers Aktiengesellschaft will act as the sole global coordinator and sole bookrunner. So the stage is now set for an IPO which reflects the current bullish mood being felt throughout the sector.



Spend a day with us and transform your productivity

Our Switch to Robots events give you all you need to help you decide whether robots are right for your shop floor. Over one day, we cover everything from how to identify whether you need a robot through to how to justify an investment. We’ll also show you how today’s robots are easier than ever to set up and program, with a hands-on training session. Visit http://bit.ly/S2RFeb18 to sign up for our free, no-obligation Switch to Robots event at our Milton Keynes training centre on Thursday 22nd February.


BUSINESS STORIES

ENTNER ACQUIRES 3D-ONE

Entner Electronics has acquired 3D-ONE, a maker of embedded vision systems. Founded in 2003 and based in Sulz, Austria, Entner has developed and manufactured a range of cutting-edge camera modules with integrated motorized optics and advanced interface products, all with onboard processing capabilities. Its products find uses in low vision systems, robotics, medical and clinical systems, industrial vision, special-purpose cameras and intelligent traffic systems. Entner announced that it had reached an asset purchase agreement to acquire the business activities of 3D-ONE, and both companies will continue to operate under the name Entner Electronics.

In 2007, 3D-ONE was spun out from cosine measurement systems, a specialist in the development of high-end remote sensing equipment, sensor networks and X-ray optics for applications in space, aerospace and industrial systems. Today 3D-ONE specialises in the custom development of embedded vision systems for integration in medical devices, advanced mapping systems, special-purpose cameras, food processing systems and remote sensing equipment. 3D-ONE’s embedded solutions concentrate on multi-sensor imaging, imaging spectroscopy and 3D stereoscopic imaging. Entner and 3D-ONE began working together some time ago with the development of the world’s first stereoscopic camera with motorized optics. This was the start of several joint development projects for application-specific imaging solutions that successfully entered series production.

Founder and CEO of Entner Electronics Thomas Entner said: “We’re excited to integrate the expertise of 3D-ONE into Entner Electronics as together we form a powerful supplier of advanced imaging solutions for the embedded vision market. The combination of 3D-ONE’s know-how on embedded image processing and Entner’s capability to develop compact camera modules with motorized optics and FPGA- and SoC-accelerated processing hardware enables us to come up with highly integrated solutions in a short period of time. The integration of the two companies will benefit customers, as all competences needed to take full responsibility for the development, integration, volume production and lifecycle management of compact and highly integrated camera modules with motorized optics and embedded processing capabilities are now available under a single roof.”

Managing Director of cosine Marco Beijersbergen added: “The integration of 3D-ONE’s activities with Entner Electronics is a logical step in the collaboration between Entner and cosine. We have been working closely together for many years, and one market proposition will enable the customer to be served even more smoothly. cosine will continue working with Entner Electronics in the development of cosine measurement systems and software customization.”

Photo: Wilhelm Stemmer with managing directors Christof Zollitsch and Martin Kersting



BUSINESS STORIES

NORTH AMERICAN AUTOMATION MARKET SHATTERS RECORDS IN 2017

The automation market in North America set new records in the first nine months of 2017, according to figures from the Association for Advancing Automation (Ann Arbor, MI, US). The results revealed records set in robotics, machine vision, motion control, and motor technology. The latest findings show that:

Robotics

For the first nine months of 2017, 27,294 robots valued at approximately $1.473 billion were ordered in North America, the highest level ever recorded for that period in any year. These figures represent growth of 14% in units and 10% in dollars over the first nine months of 2016. Automotive-related orders are up 11% in units and 10% in dollars, while non-automotive orders are up 20% and 11%, respectively. On the shipments side, 25,936 robots valued at $1.496 billion were shipped in North America during the first nine months; these record-high figures represent growth of 18% in units and 13% in dollars over the same period in 2016. Automotive-related shipments grew 12% in units and 9% in dollars during that time, with non-automotive shipments increasing by 32% and 22% in units and dollars, respectively.

Vision & Imaging

The North American machine vision market continued its best-ever start to a year in 2017, with growth of 14% overall to $1.937b, 14% in systems to $1.657b, and 14% in components to $271m. Each of those three categories set new records in the first nine months of the year, and every individual product category saw positive year-over-year growth for the same period. Some notable growth rates were Smart Cameras (21% to $295m), Lighting (20% to $54m), Software (16% to $15m), and Component Cameras (14% to $143m).

The hottest industries were Metals (54%), Automotive Components (42%), and Food and Consumer Goods (21%).

Experts believe lighting, optics, imaging boards, and software will trend up over the next six months, while camera sales will remain flat. Application-specific machine vision (ASMV) systems are expected to increase, with smart cameras remaining flat over the same period. The U.S. manufacturing sector expanded in the second quarter (average PMI of 53.0) and is expected to remain strong through the end of the year.

Motion Control & Motors

Total motion control shipments increased by 10% to $2.6 billion, marking the industry’s best nine-month result since these figures began being tracked. The largest product category is Motors (38% of shipments), followed by Actuators and Mechanical Systems (18%) and Electronic Drives (17%).

The fastest-growing categories in the first nine months of 2017 were Motion Controllers (24% to $147m), Sensors & Feedback Devices (20% to $116m), AC Drives (15% to $295m), Actuators & Mechanical Systems (13% to $479m), and Motors (11% to $1b).

The majority of suppliers believe that order and shipment volumes will increase in the next six months, while most distributors expect orders and shipments to be flat over the same period.



BUSINESS STORIES

UK BUDGET AIMS TO HELP TECH SECTOR

As UK Chancellor Philip Hammond gave his autumn budget speech, it looked as though tech companies were in for some largesse. Partner and Head of Global Mobility Tax Services at Blick Rothenberg Mark Abbs said: “A long-term vision and strategy to put the UK at the forefront of global digital innovation is absolutely critical for the long-term welfare of the country. It is even more critical, however, that we continue to properly invest in and support the digital sector, and that innovation is not treated by the Government as short-term, attention-grabbing headlines.”

The firm’s Partner and Head of Corporate Tax Genevieve Moore said: “Welcome announcement on increasing R&D tax credits. This shows the Government continues to be committed to encouraging innovation in Britain. Enterprise Investment Scheme (EIS) tax relief to be increased to support investment in knowledge-intensive companies is a positive step and part of a package of announcements to increase investment in technology and innovation. A Budget encouraging technology and growth for Great Britain.”

“Increase in the EIS investment limits for knowledge-intensive companies is welcome news. As ever, the devil will be in the detail but, properly targeted, this could encourage private investment in innovation,” said Corporate Tax Director Helena Kanczula.

Partner Frank Nash said: “The Chancellor is keen to promote investment in the ‘super large’ rail projects. Those plans must include adequate car parks equipped with electric charge points, to support the new technology.”

On the R&D question (the Government is allocating a further £2.3bn to investment in research and development and will increase the R&D tax credit to 12%), Dominic Keen of Britbots, an organisation that supports UK-based robotics businesses, said: “The Chancellor’s additional support of innovation within sectors such as electric vehicles and AI today is welcome news indeed. AI will be a key element of the robotics sector, and this country is home to some world-class robotics technologies. This government backing will mean that robots can become job-makers, not job-takers, for the UK economy.”

ECOMACHINES VENTURES LAUNCHES EMV EIS FUND I

EcoMachines Ventures, the specialist investor in energy and industrial high-tech companies (including machine vision), has launched the EMV EIS Fund I, targeted at professional investors. It will focus on investments in UK Enterprise Investment Scheme-eligible companies disrupting the energy, industrial high-tech, resource efficiency and transport sectors. It is aimed at investors seeking access to growth opportunities in ambitious UK companies looking to scale up, internationalise and partner with global corporations. The focus is on sectors where there are significant opportunities for the adoption of new technologies, such as robotics, artificial intelligence and industrial IoT. The fund is managed by Sapphire Capital Partners, a specialist EIS and SEIS investment fund manager, with EcoMachines Ventures acting as exclusive investment advisor. The custodian is Mainspring Nominees, which focuses on UK EIS and other funds and has more than £1bn under custody. The fund is targeted at the 2018/2019 tax year, with a close of up to £10m by March 2018.


Managing Director of Mainspring Nominees Stephen Geddes said: “We are thrilled to be working with EcoMachines on their EMV EIS Fund I and are excited by the opportunity to be involved with such an innovative new EIS offering. We have built a strong relationship with the team in the run-up to the launch and look forward to supporting EcoMachines Ventures as they grow.”

MD of EcoMachines Ventures Dr Ilian Iliev commented: “We expect the EMV EIS Fund I will provide EIS investors with the opportunity to invest in some of the UK’s most ambitious B2B companies with industry-leading technologies and the potential to become international leaders in their market segment, and who can demonstrate traction with major global corporations as a way of accelerating growth and obtaining routes to market. We will leverage our expertise, international investor base, and relationships with some of the leading global corporations to the benefit of the fund and its future investments.”

Managing Partner of Sapphire Capital Partners Boyd Carson added: “We are delighted to partner with the EcoMachines Ventures team. As a firm specialising in the B2B space, EMV brings a depth of experience and capability to the EIS space that we expect will position the EMV EIS Fund I as a unique growth investment opportunity in the EIS investment landscape.”



Cast a bigger shadow. We plan. We create. We write. We design. We develop. But best of all, we get you noticed. Give us a call if you need some Wow in your business.

thewowfactory.co.uk

01622 851639

THE WOW FACTORY


RAISE YOUR INSPECTION IQ

Solving today’s challenges in product quality inspection takes brain power. That’s why experienced engineers choose the 3D sensor with a mind for smarter quality control.

Visit www.factorysmart.com


