CHANGE: A BIG ANNOUNCEMENT
THE FUTURE OF VISION
IOT PIONEERS
ISSUE 26 - JUNE 2021
mvpromedia.com MACHINE VISION & AUTOMATION
Generic lighting products falling short? Don’t settle. We’re here. Lighting systems are often as diverse as the objects they inspect; don’t compromise with a limited set of options. At Advanced illumination, our ability and desire to build an individualized solution is what separates us from other lighting vendors. After all, we built a company around it. Our approach allows us to tailor solutions quickly, often with little or no development expense. We also offer millions of build-to-order light configurations, all delivered in one to two weeks. Contact us today with your unique inspection challenge.
advancedillumination.com | 802.767.3830 | info@advancedillumination.com
MVPRO TEAM
MVPro: B2B digital platform and print magazine for the global machine vision industry
Alex Sullivan, Publishing Director, alex.sullivan@mvpromedia.com
Cally Bennett, Group Business Manager, cally.bennett@mvpromedia.com
Joel Davies, Writer, joel.davies@mvpromedia.com
Sam O'Neill, Senior Media Sales Executive, sam.oneill@cliftonmedialab.com
Jacqueline Wilson, Contributor, Jacqueline.wilson@mvpromedia.com
Becky Oliver, Graphic Designer
Visit our website for daily updates: mvpromedia.com

CONTENTS
4 EDITOR'S WELCOME
5 INDUSTRY NEWS - Who Is Making the Headlines?
16 ANNOUNCEMENT - MVPro is Changing
18 FIVE IOT PIONEERS - The Winners of the 12th IoT Innovation World Cup
24 XILINX PREDICTIONS - The Trends to Watch in 2021
26 30 YEARS OF MATERIALISE - Driving Sustainable Practices
30 ADVANCED ILLUMINATION - Safe Food Manufacturing
31 EDMUND OPTICS - Challenges of Thermal Changes on Imaging Optics
32 EURESYS - A New Suite of Powerful Image Analysis Libraries
34 PROPHESEE - An Interview With Luca Verre
40 MVTEC Q&A - Dr Maximilian Lückenhaus on the Future
42 CHANGING LANES - The Future of Travel
46 MANUFACTURING ICE CREAM - Mitsubishi Does Dessert
MVPro Media is published by IFA Magazine Publications Ltd, 3 Worcester Terrace, Clifton, Bristol BS8 3JW Tel: +44 (0)117 3258328 © 2021. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.
WELCOME

Change comes from hope and hope comes from change. There's plenty of hope going around as, dare I jinx it, Europe re-opens its borders for the last time. Naturally, we at MVPro wonder, is change following hope for the industries we cover? It's what we aim to discover in the 26th edition of MVPro magazine, the theme of which is change.

The biggest change, at least for us and you, is that MVPro is growing. Like the caterpillar that becomes the butterfly, we are evolving into Automate Pro Europe. Automate Pro Europe is a brand new, one-stop hub covering Europe's machine vision, computer vision & robotics sectors. You can get all the ins and outs on the next page in our official announcement, but it includes two new dedicated magazines and all the content you need to keep you in the know.

In our featured interview, I speak with Luca Verre, CEO and Co-founder of Prophesee, the French imaging company and leader of the revolutionary, event-based sensor technology. In a Q&A, Dr Maximilian Lückenhaus, Director Marketing + Business Development at MVTec Software GmbH, lays out his predictions on the future of machine vision. In "Changing Lanes", I peer into the crystal ball to examine the future of transport, which includes some of the finest European start-ups making self-driving cars, flying taxis and magnet-powered trains a reality.

For "Five IoT Pioneers", I take a deep dive into the five winners of the 12th IoT World Cup as selected during last month's Hannover Messe. Bee monitoring, smart plugs, life-saving biometrics, a battery made of paper and AI that knows when a machine will break by listening to it all feature.

We also hear from Materialise, who look back at the past three decades, highlighting how the industry has advanced. Mitsubishi Electric showcases how it helped modernise a forty-year-old company's manufacturing of mochi ice cream. Euresys discusses its new EasyLocate technology, whilst Xilinx explains why robots, telehealth and open embedded systems are the trends to watch in 2021.

We hope you enjoy this forward-thinking and final edition of MVPro magazine. We will be back soon with the first edition of Automate Pro Europe, changed and fully charged. Take care and remember to be hopeful.
Joel Davies Writer
INDUSTRY NEWS
COCA-COLA STREAMLINES WITH ROBOTIC AUTOMATION SOFTWARE
Happiest Minds Technologies Limited, a digital transformation and IT solutions company, has announced that it has successfully executed a digital transformation project for Coca-Cola Bottling Company United, streamlining its order management with robotic process automation (RPA) in Microsoft Power Automate.

Coca-Cola Bottling Company United (Coca-Cola United) has a long history of supplying Coca-Cola products directly to retailers and restaurants. When Coca-Cola introduced its new Freestyle vending machine, Coca-Cola United, a privately owned company that isn't owned by Coca-Cola, was challenged to streamline its order and invoicing procedures. It did so using Microsoft Power Automate robotic process automation (RPA).

"While building this solution, we resurrected high-value strategic projects that we couldn't tackle before because of the constraints of legacy apps. We feel empowered to take advantage of any future opportunities that the business provides us", said Bob Means, Director of Business Solutions.

Desktop flows in Power Automate automate repetitive processes in Windows and web applications — a perfect fit for high-volume but mundane data entry and transfer. Coca-Cola Bottling Company United collaborated with Happiest Minds to create a master automated service agent they've dubbed 'Asa,' which consists of several bots. Built on Microsoft Azure and Microsoft Power Platform, 'Asa' uses Azure Key Vault to help secure and control passwords and other sensitive data, and it relies on Azure DevOps for continuous integration and continuous delivery (CI/CD).
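The article names Azure Key Vault as the place where 'Asa' keeps its passwords. As a rough, assumption-level illustration of that pattern (not Happiest Minds' actual code; the vault URL and secret name below are invented), a bot step written in Python with Microsoft's azure-identity and azure-keyvault-secrets packages might fetch a credential like this:

```python
# Hypothetical sketch of the Key Vault pattern described above; vault and secret names are invented.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://example-orders-vault.vault.azure.net"  # hypothetical vault

def get_order_system_password() -> str:
    # DefaultAzureCredential resolves a managed identity, CLI login or environment variables,
    # so no password ever needs to live in the bot's source or configuration.
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=VAULT_URL, credential=credential)
    return client.get_secret("sap-order-bot-password").value

if __name__ == "__main__":
    secret = get_order_system_password()
    print("Retrieved a credential of length", len(secret))
```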
The new, simplified process frees the dedicated CRM agent, allowing orders to come in from all channels: inbound and outbound call centre agents, field service sales representatives at customer sites, and a customer self-service portal. Using Power Virtual Agents, Happiest Minds helped the company develop a bot that served as an intelligent front end to the new solution. Power Automate was used to drive the entire process. And by interoperating with other Azure services, such as Azure Key Vault, the system was able to securely access both internal and external systems and orchestrate the entire order process, from purchase order to reconciliation, in SAP.

"We are very excited about this solution", said Kaylan Cannon, Customer Service Manager, Coca-Cola Bottling Company United. "It will dramatically reduce labour costs, minimize the various points of error in our current solution, and will allow us to rapidly expand the local Freestyle campaign to better support our customers".

Happiest Minds Technologies Limited is a Mindful IT company that enables digital transformation for enterprises and technology providers by delivering customer experiences, business efficiency and actionable insights. They do this by leveraging a spectrum of disruptive technologies such as artificial intelligence, blockchain, cloud, digital process automation, internet of things, robotics/drones, security, and virtual/augmented reality. A Great Place to Work-Certified™ Company, Happiest Minds is headquartered in Bangalore, India with operations in the U.S., UK, Canada, Australia and the Middle East. MV
IBM CREATES FIRST CHIP SMALLER THAN STRAND OF DNA

The fingernail-sized chip fits 50 billion transistors on it and could change the future of technological development. IBM has unveiled a breakthrough in semiconductor design and process with the development of the world's first chip announced with 2 nanometre (nm) nanosheet technology. Semiconductors play critical roles in everything from computing to appliances, to communication devices, transportation systems, and critical infrastructure. Demand for increased chip performance and energy efficiency continues to rise, especially in the era of hybrid cloud, AI, and the Internet of Things. IBM says its new 2 nm chip technology helps advance state-of-the-art developments in the semiconductor industry, addressing growing demand. It is projected to achieve 45 per cent higher performance, or 75 per cent lower energy use, than today's most advanced 7 nm node chips [i].

The potential benefits of these advanced 2 nm chips could include:
• Quadrupling cell phone battery life, only requiring users to charge their devices every four days [ii].
• Slashing the carbon footprint of data centres, which account for one per cent of global energy use [iii]. Changing all of their servers to 2 nm-based processors could potentially reduce that number significantly.
• Drastically speeding up a laptop's functions, ranging from quicker processing in applications to assisting in language translation more easily, to faster internet access.
• Contributing to faster object detection and reaction time in autonomous vehicles like self-driving cars.

"The IBM innovation reflected in this new 2 nm chip is essential to the entire semiconductor and IT industry", said Darío Gil, SVP and Director of IBM Research. "It is the product of IBM's approach of taking on hard tech challenges and a demonstration of how breakthroughs can result from sustained investments and a collaborative R&D ecosystem approach".

The company's semiconductor development efforts are based at its research lab located at the Albany Nanotech Complex in Albany, NY, where IBM scientists work in close collaboration with public and private sector partners to develop logic scaling and semiconductor capabilities. IBM says this collaborative approach to innovation makes IBM Research Albany a world-leading ecosystem for semiconductor research and creates a strong innovation pipeline, helping to address manufacturing demands and accelerate the growth of the global chip industry.
IBM's legacy of semiconductor breakthroughs also includes the first implementation of 7 nm and 5 nm process technologies, single-cell DRAM, the Dennard Scaling Laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator technology, multi-core microprocessors, High-k gate dielectrics, embedded DRAM, and 3D chip stacking. IBM's first commercialized offering including IBM Research 7 nm advancements will debut later this year in IBM POWER10-based IBM Power Systems.

Increasing the number of transistors per chip can make them smaller, faster, more reliable, and more efficient. The 2 nm design demonstrates the advanced scaling of semiconductors using IBM's nanosheet technology. Developed less than four years after IBM announced its milestone 5 nm design, this latest breakthrough will allow the 2 nm chip to fit up to 50 billion transistors on a chip the size of a fingernail. More transistors on a chip also mean processor designers have more options to infuse core-level innovations to improve capabilities for leading-edge workloads like AI and cloud computing, as well as new pathways for hardware-enforced security and encryption. IBM is already implementing other innovative core-level enhancements in the latest generations of IBM hardware, like IBM POWER10 and IBM z15.

IBM is a leading global hybrid cloud, AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain a competitive edge in their industries. Nearly 3,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. MV

[i] Based on the projected industry-standard scaling roadmap.
[ii] Based on current usage statistics for 7 nm-based cell phones.
[iii] https://science.sciencemag.org/content/367/6481/984
SICK LAUNCHES FIRST VISION CAMERA WITH DEEP LEARNING INSIDE

With its vision camera, SICK aims to make it simple to create custom quality inspections of complex or irregular-shaped goods, packaging and assemblies, especially where they may have previously defied automation using traditional systems. The SICK Intelligent Inspection Deep Learning App runs on SICK's newly launched Inspector P621 2D programmable vision camera. The all-in-one package enables machine builders and end-users to set up vision classifications using artificial intelligence in a fraction of the time and cost it would take to program challenging inspections based on recognising the preset rules and patterns of traditional vision systems.

Where it has previously been very challenging to achieve consistently robust and repeatable quality inspections, the company says with its Intelligent Inspection they can now be mastered with high levels of reliability and availability. Automation is therefore now practical and affordable for complex imaging tasks such as sorting fresh fruit and vegetables, checking the orientation of timber profiles by recognising the annual ring structure, checking leather car seats for creases or flaws, or inspecting the integrity of solders in surface mount assemblies.
Neil Sandhu, SICK's UK Product Manager for Imaging, Measurement and Ranging explained: "By embedding the Intelligent Inspection App onto SICK's Inspector P621 deep learning camera, SICK has made it possible for users to purchase a ready-made package that uses artificial intelligence to run complex vision inspections with ease.

"Users are guided through an intuitive process that teaches the system how to recognise 'good and bad' examples using SICK's specially-optimised neural networks in the cloud".

Using the SICK Inspector P621's in-built image capture tool, users begin by collecting example images of their product in real production conditions. Guided step-by-step through the intuitive graphic interface, the system prompts them to sort the images into classes. Using SICK's dStudio service, the pre-sorted images are uploaded to the Cloud where the image training process is completed by the neural network. The user can then apply further production images to evaluate and adjust the system. Once the user is satisfied, the custom-trained deep learning solution is downloaded to the SICK Inspector P621 camera where it can begin to take decisions automatically with no further Cloud connection necessary. Results are output to the control system as sensor values and digital I/O.

The image inference is carried out directly on the device, so there is no need for an additional PC. As the system training is done in the Cloud, there is also no need for separate training hardware or software, saving on implementation time and cost.

"Because it runs directly on the camera, the SICK Intelligent Inspection app does not require any additional hardware", Sandhu continued. "So, users can automate complex vision inspections for a much lower cost of ownership. They can now consider automating quality inspections of products or goods that have just proved too difficult previously.

"Even better, the system can be set up in no time at all. Many users will be able to manage this process themselves. However, if needed, SICK is also offering services to support customers through the feasibility, commissioning and neural network training process".

Users also have access to a large set of traditional machine vision tools installed as standard on the SICK Inspector P621, so they can extend the functionality of their quality inspection further. Developers working in SICK's AppSpace can also use SICK's Nova software tools for further custom development and to create their own SensorApps. Starting with the Intelligent Inspection on-camera package still saves significant development time and cost.

SICK Deep Learning is also now available as a licensed option for all InspectorP600 2D vision sensors. Initially available with image classification, the SICK Intelligent Inspection App will be extended to incorporate anomaly detection, localisation and segmentation functions later in 2021.

Founded in 1946, SICK has over 50 subsidiaries and equity investments, as well as numerous agencies around the world. In the 2019 fiscal year, SICK had more than 10,000 employees worldwide and group revenue of around EUR 1.8 billion. MV
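SICK's dStudio training service and the Intelligent Inspection App are proprietary, but the 'good and bad' classification task they automate is standard supervised learning. Purely as a hedged sketch of that kind of workflow (this is not SICK's code; the folder layout, class names and training settings are assumptions, and it expects torchvision 0.13 or newer), a minimal two-class classifier could be trained like this:

```python
# Generic good/bad image classifier sketch -- NOT SICK's dStudio or Intelligent Inspection code.
# Assumes images are pre-sorted into inspection_images/good and inspection_images/bad (hypothetical paths).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("inspection_images", transform=tfm)  # classes come from folder names
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)      # pretrained backbone, torchvision 0.13+
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))    # e.g. good / bad

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                       # a handful of passes is enough for a sketch
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()

torch.save(model.state_dict(), "inspection_classifier.pt")  # weights ready to deploy for inference
```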
BIOMETRIC PASS TESTED FOR INTERNATIONAL AIR TRAVEL
IDEMIA, a global leader in Augmented Identity, is launching "Health Travel Pass", a worldwide interoperable solution to check international health certificate standards, including PCR test results, immunity tests and vaccination certificates. The aim is to curb the risk of incoming travellers spreading Covid, which in turn helps global business recover, especially airlines that were affected by the pandemic.

Passengers securely receive their health certificate stamped with the International Civil Aviation Organization's (ICAO) Visible Digital Seal, guaranteeing worldwide interoperability with secure encoding using data encryption. The Pass ensures all travellers are covered and is available in various forms: a downloadable Government-issued health app, a PDF document sent by email, or a sticker to put on their passport's visa pages. As the Pass is linked to a traveller's identity, it cannot be transferred to anyone else and cannot be copied without appropriate authority. The Pass can also be incorporated into the Digital Travel Credential scheme tested by some countries in conjunction with the ICAO.

Before check-in, border control or boarding, a biometric match is triggered between the passenger's face and their passport photo, and the health certificate is verified. IDEMIA says this process guarantees convenience for travellers and security staff and cuts airport congestion when international air travel returns to normal.

"We are all very proud at IDEMIA to offer people a safe way to travel around again, with a solution which is secure, user-friendly, reliable, and totally GDPR compliant. Our Health Travel Pass is a significant step forward in the resumption of international travel", said
Philippe Barreau, IDEMIA's Executive Vice President in charge of Public Security & Identity.

In keeping with data privacy by design, IDEMIA's Health Travel Pass complies with data privacy regulations. As such, passengers keep total control over their own health data. IDEMIA's Health Travel Pass is being tested in the Netherlands.

IDEMIA is well-placed to provide a service such as the Health Travel Pass, having recently announced that its facial recognition algorithm "1:N" came top among 75 tested systems and 281 entrants in NIST's latest Face Recognition Vendor Test (FRVT). FRVT measures how well facial recognition systems work for civil, law enforcement and security applications, covering accuracy, speed, storage, and memory criteria, and is acknowledged to be the gold standard of the global security industry.

The company also revealed it is collaborating with Microsoft to support its new Microsoft Azure Active Directory (Azure AD) verifiable credentials identity solution, enabling organizations to confirm information about individuals, such as their education, professional or citizenship certifications, without collecting or storing their data.

IDEMIA is a global leader in Augmented Identity, which the company says provides a trusted environment enabling citizens and consumers to perform their daily critical activities (such as pay, connect and travel), in the physical as well as digital space. The company provides Augmented Identity for international clients from the Financial, Telecom, Identity, Public Security and IoT sectors with close to 15,000 employees around the world, servicing clients in 180 countries. MV
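The ICAO Visible Digital Seal mentioned above is, in essence, a digitally signed payload that a border system can check against the issuing authority's public key; the biometric face-to-passport match is a separate step. As a generic illustration only (not IDEMIA's implementation, and assuming an ECDSA-signed payload), a verification helper using Python's cryptography library might look like this:

```python
# Generic sketch of checking a digitally signed certificate payload.
# This illustrates the idea behind a signed health credential, not IDEMIA's product code.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def certificate_is_authentic(payload: bytes, signature: bytes,
                             issuer_public_key: ec.EllipticCurvePublicKey) -> bool:
    """Return True only if the payload was signed by the issuing authority's key."""
    try:
        issuer_public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```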
THE FRENCH START-UP THAT'S CURVING IMAGING SENSORS

SILINA, a deep tech start-up in microelectronics, claims to bring a "real paradigm shift" to the imaging industry by curving imaging sensors. SILINA has been developing a technology that enables the curving of hundreds of imaging sensors at the same time, offering new perspectives in the design of cameras, from high volume to niche markets.

They say that in nature, most vision systems use curved retinas, like human eyes. Curved retinas enable the use of simple optical lenses with only one lens, the crystalline, providing a wide field of view for outstanding image quality, making the eyes very compact. Instead, electronic imaging systems use flat imaging sensors which make the lens complex, using expensive optical elements. This degrades the optical performance and capabilities of the camera and increases the mass/volume budget and overall cost of any camera and optical system.

SILINA says that curved imaging sensor technology is the next major innovation for the imaging industry, disrupting the way vision systems are designed. The company suggests it overcomes hardware limitations that no software can solve and enables a whole new generation of cameras, providing improvements on four key criteria: increased image quality and detection capability, and reduced cost and bulk of cameras.

Wilfried Jahn, CTO, and co-founder of SILINA says they have been developing and optimizing a curving process to scale and reach high-volume markets. As he explains: "Our innovation has been driven to unlock the technological barriers of scalability. Previously, this technology was limited to niche markets since the various solutions were limited to manual single-chip manufacturing processes, delivering a few tens of units during the past 20 years. Thanks to cross-pollination ideation of various professional experiences, we have been able to create a unique process to curve hundreds of chips at the same time. We can control all the parameters which make the process reliable and repeatable, reducing significantly the cost of production. One month after creating SILINA, we have curved 275 units of 1-inch CMOS imaging sensors at the same time, a world-first demonstration".

SILINA says its curving process is the same whatever the sensor format and technology, notably CMOS and CCD. It can be applied to Front-Side Illuminated (FSI), Back-Side Illuminated (BSI) sensors, and on various spectral bandwidths from ultraviolet, visible to infrared. This process enables low volume and high volume manufacturing, curving one sensor, several sensors, or a full wafer at a time. Also, various shapes can be obtained: spherical, aspherical, freeform, and custom shape on-demand.
This manufacturing process has also been developed to keep the same original packaging used for classic flat sensors, meaning that the mechanical architecture and electronic board remain the same, facilitating the integration of the technology on current production lines.

Michaël Bailly, CEO, and co-founder of SILINA details what the start-up offers: "Our services are offered to optical system designers and manufacturers, camera integrators and sensor manufacturers to support them in the improvement of their imaging system performance while reducing their cost of production. Our offer is made of two propositions: support in optical system design to integrate the curved sensor technology in their specific applications, and on-demand service to curve their imaging sensors. SILINA does not design or manufacture its sensors, but curves existing flat sensors. Finally, we plan to reach high volume markets via IP licensing".

SILINA's technology could bring a significant benefit for the imaging industries, including aerospace, defence, photography, automotive, smartphone, and others. The main value proposition is specific to each market segment and application: e.g. high image quality for smartphones, low mass/volume budget for aerospace and drones, better detection capability for automotive.

Bailly continues: "These needs drive our technological developments in several complementary aspects including the yield of production, shape accuracy and resistance to environmental constraints. In the end, each technological development for a specific application benefits the others. Microelectronics and photonics projects are defined by the European Commission as Important Projects of Common European Interest. Curved imaging sensor technology complies with this scope. It's a real asset for European companies, and SILINA will deliver it on a large scale".

MV
COMPUTER CHIP SHORTAGE TO LAST AT LEAST SIX MONTHS

In a recent BBC interview, Cisco's chief executive, Chuck Robbins, said the shortage, caused by increased demand and restricted production during the pandemic, is set to continue for at least half a year. Robbins is the latest leader of a major company to give his two cents on when the global computer chip shortage may end, following the heads of Intel and the Taiwan Semiconductor Manufacturing Company (TSMC), who both said recently that the issue may last two more years.

TSMC is the world's biggest contract manufacturer of computer chips and is spending $100bn to expand capacity over the next three years. This week TSMC's founder, Morris Chang, asked the Taiwanese government to "keep hold of it tightly", reasoning that the company is better positioned to make chips than the US or China, despite their government subsidies. Intel chief executive Pat Gelsinger countered, telling the BBC it was not "palatable" to have so many chips made in Asia.

Robbins, on the other hand, thinks, "It doesn't necessarily matter where they're made, as long as you have multiple sources".

The issue stems from the beginning of the pandemic, when companies cut their orders for chips, expecting the demand to dwindle. As a result, suppliers also reduced their output. However, the opposite happened. The demand for consumer electronics rose, whilst the supply fell. Many companies fell into the trap, including car manufacturers who cut their demand for semiconductors, which then led chip makers to lower production.

"Semiconductors go in virtually everything, and what happened was when Covid hit, everyone thought the demand side was going to decline significantly and in fact, we saw the opposite. We saw the demand side increase", Robbins noted.

The computer chip shortage is mostly due to Covid-19 delaying semiconductor production, but other factors have played a part. This includes a spike in demand caused by technological advances such as 5G, IoT, and AI, which have worsened the issue.

"Right now, it is a big problem," Robbins said, "because semiconductors go in virtually everything. We think we've got another six months to get through the short term. The providers are building out more capacity. And that'll get better and better over the next 12 to 18 months.

"What we don't want is to have consolidation where any of the risks that we may face could, frankly, result in the situation we're seeing today, whether it's weather-related disaster risks, whether it's single point of failure risk, whether it's geopolitical risks, whatever those are. We just need more options, I think, for where semiconductors are built".
One company attempting to solve the computer chip shortage is Intel, which recently announced a $20bn plan to expand production, including the creation of two new plants in Arizona. Cisco itself is also expanding its capabilities, recently acquiring Acacia Communications for $4.5bn which designs computer chips among other products. Robbins played down the possibility of Cisco stepping up to solve the shortage. “We’re not a semiconductor fab company, so it’s not a core competency for us to do that”, he said. “So we think that companies that play in this space are much better equipped, we’re working very closely with them”. MV
WOMEN'S TECHNOLOGY VISIONS FOR A MORE EQUAL FUTURE

Women worldwide are using tech to power movements, learn about and secure their rights, advocate for marginalized groups, and mobilize offline actions. But they say technology needs a transformation of its own: more women in decision-making roles, stronger laws to protect privacy and safety, lower-cost tech access, and digital skills training taught by women. These are some of the key findings and recommendations released in the #SheTransformsTech report.
At the beginning of the global COVID-19 pandemic, World Pulse surveyed grassroots women leaders and marginalized communities from around the world on key technology issues and opportunities. Their responses reaffirm the statistics that show that about half of the world's women remain offline and are globally underrepresented as users, makers, and leaders in the technology industry.

These inequities have been compounded by the pandemic's rapid digitization. The #SheTransformsTech report highlights actions that policymakers, tech companies, and governments can take to make technology work for women, sourced directly from those who are most impacted. The report's release coincides with Girls in ICT Day and is backed by a coalition of 27 partners, including EQUALS, Digital Impact Alliance (DIAL), World Wide Web Foundation, Vodafone Americas Foundation, and international women's rights networks.

The #SheTransformsTech report is the culmination of a year-long campaign and includes:
• Analysis of more than 530 responses from 60+ countries (400+ survey responses; 130+ personal narratives).
• Key findings and recommendations for policymakers, tech companies, and individuals related to digital empowerment, technology access, online safety, and more.
• Stories from women, in their own words, on how they utilize tech for good and how tech can do better to support women's leadership.
• Special highlights on the impact of COVID-19 on technology access and use; improving access for those with disabilities, and advancing digital skills training.

The #SheTransformsTech report uncovered facts like 91% of survey respondents agreed or strongly agreed with the statement: "Technology has made a positive impact on my life". Almost 50% report the quality of their internet connection is a barrier to accessing tech. 75% said they experienced some form of online harassment or abuse. Over 22% of those who had experienced harassment said they shut down an online account due to that harassment.

Along with chronicling the barriers and concerns that limit women's ability to fully benefit from technology, the report presents solutions for a way forward. "In this report, you will read the unheard voices and powerful perspectives of women and gender-diverse individuals from across the world", said World Pulse Founder and CEO Jensine Larsen. "Taken together, these voices call for nothing less than a fundamental realignment of the priorities of the technology industry and urge us to wake up to the vast leadership potential of women coming online globally to shape a better technology future for us all".
In the report, Leonida Odongo of Kenya wrote how COVID-19 awoke the world to a new era of technology and drove home the urgency of closing the digital divide. “My ideal world is where there is access to technology for everyone — where we can connect regardless of where we live”, she said. The findings of the report will be delivered to global decision-makers, technology leaders, and the United Nations as one of the inputs into the Beijing+25 #GenerationEquality campaign. In addition, World Pulse recently launched a technology hub on its social network so that women can connect and exchange technology-related stories and solutions. Recommendations are also informing new international cross-sector initiatives to expand global networks of women digital trainers, increase inclusivity in technology design, and use technology to strengthen women’s rights movements. “We need to centre those most impacted by the problems in creating the solutions”, says a survey respondent from the United States. Software developer Karen Mukwasi from Zimbabwe asserts, “It’s time we take up space and bring the revolution to the tech field”. MV
WORLD'S FIRST NEUROMORPHIC SENSOR LAUNCHED INTO SPACE

In March 2021, Rocket Lab, a leading launch provider and space systems company, successfully launched its 19th Electron mission. The 19th mission deployed six spacecraft to orbit for a range of government and commercial customers. The mission, named 'They Go Up So Fast,' also deployed Rocket Lab's latest in-house manufactured Photon spacecraft to build flight heritage ahead of the upcoming CAPSTONE mission to the Moon for NASA. Part of this mission and helping future missions take place is the iniVation DAVIS240C vision sensor, becoming the world's first-ever neuromorphic technology to reach space.

An Earth-observation satellite for BlackSky Global through Spaceflight Inc was deployed, as well as two Internet of Things (IoT) nanosatellites for Australian commercial operators Fleet Space and Myriota; a test satellite built by the University of New South Wales (UNSW) Canberra Space in collaboration with the Royal Australian Air Force; weather monitoring CubeSat for Care Weather Technologies; and a technology demonstrator for the U.S. Army's Space and Missile Defense Command (SMDC). The mission took the total number of satellites deployed to orbit by Rocket Lab to 104.

The mission, a collaboration between Western Sydney University's International Centre for Neuromorphic Systems (ICNS), UNSW Canberra Space and the Royal Australian Air Force (RAAF), brings together emerging technologies that deliver advanced capabilities in Earth observation, maritime surveillance, and satellite communications. The sensor included in the custom payload was part of the UNSW Canberra Space's M2 CubeSat satellite.

Dr Kynan Eng, CEO and co-founder at iniVation, emphasized both the ground-breaking possibilities and the collaborative nature of the mission. He said, "As a company focused on ultimate machine vision performance in challenging, power-limited environments, space is about as hard as it gets. We are very excited to see our technology taking the first small steps towards supporting long-term human activities in space.

"Such an achievement has been made possible through long-term collaborations with many team members and colleagues, both within the company and in other organizations, in particular, the ICNS and the Institute of Neuroinformatics at the University of Zurich and the ETH Zurich. We are looking forward to even deeper collaboration to bring this technology to commercial reality".

After Electron successfully launched to an initial 550km circular orbit, the rocket's integrated space tug or Kick Stage deployed its first five satellites to their individual orbits. The Kick Stage's Curie engine was then reignited to lower its altitude and deploy the final small satellite to a 450km circular orbit. With its relightable Curie engine, the Kick Stage is unique in its capability to deploy multiple satellites to different orbits on the same small launch vehicle.

Rocket Lab founder and CEO, Peter Beck, said: "Congratulations and welcome to orbit for all of our customers on Electron. Reaching more than 100 satellites deployed is an incredible achievement for our team and I'm proud of their tireless efforts which have made Electron the second most frequently launched U.S. rocket. Today's mission was a flawless demonstration of how Electron has changed the way space is accessed. Not only did we deploy six customer satellites, but we also deployed our own pathfinding spacecraft to orbit in preparation for our Moon mission later this year".

The next mission is scheduled to take place from Launch Complex 1 within the next few weeks. iniVation creates high-performance neuromorphic vision systems. It combines decades of world-leading R&D experience with a deep network of >500 customers and partners across multiple industrial markets. MV
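The DAVIS240C named above is an event-based (neuromorphic) sensor: instead of full frames it emits a stream of per-pixel events, each carrying a timestamp, pixel coordinates and a polarity. As a generic sketch of how such a stream can be visualised (this is not iniVation's SDK; the event format and time window are assumptions, with the 240 x 180 resolution taken from the DAVIS240C), events can be accumulated into a frame like this:

```python
# Illustrative only: accumulating a DVS-style event stream into a 2D frame.
import numpy as np

WIDTH, HEIGHT = 240, 180   # DAVIS240C event-sensor resolution

def events_to_frame(events, window_us=10_000, t_start=0):
    """events: iterable of (timestamp_us, x, y, polarity) tuples."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int16)
    for t, x, y, pol in events:
        if t_start <= t < t_start + window_us:
            frame[y, x] += 1 if pol else -1   # ON events count +1, OFF events -1
    return frame

# Example with two synthetic events: one ON event and one OFF event.
demo = [(100, 10, 20, 1), (2_000, 11, 20, 0)]
print(events_to_frame(demo)[20, 10:12])       # -> [ 1 -1 ]
```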
COGNEX REPORTS BEST FIRST-QUARTER RESULTS EVER

Cognex Corporation reported financial results for the first quarter of 2021. The company announced new records for first-quarter revenue, net income, and net income per diluted share. The company stakes its claim as the world leader in the machine vision industry, having shipped more than 3 million image-based products, representing over $8 billion in cumulative revenue, since the company's founding in 1981.

Cognex reported a record first-quarter revenue of $239 million, which represents an increase of 43% from Q1-20 and 7% from Q4-20. A notable contribution to growth, both year-on-year and sequentially, came from continued strong performance in the e-commerce sector of logistics. Outside of logistics, revenue from each geographic region (the Americas, Asia, and Europe) increased over Q1-20 due to improved business activity in a variety of industries.

Cognex's financial position as of April 4, 2021, continued to be strong, with $876 million in cash and investments and no debt. In Q1-21, Cognex generated $99 million in cash from operations and $35 million in net proceeds from the exercise of stock options. In addition, during Q1-21, the company paid $11 million in dividends to shareholders and spent $6 million to repurchase its common stock. Cognex intends to continue to repurchase shares of its common stock pursuant to its existing stock repurchase program, subject to market conditions and other relevant factors.

"Cognex started 2021 on a strong note", said Robert J. Willett, Chief Executive Officer of Cognex. "We reported the highest first-quarter revenue, net income, and earnings per share in our company's 40-year history. We were highly profitable—reporting an operating margin of 33% in Q1-21 compared to 13% a year ago—demonstrating the leverage we have in our high-growth, high gross margin business model".
Mr Willett continued, "We are pleased with our achievements in the first quarter. Our strong momentum continued in the e-commerce sector of logistics. Business activity has been recovering in other end markets that have struggled over the past year. Most importantly, we introduced several innovative products in the areas of 3D vision, edge intelligence, and handheld barcode reading that we believe will contribute to revenue growth in years to come".

Looking ahead, Cognex believes revenue in Q2-21 will be between $250 million and $270 million. This range represents anticipated substantial growth over Q2-20, which was marked by significant economic disruption following the COVID-19 outbreak. Its gross margin for Q2-21 is expected to be in the mid-70% range, lower than the gross margin reported in recent quarters. Operating expenses are expected to be flat to slightly up from Q1-21.

Following the release of the results, Cognex hosted a conference call; a real-time audio broadcast and an archived recording are accessible on the Events & Presentations page of the Cognex Investor website.

Cognex Corporation designs, develops, manufactures, and markets a wide range of image-based products, all of which use artificial intelligence (AI) techniques that give them the human-like ability to make decisions on what they see. Cognex products include machine vision systems, machine vision sensors, and barcode readers that are used in factories and distribution centres around the world where they eliminate production and shipping errors. MV
ZVERSE LAUNCHES “2D TO 3D” AUTOMATION-ASSISTED CONVERSION SOLUTION
Digital manufacturing solutions company ZVerse, Inc. has announced the launch of a landmark automation-assisted conversion that creates 3D Computer-Aided Design (CAD) assets from 2D technical drawing files. A lack of 3D digital files is one of the largest obstacles to digital manufacturing and on-demand Maintenance, Repair, and Overhaul (MRO) part production at scale.

For the supply chain, advanced manufacturing, sustainability, and field service teams at OEM and industrial organizations seeking to increase deployed and fleet equipment uptime, this breakthrough assesses original 2D engineering files to generate a rapid 2D to 3D conversion project quote, then creates 3D manufacturing assets that can be stored in a digital library. Field and repair technicians can then quickly access files to request and manufacture a part from anywhere with a secure network connection.

ZVerse's breakthrough 2D to 3D automation enables manufacturing engineers and industrial designers to speed the legacy file conversion process for MRO parts, resulting in project time and cost savings whilst delivering reliable geometric accuracy. ZVerse says its solution maximizes customer operations resources by eliminating the foundational step of re-drawing part geometries at current manufacturing software standards, resulting in an average 35-50% reduction in comparative project time.
"ZVerse has been doing this work for years, converting 2D files to 3D through our on-demand design services. We know the areas where you can drive automation and drive the biggest impact in workflow", said ZVerse President, David Craig. "We have also learned that the greatest time spent in the conversion process can be automated and we now have automated those steps. We are committed to driving even more types of automation in, and around, this critical business need".

As the foundation of the company's solutions for digital manufacturing, this service enables ZVerse customers to accelerate digital manufacturing and supply chain initiatives for reduced equipment downtime, lower inventory carrying, material, and production costs, and to directly address Right to Repair compliance, sustainability, and circular economy goals. ZVerse and industry representatives will be addressing these topics and answering questions at an upcoming webinar on June 1 2021 titled, "Delivering on MRO: How Digital Manufacturing is Making Maintenance, Repair, and Overhaul a Reality".

ZVerse helps teams solve 3D content creation and scale challenges in digital manufacturing and supply chain. For companies leveraging digital manufacturing for service and Maintenance, Repair, and Overhaul (MRO) part production, ZVerse's "2D to 3D" automation-assisted conversion converts legacy 2D part drawings into usable 3D Computer-Aided Design (CAD) manufacturing assets. ZVerse solutions for digital manufacturing innovation help bring ideas to reality. MV
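As a toy illustration of the 2D-to-3D idea only (ZVerse's automated conversion of real engineering drawings is far more involved; the profile and dimensions here are invented), the sketch below extrudes a flat rectangular profile into a solid and writes it out as a Wavefront OBJ file:

```python
# Toy 2D-to-3D illustration: extrude a rectangular profile into a block and write an OBJ file.
def extrude_rectangle_to_obj(width, depth, height, path="plate.obj"):
    # Eight corner vertices of the extruded block: bottom face first, then top face.
    v = [(0, 0, 0), (width, 0, 0), (width, depth, 0), (0, depth, 0),
         (0, 0, height), (width, 0, height), (width, depth, height), (0, depth, height)]
    # Six quad faces, 1-indexed as the OBJ format requires (winding not normalised).
    faces = [(1, 2, 3, 4), (5, 6, 7, 8), (1, 2, 6, 5),
             (2, 3, 7, 6), (3, 4, 8, 7), (4, 1, 5, 8)]
    with open(path, "w") as f:
        for x, y, z in v:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")

extrude_rectangle_to_obj(100.0, 40.0, 8.0)   # a hypothetical 100 x 40 x 8 mm plate
```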
FLIR CAPTURES NASA'S ROVER LANDING ON MARS

Ever think about how interplanetary events millions of miles away are captured? FLIR did and they're here to explain how they did it. On February 18th, NASA successfully landed the Perseverance Rover on Mars. This isn't the first Mars mission, but it was the first time that the entry, descent, and landing of a spacecraft was filmed and broadcast live for the public to watch and virtually participate. Six FLIR machine vision cameras captured the event from multiple angles, documenting all stages of the thrilling touchdown. Whilst only a few minutes long, the footage has already helped engineers evaluate how well their work performed in space, and inspired millions of viewers around the world. Here, FLIR walks you through how they captured NASA's rover landing on Mars for the world to watch via the power of machine vision.

The Entry, Descent, and Landing (EDL) of the rover may only be a few minutes long, but there's a lot going on in those few minutes. After entering the Martian atmosphere, the parachute deploys about 7 miles (11 kilometres) from the surface. Just before this point, three up-looking cameras began recording, capturing footage of the supersonic deployment of the most massive parachute ever sent to space. Five miles off the ground, the heat shield (used to protect the rover during entry into the Martian atmosphere) drops off and exposes the rover downlook camera, showing some of the rover's intense ride to Mars' Jezero Crater. Then the rover drops away from the back shell (and parachute). From there, its descent is managed by a rocket-powered descent stage called the "SkyCrane". Then it's touchdown!

The cameras that captured this footage are FLIR RGB machine vision cameras and include five 1.3-megapixel cameras and one 3.1-megapixel USB camera. "Our cameras are designed for operation on Earth, and not built to operate in outer space", said Sadiq Panjwani, VP of the Integrated Imaging Solutions (IIS) division at FLIR. "So we were quite thrilled that NASA put them to the test".

NASA began contacting FLIR in 2015 to investigate suitable cameras for the EDL (Entry, Descent, Landing) system. Engineers were looking for commercial off-the-shelf (COTS) hardware with an emphasis on low cost and ease of system integration. This is the first time that FLIR machine vision cameras have been subjected to the extreme temperatures or high gravity forces experienced during the Mars landing. Everyone involved in the engineering and manufacture of cameras at FLIR is thrilled about this testament to their durability and performance. And of course, ecstatic to say that their work has made it to Mars! Congratulations to the team at NASA and everyone involved in making this historic milestone a reality. You can watch the official video released by NASA of the descent and touchdown here.

Founded in 1978, FLIR Systems is a world-leading industrial technology company focused on intelligent sensing solutions for defence, industrial, and commercial applications. FLIR Systems' vision is to be "The World's Sixth Sense", creating technologies to help professionals make more informed decisions that save lives and livelihoods. MV
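For readers who want to try a FLIR machine vision camera on Earth rather than Mars, the sketch below grabs a single frame with the Spinnaker Python bindings (PySpin). It follows FLIR's published PySpin examples but is only a hedged sketch, not the flight software used on the EDL cameras, and method names may differ slightly between SDK versions:

```python
# Hedged sketch: grab one frame from a FLIR machine vision camera via PySpin (Spinnaker SDK).
import PySpin

system = PySpin.System.GetInstance()
cameras = system.GetCameras()
if cameras.GetSize() == 0:
    raise RuntimeError("No FLIR camera found")

cam = cameras.GetByIndex(0)
cam.Init()
cam.BeginAcquisition()
image = cam.GetNextImage()            # blocks until a frame arrives
if not image.IsIncomplete():
    frame = image.GetNDArray()        # NumPy view of the pixel data
    print("Captured frame:", frame.shape)
image.Release()
cam.EndAcquisition()
cam.DeInit()
del cam
cameras.Clear()
system.ReleaseInstance()
```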
MVPRO IS CHANGING

Dear Reader,

It's fitting that amid all the world-changing chaos the COVID-19 pandemic has caused, so many wonderful innovations have been created across the industries we cover: machine vision, motion control and robotics. In times of trouble, innovators have always carved a path to not only succeed but to solve. There are, unfortunately, many examples of when innovation was needed direly, so we can only be grateful that the end is approaching and this time will be but a freckle on the skin of the history of the human race. Nevertheless, good comes from evil and change from need and that is why I write to you today.

This summer, after 10 years of providing quality industry content to those who need to be in the know, MVPro will no longer function as a standalone magazine. Instead, it will join two re-launched magazines as it moves under the umbrella of a brand-new, one-stop news hub: Automate Pro Europe. Joined by RoboPro (robotics) and CVPro (computer vision), the three magazines will each cover their respective industry, acting as specialised branches that feed the Automate Pro Europe magazine. Automate Pro Europe will host only the best news and features across the three industries, which will have, as it already does, a European focus.

Why? Joining under one umbrella means having more room to play with. We move in the same direction but
with a concentrated goal that will allow us to create more and better work that’s specific to the industries we cover. Streamlining and specifying is the idea. What this means for you as a reader is more choice over the news you absorb by following the specific industries you’re interested in. Not only that, but you’ll get better and more comprehensive content for the industries you do want to know about. If you like all of the industries we report on, you still win. Just know this: we will be better equipped to report the most important events and uncover the most interesting industry stories in the world, all with a European flavouring. Not only will we tell you when a robot touches the bottom of the South China Sea or NASA finds life on Mars, but if there’s AI in Austria, infrared imaging in Iceland and collaborative robots in Romania, you’ll be the first to know. It wouldn’t be the launch of a new magazine if it didn’t have new content. So, without further ado, we’re happy to reveal the new categories and content for Automate Pro Europe: • Daily News – What you need to know as soon as it’s happened • Industry Pioneers – The best stories from the biggest companies in the world • Reader Profiles – Talking to you, the movers and shakers of the industry
• Your Guide to… Trade Shows & Events - All the information about the biggest events (dress-down or black-tie?) • Future Technology – Revealing the cutting-edge technologies that will change the world • Finance and Investment – Big investments, ground floor start-ups and everything in-between. With articles every day during the week, a Friday newsletter and a bi-monthly magazine, it’s all you’ll need to know from Automate Pro Europe. And yes, you can shorten us to APE. Thanks for supporting us. We can’t wait to begin this summer. - The MVPro team.
MV
FIVE IOT PIONEERS

On the penultimate day of the Hannover Messe 2021, 16 Internet of Things Innovators gathered for the 12th IoT Innovation World Cup® pitch and award ceremony. Whittled down from 600 solutions, five left as the winners of their respective categories as voted for by a panel of industry experts. In the virtual crowd that day was MVPro's writer, Joel Davies, who examines the winning innovators in more detail.
AGRICULTURE: 3BEE

With World Bee Day recently gone, it's fitting that 3Bee, the Italian Agri-tech start-up, came out on top for the Agriculture category. 3Bee develop intelligent monitoring and diagnostic systems for bee health, collecting data on the honey makers within their hives. By installing sensors under the beehives, its technology constantly monitors the bees to provide insights such as the amount of honey they are producing by weight, sound intensity, how windy it is, external humidity, as well as internal and external temperature. Via 3Bee's app, owners and adopters can also see pictures of their bees and listen to sound clips of the hive.
Bees perform around 80 per cent of all pollination worldwide. That amounts to 1,200,000,000,000,000 flowers a year. Without them, not only would we not be able to grow the fruits and vegetables that we rely on but the ecosystem would crumble. Yet, bee populations are dying. This has caused so much concern for the state of the ecological world that conservation attempts have turned to innovative technology solutions such as 3Bee to help. As 3Bee point out, most honey production is artificially assisted by humans now. With their technology, keepers can measure their bees’ health via the app on their phone
and customers can adopt a beehive, giving them the same access to the status of the bees adopted as the keeper. In turn, adopters not only eat the honey they buy but get to watch it develop at all stages. This all amounts to the safekeeping and encouragement of the growth of bees via the medium of sensor monitoring and IoT. Markus Vogt, Director Segment Healthcare & Personal Devices, EBV Elektronik, named 3Bee as his winner of the Agriculture category. He said: “Your idea is absolutely great, it’s absolutely important that we protect our environment especially the bees because they are very important in our environment. We prepared our own beehive at our corporate headquarters and we equipped the beehive with sensors from ST, so we’re not that far away from your solution”. 3Bee was founded in 2017 by Niccolò Calandri and Riccardo Balzaretti and now has over 20 employees, 3,000 beekeepers and 40 tons of honey sold over the last four years. The company also boasts adopters such as Ferrero - makers of the Rocher, Nutella, Kinder and TicTac - and yoghurt producers Actimel, amongst others. The company is now crowdfunding its products to develop new versions of its monitoring systems and technology for the work beekeepers do. When receiving the award, Calandri said, “I want to give this prize to our
twenty employees that work every day to make this possible. Thank you”.
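A small illustration of the weight-based honey insight described above (this is not 3Bee's API; the readings and values are invented): daily honey gain can be estimated from successive hive-scale weights.

```python
# Illustration only: estimating daily honey gain from under-hive scale readings.
from datetime import date

readings = [                      # (date, hive weight in kg), hypothetical sample data
    (date(2021, 5, 1), 38.2),
    (date(2021, 5, 2), 38.9),
    (date(2021, 5, 3), 39.6),
]

def daily_gain(samples):
    """Return (date, kg gained since the previous reading) pairs."""
    return [(b[0], round(b[1] - a[1], 2)) for a, b in zip(samples, samples[1:])]

for day, gain in daily_gain(readings):
    print(day.isoformat(), f"{gain:+.2f} kg")
```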
INDUSTRIAL: CANARIA

Canaria Technologies pairs the data science of predictive biometric systems with medical-grade hardware to create predictive devices that can be used in dangerous real-life situations and anticipate serious medical incidents before they happen. Canaria's 5th generation wearable device, the Canaria-V Earpiece, measures raw PPG wavelengths - the technology currently used in intensive care units to monitor patients' vital signs - as well as skin temperature, movement and environmental data. The sensors read the amount of blood in the underlying tissue 100 times a second, which illustrates the different phases of heart function. The approach applies multiple light spectrums (Infrared, Red and Green) shone through the skin to a photosensor on the other side of an appendage (such as a finger or earlobe). From these primary sensor readings, they find over 60 core metrics such as blood oxygen saturation, breathing rate and heart rate variability. They combine those metrics to build early warning alarms using machine and deep learning AI models. Canaria-V detects things like heart rate variability, which is one of the most reliable indicators of cognitive fatigue, whilst skin temperature is an important feature when detecting the onset of heat stress.

This emerging field of science has the potential to predict heart attacks, strokes, epileptic fits, and in the case of the Canaria-V Predictive Biometrics Platform, heat exhaustion, extreme cognitive fatigue and "man-down" incidents. In its studies, Canaria cites workers in the Australian mining industry who regularly succumb to heat exhaustion - cognitive fatigue is cited as the underlying cause of 144 fatalities in Australia every
year, where more than 30% of health and safety incidents were by machinery operators and drivers. They further cite that there are 30,000 fatigue-related accidents in the industrial sector each year, with $4 million lost every day in industrial sector fatigue-related accidents. Canaria-V lends itself to all industrial environments where physical exertion is common. The robustness of its applications comes in part from the design of the sensor, which is a slim piece of tech that fits behind the ear like a hearing aid. The company notes potential future applications of the tech in improving the performance of professional athletes, for more accurate and regular diagnosis in medicine and even for space exploration in providing insights on astronauts.

Canaria Technologies was founded in London in 2016 by Alex Moss (CEO) and Dr Rob Finean (CTO). The original concept behind the Canaria Predictive Biometrics Platform was conceived for use by NASA aboard the International Space Station and the founders went on to win the "NASA Global Award for Best Use of Hardware" in 2016. The company relocated to Australia in 2017 and has specialised its equipment for use in the heavy industries over 5 generations of hardware.

The company is doing as every good start-up does: pitching, promoting and continuing to develop its technology. The technology is available now as HAAS (Hardware-As-A-Service), starting at $395 for its cognitive fatigue, heat exhaustion and man-down alerts and rising to $1,495 for features such as its advanced analytics suite, bespoke integration and a dedicated account manager.
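As a generic sketch of the signal processing hinted at above (not Canaria's algorithms; the sampling rate matches the roughly 100 readings per second quoted, everything else is an assumption), heart rate and a simple heart rate variability measure can be pulled out of a PPG waveform by peak detection:

```python
# Generic PPG sketch: estimate heart rate and a simple HRV metric from a 100 Hz pulse waveform.
import numpy as np
from scipy.signal import find_peaks

FS = 100  # samples per second, matching the ~100 readings/second quoted above

def heart_metrics(ppg: np.ndarray):
    # Require systolic peaks to be at least 0.4 s apart (below 150 bpm) for this toy example.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * FS))
    intervals = np.diff(peaks) / FS                       # beat-to-beat intervals in seconds
    heart_rate = 60.0 / intervals.mean()                  # beats per minute
    rmssd = np.sqrt(np.mean(np.diff(intervals) ** 2))     # a common HRV measure, in seconds
    return heart_rate, rmssd

# Synthetic 10-second waveform at roughly 72 bpm, just to exercise the function.
t = np.arange(0, 10, 1 / FS)
demo = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(heart_metrics(demo))
```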
RAPIDM2M CHALLENGE BY MICROTRONICS: NOLTA NOVA GMBH
Ever lose something valuable and not remember where you put it? Ever do it with a drainage pump or an expensive and important piece of industrial technology? Nor me, but it does happen and it sounds like a stressful situation. It’s where NOLTA enter with NOLTAnet, a motor protection plug that locates and monitors the equipment plugged into it. By connecting to a smartphone app, a user can connect, login and monitor what they need to know about their equipment from anywhere. Across the range of features the NOLTAnet equipment offers, users can receive alerts like if water levels are too low or too high, if the equipment is taken out of its set range and how long it has been operated for. Inputs can be individually named and configured using the NOLTAnet app, and push notifications that can be specifically named are sent when the status of the input signal changes within a matter of seconds. It’s not immediately obvious why this is such an excellent solution but its elegance is in its status as an “accessory”. A small piece of technology, especially compared to the equipment it can be attached to, NOLTAnet allows users to monitor equipment of all sizes and cost from anywhere. Not only that but they can provide key information that you cannot get unless on-site and beside the machine. “At the end of the day, you only sell accessories,” NOLTA says a customer once said to them but the company disagree. They argue that “if a device does not work at the right moment, it’s not an accessory, it’s a problem”.
With this philosophy embedded in their design, they provide security, essential monitoring and, most of all, peace of mind. What is so deceptively elegant about NOLTAnet is that it subtly re-invents something everyone needs: the plug. Why not improve it? Stefan Pfeffer, Co-founder & CFO of Microtronics Engineering GmbH, voted NOLTAnet his winner of the RapidM2M Challenge. He said, "It's a great pleasure to see how quickly and easily IT solutions can be done in fields that don't think about that. I think this is also where you show that with a small piece in your hand you can make the world a bit better. If everybody did that I think the world would be better in the future so it's not so hard to do and Nolta showed how to do that". NOLTA GmbH is located in Cölbe in Upper Hesse and describes itself as one of the "hidden champions of German SMEs". The largest company on this list of winners, its products are the integral, "unsung heroes" of the construction, pumping and manufacturing industries. NOLTA states that its products are used on 90% of German construction sites where pumps are used. With its subsidiaries, NOLTA is present in China, the USA and India. Commercial agencies are also present in Denmark, Austria and the Benelux countries, as well as in Australia and Chile.
SMART CITY & TRANSPORT: BEFC

Self-confidence-promoting football team or sustainable energy source inventors? A difficult one judging by the name alone. BeFC's slogan is "Power the future with nature". And there are no gimmicks here. The company has invented a paper-based, ultra-thin, flexible and miniature bio-enzymatic fuel cell system. Its technology uses biological catalysts instead of chemical or expensive noble metal catalysts to convert natural substrates such as glucose and oxygen into electricity. It uses enzymes, carbon electrodes and paper microfluidics to provide a sustainable and environmentally friendly method of generating energy, and the devices can be operated with a single drop of tap water or biological fluid. BeFC says its solution offers a sustainable and eco-friendly route to practical energy generation whilst minimising impact on the environment. In simpler terms, it's a battery made out of paper. Providing the next environmentally sustainable energy source isn't easy, but the company believes its approach will not only allow the replacement of batteries in existing applications but also open up new opportunities for low-power, ultra-thin health monitoring, logistics and transportation monitoring, and Internet of Things (IoT) applications. It targets these technologies specifically because electronics integration in disposable and wearable medical devices has become increasingly important. The company claims that button, coin and other mini-cell power sources are difficult to recycle, toxic or dangerous to the environment and that, on average, 97% end up in landfills.

Willem Bulthuis, CEO, Corporate Ventures Advisory, picked BeFC as his winner of the Smart City & Transport category. He said, "It's a great concept - ubiquitous IoT needs power and you have developed a way to actually provide power to IoT devices which is environmentally friendly and can be used everywhere in a lot of different applications. This is a great innovation that will help IoT". The Grenoble-based company is made up of around 15 employees, including Dr Jules Hammond (CEO), Rodolphe Durand-Maniclas (CBDO & Co-founder) and Dr Jean-Francis Bloch (COO). This year it has picked up accolades such as finishing in first place as a "TOP Presenter" in the renewables and circular energy sector of the Tech Tour Future 21, being SET100 certified as one of the top 100 energy and mobility start-ups of 2021, and being recognised as an affordable and clean energy solution by the United Nations World Tourism Organisation (UNWTO). Partners include Investissements D'Avenir, ADEME and bpifrance. The company is currently looking for, and negotiating with, potential investors to scale up and semi-automate production to meet its clients' needs.

STWIN CHALLENGE POWERED BY STMICROELECTRONICS: NEURON SOUNDWARE

Shh, listen. Can you hear that? No? Neuron Soundware can. The company's artificial intelligence automatically checks the collected audio data of machines against an extensive database of warning sounds. As an alternative to the popular vibration-checking method of failure prediction, Neuron Soundware has developed an advanced audio diagnostics solution powered by AI and IoT. Coupled with an app, users can check the condition and monitoring status of machines in real time from anywhere. Similar to NOLTAnet's solution, the machine-to-app path allows notifications to be sent to machine operators, plant managers and other staff members so they don't miss indications of machine failure. It does this by building up an extensive audio database for its anomaly detection algorithms to listen out for.
Where Neuron Soundware takes it to the next level is in how its models can be trained on sounds specific to a client's machines. By identifying and labelling essential sounds, they retrain the AI models to improve failure detection accuracy. Clients can even view the recorded data as a waveform or spectrogram, to focus on a specific time range. If that's not enough, they're open for applications to their "Acoustic Academy", a development process that suits proof-of-concept projects, one-off experiments, or co-development of new OEM services.
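As a rough illustration of the kind of acoustic anomaly detection described here - and emphatically not Neuron Soundware's own models - the sketch below builds a spectral "fingerprint" of a machine from recordings of healthy operation and flags new audio whose spectrum drifts too far from it. A production system would use learned deep features rather than a simple averaged spectrum; the sampling rate and window sizes are assumptions for the example.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 16_000  # assumed audio sampling rate (Hz)

def mean_log_spectrum(audio, fs=FS):
    """Average log-magnitude spectrum over time - a crude acoustic fingerprint."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
    return np.log(sxx + 1e-10).mean(axis=1)  # one value per frequency bin

def fit_baseline(healthy_clips):
    """Model 'normal' sound as the mean and spread of healthy fingerprints."""
    feats = np.stack([mean_log_spectrum(c) for c in healthy_clips])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def anomaly_score(clip, baseline):
    """Average absolute z-score of a clip's spectrum against the baseline."""
    mu, sigma = baseline
    return float(np.mean(np.abs((mean_log_spectrum(clip) - mu) / sigma)))

# Example with synthetic data: a 50 Hz hum as "healthy", plus an added tone as a fault
t = np.arange(0, 2, 1 / FS)
healthy = [np.sin(2 * np.pi * 50 * t) + 0.01 * np.random.randn(t.size) for _ in range(5)]
faulty = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

baseline = fit_baseline(healthy)
print("healthy score:", anomaly_score(healthy[0], baseline))
print("faulty score: ", anomaly_score(faulty, baseline))
```

The point of the toy example is the workflow: learn what "normal" sounds like, then score new recordings against it and raise an alert when the score jumps.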
Unplanned downtime is a buzzword and a sore topic for clients across all industrial sectors. No machine, no matter what, can work perpetually. But being ready to prevent downtime and avoid major disruptions is key to the smooth running of any business. Neuron takes this idea further, claiming that its early warnings of mechanical failures protect critical infrastructure, employees and reputation. The company further states that its AI detects at least 60% of mechanical issues 2-10 days sooner than any other monitoring method. That's a lot of time, money and stress saved. The technology can be applied anywhere machinery is used, including the power and utility, oil, gas and petrochemical, automotive and engineering, and quality control sectors.
The company was founded in 2016 by Pavel Konečný in Prague and was named "Idea of the Year 2016" in the Czech Republic for a solution combining AI and IoT. From that point it went from strength to strength, winning accolades and renown including a €5.75 million funding round in 2019, joining the ESA BIC Centre in Prague and being named the "Best AI Startup in Czechia" and the "Best IoT Startup in Central Europe" in the Central European Startup Awards. Based in the Czech Republic, it employs 35 people and has partnered with Toyota Tsusho Europe, EXA Pro, ELCOM and VOLTA, to name a few. The company encourages anyone interested to apply for a demo covering all the features and benefits of its solution in an interactive session that takes about 15-30 minutes.

There you have it: the five winners of the 12th IoT Innovation World Cup® pitch and award ceremony. But if five weren't enough, the other 16 who competed in the Industrial, Agriculture, Smart City & Transport, STWIN Challenge by STMicroelectronics and rapidM2M Challenge by Microtronics categories are as follows (you'll have to do your research to find the other 600):
• 3Bee – Hive Tech: IoT for Bees, Italy
• 3d Signals – 3d Signals – Plug & Play Factory Digitalization, Israel
• Agranimo – Agranimo, Germany
• Alternative Energy Innovations SL – Indueye Self-Powered IoT, Spain
• BeFC – BeFC – Bioenzymatic Fuel Cells, France
• Canaria – Canaria-V, Australia
• Fuelitics – Fuelitics as a Service, Austria
• Nauticspot – Nauticspot, France
• Neuron Soundware – Neuron Soundware, Czech Republic
• Nolta Nova GmbH & Co KG – Noltanet CEE smartplug, Germany
• Octonion – Octonion Machine Intelligence, Switzerland
• Preemar Soluciones Acuícolas – Preemar, Mexico
• Schildknecht AG – Smart Cable Drum, Germany
• Shayp – Shayp: The Easy-to-Use Device that Will Help You Save Water, Belgium
• SmartShepherd – SmartShepherd – wearables for livestock, Australia
• Tanktwo – Tanktwo String Batteries, United States
MV
WHY ROBOTS, TELEHEALTH AND OPEN EMBEDDED SYSTEMS ARE THE TRENDS TO WATCH IN 2021
While we are collectively looking to put 2020 as far behind us as possible, 2021 and beyond will be influenced by technological advances that have been prioritized and accelerated during the ongoing pandemic: robots proliferating beyond the factory floor, doctors assessing your health regardless of location and, finally, the pendulum swinging from the cloud toward more open embedded systems. Xilinx looks ahead at 2021.

Robots will proliferate beyond the factory floor and into our shops and communities as businesses seek to replace workers worried about close proximity to one another and to reassure consumers about health and safety protocols. Autonomous robots will handle things like food preparation, security patrol, cleaning and disinfection, and home delivery. "We'll also see an acceleration of robots in retail and hospitality, to handle tasks like inventory checks, social distancing, hotel check-in, customer service, and inventory management – all without requiring as many human workers on-site", says Susan Cheng, Marketing Manager, Industrial Vision at Xilinx.
Robots also promise to add significant value when paired with humans, playing to the strengths of each. For example, a mobile security robot could scout out dangerous situations and confirm they are safe before human security officers move in. These human officers could then interact with individuals with a level of sensitivity that robots may never be able to achieve. The pandemic is going to accelerate the development and adoption of wide-ranging technology around wireless communications, security protocols and artificial intelligence (AI) to enhance remote patient monitoring, point-of-care solutions and telehealth. On telehealth, Subh Bhattacharya, Lead, Healthcare & Sciences at Xilinx, predicts that "telehealth including remote monitoring, wireless communications and video conferencing will account for more than one-third of all patient care within the next 5 years and more than half by the end of the decade".
After the Covid-19 pandemic started, many remote-monitoring sensors and platforms, and the use of AI algorithms to manage patients remotely, were cleared on a fast track by the Food and Drug Administration (FDA) to help providers monitor Covid-19 patients remotely, thereby reducing the risk of exposure. Platform providers and major clinics are collaborating to develop digital biomarkers based on sensor data and combining them with existing patient databases to write AI algorithms that detect disease severity. The adoption of such systems by major hospitals and clinics is thus rapidly on the rise. Since proof and adoption are increasing, we expect this to widen rapidly to include and monitor patients in many additional health applications like cardiac care and pulmonary care. Additionally, the deployment of 5G wireless infrastructure and enhanced security protocols will make these systems higher-performing and more reliable, thus increasing adoption rates. Telehealth, remote monitoring and point-of-care are now, in general, more widely adopted and accepted by insurance providers as well.

The concept of Data Gravity, driven by the need for lower latency and higher levels of privacy, will require producers of embedded systems to carve out a user-programmable area or open their systems entirely, or risk falling behind the competition. The first rule of real estate is location. Similarly, location plays a critical role in factory automation. The relative compute density of some embedded systems rivals what was found only in the cloud a decade earlier, but the location of the embedded system multiplies the value of computation due to the shelf life of critical data. These embedded systems, acting on data they collected themselves before anyone else has a chance to access it, are increasingly being opened up for additional business. "Like a renter subletting a two-bedroom apartment, creators of embedded systems are enabling their customers to develop and deploy proprietary apps on their embedded hardware systems for customized processing near the analogue-digital boundary", says Chetan Khona, Director, Industrial, Vision, Healthcare & Sciences at Xilinx. "Examples of this paradigm include the IDS NXT embedded vision cameras and the Bosch Rexroth ctrlX factory automation controllers and more will surely come in 2021".

Regardless of whether your business is hospitality, healthcare, or factory automation, the key to building systems that can adapt to an uncertain future is to build them on a technological foundation that is itself adaptive. Adaptive computing devices, such as FPGAs and adaptive SoCs, are like chameleons - reconfigurable not just in software but also in hardware. In effect, adaptive compute technology can be reconfigured to become the optimal processor for the task at hand. Perhaps the greatest benefit of adaptive technology is that it gives us some much-needed leeway in how accurately we have to predict the future. Change is the one constant, and the flexibility of adaptive technology can assure that today's systems will be able to adapt to all but the most disruptive changes. MV
30 YEARS OF MATERIALISE: AUTOMATED FACTORIES AND PRODUCTION ARE DRIVING SUSTAINABLE PRACTICES
Materialise, the Belgian 3D printing and software solutions company, is celebrating its 30th birthday. Nele Motmans, Corporate Brand Strategist at Materialise, tells us about the technology, the company and what's next for both.
When 3D printing was in its early days, the few Materialise employees would hit start on processing a CAD file and then pick up their ping pong paddles and
settle in for a nice long ping pong tournament. Hours later, the CAD file would be converted into an STL, and they could finally get started on the first of the many painfully slow iterations to come. Fast forward 30 years, through a blur of technological advances from specialized software to computing power, and we have arrived at a time where exceedingly complex files are computed in the cloud quickly and efficiently, then translated into a ready-to-print file in just minutes with much less manual work. Not just that, but the files have been enhanced to have the most efficient build and positioning and the fewest support structures needed. As we step into the future of 3D printing, we see more and more automated processes that lead to better print accuracy and improve sustainability with less waste in the industry. As Materialise celebrates its 30th anniversary this year, we are looking back at the most meaningful innovations over the company's history with a series of blog posts. In this post, we're reflecting on some of our factory and production practices and how they relate to our vision of a better and healthier world.
IT ALL STARTED WITH ONE MAN, ONE MACHINE, AND ONE MATERIAL

In 1990, Fried Vancraen recognized the extraordinary potential of a new technology he had just seen at a research facility in Bremen, Germany - 3D printing. Realizing the importance of this technology, Fried boldly invested all his savings into one machine and launched Materialise. It was around this precious machine that the original ping pong battles reigned. The biggest difference between today and 30 years ago is the sheer range of materials, software, and machines. In the beginning, there was just one material that you could print on one machine, and it was extremely brittle. If a printed object was dropped, it would shatter. Its main functionality was to be used for prototyping and for 'show and tell' purposes. Today, we have a multitude of materials available that range from many different plastics to metals. Materialise now has over 180 professional machines turning out more than a million parts a year. With the numerous options in technology and materials, parts can now be produced with much higher functionality. With all the progress that has brought us to the present day, we can talk about all kinds of manufacturing applications that are now possible through 3D printing. But these are only possible with the right kind of software to deal with the complexity of printing today's builds.
ORIGIN OF THE SOFTWARE UNIT Early on, there was the realization that without better software to process the files, they would never be able to use 3D printing technology efficiently. As Bart Van der Schueren, CTO of Materialise, and one of the first employees says, “The biggest issue from the start was the non-existence of decent software tools to get the CAD data to something actually printable. So that’s what we worked on. Our big belief was that if you take away the hurdles and make the technology more accessible, then the whole industry moves forward together. We would be
able to make a bigger impact than if we just remained a service bureau". And so they started working on the Magics software that would enable more efficient builds for the entire industry by translating CAD files into STL files for printing. It also helped in repairing files and editing designs so that problems could be caught before even printing them out. With this in mind, Fried committed the company to only creating software that was machine- and technology-neutral, so they could co-create with other companies in the future rather than locking away innovation under a brand name.
SECURE PROTOTYPING

3D printing got its start in the prototyping world. Customers need to verify designs and perform form, fit, and function tests in a cost-efficient way before bringing the design into traditional manufacturing. This continues today with our rapid prototyping service, where our project management teams help find optimal solutions and our Design & Engineering (D&E) team can support in getting early design files ready for 3D printing. The D&E team can also solve development challenges by re-designing products for better performance, reducing assembly (so-called function integration) or making the part more lightweight. Prototyping has always been a secretive process as customers work to unveil their newest designs and advances. As their partner, Materialise has had to earn their trust by setting up systems that make our platforms secure. Additionally, as the designs and files have been getting larger and more complex, a solution was needed to both process these massive files quickly and store them.
SCALING-UP IN THE CLOUD

Again, the company had to evolve and innovate to respond to the need for security and enough computing power to process these large files. As time went on, computers improved and we developed better software, enabling us to process files faster and automate the process. Ping pong games while waiting for files became a thing of the past. Despite this, the files continued getting more complex, and to meet the demand, we looked to cloud computing. This meant that rather than relying on just one computer, we were able to access the power of entire servers all at once. This became an irreversible trend throughout all industries, as it was necessary for scaling up. Technological advancements, such as encrypted communications and management processes, ensured that customer data remained secure.

MAKING THE IMPOSSIBLE, POSSIBLE

Staf Wuyts, another one of our long-term employees and the current CAD Manager for Stereolithography (SLA), recalls, "Fried said that one day we were going to build a machine that can print something the size of a human". People were surprised but didn't write him off. Eventually, in 2000, Materialise was actually able to build an SLA machine that could do just that, and later even printed bones to size for a mammoth skeleton! The early 2000s brought the advent of printing large pieces using these SLA machines. The problem was that the builds were so labour-intensive that one person was needed per machine just to design all the supports. It was just not scalable, and Bart challenged the software department to create a module for Magics that would automatically generate the most optimal and least material-intensive support structure for the build. The software department came up with e-Stage, which would do just that, but the production team was hesitant to test it out. Each of these huge builds cost up to tens of thousands of euros, and if they tried it out and it didn't work, it would all be wasted. The software was shelved for the moment.

Staf further explains, "When a customer comes in with something that we think is impossible, we make an arrangement with the customer to test out if it's possible". With this as a guiding principle, one day they were approached by an artist who wanted to print a very large and complex sculpture that involved hundreds of flying bees in the shape of a human. Bart made the bold decision to finally test out e-Stage because otherwise, they would have to turn away the project as the design was much too complex. The sculpture was printed and the finishing team marvelled that it was so much easier to remove the supports because they were much thinner than usual. It could have been a costly and wasteful test, but instead, it transformed the way that production has worked ever since. The automation of support generation saved hours of labour for the finishing team and meant that we no longer needed one person designing supports per machine. This module is still helping our software customers reduce costs and improve productivity even today.
TRACEABLE FROM START TO FINISH

Manufacturing is much more than just production volumes - it's about quality, reliability, and repeatability. Over the years, we have been able to develop certified and traceable parts for both the medical and aerospace sectors. For medical, we are creating personalized implants for patients to help minimize further procedures, while for aerospace, we offer certified additive manufacturing (AM) of flight-ready parts. For AM environments, traceability is essential. It allows you to monitor when a part was made and what material was used, and it amounts to the digitalization of inventory. Software such as Materialise Streamics supports businesses in digitalization as it helps control the end-to-end workflow - from part submission to part building to finishing. Every step is traceable and accessible digitally.

SUSTAINABLE PRODUCTION BECOMES A REALITY

Today, on a professional 3D printer, yields of 90-95% are considered good, but the remaining 5-10% is considered waste. Software and machines continuously evolve to try to further limit this amount of waste. Recently, we introduced Bluesint technology into our product range, which prints Bluesint PA12 consisting of 100% reused powder - reducing the amount of waste in production and helping AM become more sustainable.

Especially during this past year in the pandemic, we have seen more and more localized production as supply chains have been disrupted. In a world where supply routes can suddenly shut down, it makes smart business sense to be able to source product locally, and AM enables this. Sustainability has become less of a buzzword and more of a reality to survive as a business. Going forth, we see this tide of digitalization and automation enduring. From more specialized software, to using AI algorithms to search for the best build orientation, to better post-processing, we will continue to focus on integrating more automation into production. Additionally, we see the ongoing evolution of materials, so that 3D printing continues to evolve from being just a rapid prototyping technology to producing reliable end-use parts. Minimizing production scrap, using less material for lighter but stronger parts, simplifying supply chains, and optimizing the transportation of goods are all trends that we see continuing as we make strides towards a more sustainable future for all. MV
SPONSORED
SAFE FOOD MANUFACTURING FROM ADVANCED ILLUMINATION
There are many parts to a vision system. Which is the most important? Before you say the camera, don't forget that most cameras, just like eyes, cannot see in the dark. Detailing a case study, Advanced Illumination reminds us of the importance of lighting in vision systems.

MACHINE VISION CHALLENGE: MEAT PROCESSING & INSPECTION

Achieving pinpoint accuracy with a vision system while maintaining the highest levels of sanitation within automated meat processing is a must. The inspection system must identify foreign objects, whether bits of metal or plastic, while being rugged enough to withstand the challenging environment of stray particles and corrosive washdown solutions. An automation partner in North America was contacted by a meat processor faced with such a challenge. In its processing environment, the customer uses knives to cut the meat as it moves along the assembly line. The customer needs to inspect the knives after each operation to ensure there are no fragments missing from the blades, and is working with the distributor to develop the necessary inspection application. They determined the system required backlighting that would deliver silhouetting of the knives for adequate inspection and detect any chips, cracks, or changes in the knives' surfaces post-operation.
AI SOLUTION: BL245 ULTRASEAL WASHDOWN BACKLIGHTS

To deliver silhouetting while withstanding the washdown requirements of the inspection, the distributor recommended the Advanced illumination BL245 UltraSeal Washdown Backlights. With no cracks or crevices on the face of the light, the BL245 Washdown Backlight provides uniform illumination while eliminating the concern of meat particles, stray liquids, or other particulates getting caught anywhere on the light and causing unsanitary conditions. The IP69K-certified rating also delivers peace of mind that the efficacy of the light will not be compromised during routine high-pressure cleanings. The UltraSeal Backlights also withstand the frequent use of sanitizers and caustic washdown solutions necessary to maintain strict cleanliness standards, thanks to their corrosion-resistant finish.
ILLUMINATION RESULTS: UNIFORM, RELIABLE, HYGIENIC ILLUMINATION For this challenging environment, the BL245 produces the uniform illumination required to precisely inspect the knives for blade consistency to achieve the customer’s goal of a safe food manufacturing environment. The UltraSeal Washdown Backlight also provides the specifications necessary to maintain hygienic standards within this extreme environment, presenting the optimal machine vision solution. MV
ABOUT ADVANCED ILLUMINATION Founded in 1993, Advanced illumination was the first lighting company to develop and sell an LED lighting product and has continued being a global leader in the machine vision industry ever since. Ai combines innovation in product development and process control to deliver tailored lighting solutions to its customers. Ai has Stock products that ship in 1-3 days and hundreds of thousands of Build-to-Order lights that are ready to ship in 1-3 weeks. Their customers face unique challenges regarding their ever-evolving inspection systems; Ai is here to innovate with them.
YOU'RE HOT THEN YOU'RE COLD: Edmund Optics Discusses the Challenges of Thermal Changes on Imaging Optics

As the environmental requirements of image processing systems get tougher, thermal behaviour is becoming increasingly important in many applications. Outdoors, technologies such as autonomous vehicles and drones are subject to inconsistent and ever-changing temperatures. Indoors, applications such as high-performance systems in factory automation and metrology require precision imaging unaffected by environmental changes. In spite of this, temperature specifications are generally ill-documented for imaging optics: when any data is available at all, it is usually either a storage or an operating temperature. Whilst this may give some indication of what constant temperature the optic can be exposed to, it fails to provide the full picture of how the optic's performance is affected by temperature fluctuations.

However, defining exactly how an imaging system behaves under changes in temperature is not trivial; there are several factors at play in the system's optics alone. Changes in temperature cause thermal expansion of the glass lenses within the imaging optic. Since a lens's focal length is defined by its radii of curvature, as the glass expands, the shape and thus the focus point of the lens also change (Figure 1). As the temperature is altered, each distinct glass material's refractive index also changes, so the path of the light rays travelling through the lens is modified, leading to further defocus. Temperature effects are not limited to glass, however. At very low temperatures, the metal optomechanical housing can contract so much that the internal lenses become stressed and, in extreme cases, are destroyed by the intense pressure. Conversely, elevated temperatures will cause the housing to expand so that the lenses become loose and are no longer properly seated. Any tilting or decentering of a lens within the housing leads to a loss in image quality, and the changing distances between each lens lead to a shift in the position of the optic's focus.

It is possible to mitigate these temperature-driven adverse effects through skilful selection of glass materials and frame geometries. Optics designed this way are referred to as "athermal". However, the design effort involved is very high, so it is generally not undertaken in standard optic development. Likewise, any customised solution requires careful design, expertise, and close cooperation between manufacturer and customer because, ultimately, it is the system-level performance that matters. Not only is it important to understand how the optic and the camera behave separately from each other, but also how both components work together as a complete system. Equally, metrology solutions must be discussed at an early stage in order to properly develop a measurement technology that will test the optic's performance, since standardised temperature-effect measuring systems are new to the market and not yet widely used. Edmund Optics has extensive experience in the design, manufacturing and metrology of standard and custom imaging optics and can assist when thermal effects need to be considered. Our team of expert designers and production engineers can seamlessly support you in each step of your project journey from design, to prototype, to volume production. MV
Figure 1: Defocus (Δf) of a Lens in a Metal Housing with a Change in Temperature (ΔT)
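To put rough numbers on the defocus (Δf) shown in Figure 1, a first-order estimate can be made from the standard thin-lens relations hinted at in the article: the focal length drifts with the glass's expansion coefficient and its change of refractive index with temperature, while the housing's expansion moves the sensor plane. The sketch below uses that textbook approximation with purely illustrative material values (roughly typical of a crown glass and an aluminium barrel - real designs should use the manufacturer's datasheet figures); it is not an Edmund Optics design tool.

```python
# First-order thermal defocus of a simple lens in a metal housing.
# Thin-lens approximation: f(T) ~ f0 * (1 + (alpha_glass - dn_dT/(n-1)) * dT),
# while the housing changes the lens-to-sensor spacing by about alpha_housing * f0 * dT.

def thermal_defocus(f0_mm, n, dn_dT, alpha_glass, alpha_housing, delta_T):
    """Return (lens focal shift, housing expansion, net defocus) in mm."""
    lens_shift = f0_mm * (alpha_glass - dn_dT / (n - 1.0)) * delta_T
    housing_shift = alpha_housing * f0_mm * delta_T   # change in lens-to-sensor spacing
    return lens_shift, housing_shift, lens_shift - housing_shift

# Illustrative values only (approximate crown glass in an aluminium barrel)
f_mm = 50.0            # nominal focal length
n = 1.52               # refractive index
dn_dT = 3e-6           # refractive index change per kelvin (indicative)
alpha_glass = 7e-6     # glass expansion per kelvin (indicative)
alpha_housing = 23e-6  # aluminium expansion per kelvin
delta_T = 40.0         # e.g. from a 20 degC lab calibration to 60 degC operation

lens, housing, net = thermal_defocus(f_mm, n, dn_dT, alpha_glass, alpha_housing, delta_T)
print(f"lens focal shift:  {lens * 1000:+.1f} um")
print(f"housing expansion: {housing * 1000:+.1f} um")
print(f"net defocus:       {net * 1000:+.1f} um")
```

With these illustrative numbers the housing, not the glass, dominates: tens of micrometres of net defocus, which is already comparable to the depth of focus of a fast lens and is exactly the kind of budget an athermal design has to absorb.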
SPONSORED
EURESYS ADDS EASYLOCATE TO SUITE OF POWERFUL IMAGE ANALYSIS LIBRARIES
The Open eVision libraries from Euresys are some of the most powerful image analysis libraries and software tools available in the machine vision market today. And now the next addition to its Deep Learning libraries, EasyLocate, will soon be launched, joining existing products such as EasySegment and EasyClassify. Euresys CEO Marc Damhaut introduces us to EasyLocate and tells us how it fits in with the rest of the product range.

Euresys has been on a journey for a number of years now, producing some of the most powerful image libraries and software tools, for which we are recognised throughout the machine vision industry. EasyLocate is our latest library and one which the team and I are justly proud of. It completes a range of image processing and analysis libraries that are in regular use with our extensive customer base, mostly machine manufacturers who supply the semiconductor and electronics industries.
OBJECT DETECTION

EasyLocate will be launched as part of Open eVision 2.15 and can be used to locate and identify objects, products, or defects. It has the capability of distinguishing overlapping objects and, as such, EasyLocate is suitable for counting the number of object instances. EasyLocate, based on deep learning algorithms, works by learning from examples. In practice, EasyLocate predicts the bounding box surrounding each object, or defect, it has found in the image and assigns a class label to each bounding box. It must be trained with images in which the objects or defects to be found have been annotated with a bounding box and a class label. One of the main benefits of EasyLocate is that we designed a deep learning network architecture targeted specifically at industrial machine vision applications. It is inspired by state-of-the-art architectures such as YOLO and RetinaNet.
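The box-plus-label representation described above is easy to picture with a small example. The snippet below is not Open eVision code - just a generic Python illustration of a labelled bounding box and the intersection-over-union (IoU) overlap measure commonly used to decide whether a predicted box matches an annotated one; the class names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # class label, e.g. "scratch" or "capacitor" (hypothetical names)
    x: float            # top-left corner, pixels
    y: float
    w: float            # width and height, pixels
    h: float
    score: float = 1.0  # confidence (1.0 for ground-truth annotations)

def iou(a: Detection, b: Detection) -> float:
    """Intersection-over-union of two axis-aligned boxes (0 = disjoint, 1 = identical)."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union > 0 else 0.0

# A ground-truth annotation and a slightly offset prediction of the same defect
truth = Detection("scratch", 120, 80, 40, 25)
pred = Detection("scratch", 124, 83, 38, 27, score=0.91)

# A typical matching rule: same label and IoU above a threshold counts as a correct detection
print(iou(truth, pred), iou(truth, pred) >= 0.5 and truth.label == pred.label)
```

Counting object instances, as mentioned above, then amounts to counting the predicted boxes that survive such a matching or confidence threshold.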
DEEP LEARNING BUNDLE
Actual object recognition and tag
EasyLocate is part of the Deep Learning bundle of Open eVision. The Deep Learning bundle also includes EasyClassify, a library for the classification of images that is used for defect detection and product recognition. The user of our Deep Learning bundle can also lean on the abilities offered by the EasySegment library, which is designed to ease the process of segmenting objects and defects. In unsupervised mode, it is used for defect segmentation: it is 'trained' with only good images and then segments the parts that differ from the standard model. In supervised mode, it can segment anything, including objects and defects, but it requires a ground truth segmentation mask for each training image.
FREE EVALUATION

In addition, the free Deep Learning Studio application is a key element of the offering. It is set up for fast dataset annotation and for the training and evaluation of Open eVision's Deep Learning tools by, and within, the customer environment. What our customers appreciate is that Deep Learning Studio is free to download from the Euresys website and that no license is needed. This allows them to evaluate the performance of the eVision tools with actual images from their own customers, solving any potential confidentiality issue.
Allowing multiple objects recognition
RUNS ON THE GPU AND CPU

Another major benefit is that EasyLocate runs on the CPU, or with NVIDIA GPUs for faster training and inference. That said, EasyLocate's neural network has been specifically designed and optimized such that some applications may only need a CPU for inference.

WHAT'S NEXT?

EasyLocate support for Linux (Intel x64 platform) is coming in 2021. New features and upgrades will include, before the year-end, possible support for oriented bounding boxes, key point localization and arbitrary image resolution. And throughout 2021 there will also be continuous improvements to the Deep Learning Studio.

Tolerant to sub-optimal object exposure

THE FUTURE

We have high hopes for the future of EasyLocate - the feedback after our initial customer presentations has been very encouraging. And two main benefits have been identified as game changers. Firstly, the library is very easy to use and fits seamlessly into a customer's application. Secondly, with EasyLocate, the customer has time to evaluate the library before they commit to a license payment. This is a major differentiating benefit for our customers as, with other products on the market, there is no opportunity to first evaluate them and see if they are appropriate for the machine considered. EasyLocate is set for a great future and we are very excited that Open eVision 2.15 is further growing into a rounded and extensive product.

Marc Damhaut is CEO of Euresys, a leading and innovative high-tech company, designer and provider of image and video acquisition components, frame grabbers, FPGA IP cores and image processing software. Its markets include computer vision, machine vision, factory automation, medical imaging and video surveillance. It is headquartered in Belgium, with offices in Germany, the US, Singapore, China and Japan. MV
FUTURE VISION: AN INTERVIEW WITH LUCA VERRE, CEO AND CO-FOUNDER OF PROPHESEE
Vision is our most important sense. It follows that some of the greatest human inventions mimic its design. Cameras, televisions, computers and smartphones all rely on the medium of vision, in turn defining the way our world works. But for nearly 200 years, we've been taking pictures the same way. Is it still the most effective method? MVPro's writer, Joel Davies, sits down with the CEO and Co-founder of Prophesee to discuss the future of vision and how he's changing the way we take pictures.
COULD YOU EXPLAIN WHO YOU ARE AND WHAT PROPHESEE DO?

I co-founded Prophesee in 2014 and it's a spin-off of different research labs and universities, mostly from the Paris Vision Institute, the CNRS, CERN, the University of Sevilla and the Austrian Institute of Technology. Why? It was the time that the key research around neuromorphic vision technologies was getting done. During a sabbatical from Schneider Electric, when I was studying for an MBA, I met the other two key scientific co-founders in Paris – Christoph Posch, who is currently our CTO, and Ryad Benosman, a scientific advisor and professor at the University of Paris and Carnegie Mellon. Together, we founded Prophesee.

What we do at Prophesee is develop a vision sensor that mimics the human eye, and computer vision and AI algorithms that mimic the brain. The original idea was pioneered by Carver Mead at Caltech in the 80s, where he was the first to study the way a biological retina works and try to develop an artificial model of it. Eventually, this work was pursued by the government, students and other researchers in Europe in the late 90s, where the technology started maturing and showing potential for real-life applications. We started this company because we saw a great opportunity to bring this technology to market, with timing that was the most suitable. As we know, the number of cameras is growing everywhere - in our homes, in our cars, in our mobile phones. Video today is a sequence of images. We understand that this approach was useful to generate video for human consumption, for the cinema, because this is the reason the technology was originally invented more than 100 years ago. The same visual inputs are still used for machine consumption, where the purpose is to acquire visual input and process it quickly. We realise that this conventional frame-based approach is not suitable anymore because it's expensive in terms of power, energy consumption and computational power. This is where the neuromorphic, event-based paradigm enters, because it's conceived for machine vision only, so it's made to be extremely efficient and fast.
ONE EARLY APPLICATION WAS IN MEDICAL DEVICES THAT RESTORED VISION TO THE BLIND. COULD YOU TELL US ABOUT THAT?

We collaborated with a company called Pixium Vision, which was also spun off from the same institute we came from. At that time, Pixium Vision was trying to develop a visual restoration system for people who had a degeneration of the photoreceptors of the retina and had unfortunately become blind. They were looking for technology capable of sending signals to the brain in a bio-inspired fashion. This led to the first commercial development of a silicon retina for Pixium Vision, which went to market in 2016 with the first-ever visual assistance system based on Prophesee's first-generation event-based sensor. The way it works is that our silicon retina is embedded inside the camera module, in a pair of goggles, and it looks out and reacts to the outside world as a functional retina would. It does this in a very fast, asynchronous, event-based way, and then it sends the signals to electrodes, which are implanted surgically in the retina. These electrodes translate the signal coming from our sensor into electrochemical signals that are then sent to the brain. With some re-education - which may take from a few days to a couple of weeks - these people can, from full blindness, partially restore their vision capabilities. This was the starting point, which motivated me and the other key scientific founders to start Prophesee, because we saw this as the first natural implementation of the technology. That was only the starting point because, for us, the idea was: okay, if we can restore the sight of blind people, we can bring the properties of bio-inspired or human-inspired visual capabilities to any device. Any device today where cameras are used for machine vision becomes relevant for us.
CAPTURING THAT WHICH IS RELEVANT RATHER THAN A WHOLE SCENE SEEMS OBVIOUS IN HINDSIGHT. ALWAYS IS, ISN'T IT? YOU MENTIONED CARVER MEAD, BUT HOW DID YOU COME UP WITH THE IDEA AND PUT IT INTO PRACTICE IN THE MODERN DAY?

The key challenges were in designing the sensor and reaching the right level of stability and performance. The breakthrough came when, in 2011, Christoph Posch built the first functional silicon retina that was stable, and that showed the path to a real, industrialised product. The breakthrough came because of two aspects. One is that there was some invention in the way the sensor was built. The second is the fact that we make our technology bio-inspired by basically making pixels that are independent and asynchronous.
What happened is that, of course, there have been great advances in terms of technology nodes becoming more aggressive. The more aggressive they become, the better for us, because we can integrate more electronics around the photodiode. So, if you take an analogue processor as an example, it becomes smaller and the size of the sensor becomes smaller, which means it's more cost-efficient, targeting a wider spectrum of applications. This is to the point that now, with our latest generation - generation four - working with Sony, we managed to use a very advanced process called a BSI 3D stack process, which means that we can process two independent wafers instead of having only one. Now, we can stack the photodiode, processed on an optimised image sensor wafer, onto a more aggressive CMOS layer where we can put all this analogue processing, stacked underneath the photodiode. Now we can reach a much smaller pixel pitch. These aspects of technology, manufacturing and market readiness are all coming together, we believe, to make our story successful. We're now glad that this is also recognised by opinion leaders in the market like Sony, which is the market-leading image sensor provider.
YOU SAY THAT EVENT-BASED VISION CAN REDUCE THE AMOUNT OF DATA PROCESSED BY UP TO 1,000 TIMES. THAT COULD CHANGE THE WAY MANY INDUSTRIES CURRENTLY CAPTURE AND PROCESS IMAGES. HOW DOES IT DO THAT AND WHAT ARE THE ADVANTAGES OF THIS?

Our technology works in an event-based manner, so only pixels that see changes will react and send data, which means that the acquisition becomes scene-driven - which is not the case, of course, with a conventional image sensor. If I take a tennis match with a conventional image sensor as an example: you would have part of the scene which moves very fast, let's say the ball, and part of the scene which is static, like the ground. No matter what's happening, you will acquire your 10 frames per second. You would probably under-sample the ball, and the ball will be seen at fixed points in time, maybe at an annoying distance, so we can't know what's going on! Of course, we would oversample the ground, because the ground doesn't change over time. With our technology, the beauty is that every single pixel adapts its sampling rate according to what's happening. The pixels looking at the ball will acquire with a time precision that can be finer than a fraction of a millisecond, which means it's almost the equivalent of 10,000 frames per second. So we can follow the motion very nicely, but only for the pixels looking at the ball. The pixels looking at the ground would be acquired only once; if there's no change in the ground, nothing will happen. That makes the acquisition process very efficient. The benefit of having this scene-driven acquisition is that out of the sensor you get relevant data, and the processing side becomes more efficient. Detection and tracking of the ball become more efficient, especially if you want to do some data transmission, like if you wanted to follow the server. I think the benefit on the acquisition side is twofold. The first is in terms of making computer vision and machine learning tasks simpler. The second is that having less data is of course a benefit in terms of bandwidth, storage and processing power.
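The per-pixel behaviour Verre describes can be mimicked in a few lines. The sketch below is a toy frame-to-event converter - not Prophesee's sensor, which does this in analogue circuitry at each pixel - using the widely used event-camera model: a pixel emits an event whenever its log intensity has changed by more than a contrast threshold since the last event it fired. The threshold and scene values are illustrative.

```python
import numpy as np

CONTRAST_THRESHOLD = 0.15  # log-intensity change needed to trigger an event (illustrative)

def frames_to_events(frames, timestamps, threshold=CONTRAST_THRESHOLD):
    """Toy event-camera model: yield (t, x, y, polarity) whenever a pixel's
    log intensity drifts more than `threshold` from its last emitted level."""
    log_ref = np.log(frames[0].astype(np.float64) + 1.0)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + 1.0)
        diff = log_now - log_ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        # only pixels that fired update their reference; static pixels stay silent
        log_ref[fired] = log_now[fired]
    return events

# Example: a bright 'ball' moving across an otherwise static dark scene
frames = np.full((10, 48, 64), 10, dtype=np.uint8)
for i in range(10):
    frames[i, 20:24, 4 + 6 * i : 8 + 6 * i] = 200
events = frames_to_events(frames, timestamps=np.arange(10) / 1000.0)
print(len(events), "events, versus", frames[0].size * 9, "pixel readouts a frame camera would make")
```

Only the pixels at the leading and trailing edges of the moving ball produce data; the static background generates nothing, which is the data reduction the interview is describing.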
WHERE ARE YOU APPLYING YOUR TECHNOLOGY?

With our third generation, we have several examples of where our technologies are deployed. Some are applications where our technology shows superior performance compared to conventional frame-based technology and is replacing the incumbent approaches. In some, our technologies are opening up new uses in manufacturing and quality control. Then some applications are very new, like vibration measurement, and this is so new that we don't see vision-based alternatives. What we do here is observe machines, or parts of machines, that are vibrating - because any machine that's moving creates some vibration - and we monitor this vibration frequency in real time, because it's relevant for predicting possible failures of the machine. We can do it with our sensor with the benefit that every single pixel of the sensor array - in a VGA you can assume there are around 300,000 points - is a measurement point. So it's very rich with information, and we can use this information to predict whether there's a change in vibration or a risk of possible failure modes.
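To see why treating every pixel as a measurement point is useful, consider a toy version of the frequency analysis: bin the events from a single pixel into a regular time series and take its spectrum. This is a generic illustration of the idea, not Prophesee's or its customer's actual algorithm, and the event pattern below is synthetic.

```python
import numpy as np

def dominant_frequency(event_times_s, window_s=1.0, bin_s=0.001):
    """Estimate the strongest vibration frequency seen by one pixel from its
    event timestamps, by binning events at 1 ms and taking an FFT."""
    bins = np.arange(0.0, window_s + bin_s, bin_s)
    rate, _ = np.histogram(event_times_s, bins=bins)   # events per 1 ms bin
    rate = rate - rate.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(rate))
    freqs = np.fft.rfftfreq(rate.size, d=bin_s)
    return freqs[np.argmax(spectrum)]

# Example: a pixel watching an edge vibrating at 120 Hz; it fires an ON event as the
# edge brightens and an OFF event a little later in each cycle as it darkens again
f_vib = 120.0
period = 1.0 / f_vib
starts = np.arange(0.0, 1.0, period)
events = np.sort(np.concatenate([starts, starts + 0.3 * period]))
print(dominant_frequency(events))   # prints roughly 120 Hz
```

Repeating this for every pixel gives a dense map of vibration frequencies across the machine, which is what makes the event-based approach so information-rich for condition monitoring.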
Another unique application is welding monitoring. In laser welding, the laser processes the metal, and particles of metal that are in contact with the laser spot spatter outwards. What our customer has invented with our technology is a process that can detect, track and measure the speed and the size of these spatters, which are projected randomly at very high speed and under extremely bright lighting conditions that other cameras are incapable of looking at. So we can monitor the spatter pattern in real time as a way to control the process and monitor, for example, the speed or intensity of the laser while you are proceeding with the welding.

[Joel] Having that many use cases must be quite exciting for you.

It's very exciting. We're also creating a strong ecosystem of partners by working with Imago in Germany, and with CenturyArks and Restar in Japan. We have also put in place a strong distribution channel with Edom in Taiwan and China, Macnica in the US and FRAMOS in Europe. All these distribution and system integrator partners have adopted our technology, but they've also trained engineers and are helping us reach various customers.
LESS THAN A MONTH AGO YOU RELEASED OPENEB, WHICH IS THE LARGEST OPEN-SOURCE EVENT-BASED VISION SOFTWARE LIBRARY. WHY DID YOU CREATE IT AND GIVE IT AWAY FOR FREE?

For us, this is a strategic choice. As the most advanced pioneers in the field of neuromorphic technology, it's really important we build a community of users and inventors that will start using this technology and expanding its adoption. That's the reason why we decided, up to a certain level of the software stack, to provide open-source access and let this community contribute and enjoy. We also preserve a commercial model of the software, so our engineers can work together with customers to implement and develop certain know-how in key applications; if the customer wants to use that know-how, they need to go through a licensing process. Both models are valid for us at this point in the company's development.
WHERE DO YOU SEE THE TECHNOLOGY GOING NEXT?

What we're aiming at is to make event-based technology a new standard for machine vision. We want to be recognised as the company that has invented and promoted this technology to become the standard. We want to keep developing new sensor generations in partnership with key partners like Sony, and to keep investing in the software with more and more functionalities and application examples. For us, it's now a matter of really creating this standard and then scaling it up through more investment, in new products, in commercial technical support and in new channels.
WE'VE TALKED ABOUT SONY A COUPLE OF TIMES HERE AND YOU RECENTLY WON "BEST EQUIPMENT SUPPLIER" AND "COMPANY OF THE YEAR" ALONGSIDE THEM. WHAT MAKES PROPHESEE UNIQUE?

I think what people recognise when they choose Prophesee as a company is the fact that our technology is unique. We come with a sensing modality that's different. It's not an improvement of existing frame-based, 3D sensing, radar or ultrasound technologies - we come with a new modality, which for us has been a fundamental and purposeful foundation. That new modality is biology. This is probably what people recognise in us as a very unique approach: the fact that the technology has been a disruptive approach. Plus, what people now recognise is the effort the company has successfully put in over the past years to fill the gaps in the value chain, because when you come with a disruptive technology you kind of shake up the ecosystem a little bit. You come with a solution and, of course, you want the camera module and camera system makers, software partners, etc, to come on board, and this is what has happened and is happening more and more. This is happening thanks to all the effort we're putting into our sensor and, more and more, into our software side. I think this is what makes us unique. There is also the recognition we're getting from the scientific community, like our sensor, which we co-developed with Sony, being recognised by the ISSCC (International Solid-State Circuits Conference) last year. Plus, our various papers have been selected by flagship computer vision and AI conferences like CVPR (Conference on Computer Vision and Pattern Recognition), which is a recognition that our technology is state of the art. We also now have more than 100 engineers, mostly based in Paris and Grenoble, but we also have offices in Silicon Valley, Japan and China. It's a great group of talented people with varied experience. We have 25 nationalities, more than half of our engineers are PhDs and they have great scientific backgrounds. Also, great investment experience comes from companies like ST, Omni Semiconductor and OmniVision.
YOU HAVE RAISED $68M TO DATE AND HAVE PARTNERS THAT INCLUDE INTEL, BOSCH AND HUAWEI TO NAME A FEW. WHAT'S NEXT FOR PROPHESEE AND WHAT DO YOU NEED TO GET THERE?

Next for me is to keep investing in the product, in partnership with customers, and to execute on that. I think the strategy is the right one, and the vision has been the same since the beginning: it's to bring human vision to devices. It's what motivates us and what motivates our engineers. With this novel, disruptive approach, the fact that we can invent, and we can enable inventors to invent and solve their problems - this is such an original approach, I think, that it makes us so excited about it. The other part is a matter of strategy, because you need to select the right partners and you need to select the right business model. I think what we have been doing in the past few years, since we started the commercial deployment of our third generation, is good. We see the adoption of our technology in industrial settings; with our next generation, we'll open up opportunities in consumer applications as well. So for us, it's a matter of resilience, investment, executing on the strategy and scaling up to the point where the company will become a global player in the space.

[Joel] Would you be willing to tell me about the consumer space or is that under wraps?

No, of course not. We have many implementations in field tests. In the consumer space, we see applications in IoT and smart buildings. For example, people counting, access control, fault detection in the building infrastructure environment and also some applications in traffic management. So a lot of that, plus also some consumer robotics like delivery robots and vacuum cleaner robots.
WHAT'S THE ONE THING THE READERS SHOULD TAKE AWAY FROM THIS INTERVIEW?

It's always a question of people, right? Business is about the decisions of people and human behaviour. People tend to stay in their comfort zone and don't move out from what they have learned at school or in their job. When you come with something new, the majority tend to be a bit conservative. We have been lucky with some early adopters that are showing the way, and in doing so, what they have shown is that our approach is the right one, and it is now progressively opening up the mindset of other people. My message here is to say that we're just at the beginning of this revolution and they should try it out. Our technology could be a way to reinvent their job and find solutions to their challenges. MV
THE DEVELOPMENT OF THE INTERNATIONAL MACHINE VISION MARKET
IMPORTANT TRENDS AND OUTLOOK 2021
The international market for machine vision systems is highly dynamic and will continue its rapid development in 2021. What innovations, new technologies and trends can we expect, and how will users and companies benefit? Dr Maximilian Lückenhaus, Director Marketing + Business Development at MVTec Software GmbH, offers his expert assessment.
DR LÜCKENHAUS, TO WHAT EXTENT WILL THE IMPORTANCE OF ARTIFICIAL INTELLIGENCE FOR MACHINE VISION SYSTEMS INCREASE IN 2021? WHAT WILL BE THE STATUS OF DEEP LEARNING RELATIVE TO TRADITIONAL TECHNOLOGIES?

Artificial intelligence will continue to play a key role in the further development of machine vision systems in 2021. At the same time, the importance of deep learning will continue to grow. The technology enables a real quantum leap in the field of machine vision and is opening up opportunities for applications that were unimaginable only a few years ago. For example, our deep-learning-based feature "anomaly detection" enables precise and reliable defect detection based solely on "good images" that show the relevant object in defect-free condition. The tremendous amount of labelling associated with training regular AI-based inspection processes is completely eliminated. Combined with training that generally requires only a few minutes, anomaly detection permits a much shorter time-to-market, among other things.

But traditional machine vision methods can also be raised to a whole new level using deep learning technologies. An example of this is Deep OCR - a feature that's included in the new version 20.11 of MVTec HALCON, our standard machine vision software. Deep OCR introduces a holistic, deep-learning-based approach to optical character recognition (OCR) in industrial applications. It can locate numbers and letters much more robustly, regardless of their orientation, font type or polarity. The ability to group characters automatically makes it possible to identify whole words, which in turn significantly improves recognition performance because, for example, it helps prevent the misinterpretation of characters with similar appearances. These two examples clearly illustrate that, in terms of our product strategy, we see deep learning not only as a stand-alone technology to address applications that were previously very difficult or even impossible to solve - such as with anomaly detection. Wherever we see a benefit for our customers, we also incorporate deep learning in "traditional" core technologies in order to improve them significantly.
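The idea of learning only from "good images" is easier to grasp with a toy example. The sketch below is not MVTec's HALCON implementation - just a minimal, classical stand-in: it models the distribution of simple patch statistics across defect-free images and flags test patches whose distance from that distribution is unusually large. Real anomaly-detection tools use far richer learned features, but the training-without-defect-labels workflow is the same.

```python
import numpy as np

PATCH = 16  # patch size in pixels

def patch_features(image, patch=PATCH):
    """Split an image into non-overlapping patches and describe each by mean and std."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].astype(np.float64)
            feats.append([p.mean(), p.std()])
            coords.append((y, x))
    return np.array(feats), coords

def fit_good(images):
    """Learn the distribution of patch features from defect-free images only."""
    feats = np.vstack([patch_features(img)[0] for img in images])
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_map(image, model):
    """Distance of each patch from the 'good' distribution; large = suspicious."""
    mu, cov_inv = model
    feats, coords = patch_features(image)
    d = feats - mu
    scores = np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))
    return list(zip(coords, scores))

# Example: train on uniform grey 'good' parts, then test an image with a dark scratch
good = [np.full((64, 64), 128, dtype=np.uint8) + np.random.randint(-3, 4, (64, 64)) for _ in range(20)]
test = good[0].copy()
test[30:34, 8:56] = 20  # simulated defect
model = fit_good(good)
print(max(anomaly_map(test, model), key=lambda cs: cs[1]))  # the most anomalous patch
```

No defect ever has to be labelled: the model only ever sees good parts, and anything that deviates from them stands out at test time.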
HOW DO YOU THINK THE MARKET FOR EMBEDDED VISION SOLUTIONS WILL EVOLVE IN THE FUTURE?

There's a steep upward trend. Embedded devices are becoming more and more compact, they already have very powerful processors, and they're available on the market over a long time period. The integration of deep learning accelerators further boosts performance and opens up brand new applications, especially in the area of classification. Many of these devices are now fully industry-compliant. Other attractive benefits include low power consumption and the associated reduction in waste heat, as well as a small footprint. Plus, they're often less expensive than regular industrial PCs. For these reasons, embedded devices are becoming more and more common in the industrial environment. To satisfy this market requirement, HALCON provides robust machine vision functions that run smoothly on embedded systems. We ensure the end-to-end compatibility of our software with common embedded platforms such as the popular Arm® processor architecture. Together with our hardware partners, we're also gradually expanding our support for deep learning accelerators.
MACHINE VISION TECHNOLOGIES ARE ALSO BECOMING INCREASINGLY AVAILABLE IN THE CLOUD. WHAT OPPORTUNITIES AND ADDED VALUE DO YOU SEE FOR USERS?

Cloud technologies are experiencing a strong growth trend and are now being used by companies across all industries. The cloud enables highly scalable, flexible, inexpensive and fail-safe services. To make these numerous benefits available to machine vision users as well, we’ve also been working for some time on making our software technologies usable in the cloud. We’ve already successfully implemented pilot projects with several customers based on the “Container as a Service” (CaaS) model. In this case, the HALCON library runs in a Docker software container in a cloud instance and is activated via a license server in the cloud. The number of containers and their hardware resources are freely scalable. With this model, the customer benefits from maximum flexibility in terms of hosting – whether in a public cloud or in a self-operated, private cloud. This allows us to meet requirements specific to the industrial environment, such as data security and connectivity with automation components.
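As a purely illustrative sketch of how such a CaaS deployment can look from the user’s side (the replica URLs, the /inspect endpoint and the JSON response shape below are assumptions for illustration, not MVTec’s actual interface), inspection images are simply posted over HTTP to one of the identical vision-service containers; scaling up means running more containers behind the same pattern.

```python
# Hypothetical client for a "Container as a Service" vision deployment.
# Endpoint names, paths and the response format are illustrative assumptions;
# they do not describe the real HALCON interface or any vendor's API.
import itertools

import requests

# One entry per container replica; a load balancer could stand in for this list.
REPLICAS = [
    "http://vision-service-1:8080/inspect",
    "http://vision-service-2:8080/inspect",
    "http://vision-service-3:8080/inspect",
]
_next_replica = itertools.cycle(REPLICAS)

def inspect(image_path):
    """Send one image to the next replica and return its JSON verdict."""
    url = next(_next_replica)
    with open(image_path, "rb") as f:
        response = requests.post(url, files={"image": f}, timeout=30)
    response.raise_for_status()
    return response.json()  # e.g. {"defect": False, "score": 0.02}

if __name__ == "__main__":
    # Hypothetical image files produced by a line-scan camera.
    for i in range(4):
        print(inspect(f"part_{i:04d}.png"))
```

In a real deployment, the replica list would come from the cloud provider’s orchestration layer rather than being hard-coded, and licensing would be handled on the container side, as the CaaS description above indicates.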
WHAT TRENDS CAN BE IDENTIFIED FOR THE GLOBAL MACHINE VISION MARKET IN 2021? WILL IT BE IMPACTED BY THE COVID-19 PANDEMIC? IF YES, HOW?
Various trends are apparent, such as the increasing strategic importance of growth markets in the Far East, and above all in China. What we’re noticing is that the Chinese machine vision market has recovered from the COVID-19 pandemic extremely rapidly. We’ve recognised the central importance of this rapidly expanding market and have strengthened our presence in China with the opening of a new subsidiary for providing local customer support. But the pandemic has also revealed the vulnerability of global supply chains. According to our observations, it has led to a strong push for more on-site production in North America and Europe, as well as the expansion of robot-assisted production – for example, using cobots. Combined with trends such as Industry 4.0 and smart factory, all this reinforces the need for automation to increase efficiency and quality in production processes. This means that machine vision will be more necessary than ever.
WHAT NEW VERTICAL MARKETS WILL MACHINE VISION PENETRATE IN THE FUTURE?

As a matter of fact, certain vertical markets are becoming increasingly important for machine vision. In addition to logistics and medical technology, another major sector is agriculture. Modern agricultural companies are very open to the systematic digitisation and automation of their processes. They’re happy to rely on machine vision technologies, with which they can greatly improve the efficiency of their operations. This includes the use of modern harvesting robots, the automated classification and inspection of seeds and the targeted and site-specific cultivation of agricultural land (precision farming), to name just a few examples. Precision farming in particular permits extremely interesting and valuable applications. For example, agricultural land can be monitored with the aid of drones or satellites, and machine vision software can be used to identify and pinpoint anomalies like disease, pest infestation and insufficient irrigation. Farmers are then able to restrict the spraying of pesticides or fungicides to the affected areas, as well as fertilise and irrigate exactly where needed. This means that agricultural companies can cultivate their land more efficiently and ecologically using modern machine vision, because they’re able to use and apply resources like pesticides, water and fertiliser in a more targeted manner. MV
CHANGING LANES:
THE FUTURE OF TRAVEL

Around 500 years ago, Leonardo da Vinci wound some springs up, added a brake and installed them into a cart. When the brake was released, the springs would uncoil, pushing the cart without human assistance. It’s considered the precursor to the car, as well as the robot. These days, thanks to the silver screen, we’re more used to the concept of flying cars and super-fast global travel, but realistically, how will you get to where you’re going in the near future? MVPro’s writer, Joel Davies, peers into his crystal ball.
CARS
Let’s start with the most popular form of transport, and with something we know: it’s a fact that we’re switching to electric cars. Most manufacturers, depending on the country and government, have to stop producing traditional fossil-fuel-reliant vehicles and start making electric-only ones by 2030. Ten years is the blink of an eye for a company to overhaul its product. It involves the logistical terror of sourcing new materials, adapting production facilities, designing new concepts and finally making and marketing the end product. You will have noticed from the adverts over the past five years or so that it is happening, however. All major automotive manufacturers, even Harley-Davidson, are doing as they’re told because they simply have to. And it’s not so bad for the makers: the Guardian recently found that electric cars will be cheaper to produce than fossil fuel vehicles by 2027 [i]. Self-driving cars, the next major evolution of the automobile, are less assured.
How on earth is a car able to drive itself? Thanks to good old-fashioned machine vision. Invented in 1961, LiDAR, or light detection and ranging, is a remote sensing method that can be used to map objects and structures, including their height, density and other characteristics, across an area. Light is emitted from a rapidly firing laser and reflects off objects like buildings or, in this case, cars, people, roads and pavements. The reflected light returns to the LiDAR sensor, where the time each pulse took to come back is converted into a distance, and machine learning algorithms turn millions of these measurements into a 3D map of the surroundings. Covering all angles of a car, it is, in effect, how self-driving cars “see” where they’re going. It’s then up to the onboard computer, packed with AI algorithms, to decide what to do with what it sees, just as a human brain would.
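For a rough sense of the geometry involved (a simplified sketch, not any particular sensor vendor’s processing pipeline), each laser return can be converted from its round-trip time and the beam’s firing angles into an x, y, z point; accumulating huge numbers of such points is what produces the 3D map the car navigates by.

```python
# Simplified sketch of how raw LiDAR returns become 3D points.
# Real sensors add calibration, ego-motion compensation and filtering;
# the numbers and field names below are illustrative only.
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def return_to_point(time_of_flight_s, azimuth_deg, elevation_deg):
    """Convert one laser return into an (x, y, z) point in the sensor frame."""
    # The pulse travels to the object and back, so halve the round-trip time.
    distance = SPEED_OF_LIGHT * time_of_flight_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)   # forward
    y = distance * math.cos(el) * math.sin(az)   # left
    z = distance * math.sin(el)                  # up
    return x, y, z

if __name__ == "__main__":
    # A return that took about 66.7 nanoseconds came from an object roughly 10 m away.
    returns = [
        (66.7e-9, 0.0, 0.0),     # dead ahead
        (133.3e-9, 45.0, -2.0),  # ~20 m away, 45 degrees to the left, slightly below
    ]
    point_cloud = [return_to_point(*r) for r in returns]
    for point in point_cloud:
        print(tuple(round(c, 2) for c in point))
```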
That is how it works, and the complexity is apparent. Whilst large American companies like Google, Uber, Apple and Tesla have invested years and millions of dollars, many other innovative companies are rising to the challenge. In Europe, the self-driving vehicle market is expected to grow exponentially [ii], creating new jobs and profits of up to €620 billion by 2025 for the EU automotive industry. This is enough motivation for the major producers, including BMW, Mercedes and Audi, to be developing prototypes, with mixed results. In any case, the EU is debating and deciding on laws about the novel technology. It has even set out a handy roadmap for when we can expect self-driving cars to be on the road.

Why even have self-driving cars? It’s not only to avoid the burden of driving, as some would see it. The adoption of self-driving cars is driven by factors like avoiding human error, the main cause of traffic accidents, which kill 25,300 people a year on the EU’s roads. There are also wider benefits: the new digital technology can reduce traffic congestion, emissions and air pollutants, and even improve mobility for those who need it most.

Let’s think even bigger for a moment. The oncoming wave of consumer-level autonomous cars, as supplied by the likes of Tesla, dominates the headlines. But consumers don’t spend that much time on the road, at least not compared to the drivers ferrying around the things we need. According to EU rules, a professional driver can operate a vehicle for 9 hours a day, 56 hours in a week and 90 hours in any 2 consecutive weeks. Statistics show that at least 300 people are killed each year as a result of drivers falling asleep at the wheel, and around four in ten tiredness-related crashes involve someone driving a commercial vehicle. A driver who falls asleep for three or four seconds on a motorway can travel the length of a football pitch, and half a mile in 30 seconds.

That’s why, as you will have seen in the Industry News of the previous edition, Plus, the so-called “world leader” in self-driving trucks, secured a $220m funding round. Two weeks ago they returned to the headlines, announcing that they were going to go public through a $3.3 billion SPAC deal. That’s just the US. On the European side, there is Einride. With its Pod, it became the first fully electric, totally autonomous transport vehicle to operate on a public road anywhere in the world.
Created in Sweden, the company has around €27.2 million in funding, and its electric self-driving cargo trucks can be controlled remotely by drivers. It offers a fleet of slick, white vehicles in haulage, distribution and city sizes, offering 24, 9 and 16-ton carrying capabilities and all the bells and whistles you could ask for. Its completely driverless technology was enough to convince Coca-Cola: the two partnered in 2019. Commenting on the company’s recent $110m funding round, Robert Falck, CEO and co-founder of Einride, said, “We are proud to be backed by some of the world’s most notable investors as they support our mission of becoming the leading provider of freight transport-as-a-service. Their network, reach and experience will be invaluable as we further accelerate our strong momentum as the leader in autonomous and electric freight transportation, and as we expand into new markets. Einride is dedicated to transforming road freight transport as we know it, making it more cost-efficient, safe, and sustainable”.

The end goal? Electric, self-driving vehicles of all types by 2030. How ready each company and country is for the switch remains to be seen. One thing we do know: it’ll be quieter.
PLANES

Commercial jets, like the ones we used to use pre-pandemic to visit other places, already fly themselves. The pilots of these planes only spend a few minutes of the journey – take-off and landing – actually flying. Autopilot takes care of the rest. Having already achieved a form of autonomy, the aviation industry is thinking smaller. It’s thinking flying taxis.

Flying taxis are exactly as you imagine. Or perhaps not, if you’re thinking they look like cars that hover up into the air in a Blade Runner or Harry Potter-esque way. The principle is the same, but the design is different. Electric vertical take-off and landing vehicles (eVTOLs) borrow from both jets and helicopters. With a sleek, winged body and many small rotary propellers, eVTOLs take off and land vertically like helicopters but cruise like jets, effectively getting the best of both. The propellers attached to the body of the aircraft tilt vertically to take off and land, and horizontally to cruise. Entirely run on electricity, they’re quiet, but some models, like Vertical Aerospace’s VA-X4, can reach speeds of up to 321.87 km/h.
Why not use regular helicopters or jets? With one large, exposed rotary blade on top, helicopters are noisy, cannot reach the speeds of jets and come with the risk of lopping limbs off. Jets, on the other hand, simply cannot land or take off vertically. Enter the eVTOL.

On a larger scale, what’s the point of them? Are you going to catch a flight into town to do your groceries? Yes and no. It all depends on the company. As it stands, the Googles and Teslas of the aviation world are working away to make eVTOLs a reality, and each has its own idea about how the vehicles should look and what they should be used for. Some companies, like China’s EHang, are designing eVTOLs not much larger than drones, with the capability of holding two passengers for short destination flights. Others, like Lilium with its 7-seater jet, are aimed at longer city-to-city journeys, with a maximum of 200 miles per charge.

The end goal of all of these companies is the same: to travel better. In most cases, that’s by doing it quicker. Vertical Aerospace anticipates the 20-mile journey from Heathrow Airport to Canary Wharf, London would take just 13 minutes compared to the usual hour by car or train. The knock-on effect of this is the lessening of congestion. According to CNN, commuters in the 15 most-congested American cities spent an average of 83 hours stuck in traffic in 2017.

No one can complain about getting places quicker, but that’s when cost rears its ugly head. Commercial flying is expensive. And when flying taxis hit the skies of a town near you, they will cost. Volocopter says that for a similar journey from central London to Heathrow, it would expect to charge £100-200. As with all novel technology, costs will fall. All of these wonderful companies won’t be around for long if they don’t.

Boeing, Toyota, Hyundai and the acquirer of Uber’s failed Elevate, Joby Aviation, are some of the big-name frontrunners to get the technology off the ground first, but it is Lilium that has made the biggest recent step towards making regular flying a reality. You will have seen Lilium in the last edition of the MVPro magazine, when the company announced the development of its electric vertical take-off and landing jet after entering into a definitive business agreement with Qell Acquisition Corp. The $830m investment brought Lilium’s total worth to $3.2b. At the time, Daniel Wiegand, Co-Founder and CEO of Lilium, said, “We’re incredibly excited to reveal the development of our 7-Seater Lilium Jet and announce the next stage of our growth. This is a validation of all the hard work over the last five years from our talented team and our world-class partners and investors.
“Our vision is to create a sustainable and accessible mode of high-speed travel and bring this to every community. Transport infrastructure is broken. It is costly in personal time, space consumption and carbon emissions. We are pursuing our unique electric jet technology because it is the key to higher-capacity aircraft, with a lower cost per seat mile while delivering low noise and low emissions”.

The proceeds from the transaction with Qell are intended to fund the launch of commercial operations, planned for 2024. This includes the finalization of serial production facilities in Germany, the launch of serial production aircraft and the completion of type certification. Although Lilium is a German company, up to 14 vertiports are already planned in Florida, with advanced discussions taking place with key infrastructure partners on 10 vertiports to build a network across Europe.

Up to 300 firms are working on short-range, battery-powered craft that take off and land vertically [iii], according to Natasha Santha of LEK Consulting. Looking at the futuristic craft and the infrastructure that needs to be built, it’s hard to comprehend that they could be a reality so soon, but most companies developing the technology, including Lilium and Vertical Aerospace, state 2024 as their target. If, as Lilium claims, its vertiports are 10x quicker and 100x cheaper to build than high-speed rail, then why not?
MISCELLANEOUS

If cars can drive themselves and taxis can fly us city-to-city unassisted, what about the other forms of transport we rely on? What about buses, boats, trains and personal transport?
TRAINS

Unless you live in Tokyo, it seems that the only time we talk about trains is when we’re discussing how late, full or uncomfortable they are. Trains aren’t due to grow wings anytime soon, but they could go electric and autonomous; some already are. In Asia, they have maglev trains, which use magnets to float carriages above the ground without the need for wheels. They are currently the fastest form of rail travel in existence, with China and Japan the leading proponents of the technology.
China’s passenger railways can travel at between 250 and 350 km/h, whilst Japan’s own maglev train once topped out at 603 km/h on an experimental track in 2015. Shanghai airport’s maglev does the 19-mile, hour-long taxi journey from the airport to the business district in eight minutes. That’s five minutes quicker than the proposed flying taxis above. There’s no mention of the technology in Europe, primarily because of cost: South Korea’s Incheon airport maglev cost £25m per km to build. The technology is in line with the future switch to green energy sources, however. Other potential ideas for trains include hydrogen, with the world’s first passenger train powered by a hydrogen fuel cell, which produces electrical power for traction, launched in Germany in 2018. The train emits only steam and condensed water. It was created by Alstom, a French company that has since taken an order for 12 trains from France.
BUSES

For many, they can’t live with them and can’t live without them. In the EU, 55.7% of all public transport journeys are on buses, equating to 32.1 billion passenger journeys every year [iv]. Following in the footsteps of their smaller relative, the car, buses are getting the electric, self-driving treatment. In February, the southern Mediterranean city of Malaga, Spain, became home to one of the first trials of a fully functional, electric, self-driving bus. With a driver behind the wheel in case of emergencies, the regular-sized bus holds 60 passengers and follows an 8km route around the city six times a day. The project was backed by the Spanish government and developed by a collective of universities. Elsewhere, Volvo recently announced that by the start of 2022 it would have its electric CAT buses on the streets of Australia. The buses, developed with Volgren, aren’t autonomous. Nevertheless, the companies are due to deliver 900 new buses by 2029.
ETC: BOATS, SCOOTERS... JETPACKS?

A lesser-used form of transport these days, aquatic vessels are getting the self-sailing experience. ProMare and IBM’s Mayflower Autonomous Ship (MAS) is due to start its maiden voyage from Plymouth, UK to Plymouth, USA this spring.
When it comes to more personal transport, self-driving bikes haven’t quite happened, but San Francisco-based Go X has piloted self-driving scooters on a 500-acre technology campus in Georgia, USA. Based on the rent-a-scooter idea, a user can hail a scooter through the app and it will scoot itself over, saving the hassle of trying to find a scooter to rent in the first place.

Go X aren’t operating in Europe, where a potentially more important piece of personal travel equipment is being developed. Although it’s unlikely you’ll ever use one, and hopefully never have to be in the vicinity of one, jetpacks are on their way. The most far-fetched concept in this article, the kit is real and not as gimmicky as you might think. Developed by the British company Gravity Industries, the Jet Suit records speeds of up to 128 km/h with a jet engine power of more than 1,000bhp, and can reach altitudes of 12,000 feet. Still in its trial phase, the technology’s potential applications could extend to the military and emergency services. The Royal Navy, as well as the UK’s Great North Air Ambulance Service, have tested the suit out.

There you have the burgeoning future of transport, with 2030 the finishing line for many life-altering concepts like self-driving cars. Whether or not these technologies come to fruition, it’s important to remember that these are all relatively recent developments that have relied on the advancement of machine vision, computer vision and other technologies. The idea of a flying car may be as old as any futuristic idea, but only within the past 10 years has it been treated as a reality. The stress is now on when, not if. So, three important questions remain: how will you travel? How fast will you get there? And what will you do with the time you save? MV

[i] https://www.theguardian.com/business/2021/may/09/electric-cars-will-be-cheaper-to-produce-than-fossil-fuel-vehicles-by-2027
[ii] https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/analysis-possible-socio-economic-effects-cooperative-connected-and-automated-mobility-ccam
[iii] https://www.economist.com/business/2021/04/03/flying-taxis-are-about-to-take-off-at-last
[iv] https://www.acea.be/automobile-industry/buses
ICE CREAM MAKES FOR
CHEWY ENCOUNTER WITH DIGITAL MANUFACTURING
Since its launch in 1981, Lotte’s Yukimi Daifuku has been loved by people of all ages as a popular Japanese household favourite. Many have tried the unforgettable flavour and texture of the vanilla ice cream balls wrapped in soft, chewy mochi rice cake. “Delicious whenever eaten, regardless of the season” is the promise; achieving that deceptively simple goal of consistent texture, quality and taste, however, is more difficult than most people would have thought. To solve this challenge, Lotte has introduced Mitsubishi Electric’s e-F@ctory to the production of Yukimi Daifuku. Chris Hazlewood, Deputy General Manager, Digital Marketing Center, Mitsubishi Electric Corporation, Japan, explains how they did it.
“Before introducing e-F@ctory, there was an issue of inconsistency in the rice cake quality,” said Hiroshi Sugimoto, Manager of the Facilities Department, Urawa Plant, LOTTE Co., Ltd. “When wrapping the ice cream, the hardness of the rice cake used to vary depending on the temperature and water content. Some operations were dependent on people, and losses arose out of the need to finely adjust the machine parameters.”

“The e-F@ctory system allows us to conduct improvement activities such as enhancing the operating rate, stabilizing quality, and optimizing staffing for production activities. The extendibility of the system, depending on what we want to do, was also appealing.”
At each of the Yukimi Daifuku production lines, the state of the product and the operating status of the machines are collected by PLCs installed in each process. Vast amounts of data are gathered, ranging from vibration data from the rice cake hopper to data from the conveying inverters. All of the data can be understood in real time, not only through the overall SCADA monitoring system installed in the control room but also through on-site computer displays.
“By introducing this system, data became centralized, making it possible to view and investigate conditions whenever we want,” remarked Hiroshi Akimoto, Section Manager of the Facilities Department, Urawa Plant, LOTTE Co., Ltd. “Because the data volume is extremely high, having all the data centralized in one place has a positive effect. One big benefit is that we can now gather and analyze data and conduct data diagnostics using a real-time data analyzer. This system not only helps us stabilize the state of the rice cakes used for the Yukimi Daifuku but also promotes improvement activities within the plant.”
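As a purely hypothetical illustration of what such centralized, real-time diagnostics can look like (this is not Lotte’s or Mitsubishi Electric’s actual implementation, and the field names, thresholds and data below are invented), a stream of hopper vibration readings could be screened with a simple rolling statistic that raises a “symptom” flag when a value drifts outside the band learned from normal operation.

```python
# Hypothetical sketch: flag "symptoms" in a stream of hopper vibration
# readings using a rolling mean and standard deviation. Field names,
# thresholds and data are illustrative; this is not the e-F@ctory software.
from collections import deque
import random
import statistics

class SymptomDetector:
    def __init__(self, window=100, sigma=3.0):
        self.readings = deque(maxlen=window)  # recent history of normal behaviour
        self.sigma = sigma

    def update(self, value):
        """Return True if the new reading looks like a symptom of trouble."""
        symptom = False
        if len(self.readings) >= 30:  # wait until enough history has accumulated
            mean = statistics.fmean(self.readings)
            stdev = statistics.stdev(self.readings)
            if stdev > 0 and abs(value - mean) > self.sigma * stdev:
                symptom = True
        if not symptom:
            self.readings.append(value)  # only learn from normal-looking data
        return symptom

if __name__ == "__main__":
    random.seed(0)
    detector = SymptomDetector()
    # Simulated vibration amplitudes: mostly steady, with a drift at the end.
    stream = [random.gauss(1.0, 0.05) for _ in range(200)]
    stream += [random.gauss(1.4, 0.05) for _ in range(5)]  # a bearing starting to fail?
    for i, value in enumerate(stream):
        if detector.update(value):
            print(f"symptom detected at sample {i}: vibration={value:.2f}")
```

Production systems do far more than this, of course, but the principle is the same: because the data is collected centrally and continuously, drifts can be caught as symptoms before they become failures.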
e-F@ctory is Mitsubishi Electric’s integrated concept for building reliable and flexible manufacturing systems that enable users to achieve many of their high-speed, information-driven manufacturing aspirations. Through its partner solution activity, the e-F@ctory Alliance, and its work with open network associations such as the CC-Link Partner Association (CLPA), users can build solutions based on a wide-ranging “best in class” principle.

“Another benefit is the adjustment of the blending ratio of rice cake and ice cream,” Akimoto continued. “This was usually done by experienced operators, who monitored the state of the rice cakes as they came out of the wrapping machine by kneading them with their fingers. We thought it would be great if we could automate this process. By automating such processes, which were conventionally performed based on human senses, and by capturing signs of any poor quality of the wrapped rice cakes beforehand, we can eliminate problems. That was our ultimate goal.”

“As you know, ice cream is a cold material. This cold ice cream is combined with rice cake, which is warm when it is made,” said Takayuki Manako, Executive Director & Plant Manager of the Urawa Plant, LOTTE Co., Ltd. “This technical aspect of combining a cold item with a warm one in a good balance is what makes Yukimi Daifuku a complex product. But I think this challenge is something that inspires us to find new ways to overcome it. The temperature in the manufacturing room varies all year round. We strive to maintain consistent conditions, but at the same time, we try to reliably create even better conditions. We introduced the e-F@ctory manufacturing concept with the expectation of realizing this in the future.”

“In the course of daily production, machines do not operate in the same condition every day. Previously, experienced staff members checked and adjusted the settings of the machines,” Manako continued. “But with e-F@ctory we can visualize the condition of the machines, and the machines themselves can issue instructions to make adjustments. Another thing is that maintenance and failures are unavoidable with machines. We expect that these can also be better managed by using e-F@ctory’s symptom management features.”
“The use of IoT has only just been introduced to the production of Yukimi Daifuku. However, the Urawa Plant has many other lines making chocolates and ice creams, so Yukimi Daifuku is not our only challenge,” Manako added. “We aim to horizontally deploy this system and construct a smart plant in which ‘symptom management’ and ‘operating rate improvement’ are implemented on numerous lines. Stable plant operation and manpower savings will eventually make a major contribution in terms of costs and so on. If we consider LOTTE as a whole, our goal is to further evolve this technology and extend it to other plants.”

With 100 years of experience in providing reliable, high-quality products, Mitsubishi Electric Corporation is a recognized world leader in the manufacture, marketing and sales of electrical and electronic equipment used in information processing and communications, space development and satellite communications, consumer electronics, industrial technology, energy, transportation and building equipment. Mitsubishi Electric enriches society with technology in the spirit of its “Changes for the Better”. Mitsubishi Electric Europe B.V., Factory Automation EMEA, has its European headquarters in Ratingen near Dusseldorf, Germany. It is part of Mitsubishi Electric Europe B.V., which has been represented in Germany since 1978 and is a wholly-owned subsidiary of Mitsubishi Electric Corporation, Japan. MV
FILTERS: A NECESSITY, NOT AN ACCESSORY.
INNOVATIVE FILTER DESIGNS FOR INDUSTRIAL IMAGING
MIDOPT.COM