Telematics Wire Magazine- December 2021

Page 1

www.telematicswire.net

December 2021 | Volume 1 | Issue 7

Telematics Wire Technology Driven | Futuristic Vehicle

Autonomous mobility: delayed, not derailed
December 2021 | Telematics Wire | 1






Volume 01 | Issue 7

Contents

06 - Outlook on world autonomous vehicles and their impacts - G Naveen Kumar, Comscore, Inc.
10 - Changing Landscape of OTA Updates - Chandrasekhar Morisetti, Innobox Systems Pvt. Ltd
14 - Data-Led Personalization in the Era of Connected Mobility - Vijay Suryanarayanan & Laksh Parthasarathy, Tata Consultancy Services
18 - LCD Introduction & Requirements - Tang Hui, Sharp Singapore Electronics Corporation Pte Ltd
22 - Neuromorphic/Event-Based Vision Systems: A Key for Autonomous Future - Siddharth Jaiswal & Faizal Shaikh, Netscribes
26 - The requirement of indoor mapping for localising vehicles indoors, enabling Autonomous Valet Parking in the future - Dr. Brian Holt, Parkopedia
30 - Precision and Complexity: Role of Manufacturing in Product Performance - Aman Jain, Napino Auto & Electronics Ltd
32 - Autonomous Vehicle, the next big thing in Telematics - Praveen Dhusia, Teltonika India
36 - The key to delivering autonomous driving promises: V2X connectivity - Tushar Bhagat, Uffizio India Pvt. Ltd.
38 - Product Launch - Baraja: Spectrum HD; Brigade Electronics: Sidescan Predict
40 - Product Launch - Hyundai Mobis: Mobis Parking System; Samsung Electronics: Exynos Auto T5123, Exynos Auto V7 & S2VPS01
44 - Car Launch - Mercedes-Benz AMG A45 S; Volkswagen Tiguan
48 - How Lidar Enables Autonomous Vehicles to Operate Safely - Mircea Gradu, PhD, Velodyne Lidar
50 - Developing Secure Software for Autonomous Vehicles - Dennis Kengo Oka, Synopsys
54 - News

Editor: Maneesh Prasad, maneesh.prasad@telematicswire.net
Dy. CEO: Anuj Sinha, M: +91 87440 88838, anuj.sinha@telematicswire.net
GM - Corporate Communication: Yashi Mittal, M: +91 98103 40678, mgr_corpcomm@telematicswire.net
DGM - Corporate Sales: Poonam Mahajan, M: +91 9810341272, mgr_corpsales@telematicswire.net
Editorial Team Member: Richa Tyagi
Designer: P K Gupta
Directors: Maneesh Prasad, Mohan Chandra Verma, Sinha Anuj Ranjan

Publication Address: Telematics Wire Pvt. Ltd., D-98, 2nd Floor, Noida Sec-63, Uttar Pradesh-201301. Email: info@telematicswire.net
Printed and published by Maneesh Prasad on behalf of Telematics Wire Pvt. Ltd., D-98, 2nd Floor, Noida Sec-63, Uttar Pradesh-201301. Email: info@telematicswire.net

Disclaimer: Telematics Wire Pvt. Ltd. does not necessarily subscribe to the views expressed in the publication. All views expressed in this issue are those of the contributors.

Please note: No material may be reproduced in whole or in part without permission of Telematics Wire Pvt. Ltd. Copyright 2021, Telematics Wire Pvt. Ltd. All rights reserved.


8, 9, 10 March 2022 | Bengaluru, India

CONNECTED VEHICLE 2022: the Physical Conference & Exhibition is BACK!

Title & Cybersecurity Sponsor

Diamond Sponsor

Data Innovation Sponsor

Badge Sponsor

Silver Sponsors

Associate Sponsors

Breakfast Sponsor

Exhibitors Supporting Partner

Association Partner

Media Partner

Knowledge Partner

organised by

Join us at #CV2022 as a Sponsor, Exhibitor, Speaker or Delegate www.CV2022.in Contact Us: anuj.sinha@telematicswire.net | +91 87440 88838

Team Telematics Wire wishes you a very Happy New Year!


Industry Insight

Model: Honda Legend. Source: Autoweek

Outlook on world autonomous vehicles and their impacts
G NAVEEN KUMAR

Comscore, Inc.

For a long time, driverless technologies have promised to revolutionize urban transportation. Especially since manufacturers began reporting substantial achievements in automated driving around 2010, there has been widespread concern about mass unemployment in transit systems and mobility-related industries such as trucking. The development of self-driving vehicles is accelerating, prompting several governments to enact legislation and rules governing the technology. These regulations are intended to address the new vehicles' safety, liability, privacy, and security concerns. Let's take a bird's-eye view of how autonomous vehicles (AVs) affect individual owners and society at large, both positively and negatively. For at least the next decade, fully autonomous driving will be limited to

6 | Telematics Wire | December 2021


specific geographic regions and climates, with progressively automated mobility systems flourishing in the following decades. This timeframe allows policymakers to prepare for and minimize disruption to the millions of road transportation jobs and related industries likely to be affected, while fostering substantial economic opportunities and environmental benefits by developing accessible mobility systems for everyone. As we have seen, various advantages and disadvantages tag along with the coming era of autonomous vehicles. But where does the world stand at present on autonomous vehicles? Are companies working hard to bring autonomous vehicles into the mainstream? How will workforce training and education systems align with this change? Are there acquisitions and mergers worldwide combining knowledge and technology toward one vision? How do laws and legal systems support the autonomous dream? Did COVID-19 impact the autonomous vehicle vision? Let's go through every question!
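As a quick reference for the levels discussed in this article, the six SAE J3016 automation levels can be captured in a small lookup table; the one-line summaries below are informal abbreviations of the standard's definitions, not its exact wording:

```python
# The six SAE J3016 driving-automation levels, as a small lookup table
# with a helper to print a human-readable description.
SAE_LEVELS = {
    0: "No automation: the human driver performs all driving tasks",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed support; the driver supervises",
    3: "Conditional automation: the system drives in defined conditions, "
       "but the driver must take over on request",
    4: "High automation: no driver attention needed within the operational domain",
    5: "Full automation: drives everywhere, in all conditions",
}

def describe(level: int) -> str:
    """Return a one-line description of an SAE automation level."""
    if level not in SAE_LEVELS:
        raise ValueError("SAE levels run from 0 to 5")
    return f"Level {level}: {SAE_LEVELS[level]}"

print(describe(3))  # the level Honda and Mercedes-Benz reached in 2021
```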

Where is the world at present with autonomous vehicles?
When we think of autonomous cars, one name always comes to mind: Tesla. SAE stacks driving automation into six levels, and Tesla is at Level 2, though the company still implies that its vehicles are fully self-driven. Level 2 has been achieved by major global automakers such as General Motors, Volvo, Tesla, Nissan, and Toyota. But in 2021 came the breakthrough to Level 3, achieved by two manufacturers: Honda (Honda Legend) and Mercedes-Benz (S-Class and EQS). As the name implies, Level 3 is a step up: the car can read its surroundings (traffic etc.) and make decisions based on them. Thanks to Honda and Mercedes-Benz, the world has reached Level 3 autonomy, and other companies are expected to launch their Level 3 autonomous vehicles soon.

Model: Mercedes-Benz EQS. Source: Mercedes-Benz website
Model: Mercedes-Benz S-Class. Source: CNET

Are companies working hard to bring autonomous vehicles into the mainstream?
The current pilot runs are primarily done in suburban areas. The success lies in driverless mobility-as-a-service (cars or trucks) deployment in densely populated cities such as Beijing (China), London (UK), New York and San Francisco (US), and Tel Aviv (Israel). For instance, Waymo, Cruise, and Zoox test their driverless technology in San Francisco, while Mobileye tests in New York City and Tel Aviv. Companies such as Didi Chuxing, AutoX, and Baidu are aggressively testing to reach the highest autonomy level in China. According to a study by AlixPartners, global investment in making the autonomous vision possible is around $75 billion in technology between 2019 and 2023. But during this timeline the COVID-19 pandemic happened; did it impact the autonomous vision? We



List of major acquisitions

Acquirer | Target company | Acquisition date | Value (M$)
Woven Planet Holdings | Lyft Level 5 | 04/26/2021 | 550
Cognizant | ESG Mobility | 03/25/2021 | -
Luxoft | CMORE Automotive | 03/20/2021 | -
Cruise | Voyage | 03/15/2021 | -
Aurora | Uber ATG | 12/07/2020 | -
Amazon | Zoox | 06/26/2020 | 1200
Velodyne Lidar | GRAF | 06/02/2020 | -
Intel | Moovit | 05/04/2020 | 900
Waymo | Latent Logic | 12/12/2019 | -
Ford, Volkswagen | Argo AI | 07/12/2019 | 3600
Apple | Drive.ai | 06/26/2019 | -
Daimler | Torc Robotics | 03/19/2019 | -
Delphi Automotive | nuTonomy | 10/24/2017 | 450
Intel | Mobileye | 03/13/2017 | 15300
GM | Cruise | 03/11/2016 | 1000
GM | Lyft | 01/05/2016 | 500

will discuss this later in this article. The global deployment of such technology with no human intervention will take at least a decade or more. Moreover, the expansion will be gradual since companies are still testing for advanced driverless technology, and deployment will happen stage by stage in various regions of the world for specific transportation categories.

How will workforce training and education systems align with autonomous technology? The technology change will affect trucking, bus driver, and car driver jobs both positively and negatively; it is a double-edged sword. It will end some positions and also create new streams of employment. Driver jobs are not limited to just driving, but include other tasks such as vehicle check-ups, breakdown response and vehicle

AI-driven autonomous vehicles without human intervention are a decade's dream, and current technology is the stepping stone to making them a reality!


maintenance, loading, and securing, among others. The emerging A.V. industry will open gates for more off-the-road tasks (teleoperation centres) such as emergency management, customer care, auto repairs, and registration/booking, which will provide jobs for many drivers. With proper training, drivers can take up off-the-road tasks, which will save many driving careers. Continued investment in workforce training, and other strategies to address job and market change, is critical. In terms of the education system, new-generation colleges are adding courses and academic programmes to prepare students for automation. Companies will need engineers and designers to help develop the next generation of vehicles containing all of the necessary technology and sensors. Many companies currently offer positions that require at minimum an undergraduate degree, and some a postgraduate degree or Ph.D. with additional experience in the field. In the future, recruitment in the autonomous area will become even more knowledge-based and degree-intensive, as companies aiming for higher levels of autonomy require a strong knowledge foundation.

Are there any mergers and acquisitions worldwide to combine knowledge and technology for one vision? The automotive industry's mergers and acquisitions activity has been flat or declining. Many investors are cautious about automotive investments due to the stagnant situation in the industry, which creates uncertainty about which technologies will survive. But the picture is gradually changing due to increasing development of electrified drivetrains and autonomous technology testing, which has opened new gateways on the technology side. Many small and mid-size suppliers face liquidity issues, which opens opportunities for buyouts. Moreover, automotive companies' valuations have been shrinking over the past two to three years, and acquisitions are an attractive way to increase the valuation and technology assets of companies. The accompanying table lists some of the significant acquisitions of the past six years.

How do laws and legal systems support the autonomous dream? In terms of law, many countries and regions have their own regulations for autonomous vehicle adoption. There is no global regulation for autonomous vehicles at present; the closest framework is the 1968 Vienna Convention on Road Traffic, which regulates and establishes traffic laws. Among the different rules across the globe, some are specific and some are strict, which slows global adoption of the technology. The strict rules reflect many barriers: road conditions, climatic conditions, emotions, safety issues, etc. Since this is a paradigm shift for the automotive industry, different countries are at various stages of the regulatory process because the technology is so new. But it is crucial to implement proper regulation for autonomous vehicles, since many loopholes remain around vehicle safety and security. In a few developing countries such as India, which is trying its best to keep up with modern technology, regulation on the testing and legalization of autonomous vehicles is still lacking. Even the introduction of Tesla with Autopilot remains questionable: would it run properly on Indian roads? The issues for self-driving cars in India are poor roads, missing supporting infrastructure, and reckless driving with no attention to signals, which causes accidents every minute across Indian


Updated regulation news in different countries

Australia: The guidelines for trials of automated vehicles were updated on 12/02/2020 by the National Transport Commission. The new policy is available on the NTC website.

China: The new national standards draft for automation classification was released on 03/09/2020 by the Ministry of Industry and Information Technology on its website.

Canada: The Federal Government of Canada has maintained consistency across all jurisdictions to promote autonomous vehicle development. In Ontario, to test Level 4 and Level 5 vehicles on the road, companies need to get a green signal in line with Ontario's A.V. Pilot Project regulations. In Quebec, for Level 3-5 pilot runs, companies must obtain consent under an Act to amend the Highway Safety Code.

Germany: On 07/28/2021, the German Government passed a new law on autonomous driving allowing Level 4 autonomous vehicles to drive in regular operation in public road traffic (in specified operating areas).

Hungary: On 09/18/2020, the Hungarian Government published its National A.I. Strategy, supporting the development environment for research and infrastructure required for driverless technologies. The development period is ten years, 2020-2030.

South Korea: South Korea announced the new Autonomous Vehicle Act on 05/01/2020, providing the infrastructure and support required for driverless vehicles. Seoul is to establish citywide autonomous driving infrastructure by 2026.

Turkey: A new Turkish regulation in line with E.U. rules will enter into force by 06/07/2022, pertaining to approval requirements for autonomous vehicles. Moreover, the Ministry of Transport and Infrastructure has set a new Action Plan for 2020-2023 to complete the building of Autonomous Driving Test and Certification Centers.

United States: On 01/11/2021, the U.S. Department of Transportation released a new Automated Vehicles Comprehensive Plan. The plan defines three main goals in line with USDOT's vision for Automated Driving Systems (ADS):
• Promote collaboration and transparency regarding the capabilities and limitations of ADS
• Modernize the regulatory environment to remove accidental and redundant barriers to innovative autonomous vehicles
• Prepare the transportation system by evaluating its safety, efficiency, and accessibility
Full details of the new plan are available on the U.S. Department of Transportation website.

cities. A new regulation for testing autonomous vehicles was proposed in the Motor Vehicles Act of 2017, but it is still pending. Many countries, however, are making greater efforts to put autonomous vehicles on their roads as soon as possible. The accompanying table gives an overview of a few countries that have updated their regulations to adopt autonomous vehicles in the past two years.

Did COVID-19 impact the autonomous vehicle vision? Global automobile sales volumes have been shrinking since 2017, and COVID-19 caused a considerable drop from 2019 to 2020. The market is expected to bounce back to 2019 levels by the end of 2023 or early 2024. COVID-19's impact on mobility will last for the long term, as it drives change in technology, the macroeconomic environment, regulatory trends, and consumer behavior. COVID-19 also hit the semiconductor industry hard, leading to a chip shortage that affected the entire automotive industry, including autonomous vehicles. Since an autonomous vehicle is embedded with many sensors and chips, the shortage slowed autonomous vehicle testing across the globe. Before the COVID-19 situation, though, many companies had managed to stock chips and sensors, which helped them deploy autonomous technology in various fields, including e-commerce, logistics, and healthcare. For instance, Beep and NAVYA partnered with the Jacksonville Transportation Authority (JTA) to deploy autonomous vehicles to transport medical supplies and COVID-19 tests at the Mayo Clinic in Florida. However, self-driving technology companies including Waymo, Cruise, and Uber partially suspended their autonomous car testing with backup drivers due to the spread of the coronavirus.

Final verdict: The pandemic's effects on mobility have opened various opportunities to innovate in sustainable urban mobility. It is vital for the private sector and governments worldwide to coordinate at the local and federal levels to launch a strategy that benefits everyone in the autonomous vehicle ecosystem, including customers. Research and development in driverless technologies is advancing rapidly across the globe, and various companies and countries are aggressively investing and releasing new regulations to make inroads into a driverless future as soon as possible.

AUTHOR
G NAVEEN KUMAR
Lead Analyst, Comscore, Inc.
Naveen Kumar has seven years of experience in management consulting, handling several large-scale strategic projects across the automotive, logistics and transportation, financial, and healthcare industries. He takes on freelance projects in market research, strategy consulting, and analytics to broaden his industry knowledge. He is currently working with Comscore, Inc.



Industry Insight

Changing Landscape of OTA Updates
CHANDRASEKHAR MORISETTI
Innobox Systems Pvt. Ltd

Over-The-Air (OTA) software or firmware update is no novice word in the industry; it has become an integral part of every embedded device. An update in software creates flexibility and configurability for a device. It can be an improvement in performance, the fixing of an issue, a security enhancement, or new feature deployment for that device.

In the early days, most devices and vehicles were predominantly mechanical, electrical and electronic components with very limited software. In case any updates were needed, they used to happen using floppy disks, with customers visiting service centres or someone from a service centre visiting the customer's place. In one scenario this is still happening: the Boeing 747 still receives critical updates via a floppy disk.

Over the last couple of decades there has been a drastic evolution of technology, with an increase in software-defined devices ranging from desktops, mobile phones, embedded devices, IoT devices and automotive vehicles to satellites and airplanes. As a contemporary development, connectivity technologies involving wireless standards like 802.11 a/b/g/n/ac, 2G, 3G, LTE, 4G and 5G also evolved. In addition, there is drastic growth in the semiconductor manufacturing industry, where processing chips like microprocessors, microcontrollers and GPUs are moving from 90 nm towards 5 nm, offering high speeds at low power, along with high-capacity memory chips at lesser cost. This led to semiconductor companies offering highly integrated, performance-optimized, safe and secured microcontroller-based System-on-Chip and System-in-Package solutions. All the above developments, along with complex AI algorithms and cloud solutions, catalysed the growth.

As software-defined devices increase, so does the demand for updates, since software is prone to security leaks, performance impacts and bugs. Updates during the early days of software-defined devices like desktops involved a managed process where the presence of users in front of the desktop, update server maintenance and support from IT personnel were needed, making the process cumbersome, causing delays in updates and leaving devices vulnerable to malicious attacks. This is especially true where updates are related to security.

With the combination of the latest connected technologies and cloud solutions, over-the-air updates solved the problem by letting updates happen silently with minimum user input. This led to the popularity of adopting OTA updates in most embedded devices across the industry, automotive being no exception.

How the update step is perceived varies between users and manufacturers or software solution providers. From the user's point of view, the benefits include:
● Easy to install: it is just a click away; users, by ensuring good connectivity, simply accept and install updates
● The user's device/vehicle gets better over time



● Improved device/vehicle security

But for the manufacturer, the steps are manifold.

From a maintenance point of view:
● Identify the existing bugs
● Perform validation so as to avoid regressions
● Verify new features
● Check that there is no performance impact after the update
● Perform long-hour stress testing
● Check negative scenarios
● Validate the changes and perform compatibility checks on multiple variants/models
● Ensure that the update server is secured

From a benefits point of view:
● Increase in product lifetime
● Reduced recall costs
● Resolve issues remotely
● Update multiple devices at the same time
● Saves billions of dollars
● New business models with customization
● Push value-added services

From a system point of view:
● Maintenance of two partition regions

in hardware: inactive and active regions
● Download the update into the inactive region
● Switch-over logic from inactive to active and vice versa
● Seamless restoration of the old settings

The automotive industry is at the forefront of adopting OTA. The presence of OTA support in automotive has already helped automakers save billions of dollars, making it an inevitable feature in all future cars. The types of updates happening in automotive vehicles include, but are not limited to: maps, connected parking features, connected charging features, personal assistant features, infotainment updates, and driver assistance features.

The first ever automotive OTA software update was delivered by Tesla to one of its car models in 2012, over a 3G or WiFi data connection. Since then, Tesla has delivered around 56 OTA updates to its models. A significant number of OTA update adoptions have been seen from 2020 onwards on various car models from different OEMs.
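The two-partition (A/B) scheme described above (download into the inactive region, verify, switch over, and keep the old image for rollback) can be sketched as follows; the class and method names are illustrative, not a specific vendor API:

```python
# Sketch of a dual-bank (A/B) OTA update: the new image lands in the
# inactive slot, the boot pointer switches only after verification, and
# the previous firmware stays available for rollback.
import hashlib

class DualBankDevice:
    def __init__(self):
        self.slots = {"A": b"firmware-v1", "B": b""}  # active / inactive banks
        self.active = "A"

    @property
    def inactive(self) -> str:
        return "B" if self.active == "A" else "A"

    def download(self, image: bytes) -> None:
        # Write the new image into the inactive region only.
        self.slots[self.inactive] = image

    def commit(self, expected_sha256: str) -> bool:
        # Switch over only if the downloaded image verifies; otherwise the
        # active slot is untouched, so a bad download cannot brick the device.
        image = self.slots[self.inactive]
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            return False
        self.active = self.inactive
        return True

    def rollback(self) -> None:
        # The previous firmware is still present in the other slot.
        self.active = self.inactive

dev = DualBankDevice()
new_image = b"firmware-v2"
dev.download(new_image)
ok = dev.commit(hashlib.sha256(new_image).hexdigest())
print(ok, dev.active)  # True B
```

The same shape underlies the "seamless restoration" bullet: because the old image and settings survive in the other bank, rollback is just a pointer flip.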

● In May 2020, BMW started pushing OTA updates to some of its models. As of June 2021, BMW was rolling out software updates to 1.3 million vehicles running BMW OS.
● In September 2021, Volkswagen announced OTA update support for its models as part of its ACCELERATE strategy, entering new business models by making updates customer-centric.
● In May 2021, General Motors (GM) announced its new "digital nerve system", which will enable over-the-air software updates on all GM vehicles.
● Starting March 2021, Ford began rolling out OTA software updates for selected models. It is aiming to have 33 million vehicles with OTA capability by 2028.

Update handling in automotive vehicles is complex. With more than 100 heterogeneous multi-core ECUs (Electronic Control Units) supporting various functionalities related to safety, comfort, connectivity and infotainment, there is an increasing amount of software in the vehicle, including firmware, configurations,




settings and maps. Flashing the updates onto the appropriate MCU, the availability of enough memory, and a direct connection to the internet play an important role. As the ECUs are connected directly to the internet, they are exposed to a wide range of attacks, among them: read updates, deny updates, deny functionality, and take control.

In automotive OTA architectures there are three main building blocks:
● Cloud server – an update generator
● Vehicle – an update installer
● Connectivity link – a communication protocol

There is no single OTA framework standard used across the industry. A framework which offers secured updates, proper campaign management and an easy rollback mechanism, so as to avoid bricking the device, is essential. Some of the frameworks used in the automotive industry are:
● Red Bend from Harman
● Movimento from Aptiv
● Bosch, with ESCRYPT and Microsoft, is building a software platform in vehicles that supports over-the-air updates
● OTAmatic from Airbiquity – a cloud-based OTA service that includes OTA orchestration, campaign management, software and data management, and Uptane-based security framework capabilities
● Edge Sync from Wind River – a highly modular OTA solution
● Sibros – a safe, Uptane-secured OTA platform ready for the AWS, GC and Azure clouds

Along with these proprietary offerings, alliances are being formed among OEM automakers to bring out a standardized OTA solution. Suzuki, Subaru, Daihatsu, Toyota and Mazda are working on standardizing how vehicles communicate with the cloud, to improve safety and make connected services better.

As OTA updates become a mandatory feature in most devices, an industry-wide standard platform definition that takes care of updates in a safer, secured, faster and more reliable way is much needed. Today, there are several proprietary OTA update technologies from various system integrators and OEMs. These different OTA update solutions lead to compatibility issues, increased development costs, longer time to market and risk of errors. A standardized platform with standardized message formats and responses will help chip suppliers, system integrators and OEMs simplify development and save time and costs. The saved time can be used to develop new types of update features.

In the case of automotive OTA updates, as we move to autonomous driving, the focus on customer engagement within the vehicle during travel is gaining importance. Customers may be interested in news updates, music of specific genres, movies, office work and much more. Pulling the appropriate updates using the car's connectivity solutions will solve the problem. In other scenarios, clients may be on a vacation trip to a new place; pulling updates about the places around the vacation spot and learning the history of each place will become a new phenomenon. This is more like the car serving as a local guide, with the updates acting as temporary content that is automatically deleted at the end of the trip.

Customer-centric updates can also serve shared mobility scenarios where a car has multiple drivers. All their personalized needs, be it infotainment genres, navigation maps, audio books, seat positioning etc., will be stored in the cloud. Based on the person driving the car, an update can be initiated remotely from a mobile device over a secured channel, making the car travel-ready. The storage of these customizations, along with new recommendations, will be maintained by OEMs, creating a revenue model for them. A driver moving from one city to a smart city receives updates related to connectivity and infrastructure needs, helping make the journey smoother and leveraging the benefits of smart-city infrastructure. The same can become a temporary subscription model, with OEMs getting paid and creating a new source of revenue. Going forward, it will be no surprise if you buy a basic car with all the required hardware and, as you keep paying your EMI, incremental feature updates are delivered over the air, creating new business potential and a win-win situation for both manufacturer and user.
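As a rough illustration of the secured-update idea these frameworks emphasize, the sketch below checks an image's integrity tag before installation. Real automotive frameworks, such as those built on Uptane, use signed metadata with asymmetric keys; a standard-library HMAC stands in for that here, and the key and image contents are made-up placeholders:

```python
# Illustrative pre-install integrity check for an OTA image. An HMAC tag
# over the image substitutes for the asymmetric signatures used by real
# frameworks; SHARED_KEY is a demo placeholder, not a real provisioning key.
import hashlib
import hmac

SHARED_KEY = b"demo-key-not-for-production"

def sign_image(image: bytes, key: bytes = SHARED_KEY) -> str:
    """Produce the integrity tag the update server would ship with the image."""
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def verify_before_install(image: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Install only if the tag matches; compare_digest avoids timing leaks."""
    expected = hmac.new(key, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

image = b"ecu-firmware-build-77"
tag = sign_image(image)
print(verify_before_install(image, tag))         # True: intact image
print(verify_before_install(image + b"x", tag))  # False: tampered image rejected
```

Rejecting a tampered or replayed image at this step is what blunts the "read/alter update" class of attacks mentioned earlier; campaign management and rollback then handle the operational side.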


AUTHOR
CHANDRASEKHAR MORISETTI
Director, Automotive and IoT
Innobox Systems Pvt. Ltd
A technical enthusiast with over 20 years of experience, Chandra is responsible for Innobox's automotive and IoT division. His acumen in the technical know-how of multiple verticals has helped win customer confidence and engagements. He is a detail-oriented person, which makes him unique and has helped projects not miss any requirement.






Industry Insight

Data-Led Personalization in the Era of Connected Mobility VIJAY SURYANARAYANAN & LAKSH PARTHASARATHY

Tata Consultancy Services

You watch your favorite movie on Netflix and soon the streaming platform starts recommending movies you may like. Netflix can understand and recommend movies best suited to your preferences. Similarly, Amazon shows you products that are relevant to you and provides recommendations based on your shopping history and similar shopping behavior. Spotify does the same with the 'Discover Weekly' playlist, which recommends songs you have not listened to before but may like. Have you ever wondered what it would be like if your car could do the same? For instance, what if your car let you do a voice search for a restaurant during a drive, or automatically asked if you want to dial into a meeting when you are stuck in traffic? Or provided a contextual recommendation for an accessory based on your lifestyle or driving habits? With the increasing adoption of connected vehicle technologies, vehicle owners are expecting personalized and differentiated connected vehicle services. This expectation has started influencing the purchase decisions of

buyers. A car has expanded beyond being a mere mode of transportation into a digital experience center, deeply integrated with the lifestyle of the customer. It is an extension of the connected home and office, and acts as an important digital touchpoint in day-to-day journeys. "Any customer can have a car painted any colour that he wants, so long as it is black," Henry Ford said[1] about the first mass-produced vehicle[2], the Model T, in 1909. Since then, the automotive industry has evolved to provide higher levels of personalization to customers, giving them the choice of variants and colors and letting them choose their favorite accessories. The industry is now heading towards an era of hyper-personalization, which permeates all stages of the ownership lifecycle. Connected vehicle services and the insights gathered from the connected ecosystem play a key role in enabling this. Global vehicle manufacturers have been making huge investments in connected vehicle technologies. However, many have struggled

Fig 1: Capturing value from the connected vehicle ecosystems


to differentiate their connected vehicle services and generate interest among customers[3]. One reason for this could be that customers get similar services via smartphone applications, often free of cost, and are not convinced of the additional value in the connected services offered by their vehicle manufacturer[3]. While automotive OEMs are providing a multitude of connected vehicle capabilities, they are yet to get a pulse on what customers need for highly personalized and exciting journeys. So, what is the secret sauce that enables these hyper-personalized journeys? We believe that a software-driven vehicle architecture, digitization of customer touchpoints and effective leveraging of data from the connected vehicle ecosystem are the key enablers for driving hyper-personalization. Let's take a closer look at how this can be achieved.

Personalization across the vehicle ownership journey
In order to drive customer loyalty, it is important to personalize the customer experience across all stages of the vehicle ownership journey. Examples include targeted marketing campaigns and contextual recommendations during the vehicle purchase process, personalizing the in-vehicle experience, contextual recommendations for restaurants or fuel stations, proactive service recommendations, and customized end-of-lease offers.


Fig 2: Vehicle as a Data Model – TCS Approach to Data Lifecycle Management

Some car manufacturers provide a 'driver profile' feature to personalize in-vehicle settings. Tesla's driver profile feature[4] allows drivers to store their favorite settings for steering wheel and seat positions, rear view mirror, driving style, lights and locks, formats and units, and maps, to match each driver's preference. With one push of a button, the driver can enable the best-suited driving profile, without having to select each setting manually. Another interesting example is that of BMW UK, which displays personalized warranty messages[5] on electronic signboards at stop signs. The messages are personalized by showing an image of the model of BMW driven by the user, determined using image recognition techniques. Such personalized advertisements catch the attention of drivers. Porsche's car configurator[6] is another good example of personalization in the vehicle purchase process. The configurator provides personalized recommendations on options best-suited to each customer and makes the vehicle configuration process seamless. This recommendation engine is built on 270 machine-learning models trained for specific markets, and works with about 90% accuracy. As more customer touchpoints are digitized and more vehicle features are enabled

through software, we can be sure of a huge leap in the number of personalized offerings across the automotive value chain.
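As a toy illustration of the driver-profile idea described above (all field names are invented, not Tesla's actual schema), the "one push of a button" behavior is essentially a bulk settings restore:

```python
# Illustrative 'driver profile' store: a named profile bundles several
# in-vehicle settings and is applied in a single call.
profiles = {
    "alice": {"seat_position": 4, "mirror_tilt": -2, "units": "metric"},
    "bob": {"seat_position": 7, "mirror_tilt": 1, "units": "imperial"},
}

def apply_profile(name, car_state):
    # One button press restores every stored setting at once,
    # overriding whatever the previous driver left behind.
    car_state.update(profiles[name])
    return car_state

state = apply_profile("alice", {"units": "imperial", "radio": "fm"})
```

Settings not covered by the profile (here, the radio station) are simply left untouched.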

Software-Driven Architecture as an Enabler for Personalization
Automotive product development is being redefined through software-defined architecture, and this is helping drive higher levels of personalization. Automotive OEMs are working to improve their software competency and transform towards a software-driven culture. An example of this is Cariad[7], an automotive software company that bundles together the Volkswagen Group's software competencies and further expands them, building upon a heritage of bringing automotive innovation to everyone. Vehicle features are now being offered on demand, supported by software updates. Tesla has been a leader in adopting this software-driven approach, enabling feature upgrades and fixes to be done remotely via software updates. Now, many automotive OEMs are looking to offer vehicle features on demand; for instance, BMW is offering heated seats[8] on a subscription basis. Developing the infrastructure to deliver vehicle features via software updates provides a great opportunity to respond quickly to customer needs and offer capabilities suited to their specific journeys. For example, a vehicle owner may want to opt for highway driving assistance capabilities for a long drive planned during a long weekend.

Leveraging the Data from the Connected Vehicle Ecosystem
There is tremendous value in the data that is now available through a connected vehicle. As automotive companies improve vehicle connectivity and introduce more digital services, activities in the ecosystem will produce a wealth of information. This information can be leveraged to provide a personalized experience to customers and to improve products and services. It helps OEMs shift from the mindset of a "one-time sale of a vehicle" to owning a customer for life and owning the vehicle lifecycle. The data from the connected ecosystem could provide deep insights that enable a personalized experience through better prediction of customer needs. This experience needs to be provided across all stages of the vehicle ownership journey, including the vehicle purchase process, driving experience, service experience and transition to the next vehicle. For example, based on analytical models, we can predict when the customer is likely to purchase the next vehicle and recommend the best-suited vehicle based on the customer's usage journeys. Similarly, a deep



understanding of the customer’s personal traits and preferences helps in recommending the right subscription plans, accessories and offers that excite the customer.
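The features-on-demand model mentioned earlier (e.g., BMW's subscription-based heated seats) pairs naturally with such subscription recommendations. As a hedged toy sketch, not any OEM's actual system, the entitlement check behind a feature-on-demand offering could look like:

```python
# Toy 'features on demand' model: the hardware ships in every car,
# and a software entitlement check unlocks a feature only while the
# corresponding subscription is active.
class FeatureManager:
    def __init__(self):
        self.entitlements = set()

    def subscribe(self, feature):
        self.entitlements.add(feature)

    def unsubscribe(self, feature):
        self.entitlements.discard(feature)

    def is_enabled(self, feature):
        return feature in self.entitlements

fm = FeatureManager()
before = fm.is_enabled("heated_seats")   # hardware present, not yet unlocked
fm.subscribe("heated_seats")
after = fm.is_enabled("heated_seats")
```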

Vehicle-as-a-Data Model
To leverage the full potential of data for personalization, product quality improvements and monetization, it is very important to have a robust infrastructure for data lifecycle management covering aspects such as data collection, storage, cleansing, processing and retention. A strong framework for data governance, security, privacy and consent management is essential to prevent unauthorized access to data and to comply with legislative requirements. The data infrastructure needs to include and seamlessly integrate vehicle data, customer data, and data from enterprise systems and external sources. By connecting these data elements, we can realize far more value than was possible through a siloed data analytics approach. We envision the concept of a 'Vehicle-as-a-Data Model' that connects typically siloed user, vehicle and enterprise data, enabling new insights to enhance customer experience and optimize value. This model provides a holistic, 360-degree view of the customer, enables smarter decision-making and lays the foundation for providing a personalized experience.
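The idea of joining siloed data sources can be sketched in a few lines of plain Python (all keys, field names and thresholds are invented for illustration):

```python
# Sketch of the 'Vehicle-as-a-Data Model' idea: join normally siloed
# customer, vehicle and enterprise (service) records on shared keys
# into one 360-degree view of the customer.
customers = {1: {"segment": "urban commuter"}}
vehicles = {"V100": {"customer_id": 1, "odometer_km": 18500}}
service_history = {"V100": {"last_service_km": 15000}}

def vehicle_360(vin):
    v = vehicles[vin]
    row = {"vin": vin, **v, **customers[v["customer_id"]], **service_history[vin]}
    # A simple cross-silo insight: flag vehicles overdue for service.
    row["service_due"] = row["odometer_km"] - row["last_service_km"] > 3000
    return row

view = vehicle_360("V100")
```

The `service_due` flag is the kind of insight that no single silo could produce on its own: it needs the vehicle's odometer and the enterprise service record together.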

When connected with other data elements such as demographics, vehicle usage history and service center preference, insights around the customers' preferences, interests and lifestyle attitudes can be gleaned and used to tailor targeted campaigns and provide differentiated experiences. For example, the customer's household income, spend on accessories, repair orders, and lifestyle information gathered from analytical insights could indicate whether the customer is a good candidate for upselling/cross-selling additional services. Similarly, the vehicle's diagnostics data could be linked with navigation data to gain insights into the customer's aftersales service behavior for specific types of services. Numerous such possibilities open up when we cross-leverage connected vehicle data with data from other enterprise systems. However, while implementing these initiatives, proper care must be taken to ensure that only consented data is collected, stored and distributed. There are data protection regulations such as GDPR in Europe, CCPA and CPRA in the US, PDP in India and other country-specific regulations, which must be strictly adhered to. These regulations mandate that drivers be provided information on what data is being collected, the purpose of collection and the use of the data. Vehicle manufacturers will also have to maintain a system that allows customers to see what data

is being collected about them overall, and also to exercise their data subject rights, such as deletion (the right to be forgotten) of data.
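A minimal, illustrative model of consent-gated collection and deletion (not a production design, and not tied to any specific regulation's full requirements) might look like:

```python
# Toy consent model: a signal is stored only if the driver has opted
# in to that specific purpose, and a deletion request purges all of
# the driver's records ('right to be forgotten').
class ConsentStore:
    def __init__(self):
        self.consents = {}       # (driver_id, purpose) -> bool
        self.records = []

    def set_consent(self, driver, purpose, granted):
        self.consents[(driver, purpose)] = granted

    def collect(self, driver, purpose, value):
        if not self.consents.get((driver, purpose), False):
            return False         # no consent for this purpose: store nothing
        self.records.append({"driver": driver, "purpose": purpose, "value": value})
        return True

    def forget(self, driver):
        # Deletion request: drop every record belonging to this driver.
        self.records = [r for r in self.records if r["driver"] != driver]

store = ConsentStore()
store.set_consent("d1", "navigation", True)
stored = store.collect("d1", "navigation", "trip-log")
denied = store.collect("d1", "marketing", "profile")   # never opted in
store.forget("d1")
```

Note that consent is per purpose, not per driver: opting in to navigation data does not imply opting in to marketing use.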

TCS Solutions
TCS has enabled global automotive OEMs to develop vehicle onboard and offboard capabilities. The TCS Autoscape™ suite of solutions brings together these experiences and cross-industry expertise relevant to the connected vehicle ecosystem to offer an end-to-end connected data management and customer experience solution. This includes Connected Vehicle Experience, a digital experience platform for enabling a personalized and differentiated customer vehicle experience, and Connected Insights, an analytics solution that enables automotive OEMs and other ecosystem partners to derive actionable insights from the data. The Connected Insights solution provides deep insights into customer personas and helps automotive companies realize the vision of hyper-personalization. It provides insights into customers' journey patterns, persona traits, driving behavior, social media behavior, preferred points of interest, aftersales service behavior etc., and helps automotive OEMs personalize recommendations, increase vehicle and accessory sales, and improve service retention rate and revenue. Our Data Monetization Toolkit, consisting of the subscription platform, consent management solution and data marketplace

Fig 3: Understanding Customer Psychographics and other key enablers for personalization



Fig 4: TCS’s Connected Vehicle Solutions

platform, enables OEMs and ecosystem players to democratize and monetize data, enable new business models through subscriptions and address all aspects of data privacy and consent management.

Future of Personalization
We believe that in the near future, vehicles will evolve into a digital experience platform through which insurers, retailers and other ecosystem partners can create new and unique personalized offerings for their customers. Software will play a very important role in the vehicle architecture, and automotive companies will look to strengthen their software capabilities to remain competitive in the market. There will be an increase in vehicle features offered on demand via software updates. This will drive the evolution of an ecosystem play in which OEMs and ecosystem players collaborate to innovate and provide a seamless customer experience. Customer touchpoints across the ownership lifecycle will be digitized significantly, and interactions with the dealer and the OEM will become smarter and more efficient by leveraging artificial intelligence and other digital tools. As the data estate grows exponentially, there will be significant investments in analytics and insights capabilities across the ecosystem. This also provides ample monetization opportunities for all stakeholders. We are looking at a future in which hyper-personalization will become a standard expectation of the mobility customer. Leading companies can gain a competitive advantage through hyper-personalization by

leveraging the power of data and analytics. Tata Consultancy Services (TCS), a leading global IT services, consulting and business solutions organization, which has been named a Leader in the Everest Group PEAK Matrix® for Autonomous, Connected, Electric and Shared (ACES) Mobility Automotive Engineering Services, has been working with automotive clients on this transformation journey, helping them realize the vision of hyper-personalization.

References
1. https://en.wikiquote.org/wiki/Henry_Ford
2. https://en.wikipedia.org/wiki/Ford_Model_T
3. https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/unlocking-the-full-life-cycle-value-from-connected-car-data
4. https://www.tesla.com/sites/default/files/blog_attachments/software_update_1.13.16_0.pdf
5. https://www.cnet.com/roadshow/news/bmw-uk-public-warranty-personalized-marketing/
6. https://newsroom.porsche.com/en/2021/products/porsche-car-configurator-advisory-function-recommendation-engine-artificial-intelligence-23564.html
7. https://cariad.technology/
8. https://www.forbes.com/sites/alistaircharlton/2020/07/02/bmw-wants-to-charge-you-a-subscription-for-your-heated-seats/?sh=33f03b893c64

AUTHORS VIJAY SURYANARAYANAN

Industry Leader, Connected Vehicles Tata Consultancy Services Vijay Suryanarayanan is an Industry Leader, who heads the connected vehicle solution portfolio in the C.A.S.E Industry segment in TCS. He has about 18 years of Consulting, Product Management and leadership experience, with a focus on the Automotive Industry. His focus area is to drive growth and transformation in the connected vehicle area, through transformative industry solutions and thought leadership.

LAKSH PARTHASARATHY

Global Business Head, C.A.S.E Industry Segment Tata Consultancy Services Laksh Parthasarathy is the Global Business Head for the C.A.S.E Industry segment in TCS. With over 25 years of experience as a thought leader in the automotive industry, he primarily focuses on developing business models and solutions to address industry challenges and enable customers to accelerate their C.A.S.E journey.



Industry Insight

LCD Introduction & Requirements TANG HUI

Sharp Singapore Electronics Corporation Pte Ltd

Overview
As our everyday gadgets become 'smart', the need to 'communicate' with them multiplies. The LCD (Liquid Crystal Display) greatly helps this communication by 'telling' the gadget's story, and a touch panel further enhances these conversations! While LCD displays enable effortless communication for end users, designing LCD displays into gadgets is by no means an easy feat. As more and more gadgets need an LCD built in, system architects, designers and engineers are tasked with selecting and designing the appropriate LCD for their application. This task can be daunting given the wide array of LCD features available and the several system constraints to consider. This article aims to help system architects, designers and engineers decipher LCD vocabulary and choose the right display for their application.

2. LCD Module Structure
From a mechanical point of view, an LCD module (LCM) can be understood as the stack-up of an LCD panel and a backlight unit.

2.1 LCD panel
Overall, an LCD panel can be understood as a matrix of windows, where each window consists of 3 small windows with red, green and blue color. Each window is called a Pixel, and each small window a Sub-Pixel (R+G+B = 1 Pixel). The array of pixels, such as 1366x768, is called the Resolution.

Mechanically, the LCD panel consists of 2 layers of glass, TFT (Thin Film Transistor) glass and Color Filter glass, with liquid crystal (LC) sandwiched in between. Polarizers are laminated on the top and bottom of the LCD glass.

- TFT glass is coated with a matrix of transistors that control each sub-pixel; this is where the name TFT LCD comes from. In this sense, a TFT LCD uses active control, while a monochrome LCD uses passive control.
- Color filter glass is coated with a matrix of phosphor with R, G and B color for each sub-pixel. R, G and B colors are generated when the backlight passes through each sub-pixel, and various colors can be displayed by combining different R, G and B levels.
- Physically, liquid crystal (LC) is a mixture of crystal molecules and chemical liquid, distributed uniformly in the cell of each sub-pixel. The molecules in each sub-pixel turn to a certain angle according to the voltage provided by the TFT.
- TFT glass and color filter glass are bonded together as the LCD glass, and the polarizers laminated on its top and bottom have perpendicular polarity.

Looking at the LCD glass again as a matrix of windows, with the polarizers as shutters: theoretically, without liquid crystal no light could pass from the back to the front side, since the two shutters are installed at perpendicular angles. This is where the characteristic functionality of liquid crystal comes in. The above describes the simple model of a TN (Twisted Nematic) type LCD in free-running mode.

Backlight is polarized by the bottom


polarizer with 0° polarity. When it passes through the liquid crystal, the polarity is changed by the molecules little by little and finally becomes 90°. As a result, the light is allowed to pass through the top polarizer with 90° polarity.

2.2 Backlight
LED started to replace CCFL (Cold Cathode Fluorescent Lamp) as the backlight source for LCDs some years back. Thanks to optics technology improvements in light guides and optical films, the LED bar can be placed at the side of the backlight unit while still achieving good uniformity. As a result, backlight thickness has been reduced to achieve slimmer designs.

In the illustration above, the light guide distributes the light uniformly over the whole display area. Above it, the prism film improves the light transmission efficiency, and the diffuser films improve uniformity and viewing angle. Furthermore, a reflection film is used for light recycling, and DBEF (Dual Brightness Enhancement Film) can be used to enhance the brightness to improve


the efficiency and ultimately achieve power saving. In most applications, backlight dimming is applied for power saving. For TV developers, backlight dimming can also help improve the dynamic contrast ratio.

3. LCD specification
In this chapter, we list most of the items of a general LCD specification that matter for LCD requirements and sourcing.

3.1 Screen size
Screen size is measured diagonally across the Active Area of the LCD, normally in inches.

3.2 LCD type and Display Mode
According to the liquid crystal arrangement, there are mainly 3 types of LCD, namely TN (Twisted Nematic), VA (Vertical Alignment) and IPS (In-Plane Switching). In free-running mode (with power supply but no video signal input to the LCD), a TN type LCD will display full white, while VA and IPS types will display full black. So in an LCD spec, you will see "normally white" or "normally black" on the summary page.

3.3 Resolution, Aspect Ratio and Orientation
Resolution refers to the number of pixels of the LCD, normally described in matrix format, such as 640x480 (VGA), 1024x768 (XGA), 1366x768 (WXGA), 1280x720 (HD) and 1920x1080 (FHD). The columns of the matrix are called Sources, which input the video signal for each sub-pixel, and the rows are called Gates, which control the On/Off state of each pixel line. The ICs controlling the columns are called Source Driver ICs and those controlling the rows are called Gate ICs. A single IC may control both source and gate in some designs, such as those with lower resolution. In most LCD designs, the RGB sub-pixels are aligned in vertical stripes, so the number of source channels is 3 times the number of pixels in the horizontal direction. Taking 1920x1080 (FHD) as an example, and assuming each source driver IC can support 960 source channels, a total of 6 source driver ICs will be required in the panel. Each pixel is normally designed as a square in a TFT LCD (some monochrome LCDs may be exceptions), so the Aspect Ratio follows directly from the resolution: a VGA panel has a 4:3 aspect ratio and an FHD panel 16:9. By default, a resolution such as 640x480 implies landscape Orientation; some vendors may indicate 640(RGB)x480 in the spec for clarity. Conversely, a resolution of 480x640 means the LCD is designed with portrait orientation, giving an aspect ratio of 3:4.

3.4 Brightness
Brightness refers to the luminance when the LCD displays full white. For reference, the LCD of a normal notebook is around 250 nits, and TV panels are around 500 nits. Products used outdoors may require LCD brightness over 1000 nits.

3.5 Color
Color is a key and interesting factor of an LCD. Color depth refers to the bits of color; it does NOT indicate color saturation directly, and can be understood as the 'resolution' of color. For example, at 24-bit color depth (8 bits per color), each sub-pixel can display 256 levels of a single color, so the total number of colors the LCD can display is 16.7M. Color gamut defines the color range that the LCD can produce. The figure below shows the gamuts of NTSC, sRGB, Adobe RGB and DCI-P3 in the CIE 1931 chromaticity diagram. Referring to the triangle enclosed by each RGB set, we can determine the percentage it covers of the whole chromaticity diagram. One or two of the



color spaces may be chosen to define the color gamut of each LCD design. For reference, the color gamut of a normal notebook panel is 45% of NTSC, and around 72% of NTSC for TV panels. Color temperature (T) defines how the white (or grey) that the LCD displays appears. A higher color temperature indicates a cooler (bluish) tone and a lower color temperature a warmer (reddish/yellowish) tone. Together with Δuv, (T, Δuv) is another format of the color coordinates (x, y), but it is more straightforward for indicating the tone of white. For reference, in normal mode with a full white pattern, the color temperature is normally 6500K for a notebook panel and about 8500K for a TV panel. In cool mode, it can be adjusted to 8500K for notebooks and 10,000K (or even higher in some regions) for TVs.

3.6 Interface
Generally, LCDs with resolution lower than XGA are designed with an RGB interface. Higher-resolution LCDs are designed with interfaces such as LVDS, MIPI or eDP.

3.7 Backlight Lifetime
Backlight lifetime is defined as the time for the LEDs' brightness to drop to 50% of its initial value in normal operation. For reference, the typical lifetime for a TV panel is 50K hours.

3.8 Others
Some spec items listed here are also very important, but may be difficult to change or customize once the panel design (TFT + color filter) is fixed. Response time indicates how fast the LCD can switch from displaying a black to a white pattern, or from one Grey

level to full white and back to the same Grey level. Customers can refer to the respective LCD spec for the detailed measurement method. White Uniformity is the ratio of maximum to minimum luminance among certain measurement points, indicating how uniform the display is. Contrast Ratio is defined as the luminance ratio of full white to full black, normally measured at the centre point. Viewing angle is the maximum angle at the left, right, top and bottom of the display at which the contrast ratio is greater than 10:1. Operating Temperature and Storage Temperature: LCDs are normally designed as commercial grade, industrial grade or automotive grade, with different operating and storage temperature ranges for different applications. Customers need to specify the application environment of the product, so that the LCD vendor can propose a suitable solution accordingly.
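The resolution and color-depth arithmetic from sections 3.3 and 3.5 can be double-checked in a few lines:

```python
# Checking the worked numbers in the text: vertical RGB stripes give
# 3 source channels per horizontal pixel, and 8 bits per sub-pixel
# give 256^3 displayable colors.
import math

def source_driver_ics(h_pixels, channels_per_ic=960):
    channels = 3 * h_pixels            # one channel per sub-pixel column
    return math.ceil(channels / channels_per_ic)

ics_fhd = source_driver_ics(1920)      # FHD example from section 3.3
colors_24bit = 256 ** 3                # 16,777,216, i.e. ~16.7M
```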

4. Other technology terms
4.1 TN, VA and IPS
Based on the liquid crystal alignment method, there are mainly 3 types of LCD: TN, VA and IPS. TN stands for Twisted Nematic. It is the low-cost version of LCD, with a limited viewing angle, and is normally white in display mode. VA stands for Vertical Alignment. A VA LCD has a full viewing angle and high contrast ratio. However, as the liquid crystal is aligned vertically to the LCD screen, it is sensitive to compression force and therefore not suitable for optical bonding (full lamination) with a touch panel or cover glass. IPS stands for In-Plane Switching. An IPS LCD has a full viewing angle and excellent color reproduction even from a side view. IPS LCD is called a 'hard screen' as it is less sensitive to compression force with

AUTHOR TANG HUI Technical Sales Manager Sharp Singapore Electronics Corp.


special liquid crystal alignment. Therefore, optical bonding with a touch panel or cover glass can be applied to an IPS LCD.

4.2 Transmissive, Transflective, Reflective
Most LCDs are designed as the Transmissive type. In this design, the LED backlight passes through the polarizers, TFT glass, liquid crystal and color filter, and the overall transmission rate is only around 3-5%. In cases requiring high brightness, such as outdoor products, the LCD has to be designed with more LEDs and consumes very high power. The Transflective type is still designed with an LED backlight, but the LCD panel itself also allows ambient light, such as sunlight, to pass through and be reflected at a back plane of the LCD, contributing to the backlight. With normal backlight power and brightness, such a display can achieve excellent sunlight readability, although contrast ratio and color gamut can be drawbacks. There is no backlight used in the Reflective type; the LCD purely utilizes ambient light to display. It offers excellent sunlight readability and power savings.

4.3 RGBY and RGBW
Normally the LCD pixel is designed with RGB sub-pixels. As introduced in the backlight chapter, TV developers dim the backlight when the video has more lower-brightness content, such as night scenes, in order to achieve higher dynamic contrast. However, with an RGB pixel design in the panel, backlight dimming may not be triggered even by a single full-white pixel dot. Hence, some special designs add one white or yellow sub-pixel to form an RGBW or RGBY pixel structure. Even with a full-white pattern, a certain percentage of backlight dimming is then allowed, as the white/yellow sub-pixel is turned on to compensate for the brightness loss. The trade-off of the RGBW/RGBY design is in contrast ratio and color gamut, and additionally in color tone for the RGBY design.


Backlight dimming applied here is mainly for power saving, then.

4.4 Resistive Touch and Capacitive Touch, On-cell Touch and In-cell Touch
Resistive touch can be simply understood as a voltage divider. Take 4-wire resistive touch as an example: two ITO layers (normally an ITO film on top and ITO glass at the bottom) are bonded face to face with an air gap. When there is a touch, one layer detects the voltage on the x-axis and the other layer detects the voltage on the y-axis; together, the two voltages locate the actual touch position. Capacitive touch has ITO patterns for the x-axis and y-axis, either on a single layer or on different layers. The voltage change caused by the capacitance change on both axes locates the touch position. With LCD technology improvements, LCD manufacturers started to integrate the capacitive touch layer into the LCD panel about 10 years back, although there has been some argument over the definitions of On-cell and In-cell. Generally, if the ITO layer for the touch panel is on the color filter glass, it is regarded as On-cell touch; if the ITO layer is on the TFT glass, it is defined as In-cell touch. It is quite straightforward that In-cell touch requires higher technology and process capability, especially for noise reduction.

4.5 Passive 3D and Active 3D (with Goggles)
The basic idea of 3D is to deliver a different 2D image to the left and right eye, to be processed by the brain. A passive 3D panel splits one frame image into odd lines and even lines. With passive goggles, the left eye sees the odd lines and the right eye sees the even lines, so the drawback of passive 3D is the loss of (half the) spatial resolution in each frame. Active 3D displays 3D video as odd frames and even frames. Active goggles synchronize with the 3D panel/TV and switch alternately, so that the left eye receives the odd frames and the right eye the even frames. Each eye therefore receives half of the video frames, so the drawback of active 3D is the temporal resolution loss. Viewers may also perceive flickering as the goggles switch on and off.
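Returning to the 4-wire resistive touch principle in section 4.4, the voltage-divider mapping can be sketched as follows (all voltages and screen dimensions are illustrative):

```python
# Toy model of 4-wire resistive touch sensing: each axis acts as a
# voltage divider, so the measured voltage maps linearly to a
# position along that axis.
def touch_position(vx, vy, vref=3.3, width=320, height=240):
    x = round(vx / vref * width)
    y = round(vy / vref * height)
    return x, y

pos = touch_position(1.65, 0.825)   # half of Vref in x, a quarter in y
```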



Industry Insight

Neuromorphic/Event-Based Vision Systems: A Key for Autonomous Future SIDDHARTH JAISWAL & FAIZAL SHAIKH

Netscribes

We are getting closer to a future with driverless cars, and the promise of autonomous vehicles is more exciting than ever before. Robotaxis are expected to be the first fully autonomous vehicles to be commercially available, by the end of 2025, with autonomous personal cars following suit by 2030. In the years to come, the fate of driverless technology will depend on how well it can replace human vision and make decisions just as humans do. As of today, vehicles are powered by SAE level-2 and level-3 technology, with OEMs aiming to leapfrog to level-5 autonomy. For the next five years, one of the major focuses will be on perfecting computer vision technology to replace human vision. Computer vision has emerged as an integral part of autonomous vehicle technology, unleashing the transformative benefits of the humble, traditional camera system. Today, computer vision systems are based on frame-based acquisition, capturing the motion of an object using several still frames per second

(FPS). In such systems, vision sensors collect a large chunk of data corresponding to all the frames captured by the camera. However, a significant portion of the data collected in the frame-based approach is unnecessary and only adds to the overall size and intensity of data transmission and computation. This, in turn, exerts an immense burden on the vehicle architecture. Although these vision systems work well for various image-processing use cases, they fall short of specific mission-critical requirements of autonomous vehicles. Neuromorphic or event-based vision systems concentrate on emulating the characteristics of the human eye, which is essential for an autonomous vehicle. These systems operate on changes in the brightness level of individual pixels, unlike frame-based architectures where all the pixels in the acquired frame are recorded at the same time. Event-based vision sensors produce a stream of events that encode the time, location, and asynchronous changes in the brightness levels of individual pixels to provide real-time output. These systems,

Fig-1: Evolution of Automotive Industry by Vehicle Technology


therefore, offer significant potential for driving the vision systems in autonomous vehicles, as they enable low latency, HDR object detection, and low memory usage. Event-based vision technologies have been in development for over 15 years now. They evolved through most of the 2000s and have begun to gain momentum during the current decade. Applications such as autonomous driving are now looking for a more human-like technology that can effectively replace humans. Hence, event-based vision sensors will be a natural choice for AV OEMs going forward. - Faizal Shaikh
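The per-pixel principle described above, emitting an event only when the change in brightness crosses a threshold, can be sketched as follows. The log-intensity threshold and the (t, x, y, polarity) tuple layout are illustrative assumptions, not any specific sensor's design:

```python
import math

def generate_events(prev, curr, t, threshold=0.2):
    """Emit DVS-style events (t, x, y, polarity) for pixels whose
    log-intensity changed by more than `threshold` since the last
    reference frame. Unchanged pixels produce no data at all, which
    is the source of the bandwidth saving over frame-based capture."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (ip, ic) in enumerate(zip(row_p, row_c)):
            # Work in log-intensity, as biological photoreceptors
            # (and DVS pixels) respond to relative brightness change.
            delta = math.log(ic + 1e-6) - math.log(ip + 1e-6)
            if abs(delta) >= threshold:
                events.append((t, x, y, 1 if delta > 0 else -1))
    return events
```

For a static scene the function returns an empty list, while a conventional frame-based pipeline would still transmit every pixel of every frame.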

Key Development Areas in Event-based Vision Systems: Accuracy in Optical Flow Calculation: Current optical flow algorithms, used with conventional frame-based cameras, are complex and computationally intensive. Event-based systems will follow a new optical flow technique driven by brightness change. Companies such as Samsung and CelePixel are actively developing technologies that follow an on-chip optical flow computation method. This technique uses a built-in processor circuit that measures changes in intensity at the individual pixel level. Power Consumption: Depth-mapping computation in event-based cameras is power-intensive, as all the pixels need to be illuminated to calculate the time of flight. Furthermore, motion recognition of fast-moving objects consumes a higher amount of power, and low-power, event-based cameras are generally affected by noise that leads to inaccurate calculations. Thus, Samsung and Qualcomm have been exploring ways to optimize power consumption in event-based sensors using single-photon avalanche diode (SPAD)-based pixels. Calibration Issues: Event-based cameras have a microsecond-level temporal resolution for detecting millions of events


Fig-2: Number of Autonomous Vehicles by SAE Definition, (2019 – 2034) ('000 Units)

per second. Inaccuracies and interference in the calibration of event-based cameras need to be addressed for motion sensing and tracking applications. Tsinghua University's invention focuses on a spatial calibration method for a hybrid camera system combining event-based cameras, beam splitters, and traditional cameras. Color Detection: Currently, event-based cameras only detect changes in light intensity and cannot distinguish colors of the same intensity. Companies are developing color filters to make event-based cameras more color-sensitive. Samsung has been developing an image sensor chip with a color sensor pixel group and a DVS pixel group enabled by a control circuit. The configuration of the chip recognizes 2D colored images and reduces power consumption using a power-monitoring module.

Frontrunners in the Automotive Event-based Vision Systems: Mitsubishi (3D Pose Estimation): The OEM has been actively working on 3D pose estimation which is an essential step in localization applications in autonomous vehicles. This system is aimed at developing an edge-based 3D model by matching 2D lines obtained from the scene data with 3D lines of the 3D scene model. In other terms, the focus of this technology is to extract the three important motion parameters (X, Z, and yaw) that help in estimating the DVS trajectory in a dynamic environment. Furthermore, the significance of localization is maintained in this system by using IMUs for obtaining the height, roll, and pitch

without compromising on X, Z, and yaw. Luminar Technologies (LiDAR and 3D Point Cloud): The company has developed a method for correlating DVS pixels of the events with the 3D point cloud of a LiDAR sensor. In this method, the DVS is used to select the region of interest (ROI) of the scene and send it directly to the LiDAR beam-scanning module, thereby eliminating complexities related to the object detection and classification procedures involved in image processing. Consequently, the LiDAR sensor scans only the portions of the scene that are changing in real time, reducing overall latency. Volkswagen (Object Detection and Classification): The OEM has developed a neuromorphic vision system for image generation and processing to support advanced driver assistance applications and automated driving functionality. The bio-inspired system includes multiple photoreceptors for generating spike data.

This data indicates whether an intensity value measured by that photoreceptor exceeds a threshold. The digital NM engine includes processors running software configured to generate digital neuromorphic output data, which is velocity-vector data used to determine spatiotemporal patterns for 3D analysis for object detection, classification, and tracking. Magna (Collision Avoidance): The tier-1 supplier has developed a vision system with a sensing array comprising event-based gray-level transition-sensitive pixels to capture images of the exterior of a vehicle. The camera combines an infrared, near-infrared, and/or visible-wavelength-sensitive (IV-pixel) imager with the gray-level transition-sensitive DVS pixel imager. When an object is detected, this technology alerts the driver while in motion, and also adds an overlay to the image for better display. Honda Research Institute (Sensing Road Surface): The use of event-based cameras for road-surface intelligence can provide information such as the presence of potholes, gravel, oily surfaces, snow and ice, and data related to lane markings, road grip, geometry, friction, and boundary measurements. Real-time 3D road surface modeling can also help steer transitions from one road surface to another. The institute has developed a method that uses an event-based camera to monitor the road surface, along with a processing unit that uses this signal to generate an activity map to obtain a spectral model of the surface. Tianjin University (Multi-object Tracking): Another way of using event-based processing on the roads is to track multiple objects at once in real time. Such

Fig-3: Differences between the output from frame-based and event-based approaches



situations demand high recognition accuracy and high-speed processing. An autonomous vehicle needs to be continuously aware of its surroundings, should be able to track pedestrians, and needs the ability to distinguish between dynamic and static objects on the road. Tianjin University is developing an AER image sensor for real-time multi-object tracking. The method includes data acquisition by the AER image sensor, real-time processing, monitoring light-intensity change, and performing an output operation when the change reaches a specified threshold. It also includes filtering of background information to reduce the data transmission quantity and redundant information.

Samsung (Inaccuracies Introduced by Non-event Pixel Points): Existing camera system solutions face the challenge of non-event pixel points formed due to an inconsistent distribution and amount of event pixel points from the right and left DVS cameras. These non-event pixel points do not participate in the process of matching pixel points, consequently leading to erroneous depth information. Samsung has developed a device and method for accurately determining the depth information of the non-event pixel points based on the location information obtained from multiple neighboring event pixel points in an image. By improving the depth-information accuracy of non-event points, the depth information of the overall frame image is improved. In addition, this method eliminates the need for calculating the parallax of the non-event pixel points, thus improving the matching process and increasing efficiency.

Fig-4: Frame-based vs Event-based Characteristics

Concluding Remarks:

AUTHORS SIDDHARTH JAISWAL Practice Lead – Automotive Netscribes Siddharth has been tracking the automotive industry for the past nine years with a keen interest in the future of mobility, autonomous technology, and vehicle E/E architecture. He leads the automotive practice at Netscribes and works closely with the major automotive companies in solving some of the most complex business challenges.

FAIZAL SHAIKH Manager – Innovation Research Netscribes Faizal has six years of experience in delivering digital transformation studies that include emerging technology domains, disruptive trends, and niche growth areas for businesses. Some of his key areas of interest include AI, IoT, 5G, AR/VR, infrastructure technologies, and autonomous/ connected vehicles.


In its early stages, this technology was designed to help the visually impaired. However, over the next 2-3 years, it is expected to reach proof-of-concept for self-driving cars, factory automation, and AR/VR applications. The benefits of this new approach over the traditional camera system make it suitable for object tracking, pose estimation, 3D reconstruction of scenes, depth estimation, feature tracking, and other perception tasks essential for autonomous mobility. Automotive companies should consider R&D, either alone or in partnership, on the deployment of event-based cameras for applications based on computer vision, gesture recognition (pose estimation, gaze tracking), road analysis, security, always-on operations, ADAS applications, and several other indoor and outdoor scenarios. Automotive manufacturers can also leverage the potential of AI and event-based cameras to develop ultra-low-power and low-latency applications; such solutions are less redundant due to the sparse nature of the information encoded. Additionally, funding projects in the domain can help automotive companies become part of the event-based camera ecosystem. Companies such as Bosch, Mini Cooper, and BMW have already joined projects with research establishments like ULPEC or Neuroscientific System Theory. LiDAR in combination with an event-based camera could be another key area of interest for the advancement of autonomous vehicles, as it may help overcome the resolution limitations of traditional cameras. Prophesee is one of the companies that aims to apply its event-driven approach to sensors such as LiDARs and RADARs. Vision systems are being adopted on a large scale at both the consumer and enterprise levels, and the event-based approach has the potential to take vision systems to the next level.
It has all the essential qualities to attain the ubiquity expected for the future of autonomous mobility, where neuromorphic sensors will be seamlessly embedded in the fabric of everyday life. - Siddharth Jaiswal




Industry Insight

The requirement of indoor mapping for localising vehicles indoors, enabling Autonomous Valet Parking in the future DR. BRIAN HOLT

Parkopedia

The next step in autonomous driving and mobility services will bring the ‘blue dot’ indoors, with multi-modal route planning and Automated Valet Parking (AVP) made possible by indoor vehicle localisation, achieved without the use of the Global Positioning System (GPS). According to Harvard University research, parking facilities currently take up as much as one-third of ground-level space in some cities, increasing demand for space-optimising indoor and underground options. Demand is particularly high in smart cities, where on-street parking is being reduced to accommodate greener living, with wider roads built to improve traffic flow and incorporate growing last-mile services.


Indoor and underground parking facilities maximise ground-area usage, but come with the drawback of blocking the line-of-sight to satellites, preventing traditional navigation via GPS. As such, an alternative method of vehicle localisation is needed to maintain the navigation features which drivers are accustomed to on the road, as well as enabling Mobility as a Service (MaaS) and convenience services of the future, such as AVP.

Autonomous Valet Parking AVP has the potential to improve efficiency and reduce stress, emissions and time wasted on parking, with benefits such as allowing a driver to be dropped off in front of a car park or their final destination, with the vehicle then autonomously parking itself.

AVP can improve safety as drivers will no longer need to walk around poorly lit parking facilities or navigate around moving vehicles. AVP can also avoid unnecessary congestion and pollution through real-time dissemination of parking space availability to connected autonomous vehicles. For parking facility operators, AVP utilises parking spaces more efficiently, in some cases by as much as 20%, by allowing tighter and double parking, as well as optimally distributing vehicles within the available parking real estate. The low speeds of AVP also mean a much lower risk of damage to people, cars and infrastructure. For automakers, AVP is seen as the catalyst for wider autonomous driving features, such as enabling EVs to be autonomously charged


or vehicles to be washed and serviced, then automatically returned to a parking spot that is convenient for the driver to find. AVP is likely to be the first SAE Level 4 automation product made publicly available to drivers, due to the lower cost of implementation and lower risk profile of low-speed driving in a constrained environment. However, it requires consistent and reliable global navigation within the entire area of operation. This localisation can be achieved by using advanced robotics techniques, sensors that are already available on most connected vehicles, and landmarks that are already present in all car parks.

The Parkopedia AVP Project In 2020, Parkopedia demonstrated automated valet parking with HD maps of indoor car parks that are suitable for both navigation and localisation, using vision-based localisation techniques based on artificial landmarks (fiducial markers). For the project, Parkopedia led a consortium that included The University of Surrey and The Connected Places Catapult (CPC), which was funded by the Centre for Connected and Autonomous Vehicles (CCAV) and InnovateUK, and focused on the development of solutions for SAE Level 4 automation and above. One of the key objectives was to develop a vision-based indoor positioning system enabling decimetre-level accuracy based on on-vehicle cameras, wheel odometry, an IMU and indoor maps, while demonstrating the solution using Parkopedia's existing self-driving vehicle, fitted with representative automotive-grade hardware and cameras. Parkopedia’s proprietary state-of-the-art Simultaneous Localization and Mapping (SLAM) system integrates Light Detection

and Ranging (LiDAR), inertial measurement unit (IMU) and Global Navigation Satellite System (GNSS) data with high-resolution imagery collected from a compact 3D mobile mapping rig to generate highly accurate colourised point clouds that represent the parking facility. The project found that installing fiducial markers (similar to QR codes), reflected in the map and read by computer vision algorithms to calculate the position and orientation of the vehicle, provided the localisation accuracy needed for the automated vehicle to navigate the car parks safely. However, the requirement for the design, installation and maintenance of hundreds or thousands of markers in a car park presents a considerable economic challenge that threatens the commercial viability of AVP. As such, Parkopedia is now replicating this demo using common objects that occur in all applicable car parks (natural landmarks), and adapting its existing technology for production use cases via its Indoor Mapping product. Following the successful delivery of the AVP project, Parkopedia has proved that vehicles can precisely locate themselves using artificial landmarks inside a centimetre-accurate map of a parking facility without access to a Global Navigation Satellite System (GNSS). By replicating the AVP demo using natural landmarks (e.g. columns, road markings, signs), Parkopedia is showing that the same result can be achieved without requiring any modification to the car park infrastructure itself, or indeed the vehicle. A natural landmark solution uses features that already exist inside car parks and sensors that are already present on vehicles, meaning minimal investment and maintenance costs for automakers and parking facility operators. The process is the same as for artificial landmark localisation, except that the natural landmarks inside the car park can be captured at the same time as the structure of the car park and embedded inside the map right from the start.
Then objects from the different

natural landmark categories are detected using the cameras already mounted on the vehicle, delivering the result required for a truly scalable AVP solution. Natural landmark categories as established by Parkopedia:
● Columns (2 different types)
● Railings
● Signs
● Road markings
● Doors
Parkopedia’s solution involves training a Deep Neural Network (DNN) to recognise and identify objects in the above categories from the images provided by the on-board cameras. The range and bearing to each landmark can then be calculated using associated algorithms. All the natural landmarks identified at the mapping and modelling stage are incorporated into the map so that they can be matched with the observed landmarks. Together with odometry from the car itself, the map contains the information needed to locate the vehicle accurately within the car park. Parkopedia’s technology can detect landmarks in the images and estimate the vehicle pose at more than 20 frames per second. This technology will allow drivers to plan journeys not just to their chosen parking facility, but to a specific parking space or location inside that facility. Drivers will also be able to navigate seamlessly between the public road network and a parking facility’s internal roads, with the map supporting indoor navigation, automated parking and summoning.
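The matching step described above, comparing observed range and bearing to landmarks against their mapped positions, can be sketched with a tiny hypothetical map. The landmark coordinates, search grid and brute-force scoring below are invented for illustration; a production system would instead refine a filtered estimate with odometry rather than search a pose grid:

```python
import math

# Hypothetical indoor map: landmark positions in metres, keyed by an
# id, as they might appear after the mapping and modelling stage.
MAP_LANDMARKS = {"column_A": (2.0, 5.0), "sign_B": (8.0, 1.0), "door_C": (5.0, 9.0)}

def predict(pose, landmark):
    """Range and bearing from a pose (x, y, heading) to a landmark."""
    x, y, th = pose
    lx, ly = landmark
    rng = math.hypot(lx - x, ly - y)
    brg = math.atan2(ly - y, lx - x) - th
    return rng, math.atan2(math.sin(brg), math.cos(brg))  # wrap to [-pi, pi]

def localise(observations, step=0.5, th_step=math.radians(15)):
    """Score candidate poses by how well predicted range/bearing to
    each mapped landmark matches the camera observations; return the
    best-matching pose on a coarse grid over a 10 m x 10 m facility."""
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 10.0:
        y = 0.0
        while y <= 10.0:
            th = -math.pi
            while th < math.pi:
                err = 0.0
                for name, (obs_rng, obs_brg) in observations.items():
                    rng, brg = predict((x, y, th), MAP_LANDMARKS[name])
                    err += (rng - obs_rng) ** 2 + (brg - obs_brg) ** 2
                if err < best_err:
                    best, best_err = (x, y, th), err
                th += th_step
            y += step
        x += step
    return best
```

Feeding the function observations generated from a known pose recovers that pose, which is the essence of landmark-based localisation against a prior map.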

Indoor Maps An Indoor Navigation service will guide drivers to the most likely available spot that minimises the overall journey time by optimising a multi-modal route consisting of driving and walking directions, and can even enable navigation to ‘hidden’ EV charging stations and find-my-car applications, through to Automated Valet Parking use cases. AVP is the logical extension of this and builds upon the ADAS and park-assist technology already present in vehicles today. When used in conjunction with Parkopedia’s dynamic, predictive availability data, drivers will know if there is a space or EV charger available at their destination at their estimated time of arrival; if not, the service can offer alternative suggestions, helping drivers to complete frictionless journeys, and helping



manage facility demand and occupancy. Indoor Navigation opens up many use cases that are not possible at present, such as:
● Routing to specific parking spots in a facility, e.g. those with an EV charger
● Utilising multi-storey and underground car parks for MaaS, logistics and e-commerce applications
● Driver convenience systems, e.g. find-my-car applications
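The spot-selection idea behind such a service can be sketched as a simple expected-time ranking. The spot names, drive/walk timings, availability probabilities and retry penalty below are invented example inputs, not Parkopedia data:

```python
# Hypothetical candidate spots: (drive_seconds, walk_seconds,
# probability_available_at_arrival), as a navigation service might
# estimate them for the driver's estimated time of arrival.
SPOTS = {
    "P2-041 (EV charger)": (420, 180, 0.9),
    "P1-007": (300, 240, 0.6),
    "street-level": (240, 420, 0.95),
}

def best_spot(spots, retry_penalty=300):
    """Rank spots by expected door-to-door time: driving plus walking,
    plus a re-routing penalty weighted by the chance the spot is
    already taken on arrival."""
    def expected_time(entry):
        drive, walk, p_avail = entry
        return drive + walk + (1 - p_avail) * retry_penalty
    return min(spots, key=lambda name: expected_time(spots[name]))
```

With these numbers the nearer-to-exit spot wins despite a longer drive, because its higher availability and shorter walk minimise the expected overall journey time.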

Charging Electric Vehicles Despite rapidly growing sales and legislation to encourage drivers to swap to EVs, the highly fragmented public charging infrastructure is still a major barrier to ownership. As many as one-third of prospective buyers will not be able to charge at home, and will be completely reliant on the public charging network so improvements are required immediately. The decrease in on-street parking will also see these EV chargers moved into indoor

facilities, however, without indoor navigation capability, drivers will not be able to find them easily. Parkopedia’s indoor mapping product is unique in that it is currently the only commercially available indoor navigation service, and when used in conjunction with Parkopedia’s leading parking and EV charger POI data, it resolves the most common pain point around public charging.

New use cases for indoor facilities Localisation of vehicles within indoor parking facilities allows the facilities to be used for more than just parking. Industries such as car sharing and repurposing of sections of the car park for multi-usage, e.g. green last mile delivery networks, ghost kitchens, and E-commerce applications, such as straight-to-trunk delivery, will thrive with the mass introduction of indoor localisation. Usually centrally located, and/or near to other modes of transport such as train and bus stations, parking facilities are an ideal, easy to

AUTHOR DR. BRIAN HOLT Head of HD Maps Parkopedia Dr. Brian Holt leads Parkopedia’s team developing parking solutions for autonomous cars, having previously worked at Samsung Electronics managing on-device AI, vision-based localisation and AR/VR projects.


locate hub for mobility and logistics services. When combined with accurate and reliable parking data, indoor localisation provides a car sharing solution that is finally convenient enough to encourage drivers out of their private vehicles, supporting decarbonisation initiatives.

Driver convenience features Whether due to a lack of time or mobility, drivers want to park as close as possible to their final destination, reducing the ‘on-foot’ part of their journey. Indoor mapping services allow this behaviour to move indoors, improving journey efficiency as well as drivers’ safety and security by letting them park as close as possible to the end destination. An Indoor Navigation service will guide drivers to the most likely available parking spot that minimises the overall journey time, and enables navigation to ‘hidden’ underground EV charging stations and find-my-car applications. Parkopedia’s Indoor Mapping product is a unique, cost-effective, scalable indoor mapping solution for automakers looking to future-proof their navigation and deliver the driver convenience services expected today. Having successfully delivered multiple indoor mapping proof-of-concept projects for global automakers, covering both industry and private use cases, Parkopedia expects this technology to be rolled out to consumer vehicles within the next few years.




Industry Insight

Precision and Complexity: Role of Manufacturing in Product Performance AMAN JAIN

Napino Auto & Electronics Ltd

We generally talk about advanced IoT solutions that must be very precise in performance, with low latency and high reliability, but the supply chain has become the new agenda in the post-COVID era. Today, the electronics industry faces significant challenges in terms of raw materials, component prices and logistics. Let me take one example here to begin the discussion around manufacturing and its challenges. With the change in requirements over a decade, we moved from basic telematics to an advanced phase of telematics known as Advanced Driver Assistance Systems (ADAS). Advanced means more and more features, high performance and better accuracy, which further means more components on the boards and complex PCB design. Any electronic hardware must pass through multiple phases of production; in this article we are specifically talking about the manufacturing phases of complex electronic devices. Demand for ADAS is at an all-time high, and with it, the need for evolving manufacturing and sourcing protocols for the automotive industry. The market is further expected to grow at a CAGR of 16.85% in the forecast period of 2022-2027 to reach a value of approximately USD 68 billion by 2027.

An ADAS unit is a complex engineered product and demands top-notch quality during the production phase. To be precise, the PCB itself comes with a minimum of 10-12 layers to justify the design. The technological components harnessed in ADAS require precision engineering. They are complex, sensitive to environmental effects and thus require protection. For example, the mechanical housing of an ADAS element must be resistant to the effects of corrosion, humidity, shock and vibration.

Design of Hardware
Design of the PCB is the crucial piece of the puzzle. Given current market trends, where electronics supply is completely imbalanced, choosing the right component for your design is very important, and it takes rounds of discussions with suppliers on both prices and lead times. Today's trends show that demand for IoT products is growing rapidly and will continue to increase in the future. In early 2021, most factories announced increased lead times, and now we hear more and more about delays in the supply chain. Choosing the right manufacturing partner, one who has all the quality checks in place and excellent SMT lines, plays a vital role in the performance of the finished product. A typical ADAS device features a suite of sensors. As the complexity and capability of these sensors increase, so does the cost, and supply challenges affect the timeline of widespread adoption.

Test Standards Enable Smooth Performance of ADAS
Apart from the design, testing of assembled boards is another important procedure with a significant role to play. In-Circuit Test (ICT) is a powerful tool for printed circuit board test. The designer is forced to bring out many test points, which is in direct conflict with the goal of miniaturising the design. ICT equipment provides a useful and efficient form of PCB test by measuring each component in turn to check that it is in place and of the correct value. As most faults on a board arise out of the manufacturing process and usually consist of short circuits, open circuits or wrong components, this form of testing catches most of the problems on a board. These can easily be checked using simple measurements of resistance, capacitance, and sometimes inductance between two points on the circuit board. Then we come to the Functional Test (FCT), a PCBA functional test done mainly to simulate the product's working environment and verify the function of boards under working conditions. If either of these two tests is skipped during manufacturing, especially in the case of automotive products, the entire credibility goes for a toss!

Right Manufacturing Partner Can Solve the Puzzle
We at Napino Auto and Electronics bring expertise in a high-class manufacturing facility, with all the ISO standards in place that are required for an automotive manufacturing setup, such as IATF 16949, ISO 45001 and ISO 18001. Dedicated test labs, a certification facility and part-to-part traceability make us a first choice as a preferred manufacturing partner. Attributes of the perfect EMS provider:
● Components Storage, Verification and Traceability
● In-line Solder Paste Inspection
● 3D Automatic Optical Inspection
● Continuous profile monitoring of reflow oven
● PCB Cleaning
● ESD (Electrostatic) Control workstation
● Clean room with class 100,000 or above
● Nitrogen and lead-free compatible soldering
● In-line X-Ray Machine
When it comes to complex PCB boards, the discussion over BGA (Ball Grid Array) components can't be ignored, these being the heart of the design and very costly. Taking a high level of precaution, such as adequate solder paste and reflow, is standard practice; any lapse in the process can lead to a huge loss of PCBs and hence money. X-ray inspection is the right approach here to ensure the perfect soldering of modules, which can't be seen with the naked eye.
Process Failure Mode Effects Analysis (PFMEA) is a structured analytical tool used by an organization, business unit or cross-functional team to identify and evaluate the potential failures of a process. PFMEA helps to establish the impact of the failure and identify and prioritize the action items with the goal of alleviating risk.

The rise of digitally connected products is driving demand across the automotive and electronics industries. How the electronics manufacturing industry delivers precision in the performance of the final product is a crucial phase of product development, bringing reliability and durability across the entire product lifecycle, as well as the resulting experience.
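The ICT pass/fail idea described in the testing section, measuring each component in turn and checking it against its expected value, can be sketched as a tolerance-band comparison. The component references, nominal values and tolerances are made up for the example:

```python
# Illustrative ICT-style check: each component's measured value is
# compared against its nominal within a tolerance band. Opens, shorts
# and wrong parts show up as out-of-band measurements.
NOMINALS = {
    "R12": (4700.0, 0.05),   # ohms, ±5%
    "C3":  (1.0e-7, 0.10),   # farads, ±10%
    "R44": (10000.0, 0.01),  # ohms, ±1%
}

def ict_check(measurements, nominals=NOMINALS):
    """Return the component references whose measured value falls
    outside the tolerance band around its nominal value."""
    failures = []
    for ref, measured in measurements.items():
        nominal, tol = nominals[ref]
        if abs(measured - nominal) > nominal * tol:
            failures.append(ref)
    return failures
```

A real ICT fixture adds guarding and per-net probing on top of this, but the pass/fail decision per component reduces to exactly this kind of comparison.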

AUTHOR AMAN JAIN National Sales Head - IoT, Marketing & BD Napino Auto & Electronics Ltd I am Aman Jain, with 9+ years of experience in business development in the vehicle telematics domain, along with a deep understanding of industrial IoT and electronics manufacturing services. I have spent most of my professional journey in vehicle telematics itself. My core strength is understanding potential market opportunities for IoT sensors, specifically in connected technology.



Car Launch

MERCEDES-BENZ AMG A45 S 4Matic Plus

Mercedes-Benz India has launched the AMG A 45 S 4MATIC+, the newest hatchback offering in its AMG performance range. It was launched in the second week of November and is priced at Rs 79.50 lakh.



SPECIFICATIONS

Safety Features:
◆ Geo-Fence
◆ Overspeed Warning - 1 beep over 80 kmph, continuous beeps over 120 kmph
◆ Blind Spot Detection – Optional
◆ Lane Departure Prevention – Optional
◆ Lane Departure Warning – Optional
◆ Emergency Brake Light Flashing
◆ Forward Collision Warning (FCW)
◆ Automatic Emergency Braking (AEB)
◆ High-beam Assist
◆ Airbags - 7 (Driver, Front Passenger, 2 Curtain, Driver Side, Front Passenger Side)
◆ Middle rear three-point seatbelt
◆ Middle rear head rest
◆ Tyre Pressure Monitoring System (TPMS)
◆ Child Seat Anchor Points
◆ Seat Belt Warning

Telematics Features:
◆ Find My Car
◆ Check Vehicle Status Via App
◆ Emergency Call
◆ Over The Air (OTA) Updates
◆ Remote Car Lock/Unlock Via App
◆ Remote Car Light Flashing & Honking Via App
◆ Remote Sunroof Open/Close Via App
◆ Alexa Compatibility

Entertainment, Information & Communication Features:
◆ Wireless Charger
◆ Smart Connectivity - Android Auto, Apple CarPlay
◆ Integrated (in-dash) Music System
◆ Touch-screen Display
◆ GPS Navigation System
◆ 6+ Speakers
◆ USB Compatibility
◆ Bluetooth Compatibility (Phone & Audio Streaming)
◆ AM/FM Radio
◆ iPod Compatibility
◆ Steering mounted controls
◆ Voice Command

Instrumentation:
◆ Digital Instrument Cluster
◆ Electronic 2 Trip Meters
◆ Average Fuel Consumption
◆ Average Speed
◆ Distance to Empty
◆ Digital Clock
◆ Low Fuel Level Warning
◆ Door Ajar Warning
◆ Adjustable Cluster Brightness
◆ Gear Indicator
◆ Digital Tachometer
◆ Instantaneous Consumption

Engine & Transmission:
◆ Top Speed – 270 kmph
◆ Acceleration (0-100 kmph) – 3.9 seconds
◆ Engine Type - 2.0L M139 Turbocharged I4
◆ Fuel Type – Petrol
◆ Max Power - 421 bhp @ 6750 rpm
◆ Max Torque - 500 Nm @ 5000 rpm
◆ Drivetrain – AWD
◆ Transmission - Automatic (Dual Clutch), 8 Gears, Paddle Shift, Sport Mode
◆ Emission Standard – BS6
◆ Turbocharged

Braking & Traction:
◆ Anti-Lock Braking System (ABS)
◆ Electronic Brake-force Distribution (EBD)
◆ Brake Assist (BA)
◆ Electronic Stability Program (ESP)
◆ Four-Wheel Drive - Torque-On-Demand
◆ Hill Hold Control
◆ Traction Control System (TC/TCS)
◆ Limited Slip Differential (LSD)

Locks & Security Features:
◆ Engine immobilizer
◆ Central Locking - Keyless
◆ Speed Sensing Door Lock
◆ Child Safety Lock
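The overspeed warning listed in the safety features follows a simple threshold rule (one beep above 80 kmph, continuous beeps above 120 kmph). A minimal sketch of that rule, purely illustrative and not Mercedes-Benz's implementation:

```python
def overspeed_alert(speed_kmph: float) -> str:
    """Map vehicle speed to the audible warning described in the spec:
    one beep above 80 km/h, continuous beeps above 120 km/h."""
    if speed_kmph > 120:
        return "continuous beeps"
    if speed_kmph > 80:
        return "single beep"
    return "no alert"

print(overspeed_alert(95))   # single beep
print(overspeed_alert(130))  # continuous beeps
```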



Car Launch

VOLKSWAGEN TIGUAN Elegance 2.0 TSI DSG

The 2021 Volkswagen Tiguan facelift was launched in India on December 7, 2021, priced under Rs 30 lakh (ex-showroom Delhi). It features a host of cosmetic updates along with a new powertrain. While the pre-facelift Tiguan launched in India back in 2017 was available with a diesel engine only, the facelifted SUV is offered solely with a BS6-compliant turbocharged petrol engine.



SPECIFICATIONS

Safety Features:
◆ Overspeed Warning - 1 beep over 80 kmph, continuous beeps over 120 kmph
◆ Emergency Brake Light Flashing
◆ High-beam Assist
◆ Airbags - 6 (Driver, Front Passenger, 2 Curtain, Driver Side, Front Passenger Side)
◆ Middle rear three-point seatbelt
◆ Middle rear head rest
◆ Tyre Pressure Monitoring System (TPMS)
◆ Child Seat Anchor Points
◆ Seat Belt Warning

Braking & Traction:
◆ Anti-Lock Braking System (ABS)
◆ Electronic Brake-force Distribution (EBD)
◆ Brake Assist (BA)
◆ Electronic Stability Program (ESP)
◆ Four-Wheel Drive - Torque-On-Demand
◆ Hill Hold Control
◆ Traction Control System (TC/TCS)
◆ Differential Lock – Electronic

Engine & Transmission:
◆ Engine Type - 2.0 TSI
◆ Fuel Type – Petrol
◆ Max Power - 187 bhp @ 4200 rpm
◆ Max Torque - 320 Nm @ 1500 rpm
◆ Drivetrain – 4WD/AWD
◆ Transmission - Automatic (Dual Clutch), 7 Gears, Manual Override & Paddle Shift, Sport Mode
◆ Emission Standard – BS6
◆ Turbocharged

Locks & Security Features:
◆ Engine immobilizer
◆ Central Locking - Keyless
◆ Speed Sensing Door Lock
◆ Child Safety Lock

Telematics Features:
◆ Find My Car
◆ Check Vehicle Status Via App
◆ Geo-Fence
◆ Emergency Call

Entertainment, Information & Communication Features:
◆ Gesture Control
◆ Smart Connectivity - Android Auto, Apple CarPlay
◆ Integrated (in-dash) Music System
◆ Touch-screen Display
◆ GPS Navigation System
◆ 6+ Speakers
◆ USB Compatibility
◆ Aux Compatibility
◆ Bluetooth Compatibility (Phone & Audio Streaming)
◆ AM/FM Radio
◆ iPod Compatibility
◆ Steering mounted controls
◆ Voice Command

Instrumentation:
◆ Analogue Instrument Cluster
◆ Electronic 2 Trip Meters
◆ Average Fuel Consumption
◆ Average Speed
◆ Distance to Empty
◆ Digital Clock
◆ Low Fuel Level Warning
◆ Door Ajar Warning
◆ Adjustable Cluster Brightness
◆ Gear Indicator
◆ Digital Tachometer
◆ Instantaneous Consumption



Industry Insight

The key to delivering autonomous driving promises: V2X connectivity TUSHAR BHAGAT

Uffizio India Pvt. Ltd.

Connected vs. Autonomous vehicles
Autonomous vehicles are an emergent technology, and a more exciting innovation always seems to be just around the corner. Autonomous cars can control themselves, sense their surroundings, detect pedestrians, park, change lanes, merge safely onto highways, and more. It is no news that autonomous fleets will certainly reduce the risk of road fatalities and enhance traffic flow; after all, 94% of automotive collisions happen because of human error. Hence, this application of automotive technology is forever going to change the way we travel. With the help of sensors, onboard cameras and lidars, self-driven cars can make their own decisions with very little or almost no human interaction.

Connected vehicles, on the other hand, cannot make their own decisions, but they still have the potential to transform travel. At its core, connected vehicles form a networked environment that sustains high-speed data transactions among vehicles (V2V), infrastructure (V2I), or other devices (V2D). This ability to identify, collect, process and exchange real-time data equips drivers with greater situational awareness. For instance, in case of a vehicle breakdown, a notice gets transmitted to the surrounding vehicles, so drivers can press the brake pedal and avoid collisions.
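As an illustration of the V2V exchange described above, here is a minimal sketch of a breakdown notice being broadcast and acted on by a nearby vehicle. The message fields and the 200 m alert radius are hypothetical, not taken from any V2X standard:

```python
import json
import math

def make_breakdown_notice(vehicle_id, lat, lon):
    # A broken-down vehicle serializes its position into a notice.
    return json.dumps({"type": "BREAKDOWN", "id": vehicle_id,
                       "lat": lat, "lon": lon})

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation, adequate for short V2V ranges.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371000  # Earth radius in metres

def should_brake(notice_json, my_lat, my_lon, alert_radius_m=200):
    # A receiving vehicle warns its driver if the breakdown is nearby.
    n = json.loads(notice_json)
    if n["type"] != "BREAKDOWN":
        return False
    return distance_m(n["lat"], n["lon"], my_lat, my_lon) <= alert_radius_m

notice = make_breakdown_notice("KA01-1234", 12.9716, 77.5946)
print(should_brake(notice, 12.9717, 77.5947))  # a vehicle ~15 m away -> True
```

In a real deployment the notice would travel over DSRC or cellular V2X rather than as a JSON string, but the identify-collect-exchange loop is the same.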

Why must autonomous vehicles connect?
As mentioned earlier, autonomous vehicles deploy intelligent algorithms that enable the vehicle to make its own decisions. However, autonomous vehicles will not get much smarter unless they are connected. Imagine a foggy morning: your car drives itself only to face poor visibility and diminished sensor ranges. Here, V2V connectivity becomes more than just a channel for information relay; it becomes essential for on-road safety.

For autonomous vehicles to function on actual roads, they must operate safely under any and every condition. For that to happen, driverless cars must exchange information with other vehicles and roadside infrastructure. V2X communications capture and interpret pragmatic data that help driverless cars better execute actions like accelerating or braking. Connectivity acts as an additional sensor, letting your car see further even when driving conditions are poor. It provides out-of-sight information to the radars and sensors, helping the car navigate better. So, even on a foggy day, your driverless car will stop at the intersection as intended: with vehicle-to-infrastructure (V2I) communication, your vehicle will know when to stop, despite poor visibility.

Autonomous vehicles may intelligently combine vehicular data from multiple sensors, but that will not be enough if they are not using it strategically. Even with safety tools like advanced driver assistance systems (ADAS), there is only so much that can be done without connectivity; the failed self-driving tests of Tesla and Uber are a testament to this. Road incidents that seem pretty straightforward to humans may confuse automated vehicles.

Connecting autonomous vehicles gives them a geo-localized context. Mapping surroundings is crucial, at least from a safety perspective: knowing where a pedestrian is crossing, or of an upcoming area with bad weather, will help the vehicle slow down in good time. Building a connected infrastructure and promoting data-driven mobility is the key to fewer driving errors. With it, even self-driven cars will make split-second decisions. Connected automated vehicles (CAVs) will perceive things that sensors alone cannot. For instance, if a connected vehicle were to identify twenty cars braking at the same place, it might use that information to slow down and prepare for braking too. With V2X connectivity, autonomous vehicles will anticipate traffic changes better and plan ahead.
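The twenty-cars-braking example can be made concrete with a small sketch: a connected vehicle counts recent braking reports near an upcoming road segment and pre-emptively slows down. The report format, segment IDs and thresholds are all illustrative assumptions:

```python
from collections import Counter

def hotspots(brake_reports, threshold=20):
    """brake_reports: list of (segment_id, timestamp) tuples received
    over V2X. Returns segments where at least `threshold` vehicles
    reported braking."""
    counts = Counter(seg for seg, _ in brake_reports)
    return {seg for seg, n in counts.items() if n >= threshold}

def plan_speed(next_segment, cruise_kmph, reports):
    # Slow down ahead of a braking hotspot instead of reacting at it.
    if next_segment in hotspots(reports):
        return cruise_kmph * 0.6  # hypothetical comfort-braking target
    return cruise_kmph

reports = [("NH48-km12", t) for t in range(25)]  # 25 vehicles braked here
print(plan_speed("NH48-km12", 100, reports))  # 60.0
```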

Concluding Remarks
Autonomous driving is an audacious yet distant goal. While the future of self-driven cars is still unknown, one thing we do know is this: we must master connectivity and V2X communications to deliver self-driven cars. These communications will depend on both sensor technology and radio-based communication. Car sensors will help systems interact with their immediate surroundings, whereas radio-based communication will enable data exchange with traffic infrastructure like lights, signs and tolls. CAVs have the potential to adapt to real-world traffic conditions and instill the hope of widespread consumer adoption. For this to happen, technologies like 5G and multi-access edge computing need to be leveraged, so we can finally fill our roads with self-driven cars.

References
- https://www.ite.org/technicalresources/topics/connectedautomated-vehicles/
- https://ieeexplore.ieee.org/document/8451989

Image credits: https://spectrum.ieee.org/6-key-connectivity-requirements-of-autonomous-driving

AUTHOR
TUSHAR BHAGAT
Director, Uffizio India Pvt. Ltd.
Mr. Tushar Bhagat is the CEO of Uffizio. He has simmered 15 years' worth of informatics knowledge and experience into a one-of-a-kind telematics platform. This fleet management system has been acclaimed and widely used by businesses in over 60 countries. Mr. Bhagat firmly believes in finding creative solutions to everyday challenges, so businesses can bloom into their full potential.



Product Launch

Sidescan Predict, a predictive collision warning system

Brigade Electronics has launched a new predictive collision detection system called Sidescan Predict. Designed for rigid-body vehicles, including coaches and buses, with a minimum length of 5.2 m, the system uses six ultrasonic sensors along the side of the vehicle and artificial intelligence to scan for object detection data (such as the speed and distance of a cyclist or other vulnerable road user) within a 2.5-metre area. This is combined with technology embedded in the system that gathers the speed, direction, acceleration and turning rate of the vehicle. All this data then feeds into an algorithm created by Brigade which calculates the risk of a collision in order to warn the driver. If a collision is predicted, an alert is sent to the cab. According to the supplier, the system reduces the risk of fatalities by an additional 84%.
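Brigade does not publish its algorithm, but the idea of fusing object data (speed and distance of a road user) with the vehicle's own kinematics can be sketched with a simple time-to-collision check. All function names and thresholds here are hypothetical:

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until contact if the lateral gap keeps closing at the
    current rate; None when the paths are diverging."""
    if closing_speed_mps <= 0:
        return None
    return gap_m / closing_speed_mps

def predict_alert(gap_m, obj_speed_mps, veh_lateral_mps, ttc_warn_s=2.0):
    # Closing speed combines the object's approach with the vehicle's
    # own lateral movement toward it (e.g. while turning left).
    ttc = time_to_collision(gap_m, obj_speed_mps + veh_lateral_mps)
    return ttc is not None and ttc < ttc_warn_s

# Cyclist 1.8 m away, closing at 0.5 m/s while the vehicle drifts 0.6 m/s:
print(predict_alert(1.8, 0.5, 0.6))  # TTC ~1.6 s -> True, warn the driver
```

The appeal of a predictive check over a simple proximity alarm is exactly the behaviour Brigade describes: a nearby but diverging cyclist produces no alert, which reduces the false warnings that tempt drivers to ignore the system.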

Sidescan®Predict's features and benefits:
● Sidescan®Predict is always switched on at speeds below 22 mph (30 km/h) and cannot be deactivated by the driver. Crucially, the collision protection is active with or without the indicators on. This is particularly important because some drivers, irritated by false alerts, avoid using their indicators so the system does not trigger, potentially putting vulnerable road users at risk
● An auto-brightness feature adapts Sidescan®Predict to lighting conditions in the cabin, so the visual alert is not lost among the numerous lights present in a modern cab
● Sidescan®Predict can be retrofitted to existing vehicles, and the configuration software (a key USP) provides multiple system tests to ensure the sensors are appropriately positioned and correctly fitted, for additional peace of mind
● An in-cab visual alert indicates if the system has a failure
● The system is competitively priced and comes complete in one box with a comprehensive user manual (also available online) and a training video
● Installation time is approximately six hours, and the system integrates with other Brigade systems to provide a complete solution. Additional speed and indicator switches are not required, which reduces system and installation costs
● The next-generation sensors have wider detection areas, which further reduces risk

Brigade claims development of the system, which is supported by the Cambridge University Knowledge Transfer Partnership initiative, has been taking place for more than seven years and is the culmination of 10,000 hours of research. Sidescan Predict was first trialled in 2020 and received "excellent driver feedback" with a "significant reduction in the risk of collision with both vulnerable road users and static objects".


Product Launch

Spectrum HD, LiDAR from Baraja

Baraja unveiled Spectrum HD, a LiDAR system ready for Level 4 autonomy at scale. Spectrum HD will be made available for samples in 2022.

Spectrum HD is built on Baraja's proprietary Spectrum-Scan™ solid-state scanning platform. Baraja's Spectrum-Scan™ LiDAR connects a wavelength-tunable laser to prism-like optics, deflecting the light in different directions to achieve scanning with higher reliability and lower cost.

Highlights:
● On-chip fast-axis beamsteering, transmitter and receiver
● Industry-leading range and resolution
● Per-point Doppler velocity measurements
● True interference immunity from ambient light, other LiDARs and other Baraja sensors
● Built according to ISO 26262/ISO 16750 for automotive integration, reliability and functional safety
● Foveation – detect and focus on objects in the field-of-view by dynamically changing point density on-the-fly
● High-resolution detections superior to the industry standard, down to 0.0125° x 0.04°

Laser:
● Laser Product Class – Class 1 as per IEC 60825-1:2014
● Laser Wavelength – 1550 nm central (C-band)
● Beam Divergence – 0.015° (FWHM)

Mechanical/Electrical:
● Power Consumption – <20 W
● Dimensions (H, W, D) – 50 mm x 50 mm x 120 mm

Optical Performance:
● Range (to 10% reflective Lambertian target, >90% detection probability) – 250 m+
● Minimum Range – 0.01 m
● Vertical Angular Resolution – 0.0125°
● Horizontal Angular Resolution – 0.04°
● Field of View – 120° (H) x 25° (V)
● Frame Rate – 4-30 Hz
● Number of Returns – 2+
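To put the angular-resolution figures above in context, a quick calculation of the point spacing they imply at range. This is simple small-angle geometry, not a Baraja formula:

```python
import math

def point_spacing_m(range_m, angular_res_deg):
    # Arc length subtended by one angular step at the given range.
    return range_m * math.radians(angular_res_deg)

# Horizontal spacing between adjacent points at 250 m with 0.04 deg:
print(round(point_spacing_m(250, 0.04), 3))    # ~0.175 m
# Vertical spacing at 250 m with 0.0125 deg:
print(round(point_spacing_m(250, 0.0125), 3))  # ~0.055 m
```

In other words, at its maximum quoted range the sensor would place points roughly 17 cm apart horizontally, fine enough to register several returns from a pedestrian-sized object.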


Industry Insight

Autonomous Vehicle, the next big thing in Telematics PRAVEEN DHUSIA

Teltonika India

From Da Vinci's self-propelled cart to Tesla's Autopilot and Google's self-driven car, the history of autonomous vehicles can be traced back to the 16th century and Da Vinci's time. The invention of the Whitehead torpedo in 1868 was a significant step towards automation technology. The DARPA Grand Challenge, undertaken by the US Department of Defense in 2004, also saw a significant leap in driverless capabilities. Then came cruise control in cars, followed by autopilot technology. The advent of drone technology also changed the view of the world, with vehicles that could be navigated entirely by pre-written code and advanced AI. With Google, Tesla and other AV enthusiasts, the game changed as the world got its first autopilot cars, offering hands-free control on highways and freeways.

Today, among the most exciting technology prospects that humans are working on, autonomous vehicles (AVs), or self-driven cars, are the future of the transportation industry. These driverless cars have the ability to sense their surroundings through sensors embedded in their systems. Instead of a driver-led response, the vehicle utilizes its in-built software to navigate roads. Although this holds a glorious promise of a future with immensely enhanced human safety against car crashes, the technology still has certain drawbacks that even its most forward-looking advocates acknowledge. Whether we will be able to overcome technology barriers like the advanced levels of AI and machine learning required to implement this successfully is another question, but what is certain is that we are slowly, but surely, reaching Level 5 of automation.

Automation Levels
Automation can be defined as the creation and application of technology that can perform tasks with minimal human intervention. And the ultimate goal of every automation technology is to bring the benefits of industrial application down to the common man. There are six levels of autonomous cars, in ascending order of how much the technology can do on its own. Level 0 is the lowest level of automation, while Level 5 is the highest:

● Level 0 – The lowest level of automation, where all control rests with the human driver.
● Level 1 – Driver assistance helps the driver with steering and braking of the car.
● Level 2 – Automation helps with steering, braking and acceleration in certain predefined environments, although the human driver still has to monitor some functions and perform tasks.
● Level 3 – Automation performs the majority of the driving in ideal environments, but the driver may still have to take manual control when prompted by the system.
● Level 4 – The vehicle's Advanced Driving System independently performs all predefined tasks without any human intervention.
● Level 5 – Automation enables the autonomous vehicle to function completely on its own in any environment, without the need for human monitoring or control. Level 5 autonomous cars can communicate with other cars, traffic lights, signage and roads.

We are currently sitting at Level 2 of automation, with Level 3 capabilities being developed at a rapid pace and forecast for industrial-scale deployment in about the next five years.

What the technology holds
Autonomous vehicles have several advantages over human-driven cars. These vehicles provide safety and could substantially decrease the number of casualties due to road accidents. They may also lead to better traffic management on the road and reduce congestion. Driverless taxis also have the potential to reduce transportation costs. Additional advantages include safe passenger rides for people with disabilities and less fatigue during overnight journeys. One of the biggest challenges autonomous vehicles face is that, unless and until advanced systems take hold, the technology may end up causing damage to the vehicle, as it would be programmed to protect humans at all costs. Also, there has to be a well-drafted set of rules to assist the AI in better decision-making, such as whether to prioritize passenger or pedestrian life, should such a situation arise.

5G, Telematics and AV
In the past decades, we have witnessed substantial upgrades in cellular networks, from 2G to 3G, 4G and now 5G. With the introduction of 5G infrastructure, the speed of transmission and processing of data between IoT-enabled devices is set to increase. This upheaval in telematics will have a trickle-down effect on the development of AVs. 5G internet speeds will assist real-time vehicle-to-vehicle and vehicle-to-infrastructure communications, which will relay data back to the driver about traffic congestion, street lights, estimated arrival time, construction/maintenance hazards, etc. AVs will be major beneficiaries of such machine-to-machine data communications, which make operations easier. 5G technology will significantly enhance safety in AVs, with technology taking over the wheel and, in turn, eradicating human error.

AV: Value Creation
With the advent of AVs in the public transportation and logistics sectors, the paradigm will shift considerably as this technology removes the impediments facing commercial vehicles. From less congestion to higher asset utilization and enhanced productivity, the value-creation potential of AVs is quite high. With real-time data on travel time, road incidents and favorable traffic patterns, AVs can take the best possible route, which would curb traffic density on roads. In a developing country like India, the consequences of such shared autonomous vehicles (SAVs) can be profound. We can already see the results with the metro rail projects now being introduced even in Tier II cities to curb congestion. SAVs will surely bring down the cost of transportation: driverless cabs can ferry multiple passengers at once, akin to the car-sharing model thriving today, but without the need for a driver and their skills. With most AVs being electric cars, the wide-scale implementation of SAVs will also put less pressure on the environment, with no fossil fuels being used and fewer vehicles on the road.

Challenges
There are still some elementary challenges that need fixing before Level 5 AVs become our reality on the roads.

Hardware Challenges
The hardware requirements for autonomous vehicles primarily involve sensors of different kinds. Some of them are currently in application, while some are yet to be deployed on a large scale. These sensors help the AV detect objects around it and interact with the environment. Cameras can be installed to provide vision, while LIDAR lasers measure the distance between an object and the vehicle; LIDAR also measures the speed and direction in which the object is traveling. These sensors have to be sensitive enough to relay correct data to the AV's computer control system for decision-making.

Also, in terms of mechanical engineering, the car has to be built so that it is safe to drive in all environments and can withstand the elements while parked. In the automobile industry, this is referred to as making the vehicle 'automotive grade'. A key point was raised by Mr. Chris Brewer, Ford's Chief Program Engineer for autonomous vehicle development, who outlined the need for the vehicle's steering control systems to rely on mechanical steering over power steering, the latter being susceptible to power failures. And in a vehicle without a physical steering wheel, this becomes a tricky proposition. With a hidden drive control system, there is still ground to cover in building mechanical redundancy into the vehicle so that it can be controlled even if power runs out.

AI & Machine Learning
AV technology uses AI and machine learning to process the data relayed from the sensors. This data is then classified based on the metadata fed into the systems as algorithms, and the system makes a decision: whether to steer, brake or accelerate. Currently, there is no standard metadata, and no method or even a basis to guarantee complete safety through machine-learning algorithms. There are no established ideals for testing, training or validating such advanced technology. Fail-safe system design is another challenge, as two independent subsystems are required to ensure that one takes over if the other fails.

Regulatory challenges
The need for a regulatory framework is pivotal for the development and deployment of CAV technologies. With driverless cars, there is a shift of liability from drivers to carmakers regarding accountability in case of any mishap. All major consumer concerns have to be considered before actual deployment of Level 5 automation technologies on the road.

Ethical challenges
As per a 2017 report by the German Ethics Commission: "In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical/mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another. General programming to reduce the number of personal injuries may be justifiable." But, due to the immense complexity of on-road situations, autonomous cars may face situations where they need to make a choice between putting the lives of the passengers at risk or saving a pedestrian. Even with regulatory frameworks in place, this can be a daunting task due to the moral characteristics of individuals. The future of autonomous cars depends on overcoming this challenge, which can only be done with the consent of all stakeholders after thorough analysis of the benefits and risks of each decision made in this regard.

Role of ADAS
Advanced Driver Assistance Systems, or ADAS, support the driver and enhance vehicle and road safety. ADAS integrates the data sent by sensors with a human-machine interface to detect obstacles or an error by the driver and improve the safety of the car. It alerts the driver to obstacles, takes precautionary measures and, in some cases, takes limited control of the car. Cruise control, traffic warnings, parking assistance, lane centering and satellite navigation are common ADAS functions.

Common ADAS sensors include:
● Cameras for imagery
● LIDAR and ultrasonic sensors to measure objects and their distances
● GPS for positional data
● Night vision systems
● Inertial measurement units to detect speed and acceleration data

At each level of automation, ADAS is designed to function more independently as you approach the highest automation level:
● Level 1 – The driver makes most decisions, with one functionality taken over by ADAS (emergency brake assist, adaptive cruise control, lane keeping, etc.)
● Level 2 – Multiple functionalities operated by ADAS, with the driver still in control (highway assist, autonomous parking)
● Level 3 – Conditional driving automation, where the system can make informed choices based on sensor data and function on its own, though it still requires a manual override from a human driver
● Level 4 – Minimum human interaction; ADAS responds to driver error or system failure
● Level 5 – Complete control of the vehicle without any human intervention

The AV industry today
Several tech companies are joining hands with automobile giants to be first in the race to fully automated vehicles. Chinese tech giant Baidu has collaborated with the likes of BMW, Ford and other Chinese carmakers and is currently testing autonomous fleets without human involvement in Beijing and California. Its Apollo system, an ADAS mechanism, consists of camera sensors, LIDAR and radar along with a cellular vehicle-to-everything communication system. Another company worth its weight here is NVIDIA, a leading autonomous hardware and software company. Its ADS software stack is complete with driving policy, signal processing, perception and other control elements; among its reputed clientele is Daimler, the parent company of Mercedes-Benz. Waymo, the offering by Google, is a step beyond everyone else, having tested its first ride-sharing vehicle in Arizona without a manual operator. Volvo currently uses Waymo's services in its Level 4 autonomous vehicles.

How does the future look?
The future of autonomous vehicles looks bright on the horizon, though certain key challenges still need to be addressed. From ethical problems to the large datasets needed to equip the AI for decision-making, the road to fully autonomous technology is still long. However, as more tech giants like Google jump into this market, and with collaboration between such private companies, governments and experts, self-driven cars will soon become a reality in our lives, with limitless potential.

AUTHOR
PRAVEEN DHUSIA
Business Development Manager, Teltonika India
Indian Institute of Management alumnus with 8+ years of experience; Electrical and Electronics Engineer with expertise in IoT solutions. Associated with landmark projects like the Buddh International F1 Circuit, the redevelopment of Connaught Place, and IGI Airport Terminal 3 (MEP). In his free time, he and his friends try to give back to society by helping underprivileged young kids with their education.


THE iTriangle Gazette
SHARING REGULAR UPDATES ON ITRIANGLE INFOTECH
DECEMBER 03, 2021 | VOL 008 | RS. 1.01

BRINGING VALUE THROUGH APPROACH, FOCUS & EXPERTISE. TRUST US TO BRING MORE CONVENIENCE TO OUR CUSTOMERS.

INTRODUCING TELEMATICS FEATURES THAT WILL BE GAME CHANGERS FOR AUTO MANUFACTURERS

VEHICLE ECU UPDATES
• ECU updates over CAN
• Software and calibration updates over the air, without the hassle of recalling vehicles to the service station
• Scalable architecture to accommodate an increase in vehicle ECUs
• Single and dual bank updates

VEHICLE ECU DIAGNOSTICS
• ECU diagnostics over CAN
• Secured online access to 100% of vehicle ECUs
• Live and remote diagnostics of the vehicle ECUs with standardized J2534 interface
• Diagnostics capability over BLE

4G TCU: UX 101
• Built on a powerful chipset
• Linux-based architecture
• AIS140 certified
• EMI/EMC as per CISPR 25 Class 3
• Low sleep current
• Binary protocol to reduce SIM data consumption
• Encrypted data communication between server and device
• Multiple CAN, analog, digital etc. interfaces to support sensor integrations and data acquisition

CONNECTED MOBILITY PLATFORM
• Scalable, elastic, portable & secure cloud platform
• Device lifecycle management
• SIM lifecycle management
• AIS140 common layer workflow management
• Vehicle management cloud software for managing OTA campaigns, workflows etc.
• Customizable cloud-to-cloud integrations
• End-of-line testing

EMPANELLED ACROSS ALL THE AIS 140 ACTIVE STATES

iTriangle Infotech Pvt. Ltd.
For more information contact us at: sales@itriangle.in • +91 9739974445 • www.itriangle.in
Sudeep Nayak - +91 77951 00253 • Ashish Rawat - +91 95139 92362
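The "single and dual bank updates" mentioned above refer to a common OTA pattern: new firmware is written to the inactive bank and the ECU switches over only after verification, so a failed download never leaves the unit without working firmware. A hedged sketch of the idea; the bank layout and checksum scheme are illustrative, not iTriangle's design:

```python
import hashlib

class DualBankEcu:
    def __init__(self, firmware: bytes):
        # Bank A holds the running firmware; bank B is the spare.
        self.banks = {"A": firmware, "B": b""}
        self.active = "A"

    def inactive(self):
        return "B" if self.active == "A" else "A"

    def apply_update(self, image: bytes, expected_sha256: str) -> bool:
        # Write to the inactive bank; the running firmware is untouched.
        target = self.inactive()
        self.banks[target] = image
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            return False  # verification failed: keep running the old bank
        self.active = target  # atomic switch-over on next boot
        return True

ecu = DualBankEcu(b"fw-v1")
ok = ecu.apply_update(b"fw-v2", hashlib.sha256(b"fw-v2").hexdigest())
print(ok, ecu.active)  # True B
```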


Industry Insight

How Lidar Enables Autonomous Vehicles to Operate Safely MIRCEA GRADU, PHD

Velodyne Lidar

Situational awareness is essential to good driving. To navigate vehicles to a chosen destination, drivers need to know their location and continuously observe the surroundings. These observations allow a driver to take actions instinctively, such as accelerating or braking, changing lanes, merging onto the highway and maneuvering around obstacles and objects. Autonomous vehicles (AVs) work in much the same way, except they use sensor and GPS technologies to perceive the environment and plan a path to the desired destination. These technologies work together to establish where the car is located and the correct route to take. They continuously determine what is going on around the vehicle, locating the position of people and objects near it and assessing the speed and direction of their movements. This steady stream of information is fed into the vehicle's onboard computer system, which determines the safest way to navigate within its surroundings. To better understand how sensor technologies in autonomous cars work, let's examine how these vehicles perceive their location and environment to identify and avoid objects in their pathways.

Precise Measurements of Location and Surroundings
Sensor technologies provide information about the surrounding environment to the vehicle's computer system, allowing the vehicle to move safely in our three-dimensional world. These sensors gather data that describe a car's changes in position and orientation. Autonomous vehicles utilize high-definition maps that are updated in real time to guide the car's navigation system. This updating is necessary because the conditions of our roadways are not static: congestion, accidents and construction complicate real-life movement on streets and highways. On-vehicle sensing technologies, such as lidar, cameras and radar, perceive the environment in real time to provide accurate data on these ever-changing roadway situations. The real-time maps that these sensors produce are often highly detailed, including road lanes, pavement edges, shoulders, dividers and other critical information. These maps include additional information, such as the locations of streetlights, utility poles and traffic signs. The vehicle must be aware of each of these features to navigate the roadway safely. Sensing technologies also address other critical driving requirements. For instance, some lidar sensors give autonomous vehicles a 360-degree view, so the entire environment around the vehicle can be seen while operating. A wide field of view is particularly important in navigating complicated situations, such as a high-speed merge onto a highway.

Detecting and Avoiding Objects Sensor technologies provide onboard computers with the data they need to detect and identify objects such as vehicles, bicyclists, animals and pedestrians. This data also allows the vehicle’s computer to measure these objects’ locations, speeds and trajectories. A useful example of object detection and avoidance in autonomous vehicle testing is a dangerous tire fragment on the freeway. Tire fragments are not usually large enough to spot easily from a long distance and they are often the same color as the road surface. AV sensor technology must have high enough resolution to detect accurately the fragment’s location on the roadway. This requires distinguishing the tire from the asphalt and determining that it is a stationary object, rather than something like a small, moving animal.

44 | Telematics Wire | December 2021


In this situation, the vehicle not only needs to detect the object but also classify it as a tire fragment that must be avoided. The vehicle must then determine the right course of action, such as changing lanes to avoid the tire fragment without hitting another vehicle or object. To give the vehicle adequate time to change its path and speed, these steps must all happen in less than a second. These decisions made by the vehicle’s onboard computer depend on accurate data provided by the vehicle’s sensors.

A Closer Look at Sensor Technologies

Sensor technologies perceive a car’s environment and provide the onboard map with rich information about current roadway conditions. To build redundancy into self-driving systems, automakers utilize an array of sensors, including lidar, radar and cameras.

Lidar is a core sensor technology for autonomous vehicles. Lidar sensors reflect light off surrounding objects at a high rate, with some lidar sensors producing millions of laser pulses each second. By measuring the time required for each pulse to “bounce off” an object and return to the sensor, multiplying this time by the speed of light, and halving the result to account for the round trip, the distance to the object can be calculated. Gathering this distance data at an extremely high rate produces a “point cloud,” or a 3D representation of the sensor’s surroundings, which localizes objects and even the vehicle’s own position within centimeters.

Radar has long-range detection capabilities and can track the speed of other vehicles on the road. However, radar is not equipped with the resolution or accuracy to provide the detailed, precise information supplied by lidar.

Cameras can identify colors and fonts, so they are capable of reading traffic signals, road signs and lane markings. However, unlike lidar, cameras rely on ambient light conditions to operate and are hindered in low-light conditions and when hit with direct light. In contrast, because lidar produces its own light source, it can perform well whether it is day or night.

How Lidar Technology Enables Safe, Autonomous Operation

There are multiple technology approaches

Velodyne Lidar Closer Look at Sensor Technologies

Velodyne Lidar Horizontal FoV

being advanced by lidar suppliers. In each of these approaches, the same fundamental performance metrics apply to determine if a lidar system can enable a fully autonomous car to operate successfully. The top performance features to use in evaluating lidar technologies are field of view, range and resolution. These are the capabilities needed to guide an autonomous vehicle reliably and safely through the complex set of driving circumstances that will be faced on the road. Let’s review each feature and examine how it impacts an autonomous vehicle.

Field of View. It is widely accepted that a 360-degree horizontal field of view – something not possible for a human driver – is optimal for safe operation of autonomous vehicles. Having a wide horizontal field of view is particularly important in navigating the situations that occur in everyday driving. For instance, consider the scenario of performing a high-speed merge onto a highway. The maneuver requires a view diagonally behind the autonomous vehicle to see if another car is coming in the adjacent lane. This also requires a view roughly perpendicular to where the vehicle is currently traveling to assess cars in the adjacent lane and confirm there is room to merge. Throughout this process, the vehicle must look forward so it can negotiate traffic ahead of it. For these reasons, a narrow field of view would be insufficient for the vehicle to safely execute the merge maneuver. Therefore, lidar sensors that rotate are optimal for these applications because one sensor is capable of capturing a full 360-degree view. In contrast, if an autonomous vehicle employs sensors with a more limited horizontal field of view, then more sensors are required, and the vehicle’s computer system must stitch together the data collected by these various sensors.

Velodyne Lidar Vertical FoV

Velodyne Lidar Tire Point Cloud

Vertical field of view is another area where it is important that lidar capabilities match real-life driving needs. Lidar needs to see the road to recognize the drivable area, avoid objects and debris, stay in its lane, and change lanes or turn at intersections when needed. Autonomous vehicles also need lidar beams that point high enough to detect tall objects, road signs and overhangs, as well as to navigate up or down slopes.

Range. Autonomous vehicles need to see as far ahead as possible to optimize

safety. At highway speeds, a range of 300 meters allows the vehicle the time it needs to react to changing road conditions and surroundings. Slower, non-highway speeds allow for sensors with shorter range, but vehicles still need to react quickly to unexpected events on the roadway, such as a person focused on a mobile phone stepping onto the street from between two cars, an animal crossing the road, an object falling from a truck, and debris ahead in the road. In each of these situations, onboard sensors need sufficient range to give the vehicle adequate time to detect the person or object, classify what it is, determine whether and how it is moving, and then take steps to avoid it while not hitting another car or object.

Another factor connected to range is reflectivity. Reflectivity refers to an object’s propensity to reflect light back to the sensor. Lighter colored objects reflect

AUTHOR

MIRCEA GRADU, PHD
Senior Vice President, Automotive Programs, Velodyne Lidar

Mircea Gradu, PhD, is Senior Vice President of Automotive Programs at Velodyne Lidar, responsible for the development and manufacturing of world-class products compliant with international quality standards and satisfying customer needs. Mircea brings over 25 years of experience in the automotive and commercial vehicle industry, which includes deep technical knowledge of design, development, manufacturing, safety and cybersecurity.


more light than darker objects. While many sensors are able to detect objects with high reflectivity at long range, far fewer are able to detect low-reflectivity objects at range, which is needed to be safe at highway speeds.

Resolution. High-resolution lidar is critical for object detection and collision avoidance at all speeds. Finer resolution allows a sensor to more accurately determine the size, shape and location of objects. This finer resolution outperforms even high-resolution radar and provides the vehicle with the clearest possible vision of the roadway. To examine the importance of resolution, consider the above example of a tire fragment in the road. The lidar system needs to be able to not only detect the object but also recognize what it is. This is no inconsequential task given that it requires detecting a dark object on a dark surface, so a sensor with finer resolution increases the vehicle’s ability to accurately detect and classify the object. Unlike cameras, lidar also provides 3D images of the surroundings with precise measurements of how far away objects are from the vehicle, which aids the process of responding to roadway events.

Lidar: Essential to Operating Autonomous Cars Safely

Lidar sensors produce millions of data points per second, which can enable precise, reliable navigation in real time to detect objects, vehicles and people that might pose a collision threat. Lidar can help autonomous vehicles navigate roadways at various speeds, traveling in a range of light and weather conditions such as rain, sleet and snow. Equipped with lidar, autonomous vehicles can safely and efficiently navigate in unfamiliar and dynamic environments for optimal safety.

References
https://www.ite.org/technical-resources/topics/connected-automated-vehicles/
https://ieeexplore.ieee.org/document/8451989
Image credits: https://spectrum.ieee.org/6-key-connectivity-requirements-of-autonomous-driving




Product Launch

Mobis Parking System

Even inexperienced drivers no longer have to worry about passing through a narrow street or facing a car ahead at a dead end.

Hyundai Mobis has developed the urban Advanced Driver Assistance System (ADAS) called the Mobis Parking System (MPS), which integrates Narrow Space Assistance (NSA), Reverse Assistance (RA), and Remote Smart Parking Assistance (RSPA). With the MPS, the car is able to drive itself through a narrow street by avoiding obstructions, drive

Researchers are testing related technologies at Hyundai Mobis Seosan Proving Ground.


through the revolving gate of an underground parking lot, or drive backward at a dead end where two cars are facing each other. All of this is made possible at the press of a button. Hyundai Mobis developed this technology using its own software logic and mass-produced ultrasonic sensors. This technology is based on the fact that, while RADAR and LiDAR sensors are useful for recognizing objects located far away or in high-speed driving, ultrasonic sensors are more suitable for narrow streets or underground parking lots. The ultrasonic sensors recognize objects over a short distance, while the software logic and the control system perform self-driving. One of the core technologies of this system is Narrow Space Assistance (NSA). The car needs only 16 inches of extra space on both sides to drive through a

narrow street by itself. Another core technology is Reverse Assistance (RA). It records the car’s travel route in real time and creates the reverse route by itself at the press of a button; the steering wheel and vehicle speed are controlled automatically. Aside from the MPS, various other technologies for safety and convenience have been integrated into the system, further enhancing the competitiveness of the driver assistance solution. The Remote Smart Parking Assistance (RSPA) system is capable of parking a car at a right angle or in parallel by finding an empty space when the driver is out of the car and presses the remote. The 3D Surround View Monitor (SVM) provides a better parking experience by showing the area 360 degrees around the car three-dimensionally. Rear Autonomous Emergency Braking (R-AEB) is also noteworthy.


Product Launch

Samsung introduces chips for automobiles

Samsung Electronics introduced three of its latest automotive chip solutions: the Exynos Auto T5123 for 5G connectivity, the Exynos Auto V7 for comprehensive in-vehicle infotainment systems, and the ASIL-B certified S2VPS01 power management IC (PMIC) for the Auto V series.

Exynos Auto T5123: 5G Connectivity Solution for Automobiles

The Exynos Auto T5123 is a 3GPP Release 15 telematics control unit specifically designed to bring fast and seamless 5G connectivity, in both standalone (SA) and non-standalone (NSA) modes, to the next generation of connected cars. It delivers essential information to the vehicle in real time via high-speed downloads of up to 5.1 gigabits per second (Gbps) and allows passengers to enjoy a host of new services such as high-definition content streaming and video calls on the go. To efficiently process the large amounts of data transmitted and received through the 5G modem, the Exynos Auto T5123 supports a high-speed PCIe (PCI Express) interface and low-power, high-performance LPDDR4x mobile DRAM. In addition, the unit comes with two Cortex-A55 CPU cores and a built-in Global Navigation Satellite System (GNSS) to minimize the use of external ICs and help reduce product development time. The T5123 meets stringent requirements for automotive components and is Automotive Electronics Council-Q100 (AEC-Q100) qualified.

Exynos Auto V7: A Powerful Processor for IVI Systems in Mid- to High-End Vehicles

The Exynos Auto V7 is the newest addition to Samsung’s automotive-brand processor lineup and is designed for in-vehicle infotainment systems. For powerful processing performance, the V7 integrates eight 1.5-gigahertz (GHz) Arm Cortex-A76 CPU cores and 11 Arm Mali-G76 GPU cores. The GPU comes in two separate groups, with three cores in the ‘small’ domain for the cluster display and AR-HUD, and eight in the ‘big’ domain for the central information display (CID) and others. Such physical separation allows the GPU to support multiple systems simultaneously and brings safer operation, as it keeps one domain from interfering with another. In addition to its powerful CPU and GPU, the V7 is equipped with an NPU for convenient services such as virtual assistance that can process visual and audio data for face, speech or gesture recognition features.

The Exynos Auto V7 supports up to four displays and 12 camera inputs that provide information to assist drivers and passengers. The V7’s imaging system provides bad pixel correction, dynamic range compression and geometric distortion correction to deliver noiseless and distortion-free images for features like surround view and parking assistance. The chip comes with three HiFi 4 audio processors that deliver excellent audio quality for songs, movies and even games on the go. To run all these features as smoothly as possible, the V7 has up to 32 gigabytes (GB) of LPDDR4x memory capacity that offers high bandwidth of up to 68.3 gigabytes per second (GB/s). The Exynos Auto V7 also offers strong data protection through an isolated security processor for crypto operations and provides a hardware key using a one-time programmable (OTP) or physical unclonable function (PUF). Furthermore, for critical functional safety, the Exynos Auto V7 complies with ASIL-B requirements, with safety support for a digital cluster and an embedded safety island that detects and manages faults to maintain a safe state with a fault management unit (FMU). The Exynos Auto V7 is currently in mass production and is being used in Volkswagen’s latest In-Car Application-Server (ICAS) 3.1, developed by LG Electronics’ VS (Vehicle component Solutions) division, to power its next-generation in-vehicle infotainment system.

S2VPS01: An ASIL-B Certified Power Management IC for the Exynos Auto V Series

The S2VPS01 is a PMIC specifically designed and developed for the Exynos Auto V9 and V7. It is Samsung’s first automotive solution produced under the ISO 26262 functional safety process certification acquired in 2019, and it also achieved ASIL-B certification in 2021. An Automotive Safety Integrity Level (ASIL), specified under ISO 26262, ranges from A to D, with D being the highest level. The level is assigned by analyzing and assessing the severity, exposure and controllability of vehicle operations in a number of environments. To ensure the safety of vehicle systems, ASIL-B compliance is becoming a key requirement for automotive OEMs and their Tier 1 suppliers when selecting partners and solutions. The S2VPS01 regulates and rectifies the flow of electrical power, allowing reliable and robust in-vehicle infotainment system performance. It comprises highly efficient triple/dual-phase buck converters and integrates a low-dropout regulator (LDO) and a real-time clock (RTC) within the package. For protection from harsh thermal and electrical conditions, the power IC comes with various protection functions, including over-voltage protection (OVP), under-voltage protection (UVP), short circuit protection (SCP), over-current protection (OCP), thermal shutdown (TSD), clock monitoring and output stuck checks.



Industry Insight

Developing Secure Software for Autonomous Vehicles

DENNIS KENGO OKA

Synopsys

Autonomous vehicles and the need for cybersecurity

The automotive industry is going through a transformation and moving toward software-defined vehicles, where a majority of functionality is provided and controlled by software. This allows for updatability, scalability and flexibility, which leads to improved user experience. One such trending functionality is autonomous driving. The automotive industry is working on several standardization activities to guide automotive organizations in developing safe and secure autonomous vehicles (AVs). For example, a set of updated provisional national standards to guide the deployment of safe and secure AVs in Singapore was released in August 2021. Known as TR68 [1], this technical reference comprises four parts and is a comprehensive guide for both AV developers and AV operators. The first part covers basic behavior of AVs, including topics on the Dynamic Driving Task (DDT) and Automated Driving System (ADS). The second part is on safety guidance for AVs, including design,

production, and safe operation. The third part covers cybersecurity, focusing on high-level security principles and a cybersecurity assessment framework. The fourth part is on vehicular data types and formats related to data recorded for automated driving, HD mapping, and V2X information exchange. It is important to recognize that while AVs provide many benefits, there are serious cybersecurity risks that need to be considered. For instance, a semi-autonomous vehicle can contain more than 300 million lines of code, and it is estimated that a fully autonomous car will contain more than 1 billion lines of code. The development of new, complex and large software solutions, combined with the use of novel and advanced technologies and interfaces such as Artificial Intelligence (AI), Lidar, sensors, cameras, V2X, and 5G, increases the risk of introducing vulnerabilities. As a simplified example, AVs use sensors and cameras to gather input about their surroundings together with GPS and map data. This input is then processed by AI to make decisions that control steering, braking and acceleration of the vehicle,

Figure 1. AVs using various inputs processed by AI to make decisions for vehicle control.


as shown in Figure 1. Cyberattackers can deliberately target these different attack surfaces to cause the AV to misbehave, which could have fatal consequences.

Automotive organizations becoming software organizations

With the shift toward software-defined vehicles, autonomous driving functionality could be considered as one application providing a set of functionality running on top of a software platform or operating system (OS) in the vehicle. The development of these types of applications and software platforms is causing a disruption in the automotive industry, leading to a supply chain transformation. That is, more automotive software is developed to run on platforms, different software variants are being consolidated, and software is becoming independent from hardware, which allows reuse of software components. Automotive organizations are also acquiring or partnering with tech companies, including cloud providers, software development service providers and cybersecurity companies.

Figure 2 provides an overview of an HPC (high-performance computer) architecture, where different OSs run on top of a hypervisor. For example, the autonomous driving functionality could be provided by an application running on the AUTOSAR Adaptive Platform. In addition, a Trusted Execution Environment with a root of trust in hardware [3] can provide a number of security features, including secure boot, secure execution and secure storage. For automotive organizations to be able to more efficiently develop these types of software, there is a need to shift the development methodology from the traditional V-model often used for ECU (electronic control unit) software


Figure 2. Overview of HPC architecture, including multiple OS and applications.

Figure 3. DevSecOps methodology allowing for continuous integration, continuous testing, and continuous delivery.

development to a more agile DevSecOps development methodology. Adopting a DevSecOps methodology allows the organization to improve development speed, reduce costs, perform automated testing through the use of appropriate tools, and improve quality and security through continuous feedback. Depending on the type of software, it is possible to perform various activities for continuous integration, continuous testing and continuous delivery, as shown in Figure 3. To change the software development methodology, the automotive industry can learn from other industries that have already started this journey. For example, a study of 440 large software development organizations, across 12 industries and nine countries, measured 46 activities related to software development with the goal of identifying the key success factors [4]. The most important of the measured activities were grouped into four groups, i.e., the four key success factors for software development: development tools, talent management, culture and

product management. This article focuses on solutions for development tools.

Application security testing approaches

As mentioned, deploying the right development tools is one of the top items identified as a success factor for software development. To get a better perspective on DevSecOps, a study of 350 enterprise IT organizations was conducted with the goal of identifying the most critical application security testing approaches to add to a DevSecOps program [5]. An overview of the results from the study is presented in Figure 4. In the following, we discuss software composition analysis, static analysis and fuzz testing related to AVs in more detail, since appropriate automated tools exist that can be deployed. While dynamic analysis and third-party penetration testing are also included in the top results from the study, the main goal of DevSecOps is to use automated tools, and there are currently not many automated tools for these test approaches applicable to AVs.

Regarding software composition analysis, the purpose is to identify included open-source software components in the codebase for the automotive system, as well as to identify any associated known vulnerabilities in those open-source components. First, it is important to recognize that there may be open-source software components included in the autonomous driving system. For example, the Autoware Foundation supports open-source projects for self-driving mobility, including functions for autonomous valet parking [6]. There may also be various open-source communication stacks or libraries used in the autonomous driving system providing commonly used functionality. Using a software composition analysis tool integrated into the DevSecOps process allows for automated detection of included open-source software components and associated known vulnerabilities, as well as continuous vulnerability monitoring even after software has been released [7].

Regarding static analysis, there are automated tools that can scan source code to detect various defects such as buffer overflows, resource leaks, memory corruption, hardcoded credentials, null pointer dereferences, etc. [8]. Equally important for safety-critical systems is the need to check for compliance with coding guidelines such as MISRA C/C++ and AUTOSAR C++, which can be achieved using static analysis tools. More specifically, CUDA is often used in autonomous driving systems, and as such, it is imperative that the static analysis tool supports checkers for CUDA. CUDA stands for Compute Unified Device Architecture and is a general-purpose parallel computing platform and programming model for GPUs developed by NVIDIA. There is a dedicated C/C++ compiler (nvcc) and libraries (APIs) available. CUDA’s main purpose is to accelerate compute-intensive applications by leveraging GPUs for parallel processing.
More than 25 companies are already using NVIDIA CUDA GPUs to develop fully autonomous robot taxis (SAE Level 5). A simplified example of the CUDA processing flow is illustrated in Figure 5 [9]. In the center of the figure is the application code. Part of the application code contains compute-intensive functions, highlighted by the gray box, that will be executed on the Device (GPU). The rest of the code will be executed on the Host (CPU). There are four steps in the CUDA processing flow. In the first step,



the Host copies the data that will be used by the compute-intensive function from the Host Memory to the Device Memory. In the second step, the Host instructs the Device to execute the compute-intensive function. In the third step, the Device processes the compute-intensive function in parallel on multiple cores, reading the stored data from Device Memory (from step 1) and storing the resulting output back into Device Memory. Finally, in the fourth step, the output stored in Device Memory is copied back to the Host Memory and can then be used by the rest of the application code on the Host. This type of processing flow is useful for autonomous driving applications using compute-intensive functions, such as image processing, that can be offloaded to the Device and processed in parallel on multiple cores of the GPU.

The following code example provides some more specifics on the CUDA extensions used in the C programming language [9, 10]. To better understand the example, some CUDA-relevant terminology is described first:

kernel: code that will execute on the Device

__global__: declares a function called from the Host (CPU) and executed on the Device (GPU)

<<< >>>(): function call from the Host to a kernel on the Device

To support memory management of the Device Memory, the following functions are available: cudaMalloc(), cudaMemcpy() and cudaFree().

The code example in Figure 6 is described as follows, highlighting the CUDA-relevant parts. First, __global__ is used to declare that the add() function will be called from the Host and executed on the Device. Then, cudaMalloc() is used to allocate space in the Device Memory to store the input data. Next, cudaMemcpy() is called to copy the input data from Host Memory to Device Memory. Using the copied data as input, the add() kernel is executed on the Device by calling it using <<< >>>(). Once the Device has processed the input and stored the result in Device Memory, the result is copied to the Host Memory using cudaMemcpy().
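Since Figure 6 is not reproduced in this layout, the following sketch reconstructs the kind of vector-add example the description walks through, following the cited NVIDIA CUDA basics material [9]; the array size and launch configuration are illustrative choices of our own:

```cuda
#include <stdio.h>

#define N 512

/* Kernel: runs on the Device (GPU); each thread adds one element. */
__global__ void add(const int *a, const int *b, int *c) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < N) c[i] = a[i] + b[i];
}

int main(void) {
    int a[N], b[N], c[N];
    int *d_a, *d_b, *d_c;            /* Device copies of a, b, c */
    size_t size = N * sizeof(int);

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    /* Step 1: allocate Device Memory and copy inputs Host -> Device. */
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);
    cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);

    /* Steps 2-3: launch the kernel; the Device executes the threads
     * in parallel across its cores. */
    add<<<(N + 127) / 128, 128>>>(d_a, d_b, d_c);

    /* Step 4: copy the result Device -> Host. */
    cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);

    /* Clean up Device Memory. */
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);

    printf("c[2] = %d\n", c[2]);  /* a[2] + b[2] = 2 + 4 */
    return 0;
}
```

Compiled with nvcc, this mirrors the four-step processing flow of Figure 5: copy in, launch, compute in parallel, copy out.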

Figure 4. Results from study showing most critical application security testing to add to DevSecOps.


Figure 5. CUDA processing flow between Host and Device.


Finally, the Device Memory is cleaned up using cudaFree(). This simple example illustrates code executed on both Host and Device and the need for memory management. To help developers avoid programming mistakes, static analysis can be integrated into the DevSecOps process to run CUDA-specific checkers that can detect issues with, e.g., kernel launch, fork, device threads, synchronization and memory management [8, 11].

Regarding fuzz testing, there are automated tools that can generate invalid input that is then provided to a target system to detect unknown vulnerabilities. For example, fuzz testing tools can be used to test protocol implementations of CAN, Ethernet, Wi-Fi, and Bluetooth, among others [12]. Fuzz testing can also be integrated into the DevSecOps process to automate testing [13, 14]. More specifically for AVs, C-V2X communication based on 5G is becoming more relevant. The 5G Automotive Association (5GAA) is a global, cross-industry organization of companies created in 2016 to connect the telecom industry and vehicle manufacturers to develop end-to-end solutions for future mobility and transportation services. 5GAA describes a number of C-V2X use cases, including cross-traffic left-turn assist, intersection movement assist, emergency brake warning, traffic jam warning, speed harmonization, and lane change warning [15]. While it is important to test the external interfaces on AVs, it is equally important to test the security and robustness of the 5G network supporting AVs through these types of use cases. The 5G network is a complex interconnected system comprising a set of different components and protocols, as illustrated in Figure 7. For example, AVs are considered user equipment in this network.
Moreover, there is communication between 4G eNodeB base stations and 5G gNodeB base stations using the X2AP protocol, communication between 5G gNodeB base stations using the XnAP protocol, communication within 5G gNodeB base stations using the F1AP and E1AP protocols, communication between 5G gNodeB base stations and the 5G core network using the NGAP protocol, and communication within the 5G core network using the PFCP protocol, among others. A software vulnerability in any of these protocol implementations could impact the security, robustness and availability of features and functions required for successful


execution of use cases to support AVs. Therefore, performing fuzz testing of such protocol implementations to detect unknown vulnerabilities plays a major role in ensuring the robustness and security of 5G network deployments.

Figure 6. Code example showing CUDA extensions.

Figure 7. Overview of 5G network consisting of user equipment, base stations and core network.

Summary

With the development of more advanced autonomous driving functionality, including the usage of more external inputs and communications, larger software codebases and potential usage of open-source software, the need for cybersecurity becomes paramount. The automotive industry is seeing a shift toward the DevSecOps development methodology to improve development speed, reduce costs, perform automated testing through the use of appropriate tools, and improve quality and security through continuous feedback. Specifically, this article highlights the usage of automated tools for software composition analysis, static analysis and fuzz testing to detect vulnerabilities earlier in the development lifecycle.

References
1. Manufacturing Standards Committee, “TR 68”, 2021
2. Dennis Kengo Oka, “Building Secure Cars: Assuring the Automotive Software Development Lifecycle”, Wiley, 2021
3. Synopsys, “DesignWare tRoot Secure Hardware Root of Trust”, https://www.synopsys.com/dw/ipdir.php?ds=security-troot-hw-root-of-trust
4. Georg Doll, “When things get complex”, Automotive Computing Conference 2020
5. 451 Research Advisory, “DevSecOps Realities and Opportunities”, 2018
6. The Autoware Foundation, https://www.autoware.org/
7. Synopsys, “Black Duck Software Composition Analysis”, https://www.synopsys.com/software-integrity/security-testing/software-composition-analysis.html
8. Synopsys, “Coverity Static Application Security Testing”, https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html
9. NVIDIA, “CUDA C/C++ Basics”, https://www.nvidia.com/docs/IO/116711/sc11-cuda-c-basics.pdf
10. NVIDIA, “CUDA Toolkit Documentation”, https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html
11. NVIDIA, “CUDA C++ Programming Guide”, https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf
12. Synopsys, “Defensics Fuzz Testing”, https://www.synopsys.com/software-integrity/security-testing/fuzz-testing.html
13. Dennis Kengo Oka, and Tommi Makila, “A Practical Guide to Fuzz Testing Embedded Software in a CI Pipeline”, FISITA World Congress 2021
14. Nico Vinzenz, and Dennis Kengo Oka, “Integrating Fuzz Testing into the Cybersecurity Validation Strategy”, SAE World Congress 2021, SAE Technical Paper 2021-01-0139, https://doi.org/10.4271/2021-01-0139
15. 5GAA, “C-V2X Use Cases Methodology, Examples and Service Level Requirements”, White Paper

AUTHOR

DENNIS KENGO OKA
Principal Automotive Security Strategist, Synopsys

Dr. Dennis Kengo Oka is an automotive cybersecurity expert with more than 15 years of experience in the automotive industry. As a Principal Automotive Security Strategist at Synopsys, he focuses on security solutions for the automotive software development lifecycle and supply chain. Dennis has over 70 publications, consisting of conference papers, articles and books, and is a frequent speaker at international automotive and cybersecurity conferences.

December 2021 | Telematics Wire | 53


Autonomous Vehicle/ADAS

New Beginning of HL Klemove, a company specializing in autonomous driving

HL Klemove was officially launched on December 2, 2021, after Mando Mobility Solutions (MMS) was split off from Mando Corp. and merged into HL Klemove. Under the leadership of its first CEO, Paljoo Yoon, the company aims to open the era of fully autonomous driving as a leading company in the field of autonomous driving and mobility. HL Klemove proclaimed that it will deliver the safest fully autonomous driving solutions to popularize autonomous driving. It is preparing for another leap forward by focusing on developing cutting-edge autonomous driving solutions, building on a track record of more than 2,000 patents for its autonomous driving technologies and more than 20 million ADAS units supplied to diverse customers. The company announced that it will complete commercialization of core autonomous driving products, including Lidar, 4D imaging radar, high-resolution cameras, in-cabin sensors, and an integrated domain control unit, by 2025, and will ambitiously pursue business expansion to achieve sales of 1.2 trillion won in 2021, 2.4 trillion won in 2026, and as much as 4 trillion won in 2030.

ADAS analysis from LexisNexis Risk Solutions sets record straight on U.S. Auto claims severity

LexisNexis® Risk Solutions announced a new analysis of the relationship between Advanced Driver Assistance Systems (ADAS) features and U.S. auto insurance claims, which counters the common insurance industry sentiment that ADAS repair costs offset ADAS loss cost benefits. The results are featured in the new white paper, “True Impact of ADAS Features on Insurance Claim Severity Revealed,” which examines the effect of ADAS on claim severity in a multivariate setting. The findings represent the second half of a two-part study that also examined the relationship between ADAS and claims frequency. The combined findings show that the change in claim severity in vehicles with ADAS was minimal, while the decrease in claim frequency was significant. The LexisNexis Risk Solutions study shows an overall reduction in loss cost by coverage, which can warrant ADAS feature-based policy discounts and benefits for both drivers and insurers.

Vision perception software ‘SVNet’ for LG’s ADAS Camera System

StradVision has announced that it is providing its camera perception software, SVNet, for LG Electronics’ latest ADAS Front Camera System. For the various safety functions delivered by the system, StradVision offered full customization of Object Detection and Free Space Detection. StradVision is accelerating the advancement of autonomous vehicles through SVNet, which relies on deep learning-based perception algorithms. Compared to competing solutions, SVNet achieves much higher efficiency in memory usage and energy consumption, and can be customized and optimized for any system-on-chip (SoC) thanks to its patented, cutting-edge Deep Neural Network. The software also works seamlessly with other sensors such as LiDAR and RADAR to achieve surround vision. SVNet is currently used in mass-production ADAS and autonomous driving models supporting safety functions at Levels 2 to 4.

Kodiak Robotics raises $125 million in oversubscribed Series B round

Kodiak Robotics, Inc. announced that it has raised $125 million in an oversubscribed Series B fundraising round, for a total of $165 million raised to date. The round includes investments from SIP Global Partners, Lightspeed Venture Partners, Battery Ventures, CRV, Muirwoods Ventures, Harpoon Ventures, StepStone Group, Gopher Asset Management, Walleye Capital, Aliya Capital Partners, and others. Recently announced investors include Bridgestone Americas and BMW i Ventures. Kodiak is building what it calls the industry’s most advanced technology stack, purpose-built for long-haul trucks. The company will use the Series B funds over the next 12 months to double employee headcount by adding at least 85 new people, expand autonomous service capabilities coast to coast, and add a minimum of 15 new trucks for a total of at least 25 autonomous vehicles. Kodiak has shown significant momentum in 2021. The company recently unveiled its next-generation autonomous trucks, which use a modular and discrete sensor approach to vastly simplify installation and maintenance while increasing safety. Kodiak has hauled freight daily for a range of industry partners, including some of the nation’s largest carriers and brands, since 2019.




MORAI and dSPACE to co-develop autonomous driving validation simulator

MORAI has signed an MOU with dSPACE Korea to work together on developing co-simulation solutions. An autonomous driving simulator requires a core engine, which MORAI designs, develops, and distributes to some 100 clients, chief among them Hyundai Mobis, Naver Labs, and Samsung Engineering. With financial backing from some of Korea’s largest companies, such as Naver, Hyundai Motor Company, Kakao Ventures, and Atinum Investment, MORAI successfully completed a Series A funding round earlier this year. One of MORAI’s key technologies is the automated generation of digital twins from HD map data, which allows large simulation environments to be built. In autonomous driving validation, safety and reliability must be proven through repetitive testing, which is why MORAI’s technology is held in such high esteem in the field of autonomous vehicle validation.

Udelv to unveil autonomous cab-less Transporter, driven by Mobileye, at CES 2022

Udelv announced that it will unveil the Transporter, its cab-less electric delivery vehicle for multi-stop delivery, driven by Mobileye, at CES. The Transporter aims to revolutionize the automotive and logistics industries as a vehicle capable of driving at highway speeds without a cab while making up to 80 stops per delivery run. Udelv’s patented adaptive shelving and hot-swappable cargo pod, the uPod, combines hardware and software for intelligent loading and unloading, state-of-the-art telematics, remote control and operations, and fleet management services. It is “driven” by Mobileye’s self-driving system, comprising 360-degree sensing, safety technology, an AV map, and driving policy. Udelv plans to operate more than 50,000 Mobileye-driven Transporters by 2028, with commercial deployment beginning at the end of 2023.

Argo AI and The League of American Bicyclists establish autonomous vehicle guidelines for safe driving around cyclists

As part of its commitment to developing self-driving technology that positively impacts communities, Argo AI released the technical guidelines it applies to ensure safe interactions between autonomous vehicles and cyclists, and encourages others to do the same. The guidelines, created in collaboration with The League of American Bicyclists, a national advocacy group on a mission to build a Bicycle Friendly America for everyone, are intended as a foundation for further innovation and improvement among companies developing self-driving technology. Together they outlined six technical guidelines for how a self-driving system (SDS) should accurately detect cyclists, predict cyclist behavior, and drive consistently to share the road safely and effectively:
● Cyclists Should Be a Distinct Object Class
● Typical Cyclist Behavior Should Be Expected
● Cycling Infrastructure and Local Laws Should Be Mapped
● An SDS Should Drive in a Consistent and Understandable Way
● Prepare for Uncertain Situations and Proactively Slow Down
● Cyclist Scenarios Should Be Tested Continuously

HEADLINES
● ADAS analysis from LexisNexis Risk Solutions sets record straight on U.S. Auto claims severity
● Karamba Security raises $10M in new funding from leading Asian Corps.
● Kodiak Robotics raises $125 million in oversubscribed Series B round
● Baidu Apollo wins approval for commercialized autonomous car service on open roads in Beijing
● New Beginning of HL Klemove, a company specializing in autonomous driving
● Beep and Local Motors successfully conclude autonomous shuttle project at Yellowstone National Park



Telematics, Maps & Locations

AK300 LTE vehicle tracker for 4G migration

Road condition monitoring system from Honda

Honda Research Institute USA, Inc. is developing a road condition monitoring system that uses vehicle technology to evaluate road conditions and detect possible hazards. With this vehicle-generated road condition reporting system, Honda hopes to help road operators monitor lane marking conditions in a more frequent, efficient, and cost-effective way, fulfilling Honda’s concept of “Safety for Everyone.” The system uses GPS coordinates and sensors such as cameras to collect real-time data that allows road operators to identify the location of lane markings that need repair. The road condition information, including longitude and latitude coordinates along with relevant images and video clips, is captured by the vehicle, anonymized, and then streamed to a secure platform for analysis. Road operators can access this platform to identify the location, type, and severity of road conditions and hazards, and obtain still images and video.
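The capture, anonymize, and stream pipeline described for the Honda system can be sketched in a few lines. Honda has not published its actual data format, so the report schema and field names below are invented purely for illustration.

```python
# Hypothetical road-condition report schema; Honda's real format is not public.
def anonymize(report: dict) -> dict:
    """Drop vehicle-identifying fields, keep the road-condition payload."""
    identifying = {"vin", "plate", "driver_id"}
    return {k: v for k, v in report.items() if k not in identifying}

raw = {
    "vin": "1HGCM82633A004352",        # identifies the vehicle: must be stripped
    "driver_id": "u-981",              # identifies the occupant: must be stripped
    "lat": 37.2431, "lon": -121.7817,  # where the worn lane marking was observed
    "condition": "lane_marking_faded",
    "severity": 3,
    "image_ref": "frame_000123.jpg",
}

upload = anonymize(raw)  # only the anonymized record is streamed for analysis
```

The design point is that anonymization happens in the vehicle, before upload, so the analysis platform only ever receives location, condition, and media fields.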

ATrack Technology Inc. recently launched the AK300, a new mainstream LTE vehicle tracker, in anticipation that many countries in Europe, Latin America, and Southeast Asia will gradually terminate their 2G and 3G network services in the coming years. The AK300 combines GPS/GLONASS positioning with 4G LTE Cat 1 communication (with 2G fallback), enabling vehicle location monitoring and remote control over the cellular network. With its intelligent event control engine, users can define various combinations of vehicle conditions and trigger corresponding actions to meet their unique requirements. The AK300-2G variant is available with 2G connectivity only. AK300 features:
● LTE connectivity
● Multiple interfaces
● Bluetooth support
● CAN bus

Hub Drive Safe App

Hub International Limited (HUB) announced the launch of its HUB Drive Safe App for iOS and Android, a fleet risk management solution that provides driver coaching at scale, automatically detects collisions, and helps fleets reduce their insurance costs through safer driving behavior. The HUB Drive Safe App offers immediate feedback and coaching after every trip to help commercial drivers avoid collisions and remain safe on the road. The mobile app also gives fleet managers a detailed overview of the safety and efficiency of their drivers, all via one comprehensive dashboard. The app is powered by a global dataset and a mobility risk intelligence (MRI) platform to ensure the most precise collision detection and first-notice-of-loss capabilities on the market. Additional safety features include:
● View and customize a driver profile to improve driving behavior, with tips based on performance tracked in the app
● Gain high-level insights on the fleet's performance with a fleet score
● Automatically detect collisions, alert the fleet manager, and notify an emergency contact or trigger the insurance claims process
● Encourage consistent improvement in driving behavior with positive reinforcement and/or rewards


The AK300 also supports applications including:
● Tire pressure and temperature monitoring: With Bluetooth wireless tire pressure and temperature sensors, fleet managers can monitor tire pressure and temperature in real time, helping prevent blowouts and detect abnormal temperatures.
● Detailed vehicle information retrieval: Through an external adapter, operations managers can remotely retrieve real-time position, fuel consumption, mileage, and vehicle status at any time.
● Driver identification: Using an RFID reader and tags, the fleet manager can make sure every vehicle is used by the right person at the right time.
Additional applications are also available, including SOS emergency calls and photography. The tracker can also detect whether the driver is idling, speeding, or exhibiting other risky behaviors, in order to improve the overall management efficiency of the fleet.
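The event control engine described for the AK300 is, in essence, a rule engine that maps combinations of vehicle conditions to actions. ATrack's actual configuration interface is not public, so the sketch below is purely illustrative; the telemetry fields, rule, and threshold are all assumptions.

```python
# Illustrative only: ATrack's real event-engine configuration is not public.
def make_rule(conditions, action):
    """Build a rule that fires its action when every condition holds."""
    def evaluate(telemetry):
        if all(condition(telemetry) for condition in conditions):
            action(telemetry)
            return True
        return False
    return evaluate

alerts = []

# Hypothetical rule: alert when the vehicle speeds while the ignition is on.
overspeed = make_rule(
    conditions=[
        lambda t: t["ignition"],
        lambda t: t["speed_kmh"] > 100,
    ],
    action=lambda t: alerts.append(("overspeed", t["speed_kmh"])),
)

fired = overspeed({"ignition": True, "speed_kmh": 112})     # all conditions hold
ignored = overspeed({"ignition": False, "speed_kmh": 112})  # ignition off: no alert
```

Combining several such condition lambdas is what "various combinations of vehicle conditions" amounts to; on a real tracker the action would be an SMS, a server report, or an output pin rather than a list append.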



GPS Insight acquires FieldAware to create a comprehensive fleet management, field services, and telematics software platform

Point One Navigation and Quectel bring precise location to robotics and agriculture markets

Point One Navigation and Quectel Wireless Solutions announced the LG69T-AM, the latest addition to the LG69T GNSS module series. Point One’s positioning engine powers the LG69T-AM and enables centimeter-level global accuracy by integrating augmented GNSS in an affordable yet easy-to-use module with an open-source API. The LG69T-AM features STMicroelectronics’ TeseoV dual-band L1/L5 positioning receiver platform, with 80 tracking and 4 fast-acquisition channels compatible with GPS, GLONASS, Galileo, BeiDou, QZSS, and NavIC. The module leverages Point One’s RTK and SSR technology for centimeter-level accuracy and ultra-fast convergence times. It is designed for easy integration with minimal e-BOM modification and is well suited for mass-market adoption without the need for an expensive external co-processor. Its small package size, light weight, and low power consumption make it ideal for applications such as robotics and precision agriculture.

IoT CMP vendors add eSIM management capabilities to simplify logistics and localise connectivity

Berg Insight released new findings about the market for IoT connectivity management platforms (CMPs), a standard component in the value proposition from mobile operators and IoT MVNOs around the world. Recent developments in the domains of network virtualisation, SIM technology and LPWA networking are currently driving a shift in the market towards a greater diversity of IoT connectivity management services. As enterprises in various sectors inherently have different connectivity needs, IoT CMP vendors and IoT managed service providers are introducing new services to address different segments of the market, ranging from mission-critical to massive IoT applications. Delivery of global IoT connectivity services comprises another key focus area, propelled by enterprises’ demand for managing their global IoT device deployments through one platform or communications service provider. About 67 percent of the global installed base of 1.74 billion IoT SIMs were managed using commercial connectivity management platforms at the end of 2020. Huawei is the leading IoT CMP vendor in terms of volume, with close ties to the domestic operators China Mobile and China Telecom, and managed over 900 million IoT SIMs in Q2 2021. Whale Cloud, formerly known as ZTEsoft and partly owned by Alibaba Group since 2018, is the runner-up in the Chinese market. Cisco is the dominant IoT CMP vendor outside of China with about 180 million connections in mid-2021, followed by Vodafone and Ericsson. Vodafone stands out as the only mobile operator that licenses its platform to third-party service providers. IoT CMPs are also a key component in the offerings from technology providers and IoT MVNOs such as 1NCE, EMnify, floLIVE, IoTM Solutions and Mavoco.

GPS Insight announced the acquisition of FieldAware. The acquisition advances GPS Insight’s field services and fleet tracking capabilities, allowing it to better serve customers of all sizes through a more robust and comprehensive digital platform that delivers operational insights and cost savings. It also better positions GPS Insight to expand its field service solutions to meet the unique digital field service challenges of mid-market and enterprise service organizations across core industries, such as industrial and commercial equipment, solar and renewable energy, facility and property management, waste management, construction, HVAC, electrical, and plumbing, while ensuring fleet performance, driver safety, and compliance for a combined customer base of more than 250,000 vehicles.

ADESA to fully deploy automated vehicle tracking solution across North America

ADESA and CoreKinect announced the complete deployment of ADESA’s fully automated vehicle tracking solution. The new service combines a state-of-the-art mobile app with a GPS-enabled IoT device to help customers and employees locate vehicles faster at ADESA’s more than 70 North American locations. Affixed to each vehicle upon arrival at ADESA, the IoT devices can be accessed through the Vehicle Locator functionality of the ADESA marketplace app. The app shares comprehensive vehicle information, pinpoint location services accurate to a single parking spot, and real-time visibility. Enhanced safety and security measures include push alerts for movement during off-working hours, making ADESA a more secure place to do business. The device operates on LoRaWAN technology, which significantly reduces power consumption and enables the IoT devices to last up to 10 years on a single battery.



Connected Vehicle

Elektrobit launches Automotive Ethernet switch firmware for secure, high-performance in-vehicle communications

Elektrobit (EB) announced Automotive Ethernet switch firmware enabling secure, high-performance communications for in-vehicle networks. Available for switches from leading hardware vendors, and already on the road in production electric vehicles (EVs), EB zoneo SwitchCore will make it easier for carmakers and their suppliers to develop the advanced, high-bandwidth communications systems required by next-generation vehicles. EB zoneo SwitchCore is firmware that adds a layer of intelligence to Automotive Ethernet switches, enabling them to handle the ever-increasing network functions required to enhance the scalability, safety, and security of vehicles. It provides advanced network management and network security functions such as routing, gateways, firewalls, and network intrusion detection and prevention systems.
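To make one of the listed functions concrete, the sketch below shows allow-list filtering of the kind an in-vehicle Ethernet firewall performs. This is a generic illustration, not EB zoneo SwitchCore's API; the VLAN IDs, MAC prefixes, and frame representation are invented.

```python
# Hypothetical allow-list firewall for an in-vehicle Ethernet switch.
# Real switch firmware works on parsed frame headers in silicon; here a
# frame is just a dict for illustration.
ALLOWED = {
    # (source VLAN, destination MAC prefix) pairs permitted to pass
    (10, "02:00:ac"),   # e.g. ADAS domain -> central compute
    (20, "02:00:b4"),   # e.g. infotainment -> gateway
}

def forward(frame: dict) -> bool:
    """Forward only frames whose VLAN/destination pair is on the allow list."""
    key = (frame["vlan"], frame["dst_mac"][:8])  # first 3 MAC octets
    return key in ALLOWED

ok = forward({"vlan": 10, "dst_mac": "02:00:ac:01:02:03"})       # permitted pair
blocked = forward({"vlan": 30, "dst_mac": "02:00:ac:01:02:03"})  # unknown VLAN
```

The allow-list (deny-by-default) shape is the design point: in a zonal architecture, traffic that does not match a known, engineered communication path is dropped rather than forwarded.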

Patents to delete personal information from vehicles

Privacy4Cars announced that it has secured two patents, including one for its proprietary process to remove private information from in-vehicle modules. Privacy4Cars offers a global tool for the deletion of personal information, including phone numbers, call logs, text messages, garage door codes, and more, that would otherwise remain stored in modern vehicles’ systems after a handoff. In 2014, Privacy4Cars founder Andrea Amico authored the first statistical study on how frequently, and what kind of, data is left in rental and for-sale vehicles. It was while completing the project that he realized how common it was for drivers and occupants to leave, and unknowingly grant others access to, highly detailed digital footprints of personal information in vehicles they no longer owned or controlled. Amico also realized that companies were putting themselves at risk of violating a budding and increasingly complex web of local and global privacy and data security regulations. The idea of a solution to these data privacy issues for consumers and businesses was sparked, and Privacy4Cars was born.

HEADLINES
● Upstream Security lands investment from BMW i Ventures to accelerate the development of connected vehicle cybersecurity solutions
● Sibros to showcase deep connected vehicle technology at CES 2022
● Daimler Mobility and Visa form global technology partnership to integrate digital commerce into the car seamlessly and conveniently
● Qualcomm delivers premium in-vehicle experiences for new PEUGEOT 308 with Snapdragon Automotive Cockpit solutions
● ZF accelerates the digital transformation of its products and processes worldwide via Microsoft Cloud
● ParkMobile expands into more locations with Google Pay direct payment option in Google Maps
● GlobalFoundries, Ford to address auto chip supply and meet growing demand
● BlackBerry, L-SPARK launch third cohort of accelerator program to advance Canadian connected vehicle technology innovation
● CalAmp’s LoJack España connected car solutions enhance Bipi’s rental car subscriber service with maintenance and recovery services



Mergers, Acquisitions & Alliances

HEADLINES

Otonomo collaborates with AWS to leverage AWS IoT FleetWise

Otonomo Technologies Ltd. announced its plans for building on its Amazon Web Services (AWS) Partner Network (APN) membership and existing integration with the AWS Connected Mobility Solution by adding support for AWS IoT FleetWise, a new AWS service. AWS IoT FleetWise’s advanced capabilities include the efficient collection and transfer of select automotive data to the cloud in near real time from any vehicle. These capabilities, combined with AWS IoT FleetWise’s intelligent filtering, will make it even easier for Otonomo to process and deliver the massive quantities of vehicle data it manages daily into AWS workloads. Otonomo expects the resulting increase in velocity to enable its customers to build the next generation of connected car applications.

Wejo and Virtuoso Acquisition Corp. complete merger Wejo and Virtuoso Acquisition Corp. announced that they have completed their previously announced merger. The combined company will operate under the Wejo name, and its common stock and warrants are expected to commence trading on the Nasdaq Stock Market at the opening of trading on November 19, 2021 under the new ticker symbols “WEJO” and “WEJOW,” respectively. In connection with the merger and related private investment in public equity (PIPE) financing, Wejo received approximately $225.7 million in cash proceeds. Wejo looks forward to executing on its vision of building the manufacturer-agnostic industry standard in connected vehicle data by creating applications across multiple marketplaces and enriching lives around the globe.

● Arnold Transportation partners with E-SMART to improve fleet safety
● Cybellum and HCL Technologies partner to deliver solutions that address automotive cybersecurity risk assessment and regulatory requirements
● Arriver™ to support Qualcomm’s technology collaboration with BMW with Vision Perception software for automated driving
● Innoviz perception solution supported on NVIDIA DRIVE platform
● University of Massachusetts expands ParkMobile partnership to the Lowell campus to offer contactless parking solutions
● Motional and Lyft to launch fully driverless ride-hail service in Las Vegas in 2023
● Torc Robotics collaborates with Applied Intuition to accelerate the development and validation of autonomous trucks
● NXP and Ford collaborate to deliver next-generation connected car experiences and expanded services
● Parkopedia partners with INDIGO Group’s OPnGO to strengthen parking reservation and payment services in Europe
● BlueWing Motors signs MOU with Indonesia’s UNS for joint ventures and technology exchanges for two-wheelers
● Varana Capital co-leads $74 million round in TriEye, joined by Intel, Samsung, and Porsche
● Ballard Power announces acquisition of Arcola Energy to help customers integrate fuel cell engines into heavy-duty mobility
● Wejo demonstrates momentum for Microsoft partnership, scales suite of data solutions on Azure cloud platform
● ZF and CATL join forces for an optimal aftermarket service in e-mobility and energy storage
● Swiss Re and Baidu partner to advance the ecosystem of autonomous driving with risk expertise and insurance innovation
● Wejo and Virtuoso Acquisition Corp. complete merger
● Alpha Motor Corporation SUPERSAGA™ pure electric performance sedan in collaboration with Rotiform
● Segula Technologies partners with PERRINN to develop Project 424 electric Le Mans hypercar battery system
● Hitachi Energy and Clever to accelerate sustainable mobility in Denmark
● Darktrace signs multi-million-dollar deal with global leader in automotive technology and electronics
● Passport adds new partner to digital mobility platform
● Siemens partners with Hyundai Motor Company and Kia Corporation for digital mobility transformation
● Stellantis signs lithium supply agreement with Vulcan Energy
● BorgWarner joins Clean Energy Buyers Alliance (CEBA)
● Major European energy supplier enters into purchase agreement for ultra-fast charging systems from ADS-TEC Energy
● Motiv and EverCharge team up to optimize fleet charging
● Bollinger Motors announces strategic partnership with EAVX to develop all-electric commercial trucks
● ZEV partners with the Salt River Pima-Maricopa Indian Community to electrify Indian Community vehicles
● Autoliv and SSAB collaborate to produce fossil-free steel components in automotive safety products
● Viaduct and PACCAR sign multi-year agreement to enhance vehicle uptime and optimize cost of quality via machine learning on connected truck data
● Hitachi and REE Automotive agree on collaboration to advance and simplify the adoption of sustainable electric vehicles globally
● CLEAR and Uber partner to help make travel more predictable with “Home to Gate” app integration
● Zero Electric Vehicles, Inc. partners with AAMCO Transmissions and Total Car Care to electrify vehicles
● FullSpeed Automotive® announces acquisitions and signed agreements driving Texas development
● Kyndryl and NetApp form strategic partnership to deliver critical enterprise data infrastructure to BMW Group
● Allego enters into a strategic partnership with Nissan



Electric Vehicle

Gogoro launches new swappable battery initiative with introduction of smart parking meters

Gogoro® Inc. announced that it is launching a new urban battery initiative utilizing Gogoro-powered smart parking meters. Co-developed with Shengming Technology, the smart parking meter enables cities to embrace smart city technologies for paid parking locations that are off the power grid and wirelessly connected. Cities around the world manage thousands of public parking spaces and are looking for technologies to improve billing accuracy and real-time data analysis. With the new smart parking meter, Gogoro is providing cities with smart city technologies that fast-track the deployment of wireless connectivity and off-grid power for a range of uses, including parking management. The meter can utilize Gogoro smart batteries that are no longer optimized for high-performance vehicles but can still serve as a powerful energy source.

Zurich vehicle protection products for electric vehicles

Zurich North America announced an enhanced suite of vehicle protection products designed to meet the unique needs of electric vehicle owners. The set of products provides coverage ranging from the repair or replacement of high-voltage rechargeable batteries to roadside assistance for recharging EV batteries that run out of power mid-trip. The suite covers nearly all manufacturers, including Tesla, Rivian, and Polestar. Zurich’s EV protection products include:
● Vehicle Service Contracts
● Guaranteed Auto Protection (GAP)
● Road Hazard Tire and Wheel
● Environmental Protection Plan
● Paintless Dent Repair
● Key Replacement
● Select Protection
● Universal Security Guard®
● Lease Wear and Use
The EV suite of protection products supports Zurich’s commitment to embracing sustainability in all aspects of its business. Further, Zurich plans to switch its own fleet to electric and hybrid vehicles and be internal-combustion-engine-free in the coming years.

Sibros to deliver deep connectivity to long-range solar electric vehicle Lightyear One

Sibros announced its collaboration with Lightyear, the solar electric vehicle innovator, to provide deep connected vehicle technology for the Lightyear One, a long-range solar electric vehicle designed to be grid-independent and to drive anywhere. Lightyear will integrate Sibros’ OTA Deep Logger to yield microsecond-precision, real-world connected vehicle data from all sensors, both vital to the long-range solar operation and unmatched energy efficiency unique to the Lightyear One. Sibros’ OTA Deep Updater will provide vehicle-wide software updates to the Lightyear One while meeting safety and security requirements such as ISO 26262, GDPR, and UNECE WP.29. Sibros’ platform will also enable deep connected diagnostics and product usage insights to inform future product design enhancements.

HEADLINES
● Gogoro recognized by Frost & Sullivan for revolutionizing the electric two-wheeler market with its swappable battery approach
● Ford Pro offers complementary services to help commercial customers manage electric and gas fleets and improve uptime
● GenCell and E.V. Motors partner to facilitate autonomous hybrid off-grid EV charging
● Lear to supply advanced connectivity and vehicle positioning solutions to global electric vehicle manufacturer
● NFI announces its first battery-electric bus order from the University of Michigan for up to 54 buses
● Tritium opens world-class EV charger testing facility
● Merchants Fleet expands BrightDrop EV order to 18,000 with addition of EV410s
● U.S. Department of Energy selects Koura for development of fluorinated electrolytes to extend operating range and safety of Li-ion batteries
● Nissan unveils Ambition 2030 vision to empower mobility and beyond
● Solvay invests in Li-metal battery company Sepion through its venture capital fund
● L7 Drive to develop and supply Vehicle Connectivity Module hardware for Sono Motors’ solar electric vehicle Sion
● Chakratec receives first orders of its 2nd-Gen Kinetic Power Booster and signs contracts for the deployment of 2 EV fast-charging stations in Germany
● Endera announces new all-electric powertrain
● Scania introduces world-class, versatile hybrid trucks
● Motiv Power Systems passes industry-leading safety testing on its all-electric technology
● Berylls Strategy Advisors highlights critical areas for successful EV launches
● Rivian selects AWS as its preferred cloud provider
● Mercedes-Benz eVito panel van receives updated powertrain and battery
● Lexus unveils hydrogen-powered ROV Concept



Market Reports

Increasing automated safety requirements highlight need for robust regulatory framework for autonomous vehicles: Frost & Sullivan

Frost & Sullivan‘s recent analysis of the global autonomous vehicle (AV) regulatory landscape finds that increasing automated safety requirements necessitate a robust regulatory framework for AVs. Advanced nations have taken the lead: Germany has regulated consumer use of Level 3 (L3) low-speed automated lane-keeping systems (ALKS), and Japan has regulated consumer deployment of L3 vehicles, while regulatory bodies such as the United Nations Economic Commission for Europe (UNECE) and the National Highway Traffic Safety Administration (NHTSA) have developed guidelines for the assessment, testing, and deployment of AVs. Globally, deployment regulations for passenger vehicles currently extend to L3 autonomy, while several countries have commenced testing up to L5 autonomy. The global harmonization of AV regulations will be instrumental in ramping up L3 to L5 deployment, presenting lucrative growth opportunities for AV market participants in areas such as:
● Harmonized guidelines for vertical market expansion: Global adoption of L3 and above AVs depends on a unified regulatory framework, standardization of ADAS deployment, and autonomous driving features such as driver monitoring, piloted driving, and autonomous parking.
● Regulating L2+ and L3 piloted driving: Regulatory bodies should set L2+ as a standard level and define market deployment guidelines.
● L4 robotaxis and shuttles for consumer deployment by 2024: Technology participants and OEMs can work together to develop and test advanced systems on public roads to deploy L4 robotaxis and shuttles.

Auto dealers look to increase security measures amid rising cyberthreats

With ransomware and phishing attacks increasingly impacting businesses across the U.S., nearly half of automotive dealers surveyed plan to increase their investments in cybersecurity in 2022, compared with only 24% of dealers that invested in security measures in 2020, according to a study by CDK Global, Inc. IT-related business interruptions can be costly for dealerships: a cyberattack costs a targeted dealership an average of 16 days in lost revenue. Recovering from a data breach and restoring a dealer's reputation is both costly and time-intensive, and automotive retailers may fall prone to meeting cybercriminals' demands to keep their dealerships running. In fact, recent data from ransomware specialty company Coveware shows that payouts by businesses nearly quadrupled from 2019 to 2020, jumping from roughly $44,000 to $169,000.

Performance of autonomous cars to rely heavily on automotive sensor-cleaning technology: Netscribes

Netscribes, Inc. has published its findings on automotive sensor cleaning technology. In the whitepaper, the research firm examines the emergence of sensor cleaning technology in cars, driven by the adoption of autonomous technologies across vehicle classes. Sensors feed vehicle systems with critical environmental data, enabling automated and autonomous operation. However, because vehicles are constantly exposed to varied weather and road conditions, these sensors are often obscured by heavy rain, snow, mud, bird droppings, dead bugs, and other debris. To address this challenge, industry players are exploring four fundamental types of cleaning technology: wipers and jets, protective materials, passive/active aerodynamics, and advanced approaches including ultrasonic and cavitation cleaning. Even though sensor cleaning technology is still being developed, autonomous vehicle OEMs must ensure that sensors are properly integrated, well maintained, and never compromised; failure to do so can lead to accidents. Given the crucial role sensors play in enabling accurate data inputs, navigation, and time-sensitive autonomous communication, knowing what's next in this space will be key for makers of autonomous vehicles. With the adoption of new and emerging variants such as optical, LIDAR, camera, and radar sensors, many ADAS and autonomous vehicle engineers are on the lookout for efficient sensor-cleaning solutions. This demand is expected to drive the automotive sensor market to 23.6 billion units globally by 2034.

HEADLINES

● Global sales of automotive telematics to reach US$ 217.7 Bn: Fact.MR study
● Asia-Pacific smart fleet management market to 2028: ResearchAndMarkets.com
● The installed base of connected tanks to reach 22.2 million in 2025
● The installed base of fleet management systems in Europe will reach 22.5 million by 2025
● EV charging sessions to exceed 1.5 billion by 2026 globally, as mass electrification drives infrastructure requirements
● Ride sharing spend by consumers to exceed $930 billion globally by 2026, driven by strong COVID-19 recovery and reduced private vehicle usage



India

Union minister pushes Kerala startups to develop EVs

TVS Motor signs MoU with Government of Tamil Nadu

TVS Motor Company announced that it has signed a Memorandum of Understanding (MoU) with the Government of Tamil Nadu for investment in future technologies and electric vehicles. The MoU was signed in the presence of the Honourable Chief Minister of Tamil Nadu, Thiru. M.K. Stalin, and Padma Bhushan Shri Venu Srinivasan, Chairman, TVS Motor Company, at the Tamil Nadu Investment Conclave 2021 in Coimbatore. Under the MoU, TVS Motor Company will invest Rs. 1,200 crore in future technologies and electric vehicles over the next four years. The investment will go mainly towards the design, development, and manufacturing of new products and capacity expansion in the EV space. This investment reflects TVS Motor Company's continued commitment, as a responsible corporate citizen, to the state's overall economic growth.

Oppo plans to launch electric vehicles in India by 2024

Oppo is planning to launch its first lineup of electric vehicles in India by the end of 2023 or early 2024, according to a report by 91mobiles. The company has not yet confirmed any technical details of the EV or its broader plans. According to the report, Oppo's electric vehicle plans are already in the works, with a launch targeted for end-2023 or early 2024; what kind of EVs to expect from Oppo is not yet known. It was reported earlier this year, in May, that Oppo had already started work on electric vehicle manufacturing, and that the company's CEO Tony Chan had held meetings with battery manufacturers and parts suppliers for Tesla.

Uber announces winners of the Green Mobility Innovation Challenge

Uber announced the five winners of the Green Mobility Innovation Challenge, its partnership with the Government to support ideas that help drive the adoption of electric vehicles across the country. The winning startups are Bodycast Innovators Pvt. Ltd., Virya Batteries Pvt. Ltd., Racenergy, Kazam EV Tech Pvt. Ltd., and Emuron Technologies. The winners will receive a grant of INR 7.5 million from Uber to develop their ideas, along with six months' business incubation at iCreate. The runners-up will also be eligible for two months of incubation, mentorship from Uber's leaders, and access to labs and co-working spaces at iCreate. The challenge focused on innovations in two- and three-wheelers, the accessibility and ease of use of charging stations, and potential partnership or financing models that can improve EV uptake in India.


Union Minister of State Rajeev Chandrasekhar said that Kerala, with its strong capabilities in digital system design, must pay attention to sectors such as electric vehicles. Speaking after a visit to the Integrated Startup Complex (ISC), the minister said that startups in the country should take advantage of global opportunities in the post-Covid period. Chandrasekhar reiterated that entrepreneurs have immense scope for growth in the electric vehicle segment, which is still at a nascent stage in the country. Noting that electric vehicles rely heavily on electronics, he called upon entrepreneurs to explore the worldwide EV market as most countries begin pursuing their electrification goals. The minister added that, as Indian startups are largely software-oriented, valuable contributions could be made through electronics hardware innovation under the Kerala Startup Mission.

Hero MotoCorp’s first EV to be made at Chittoor plant, launch timeline announced

Hero MotoCorp is set to enter the EV space with the launch of its first electric vehicle by March 2022. The company confirmed that its first-ever EV will be manufactured at its plant in Chittoor, in the southern Indian state of Andhra Pradesh. Hero MotoCorp says its EV project is in the advanced stages, and the plant, aptly called the Garden Factory for its eco-friendly and sustainable manufacturing practices, will provide an integrated ecosystem for battery pack manufacturing and testing, vehicle assembly, and vehicle end-of-line (EOL) testing. Hero MotoCorp also states that, in keeping with its vision, "Be the Future of Mobility", it is committed to bringing sustainable mobility solutions to its customers and is accelerating its focus on producing electric vehicles as an integral part of its product portfolio.



8, 9, 10 March 2022
Radisson Blu, Bengaluru, India

presents

6th edition
CONNECTED VEHICLE 2022

1200+ Delegates | 80+ Exhibitors | 60+ Speakers | 10+ Sessions

Who Should Attend?
• Automakers • Automotive OEMs • Mobility Service Providers • IT Companies • Tier 1, Tier 2 & Tier 3 Suppliers • TSPs • Chip Manufacturers • Semiconductors • System Integrators • Software/Hardware Providers • Insurance Companies • Lighting Companies • Map Providers • Content Providers • Application Developers • Big Data Analytics • Telecom/Wireless Carriers • Cloud Service Providers • Component Manufacturers • Electric Vehicle Manufacturers • Government Bodies • State Transport Corporations • Policy Makers • Academia/Institutions • Car Sharing Companies • Taxi Aggregators • PSU/STC • Financial Services • Associations • Consultants • Investors • Logistics & Transport

For Speaking/Panel Slot: Yashi Mittal, M: +91 9810340678, mgr_corpcomm@telematicswire.net
For Delegate Registration: Poonam Mahajan, M: +91 9810341272, mgr_corpsales@telematicswire.net
For Sponsorship/Exhibition: Anuj Sinha, M: +91 8744088838, anuj.sinha@telematicswire.net

https://cv2022.in/

