LoGeek Magazine Issue #17


#17 / 2024

THE LATEST TRENDS AND TECHNOLOGIES IN THE IT WORLD MAGAZINE

10. Striking a balance: Releasing agile potential in V-cycle environments
30. Python for everybody: Build a DIY virtual assistant with less than one hundred lines of code
50. Unlocking the future: Key movements and innovations in IT

CONTENTS #17 / 2024

4. Autonomous language learning: Tools and techniques for self-directed study (Lesia Romanova)
10. Striking a balance: Releasing agile potential in V-cycle environments (Andreea Simona Grigorovici)
14. Federated data governance operating model (Jaclynn Soo Swee Yee, Alexander Dyakonov)
24. Technical insight: Navigating the digital landscape (Devyani Prakash Lad)
30. Python for everybody: Build a DIY virtual assistant with less than one hundred lines of code (Rodolfo Cangiotti)
38. Fractional indices (Andrei Mishkinis)
44. Embracing quantum potential in finance
50. Unlocking the future: Key movements and innovations in IT (Rashmi Malhotra)
Introduction to the Diagnostic Event Manager (DEM) (Sharana Basava)
Unlocking a new era of security and efficiency (Arya Viswanath)
82. Process discovery and mining for automation opportunities: Efficiency through workflow analysis (Parameswaran Sivaramakrishnan)
90. Embracing uncertainty: Insights from the Project Management Excellence Conference (Maksym Vyshnivetskyi)
92. The most essential pair: QAC and Tessy safeguarding AUTOSAR's future (Aravind B N)

Autonomous language learning: Tools and techniques for self-directed study

Knowledge of foreign languages is especially important nowadays, and successful IT specialists don’t limit themselves to English only. Increasingly, we work in distributed teams scattered around the world, and being able to speak the same language as a team, not literally but figuratively, can be beneficial.

From my point of view, the future lies in the approach of self-studying languages. Obviously, this doesn’t mean that you shouldn’t seek help from teachers; it is more about a shift in mindset. The first thing a typical person does when they want to learn a language is to enroll in courses. In the approach I advocate for, the first step is to take responsibility and identify your motivation.

Self-directed study method from Lydia Machova

I used to make slow progress in learning English without much enthusiasm, consistently feeling dissatisfied with the results. However, my perspective changed when I came across Lydia Machova’s TED interview. I adopted the approach she outlined on the languagementoring.com website, and things started to move faster and became more enjoyable. I learned English to the desired level, and then Polish followed. The value of this approach is its universality and applicability to any foreign language.

Lydia is convinced that it’s possible to teach anyone a language. A good teacher can help you on this journey but they can’t give the language to you on a silver platter. Being an autodidact is especially easy in the internet age when countless texts, recordings, and exercises are waiting for you just one click away. All you need to know is how to do it.

According to Lydia, there are four pillars of successful language learning: Fun, Methods, Contact, and System.

1. Fun. When you find a way to enjoy the process of learning a language, it turns into a pleasant pastime activity. If you don’t like learning a language, you just haven’t found your methods yet.

2. Methods. There are millions of ways to learn a language, but not all of them will get you to a truly comfortable, fluent level. If you are wondering which methods work best, look at how polyglots learn languages.

3. Contact. If you want to speak another language, you need to be in contact with it every day, even if only for a few minutes. Luckily, there are many effective methods for busy people too.

4. System. Whether you learn from a book or language apps, you need a system in your learning. Otherwise, you won’t spend enough time with it. The best way is to make a plan and then just follow it step by step.

Learning a new language isn’t a miracle but hard work, requiring daily effort. Our task is to make this effort enjoyable, bringing us closer to our goal and not feeling like work but entertainment. We learn a language to gain specific benefits for ourselves. Humans are naturally lazy and won’t do what needs to be done, but when they find it pleasant and interesting, they will. This is how we can “trick” our brain into quickly learning by simply immersing ourselves in the language environment, using the language as a tool for activities that interest us.

So, the second step is to prepare your individual plan, consisting of activities that will help you improve your language skills and to follow this plan. It’s important to track what you have done and to rejoice in your successes, to praise yourself.

At first glance, it seems daunting and boring: Plan, follow, blah-blah-blah. But in reality, your task is to develop a habit and enjoy the process. I recommend the book “Atomic Habits” by James Clear and his website, where you can find a lot of useful information, including on motivation.


But where to find the time for all this? That’s another question that worries everyone. The answer is: Set priorities, combine activities, and do everything within your power instead of looking for excuses.

To get started, you simply need to begin. Here are 5 tips from Lydia on how to stick to learning when you’re busy.

1. Have fun. If you find your grammar book boring, swap it for your favorite book in the language you’re learning. One page will easily turn into an entire chapter, and it won’t even feel like learning.

2. Focus on your interests. Software development, healthy lifestyle, parenting, minimalism…choose language materials on the topic that excites you and you will be surprised how much you understand even at a beginner level.

3. Less is more. Split your learning into smaller chunks throughout the day. 10 minutes in the morning, 5 minutes during lunch break and 10 more before sleep is way more manageable than an hour in one sitting.

4. Change up the methods. Are you tired of paper flashcards? Try a fun vocabulary app instead. You don’t have to stick with the method you chose at the start of your learning journey.

5. Immerse yourself in the language. Watch TV shows in the language you’re learning, listen to podcasts during your commute, write a journal, follow recipes…the options are endless.

The extensive reading and listening method from Steve Kaufmann

Another person who has succeeded in language learning and whom I want to talk about is Steve Kaufmann. He developed his own technique and the LingQ learning platform.

Kaufmann emphasizes the importance of extensive reading and listening in the target language, advocating for learners to engage with content that interests them personally. This approach is based on the principle that understanding language in context significantly enhances the ability to learn naturally, similar to how we acquire our first language.

Kaufmann’s method discourages heavy reliance on traditional grammar and vocabulary studies in the early stages. Instead, it encourages learners to discover language patterns and vocabulary through context, supported by the LingQ system’s ability to highlight and track new words and phrases. This method also values the role of listening in language acquisition, suggesting that exposure to spoken language through various media can improve pronunciation, listening skills, and overall fluency.

Kaufmann’s language learning habits are based on the following principles:

1. Focus on comprehensible input. Kaufmann believes that the best way to learn a language is to expose yourself to as much comprehensible input as possible. He achieves this by watching movies and TV shows, listening to podcasts, and reading books in the language he is learning.

2. Reading extensively. He advocates for reading extensively in the target language, allowing learners to encounter a wide range of vocabulary and sentence structures in context. He also uses his LingQ app to help him keep track of how much he is reading along with new vocabulary.

3. Listening to content. Kaufmann emphasizes the benefits of listening to audio content in the target language, such as podcasts, audiobooks, and radio shows, to enhance listening skills and familiarize the learner with natural speech patterns.

4. Repetition and review. He recommends revisiting content multiple times, reinforcing vocabulary and comprehension through repetition and review. For example, Kaufmann had said that when he is learning a new language, he will listen to the same mini story approximately 40 times. This helps him get used to the language while learning new words and phrases.

5. Don’t worry about making mistakes. Kaufmann believes that it is perfectly normal to make mistakes when you are learning a new language. He encourages learners to not let their fear of making mistakes prevent them from speaking, but to find a language tutor and start speaking.

Let’s figure out how to use comprehensible input to learn a language. Kaufmann’s language learning habits have been very successful for him. He has been able to learn a large number of languages to a high level of proficiency.

Here are a few tips from Steve on how to use comprehensible input to learn a language.

• Choose content that is interesting to you and that is at a level that you can mostly understand.

• Don’t be afraid to skip over parts that you don’t understand. You can always come back to them later.

• Don’t worry about making mistakes. The more you expose yourself to the language, the better you will become at understanding and speaking it.

• Try to find opportunities to speak with native speakers, even if it is just to say a few words or sentences.


Learning tools

There are countless apps and tools, and I encourage you to test them out, finding the ones that best suit your goals. Also, pay attention to ChatGPT. With it, you can communicate like with a teacher, ask for corrections, and receive recommendations on the best word choices for specific contexts. Below, I list my favorite apps with a brief description.

Duolingo

Perfect for beginners to acquire a basic vocabulary. Everything happens in a gamified format without a heavy focus on grammar. You can practice all four skills: Writing, listening, reading and speaking.

Clozemaster

Excels in vocabulary acquisition by immersing users in contextual learning through sentence completion exercises. It offers a vast collection of sentences with fill-in-the-blank exercises, helping users learn words in context.

Tutor Lily

A fantastic example of AI for learning language. You speak aloud with Lily or write responses on a chosen topic. Your mistakes will be corrected, Lily’s questions can be conveniently translated, and you can listen to them as well.

Italki

Designed with you in mind, Italki stands out for its personalized, live tutoring sessions, providing direct interaction with native speakers for practical language use. It offers a diverse range of language options and provides flexibility in choosing a tutor based on individual preferences.

References

1. TED Talk: https://www.ted.com/talks/lydia_machova_the_secrets_of_learning_a_new_language

2. Lydia Machova’s site: https://www.languagementoring.com/

3. James Clear (author of the book “Atomic Habits”): https://jamesclear.com/

Author

Lesia Romanova recently celebrated her 6th work anniversary with the Luxoft Business and Systems department. She is an enthusiastic learner and contributor to our Speaking Clubs for employees in Polish and English.


Striking a balance

Releasing agile potential in V-cycle environments

What is the V-model and its advantages?

The standard software development process used in the automotive industry is called V-cycle (also known as the Verification and Validation Model). This model splits any software development process into two phases like two converging paths meeting at the peak of a V. The left side of the V contains the requirement analysis, function/software design and change management, while the right side of the V concentrates on the main verification and validation activities.

The V-cycle method was initially created for use in the industrial sector, then spread widely to the IT sector in the 1980s. Having its origins in the conventional waterfall model, a sequential engineering approach, the V-model traces its roots to a linear progression where each stage requires completion before advancing to the next one. It gained traction in the engineering realm in response to the escalating complexities of products and systems, which necessitated heightened testing and analysis for robust project development.

The V-model offers several strengths that make it a valuable approach for engineering projects. Following clearly defined steps through small increments of software development allows for testing earlier in the process; while this helps with finding existing problems earlier in the development stages, it also sets a rigid way of working. Nevertheless, because each team member knows exactly their role and responsibilities, a boost in productivity can be seen in projects where this model is used. Another advantage is the transparent process across the entire project life cycle, which enables the definition of a budget framework starting from the initial design stage.

However, the V-model also has its weaknesses which can push projects to adopt other management methods. It relies on requirements which are assumed to be known and stable from the start of the project, which is more often than not a false assumption. This can lead to problems when the requirements change over time, during the development process. Nowadays, the market and the economic context are in continuous change. Moreover, the V-cycle method is rigid, and does not consider unexpected events which can occur at any time. Another drawback is that communication inside the teams is almost non-existent as each team member has their own role and always relies on documentation whenever a problem is encountered.

All these drawbacks can be addressed by using the Agile model; however, the choice between the two depends on the context and project requirements.

Adapting to change is always challenging. Working in automotive projects where V-model software development process is used is even more difficult. The V-cycle process is structured, and its sequential approach has provided a solid foundation of work, ensuring a systematic progression from requirements to testing. However, in an era characterized by rapid technological advancements and ever-changing client demands, organizations tend to seek a more flexible and iterative approach. This is where the Agile methodology steps in as a powerful ally, bringing with it a set of principles designed to navigate the uncertainties of the modern business landscape.

Adaptability is an important ability for software development professionals, through which one needs to adjust their behaviors to navigate successfully through complex or difficult situations. While holding on to the old ways may be safe, thinking outside the box has been proved to be very useful in addressing challenges and seizing opportunities.

Divergent thinking, with its emphasis on creativity and fresh perspectives, proves superior to the rigidity of convergent thinking. In a rapidly evolving world, innovation thrives when unconventional paths are explored, leading to continuous improvement and adaptability. By breaking through stereotypes and challenging norms, divergent thinking promotes inclusivity and diverse perspectives. It positions individuals and organizations to be future-ready, and able to adapt and thrive in dynamic environments. In essence, divergent thinking is a celebration of creativity, resilience, and the courage to explore uncharted territories beyond known paths and processes.

What is Agile and what are its advantages?

The Agile approach emphasizes open communication, adaptation, collaboration, and continuous improvement. In 2001, with the publication of the Agile Manifesto, Agile was born as a methodology, and since then many Agile frameworks have emerged (e.g., Scrum, Kanban, Lean, etc.). Projects usually combine practices from different frameworks, and even add their own perspectives to achieve their purpose. The application of Agile within each team is unique to its needs and organizational culture.

Agile, at its core, is a collaborative and iterative project management methodology. The project is divided into smaller batches, each of them following a cycle of planning, execution, and evaluation. There is a misconception that Agile is defined by its ceremonies and specific development techniques; rather, Agile is a group of methodologies that display a strong commitment to continuous improvement. Embracing flexibility, it thrives on adaptability, fostering a dynamic approach to software development that prioritizes responsiveness to change. The Agile methodology brings a focus on delivering quality results and improving customer satisfaction. Teams are encouraged to be open to changing priorities, goals and even culture when necessary, to provide better results.

The Agile method is an attractive project management methodology because it brings some big advantages. Firstly, by enabling swift responses to changing requirements, it guarantees that projects adapt to a rapidly evolving landscape. The iterative nature of Agile leads to continuous improvement of projects, resulting in a faster time to market in a dynamic business environment. This also reduces the risk of project failure if the market or customer needs change. Moreover, by applying the Agile mindset, one is making sure that results are continuously delivered, not just at the culmination of the project. While results are important, projects are built around motivated individuals who bring talent and technical expertise and collaborate towards a common goal. Fostering a culture of transparency, teams tend to work better in an environment where communication, idea exchange and constructive feedback are valued and implemented. Agile brings the focus back to the team as a cohesive unit, comprised of motivated individuals who take ownership of their work, leading to a more engaged and motivated workforce.

While Agile brings a multitude of advantages, ultimately the choice between Agile and V-model depends on project requirements, stakeholders’ preferences, and needs.


Adapting to change through estimations

As we explore the advantages of both V-cycle and Agile methodologies, it becomes clear that continuous improvement is a catalyst for streamlined development. One vital aspect of the Agile framework revolves around estimations, a practice that plays a crucial role in planning but also serves as a strategic tool for navigating the ever-changing landscape of project development. Agile estimation is the process of measuring how much time and effort a project requires. The purpose of estimating project tasks is to improve decision making, manage risks more efficiently, and ultimately learn how to adapt to change effectively. Establishing a proper and efficient estimation technique that aligns with the project needs can lead to reduced costs and more accurate planning.

Types of estimations in Agile projects

Usually, estimations can be made for time or for complexity. Complexity estimation centers on assessing the intricacy and challenges associated with individual tasks. It involves understanding technical complexities and dependencies, and complexity estimations are often expressed in abstract units like story points. In contrast, time estimations in Agile involve predicting the amount of time it will take to complete a task, expressed in time units (such as hours, days, or weeks).

Agile methodology offers a variety of estimation techniques that can be used in the development of projects. However, three significant types are presented further:

a. Planning poker

Planning poker is a card-based system for estimating effort in Agile. In a planning poker session, each member uses a set of cards with values for time or for complexity. A leader of the group (sometimes the Scrum Master) names a component of the project for the group to discuss and estimate. The discussion clarifies any uncertainties, and after it each member chooses the card they feel best represents the estimation. Everyone shares their cards and discusses the reasons for their choices. This process is repeated until a common estimation is reached.

b. T-shirt sizing

T-shirt sizing is a simple process where teams assess each component for estimation, assigning a designation according to common T-shirt sizes, from extra-small to extra-large. The group discusses each item when presented, then decides on a consensus placement for the item. This allows the team to sort tasks by size and provides an overview of the expected work ahead. Moreover, template items for each T-shirt size can be used as reference points, which helps the team align to a specific known instance.

c. Three-point method

The three-point method is an average-based estimation framework. When evaluating the size of a task, the team creates three estimations based on three scenarios. An optimistic estimation for a best-case scenario, a pessimistic estimation for a worst-case scenario, and the most realistic estimation, which represents the best guess for the actual effort required, not considering possible challenges that can appear unannounced.

To reach a final estimation, the team can either take the basic average, adding all three estimations together and dividing the result by three, or use a weighted average that counts the most likely outcome more heavily: sum the optimistic and pessimistic estimates, add the most likely estimation multiplied by four, and divide the total by six.
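As a minimal sketch of the two averaging options described above, the snippet below computes both the simple and the weighted three-point estimate in Python; the optimistic, most likely and pessimistic sample values are invented for illustration.

```python
def three_point_simple(optimistic, most_likely, pessimistic):
    """Basic average: all three scenarios weighted equally."""
    return (optimistic + most_likely + pessimistic) / 3


def three_point_weighted(optimistic, most_likely, pessimistic):
    """Weighted average: the most likely estimate counts four times."""
    return (optimistic + 4 * most_likely + pessimistic) / 6


# Hypothetical estimates for one task, in ideal days
o, m, p = 2, 4, 9
print(f"Simple average:   {three_point_simple(o, m, p):.1f} days")    # (2 + 4 + 9) / 3 = 5.0
print(f"Weighted average: {three_point_weighted(o, m, p):.1f} days")  # (2 + 16 + 9) / 6 = 4.5
```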

How can Agile estimations be applied in an environment with V-model history?

Integrating Agile estimation techniques into waterfall-based projects (such as the V-model) shifts the focus towards breaking down requirements into more manageable units, comparable to Agile user stories. The collective expertise of team members can be used to measure the effort and complexity of work. This principle fosters a collaborative environment where diverse perspectives contribute to more accurate estimations. The incremental approach within the V-model structure allows for iterative development cycles, which enable teams to revisit and refine estimations as the project advances. This adaptability is important, especially during regular project reviews or when unexpected challenges arise. Investing time and employing effective techniques for accurate estimations ensures the project’s progression and continuous improvement. As the teams gain a deeper understanding of the project, estimations will improve if the change is embraced. Furthermore, the collaborative approach to estimating encourages open cooperation and communication among team members, thus ensuring that diverse insights are considered. Integrating risk assessments into the estimation process acknowledges uncertainties and allows for the inclusion of contingency plans. Thereby, the tools which are typically designed for Agile project development can still be used to maximum advantage within the waterfall framework. This can also include digital platforms that support Agile practices, facilitating estimations, tracking and even collaboration within teams. The goal is to strike a balance between the approaches, enhancing adaptability and responsiveness without completely deviating from the V-model structure and objectives.

Conclusion

In conclusion, integrating Agile practices into waterfall-based projects is possible. Integration of Agile estimation principles emphasizes the importance of breaking down tasks, employing relative sizing and fostering collaboration for more accurate estimations. Furthermore, projects can benefit from adaptability in planning, collaboration techniques and the usage of Agile tools within more traditional frameworks. By embracing a more collaborative and flexible mindset, teams can enhance their ability to navigate uncertainties, refine estimates iteratively, and ultimately contribute to more successful project outcomes. The potential for synergy between Agile and waterfall methodologies allows for a more adaptive and responsive project management approach.

References

1. https://www.atlassian.com/agile

2. https://builtin.com/software-engineering-perspectives/v-model


Author

Andreea Simona Grigorovici is an IT professional with over five years of experience. For the past two years, she’s been working as a technical team lead for an automotive project at Luxoft Romania. She’s passionate about delivering high-quality work and always strives to improve her skills by taking on challenges and participating in various training sessions. In her free time, she loves to read and paint, finding these activities to be a great way to unwind and tap into her creativity. She believes that a well-rounded life is essential for achieving success in one’s career, and she strives to maintain a healthy work-life balance.


Federated data governance operating model

The modern approach to cost-effective and quality data management in the automotive industry

2023 was a year full of challenges for the automotive industry. Global headwinds including the energy crisis, slower global demand, and supply-chain disruptions continue to affect the automotive industry. Car makers forecast slower growth and rising costs in the industry in 2024 as well: the market is shifting towards cost-efficient car models due to the higher cost of living, while at the same time the cost of quality remains high for performance optimization. So, it is time for automotive executives to rethink their strategies and create more efficient operating models.

Global automotive challenges entering 2024

As recently reported in KPMG’s 24th Annual Global Automotive Executive Survey, auto executives around the world have less confidence that the industry will achieve more profitable growth over the next five years. The survey, conducted among more than 1,000 senior executives in 30 countries and territories, also mentions a few key challenges and key elements that global automotive leaders should take care of when defining their strategies.

Key challenges:

1. Potential customer habits shift driven by cost-of-living crisis

2. Fewer government subsidies

3. Creating more efficient operating systems

Key elements to keep the business successful:

1. Satisfactory performance and seamless customer experience

2. Continuity of supply for commodities and components

3. Being prepared for growing complexity due to advanced technology such as generative artificial intelligence

So, what can we understand here? The market is asking for cost-efficient vehicles, at lower prices and with no degradation of quality. This would mean a smaller profit margin, which would impose a bigger risk to any business decision made.

Data as an important asset for decision-making

Driver assistance, collision avoidance, and voice-activated controls are just a few of the vehicle software features that must handle extensive volumes of data and signals from sensors made by different manufacturers. By possessing data analytics capability, automotive companies can efficiently and strategically understand customer behavior, enabling them to identify market needs with greater precision. Additionally, data analytics aids in identifying and addressing quality issues earlier, leading to cost savings and customer satisfaction.
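To make the point about catching quality issues early more concrete, here is a minimal sketch using pandas (assumed to be installed); the warranty-claim figures and component names are invented for illustration.

```python
import pandas as pd

# Invented warranty-claim records collected from the field
claims = pd.DataFrame({
    "component": ["infotainment", "braking", "infotainment", "battery", "braking"],
    "vehicle_model": ["A", "A", "B", "B", "C"],
    "claims": [12, 3, 30, 7, 2],
})

# Aggregate claims per component to see where quality problems cluster
summary = claims.groupby("component")["claims"].sum().sort_values(ascending=False)
print(summary)  # infotainment stands out, so it gets attention before costs grow
```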

With the huge benefits and impacts it brings, treating data as an asset is critical in today’s data-driven landscape. In the automotive sector, the data management market was valued at US$1.08 billion in 2022, is expected to grow at a CAGR of about 20%, and is forecast to reach US$14.29 billion by 2032.


Automotive data management market size, 2022 to 2032 (USD billion)

Year         2022  2023  2024  2025  2026  2027  2028  2029  2030  2031   2032
USD billion  2.19  2.64  3.19  3.84  4.64  5.59  6.75  8.14  9.82  11.85  14.29
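As a quick sanity check of the growth rate implied by these figures, the snippet below derives the compound annual growth rate (CAGR) from the chart’s 2022 and 2032 values; this is only a verification of the cited ~20% figure, not new data.

```python
start_value, end_value, years = 2.19, 14.29, 10  # USD billion, 2022 -> 2032, from the chart above

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 20.6%, consistent with the ~20% growth cited
```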

The need for data governance

However, with the rapidly increasing amount of data to be analyzed and growing data complexity year over year, effective data utilization is no longer feasible without a defined standard process and tools in the organization that are capable of supporting the collection, processing, and storage of the necessary data assets in a more efficient way.

Managing the standardization of data assets can be challenging in the automotive industry, given that processes and data assets often need to comply with multiple software development standards and process models, including:

• V-model and ASPICE (Automotive Software Process Improvement and Capability Determination)

• ISO 26262 Functional Safety

• UNECE SUMS (Software Update Management System) R156

• UNECE CSMS (Cyber Security Management System) R155

To ensure the process of managing the availability, usability, quality, security and compliance of the data is performed objectively, without conflicts of interest across multiple standards, an integrated approach to data governance is needed. This is important to prevent the diversification of software development and data asset management approaches based on each standard. However, considering the significant effort needed to understand in detail the requirements of each standard clause, as well as to identify the similarities or gaps between the clauses, it may not be feasible for one or more software developers to undertake this task within the scope of their own software function or component development without useful guidelines. A robust data governance operational standard is needed, one that can consider the needs of different standards and identify the feasibility of fitting the process guidelines into different software functions or software component developments.

Moving towards data governance standards for better data quality

Adopting data governance standards involves a strategic approach to managing two distinct yet interrelated types of data: Project data and corporate internal data. This internal data encompasses tool configurations, process best practices, and more. By dissecting the data governance initiative into these two components, we can better understand the nuances involved in enhancing data quality, the implications for project management, and the overarching benefits for an automotive business.

Part 1: Governance of project data

• Scope, timeline, and cost for project managers: Implementing data governance standards for project data directly influences project scope, timelines, and costs. Standardized data governance practices enable project managers to define clear data management objectives, streamline data handling processes, and minimize the risk of data-related delays. This results in cost savings by reducing the time spent on correcting data issues and accelerates project delivery by ensuring data is correctly managed from the outset.

• Quality enhancement: Data governance standards ensure that project data — ranging from vehicle telemetry data to customer feedback — is consistently managed, processed, and stored. This uniformity in data handling boosts the quality of insights derived from the data, enabling more informed decision-making, and enhancing the overall quality of project deliverables.

• Why we need it: The automotive industry is rapidly evolving, with a growing emphasis on software-driven innovations. This evolution demands high-quality data to fuel decision-making, from vehicle design to customer experiences. Data governance standards ensure that the data used across projects is reliable, secure, and compliant with industry regulations, enabling the company to remain competitive and meet customer expectations.

• Latest trends and achievements: The market is increasingly moving towards data-centric approaches, with technologies such as AI and machine learning playing a pivotal role in automotive development. These technologies require vast amounts of high-quality data, making effective data governance more important than ever. Achievements in this area include the development of sophisticated data analytics platforms and tools designed to enhance data quality and governance in complex project environments.

• Market movement and customer demand: The shift towards standardized data governance reflects broader market trends towards transparency, security, and personalized customer experiences. Customers, including major automotive manufacturers, demand software solutions that are not only innovative but also built on a foundation of reliable and secure data. This demand underscores the importance of adopting robust data governance practices.


Part 2: Governance of corporate internal data

• Tool configurations and process best practices: For operations within a company, data governance encompasses the standardization of tool configurations and the establishment of best practices for processes. This internal focus ensures that the tools used across projects are configured to comply with automotive data governance standards, facilitating consistency and efficiency. Similarly, standardizing processes according to best practices minimizes operational variances, leading to more predictable and reliable project outcomes.

• Impact on project management: For project managers, the governance of internal data translates to more streamlined project execution. With standardized tool configurations and processes, project managers can more easily plan project timelines, scope, and budgets, knowing that the foundational elements of project execution are consistent and reliable.

• Cost saving, faster delivery, and quality: Standardizing internal data governance practices leads to significant efficiencies, reducing the learning curve for new team members and decreasing the time needed to set up new projects. This not only saves costs but also enhances the quality of the output by reducing errors and inconsistencies in the project execution phase.

• Benefits to corporate: Aligning with automotive data governance standards offers several key benefits. It enhances the company’s ability to deliver high-quality, compliant automotive software solutions efficiently and cost-effectively. Standardized data governance practices improve collaboration and knowledge sharing across projects, enabling a corporation to leverage its global expertise more effectively.

Luxoft’s position in data governance strategy:

Adopting data governance standards brings benefits to Luxoft in automotive software development, with better compliance and operational efficiency. Besides that, it is an integral part of Luxoft’s strategy to drive innovation in the automotive sector. By ensuring high-quality data management, Luxoft can accelerate the development of advanced features and services, from autonomous driving systems to personalized in-vehicle experiences. This strategic focus on data governance supports Luxoft’s mission to deliver cutting-edge solutions that enhance vehicle safety, performance, and connectivity.

Challenges in steering data governance

The journey towards implementing robust data governance standards, particularly in a complex and multifaceted environment like Luxoft’s automotive line of business, is fraught with challenges. Two of the most significant hurdles are managing the diversity of tools across projects and aligning priorities among stakeholders. These challenges, if not adequately addressed, can impede the successful adoption of data governance standards, affecting project delivery, quality, and the strategic alignment of data initiatives.

Challenge 1: Different tools

• Nature of the challenge: In the automotive sector, projects often require specialized software and tools for development, testing, management, and security. The variety of tools, each with its unique configurations and data handling capabilities, presents a substantial challenge for standardizing data governance practices. This diversity can lead to inconsistencies in data quality, security, and management across projects, making it difficult to enforce a unified data governance strategy.

• Implications for data governance: The use of different tools across projects complicates the establishment of common data standards, processes, and policies. It can result in data silos, where data is managed in isolation, leading to inefficiencies and a lack of transparency. Moreover, ensuring compliance with data protection regulations becomes more challenging when data is spread across multiple tools and platforms.

• Strategies for overcoming the challenge: To address the challenge posed by the diversity of tools, several steps can be taken:

- Tool rationalization: Evaluate and streamline the set of tools used across projects to minimize variability. This doesn’t mean using a one-size-fits-all approach but rather selecting tools that best align with data governance objectives and can be standardized across projects.

- Integration solutions: Implement data integration tools or platforms that can connect disparate tools, facilitating seamless data flow and centralizing data management.

- Common configuration standards: Develop and enforce standard configurations for the tools used across projects, ensuring consistent data handling practices.

Challenge 2: Priority from stakeholders

• Nature of the challenge: Data governance initiatives require the support and buy-in of stakeholders across the organization, including executive leadership, project managers, developers, and operational teams. For these stakeholders, the immediate impact on a current project is usually additional effort to set up the necessary requirements, while the long-term cost-efficiency benefits typically apply only to future projects. So, there are often competing priorities and different perceptions of the value of data governance, making it challenging to secure their commitment and ensure that data governance standards are consistently applied.

• Implications for data governance: Without broad stakeholder support, data governance initiatives can struggle to gain traction. Projects may continue to operate in silos, with varying degrees of adherence to data governance standards. This lack of uniformity can undermine the effectiveness of data governance, impacting data quality, security, and compliance.

• Strategies for overcoming the challenge: To align stakeholder priorities with data governance objectives, Luxoft can:

- Demonstrate value: Clearly articulate the benefits of data governance, including cost savings, improved efficiency, enhanced data quality, and compliance with regulations. Use case studies or pilot projects to showcase tangible benefits.

- Engage stakeholders early: Involve stakeholders in the development and implementation of data governance standards from the outset. This early engagement helps to ensure that the standards are practical, address the needs of different groups, and have broad support across the organization.

- Establish clear roles and responsibilities: Define and communicate the roles and responsibilities related to data governance, ensuring that stakeholders understand their part in the initiative and how it contributes to the overall success of the organization.

Guidelines definition and implementation

Defining guidelines

Adopting a federative approach to data governance for managing a variety of management, development, and testing tools across diverse projects involves a more collaborative and distributed model of governance. This approach is crucial when dealing with the complexity of different tool configurations across projects that need to align with standards such as ASPICE and cybersecurity.

• Conduct a federated audit: Initiate a federated audit involving representatives from each project team to catalog the tools and their configurations. This collaborative audit helps to understand the diversity in tool usage and configurations across projects and identifies shared challenges and needs.

• Establish federated governance teams: Form federated governance teams comprising stakeholders from various projects, including technical leads, compliance experts, and project managers. These teams are responsible for developing, reviewing, and refining the tool configuration standards and best practices. This ensures that the standards are not only compliant with ASPICE and cybersecurity, but also practical and adaptable to the needs of different projects.

• Define federated configuration standards: The federated governance teams work collaboratively to develop a set of configuration standards that balance the need for uniformity with the flexibility required by individual projects. These standards should provide a common framework within which projects can operate while allowing for necessary deviations to meet project-specific requirements.

• Document and share best practices: Develop a repository or knowledge base where best practices and standardized configurations are documented and easily accessible to all project teams. This resource should be dynamic, allowing for updates and additions as new challenges are encountered and solved.

• Federated review and consensus: Before finalization, the proposed standards and practices undergo a federated review process, ensuring consensus across different project teams. This collaborative review strengthens buy-in and ensures the guidelines are realistic and applicable across the board.

Implementation and usage

Once a guideline is defined, it needs to be put into operation to start bringing benefits and value to the corporation.

• Federated training programs: Implement training programs designed to cater to the diverse needs of various project teams while ensuring a common understanding of the federated data governance standards. Tailor training sessions to address specific configurations and best practices relevant to different toolsets.

• Flexible integration into project lifecycle: Integrate the federated configuration standards into project lifecycles with a focus on flexibility. Recognize that while the overarching standards provide a common framework, individual projects may require tailored implementations to address specific challenges or opportunities.

• Monitoring and adaptation mechanism: Establish a federated monitoring mechanism that allows for the tracking of compliance with the governance standards across projects, while also identifying opportunities for adaptation and improvement. This mechanism should support a dynamic governance model that evolves based on real-world feedback and changing project requirements.

• Continuous improvement and feedback: Encourage continuous feedback from all project teams within the federative governance framework. This feedback loop enables the ongoing refinement of tool configuration standards and best practices, ensuring they remain relevant and effective.

Federated data governance operating model

With the availability of guidelines, projects can kick off faster, teams collaborate and communicate more effectively, and important compliance needs are fulfilled. However, do all the projects use all the tools and all the data the same way? The answer will most probably be no.

Every project has different clients with different requirements. Therefore, there is no one-size-fits-all guideline in data governance. It is necessary for each project team to adapt the guidelines to its own needs. This approach is well modeled in a federated data governance operating model, where data governance standards and best practices are defined centrally by a steering team, while each local project team has the autonomy and resources to execute these standards in the way that best fits its specific project requirements.


Federated data governance

The federated data governance operating model is well suited to a corporation that usually has a distributed data landscape. In this model, the process owner board (POB) acts as the centralized steering team that coordinates and provides oversight of standardization and improvement needs. Decision-making is shared collaboratively with the localized project teams to determine the final standards and guidelines for processes and tools. Meanwhile, each localized project team uses these standards as the primary guideline while also identifying optimal operational methods. The feedback loop from the localized project teams to the centralized steering team remains open for continuous improvement, ensuring ongoing enhancements for an even better set of guidelines.
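To make the split of responsibilities concrete, here is a minimal, hypothetical Python sketch of this model: the centralized steering team (POB) publishes baseline standards, each localized project team overlays its own approved deviations, and feedback flows back for continuous improvement. The tool names and settings are illustrative assumptions, not actual Luxoft or project configurations.

```python
# Baseline standards published by the centralized steering team (POB) -- illustrative values only
CENTRAL_STANDARDS = {
    "issue_tracker": "Jira",  # hypothetical default tool
    "traceability": "requirements-to-test links required (ASPICE-style)",
    "static_analysis_gate": "no new violations",
    "data_retention_years": 10,
}


class LocalProjectTeam:
    """A localized project team: adopts the central baseline and adapts it where needed."""

    def __init__(self, name, overrides=None):
        self.name = name
        # Central baseline first, project-specific deviations layered on top
        self.config = {**CENTRAL_STANDARDS, **(overrides or {})}
        self.feedback = []

    def raise_feedback(self, message):
        """Feedback loop back to the centralized steering team for continuous improvement."""
        self.feedback.append(message)
        return message


# Two projects adapt the same baseline differently
infotainment = LocalProjectTeam("Infotainment", overrides={"issue_tracker": "codeBeamer"})
adas = LocalProjectTeam("ADAS", overrides={"data_retention_years": 15})

infotainment.raise_feedback("Baseline should list codeBeamer as an approved tracker")
print(infotainment.config["issue_tracker"])   # codeBeamer (local deviation)
print(adas.config["issue_tracker"])           # Jira (central default)
```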

Federated data governance operating model components – the who, why, what and how

Conclusion

As the automotive industry becomes increasingly competitive in terms of price and quality, the success of the federated data governance operating model emerges as a crucial factor in supporting long-term corporate benefits. This model facilitates centralized guidelines and standards based on the diverse experiences of different projects. It also empowers individual project teams to determine the most suitable execution methods, enabling projects to initiate and deliver more efficiently. This approach ultimately reduces costs while upholding project quality through the implementation of established best practices, configuration items, processes, and tools. By fostering trust in this operational model, it ensures that the best decisions are made across all responsible business units.

References

1. KPMG’s 24th Annual Global Automotive Executive Survey: https://kpmg.com/xx/en/home/media/press-releases/2024/01/confidence-dips-in-global-automotive-sector.html

2. Automotive and mobility trends to watch in 2024: https://www.rsm.global/insights/automotive-and-mobility-trendswatch-2024

3. Automotive Data Management Market – Global Industry Analysis and Forecast (2023-2029): https://www.maximizemarketresearch.com/market-report/automotive-data-management-market/147016/

4. Data Governance Policy: https://textileexchange.org/data-governance-policy/

5. Federated Data Governance: How to Boost Business Agility with Data Mesh: https://www.syntio.net/en/labs-musings/federated-data-governance-how-to-boost-business-agility-with-data-mesh/

6. The Benefits of Federated Data Governance: A Modern Approach to Data Management: https://www.linkedin.com/pulse/benefits-federated-data-governance-modern-approach-management-kamat/

7. Data Governance Frameworks: The Cornerstone of Data-Driven Enterprises: https://www.claravine.com/resources/datagovernance-framework/

8. The benefits of a flexible operating model in data governance: https://www.collibra.com/us/en/blog/the-benefits-of-a-flexibleoperating-model-in-data-governance

9. How to Achieve an Optimal Hybrid Data Governance Model: https://atlan.com/hybrid-data-governance/


Authors

Jaclynn Soo Swee Yee

Jaclynn Soo has a strong enthusiasm for designing and delivering good software products, with 15 years of experience. She previously worked as an embedded software architect and a product owner for telecommunications software products, then moved her career focus to project management in recent years. Currently, she is actively engaged in configuration management to build up and steer standardized structures, processes and guidelines for a software bill of materials (SBOM) in a global automotive organization with a complex departmental structure.

Alexander Dyakonov

Alexander Dyakonov is a certified Project Management Professional (PMP) possessing 10 years’ experience with automotive systems. He strives to obtain operational and communication simplicity while empowering teams at all levels within/outside of the organization to make decisions and deliver results. Currently, as a project management office manager, he is leading the process of engineering tool set unification for automotive.

[Figure: Federated data governance. Localized project teams (accounts) connect to a centralized steering team (POB, QMS, Solution, PMO) through a shared guidelines platform, knowledge base, processes, and tools.]

[Figure: Federated data governance operating model components (the who, why, what and how). Who: PMO, QMS, accounts, decision rights and accountability. Why: value statement and principles (data is an organizational asset, a single source of truth, of high quality and usable, safe and secure, and used properly). What: budget, scope, timeline, standards, rules and definitions, data performance metrics, data governance RACI chart, critical data identification, data policies, business data glossary and business rules. How: rules, roles, and processes across the data lifecycle (plan, collect, process, store, analyze, share, and reuse data).]

Technical insight: Navigating the digital landscape

In today’s rapidly evolving digital landscape, the role of IT professionals has never been more critical. That role includes, but is not limited to, managing complex systems and networks and implementing cutting-edge technologies, and the part we play in driving innovation, providing cybersecurity and ensuring the smooth operation of organisations across industries is remarkable. In this article, I present different terms and methodologies used to describe technology and the technical consultant’s role in it, together with the latest developments in the industry. As the backbone of most modern businesses, IT professionals bring technical expertise, problem-solving skills and the adaptability needed to navigate the complexities of the digital age. As an IT professional with 10+ years of experience in the industry, I would like to share my perspective on the impact we have on organizations, and the skills and qualities that define success in the dynamic world of technology. Luxoft has provided various opportunities to aspiring IT consultants throughout the world. I joined Luxoft as a Murex Application Analyst in 2017 with 3 years of experience in projects based on Oracle’s implementation of Structured Query Language. Most of my experience is in Datamart reporting based on the Murex trading application. I have also worked on multiple banking treasury projects and on projects generating tickets for an airline ticketing system, where I learned a lot about the industry, real-time challenges, and tactics to resolve them.

Internet and information

Over the last few years, the Internet has become an essential part of day-to-day activities and has brought the world closer than ever before. People can share information and communicate from anywhere with an Internet connection. Presently, the entire world is experiencing unparalleled digital transformation driven by technological advances. Emails, video conferences, social networking, online job portals, grocery stores, food delivery, ticketing, shopping sites: every aspect of individual life depends in some way on the Internet. We are navigating this ever-changing digital landscape to embrace innovation and harness the power of digital technology. This has helped create more and more jobs for technical professionals, as there is an exponential need for data storage, updates, and faster access to stored data. IT professionals are responsible for helping mankind live a life of luxury and convenience.

Software

Every IT professional starts with a basic understanding of programming languages, which then grows with real-time work assignments. I started with a basic understanding of SQL. It has long been one of the most widely used languages alongside Java, PL/SQL and MySQL, and its market coverage is significant. The career-shaping policies of Luxoft have helped me grow every single day. Internal Mobility gives individuals the chance to explore industry-wide opportunities. Basic SQL concepts include data manipulation, data interpretation and storing huge volumes of data securely, with the option to provide intended access to data. As the digital world grows faster, every action generates a record, and more and more digital space is needed to store it all. Many big organizations use relational databases to store data efficiently and retrieve it as and when needed. Structured Query Language, also known as SQL, is a language in which a set of statements can be used for managing relational databases. It originated at IBM in the 1970s, with Oracle shipping the first commercial implementation in 1979; Oracle also provides an extended, procedural version known as PL/SQL. In Oracle Database, data is stored in tables. SQL provides a comprehensive set of commands for data retrieval, updates and administration. It offers robust and intuitive ways of performing a wide range of database operations, whether retrieving specific data subsets, performing complex calculations or modifying database structures.

The capabilities of Oracle SQL go beyond the basic functions of a programming language, providing advanced and optimized features for scalability, security and performance to banks and other financial corporations, gaming applications, data warehousing, business intelligence tools and other online transaction processing (OLTP) systems.
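The short sketch below illustrates the kind of statements described above: creating a table, inserting rows, and retrieving a specific subset with an aggregate calculation. It uses Python’s built-in sqlite3 module rather than Oracle Database purely so the example is self-contained and runnable; the table and figures are invented for illustration.

```python
import sqlite3

# In-memory database so the example runs anywhere without an Oracle installation
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition: create a table
cur.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, desk TEXT, notional REAL)")

# Data manipulation: insert rows
cur.executemany(
    "INSERT INTO trades (desk, notional) VALUES (?, ?)",
    [("FX", 1_000_000.0), ("Rates", 250_000.0), ("FX", 730_000.0)],
)

# Data retrieval: a specific subset with an aggregate calculation
cur.execute("SELECT desk, SUM(notional) FROM trades WHERE desk = ? GROUP BY desk", ("FX",))
print(cur.fetchall())  # [('FX', 1730000.0)]

conn.close()
```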

Technology, COVID and remote culture for IT: Closing the doors of physical offices

COVID-19 greatly disturbed the global economy. The outbreak was declared a pandemic by the World Health Organization in 2020. It was also a time when vulnerabilities in existing systems were uncovered, offering a chance for improvement. Telehealth and mobile health apps were used heavily during the spread of COVID, and the situation prompted organizations to embrace digital collaboration tools and redefine the future of work. Technology kept people connected virtually and helped businesses keep running by adapting to challenges. Virtual communication modes like video conferencing, team messaging and other asynchronous communication have become popular. The pandemic ushered in significant changes to the way businesses operate, leading to a surge in digital transformation initiatives. This, in turn, is fueling heightened demand for information technology (IT) solutions.


The shift from traditional systems to AI

The shift from traditional systems to artificial intelligence (AI) marks a major leap forward in how computers process and interpret data to perform tasks. I would like to present an overview of this shift:

Examples of custom design

Traditional systems require explicit instructions, called code, to perform specific tasks. Programmers define algorithms and rules for processing data and producing desired outputs. These systems follow a standardized approach, where inputs always produce predictable results. However, challenges arise when handling complex, unstructured data in huge volumes without well-defined rules.

Transition to deep learning

Machine learning and deep learning are both types of AI. Machine learning is AI that can adapt automatically with minimal human interference, and deep learning is a subset of machine learning. Deep learning algorithms use deep neural networks modeled loosely on the human brain: multiple connected units, termed neurons, that extract patterns from raw data, made practical by advances in computing power and data availability. In real life, deep learning is used in computer vision, image processing, automated driving, signal processing, speech recognition and many more areas.
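As a minimal sketch of what “multiple connected neurons” means in practice, here is a tiny two-layer forward pass written with NumPy (assumed to be available); real deep learning frameworks add training, many more layers and GPU support on top of this idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: 4 samples with 3 features each (e.g., raw sensor readings)
x = rng.normal(size=(4, 3))

# Layer 1: 3 inputs -> 5 hidden neurons, each connection has a weight, followed by a ReLU
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
hidden = np.maximum(0, x @ w1 + b1)

# Layer 2: 5 hidden neurons -> 1 output, squashed to a probability with a sigmoid
w2, b2 = rng.normal(size=(5, 1)), np.zeros(1)
output = 1 / (1 + np.exp(-(hidden @ w2 + b2)))

print(output.ravel())  # one value per sample; training would adjust w1, b1, w2, b2
```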


Advanced machine learning

Machine learning has emerged as a subset of general artificial intelligence that focuses on developing algorithms. These algorithms can learn trends from data feeds on their own. Unlike traditional algorithms, ML algorithms help computers learn and make predictions from data. Common machine learning algorithms are linear regression, logistic regression, decision trees and random forests. Each of these is suitable for different applications and data sets.
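As a brief illustration of two of the algorithms named above, the sketch below fits logistic regression and a random forest on a synthetic dataset using scikit-learn (assumed to be installed); hyperparameters are left at their defaults.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, labeled data standing in for a real data feed
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=42)):
    model.fit(X_train, y_train)             # learn patterns from the training data
    accuracy = model.score(X_test, y_test)  # predict on unseen data and measure accuracy
    print(f"{type(model).__name__}: {accuracy:.2f}")
```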

Expanded AI applications

Embracing AI technologies has transformed the operations of many industries, including health care, finance, manufacturing and transportation. AI-facilitated solutions help to automate day-to-day tasks, improve decision-making and provide insights by processing huge amounts of data efficiently. AI applications provide services in the form of virtual assistants, recommendation systems, autonomous vehicles, medical diagnostic tools and predictive maintenance systems.

AI has the potential to cause a marked change over time. However, it is accompanied by challenges such as data privacy and information security.

To summarize, the transition from traditional programming to AI represents a shift in how computers process information: It started with algorithm-based systems and now relies on data-driven models capable of learning and adapting autonomously. This transformation has profound implications for industries, economies and perhaps society as a whole. It has shaped human lives differently by bringing advancement.

AI milestones

Oxford Languages defines AI as “the theory and development of computer systems able to perform tasks that normally require human intelligence.” Over the years artificial intelligence (AI) has emerged as a game-changer across various sectors, from health care and finance to manufacturing and retail. Python is easy to learn and is enriched with libraries and integration capabilities, which make it a preferred choice for AI development. It is empowering developers to create sophisticated and intelligent solutions that drive innovation capable of transforming the world. From personalized customer experiences to predictive analytics and autonomous systems, AI is revolutionizing the way organizations operate and paving the way for a future powered by intelligent automation.

As digital technologies grow rapidly and even the most valuable information is transferred digitally, cybersecurity has become paramount in safeguarding sensitive data and protecting against cyber threats. Organizations need to develop new skills or upgrade to new applications such as cloud computing to mitigate risks.

Cloud computing

Cloud computing has revolutionized the way businesses manage their existing resources and deploy new ones when necessary, providing scalability, flexibility and lower costs. Cloud providers such as AWS offer three service models: Infrastructure as-a-Service (IaaS), Platform as-a-Service (PaaS) and Software as-a-Service (SaaS). Cloud computing is also driving digital innovation and business agility through cloud-native development and hybrid cloud solutions, and organizations are using the power of the cloud to accelerate innovation and achieve operational efficiencies. This is what I learned during my AWS training, and I am looking forward to obtaining AWS certification.

Evergreen programming

Python, developed by Guido van Rossum, was first released in 1991. Python is widely used in various fields such as web development, data analysis, artificial intelligence, scientific computation and more. The main features of Python are:

Readable and easy syntax

Python's grammar is designed to be easy to read and understand, making it accessible to beginners and experts alike. Python code is executed line by line by the interpreter, which means you don't need to compile it before running it.

Dynamic typing

Python uses dynamic typing, which means a variable's type is determined at runtime.

Multi-paradigm language

Python supports different styles of writing code: One can write Python code in a procedural, object-oriented, functional or imperative manner. For this reason, Python is considered a "Swiss army knife" in the developers' toolbox.
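A tiny example of both properties, with the same computation written once in a procedural and once in a functional style, plus a variable whose type changes at runtime:

numbers = [1, 2, 3, 4, 5]

# Procedural style: an explicit loop and an accumulator.
total = 0
for n in numbers:
    total += n * n

# Functional style: the same result with map() and sum().
total_functional = sum(map(lambda n: n * n, numbers))
assert total == total_functional

# Dynamic typing: the type follows the value, not the variable.
value = 42            # value is an int here...
value = 'forty-two'   # ...and a str here, with no declaration needed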

Extensive library

Python's standard library has modules for different tasks like file input/output, networking and more, with no need to install additional packages. For third-party libraries, pip, Python's built-in package manager, makes installation easy. Examples include NumPy and pandas for data analysis, TensorFlow and PyTorch for machine learning, Django and Flask for web development and many more.

Platform independent

Python code can run on operating systems including Windows, macOS and Linux.


Latest technology

As we look ahead, rapid technological advances like quantum computing, robotics, blockchain, augmented reality, the metaverse and 5G networks show no signs of slowing down in 2024. They present opportunities and challenges for industry, government and society alike. Staying aware of these topics and embracing innovation will be key to sustainable growth and competitive advantage in the digital age. The Internet of Things (IoT) continues to be a trending technology that has gained significant momentum recently. It is a system that uses sensors to collect data and respond intelligently without manual interference.

As I conclude, I would like to add that we live in a time of unprecedented technological innovation and disruption. The artificial intelligence currently available is a general form; in the future, super artificial intelligence may emerge and add even more value in leading the way to an advanced human life. From artificial intelligence to cloud computing and the Internet, emerging technologies are creating new jobs, driving digital transformation and opening up new opportunities for growth and innovation, but these opportunities come with challenges, especially in cybersecurity, data privacy and personnel changes. In today's world, all the information is one click away. It is very easy to stay well informed with the help of the Internet and take a proactive approach to challenges. This can help everyone navigate the complexity of the digital landscape and thrive in an evolving technological world.


References

1. John Daintith and Edmund Wright, "A Dictionary of Computing (6th ed.)", Oxford University Press, Print Publication Date: 2008, Print ISBN-13: 9780199234004, June 2015.

2. Joel Murach, “Oracle SQL and PL/SQL for Developers” Publisher: Mike Murach & Associates, Incorporated, 2014.

3. Wes McKinney, “Python for Data Analysis”, ”O’Reilly Media, Inc.”, 2013.

Devyani graduated with a bachelor’s degree in engineering from Mumbai University. She has been working at Luxoft as a senior consultant for over six years and has a total of more than nine years of experience. Her expertise lies in the Murex Datamart development area. Outside of work, she enjoys practicing yoga and traveling.

Back to content
Start the learning journey today. Contact us to know more: it-trainings@luxoft.com

Python for everybody

Build a DIY virtual assistant with less than one hundred lines of code

Our days are more than ever full of things to do (or that we would like to do). So why not use some tools to automate minor tasks or to more easily get some information we need at a certain moment of the day?

In this short article, a virtual assistant will be implemented from scratch and all the development steps will be explained in more detail.

Development environment preparation

Before starting to code the virtual assistant, I suggest setting up a virtual development environment in order to install all project dependencies locally, without affecting the system-wide code base.

Here are the command line statements I ran to create and activate a venv on my machine, which is running Windows 10:

py -3.11 -m venv va-venv-3.11

.\va-venv-3.11\Scripts\activate
python -m pip install --upgrade pip

On macOS or a Linux distribution, the first command should instead launch the appropriate Python interpreter installed at the system level:

python3.11 -m venv va-venv-3.11

The reader who is not familiar with the concept of a venv is strongly encouraged to start applying this good practice when starting a Python project. In addition to separating the project's dependencies from the Python modules natively installed by the OS (and consequently not overwriting them), a venv makes it easy to create multiple versions of the project with different versions of the Python interpreter and related modules.

Virtual assistant main components

From a high-level perspective, a virtual assistant can be decomposed into three main components:

1. A speech recognizer, able to convert the input audio signal into text

2. An agent, able to understand the meaning of the incoming text and to compute a related response

3. A text-to-speech converter, to convert the mentioned response back to an audio signal

(Diagram: Speech recognizer → Core agent → Text-to-speech converter)

In the next paragraphs, the implementation of these three main components will be treated more in depth.


Speech recognizer implementation

For this particular component of the virtual assistant, the SpeechRecognition library will be used.

In order to install the related modules in the virtual environment, run the following command:

pip install SpeechRecognition

Here is a basic usage of the module:

import speech_recognition as sr

# Initialize the speech recognizer...
recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recording = recognizer.listen(source)

# The object of the recording is returned after audio signal is detected...
sentence = recognizer.recognize_google(recording)

The Recognizer class, which is the main entity of the module, is instantiated and connected to the microphone interface in order to listen to incoming signals. As soon as speech is detected, a recording of it is created and is sent to a Google API to convert it to text.

Agent implementation

The core component of the virtual assistant will be a very primitive AI agent, able to compute some actions depending on the prompt received.

For the project in question, in particular, a very humble Google calendar manager has been implemented.

In order to interface with the Google Calendar API, the following command should be launched to install the related Python module:

pip install gcsa

Here are some lines of code to download the content of the chosen Google Calendar and print it to the console:

import os

from gcsa.google_calendar import GoogleCalendar

# Initialize Google Calendar API...
this_folder = os.path.dirname(__file__)
google_calendar = GoogleCalendar('rodolfo.cangiotti@dxc.com',
                                 credentials_path=os.path.join(this_folder, 'credentials.json'))

for event in google_calendar:
    print(event)

For more information about how to configure your Google account and get the file with the related credentials, refer to the gcsa documentation listed in the references.

Text-to-speech converter

For the last component of the virtual assistant, the pyttsx3 package will be used, even though it has not been actively updated since the summer of 2020.

Here is the pip command to install the related modules:

pip install pyttsx3

For a basic usage of this module, here is a Python script portion:

import pyttsx3

# Initialize text-to-speech engine...
engine = pyttsx3.init()
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)

# Say something...
engine.say('Hello world!')
engine.runAndWait()


Component aggregation

By combining the code of the components treated above, reorganizing it in a more readable way and adding some mechanisms to start or end the conversation with the virtual assistant, the following main.py file was produced:

import datetime
import os

import pyttsx3
import speech_recognition as sr
from gcsa.google_calendar import GoogleCalendar
from speech_recognition import UnknownValueError

# Initialize the speech recognizer...
recognizer = sr.Recognizer()

# Initialize text-to-speech engine...
engine = pyttsx3.init()
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)

# Initialize Google Calendar API...
this_folder = os.path.dirname(__file__)
google_calendar = GoogleCalendar('rodolfo.cangiotti@dxc.com',
                                 credentials_path=os.path.join(this_folder, 'credentials.json'))

# Define some configuration parameters for the script...
AGENT_NAME = 'Penny'


def listen(audio_source):
    try:
        recording = recognizer.listen(audio_source)
        # The object of the recording is returned after audio signal is detected...
        sentence = recognizer.recognize_google(recording)
    except UnknownValueError:
        sentence = None  # The engine wasn't able to understand the sentence...
    except Exception as e:
        print('ERROR >>>', repr(e))
        raise e
    return sentence


def render(text):
    engine.say(text)
    engine.runAndWait()


def wait_for_agent_name(audio_source):
    while True:
        sentence = listen(audio_source)
        if not isinstance(sentence, str):
            continue
        sentence = sentence.strip()
        if AGENT_NAME.lower() in sentence.lower():
            return


def converse(audio_source):
    render('Hello, do you need any help?')
    while True:
        sentence = listen(audio_source)
        if not isinstance(sentence, str):
            continue
        sentence = sentence.strip()
        sentence = sentence.lower()
        if 'the appointments' in sentence and \
                'today' in sentence:
            render(f'Here are the appointments for today, {datetime.date.today()}:')
            for idx, event in enumerate(google_calendar, 1):
                render(f"Event no. {idx}: {event.summary} "
                       f"from {event.start.strftime('%H %M')} to {event.end.strftime('%H %M')}")
            render("That's all, no other events found!")
        elif 'thank you' in sentence or \
                'thanks' in sentence:
            render("You are welcome, it's a pleasure for me to help you!")
            break  # Stop conversation...
        elif 'never mind' in sentence or \
                'no problem' in sentence:
            render('Okay, do not hesitate to reach out to me again if you need something else!')
            break  # Stop conversation...
        else:
            render("I am afraid I didn't understand what you said. Might you please repeat it?")


def main():
    try:
        print('Starting agent...')
        with sr.Microphone() as source:
            while True:
                wait_for_agent_name(source)
                converse(source)
    except KeyboardInterrupt:
        print('Terminating...')


if __name__ == '__main__':
    main()

Conclusions and further developments

The project treated in this article is a humble example of the potential of Python as a programming language. In particular, its simplicity, conciseness and the level of maturity it has reached — considering also the variety of external modules available nowadays — demonstrate that with only a few lines of code, and even without very in-depth knowledge of the subject, it is possible to create valuable tools.

This project aimed only to be a starting point, with a lot of room for improvement. Further developments might include the usage of a more up-to-date module for text-to-speech conversion, the utilization of more precise speech recognition algorithms and — last but not least — the injection of pre-trained large language models (LLMs) in order to make the conversation with the virtual assistant more natural and broader with regard to the topics that can be discussed.

References

1. Python Launcher for Windows documentation: https://docs.python.org/3/using/windows.html#launcher

2. Python venv documentation: https://docs.python.org/3/library/venv.html

3. Speech Recognition Python package: https://pypi.org/project/SpeechRecognition/

4. gcsa Python package: https://pypi.org/project/gcsa/

5. Google Calendar Simple API documentation: https://google-calendar-simple-api.readthedocs.io/en/latest/index.html

6. pyttsx3 Python package: https://pypi.org/project/pyttsx3/

7. TIOBE Index: https://www.tiobe.com/tiobe-index/

8. Build Your Own Alexa With Just 20 Lines of Python: https://plainenglish.io/blog/build-your-own-alexa-with-just-20-lines-of-python-ea8474cbaab7

Back to content

Rodolfo is a self-taught software developer, with particular experience in developing web and desktop applications. He holds a bachelor's degree in Electronic Music from the Conservatory G. Rossini (Pesaro, ITA), where — albeit from a musician's perspective — he learned the foundations of programming and of thinking algorithmically. He is deeply fascinated by the intersection between digital information technologies — in particular, emerging ones like machine learning, AI and IoT — and the arts. He firmly believes in writing clear and eloquent code, in the open-source philosophy and in sharing knowledge.


Fractional indices

In this article, using simple logical reasoning, I will try to explain the process of constructing fractional indices. Over the course of the article, we will delve into the intricacies of the algorithm and its possible applications. Next, we will touch on the topic of optimizing index size in edge cases. We will also look at how to modify the algorithm to support simultaneous use by many users. By the end of this study, readers will have a thorough understanding of the principles behind fractional indexing.

Formulation of the problem

The challenge at hand is to sort records with minimal disruption to the existing sequence. Consider a scenario where a collection of rows is ordered based on an index field, and the objective is to relocate or insert a new line without impacting any other records.

Why might we need this?

This problem occurs in various applications, especially in the context of cloud database management. In such environments, where changing rows incurs costs, the importance of minimizing changes becomes obvious. Performing a complete row renumbering after each permutation can result in significant economic overhead, leading to the need for a more efficient approach.

Moreover, the challenge extends to scenarios like peer-to-peer text editing systems. Here, renumbering all rows can lead to conflicts during synchronization with other peers, thereby necessitating a strategy to mitigate such conflicts. By focusing solely on the rows affected by user actions, we aim to reduce conflict instances and enhance system stability.

The question arises: Is it theoretically possible to achieve such minimal disruption sorting?

The idea of building an algorithm

Indeed, the concept of fractional indexing presents a straightforward solution to our sorting challenge, akin to how one might intuitively approach the task on paper. For instance, envision writing down a list of items, and realizing the need to insert an additional item between two existing ones. Instead of renumbering the entire sequence, you simply designate the new item with a fractional index, such as 1.5.

While this intuitive approach forms the basis of fractional indexing, its direct application is hindered by the limitations of floating-point number representation. Floating-point numbers have finite precision, restricting our ability to split them indefinitely without encountering accuracy constraints.

To refine this concept, we introduce the notion of unattainable boundaries within the index range. Here, we designate zero as the unattainable upper bound and one as the unattainable lower bound, assuming rows are sorted in ascending index order.

Consider the scenario of inserting a row into an empty list: By utilizing these unattainable boundaries, we calculate the index as the midpoint between them, yielding (0+1)/2=0.5. Similarly, when inserting a row above an existing one, the new index is computed as the midpoint between the unattainable upper bound and the index of the previous row, resulting in (0+0.5)/2=0.25. Inserting between existing rows involves calculating the average of their indices, yielding (0.25+0.5)/2=0.375 in this case.


Upon closer examination of the resulting indices, we observe that the initial “zero-dot” prefix is common to all indices and can be disregarded. Furthermore, representing the index tails as strings or byte arrays facilitates lexicographic sorting, preserving the order of indices. This flexibility allows us to extend beyond numerical indices, incorporating characters from sets like base64 or even arbitrary bytes, provided our application or database supports lexicographic sorting of such arrays.

Insertion between indices

How to calculate a new index value between two existing ones?

To determine a new index value situated between two given indices, let’s take the byte arrays P1=[0A 58] and P2=[7B CD F2] as examples. Our approach leverages the concept of rational numbers, where trailing zeros don’t affect the value. E.g. 0.1 and 0.100 are the same number. This allows us to adjust the lengths of the indices by adding zeros as necessary.

Aligning the lengths of the arrays is essentially multiplying our rational numbers by some common base so that they become integers. By treating these length-aligned indices as large integers, we can compute their arithmetic mean:

Pnew = (P1 + P2) / 2 = (P1 + P2) >> 1

As evident from the aforementioned formula, achieving this merely requires executing two straightforward operations on arrays of bytes: Addition and right shifting by one bit. Both operations can be easily implemented for an arbitrary set of bytes. To do this, you just need to perform the operation on a pair of bytes and carry the remainder to the next one. Importantly, it’s unnecessary to retain all resulting numbers. Once a byte that differs between P1 and P2 is encountered, subsequent bytes become insignificant and can be discarded.

For instance, employing this method on our example arrays yields a new index that is only one byte long.
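The following minimal Python sketch implements the procedure just described (align the lengths, add, shift right by one bit, then keep the shortest usable prefix); the function name and the prefix-based truncation are illustrative choices rather than a reference implementation, and degenerate inputs that represent the same fractional value are not handled.

def midpoint(p1, p2):
    # Return an index that sorts lexicographically strictly between p1 and p2 (p1 < p2).
    # One extra zero byte of padding guarantees enough precision even when the two
    # indices are adjacent at their current length.
    n = max(len(p1), len(p2)) + 1
    a = int.from_bytes(p1.ljust(n, b'\x00'), 'big')
    b = int.from_bytes(p2.ljust(n, b'\x00'), 'big')
    mid = ((a + b) >> 1).to_bytes(n, 'big')
    # Truncation: keep the shortest prefix that still sorts strictly between p1 and p2.
    for i in range(1, n + 1):
        candidate = mid[:i]
        if p1 < candidate < p2:
            return candidate
    return mid

print(midpoint(bytes([0x0A, 0x58]), bytes([0x7B, 0xCD, 0xF2])).hex())   # prints '43'

Run on the example arrays P1=[0A 58] and P2=[7B CD F2], it returns the single byte [43], which indeed sorts between them.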

Memory estimation

The algorithm’s worst-case scenario occurs when continuously inserting new rows in the same position. With each insertion, the index range narrows, leading to a lengthening of the index. But how rapidly does this growth occur?

By implementing the algorithm with byte arrays and conducting 10,000 insertions at the list’s outset, we observe that the maximum index size reaches 1250 bytes. Thus, each insertion augments the index length by merely one bit.

This outcome is commendable, as one bit represents the minimum information size and appears difficult to improve upon. In fact, almost all descriptions of the algorithm stop there. However, it’s important to address edge cases separately, particularly insertions at the list’s very beginning or end. In these instances, a single open boundary exists, presenting an opportunity for optimization.

For example, consider a peer-to-peer text editor like Notepad, but with the rows sorted by fractional index. Every time we add a new row, our index grows by one bit. If you insert a line in the middle, nothing can be done about it. But when writing text, adding lines at the very end is the more likely and natural pattern. Thus, by optimizing the addition of a new index at the end of the list, we can reduce storage overhead.

Optimizing insertions at the edges

Consider a straightforward scenario where the final index in our list is denoted as P1=[80 AB], and we aim to generate a subsequent index, Pn. Employing the preceding algorithm, we derive the new index value as Pn=[C0]. However, upon inspection, this increment appears too substantial. Instead, a more nuanced approach is warranted: Simply incrementing the first byte by one suffices.

Given that the initial index is precisely in the middle of the range, this observation facilitates approximately 127 insertions at the list's end (or beginning) per first byte of the index: One new byte (8 bits) spread over about 127 insertions equates to an approximate increase of 0.06 bits per insertion.

Moreover, leveraging the property of rational numbers, subsequent bytes start at zero, enabling an additional 255 insertions per index byte; 8 bits over 255 insertions translates to roughly 0.03 bits per insertion.

An essential aspect of this algorithm modification is the incrementation of the index by one byte solely when reaching the edge values of the byte (FF for insertion at the end or 00 for insertion at the beginning of the list). As a result, the infrequency of reaching extreme values reduces the occurrence of new bytes.

By utilizing byte pairs for incrementation, efficiency is significantly heightened. This approach achieves remarkable values of 0.0001 bits for each new index. In such cases, identifying the first byte, excluding edge values, becomes pivotal, followed by incrementing the subsequent byte.

In essence, edge cases can be more efficiently managed compared to the basic algorithm. However, this optimization comes at the expense of the initial bytes of the index.

For our Notepad example, this means that indices will only grow by one bit when inserting into the middle of text, but adding lines to the end will cost relatively next to nothing.


Another example where this approach works well is a task or priority list application. Imagine a list with three tasks that you constantly reorder in random order. Without optimization, each move costs you one bit of index. But the list has only three tasks, so the probability that a task is moved to the very beginning or end is quite high, and in that optimized case the index is trimmed back to its initial byte size.

Concurrency

Another aspect worth addressing is the concurrent editing of the list by multiple users simultaneously. What happens if two independent clients attempt to insert a row in the same location at the same time?

Let’s illustrate this scenario with a simple example involving two lines: P1=[05 12] and P2=[07 0A]. Let’s assume two clients endeavor to insert a new line between P1 and P2.

According to our algorithm, given the identical input data, both clients will obtain the same values for the inserted indices: Pa=Pb=[06]. This poses two significant issues. First, it leads to an undefined row order since the indices are identical, making it impossible to determine which should be higher. Second, and more crucially, the identical indices render us unable to insert anything between them.

To address this challenge, it’s imperative to ensure that the generated indices are unique. Here, we leverage a key characteristic of our indices: If we possess a list of unique values, appending any suffix to any index will not alter the row order. Consequently, we can introduce small, unique deviations for each client, guaranteeing the uniqueness of the generated values.

Such unique suffixes can be generated either each time a record is created or once at the application’s inception. The length of these suffixes can vary, balancing the likelihood of random collision with the additional memory required.

Conclusion

In summary, we have successfully developed an algorithm that efficiently sorts records with minimal list alterations. By addressing challenges such as concurrent client operations, we have enhanced the algorithm’s flexibility and efficiency. Notably, we have optimized insertion cases at the list’s beginning or end, ensuring robust performance across various scenarios.

References

https://www.figma.com/blog/realtime-editing-of-ordered-sequences/

Back to content

For example, in one of my pet projects I implemented the generation of a unique suffix 6 bytes long, according to the following rules:

The first two bits are a constant zero-one. This is needed to break degenerate cases in which the suffix consists of all zeros or all ones, since we know that such suffixes can have a significant impact on the length of the index at the beginning or end of the list.

The next 21 bits are random. This order allows us to reduce the expected length of the index so there is less chance of identical bytes, and we can truncate the index earlier when inserting a row between two existing ones.

And the last 25 bits are the truncated Unix time stamp. This is an annual cycle, but it allows us to significantly reduce the likelihood of generating duplicate suffixes because the calculation is done once at application startup.
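One possible way to pack those 48 bits in Python is sketched below; the bit layout mirrors the rules just described, while the function name and the use of os.urandom for the random part are just one possible implementation.

import os
import time

def make_client_suffix():
    # 6-byte suffix: constant bits 0 and 1, then 21 random bits, then a 25-bit Unix timestamp.
    random_bits = int.from_bytes(os.urandom(3), 'big') & ((1 << 21) - 1)
    time_bits = int(time.time()) & ((1 << 25) - 1)
    value = (0b01 << 46) | (random_bits << 25) | time_bits
    return value.to_bytes(6, 'big')

print(make_client_suffix().hex())   # output varies per run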

Author

Andrei Mishkinis

Andrei Mishkinis is a senior software developer at Luxoft Serbia, where he is focused on developing and enhancing the C# translator for a static code analyzer. With over fifteen years of software development experience, his journey has been diverse, covering various domains such as web development, back-office solutions and even game development.

Bit number: 0 | 1 | 2-22 | 23-47
Value: 0 | 1 | Random | Unix time stamp

Embracing quantum potential in finance

Unlocking a new era of security and efficiency

The emergence of quantum computing has caused a paradigm change in the finance sector, presenting hitherto unseen chances to transform security procedures, operational effectiveness and risk reduction. At the same time, quantum algorithms such as Shor's can, in principle, break RSA encryption, which is used for the majority of secure communications today. In this context, where existing non-quantum schemes will eventually be compromised, we investigate quantum cryptographic schemes that enable us to exchange data secretly. Quantum cryptography exploits the science of quantum mechanics to perform cryptographic tasks. The major advantage of quantum key distribution (QKD) is its ability to detect the presence of an eavesdropper and compute a bound on the amount of information the eavesdropper has gained. This would not be possible with any classical key distribution technique, as the detection relies on the unique properties of quantum physics.

Robust security measures are necessary in the finance sector given the current environment, which is marked by an increase in data breaches and cyberattacks. By using its unmatched computational capacity to create encryption algorithms that are nearly immune to traditional hacking approaches, quantum computing offers a disruptive alternative. Financial institutions may strengthen their defences against hostile actors, protect sensitive data, and foster confidence in the digital economy by utilizing the concepts of quantum mechanics.

Quantum computing may also make complicated financial procedures easier, such as algorithmic trading, portfolio optimization, and risk assessment. Financial professionals may traverse the complexity of this developing industry and establish themselves at the forefront of innovation in finance by comprehending the revolutionary possibilities of quantum technology.

In the digital age, quantum cryptography redefines data integrity, authentication and privacy by providing unmatched security measures. It is a new frontier for the computing industry.

Supporting algorithms

Shor’s algorithm

It is one of the most spectacular algorithms of quantum computing, allowing fast factorization of large integers. The RSA algorithm is the most widely used public key encryption algorithm and is considered the backbone of online commerce. It is based on the difficulty of factorizing large integers: The user selects two prime numbers, P1 and P2, which form their private key, and transmits to everyone their product, n = P1 * P2. To decode a message, one needs to know the values of P1 and P2.

Shor's algorithm allows quantum computers to efficiently find P1 and P2 from n and thus to read practically all such secret messages. This algorithm is the main reason why people are investing in the design of quantum computers.
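To make the roles of P1, P2 and n concrete, here is a toy RSA round trip with textbook-sized primes; the specific numbers are purely illustrative, real keys use primes hundreds of digits long, and Python 3.8+ is assumed for the modular inverse.

# Private primes and public modulus.
P1, P2 = 61, 53
n = P1 * P2                        # 3233, published to everyone
phi = (P1 - 1) * (P2 - 1)          # kept private
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public pair (n, e)
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert decrypted == message

# Classically, recovering P1 and P2 from n alone is infeasible at realistic key sizes;
# Shor's algorithm running on a large quantum computer would remove that barrier.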

Quantum physics

Hilbert space

Quantum theory is based on two constructs: Wavefunctions and operators. The state of a system is represented by its wavefunction, observables are represented by operators. Mathematically, wavefunctions satisfy the defining conditions for abstract vectors, and operators act on them as linear transformations. In quantum mechanics, the state of a physical system is represented by a vector in a Hilbert space, a complex vector space with an inner product.

Let ψ be the state of a particle. The collection of all functions of x constitutes a vector space, but for our purpose it is much too large. To represent a possible physical state, the wavefunction ψ must be normalized.

The set of all square-integrable functions f(x) on a specified interval, that is, all functions such that

∫ |f(x)|² dx < ∞,

constitutes a vector space. Physicists call it Hilbert space. In quantum mechanics, then, wavefunctions live in Hilbert space.


Qubit

The space of possible polarization states of a photon is an example of a quantum bit, or qubit. It is the basic unit of quantum information. In Dirac's notation, |x⟩, pronounced "ket x", where x is an arbitrary label, denotes a vector representing a state of a quantum system. A vector |v⟩ is a linear combination of vectors |s1⟩, |s2⟩, ..., |sn⟩ if there exist complex numbers a1, ..., an such that:

|v⟩ = a1 |s1⟩ + a2 |s2⟩ + ... + an |sn⟩

A set of vectors S generates a complex vector space V if every element |v⟩ of V can be written as a complex linear combination of vectors in the set: Every |v⟩ ∈ V can be written as |v⟩ = a1 |s1⟩ + a2 |s2⟩ + ... + an |sn⟩ for some elements |si⟩ ∈ S and complex numbers ai.

Photon polarisation — a quantum explanation

Quantum mechanics models a photon's polarization state by a unit vector, a vector of length 1, pointing in the appropriate direction. We denote by |↑⟩ and |→⟩ the unit vectors that represent vertical and horizontal polarization respectively. The measurement of a state |v⟩ by a measuring device with preferred axis (|↑⟩, |→⟩) is shown in Picture 1. An arbitrary polarization can be expressed as a linear combination:

|v⟩ = a |↑⟩ + b |→⟩ of the two basis vectors |↑⟩ and |→⟩. For example, |↗⟩ = √(1/2) |↑⟩ + √(1/2) |→⟩ is a unit vector representing polarization at 45°. The coefficients a and b in |v⟩ = a |↑⟩ + b |→⟩ are called the amplitudes of |v⟩ in the directions |↑⟩ and |→⟩ respectively. When a and b are both non-zero, |v⟩ = a |↑⟩ + b |→⟩ is said to be a superposition of |↑⟩ and |→⟩.

Picture 1: Photon polarization

Quantum mechanics models the interaction between a photon and a polaroid as follows. The polaroid has a preferred axis, its polarization. When a photon with polarization |v⟩ = a |↑⟩ + b |→⟩ meets a polaroid with preferred axis |↑⟩, as a result of this measurement the photon will get through with probability |a|² and will be absorbed with probability |b|²; the probability that a photon passes through the polaroid is the square of the magnitude of the amplitude of its polarization in the direction of the polaroid's preferred axis. The probability that the photon is absorbed by the polaroid is the square of the magnitude of the amplitude in the direction perpendicular to the polaroid's preferred axis. Since the outcome can only be ↑ or →, the coefficients a and b must satisfy the condition |a|² + |b|² = 1.

Heisenberg uncertainty principle (HUP)

The Heisenberg uncertainty principle is a key principle in quantum mechanics. It provides the first insight into the fundamental limits on an experimenter's capacity to measure multiple quantum variables simultaneously: The more precisely an elementary particle's position is measured, the greater the uncertainty in the measurement of its momentum. Quantum communication transmits encoded messages that cannot be intercepted unnoticed. The messages are carried by photons, tiny particles of light. Because of the HUP, an eavesdropper attempting to read out the message in transit will be detected by the inevitable disturbance that the measurement causes to the particles.

Quantum key distribution (QKD):

The following is an example of how quantum cryptography can be used to securely distribute keys. This example includes a sender, “Alice”, a receiver, “Bob”, and a malicious eavesdropper, “Eve”.

Picture 2: Example of quantum key distribution. Alice (the sender) runs the plain text through an encryption algorithm and Bob (the receiver) recovers it with a decryption algorithm; the shared key is established via a quantum state generator and detector over a quantum channel (i.e., optical fiber or free space), while the encrypted message travels over a public channel (i.e., telephone or internet). Eve (the eavesdropper) may attempt to listen in on either channel.


The best-known protocol for QKD is the Bennett and Brassard protocol (BB84). The procedure of BB84 is shown in Table 1, and a toy simulation of the sifting steps follows the step list below.

Alice's bit sequence: 0 1 1 1 0 1 0 0 0 1
Alice's basis: + x + + x + x x + x
Alice's photon polarization: → ↖ ↑ ↑ ↗ ↑ ↗ ↗ → ↖
Bob's basis: + + x + + x x + + x
Bob's measured polarization: → ↑ ↖ ↑ → ↗ ↗ ↑ → ↖
Bob's sifted measured polarization: → ↑ ↗ → ↖
Bob's data sequence: 0 1 0 0 1

Picture 3: The BB84 procedure (Table 1)

• Alice sends Bob a sequence of photons through a filter (or polarizer), each independently chosen from one of the four polarizations: vertical, horizontal, 45 degrees and 135 degrees.

• For each photon, Bob chooses one of the two measurement bases (rectilinear and diagonal) to perform a measurement.

• Bob records his measurement bases and results. Bob publicly acknowledges his receipt of signals.

• Alice broadcasts her measurement bases.

• Bob broadcasts his measurement bases.

• They discard all the events where they used different bases.

• To test for eavesdropping, Alice randomly chooses a fraction k of all remaining events as test events.

• For those test events, she publicly broadcasts their positions and polarizations.

• Bob broadcasts the polarizations of the test events.

• Alice and Bob compute the error rate of the test events. If the computed error is larger than some prescribed threshold value, they abort. Otherwise, they proceed to the next step.

• Alice and Bob convert the polarization data of all remaining data into a binary string called a raw key.

• Now, Alice takes the message and encodes it with the key.
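The basis-matching and sifting steps above can be mimicked in a few lines of Python. The sketch below is an idealized, purely classical simulation: There are no real photons, no channel noise and no eavesdropper, so it only shows how the raw key emerges once the mismatched bases are discarded.

import secrets

N = 16
alice_bits = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.choice('+x') for _ in range(N)]   # rectilinear (+) or diagonal (x)
bob_bases = [secrets.choice('+x') for _ in range(N)]

bob_bits = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_bits.append(bit)                   # matching basis: Bob reads Alice's bit
    else:
        bob_bits.append(secrets.randbelow(2))  # wrong basis: the outcome is random

# Sifting: after comparing bases over the public channel, keep only the matching positions.
alice_key = [a for a, x, y in zip(alice_bits, alice_bases, bob_bases) if x == y]
bob_key = [b for b, x, y in zip(bob_bits, alice_bases, bob_bases) if x == y]

assert alice_key == bob_key   # without noise or eavesdropping, the sifted keys agree
print(''.join(str(bit) for bit in alice_key))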

Eavesdropping

One of the main features of quantum physics is that measurement, in general, changes the signal. If Eve does not know in which of the two orientations each bit is sent, she may select the wrong orientation for her measurement. If Alice and Bob agreed to use the × orientation for transmitting a certain bit, but Eve selects a + orientation, then Eve's measurement will change Alice's signal and Bob will only get a distorted message.

The successful implementation of quantum cryptography relies on the ability to exploit quantum properties while mitigating potential vulnerabilities. Superposition and entanglement, fundamental quantum phenomena, enable the development of secure communication protocols and cryptographic algorithms. By analyzing the interaction between quantum states and cryptographic processes, financial institutions can harness the full potential of quantum computing while ensuring robust security measures.

Conclusion

In summary, quantum computing has enormous potential for the finance industry, providing unmatched chances to improve security, streamline financial procedures and spur innovation. Financial professionals may negotiate the complexity of this developing sector and put themselves at the forefront of innovation in finance by embracing quantum technologies and realizing their revolutionary potential. The financial sector will surely be impacted by quantum computing's further development, which will influence how financial operations and security procedures evolve going forward.

References

1. O. Galindo, V. Kreinovich and O. Kosheleva, “Current Quantum Cryptography Algorithm Is Optimal: A Proof,” 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 2018.

2. Quantum Computing - A Gentle Introduction by Eleanor Rieffel and Wolfgang Polak, 2011- MIT Press

3. Introduction to Quantum Cryptography by Xiaoqing Tan, Chapter 5 of Theory and Practice of Cryptography and Network Security Protocols and Technologies - edited by Jaydip Sen, Praxis Business School

4. Introduction to Quantum Mechanics (2nd edition), D. Griffiths. Pearson Prentice Hall, April 10, 2004.

5. Quantum Cryptography, J. Aditya, P. Shankar Rao (Dept of CSE, Andhra University)

6. Lo, HK., Zhao, Y. (2012). Quantum Cryptography. In: Meyers, R. (eds) Computational Complexity. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-1800-9_151

7. Introduction to Quantum Cryptography, Xiaoqing Tan, DOI: 10.5772/56092

Back to content

Author

Arya Viswanath, Murex developer

Arya Viswanath is a dedicated Murex developer with a passion for exploring cutting-edge technologies such as quantum computing and artificial intelligence. Arya thrives on researching and integrating innovative solutions into the finance domain. Beyond coding, Arya is an avid reader and writer, always eager to discover new opportunities for learning and personal growth.


Unlocking the future

Key movements and innovations in IT

In the ever-evolving landscape of Information Technology (IT), innovation is the driving force that propels industries forward, transforms societies, and shapes the future. From groundbreaking technologies to transformative movements, the world of IT is witnessing unprecedented advancements that are redefining the way we live, work, and interact with technology. In this comprehensive exploration, we will delve into the key movements and innovations shaping the future of IT, from artificial intelligence (AI) and blockchain to cloud computing and beyond.

The rapid pace of technological innovation has ushered in a new era of digital transformation, where organizations and individuals alike are leveraging cutting-edge technologies to drive innovation, streamline processes, and create value. From the adoption of AI and machine learning algorithms to the proliferation of blockchain-based applications and the widespread adoption of cloud computing services, the IT landscape is experiencing a paradigm shift that is reshaping industries, economies, and societies on a global scale.

At the heart of these transformative movements lies the quest for innovation and progress, driven by the relentless pursuit of excellence, the spirit of entrepreneurship, and the desire to solve complex problems and address pressing challenges facing humanity. As we embark on this journey to unlock the future of IT, it is essential to understand the key movements and innovations driving change, their implications for the future, and the opportunities and challenges they present for organizations, individuals, and society as a whole.

Each of these movements represents a significant milestone in the evolution of IT, offering transformative capabilities and opportunities for innovation across diverse domains. From AI-powered applications revolutionizing health care, finance, and manufacturing to blockchain technology redefining trust and transparency in transactions and data management, these innovations are reshaping the way we interact with technology and driving digital innovation on a global scale.

As we explore each of these key movements and innovations in depth, we will examine their underlying principles, applications, benefits, and challenges, providing insights into their transformative potential and their implications for the future of IT. By understanding the driving forces behind these movements and innovations, we can gain valuable insights into the opportunities and challenges they present, enabling us to harness their full potential to drive positive change and shape a brighter future for humanity.

Let’s explore some of the key movements and innovations that are unlocking the future of IT:

Blockchain technology: Revolutionizing trust and transparency

Blockchain technology has emerged as a transformative force, offering decentralized and immutable solutions for transactions, data management, and identity verification. At its core, a blockchain is a distributed ledger that records transactions across a network of computers in a secure and transparent manner. Let’s delve deeper into the key components, applications, and challenges of blockchain technology.

Components of blockchain technology:

1. Decentralized ledger: A blockchain operates as a decentralized ledger, where transactions are recorded and verified across a network of nodes. Each node maintains a copy of the entire blockchain, ensuring transparency and redundancy.

2. Blocks: Transactions are grouped together into blocks, which are cryptographically linked to form a chain. Each block contains a timestamp, a reference to the previous block, and a list of transactions, creating an immutable record of transactions.

3. Consensus mechanism: To validate transactions and secure the network, blockchain platforms employ consensus mechanisms such as Proof of Work (PoW), Proof of Stake (PoS), or Delegated Proof of Stake (DPoS). These mechanisms ensure that all nodes agree on the validity of transactions without the need for a central authority.

4. Cryptographic hashing: Each block in the blockchain is assigned a unique cryptographic hash, generated using cryptographic algorithms such as SHA-256. Hashing ensures data integrity and prevents tampering by providing a digital fingerprint for each block; a short sketch after this list shows how such hash links work in practice.

5. Smart contracts: Smart contracts are self-executing contracts with predefined rules and conditions encoded into the blockchain. They enable automated and trustless execution of agreements, eliminating the need for intermediaries and reducing transaction costs.
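As a teaching sketch of how blocks are linked by cryptographic hashes (point 4 above), the following Python fragment chains two blocks with SHA-256 and shows that tampering with the first block breaks the link; it deliberately omits networking, consensus and transaction validation, so it is not a blockchain in any practical sense.

import hashlib
import json
import time

def block_hash(block):
    # Hash the canonical JSON form of the block with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(previous_block, transactions):
    # Each block stores the hash of its predecessor, forming the chain.
    return {
        'timestamp': time.time(),
        'transactions': transactions,
        'previous_hash': block_hash(previous_block) if previous_block else '0' * 64,
    }

genesis = new_block(None, ['genesis'])
second = new_block(genesis, ['Alice pays Bob 5'])

# Tampering with the first block changes its hash, so the link stored in the
# second block no longer matches and the chain is detectably broken.
genesis['transactions'] = ['genesis (tampered)']
print(second['previous_hash'] == block_hash(genesis))   # prints False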

Applications of blockchain technology:

1. Cryptocurrencies: Blockchain technology underpins cryptocurrencies such as Bitcoin, Ethereum, and Litecoin, enabling secure peer-to-peer transactions without the need for intermediaries like banks or payment processors.

2. Supply chain management: Blockchain technology can enhance transparency and traceability in supply chains by recording the movement of goods and verifying the authenticity of products. This helps prevent counterfeiting, reduce fraud, and improve efficiency.

3. Identity verification: Blockchain-based identity management systems offer a secure and tamper-proof way to verify identities and manage personal data. Individuals can maintain control over their digital identities, reducing the risk of identity theft and data breaches.


4. Financial services: Blockchain technology is disrupting traditional financial services by enabling faster, cheaper, and more secure transactions. It facilitates cross-border payments, remittances, and peer-to-peer lending, bypassing intermediaries and reducing transaction fees.

5. Health care: Blockchain technology can improve the integrity and security of health care data by providing a tamper-proof record of patient information, medical history, and treatment outcomes. This can streamline data sharing among health care providers and enhance patient privacy.

6. Real estate: Blockchain-based platforms enable transparent and efficient real estate transactions by recording property ownership, rental agreements, and mortgage contracts on a decentralized ledger. This reduces the risk of fraud and ensures trust among parties.

Challenges and considerations:

1. Scalability: Blockchain networks face scalability challenges due to the need to process a large number of transactions and store increasing amounts of data. Solutions such as sharding, off-chain transactions, and layer 2 protocols are being developed to address scalability issues.

2. Regulatory compliance: The regulatory landscape for blockchain technology is still evolving, with varying regulations and legal frameworks across jurisdictions. Compliance with regulations such as Know Your Customer (KYC) and Anti-Money Laundering (AML) is crucial for blockchain-based businesses.

3. Interoperability: Different blockchain platforms and protocols may not be interoperable, hindering the seamless exchange of data and assets across networks. Interoperability standards and protocols are needed to facilitate communication between disparate blockchain systems.

4. Security concerns: While blockchain technology is inherently secure, vulnerabilities such as 51% attacks, smart contract bugs, and private key thefts pose security risks. Robust security measures, including cryptographic encryption, multi-factor authentication, and regular audits, are essential to mitigate these risks.

Conclusion

Blockchain technology holds immense promise to revolutionize various industries by offering transparent, secure, and efficient solutions for digital transactions, data management, and identity verification. However, widespread adoption will depend on overcoming technical challenges, addressing regulatory concerns, and building trust among users and stakeholders.

References

1. Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved from https://bitcoin.org/bitcoin.pdf

2. Buterin, V. (2013). Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform. Retrieved from https://ethereum.org/en/whitepaper/

3. Tapscott, D., & Tapscott, A. (2016). Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World. Penguin.

4. Swan, M. (2015). Blockchain: Blueprint for a New Economy. O’Reilly Media.

Cloud computing: Empowering digital transformation

Cloud computing has revolutionized the way organizations consume, deploy, and manage IT resources, offering scalable and on-demand access to computing power, storage, and services over the Internet. This paradigm shift has enabled businesses to innovate rapidly, reduce costs, and improve agility in an increasingly digital world. Let’s delve deeper into the key concepts, models, and benefits of cloud computing.

Key concepts:

1. On-demand access: Cloud computing provides on-demand access to a shared pool of computing resources, including servers, storage, networks, and applications. Users can provision and scale resources dynamically to meet changing business requirements.

2. Service models: Cloud computing offers three primary service models:

• Infrastructure as-a-Service (IaaS): Provides virtualized computing resources, such as virtual machines and storage, on a pay-per-use basis.

• Platform as-a-Service (PaaS): Offers development and deployment environments, including tools and frameworks, for building and hosting applications.

• Software as-a-Service (SaaS): Delivers software applications over the internet on a subscription basis, eliminating the need for local installation and maintenance.

3. Deployment models: Cloud computing supports various deployment models, including:

• Public cloud: Resources are owned and operated by third-party cloud providers and shared among multiple tenants.

• Private cloud: Resources are dedicated to a single organization and hosted either on-premises or by a third-party provider.

• Hybrid cloud: Combines public and private cloud environments, allowing workloads to be deployed across multiple clouds.


Benefits of cloud computing:

1. Scalability: Cloud computing enables organizations to scale resources up or down dynamically in response to changing demand, ensuring optimal performance and cost efficiency.

2. Cost savings: By eliminating the need for upfront investments in hardware and infrastructure, cloud computing reduces capital expenditures and enables pay-as-you-go pricing models.

3. Flexibility and agility: Cloud computing enables rapid deployment of applications and services, empowering organizations to innovate quickly and respond to market changes faster.

4. Reliability and availability: Cloud providers offer robust infrastructure and redundancy measures to ensure high availability and uptime, minimizing the risk of downtime and data loss.

5. Security: Cloud providers implement stringent security measures, including data encryption, identity and access management, and compliance certifications, to protect sensitive information and ensure data privacy.

Challenges and considerations:

1. Data privacy and compliance: Organizations must comply with data protection regulations and industry standards when storing and processing data in the cloud, addressing concerns about data sovereignty, residency, and jurisdiction.

2. Vendor lock-in: Migrating between cloud providers or platforms can be complex and costly, leading to vendor lock-in and limited flexibility in choosing services and technologies.

3. Performance and latency: Depending on the geographic location of data centers and network connectivity, cloud computing may introduce latency and performance issues, particularly for latency-sensitive applications.

4. Security risks: While cloud providers implement robust security measures, organizations remain responsible for securing their applications and data, including configuring access controls, monitoring for security threats, and implementing encryption.

Future trends:

1. Multi-cloud and hybrid cloud adoption: Organizations are increasingly adopting multi-cloud and hybrid cloud strategies to leverage the strengths of different cloud providers while mitigating risks and optimizing performance.

2. Edge computing integration: Edge computing is being integrated with cloud computing to enable real-time processing and analysis of data closer to the source of generation, reducing latency and bandwidth usage for latency-sensitive applications.

3. Serverless computing: Serverless computing, also known as Function as-a-Service (FaaS), is gaining popularity for building and deploying applications without managing the underlying infrastructure, enabling greater agility and cost savings.

Conclusion

Cloud computing has transformed the IT landscape, offering organizations unprecedented flexibility, scalability, and agility to innovate and grow. By embracing cloud technologies responsibly and addressing security, compliance, and performance considerations, businesses can unlock the full potential of cloud computing to drive digital transformation and achieve their strategic objectives.

References

• Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (NIST Special Publication 800-145). National Institute of Standards and Technology.

• Armbrust, M., et al. (2010). A View of Cloud Computing. Communications of the ACM, 53(4), 50-58.

• Chou, D. C. (2016). Cloud Computing: Challenges and Future Directions. Journal of Computers, 11(3), 191-201.

• Gartner. (2021). Gartner Forecasts Worldwide Public Cloud Revenue to Grow 18% in 2021. Retrieved from https://www.gartner.com/en/newsroom/press-releases/2021-04-20-gartner-forecasts-worldwide-public-cloud-revenue-to-grow-18-percent-in-2021

Artificial intelligence (AI): Empowering intelligent automation

Artificial Intelligence (AI) is a transformative technology that enables machines to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and decision-making. AI systems leverage algorithms, data, and computational power to analyze patterns, make predictions, and automate processes across various domains. Let’s delve deeper into the key concepts, techniques, and applications of AI.


Key concepts:

1. Machine learning (ML): Machine learning is a subset of AI that focuses on developing algorithms and models that enable computers to learn from data without being explicitly programmed. ML techniques include supervised learning, unsupervised learning, and reinforcement learning, which enable machines to recognize patterns, classify data, and make predictions.

2. Deep learning (DL): Deep learning is a subfield of ML that utilizes artificial neural networks with multiple layers (deep neural networks) to extract features and learn representations from large volumes of data. DL has achieved remarkable success in tasks such as image recognition, natural language processing, and speech recognition, surpassing human performance in many cases.

3. Natural language processing (NLP): Natural language processing is a branch of AI that focuses on enabling computers to understand, interpret, and generate human language. NLP techniques include sentiment analysis, named entity recognition, and machine translation, enabling applications such as chatbots, virtual assistants, and language translation services.

4. Computer vision: Computer vision is a field of AI that enables computers to interpret and understand visual information from images or videos. Computer vision techniques include object detection, image segmentation, and facial recognition, enabling applications such as autonomous vehicles, medical imaging, and surveillance systems.

Applications of AI:

1. Health care: AI is revolutionizing health care by enabling early disease detection, personalized treatment recommendations, and medical image analysis. AI-powered systems can analyze medical records, genomic data, and diagnostic images to assist health care professionals in diagnosis and treatment planning.

2. Finance: In the finance industry, AI is used for fraud detection, risk assessment, algorithmic trading, and customer service automation. AI-powered algorithms analyze financial data, market trends, and customer behavior to make informed decisions and optimize investment strategies.

3. Retail and e-commerce: AI is transforming the retail and e-commerce sector by enabling personalized recommendations, demand forecasting, inventory management, and supply chain optimization. AI-powered chatbots and virtual assistants enhance customer engagement and support sales and customer service operations.

4. Autonomous vehicles: AI plays a crucial role in enabling autonomous vehicles to perceive their environment, navigate safely, and make real-time decisions. AI algorithms process sensor data from cameras, lidar, and radar to detect objects, predict trajectories, and plan optimal routes.

Ethical and societal implications:

1. Bias and fairness: AI systems may exhibit biases inherent in the data used for training, leading to unfair or discriminatory outcomes, especially in areas such as hiring, lending, and criminal justice. Ensuring fairness and mitigating bias in AI systems is a critical ethical consideration.

2. Privacy and security: AI systems often process large volumes of sensitive data, raising concerns about privacy and security. Protecting user data, ensuring data confidentiality, and preventing unauthorized access are essential considerations in AI development and deployment.

3. Transparency and accountability: AI systems may operate as black boxes, making it challenging to understand their decision-making processes and hold them accountable for their actions. Ensuring transparency, explainability, and accountability in AI systems is crucial for building trust and fostering responsible AI adoption.

Future directions:

1. Explainable AI (XAI): Explainable AI aims to enhance the transparency and interpretability of AI systems, enabling users to understand how decisions are made and identify potential biases or errors. XAI techniques provide insights into the inner workings of AI models, improving trust and usability.

2. AI ethics and governance: As AI becomes more pervasive, there is a growing need for ethical frameworks, guidelines, and regulations to govern its responsible development and deployment. Establishing ethical standards and governance mechanisms can help address societal concerns and ensure AI benefits all stakeholders.

3. Continual learning and adaptation: Continual learning and adaptation enable AI systems to evolve and improve over time by learning from new data and experiences. Techniques such as lifelong learning, meta-learning, and transfer learning enable AI systems to adapt to changing environments and tasks.

Conclusion

Artificial intelligence (AI) holds immense promise to transform industries, drive innovation, and address complex challenges facing society. By leveraging AI technologies responsibly, addressing ethical considerations, and fostering collaboration across disciplines, we can harness the power of AI to create a more intelligent, equitable, and sustainable future for all.

References

1. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

2. Russell, S. J., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.

3. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436-444.

4. Silver, D., et al. (2016). Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529(7587), 484-489.


Machine learning (ML): Unleashing the power of data

Machine learning (ML) is a subset of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. ML algorithms leverage statistical techniques to identify patterns, extract insights, and automate tasks across various domains. Let’s delve deeper into the key concepts, techniques, and applications of machine learning.

Key concepts:

1. Supervised learning: In supervised learning, models are trained on labeled data, where each example is associated with a corresponding target or outcome. Supervised learning algorithms learn to map input data to output labels, enabling tasks such as classification (e.g., spam detection) and regression (e.g., house price prediction).

2. Unsupervised learning: In unsupervised learning, models are trained on unlabeled data, where the goal is to discover hidden patterns or structures within the data. Unsupervised learning algorithms include clustering (e.g., customer segmentation) and dimensionality reduction (e.g., principal component analysis).

3. Reinforcement learning: Reinforcement learning involves training agents to interact with an environment and learn optimal strategies to maximize cumulative rewards. Reinforcement learning algorithms, such as Q-learning and deep Q-networks, have been applied to tasks such as game playing, robotics, and autonomous vehicle control.

4. Feature engineering: Feature engineering involves selecting, extracting, and transforming relevant features from raw data to improve the performance of machine learning models. Feature engineering techniques include feature scaling, dimensionality reduction, and feature selection.
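
As a small illustration of unsupervised learning and feature engineering, the sketch below clusters a handful of made-up customers into segments. It assumes scikit-learn is installed; the data and the interpretation of the segments are purely illustrative.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# columns: annual spend (USD), number of orders (made-up data)
customers = np.array([[200, 2], [250, 3], [5000, 40], [5200, 45], [90, 1]])

scaled = StandardScaler().fit_transform(customers)    # feature scaling
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(segments)    # two clusters, e.g. low-spend vs. high-spend customers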

Techniques and algorithms:

1. Linear regression: Linear regression is a simple and widely used regression technique that models the relationship between a dependent variable and one or more independent variables. It aims to find the best-fitting linear equation that minimizes the residual errors between predicted and actual values.

2. Logistic regression: Logistic regression is a binary classification technique that models the probability of an event occurring as a function of input variables. It is commonly used for tasks such as binary classification (e.g., spam detection, fraud detection) and probability estimation.

3. Decision trees: Decision trees are versatile and interpretable models that recursively partition the feature space into hierarchical decision rules based on feature values. Decision tree algorithms, such as CART and Random Forests, are used for both classification and regression tasks.

4. Neural networks: Neural networks are computational models inspired by the structure and function of the human brain, consisting of interconnected layers of neurons (nodes) that process input data and learn hierarchical representations. Deep neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have achieved state-of-the-art performance in tasks such as image recognition, natural language processing, and speech recognition.
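
As a minimal example of the first technique above, the following sketch fits a straight line to made-up house sizes and prices using ordinary least squares; only NumPy is assumed.

import numpy as np

size  = np.array([50, 70, 90, 110, 130], dtype=float)      # m^2 (toy data)
price = np.array([150, 200, 260, 310, 360], dtype=float)   # in thousands (toy data)

# fit price ~ slope * size + intercept by minimizing squared residuals
slope, intercept = np.polyfit(size, price, deg=1)
print(round(slope, 3), round(intercept, 3))
print(slope * 100 + intercept)    # predicted price for a 100 m^2 house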

Applications of machine learning:

1. Health care: Machine learning is transforming health care by enabling early disease detection, personalized treatment recommendations, and medical image analysis. ML models analyze electronic health records, genomic data, and medical images to assist clinicians in diagnosis and treatment planning.

2. Finance: In the finance industry, machine learning is used for fraud detection, risk assessment, algorithmic trading, and customer segmentation. ML algorithms analyze financial data, market trends, and customer behavior to make informed decisions and optimize investment strategies.

3. E-commerce and retail: Machine learning powers recommendation systems, demand forecasting, customer segmentation, and fraud detection in the e-commerce and retail sectors. ML models analyze user behavior, purchase history, and product attributes to personalize recommendations and improve customer satisfaction.

4. Autonomous vehicles: Machine learning plays a crucial role in enabling autonomous vehicles to perceive their environment, navigate safely, and make real-time decisions. ML algorithms process sensor data from cameras, LiDAR, and radar to detect objects, predict trajectories, and plan optimal routes.

Ethical and societal implications:

1. Bias and fairness: Machine learning models may exhibit biases inherent in the data used for training, leading to unfair or discriminatory outcomes, especially in areas such as hiring, lending, and criminal justice. Addressing bias and ensuring fairness in ML models is a critical ethical consideration.

2. Privacy and security: Machine learning systems often process large volumes of sensitive data, raising concerns about privacy and security. Protecting user data, ensuring data confidentiality, and preventing unauthorized access are essential considerations in ML development and deployment.

3. Transparency and interpretability: Machine learning models may operate as black boxes, making it challenging to understand their decision-making processes and hold them accountable for their actions. Ensuring transparency, explainability, and interpretability in ML models is crucial for building trust and fostering responsible AI adoption.


Future directions:

1. Explainable AI (XAI): Explainable AI aims to enhance the transparency and interpretability of machine learning models, enabling users to understand how decisions are made and identify potential biases or errors. XAI techniques provide insights into the inner workings of ML models, improving trust and usability.

2. Automated machine learning (AutoML): Automated machine learning streamlines the process of building, training, and deploying ML models by automating tasks such as feature engineering, model selection, and hyperparameter tuning. AutoML tools and platforms democratize ML, making it accessible to non-experts and accelerating innovation.

3. Federated learning: Federated learning enables training machine learning models across distributed devices or edge devices without centralizing data. Federated learning preserves data privacy and reduces communication overhead by training models locally on user devices and aggregating model updates centrally.

Conclusion

Machine learning (ML) has emerged as a powerful tool for unlocking insights from data, automating tasks, and driving innovation across industries. By leveraging ML techniques responsibly, addressing ethical considerations, and fostering collaboration between researchers, practitioners, and policymakers, we can harness the full potential of machine learning to create a more intelligent, equitable, and sustainable future for all.

References

1. Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd ed.). Springer.

2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

3. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

4. Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.

Internet of Things (IoT): Connecting the physical and digital worlds

The Internet of Things (IoT) is a network of interconnected devices, sensors, and objects that communicate and exchange data over the internet. IoT technologies enable seamless integration between the physical and digital worlds, transforming industries, enhancing efficiency, and creating new opportunities for innovation. Let’s delve deeper into the key concepts, applications, and challenges of the Internet of Things.

Key concepts:

1. Connected devices: IoT encompasses a wide range of connected devices, including sensors, actuators, wearables, vehicles, appliances, and industrial equipment. These devices collect data from the environment, communicate with each other, and perform automated actions based on predefined rules or machine learning algorithms.

2. Communication protocols: IoT devices communicate using various communication protocols, such as Wi-Fi, Bluetooth, Zigbee, LoRaWAN, and MQTT. These protocols facilitate reliable and efficient data transmission between devices and backend systems, ensuring interoperability and compatibility in heterogeneous IoT ecosystems.

3. Edge computing: Edge computing is a distributed computing paradigm that brings computation and data storage closer to the edge of the network, near IoT devices and sensors. Edge computing enables real-time processing and analysis of data, reducing latency, bandwidth usage, and reliance on centralized cloud infrastructure.

4. Data analytics: IoT generates vast amounts of data from connected devices, sensors, and systems. Data analytics techniques, including descriptive, diagnostic, predictive, and prescriptive analytics, enable organizations to extract actionable insights, detect patterns, and optimize operations in real-time.
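
To show what device-to-backend communication can look like in practice, here is a minimal telemetry sketch for the MQTT protocol mentioned above. It assumes the paho-mqtt package (1.x client API) and a broker reachable at localhost:1883; the topic name and sensor values are made up.

import json, time, random
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)
client.loop_start()                      # background network loop

for _ in range(3):
    reading = {"sensor": "room-1", "temp_c": round(20 + random.random() * 5, 2)}
    client.publish("building/telemetry/temperature", json.dumps(reading))
    time.sleep(1)

client.loop_stop()
client.disconnect()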

Applications of IoT:

1. Smart cities: IoT technologies are transforming urban infrastructure and services, enabling smart city initiatives focused on improving transportation, energy efficiency, public safety, and environmental sustainability. IoT sensors monitor traffic flow, detect environmental pollution, and manage utilities in real-time, enhancing quality of life for residents.

2. Industrial Internet of Things (IIoT): IIoT integrates IoT devices and technologies into industrial processes, enabling predictive maintenance, asset tracking, supply chain optimization, and remote monitoring of equipment and machinery. IIoT solutions improve operational efficiency, reduce downtime, and enable data-driven decision-making in manufacturing, logistics, and utilities.

3. Health care: IoT is revolutionizing health care delivery and patient care through remote monitoring, telemedicine, and personalized medicine. IoT devices such as wearables, implantable sensors, and medical monitors collect real-time health data, enabling early disease detection, chronic disease management, and remote patient monitoring.

4. Agriculture: IoT technologies are modernizing agriculture practices, enabling precision agriculture, crop monitoring, and livestock management. IoT sensors monitor soil moisture, temperature, and nutrient levels, optimizing irrigation schedules, reducing water usage, and increasing crop yields in sustainable farming practices.


Challenges and considerations:

1. Security and privacy: IoT devices are vulnerable to security threats, including data breaches, malware attacks, and unauthorized access. Ensuring end-to-end security, implementing encryption, and enforcing access controls are essential to protect IoT deployments and safeguard sensitive data.

2. Interoperability and standards: The lack of interoperability and standards hinders seamless integration and communication between IoT devices and platforms. Developing open, standardized protocols and frameworks is crucial to enabling interoperability, compatibility, and scalability in IoT ecosystems.

3. Scalability and reliability: IoT deployments often involve a large number of devices and sensors distributed across geographically dispersed locations. Ensuring scalability, reliability, and resilience in IoT networks requires robust infrastructure, efficient data management, and fault-tolerant communication protocols.

4. Data management and analytics: Managing and analyzing vast amounts of IoT data pose challenges in terms of data storage, processing, and analysis. Leveraging scalable cloud infrastructure, edge computing, and advanced analytics techniques is critical to extracting actionable insights and deriving business value from IoT data.

Future directions:

1. 5G and beyond: Next-generation cellular technologies such as 5G are poised to revolutionize IoT connectivity, enabling ultra-low latency, high bandwidth, and massive device connectivity. 5G networks will unlock new opportunities for real-time applications, immersive experiences, and mission-critical IoT deployments.

2. Artificial intelligence and machine learning: AI and machine learning techniques are increasingly being integrated into IoT solutions to enable predictive maintenance, anomaly detection, and autonomous decision-making. AI-powered IoT systems can adapt to changing conditions, optimize performance, and enhance efficiency in diverse domains.

3. Edge AI: Edge AI combines edge computing with artificial intelligence to enable real-time inference and decision-making at the edge of the network, near IoT devices and sensors. Edge AI reduces latency, bandwidth usage, and dependence on centralized cloud infrastructure, enabling intelligent IoT applications in resource-constrained environments.

Conclusion

The Internet of Things (IoT) is reshaping the way we interact with the physical world, unlocking new opportunities for innovation, efficiency, and sustainability across industries and sectors. By leveraging IoT technologies responsibly, addressing security, privacy, and interoperability concerns, and fostering collaboration between stakeholders, we can harness the full potential of IoT to create smarter, safer, and more connected environments for individuals, communities, and societies worldwide.

References

1. Atzori, L., et al. (2010). The Internet of Things: A Survey. Computer Networks, 54(15), 2787-2805.

2. Vermesan, O., et al. (Eds.). (2017). Internet of Things: From Research and Innovation to Market Deployment. River Publishers.

3. Borgia, E. (2014). The Internet of Things Vision: Key Features, Applications and Open Issues. Computer Communications, 54, 1-31.

4. Gubbi, J., et al. (2013). Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions. Future Generation Computer Systems, 29(7), 1645-1660.


Cybersecurity and privacy: Safeguarding digital assets in a connected world

Cybersecurity and privacy are critical considerations in the digital age, where the proliferation of connected devices, online services, and digital transactions has created new opportunities for malicious actors to exploit vulnerabilities and compromise sensitive information. Effective cybersecurity measures and privacy protections are essential to safeguarding digital assets, preserving user trust, and ensuring the integrity, confidentiality, and availability of data. Let’s delve deeper into the key concepts, challenges, and best practices in cybersecurity and privacy.

Key concepts:

1. Cyber threat landscape: The cyber threat landscape encompasses a wide range of threats, including malware, phishing attacks, ransomware, data breaches, and insider threats. Threat actors exploit vulnerabilities in software, networks, and human behavior to gain unauthorized access to systems, steal data, disrupt operations, and cause financial and reputational damage.

2. Security controls: Security controls are measures implemented to mitigate cybersecurity risks and protect against threats. These controls include preventive measures such as firewalls, antivirus software, and access controls, as well as detective measures such as intrusion detection systems (IDS), security information and event management (SIEM) systems, and security audits.

3. Privacy principles: Privacy principles govern the collection, use, disclosure, and protection of personal information. Key privacy principles include notice and consent, purpose limitation, data minimization, security safeguards, accountability, and user rights such as access and rectification. Privacy regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) impose legal obligations on organizations to comply with these principles and protect individuals’ privacy rights.
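
As a small illustration of the security safeguards principle above, the sketch below encrypts a sensitive record at rest with symmetric encryption. It assumes the cryptography package is installed; the record and the key handling are simplified for illustration.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, store this in a key vault or HSM
cipher = Fernet(key)

record = b'{"user": "alice", "card": "**** **** **** 1234"}'
token = cipher.encrypt(record)       # ciphertext that is safe to persist
print(cipher.decrypt(token))         # original record, recoverable only with the key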


Challenges and considerations:

1. Emerging threats: The cybersecurity threat landscape is constantly evolving, with new threats and attack vectors emerging regularly. Threat actors leverage advanced techniques such as artificial intelligence (AI), machine learning (ML), and automation to evade detection, bypass security controls, and launch sophisticated attacks. Organizations must stay vigilant and proactive in detecting and mitigating emerging threats to safeguard their digital assets.

2. Supply chain security: Supply chain security refers to the security of products, services, and components acquired from third-party vendors and suppliers. Supply chain attacks, such as supply chain compromise and software supply chain attacks, pose significant risks to organizations by targeting trusted relationships and exploiting vulnerabilities in the supply chain ecosystem. Organizations must implement robust supply chain security practices, including vendor risk management, secure software development, and supply chain resilience planning, to mitigate these risks.

3. Regulatory compliance: Regulatory compliance is a major challenge for organizations, particularly in highly regulated industries such as healthcare, finance, and government. Compliance with privacy regulations such as GDPR, CCPA, Health Insurance Portability and Accountability Act (HIPAA), and Payment Card Industry Data Security Standard (PCI DSS) requires organizations to implement comprehensive privacy and security programs, conduct regular risk assessments, and maintain documentation of data processing activities. Non-compliance can result in severe penalties, fines, and reputational damage.

4. Cybersecurity skills shortage: The cybersecurity skills shortage is a global challenge that hinders organizations’ ability to effectively address cybersecurity threats and vulnerabilities. The demand for cybersecurity professionals exceeds the supply, leading to a talent gap and recruitment challenges for organizations seeking to build and maintain cybersecurity capabilities. Investing in cybersecurity education, training, and workforce development initiatives is essential to bridge the skills gap and build a robust cybersecurity workforce for the future.

Best practices:

1. Risk management: Implement a risk-based approach to cybersecurity, focusing on identifying, assessing, and mitigating risks to critical assets and systems. Conduct regular risk assessments, prioritize security controls based on risk severity, and monitor the effectiveness of security measures to adapt to changing threats and vulnerabilities.

2. Security by design: Incorporate security into the design, development, and deployment of systems, applications, and services from the outset. Follow secure coding practices, conduct security reviews and testing throughout the software development lifecycle, and adhere to security standards and best practices to minimize security vulnerabilities and mitigate risks.

3. User awareness and training: Educate employees, users, and stakeholders about cybersecurity best practices, security policies, and potential threats and risks. Provide training and awareness programs to promote security awareness, encourage responsible behavior, and empower users to recognize and report security incidents and suspicious activities.

4. Incident response and recovery: Establish an incident response plan and procedures to respond effectively to security incidents and data breaches. Define roles and responsibilities, establish communication channels, and conduct regular incident response exercises and simulations to test and validate the effectiveness of response measures. Implement backup and recovery mechanisms to restore systems and data in the event of a cyberattack or data loss incident.

Conclusion

Cybersecurity and privacy are paramount concerns in an increasingly interconnected and digitized world, where the protection of sensitive information, critical infrastructure, and personal privacy is essential to maintain trust, integrity, and resilience. By adopting a proactive and comprehensive approach to cybersecurity, organizations can mitigate risks, protect against threats, and safeguard digital assets and privacy rights. Collaboration between government, industry, academia, and civil society is crucial to address cybersecurity challenges, promote information sharing, and foster a culture of cybersecurity awareness and resilience in society.

References

1. Schneier, B. (2015). Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. W. W. Norton & Company.

2. NIST Special Publication 800-53, Revision 5. (2020). Security and Privacy Controls for Information Systems and Organizations.

3. GDPR (General Data Protection Regulation). (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC.

4. CCPA (California Consumer Privacy Act). (2018). California Civil Code Sections 1798.100 – 1798.199.

5. Ponemon Institute. (2021). Cost of a Data Breach Report.

DevOps and Agile practices: Accelerating software delivery and collaboration

DevOps and Agile practices have revolutionized the software development and delivery lifecycle, enabling organizations to deliver high-quality software faster, more reliably, and with greater agility. By fostering collaboration, automation, and continuous improvement, DevOps and Agile methodologies empower cross-functional teams to respond quickly to changing requirements, deliver value to customers, and drive innovation. Let’s delve deeper into the key concepts, principles, and benefits of DevOps and Agile practices.


Key concepts:

1. Agile manifesto: The Agile manifesto is a set of guiding principles that prioritize individuals and interactions, working software, customer collaboration, and responding to change over rigid processes and documentation. Agile methodologies, such as Scrum, Kanban, and Extreme Programming (XP), emphasize iterative development, continuous feedback, and adaptive planning to deliver value incrementally and iteratively.

2. DevOps culture: DevOps is a cultural and organizational mindset that promotes collaboration, communication, and integration between development (Dev) and operations (Ops) teams throughout the software delivery lifecycle. DevOps principles, such as automation, continuous integration (CI), continuous delivery (CD), and infrastructure as code (IaC), aim to streamline workflows, reduce cycle times, and improve collaboration between development, operations, and other stakeholders.

3. Continuous integration (CI): Continuous integration is a software development practice that involves integrating code changes from multiple developers into a shared repository frequently, typically several times a day. CI ensures that code changes are automatically built, tested, and validated against predefined criteria, such as coding standards, unit tests, and integration tests, to detect and address integration errors early in the development process.

4. Continuous delivery (CD): Continuous delivery is an extension of CI that enables automated deployments of code changes to production or staging environments quickly and reliably. CD pipelines automate the deployment process, including code packaging, environment provisioning, testing, and release, to ensure that software updates can be delivered to customers continuously and with minimal manual intervention.

Benefits of DevOps and Agile practices:

1. Faster time to market: DevOps and Agile practices enable organizations to accelerate software delivery and release cycles, reducing time to market for new features, enhancements, and bug fixes. By automating repetitive tasks, streamlining workflows, and fostering collaboration, teams can deliver value to customers more frequently and predictably.

2. Improved quality and reliability: DevOps and Agile methodologies emphasize early and continuous testing, code reviews, and feedback loops to identify and address defects and issues promptly. By integrating quality assurance and testing activities into the development process, teams can improve software quality, reliability, and stability, reducing the risk of production incidents and customer dissatisfaction.

3. Enhanced collaboration and communication: DevOps and Agile practices break down silos between development, operations, and other teams, fostering a culture of collaboration, transparency, and shared responsibility. By aligning goals, sharing knowledge, and promoting cross-functional teams, organizations can improve communication, decision-making, and problem-solving, leading to better outcomes and higher employee satisfaction.

4. Increased flexibility and adaptability: DevOps and Agile methodologies enable organizations to respond quickly to changing requirements, market conditions, and customer feedback. By embracing iterative development, continuous feedback, and adaptive planning, teams can prioritize work based on customer needs, experiment with new ideas, and pivot direction as needed, increasing resilience and competitiveness in dynamic environments.

Best practices:

1. Automate everything: Automate repetitive tasks, including code builds, testing, deployment, and infrastructure provisioning, to reduce manual effort, minimize errors, and improve consistency and reliability in software delivery pipelines.

2. Embrace continuous feedback: Foster a culture of continuous feedback and improvement by soliciting feedback from customers, stakeholders, and team members regularly. Use feedback to identify opportunities for improvement, refine processes, and address issues early in the development lifecycle.

3. Empower cross-functional teams: Empower cross-functional teams with the autonomy, ownership, and accountability to deliver value independently. Encourage collaboration, knowledge sharing, and shared responsibility across development, operations, quality assurance, and other disciplines to break down silos and drive innovation.

4. Measure and monitor performance: Establish key performance indicators (KPIs) and metrics to track the effectiveness, efficiency, and quality of software delivery processes. Use metrics such as lead time, cycle time, deployment frequency, and defect rates to identify bottlenecks, measure progress, and drive continuous improvement.
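
As a small illustration of the metrics in the last point, the sketch below computes average lead time and deployment frequency from a handful of made-up commit and deployment timestamps; only the Python standard library is used.

from datetime import datetime

deployments = [   # (commit time, deployment time) pairs, hypothetical data
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 15, 0)),
    (datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 3, 10, 0)),
    (datetime(2024, 5, 6, 8, 0),  datetime(2024, 5, 6, 12, 0)),
]

lead_hours = [(deploy - commit).total_seconds() / 3600 for commit, deploy in deployments]
days = (deployments[-1][1] - deployments[0][1]).days or 1

print(f"average lead time: {sum(lead_hours) / len(lead_hours):.1f} h")
print(f"deployment frequency: {len(deployments) / days:.2f} per day")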

Conclusion

DevOps and Agile practices have transformed the way organizations develop, deliver, and operate software, enabling them to respond quickly to changing market demands, deliver value to customers, and drive innovation. By embracing a culture of collaboration, automation, and continuous improvement, organizations can unlock the full potential of DevOps and Agile methodologies to achieve their strategic objectives, stay competitive, and thrive in today’s fast-paced digital landscape.

References

1. Humble, J., & Farley, D. (2010). Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley Professional.

2. Kim, G., et al. (2016). The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution Press.

3. Schwaber, K., & Sutherland, J. (2017). The Scrum Guide. Scrum.Org.

4. Kniberg, H., & Skarin, M. (2012). Kanban and Scrum: Making the Most of Both. InfoQ.



Quantum computing: Unleashing the power of quantum mechanics

Quantum computing represents a paradigm shift in computing technology, harnessing the principles of quantum mechanics to perform computations that are exponentially faster than classical computers. Quantum computers leverage quantum bits, or qubits, to represent and manipulate information, enabling breakthroughs in solving complex problems across various domains. Let’s delve deeper into the key concepts, principles, and applications of quantum computing.

Key concepts:

1. Quantum bits (qubits): Qubits are the fundamental units of quantum information, analogous to classical bits in traditional computing. Unlike classical bits, which can exist in a state of either 0 or 1, qubits can exist in a superposition of both states simultaneously, enabling parallel computation and exponential scalability.

2. Quantum entanglement: Quantum entanglement is a phenomenon in quantum mechanics where the states of two or more qubits become correlated, even when separated by large distances. Entanglement enables qubits to exhibit non-local behavior, allowing for the creation of highly interconnected quantum systems and enabling faster communication and computation.

3. Quantum superposition: Quantum superposition is a fundamental principle of quantum mechanics that allows qubits to exist in multiple states simultaneously. By exploiting superposition, quantum computers can explore multiple computational paths simultaneously, enabling exponential speedup for certain types of problems, such as factoring large numbers and searching unsorted databases.

4. Quantum interference: Quantum interference occurs when multiple quantum states interfere with each other, leading to constructive or destructive interference patterns. Quantum algorithms leverage interference to enhance computational efficiency and solve complex optimization and search problems more effectively than classical algorithms.
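
To make superposition slightly more tangible, here is a tiny state-vector simulation in plain NumPy (no quantum hardware or SDK is assumed): a Hadamard gate puts a qubit into an equal superposition, and the Born rule gives the measurement probabilities.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)         # the |0> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                    # (|0> + |1>) / sqrt(2), an equal superposition
probabilities = np.abs(state) ** 2  # Born rule
print(probabilities)                # -> [0.5 0.5]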

Applications of quantum computing:

1. Cryptography and security: Quantum computing has the potential to disrupt classical cryptographic algorithms, such as RSA and ECC, by efficiently factoring large numbers using algorithms like Shor’s algorithm. Quantum-resistant cryptographic techniques, such as lattice-based cryptography and hash-based cryptography, are being developed to secure communications in a post-quantum world.

2. Optimization and simulation: Quantum computing enables faster and more efficient optimization and simulation of complex systems, such as financial portfolios, supply chains, and chemical reactions. Quantum algorithms, such as the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE), promise to revolutionize optimization and simulation across industries.

3. Machine learning and AI: Quantum computing has the potential to accelerate machine learning and artificial intelligence algorithms by enabling faster training, optimization, and inference processes. Quantum machine learning algorithms, such as quantum neural networks and quantum support vector machines, are being developed to harness the power of quantum computing for pattern recognition, data classification, and predictive modeling.

4. Drug discovery and material science: Quantum computing promises to revolutionize drug discovery and material science by simulating molecular structures and interactions with unprecedented accuracy and speed. Quantum computers can explore vast solution spaces, identify promising drug candidates, and design new materials with desirable properties, leading to breakthroughs in pharmaceuticals, materials science, and nanotechnology.

Challenges and considerations:

1. Qubit stability and error correction: Quantum computers are highly sensitive to noise, decoherence, and errors, which can degrade the reliability and accuracy of quantum computations. Developing qubits with long coherence times and implementing error correction techniques, such as quantum error correction codes and fault-tolerant quantum computation, are critical challenges in quantum computing research.

2. Scalability and hardware constraints: Building large-scale, fault-tolerant quantum computers with hundreds or thousands of qubits remains a significant engineering challenge. Overcoming scalability limitations, improving qubit connectivity, and developing scalable quantum hardware architectures are essential for realizing the full potential of quantum computing.

3. Algorithm development and software tools: Designing quantum algorithms and software tools that leverage the unique capabilities of quantum computers requires interdisciplinary expertise in quantum mechanics, computer science, and mathematics. Developing quantum algorithms that outperform classical algorithms and optimizing quantum software for real-world applications are ongoing research challenges.

4. Ethical and societal implications: Quantum computing raises ethical and societal implications related to privacy, security, and intellectual property. Quantum computers have the potential to break widely used cryptographic algorithms, compromise sensitive information, and disrupt existing industries. Addressing ethical concerns, ensuring responsible use of quantum technology, and promoting collaboration between stakeholders are essential for realizing the benefits of quantum computing while mitigating potential risks.

Future directions:

1. Quantum supremacy and demonstration: Achieving quantum supremacy, where a quantum computer outperforms the best classical computers for a specific task, is a milestone in quantum computing research. Demonstrating practical quantum advantage and solving real-world problems with quantum computers are key objectives for the field.

2. Hybrid quantum-classical computing: Integrating quantum and classical computing technologies to create hybrid quantum-classical systems offers a promising approach to addressing scalability and reliability challenges in quantum computing. Hybrid quantum-classical algorithms and architectures combine the strengths of both paradigms to solve complex problems efficiently.


3. Quantum internet and communication: Building a quantum internet and quantum communication infrastructure enables secure quantum communication and distributed quantum computing. Quantum networks facilitate quantum key distribution (QKD), quantum teleportation, and quantum entanglement-based communication, enabling secure communication channels resistant to eavesdropping and interception.

Conclusion

Quantum computing holds immense promise to revolutionize computing, science, and society by solving complex problems that are intractable for classical computers. By harnessing the principles of quantum mechanics, developing scalable quantum hardware, and advancing quantum algorithms and software, we can unlock the full potential of quantum computing to address pressing challenges in cryptography, optimization, simulation, and beyond. Collaboration between academia, industry, and government is essential to overcome technical barriers, address ethical concerns, and realize the transformative impact of quantum computing on a global scale.

References

1. Nielsen, M. A., & Chuang, I. L. (2010). Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press.

2. Preskill, J. (2018). Quantum Computing in the NISQ era and beyond. Quantum, 2, 79.

3. Arute, F., et al. (2019). Quantum supremacy using a programmable superconducting processor. Nature, 574(7779), 505-510.

4. Hidary, J. D. (Ed.). (2020). Quantum Computing: An Applied Approach. Springer.

Overall conclusion

In conclusion, the landscape of information technology (IT) is experiencing a profound transformation driven by key movements and innovations that are shaping the future of various industries and societies worldwide. From artificial intelligence (AI) to blockchain, edge computing, and beyond, these advancements are unlocking new possibilities, driving digital transformation, and revolutionizing the way we interact with technology.

Artificial intelligence (AI) is empowering intelligent automation and decision-making across diverse domains, while blockchain technology is revolutionizing trust and transparency in transactions, data management, and identity verification. Edge computing is bringing computation closer to the source of data generation, enabling real-time processing and analysis for latency-sensitive applications.

These innovations are not only reshaping industries but also raising important ethical, societal, and regulatory considerations. As organizations embrace emerging technologies responsibly, they must address concerns such as data privacy, security, bias, and fairness to build trust and ensure responsible adoption.

Looking ahead, the future of IT holds immense promise, driven by trends such as multi-cloud and hybrid cloud adoption, explainable AI (XAI), and federated learning. By fostering collaboration, innovation, and ethical leadership, we can harness the power of IT to create a more intelligent, equitable, and sustainable future for generations to come.

In this rapidly evolving landscape, it is imperative for organizations to stay abreast of the latest developments, adapt to changing trends, and embrace a culture of continuous learning and innovation to unlock the full potential of IT in driving positive change and shaping a brighter future for humanity.


Rashmi Malhotra has more than 7 years of experience in IT. Working in India, Poland and Canada has broadened her perspective and enriched her writing. Besides her technical skills, she has a keen interest in creative writing and enjoys expressing herself through poems and articles. As a new mother to a precious boy, her journey has taken on a new dimension as she navigates the joys and challenges of motherhood while balancing her professional aspirations and creative pursuits. Her journey as a mother adds depth and purpose, and inspires her to strive for excellence, embrace change, and leave a lasting legacy for the future.


Introduction to the Diagnostic Event Manager (DEM)

Introduction about diagnostics in AUTOSAR

Diagnostics plays a central role in AUTOSAR-based vehicles, because it is through diagnostics that we check the health of the vehicle.

There are two main types of diagnostics used in vehicles:

1. On-board diagnostics (OBD):

Imagine your vehicle having a built-in doctor! On-board diagnostics are essentially software functions within a vehicle’s electronic control units (ECUs) that constantly monitor the engine, sensors and actuators.

2. Off-board diagnostics:

While OBD provides a good initial check-up, sometimes a more in-depth analysis is needed. This is where off-board diagnostics come in. These involve specialized tools and equipment used by technicians. These tools can include scan tools that communicate directly with the vehicle’s ECU, various measuring devices to assess specific parameters, and even databases containing a wealth of repair information.

The AUTOSAR diagnostics stack consists of three modules:

1. DEM (Diagnostic Event Manager):

This module acts as the central hub for managing diagnostic events detected within the vehicle. It receives information from sensors and ECUs about potential problems.

2. DCM (Diagnostic Communication Manager):

Think of the DCM as the translator for communication between the vehicle and external diagnostic tools. It implements diagnostic protocols, such as Unified Diagnostic Services (UDS), which allow mechanics to use scan tools to communicate with the vehicle’s ECU. The DCM manages diagnostic sessions and security levels, and transmits the diagnostic information requested by the scan tool. It also relies on the DEM for information about detected events and fault codes.

3. FIM (Function Inhibition Manager):

This module plays a crucial role in safeguarding the vehicle. Based on the severity of diagnostic events reported by the DEM, the FIM can take action to inhibit specific vehicle functions. For example, if a critical engine issue is detected, the FIM can inhibit functions that depend on the affected component to prevent further damage.

Introduction to the Diagnostic Event Manager (DEM) module

The DEM module plays a crucial role in automotive software systems, providing robust error detection, handling and reporting mechanisms. It acts as the central nervous system, monitoring the health and status of various devices and components within the vehicle’s electronic architecture.


The DEM is separated logically into multiple interacting software components:

• For each OS partition configured with DEM access, a dedicated DEM Satellite service SWC provides the interfaces Diagnostic Monitor and Diagnostic Info.

• For one OS partition, a DEM Master service SWC provides the remainder of the AUTOSAR interfaces.

DEM Satellite(s): A DEM Satellite performs de-bouncing locally; this includes counter- and time-based de-bouncing. The DEM Satellite also provides access to the MonitorStatus. If application ports only connect to the local DEM Satellite, there is no runtime overhead for DEM calls.

DEM Master: The actual event processing like UDS status, storage of environmental data and notification handling is performed on the DEM Master service component. Also, DEM Master is the source of all configured callbacks or notifications.

Diagnostic trouble codes (DTCs)

DTCs are cryptic messages generated by your vehicle’s ECUs to signal potential malfunctions. They act as standardized identifiers, akin to error codes, pinpointing issues within various systems.

Main role of DTCs:

a. Early warning system: DTCs serve as an early indication of problems, allowing for prompt attention before they escalate into major breakdowns.

b. Targeted diagnosis: By pinpointing the culprit system or component, DTCs guide technicians towards specific areas for troubleshooting, saving time and effort.

c. Universal language: Standardized DTC formats ensure compatibility between ECUs and diagnostic tools from different manufacturers, facilitating communication during repairs.

d. Based on severity:

1. Pending DTC: A potential issue requiring further monitoring.

2. Confirmed DTC: The fault has been detected multiple times and demands attention.

3. Permanent DTC: A severe problem that may impact drivability or safety.

e. Additional considerations:

1. Test not complete DTC: The diagnostic procedure to confirm the DTC is ongoing.

2. History DTC: The fault was previously detected but hasn’t recurred recently.

In our project, we created standardized interfaces within the application software components to communicate with the Diagnostic Event Manager (DEM) module via the RTE (Runtime Environment), giving the application access to Diagnostic Trouble Codes (DTCs) and related information for efficient diagnosis and management of vehicle issues. We also implemented some DTCs for testing purposes, using stub code to simulate DEM interactions.

(Figure: DEM module architecture, showing a DEM Master and several DEM Satellites distributed across partitions and cores, with applications connected through the RTE and surrounding BSW modules such as Dcm, FiM, NvM, SchM, Det, EcuM, J1939Dcm and the OS.)

We declared a test variable, rte_Dtc_test, and used it in stub code to report individual diagnostic events to the DEM through the RTE:

uint8 rte_Dtc_test = 0;   /* set manually (e.g., via the debugger) to trigger a test DTC */

if (rte_Dtc_test == 1) {
    Rte_Call_AirComp_Blocked_SetEventStatus(DEM_EVENT_STATUS_FAILED);
} else if (rte_Dtc_test == 2) {
    Rte_Call_AirComp_Overload_SetEventStatus(DEM_EVENT_STATUS_FAILED);
} else if (rte_Dtc_test == 3) {
    Rte_Call_CAN_BusOFF_SetEventStatus(DEM_EVENT_STATUS_FAILED);
} else if (rte_Dtc_test == 4) {
    Rte_Call_CAN_VMCU_Chas3_02P_Error_SetEventStatus(DEM_EVENT_STATUS_FAILED);
} else if (rte_Dtc_test == 5) {
    Rte_Call_TVB_LowVoltager_SetEventStatus(DEM_EVENT_STATUS_FAILED);
} else if (rte_Dtc_test == 6) {
    /* report the low-voltage event as passed again */
    Rte_Call_TVB_LowVoltager_SetEventStatus(DEM_EVENT_STATUS_PASSED);
}

Diagnostic event processing

A diagnostic event represents the result of a monitor, which can be located in an SWC or a BSW module. These monitors can report an event as a qualified test result by calling Dem_ReportErrorStatus() or Dem_SetEventStatus() with “Failed” or “Passed”, or as a pre-qualified test result (“Pre-Failed” or “Pre-Passed”) when event de-bouncing is used.

To use pre-qualified test results, the reported event must be configured with a de-bounce algorithm. Otherwise (i.e., when monitor-internal de-bouncing is configured), pre-qualified results cause a DET report and are ignored.

Event de-bouncing

1. Counter-based algorithm

A monitor must trigger the DEM actively, usually multiple times, before an event will be qualified as passed or failed. Each separate trigger will add (or subtract) a configured step size value to a counter value, and the event will be qualified as ‘failed’ or ‘passed’ once this de-bounce counter reaches the respective configured threshold value.

The configurable thresholds support a de-bounce counter range of -32768 … 32767. For external reports, the current counter value is mapped linearly to the UDS fault detection counter, which supports a range of -128 … 127.

If enabled, counter-based de-bounced events can de-bounce across multiple power cycles. To support this, the counter value is persisted to non-volatile memory during ECU shutdown.

The figure below shows the de-bouncing configuration in our project.
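
To make the counter mechanics easier to follow, here is an illustrative simulation in Python rather than the actual AUTOSAR C implementation; the step sizes and thresholds are made-up configuration values.

FAILED_THRESHOLD, PASSED_THRESHOLD = 127, -128
STEP_UP, STEP_DOWN = 20, 10

counter, status = 0, "NOT_QUALIFIED"
reports = ["PREFAILED"] * 7          # seven consecutive pre-failed reports

for report in reports:
    if report == "PREFAILED":
        counter = min(counter + STEP_UP, FAILED_THRESHOLD)
    else:                            # "PREPASSED"
        counter = max(counter - STEP_DOWN, PASSED_THRESHOLD)
    if counter >= FAILED_THRESHOLD:
        status = "FAILED"            # event qualified as failed
    elif counter <= PASSED_THRESHOLD:
        status = "PASSED"            # event qualified as passed
    print(counter, status)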

2. Time-based algorithm

Time-based de-bouncing tackles transient glitches by requiring only a single trigger from the application to set the qualification direction (failed or passed). The DEM then starts a timer, ignoring further reports for the same event and direction. Upon timer expiration, a PREFAILED direction triggers a potential DTC, while PREPASSED confirms the initial report as a false alarm.

This approach simplifies implementation and reduces false positives, but the de-bounce time needs careful configuration based on the monitored system’s response characteristics.

3. Monitor internal de-bouncing

If the application implements the de-bouncing algorithm itself, a callback function can be provided, which is used for reporting the current fault detection value to the diagnostics layer.

These callbacks should not contain heavy logic, since they are called in a runtime-critical context. If monitor-internal de-bouncing is configured for an event, its monitor cannot request de-bouncing by the DEM; doing so results in a DET report if development error detection is enabled. The DEM module does not have the necessary information to process these types of monitor results.


Event reporting

Monitors may report test results either via a port interface or, in the case of a complex device driver or basic software module, via a direct C API.

1. Monitor status:

Every event maintains monitor status information, which is updated synchronously with the monitor reports:

• Bit 0 – TestFailed: indicates the last qualified test result (passed or failed) reported by the monitor.

• Bit 1 – TestNotCompletedThisOperationCycle: indicates whether the monitor has not yet reached a qualified test result (passed or failed) in the current operation cycle.

2. Event status:

Every event also has a status byte in which each bit represents a different piece of status information. These bits are updated asynchronously in the DEM main function.

• Bit 0 – TestFailed: indicates the qualified result of the most recent test.

• Bit 1 – TestFailedThisOperationCycle: indicates whether the event was qualified as failed during the active operation cycle.

• Bit 2 – PendingDTC: indicates whether the event was qualified as failed during a past or the current operation cycle and has not tested ‘passed’ for a whole cycle since the failed result was reported.

• Bit 3 – ConfirmedDTC: indicates that the event has been detected often enough to be stored in long-term memory.

• Bit 4 – TestNotCompletedSinceLastClear: indicates whether the event has not yet been qualified (passed or failed) since the fault memory was last cleared.

• Bit 5 – TestFailedSinceLastClear: indicates whether the event has been qualified as failed since the fault memory was last cleared.

• Bit 6 – TestNotCompletedThisOperationCycle: indicates whether the event has not yet been qualified (passed or failed) during the active operation cycle.

• Bit 7 – WarningIndicatorRequested: indicates whether a warning indicator for this event is active.

NVRAM storage

A standard AUTOSAR DEM stores all data collected from the application in NVRAM. For such data elements, data sampling is always processed in the DEM cyclic function, and queries (e.g., through DCM UDS diagnostic services) always return the frozen value.

As an extension to AUTOSAR, the DEM also allows data elements to be configured to return ‘live’ data. This is especially useful for statistics data that is not already covered by the DEM-internal data elements. When data elements are configured not to be stored in NVRAM, the data is requested every time a query is processed, so their implementation should be reentrant and fast enough for diagnostic responses to be completed in time.


Conclusion

The Diagnostic Event Manager (DEM) module serves as a critical component in automotive systems, adhering to AUTOSAR standards to ensure effective monitoring, reporting and resolution of diagnostic events. With robust features such as event reporting, aging and healing mechanisms, operation cycle management, and comprehensive data storage through NVRAM, extended data records, and various snapshot records, the DEM module enhances the reliability and diagnostic capabilities of modern vehicles. Its commitment to real-time status updates, time series snapshot records, and global snapshot records underscores its role in facilitating efficient fault diagnosis and performance monitoring, contributing to the overall integrity and functionality of automotive systems.

References

1. https://www.kpit.com/insights/the-lifecycle-of-a-diagnostic-trouble-code-dtc/

2. https://www.embitel.com/blog/embedded-blog/decoding-the-implementation-of-uds-vehicle-diagnostics-inautosar-base-software-module

3. https://www.autosartoday.com/posts/dem_overview

Sharana Basava works as a junior software engineer at Luxoft in the automotive domain, specifically contributing to projects with a focus on diagnostics. He graduated from Sir M. Visvesvaraya Institute of Technology in Bangalore with a degree in Electrical and Electronics Engineering (EEE). He’s passionate about automotive technology and software development.


Process discovery and mining for automation opportunities

Efficiency through workflow analysis

The need for automation

In an increasingly competitive environment, organizations must continuously enhance their operational efficiency to reduce costs and errors and deliver superior products and services. Automation presents a compelling solution to these challenges, but not all processes are equally suited for automation. Success lies in identifying the right processes through process discovery and mining.

Benefits of process automation

The advantages of process automation are many and include increased productivity, fewer errors, improved consistency, and lower costs. By automating the right processes, organizations can achieve a competitive edge.

Process discovery and mining:

‘Process discovery’ refers to a set of tools that provide a way to identify, map, define and analyze business processes. Discovery is the essential first step in any process improvement or automation effort because for these initiatives to be successful, businesses must first have a thorough understanding of their processes as they exist today.[1]

Process mining is a method of applying specialized algorithms to event log data to identify trends, patterns and details of how a process unfolds. Process mining applies data science to discover, validate and improve workflows.[2]

Together, these techniques help organizations understand their processes at a granular level and uncover bottlenecks and inefficiencies.

In today’s era of automation and digital transformation, organizations constantly look for ways to enhance productivity and reduce operating expenses. Process discovery and mining have proven to be helpful tools for identifying and automating inefficient processes along the way. This white paper explores the subject and provides a guideline for companies aiming to leverage these techniques to maximize the advantages of automation.

Process discovery: Unveiling the structure of workflows

Techniques for process mapping

Process discovery begins with a meticulous examination of existing workflows through various mapping techniques. These techniques offer a visual representation of the steps, tasks, and interactions within a process. The choice of mapping method depends on the complexity and specific characteristics of the process under scrutiny:

• Flowcharts: A flowchart is a generic tool that can be adapted for a wide variety of purposes and can be used to describe various processes, such as a manufacturing process, an administrative or service process, or a project plan.[3]

• Swimlane diagrams: A swimlane diagram delineates who does what in a process. Using the metaphor of lanes in a pool, a swimlane diagram provides clarity and accountability by placing process steps within the horizontal or vertical “swimlanes” of a particular employee, work group or department. It shows connections, communication and handoffs between these lanes, and it can serve to highlight waste, redundancy and inefficiency in a process.[4]

• Value stream maps: Value stream mapping (sometimes called VSM) is a lean manufacturing technique to analyze, design, and manage the flow of materials and information required to bring a product to a customer.[5] Value stream maps focus on visualizing the entire end-to-end process, emphasizing value-adding and non-value-adding activities.

Data collection: The pillar of discovery

Data serves as the foundational element for effective process discovery. Organizations collect a diverse set of data types to gain a comprehensive understanding of the workflow, and typically rely on two types of data in this regard: historical data and event logs.

• Historical data: Capturing past process executions provides insights into performance.

• Event logs: These logs record all process activities and are pivotal for subsequent process mining.
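To make this concrete, here is a minimal, illustrative event log in Python together with a helper that groups it into per-case traces. The field names, sample cases and the traces_by_case helper are invented for this article; real logs are typically exported from ERP, CRM or ticketing systems as CSV or XES files.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative event log: each record ties an activity to a case, a timestamp
# and the resource that performed it (field names are examples, not a standard).
event_log = [
    {"case": "INV-001", "activity": "Receive invoice",  "time": datetime(2024, 3, 1, 9, 0),  "resource": "scanner"},
    {"case": "INV-001", "activity": "Validate invoice", "time": datetime(2024, 3, 1, 9, 30), "resource": "clerk_A"},
    {"case": "INV-001", "activity": "Approve payment",  "time": datetime(2024, 3, 2, 11, 0), "resource": "manager"},
    {"case": "INV-002", "activity": "Receive invoice",  "time": datetime(2024, 3, 1, 9, 5),  "resource": "scanner"},
    {"case": "INV-002", "activity": "Validate invoice", "time": datetime(2024, 3, 3, 14, 0), "resource": "clerk_B"},
    {"case": "INV-002", "activity": "Approve payment",  "time": datetime(2024, 3, 5, 10, 0), "resource": "manager"},
]

def traces_by_case(log):
    """Group events into one time-ordered activity sequence (trace) per case."""
    traces = defaultdict(list)
    for event in sorted(log, key=lambda e: e["time"]):
        traces[event["case"]].append(event["activity"])
    return dict(traces)

print(traces_by_case(event_log))
# -> {'INV-001': ['Receive invoice', 'Validate invoice', 'Approve payment'],
#     'INV-002': ['Receive invoice', 'Validate invoice', 'Approve payment']}
```

Once events are grouped into traces like this, every subsequent mining step, from model discovery to performance analysis, operates on the same simple structure.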

Interviews and observation

Techniques such as interviews with stakeholders directly involved in the workflow, and direct observation of the workflow itself, add a qualitative dimension to understanding the process dynamics:

• Interviews offer historical context and provide insight into process intricacies.

• Observation provides real-time insight into the actual execution of the process and the variances involved.

Document analysis

Analyzing the existing documentation aligns theory with execution. Document review is the methodical examination of available manuals and standard operating procedures, which helps align the theoretical aspects of the process with its real-world execution.

Organizations can create an in-depth understanding of their present procedures by combining these strategies, which paves the way for the process mining phase that comes next. Process discovery not only sheds light on the documented aspects of workflows but also captures the deviations that may occur during the actual execution. This comprehensive understanding forms the foundation for informed decision-making and targeted improvements in the pursuit of automation opportunities.

Process mining: Obtaining insights from operational data

Process discovery algorithms

Process mining utilizes algorithms to analyze event log data and automatically generate process models. These models act as a dynamic representation of how processes are executed in practice, providing fine details not found in the documentation. Key process discovery algorithms include (a simplified sketch of the counting they build on appears after the list):

• Alpha algorithm: The Alpha Miner (or α-algorithm, α-miner) connects event logs, or observed data, with the discovery of a process model.[6]

• Heuristic mining: In computer science, a heuristic is a technique designed to solve a problem more quickly by finding an approximate solution where classic algorithms look for an exact one.[7] By applying heuristics, this approach identifies patterns in the event log data to construct process models and is particularly useful for handling noisy or incomplete data.

• Genetic algorithms: The Genetic Miner derives its name from biology and works in a similar way to natural selection. It uses a genetic algorithm to search a space of possible process models and identify the most likely one; it can be seen as an evolutionary approach that mutates and combines process models in search of better ones.[8]
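As a simplified sketch of what these algorithms build on, the snippet below computes the directly-follows relation, i.e. how often one activity is immediately followed by another, from a handful of hypothetical traces. It is not any particular miner's implementation, only the counting step that discovery approaches such as the Alpha Miner start from.

```python
from collections import Counter

# Traces as produced by grouping an event log by case (see the earlier sketch);
# the third trace contains a rework loop that documentation alone might miss.
traces = [
    ["Receive", "Validate", "Approve", "Pay"],
    ["Receive", "Validate", "Approve", "Pay"],
    ["Receive", "Validate", "Reject", "Validate", "Approve", "Pay"],
]

def directly_follows(traces):
    """Count how often activity a is immediately followed by activity b."""
    relation = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            relation[(a, b)] += 1
    return relation

for (a, b), count in directly_follows(traces).most_common():
    print(f"{a} -> {b}: {count}")
# Receive -> Validate: 3, Validate -> Approve: 3, Approve -> Pay: 3,
# Validate -> Reject: 1, Reject -> Validate: 1
```

From these counts a miner derives the ordering, choice and loop constructs of the process model; the rework loop Validate -> Reject -> Validate, for example, surfaces here even if it is absent from the official process description.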

Conformance checking

Conformance checking is a technique used to check process compliance by comparing the event logs of a discovered process against an existing reference model (target model) of the same process.[9] This step ensures alignment between the documented processes and their real-world execution. Key aspects of conformance checking include (a toy fitness calculation follows the list):

• Fitness metrics quantify the alignment between the process model and observed data.

• Precision and recall evaluate the accuracy and completeness of the process model.
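The toy sketch below illustrates the fitness idea under a strong simplification: the reference model is reduced to a set of allowed directly-follows steps, and fitness is the share of a trace's steps that the model permits. Production tools compute fitness with token-based replay or alignments, which are considerably more sophisticated; the model and traces here are hypothetical.

```python
# Reference model reduced to the transitions it allows (a simplification;
# real conformance checking replays traces on a model such as a Petri net).
ALLOWED = {
    ("start", "Receive"), ("Receive", "Validate"), ("Validate", "Approve"),
    ("Approve", "Pay"), ("Pay", "end"),
}

observed_traces = [
    ["Receive", "Validate", "Approve", "Pay"],                        # conforms
    ["Receive", "Approve", "Pay"],                                     # skips validation
    ["Receive", "Validate", "Reject", "Validate", "Approve", "Pay"],   # rework loop
]

def trace_fitness(trace, allowed):
    """Share of steps in a trace that the reference model permits."""
    steps = list(zip(["start"] + trace, trace + ["end"]))
    return sum(step in allowed for step in steps) / len(steps)

for trace in observed_traces:
    print(trace, round(trace_fitness(trace, ALLOWED), 2))
# Prints 1.0 for the conforming trace and lower values for the deviating ones.
```

Low-fitness traces are exactly the cases worth inspecting: they either reveal undocumented behavior or point to execution problems.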

Performance analysis

Process mining tools also examine the performance of workflows, identifying bottlenecks, delays and resource utilization patterns. Common metrics include (a brief cycle-time sketch appears after the list):

• Throughput analysis which examines the completion rates for overall process efficiency.

• Cycle time analysis which evaluates the time taken for a single case to traverse the entire process.

• Resource utilization which reveals allocation efficiencies across the workflow.
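The sketch below shows a cycle time calculation under the assumption that each case's first and last timestamps have already been extracted from the event log; the cases and figures are illustrative.

```python
from datetime import datetime

# Illustrative first/last event timestamps per case, taken from an event log.
case_events = {
    "INV-001": (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 2, 11, 0)),
    "INV-002": (datetime(2024, 3, 1, 9, 5), datetime(2024, 3, 5, 10, 0)),
    "INV-003": (datetime(2024, 3, 4, 8, 0), datetime(2024, 3, 4, 17, 0)),
}

def cycle_times_hours(cases):
    """Cycle time per case: elapsed time between its first and last event."""
    return {case: (end - start).total_seconds() / 3600
            for case, (start, end) in cases.items()}

times = cycle_times_hours(case_events)
print({case: round(hours, 1) for case, hours in times.items()})   # per-case cycle time
print(round(sum(times.values()) / len(times), 1), "hours on average")
```

Cases with unusually long cycle times, compared to the average, are the first candidates for bottleneck and root cause analysis.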


Root cause analysis

Understanding the root causes of deviations and inefficiencies is vital for process improvement. Process mining tools facilitate root cause analysis by (a short variant-analysis sketch follows the list):

• Identifying patterns: Uncovering recurring issues within the process.

• Trace analysis: Examining individual process instances at a granular level.

• Contextual information: Examining the factors that contribute to deviations.
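One simple way to surface recurring patterns is variant analysis: counting how often each distinct activity sequence occurs. The traces below are hypothetical; in practice, rare variants are natural starting points for trace-level root cause investigation.

```python
from collections import Counter

# Traces grouped by case (see the event-log sketch); each tuple is one variant.
traces = [
    ("Receive", "Validate", "Approve", "Pay"),
    ("Receive", "Validate", "Approve", "Pay"),
    ("Receive", "Validate", "Reject", "Validate", "Approve", "Pay"),
    ("Receive", "Approve", "Pay"),
]

variants = Counter(traces)
for variant, count in variants.most_common():
    share = count / len(traces)
    print(f"{count}x ({share:.0%}) {' -> '.join(variant)}")
# The rare variants (the rework loop and the skipped validation) stand out
# immediately and can then be examined with their contextual information.
```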

Predictive analytics

Some process mining tools also use predictive analytics to forecast the future behavior of processes. By leveraging historical data, these tools can (a simple forecasting sketch follows the list):

• Anticipate bottlenecks: Predictive models can identify potential bottlenecks before they occur.

• Scenario planning: Simulating different scenarios based on historical data allows organizations to evaluate the potential impact of process changes or automation initiatives.

• Resource allocation optimization: Predictive analytics aid in optimizing resource allocation by forecasting demand and identifying areas requiring additional resources.
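As a deliberately simple illustration of the forecasting idea, the sketch below predicts tomorrow's case arrivals with a moving average and compares them against an assumed team capacity. Commercial process mining suites use far richer predictive models; all names and figures here are invented.

```python
# Daily numbers of newly opened cases over two weeks (illustrative figures).
daily_arrivals = [38, 41, 40, 55, 62, 23, 18, 48, 52, 55, 66, 73, 45, 40]

def moving_average_forecast(series, window=7):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

TEAM_CAPACITY_PER_DAY = 50  # assumed daily processing capacity of the team

forecast = moving_average_forecast(daily_arrivals)
print(f"Forecast arrivals for tomorrow: {forecast:.1f}")
if forecast > TEAM_CAPACITY_PER_DAY:
    print("Backlog likely to grow: consider automating or re-allocating resources.")
```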

In conclusion, process mining goes beyond static process documentation: it combines existing documentation with a dynamic, data-driven approach that aids in understanding, analyzing and optimizing workflows.

Identifying automation opportunities: Navigating efficiency

Criteria for automation

Frequently used criteria for identifying processes suitable for automation include:

• Repetition

• Frequency

• Rule-based

• Time intensive

• Resource intensive

• Error prone

• Scalable

Leveraging process mining

Process mining reveals inefficiencies and bottlenecks that are ripe for automation:

• Bottleneck identification: Process mining reveals areas where processes experience delays or constraints. Automating these bottleneck areas can significantly enhance efficiency.

• Redundancy elimination: Process mining helps identify redundant steps or activities. Automation can streamline processes by eliminating unnecessary redundancies.

• Complexity analysis: Process mining can evaluate the complexity of processes. Identifying areas where automation can simplify tasks reduces the risk of errors, which in turn improves overall process efficiency.

Use cases and examples

A few examples of automation opportunities across various domains:

• Invoice processing: Automatically processing invoices up to a defined threshold frees up human resources, and automatic validation of invoices reduces human errors.

• Customer support: Chatbots handle common queries, freeing up human agents. Broadband operators, for example, use chatbots to investigate and restart customer modems or to inform customers of outages.

• Data entry and validation: Automation improves accuracy and efficiency. Data entry can be minimized or even avoided when the data is already available in a system and can be reused instead of being re-entered manually.

• Workflow approval processes: Streamlining approval workflows enables faster decision-making; automatic approval of sick leave is a simple yet widely used example.

Implementing automation: Navigating the path to efficiency

Selecting the right automation tools and developing a robust strategy are key for achieving efficiency through automation.

Choosing the right automation tools

When choosing an automation tool, consider its compatibility with existing applications and its scalability to accommodate growing business needs. The tool should also be user-friendly, to increase adoption, and customizable, to provide flexibility and adaptability.


Building an automation strategy

Developing a well-defined automation strategy is crucial for a successful implementation. While building a strategy we need to consider the following key components:

• Objectives could be improving efficiency, reducing costs, or enhancing customer experience. A well-defined purpose provides direction.

• Prioritize processes based on criteria like impact, feasibility, and importance. Start with processes that offer quick wins and significant efficiency gains.

• Anticipate and address the resistance to change. Manage change by communicating the benefits, involving key stakeholders, and providing training to ensure a smooth transition.

• Implement automation in small pilot programs before scaling up. Pilot programs help identify potential challenges before full-scale deployment.

The road to continuous improvement: Sustaining momentum in automation

Continuous monitoring, scalability, security, technology adaptation, and fostering a culture of innovation drive sustained improvement:

• Monitoring and feedback loops: Regularly assess performance metrics and gather user feedback.

• Scaling automation initiatives: Identify new opportunities and standardize processes.

• Ensuring security and compliance: Conduct regular audits and adhere to applicable regulations.

• Adapting to technological advances: Stay informed about emerging technologies and pilot new tools.

• Cultivating a culture of continuous improvement: Promote innovation, invest in training and recognize contributions.

Conclusion

Process discovery and mining are essential techniques that empower organizations to identify the right candidate processes for automation. By leveraging these, organizations can uncover inefficiencies, optimize processes, and embark on a journey towards efficiency and competitiveness. Process automation is no longer a luxury. It is a necessity for organizations seeking to thrive in a dynamic and digital world.

References

1. https://www.nintex.com/process-intelligence/process-discovery/learn/what-is-process-discovery/

2. https://www.ibm.com/topics/process-mining

3. https://asq.org/quality-resources/flowchart

4. https://www.lucidchart.com/pages/tutorial/swimlane-diagram

5. https://www.atlassian.com/continuous-delivery/principles/value-stream-mapping

6. https://www.workfellow.ai/learn/process-mining-algorithms-simply-explained

7. https://www.workfellow.ai/learn/process-mining-algorithms-simply-explained

8. https://www.workfellow.ai/learn/process-mining-algorithms-simply-explained

9. https://appian.com/process-mining/conformance-checking.html


Author

Parameswaran Sivaramakrishnan

Account delivery lead

Parameswaran brings over a decade of IT expertise, specializing in business process management and workflow orchestration. With a career spanning industries including manufacturing, finance and insurance, he has honed the skill of demystifying intricate technical concepts for audiences of all technical backgrounds.


Embracing uncertainty Insights from the Project Management Excellence Conference

If you were to inquire with any project manager about the most dreaded aspect they’d like to avoid, chances are high that uncertainty would be at the top of their list. This sentiment finds justification in the definition provided by the Oxford Reference, which explains that uncertainty arises when decisions must be made regarding the future without the ability to assign probabilities to potential outcomes, often used interchangeably with the term risk. In essence, we dislike uncertainty because it introduces risks.

There are two primary forms of uncertainty that project managers contend with, as defined by Glen Alleman:

1. Aleatory uncertainty, stemming from the inherent randomness of processes. For instance, flipping a coin illustrates this type of uncertainty.

2. Epistemic uncertainty, which arises from a lack of knowledge. This encompasses a range of factors, including but not limited to unfamiliarity with specific tasks (such as baking a cake or understanding probability theory) and ignorance of what is yet unknown. This concept is epitomized by the metaphor of the “black swan” — an idea captured by the Latin phrase “Rara avis in terris nigroque simillima cycno” (A rare bird on this earth, like nothing so much as a black swan). For centuries, people believed that black swans did not exist until Dutch explorers encountered them in Western Australia in 1697.

While we may be constrained in our ability to mitigate aleatory uncertainty, we have the capacity to address epistemic uncertainty. Embracing the philosophy encapsulated in the words of Russell Ackoff, “we can never know everything, but we can always know more,” we’ve chosen to confront epistemic uncertainty head-on by organizing the Project Management Excellence Conference at Luxoft.

The conference theme, “Project Management on Fire: How to Run Projects in Turbulent Times,” was not selected arbitrarily. Rather, it reflects the current landscape in which we operate, where the ability to engage in proactive risk management and demonstrate adaptability is paramount to the success of projects.

From preparation to execution, every moment was enriching: more than 1,000 participants, dozens of questions and fruitful discussions made the event truly valuable.

We were privileged to hear from 8 speakers, each sharing insights and real-world examples of how project management transforms businesses:

• Dmytro Pidoprygora delved into using Way of Working as a foundation for large software system integration projects.

• Dzmitry Yavid explored Kanban implementation for enhanced project performance.

• Yogesh Kshirsagar enlightened us on the fusion of strategic thinking with project management.

• Kseniya Kultysheva shared strategies for fostering effective communication and engagement in virtual teams.

• Sandeep Kumar Singh showcased significant savings in end-user productivity through efficient project management.

• Maksym Vyshnivetskyi shed light on the influence of experience on decision-making.

• Veronika Khalaim navigated us through project management in high-stakes scenarios.

• Our guest star Taras Fedoruk unveiled the phoenix effect: Closing projects to maximize organizational benefits.

None of this would have been possible without the unwavering support of the Luxoft Employer Branding team and Anastasiia Tkachuk. Did we achieve all the goals we planned? Definitely. Are we planning to stop now? Definitely not. We’ve already begun working on our next major event, and if you’d like to be a part of it, don’t hesitate to reach out to our PM Chapter leads. We’re seeking bright minds and contributors. Stay tuned for more updates!


Author

Maksym Vyshnivetskyi

Maksym has 20 years in the IT field, progressing from junior project manager to department director and head of the project management office. He has spent the last 10 years at Luxoft focusing on project management excellence and quality assurance processes. He is a certified Project Management Professional (PMP) and an Accredited Kanban Trainer (AKT) from Kanban University.


The most essential pair QAC and Tessy safeguarding AUTOSAR’s future

AUTOSAR, which stands for AUTomotive Open System ARchitecture, represents a joint effort among automotive and software companies worldwide. The primary goal of this partnership is to create a standardized software framework and open E/E system architecture for mobility. AUTOSAR focuses on delivering scalability, portability, safety, security and innovation to automotive software systems.

Maintaining the quality and safety of AUTOSAR software components necessitates rigorous testing methods and tools throughout the development phase. In this article, we introduce two such tools: QAC and Tessy. QAC refers to Qualification Assurance Criteria, a set of requirements that outline the expected quality characteristics of AUTOSAR software components. Tessy, on the other hand, serves as an automated testing tool for AUTOSAR software components, enabling verification of adherence to QAC requirements and measurement of code coverage.

Let’s delve into the concept and significance of QAC and Tessy, exploring how they can be integrated to form a comprehensive testing approach for AUTOSAR systems. We’ll also touch on the advantages, obstacles and future directions of employing QAC and Tessy in AUTOSAR development.

Introducing QAC: The sentinel of quality

QAC encompasses a set of criteria that outline the desired quality attributes of AUTOSAR software components, including functionality, performance, robustness, reliability and security. These requirements stem from AUTOSAR specifications, functional safety standards such as ISO 26262, and industry best practices. QAC requirements are divided into four categories:

• Static requirements: Criteria that can be assessed using static analysis tools, such as coding rules, naming conventions, data types and interfaces.

• Dynamic requirements: Criteria that can be checked using testing tools, such as input/output behavior, timing, memory usage and error handling.

• Qualification requirements: Standards for qualifying a software component for a given safety level or certification, including test coverage, test cases, test documentation and traceability.

• Configuration requirements: The configuration parameters and options for the software component, such as preprocessor macros, compiler switches and linker options.

QAC requirements are essential for ensuring that AUTOSAR software components are of high quality and safe. They achieve this by setting clear, measurable objectives and by providing guidelines and means of verification. Equally important, QAC requirements define a common quality standard and interface specification, which allows software components from diverse sources to be integrated.

Introducing Tessy: The automated maestro of testing

Tessy is a Razorcat Development GmbH product for automated testing.

The tool supports all phases of the unit test cycle, including test design, test execution, analysis of test results and reporting. It also supports integration testing, regression testing and code coverage measurement. TÜV SÜD has certified Tessy for use in functional safety contexts, including IEC 61508, ISO 26262, EN 50128, IEC 60880 and IEC 62304.

Tessy can be used to verify compliance with QAC requirements because it supports numerous coding standards, such as MISRA and AUTOSAR, and coverage metrics including statement, branch and MC/DC coverage. Its features for documenting test scenarios, test data and complete test reports can also be used to demonstrate qualification of a software component.

Using Tessy brings several benefits, including:

• Test coverage improvement: Tessy can automate test case generation through classification trees, a structured and pictorial way of specifying test cases by dividing the input and output variables of a software component into classes. Tessy measures the test cases’ code coverage and identifies parts of the code that are untested or unreachable.

• Time savings in testing: Compared to manual testing, Tessy executes tests and analyzes the results automatically, which saves time and effort. Tessy also allows existing tests to be reused against new versions of the code, reducing the time needed for repeated or continuous testing.

• Early defect detection: Checking against QAC requirements finds defects earlier in the development phase, so errors are not propagated further and the cost and complexity of fixing them are reduced.


How do QAC and Tessy work together?

QAC and Tessy can be used together to create a comprehensive testing strategy for AUTOSAR systems, as outlined in the steps below.

The following steps make up the workflow:

Step 1: Develop the software component’s QAC requirements, based on AUTOSAR specifications, functional safety standards and best practices. The QAC requirements must cover static, dynamic, qualification and configuration parts of the software component.

Step 2: Create the software component’s test cases using Tessy’s classification tree editor (CTE). These test cases must address the QAC dynamic requirements, such as timing, input/output behavior, memory usage and error handling, and should be linked to the QAC requirements by means of Tessy’s requirements coverage view.

Step 3: Run the tests for the software component using the test data editor (TDE) integrated in Tessy. Test execution should use the appropriate compiler, debugger and target system as per the QAC configuration requirements, and should measure code coverage as defined by the QAC qualification requirements.

Step 4: Evaluate the test results for the software component using Tessy’s test cockpit view. The results indicate whether each test case passed or failed, along with its output values and coverage data. They should also show the compliance status against the QAC requirements through the requirements traceability defined by the QAC qualification requirements.

Step 5: Report the test outcomes for the software component using Tessy’s test report generation feature. The report should outline the test and coverage results, including coverage reviews for any unreachable code segments, and should illustrate how the software component meets the QAC qualification criteria.

The future of QAC and Tessy at AUTOSAR

The development of AUTOSAR is encountering challenges and opportunities as the automotive sector undergoes changes and advancements, including electrification, connectivity, automation and artificial intelligence. To tackle these shifts, QAC and Tessy must evolve to align with the changing landscape. This evolution entails:

• Embracing new AUTOSAR platforms and standards: AUTOSAR has introduced the adaptive platform, a software platform designed for high-performance computing and sophisticated applications like infotainment, advanced driver assistance systems and autonomous driving. QAC and Tessy must adjust to support the features of this platform, such as service-oriented architecture, dynamic configuration and security measures. They also need to adhere to emerging standards and regulations in the industry, like SOTIF (Safety of the Intended Functionality) and GDPR (General Data Protection Regulation).

• Integrating with new development tools and methods: AUTOSAR development is becoming more collaborative and agile, with software components integrated by teams and organizations using a variety of tools and methods. To keep up, QAC and Tessy must align with these approaches, such as model-based design, continuous integration and DevOps. They also need to ensure compatibility with other testing tools, like Google Test, CppUTest and CUnit, to support interoperability.

• Utilizing new technologies and methods: AUTOSAR development is enhanced by advances in technologies like artificial intelligence, machine learning and cloud computing. QAC and Tessy must make use of these advancements, for example by employing artificial intelligence to create or refine test cases, using machine learning to analyze or forecast test outcomes, and leveraging cloud computing to scale or parallelize test execution.

QAC and Tessy are powerful tools for testing AUTOSAR software components, providing a comprehensive and automated approach to ensuring their quality and safety. They can be applied to any AUTOSAR platform or standard, integrated with a wide range of development tools and methods, and can benefit from new technologies and techniques. This makes QAC and Tessy two essential tools for tomorrow’s AUTOSAR developments.

Case studies

The following case studies illustrate how QAC and Tessy have been employed in real-world AUTOSAR projects.

In one project, QAC and Tessy were utilized to test automotive power steering system components developed by an OEM supplier. The software components had to meet the AUTOSAR classic platform compliance level as well as ISO 26262 requirements. QAC and Tessy helped achieve 100% MC/DC coverage, which qualified the software components at ASIL D level.

In another project in Europe, vehicle-to-vehicle (V2V) systems based on the C2C-CC setup were tested, with QAC and Tessy used to evaluate the V2X application layer interfaces (ITS-S). The software components had to comply with the ETSI ITS standards and the AUTOSAR adaptive platform. QAC and Tessy helped ensure that the components had the required characteristics, such as performance, functionality and security, and were used to measure the latency and reliability of communication.

AUTOSAR: A worldwide consortium of motor vehicle manufacturers, suppliers, service providers, enablers and tool developers whose aim is to create a standardized automotive system architecture consisting of an open framework for all E/E systems within vehicles.

QAC: Qualification Assurance Criteria are a set of requirements that outline what is expected of AUTOSAR software components in terms of functionality, performance, robustness, reliability and security.

Tessy: Developed by Razorcat Development GmbH, Tessy is a tool for automated testing of AUTOSAR software components that supports the entire unit test cycle, including test design, test execution, test result analysis and test reporting.


Conclusion

Building software for AUTOSAR systems can be straightforward and quick, but it can also be challenging: many complications lie in wait to derail good software projects. Tessy, a software testing tool for automated module, component and back-to-back testing of embedded applications, helps users test AUTOSAR software against clearly defined quality criteria. By embracing automation-based tools, keeping pace with industry developments such as AI, and encouraging developers to participate, we can expect excellent results for the AUTOSAR software being developed.

References

1. AUTOSAR in Automotive Systems: An In-Depth Exploration. SRM Technologies. https://www.srmtech.com/knowledge-base/blogs/autosarin-automotive-systems-an-in-depth-exploration

2. AUTOSAR. Wikipedia. https://en.wikipedia.org/wiki/AUTOSAR

3. TESSY and Standards. Hitex. https://www.hitex.com/tools-components/testtools/dynamic-module/unit-test/tessy-and-standards

4. Helix QAC for C and C++. Perforce. https://www.perforce.com/products/helix-qac


Author

Aravind B N

Junior software engineer

Aravind B N is a technical writer and junior software engineer at Luxoft, working in the automotive domain. With 1.5 years of professional experience, he currently handles tasks related to AUTOSAR testing and integration, and holds a degree in Electronics and Communication Engineering (ECE) from Visvesvaraya Technological University (VTU).

