News Watch
by d2emerge
Swedish-Ukrainian software company Sigma Software Group has teamed up with Tech Nation to host a hackathon called Hack for Peace throughout Europe aimed at building tech solutions for solving issues related to war and promoting peace throughout the region.
The hackathon will take place from October 21-23 in Ukraine, the UK, Sweden, Poland, and Portugal. Hack for Peace participants will be able to connect online through teleconferencing options in order to exchange ideas across country borders, the companies explained.
The hackathon will include four tracks as areas to develop ideas around: cybersecurity, information hygiene, and media wars; mental health; kids' education; and logistics in war.
According to Sigma Software, the hackathon will have three winners at the end, and they will be able to implement their ideas with assistance from investors and mentors.
Microsoft Dev Box enables on-demand workstation creation
The managed service enables developers to create “on-demand, high-performance, secure, ready-to-code, project-specific workstations in the cloud,” the company said.
Users can sign into the Azure portal and search for “dev box” to begin creating dev boxes for their organization.
Dev Boxes are ready-to-code and pre-configured by the team with all the tools that developers need. Developers can also create their own dev boxes whenever they need to switch between projects, experiment on a proof-of-concept, or start a full build in the background.
Dev Box supports any developer IDE, SDK, or tool that runs on Windows. It also supports building cross-platform apps because of Windows Subsystem for Linux and Windows Subsystem for Android.
People on the move
• Stuart McClure has been named the new CEO of ShiftLeft. He has over 30 years of experience in security, and will use this experience to drive growth for the company and advance AI/ML in the DevSecOps market. Previously he was CEO and founder of Cylance, which BlackBerry acquired in 2019.
• NS1 has appointed Emily Nerland as its new head of global sales. She will focus on building worldwide awareness of the company’s products and explaining to companies how NS1 solutions can solve current and future infrastructure problems. She was previously the managing director of EMEA at Masergy.
• Veronica Curran is SmartBear’s new Chief People & Culture Officer, and will be utilizing her background in Diversity, Equity, and Inclusion (DEI) to build an inclusive culture at SmartBear and improve acquisition and onboarding. She comes most recently from Alumni Ventures, where she was Chief People Officer.
JetBrains Space now available on-premises
The productivity tools company JetBrains recently announced that its complete software development platform, Space, is now available on-premises in beta. The offering comes with Docker Compose and Kubernetes installation options.
Space brings users an all-in-one platform that covers Git hosting, code review, CI/CD, package repositories, issue tracking, documents, and chats. On top of this, users gain access to a remote development toolset and native integration with JetBrains IDEs.
According to the company, Space is customizable and can be extended to meet the specific needs of any company in the industry. It also works to eliminate context switching by simplifying developers’ work and letting them focus on their tasks with minimal distractions.
CData brings out 2022 updates to solutions
The data connectivity company CData has announced the 2022 release of its drivers and connectivity solutions, and the update comes packed with features.
First, it announced the addition of several new data sources, including AlloyDB, Azure Active Directory, Monday.com, Neo4j, Oracle ERP, Oracle HCM, Oracle Service Cloud, Outreach, Pipedrive, Reckon Accounts Hosted, Salesloft, and Zoho Projects.
The company also announced that its UI for ODBC Drivers now includes tabs in the connection dialog boxes that allow information to be previewed, which means that users won’t have to relaunch the dialog box to see that information.
The 2022 update also includes a new hierarchy design for drivers where each driver will now get a JSON document with details on how to build custom connection dialog UIs.
Finally, based on customer feedback and interest, CData decided to deprecate the MuleSoft Connectors and BizTalk Adapters, but will still support connectivity from these platforms.
Cloudera launches Data Platform One
Cloudera announced the launch of Cloudera Data Platform (CDP) One, an all-in-one data lakehouse software for analytics and exploratory data science.
The service has built-in enterprise security and machine learning that requires no security or monitoring operations staff, helping companies move to cloud computing for analytics and data.
CDP One provides Zero Ops functionality that enables easy self-service analytics on any type of data, and reduces TCO by 20% to 35% versus DIY cloud solutions when the initial setup and ongoing operation of platform ops, sec ops, and support are included, the company said in a post.
Infragistics Ultimate 22.1 released
This release is intended to improve, streamline, and modernize app building with added features, capabilities, and UI controls; better design and development processes; and a streamlined, interactive data visualization experience.
Among the new enhancements are improved design-to-code features and capabilities in App Builder, such as Swagger UI support, localhost data access, business charts, 15 screen layouts, new UI kits in the design system, and new add-ons in the App Builder toolbox.
Updated UI kits and new controls in Angular, Blazor, React, and Web Components, new themes, and Angular Pivot Grid and Blazor DockManager are also included in Infragistics Ultimate 22.1.
Windows Community Toolkit Labs introduced
The team at Microsoft recently announced that Windows Community Toolkit Labs is now the primary way the company will develop new features for the Windows Community Toolkit. It is intended to act as a safe space for collaboration and engineering, carrying solutions from the prototyping stage all the way through to polished, finalized components. According to Microsoft, Windows Community Toolkit Labs will make it easier for users to contribute to the toolkit, try out new features still in development, and collaborate on the development process.
The new “Labs” repository will hold discussions about new items and development, along with initial ‘experiments.’ Each experiment represents a new component (or set of related components) that begins as an initial implementation and becomes a well-tested feature by working through a defined set of criteria and quality gates along the way.
With Windows Community Toolkit Labs, users can make changes and try out new ideas without worrying that code must be ‘shippable’ before it makes its way into the repository.
With this, the company can more easily gather feedback from developers, collaborate with users on the component, tests, and documentation, and reduce the overhead on reviewing monolithic PRs.
Quality gates can then be applied as part of this process to incrementally review components and move them from the prototyping stage toward production quality.
NVIDIA announces major release of Omniverse
NVIDIA, the company known for designing and manufacturing GPUs, announced a new range of developer frameworks, tools, apps, and plugins for its platform for building and connecting Metaverse worlds based on Universal Scene Description (USD), NVIDIA Omniverse.
According to the company, this expansion gives users access to new AI-powered tools and features that let artists, developers, and engineers construct virtual worlds and content more easily than ever.
This release also works to help users more easily connect to 3D applications such as PTC Creo, SideFX Houdini, Unity, and solutions from the Siemens Xcelerator platform.
“The metaverse is a multi-trillion-dollar opportunity that organizations know they can’t ignore, but many struggle to see a clear path forward for how to engage with it,” said Rev Lebaredian, VP of Omniverse and simulation technology at NVIDIA. “NVIDIA Omniverse closes the gap between the physical and virtual worlds, and these new tools, technologies and collaborations make it possible to leap into the 3D internet today.”
A key element of this expansion is the launching of NVIDIA Omniverse Avatar Cloud Engine, a suite of AI models and services made for building and deploying lifelike virtual assistants and digital humans.
In addition, the company has introduced several platform updates:
• Omniverse Kit: New OmniLive Workflows bring an overhaul of USD-based collaboration in Omniverse, offering increased speed and performance to multi-app 3D workflows
• Omniverse Audio2Face: An AI tool that creates facial animations straight from an audio file and can infer and generate realistic emotions
• Omniverse Machinima: Several new, free 3D assets from the Post Scriptum, Beyond the Wire, and Shadow Warrior 3 games, as well as a suite of new AI animation tools
• Omniverse DeepSearch: Omniverse Enterprise customers can use AI to intuitively and accurately search large, untagged databases of 3D visual assets using natural language
GitHub Projects is now available to group issues
GitHub announced the general availability of the new Projects powered by GitHub Issues. The new version connects planning directly to the work that teams are doing on GitHub.
The new GitHub Projects enables users to group and pivot their issues by stage, priority, status, assignee, or any custom field.
Users can also define priorities, labels, assignees, OKRs, reviewers, QA stages, and other concepts with a type system that adapts to users’ processes and workflows.
To make charts easier to use, GitHub Projects enables configuring and tracking cycle velocity, current work status, and complex visualizations like cumulative flow diagrams.
Over the next two quarters, GitHub said it will focus on continuous improvement of day-to-day scenarios. This includes adding parent-child, duplicate, depends-on, and blocks relationships in issues and projects to keep everyone aligned, as well as new automation capabilities with custom triggers, conditionals, and action logic. A new timeline layout will support group-by to quickly segment work by team, initiative, or product line. A GitHub Mobile experience will also be available.
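The grouping and pivoting described above amounts to bucketing issues by a field's value. As a rough illustration, the issue data and field names below are invented for this sketch (they are not GitHub's API shapes); the operation itself looks like this in Python:

```python
from collections import defaultdict

# Hypothetical sample issues; in a real setup these would come from
# GitHub's API, whose query shape is not shown here.
issues = [
    {"title": "Fix login bug", "status": "In progress", "priority": "High"},
    {"title": "Update docs", "status": "Todo", "priority": "Low"},
    {"title": "Add dark mode", "status": "In progress", "priority": "Low"},
    {"title": "Patch CVE", "status": "Done", "priority": "High"},
]

def group_by(items, field):
    """Pivot a flat issue list into {field value: [issue titles]} buckets."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item[field]].append(item["title"])
    return dict(buckets)

by_status = group_by(issues, "status")
by_priority = group_by(issues, "priority")
print(by_status)
print(by_priority)
```

The same helper pivots on any custom field (stage, assignee, OKR), which is the appeal of a type system that adapts to a team's workflow.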
The Project Management Task You (Almost) Never Complete
BY GEORGE TILLMANN
The James Webb Space Telescope is the largest optical space telescope ever built. It is designed to see back more than 13 billion years, to the dawn of the universe. While the telescope’s functionality is meeting if not exceeding expectations, the project cost more than 10 times what was originally expected and came in 14 years late. The Webb team learned what IT has long known — the bane of project management is estimating.
Flip a coin a hundred times and half the time it will come up heads and half the time it will come up tails. Project estimates (effort, time, and cost)? Well, that’s a different story. Anecdotal evidence suggests that project managers are more than five times as likely to underestimate effort, time, or cost than to overestimate them. Why do we underestimate so often? Well, perhaps it is in our genes. If we ignore for the moment evolutionary biology, genetics, neuroscience, neuroepigenetics, phylogeny, phrenology, not to mention the Hardy–Weinberg principle, we see how underestimation is an evolutionary advantage for our species. How many times have you said that if you knew how hard something was going to be, you never would have undertaken it? If Thomas Edison had known how hard it would be to invent the light bulb, would he have tried? Had the U.S. Congress known what the Webb telescope would eventually cost, would it have funded it? Underestimating is a tremendous advantage for our species, for it allows the creation and discovery of things that rational minds avoid.
However, while some evolutionary traits are an advantage for the species, they can be a disaster for individual members of that species. For example, salmon swimming upstream to spawn is good for the salmon species, but the individual fish does not survive the journey. And the praying mantis who eats her mate — well, let’s not even go there. Similarly, while underestimating might be essential for the human species to advance, it can play havoc with the individual, such as a project manager.
The reality of the situation is that whatever you do, you might be destined, perhaps by some aberrant gene, to underestimate the effort required to build your system. This is the estimation conundrum. The systems development Kobayashi Maru. The no-win scenario.
There is only one practical workaround for this genetic peculiarity — feedback. The project manager needs frequent, timely, and accurate feedback as to the quantity and quality of all functionality produced, along with the time and cost it took to produce it (see “Planning for the Perfect,” March 2020). With constant feedback the project manager can finally overcome his or her genetic destiny. The number one source for understanding the scope of actual project deliverables and their costs is the post-project review (PPR).
George Tillmann is a retired programmer/analyst, project manager, and CIO. This article is adapted from his book, Project Management Scholia: Recognizing and Avoiding Project Management’s Biggest Mistakes (Stockbridge Press, 2019). He can be reached at georgetillmann@gmx.com.
The project’s PPR is a chapter in the project manager’s and IT’s history book. It provides necessary feedback so that the project manager can continually learn and improve project management skills. It can also serve as a valuable learning tool for other project managers, or would-be project managers, as to what to do, what not to do, and what to avoid like the plague.
The Internet is awash with PPR templates and sample reports free for downloading. You need only pick one and follow it. They all look a bit different, but the differences are largely unimportant as long as they include a traditional mathematical look at schedules, staff effort, costs, deliverables, etc., as well as two additional critical components. The first is lessons learned: a review of the project’s successes and failures. This section is the linchpin for any hope of future project managers learning from the experiences of those who went before them. The project manager should detail what worked, what didn’t work, and why. All subjects are fair game, including tools, techniques, staff (user and IT), productivity, user availability, vendors, testing issues, working conditions (office space, technical resources, pizza delivery, etc.), and even the project manager’s moments of brilliance as well as mistakes.
The second critical component is recommendations for future systems development projects, where the project manager speaks to future project managers (or his or her future self), detailing what future project managers should look for and what they should do differently. Think of Dear Abby giving lovelorn advice, only here the advice is for the managers of future projects.
However, this is only possible if a robust and truthful PPR exists. Many PPR tasks are included in initial project plans, but they are eaten up by the two PPR enemies: poor project planning and scope creep. Once the project manager discovers that schedules or costs will likely overrun the plan, the hunt is on for tasks that can be sacrificed. The PPR is often the first. The 20-person-day PPR is cut to 10 days, then five, and then completely eliminated in an IT sleight-of-hand.
There are a few critical success factors for a good PPR.
1. Bundle the PPR into the project plan. The best way a project manager can increase the chances of a decent PPR is to bundle the review (including its schedules and costs) into the project plan. Some user managers might balk at paying for a PPR. Telling one user organization that it is being billed for a task that might only benefit another user organization is often a non-starter. If the PPR is a seamless part of IT’s development methodology, then it might very well pass user scrutiny. If it becomes an issue with user management, then the project champion might be useful in convincing the user of the value and importance of a robust PPR (see “Projects, Politics, and Champions,” March 2022).
If user management refuses to fund the PPR, then the project manager should encourage IT to foot the bill. It might take some selling (see “Half of Managing is Selling,” November 2020), but if the project manager focuses on the benefits to IT, all might turn out well. If IT doesn’t have the funds to pay for the PPR directly, then IT might consider the cost of the PPR a project overhead item and bundle it into IT’s daily billing rate.
2. Positives and negatives. This is not summer camp where every kid gets a trophy. Lay out what went well, but also point out what could have been done better. Do not defend what is not defensible. If IT management failed to have needed developers or development tools available on time, say it. If user management never provided the promised space the team needed, say it. If the project manager miscalculated testing time, ‘fess up. (Don’t worry about retaliation. When was the last time a user or IT management voluntarily read a project team deliverable?)
3. Be honest and objective. There is no sense in going through the effort of a PPR if it becomes a puff piece (just the good things) or thin soup (a two-page Hallmark card congratulating the team). Remember all those times you went home frustrated and kicked the dog? This is the opportunity to explain those vet bills, cleanse the soul of the disappointments of being a systems developer, and help the next project’s team members with better canine relationships.
Honesty is particularly needed in understanding project productivity. Accounting can give you an accurate “dollars spent” (cost) on the project, and HR the staff hours consumed (effort), both of which might be the only numbers that senior management cares about. However, for understanding productivity, both of these numbers are meaningless without also considering the work completed (the fully developed and tested functionality of the system). If functions were eliminated or their usefulness reduced, if testing was abbreviated, or if documentation was shortchanged, and these changes are not taken into account when calculating productivity, then a false number will emerge, the cycle of poor and inaccurate feedback will continue, and any hope of project managers learning from their experiences will evaporate. Ensure that the actual functionality delivered is the basis of any PPR.
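The point about productivity can be made concrete with a small worked example. The numbers below are invented for illustration, using function points as a stand-in unit for delivered functionality:

```python
# Illustrative only: the figures and the function-point unit are
# assumptions, not taken from any real project.
planned_function_points = 200    # functionality originally scoped
delivered_function_points = 140  # what was actually built and tested
effort_staff_days = 700          # from HR
cost_dollars = 560_000           # from Accounting

# Measuring productivity against the plan overstates the team's output:
naive = planned_function_points / effort_staff_days
# Honest productivity counts only functionality actually delivered:
honest = delivered_function_points / effort_staff_days

print(f"naive:  {naive:.3f} FP/staff-day")
print(f"honest: {honest:.3f} FP/staff-day")
print(f"cost per delivered FP: ${cost_dollars / delivered_function_points:,.0f}")
```

The gap between the two rates is exactly the feedback a future estimator needs and exactly what a puff-piece PPR hides.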
DevOps Feedback Loop Explained: Noisy Feedback
BY PAVEL AZALETSKIY AND JACK MAHER
SECOND OF FOUR PARTS
Feedback is routinely requested and occasionally considered. Actually using feedback and doing something with it is nowhere near as routine, unfortunately. Perhaps this has been due to the lack of a practical, focused understanding of feedback loops and how to leverage them. We’ll look at feedback loops: the purposeful design of a system or process to effectively gather feedback and enable data-driven decisions and behavior based on what is collected. We’ll also look at some potential issues and explore countermeasures for delayed feedback, noisy feedback, cascading feedback, and weak feedback. To do this, we’ll follow newly onboarded associate Alice through her experience with an organization that needs to accelerate its value creation and delivery processes.
Our previous story was devoted to delayed feedback. Today let’s look at what noisy feedback means for the speed of digital product delivery.
As you may recall from Part One, Alice joined the company to work on a digital product, with the specific goal of accelerating delivery. The engineering organization was relatively small: about 50 engineers, with three cross-functional teams of six engineers and shared services for data, infrastructure, and user acceptance testing (UAT). Analysis showed that the largest share of time in the product delivery process was spent in testing after code development was completed.
Alice learned that the team has an automated regression suite that runs every night (four hours) and consistently shows about a 25% failure rate across 1,000 tests. Some engineers tried to fix these issues, but release deadlines and feature development priorities left no time, so no one had done anything substantial about it. To keep the ball rolling and continue feature development, it was customary to skip the results and move forward. It was easy to close your eyes to the noise of failed tests, especially knowing that a failure was usually a test defect rather than a product defect. It would have been great if the automated regression had found defects, as it was supposed to do. Instead, failed tests signaled issues in the environments in which the tests were executed. Typical problems included network latency leading to service timeouts, wrong versions of the components the product integrates with, network access issues, wrong libraries on the server running the application, corrupted data in the database, and so on. To discern whether a failed test’s root cause was an actual defect or an environment misconfiguration or malfunction, the engineering team needed to dedicate a significant amount of time, given the accumulated volume. And as you might suspect, most of the environmental issues were under the control of the infrastructure team and the data team. These teams were focused on the production environment and on firefighting, keeping only a small capacity to support product delivery. As you can imagine, it was hard for these three groups to find a common language, since each was independently responsible for its piece of value delivery but didn’t recognize the importance of working together on every value increment. The situation had several adverse consequences:
• Trust in automated tests deteriorated: the engineering team stopped looking at automated test results.
• Quality degraded: there were actual defects to be addressed, but they were hidden under the noise.
• The shared teams were stuck firefighting, largely because no one addressed environment consistency early in the process.
• Collaboration among teams suffered due to capacity constraints.
Alice proposed to fix this fragile and inaccurate quality feedback from the nightly regression. She suggested gradually reducing the number of allowed failed tests, blocking further development unless the threshold is met. Given the initial rate of 25% (250 failed tests), it might be reasonable to set a target of 20% and then, in 3% increments, go down to 2-3% of allowed failed tests. For a set period, the product teams would allocate some percentage of capacity to pay down this “quality debt”: refactoring tests, fixing infrastructure, and addressing data issues affecting test results. She also proposed, for the transition period, dedicating one DevOps and one data person per team for at least a sprint so the teams could challenge the status quo with appropriate domain expertise. As an outcome, she expected to reduce the number of production incidents that distracted all groups.
To justify the change financially, she first needed to calculate what production deployment and post-deployment incidents cost to address, as well as the average cost of a defect in production (the revenue loss and/or labor cost to fix the issue). Since her proposal was temporary and the release production issues were continuous, it was easy to confirm quickly and gain quick benefit.
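Alice's ratcheting threshold could be sketched as a simple CI quality gate. This is an illustrative Python sketch, not a real pipeline integration; the numbers (1,000 tests, a 25% starting threshold stepping down by 3% toward a 2% floor) follow the article:

```python
def gate(failed: int, total: int, max_failure_rate: float) -> bool:
    """Return True if the build may proceed under the current threshold."""
    return failed / total <= max_failure_rate

def ratchet_schedule(start=0.25, step=0.03, floor=0.02):
    """Yield thresholds for successive releases: 25% -> 22% -> ... -> 2%."""
    rate = start
    while rate > floor:
        yield round(rate, 2)
        rate -= step
    yield floor

# Release 1: 250 of 1,000 tests fail; the gate passes at the 25% threshold.
assert gate(250, 1000, 0.25)
# A few releases later the threshold has ratcheted down, so the same
# 250 failures now block further development until the debt is paid.
assert not gate(250, 1000, 0.10)
print(list(ratchet_schedule()))
```

The key design choice is that the threshold only moves down: teams can never ship more broken tests than the previous release allowed, so the noise shrinks release by release.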
Let us take a look at the numbers:
• Revenue loss from defects varied from $100 to $1,000 per minute, depending on reputational consequences. Last year’s loss was estimated at half the cost of one full-time engineer (FTE).
• Post-production release stabilization typically means one engineering team focused for a couple of days on fixes, plus the infrastructure and database teams. The last release took three days, with six engineers from the product team and two each from infrastructure and database: ten engineers for three days. Over the past few releases this has added up to about 120 full-time engineering days.
And the required investment:
• Three teams allocated 10% of their capacity to address these issues, about two engineers per release. Given the initial 25% failure rate, they might need 5-6 releases to stabilize the regression suite, or about 12 full-time engineering days.
As you can see, the cost of defects leaked because of the fragile environment (about 120 full-time engineering days) was substantially more than the required investment (about 12 days). Therefore, after discussion with the product manager, she got approval to start fixing the noisy feedback and improving its accuracy and value for the engineering team.
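The cost comparison above can be checked with quick arithmetic (the four-release figure is inferred from 120 days at 10 engineers for 3 days per release):

```python
# Numbers from the article: post-release stabilization consumed
# 10 engineers x 3 days per release, about 120 engineering days
# over the past few releases (i.e., roughly four releases).
engineers, days_per_release = 10, 3
releases = 4
stabilization_days = engineers * days_per_release * releases

# Proposed investment: 3 teams x 10% capacity, roughly 2 engineers
# per release, over the 5-6 releases needed to stabilize the suite.
invested_days_low = 2 * 5
invested_days_high = 2 * 6

print(stabilization_days, invested_days_low, invested_days_high)
```

Even at the high end, the investment is a tenth of the ongoing stabilization cost, which is why the proposal was easy to approve.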
Alice’s story didn’t end here; she also investigated several other issues, known as cascaded feedback and weak feedback. We will unfold these terms in the following stories.
To summarize this story, we would emphasize the importance of the feedback loop frame when you optimize digital product delivery. In addition to a short time to get feedback, feedback accuracy also plays a vital role in ensuring the speed of delivery.
4. Include many inputs and comments from many people. The PPR is not the project manager’s opportunity to settle scores. Every team member, IT and user, IT management and user management, should have the opportunity to add his or her comments and rants to the PPR. However, the PPR is not a social media blog. There should be an “official” opinion penned by the project manager. But just as the U.S. Supreme Court might have a minority opinion accompanying the opinion of the majority, there should be an opportunity and a place in the PPR for those who disagree with the project manager’s conclusions to post their opinion.
There are four important PPR takeaways:
1. Unless you are retiring after the current project, past projects can help you perfect your skills for future projects.
2. The PPR is the best vehicle for learning from past projects.
3. Use the PPR to build a picture of yourself as a project manager. (a) Where do you habitually underestimate or overestimate? (b) What should you have done differently? (c) What processes, data, or personal skills would help you be a better project manager?
4. Tell the truth — lying, shading, or coloring what really happened is a waste of precious project time.
Done properly, the post-project review can be one of the most useful and most cost-effective tools in IT’s systems development arsenal. It will also please your dog.
The Little Book of Big Mistakes and How to Avoid Them
Project Management Scholia
focuses on the 17 most consequential reasons IT projects fail and presents ways the project manager can avoid these problems by reading the danger signs and taking timely corrective action. The book dives into the often painful lessons learned — not from the library or the classroom — but from the corporate trenches of real-world systems development.
Available on Amazon
By George Tillmann
George Tillmann is a retired programmer, analyst, management consultant, CIO, and author.