News Watch
from SD Times July 2021
by d2emerge
Prosus to acquire Stack Overflow for $1.8 billion
The technology investment company Prosus has announced its intent to acquire the online development community Stack Overflow for $1.8 billion. According to Stack Overflow, this acquisition will enable it to continue to operate as an independent company, but with the backing of a “global technology powerhouse.”
According to Prosus, it decided to acquire Stack Overflow because of its popularity among developers and technologists, as well as for the company’s knowledge management and collaboration solution, Stack Overflow for Teams.
Android 12 adds privacy features in Beta 2
One new feature is the Privacy Dashboard, which provides users with more insight into what data apps access. It shows a timeline view of recent app accesses to microphone, camera, and location. Users will also be able to request information from an app on why it accessed certain data. Developers can provide this information through a new system intent: ACTION_VIEW_PERMISSION_USAGE_FOR_PERIOD.
The Android development team is recommending developers utilize this intent and use the Data Auditing APIs to help track access in code and third-party libraries.
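For illustration (this sketch is not from the article), here is roughly what hooking into the data auditing callback might look like in Kotlin. AppOpsManager.OnOpNotedCallback and ACTION_VIEW_PERMISSION_USAGE_FOR_PERIOD are real platform APIs; the application class name and the logging helper are hypothetical.

```kotlin
import android.app.AppOpsManager
import android.app.Application
import android.app.AsyncNotedAppOp
import android.app.SyncNotedAppOp
import android.util.Log

class AuditedApplication : Application() {

    override fun onCreate() {
        super.onCreate()

        // Data access auditing (Android 11+): get called back whenever this app,
        // or a library it bundles, touches data guarded by a runtime permission,
        // e.g. location, camera, or microphone.
        val callback = object : AppOpsManager.OnOpNotedCallback() {
            override fun onNoted(op: SyncNotedAppOp) = logAccess(op.op)
            override fun onSelfNoted(op: SyncNotedAppOp) = logAccess(op.op)
            override fun onAsyncNoted(op: AsyncNotedAppOp) = logAccess(op.op)

            private fun logAccess(opCode: String) {
                // In a real app you might record a stack trace or attribution tag
                // here to work out which feature or third-party library triggered
                // the access.
                Log.d("DataAudit", "Private data access noted: $opCode")
            }
        }

        getSystemService(AppOpsManager::class.java)
            ?.setOnOpNotedCallback(mainExecutor, callback)

        // Separately, an activity declared to handle
        // Intent.ACTION_VIEW_PERMISSION_USAGE_FOR_PERIOD is what the Privacy
        // Dashboard launches when a user asks why data was accessed during a
        // given time window.
    }
}
```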
Android 12 Beta 2 also adds indicators for when an app is using the microphone or camera. Users will be able to go to Quick Settings to learn which apps are accessing the microphone or camera and manage those permissions.
Tim Berners-Lee auctioning off original source code for the web as NFT
Tim Berners-Lee, the creator of the World Wide Web, has announced that he is auctioning off the original source code for the web as a non-fungible token (NFT), which is a unique digital asset that exists on blockchains.
Sotheby’s will be running the auction from June 23 to June 30, with the initial bidding starting at $1,000. The proceeds from the NFT sale will go to initiatives supported by Berners-Lee and his wife, Lady Rosemary Leith Berners-Lee.
The NFT will contain four elements: Original time-stamped files that contain the source code, a visualization of 10,000 lines of code, a letter written by Berners-Lee, and a digital poster of the full code created by Berners-Lee.
Apple announces Xcode Cloud and AR updates
Xcode Cloud aims to simplify the developer workflow by integrating the cloud and developer tools together. When a developer commits a change to the code, Xcode Cloud will automatically build the app. Since the app is built in the cloud, the developer’s device is free for other tasks, and other members of the team can see if a change introduces errors.
Xcode Cloud runs automated tests in parallel, simulating how they would run on every Apple device and platform, and test results are displayed in Xcode. Once the app passes all tests, it can be distributed to all testers on the team and even to beta testers with TestFlight, according to Apple.
For augmented reality, the company also revealed a new advanced rendering API and the ability to create realistic 3D objects using Object Capture — a technology that uses photogrammetry to turn 2D pictures into 3D content.
Other AR advancements include the launch of Live Text, which can automatically identify text in pictures so that users can save it as a note or use it in an online search. In addition, the company ramped up its machine learning capabilities to identify text, locations and objects on a screen and then let users search for these elements through Spotlight.
Infragistics Ultimate 21.1 zeros in on collaboration
The latest version of the Infragistics Ultimate UI/UX toolkit is now available with new Indigo.Design, Angular, React, Web Components, Windows Forms and WPF features.
Infragistics Ultimate 21.1 is built off of three key themes:
• Enabling hyper-productivity and better collaboration between app development and design through its design-to-code platform, Indigo.Design App Builder
• New innovations and experiences with Angular, React, Web Components, and ASP.NET Core
• New enhancements in Windows Forms and WPF
Eclipse IDE Working Group Formed
The Eclipse Foundation has announced the launch of a working group for the Eclipse IDE. The Eclipse IDE Working Group will work to ensure the “continued evolution, adoption, and sustainability of the Eclipse IDE suite of products, related technologies, and ecosystem,” according to the Eclipse Foundation.
The Eclipse IDE Working Group will offer governance, guidance, and funding for communities supporting Eclipse IDE products.
Next.js 11 brings faster starts, changes
Vercel, the company behind the React and JavaScript framework Next.js, announced the release of Next.js 11 at its Next.js Conf in June. New improvements include faster starts and changes, real-time feedback, and live collaboration.
Vercel announced a preview of Next.js Live, which enables developers to develop in their web browsers. According to the company, this allows developers to collaborate and share with a URL, leading to faster feedback loops, less time spent waiting for builds, and real-time peer programming.
Next.js 11 also adds a new tool to help developers migrate from Create React App to Next.js. According to Vercel, there has been an increase in these migrations over the past six months. The new tool adds a ‘pages/’ directory, moves CSS imports to the right location, and enables a Create React App compatibility mode which ensures patterns work with Next.js.
Facebook makes PyTorch its default AI framework
PyTorch is an open-source machine learning framework that the company co-created with AI researchers in 2016.
By making PyTorch the default framework for all of its AI and machine learning models, the company believes its research and engineering initiatives will become more effective, collaborative and efficient, and that it will be better able to advance its libraries and learn from PyTorch developers.
Google open sources FHE transpiler
Google has announced that it is open sourcing a transpiler for Fully Homomorphic Encryption (FHE). According to the company, FHE will allow developers to work on encrypted data without being able to access personally identifiable information.
FHE allows encrypted data to be transported across the Internet to a server and processed without being decrypted. The transpiler will allow developers to write code for basic computations, like string processing or math, and run it on the encrypted data; it transforms that code into a version that can run directly on encrypted data.
According to Google, this tool will allow developers to create new applications that don’t need unencrypted data. They can also use it to train machine learning models on sensitive data.
Google noted that this is just a first step and that there is still a long way to go before most computations are possible with FHE.
Harness updates platform with Test Intelligence
Harness’s new test intelligence feature reduces test cycle time by up to 98% by using AI/ML workflows to prioritize and optimize test execution without compromising quality. The new capabilities shift failed tests earlier into the build cycle so that developers can quickly find out if a fix worked.
The new feature flag capabilities enable developers to release new features without making them visible to users. They also make it easier to try capabilities such as A/B testing or software functionality variations like one- or two-step checkout.
Harness also integrated technology from its Lightwing acquisition into its Cloud Cost Management module to enable engineering teams to auto-stop and restart their non-production environments within seconds.
People on the move
• ConnectALL announced the appointment of Eric Robertson as its senior advisor to the office of COO and president. In this role Robertson will work closely with the COO and president to advance the company’s value stream management initiatives, identify new market opportunities, and accelerate the company’s growth. His previous experience of nearly 20 years includes leadership in product development, VSM, Agile, and DevOps.
• Prashant Ketkar has been announced as the new chief technology and product officer at Corel Corp. Ketkar has over two decades of experience in the software industry. He previously served as senior vice president of product and engineering at Resolve Systems. He’s also held leadership roles at Evident.io, Oracle and Microsoft.
• Splunk welcomed Shawn Bice as its new president of products and technology on June 1. This is a new role that will oversee technical divisions like product, engineering, design, architecture, CIO, CTO, and CISO functions. Bice spent the past five years at AWS overseeing database products. Prior to that, he spent 17 years at Microsoft managing SQL Server and Azure data services.
• Donna Wilczek has been appointed as the first independent director of Optimizely’s board of directors. She has over two decades of experience and currently serves as the senior vice president of product strategy and innovation at Coupa Software, where she helped scale the company from a startup to a public company.
• Aqua Security announced two major hires last month: Darkbit co-founders Brad Geesaman and Josh Larsen. Geesaman will serve as director of cloud security and Larsen will serve as director of cloud product. Both have over 20 years of experience in information security and they’ve been working with Kubernetes since its creation.
There is a potential train wreck out there. According to the trade press and peer-reviewed journals alike, systems development is in trouble. The much revered, and equally reviled, Standish Group’s Chaos Report says that only about 30% of systems development projects succeed, 20% outright fail or are cancelled, and around 50% hobble along in some middle (between success and failure) state.
If you don’t like the Chaos Report, there are a number of academic studies (hundreds of them) showing perhaps not as dire results but the same message — systems development is a blood sport.
What they all agree on is that there is a fundamental flaw in how we build systems, and the project manager is caught in a real-life Catch-22 situation in trying to solve the problem.
A 2007 study of more than 400 projects in the United States and the United Kingdom, titled “The Impact of Size and Volatility on IT Project Performance,” is a telling example. The study found that the larger the project headcount, the greater the risk of underperformance: a 21 Full-Time Equivalent (FTE) project is more than twice as likely to underperform as a 10-FTE project.
OK, you want to reduce project risk, and the plan calls for too many people on the project. What do you do? Well, one option is to spread the project out over time, thus requiring fewer staff. Figure 1 (from the same study) presents the level of risk based on the duration of the project. It shows that as the schedule extends out, the risk increases, with 18-month projects encountering twice the failure rate of 12-month projects.
In fact, the project manager can’t do much. As the same study shows, it doesn’t matter whether you thin the staff count and make the project longer or shorten the duration by adding staff; the devil is in the project effort (person-months) required.
Thick or thin (many staff or few staff), long or short (long duration versus short duration), a 500 person-month project is twice as likely to underperform as a 25 person-month project; 50 people for 10 months carries the same risk as 25 people for 20 months. Put simply, Big is Bad.
If big is bad, go small. Now, that should be the end of this article, but making big projects small is not so easy. Below are a few suggestions for accomplishing the small is beautiful effect.
Out of one, many
The simplest way to reduce the risk of one big project is to make it multiple small projects. Slicing up the megaproject into bite-sized pieces is the best way of bringing in a large project successfully. The result should be a number of subprojects, or phases, each with its own staff, project manager, goals, and deliverables. But exactly how big should the subprojects be?
From the study’s findings, one could conclude that a good team size would be in the range of 5 to 15 staff and a good duration somewhere in the 3- to 12-month range. Other authors have different but not terribly dissimilar numbers. Reviewing more than a dozen research studies, one would not be wrong in concluding that the average recommended team size is in the four to seven range, with a duration somewhere between three and nine months. For simplicity, we refer to the four to seven staff and 3- to 9-month duration as the project sweet spot.
The project sweet spot has a number of advantages. Its small headcount minimizes the required communication overhead, while the short duration mitigates the honeymoon problem.
The project sweet spot can be implemented serially or in parallel, or in any combination. A serial implementation has the mini-projects or phases executed one at a time, one after the other. If megaproject X is broken down serially into mini-projects A, B, and C, IT could theoretically use the same staff on each project. When A is complete, the team moves on to B, etc.
Figure 1 – Risk of Underperformance Attributed to Duration
Parallel execution requires multiple project teams working on the different mini-projects at the same time. Parallel projects require different team members for each project—sharing staff across projects defeats the purpose of phasing.
Most phasing is serial because it is often the easiest way to divide a project; however, parallel phasing becomes more desirable when there are significant schedule pressures.
There are a number of technical challenges to successful project phasing.
Communication. One of the reasons to break up a large project into smaller pieces is the communication overhead problem—as the number of team members increases, the time and effort needed to keep everyone up to speed on project activity grows rapidly (with n team members there are n(n-1)/2 possible communication paths). However, communication between teams is now needed, particularly if the phasing is parallel. While most intra-team communication is verbal, multi-team communication is often in writing, further increasing communication costs.
Partitioning. Exactly how to carve up the megaproject into multiple smaller pieces called mini-projects, subprojects, or phases is not always obvious. To do it right, the project manager (or whoever is tasked with parsing the project) needs a good understanding of the finished system and the tasks to build it.
Figure 2 shows a sample application data flow diagram (DFD). Processes or functions are depicted with rounded rectangles (example: A2, C1, etc.). Data stores or files (static data) are represented by open rectangles. Arrows depict the flow of data (data in motion) to and from data stores and communication (data sharing) between processes.
Selecting which processes to include in a mini-project is critical to development success. A phase or subproject should consist of processes where the communication (data sharing) between them is the highest. Phase boundaries should be defined to minimize cross-phase communication.
In Figure 2, processes A1, A2, A3, and A4 have the most significant communication between them and are kept together as a subproject, while a similar decision is made about processes B1, B2, and B3.
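As an illustration of the partitioning idea (a sketch, not from the study or the article), the following Kotlin snippet groups processes by estimated communication weight, merging the chattiest pairs first so that phase boundaries end up cutting only the lightest data-sharing links. The process names echo Figure 2, but the link weights are hypothetical.

```kotlin
// One process-to-process communication link with an estimated weight
// (e.g. how much data the two processes share).
data class Link(val a: String, val b: String, val weight: Int)

fun partition(processes: Set<String>, links: List<Link>, targetPhases: Int): Collection<Set<String>> {
    // Start with every process in its own group.
    val group = processes.associateWith { mutableSetOf(it) }.toMutableMap()

    // Merge across the heaviest links first, so that only the lightest
    // (cheapest) communication paths cross phase boundaries.
    for (link in links.sortedByDescending { it.weight }) {
        if (group.values.toSet().size <= targetPhases) break
        val ga = group.getValue(link.a)
        val gb = group.getValue(link.b)
        if (ga !== gb) {
            ga.addAll(gb)
            gb.forEach { group[it] = ga }   // repoint members of the absorbed group
        }
    }
    return group.values.toSet()
}

fun main() {
    val processes = setOf("A1", "A2", "A3", "A4", "B1", "B2", "B3", "C1", "C2")
    val links = listOf(
        Link("A1", "A2", 9), Link("A2", "A3", 8), Link("A3", "A4", 9),
        Link("B1", "B2", 7), Link("B2", "B3", 8),
        Link("C1", "C2", 6),
        Link("A4", "B1", 2), Link("B3", "C1", 1)   // light cross-phase traffic
    )
    // Prints three subprojects: the A processes, the B processes, and the C processes.
    println(partition(processes, links, targetPhases = 3))
}
```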
Budget. Partitioning a large project into bite-sized chunks can have a negative impact on effort and schedules. Communication overhead was discussed above, but in addition, multiphased projects often require more analysis time (as business users are interviewed and re-interviewed) and testing time as the various sub-systems are integrated. Project managers for the various mini-projects need to incorporate the additional required effort and time into their individual plans.
Testing. The testing of the individual subprojects is usually neither harder nor easier than testing a similar portion of a megaproject; however, it can be different. If the megaproject is divided serially into phases, then testing beyond the first phase might require revisiting previous phases. For example, imagine a megaproject divided into subprojects A, B, and C. If the subprojects are executed serially, then testing subproject C might uncover changes needed to the earlier completed subproject A. This problem is not limited to serially executed subprojects, but can also occur in parallel subproject development and even in a big-bang approach where the work on the various portions of the system is completed at different times. However, it can be more prevalent and acute in serially developed subprojects.
Integration. A megaproject that is divided into components can require some effort to reintegrate once the mini-projects are complete. Not necessarily a difficult task, but one that needs to be taken into account.
Throwaway Code. Project phasing often requires additional non-application code that is not in the final production system. This code is required for the proper testing and integration of phase components that will eventually need to interact with components in other, not yet developed, phases.
Slicing up what the user sees as a single project can present some management challenges.
User management. Senior business managers are often suspicious of any “project” that does not deliver the entire end result. They see a potential “bait and switch” where A, B, C was promised but they are only going to get A, or A and B. Further, the additional time and dollars required for the phased system adds insult to injury. To top it off, they are skeptical of the argument that partitioning will eventually cost less (no write-offs for cancelled projects, or increased maintenance costs for underperforming systems) while increasing the odds of getting what they want.
IT management. Some IT organizations face a significant systems development backlog with needed applications having to wait months or even years before project work can begin. Some senior IT managers pressure current project managers to move ahead as quickly as possible to free up resources that can be applied elsewhere.
In spite of the cons, and because of the pros, phasing a large systems development project into manageable subprojects is the single best planning action a project manager can take to increase the odds of project success, and, in spite of the increased development costs and schedules, one of the cheapest.
Figure 2 – Megaproject Divided into Three Subprojects (A, B, and C)
Microservices at scale: A complexity management issue
BY JENNA SARGENT
The benefits of microservices have been touted for years, and their popularity is clear when you consider the explosion in use of technologies, such as Kubernetes, over the last few years. Based on the number of successful implementations, that popularity seems deserved.
For example, according to a 2020 survey by O’Reilly, 92% of respondents reported some success with microservices, with 54% describing their experience as “mostly successful” and under 10% describing a “complete success.”
But building and managing all of these smaller units of code adds a lot of complexity to the equation, and it’s important to get it right to achieve those successes. Developers can create as many of these microservices as they need, but it’s important to have good management over them, especially as the number of microservices increases.
According to Mike Tria, head of platform at Atlassian, there are two schools of thought when it comes to managing the proliferation of microservices. One idea is just to keep the number of microservices to a minimum so that developers don’t have to think about things like scale and security.
“That works fine for a limited number of use cases and specific domains,” Tria said, “because what will happen is those microservices will become large. You’ll end up with, as they say, a distributed monolith.”
The other option is to let developers spin up microservices whenever they want, which requires some additional considerations, according to Tria. Incorporating automation into the process is the key to ensuring this can be done successfully.
“If every time you’re building some new microservice, you have to think about all of those concerns about security, where you’re going to host it, what’s the IAM user and role that you need access to, what other services can it talk to—if developers need to figure all that stuff out every time, then you’re going to have a real scaling challenge. So the key is through automating those capabilities away, make it such that you could spin up microservices without having to do all of those things,” said Tria.
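As a rough sketch of what “automating those capabilities away” can look like in practice (the names below are hypothetical, not Atlassian’s actual tooling), a paved-road provisioning step might expand a bare service request into a deployment spec with security and identity defaults already filled in:

```kotlin
// What a team has to supply: just a name and an owner.
data class ServiceRequest(val name: String, val ownerTeam: String)

// What the platform produces: the cross-cutting concerns are filled in
// from organization-wide defaults rather than decided per team.
data class ProvisionedService(
    val name: String,
    val ownerTeam: String,
    val hostingTarget: String,
    val iamRole: String,
    val allowedUpstreams: List<String>,
    val tlsEnabled: Boolean
)

fun provision(request: ServiceRequest): ProvisionedService =
    ProvisionedService(
        name = request.name,
        ownerTeam = request.ownerTeam,
        hostingTarget = "internal-paas",                  // org-wide default
        iamRole = "svc-${request.name}",                  // identity created automatically
        allowedUpstreams = listOf("identity", "config"),  // baseline dependencies only
        tlsEnabled = true                                 // security default baked in
    )

fun main() {
    println(provision(ServiceRequest(name = "invoice-renderer", ownerTeam = "billing")))
}
```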
According to Tria, the main benefits of automation are scalability, reliability, and speed. Automation provides the ability to scale because new microservices can be created without burdening developers. Second, reliability is encapsulated in each microservice, which means the whole system becomes more reliable. Finally, nimbleness and speed are gained because each team is able to build microservices at their own pace.
At Atlassian, they built their own tool for managing their microservices, but Tria recommends starting small with some off-the-shelf tool. This will enable you to get to know your microservices and figure out your needs, rather than try to predict your needs and buy some expensive solution that might have features you don’t need or is missing features you do.
“It’s way too easy with microservices to overdo it right at the start,” Tria said. “Honestly, I think that’s the mistake more companies make getting started. They go too heavy on microservices, and right at the start they throw too much on the compute layer, too much service mesh, Kubernetes, proxy, etc. People go too, too far. And so what happens is they get bogged down in process, in bureaucracy, in too much configuration when people just want to build features really, really fast.”
In addition to incorporating automation, there are a number of other ways to ensure success with scaling microservices.
1. Incorporate security. Because of the nature of microservices, they tend to evoke additional security concerns, according to Tzury Bar Yochay, CTO and co-founder of application security company Reblaze. Traditional software architectures use a castle-and-moat approach with a limited number of ingress points, which makes it possible to just secure the perimeter with a security solution.
Microservices, however, are each independent entities that are Internet-facing. “Every microservice that can accept incoming connections from the outside world is potentially exposed to threats within the incoming traffic stream, and it has other security requirements as well (such as integrating with authentication and authorization services). These requirements are much more challenging than the ones typically faced by traditional applications,” said Bar Yochay.
According to Bar Yochay, new and better approaches are constantly being invented to secure cloud native architectures. For example, service meshes can build traffic filtering right into the mesh itself, and block hostile requests before the microservice receives them. Service meshes are an addition to microservices architectures that enable services to communicate with each other. In addition to added security, they offer benefits like load balancing, discovery, failure recovery, metrics, and more.
These advantages of service meshes will seem greater when they are deployed across a larger number of microservices, but smaller architectures can also benefit from them, according to Bar Yochay.
Of course, the developers in charge of these microservices are also responsible for security, but there are a lot of challenges in their way. For example, there can often be friction between developers and security teams because developers want to add new features, while security wants to slow things down and be more cautious. “As more apps and services are being maintained, there are more opportunities for these cultural issues to arise,” Bar Yochay said.
In order to alleviate the friction between developers and security, Bar Yochay recommends investing in developer-friendly security tools for microservices. According to him, there are many solutions on the market today that allow for security to be built directly into containers or into service meshes. In addition, security vendors are also advancing their use of technology, such as by applying machine learning to behavioral analysis and threat detection.
2. Make sure your microservices don’t get too big.
“We’ve seen microservices turn into monolithic microservices and you get kind of a macroservice pretty quickly if you don’t keep and maintain it and keep on top of those things,” said Bob Quillin, chief ecosystem officer at vFunction, a company that helps migrate applications to microservices architectures.
Dead code is one thing that can quickly lead to microservices that are bigger than they need to be.
Figuring out a plan for microservices automation at Atlassian
Mike Tria, head of platform at Atlassian, is a proponent of incorporating automation into microservices management, but his team had to learn that the hard way.
According to Tria, when Atlassian first started using microservices back in early 2016, it had about 50 to 60 microservices total and all of the microservices were written on a Confluence page. They listed every microservice, who owned it, whether it had passed SOC2 compliance yet, and the on-call contact for that microservice.
“I remember at that time we had this long table, and we kept adding columns to the table and the columns were things like when was the last time a performance test was run against it, or another column was what are all the services that depend on it? What are all the services it depends on? What reliability tier is it for uptime? Is it tier one where it needs very high uptime, tier two where it needs less? And we just kept expanding those columns.”
Once the table hit one hundred columns, the team realized that wouldn’t be maintainable for very long. Instead, they created a new project to take the capabilities they had in Confluence and turn them into a tool.
“The idea was we would have a system where when you build a microservice, it essentially registers it into a central repository that we have,” said Tria. “That repository has a list of all of our services. It has the owners, it has the reliability tiers, and anyone within the company can just search and look up a service, and we made the tool pretty pluggable so that when we have new capabilities that we’re adding to our service.”
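A minimal Kotlin sketch of the kind of central service catalog Tria describes, with one record per microservice carrying the columns from the old Confluence table (owner, compliance status, on-call contact, reliability tier, dependencies). The field names and the catalog API here are illustrative assumptions, not Atlassian’s actual tool.

```kotlin
import java.time.LocalDate

// One row of the old Confluence table, as a typed record.
data class ServiceRecord(
    val name: String,
    val ownerTeam: String,
    val onCall: String,
    val soc2Compliant: Boolean,
    val reliabilityTier: Int,              // 1 = highest uptime requirement
    val dependsOn: Set<String> = emptySet(),
    val lastPerfTest: LocalDate? = null
)

class ServiceCatalog {
    private val records = mutableMapOf<String, ServiceRecord>()

    // "When you build a microservice, it registers it into a central repository."
    fun register(record: ServiceRecord) { records[record.name] = record }

    fun find(name: String): ServiceRecord? = records[name]

    // e.g. everything a given team owns, or every tier-1 service
    fun search(predicate: (ServiceRecord) -> Boolean): List<ServiceRecord> =
        records.values.filter(predicate)

    // Who is affected if this service goes down?
    fun dependentsOf(name: String): List<ServiceRecord> =
        records.values.filter { name in it.dependsOn }
}

fun main() {
    val catalog = ServiceCatalog()
    catalog.register(
        ServiceRecord(
            name = "media-proxy", ownerTeam = "platform", onCall = "platform-oncall",
            soc2Compliant = true, reliabilityTier = 1, dependsOn = setOf("identity")
        )
    )
    println(catalog.search { it.reliabilityTier == 1 }.map { it.name })
}
```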
Dead code, Quillin said, is “a lot of software where you’re not quite sure what it does. You and your team are maintaining it because it’s safer to keep it than to get rid of it. And that’s what I think that eventually creates these larger and larger microservices that become almost like monoliths themselves.”
3. Be clear about ownership. Tria recommends that rather than having individuals own a microservice, it’s best to have a team own it.
“Like in the equivalent of it takes a village, it takes a team to keep a microservice healthy, to upgrade it to make sure it’s checking in on its dependencies, on its rituals, around things like reliability and SLO. So I think the good practices have a team on it,” said Tria.
For example, Atlassian has about 3,000 developers and roughly 1,400 microservices. Assuming teams of five to 10 developers, this works out to every team owning two or three microservices, on average, Tria explained.
4. Don’t get too excited about the polyglot nature of microservices. One of the benefits of microservices—being polyglot—is also one of the downsides. According to Tria, one of Atlassian’s initial attractions to microservices was that they could be written using any language.
“We had services written in Go, Kotlin, Java, Python, Scala, you name it. There’s languages I’ve never even heard of that we had microservices written in, which from an autonomy perspective and letting those teams run was really great. Individual teams could all run off on their own and go and build their services,” said Tria.
However, this flexibility led to a language and service transferability problem across teams. In addition, microservices written in a particular language needed developers familiar with that language to maintain them. Eventually, Tria’s team realized they needed to standardize on two or three languages.
Another recommendation Tria has based on his team’s experience is to understand the extent of how much the network can do for you. He recommends investing in things like service discovery early on. “[At the start] all of our services found each other just through DNS. You would reach another service through a domain name. What that did is it put a lot of pressure on our own internal networking systems, specifically DNS,” said Tria.