
News Watch


Sumo Logic updates portfolio

The monitoring company Sumo Logic announced new capabilities designed to give developers faster insights into the performance of their applications.

These updates are being spread across a number of Sumo Logic’s offerings, including Real User Monitoring, Unified Entity Model, and Intelligent Alert Management.

Real User Monitoring updates include insights related to user actions on a page, long task delay metrics that indicate if the main browser interface has been locked for long periods of time, better dashboard visualizations, and capturing and displaying browser errors in the log index and dashboards.

Unified Entity Model adds Database Entities, which automatically detects data, delivers a user-friendly grouping of database entities, and displays them, giving developers a more holistic view of data. Sumo Logic Entity Inspector also now displays related APM Entities on the Infrastructure Entity dashboard, which will make it easier to switch between contexts.

Intelligent Alert Management introduces Intelligent Alert Grouping, which simplifies alert management by allowing developers to specify conditions for which alerts are generated.

Rust establishes security team

The Rust Foundation has established a dedicated security team, which will be underwritten by the OpenSSF’s Alpha-Omega Initiative as well as the foundation’s newest platinum member, JFrog.

“There’s often a misperception that because Rust ensures memory safety it’s one hundred percent secure, but Rust can be vulnerable just like any other language and warrants proactive measures to protect and sustain it and the community,” said Bec Rumbul, executive director at the Rust Foundation. “With the establishment of the Rust Foundation Security Team, we will be able to support the broader Rust community with the highest level of security talent and help ensure the reliability of Rust for everyone. Of course, this is just a start. We hope to continue to build out the team in the coming months and years.”

According to the Rust Foundation, the investments from Alpha-Omega and JFrog include staff resources that allow the foundation to implement best security practices.

The new security team will work to undertake a security audit and threat modeling exercises in order to identify how to economically maintain security going forward. The team will also advocate for security practices spanning the Rust landscape, including Cargo and Crates.io.

People on the move

• Kate Johnson has been announced as the new CEO and president of Lumen Technologies. She will also serve on the Board of Directors. Starting Nov. 7, she will replace current CEO Jeff Storey, who is retiring; he will remain on board through the end of the year to ensure a smooth transition. Johnson has previously held leadership roles at Oracle, General Electric, and Microsoft.

• Insurtech company Accelerant has appointed Pete Horst as its new chief technology officer. In this role, Horst will lead global platform strategy and development. Prior to joining Accelerant, he was vice president of engineering for Qlik. He has also held engineering roles at IBM and Cognos.

• It has been announced that Simon Bennetts is joining Jit to continue developing OWASP Zed Attack Proxy (ZAP), an open-source web app security scanner that he created. ZAP is one of the underlying scanning technologies for Jit. Jit also announced a $38 million seed funding round in June 2022.

PyTorch joins the Linux Foundation

PyTorch is transitioning away from Meta and joining the Linux Foundation, where it will exist under the newly formed PyTorch Foundation.

According to the PyTorch maintainers, since its inception back in 2016, the PyTorch machine learning framework has grown to more than 2,400 contributors and is used by 18,000 organizations in both academic research and production environments.

The Linux Foundation has said that it will be working with project maintainers, its developer community, and the founding members of PyTorch in order to properly support the ecosystem.

“Growth around AI/ML and Deep Learning has been nothing short of extraordinary — and the community embrace of PyTorch has led to it becoming one of the five fastest-growing open source software projects in the world,” said Jim Zemlin, executive director for the Linux Foundation. “Bringing PyTorch to the Linux Foundation where its global community will continue to thrive is a true honor. We are grateful to the team at Meta — where PyTorch was incubated and grown into a massive ecosystem — for trusting the Linux Foundation with this crucial effort.”

Under the Linux Foundation, PyTorch and its community will gain access to many programs and support infrastructure such as training and certification programs, research, and local and global events.

Additionally, the LFX collaboration portal will enable the PyTorch community to identify future leaders, locate potential hires, and observe shared project dynamics.

Lightbend switches Akka license to Business Source 1.1

Lightbend announced that it is switching the license for Akka, a set of open-source libraries for designing scalable, resilient systems that span cores and networks.

The project previously used the Apache 2.0 license. While Apache 2.0 remains the de facto license of the open-source community, it has become increasingly risky when a small company solely carries the maintenance effort, Jonas Bonér, CEO and founder of Lightbend, wrote in a blog post.

The new license, Business Source License (BSL) v1.1, freely allows for using code for development and other non-production work such as testing. Production use of the software now requires a commercial license from Lightbend, the company behind Akka.

Bonér added that BSL v1.1 provides an incentive for large businesses to contribute back to Akka and to Lightbend.

Adobe to acquire Figma for $20B

Adobe has announced its intention to acquire the popular design platform Figma for $20 billion.

Since much of Adobe’s business revolves around helping people create digital content, the addition of Figma will help the company “usher in a new era of collaborative creativity,” Adobe said.

Figma was founded in 2012 by Dylan Field and Evan Wallace, and today it is used by people who design mobile and web applications. It enables collaboration through multiplayer workflows, sophisticated design systems, and a rich developer ecosystem.

Adobe believes that Figma’s capabilities will accelerate delivery of Adobe’s Creative Cloud technologies on the web in order to democratize the creative process by making it available to more people.

OpenText targets Micro Focus for acquisition

Both companies’ boards have reached an agreement on the terms of the acquisition.

OpenText CEO Mark J. Barrenechea commented that the acquisition will position the company as one of the biggest software and cloud companies in the world, with a large customer base, global scale, and go-to-market capabilities.

For context, Micro Focus earns $2.9 billion in annual revenue, generates revenue in 180 countries, and 40% of its employees focus on R&D.

In addition, customers of the companies will benefit from the deal and be able to accelerate their digital transformations through the new capabilities that will be unlocked, Barrenechea added.

Heroku to stop offering free plans

Heroku stated that it will stop offering free product plans on November 28th and that it is planning to shut down free dynos and data services.

Also, accounts that have been inactive for over a year and their associated storage will be deleted starting on October 26th this year.

“Our product, engineering, and security teams are spending an extraordinary amount of effort to manage fraud and abuse of the Heroku free product plans. In order to focus our resources on delivering mission-critical capabilities for customers, we will be phasing out our free plan for Heroku Dynos, free plan for Heroku Postgres, and free plan for Heroku Data for Redis, as well as deleting inactive accounts,” Bob Wise, Heroku General Manager and Salesforce EVP, stated in a blog post.

Heroku also announced the launch of an interactive product roadmap for Heroku on GitHub, where it encourages feedback, as well as an upcoming program, run in conjunction with its nonprofit team, to support students and nonprofits.

Heroku will continue to contribute to open-source projects such as Cloud Native Buildpacks and will be offering Heroku credits to select open-source projects through Salesforce’s Open Source Program Office.

Perfecto supports integration testing for Flutter

Flutter is an open-source framework by Google that enables Dart developers and programmers to build, test, and deploy mobile, web, desktop, and embedded apps from a single codebase.

Perfecto now supports integration testing, otherwise known as end-to-end or GUI testing, which is one of three types of testing for Flutter apps. It’s done through a configurable Gradle plugin that allows users to install and run iOS and Android tests in parallel and at scale.

Developers will have access to AI-powered reporting that enables users to quickly identify and fix issues in their Flutter integration tests.

Aqua Security adds CSPM capabilities to open-source Trivy

Aqua Security has updated its open-source project Trivy to include cloud security posture management (CSPM) capabilities.

Trivy is a code scanning tool that looks through container images, file systems, and Git repositories for security vulnerabilities.

Now, the tool can be used with AWS, and Aqua Security said that support for other cloud providers is upcoming. AWS users can use Trivy to scan their account for misconfigurations and insider threats. This enables users to more easily meet security standards and comply with the CIS benchmarks.

Users can define their own rules or use Trivy’s community catalog, which likely wouldn’t be an option if using the built-in cloud tool. They can also keep consistent rules across IaC definitions and production environments.

Another benefit of this integration is that users will be able to identify issues in AWS even when the infrastructure is defined through another tool, like Terraform or CloudFormation.

Progress updates include launch of ThemeBuilder Pro

The software company Progress has announced the release of updates to several of its major tools, including Progress Telerik, Progress Kendo UI, and Progress Telerik Test Studio as part of its R3 2022 release.

It also announced the launch of Progress ThemeBuilder Pro, which allows developers and designers to implement design systems in web applications.

According to the company, implementation of application design can require significant coding work that can result in mistakes, or there could be misalignment between the goals of developers and designers.

With ThemeBuilder Pro, designers and developers can create their own design system in a visual interface using Material, Bootstrap, or Fluent design. Then they can implement it across their web apps using their UI components.

BY JENNA SARGENT BARRON

Low code has many benefits, which have been widely discussed in numerous articles in SD Times, but one area where it doesn’t really have an edge is security.

It’s not that low code is riskier than traditional code, but the same risks are there, explained Jeff Williams, co-founder and CTO of Contrast Security. These include things like authentication, authorization, injection, encryption, logging, and more.

Even developers who spend their entire days writing code have, for the most part, very little security training, and often they don’t even have much communication with the security team. One main difference between the two groups is that citizen developers might be more likely to accidentally introduce a security risk, explained Williams.

“I would expect citizen developers will make a lot of the basic mistakes such as hard-coded and exposed credentials, missing authentication and authorization checks, disclosure of PII, and exposure of implementation details,” said Williams.
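To illustrate the first mistake Williams mentions, the sketch below contrasts a hard-coded credential with one read from the environment at runtime. It is a minimal illustration, not taken from any specific low-code platform; the variable name `APP_API_KEY` and the function are hypothetical.

```python
import os

# Anti-pattern: a credential hard-coded into source, visible to anyone
# who can read the code or the version-control history.
# API_KEY = "sk-live-1234-example"

def get_api_key() -> str:
    """Read the credential from the environment instead of the source."""
    key = os.environ.get("APP_API_KEY")  # illustrative variable name
    if not key:
        raise RuntimeError("APP_API_KEY is not set; refusing to start")
    return key
```

Failing fast when the variable is missing keeps a misconfigured deployment from silently running without credentials.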

According to Mark Nunnikhoven, distinguished cloud strategist at Lacework, access to data is also a big issue to consider, especially when you’re giving citizen developers access to data in systems they hadn’t previously encountered. It’s important to both restrict access to only what is needed and to teach citizen developers the appropriate use of the data connections they access. “We don’t teach you like, ‘hey, you’ve got access to all of our Salesforce information and here you’re in sales or in marketing, and you should have access to that, so here you go.’”

Nunnikhoven explained that this is a huge problem in low-code development because suddenly low-code developers have the ability to access and manipulate data and connect to other systems, and if they don’t understand the appropriate use of that, they won’t understand the inappropriate use of it either.

“I think that’s the real challenge with these platforms,” said Nunnikhoven. “It’s exposing a gap in our information management or our information security programs that we don’t often talk about, because we’re so focused on the cybersecurity and the nuts and bolts of how we secure digital systems, not the information in those systems.”

Jayesh Shah, SVP of customer success at Workato, also advises customers to develop a certification program specific to the low-code platform that will be in use. This is so that the people who will be working with it understand the capabilities and can more easily stay within the company’s policies and guardrails.

Process of security doesn’t change much

Even though the way of building the application is different when you’re talking about low code versus traditionally coded apps, the process of security should be the same.

“Fundamentally the challenge for companies of all sizes is to define their specific level of security, test against that definition, and fix problems, ” said Williams.

He recommends that companies set guidelines for exactly how they will use the platform. For example, how should users be authenticated? How is input validated? How are credentials stored?
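One way to make a guideline like “How is input validated?” testable is to express it as code. The sketch below shows an allowlist-style input check; the pattern, limits, and function name are illustrative assumptions, not taken from any particular platform or policy.

```python
import re

# Illustrative allowlist: usernames must be 3-32 letters, digits, or
# underscores. Anything outside the pattern is rejected outright.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Return the input unchanged if it matches the allowlist, else raise."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"invalid username: {raw!r}")
    return raw
```

An allowlist (describe what is permitted) is generally easier to reason about and test than a denylist of known-bad inputs.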

After setting these guidelines, it’s important to test to ensure that developers are implementing them. These tests can be automated using interactive application security testing (IAST), which analyzes the entire application as it is assembled. Methods like static application security testing (SAST) and dynamic application security testing (DAST) might miss real issues and report false positives, Williams explained.

In addition to having good policies within your company, the low-code platform itself can also minimize security risks. For example, according to Shah, the platform can incorporate its own security controls, such as requiring citizen developers to work in sandbox environments or limiting their options.

According to Shah, one area in which low code may have the edge over traditional code is vulnerability response: when a new vulnerability is discovered by the security community, custom software isn’t likely to be updated in a timely manner, while a low-code platform can be updated by the vendor to minimize or remove that vulnerability.

“The low-code platform can ensure that the platform components it provides do not have security vulnerabilities and are patched and updated as necessary to benefit all users globally, ” he said.

Shah added that while traditional development might offer greater flexibility in terms of what can be created, that freedom also brings a broader level of responsibility. Custom software often incorporates third-party or open-source components, which are notorious for being weak points for vulnerabilities, he noted.

OWASP Top 10 expands to low code

The OWASP Top 10 is a list of the ten most common security vulnerabilities in code. Recently, work began on an OWASP Top 10 list specifically for low code, with the same idea as the original guide but focused specifically on low-code risks.

“You as an organization that is adopting low code/no code should be able to look at the OWASP Top 10 and say, ‘Here are the main security concerns, as agreed by the experts in the community, how am I going to address these within my environment?’” said Nunnikhoven.

Here are the top 10 risks specified by the guide at the time of this writing:

1. Account impersonation
2. Authorization misuse
3. Data leakage and unexpected consequences
4. Authentication and secure communication failures
5. Security misconfiguration
6. Injection handling failures
7. Vulnerable and untrusted components
8. Data and secret handling failures
9. Asset management failures
10. Security logging and monitoring failures

In theory the OWASP list would give companies a set of items to focus on in their security strategies, but Williams, who created the original guide back in 2003, said that’s not really the case, unfortunately. He said that’s what he thought would happen when he wrote the guide, but that he’s “still waiting” for that.

He added: “I think OWASP helps to raise awareness and understanding around risks, but it doesn’t seem to translate into a significant decrease in vulnerabilities. I think it only really works if platform vendors take the advice and build better guardrails into their own specific environments.”

Communication, collaboration key to hybrid work

And the rise of cloud-based tools has facilitated this new WFH normal

Over the last nearly three years, questions surrounding hybrid and remote work have circulated in the business world. People want to know if this new way of working is here to stay, if it causes productivity to suffer, how to combat the disconnection that it could bring, and really, just how to cope with all the changes.

In an attempt to answer these questions and solve the challenges that they bring, several companies have adopted cloud tools and technologies in order to fully transition their work into the cloud and make this new working world easier for employees to adjust to.

The influence of the pandemic on cloud adoption

David Williams, VP of product strategy at the cloud and DevOps automation company Quali, said that this push to move to the cloud has resulted in things that used to be on the back burner now taking center stage.

“The front-end, consumer-based applications that have come to fruition in regards to how we interoperate with the consumer world have always been out there,” he said. “What we have seen now is that the governance has come in that enables people to move things to the cloud with a little bit more security and less risk.”

According to David Torgerson, VP of infrastructure and IT at the collaboration company Lucid Software, with the pandemic being the driving factor behind the transition into the cloud, a great deal of velocity was needed early on. This caused some companies to thrive while others did not.

“So many companies, if not all companies, had one day where they just decided that tomorrow they’re not going to come back into the office, so that digital transformation was really forced upon everybody,” said Torgerson.

Williams also touched on the pandemic’s influence on the rate of cloud adoption. He said, “The pandemic came in and was really what put an emphasis on leveraging the cloud… It had multiple impacts and one of them was the higher priority given to the legacy applications that were on the back burner until a year and a half ago.”

Adam Preset, VP analyst at Gartner, emphasized this point. He explained that throughout 2020 and 2021, the volume of questions that organizations had surrounding cloud collaboration tools increased exponentially.

BY KATIE DEE

Now it’s interpersonal

Ozdemir went on to explain that many organizations had to go through two transformations while adapting to the cloud.

The first took place as they tried to replicate the collaboration available in an office setting, and the second was the technology transformation needed to enable that interpersonal collaboration.

“You can almost look at this as a human transformation vs. an infrastructure and technology transformation. For the companies that already had those tools, it was really just scaling and training, while others had to spend months to implement them first,” he said.

Even so, Williams expressed that not all developers in an organization need access to the same collaborative cloud tools. He said that it is highly dependent on the type of development being done.

“DevOps, for example, is about smaller teams, and those smaller teams are using communication platforms like Slack and they use this sort of communication to update each other on a regular basis,” Williams said. “I think that the methods of DevOps, and the ability for the cloud to support that type of application collaboration, has really been what’s driven the cloud adoption.”

He also said that since a hybrid environment means there is less accidental communication, cloud tools help to foster more intentional and meaningful interactions between team members, as long as everyone uses them to their fullest potential.

Preset attributed this to companies realizing that on-premises collaboration tools came with several limitations around where employees had to work and how they could access the technology they need.

Implementing collaboration cloud technology

Cenk Ozdemir, cloud and digital lead at the consulting company PwC, spoke about how the companies that did well with this forced transition are the ones that proactively had some kind of investment in cloud technology and tools.

He said, “Many companies had to find new customers and channels during the pandemic, and what we’ve seen is that the companies that had been pre-invested in cloud architecture… have been able to innovate much faster than those that weren’t on the cloud.”

Lucid’s Torgerson then explained that in order to adapt to remote and hybrid work well, companies had to move quickly and implement a new work-from-home strategy that utilized cloud tools and technologies right at the start of the pandemic.

According to Ozdemir, companies that already had this cloud infrastructure in place were the ones who were really able to keep up with the pace demanded of them.

Torgerson also said that the main struggle that many companies faced with the initial transition to hybrid work was finding a way to maintain team collaboration and lose as little productivity as possible. This is where cloud tools really worked to pick up the slack.

“What we ran into was a full industry of people who didn’t have much experience interacting with each other without that in-person piece,” he said. “So, Zoom’s stock skyrocketed and Teams and Slack and other communication tools just really took off because of that necessity to maintain some of that in-person experience even in a hybrid environment.”

Cloud adoption and developer satisfaction

Despite these few pitfalls, though, Quali’s Williams believes that the benefits outweigh the challenges. He said, “When it comes to building product… The cloud enables developers and remote workers to spin up instances very quickly without having to go to IT and waste all that time.”

Overcoming cloud hurdles

While collaboration tools for software development are extremely helpful, they are not a magic bullet.

Lucid Software VP of Infrastructure and IT David Torgerson explained that once cloud tools were implemented, getting employees on board and using the tools correctly was another pain point early on.

He pointed out that one of the main issues was employees’ instinct to keep their cameras turned off during meetings. This severely limited the amount of non-verbal communication that teams could engage in and led to heightened amounts of miscommunication.

“Those short communication styles where you can use facial expressions to really convey a meaning just disappeared… and that’s where visual communication really comes into play,” Torgerson explained.

He also said that another way that cloud tools can fall short of their full potential is when organizations only look at them as purely technical, rather than as human-centric, communication tools.

Creating new tool silos

David Williams, VP of product strategy at the cloud and DevOps automation company Quali, pointed out that a downside to transitioning to the cloud when working remotely is the risk of creating silos in an organization.

“There is an awful lot of fragmentation that exists in the market today because most people will look at the benefits of productivity gains and the idea that you can offer much more visibility for developers as consumers use the product, giving them a greater ability to innovate… But the yin to the yang is that you need more skills if you’re going to be starting to do that,” he said.

He explained that communication needs to be mandated in a way that focuses on strengthening productivity rather than getting too caught up with too many different cloud tools. With so many options on the market now, as well as new open-source tools being released every day, Williams said that if a certain tool is not mandated in an organization, everyone may just end up using whatever they want. This ultimately hurts productivity and leads to the fragmentation he mentioned earlier.

“If you and I are developing something and we decided to use something different to provision infrastructure, then if I were to hand something to you, you wouldn’t be able to take it very readily without having to reinvent the infrastructure using your tool. So the fragmentation is quite an inhibitor,” he explained.

For Cenk Ozdemir, cloud and digital lead at the consulting company PwC, the biggest downside to transitioning into the cloud is the upfront costs that a business has to shoulder.

He explained that implementing and scaling these tools to house every employee at an organization was just one part of the overall cost.

“Plus the cost of enabling employees by sending them monitors and keyboards and cameras and lights and all other kinds of personal technology enablement,” he said. “It’s probably the smaller cost but you have to recognize that working from home is more than just putting a laptop in front of you for ten hours a day.”

—Katie Dee


Additionally, Torgerson said that the overall happiness and satisfaction level of remote and hybrid developers went up exponentially after implementing cloud technology.

“The experience of using these digital tools for things that we have done for decades prior is just better,” he explained. “Now that we are in a hybrid environment… We have found that even when people are in the office they do not prefer to use a whiteboard anymore because these collaborative tools let you share ideas and they allow you to go back and create revisions and histories and create action items and link it directly to Jira or Asana, so it does more than what the traditional whiteboard ever could… The world has just changed for the better.”

Ozdemir credited the increase in developer satisfaction to the fact that it gives developers and engineers a lot of their personal time back.

“Engineers were enabled to remotely collaborate… and in other locations like India, it did cut down a significant amount of commute time in some of these countries,” he said. “So our engineers, to a large extent, were probably one of the most satisfied in the move.”

Williams also touched on the overall effect on developer satisfaction with cloud tools, but he had a different take.

He pointed out that adding cloud tools into an organization has the potential to increase complexity for developers and, therefore, make their jobs harder if they are not implemented properly.

“I think we’re the only industry that, when it comes to making things simpler and easier, we add something,” he explained.

“So I think developers are happier than they were, but they’re still not fully happy.”

Even with Williams’ assessment, the developer response has been mostly positive. Because of this, Torgerson posits that even though the pandemic was the catalyst for hybrid work, its end does not mean that organizations will be going back into an office full time any time soon.

“If companies force their employees to go back into an office then they are ignoring the advancements that have been so great,” he said. “I think that communication happens by accident in an office and I think in the coming years we will see a pivot from accidental communication to organizations recognizing that they need to help facilitate some intentional interactions [via cloud tools].”


BY JAKUB LEWKOWICZ

It seems that every day in the tech world we hear about the salvation that the new era of the web will bring by taking away mega corporations’ hold on user data and giving control back to the people (at least some of it).

But it isn’t until we read into the matter further that we see the terms Web3 and Web 3.0 thrown around, seemingly synonymous yet quite different.

Web3 is the more commonly referred to aspect of the new web world and it incorporates concepts such as decentralization, blockchain technologies, and token-based economics.

On the other hand, Web 3.0 is otherwise known as the Semantic Web, championed by the father of the web, Sir Tim Berners-Lee, in an effort to correct his brainchild, which he believes has been led astray. His Solid project will have private information stored in decentralized data stores called pods that can be hosted anywhere the user wants. The project also relies on existing W3C standards and protocols as much as possible, according to Solid’s MIT website.

When asked at TNW’s 2022 Conference whether he aligns with Web3’s version of the future, he said “nope,” adding that “when you try to build those things on the blockchain, it just doesn’t work,” referring to the aspects of the web that would give power over data and identity back to the people.

Web 3.0 holds promise in linking data together

The goal behind Web 3.0 has been to make data as machine-readable as possible.

The rules laid out for Web 3.0 for how to link data are like the rules for writing an article: links should be used so that machines can read that information and understand the connections between different topics, allowing crawlers to learn effectively, according to Reed McGinley-Stempel, co-founder and CEO at Stytch, a developer platform for authentication.

“I feel like when I interpret that today, as someone that has been trying to go really deep on a lot of the stuff that OpenAI has been doing, like GPT-3 and DALL-E 2, it feels like Tim Berners-Lee was way ahead of his time in terms of predicting that as you build smarter ML and AI, it would be really valuable if you had the context in a machine-readable form of what articles or content related to each other on the web,” Reed said.

Two different ideas that can coexist

The two ideas for the new web differ in this regard: the Semantic Web focuses mostly on how to actually present information at the machine-readable level on a website, whereas the blockchain-based Web3 is much more focused on the back-end data structure for how this data is readable.
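To make the linked-data idea concrete, here is a minimal sketch of what machine-readable links between topics can look like. The graph, identifiers, and property names below are invented for illustration; real Semantic Web data would use RDF vocabularies, but the traversal idea is the same:

```python
# Invented mini linked-data graph: each document declares machine-readable
# links to related topics, the way Semantic Web markup relates content.
DOCS = {
    "ex:web3":        {"title": "Web3",         "relatedTo": ["ex:blockchain"]},
    "ex:blockchain":  {"title": "Blockchain",   "relatedTo": ["ex:ethereum"]},
    "ex:ethereum":    {"title": "Ethereum",     "relatedTo": []},
    "ex:semanticweb": {"title": "Semantic Web", "relatedTo": ["ex:web3"]},
}

def related_topics(start):
    """Follow relatedTo links transitively, as a crawler could."""
    seen, stack = set(), [start]
    while stack:
        doc = stack.pop()
        if doc in seen:
            continue
        seen.add(doc)
        stack.extend(DOCS[doc]["relatedTo"])
    seen.discard(start)  # report only the *other* topics reached
    return {DOCS[d]["title"] for d in seen}

print(related_topics("ex:semanticweb"))
```

Because the links are part of the data itself, a machine can discover that the Semantic Web page ultimately connects to Ethereum without any natural-language understanding.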

However, this idea of data discoverability is possible in some respects in Web3, according to Reed.

“If you go to the heart of a blockchain, which is open data by default, obviously, there is some overlap here. Data discoverability mattered a lot to Tim Berners-Lee and his concept, and that can exist on the blockchain, because anything you do with your Ethereum wallet, or any smart contract that you interact with, is naturally searchable and discoverable. Though I think the intent for that data discoverability is different than that of Tim Berners-Lee,” Reed said.

Similar goal, but a different way to get there

Bruno Woltzenlogel Paleo, STEM Lead at Dtravel, a native Web3 travel ecosystem that provides property hosts and hospitality entrepreneurs with the infrastructure to accept on-chain bookings, said that there are many articles that present Web3 and Web 3.0 as opposites, whereas they’re both actually addressing different aspects of what people want from whatever follows Web 2.0.

“I think it’s perfectly possible for these ideas to coexist,” he explained, adding that they can even be complementary. “The Web3 notion coming from blockchain and cryptocurrency can contribute a lot to the economic incentives aspect, whereas the Web 3.0 idea from the Solid project can contribute a lot to the data storage and data ownership aspect.”

What people want from the new web is more participation and ownership over their data, more privacy over the data, and less dependency on third parties and intermediaries. The selling of user data for advertising has eroded the trust that people have in Web2.

“The current technical solution from Web3, which in practice is Web2 plus blockchains, cryptocurrencies and smart contracts, doesn’t deliver the latter aspect yet,” Paleo said. “Tim Berners-Lee’s notion of Web 3.0 is very interesting and I think it addresses this need for data privacy and data ownership better than the approaches that currently exist in the blockchain space.”

Any kind of data can be stored in a Solid Pod: from structured data to regular files that you might store in Google Drive or Dropbox folders, and people can grant or revoke access to any piece of their data as needed.

All data in a Solid Pod is stored and accessed using standard, open, and interoperable data formats and protocols. Solid uses a common, shared way of describing things that different applications can understand. This gives Solid the unique ability to allow different applications to work with the same data, according to the Solid project.
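The grant-and-revoke model described above can be sketched as a toy data structure. This is not Solid’s actual API (Solid uses HTTP resources and W3C access-control specifications); the class and method names here are invented purely to illustrate the concept of per-resource access grants controlled by the data owner:

```python
class Pod:
    """Toy model of a Solid-style data pod: resources plus per-resource
    access grants. An invented sketch of the concept, not the real protocol."""

    def __init__(self, owner):
        self.owner = owner
        self._resources = {}  # path -> stored data (any format)
        self._access = {}     # path -> set of agents allowed to read

    def put(self, path, data):
        self._resources[path] = data
        self._access.setdefault(path, {self.owner})  # owner always has access

    def grant(self, path, agent):
        self._access[path].add(agent)

    def revoke(self, path, agent):
        self._access[path].discard(agent)

    def read(self, path, agent):
        if agent not in self._access.get(path, set()):
            raise PermissionError(f"{agent} may not read {path}")
        return self._resources[path]


pod = Pod(owner="alice")
pod.put("/health/records.ttl", "ex:alice ex:bloodType 'O+' .")
pod.grant("/health/records.ttl", "dr-bob")   # share with a doctor
print(pod.read("/health/records.ttl", "dr-bob"))
pod.revoke("/health/records.ttl", "dr-bob")  # withdraw access at any time
```

The key point is that the data stays in one place under the owner’s control; only the access list changes as permissions are granted and revoked.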

There’s a challenge to monetize Web 3.0

However, Paleo said that he doesn’t see anything in Web 3.0 to address the economic incentives.

“It’s not only a matter of finding a solution that allows people to easily own their data and migrate the data,” Paleo said. “There’s also an economic problem: people don’t want to store their own data, and for somebody else to store their data, let’s say for Facebook or Google to store the data, there has to be some economic incentive. In Web 2.0 the incentive is the monetization of that data. But in the Web 3.0 idea, I just don’t see any alternative to that monetization of data being proposed.”

On the other hand, Web3 has the profit motive because Web3 companies can provide services or tokenize their business model.

Challenges for developers in Web3

While Web3 is poised to disrupt the web as we know it, it’s important for developers to understand that they’re not moving away from Web 2.0, but rather will continue to use the usual software development tools and add some extra components from Web3, according to Paleo.

“This is not something that’s going to happen over the next five years, or probably even 10 years, but maybe even longer, as infrastructure develops and it becomes easier for people to store their own data or to hold on to it,” said Cynthia Huang, head of growth at Dtravel.

A big thing that developers have to watch out for is that some types of data are best not stored on the blockchain. Because transparency is key to blockchains, and to Web3, they don’t work well for data that you don’t want to be public. For example, if you have medical records, it doesn’t make sense to store them on the blockchain, Huang explained.

Another challenge is that developers not only have to consider the front and back end of an application; they’ll also have to consider the smart contract layer and the communication with the blockchain.

“It’s challenging to decide what parts of an application logic should go into the smart contract, and what parts should be handled by the back end, for instance,” Paleo said. “And just because you’re using smart contracts, it doesn’t necessarily mean that magically you will gain the benefits from blockchains.”

Developers have to design in very specific ways to gain benefits from blockchain.

“When people use blockchain, they typically talk about less reliance on trust and more independence from third parties and intermediaries, but if you implement a smart contract in such a way that you have absolute power to modify the smart contract anytime you want, then your users are still dependent on you as a third party and intermediary,” Paleo said. “So you must implement smart contracts in ways that really deliver those goals of immutability and reduction of the need for trust.”
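Paleo’s distinction can be sketched in plain code. Real smart contracts are written in languages like Solidity, but this illustrative Python (with invented class names) captures the design choice: a contract whose deployer keeps the power to rewrite its rules still requires users to trust that deployer, while one whose rules are fixed at deployment does not:

```python
class UpgradeableContract:
    """Owner retains absolute power: users must still trust a third party."""

    def __init__(self, owner, fee):
        self.owner = owner
        self.fee = fee

    def set_fee(self, caller, new_fee):
        if caller != self.owner:
            raise PermissionError("only the owner may change the fee")
        self.fee = new_fee  # rules can change under users' feet


class ImmutableContract:
    """Rules fixed at deployment: no one, not even the deployer, can change them."""

    def __init__(self, fee):
        self._fee = fee

    @property
    def fee(self):
        return self._fee  # read-only; no setter exists


mutable = UpgradeableContract(owner="dev", fee=1)
mutable.set_fee("dev", 100)       # users had no say in this change

fixed = ImmutableContract(fee=1)  # the fee stays 1 for the contract's lifetime
```

Only the second design delivers the immutability and reduced need for trust that blockchain advocates promise; the first merely moves the trusted intermediary on-chain.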

Many people are still not familiar with crypto wallets

Also, many people are still not used to noncustodial crypto wallets like MetaMask, and are still accustomed to the Web2 way of paying for services with credit cards.

“If you want to make a project that is crypto-native, that is purely Web3, then to pay for things on your website, users would have to connect their MetaMask wallet and they would have to fund that MetaMask wallet with the base currency of some blockchain to pay for gas fees,” Paleo said. “So this creates entry barriers and friction for users who are new to blockchains and cryptocurrencies, which is a big challenge for developers in Web3.”

Because tokenomics might open up new revenue streams that don’t involve selling user data, holding users’ data may become a liability or a risk that is best avoided, so it’s in the interest of companies to not hold onto data anymore.

Paleo said that there are some interesting approaches, such as IPFS (the InterPlanetary File System), Filecoin, and Tim Berners-Lee’s Web 3.0 idea, that can help solve this problem.

Web3 adoption in practice

Currently, a lot of Web3 adoption is driven by Web2 companies wanting to add Web3-native features into their products, according to Reed. For example, Twitter allows users to link their NFT to their Twitter profile.

“The most traction we’re seeing with Web3 use cases are offerings within Web2 use cases that already have distribution. I think a lot of Web3 apps are still trying to prove why you should use this app over Twitter, Uber, Lyft, Facebook, or Google, because I think there are real UX questions about whether it’s worth the tradeoff at this point, which is why it seems to be that the hybrid approaches are gaining more traction from our vantage point,” Reed said.

Also, not everyone wants the tradeoffs that Web3 would bring if it means sacrificing UX.

The origin story of the Web3 idea is that people didn’t want to be locked into a walled garden of large Web2 platforms that have immense control over everyone’s digital lives. But a lot of users don’t want to exist purely in a world with bad UX, even if it comes with complete control of their data.

“A lot of companies think there are interesting technical pieces and cultural trends coming up with Web3, and they’re interested in adopting that. They’re not immediately running everything on the blockchain. They see tons of value in their core Web2 platform and products. And they also see value in being able to appeal to the users that are very interested in Web3 and NFTs. And so they just see it as another feature they can offer,” Reed said.
