
The Case For Industry Standards

“The leaders of the new standards committees did not argue that capitalists should be left alone in their selfish pursuits of profit; rather, they believed that some problems of industrial society could be resolved more efficiently by cooperation among experts.”

– Andrew L. Russell, Open Standards and the Digital Age

Consumer and communal welfare problems, such as poor safety and quality, have led to a growing distrust of social media companies.

Facebook is the largest social media platform, with 3.74 billion users.12 Yet, in a recent study, 72 percent of Internet users said they trust Facebook “not much” or “not at all” to responsibly handle their personal information.13 Beyond Facebook and privacy, only about 17 percent of young adults trust social media platforms to provide them with accurate information14; and 41 percent of Americans have personally experienced some form of online harassment, with a growing percentage of those saying they experienced “severe” harassment.15 Social media companies have also been losing consumer confidence amid high-profile episodes like the Cambridge Analytica scandal, the Facebook whistleblower disclosures, and the numerous widely covered Congressional hearings aimed at uncovering social media’s data and safety practices.

Ultimately, there are real risks to consumer and communal welfare involving online platforms: Platforms have increased mental and physical health risks. There is mounting evidence that social media has increased depression and anxiety in both young adults and older adults.16 Platforms have also introduced financial risks, such as pervasive false digital advertising, misleading financial advice, and scams.17 Consumers are also subject to new reputational and social risks, which can have implications for their professional opportunities. For example, the non-consensual release of sexual material, or pervasive online defamation that reaches a scale only possible through online mediums, can have serious implications for a consumer’s career and private life.

There are also risks to public goods, such as diminished press freedom as news outlets grow beholden to social media companies to host and promote their content.18 Additionally, social media companies create privacy risks, which are incentivized by their two-sided marketplaces and data-intensive products and services.

Finally, there are risks to individual and communal sovereignty, which we see through addictive product features and foreign influence on elections, respectively. With regard to the latter, in September 2022 the U.S. intelligence community released information that Russia had spent over $300 million on election interference since 2014.19 A portion of that money was proven to have been used by Russia’s Internet Research Agency to interfere with U.S. elections through social media troll farms, compromising our democratic process.

At a high level, these risks are not new, of course. However, the digital architecture, the data practices of companies, and the new interactions and networks of people exacerbate the risks, causing them to take on a different form. Dealing with them will therefore require special attention to ensure our governance systems are updated for the new landscape.

Nearly every social media platform has in recent years been forced to grapple with these issues, and some have even built a competitive advantage on addressing them—for example, Signal. However, consumers are now recognizing that they deserve a baseline level of safety and security online and, for that reason, there is both a need and an opportunity for the industry to collaborate on standards.

Standards setting offers a collaborative and expert-led path forward to develop shared measurements, evaluation schemes, and best practices for consumer and communal welfare that are global in nature.

Standard setting in technology industries refers to the process of establishing technical specifications and guidelines for the design, development, deployment, and interoperability of technology products and services. This process involves bringing together industry stakeholders, such as developers, service providers, regulators, and standards organizations, to create and implement common technical and operational standards. Fundamentally, standards set out a common understanding among experts of “how things should be done if they are to be done effectively.”20

What is compelling about standards setting is that it is a known quantity: a process with a long history of private-sector engagement and success from the consumer welfare perspective. The ISO (International Organization for Standardization), an independent, non-governmental international organization, counts 168 national standards bodies among its members.21 It works across a number of sectors, including pharmaceuticals, energy technology, information security, and more. And although voluntary standards are non-binding, they often lead to mandatory standards enforced within a jurisdiction.

For digital platforms, the standards-setting process offers a collaborative and ongoing medium to develop a common industry-wide language for measuring and evaluating the performance of online products and services, an important piece of the puzzle that is currently missing. It allows us to use a familiar and tested process to solve these somewhat novel problems, which has implications for global governance of digital platforms–not just domestic.

Industry-led standards development increases consumer confidence, builds trust with government, and can align with the fiduciary responsibilities of firms–all while supporting existing government and public interest initiatives.

Industry-level standards setting has significant upside for the private sector. Standards have the potential to increase consumer confidence in social media companies, which could help platforms like Facebook, Instagram, and Twitter with user retention and growth. For example, industry standards could provide a range of safety guidelines for children’s use of social media products, which would provide parents confidence about uniform safety measures among the participating platforms.

They can also build trust with governments by demonstrating a willingness to create and participate in a robust self-regulation apparatus, as well as by developing a track record of compliance, facilitating interoperability, improving transparency, and enhancing security. Further, standards development facilitates more intentional public-private partnerships through a collaborative process among experts across government and the private sector, which is critical in cultivating a healthy relationship between these two camps. With the European Union and UK government ready to move forward on regulation22, and the US debating whether to follow, it would be well timed for American firms–who dominate the digital services space–to signal their willingness to self-regulate.

Additionally, standards can create market advantages for the companies whose technical or operational standard is voted into effect, since they have the benefit of already complying with it.23 At the same time, standards can help level the playing field across companies because, in the best case, platforms go through the same processes and practices to uphold some baseline level of safety and quality assurance. This eliminates the tradeoff between quality and safety on the one hand and first-mover advantage on the other. (We can see this playing out right now in the generative AI space between Microsoft and Google, with Google moving faster than planned to release Bard–compromising quality assurance–because of Microsoft’s move with OpenAI’s ChatGPT and GPT-4.24)

Finally, one of the biggest advantages of standards setting for private companies is that it allows for pooled resources and collaboration, ultimately saving time and money.25 Nearly every platform company has tested internal standards efforts, responsible innovation teams, and compliance teams. Industry-wide efforts function as a force multiplier for each individual firm because companies are exposed to new ideas, leverage external resources to facilitate the standards process, and benefit from the wisdom of crowds. Companies have already signaled that they are interested in collaborating to solve these difficult problems. As just one example, Facebook launched a Deepfake Detection Challenge, which leans on academics and other industry leaders.26 Standards could provide more opportunities like this, with added due process and scale, and the chance to build market advantages by socializing their adopted methods as industry-wide standards (as noted above).

On the government side, the advantage is that technology companies take responsibility for the problem space, developing shared measurements and best practices that can then serve as the basis for legislation and regulation. It encourages firms to be more transparent and to develop a shared language and measurement scheme for each risk space. This, in turn, equips lawmakers with the information and infrastructure to develop smart rules and enforcement schemes that protect consumer and communal welfare over the long run. And this has been proven historically: industry-led standards have paved the way for savvy regulation in a range of industries, including telecommunications, agriculture, healthcare, and food. They can do the same for social media, and for digital services more broadly.
