
About the Initiative
The Democracy and Internet Governance Initiative (DIGI) is a special joint initiative between the Belfer Center for Science and International Affairs and the Shorenstein Center on Media, Politics and Public Policy.
DIGI aims to research and build solutions to mitigate the harms of digital platforms, with a particular focus on social media. As part of the Initiative, our team worked with a range of stakeholders across government, business, and civil society to address growing public concerns.
About the Author
Amritha Jayanti is the Associate Director of the Technology and Public Purpose (TAPP) Project at Harvard Kennedy School’s Belfer Center for Science and International Affairs. Before assuming this role, she served as a Research Associate, supporting both TAPP and the Belfer Center’s Director and former U.S. Secretary of Defense, Ash Carter. Her work focuses on emerging technology, international security, and public purpose.
Prior to joining the Belfer Center, Amritha was a visiting researcher at the University of Cambridge’s Centre for the Study of Existential Risk, where she primarily researched the governance of artificial intelligence in Western military organizations. She has also worked at the Brookings Institution’s Center for Technology Innovation, researching the application of artificial intelligence in various sectors, including defense and education.
Before turning to policy work, Amritha was the lead product manager at Clara Labs, a San Francisco-based, Sequoia-backed startup. She also served as the Executive Director of Interact, a San Francisco-based nonprofit that supports young technologists interested in social impact. Additionally, she founded a nonprofit, Technica, which encourages gender diversity in computer science and STEM more broadly; she remains a member of the board.
Amritha received her degree from the University of Maryland, where she studied computer engineering, economics, and public policy.
The following analysis is part of Harvard Kennedy School’s Democracy and Internet Governance Initiative, whose initial research has focused primarily on improving the quality of our information ecosystem, countering online extremism and radicalization, and addressing online harassment and declining press freedom.

Key Points
• The current strategy for digital platform governance is fragmented, ad hoc, and politicized; the United States has barely moved the needle on an organized governance strategy to improve consumer welfare and communal wellbeing online.
• Industry-wide voluntary standards setting offers a path forward to ensure we have expert-led processes, measurements, and best practices to elevate safety and quality on digital platforms, while paving the way for comprehensive U.S. federal rules and enforcement.
• If U.S.-based firms do not act soon, the European Union and the UK will set the standards via policies like the Digital Services Act and the Online Safety Bill, respectively. This could lead to non-ideal regulatory conditions for American companies and the broader U.S. innovation ecosystem.
Large-scale digital platforms, particularly social media platforms, have created and exacerbated significant harms to individual and communal sovereignty, mental health, consumption practices, and public goods such as robust information access. Journalists, academics, and civil society have been warning about these harms for nearly a decade, garnering significant attention from lawmakers and the public.
Despite emergent agreement among industry officials and the public on high-level goals such as online safety and security, consumer protection, user choice, and trustworthy information access, little progress has been made to address consumer welfare needs and hold individuals and organizations accountable when harms materialize. Meanwhile, more sophisticated technologies like generative AI are being deployed in the mass market,1 introducing new challenges to consumer and communal wellbeing.
As concerns about these platforms grow, the lack of comprehensive policies is increasingly evident. Technology companies are actively lobbying the United States Congress to avoid external regulations, while Congress is struggling to understand the scope of the problem and what it can do to address it amidst a politically charged environment.
The critical question is: can we break this standstill under the current conditions? We believe the answer is yes. Drawing on historical examples of industry-led improvement, we propose that the most promising path forward is a commitment to industry-wide voluntary standards setting.
Industry-wide standardization has a long history both domestically and internationally. In fields like accounting, health care, and agriculture, industry standards promote best practices that ensure safety and quality control for consumers. Standards, which provide a common language to measure and evaluate the performance of a product or service, aim to reflect the shared values and responsibilities we as a society project upon each other and our world. The standards development process relies on cross-sectoral experts to develop technical and operational standards for technology development and deployment.
Standards setting, in its traditional and tested form, has not yet penetrated the digital services space. In this position piece, we make the case that standards setting is the most viable way for social media governance (as one type of digital service) to move beyond a status quo that fails to prioritize product safety and quality, and that it can pave the way for smart government regulation.