
Konstantine Buhler Partner at

Nathan Benaich General Partner at


Chosen research paper:

Released in Apr 2023

Generative Agents: Interactive Simulacra of Human Behavior

Stanford University - Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein

Why it’s important:

"In this paper, the team out of Stanford places several generative agents in a shared digital world somewhat similar to the game Sims. These agents, built on LLMs, interact with each other. The interactions are surprisingly realistic, including a coordinated Valentine's day party. If the AI revolution is a continuation of the personal computer revolution, as in a revolution of computation, prediction, and work, then this type of multi-agent interaction is reminiscent of the early days of PC-networking, which eventually led to the Internet."

Chosen research paper:

Released in January 2023

Large Language Models Generate Functional Protein Sequences Across Diverse Families

Profluent, Salesforce - Ali Madani, Ben Krause, Eric Greene, Subu Subramanian, Benjamin Mohr, James Holton, Jose Luis Olmos Jr, Caiming Xiong, et al.

Why it’s important:

"Madani et al. demonstrate how a language model architecture originally designed for code can be adapted to learn the language of proteins. Through large-scale training, they use a protein language model (ProGen) to create artificial protein sequences that encode functionality that is equivalent to or better to naturally occurring proteins. This means we can generate proteins (drugs or otherwise) with desired functions in a far more systematic way than ever before."

Levin Bunz Partner at

Christian Jepsen Partner at

Chosen research paper:

Released in October 2022

Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos

OpenAI - Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, Jeff Clune

Why it’s important:

"The research from OpenAI applies semi-supervised imitation learning for computer agents to learn to act by "watching" unlabeled video data. The model was pre-trained with 70k hours of online videos of people playing Minecraft, finetuned with a small amount of labeled data (video labeled with keypresses and mouse movements). The trained model was able to craft diamond tools with human-level performance. Taking this further, complex and sequential tasks could be automated by simply "observing" humans doing the work, e.g., for data entry tasks within or across applications.”

Chosen research paper:

Released in Jan 2023

Mastering Diverse Domains through World Models

Why it’s important:

"A research team from Deepmind show that a Reinforcementlearning-based general and scalable algorithm can master a wide range of domains with fixed hyperparameters. By interacting with the game, the model learned to obtain diamonds in the popular video game Minecraft despite sparse rewards, and without human data or domain-specific heuristics. "Learning by doing" across different domains and sparce/delayed rewards is a trait of human intelligence and hence this research presents a potential path towards a "general" AI."

Felix Becker Associate at

Pete Huang Author of

Chosen research paper:

Released in Mar 2023

Alpaca: A Strong, Replicable Instruction-Following Model

Stanford University, Meta - Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, Tatsunori B. Hashimoto

Why it’s important:

"Generative AI models are pivotal for productivity, especially when users can run them on their own hardware. Alpaca showed that a combination of an open-source foundational model and extracted instruction-output pairs can achieve similar performance to text-davinci-003. More importantly, this leap to democratization happened very cost-efficiently (<600 USD). The paper initiated a discussion on how defensible even humanlabeled training data is. It foreshadowed a missing moat of big tech and hints at forthcoming possibilities of AI models stealing from each other."

Chosen research paper:

Released in Feb 2023

LLaMA: Open and Efficient Foundation Language Models

Meta - Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, et al.

Why it’s important:

"Meta's release of LLaMA was important for two reasons. First, it showed that training a smaller model for longer could yield really impressive results - the 13B model outperformed GPT-3, which has 175B parameters. Second, an entire ecosystem has bloomed around LLaMA. Alpaca and Vicuna for starters, but also the efforts to open source a LLaMA-equivalent with commercial licenses, running these models on your laptop and your phone, etc. A lot of progress in the large language model space from 2023 is thanks to Meta and its work with LLaMA."

Sahar Mor AI Product Lead at & Editor of AI Tidbits

Chosen research paper:

Released in Mar 2023

Towards Expert-Level Medical Question Answering with Large Language Models (Med-PaLM 2)

DeepMind - Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, et al.

Why it’s important:

"Singhal et al present a model designed to tackle the grand challenge of medical question answering with performance exceeding SOTA across multiple datasets. Med-PaLM 2 combines improvements in LLMs with medical domain fine-tuning and novel prompting strategies. The model scored up to 86.5% on the MedQA dataset, surpassing the previous SOTA by over 19%. Combined with the recent progress in multimodal AI, which would allow AI models also to see and hear - we can imagine a world where individuals can access personalized, timely, and accurate medical advice conveniently, empowering them to make informed decisions about their health, improving healthcare access and outcomes for humans across the globe.”

Nicole Büttner CEO at

Chosen research paper:

Released in Apr 2023

Segment Anything

Meta - Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, et al.

Why it’s important:

"This paper is something like the GPT Moment for Computer Vision. The Segment Anything Model (SAM) trains on more than 1b segmentation masks on 11M images. The model's zero shot capabilities solve many of the computer vision tasks without training data through a prompt, which revolutionizes the field."

Niko McCarty Author of

Chosen research paper:

Released in April 2023

Efficient Evolution of Human Antibodies from General Protein Language Models

Stanford University - Brian L. Hie, Varun R. Shanker, Duo Xu, Theodora U. J. Bruun, Payton A. Weidenbacher, Shaogeng Tang, Wesley Wu, John E. Pak

Why it’s important:

"Large language models can massively accelerate evolution experiments in the lab, including for clinically-relevant applications. This study used six language models, altogether, that were trained on protein sequences in the UniRef database. The model would suggest mutations - without knowing the target antigen - based on which substitutions have a higher evolutionary likelihood across the six models. The evolved antibodies had improved affinities comparable to those achieved by a state-of-the-art lab evolutionary system (which takes weeks to perform) suggesting that LLMs can massively accelerate clinical development times in some scenarios.”

Darian Shirazi General Partner

Chosen research paper:

Released in June 2022

ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers

Why it’s important:

"As large language models become even larger, there are significant memory and processing power limitations which increases latency and cost for many applications. This paper introduced a novel technique for post-training Quantization of LLMs by representing parameters in 8 bits rather than 32 bits with minimal accuracy loss and a >5x response time. In time, many products will leverage different optimized models for different use cases and Quantization is one of the major steps towards this reality. While a simple and elegant solution, ZeroQuant has led to a number of other optimization methods such as SmoothQuant and other methods."

Lars Maaløe Co-founder & CTO at Corti.ai, Adj. Professor at DTU

Chosen research paper:

Released in Apr 2023

Are Emergent Abilities of Large Language Models a Mirage?

Stanford University - Rylan Schaeffer, Brando Miranda, Sanmi Koyejo

Why it’s important:

"With the impressive progress and the many use cases of large language models, it is important to learn what we can expect from them. As Yann LeCun stated, auto-regressive large language models will always hallucinate and it is not fixable. Do they, however, have 'emergent abilities': "abilities that are not present in smaller-scale models but are present in large-scale models ..."? The authors of this paper present an intelligent study showing that the previous belief that these models possess emergent abilities is wrong, and that the apparent emergence simply comes down to the choice of evaluation metrics. Hence, completing a multiple-choice medical exam is not evidence that a model has emergent abilities."
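
The paper's point is easy to reproduce numerically: if per-token accuracy improves smoothly with scale but the task is scored with an all-or-nothing metric such as exact match over a long answer, the aggregate score sits near zero and then shoots up, looking "emergent". A toy illustration with a made-up scaling curve (not the paper's data):

import numpy as np

scale = np.logspace(7, 11, 9)            # pretend parameter counts, 1e7 .. 1e11
per_token_acc = scale / (scale + 3e8)    # hypothetical smooth improvement with scale

answer_len = 20                          # answer counts only if all 20 tokens are right
exact_match = per_token_acc ** answer_len

for n, p, em in zip(scale, per_token_acc, exact_match):
    print(f"params {n:9.1e}   per-token acc {p:.3f}   exact match {em:.3e}")
# The per-token metric rises steadily, but the all-or-nothing exact-match score
# stays near zero and then jumps sharply - an "emergence" created by the metric.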

Max Niederhofer Partner at

Chosen research paper:

Released in June 2022

Semantic Reconstruction of Continuous Language from Non-invasive Brain Recordings

University of Texas at Austin - Jerry Tang, Amanda LeBel, Shailee Jain, Alexander G. Huth

Why it’s important:

"In this paper, researchers developed a non-invasive decoder that can reconstruct continuous natural language from brain recordings. This allows for the interpretation of perceived speech, imagined speech, and even silent videos. Although cooperation from subjects is still needed, the paper makes us wonder how long that requirement will hold. Advanced techniques could have the potential to infringe on mental privacy. While fMRI is currently a key tool in this research, the rapid pace of technological advancement means that other methods may eventually supplant it."
