Runway Unleashes Text to Video
AI VIDEO DEVELOPER RUNWAY has announced Gen 2 of its generative video platform, which adds the ability to produce video footage from text prompts, eliminating the need for reference video or still imagery.
The new release comes hot on the heels of Gen 1, which is now publicly available.
According to the company, “Not too long ago, Runway pushed the boundaries of generative AI with Gen 1, a video to video model that allows you to use words and images to generate new videos out of existing ones. In the weeks since launching, the model has constantly gotten better: better temporal consistency, better fidelity, better results.
“And, as more and more people gained access, we unlocked entirely new use cases and displays of creativity.
“We’re excited to announce our biggest unlock yet - text to video with Gen 2. You can generate a video with nothing but words. No driving video, no input image. Gen 2 represents yet another major research milestone and another monumental step forward for generative AI. With Gen 2, anyone anywhere can suddenly realise entire worlds, animations, stories, anything you can imagine.”
Visit https://runwayml.com
Adobe Unveils Firefly
ADOBE HAS INTRODUCED Adobe Firefly, a new family of creative generative AI models, first focused on the generation of images and text effects. Currently in Beta, Adobe Firefly will be integrated into Creative Cloud, Document Cloud, Experience Cloud and Adobe Express workflows where content is created and modified. Adobe Firefly will also be part of a series of new Adobe Sensei generative AI services across Adobe’s clouds.
“Generative AI is the next evolution of AI-driven creativity and productivity, transforming the conversation between creator and computer into something more natural, intuitive and powerful,” said David Wadhwani, president, Digital Media Business, Adobe. “With Firefly, Adobe will bring generative AI-powered ‘creative ingredients’ directly into customers’ workflows, increasing productivity and creative expression for all creators from high-end creative professionals to the long tail of the creator economy.”
Firefly tools currently available include Text-to-Image, where users enter prompts to generate images, and Text Effects, where styles or textures are applied to text using a text prompt. A Recolor Vectors tool is also in the works.
AI-generated video is not currently available via Firefly, but a spokesperson did tell C+T, “Adobe Firefly is initially focused on the generation of images and text effects. However, it will soon extend to target additional use cases and content types.”
Adobe Firefly is significant in that it can serve an existing ecosystem of users and workflows, and is also an attempt to apply ethical guidelines to the training, creation and monetisation of generative AI, the lack of which has attracted criticism of other platforms.
Adobe will integrate Firefly directly into its tools and services, so its existing users can leverage the power of generative AI within their existing workflows. The first applications that will benefit from Adobe Firefly integration will be Adobe Express, Adobe Experience Manager, Adobe Photoshop and Adobe Illustrator.
Adobe Firefly will be made up of multiple models. Adobe’s first model has been trained on Adobe Stock images, openly licensed content and public domain content where copyright has expired. The company says Firefly won’t generate content based on other people’s or brands’ IP.
Adobe’s intent is to build generative AI in a way that enables customers to monetise their talents, much like Adobe has done with Adobe Stock and Behance. Adobe is developing a compensation model for Adobe Stock contributors and will share details once Adobe Firefly is out of beta.
Adobe is a founder of the Content Authenticity Initiative (CAI), whose aim is to create a global standard for trusted digital content attribution. Adobe is pushing for open industry standards using CAI’s open-source tools, which are free and actively developed through the nonprofit Coalition for Content Provenance and Authenticity (C2PA). These goals include a universal “Do Not Train” tag in an image’s Content Credentials, allowing creators to request that their content not be used to train models. The Content Credentials tag will remain associated with the content wherever it is used, published or stored. In addition, AI-generated content will be tagged accordingly.
Adobe is also planning to make Adobe Firefly available via APIs on various platforms to enable customers to integrate it into custom workflows and automations.
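To illustrate how such a preference might be expressed, the short Python sketch below builds a Content Credentials-style manifest fragment carrying a “Do Not Train” entry. It is a minimal sketch based on the openly published C2PA training-and-data-mining assertion; the label, entry names and claim_generator string are assumptions for illustration, not taken from Adobe’s Firefly output.

```python
import json

# Purely illustrative sketch (not Adobe's actual output): a Content Credentials-style
# manifest fragment recording a "Do Not Train" preference, loosely modelled on the
# C2PA training-and-data-mining assertion. The label and entry names are assumptions
# drawn from the public C2PA specification.
do_not_train_assertion = {
    "label": "c2pa.training-mining",  # assumed assertion label
    "data": {
        "entries": {
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

# Hypothetical manifest fragment bundling the assertion with a generator string.
manifest_fragment = {
    "claim_generator": "example-credential-tool/1.0",
    "assertions": [do_not_train_assertion],
}

print(json.dumps(manifest_fragment, indent=2))
```

In practice a C2PA manifest is cryptographically signed and embedded in, or referenced from, the asset itself, which is how the tag stays associated with the content wherever it is used, published or stored.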
Visit https://firefly.adobe.com