Curated by THEOUTPOST
On Tue, 22 Oct, 12:08 AM UTC
2 Sources
[1]
Haiper debuts new flagship video generation model - SiliconANGLE
Haiper Ltd., a venture-backed artificial intelligence startup, today debuted a video generator that can create short clips based on user prompts. Haiper 2.0 is a new iteration of an AI model that the company debuted earlier this year. Compared with the previous release, it generates clips faster and in a more realistic style. Additionally, an upcoming update will boost the resolution of the videos that the model creates to 3840 pixels by 2160 pixels.

London-based Haiper was launched in late 2021 by Chief Executive Officer Yishu Miao and Chief Technology Officer Ziyu Wang, who previously held research roles at Google DeepMind. The company went on to raise a $13.8 million seed round earlier this year. Haiper uses its AI models to power a suite of cloud services that consumers can use to generate short clips.

Haiper 2.0, the new model that the company debuted today, is based on a so-called DiT architecture, an approach to building neural networks first introduced by Meta Platforms Inc. researchers in a 2022 paper. It combines two earlier neural network designs known as the diffusion and transformer architectures.

To create a diffusion model, developers assemble a collection of images and introduce a type of error known as Gaussian noise into each file. They then instruct the neural network to try to remove the error. By repeating the task many times during training, diffusion models learn how to generate entirely new images from scratch.

The DiT design on which Haiper 2.0 is based is an improved version of the diffusion architecture. To create a DiT model, researchers take a diffusion model and replace several of its key components with a second AI model. The latter algorithm is based on a neural network design known as the transformer architecture, which is most commonly used to build large language models. The Meta researchers who introduced the concept in 2022 determined that DiT models often outperform standard diffusion models.
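The noising-and-denoising training procedure described above can be sketched in a few lines. This is a generic toy illustration of the diffusion forward process, not Haiper's or Meta's actual implementation; the `alpha_bar` blending factor and the `add_noise` helper are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, alpha_bar, rng):
    """Blend a clean sample with Gaussian noise; strength grows as alpha_bar shrinks."""
    eps = rng.standard_normal(x0.shape)                 # the injected "error"
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, eps

x0 = rng.standard_normal((8, 8))                        # stand-in for one image
x_t, eps = add_noise(x0, alpha_bar=0.5, rng=rng)        # partially noised copy

# During training, a network eps_theta(x_t, t) is fit to predict eps from x_t
# (mean squared error against the true injected noise). Given a perfect noise
# estimate, the clean image can be recovered exactly by inverting the blend:
x0_recovered = (x_t - np.sqrt(0.5) * eps) / np.sqrt(0.5)
assert np.allclose(x0_recovered, x0)
```

Repeating this at many noise levels is what lets the trained model start from pure Gaussian noise and denoise its way to a brand-new image.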
Moreover, algorithms based on the DiT approach can ingest data in the form of latent spaces. Those are mathematical structures capable of representing files in a condensed, hardware-efficient form that cuts infrastructure costs. "With our new model, users will be able to generate ultra-realistic videos faster than ever before," Miao said. Haiper will use the Haiper 2.0 model to enhance its lineup of video generation services. The application suite enables users to generate six-second clips by entering a prompt or uploading a reference image. Additionally, Haiper offers a tool that can extend existing clips by two seconds.
[2]
Watch out, OpenAI -- Haiper 2.0 AI video generation model just launched and it looks stunning
Haiper, a leading AI platform for visual content creation, today announced the launch of Haiper 2.0. This major upgrade comes just seven months after the model's initial release and promises to deliver hyper-realistic videos faster than ever before. Unlike OpenAI's Sora, Haiper 2.0 is available now and users can try it for free. By leveraging a proprietary combination of transformer-based models and diffusion techniques, Haiper 2.0 improves video quality, realism and production speed. This update adds more lifelike and smoother movement, potentially setting a new standard for the best AI video generators. Alongside this release, Haiper introduced Video Templates. These customized formats let users upload still images and then turn them into high-quality videos. The templates streamline the video and animation process, a time-saver for creative projects and marketing applications alike. From hobbyists to large businesses, Haiper 2.0 is tailored to meet user demands for speed, realism and ease of use. Now, users can generate 1080p videos faster than before, with future upgrades promising 4K resolution. Since its launch, Haiper has continued to push the boundaries of video AI, introducing several tools, including a built-in HD upscaler and keyframe conditioning for more precise control over video content. The platform continues to evolve with plans to expand its AI tools, including features that support longer video generation and advanced content customization. We'll be putting Haiper 2.0 to the test soon, so stay tuned for our results.
Haiper, an AI startup, has launched Haiper 2.0, a new video generation model that promises faster creation of ultra-realistic short clips. The model utilizes advanced AI architectures and is set to enhance the company's suite of video generation services.
Haiper Ltd., a London-based artificial intelligence startup, has unveiled Haiper 2.0, its latest flagship video generation model. This new iteration marks a significant advancement in AI-powered video creation technology, offering improved speed and realism compared to its predecessor [1].
Haiper 2.0 is built on a DiT (Diffusion-Transformer) architecture, a cutting-edge approach to neural network design first introduced by Meta Platforms Inc. researchers in 2022. This architecture combines the strengths of diffusion models and transformer models, resulting in enhanced performance and efficiency [1].
The model utilizes latent spaces, mathematical structures that represent files in a condensed, hardware-efficient form. This approach not only improves the quality of generated videos but also helps in reducing infrastructure costs [1].
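The cost savings from working in a latent space come from the model operating on far fewer values per frame. The toy sketch below uses simple average-pooling as a stand-in encoder; real latent encoders are learned neural networks, and the 8x spatial factor here is an illustrative assumption, not Haiper's actual configuration.

```python
import numpy as np

def toy_encode(frame, factor=8):
    """Stand-in 'encoder': average-pool a frame by `factor` in each spatial dimension."""
    h, w, c = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

frame = np.zeros((1080, 1920, 3), dtype=np.float32)   # one 1080p RGB frame
latent = toy_encode(frame)                            # shape (135, 240, 3)

# The generative model runs on the compact latent, not the raw pixels,
# so every training and inference step touches 64x fewer values here.
compression = frame.size / latent.size                # 64.0
```

A matching decoder maps generated latents back to full-resolution pixels, which is how latent diffusion models keep output quality while slashing compute.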
Haiper 2.0 boasts several improvements over its predecessor, including faster generation, more lifelike and smoother motion, 1080p output, and a planned upgrade to 4K (3840 x 2160) resolution [1][2].
Haiper was founded in late 2021 by CEO Yishu Miao and CTO Ziyu Wang, both former researchers at Google DeepMind. The company secured a $13.8 million seed round earlier this year, positioning it as a notable player in the AI video generation space [1].
Unlike some competitors in the field, such as OpenAI's Sora, Haiper 2.0 is immediately available to users. The platform offers free trials, making it accessible to a wide range of users, from hobbyists to large businesses [2].
Haiper has outlined plans for continued expansion of its AI tools, including features to support longer video generation and advanced content customization. These developments aim to further solidify Haiper's position in the rapidly evolving field of AI-generated visual content [2].