Sources
[1]
Midjourney launches its first AI video generation model, V1 | TechCrunch
Midjourney, one of the most popular AI image generation startups, announced on Wednesday the launch of its much-anticipated AI video generation model, V1. V1 is an image-to-video model: users can upload an image -- or take an image generated by one of Midjourney's other models -- and V1 will produce a set of four five-second videos based on it. Much like Midjourney's image models, V1 is accessible through Discord, though at launch video generation is only available on the web. The launch of V1 puts Midjourney in competition with AI video generation models from other companies, such as OpenAI's Sora, Runway's Gen-4, Adobe's Firefly, and Google's Veo 3. While many companies are focused on developing controllable AI video models for use in commercial settings, Midjourney has always stood out for its distinctive AI image models that cater to creative types. The company says it has larger goals for its AI video models than generating B-roll for Hollywood films or commercials for the ad industry. In a blog post, Midjourney CEO David Holz says its AI video model is the company's next step toward its ultimate destination: creating AI models "capable of real-time open-world simulations." After AI video models, Midjourney says it plans to develop AI models for producing 3D renderings, as well as real-time AI models. The launch of Midjourney's V1 model comes just a week after the startup was sued by two of Hollywood's most prominent film studios: Disney and Universal. The suit alleges that images created by Midjourney's AI image models depict the studios' copyrighted characters, like Homer Simpson and Darth Vader. Hollywood studios have struggled to confront the rising popularity of AI image and video generation models, such as the ones Midjourney develops. There's a growing fear that these AI tools could replace or devalue the work of creatives in their respective fields, and several media companies have alleged that these products are trained on their copyrighted works.
While Midjourney has tried to pitch itself as different from other AI image and video startups -- more focused on creativity than immediate commercial applications -- the startup cannot escape these accusations. On pricing, Midjourney says it will charge 8x more for a video generation than a typical image generation, meaning subscribers will run out of their monthly allotted generations significantly faster when creating videos than images. At launch, the cheapest way to try out V1 is by subscribing to Midjourney's $10-per-month Basic plan. Subscribers to Midjourney's $60-a-month Pro plan and $120-a-month Mega plan will have unlimited video generations in the company's slower "Relax" mode. Over the next month, Midjourney says it will reassess its pricing for video models. V1 comes with a few custom settings that allow users to control the video model's outputs. Users can select an automatic animation setting to make an image move randomly, or they can select a manual setting that allows them to describe, in text, a specific animation they want to add to their video. Users can also toggle the amount of camera and subject movement by selecting "low motion" or "high motion" in settings. While the videos generated with V1 are only five seconds long, users can choose to extend them by four seconds up to four times, meaning that V1 videos can get as long as 21 seconds. Much like Midjourney's AI image models, early demos of V1's videos look somewhat otherworldly, rather than hyperrealistic. The initial response to V1 has been positive, though it's still unclear how well it matches up against other leading AI video models, which have been on the market for months or even years.
[2]
Midjourney Released an AI Video Generator: How You Can Get Started
The popular AI image platform Midjourney released a new AI video generator on Wednesday. The new V1 video model lets you create 5-second AI videos from images you create on the platform or upload. Founder David Holz announced the video model in a blog post. This generator could one day compete with other generative-AI video options, like OpenAI's Sora and Google's Flow. Subscriptions for Midjourney's V1 start at $10/month for 3.3 hours of "fast" GPU time. According to Holz, a "video job" consists of four five-second videos and costs about eight times more than an image. Midjourney is more affordable than OpenAI's Sora ($20/month or $200/month subscriptions) and Google's Flow ($20/month, or $249/month for the Ultra tier). More capabilities in the future will likely drive up the cost. Holz says in the post that the company will monitor how V1 is used and will adjust pricing later. For now, V1 is only available on the website. If you use Midjourney through Discord, log in using the "continue with Discord" option. To start your first video, you only need an image you uploaded or that's in your photo gallery. It will serve as the "starting frame." You'll see an "Animate image" button. The auto, or default, option makes the image move, while the manual option lets you enter a text prompt for how you'd like it to move. Next, there's a low-motion or high-motion option. According to Holz's blog post, low motion is better for ambient scenes where only the subject moves. The high-motion setting allows both the subject and camera to move. This setting might produce unrealistic or glitchy movements. You can extend your Midjourney video by four seconds up to four times, totaling a 21-second video. This tool generates videos in 480p quality.
Midjourney is facing a lawsuit from Disney and Universal for copyright infringement after allegedly failing to take precautions to prevent people from using copyrighted characters. Midjourney has not issued a public statement about the lawsuit.
[3]
Midjourney launches an AI video generator
Midjourney's AI video generator is currently only available on the web and through the startup's Discord server. It requires a subscription to the service, which starts at $10 / month for 3.3 hours of "fast" GPU time (around 200 image generations). The startup says it will charge "about 8x more for a video job than an image job," adding up to around "'one image worth of cost' per second of video." Midjourney is currently the subject of a lawsuit from Disney and Universal, which cited the prospect of the startup launching a video generator as a special point of concern. The suit contends Midjourney offers a "virtual vending machine, generating endless unauthorized copies of Disney's and Universal's copyrighted work." The in-progress video generation model was first announced in January, and Disney and Universal argued that its training process meant "Midjourney is very likely already infringing Plaintiffs' copyrighted works."
[4]
Midjourney's new animation tool turns images into short videos - here's how
Want to see your AI-generated images come to life? Midjourney's new feature does just that. A growing number of AI sites and services are able to generate short videos based on your descriptions or still images. Now, you can add Midjourney to the mix. On Wednesday, the popular AI image creator announced that users can now animate their images into five-second videos. The new feature is available to all Midjourney subscribers, including those on the $10-per-month Basic plan, and offers a variety of ways to cook up cool videos. Adopting an image-to-video approach, you can create a video from an image already generated through Midjourney or one you upload from your computer. From there, just select the Animate button, and the transformation begins. But there's more to it based on how you want to direct the video. An automatic setting creates the motion prompt for you and then devises the video on its own. Otherwise, a manual setting allows you to revise the prompt to describe the video as you want to see it. You can also choose between low motion and high motion. Low motion is better for ambient scenes in which the camera remains mostly stationary and the subject moves slowly. High motion is ideal for scenes in which you want both the camera and subject to move. After the initial video has been generated, you can extend it about four seconds at a time, as many as four times. That means your five-second video can turn into a relatively full-length production at just over 20 seconds. For now, the video generation is available only on the Midjourney website, not through any other avenues. And there's a cost in terms of credits and minutes. Each video you create will cost around eight times more than what is required for an image.
The number of credits and fast minutes you receive each month depends on your plan. But you can also purchase additional credits if you run out. For the future, though, Midjourney said that it will offer a relax mode for Pro and Mega plan subscribers, which presumably would chew up fewer credits and minutes by generating the videos more slowly. "We're releasing Version 1 of our Video Model to the entire community," Midjourney said. "From a technical standpoint, this model is a stepping stone, but for now, we had to figure out what to actually concretely give to you. Our goal is to give you something fun, easy, beautiful, and affordable so that everyone can explore. We think we've struck a solid balance. Though many of you will feel a need to upgrade at least one tier for more fast-minutes." Sound interesting? If you are a Midjourney subscriber, here's how to tap into the new video generation. Sign in to the Midjourney website with your account. If you want to animate an existing image and haven't yet used the AI to create any, check out my article on "How to use Midjourney's website to generate amazing images with AI." Click the Create tab on the left sidebar and open an image you want to turn into a video. The lower right corner of the page displays the options for Animate. Choose Auto if you want Midjourney to animate the video, or Manual if you want to supply the prompt. Here you can also choose low motion or high motion depending on how animated you want the video to be. Midjourney will tell you that your job has been submitted. Head to the Organize page. When Midjourney is done, you'll see four videos based on your image. Open one of the videos to play it. If you wish, you can then extend the video by choosing Auto or Manual and low or high motion at the bottom right. If you opt for high motion, the current prompt appears at the top, which you can modify. You're able to extend the video this way up to four times.
To try this out, I used an existing image of a woman with long red hair wearing a white evening gown dancing with a tall man in a top hat and black tuxedo. The resulting video was beautifully rendered, smooth, and fluid, and just what I wanted. I did run into a couple of hiccups. In one case, Midjourney told me that it couldn't generate a video based on an existing image as that might violate its guidelines. That's odd since Midjourney created the image in the first place. I also had trouble trying to extend the video by using the manual option to revise the prompt myself. But hopefully these glitches will work themselves out. For now, the new video creation is a cool way to bring life to your still images.
[5]
Generative Image Maker Midjourney Joins the AI Video Craze
Midjourney is one of the most popular AI image generators out there, but it's now branching out into creating animated videos from static images, bringing it in line with rivals like OpenAI's Sora, Adobe's Firefly, and Google's Veo 3. The tool, called V1, is currently only available to desktop users via Midjourney's Discord app, and you'll need to sign up for a subscription plan, which starts at $10 per month. Users can press "Animate" to make their Midjourney-generated images move. V1 will then produce a set of four five-second videos based on whatever they input. There are two animation settings you can pick from: "automatic" and "manual." With the "automatic" setting, the tool creates what the startup calls a "motion prompt" and "just makes things move." But if you're after a bit more creative control, the "manual" animation button lets users describe how they want things to move and how the scene should develop. V1 also offers two styles: high motion and low motion. In low motion, the camera stays mostly static while the subject moves slowly or deliberately. In high motion, both the subject and the camera move -- though Midjourney admits that "all this motion can sometimes lead to wonky mistakes." In addition, users can animate images uploaded from outside of Midjourney. You'll need to drag the image you want to animate into the prompt bar and mark it as a "start frame," then type a motion prompt to describe how you want it to move. Once you have a video you'd like to hold on to, you can "extend" it, roughly four seconds at a time, up to four times total. But turning your creative aspirations to video won't come cheap: using Midjourney to generate video will cost eight times more than conventional image generation, meaning you burn through your monthly credits much faster than normal. There's also no guarantee of what the tool will ultimately end up costing at this early stage.
Midjourney noted that the cost of running these models is "hard to predict," and that it monitors how people use the service before adjusting pricing to ensure it's running "a sustainable business." The new features come as Midjourney has plenty on its plate to deal with beyond product design. Last week, Universal and Disney sued the Bay Area start-up, claiming its business is "a bottomless pit of plagiarism," as a result of drawing from many of the studios' iconic productions for its imagery. But lawsuits aren't stopping the startup from making highly ambitious pronouncements about the future of its tech. "We believe the inevitable destination of this technology is models capable of real-time open-world simulations," said a Midjourney spokesperson as part of the announcement.
[6]
You Can Now Make AI Videos in Midjourney
Midjourney will generate up to four five-second clips based on the images you input, though it admits that some settings can produce 'wonky mistakes.' AI image generator Midjourney has rolled out an AI image-to-video generator for the first time, bringing it into competition with the likes of OpenAI's Sora, Google's Veo 3, and Adobe's Firefly. The tool, called V1, is currently only available to desktop users via Midjourney's Discord app, and you'll need to sign up for the company's $10-a-month subscription at a minimum to start using the service. Users can press "Animate" to make their Midjourney-generated images move. V1 will then produce a set of four five-second videos based on whatever they input. There are two animation settings you can pick from: "automatic" and "manual." With the "automatic" setting, the tool creates what the startup calls a "motion prompt" and "just makes things move." But if you're after a bit more creative control, the "manual" animation button lets users describe how they want things to move and how the scene should develop. V1 also offers two styles: high motion and low motion. In low motion, the camera stays mostly static while the subject moves slowly or deliberately. In high motion, both the subject and the camera move -- though Midjourney admits that "all this motion can sometimes lead to wonky mistakes." In addition, users can animate images uploaded from outside of Midjourney. You'll need to drag the image you want to animate into the prompt bar and mark it as a "start frame," then type a motion prompt to describe how you want it to move. Once you have a video you'd like to hold on to, you can "extend" it, roughly four seconds at a time, up to four times total. But turning your creative aspirations to video won't come cheap: using Midjourney to generate video will cost eight times more than conventional image generation, meaning you burn through your monthly credits much faster than normal. 
There's also no guarantee of what the tool will ultimately end up costing at this early stage. Midjourney noted that the cost of running these models is "hard to predict," and that it monitors how people use the service before adjusting pricing to ensure it's running "a sustainable business." The new features come as Midjourney has plenty on its plate to deal with beyond product design. Last week, Universal and Disney sued the Bay Area start-up, claiming its business is "a bottomless pit of plagiarism," as a result of drawing from many of the studios' iconic productions for its imagery. But lawsuits aren't stopping the startup from making highly ambitious pronouncements about the future of its tech. "We believe the inevitable destination of this technology is models capable of real-time open-world simulations," a Midjourney spokesperson said as part of the announcement.
[7]
Midjourney adds AI video generation
AI company Midjourney has its first video model. This initial take on AI-generated video will allow users to animate their images, either ones made in Midjourney or uploaded from a different source. The initial results will be five-second clips that a user can opt to extend by four seconds up to four times. Videos can be generated on the web only for now and require at least a $10 a month subscription to access. Midjourney was one of the early names in the space for AI-generated still images, even as other platforms have pushed the forefront of the discussions around artificial intelligence development. Google's latest I/O conference included several new tools for AI-generated video, such as the text-to-video model Veo 3 and a tool for filmmakers called Flow. OpenAI's Sora, which launched last year, is also a text-to-video option, while the more recent Firefly from Adobe can create video from a text or image prompt. But being a little late to the video game hasn't stopped Midjourney from drawing the ire of creatives who allege that its models were trained illegally. In fact, this video announcement follows hot on the heels of a lawsuit against the company. Disney and NBCUniversal sued Midjourney last week on claims of copyright infringement. And as with any AI tool, there's always a potential for misuse. But Midjourney has nicely asked that people "please use these technologies responsibly," so surely nothing will go wrong.
[8]
Midjourney video generation is here -- but there's a problem holding it back
Midjourney, one of the oldest and best-known AI image generators, is taking a new direction. Following in the footsteps of its competitors, Midjourney is now also offering AI video generation. Known as V1 Video, this new model allows users to upload an image or use an image generated by Midjourney's V7 image generator, creating a set of short videos from it. In a post on X and a blog post announcing the model, Midjourney CEO David Holz stated, "Introducing our V1 Video Model. It's fun, easy, and beautiful. Available at $10/month, it's the first video model for *everyone* and it's available now." Unlike some of the other competitors in the AI video world, Midjourney can't make a video from a prompt alone. While this will likely change in the future, it does put Midjourney a few steps behind the likes of Sora and Kling 2. Launching the product, the Midjourney CEO went on to explain, "Today's Video workflow will be called 'Image-to-Video'. This means that you still make images in Midjourney, as normal, but now you can press 'Animate' to make them move." There is an automatic animation setting that will create a random motion prompt for you. For those wanting more control, there is also the option to manually describe an animation for the video. When generating a video, you can choose whether it is low or high motion (how much movement happens in the video). Once the video is created, you can extend it, adding four seconds at a time. You can do this up to four times. While you can add images from outside of Midjourney to animate, Holz added, "We ask that you please use these technologies responsibly. Properly utilized, it's not just fun, it can also be really useful, or even profound - to make old and new worlds suddenly alive." This caution comes amid Midjourney's recent legal battle with Disney, which raises concerns over the use of copyrighted material in model training. As mentioned above, you can currently use Midjourney V1 on the $10 per month plan.
However, that could well change pretty soon. "The actual costs to produce these models and the prices we charge for them are challenging to predict," Holz added in the announcement post. "We're going to do our best to give you access right now, and then over the next month, as we watch everyone use the technology (or possibly entirely run out of servers) we'll adjust everything to ensure that we're operating a sustainable business." The problem is that the starting cost is already much higher than for an image. Midjourney will be charging 8x more for a video creation than for an image. While that is still less than the average competitor, that amount is likely to change. While you can make videos on the cheapest plan, they will quickly eat up your credits. There will also be a slower but less expensive version of video generation available on the Pro plan.
[9]
Midjourney's video generator is behind the competition -- here's why I love it anyway
While the AI image market is now pretty crowded, Midjourney was one of the first to do it, turning words into images years ago. But since then, the company has fallen behind, being outpaced by a variety of competitors. This was most evident in the development of AI video generation. The technology has blown up in recent months, seeing huge improvements from companies like Google and OpenAI. While Midjourney has been quiet on this front for a long time, it finally launched its first video generator. However, while others push boundaries, competing to offer the most advanced package, Midjourney's first attempt at a video generator was surprisingly limited. There is no ability to prompt for the video, only to add images -- either your own or ones you've made with Midjourney. While audio isn't common in video generators yet, it's also missing from Midjourney's tool. However, despite these limitations, in my time using Midjourney's video generator so far, it has quickly become one of my favorites. Midjourney has always stood out in the world of AI generation for one good reason. Where the likes of ChatGPT and Gemini are designed to create lifelike images and videos, Midjourney is hyper-focused on creativity. Before you even use it, you need to rank hundreds of images, giving the model an idea of your style preferences. These preferences can be turned on or off, but with the personalization on, image and video results are clearly pushed to styles that fit me. While you can't directly prompt for a video, the process isn't much more complicated. After creating an image on Midjourney, you'll have the option to "animate" the image. This can be done automatically, allowing the AI to choose what happens in the video, either with low or high motion. Or you can manually choose what happens. This effectively turns Midjourney into the same kind of AI video generator as its big competitors... just with a few extra steps to get there.
You can also upload your own images, turning those into videos. Despite some big concerns around copyright right now, Midjourney has also put a strong emphasis on avoiding deepfakes. It won't edit images of real people and is surprisingly unwilling to create something that might resemble celebrity figures. The video generation from Midjourney is clearly designed for the same group as the image generator. It's built for people wanting to make creative projects or design things that are clearly separate from real life. Scroll through Midjourney's explore page and you'll be greeted by moving comic book strips, anime fights and stylised car chases. Midjourney also seems to have put a lot of work into its prompt understanding. Previously, the model would struggle to create good results without incredibly specific details. Now, it works in a similar way to the likes of ChatGPT, able to create images and videos from short descriptions. From the short time I've used it so far, I've got high hopes for Midjourney's video generator. The company has warned that prices could change as it tests the model, so now is a great time to give it a go.
[10]
Midjourney launches AI video model. How to try V1, how much it costs.
Generative image AI platform Midjourney has introduced its V1 Video Model. "It's fun, easy, and beautiful," the company posted on X on Wednesday. "Available at $10/month, it's the first video model for everyone and it's available now." This appears to be a dig at competing generative-AI video programs. OpenAI's Sora is available for ChatGPT Plus and Pro users, for $20/month or $200/month, respectively, and Google's Flow is $249/month. (Adobe's Firefly starts at a comparable $9.99/month for up to 20 five-second videos, and Runway's Gen-4 Turbo video starts at $12/month.) Midjourney is charging eight times more to produce a video than an image, and each job will produce four five-second videos. In a blog post on Midjourney's website, founder David Holz explained this and wrote that the prices will be hard to predict. The team will watch how V1 is used over the next month and adjust from there. Holz called V1 a "stepping stone," as Midjourney ultimately wants to create "real-time open-world simulations." The building blocks to this, Holz wrote, are image models, video models of those images, 3D models, and doing all this quickly (real-time models). Midjourney plans on building these models individually and releasing them, with version one of its Video Model out now. Midjourney users can create images in the platform as usual and now press "Animate" to make them move. They can choose to do this automatically or manually and choose between low motion (for more ambient scenes) and high motion. Videos can be "extended" around four seconds at a time, four times in total. Users can animate images from outside of Midjourney, as well. For now, V1 is web-only. "We ask that you please use these technologies responsibly," Holz wrote.
As Mashable's Timothy Beck Werth reported for the launch of Google's Veo 3, misinformation experts have sounded the alarm that AI video may soon be indistinguishable from real video. (The recent viral emotional support kangaroo video shows that that's already happening.) AI generation has also been used by bad actors, such as to create explicit deepfakes (which is now a federal crime in the U.S.). "Properly utilized it's not just fun, it can also be really useful, or even profound -- to make old and new worlds suddenly alive," Holz continued. V1 launched amid a recent lawsuit Disney and Universal filed against Midjourney. The suit claims that the platform illegally trained on copyrighted content and calls Midjourney a "bottomless pit of plagiarism."
[11]
Midjourney Arrives in the AI Video Space With V1 Model
Midjourney, best known for its AI image generator, has just announced its first video generation model, called V1 Video, which animates still images into short video clips. Users can generate a five-second clip using a written prompt and an image; it can be either an AI image from Midjourney's picture generator or one they upload themselves. Videos can then be extended in four-second increments up to four times, for a maximum duration of 21 seconds. Animations can be adjusted for either low or high motion, depending on whether both the subject and camera should move or just the subject. The new tool is accessible via Midjourney's website and Discord server and is part of a subscription plan starting at $10 per month. That plan provides 3.3 hours of "fast" GPU time, roughly equivalent to 200 image generations. Video jobs, however, are significantly more resource-intensive, costing about eight times as much as a single image generation, or approximately "one image worth of cost per second of video," according to the company. An automatic mode generates basic movement with a default prompt, while a manual option allows users to describe motion in more detail. Midjourney founder David Holz describes the release as "a stepping stone" toward more advanced models, such as "real-time open-world simulations." "This means that you still make images in Midjourney, as normal, but now you can press 'Animate' to make them move," Holz says in a blog post. "Properly utilized, it's not just fun, it can also be really useful, or even profound to make old and new worlds suddenly alive." The rollout comes at a sensitive time for the company. Last week, Disney and NBCUniversal filed a lawsuit against Midjourney, citing concerns about the potential misuse of their copyrighted content. The companies allege that the platform functions as a "virtual vending machine, generating endless unauthorized copies of Disney's and Universal's copyrighted work."
The suit also raises alarms about Midjourney's model training methods, arguing they are infringing. Despite trailing other companies like OpenAI, Google, and Adobe in the text-to-video space, Midjourney's move into animation signals its intent to remain competitive. Tools like Google's Veo, OpenAI's Sora, and Adobe's Firefly Video have already introduced more sophisticated prompt-to-video capabilities. Holz acknowledges the challenges of scaling the video tool, stating, "The actual costs to produce these models and the prices we charge for them are challenging to predict." He adds that pricing and availability may shift in the coming weeks as usage patterns emerge. Midjourney has urged users to "please use these technologies responsibly."
[12]
Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried
The tool is relatively affordable and a possible rival for Google Veo or OpenAI's Sora. Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1. This image-to-video tool is now available to Midjourney's 20 million-strong community, who want to see five-second clips based on their images, extendable in four-second increments to roughly 21 seconds. Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI's Sora and Google's Veo 3, especially when you consider the price. For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself. To make a video, you simply generate an image in Midjourney like usual, hit "Animate," choose your motion settings, and let the AI go to work. The same goes for uploading an image; you just have to mark it as the start frame and type in a custom motion prompt. You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively. The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users. Midjourney isn't trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They're trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.
Midjourney's video tool isn't pretending to be Hollywood's next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media. And it really is pretty cheap. According to Midjourney, one video job costs about the same as upscaling, or one image's worth of cost per second of video. That's 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of the alternatives. That's probably for the best, since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and other studios over claims it trained its models on copyrighted content. For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone. Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they're busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.
[13]
'Surpassing all my expectations': Midjourney releases first AI video model amid Disney, Universal lawsuit
Popular AI image generation service Midjourney has launched its first AI video generation model, V1, marking a pivotal shift for the company from image generation toward full multimedia content creation. Starting today, users can animate images via the Midjourney website, transforming their generated or uploaded stills into 5-second clips, with options to extend the generation up to 20 seconds (in 5-second bursts) and to guide it with text. With the launch, the bootstrapped small lab Midjourney positions itself in a rapidly intensifying AI video race. At the same time, it's also confronting serious legal challenges from two of the largest entertainment studios in the world. What does it mean for AI creators and enterprises looking to harness the latest in creative tech for advertising, marketing or user engagement? A new product built directly atop Midjourney's popular AI image generator Midjourney's new offering extends its familiar image-based workflow, including its new V7 text-to-image model. Users generate a still image, either within the Midjourney platform or by uploading an external file, then press "Animate" to turn that image into video. Two primary modes exist: one uses automated motion synthesis, while the other lets users write a custom motion prompt to dictate via text how elements should move in the scene. So Midjourney video arrives with support for both image-to-video and text-guided edits and modifications. From a creative standpoint, users can toggle between two motion settings: a low motion mode optimized for ambient or minimalist movement -- such as a character blinking or a light breeze shifting scenery -- and a high motion mode that attempts more dynamic animation of both subject and camera, though this can increase the chance of visual errors. 
These settings are accessed below a generated or uploaded image on the Midjourney website, in the right-hand options pane below a field labeled "Animate Image." Each video job generates four different 5-second clips as options, and users can extend the animation by 4 seconds per clip, up to a total of 20 seconds. While this is relatively short-form, the company has indicated that video duration and features will expand in future updates. Midjourney, launched in summer 2022, is widely considered by many AI image creators to be the premier or "gold standard" service in AI image generation to this day, thanks to its frequent updates and its more realistic and varied creation options, so there were high expectations surrounding its entry into the AI video space. Initial reactions from users have been largely promising, with some, like Perplexity AI designer Phi Hoang (@apostraphi on X), commenting: "it's surpassing all my expectations." Affordable price Midjourney is offering video access as part of its existing subscription plans, starting at $10 per month. The company states that each video job will cost approximately 8x as much as an image generation task. However, since each video job produces 20 seconds of content, the cost-per-second is roughly equivalent to generating one still image -- a pricing model that appears to undercut many competitors. A "video relax mode" is being tested for "Pro" subscribers and above. This mode, like its counterpart in image generation, would offer delayed processing in exchange for reduced compute costs. Fast generation remains metered through GPU minutes based on tiered subscription plans. Community commentators have largely received the pricing positively. AI content creator @BLVCKLIGHTai emphasized on social media that the cost is roughly in line with what users pay for upscaling images -- making the tool surprisingly affordable for short-form video experimentation. 
It's comparable to rival Luma AI's "Web Lite Plan" for $9.99 per month and below Runway's "Standard" plan ($15 monthly). No sound yet and a more limited built-in editor than AI video rivals such as Runway, Sora, Luma For now, any soundtrack would need to be added manually in post-production using separate tools. In addition, Midjourney's outputs remain short and are capped at 20 seconds. There is no current support for editing timelines, scene transitions, or continuity between clips. Midjourney has stated this is only the beginning and that the initial release is intended to be exploratory, accessible, and scalable. Rising stakes in crowded AI video market The launch lands at a time when AI video generation is rapidly becoming one of the most competitive corners of the generative AI landscape. Tech giants, venture-backed startups, and open-source projects are all moving fast. This week, Chinese startup MiniMax released Hailuo 02, an upgrade to its previous video model. Early feedback has praised its realism, motion adherence to prompts, and 1080p resolution, though some reviewers noted that render times are still relatively slow. The model appears especially adept at interpreting complex motion or cinematic camera angles, putting it in direct comparison with Western offerings like Runway's Gen-3 Alpha and Google's Veo line. Meanwhile, Luma Labs' Dream Machine has gained traction for its ability to co-generate audio alongside high-fidelity video, a feature missing from Midjourney's new release, and like Runway, allows for re-stylizing or "re-skinning" video with a new feature called Modify Video. Google's Veo 3 and OpenAI's Sora are similarly working toward broader multimodal synthesis, integrating text, image, video, and sound into cohesive, editable scenes. 
Midjourney's bet appears to be on simplicity and cost-effectiveness -- a "good enough" solution priced for scale -- but that also means it launches without many advanced features now standard in the premium AI video tier. The shadow of litigation from Disney and Universal over IP infringement Just days before the launch, Midjourney was named in a sweeping copyright infringement lawsuit filed by Disney and Universal in U.S. District Court. The complaint, spanning more than 100 pages, accuses Midjourney of training its models on copyrighted characters -- including those from Marvel, Star Wars, The Simpsons, and Shrek -- without authorization and continuing to allow users to generate derivative content. The studios allege that Midjourney has created a "bottomless pit of plagiarism," intentionally enabling users to produce downloadable images featuring characters like Darth Vader, Elsa, Iron Man, Bart Simpson, Shrek, and Toothless with little friction. They further claim that Midjourney used data scraping tools and web crawlers to ingest copyrighted materials and failed to implement technical safeguards to block outputs resembling protected IP. Of particular note: the lawsuit preemptively names Midjourney's Video Service as a likely source of future infringement, stating that the company had begun training the model before launch and was likely already replicating protected characters in motion. According to the complaint, Midjourney earned $300 million in revenue in 2024 and serves nearly 21 million users. The studios argue that this scale gives the platform a commercial advantage built atop uncompensated creative labor. Disney's general counsel, Horacio Gutierrez, stated plainly: "Piracy is piracy. And the fact that it's done by an AI company does not make it any less infringing." The lawsuit is expected to test the limits of U.S. 
copyright law as it relates to AI training data and output control -- and could influence how platforms like Midjourney, OpenAI, and others must structure future content filters or licensing agreements. For enterprises concerned about infringement risks, services with built-in indemnity like OpenAI's Sora or Adobe Firefly Video are probably better options for AI video creation. A 'world model' and real-time world generation are the goal Despite the immediate risks, Midjourney's long-term roadmap is clear and ambitious. In public statements surrounding the video model's release, the company said its goal is to eventually merge static image generation, animated motion, 3D spatial navigation, and real-time rendering into a single, unified system, also known as a world model. These systems aim to let users navigate through dynamically generated environments -- spaces where visuals, characters, and user inputs evolve in real time, like immersive video games or VR experiences. They envision a future where users can issue commands like "walk through a market in Morocco at sunset," and the system responds with an explorable, interactive simulation -- complete with evolving visuals and perhaps, eventually, generative sound. For now, the video model serves as an early step in this direction. Midjourney has described it as a "technical stepping stone" to more complex systems. But Midjourney is far from the only AI research lab pursuing such ambitious plans. Odyssey, a startup co-founded by self-driving tech veterans Oliver Cameron and Jeff Hawke, recently debuted a system that streams video at 30 frames per second with spatial interaction capabilities. Their model attempts to predict the "next state of the world" based on prior states and actions, enabling users to look around and explore scenes as if navigating a 3D space. 
Odyssey combines AI modeling with its own 360-degree camera hardware and is pursuing integrations with 3D platforms like Unreal Engine and Blender for post-generation editing. However, it does not yet allow for much user control beyond moving the position of the camera and seeing what random sights the model produces as the user navigates the generated space. Similarly, Runway, a longtime player in AI video generation, has begun folding world modeling into its public roadmap. The company's AI video models -- the latest among them, Gen-4, introduced in April 2025 -- support advanced AI camera controls that allow users to arc around subjects, zoom in and out, or smoothly glide across environments -- features that begin to blur the line between video generation and scene simulation. In a 2023 blog post, Runway's CTO Anastasis Germanidis defined general world models as systems that understand environments deeply enough to simulate future events and interactions within them. In other words, they're not just generating what a scene looks like -- they're predicting how it behaves. While Midjourney's approach has so far emphasized accessibility and ease of use, it's now signaling an evolution toward these more sophisticated simulation frameworks. The company says that to achieve this, it must first build the necessary components: static visuals (its original image models), motion (video models), spatial control (3D positioning), and real-time responsiveness. Its new video model, then, serves as one foundational block in this longer arc. This puts Midjourney in a global race -- not just to generate beautiful media, but to define the infrastructure of interactive, AI-generated worlds. 
A calculated and promising leap into an increasingly complicated competitive space Midjourney's entry into video generation is a logical extension of its popular image platform, priced for broad access and designed to lower the barrier for animation experimentation. It offers an easy path for creators to bring their visuals to life -- at a cost structure that, for now, appears both aggressive and sustainable. But this launch also places the company squarely in the crosshairs of multiple challenges. On the product side, it faces capable and fast-moving competitors with more features and less legal baggage. On the legal front, it must defend its practices in a lawsuit that could reshape how AI firms are allowed to train and deploy generative models in the U.S. For enterprise leaders evaluating AI creative platforms, Midjourney's release presents a double-edged sword: a low-cost, fast-evolving tool with strong user adoption -- but with unresolved regulatory and IP exposure that could affect reliability or continuity in enterprise deployments. The question going forward is whether Midjourney can maintain its velocity without hitting a legal wall or whether it will have to significantly restructure its business and technology to stay viable in a maturing AI content ecosystem.
[14]
I Tried Midjourney's AI Video Generator, and It's Better Than I Expected
David Nield is a technology journalist from Manchester in the U.K. who has been writing about gadgets and apps for more than 20 years. While the AI image generators that are built into chatbots might have been grabbing most of the attention recently, the dedicated AI imagery engine Midjourney has been quietly improving and evolving since its launch three years ago. Now, it also features a video model. According to Midjourney, this is another step toward producing an AI tool that's capable of producing a real-time, 3D world simulator. The V1 model has been released with that ultimate goal in mind, though it's going to take a while to get there. The AI video maker in Midjourney works a little differently than other generators. You start with an image -- either AI-generated or one you already have -- and Midjourney creates a five-second animation from it. These short clips can then be extended, four seconds at a time, up to four times in total. As usual with Midjourney, this content creation will cost you time (the Midjourney version of credits): A second of video is the same cost as an image generation, and Midjourney plans start at $10 a month and go up from there. To create a video in Midjourney, you first need to create an image through the web interface. Enter your prompt in the box at the top, using the sliders button to the right to set some of the options, such as the aspect ratio. Be as precise as you can in your prompt, then hit Enter (or click the send icon) to run it. As usual, Midjourney presents you with several results from your prompt, together with options for building on them. These now include four animation options for creating a video. Your first decision is whether to go with Auto (Midjourney chooses the motion that's added) or Manual (you describe the motion you want). 
Your second decision is whether to go with Low Motion (motion is limited) or High Motion (where everything in the frame moves, and glitches are more likely). Once you've made your pick, you can edit your prompt again (if you've picked Manual), and the video is created. As with images, you'll see multiple variations presented. Click on any of the generated videos, and you'll see the same four animation options are here, only these are now for extending the video further -- which you can do four times in total. You can mix up auto and manual sections, and low-motion and high-motion sections, to build up the clip you're looking for. You'll find the options for downloading your video up above the prompt on the right: You can download the raw video or a version optimized for social media (which combats some of the compression that happens when you post videos to those platforms). You can start again by clicking on the original prompt, then making changes to it. Midjourney is an impressive AI image generator, and its videos reach the same standard. I tried creating a sci-fi cityscape and a natural landscape animation, and the end results were mostly consistent and logical, while closely following the prompt instructions. Some of the typical quirks of AI-generated video are here, like weird physics, but even at this early stage, the V1 model is polished and capable. You can see both the limitations and advantages of the Midjourney approach in these clips: Each four-second segment moves smoothly into the next, but you don't get much time to do what you want to do in your video if you're working in four-second bursts, and as the video progresses you do tend to lose some of the detail and richness that you get in your original image. If you're paying OpenAI $20 or more a month for ChatGPT, then you also have access to Sora. Like Midjourney, Sora lets you start videos from an image (either AI-generated or otherwise), or with a fresh prompt. 
I got Sora to build on the futuristic sci-fi city and animated landscape images I'd created in Midjourney, and got mixed results. The scene felt more engaging, but there were more oddities in it, such as unnatural movements and glitchy backgrounds (especially with the animation, which got really weird). You can use Sora to generate videos up to 20 seconds in length, but there's less control over how a scene progresses than there is with Midjourney: You basically just enter your prompt and then take whatever you get back. For casual projects, at least, Midjourney feels like the more accessible tool, capable of more realistic results. I also tried creating the same scene in Google's Veo 2, via the Flow online app. Flow lets you base your videos on images and extend scenes while maintaining consistency, like Midjourney (you don't get the same features with Veo 2 in the Gemini app). Overall, I'd say this got me the results closest to what I was looking for, though there were still some inconsistencies and oddities. You can see that the flying car does descend in a believable way through the cityscape, and the prompt instructions are followed closely. As for the animation, flying across a cartoon-ish landscape, the results from Google Flow and Veo 2 were the best of the bunch -- though again you can see that you gradually lose some of the richness and detail present in the original image. If your AI filmmaking ambitions are a bit more grand, Google's tools might be the best fit, though again, there's a cost: Video generation and access to Flow will set you back $20 or more a month. You can also pay $250 for the Google AI Ultra plan, which gets you extended access to the more advanced Veo 3 model, complete with sound (though Veo 3 can't yet make videos based on a static image). While this isn't the biggest sample size, the quality of the Midjourney clips is clear to see, and the approach to video making is straightforward and intuitive. 
Google Veo 2 remains a better choice for overall quality, while for now Sora remains rather chaotic and unpredictable. You're going to have to spend a lot more time with the OpenAI model to end up with passable results.
[15]
Midjourney debuts new V1 video generation model - SiliconANGLE
Midjourney Inc. today introduced a new artificial intelligence model, V1, that can generate videos up to 21 seconds in length. San Francisco-based Midjourney launched in 2022 with an initial focus on developing AI image generators. The company's models are believed to have about 21 million users. It generates revenue by providing access to the models through a subscription-based cloud service. As part of its service, Midjourney offers a gallery feature that allows users to view their AI-generated images in one place. The gallery now displays a new button below each image that allows users to animate it with the new V1 model. By default, the service generates a 5-second clip. Customers can optionally customize the video in various ways. After generating an initial 5-second video, V1 can be instructed to extend it by 4 seconds up to 4 times. That translates into a maximum video length of 21 seconds. Google LLC's competing Veo 3 model and OpenAI's Sora currently generate clips up to 20 seconds in length. Users can have V1 automatically decide how to animate an image or customize the workflow by providing a prompt. If they choose the latter option, two more customization settings are available. According to Midjourney, V1 can be configured to closely align clips with the user's prompt or add a "creative flair" that introduces new elements. The manner in which the model generates motion is customizable as well. "Low motion is better for ambient scenes where the camera stays mostly still and the subject moves either in a slow or deliberate fashion," Midjourney Chief Executive Officer David Holz wrote in a blog post today. "High motion is best for scenes where you want everything to move, both the subject and camera." V1 is rolling out two months after Midjourney debuted its newest AI image generator. V7, as the latter algorithm is called, is significantly faster than its predecessor and generates higher-quality images. 
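The clip-length arithmetic described above (a 5-second base clip, extendable by 4 seconds up to 4 times) can be sketched as a quick sanity check. This is a toy illustration whose constants simply mirror the numbers reported here; the function name is made up for the example and is not part of any Midjourney API:

```python
# Constants taken from the reported V1 behavior: a 5-second base clip
# that can be extended by 4 seconds at a time, at most 4 times.
BASE_SECONDS = 5
EXTENSION_SECONDS = 4
MAX_EXTENSIONS = 4

def max_clip_length(extensions: int) -> int:
    """Total clip length in seconds after a given number of extensions."""
    if not 0 <= extensions <= MAX_EXTENSIONS:
        raise ValueError("V1 reportedly allows at most four extensions")
    return BASE_SECONDS + extensions * EXTENSION_SECONDS

print(max_clip_length(4))  # 5 + 4 * 4 = 21, the 21-second maximum cited above
```

This also shows why some coverage rounds the maximum to "20 seconds" while others report 21: the base clip plus four 4-second extensions totals exactly 21 seconds.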
AI image and video generators are usually based on a machine learning approach called diffusion. In diffusion training, developers assemble a collection of images, add noise to each file and then ask an AI model to reconstruct the original images. By repeatedly reconstructing images, the model can learn how to create new ones from scratch. AI video generators include additional features that typically aren't available in an image generation model. There's a so-called temporal module, which makes it possible to keep objects consistent across a clip's frames. Video generators also include features that allow them to track the order in which the frames should be displayed. According to Midjourney, V1 is part of a long-term development effort focused on training AI models that can generate interactive 3D simulations. "In order to do this, we need building blocks," Holz wrote. "We need visuals (our first image models). We need to make those images move (video models). We need to be able to move ourselves through space (3D models) and we need to be able to do this all fast (real-time models)."
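The noise-and-reconstruct loop described above can be sketched in a few lines of NumPy. This is only a toy illustration of the training objective, not a real diffusion model: here the "predicted" noise is simply the known noise that was added, whereas a trained network would have to estimate it from the noisy input alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, noise_level):
    """Corrupt an image with Gaussian noise, as in diffusion training."""
    noise = rng.normal(0.0, noise_level, size=image.shape)
    return image + noise, noise

def denoise(noisy_image, predicted_noise):
    """Recover the original by subtracting the predicted noise.
    A trained diffusion model learns to make this prediction itself."""
    return noisy_image - predicted_noise

image = rng.random((8, 8))                # stand-in for one training image
noisy, noise = add_noise(image, 0.5)      # forward (noising) step
reconstructed = denoise(noisy, noise)     # reverse (denoising) step
assert np.allclose(reconstructed, image)  # perfect recovery with known noise
```

Training repeats this corruption-and-recovery across many images and noise levels; once the model can predict the noise reliably, running the denoising step from pure noise generates new images from scratch.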
[16]
Midjourney rolls out V1 AI video generation model
Midjourney unveiled its V1 AI video generation model on Wednesday. The image-to-video model, accessible through the Midjourney website, produces four five-second videos from a user-uploaded or Midjourney-generated image and costs 8x as much per video generation as a typical image generation. CEO David Holz stated in a blog post that this model is a step toward creating AI models capable of "real-time open-world simulations." Following video models, the company plans to develop AI models for 3D renderings and real-time AI models. V1's release positions Midjourney alongside competitors such as OpenAI's Sora, Runway's Gen 4, Adobe's Firefly, and Google's Veo 3, though Midjourney's AI models are specifically geared toward creative uses. The V1 launch follows a lawsuit filed a week prior by Disney and Universal, alleging that Midjourney's AI image models depict copyrighted characters, like Homer Simpson and Darth Vader. According to Midjourney, subscribers to the $60-a-month Pro plan and $120-a-month Mega plan will have unlimited video generations in the company's slower "Relax" mode, while the cheapest way to try out V1 is by subscribing to Midjourney's $10-per-month Basic plan. The company plans to reassess its video model pricing in the next month. V1 offers custom settings to control video output. Users can select automatic animation for random movement or manual settings to describe animations in text, and can also adjust camera and subject movement by selecting "low motion" or "high motion" settings. While the initial videos are five seconds, users can extend them in four-second increments up to four times, creating videos up to 21 seconds long.
[17]
Midjourney Releases V1, Its First AI Video Generation Model
To access Midjourney V1, you need to subscribe to the $10 monthly plan. Midjourney is known for developing one of the best image generation AI models. And now, the company has launched its first video generation AI model, called 'V1'. Along with image generation on Midjourney, you can now click on "Animate" to create four 5-second videos. Basically, you can create videos from still images using the Midjourney V1 model. The best part is that you can upload images created outside of Midjourney and animate them into motion video seamlessly. It uses your image as the starting frame, and then you can add a motion prompt to describe what kind of video you want to generate. That said, Midjourney V1 is not free, and you need to subscribe to the $10 per month plan. Note that there are "high motion" and "low motion" settings to produce videos in different motion styles. Low motion is better for ambient scenes, and high motion can be used where everything moves in the video. And once you are fine with the generated video, you can extend it by four seconds. You can do this up to four times. Currently, ByteDance's Seedance 1.0 and Google's Veo 3 are the best video generation AI models on the Artificial Analysis Video Arena leaderboard. It will be interesting to see where Midjourney V1 stacks up in the video generation race.
[18]
Midjourney AI Video Model Officially Launches : Refining Storytelling Through Motion
Have you ever imagined bringing a still image to life -- transforming a single frame into a dynamic, moving story? Midjourney's latest innovation in video AI makes this a reality. The tool doesn't just animate images; it crafts visually stunning, high-resolution videos with seamless motion and artistic flair. Whether you're a content creator, marketer, or just someone who loves experimenting with new tech, this model opens up creative possibilities that were once reserved for high-budget studios. But here's the kicker: it's not just about making videos -- it's about rethinking how we tell stories through motion. In this exploration, Olivio Sarikas uncovers how Midjourney's video AI is reshaping the creative landscape by turning static images into polished animations. From its dynamic camera effects to its customizable motion styles, this tool offers a level of control and refinement that sets it apart from traditional video generation models. But it's not without its quirks -- like any emerging technology, it has its limitations. So, what makes this tool "crazy good," and where does it still have room to grow? Let's unravel the layers of this new release and see how it's changing the way we create and imagine. Midjourney's video AI distinguishes itself by focusing on transforming images into videos, setting it apart from traditional text-to-video models. This approach allows you to use high-quality images as the foundation for video creation, ensuring a visually rich starting point. The tool supports a diverse range of styles, including realistic, 3D, anime, cartoon, and artistic effects, giving you the flexibility to align the output with your creative vision. Whether you are producing content for social media, marketing campaigns, or professional projects, this model adapts seamlessly to your needs. 
By emphasizing image-based video generation, Midjourney's tool offers a more focused and refined approach, allowing creators to achieve polished results without the complexities often associated with text-to-video models. Midjourney's video AI brings several notable strengths to the forefront, making it a valuable asset for creators aiming to produce high-quality animations. These strengths make the tool particularly appealing for creators who prioritize visual quality and creative flexibility. Whether you are a novice or an experienced professional, the model's capabilities cater to a wide range of creative needs. One of the most impressive aspects of Midjourney's video AI is its extensive customization options, which allow you to tailor the video creation process to your specific requirements. These features strike a balance between accessibility and creative freedom, making the tool suitable for both beginners seeking simplicity and experienced creators looking for advanced control. The ability to fine-tune every aspect of the video ensures that the final output aligns with your vision. The video generation process is powered by GPU-based rendering, ensuring efficient and high-quality outputs. Each video takes approximately eight minutes to complete, with the model generating four variations during each session. This allows you to select the most suitable version for your project. While the initial outputs may have lower resolution, the built-in upscaling technology enhances the final quality, delivering a polished result. However, the AI does face challenges with complex motions, such as dancing or intricate choreography, which can lead to unnatural movements or inconsistencies. Despite these limitations, the model's performance in simpler animations remains highly reliable, making it a strong choice for a wide range of applications. 
Midjourney's video AI is integrated into its existing subscription plans, offering flexibility for users with varying needs and budgets. This tiered approach ensures that users can select a plan that aligns with their creative goals and financial considerations, making the tool accessible to a broad audience. When compared to competing models such as Kling 2.1 and Veo 2.1, Midjourney's video AI stands out with its smoother motion and superior visual quality. While competitors may excel in specific areas, such as text-to-video generation or niche effects, Midjourney's balanced approach to customization, style variety, and resolution makes it a versatile choice for diverse applications. The model's ability to produce consistent, high-resolution animations with dynamic camera effects gives it a competitive edge, particularly for creators who value both quality and creative control. As an evolving technology, Midjourney's video AI is expected to undergo significant advancements in the coming years. Future updates may address current challenges, such as motion inconsistencies and difficulties with complex animations, further enhancing the tool's capabilities. Additionally, the integration of new features and improvements in processing speed could expand its appeal to an even broader audience. With ongoing development, this model has the potential to redefine the possibilities of video generation, opening up new creative opportunities for users across various industries. Its current capabilities already position it as a valuable tool, and its future advancements are likely to solidify its status as a leader in the field of image-to-video transformation.
[19]
Midjourney expands into real-time generated AI videos as part of latest update
Artificial intelligence company Midjourney has recently unveiled its next major step forward. After years of offering and focussing on still images, Midjourney is now expanding into the world of video generation, with real-time generation as the long-term goal. This is now available as part of Version 1 of the Video Model, which is described as "something fun, easy, beautiful, and affordable so that everyone can explore." At the moment, the system is meant to turn images into video using an "Animate" motion-prompt mechanic. Essentially, it brings life and movement to a still image, which can be tweaked depending on the level of motion expected from the source image in question. Compared to rival AI video software, it might seem like Midjourney is a bit behind, but it notes that this is just a stepping stone towards a much greater goal. The long-term intention is to be able to offer real-time open-world simulations, and getting there means also being able to move through 3D models of generated video, and then generating everything very fast. These latter points are the next steps towards making Midjourney's aim a reality. In Midjourney's own words: "What you might not know, is that we believe the inevitable destination of this technology are models capable of real-time open-world simulations. "What's that? Basically; imagine an AI system that generates imagery in real-time. You can command it to move around in 3D space, the environments and characters also move, and you can interact with everything. "In order to do this, we need building blocks. We need visuals (our first image models). We need to make those images move (video models). We need to be able to move ourselves through space (3D models) and we need to be able to do this all fast (real-time models). "The next year involves building these pieces individually, releasing them, and then slowly, putting it all together into a single unified system. 
"It might be expensive at first, but sooner than you'd think, it's something everyone will be able to use."

The catch with this video software is that it's much more expensive to use. Midjourney is charging 8x what it charges for an image job, which it describes as still "over 25 times cheaper than what the market has shipped before."
[20]
Midjourney Video Review : Stunning Features and Surprising Flaws
What if you could create visually stunning, emotionally rich videos with just a few prompts -- no cameras, no crews, no expensive software? That's the promise of Midjourney Video, an innovative AI tool that's redefining how creators approach video production. With its ability to generate lifelike character movements, cohesive visual styles, and even subtle emotional expressions, Midjourney Video feels like a glimpse into the future of storytelling.

But here's the catch: while its creative potential is undeniable, the platform isn't without its quirks. From resolution limitations to occasional motion stutters, this tool walks a fine line between innovation and frustrating imperfection. So, is it worth adding to your creative arsenal? That's exactly what we'll explore in this hands-on review.

In the video below, Theoretically Media breaks down the key features that make Midjourney Video a standout choice for experimental and artistic projects, as well as the limitations that might leave you reaching for complementary tools like Topaz Astra. You'll discover how this AI-powered platform handles complex prompts, breathes life into static visuals, and even surprises users with unexpected creative interpretations. Whether you're a filmmaker, animator, or digital artist, this review will help you decide if Midjourney Video is the right fit for your workflow. Let's unravel its potential and see how far this tool can take your imagination.

Midjourney's video generation model is designed to provide flexibility and control, catering to a wide range of creative workflows. Whether you prefer an automated approach or hands-on customization, the tool adapts to your needs. Each prompt generates four video variations, giving you the opportunity to explore multiple interpretations of your ideas and refine them further.
The model also excels at handling intricate prompts, such as those involving detailed actions or nuanced emotional expressions. Improvements in rendering hands and maintaining visual balance further enhance its appeal for creators seeking polished, professional-looking results.

Despite its strengths, Midjourney Video has several limitations that may affect its suitability for certain projects. The most notable constraint is the resolution cap at 480p, which falls short of professional-quality standards. This limitation can be mitigated by using external upscaling tools like Topaz Astra to enhance the resolution to 1080p or higher, ensuring outputs meet higher quality expectations. Other challenges, such as the occasional motion stutters noted above, remain, but they do not overshadow the model's creative potential. By pairing Midjourney Video with complementary tools and techniques, you can work around these issues and unlock its full capabilities.

Midjourney Video shines in producing stylized and visually engaging outputs, making it an excellent choice for artistic and experimental projects. Whether you're creating hybrid CGI animations, exploring whimsical dreamlike aesthetics, or transforming static images into dynamic videos, the tool adapts well to a variety of creative prompts. Its ability to breathe life into older visuals by animating static images is particularly noteworthy.

One of the most intriguing aspects of Midjourney Video is its capacity to deliver unexpected results. By granting the model creative freedom, you can uncover unique interpretations that may inspire new artistic directions.
This element of unpredictability adds a sense of discovery to the creative process, making the tool especially valuable for experimentation and innovation.

To overcome Midjourney Video's resolution limitations, the Topaz Astra video upscaler offers a practical and effective solution. This external tool enhances video quality to 1080p or higher and provides two distinct modes: precise upscaling, which preserves the original details of the video, and creative upscaling, which introduces new textures and refinements to enhance the overall aesthetic. Topaz Astra is particularly effective at refining textures, enhancing fine details, and improving the overall clarity of videos. However, careful selection of upscaling modes is essential to avoid over-smoothing or unintentionally altering key features of the original output. For creators aiming to elevate their Midjourney projects, Topaz Astra serves as an indispensable companion.

To make the most of Midjourney Video, pair it with complementary tools like Topaz Astra and iterate across the four variations each prompt produces. These habits help you streamline your workflow, work around the tool's limitations, and produce outputs that align with your creative vision.

Midjourney Video represents a significant step forward in AI-driven video generation, offering creators an affordable and versatile platform for artistic exploration. While it has room for improvement in areas such as resolution, text rendering, and motion consistency, its strengths in stylistic consistency, character movement, and creative adaptability make it a valuable asset for a wide range of projects. By using external tools like Topaz Astra and employing effective strategies, you can maximize the tool's potential and unlock new possibilities for creative expression.
[21]
Midjourney V1 Explained: Better than Google Veo 3?
When Midjourney first emerged, it did more than just contribute to the wave of AI-generated imagery; it helped define it. With its distinct aesthetic and intuitive Discord interface, Midjourney found a fiercely loyal creative community and became synonymous with high-quality, stylized visuals. But now the company is stepping into an entirely new arena: AI video generation. On June 18, 2025, Midjourney unveiled its first video model, simply titled V1, marking the beginning of a new chapter in its creative evolution. And as the AI video space heats up with players like Google Veo 3, many are asking: can Midjourney's V1 compete?

Midjourney V1 isn't aiming to be a clone of Google Veo, OpenAI's Sora, or Runway. It doesn't generate video from scratch using text prompts. Instead, it takes something Midjourney already excels at, gorgeous still imagery, and animates it. The result is a video model that doesn't attempt to reinvent the wheel but rather makes it spin with elegance.

Here's how it works: users generate an image with the traditional /imagine prompt, then click the new "Animate" button within Discord. This triggers Midjourney V1, which renders a five-second clip based on the still image, with the option to extend it in five-second intervals up to 20 seconds. There are two motion presets: low motion, which offers gentle shifts like subtle camera pans or ambient movement, and high motion, which introduces more dynamic action, perhaps a figure turning, a swirl of fabric, or the camera sweeping dramatically through the scene. Users can also guide the animation with additional prompts that influence how the movement unfolds, giving some creative agency to the process.

Where Midjourney V1 shines is in style. Unlike other video models that often aim for photorealism or cinematic accuracy, V1 leans into its roots: surrealism, dream logic, and painterly beauty.
The videos it produces feel like animated versions of a visual poem. They're not meant to mimic real life but to evoke a mood, a feeling, an idea. This artistic integrity extends to its consistency: scenes stay coherent, characters maintain shape and form, and visual motifs remain intact throughout a sequence. The output is crisp enough to impress casual users and creative professionals alike. It's not perfect; some flickering and unnatural transitions can occur, especially with high motion, but overall the visual quality is striking.

What makes it even more appealing is the effortless user experience. There's no need to learn a complex interface or script a storyboard. If you've used Midjourney to generate images, you already know how to create video. That low barrier to entry makes V1 especially inviting to digital artists, motion designers, and anyone curious about animation without a background in it.

Midjourney V1 and Veo 3 are very different tools, built for different purposes and catering to different imaginations. While Midjourney V1 is making waves in expression, Google Veo 3 represents a different kind of revolution in realism. V1 excels at turning imagination into motion: abstract, emotional, and visually rich. It doesn't try to simulate the world; it tries to reinterpret it. For artists, illustrators, and anyone enchanted by the surreal, V1 is less of a tool and more of a muse. Veo 3, by contrast, is built for precision storytelling. It's made for brands, filmmakers, and tech-savvy creatives who want cinematic control and visual fidelity. In terms of technical range and realism, it's the clear winner, but it's also more complex, more closed, and less community-oriented.

Midjourney has hinted that V1 is just the beginning.
The company is exploring 3D modeling, real-time simulation, and even interactive worlds: ambitious goals that could eventually lead to full-scale text-to-video generation or immersive virtual environments. If future versions of V1 integrate character persistence, voice, and narrative logic, Midjourney could become a serious contender not just in creative tools but in entertainment itself. Imagine a world where a few prompts generate a stylized short film, one with movement, mood, music, and meaning.

In its current form, Midjourney V1 is not a direct rival to Veo 3, but it doesn't have to be. It is accessible, affordable, and, above all, inspiring. It empowers creators to animate their imaginations with a few clicks, offering a kind of visual poetry in motion that few platforms can match. So, will it be better than Veo 3? Not by Veo's rules. But if the measure is creativity, community, and charm, Midjourney V1 might already be playing an entirely more interesting game.
Midjourney, known for AI image generation, has released its first AI video generation model, V1, allowing users to create short videos from images. This move puts Midjourney in competition with other AI video generation tools and raises questions about copyright and creative industry impacts.
Midjourney, a leading AI image generation startup, has officially launched its first AI video generation model, V1. This new tool allows users to create short videos from still images, marking Midjourney's entry into the competitive AI video market [1].
Source: Digit
V1 is an image-to-video model that enables users to upload an image or use one generated by Midjourney's existing models. The tool then produces a set of four five-second videos based on the input image [1]. Users can choose between automatic and manual animation settings, as well as low-motion and high-motion options to control the level of movement in the generated videos [4].
The V1 model is currently available only through Midjourney's website and Discord server. Subscriptions start at $10 per month for the Basic plan, which includes 3.3 hours of "fast" GPU time [2]. Video generation costs approximately eight times more than image generation, which may require users to upgrade their subscription plans [3].
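To get an intuition for what that 8x multiplier means against the Basic plan's 3.3 hours of fast GPU time, here is a back-of-the-envelope sketch in Python. The per-image GPU cost used below is a placeholder assumption, since Midjourney does not publish a fixed per-job figure; only the 3.3-hour allowance and the 8x multiplier come from the reporting above.

```python
# Back-of-the-envelope: how far does the Basic plan's fast-GPU
# allowance stretch once video jobs cost ~8x an image job?
FAST_GPU_HOURS = 3.3        # Basic plan fast-GPU allowance per month (reported)
VIDEO_COST_MULTIPLIER = 8   # video job ~8x an image job (per Midjourney)

def video_jobs_per_month(image_job_minutes: float) -> float:
    """Estimate monthly video-job capacity given an ASSUMED per-image GPU cost.

    `image_job_minutes` is a placeholder assumption, not a published figure.
    """
    video_job_minutes = image_job_minutes * VIDEO_COST_MULTIPLIER
    return FAST_GPU_HOURS * 60 / video_job_minutes

# If an image job took about one GPU-minute (assumption), a video job
# would take about eight, leaving roughly two dozen video jobs a month.
print(round(video_jobs_per_month(1.0), 2))  # 24.75
```

Whatever the true per-job cost, the multiplier is what matters: the same allowance supports one eighth as many video jobs as image jobs, which is why heavier users may need to move up a plan tier.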
The launch of V1 puts Midjourney in direct competition with other AI video generation models, including OpenAI's Sora, Runway's Gen 4, Adobe's Firefly, and Google's Veo 3 [1]. This move reflects the growing trend of AI companies expanding their offerings to include video generation capabilities.
Midjourney's entry into the video generation market comes amid legal challenges. The company is currently facing a lawsuit from Disney and Universal, alleging copyright infringement [3]. The lawsuit cites concerns about Midjourney's video generation capabilities and their potential impact on copyrighted works [5].
Source: TechRadar
Midjourney CEO David Holz has outlined ambitious goals for the company's AI models, stating that V1 is a step towards creating "models capable of real-time open-world simulations" [1]. The company plans to develop AI models for 3D renderings and real-time simulations in the future.
The introduction of V1 and similar AI video generation tools raises questions about the future of creative industries. There are growing concerns that these AI tools could potentially replace or devalue the work of human creatives in fields such as filmmaking, advertising, and visual effects [1].
Source: GameReactor
As AI video generation technology continues to evolve, it is likely to have significant impacts on various sectors, from entertainment and advertising to education and social media. The development of these tools also highlights the ongoing debate surrounding AI, creativity, and copyright in the digital age.