This week, Midjourney, the popular generative AI image model, announced that it is ending free trials on its platform, citing user abuse as the main reason.
According to the company, people were abusing the one-trial-per-person system, which caused a GPU shortage, so it halted the offer.
Coincidentally, extremely realistic fake photos of famous people made with Midjourney have also gone viral in recent days, causing all sorts of confusion and commentary in the press and on social media. Meanwhile, a group of AI experts has publicly called for a halt to AI development.
According to an announcement made on Discord on Tuesday, March 28th, first reported by The Washington Post, Midjourney’s CEO David Holz said the company was ending the free trial offer due to “extraordinary demand and trial abuse.”
Holz later told The Verge that the company was seeing a huge surge in free trial accounts created with throwaway emails and was experiencing a GPU shortage as a result. He also mentioned that a tutorial for activating multiple free trials had gone viral online, which could explain the unusual activity.
Unless you live under a rock, you have probably seen or heard about the fake images that keep going viral on social media and catching mainstream press attention. Former US President Donald Trump being arrested on the street and the Pope wearing a fashionable white jacket are some of the most notorious examples of fake, AI-generated photos that went viral recently. And they were created with Midjourney.
Such images are problematic for obvious reasons: they are realistic enough that many viewers believed them to be real, they created confusion around important public figures, and they could infringe on various rights.
It’s worth noting, however, that according to the AI lab, the end of the free trials has nothing to do with these controversial images. The company even points out that those were created with Midjourney v5, the latest and most advanced version of its software, which produces extremely realistic synthetic photos and is not available to free users.
Another important observation: despite the increasingly negative commentary in the international press about fake images of real people, and despite being sued by a trio of artists in a class-action lawsuit for copyright infringement, Midjourney hasn’t changed its terms of use as of yet, and these still allow users to produce images of celebrities and public figures.
In addition to being permissive in a way other similar apps aren’t (DALL-E, for example, has filters in place, and its terms explicitly forbid creating pictures of well-known people), some have noticed that Midjourney’s policies are somewhat arbitrary and are sometimes changed on the fly, on a case-by-case basis. The tool might not let you generate a photo of one specific celebrity but be fine with a different famous person.
The company has said it is working on better rules for the use of its software, though, and the fact that it is a small tech lab with limited funding and staff has been cited to explain why sorting out these matters takes time.
Anyone who has been following AI media generators knows they evolve at an impressively fast pace, becoming ever more powerful and delivering better results.
But a number of experts think this speed is too much for everyone else to keep up with and could end up being harmful. Elon Musk, who co-founded OpenAI, the lab behind DALL-E, though he stepped away from it a few years ago, recently co-signed an open letter with other tech leaders aimed at generative AI labs, urging them to pause the training of AI systems more powerful than OpenAI’s latest GPT-4 (a large language model with shockingly good performance), citing that they could become a “risk to society.”
One could argue that Midjourney v5 letting users produce images that run afoul of copyright and privacy standards, while both the law and the company behind the tool fail to regulate it, is evidence that Musk and company have a point.
As with everything in such a new and rapidly expanding industry, it’s as controversial as it is interesting. What do you think?