Curated by THEOUTPOST
On October 15, 2024
2 Sources
[1]
Meta's AI Northern Lights Post is a Stark Reminder of Big Tech's Contempt For Artists
Over the weekend, Meta's official Threads account posted a series of AI-generated images of major cities with the Northern Lights overhead, along with the line, "You missed the northern lights IRL, so you made your own with Meta AI." It's a stark reminder of the contempt Meta, and much of big tech, have for artists. When I say contempt, I say it with an emphasis on the definition of the word that reads, "the feeling that a person or a thing is beneath consideration [or] worthless." It's not that Meta doesn't notice artists or know they exist on its platforms; it even understands that Instagram would not be what it is without them. But the leadership at Meta -- and at many other large tech companies -- believes that what artists do is of little value.

Meta's post was met with the response you would expect from a platform that succeeds largely thanks to the artists who use it. "Utter rubbish," "ridiculous," "livin' the lie," "I'm disappointed with you," and "this is why we hate you" are just a few of the sentiments shared in response. But the social media company remains undeterred, and at the time of publication the post has not been removed.

And why would Meta remove it? The company's actions show it clearly believes that recreating the spectacular aurora with AI is just as good as photographing it. This is the same mentality that leads celebrities to simply take a photographer's work and post it without permission. When people don't foster a skill, they don't see the effort that goes into honing it into something that others want to see and enjoy. Many simply see photography, and all visual arts, as a means to promote something of theirs, without considering what about the work makes it so effective at driving that promotion. It's a remarkable show of cognitive dissonance: visual art is somehow both worthless and, at the same time, the best medium for promotion.
If you scrutinize generative AI as a concept for even a few minutes, this sentiment is hard not to see. It's a technology that takes something that used to cost money and time and compresses it into something that is, in many cases, free and instant. To get to that point, though, the technology first had to "learn" how to recreate these visuals by studying what humans made first. But big companies, celebrities, and powerful people have been stealing photos for years, so asking a computer to do it hardly feels like a change in mental direction. It's business as usual. To much of big tech, there is no such thing as a photo, and we as users should be more concerned with how a memory made us feel than with how it actually happened. There are exceptions to this mentality, but the vast majority of big tech is ready to forgo actual experiences if it means they can monetize faking them.

The irony here is that despite their obvious contempt for visual artists, these companies need them. Their models rely on human-made work. Generative AI is not capable of creating something truly new; it has to base its output on something. When the technology has no human-created content to work with, it ends up producing visual garbage. Photographers are used to being undervalued, and in the age of generative AI nothing has changed, except that, at times, big tech's overt disdain slips out from behind the mask.
[2]
Missed the Northern Lights? Meta says you should just fake photos with its AI instead
For all the benefits of the best AI image generators, many of us are worried about a torrent of misinformation and fakery. Meta, it seems, didn't get the memo: in a Threads post, it has recommended that those of us who missed the recent return of the Northern Lights simply fake shots using Meta AI instead.

The Threads post, spotted by The Verge, is titled "POV: you missed the northern lights IRL, so you made your own with Meta AI" and includes AI-generated images of the phenomenon over landmarks like the Golden Gate Bridge and Las Vegas. Meta has received a justifiable roasting for its tone-deaf post in the Threads comments. "Please sell Instagram to someone who cares about photography," noted one response, while NASA software engineer Kevin M. Gill remarked that fake images like Meta's "make our cultural intelligence worse."

It's possible that Meta's Threads post was just an errant piece of social media rather than a reflection of the company's broader view of how Meta AI's image generator should be used. And it could be argued that there's little wrong with generating images like Meta's examples, as long as creators are clear about their origin. The problem is that the tone of Meta's post suggests people should use AI to mislead their followers into thinking they'd photographed a real event -- and for many, that crosses a line that could have more serious repercussions for news events more consequential than the Northern Lights.

Is posting AI-generated photos of the Northern Lights any worse than using Photoshop's Sky Replacement tool? Or editing your photos with Adobe's Generative Fill? These are the kinds of questions that generative AI tools are raising on a daily basis, and this Meta misstep is an example of how thin the line can be. Many would argue that it ultimately comes down to transparency.
The issue with Meta's post (which is still live) isn't the AI-generated Northern Lights images themselves, but the suggestion that you could use them to fake having witnessed a real news event. Transparency and honesty around an image's origins are as much the responsibility of the tech companies as they are of their users. That's why Google Photos is, according to Android Authority, testing new metadata that will tell you whether or not an image is AI-generated. Adobe has made similar efforts with its Content Authenticity Initiative (CAI), which has been attempting to fight visual misinformation with its own metadata standard, and Google recently announced that it will finally use the CAI's guidelines to label AI images in Google Search results. But the sluggish adoption of a standard leaves us in limbo as AI image generators become ever more powerful. Let's hope the situation improves soon; in the meantime, it seems incumbent on social media users to be honest when posting fully AI-generated images -- and on tech giants not to encourage the opposite.
Meta's official Threads account suggests using AI to create fake Northern Lights images, igniting a debate on the value of authentic photography and the ethical use of AI-generated content.
Meta, the parent company of Facebook and Instagram, has sparked controversy with a recent post on its official Threads account. The post, which remains live, suggested using Meta AI to create artificial images of the Northern Lights for those who missed seeing them in real life [1][2]. This move has ignited a heated debate about the ethics of AI-generated content and its potential impact on artistic integrity and misinformation.
The photography community has responded to Meta's post with strong criticism. Many users expressed disappointment and frustration, accusing the company of undermining the value of authentic photography [1]. Comments ranged from calling the suggestion "utter rubbish" to more pointed criticisms like "this is why we hate you" [1]. The backlash highlights the growing tension between traditional visual artists and the rapid advancement of AI-generated imagery.
Critics argue that Meta's post encourages users to mislead their followers by presenting AI-generated images as authentic experiences [2]. This raises significant concerns about the potential for misinformation, especially when applied to more consequential news events. Kevin M. Gill, a NASA software engineer, warned that such practices "make our cultural intelligence worse" [2].
The controversy surrounding Meta's post underscores the increasingly blurred line between AI-generated and human-created content. While tools like Photoshop's Sky Replacement and Adobe's Generative Fill have long allowed for image manipulation, the ease and accessibility of AI image generators present new challenges [2]. This situation highlights the urgent need for clear guidelines and transparency in the use of AI-generated imagery.
In response to these challenges, some tech companies are taking steps to increase transparency. Google Photos is reportedly testing new metadata that would indicate whether an image is AI-generated [2]. Similarly, Adobe's Content Authenticity Initiative (CAI) aims to combat visual misinformation through metadata standards [2]. Google has also announced plans to adopt CAI guidelines for labeling AI images in search results [2].
Meta's post has reignited discussions about the relationship between big tech companies and artists. Critics argue that such actions demonstrate a lack of respect for the skill and effort involved in creating authentic visual art [1]. This incident serves as a reminder of the ongoing debate about the value of human creativity in an increasingly AI-driven digital landscape.
As AI image generators become more sophisticated, the need for clear ethical guidelines and transparency measures grows more urgent. The incident highlights the responsibility of both tech companies and users in ensuring the honest presentation of AI-generated content [2]. It also raises questions about the future of photography and visual art in an era where artificial creations are becoming harder to distinguish from reality.