2 Sources
[1]
Transcript: The trouble with deepfakes -- Beyond control?
Hannah Murphy Before we begin, we'd love to hear a bit more about you and what you like about this show. We're running a short survey, and anyone who takes part before August 29th will be entered into a prize draw for a pair of Bose QuietComfort 35 wireless headphones. You can find a link to the survey and terms and conditions for the prize draw in our show notes. Hany Farid So this was . . . Let me see if I can find the year here. Hold on. John Kerry and Jane Fonda, sharing a stage at an anti-war . . . I like . . . Hannah Murphy This is Hany Farid. He's the digital forensics specialist we spoke to last episode, and I've just asked him what his favourite example of a faked photo is. Hany Farid OK. It looks like . . . Oh, there's actually a Wiki . . . this is amazing! You know, say what you will about the internet. There's a few things that are really good. There's actually a Wikipedia page for Kerry-Fonda 2004 election photo controversy. And this is exactly the photo that I'm referring to. And I like it for a couple of reasons. Hannah Murphy The photo is from 20 years ago, when US Senator John Kerry was running against George Bush in the US presidential election. Hany Farid And somebody started circulating -- this is very early days of social media -- a photo of John Kerry sitting at a table and next to him was Jane Fonda at a microphone. And it had a fake headline saying "Fonda speaks to Vietnam veterans at anti-war rally". Hannah Murphy Jane Fonda, she's the Hollywood actor who, back in the '70s, was a vocal critic of the Vietnam war. Hany Farid And this started circulating to try to tag Kerry with a controversial figure, which was Jane Fonda at the time because of her position on the Vietnam war. Hannah Murphy And ahead of the 2004 election, the photo surfaces and starts spreading. Hany Farid Why is this one of my favourite examples? First of all, it was one of the really early examples of this type of photo manipulation in politics. 
Hannah Murphy It's one of the first Photoshopped political images to spread virally via social media, back when social media was still in its infancy. Hany Farid But the other reason I like it is that it's not a very good fake. Like, if you look at it, it's clear that she's standing up with a floor mic and he's sitting down. But there it was, created this huge controversy. People are still talking about it. The third reason I really like it is it inspired a technique that we developed to detect manipulated images. Hannah Murphy As I said, Farid is a digital forensics specialist. He's a professor at the University of California, Berkeley, and he figures out how to spot when photos have been faked, either doctored in more traditional ways or deepfaked, generated by artificial intelligence. Hany Farid In this particular case, they're apparently sitting outdoors. And when you're sitting outdoors, you are primarily being illuminated by the sun. And when you composite two people together, more likely than not, where the sun is in those two images is different. And so we developed techniques that can look at an image like this and estimate for each individual where the light was, and then look for inconsistencies. It's a technique that we still use today. So there's three reasons I really like this photo. But they are nothing compared to the fourth reason. And the fourth reason goes like this. On election night, John Kerry lost the election, of course. NPR was doing exit polls, and they were talking with somebody who was coming out of the poll and they said, do you mind if we ask who you voted for? No. Who did you vote for? I couldn't vote for John Kerry. Why not? I couldn't get that image of him and Jane Fonda at an anti-war rally out of my mind. And the reporter said, you know it was a fake photo. And here's where things get interesting. The guy said, yeah, but I couldn't get it out of my mind. 
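The lighting-consistency check Farid describes, estimating where the light was for each person and then comparing the estimates, can be sketched in a few lines. This is a toy, pure-Python illustration of the idea with synthetic contour data, not Farid's actual forensic code: for each face, fit the light direction that best explains the shading along its outline, then compare the fitted angles.

```python
import math

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for c in range(col, 4):
                    m[r][c] -= f * m[col][c]
    return [m[r][3] / m[r][r] for r in range(3)]

def estimate_light_angle(normals, intensities):
    """Least-squares fit of I = Lx*nx + Ly*ny + ambient for one subject.

    normals: (nx, ny) unit surface normals along the subject's outline;
    intensities: observed brightness at those points.
    Returns the fitted light angle in degrees.
    """
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for (nx, ny), inten in zip(normals, intensities):
        row = (nx, ny, 1.0)          # unknowns: Lx, Ly, ambient term
        for r in range(3):
            for c in range(3):
                ata[r][c] += row[r] * row[c]
            atb[r] += row[r] * inten
    lx, ly, _ambient = solve3(ata, atb)
    return math.degrees(math.atan2(ly, lx)) % 360

def synthetic_face(light_deg, n=24, ambient=0.2):
    """Fake contour data for a face lit from light_deg (Lambertian model)."""
    lx = math.cos(math.radians(light_deg))
    ly = math.sin(math.radians(light_deg))
    normals, intens = [], []
    for k in range(n):
        t = 2 * math.pi * k / n
        normals.append((math.cos(t), math.sin(t)))
        intens.append(max(0.0, lx * math.cos(t) + ly * math.sin(t)) + ambient)
    return normals, intens

# Two people composited from photos lit from different directions:
a = estimate_light_angle(*synthetic_face(30))
b = estimate_light_angle(*synthetic_face(90))
# A large disagreement between the two angles is evidence of compositing.
```

In the published version of this technique (Johnson and Farid's lighting-direction work), the normals come from the occluding contour of each person in the photo rather than from synthetic data, but the fit-then-compare logic is the same.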
Hannah Murphy The voter knew it was fake, but just having seen it was enough to change the way he voted. Hany Farid And I'll tell you why this is so powerful, because it actually haunts me. Because what is my job? My job is to detect manipulated images. Well, what if that doesn't actually help set the record straight? And that just shows you the scale and the complexity of this problem, right? Because everything we've been talking about is technology. Yeah, but there's a very human part of this, right? And the very human part is incredibly complicated and subtle and fragile. And that is something that I just . . . really, it haunts me: how do we deal with that very human aspect of disinformation? [MUSIC PLAYING] Hannah Murphy Welcome to Tech Tonic from the Financial Times. I'm Hannah Murphy, and I'm a technology reporter for the FT covering social media out of San Francisco. Last episode, we heard how deepfake technology is improving at an exponential rate. Anyone with access to the internet and a few dollars to spare can conjure up fake images, fake videos, and fake audio clips of pretty much anything. So in an era where seeing is not necessarily believing, how do we separate truth from fabrication? And what's being done to limit the proliferation of deepfakes? Last episode, we spoke a lot about how deepfakes can get in the way of politics. The idea of fake audio, video and images undermining some of the most powerful people and processes on the planet. But in reality, that's not what most of the deepfakes flooding the internet today are actually about. As with most things online, deepfakes started with porn. Anita I'm . . . it's Anita. I'm known as Sweet Anita online and I have Tourette syndrome. So if I swear at you, I probably don't mean it. Audio clip If you're watching and you know it, clap your hands . . . Hannah Murphy Anita is a content creator online and by the way, her work is nothing to do with porn. Audio clip Listen, listen. 
Don't have very high expectations for this particular stream, OK? Hannah Murphy She's on YouTube and TikTok, but her main outlet is the streaming platform Twitch. Anita It's where I have the most fun. It's what I look forward to doing every day. And it's kind of the centre of everything I do and my main presence on the internet. Hannah Murphy She has almost 2mn followers on there, and pretty much every day she livestreams herself gaming, chatting and feeding her rabbits. This is how Anita makes her living and it also makes her relatively famous in the Twitch streaming world. [STREAM CLIP PLAYING] So this is what happened to her. Back at the beginning of 2023, Anita noticed some sort of controversy bubbling up in the Twitch space. Anita I was on the internet one day just scrolling through Twitter, and I saw that another content creator had gotten into a little bit of hot water. Hannah Murphy Viewers of a fellow male Twitch streamer had noticed in one of his streams that he had a tab open in his web browser that hosted deepfake porn, and in particular, deepfake porn apparently of famous online content creators. Anita This really gave me a sinking feeling when I saw it all. So I was like, wait, I'm not on these websites, am I? So I googled my name and the name of the website, and suddenly got a massive montage of pictures of me in sexual positions I'd never been in, with people I'd never met, and doing things that I would never do, let alone do in front of a camera. And there was so much of it. I had no idea there was this massive portfolio of porn of me out there. Hannah Murphy How did you feel when you first saw these images? What was that like? Anita There's so many things to feel at once. So there was a part of me that was just like, I don't know whether I even want people to know how I feel about this, because it might just make it worse. And then there was another part of me that was just profoundly angry. 
I should have a choice whether or not I sexually engage with people. No one should be able to sexually solicit my body without my consent. There was a guy profiting off of people wanting to sexually engage with me who never asked my permission, never gave a second thought to how it would affect me physically or mentally or socially or in my career to have this all floating around. Hannah Murphy Unfortunately, Anita is far from being the only person to have been left feeling this way. Pornography is actually where deepfakes inflict most harm. According to a 2023 study by one US cyber security firm, 98 per cent of all deepfake videos are pornography. In fact, the term deepfake even traces back to porn. It's a portmanteau of deep learning and fake, and it was the name of a Reddit thread which, back in 2017 when this technology was just getting started, was used to share deepfakes of celebrities face-swapped on to porn videos. And as experienced by Anita, that's still predominantly what the technology is being used for. But one of the things that struck her most in the whole experience wasn't that someone had created deepfake porn in her likeness, it was how hard it was to do something about it afterwards. Anita I immediately collaborated with a few other streamers, and we started investigating what legal action we could potentially take against the website and how to get it taken down, and those sorts of things. So that was my immediate next step: I hoped there was a way that we could get justice for this. Hannah Murphy The main website that hosted them did actually remove the deepfakes, but Anita had already found several other websites featuring deepfake porn of her. And at the time, making and sharing deepfake porn wasn't a crime. Anita This stage of internet and people having these careers is a relatively new thing. The law hasn't caught up, and so there was no one to turn to to ask for advice on how to deal with this. 
Hannah Murphy She wrote to politicians in the UK to campaign to get the law changed. She ended up getting invited to the Houses of Parliament to speak, and in the months that followed, the UK actually announced a law that will make sharing deepfake pornography illegal. But in the meantime, the deepfakes of Anita continued to float around the internet. Anita found firms that offer to manage the footprint of nonconsensual sexual imagery. The process consists of scraping the entire internet, finding out which websites are hosting the images or videos, and then sending out legal letters to those sites asking them to take the images down. As you can imagine, it is labour-intensive and never-ending. Every few months, the lawyers have to repeat the whole process. And that's not even taking into account pornography that might exist on the darker, harder-to-reach corners of the internet. And when Anita looked into getting this kind of help, she was put off by the price. Anita It's between 1,500 for the lower end to 3,000 or 4,000 a month on the higher end. And that puts me at a financial disadvantage compared to male streamers. Like, they get to spend their earnings on making better content. You know, and I just wasn't willing to accept that disadvantage to my peers in terms of creating content. Hannah Murphy Everyday people, overwhelmingly women, are finding that they have been made victims of deepfake pornography. And it really is astoundingly easy to make. One in every three deepfake tools allows users to create explicit deepfakes, and you can do so for free. To create a 60-second pornographic video, all you need is one image of someone's face. And there have now been several reports of teenagers showing deepfake sexual images of their classmates. It's now been over a year since Anita discovered she had been a victim of deepfake pornography, and it still has a daily impact on her life. Anita I get sent lots of videos and pictures that have been made of me. 
I get sent the deepfakes as a way to taunt me because they know that it makes me uncomfortable. I also get people who endlessly treat me in a weird and sexual way in DMs and things like that, and, you know, sending dick pics unsolicited and all this stuff. And in the future, if I decide I want to do any other job, now employers look you up on the internet. If I ever need to change jobs for any reason whatsoever, it's gonna be incredibly hard because in advance, like, if I go, oh, by the way, if you find porn of me, it's not real. Hannah Murphy That seems to be the thing with a fake image or video. Once it's out there, once people have seen it, its repercussions live on. And that's true of porn and of other forms of deepfakes, just like the voter who wouldn't vote for John Kerry after seeing the image of him with Jane Fonda. So if currently we're living in an era where the access to making deepfakes is fully democratised, what guardrails can be put in place to protect future victims of deepfake pornography and to prevent AI-synthesised media from influencing people's political beliefs? [MUSIC PLAYING] So are there tools out there that can safeguard against the harms of deepfakes? Or is generative AI developing too rapidly for us to get it under control? Here's Hany Farid. Hany Farid Somebody asked me the other day if I'm scared of AI, and I said, no, but I am scared of capitalism. Hannah Murphy Farid says deepfakes are part of a wider trend -- big tech companies and Silicon Valley start-ups rapidly developing AI without thinking about the unintended consequences of the technology. Hany Farid Really, the issue here is not the technology itself, but it's that everybody is racing as fast as possible to beat everybody else to make the next trillion-dollar company. And without the proper guardrails from our government to mitigate the harms of that, I think capitalism is gonna do what capitalism does, which is burn the place to the ground in order to win. 
And that worries me. Hannah Murphy When it comes to deepfakes, Farid says there are things that we can do to mitigate the potential harms. For example, there are steps that can be taken to stop deepfakes being shared on social media or to shut down sites that create deepfake porn. Hany Farid There are entire web services that you don't have to go any further than a Google search to find, that literally say they will "nudify" women in images. They're not hiding. They're not pretending. Those web services absolutely have a responsibility. Google has a responsibility for the fact that they're surfacing these websites. Oh, and by the way, these websites are being supported by Visa, Mastercard, American Express, the monetisation side as well. And so we should go after the infrastructure for the people who allow this stuff to be created. Hannah Murphy We put these comments to the companies mentioned. Google said that it's made recent updates to its search engine to combat deepfakes, including changes to its ranking system. But they also said there's more work to be done to tackle nonconsensual deepfake imagery. The other companies did not reply to our requests for comment. When it comes to deepfakes more generally and the world of politics and disinformation, it gets more difficult because Farid argues that making a deepfake isn't intrinsically wrong. Hany Farid The disinformation side is where things get tricky, right? Particularly around elections, right? We can't say you can't make things up. That's a dangerous line that I don't think we want to cross. So we have to find a balance between a free and open society where we get to criticise and make fun of and ridicule our politicians. I think that's fine. I think it's fine to make a video of Joe Biden or Donald Trump that ridicules them. I think that should be protected. But I think there's also a line where you've gone too far. 
Hannah Murphy If you don't want to ban deepfakes completely, one solution could be to label them as fake from the moment they're made -- a feature called watermarking. Hany Farid The big generative AI companies -- the Midjourneys, the Stable Diffusions, the OpenAIs and the Adobes -- can watermark and fingerprint every single piece of content that they create. Hannah Murphy He's talking about a digital marker built into the metadata of the content, showing it has been made using AI technology. An organisation called the Coalition for Content Provenance and Authenticity, or C2PA, has been working on this for the past couple of years, and their standard is already being used by some of the big AI companies. In fact, Silicon Valley groups such as OpenAI and social media companies have already started adopting watermarking standards. Farid says watermarks are like nutrition labels. Hany Farid If you go to the grocery store, you can buy all kinds of food that are bad for you, but the label is going to tell you that this is bad for you, right? And I think it's the same way with information. We should just tell people what this information is. This is an authentic recording of Joe Biden. This is a fake audio. And obviously if it crosses a line to illegal activity, then it gets taken down. Hannah Murphy But the thing is, watermarking can be easily hacked, removed, or altered. If you screenshot an image or if you upload it to social media, it strips the data. And if someone is intent on making a deepfake for nefarious purposes, you could find tools on the dark web that don't even use watermarks. So what else can be done? Several start-ups have developed deepfake detection software. AI programs that aim to flag a piece of deepfake content, or at least try to. Typically, they analyse media looking for anomalies and spit out a probability that something is a deepfake. But the problem is that the technology behind deepfakes is constantly improving. So it's a cat-and-mouse game. 
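The fragility of metadata-based labels can be shown concretely. The sketch below builds a minimal PNG carrying a provenance label in a text chunk (a stand-in for C2PA-style provenance data, not the real C2PA format), then simulates what a screenshot or a re-encoding upload effectively does: keep the pixels, drop everything else.

```python
import struct
import zlib

def png_chunk(ctype, data):
    """One PNG chunk: length, type, data, CRC over type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_label(label):
    """A valid 1x1 grey PNG carrying a provenance label in a tEXt chunk."""
    sig = b"\x89PNG\r\n\x1a\n"
    # width=1, height=1, bit depth=8, greyscale, default compression/filter
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    text = png_chunk(b"tEXt", b"Source\x00" + label)
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x80"))  # filter byte + pixel
    iend = png_chunk(b"IEND", b"")
    return sig + ihdr + text + idat + iend

def strip_metadata(png):
    """Keep only the chunks needed to display the image, as a
    screenshot or re-encoding upload effectively does."""
    out, pos = png[:8], 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype in (b"IHDR", b"IDAT", b"IEND"):
            out += png[pos:pos + 12 + length]
        pos += 12 + length
    return out

labelled = make_png_with_label(b"synthetic:made-with-ai")
stripped = strip_metadata(labelled)
assert b"made-with-ai" in labelled
assert b"made-with-ai" not in stripped  # the pixels survive, the label doesn't
```

C2PA's actual manifests are cryptographically signed and richer than a text chunk, but the basic failure mode illustrated here is the same: anything that re-encodes only the visible image discards metadata that rides alongside the pixels.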
Hany Farid There are, you know, dozens of websites out there that claim to be able to do this. I think you should be very careful using those. They are hit or miss sometimes and you don't know how reliable they are. And we've absolutely seen them misfire and claim that real things are fake and vice versa. Hannah Murphy Farid says detection software could have some role to play in the future, as could watermarking and other technical solutions. But he says a crucial part of the solution is regulation. If we can't stop deepfakes completely, technology and social media companies need to be pushed to take more responsibility for curbing them. Hany Farid Our regulators continue to fall asleep at the wheel when it comes to regulating the technology sector. They have had 20 years to get their act together and they have refused to. It's particularly bad here in the United States, a little bit better in the EU and a little bit better in the UK, but our regulators have got to start taking this more seriously as it affects individuals, both children and adults. Hannah Murphy But the thing with deepfakes is that they are not an isolated technology. They're just one corner of the broader technological change of generative AI, which some observers believe will completely reshape society. Nina Schick We're at the dawn of what can only be conceived of as a new industrial revolution. Hannah Murphy This is Nina Schick. She works on AI policy and is the author of a book on deepfakes. She says the challenges we're currently facing with deepfakes are just a taste of the challenges we'll face with AI. Nina Schick We live in a world where exponential technology is gonna reshape the economy. It's gonna reshape the labour market. It's gonna reshape what and how we think about knowledge and learning. And that isn't something that's happening in 100 years' time or 50 years' time. It's happening in our lifetime. 
Hannah Murphy These questions, like how should the technology be used, how can it be controlled, will have to be answered for AI as a whole, not just for deepfake technology. But whatever the answers, Schick says one solution shouldn't be to ban the technology completely or restrict who can use it. Even if people misuse it, AI technology is just too important. Nina Schick I don't think that the argument that people will do bad things with a really potent technology is strong enough to mean that everybody should not have access to what essentially is gonna be the most powerful technology for intelligence that we've ever seen. And certainly, I don't think that this should just be in the hands of a few powerful tech companies. You could even argue that it's futile, right? To say that we want to control this technology, it should be in a walled garden, that's simply not the reality of what's happening. If you look at the history of any general purpose technology, the rule is proliferation. Ultimately it proliferates. Whether that's fire, whether it's the industrial revolution, whether it's computing, there is no walled garden where you can keep these kinds of advances behind. So given that that's just the nature of technology, given that there's a very important argument to keep AI, in particular, open-source, I don't think it's realistic or compelling to argue that this should just be in the hands of a few. [MUSIC PLAYING] Hannah Murphy In a sense, the challenges with deepfakes are common to all kinds of new technology. Whether it's the internet or social media or generative AI, people will find bad things to do with it. So will the array of tools being developed, like watermarking, detection software and regulation, actually do enough to curb the nefarious uses of deepfake technology? It might be too early to tell, but for me it feels like it's falling short. 
But they're challenges we'll be facing more and more if generative AI has the seismic and widespread impact it's predicted to have. In the meantime, maybe we need to get used to an internet where we can't believe what we see or hear. The technology that makes deepfakes possible isn't going back in the box. We're already living in a world where the distinction between real and fake is becoming progressively blurred. You've been listening to Tech Tonic from the Financial Times with me, Hannah Murphy. I've put some free links related to the episode in the show notes. Do check them out and do leave us a review. It helps spread the word. This series of Tech Tonic is produced by Persis Love. Edwin Lane is the senior producer. Manuela Saragosa is the executive producer. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley.
[2]
The trouble with deepfakes: Beyond control?
Anita was scrolling on Twitter when she found someone had made deepfake porn of her, without her permission. But that was just the start of her problems; she found it was difficult and expensive to get the deepfakes taken down and nigh-on impossible to prevent their proliferation online. So, what guardrails can regulators and tech companies put in place to prevent the spread of deepfakes and protect those whose likeness has been stolen without their consent? Technological fixes, such as deepfake detection software and deepfake watermarking, exist, but can the technology keep up with the ever-improving capacities of generative AI? Host Hannah Murphy speaks to Hany Farid, digital forensics expert at the University of California, Berkeley; Nina Schick, CEO and founder of Tamang Ventures, and author; and Sweet Anita, Twitch streamer. Tell us what you think of Tech Tonic and you could be in with a chance to win a pair of Bose QuietComfort 35 Wireless Headphones. Complete the survey here.
Want more?
Google upgrades search in drive to tackle deepfake porn
India tells tech giants to police deepfakes under 'explicit' rules
Political deepfakes top list of malicious AI use, DeepMind finds
Clips: sweet_anita Twitch
This series of Tech Tonic is presented by Hannah Murphy. The producer is Persis Love. The senior producer is Edwin Lane. Our executive producer is Manuela Saragosa. Additional production help from Josh Gabert-Doyon. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley.