© 2024 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On September 26, 2024
6 Sources
[1]
Senator lured into deepfake call with 'malign actor' posing as Ukrainian
The person on the call looked and sounded like the ex-Ukrainian foreign minister, but asked odd questions.

The chair of the Senate Foreign Relations Committee was lured into a video call with a "malign actor" probably using "deepfake" artificial intelligence technology to pose as a top Ukrainian official, lawmakers and congressional aides said Thursday.

Sen. Ben Cardin (D-Md.) was contacted via email last week by someone posing as Dmytro Kuleba, the former Ukrainian foreign minister, to arrange a conversation over Zoom. On the video call with the senator, the person's voice and appearance matched Kuleba's, but Cardin grew suspicious when the man asked out-of-character questions related to the upcoming election, according to two Senate aides who provided details of the event on the condition of anonymity because they were not authorized to talk to the media. The person purporting to be Kuleba also asked whether the senator supported providing long-range missiles in the Ukraine-Russia conflict. The call was first reported by Punchbowl News.

"In recent days, a malign actor engaged in a deceptive attempt to have a conversation with me by posing as a known individual," Cardin said in a statement. "After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities. This matter is now in the hands of law enforcement, and a comprehensive investigation is underway."

The incident has raised concerns that more lawmakers could be targeted by sophisticated "deepfake" technology that allows people to impersonate the voice and appearance of political figures. Committee staff have been instructed to exercise an extra degree of caution with external communications, paying particularly close attention to the phone numbers and email addresses of individuals claiming to be powerful people.
Cardin, who is retiring at the end of this year, is not the only elected official to fall prey to this type of scheme. Earlier this year, then-U.K. Foreign Secretary David Cameron revealed that he had been drawn into a fake video call with someone pretending to be Petro Poroshenko, the former president of Ukraine. The mayors of several European cities were also lured into a video call with someone pretending to be the mayor of Kyiv, the Guardian reported at the time.

Kuleba has strong ties to lawmakers and senior U.S. officials dating back to the start of Russia's full-scale invasion in 2022. Many have come to know Kuleba and his mannerisms, potentially making him more difficult to impersonate than others. In a statement, Kuleba said he was "99 percent sure" the deepfake was initiated by "Russian pranksters," and warned people to stay alert. "The best thing you can do to avoid getting trapped in the deepfake is to always verify the source and not tell the truth to strangers," he wrote in Ukrainian.

U.S. officials have warned that foreign actors are using deepfake technology to sow discord and misinformation. At the start of the Russian invasion of Ukraine, a deepfake of President Volodymyr Zelensky appeared online telling Ukrainians to surrender.
[2]
US senator targeted by deepfake caller posing as Ukrainian diplomat
FBI investigating call in which AI appearing to be Dmytro Kuleba asked Ben Cardin 'politically charged questions'

A deepfake "actor" imitating Ukraine's recently departed foreign minister targeted the chairman of the Senate's powerful foreign relations committee in a suspected attempt at election interference, US officials have confirmed.

Ben Cardin, the Democratic senator for Maryland, grew suspicious during a pre-arranged Zoom call on 19 September with a person posing as Dmytro Kuleba, who stepped down as Ukraine's top diplomat in a government reshuffle this month. The individual presumed to be Kuleba had contacted Cardin's office by email requesting a video meeting. The two men had met previously. "[W]hen they connected on Zoom, it appeared to be a live audio-video connection that was consistent in appearance and sound to past encounters," according to a notice issued by the Senate's security office.

But Cardin - who is himself retiring as a senator at the end of the year - sensed a trap when the individual purporting to be Kuleba started asking "politically charged questions in relation to the upcoming election", continued the notice, which did not name the senator involved. Cardin's identity was confirmed by Punchbowl News, which first reported the story.

The notice added that the person, whose face and voice were consistent with Kuleba's, "began acting out of character and firmly pressing for responses to questions like 'Do you support long range missiles into Russian territory? I need to know your answer.'"

Cardin promptly ended the call and alerted the US state department, which confirmed that the individual was not Kuleba. The matter is now being investigated by the FBI, which has not commented. Senate security officials believe the voice and image resembling Kuleba were generated by artificial intelligence. The Senate security office said the impersonation had "technical sophistication and believability".

While there was no confirmation of who might have been responsible, the concern over Ukrainian missiles points the finger at Russia. Vladimir Putin warned on Wednesday that Russia would consider using nuclear weapons in response to a concerted Ukrainian missile attack on its territory.

In a statement, Cardin confirmed he had been contacted by a "malign actor" who he said "engaged in a deceptive attempt to have a conversation with me by posing as a known individual". The statement added: "After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities."

A second Senate security notice, from the cybersecurity awareness center, warned "of an active social engineering campaign ... that is targeting senators and Senate staff". "Targets are contacted by threat actors posing as representatives of a foreign dignitary requesting an official video call that is, in reality, malicious," said the notice. "This technique is used to discredit the victim or gain additional information. Threat actors leverage existing relationships and other known information to appear legitimate."
[3]
'Deepfake' Caller Poses as Ukrainian Official in Exchange With Key Senator
A "deepfake" caller posed as a top Ukrainian official in a recent videoconference with Senator Benjamin L. Cardin, the chairman of the Foreign Relations Committee, renewing fears that lawmakers could become the targets of malign actors seeking to influence U.S. politics or to obtain sensitive information. According to an emailed warning sent by Senate security officials to lawmakers' offices and obtained by The New York Times, a senator's office received an email last Thursday that appeared to be from Dmytro Kuleba, until recently Ukraine's foreign minister, requesting to connect over Zoom. On the subsequent video call, the person looked and sounded like Mr. Kuleba. But the senator grew suspicious when the figure posing as Mr. Kuleba started acting out of character, the Senate security officials wrote, asking "politically charged questions in relation to the upcoming election" and demanding an opinion on sensitive foreign policy questions, such as whether the senator supported firing long-range missiles into Russian territory. The senator ended the call and reported it to State Department authorities, who confirmed that the figure who appeared to be Mr. Kuleba was an impersonation. Though the Senate security office's email did not specify that the senator was Mr. Cardin, two Senate officials familiar with the matter confirmed that he was the senator in question. Mr. Cardin, a Maryland Democrat, also partially confirmed the episode in a statement Wednesday night. In it, he acknowledged that "in recent days, a malign actor engaged in a deceptive attempt to have a conversation with me by posing as a known individual." Mr. Cardin did not say the individual was Mr. Kuleba or make any reference to Ukraine. The operation was reported earlier by Punchbowl News. Deepfake video technology uses artificial intelligence to create video of fictitious people who look and sound real. 
The technology has sometimes been used to impersonate public figures, including a video that circulated on social media in 2022 falsely showing President Volodymyr Zelensky of Ukraine announcing a surrender in the war with Russia.
[4]
Sophistication of AI-backed operation targeting senator points to future of deepfake schemes
Washington (AP) -- An advanced deepfake operation targeted Sen. Ben Cardin, the Democratic chair of the Senate Foreign Relations Committee, this month, according to the Office of Senate Security, the latest sign that nefarious actors are turning to artificial intelligence in efforts to dupe top political figures in the United States. Experts believe schemes like this will become more common now that the technical barriers that once existed around generative artificial intelligence have decreased.

The notice from Senate Security sent to Senate offices on Monday said the attempt "stands out due to its technical sophistication and believability." The scheme centered on Dmytro Kuleba, the former Ukrainian Minister of Foreign Affairs. Cardin's office received an email from someone they believed to be Kuleba, according to the notice, an official Cardin knew from a past meeting. When the two met for a video call, the connection "was consistent in appearance and sound to past encounters."

It wasn't until the caller posing as Kuleba began asking questions like "Do you support long range missiles into Russian territory? I need to know your answer," that Cardin and his staff suspected "something was off," the Senate notice said. "The speaker continued, asking the Senator politically charged questions in relation to the upcoming election," likely to try to bait him into commenting on a political candidate, according to the notice from Nicolette Llewellyn, the director of Senate Security. "The Senator and their staff ended the call, and quickly reached out to the Department of State who verified it was not Kuleba."

Cardin on Wednesday described the encounter as "a malign actor engaged in a deceptive attempt to have a conversation with me by posing as a known individual." "After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities," Cardin said. "This matter is now in the hands of law enforcement, and a comprehensive investigation is underway." Cardin's office did not respond to a request for additional information.

Generative artificial intelligence can use massive computing power to digitally alter what appears on a video, sometimes changing the background or subject of a video in real time. The same technology can also be used to digitally alter audio or images.

Technology like this has been used in nefarious schemes before. A finance worker in Hong Kong paid $25 million to a scammer who used artificial intelligence to pose as the company's chief financial officer. A political consultant used artificial intelligence to mimic President Joe Biden's voice and urge voters not to vote in New Hampshire's presidential primary, leading the consultant to face more than two dozen criminal charges and millions of dollars in fines. And experts on caring for older Americans have long worried artificial intelligence-powered deepfakes will supercharge financial scams targeting seniors.

Both security officials in the Senate and artificial intelligence experts believe this could be just the beginning, given that recent leaps in the technology have made schemes like the one against Cardin not only more believable, but easier to conduct. "In the past few months, the technology to be able to pipe in a live video deepfake along with a live audio deepfake has been easier and easier to integrate together," said Rachel Tobac, a cybersecurity expert and the CEO of SocialProof Security, who added that earlier iterations of this technology had obvious tells that they were fake, from awkward lip movement to people blinking in reverse.

"I am expecting more of these kinds of incidents to happen in the future," said Siwei Lyu, an artificial intelligence expert and professor at the University at Buffalo. "Anyone with some kind of malicious intent in their mind now has the ability to conduct this kind of attack. These could come from the political angle, but it could also come from the financial angle like fraud or identity theft."

The memo to Senate staff echoed this sentiment, telling the staffers to make sure meeting requests are authentic and cautioning that "other attempts will be made in the coming weeks."

R. David Edelman, an expert on artificial intelligence and national security who led cybersecurity policy for years in the White House, described the scheme as a "sophisticated intelligence operation" that "feels quite close to the cutting edge" in how it combined the use of artificial intelligence technology with more traditional intelligence operations that recognized the connections between Cardin and the Ukrainian official. "They recognized the existing relationship between these two parties. They knew how they might interact - timing, mode, and how they communicate," he said. "There is a sophistication to the intelligence operation."
[5]
Sophistication of AI-backed operation targeting senator points to future of deepfake schemes
[6]
Sophistication of AI-backed operation targeting senator points to future of deepfake schemes
Senator Ben Cardin of Maryland was tricked into a video call with a deepfake impersonator posing as a Ukrainian official. The incident highlights the growing threat of AI-powered deception in politics and international relations.
In a startling incident that underscores the evolving landscape of digital deception, U.S. Senator Ben Cardin of Maryland was targeted by a sophisticated deepfake operation. The Democratic lawmaker was lured into a video call with someone he believed to be Dmytro Kuleba, Ukraine's recently departed foreign minister [1].
On the call, which took place on September 19, 2024, the impersonator's appearance and voice were consistent with Kuleba's, and Cardin ended the conversation only after the caller began pressing out-of-character, politically charged questions [2]. The deepfake was convincing enough to pass as a live audio-video connection, demonstrating the potential for AI to create highly convincing audio and visual forgeries.
This incident has raised serious concerns about the implications of such technology for national security and diplomatic relations. Experts warn that as AI continues to advance, distinguishing between genuine and fabricated content will become increasingly challenging [3].
Following the incident, Senator Cardin's office promptly alerted the relevant authorities, including the State Department, which confirmed that the caller was not Kuleba. The FBI has launched an investigation into the matter, focusing on identifying the perpetrators and their motives [4].
This is not an isolated incident. In recent months, several European politicians, including then-U.K. Foreign Secretary David Cameron and the mayors of several European cities, have reported similar deepfake impersonations. The trend highlights a growing concern about the use of AI in disinformation campaigns and political manipulation [1].
In light of these events, there are renewed calls for legislation to address the threats posed by deepfake technology, along with stronger regulations and improved detection methods to combat AI-powered deception.
As AI technology continues to advance, experts predict that deepfake incidents will become more frequent and harder to detect. This raises important questions about the future of digital communication, particularly in sensitive areas such as diplomacy and national security [3].
In response to the growing threat, there is a push for increased public awareness and education about deepfake technology. Cybersecurity experts emphasize the importance of critical thinking and verification processes when engaging in digital communications, especially for high-profile individuals and organizations [5].