Curated by THEOUTPOST
On Tue, 24 Dec, 4:04 PM UTC
2 Sources
[1]
AI's Most Controversial Personalities Who Made Headlines in 2024
Scarlett Johansson publicly expressed her frustration after discovering that OpenAI had created a voice for its chatbot that she felt resembled hers too closely.
While 2024 has been a year of progress, it has also been a hotbed of bold claims and heated controversies within the world of AI. This article looks back at the most controversial figures in AI in 2024 - individuals who stirred debates, challenged norms and redefined what AI can and should do.
German computer scientist Jürgen Schmidhuber, known for his work on recurrent neural networks, has often argued that he and other researchers have not received adequate recognition for their contributions to deep learning. Instead, he claimed, Geoffrey Hinton, Yann LeCun and Yoshua Bengio have received disproportionate credit. Most recently, he alleged that Hinton's Nobel Prize is based on uncredited work, claiming that the contributions of Hinton and his co-laureate John Hopfield drew heavily on existing research without adequate acknowledgement. "This is a Nobel Prize for plagiarism," Schmidhuber wrote on LinkedIn. He argued that methodologies developed by Alexey Ivakhnenko and Shun'ichi Amari in the 1960s and 1970s, respectively, formed the foundation of the laureates' work. "They republished methodologies developed in Ukraine and Japan without citing the original papers. Even in later surveys, they didn't credit the original inventors," Schmidhuber said, suggesting that the omission may have been intentional.
Rosalind Picard, a professor at the MIT Media Lab, recently faced controversy over allegedly discriminatory remarks about Chinese students made during a keynote speech at NeurIPS 2024. During her presentation, Picard mentioned an incident involving a Chinese student who had been expelled, which drew criticism for appearing to single out nationality and reinforce harmful stereotypes. The episode prompted apologies from both Picard and the NeurIPS organisers and sparked discussions about inclusivity and respect within the AI research community.
Bhavish Aggarwal, founder of Ola and its AI venture Krutrim, made several notable statements about AI this year that sparked discussions. Earlier this year, Aggarwal framed India's AI development in terms of data sovereignty and criticised what he called "techno-colonialism", his term for the exploitation of developing countries by global tech giants. "India generates the largest amount of digital data in the world, but all of it is sitting in the West...They take our data out, process it into AI and then bring it back and sell it in dollars to us. It's the same East India Company all over again," he said. His remarks triggered controversy, as critics noted that much of Ola's early funding had come from global investment firms. Meanwhile, after LinkedIn's AI tool referred to him using gender-neutral pronouns, Aggarwal announced that Ola would shift from Microsoft's Azure cloud platform to its own Krutrim cloud. He also called on other Indian companies to follow suit, which some interpreted as promoting anti-Western sentiment in the tech industry.
Former Google CEO Eric Schmidt recently made several notable statements about AI and its potential risks. In interviews with ABC News and PBS, Schmidt warned that AI systems could reach a "dangerous point" once they can self-improve, suggesting that we need to consider "unplugging" them at that stage. He expressed concern about computers running autonomously and making their own decisions, and called for human oversight to maintain "meaningful control" over autonomous weapons.
Mira Murati, former chief technology officer of OpenAI, found herself at the centre of controversy in March over the training data for Sora, OpenAI's new text-to-video AI model. During an interview with The Wall Street Journal, Murati was asked about the specific sources of data used to train Sora. She said the model was trained on "publicly available and licensed data". However, when asked whether content from platforms like YouTube, Instagram or Facebook was used to train the model, she responded with uncertainty, saying, "I'm actually not sure about that. I'm not confident about it."
This year, Hoan Ton-That, CEO and co-founder of Clearview AI, remained a controversial figure in the AI industry due to his company's facial recognition technology and its practices. Clearview AI faced significant legal issues, including a €30.5 million fine from the Dutch Data Protection Authority for maintaining an "illegal database" of billions of facial images. The company was also warned of additional penalties of up to €5.1 million for failing to comply with EU data protection laws. Despite these challenges, Ton-That defended the company, asserting that it only uses publicly available online data and comparing its approach to Google's photo search. He argued that Clearview's technology plays a crucial role in law enforcement, citing its use in investigations into the January 6 Capitol riot.
Earlier this year, Scarlett Johansson became embroiled in a major controversy over the alleged unauthorised use of her voice by OpenAI in ChatGPT. The issue surfaced when OpenAI unveiled a new voice called 'Sky', which many users noticed sounded strikingly similar to Johansson's voice from her role in the movie 'Her'. The situation escalated when Johansson publicly expressed her frustration after discovering that OpenAI had created a voice for the chatbot that she felt resembled hers too closely. This came despite her having declined an offer from OpenAI in September 2023 to lend her voice to the project. Upon hearing a demo of the new voice, Johansson was reportedly shocked and upset, leading her to demand that OpenAI halt its use.
Earlier this year, Prabhakar Raghavan, Google's chief technologist, faced criticism over the company's Gemini AI image generation feature. The controversy stemmed from Gemini producing historically inaccurate and overly diverse images in response to prompts about specific historical figures and events. For example, when prompted to depict the Founding Fathers of the United States, the AI generated images that included individuals from various ethnic backgrounds, which did not align with historical records. Raghavan admitted that the feature had fallen short and apologised for the inaccuracies, explaining that the model had been adjusted to promote diversity in its outputs, which occasionally resulted in overcorrection.
Elon Musk has a love-hate relationship with OpenAI. The tech billionaire recently filed for a preliminary injunction to stop OpenAI from switching to a for-profit model. Musk, who co-founded OpenAI, accused the company of antitrust violations and of betraying its founding principles. His lawsuit, which now includes Microsoft as a defendant, argues that OpenAI has moved away from its original nonprofit mission of using AI research to benefit humanity. In response, OpenAI released emails and documents from 2017 showing that Musk had supported a for-profit structure and even sought majority control of the company. OpenAI CEO Sam Altman has publicly called Musk "a clear bully".
[2]
AI Personalities Who Sparked Controversy in 2024
A comprehensive look at the key personalities who stirred debates and controversies in the AI world during 2024, highlighting issues ranging from intellectual property disputes to ethical concerns and data privacy.
The year 2024 witnessed significant advancements in artificial intelligence, accompanied by a series of controversies involving key figures in the field. These debates ranged from intellectual property disputes to ethical concerns and data privacy issues, highlighting the complex challenges facing the AI industry [1][2].
German computer scientist Jürgen Schmidhuber sparked controversy by alleging that Geoffrey Hinton's Nobel Prize was based on uncredited work. Schmidhuber claimed that Hinton and Hopfield's contributions were heavily influenced by existing research without adequate acknowledgment, going so far as to call it "a Nobel Prize for plagiarism" [1][2].
Rosalind Picard, a professor at the MIT Media Lab, faced backlash over alleged discriminatory remarks made during a keynote speech at NeurIPS 2024. The incident, which involved comments about a Chinese student, led to apologies from both Picard and the conference organizers, igniting discussions about inclusivity in the AI research community [1][2].
Bhavish Aggarwal, founder of Ola and its AI venture Krutrim, stirred debate with his statements on data sovereignty and "techno-colonialism." He criticized global tech giants for exploiting developing countries' data, comparing it to colonial-era practices. Aggarwal's decision to shift from Microsoft's Azure to Ola's own cloud platform further fueled discussions about data ownership and control [1][2].
Former Google CEO Eric Schmidt raised concerns about AI systems reaching a "dangerous point" of self-improvement. In interviews, he emphasized the need for human oversight and the possibility of "unplugging" AI systems to maintain control, particularly in the context of autonomous weapons [1][2].
Mira Murati, former CTO of OpenAI, found herself at the center of controversy regarding the training data for Sora, OpenAI's text-to-video AI model. Her uncertainty about the specific sources of training data raised questions about transparency and data usage in AI development [1][2].
Hoan Ton-That, CEO of Clearview AI, continued to face legal challenges due to his company's facial recognition technology. Despite significant fines and warnings from EU authorities, Ton-That defended Clearview's practices, arguing for the technology's importance in law enforcement [1][2].
Actress Scarlett Johansson became embroiled in a controversy with OpenAI over the alleged unauthorized use of her voice in ChatGPT. The dispute highlighted the growing tensions between AI companies and individuals over intellectual property rights and consent in the digital age [1][2].
These controversies underscore the multifaceted challenges facing the AI industry as it continues to evolve and expand its influence across various sectors of society. As AI technology advances, the need for clear ethical guidelines, transparent practices, and robust regulatory frameworks becomes increasingly apparent.
References
[1] AI's Most Controversial Personalities Who Made Headlines in 2024
[2] AI Personalities Who Sparked Controversy in 2024