Curated by THEOUTPOST
On Mon, 14 Oct, 12:00 AM UTC
6 Sources
[1]
Meta's AI chief is right to call AI fearmongering 'BS' but not for the reason he thinks
AI is the latest technology monster scaring people about the future. Legitimate concerns around things like ethical training, environmental impact, and AI-powered scams morph all too easily into nightmares of Skynet and the Matrix. The prospect of AI becoming sentient and overthrowing humanity is raised frequently, but, as Meta's AI chief Yann LeCun told The Wall Street Journal, the idea is "complete B.S." LeCun described AI as less intelligent than a cat and incapable of plotting, or even desiring anything at all, let alone the downfall of our species.

LeCun is right that AI is not going to scheme its way into murdering humanity, but that doesn't mean there's nothing to worry about. I'm much more worried about people relying on AI to be smarter than it is. AI is just another technology, which means it's neither good nor evil. But the law of unintended consequences suggests that relying on AI for important, life-altering decisions is a bad idea.

Think of the disasters and near disasters caused by trusting technology over human judgment. The rapid-fire trading of stocks by machines far faster than any human has caused more than one near meltdown of part of the economy. A much more literal meltdown almost occurred when a Soviet missile detection system glitched and reported that nuclear warheads were inbound; in that case, only a brave human at the controls prevented global Armageddon. Now imagine AI as we know it today trading on the stock market because humans have handed it more comprehensive control. Then imagine an AI accepting a faulty missile alert and being allowed to launch missiles without human input.

Yes, it sounds far-fetched that people would put a technology famous for hallucinating facts in charge of nuclear weapons, but it's not that much of a stretch from what already happens. The AI voice on a customer service line may have decided whether you get a refund before you ever get a chance to explain why you deserve one, with no human listening who can overrule it.

AI will only do what we train it to do, and it uses human-provided data to do so. That means it reflects both our best and worst qualities. Which facet comes through depends on the circumstances. However, handing over too much decision-making to AI is a mistake at any level. AI can be a big help, but it shouldn't decide whether someone gets hired or whether an insurance policy pays for an operation. What we should worry about is humans misusing AI, accidentally or otherwise, to replace human judgment.

Microsoft's branding of its AI assistants as Copilots is apt because it evokes someone there to help you achieve your goals, but who doesn't set them or take any more initiative than you allow. LeCun is correct that AI isn't any smarter than a cat, but a cat with the ability to push you, or all of humanity, off a metaphorical counter is not something we should encourage.
[2]
Will AI grow so powerful it can threaten us? Chief Meta scientist says: 'Pardon my French, but that's complete B.S.' Which is the attitude I would expect from any company with so much skin in the AI game
AI is a worrisome subject, from the potential of LLMs misinforming people en masse through chatbots, to generative AI threatening the careers of many great artists, all the way to Skynet taking over and conquering humanity. Alright, one of those is far less likely to happen than the others, but it's still a debate many are having, and Meta's chief AI scientist reckons we have nothing to worry about.

In a recent interview with The Wall Street Journal, Meta chief AI scientist Yann LeCun was asked whether humans should be afraid of the future of AI. His answer: "You're going to have to pardon my French, but that's complete B.S."

LeCun has an impressive resume in AI, winning a Turing Award in 2018 for his work in deep learning. He has since been proclaimed one of the "godfathers of AI," as the original report points out. That is to say, he has tonnes of experience in the field.

He told the Journal that AI is dumber than a cat, which echoes a recently reported study from Apple scientists on the limitations of LLMs (large language models). That study suggests that LLMs can't reason as humans do, and shows a "critical flaw in LLMs' ability to genuinely understand mathematical concepts and discern relevant information for problem-solving."

This is not LeCun's first time publicly pushing back against fear and anger around AI. In May this year, he had a spat with Elon Musk, in which LeCun questioned not only Musk's promises around xAI (Musk's AI venture) but also Musk's politics. When Musk challenged his credentials, LeCun pointed to the 80 technical papers he had published over the previous two years, to which Musk replied: "That's nothing, you're going soft. Try harder!" Understandably, Musk's arrogance won LeCun plenty of support, as the likes on his original tweets show.

Meta has been exploring its own AI, and users have been expressing their distrust through the viral "Goodbye Meta AI" chain posts. Meta is currently involved in many kinds of AI work, so comparing AI's intelligence to a human's (or a cat's) may miss some of the arguments people are actually making against it. Users' worries about AI don't rest solely on fears of Skynet and AI surpassing human intelligence; much of the fear is cultural, political, legal, and artistic. It doesn't necessarily matter how good or bad AI is in a technical sense if it replaces human art and creativity, and it doesn't need to be smarter than a housecat to do that. Hey, even a housecat can knock your computer off your desk.
[3]
Meta's AI chief says that artificial intelligence is not the apocalypse: it's dumber than a cat - Softonic
We have all read that artificial intelligence will reach a point where it gets rid of us, having realized that humans are a danger to Earth and the future of our planet. Deep down, it wouldn't be entirely wrong. However, Yann LeCun, Meta's chief AI scientist and one of the so-called godfathers of AI, believes such claims are "complete B.S."

LeCun is one of the industry experts who believe the catastrophic rumors about AI are exaggerated. Last year, he described warnings that the technology was a threat to humanity as "ridiculous," and he reiterated his words to The Wall Street Journal last week. "You're going to have to pardon my French, but that's complete B.S.," he told the newspaper when asked whether AI would become intelligent enough to pose a threat to humanity.

Unlike those in the sector who believe Artificial General Intelligence (AGI) will be the next step in the generative AI revolution, LeCun asserts that current LLMs will not lead to a system that matches or surpasses human capabilities across a wide range of cognitive tasks. The New York University professor says that LLMs like ChatGPT and Grok lack persistent memory, reasoning, planning, and an understanding of the physical world. He added that LLMs demonstrate that "you can manipulate language and not be smart," and that they will never lead to true AGI.

LeCun explained to the newspaper that LLMs are not as smart as a domestic cat, but they are very good at predicting the next word in the text they generate, which is what leads people to think they are actually "smart" or even intelligent. LeCun is not entirely against the idea of AGI; he just believes it will not come through more advanced LLMs but from a different kind of AI.

LeCun, Dr. Geoffrey Hinton, and Yoshua Bengio earned the nickname "godfathers of AI" in 2019 after winning the Turing Award. Hinton left his job at Google in 2023, warning that as companies exploit more powerful AI systems, those systems become increasingly dangerous.
[4]
Meta's AI chief Yann LeCun calls AI apocalypse fears "complete B.S."
A hot potato: Artificial intelligence is going to reach a point where it becomes smarter than humans, and that could threaten our very existence - or so many people warn. However, Yann LeCun, Meta's chief AI scientist and one of the so-called godfathers of AI, thinks such claims are "complete B.S."

LeCun is one of those in the industry who believe the doomsday talk about AI is overblown. He called warnings that the technology was a threat to humanity "ridiculous" last year, and he repeated his words to The Wall Street Journal last week. "You're going to have to pardon my French, but that's complete B.S.," he told the publication when asked if AI will become smart enough to pose a threat to humanity.

Unlike those in the industry who believe Artificial General Intelligence (AGI) will be the next step in the generative AI revolution, LeCun says today's LLMs will not lead to a system matching or surpassing human capabilities across a wide range of cognitive tasks. The New York University professor said that LLMs such as ChatGPT and Grok lack persistent memory, reasoning, planning, and an understanding of the physical world. He added that LLMs prove that "you can manipulate language and not be smart," and that they will never lead to true AGI. "We are used to the idea that people or entities that can express themselves, or manipulate language, are smart - but that's not true."

LeCun told the Journal that LLMs aren't as smart as a house cat, but they are very good at predicting the next word in the text they generate, which is what leads people to think they're actually "smart" or even intelligent. LeCun isn't totally against the idea of AGI; he just thinks it won't arrive via more advanced LLMs.

Several big names in the industry believe we are on a fast path to AGI. Nvidia's Jensen Huang thinks it will be here within the next five years. OpenAI boss Sam Altman expects AGI in the "reasonably close-ish future" - though Altman also claims a superintelligence, an AI vastly smarter than humans, is coming within "a few thousand days."

LeCun, Dr. Geoffrey Hinton, and Yoshua Bengio earned the godfathers of AI nickname in 2019 after winning the Turing Award. Hinton left his job at Google in 2023, warning that as companies exploit more powerful AI systems, those systems become increasingly dangerous.
[5]
Meta's AI Chief on AI Endangering Humanity: 'That's Complete B.S.'
Meta's AI chief Yann LeCun has said predictions about AI endangering humanity are "complete B.S."

LeCun has an extremely decorated resume in the world of AI. He has won one of the most prestigious awards in the field, the A.M. Turing Award, for his work in deep learning, and he is a professor at New York University. When questioned by a journalist from The Wall Street Journal on whether AI will become smart enough to endanger humanity in the near future, he simply replied: "You're going to have to pardon my French, but that's complete B.S."

That doesn't mean LeCun was completely dismissive of the possibility of artificial general intelligence (AGI) - an advanced machine intelligence that resembles human cognition and can solve a wide variety of tasks. However, he argued that large language models (LLMs) like ChatGPT and X's Grok won't lead to AGI, no matter how far they are scaled up. LeCun said these LLMs merely demonstrate that "you can manipulate language and not be smart." "We are used to the idea that people or entities that can express themselves, or manipulate language, are smart - but that's not true," says LeCun. He explained to the WSJ that current LLMs merely predict the upcoming words in a piece of text, but are "so good" at it that they fool people.

He highlighted the work of Meta's Fundamental AI Research (FAIR) division as the future of AI; his team there is currently working on systems that digest video from the real world.

The scientist, one of those who have been called the "godfathers of AI," comes into conflict with other figures in the tech world, like OpenAI CEO Sam Altman and Elon Musk, with these sorts of comments. In January 2024, Altman predicted that AGI would arrive in the "reasonably close-ish future" during a conversation with Bloomberg at the World Economic Forum. Musk has consistently promoted the need for AI regulation before a super-intelligent AI is developed. He recently came out in support of California bill SB 1047, which would introduce new safety and accountability mechanisms for large AI systems, calling AI a "potential risk to the public" in a post on X. This brought him into direct opposition with LeCun, who claimed the legislation would have "apocalyptic consequences on the AI ecosystem" because it regulates the research and development process.
[6]
Meta's Yann LeCun says worries about A.I.'s existential threat are 'complete B.S.'
AI pioneer Yann LeCun doesn't think artificial intelligence is actually on the verge of becoming intelligent. LeCun -- a professor at New York University, chief AI scientist at Meta, and winner of the prestigious A.M. Turing Award -- has been open about his skepticism before, for example tweeting that before we worry about controlling super-intelligent AI, "we need to have the beginning of a hint of a design for a system smarter than a house cat."

He elaborated on his opinions in an interview with the Wall Street Journal, where he replied to a question about AI becoming smart enough to pose a threat to humanity by saying, "You're going to have to pardon my French, but that's complete B.S."

LeCun argued that today's large language models lack some key cat-level capabilities, like persistent memory, reasoning, planning, and an understanding of the physical world. In his view, LLMs merely demonstrate that "you can manipulate language and not be smart" and will never lead to true artificial general intelligence (AGI). That's not to say he's a complete AGI skeptic; he simply believes new approaches will be needed, pointing for example to his Fundamental AI Research team's work at Meta on digesting real-world video.
Yann LeCun, Meta's Chief AI scientist, argues that current AI systems are not intelligent enough to pose a threat to humanity, sparking debate about the future and potential risks of artificial intelligence.
Yann LeCun, Meta's Chief AI scientist and one of the "godfathers of AI," has sparked controversy by dismissing fears of an AI apocalypse as "complete B.S." In a recent interview with The Wall Street Journal, LeCun argued that current AI systems, including large language models (LLMs), are far from posing an existential threat to humanity [1][2].
LeCun, a Turing Award winner for his work in deep learning, asserts that today's AI is "less intelligent than a cat" [3]. He emphasizes that LLMs like ChatGPT and Grok lack crucial cognitive abilities: persistent memory, reasoning, planning, and an understanding of the physical world [4].
According to LeCun, LLMs demonstrate that "you can manipulate language and not be smart," suggesting that their apparent intelligence is merely an illusion created by their proficiency in predicting the next word in a sequence [4].
While LeCun doesn't entirely dismiss the possibility of AGI, he argues that current LLMs will not lead to systems matching or surpassing human capabilities across a wide range of cognitive tasks [5]. This stance puts him at odds with other industry leaders who predict the imminent arrival of AGI: Nvidia's Jensen Huang expects it within five years, while OpenAI's Sam Altman foresees AGI in the "reasonably close-ish future" and a superintelligence within "a few thousand days" [4].
LeCun's comments have reignited the debate on AI regulation and safety. While he opposes stringent regulation of AI research and development, others in the tech industry advocate for proactive measures: Elon Musk, for example, backed California's SB 1047 bill on AI safety and accountability, which LeCun argued would have "apocalyptic consequences on the AI ecosystem" [5].
Despite LeCun's reassurances, concerns about AI extend beyond the fear of superintelligent machines: critics point to cultural, political, legal, and artistic worries, from misinformation and threats to artists' livelihoods to the risk of handing consequential decisions over to systems known to hallucinate [1][2].
LeCun highlights the work of Meta's Fundamental AI Research (FAIR) division as a potential path forward. Their focus on processing real-world video data suggests a shift towards more grounded and contextual AI systems [5].
As the debate continues, it's clear that the future of AI remains a contentious topic among experts, policymakers, and the public. While LeCun's perspective offers a counterpoint to doomsday scenarios, it also underscores the need for ongoing discussion and careful consideration of AI's role in society.