Curated by THEOUTPOST
On Wed, 18 Sept, 12:04 AM UTC
3 Sources
[1]
How Elon Musk, Sam Altman, and the Silicon Valley elite manipulate the public
The following is an excerpt from Gary Marcus's book Taming Silicon Valley: How We Can Ensure That AI Works for Us.

The question is, why did we fall for Silicon Valley's over-hyped and often messianic narrative in the first place? This chapter is a deep dive into the mind tricks of Silicon Valley. Not, mind you, the already well-documented tricks discussed in the film The Social Dilemma, in which Silicon Valley outfits like Meta addict us to their software. As you may know, they weaponize their algorithms in order to attract our eyeballs for as long as possible, and serve up polarizing information so they can sell as many advertisements as possible, thereby polarizing society, undermining mental health (particularly of teens) and leading to phenomena like the one Jaron Lanier once vividly called "Twitter poisoning" ("a side effect that appears when people are acting under an algorithmic system that is designed to engage them to the max"). In this chapter, I dissect those mind tricks by which big tech companies bend and distort the reality of what the tech industry itself has been doing, exaggerating the quality of the AI, while downplaying the need for its regulation.

Let's start with hype, a key ingredient in the AI world, even before Silicon Valley was a thing. The basic move -- overpromise, overpromise, overpromise, and hope nobody notices -- goes back to the 1950s and 1960s. In 1967, AI pioneer Marvin Minsky famously said: "Within a generation, the problem of artificial intelligence will be substantially solved." But things didn't turn out that way. As I write this, in 2024, a full solution to artificial intelligence is still years, perhaps decades away. But there's never been much accountability in AI; if Minsky's projections were way off, it didn't much matter. His generous promises (initially) brought big grant dollars -- just as overpromising now often brings big investor dollars.

In 2012, Google cofounder Sergey Brin promised driverless cars for everyone in five years, but that still hasn't happened, and hardly anyone ever even calls him on it. Elon Musk started promising his own driverless cars in 2014 or so, and kept up his promises every year or two, eventually promising that whole fleets of driverless taxis were just around the corner. That too still hasn't happened. (Then again, Segways never took over the world either, and I am still waiting for my inexpensive personal jetpack, and the cheap 3D-printer that will print it all.)

All too often, Silicon Valley is more about promise than delivery. Over $100 billion has been invested in driverless cars, and they are still in prototype phases, working some of the time, but not reliably enough to be scaled up for worldwide deployment. In the months before I wrote this, GM's driverless car division Cruise all but fell apart. It came out that they had more people behind the scenes in a remote operations center than actual driverless cars on the road. GM pulled support; the Cruise CEO Kyle Vogt resigned. Hype doesn't always materialize.

And yet it continues unabated. Worse, it is frequently rewarded. A common trick is to feign that today's three-quarters-baked AI (full of hallucinations and bizarre and unpredictable errors) is tantamount to so-called Artificial General Intelligence (which would be AI that is at least as powerful and flexible as human intelligence) when nobody is particularly close. Not long ago, Microsoft posted a paper, not peer-reviewed, that grandiosely claimed "sparks of AGI" had been achieved.
Sam Altman is prone to pronouncements like "by [next year] model capability will have taken such a leap forward that no one expected....It'll be remarkable how much different it is." One master stroke was to say that the OpenAI board would get together to determine when Artificial General Intelligence "had been achieved," subtly implying that (1) it would be achieved sometime soon and (2) if it had been reached, it would be OpenAI that achieved it. That's weapons-grade PR, but it doesn't for a minute make it true. (Around the same time, OpenAI's Altman posted on Reddit, "AGI has been achieved internally," when no such thing had actually happened.)

Only very rarely does the media call out such nonsense. It took them years to start challenging Musk's overclaiming on driverless cars, and few if any asked Altman why the important scientific question of when AGI was reached would be "decided" by a board of directors rather than the scientific community. The combination of finely tuned rhetoric and a mostly pliable media has downstream consequences; investors have put too much money in whatever is hyped, and, worse, government leaders are often taken in.

Two other tropes often reinforce one another. One is the "Oh no, China will get to GPT-5 first" mantra that many have spread around Washington, subtly implying that GPT-5 will fundamentally change the world (in reality, it probably won't). The other tactic is to pretend that we are close to an AI that is SO POWERFUL IT IS ABOUT TO KILL US ALL. Really, I assure you, it's not. Many of the major tech companies recently converged on precisely that narrative of imminent doom, exaggerating the importance and power of what they have built. But not one has given a plausible, concrete scenario by which such doom could actually happen anytime soon. No matter; they got many of the major governments of the world to take that narrative seriously. This makes the AI sound smarter than it really is, driving up stock prices. And it keeps attention away from hard-to-address but critical risks that are more imminent (or are already happening), such as misinformation, for which big tech has no great solution.

The companies want us, the citizens, to absorb all the negative externalities (an economist's term for bad consequences, coined by the British economist Arthur Pigou) that might arise -- such as the damage to democracy from Generative AI-produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones -- without them paying a nickel. Big Tech wants to distract us from all that, by saying -- without any real accountability -- that they are working on keeping future AI safe (hint: they don't really have a solution to that, either), even as they do far too little about present risk. Too cynical? Dozens of tech leaders signed a letter in May 2023 warning that AI could pose a risk of extinction, yet not one of those leaders appears to have slowed down one bit.

Another way Silicon Valley manipulates people is by feigning that they are about to make enormous barrels of cash. In 2019, for example, Elon Musk promised that a fleet of "robo taxis" powered by Tesla would arrive in 2020; by 2024 they still hadn't arrived. Now Generative AI companies are being valued at billions (and even tens of billions) of dollars, but it's not clear they will ever deliver. Microsoft Copilot has been underwhelming in early trials, and OpenAI's app store (modeled on Apple's app store) offering custom versions of ChatGPT is struggling.
A lot of the big tech companies are quietly recognizing that the promised profits aren't going to materialize any time soon. But the abstract notion that they might make money gives them immense power; government dare not step on what has been positioned as a potential cash cow. And because so many people idolize money, too little of the rhetoric ever gets seriously questioned.

Another frequent move is to publish a slick video that hints at much more than can actually be delivered. OpenAI did this in October 2019, with a video that showed one of their robots solving a Rubik's Cube, one-handed. The video spread like wildfire, but the video didn't make clear what was buried in the fine print. When I read their Rubik's Cube research paper carefully, having seen the video, I was appalled by a kind of bait-and-switch, and said so: the intellectual part of solving a Rubik's Cube had been worked out years earlier, by others; OpenAI's sole contribution, the motor control part, was achieved by a robot that used a custom, not-off-the-shelf, Rubik's Cube with Bluetooth sensors hidden inside. As is often the case, the media imagined a robotics revolution, but within a couple years the whole project had shut down. AI is almost always harder than people think.

In December 2023, Google put out a seemingly mind-blowing video about a model they just released, called Gemini. In the video, a chatbot appeared to watch a person make drawings, and to provide commentary on the person's drawings in real time. Many people became hugely excited by it, saying stuff on X like "Must-watch video of the week, probably the year," "If this Gemini demo is remotely accurate, it's showing broader intelligence than a non-zero fraction of adult humans *already*," and "Can't stop thinking about the implications of this demo. Surely it's not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to every one's words, and make intelligent contributions to the issues debated? Now tell me. Wouldn't that count as AGI?" But as some more skeptical journalists such as Parmy Olson quickly figured out, the video was fundamentally misleading. It was not produced in real time; it was dubbed after the fact, from a bunch of still shots. Nothing like the real-time, multimodal, interactive-commentary product that Google seemed to be demoing actually existed. (Google itself ultimately conceded this in a blog.) Google's stock price briefly jumped 5 percent based on the video, but the whole thing was a mirage, just one more stop on the endless train of hype.

Hype often equates more or less directly to cash. As I write this, OpenAI was recently valued at $86 billion, never having turned a profit. My guess is that OpenAI will someday be seen as the WeWork moment of AI, a dramatic overestimation of value. GPT-5 will either be significantly delayed or not meet expectations; companies will struggle to put GPT-4 and GPT-5 into extensive daily use; competition will increase, margins will be thin; the profits won't justify the valuation (especially after a pesky fact I mentioned earlier: in exchange for their investment, Microsoft takes about half of OpenAI's first $92 billion in profits, if they make any profits at all). The beauty of the hype game is that if the valuations rise high enough, no profits are required. The hype has already made many of the employees rich, because a late 2023 secondary sale of OpenAI employee stock allowed them to cash out.
(Later investors could be left holding the bag, if profits never materialize.) For a moment, it looked as if that whole calculation might change. Just before the early employees were about to sell shares at a massive $86 billion valuation, OpenAI abruptly fired its CEO Sam Altman, potentially killing the deal. No problem. Within a few days, nearly all the employees had rallied around him. He was quickly rehired. Guess what? Business Insider reported, "While the entire company signed a letter stating they'd follow Altman to Microsoft if he wasn't reinstated, no one really wanted to do it." It is not that the employees wanted to be with Altman, per se, no matter what (as most onlookers assumed), but rather, I infer, that they wanted the big sale of employee stock at the $86 billion valuation to go through. Bubbles sometimes pop; good to get out while you can.

Another common tactic is to minimize the downsides of AI. When some of us started to sound alarms about AI-generated misinformation, Meta's chief AI scientist Yann LeCun claimed in a series of tweets on Twitter, in November and December 2022, that there is no real risk, reasoning, fallaciously, that what hadn't happened yet would not happen ever ("LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness"). He further suggested that "LLMs will not help with careful crafting [of misinformation], or its distribution," as if AI-generated misinformation would never see the light of day. By December 2023, all of this had proven to be nonsense.

Along similar lines, in May 2023, Microsoft's chief economist Michael Schwarz told an audience at the World Economic Forum that we should hold off on regulation until serious harm had occurred. "There has to be at least a little bit of harm, so that we see what is the real problem. Is there a real problem? Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not." Fast-forward to December 2023, and the harm is starting to come in; The Washington Post, for example, reported: "The rise of AI fake news is creating a 'misinformation superspreader'"; in January 2024 (as I mentioned in the introduction), deepfaked robocalls in New Hampshire that sounded like Joe Biden tried to persuade people to stay home from the polls.

But that doesn't stop big tech from playing the same move over and over again. As noted in the introduction, in late 2023 and early 2024, Meta's Yann LeCun was arguing there will be no real harm forthcoming from open-source AI, even as some of his closest collaborators outside of industry, his fellow deep learning pioneers Geoff Hinton and Yoshua Bengio, vigorously disagreed. All of these efforts at downplaying risks remind me of the lines that cigarette manufacturers used to spew about smoking and cancer, whining about how the right causal studies hadn't yet been performed, when the correlational data on death rates and a mountain of causal studies had already made it clear that smoking was causing cancer in laboratory animals. (Zuckerberg used this same cigarette-industry style of argument in response to Senator Hawley in his January 2024 testimony on whether social media was causing harm to teenagers.)
What the big tech leaders really mean to say is that the harms from AI will be difficult to prove (after all, we can't even track who is generating misinformation with deliberately unregulated open-source software) -- and that they don't want to be held responsible for whatever their software might do. All of it, every word, should be regarded with the same skepticism we accord cigarette manufacturers.

Then there are ad hominem arguments and false accusations. One of the darkest episodes in American history came in the 1950s, when Senator Joe McCarthy gratuitously called many people Communists, often with little or no evidence. McCarthy was of course correct that there were some Communists working in the United States, but the problem was that he often named innocent people, too -- without even a hint of due process -- destroying many lives along the way. Out of desperation, some in Silicon Valley seem intent on reviving McCarthy's old playbook, distracting from real problems by feinting at Communists. Most prominently, Marc Andreessen, one of the richest investors in Silicon Valley, recently wrote a "Techno-Optimist Manifesto," enumerating a long, McCarthy-like list of "enemies" ("Our enemy is stagnation. Our enemy is anti-merit, anti-ambition, anti-striving, anti-achievement, anti-greatness, etc.") and made sure to include a whistle call against Communism on his list, complaining of the "continuous howling from Communists and Luddites." (As tech journalist Brian Merchant has pointed out, the Luddites weren't actually anti-technology per se, they were pro-human.) Five weeks later, another anti-regulatory investor from the Valley, Mike Solana, followed suit, all but calling one of the OpenAI board members a Communist ("I am not saying [so and so] is a CCP asset... but..."). There is no end to how low some people will go for a buck.

The influential science popularizer Liv Boeree recounts becoming disaffected by the whole "e/acc" ("effective accelerationism") movement that urges rapid AI development: "I was excited about e/acc when I first heard of it (because optimism *is* extremely important). But then its leader(s) made it their mission to attack and misrepresent perceived 'enemies' for clout, while deliberately avoiding engaging with counter arguments in any reasonable way. A deeply childish, zero-sum mindset."

In my mind, the entire accelerationist movement has been an intellectual failure, failing to address seriously even the most basic questions, like what would happen if sufficiently advanced technology got into the wrong hands. You can't just say "make AI faster" and entirely ignore the consequences -- but that's precisely what the sophomoric e/acc movement has done. As the novelist Ewan Morrison put it, "This e/acc philosophy so dominant in Silicon Valley it's practically a religion....[It] needs to be exposed to public scrutiny and held to account for all the things it has smashed and is smashing."

Much of the acceleration effort seems to be little more than a shameless attempt to stretch the "Overton window," to make unpalatable and even insane ideas seem less crazy. The key rhetorical trick was to make it seem as if the nonsensical idea of zero regulation was viable, falsely portraying anything else as too expensive for startups and hence a death blow to innovation. Don't fall for it. As the Berkeley computer scientist Stuart Russell bluntly put it, "The idea that only trillion-dollar corporations can comply with regulations is sheer drivel.
Sandwich shops and hairdressers are subject to far more regulation than AI companies, yet they open in the tens of thousands every year." Accelerationism's true goal seems to be simply to line the pockets of current AI investors and developers, by shielding them from responsibility. I've yet to hear its proponents come up with a genuine, well-conceived plan for maximizing positive human outcome over the coming decades.

Ultimately, the whole "accelerationist" movement is so shallow it may actually backfire. It's one thing to want to move swiftly; another to dismiss regulation and move recklessly. A rushed, underregulated AI product that caused massive mayhem could lead to subsequent public backlash, conceivably setting AI back by a decade or more. (One could well argue that something like that has happened with nuclear energy.) Already there have been dramatic protests of driverless cars in San Francisco. When ChatGPT's head of product recently spoke at SXSW, the crowd booed. People are starting to get wise.

Gaslighting and bullying are another common pattern. When I argued on Twitter in 2019 that large language models "don't develop robust representations of 'how events unfold over time'" (a point that remains true today), Meta's chief AI scientist Yann LeCun condescendingly said, "When you are fighting a rear-guard battle, it's best to know when your adversary overtook your rear 3 years ago," pointing to research that his company had done, which allegedly solved the problems (spoiler alert: it didn't). More recently, under fire when OpenAI abruptly overtook Meta, LeCun suddenly changed his tune, and ran around saying that large language models "suck," never once acknowledging that he'd said otherwise. All this -- the abruptly changing tune and correlated denial of what happened -- reminded me of Orwell's famous line on state-sponsored historical revisionism in 1984: "Oceania has always been at war with Eastasia" (when in fact targets had shifted).

The techlords play other subtle games, too. When Sam Altman and I testified before Congress, we raised our right hands and swore to tell the whole truth, but when Senator John Kennedy (R-LA) asked him about his finances, Altman said, "I have no equity in OpenAI," elaborating that "I'm doing this 'cause I love it." He probably does mostly work for the love of the job (and the power that goes with it) rather than the cash. But he also left out something important: he owns stock in Y Combinator (where he used to be president), and Y Combinator owns stock in OpenAI (where he is CEO), an indirect stake that is likely worth tens of millions of dollars. Altman had to have known this. It later came out that Altman also owns OpenAI's venture capital fund, and didn't mention that either. By leaving out these facts, he passed himself off as more noble than he really is.

And all that's just how the tech leaders play the media and public opinion. Let's not forget about the backroom deals. Just as an example, we've all known for a long time that Google was paying Apple to put their search engine front and center, but few of us (including me) had any idea quite how much. Until November 2023, that is, when, as The Verge put it, "A Google witness let slip" that Google gives Apple more than a third of the ad revenue it gets from Apple's Safari, to the tune of $18 billion per year. It's likely a great deal for both, but one that has significantly, and heretofore silently, shaped consumer choice, allowing Google to consolidate their near-monopoly on search.
Both companies tried, for years, to keep this out of public view. Lies, half-truths, and omissions.

Perhaps Adrienne LaFrance said it best, in an article in The Atlantic titled "The Rise of Technoauthoritarianism": "The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement... The world that Silicon Valley elites have brought into being is a world of reckless social engineering, without consequence for its architects... They promise community but sow division; claim to champion truth but spread lies; wrap themselves in concepts such as empowerment and liberty but surveil us relentlessly."

We need to fight back.
[2]
Can Silicon Valley Be Tamed? Unpacking Big Tech's Obsession With Faulty AI
Is Silicon Valley in a state of moral decline thanks to generative AI? NYU psychology and neural science professor emeritus Gary Marcus thinks so, but he's not convinced it's a new phenomenon. The 2008 US financial crisis spurred a Silicon Valley shift toward value extraction and the prioritization of startup valuations over actual profits or sustainable business models, Marcus argues in his new book -- Taming Silicon Valley: How We Can Ensure That AI Works for Us -- citing correspondence with early Facebook investor Roger McNamee. This short-term outlook in Silicon Valley has trickled into the generative AI we see today, where AI's risks are deemed worth it because of the hypothetical concept and dream of artificial general intelligence, or AGI, which no AI firm has achieved to date.

But Marcus has more than just an axe to grind with generative AI. The book is a broad, sweeping warning, cautioning us that if major changes aren't made soon, AI could dramatically unravel our digital and real-world lives. Providing numerous examples from around the world, Marcus explains that there are at least a dozen immediate threats posed by the generative AI tech we have today, mainly because there's little to no regulation in the US and these tools are flawed by design. They're also free, or cheap, and widely available. Political disinformation, market manipulation, accidental misinformation, defamation, nonconsensual deepfakes, increased crime, cybersecurity threats, bioweapons, and discrimination are just some of the many threats detailed with great specificity in Marcus' book. He zeroes in on our ongoing societal anxieties around generative AI, exposes the dark side of big tech and anti-regulation sentiment, and lays out a plan of action for regulators and everyday citizens.

While Marcus criticizes OpenAI, Google, and Meta, he notably takes a somewhat different stance toward Apple, which "hasn't quite fallen as far from the tree as the others," he writes in Taming Silicon Valley. Apple, he argues, is more in the business of selling "sexy productivity tools" than your personal information, agreeing with Facebook whistleblower Frances Haugen that Apple doesn't have an incentive to deceive the public about its business strategy.

In a written interview with PCMag, Marcus shares his unfiltered stance on big tech and the rise of generative AI. This interview has been sparingly condensed for clarity.

PCMag: You have written a book about AI previously, Rebooting AI, which came out in 2020 before ChatGPT's rise. What inspired you to write Taming Silicon Valley? Why now?

Gary Marcus: The previous book was a look at why AI is hard and why then-current approaches were inadequate technically. Five years later, I still [think] most of what we said was right. The new book is about what is happening to society, right now, as a very immature form of AI (generative AI) is spreading rapidly. It's ultimately about something more though: it's about why getting to the right kind of AI is urgent, and why citizens should get involved.

PCMag: Some of the biggest names in the GenAI game right now are telling us that AGI is on the way. Are they right, or wrong?

Marcus: Wrong. Whenever people say that, I know they don't have a good feel for how complex intelligence really is, and for how far we have left to go. And most of the people who say that don't want to actually defend their beliefs; I offered Elon Musk a million dollars against claims like this but he didn't dare take it.
Even the latest model (OpenAI's o1) can't reliably win at tic-tac-toe or consistently make legal moves in chess; it has trouble with floating point arithmetic too. Each new model suffers from hallucinations and stupid errors, just like the last, and all we have is an endless litany of promises, not a principled solution.

PCMag: In your opinion, can AI ever really 'know' anything in the way that humans can?

Marcus: I see no reason why not. A GPS-nav system effectively knows where your car is and where you are heading and what the roads in between are and computes an efficient path. That seems like knowledge to me, and reliable knowledge at that. LLMs are much less tractable, much less reliable black boxes. I am not sure I would really want to credit them with any knowledge beyond statistical tendencies. But that doesn't mean that other approaches can't be conceptually deeper. Just means we should stop wasting so much money on GenAI, and that we should start exploring different approaches.

PCMag: Some Silicon Valley execs and VCs claim that regulation inherently stands in the way of innovation. What do you make of arguments like this?

Marcus: They are total nonsense, from greedy people who are willing to say anything to make a buck. Do you think commercial airlines would be safe if there was no regulation?

PCMag: Your book discusses the many ways AI is currently being used for harm. AI is also incredibly energy-hungry, at a time when human-caused climate change, the fossil fuel industry, and global pollution remain a concern. Given all this, do you think the AI of today is a net-positive for humanity? Or is it a net-negative today? If negative, what needs to happen to flip the impact?

Marcus: I would say that AlphaFold, Google Search, and GPS-navigation are all tremendously net positive, but chatbots and GenAI have been pretty mixed, and could easily wind up being net negative. I think (a) that we should be looking for better technology than GenAI, which is the technique du jour but very flawed, (b) that we should hold the companies more responsible for the downsides of their tech (misinformation, harm to the environment, nonconsensual deepfake porn, covert racism in job hiring, etc.), and that if we do, the companies will figure out better approaches. If we don't hold them responsible, we are looking at a mess.

PCMag: Some big tech firms, like Meta, Google, and Microsoft, have already seen backlash to their AI products. Surely, Meta is aware that viral AI-generated Shrimp Jesus imagery isn't moving the needle to make society a better place, and Microsoft's Recall feature has been criticized as a security concern. Why do you think these companies keep trying to push generative AI features on users, often without giving them a choice to opt out?

Marcus: Obviously they are all looking for an ROI [return on investment], but the technology isn't working out nearly as well as they wanted the public to believe, and both the public and big companies are realizing that they had been sold a bill of goods. So you have companies like Microsoft that seem desperate to find a use for GenAI, but not convincing all that many people.

PCMag: As you discuss in your book, AI-generated images are crowding out legitimate historical or real-world image results on Google. Google's AI Overviews have also been a big problem that users can't really turn off. How should everyday people navigate the web as Google and Bing become increasingly filled with AI-generated results?
Marcus: They should "just say no" and tell the companies that they don't want a bunch of climate-crushing, copyright-violating AI systems they didn't ask for, and maybe even consider boycotting AI if Silicon Valley doesn't get its house in order.
[3]
How and Why Gary Marcus Became AI's Leading Critic
Maybe you've read about Gary Marcus's testimony before the Senate in May of 2023, when he sat next to Sam Altman and called for strict regulation of Altman's company, OpenAI, as well as the other tech companies that were suddenly all-in on generative AI. Maybe you've caught some of his arguments on Twitter with Geoffrey Hinton and Yann LeCun, two of the so-called "godfathers of AI." One way or another, most people who are paying attention to artificial intelligence today know Gary Marcus's name, and know that he is not happy with the current state of AI.

He lays out his complaints in full in his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, which was published today by MIT Press. Marcus goes through the immediate dangers posed by generative AI, which include things like mass-produced disinformation, the easy creation of deepfake pornography, and the theft of creative intellectual property to train new models (he doesn't include an AI apocalypse as a danger; he's not a doomer). He also takes issue with how Silicon Valley has manipulated public opinion and government policy, and explains his ideas for regulating AI companies.

Gary Marcus: Well, I started coding when I was eight years old. One of the reasons I was able to skip the last two years of high school was because I wrote a Latin-to-English translator in the programming language Logo on my Commodore 64. So I was already, by the time I was 16, in college and working on cognitive science.

So you were already interested in AI, but you studied cognitive science both in undergrad and for your Ph.D. at MIT.

Marcus: Part of why I went into cognitive science is I thought maybe if I understood how people think, it might lead to new approaches to AI. I think we need to take a broad view of how the human mind works if we're to build really advanced AI. I don't know that for sure. As a scientist and a philosopher, I would say it's still unknown how we will build artificial general intelligence or even just trustworthy general AI. But we have not been able to do that with these big statistical models, and we have given them a huge chance. There's basically been $75 billion spent on generative AI, another $100 billion on driverless cars. And neither of them has really yielded stable AI that we can trust. We don't know for sure what we need to do, but we have very good reason to think that merely scaling things up will not work. The current approach keeps coming up against the same problems over and over again.

What do you see as the main problems it keeps coming up against?

Marcus: Number one is hallucinations. These systems smear together a lot of words, and they come up with things that are true sometimes and not others. Like saying that I have a pet chicken named Henrietta is just not true. And they do this a lot. We've seen this play out, for example, in lawyers writing briefs with made-up cases.

Second, their reasoning is very poor. My favorite examples lately are these river-crossing word problems where you have a man and a cabbage and a wolf and a goat that have to get across. The system has a lot of memorized examples, but it doesn't really understand what's going on. If you give it a simpler problem, like one Doug Hofstadter sent to me, like: "A man and a woman have a boat and want to get across the river. What do they do?" It comes up with this crazy solution where the man goes across the river, leaves the boat there, swims back, something or other happens. Sometimes he brings a cabbage along, just for fun.
Marcus: So those are boneheaded errors of reasoning where there's something obviously amiss. Every time we point these errors out somebody says, "Yeah, but we'll get more data. We'll get it fixed." Well, I've been hearing that for almost 30 years. And although there is some progress, the core problems have not changed.

Let's go back to 2014 when you founded your first AI company, Geometric Intelligence. At that time, I imagine you were feeling more bullish on AI?

Marcus: Yeah, I was a lot more bullish. I was not only more bullish on the technical side. I was also more bullish about people using AI for good. AI used to feel like a small research community of people that really wanted to help the world.

So when did the disillusionment and doubt creep in?

Marcus: In 2018 I already thought deep learning was getting overhyped. That year I wrote this piece called "Deep Learning: A Critical Appraisal," which Yann LeCun really hated at the time. I already wasn't happy with this approach and I didn't think it was likely to succeed.

But that's not the same as being disillusioned, right?

Marcus: Then when large language models became popular [around 2019], I immediately thought they were a bad idea. I just thought this is the wrong way to pursue AI from a philosophical and technical perspective. And it became clear that the media and some people in machine learning were getting seduced by hype. That bothered me. So I was writing pieces about GPT-3 [an early version of OpenAI's large language model] being a bullshit artist in 2020. As a scientist, I was pretty disappointed in the field at that point. And then things got much worse when ChatGPT came out in 2022, and most of the world lost all perspective. I began to get more and more concerned about misinformation and how large language models were going to potentiate that.

You've been concerned not just about the startups, but also the big entrenched tech companies that jumped on the generative AI bandwagon, right? Like Microsoft, which has partnered with OpenAI?

Marcus: The last straw that made me move from doing research in AI to really working on policy was when it became clear that Microsoft was going to race ahead no matter what. That was very different from 2016 when they released [an early chatbot named] Tay. It was bad, they took it off the market 12 hours later, and then Brad Smith wrote a book about responsible AI and what they had learned. But by the end of the month of February 2023, it was clear that Microsoft had really changed how they were thinking about this. And then they had this ridiculous "Sparks of AGI" paper, which I think was the ultimate in hype. And they didn't take down Sydney after the crazy Kevin Roose conversation where [the chatbot] Sydney told him to get a divorce and all this stuff. It just became clear to me that the mood and the values of Silicon Valley had really changed, and not in a good way.

I also became disillusioned with the U.S. government. I think the Biden administration did a good job with its executive order. But it became clear that the Senate was not going to take the action that it needed. I spoke at the Senate in May 2023. At the time, I felt like both parties recognized that we can't just leave all this to self-regulation. And then I really became disillusioned over the course of the last year, and that's really what led to writing this book.

You talk a lot about the risks inherent in today's generative AI technology. But then you also say, "It doesn't work very well." Are those two views coherent?
Marcus: There was a headline: "Gary Marcus Used to Call AI Stupid, Now He Calls It Dangerous." The implication was that those two things can't coexist. But in fact, they do coexist. I still think gen AI is stupid, and certainly cannot be trusted or counted on. And yet it is dangerous. And some of the danger actually stems from its stupidity. So for example, it's not well-grounded in the world, so it's easy for a bad actor to manipulate it into saying all kinds of garbage. Now, there might be a future AI that might be dangerous for a different reason, because it's so smart and wily that it outfoxes the humans. But that's not the current state of affairs.

You've said that generative AI is a bubble that will soon burst. Why do you think that?

Marcus: Let's clarify: I don't think generative AI is going to disappear. For some purposes, it is a fine method. You want to build autocomplete, it is the best method ever invented. But there's a financial bubble because people are valuing AI companies as if they're going to solve artificial general intelligence. In my view, it's not realistic. I don't think we're anywhere near AGI. So then you're left with, "Okay, what can you do with generative AI?"

Last year, because Sam Altman was such a good salesman, everybody fantasized that we were about to have AGI and that you could use this tool in every aspect of every corporation. And a whole bunch of companies spent a bunch of money testing generative AI out on all kinds of different things. So they spent 2023 doing that. And then what you've seen in 2024 are reports where researchers go to the users of Microsoft's Copilot -- not the coding tool, but the more general AI tool -- and they're like, "Yeah, it doesn't really work that well." There's been a lot of reviews like that this last year.

The reality is, right now, the gen AI companies are actually losing money. OpenAI had an operating loss of something like $5 billion last year. Maybe you can sell $2 billion worth of gen AI to people who are experimenting. But unless they adopt it on a permanent basis and pay you a lot more money, it's not going to work. I started calling OpenAI the possible WeWork of AI after it was valued at $86 billion. The math just didn't make sense to me.

What would it take to convince you that you're wrong? What would be the head-spinning moment?

Marcus: Well, I've made a lot of different claims, and all of them could be wrong. On the technical side, if someone could get a pure large language model to not hallucinate and to reason reliably all the time, I would be wrong about that very core claim that I have made about how these things work. So that would be one way of refuting me. It hasn't happened yet, but it's at least logically possible.

On the financial side, I could easily be wrong. But the thing about bubbles is that they're mostly a function of psychology. Do I think the market is rational? No. So even if the stuff doesn't make money for the next five years, people could keep pouring money into it.

The place that I'd like to prove me wrong is the U.S. Senate. They could get their act together, right? I'm running around saying, "They're not moving fast enough," but I would love to be proven wrong on that. In the book, I have a list of the 12 biggest risks of generative AI. If the Senate passed something that actually addressed all 12, then my cynicism would have been mislaid. I would feel like I'd wasted a year writing the book, and I would be very, very happy.
Recent controversies surrounding tech leaders like Elon Musk and Sam Altman have sparked debates about AI ethics and the influence of Silicon Valley elites. Critics argue that these figures may be manipulating public opinion while pushing potentially dangerous AI technologies.
In recent months, the tech world has been abuzz with controversy as prominent figures like Elon Musk and Sam Altman face accusations of manipulating public opinion while advancing potentially risky AI technologies. This has ignited a fierce debate about the ethics of artificial intelligence and the outsized influence of Silicon Valley elites on society.
Elon Musk, the enigmatic entrepreneur behind Tesla and SpaceX, and Sam Altman, CEO of OpenAI, have found themselves at the center of a storm. Critics argue that these tech luminaries are using their platforms to shape public perception of AI in ways that serve their own interests rather than the greater good [1].
Musk, known for his provocative statements, has repeatedly warned about the existential risks of AI while simultaneously investing heavily in the technology. This apparent contradiction has led some to question his motives and the sincerity of his public statements.
The controversy extends beyond individual figures to encompass the broader AI industry. Many experts are raising alarms about the rapid development and deployment of AI systems without adequate safety measures or ethical considerations [2].
Critics argue that Silicon Valley's "move fast and break things" mentality is particularly dangerous when applied to AI, which has the potential to impact society on an unprecedented scale. They call for more robust regulations and ethical guidelines to govern AI development and deployment.
Among the most vocal critics is Gary Marcus, a cognitive scientist and AI researcher. Marcus has consistently challenged the tech industry's approach to AI development, arguing for a more cautious and scientifically grounded approach [3].
Marcus contends that many of the current AI systems, including large language models like GPT-3, are fundamentally flawed and potentially dangerous when deployed in critical applications. He advocates for a hybrid approach that combines symbolic AI with neural networks to create more reliable and interpretable systems.
In response to these concerns, there is a growing movement within and outside the tech industry calling for more responsible AI development. This includes initiatives to increase transparency, improve AI safety research, and develop ethical guidelines for AI applications.
However, critics argue that these efforts are often led by the same tech elites who stand to benefit from rapid AI advancement, raising questions about potential conflicts of interest. They call for more diverse voices and independent oversight in shaping the future of AI technology.
The ongoing debate has highlighted the crucial role of media in shaping public perception of AI and tech leaders. Some argue that the tech press has been too deferential to Silicon Valley elites, often amplifying their views without sufficient critical analysis [1].
As the controversy unfolds, there is a growing call for more nuanced and critical coverage of AI developments and the actions of tech industry leaders. This includes a push for greater scientific literacy among journalists and a more diverse range of expert voices in AI-related reporting.
Reference
[3]
IEEE Spectrum: Technology, Engineering, and Science News | "How and Why Gary Marcus Became AI's Leading Critic"