4 Sources
[1]
Can Sam Altman Be Trusted with the Future?
In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books -- romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google's researchers had done, he prompted it to predict the most probable next word in a sentence. The machine responded: one word, then another, and another -- each new term inferred from the patterns buried in those seven thousand books. Radford hadn't given it rules of grammar or a copy of Strunk and White. He had simply fed it stories. And, from them, the machine appeared to learn how to write on its own. It felt like a magic trick: Radford flipped the switch, and something came from nothing. His experiments laid the groundwork for ChatGPT, released in 2022.

Even now, long after that first jolt, text generation can still provoke a sense of uncanniness. Ask ChatGPT to tell a joke or write a screenplay, and what it returns -- rarely good, but reliably recognizable -- is a sort of statistical curve fit to the vast corpus it was trained on, every sentence containing traces of the human experience encoded in that data. When I'm drafting an e-mail and type, "Hey, thanks so much for," then pause, and the program suggests "taking," then "the," then "time," I've become newly aware of which of my thoughts diverge from the pattern and which conform to it. My messages are now shadowed by the general imagination of others. Many of whom, it seems, want to thank someone for taking . . . the . . . time.

That Radford's breakthrough happened at OpenAI was no accident. The organization had been founded, in 2015, as a nonprofit "Manhattan Project for A.I.," with early funding from Elon Musk and leadership from Sam Altman, who soon became its public face. Through a partnership with Microsoft, Altman secured access to powerful computing infrastructure. But, by 2017, the lab was still searching for a signature achievement. On another track, OpenAI researchers were teaching a T-shaped virtual robot to backflip: the bot would attempt random movements, and human observers would vote on which resembled a flip. With each round of feedback, it improved -- minimally, but measurably.

The company also had a distinctive ethos. Its leaders spoke about the existential threat of artificial general intelligence -- the moment, vaguely defined, when machines would surpass human intelligence -- while pursuing it relentlessly. The idea seemed to be that A.I. was potentially so threatening that it was essential to build a good A.I. faster than anyone else could build a bad one. Even Microsoft's resources weren't limitless; chips and processing power devoted to one project couldn't be used for another. In the aftermath of Radford's breakthrough, OpenAI's leadership -- especially the genial Altman and his co-founder and chief scientist, the faintly shamanistic Ilya Sutskever -- made a series of pivotal decisions. They would concentrate on language models rather than, say, back-flipping robots.
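The mechanism described above is simple enough to sketch in a few lines of Python. This is an illustration, not Radford's actual code: it uses the later, publicly released GPT-2 weights via Hugging Face's transformers library, and the e-mail prompt quoted above, to do exactly what the passage describes, repeatedly picking the most probable next token.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: GPT-2 is a released successor of the model discussed
# above; the prompt is the e-mail opening quoted in the article.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Hey, thanks so much for", return_tensors="pt").input_ids
for _ in range(3):  # predict three tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids=ids).logits  # a score for every vocabulary entry
    next_id = logits[0, -1].argmax()  # greedily take the most probable next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # plausibly something like "... for taking the time"

Everything the passage calls uncanny lives in that argmax: the "suggestion" is nothing more than the highest-scoring entry in the model's vocabulary, given everything typed so far.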
Since existing neural networks already seemed capable of extracting patterns from data, the team chose not to focus on network design but instead to amass as much training data as possible. They moved beyond Radford's cache of unpublished books and into a morass of YouTube transcripts and message-board chatter -- language scraped from the internet in a generalized trawl. That approach to deep learning required more computing power, which meant more money, putting strain on the original nonprofit model. But it worked. GPT-2 was released in 2019, an epochal event in the A.I. world, followed by the more consumer-oriented ChatGPT in 2022, which made a similar impression on the general public. User numbers surged, as did a sense of mystical momentum. At an off-site retreat near Yosemite, Sutskever reportedly set fire to an effigy representing unaligned artificial intelligence; at another retreat, he led colleagues in a chant: "Feel the AGI. Feel the AGI."

In the prickly "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" (Penguin Press), Karen Hao tracks the fallout from the GPT breakthroughs across OpenAI's rivals -- Google, Meta, Anthropic, Baidu -- and argues that each company, in its own way, mirrored Altman's choices. The OpenAI model of scale at all costs became the industry's default. Hao's book is at once admirably detailed and one long pointed finger. "It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman's singular drive, network, and fundraising talent, that created a ripe combination for its particular vision to emerge and take over," she writes. "Everything OpenAI did was the opposite of inevitable; the explosive global costs of its massive deep learning models, and the perilous race it sparked across the industry to scale such models to planetary limits, could only have ever arisen from the one place it actually did."

We have been, in other words, seduced -- lulled by the spooky, high-minded rhetoric of existential risk. The story of A.I.'s evolution over the past decade, in Hao's telling, is not really about the date of machine takeover or the degree of human control over the technology -- the terms of the A.G.I. debate. Instead, it's a corporate story about how we ended up with the version of A.I. we've got.

The "original sin" of this arm of technology, Hao writes, lay in a decision by a Dartmouth mathematician named John McCarthy, in 1955, to coin the phrase "artificial intelligence" in the first place. "The term lends itself to casual anthropomorphizing and breathless exaggerations about the technology's capabilities," she observes. As evidence, she points to Frank Rosenblatt, a Cornell professor who, in the late fifties, devised a system that could distinguish between cards with a small square on the right versus the left. Rosenblatt promoted it as brain-like -- on its way to sentience and self-replication -- and these claims were picked up and broadcast by the New York Times.

But a broader cultural hesitancy about the technology's implications meant that, once OpenAI made its breakthrough, Altman -- its C.E.O. -- came to be seen not only as a fiduciary steward but also as an ethical one. The background question that began to bubble up around the Valley, Keach Hagey writes in "The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future" (Norton), "first whispered, then murmured, then popping up in elaborate online essays from the company's defectors: Can we trust this person to lead us to AGI?"
Within the world of tech founders, Altman might have seemed a pretty trustworthy candidate. He emerged from his twenties not just very influential and very rich (which isn't unusual in Silicon Valley) but with his moral reputation basically intact (which is). Reared in a St. Louis suburb in a Reform Jewish household, the eldest of four children of a real-estate developer and a dermatologist, he had been identified early on as a kind of polymathic whiz kid at John Burroughs, a local prep school. "His personality kind of reminded me of Malcolm Gladwell," the school's head, Andy Abbott, tells Hagey. "He can talk about anything and it's really interesting" -- computers, politics, Faulkner, human rights. Altman came out as gay at sixteen. At Stanford, according to Hagey, whose biography is more conventional than Hao's but is quite compelling, he launched a student campaign in support of gay marriage and briefly entertained the possibility of taking it national. At an entrepreneur fair during his sophomore year, in 2005, the physically slight Altman stood on a table, flipped open his phone, declared that geolocation was the future, and invited anyone interested to join him. Soon, he dropped out and was running a company called Loopt. Abbott remembered the moment he heard that his former student was going into tech. "Oh, don't go in that direction, Sam," he said. "You're so personable!"
[2]
Book Review: 'Empire of AI,' by Karen Hao; 'The Optimist,' by Keach Hagey
Tim Wu is a law professor at Columbia University and the author of the forthcoming "The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity."

EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao

THE OPTIMIST: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey

The "paper clip problem" is a well-known ethics thought experiment in the world of artificial intelligence. It imagines a superintelligent A.I. charged with the seemingly harmless goal of making as many paper clips as possible. Trouble is, as the philosopher Nick Bostrom put it in 2003, without common-sense limits it might transform "first all of earth and then increasing portions of space into paper clip manufacturing facilities." The tale has long served as a warning about objectives pursued too literally. Two new books that orbit the entrepreneur Sam Altman and the firm he co-founded, OpenAI, suggest we may already be living with a version of the problem.

In "Empire of AI," the journalist Karen Hao, who has worked for The Wall Street Journal and contributes to The Atlantic, argues that the pursuit of an artificial superintelligence has become its own figurative paper clip factory, devouring too much energy, minerals and human labor. Meanwhile, "The Optimist," by the Wall Street Journal reporter Keach Hagey, leaves readers suspecting that the earnest and seemingly innocuous paper clip maker who ends up running the world for his own ends could be Altman himself.

"Empire of AI" is the broader and more critical of the two. Hao profiled OpenAI in 2020, two years before its most famous product, the intelligent chatbot called ChatGPT, debuted publicly. She portrays OpenAI and other companies that make up the fast-growing A.I. sector as a "modern-day colonial world order." Much like the European powers of the 18th and 19th centuries, they "seize and extract precious resources to feed their vision of artificial intelligence." In a corrective to tech journalism that rarely leaves Silicon Valley, Hao ranges well beyond the Bay Area with extensive fieldwork in Kenya, Colombia and Chile.

"The Optimist" is a more conventional biography, concentrated on Altman's life and times. Born in Chicago to progressive parents named Connie and Jerry -- in the 1980s, Jerry developed a way to spur investment in affordable housing -- Altman was heavily influenced by their do-gooder spirit. ("You can't out-nice Jerry," his friends would say.) Altman's relentlessly upbeat manner and genuine technical skill made him a perfect fit for Silicon Valley. Charming and smart, he tells people what they want to hear and has a knack for talking big in exactly the way 2010s Bay Area investors liked.

The arc of Altman's life also follows a classic script. He drops out of Stanford to launch a start-up that fizzles, but the effort brings him to the attention of Paul Graham, the co-founder of Y Combinator, an influential tech incubator that launched companies like Airbnb and Dropbox. By age 28, Altman has risen to succeed Graham as the organization's president, setting the stage for his leadership in the A.I. revolution. As Hagey makes clear, success in this context is all about the way you use the people you know.
The author supplies a meticulous account of 21st-century networking culture in Silicon Valley, where Altman's technical talents end up being less important than some of the qualities usually associated with religious leaders -- dare-to-dream boldness, self-effacing geniality and, she writes, "a skill for convincing people that he can see into the future." Not unlike ChatGPT, Altman molds himself into whatever people want him to be. As Graham once quipped, "You could parachute him into an island full of cannibals and come back in five years and he'd be the king."

During the 2010s Altman joined a group of Silicon Valley investors determined to recover the grand ambitions of earlier tech eras. Tired of start-ups based on incremental tweaks to social-media platforms or gig-work apps, they sought to return to outer space, unlock nuclear fusion, achieve human-level A.I. and even defeat death itself. The investor Peter Thiel was a major influence, but Altman's most important collaborator in the field of A.I. was Elon Musk.

The early-2010s Musk who appears in both books is almost unrecognizable to observers who now associate him with black MAGA hats and chain-saw antics. This Musk, the builder of Tesla and SpaceX, believes that creating superintelligent computer systems is "summoning the demon." He becomes obsessed with the idea that Google will soon develop a true artificial intelligence and allow it to become a force for evil. Altman, dining regularly with Musk, mirrors his anxieties and persuades him to bankroll a more idealistic rival. "If it's going to happen," Altman emailed Musk in 2015, "it seems like it would be good for someone other than Google to do it first." He pitched a "Manhattan Project for A.I.," a nonprofit to develop a good A.I. in order to save humanity from its evil twin, just as the actual Manhattan Project sought to outrace the Nazis to the atomic bomb. Musk guaranteed $1 billion and even supplied the name OpenAI.

Hagey's book, written with Altman's cooperation, is less critical, but no hagiography. "The Optimist" lets the reader see how thoroughly Altman outfoxed his patron, leveraging Musk's paranoia into enormous sums of money while slowly making OpenAI his own. It's striking that, despite providing much of the initial capital and credibility, Musk ends up with almost nothing to show for his investment.

Hao's 2020 profile of OpenAI, published in the M.I.T. Technology Review, was unflattering and the company declined to cooperate with her for her book. She believes that OpenAI was "begun as a sincere stroke of idealism," but she wants to make its negative spillover effects evident. Hao does an admirable job of pulling the camera back, telling the stories of workers in Nairobi who earn "starvation wages to filter out violence and hate speech" from ChatGPT, and of visits to communities in Chile where data centers siphon prodigious amounts of water and electricity to run complex hardware.

Both books climax with the weekend in November 2023 when Altman was abruptly fired by his company's board, only to be reinstated days later after staff members and investors revolted. From the outside, many critics saw the coup as a last-ditch effort to stop OpenAI from becoming the very Eye of Sauron it was founded to restrain. Hagey renders this moment as a conventional board mutiny: Directors had tired of Altman's "duplicity and calamitous aversion to conflict."
One of them, the OpenAI co-founder Ilya Sutskever, recalls that Altman "would tell him one thing, then say another, and act as if the difference was an accident." Sutskever said he regretted his vote to oust Altman, but after Altman returned to OpenAI, Sutskever left the company.

Hao's version is darker. Relying on a lot of the same sources as Hagey, she presents Altman as a peddler of "many little lies and some big ones," who helped create "a directionless, chaotic and back-stabbing environment." In her book, Sutskever comes to see Altman as engaging in what some of his colleagues call "psychological abuse."

Together, these two excellent and deeply reported books form a diptych. On one panel stands Altman as the secular prophet preaching human progress and boundless optimism. Hagey calls Altman a "brilliant deal maker with a need for speed and a love of risk, who believes in technological progress with an almost religious conviction." On the other panel is Altman the opportunist. He uses idealism as a tool, harnessing the concept of human progress to build an empire the way Europeans once used Christianity to justify conquest.

Altman recently told the statistician Nate Silver that if we achieve human-level A.I., "poverty really does just end." But motives matter. History suggests that some technologies aimed at growth have taken a bad situation and made it worse. The efficiencies of the cotton gin, for instance, saved on labor but made slavery even more lucrative. If the aim is not, in the first place, to help the world, but instead to get bigger -- better chips, more data, smarter code -- then our problems might just get bigger too.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI | By Karen Hao | Penguin Press | 482 pp. | $32

THE OPTIMIST: Sam Altman, OpenAI, and the Race to Invent the Future | By Keach Hagey | Norton | 367 pp. | $31.99
[3]
'Empire of AI' author on OpenAI's cult of AGI and why Sam Altman tried to discredit her book
When OpenAI unleashed ChatGPT on the world in November 2022, it lit the fuse that ignited the generative AI era. But Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, had already been covering OpenAI for years. The book comes out on May 20, and it reveals surprising new details about the company's culture of secrecy and religious devotion to the promise of AGI, or artificial general intelligence. Hao profiled the company for MIT Technology Review two years before ChatGPT launched, putting it on the map as a world-changing company. Now, she's giving readers an inside look at pivotal moments in the history of artificial intelligence, including the moment when OpenAI's board forced out CEO and cofounder Sam Altman. (He was later reinstated because of employee backlash.)

Empire of AI dispels any doubt that OpenAI's belief in ushering in AGI to benefit all of humanity had messianic undertones. One of the many stories from Hao's book involves Ilya Sutskever, cofounder and former chief scientist, burning an effigy on a team retreat. The wooden effigy "represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI's duty, he said, was to destroy it." Sutskever would later do this again at another company retreat, Hao wrote.

And in interviews with OpenAI employees about the potential of AGI, Hao details their "wide-eyed wonder" when "talking about how it would bring utopia. Someone said, 'We're going to reach AGI and then, game over, like, the world will be perfect.' And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear."

Hao's seven years of covering AI have culminated in Empire of AI, which details OpenAI's rise to dominance, casting it as a modern-day empire. That Hao's book reminded me of The Anarchy, the account of the OG corporate empire, The East India Company, is no coincidence. Hao reread William Dalrymple's book while writing her own "to remind [herself] of the parallels of a company taking over the world."

This is likely not a characterization that OpenAI wants. In fact, Altman went out of his way to discredit Hao's book on X. "There are some books coming out about OpenAI and me. We only participated in two... No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to."

The two authors Altman named are Keach Hagey and Ashlee Vance, and they also have forthcoming books. The unnamed author was Hao, of course. She said OpenAI promised to cooperate with her for months, but never did. We get into that drama in the interview below, plus OpenAI's religious fervor for AGI, the harms AI has already inflicted on the Global South, and what else Hao would have included if she'd kept writing the book.

Karen Hao: I'm really glad that you used religious belief to describe that, because I don't remember if I explicitly used that word, but I was really trying to convey it through the description. This was a thing that honestly was most surprising to me while reporting the book. There is so much religious rhetoric around AGI, you know, 'AI will kill us' versus 'AI will bring us to utopia.' I thought it was just rhetoric.
When I first started reporting the book, the general narrative among more skeptical people was, 'Oh, of course they're going to say that AI can kill people, or AI will bring utopia, because it creates this image of AI being incredibly powerful, and that's going to help them sell more products.' What I was surprised by was, no, it's not just that. Maybe there are some people who do just say this as rhetoric, but there are also people who genuinely believe these things. I spoke to people with wide-eyed wonder when they were talking about how it would bring utopia. Someone said, 'We're going to reach AGI and then, game over, like, the world will be perfect.' And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear.

I was really shocked by that level of all-consuming belief that a lot of people within this space start to have, and I think part of it is because they're doing something that is kind of historically unprecedented. The amount of power to influence the world is so profound that I think they start to need religion; some kind of belief system or value system to hold on to. Because you feel so inadequate otherwise, having all that responsibility. Also, the community is so insular. Because I talked with some people over several years, I noticed that the language they use and how they think about what they're doing fundamentally evolves as they get more and more sucked into this world. You start using more and more religious language, and more and more of this perspective really gets to you.

It's like Dune, where [Lady Jessica] builds a myth around Paul Atreides, one she purposely constructs so that he becomes powerful, and they have this idea that this is the way to control people. To create a religion, you create a mythology around it. Not only do the people who hear it for the first time genuinely believe it, because they don't realize that it was a construct, but Paul Atreides himself starts to believe it more and more, and it becomes a self-fulfilling prophecy. Honestly, when I was talking with people for the book, I was like, this is Dune.

I think what's happening here is twofold. First, we need to remember that when designing these systems, AI companies prioritize their own problems. They do this both implicitly -- in the way that Silicon Valley has always done, creating apps for first-world problems like laundry and food delivery, because that's what they know -- and explicitly. My book talks about how Altman has long pushed OpenAI to focus on AI models that can excel at code generation because he thinks they will ultimately help the company entrench its competitive advantage. As a result, these models are designed to best serve the people who develop them. And the farther away your life is from theirs in Silicon Valley, the more this technology begins to break down for you.

The second thing that's happening is more meta. Code generation has become the main use case in which AI models are more consistently delivering workers productivity gains, both for the reasons mentioned above and because code is particularly well suited to the strengths of AI models. Code is computable. People who don't code or don't exist in the Silicon Valley worldview see the leaps in code-generation capabilities as leaps in just one use case.
But in the AI world, there is a deeply entrenched worldview that everything about the world is ultimately, with enough data, computable. So, to people who exist in that mind frame, the leaps in code generation represent something far more than just code generation. It's emblematic of AI one day being able to master everything.

I originally did not plan to focus the book that much on OpenAI. I actually wanted to focus the book on this idea that the AI industry has become a modern-day empire. And this was based on work that I did at MIT Technology Review in 2020 and 2021 about AI colonialism. It was exploring this idea, which was starting to crop up a lot in academia and among research circles, that there are lots of different patterns we are starting to see where this pursuit of extremely resource-intensive AI technologies is leading to a consolidation of resources, wealth, power, and knowledge. And in a way, it's no longer sufficient to kind of call them companies anymore. To really understand the vastness and the scale of what's happening, you really have to start thinking about it more as an empire-like phenomenon.

At the time, I did a series of stories looking at communities around the world, especially in the Global South, that are experiencing this kind of AI revolution, but as vulnerable populations that were not in any way seeing the benefits of the technology, but were being exploited by either the creation of the technology or the deployment of it. And that's when ChatGPT came out... and all of a sudden we were recycling old narratives of 'AI is going to transform everything, and it's amazing for everyone.' So I thought, now is the time to reintroduce everything but in this new context. Then I realized that OpenAI was actually the vehicle to tell this story, because they were the company that completely accelerated the absolutely colossal amount of resources that is going into this technology and the empire-esque nature of it all.

As I started covering AI more and more, I developed this really strong feeling that the story of AI and society cannot be understood exclusively from its centers of power. Yes, we need reporting to understand Silicon Valley and its worldview. But also, if we only ever stay within that worldview, we won't be able to fully understand the sheer extent of how AI then affects real people in the real world. The world is not represented by Silicon Valley, and the global majority or the Global South are the true test cases for whether or not a technology is actually benefiting humanity, because the technology is usually not built with them in mind.

All technology revolutions leave some people behind. But the problem is that the people who are left behind are always the same, and the people who gain are always the same. So are we really getting progress from technology if we're just exacerbating inequality more and more, globally? That's why I wanted to write the stories that were in places far and away from Silicon Valley. Most of the world lives that way, without access to basic resources, without a guarantee of being able to put healthy food on the table for their kids or of knowing where the next paycheck is going to come from. And so unless we explore how AI actually affects these people, we're never really going to understand what it's going to mean ultimately for all of us.

I was really lucky in that I started covering AI before all the companies started closing themselves off and obfuscating technical details.
And so, for me, it was an incredibly dramatic shift to go from companies being incredibly open (publishing their data, publishing their model weights, publishing analyses of how their models are performing, independent auditors getting access to models, things like that) to this state where all we get is just PR. So that was part of it, just saying, it wasn't actually like this before. And it is yet another example of why empires are the way to think about this, because empires control knowledge production. How they perpetuate their existence is by continuously massaging the facts and massaging science to allow them to continue to persist. But also, if it wasn't like this before, I hope that it'll give people a greater sense of hope themselves, that this can change. This is not some inevitable state of affairs.

And we really need more transparency in how these technologies are developed. They're the most consequential technologies being developed today, and we literally can't say basic things about them. We can't say how much energy they use or how much carbon they produce; we can't even say where the data centers are that are being built half the time. We can't say how much discrimination is in these tools, and we're giving them to children in classrooms and to doctors' offices to start supporting medical decisions. The levels of opacity are so glaring, and it's shocking that we've kind of been lulled into this sense of normalcy. I hope that it's a bit of a wake-up call that we shouldn't accept this.

Obviously, he's a very strategic and tactical person and generally very aware of how things that he does will land with people, especially with the media. So, honestly, my first reaction was just... why? Is there some kind of 4D chess game? I just don't get it. But, yeah, we did see a rise in interest from a lot of journalists being like, 'Oh, now I really need to see what's in the book.'

When I started the book, OpenAI said that they would cooperate with it, and we had discussions for almost six months about them participating. And then at the six-month mark, they suddenly reversed their position. I was really disheartened by that, because I felt like now I have a much harder task of trying to tell this story and trying to accurately reflect their perspective without really having them participate in the book. But I think it ended up making the book a lot stronger, because I ended up being even more aggressive in my reporting... So in hindsight, I think it was a blessing.

When I approached them about the book, I was very upfront and said, 'You know all the things that I've written. I'm going to come with a critical perspective, but obviously I want to be fair, and I want to give you every opportunity to challenge some of the criticisms that I might bring from my reporting.' Initially, they were open to that, which is a credit to them. I think what happened was it just kept dragging out, and I started wondering how sincere they actually were, or whether they were offering this as a carrot to try and shape how many people I reached out to myself, because I was hesitant to reach out to people within the company while I was still negotiating for interviews with the communications team. But at some point, I realized I was running out of time and just needed to go through with my reporting plan, so I started reaching out to people within the company.
My theory is that it frustrated them that I emailed people directly, and because there were other book opportunities, they decided that they didn't need to participate in every book. They could just participate in the ones they wanted to. So it became kind of a done decision that they would no longer participate in mine, and would go with the others.

For sure the Stargate Project and DeepSeek. The Stargate Project is just such a perfect extension of what I talk about in the book, which is that the level of capital and resources, and now the level of power infrastructure and water infrastructure, being influenced by these companies is hard to even grasp. Once again, we are getting to a new age of empire. They're literally land-grabbing and resource-grabbing. The Stargate Project was originally announced as a $500 billion spend over four years. The Apollo Program was $380 billion over 13 years, if you account for it in 2025 dollars. If it actually goes through, it would be the largest amount of capital spent in history to build infrastructure for a technology whose track record, ultimately, is still middling. We haven't actually seen that much economic progress; it's not broad-based at all. In fact, you could argue that the current uncertainty that everyone feels about the economy and jobs disappearing is actually the real scorecard of what the quest for AGI has brought us.

And then DeepSeek... the fundamental lesson of DeepSeek was that none of this is actually necessary. I know that there's a lot of controversy around whether they distilled OpenAI's models or actually spent the amount that they said they did. But OpenAI could have distilled their own models. Why didn't they distill their models? None of this was necessary. They do not need to build $500 billion of infrastructure. They could have spent more time innovating on more efficient ways of reaching the same level of performance in their technologies. But they didn't, because they haven't had the pressure to do so, with the sheer amount of resources that they can get access to through Altman's once-in-a-generation fundraising capabilities.

The story of the empire of AI is so deeply connected to what's happening right now with the Trump Administration and DOGE and the complete collapse of democratic norms in the U.S., because this is what happens when you allow certain individuals to consolidate so much wealth, so much power, that they can basically just manipulate democracy. AI is just the latest vehicle by which that is happening, and democracy is not inevitable. If we want to preserve our democracy, we need to fight like hell to protect it, and recognize that the way Silicon Valley is currently talking about weaponizing AI as a sort of narrative for the future is actually cloaking a massive acceleration of the erosion and reversal of democracy.
[4]
'Every person that clashed with him has left': the rise, fall and spectacular comeback of Sam Altman
From Elon Musk to his own board, anyone who has come up against the OpenAI CEO has lost. In a gripping new account of the battle for AI supremacy, writer Karen Hao says we should all be wary of the power he now wields.

The short-lived firing of Sam Altman, the CEO of possibly the world's most important AI company, was sensational. When he was sacked by OpenAI's board members, some of them believed the stakes - the future of humanity - could not have been higher if the organisation continued under Altman. Imagine Succession, with added apocalypse vibes. In early November 2023, after three weeks of secret calls and varying degrees of paranoia, the OpenAI board agreed: Altman had to go.

The drama didn't stop there. After his removal, Altman's most loyal staff resigned, and others signed an open letter calling for his reinstatement. Investors, including its biggest, Microsoft, got spooked. Without talent or funding, OpenAI - which developed ChatGPT and was worth billions - wouldn't even exist. Some who had been involved in the decision to fire Altman switched sides, and within days he was reinstated.

Is he now untouchable? "Certainly he has entrenched his power," says Karen Hao, the tech journalist whose new book, Empire of AI, details this saga in a tense and absorbing history of OpenAI. The current board is "much more allied with his interests," she says.

Hao's book is a gripping read (subtitle: "Inside the Reckless Race for Total Domination"), featuring the unimaginably rich, as well as people in developing countries who are paid a pittance to root out the horrific sexual and violent content in the internet data that AI is trained on. The cast of characters that make up OpenAI have brilliant minds and often eccentric behaviour. Elon Musk, after all, is one of its founders. Another founder and its chief scientist, Ilya Sutskever - who would be part of the failed attempt to remove Altman - dramatically illustrated his fears about the "unaligned" AI they had created by burning a wooden effigy constructed to represent it in 2023, while his senior colleagues stood around a firepit at a luxury resort, wearing bathrobes.

At the centre of it all is Altman, OpenAI's charismatic co-founder and CEO who is, depending how you view him, the villain who has put humanity on the path to mass extinction, or the visionary utopian who will bring us cures for diseases and a revolution in how we work. In the less than two years it has taken Hao to write her book, Altman, 40, appears to have outmanoeuvred his dissenters and has announced plans to raise $7tn. Hao describes Altman as a "once-in-a-generation fundraising talent" and claims OpenAI's chances of winning the AI arms race depend on raising vast sums.

"He persuades people into ceding power to him, he doesn't seize it. The reason he's able to do this is because he's incredibly good at understanding what people want and what motivates them. I came to understand that ... when he is talking to someone, what comes out of his mouth is not necessarily correlated as much with his own beliefs as it is with what he thinks the other person needs to hear." It's why, she says, he was able to recruit so many talented people and get so much investment (and also what made some on his original board, and senior employees, nervous). "It's also why he was able to pull off something that most people would not be able to do, which is to get the public to buy into this premise that he's doing something profoundly good for society, just long enough to get away with it."
Within OpenAI, Hao points out, "every single person that has ever clashed with him about his vision of AI development has left - Musk has left, Dario Amodei has left, Sutskever has left [the three were early leaders in OpenAI] and a whole host of other people. Ultimately, they had a different idea of how AI should be developed. They challenged Altman, and Altman won."

In 2021, Altman's sister Annie made the shocking allegation on what was then Twitter that he had sexually abused her as a child (he is nine years older). In January this year, she filed a lawsuit against him. In a statement released by Altman, his mother and his two brothers, they described the allegations as "utterly untrue" (his father died in 2018). Hao had several conversations with Annie, piecing together how her life unravelled. A bright child who planned to go to medical school, she suffered with poor mental health, and then developed a series of chronic physical health issues as a young adult. After her father's death, her health declined even further and, as her family started cutting off financial help, she became estranged from them.

Annie, says Hao, "is such a perfect case study of why we need to be sceptical of what Sam Altman says about the benefits of AI". Altman claims AI is going to solve poverty and improve healthcare, but Annie - who lives in poverty and has health issues - hasn't seen any of the benefits, says Hao. "She's representative of more people, and how they live in the world, than he is, and it just so happens that this perfect case study is also his sister."

Despite agreeing to speak to Hao for her book, OpenAI pulled out when they found out she was in touch with Annie. "This should be a family thing," says Hao. "Why is a company representative now making this their top issue? That illustrated to me how important Sam, the man, is to the company." It highlighted to her, she says, that Silicon Valley companies, particularly when faced with criticism, "can bring their full power to bear to quash that dissent".

Hao studied mechanical engineering at university and moved to San Francisco after graduation to work for a startup. "I thought that was going to be my career, to be in Silicon Valley and do that whole journey," she says, when we speak over Zoom. "I pretty much realised within the first year that the incentives within Silicon Valley for technology development were not aligned with the public interest." So she moved into journalism and, writing for the magazine MIT Technology Review, became fascinated by AI. "I was primarily spending all my time talking with researchers within companies that were operating in academic-like environments, where they didn't really have any commercial objectives. There was so much diversity of research happening."

There were also healthy debates. This was in 2016, around the time Donald Trump won his first election, and there had been a lot of criticism of the tech industry. "There was emerging research on AI and its impact on society. What are the harms? What are the biases embedded in models that lead to potential widespread discrimination and civil rights issues? That's kind of where the AI world and discourse was before it got derailed by ChatGPT."

Within days of the release of ChatGPT in late 2022, it had one million users. Within a couple of months, it had 100 million users and had become the fastest-growing consumer app in history. Hao felt its dazzling success had overshadowed those kinds of debates, at least in the mainstream.
"People were just buying what OpenAI and other companies were spoon-feeding in terms of narratives, like: this is going to cure cancer, this is going to solve the climate crisis, all these utopic things that you can't even dream of." She started working on what would become her book, looking at the history of OpenAI and its competitors. "Only when you have that context can you begin to understand that what these companies say should not be taken at face value." Before Hao started following OpenAI more closely, she says she had a "pretty positive impression. I was curious about them; they were not a company, they were a non-profit, and they talked about how they were going to be transparent, open their research, and were focused on benefiting society". Then, in 2019, things started to change; OpenAI developed a "capped-profit" structure (investors would have their returns capped at a very generous 100 times), Altman became CEO, they signed a billion-dollar deal with Microsoft, and started to withhold their research. "It seemed like quite a significant shift," says Hao. "That is one of the reasons why OpenAI has had so much drama, but it's also emblematic of AI development - it's so much driven by ideology," she says. "There's this clash of egos and ideologies that's happening, to try to seize the direction." Within OpenAI, whether boomer (those who want to scale as fast as possible) or doomer (those who believe AI is a threat to humanity), the finish line was the same: to develop, and therefore control, AI first. Does Hao think AI poses an existential threat? "The biggest and most pressing threat is that AI is going to completely erode democracy and, if you understand that, the conclusion is then we should just stop developing this technology in the way that these companies are developing it." The funnelling of resources "is a completely different scale than previous tech companies ... They're trying to justify raising the largest private investment rounds again and again - OpenAI having just raised $40bn in the latest round". That kind of concentration of wealth, she says, "is in and of itself a threat. We are already seeing that play out with the US government, with the takeover by unelected tech billionaires." The apocalyptic visions of a superintelligent AI turning against humanity have been a distraction, she thinks. "Ultimately, what's going to cause catastrophe is people, not rogue AI, and we need to watch what the people are doing." However, she has met people who genuinely believe AI will destroy humanity. "I spoke to people whose voices were literally quivering in fear, that is the degree to which they believe, and if you truly believe that, that's terrifying." Then there are those who use the idea of how AI could become so powerful as "a rhetorical tool to continue saying: 'That's why we good people need to continue controlling the technology and not have it be in the hands of bad people.'" But as far as Hao can see, "we've not gotten more democratic technologies, but more authoritarian ones." Neither does Hao have much sympathy for the argument that the development of AI requires huge investment. "I don't think it needs the level of investment these companies say it needs," she says. "They have already spent hundreds of billions of dollars on developing a technology that has yet to achieve what they said it's going to achieve," says Hao. "And now you expect us to spend trillions? At what point do we decide that actually they've just failed? 
"When I was covering AI pre-ChatGPT and the wide range of research that was happening, there were such exciting ideas ... ChatGPT erased people's imaginations for what else could be possible." Generative AI has taken over - not just OpenAI, but at other tech companies including Google's DeepMind - and this, says Hao, "has distorted the landscape of research, because talent goes where the money goes." The money doesn't flow equally, though. Hao interviews people working for outsourced companies in Kenya, Colombia and Chile, who annotate the data that generative AI is trained on, sifting out harmful content for low pay and without much thought for their mental health. The AI, meanwhile, is powered by vast datacentres, buildings packed with computers, that require a huge amount of energy to run, and whose cooling systems require a huge amount of water. In the near future there will be even bigger datacentres known as "mega-campuses". Just one of these could use more energy than three cities the size of San Francisco. The premise of her book is that the AI giants are running an empire. But history shows us that empires can and do fall. Hao sees each step of the supply chain as a potential site of resistance. Artists and writers, for instance, are pushing back against their work being used to train generative AI (the Guardian has a deal with OpenAI for the use of its content). Enforcing data privacy laws, "are also ways to contain the empire", as is forcing companies to be transparent about their environmental impact, from their energy consumption to where and how the minerals needed for hardware are extracted. Tech companies, says Hao, "want their tools to feel like magic" but she would like more public education to make people realise that each AI prompt uses resources and energy. Hitting these pressure points and more means, she says, "we can slowly shift back to a more democratic model of governing AI". Compelling though he is, this isn't just about Altman, the reigning emperor of AI. "It will take a far more concerted effort now to remove him," she says, but adds, "we fixate a bit too much on the individual". If, or when, Altman chooses to step down or is successfully ousted, will his successor be any different? "OpenAI is ultimately a product of Silicon Valley." And anybody who may one day replace Altman, says Hao with foreboding, is going to pursue the same objective: "To build and fortify the empire."
A comprehensive look at OpenAI's journey, Sam Altman's leadership, and the ethical concerns surrounding the rapid development of artificial intelligence.
OpenAI, co-founded by Sam Altman and others in 2015, emerged as a nonprofit "Manhattan Project for AI" with early funding from Elon Musk [1]. The organization's breakthrough came in 2017, when Alec Radford, a young OpenAI engineer, experimented with transformer neural networks, leading to the development of language models that could generate human-like text [1]. This innovation laid the groundwork for ChatGPT, released in 2022, which revolutionized the AI industry [1].

Sam Altman, described as a "once-in-a-generation fundraising talent," became the public face of OpenAI [4]. His leadership style, characterized by charm, technical skill, and an ability to tailor his message to his audience, has been both praised and criticized [2]. Altman's vision for AI development has been central to OpenAI's strategy, focusing on language models and amassing vast amounts of training data [1].

OpenAI's success sparked an industry-wide race to develop increasingly powerful AI systems. This competition has raised concerns about resource allocation, with critics arguing that the pursuit of artificial general intelligence (AGI) is consuming too much energy, minerals, and human labor [2]. The company's approach to AI development, described as a "modern-day colonial world order" by some, has led to ethical debates about the impact on global resources and labor [2].

The AI community, particularly at OpenAI, has developed a culture that some describe as having religious undertones. Employees and leaders often speak of AGI in terms of utopia or existential threat, reflecting the profound impact they believe their work will have on humanity [3]. This fervor is exemplified by incidents such as Ilya Sutskever, OpenAI's co-founder and former chief scientist, burning effigies representing unaligned AI at company retreats [3].

OpenAI's journey has been marked by power struggles and governance issues. The brief firing and subsequent reinstatement of Sam Altman in November 2023 highlighted the complex dynamics within the company [4]. This event, described as "Succession, with added apocalypse vibes," demonstrated Altman's ability to consolidate power, with many who challenged his vision ultimately leaving the company [4].

While OpenAI and Altman promote AI as a solution to global problems, critics point out the disconnect between these promises and reality. The company's practices, particularly in data collection and processing, have raised concerns about exploitation in developing countries [3]. Additionally, personal controversies, such as allegations made by Altman's sister, have further complicated the public perception of OpenAI and its leadership [4].

As OpenAI continues to lead in AI development, questions remain about the long-term implications of its approach. The company's ability to raise vast sums of money and attract top talent has positioned it at the forefront of the AI revolution [4]. However, the concentration of power and resources in the hands of a few tech companies raises concerns about the democratic control and ethical development of AI technologies that could shape the future of humanity [2][3][4].