Curated by THEOUTPOST
On Mon, 27 Jan, 12:01 AM UTC
4 Sources
[1]
Why Reid Hoffman feels optimistic about our AI future | TechCrunch
In Reid Hoffman's new book Superagency: What Could Possibly Go Right With Our AI Future, the LinkedIn co-founder makes the case that AI can extend human agency -- giving us more knowledge, better jobs, and improved lives -- rather than reducing it. That doesn't mean he's ignoring the technology's potential downsides. In fact, Hoffman (who wrote the book with Greg Beato) describes his outlook on AI, and on technology more generally, as one focused on "smart risk taking" rather than blind optimism. "Everyone, generally speaking, focuses way too much on what could go wrong, and insufficiently on what could go right," Hoffman told me. And while he said he supports "intelligent regulation," he argued that an "iterative deployment" process that gets AI tools into everyone's hands and then responds to their feedback is even more important for ensuring positive outcomes. "Part of the reason why cars can go faster today than when they were first made, is because ... we figured out a bunch of different innovations around brakes and airbags and bumpers and seat belts," Hoffman said. "Innovation isn't just unsafe, it actually leads to safety." In our conversation about his book, we also discussed the benefits he's already seeing from AI, the technology's potential climate impact, and the difference between an AI doomer and an AI gloomer. This interview has been edited for length and clarity. You'd already written another book about AI, Impromptu. With Superagency, what did you want to say that you hadn't already? So Impromptu was mostly trying to show that AI could [provide] relatively easy amplification [of] intelligence, and was showing it as well as telling it across a set of vectors. Superagency is much more about the question around how, actually, our human agency gets greatly improved, not just by superpowers, which is obviously part of it, but by the transformation of our industries, our societies, as multiple of us all get these superpowers from these new technologies. The general discourse around these things always starts with a heavy pessimism and then transforms into -- call it a new elevated state of humanity and society. AI is just the latest disruptive technology in this. Impromptu didn't really address the concerns as much ... of getting to this more human future. You open by dividing the different outlooks on AI into these categories -- gloomers, doomers, zoomers, bloomers. We can dig into each of them, but we'll start with a bloomer since that's the one you classify yourself as. What is a bloomer, and why do you consider yourself one? I think a bloomer is inherently technology optimistic and [believes] that building technologies can be very, very good for us as individuals, as groups, as societies, as humanity, but that [doesn't mean] anything you can build is great. So you should navigate with risk taking, but smart risk taking versus blind risk taking, and that you engage in dialogue and interaction to steer. It's part of the reason why we talk about iterative deployment a lot in the book, because the idea is, part of how you engage in that conversation with many human beings is through iterative deployment. You're engaging with that in order to steer it to say, "Oh, if it has this shape, it's much, much better for everybody. And it makes these bad cases more limited, both in how prevalent they are, but also how much impact they can have." 
And when you talk about steering, there's regulation, which we'll get to, but you seem to think the most promise lies in this sort of iterative deployment, particularly at scale. Do you think the benefits are just built in -- as in, if we put AI into the hands of the most people, it's inherently small-d democratic? Or do you think the products need to be designed in a way where people can have input? Well, I think it could depend on the different products. But one of the things [we're] trying to illustrate in the book is to say that just being able to engage and to speak about the product -- including use, don't use, use in certain ways -- that is actually, in fact, interacting and helping shape [it], right? Because the people building them are looking at that feedback. They're looking at: Did you engage? Did you not engage? They're listening to people online and the press and everything else, saying, "Hey, this is great." Or, "Hey, this really sucks." That is a huge amount of steering and feedback from a lot of people, separate from what you get from my data that might be included in iteration, or that I might be able to vote or somehow express direct, directional feedback. I guess I'm trying to dig into how these mechanisms work because, as you note in the book, particularly with ChatGPT, it's become so incredibly popular. So if I say, "Hey, I don't like this thing about ChatGPT" or "I have this objection to it and I'm not going to use it," that's just going to be drowned out by so many people using it. Part of it is, having hundreds of millions of people participate doesn't mean that you're going to answer every single person's objections. Some people might say, "No car should go faster than 20 miles an hour." Well, it's nice that you think that. It's that aggregate of [the feedback]. And in the aggregate if, for example, you're expressing something that's a challenge or hesitancy or a shift, but then other people start expressing that, too, then it is more likely that it'll be heard and changed. And part of it is, OpenAI competes with Anthropic and vice versa. They're listening pretty carefully to not only what are they hearing now, but ... steering towards valuable things that people want and also steering away from challenging things that people don't want. We may want to take advantage of these tools as consumers, but they may be potentially harmful in ways that are not necessarily visible to me as a consumer. Is that iterative deployment process something that is going to address other concerns, maybe societal concerns, that aren't showing up for individual consumers? Well, part of the reason I wrote a book on Superagency is so people actually [have] the dialogue on societal concerns, too. For example, people say, "Well, I think AI is going to cause people to give up their agency and [give up] making decisions about their lives." And then people go and play with ChatGPT and say, "Well, I don't have that experience." And if very few of us are actually experiencing [that loss of agency], then that's the quasi-argument against it, right? You also talk about regulation. It sounds like you're open to regulation in some contexts, but you're worried about regulation potentially stifling innovation. Can you say more about what you think beneficial AI regulation might look like? So, there's a couple areas, because I actually am positive on intelligent regulation. 
One area is when you have really specific, very important things that you're trying to prevent -- terrorism, cybercrime, other kinds of things. You're trying to, essentially, prevent this really bad thing, but allow a wide range of other things, so you can discuss: What are the things that are sufficiently narrowly targeted at those specific outcomes? Beyond that, there's a chapter on [how] innovation is safety, too, because as you innovate, you create new safety and alignment features. And it's important to get there as well, because part of the reason why cars can go faster today than when they were first made, is because we go, "Oh, we figured out a bunch of different innovations around brakes and airbags and bumpers and seat belts." Innovation isn't just unsafe, it actually leads to safety. What I encourage people, especially in a fast moving and iterative regulatory environment, is to articulate what your specific concern is as something you can measure, and start measuring it. Because then, if you start seeing that measurement grow in a strong way or an alarming way, you could say, "Okay, let's, let's explore that and see if there's things we can do." There's another distinction you make, between the gloomers and the doomers -- the doomers being people who are more concerned about the existential risk of super intelligence, gloomers being more concerned about the short-term risks around jobs, copyright, any number of things. The parts of the book that I've read seem to be more focused on addressing the criticisms of the gloomers. I'd say I'm trying to address the book to two groups. One group is anyone who's between AI skeptical -- which includes gloomers -- to AI curious. And then the other group is technologists and innovators saying, "Look, part of what really matters to people is human agency. So, let's take that as a design lens in terms of what we're building for the future. And by taking that as a design lens, we can also help build even better agency-enhancing technology." What are some current or future examples of how AI could extend human agency as opposed to reducing it? Part of what the book was trying to do, part of Superagency, is that people tend to reduce this to, "What superpowers do I get?" But they don't realize that superagency is when a lot of people get super powers, I also benefit from it. A canonical example is cars. Oh, I can go other places, but, by the way, when other people go other places, a doctor can come to your house when you can't leave, and do a house call. So you're getting superagency, collectively, and that's part of what's valuable now today. I think we already have, with today's AI tools, a bunch of superpowers, which can include abilities to learn. I don't know if you've done this, but I went and said, "Explain quantum mechanics to a five-year-old, to a 12-year-old, to an 18-year-old." It can be useful at -- you point the camera at something and say, "What is that?" Like, identifying a mushroom or identifying a tree. But then, obviously there's a whole set of different language tasks. When I'm writing Superagency, I'm not a historian of technology, I'm a technologist and an inventor. But as I research and write these things, I then say, "Okay, what would a historian of technology say about what I've written here?" When you talk about some of these examples in the book, you also say that when we get new technology, sometimes old skills fall away because we don't need them anymore, and we develop new ones. 
And in education, maybe it makes this information accessible to people who might otherwise never get it. On the other hand, you do hear these examples of people who have been trained and acclimated by ChatGPT to just accept an answer from a chatbot, as opposed to digging deeper into different sources or even realizing that ChatGPT could be wrong. It is definitely one of the fears. And by the way, there were similar fears with Google and search and Wikipedia, it's not a new dialogue. And just like any of those, the issue is, you have to learn where you can rely upon it, where you should cross check it, what the level of importance cross checking is, and all of those are good skills to pick up. We know where people have just quoted Wikipedia, or have quoted other things they found on the internet, right? And those are inaccurate, and it's good to learn that. Now, by the way, as we train these agents to be more and more useful, and have a higher degree of accuracy, you could have an agent who is cross checking and says, "Hey, there's a bunch of sources that challenge this content. Are you curious about it?" That kind of presentation of information enhances your agency, because it's giving you a set of information to decide how deep you go into it, how much you research, what level of certainty you [have.] Those are all part of what we get when we do iterative deployment. In the book, you talk about how people often ask, "What could go wrong?" And you say, "Well, what could go right? This is the question we need to be asking more often." And it seems to me that both of those are valuable questions. You don't want to preclude the good outcomes, but you want to guard against the bad outcomes. Yeah, that's part of what a bloomer is. You're very bullish on what could go right, but it's not that you're not in dialogue with what could go wrong. The problem is, everyone, generally speaking, focuses way too much on what could go wrong, and insufficiently on what could go right. Another issue that you've talked about in other interviews is climate, and I think you've said the climate impacts of AI are misunderstood or overstated. But do you think that widespread adoption of AI poses a risk to the climate? Well, fundamentally, no, or de minimis, for a couple reasons. First, you know, the AI data centers that are being built are all intensely on green energy, and one of the positive knock-on effects is ... that folks like Microsoft and Google and Amazon are investing massively in the green energy sector in order to do that. Then there's the question of when AI is applied to these problems. For example, DeepMind found that they could save, I think it was a minimum of 15 percent of electricity in Google data centers, which the engineers didn't think was possible. And then the last thing is, people tend to over-describe it, because it's the current sexy thing. But if you look at our energy usage and growth over the last few years, just a very small percentage is the data centers, and a smaller percentage of that is the AI. But the concern is partly that the growth on the data center side and the AI side could be pretty significant in the next few years. It could grow to be significant. But that's part of the reason I started with the green energy point. 
One of the most persuasive cases for the gloomer mindset, and one that you quote in the book, is an essay by Ted Chiang looking at how a lot of companies, when they talk about deploying AI, it seems to be this McKinsey mindset that's not about unlocking new potential, it's about how do we cut costs and eliminate jobs. Is that something you're worried about? Well, I am -- more in transition than an end state. I do think, as I describe in the book, that historically, we've navigated these transitions with a lot of pain and difficulty, and I suspect this one will also be with pain and difficulty. Part of the reason why I'm writing Superagency is to try to learn from both the lessons of the past and the tools we have to try to navigate the transition better, but it's always challenging. I do think we'll have real difficulties with a bunch of different job transitions. You know, probably the starting one is customer service jobs. Businesses tend to -- part of what makes them very good capital allocators is they tend to go, "How do we drive costs down in a variety of frames?" But on the other hand, when you think about it, you say, "Well, these AI technologies are making people five times more effective, making the salespeople five times more effective. Am I gonna go hire fewer salespeople? No, I'll probably hire more." And if you go to the marketing people, marketing is competitive with other companies, and so forth. What about business operations or legal or finance? Well, all of those things tend to be [where] we pay for as much risk mitigation and management as possible. Now, I do think things like customer service will go down on head count, but that's the reason why I think it's job transformation. One [piece of] good news about AI is it can help you learn the new skills, it can help you do the new skills, it can help you find work that your skill set may more naturally fit with. Part of that human agency is making sure we're building those tools in the transition as well. And that's not to say that it won't be painful and difficult. It's just to say, "Can we do it with more grace?"
[2]
There's Something Very, Very Wrong With Tech Today. This Man Thinks He Knows How to Fix It.
Ed Zitron is worried about the "future that tech's elite wants to build," and thinks you should be too. Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. How should media organizations cover artificial intelligence and the giant technology companies that have hitched their wagons to it? More interrogatively, according to Ed Zitron. As an Englishman who lives in Las Vegas and runs his own public relations firm, he is an unusual candidate for becoming one of the internet's most popular A.I. skeptics. But Zitron established himself as one of the most pugnacious critics of Big Tech after he penned a 2023 newsletter about tech products' drift from quality toward mindless growth. Headlined "The Rot Economy," the piece quickly went viral. Zitron's newsletter now has more than 50,000 subscribers. More than 125,000 accounts follow his posts on Bluesky, plus 90,000 on X. He hosts Better Offline, an iHeart podcast that questions "the growth-at-all-costs future that tech's elite wants to build." Oftentimes Zitron takes aim not just at the tech companies trafficking in an A.I.-focused vision for the future but the media organizations and star technology reporters that cover them. Some journalists believe in covering A.I. as an ongoing and potentially larger breakthrough with profound, dangerous ramifications for society and enormous profit potential for tech companies. Then there is a sizable camp, of which Zitron is one of the most prominent members, that reacts with deep skepticism and hostility to the tech industry's embrace of A.I and messaging around it. While A.I. boosters jostle with each other for the levers of federal power over the next four years, I anticipate that intense outside criticism of the whole sector will only grow. And in that world, few commentators have found a more engaged audience than Zitron. Recently, Zitron and I talked about the state of tech coverage, the questions he doesn't think enough people consider, and an ongoing fight over A.I. skepticism. Our conversation has been edited and condensed for clarity. Alex Kirshner: I don't like to ask questions about "the media," because we are not a monolith -- but I wonder what the most frequent bad practices are that you see in coverage of A.I. Ed Zitron: It starts with the will of the markets. There's a worryingly large amount of reporters who write with the immediate acceptance that A.I. will be artificial general intelligence, or A.I. will be good, or that this stuff is already proven and already powerful, because there's so much money behind it. This is a huge mistake, because it assumes the premise that anything OpenAI or Anthropic says is important. On top of that, I don't know if people really do enough work to fully understand how much these models can do. These reporters understand what they're talking about, but they don't go to the next step to say, "Is this actually important?" But I think that the biggest thing is that it feels like we're living in this insane cognitive dissonance. We have the largest generative A.I. company burning $5 billion or more a year for a product that is yet to really prove itself. Now, some would argue, "Well, ChatGPT has proven itself; 200 million people use it a week." That doesn't mean it's proven. It's the most prevalently discussed tech right now in every outlet everywhere all the time. A bunch of people are being driven by a media campaign. Of course ChatGPT would have that many users. 
It's still not that useful. You wrote recently about how Microsoft predicts A.I. will become a $10 billion annual business, but there's no "A.I." business unit on Microsoft's earnings reports, so measuring that is hard. Along that line, I saw Jeff Bezos speaking to Andrew Ross Sorkin last month, and he called A.I. a "horizontal enabling layer." It strikes me that this lack of division might make it easier for companies to get away without showing their work. Why do you think they present the issue this way? Well, I'm going to use a technical term: Jeff Bezos is talking bollocks. His statement was, "Modern A.I. is a horizontal enabling layer. It can be used to improve everything. It will be in everything." Jeff Bezos, what the fuck are you talking about? It doesn't mean anything. A horizontal enabling layer "like electricity" means nothing. The actual thing to say about this is "Jeff Bezos says complete nonsense on stage." A.I. is like electricity? Electricity has immediate use cases. On top of that, we are years into generative A.I. Where is the horizontal enablement? Where is the thing it's enabling? Two years. Show me one thing which you use that you go, "Oh, damn, I'm so glad I have this." Show me the AirPlay; show me the Apple Pay. Show me the thing that you're like, "Goddamn, I'm glad this is here." I can't think of one, and I'm exactly the little pig that would want it. I am an enthusiast. I love this crap. Do you worry that you're missing someone's use case, even if the product at issue sounds downright freakish to you or me? For example, what if there's a person who struggles to talk to people and might gain confidence to be more social in the future from chatting with bots? Sure. And I hedge my bets fairly clearly. The problem with the "I might be missing something" thing is that you can acknowledge that, but the thing you're not missing is how much these companies are making. And also, how there's not a thing yet. Nobody using ChatGPT can pretend that this is the future. Basic utility isn't there. People like it, fine, but it's not this revolution. If the immediate thing you do when you think "Maybe I'm missing something" is to say, "Well, I best trust what the tech company is saying," you're failing your readers. By all means, go and talk to some of the many, many experts out there who will explain these things to you. But also, find the product. Find the thing. If you can't find the thing, say you couldn't find the thing. Don't do the work for the companies. When you say that we need to "find the product," when it comes to generative A.I. companies like OpenAI, what do you mean? Find the actual thing that genuinely changes lives, improves lives, and helps people. Though Uber as a company has horrifying labor practices, you can at least look at them and go, "This is why I'm using the app. This is why this is a potentially world-changing concept." Same with Google search and cloud computing. With ChatGPT and their ilk -- Anthropic's Claude, for example -- you can find use cases, but it's hard to point to any of them that are really killer apps. It's impossible to point to anything that justifies the ruinous financial cost, massive environmental damage, theft from millions of people, and stealing of the entire internet. Also, on a very simple level, what's cool about this? What is the thing that really matters here? A few weeks ago, Casey Newton, the prominent tech journalist at Platformer, wrote critically about A.I. 
skepticism in a piece called "The Phony Comforts of AI Skepticism." It made a lot of social media waves for describing a prominent school of thought about A.I. as being that it "is fake and sucks" without thinking about A.I.'s potential, for good or ill. Do you think that is an accurate description of a school of thought? If so, are you in it? There is nothing comfortable about saying A.I. is going to collapse. There is going to be a meaningful hurt put on the stock market. Tens of thousands of people in these tech companies will lose their jobs. There will be a contraction in tech valuations and likely a depression that comes as a result of this. There is no "phony comfort." What I and others are talking about is bordering on apocalyptic. I'm interested in media coverage of this sector because I'm media myself and can't look away. But you've gained a big audience talking about this stuff, and I bet most of your readers aren't reporters. Why do you think this beat has compelled them? So a lot of my listeners and readers are not tech people. I have people who are from all sorts of walks of life, and everyone is being told artificial intelligence is the future. It's gonna do this, it's gonna do that. People are aware that this term is being drummed into them repeatedly. I think everyone, for a manifold amount of reasons, is currently looking at the cognitive dissonance of the A.I. boom, where we have all of these promises and egregious sums of money being put into something that doesn't really seem to be doing the things that everyone's excited about. We're being told, "Oh, this automation's gonna change our lives." Our lives aren't really being changed, other than our power grids being strained, our things being stolen, and some jobs being replaced. Freelancers, especially artists and content creators, are seeing their things replaced with a much, much shittier version. But nevertheless, they're seeing how some businesses have contempt for creatives. "Why is this thing the future? And if it isn't the future, why am I being told that it is?" That question is applicable to blue-collar workers, to hedge fund managers, to members of the government, to everyone, because this is one of the strangest things to happen in business history. You have a football in a case behind you. I write and podcast a lot about football, and sometimes I think about applying a football coverage lesson to A.I. Every coach in the NFL forgets more about football in a week than I will know in my life. But they can't all be good coaches, let alone be right all the time. I accept that the CEOs of Nvidia and Apple and Microsoft and OpenAI know more about A.I. than I do, but they can't all be right all the time. How should the media balance a goal to not assume we know more about these companies' plans than the companies themselves do with the possibility that a CEO could, in fact, just be selling you a bad idea? A big starting point for me has been to ask: What is it I don't know about these people? Is there some greater design, some genius, some intricacy behind their businesses and the people they talk to, or their companies' structure? There must be something. After probably a good hundred hours of looking through stuff, there isn't. These are regular people. They are people that are well accomplished in the sense that they've been heads of businesses or units. But behind the curtain, from what I've seen, there is no intricacy. There's nothing hiding. 
There's nothing magical about Satya Nadella, Sundar Pichai, or any of these people. If there was, they'd be showing it by now. There are good football coaches who understand personnel and know how to use the things and get the most out of them because they truly understand football. The worst coaches in the NFL are usually the ones that believe they're mega-geniuses that have seen everything. I think we're in the Josh McDaniels era of tech CEOs. It's these people that believe that they've been part of massive legacy systems. Things that have happened around them have them saying, "I understand how good works." No. Josh McDaniels was the offensive coordinator for Bill Belichick in a nearly perfect system. And I think you can tell, looking at Bill Belichick at this point, it might have just been Tom Brady. Who is the Tom Brady in this situation? Is it just the markets? That's a great question. The answer is multiple Tom Bradys. It's Azure, Google Cloud, and smartphones. Everything that has driven tech stocks up and up forever. Yes. There may just be no more lands to conquer. There may just not be things to draw value out of. These people are all Josh McDanielsing, in the sense that they found all the easy stuff. They found all the stuff that they could pull out just by throwing more money at it. Zoom is a company that grew based on the fact that, "Hey, I want to easily talk to someone on video and audio." Now they're adding A.I. bullshit because they don't know what else to do because they have to grow forever. That's where they all are. These aren't companies run by people that build products. These aren't companies that win markets by making a better thing than the competition. These people are monopolists. They're management consultants. They're people that only know how to extract value by throwing money at stuff. Except now we're at the end of that. This is actually genuine sympathy for the media: How the fuck do you cover that? Even when I write this stuff, I feel a little insane. Because you have to look out the window and be like, "Hey, A.I. is the future. It's going to be amazing." Then you look at the numbers and products, and it's not remotely like this. Everyone's kind of in on this and trying to keep it going, because once everyone doesn't, everyone knows there's a collapse. Everyone knows deep down that something's wrong.
[3]
Why Is This C.E.O. Bragging About Replacing Humans With A.I.?
Ask typical corporate executives about their goals in adopting artificial intelligence, and they will most likely make vague pronouncements about how the technology will help employees enjoy more satisfying careers, or create as many opportunities as it eliminates. A.I. will "help tackle the kind of tasks most people find repetitive, which frees up employees to take on higher-value work," Arvind Krishna, the chief executive of IBM, wrote in 2023. And then there's Sebastian Siemiatkowski, the chief executive of Klarna, a Swedish tech firm that helps consumers defer payment on purchases and that has filed paperwork to go public in the United States with an expected valuation north of $15 billion. Over the past year, Klarna and Mr. Siemiatkowski have repeatedly talked up the amount of work they have automated using generative A.I., which serves up text, images and videos that look like they were created by people. "I am of the opinion that A.I. can already do all of the jobs that we, as humans, do," he told Bloomberg News, a view that goes far beyond what most experts claim. According to Klarna, the company has saved the equivalent of $10 million annually using A.I. for its marketing needs, partly by reducing its reliance on human artists to generate images for advertising. The company said that using A.I. tools had cut back on the time that its in-house lawyers spend generating standard contracts -- to about 10 minutes from an hour -- and that its communications staff uses the technology to classify press coverage as positive or negative. Klarna has said that the company's chatbot does the work of 700 customer service agents and that the bot resolves cases an average of nine minutes faster than humans (under two minutes versus 11). Mr. Siemiatkowski and his team went so far as to rig up an A.I. version of him to announce the company's third-quarter results last year -- to show that even the C.E.O.'s job isn't safe from automation. In interviews, Mr. Siemiatkowski has made clear he doesn't believe the technology will simply free up workers to focus on more interesting tasks. "People say, 'Oh, don't worry, there's going to be new jobs,'" he said on a podcast last summer, before citing the thousands of professional translators whom A.I. is rapidly making superfluous. "I don't think it's easy to say to a 55-year-old translator, 'Don't worry, you're going to become a YouTube influencer.'" Mr. Krishna, the IBM chief executive, once turned heads when he said A.I. could prompt the company to slow or pause hiring for the roughly 10 percent of its jobs involving back-office roles like human resources. For his part, Mr. Siemiatkowski said that A.I. had allowed his company to largely stop hiring entirely as of September 2023, which he said reduced its overall head count to under 4,000 from about 5,000. He said he expected Klarna's work force to eventually fall to about 2,000 as a result of its A.I. adoption. (Mr. Siemiatkowski and Klarna declined to comment for this article.) One might be tempted to conclude that Mr. Siemiatkowski is simply unfamiliar with the political sensitivity around questions of automation, or with the best practices for communicating about it to skeptical employees. ("Leaders can combat this initial resistance by highlighting how A.I. can help people focus on more meaningful work," an IBM study said.) But Mr. Siemiatkowski is well aware of the backlash that his bluntness can provoke. 
"We did a tweet later on about the marketing things we are doing about A.I., where we have less need for photographers," he said in the podcast interview. "That had a violent reaction online." Instead, interviews with former employees and transcripts of internal company meetings suggest that Mr. Siemiatkowski's pronouncements about A.I. are motivated by something altogether different from political naïveté or an impulse for real talk. And those motivations shed light on the A.I. future that many executives and investors are working to bring about. Leaning In to Automation So far, most large companies do not appear to be replacing workers en masse. A report on 50 large banks by Evident, a firm that analyzes A.I. adoption, found that they typically derive other benefits from the technology, like improving services or helping employees work faster. In a paper exploring one area that Klarna has highlighted, customer service, the Stanford economist Erik Brynjolfsson and two co-authors found that A.I. made many employees more productive when it came to relatively complicated tasks, like navigating customers' tax issues. The bot did this by excelling at certain simpler tasks, like advising the human on the optimal order in which to request information from a customer. But it didn't handle the interaction from start to finish. (In fairness, the experiment didn't attempt full automation.) "I think people exaggerate how much they can automate everything in the near term," said Dr. Brynjolfsson, though he acknowledged that more tasks could be automated as A.I. became more powerful over the next few years. When pressed, Mr. Siemiatkowski has conceded that the picture is somewhat more complicated than his company's news releases have suggested. He explained on another podcast that Klarna had been relying on humans to perform customer service tasks that other companies had automated long before A.I., like instructing a customer where to go on the Klarna app to delay a payment. As a result, Klarna replaced more workers than other companies would have replaced. His claims about hiring may have been overblown, too. The website TechCrunch searched through Klarna's job listings more than a year after the company supposedly stopped hiring and found more than 50 openings in a variety of jobs. A Klarna spokesman told the outlet that the company was "not actively recruiting to expand the work force but only backfilling some essential roles" like engineering, and that Mr. Siemiatkowski had been "simplifying for brevity in a broadcast interview." But all of this raises the question: At a moment when A.I. is already alarming office workers, why would a chief executive not only speak candidly about his company's progress in automating jobs, but even overstate the case? A Self-Mythologizing Rise The son of Polish nationals who immigrated to Sweden in the early 1980s, not long before he was born, Mr. Siemiatkowski grew up feeling like something of an outsider in his parents' adopted country. He has talked of being teased as a child. According to former employees, he once said that feeling like an outsider helped him empathize with Black Americans after the killing of George Floyd. Mr. Siemiatkowski founded Klarna, then known as Kreditor, in 2005 with two classmates after a telemarketing job alerted him to the problems that small companies had collecting payments from online customers. The idea was to guarantee the payment for merchants and collect from the customer later. 
It was an old retail practice known as "buy now, pay later," except updated for the internet age. The company quickly turned a profit by charging merchants a fee for the payment service, and began expanding across Europe and taking business from banks. By 2010, Kreditor had renamed itself Klarna, meaning "clear," and had begun to attract the attention of Silicon Valley investors. Mr. Siemiatkowski gave the impression of someone who had for years been playing out the moment in his mind. When the famed Silicon Valley venture capital firm Sequoia dispatched a partner to Sweden to pitch the co-founders on an investment, telling them Sequoia thought they could transform banking the way Google had changed the internet, Mr. Siemiatkowski was quick to pipe up. "Just tell me one more thing," he said, recalling the exchange to Forbes magazine years later. "If we're going to be the Google of banks, would you really just send you? Wouldn't the whole of Sequoia come here?" The Sequoia partner quickly connected the founders with Michael Moritz, one of the firm's high-profile investors. Mr. Moritz apologized for not appearing in person and later joined Klarna's board. Mr. Siemiatkowski, who with his strong jaw and blue eyes looks like a long-lost Hemsworth brother, seemed to style himself as the kind of tech mogul investors were eager to back. Former employees said the company's hiring process for engineers resembled that of a Silicon Valley start-up -- using a logic test to screen applicants, then requiring some to demonstrate their coding chops in real time. From Amazon, he borrowed the "two pizza" rule -- keeping teams small enough that the group could be fed with two pizzas. In 2019, Klarna began to build a major presence in the United States. The company's timing proved impeccable. When the pandemic hit, Americans cut back on dining out and travel and embarked on an online shopping splurge -- precisely the consumption habits Klarna was built to enable. New investors piled in at ever-higher valuations -- from $5.5 billion in 2019 to $45.6 billion in 2021. Klarna accelerated hiring, roughly tripling in size to 7,000 employees within three years. It ran a Super Bowl ad starring Maya Rudolph to lodge itself in the American psyche. Then the bill came due. From Google to Amazon to Netflix, the share prices of companies that had raked in profits as people retreated to their living rooms were suddenly pummeled by investors who saw rising inflation and interest rates as a sign that the pandemic-era boom was ending. When Klarna tried to raise money again in 2022, reportedly seeking a valuation above $50 billion, investors had other ideas. A funding round announced in July would value it at a mere $6.7 billion. In the meantime, Klarna culled about 10 percent of its employees, under pressure from investors to cut costs, and endured suddenly skeptical media coverage. Mr. Siemiatkowski also now had to contend with another setback to his rise as a tech icon: a growing union presence inside the company. Though morale at Klarna had generally been high because of its collaborative culture and competitive pay, a relatively small group of workers had formed a union in 2020. The union roughly doubled in size, to over 1,000 employees, not long after the downsizing announcement in May 2022. During an all-hands meeting around the same time, a recording of which The New York Times obtained, Mr.
Siemiatkowski spoke darkly of how unionized companies handle layoffs ("union representatives and senior management, behind locked doors, decide on the outcome of each individuals"). He seemed to worry that a union would turn Klarna into just another stodgy Swedish company -- around 90 percent of the country's workers are covered by collective-bargaining agreements -- and hardly the muse of investors worldwide. "The more everything becomes thick and slow moving," he said at another meeting, alluding to the effect of a union, "my investors will challenge me." But as workers prepared to strike in the fall of 2023, the company backed down and signed a collective-bargaining agreement. Mr. Siemiatkowski was sarcastic and brooding as he announced the arrangement at a third all-hands meeting. He appeared to liken union leaders to the pigs in "Animal Farm," whom George Orwell had intended as a stand-in for Stalinists, and he quipped that there were two people in the entire company of more than 4,000 who made less than what the collective-bargaining agreement would mandate. "They're going to get a salary increase thanks to us signing the C.B.A.," he said. "Isn't that amazing?" A Favorite Guinea Pig Mr. Siemiatkowski often says he first realized A.I. would upend the workaday world shortly after playing around with OpenAI's ChatGPT in late 2022, only a few months after Klarna endured layoffs and saw its valuation crater. "I'm on Twitter in November '22, and somebody is tweeting, 'You've got to try this,'" he said on a podcast. "I'm just like, 'Jesus, I'm speaking to a computer.'" He quickly arranged a meeting with Sam Altman, the chief executive of OpenAI, and began pushing employees to experiment with the software. Whatever progress Klarna made on automation, Mr. Siemiatkowski sometimes seemed as invested in spinning out a story about A.I. as actually using the technology. In 2024, he and the company regularly put out news releases and conducted interviews, leading to headlines like "Klarna Marketing Chief Says A.I. Is Helping It Become 'Brutally Efficient,'" in The Wall Street Journal. By the time Mr. Siemiatkowski made the rounds of prominent tech podcasts that summer, in a tour that included the popular show "Acquired" and podcasts hosted by Sequoia and the venture capitalist Logan Bartlett, he seemed to have distilled Klarna's A.I. story to its sharpest narrative elements. "My understanding is that you told Sam and OpenAI that you wanted to be their guinea pig," an interviewer said. "Their favorite guinea pig," Mr. Siemiatkowski corrected. A former Klarna manager, who left in 2022, said the rhetorical emphasis on A.I. was no accident. According to the manager, there was a sense within the company that Klarna had lost its sheen in the media and among investors, and that Mr. Siemiatkowski was desperate to get it back. The former manager said the A.I. story provided a lifeline at a time when Klarna was hoping to offer shares on the public markets. It demonstrated that the company was still on the cutting edge, and that it was shrinking not because it had faltered but because it had figured out how to replace humans with machines. The effort appears to have worked. Klarna's likely public offering is one of the more anticipated of this year and could fetch triple the valuation that followed its 2022 swoon. Though some of that progress reflects Klarna's improved financial performance over the past year and a half and the upward march of the market overall, Mr. Siemiatkowski's relentless focus on A.I. 
appears to have been important. "The benefits of A.I. are likely to be a key selling point for any Klarna I.P.O.," The Financial Times wrote last year. It does not appear to have hurt that Mr. Siemiatkowski is willing to go much further in his A.I. pronouncements than fellow C.E.O.s, telling the paper, "Not only can we do more with less, but we can do much more with less." Mr. Siemiatkowski's statements are sometimes sweeping or grandiose because, former employees say, he sees himself as a righteous warrior in a fight with powerful forces. "I have always been anti-establishment," he said at one all-hands meeting. "To me, what we've been doing here, going after the banks, is to be anti-establishment." As with his challenge to Swedish banks and his standoff with the union, Mr. Siemiatkowski's A.I. campaign appears to be another instance of self-interest merging with heroic self-conception. When the host of the "Big Technology Podcast" asked why he was so intent on talking up Klarna's A.I. prowess, Mr. Siemiatkowski said it was partly for the good of humanity. "We have a moral responsibility to share that we are actually seeing real results and that that's actually having implications on society today," he said. "To encourage people, specifically politicians in society, to actually treating this as a serious change that's coming." Then he acknowledged that another part of the motivation was "self-promotion, for sure." He added, "We're regarded as a thought leader." Saying What Investors Can't Mr. Siemiatkowski may have at times overstated what A.I. has accomplished at Klarna, but that doesn't mean he's wrong about the future. Dr. Brynjolfsson of Stanford notes that most office jobs are collections of tasks, and that while A.I. can take on some of them, it still struggles to combine most or all of them in the manner of a human. But even he believes that could change within a few years, while a growing number of tech experts argue that artificial general intelligence -- a bot that can do anything the human brain does -- is not far off. Mr. Altman of OpenAI recently predicted that A.I. agents -- bots that can perform relatively complicated tasks on their own -- would soon "join the work force" and "materially change the output of companies." Others have predicted that such agents will take over a wide variety of jobs. Many tech investors are already banking on this outcome, effectively counting on automation to save their huge bets on free-spending A.I. companies. In an influential analysis last year, the venture capitalist David Cahn estimated that the combined A.I.-related revenue of companies like OpenAI and Microsoft was likely to be hundreds of billions a year less than the amount needed to pay back investors. But one way to make the numbers add up is if employers can save hundreds of billions of dollars using A.I. to replace workers in the relatively near future. In that case, the revenue of companies like OpenAI could grow rapidly and their investors could earn a profit. (They might still risk being undercut by Chinese competitors who can build similar technology at lower cost, though that would also make it cheaper for employers to automate work.) The catch is that very few investors and top executives are willing to discuss this in plain language. When it comes to the question of job loss, those with a large financial interest in A.I. tend to euphemize and equivocate. Even Mr. Altman, one of the foremost proponents of the idea that A.I.
will soon be capable of advanced humanlike cognition, has increasingly avoided discussing the potential downside for workers. Two years ago, he conceded that A.I. would take over certain jobs and that the shift in power from labor to capital "goes way further in a world with A.I." By last year, he had toned down this language, telling a podcaster that he, too, imagined A.I. taking over tasks rather than whole jobs and that it would allow people to do work at "a higher level of abstraction." He did this even as -- or perhaps because -- he seemed to think the technology was becoming vastly more powerful. (OpenAI declined to comment. The New York Times has sued OpenAI and its partner, Microsoft, for copyright infringement. The two tech companies have denied the claims.) Mr. Siemiatkowski has brought clarity to this discussion. In his eagerness to court investors, and in his tendency to overstate the case and say the quiet part out loud, he has laid bare Silicon Valley's ambition. In his own slightly muddled way, for his own slightly idiosyncratic reasons, he is helping to surface a conversation that has largely been whispered in the executive suites. Investors in his presence sometimes become so excited about the possibilities of displacing humans that they forget to deploy the usual euphemisms and aphorisms. During a podcast interview with Mr. Siemiatkowski, a partner at the prominent venture firm Kleiner Perkins gushed about Klarna's "full-on automation at scale" and said, "That's where it's eyebrow-raising." At times, even Mr. Siemiatkowski can be wrong-footed by such directness. When another podcaster asked which jobs were most likely to be automated, he seemed momentarily flustered, then reached for a joke he'd told Sam Altman. "I said to Sam, 'What you should focus on, try to build A.I. that replaces C.E.O.s, bankers and lawyers,'" he recalled, identifying three unpopular jobs. "'Nobody will make a big fuss about it.'"
[4]
Will states lead the way on AI regulation?
And 2025 could see just as much activity, especially on the state level, according to Mark Weatherford. Weatherford has, in his words, seen the "sausage making of policy and legislation" at both the state and federal levels; he's served as Chief Information Security Officer for the states of California and Colorado, as well as Deputy Under Secretary for Cybersecurity under President Barack Obama. Weatherford said that in recent years, he's held different job titles, but his role usually boils down to figuring out "how do we raise the level of conversation around security and around privacy so that we can help influence how policy is made." Last fall, he joined synthetic data company Gretel as its vice president of policy and standards. So I was excited to talk to him about what he thinks comes next in AI regulation and why he thinks states are likely to lead the way. This interview has been edited for length and clarity. That goal of raising the level of conversation will probably resonate with many folks in the tech industry, who have maybe watched congressional hearings about social media or related topics in the past and clutched their heads, seeing what some elected officials know and don't know. How optimistic are you that lawmakers can get the context they need in order to make informed decisions around regulation? Well, I'm very confident they can get there. What I'm less confident about is the timeline to get there. You know, AI is changing daily. It's mind-blowing to me that issues we were talking about just a month ago have already evolved into something else. So I am confident that the government will get there, but they need people to help guide them, staff them, educate them. Earlier this week, the US House of Representatives had a task force they started about a year ago, a task force on artificial intelligence, and they released their report -- well, it took them a year to do this. It's a 230-page report; I'm wading through it right now. [Weatherford and I first spoke in December.] [When it comes to] the sausage making of policy and legislation, you've got two different very partisan organizations, and they're trying to come together and create something that makes everybody happy, which means everything gets watered down just a little bit. It just takes a long time, and now, as we move into a new administration, everything's up in the air on how much attention certain things are going to get or not. It sounds like your viewpoint is that we may see more regulatory action on the state level in 2025 than on the federal level. Is that right? I absolutely believe that. I mean, in California, I think Governor [Gavin] Newsom, just within the last couple months, signed 12 pieces of legislation that had something to do with AI. [Again, it's 18 by TechCrunch's count.] He vetoed the big bill on AI, which was going to really require AI companies to invest a lot more in testing and really slow things down. In fact, I gave a talk in Sacramento yesterday to the California Cybersecurity Education Summit, and I talked a little bit about the legislation that's happening across the entire US, all of the states, and it's like something like over 400 different pieces of legislation at the state level have been introduced just in the past 12 months. So there's a lot going on there. And I think one of the big concerns, it's a big concern in technology in general, and in cybersecurity, but we're seeing it on the artificial intelligence side right now, is that there's a harmonization requirement.
Harmonization is the word that [the Department of Homeland Security] and Harry Coker at the [Biden] White House have been using to [refer to]: How do we harmonize all of these rules and regulations around these different things so that we don't have this [situation] of everybody doing their own thing, which drives companies crazy. Because then they have to figure out, how do they comply with all these different laws and regulations in different states? I do think there's going to be a lot more activity on the state side, and hopefully we can harmonize these a little bit so there's not this very diverse set of regulations that companies have to comply with. I hadn't heard that term, but that was going to be my next question: I imagine most people would agree that harmonization is a good goal, but are there mechanisms by which that's happening? What incentive do the states have to actually make sure their laws and regulations are in line with each other? Honestly, there's not a lot of incentive to harmonize regulations, except that I can see the same kind of language popping up in different states -- which to me, indicates that they're all looking at what each other's doing. But from a purely, like, "Let's take a strategic plan approach to this amongst all the states," that's not going to happen, I don't have any high hopes for it happening. Do you think other states might sort of follow California's lead in terms of the general approach? A lot of people don't like to hear this, but California does kind of push the envelope [in tech legislation] that helps people to come along, because they do all the heavy lifting, they do a lot of the work to do the research that goes into some of that legislation. The 12 bills that Governor Newsom just passed were across the map, everything from pornography to using data to train websites to all different kinds of things. They have been pretty comprehensive about leaning forward there. Although my understanding is that they passed more targeted, specific measures and then the bigger regulation that got most of the attention, Governor Newsom ultimately vetoed it. I could see both sides of it. There's the privacy component that was driving the bill initially, but then you have to consider the cost of doing these things, and the requirements that it levies on artificial intelligence companies to be innovative. So there's a balance there. I would fully expect [in 2025] that California is going to pass something a little bit more strict than what they did [in 2024]. And your sense is that on the federal level, there's certainly interest, like the House report that you mentioned, but it's not necessarily going to be as big a priority or that we're going to see major legislation next year? Well, I don't know. It depends on how much emphasis the [new] Congress brings in. I think we're going to see. I mean, you read what I read, and what I read is that there's going to be an emphasis on less regulation. But technology in many respects, certainly around privacy and cybersecurity, it's kind of a bipartisan issue, it's good for everybody. I'm not a huge fan of regulation, there's a lot of duplication and a lot of wasted resources that happen with so much different legislation. But at the same time, when the safety and security of society is at stake, as it is with AI, I think there's definitely a place for more regulation. You mentioned it being a bipartisan issue.
My sense is that when there is a split, it's not always predictable -- it isn't just all the Republican votes versus all the Democratic votes. That's a great point. Geography matters, whether we like to admit it or not, and that's why places like California are really leaning forward in some of their legislation compared to some other states. Obviously, this is an area that Gretel works in, but it seems like you believe, or the company believes, that as there's more regulation, it pushes the industry in the direction of more synthetic data. Maybe. One of the reasons I'm here is, I believe synthetic data is the future of AI. Without data, there's no AI, and quality of data is becoming more of an issue, as the pool of data -- either it gets used up or shrinks. There's going to be more and more of a need for high-quality synthetic data that ensures privacy and eliminates bias and takes care of all of those kinds of nontechnical, soft issues. We believe that synthetic data is the answer to that. In fact, I'm 100% convinced of it. This is less directly about policy, though I think it has sort of policy implications, but I would love to hear more about what brought you around to that point of view. I think there's other folks who recognize the problems you're talking about, but think of synthetic data potentially amplifying whatever biases or problems were in the original data, as opposed to solving the problem. Sure, that's the technical part of the conversation. Our customers feel like we have solved that, and there is this concept of the flywheel of data generation -- that if you generate bad data, it gets worse and worse and worse, but building controls into this flywheel that validates that the data is not getting worse, that it's staying equal or getting better each time the flywheel comes around. That's the problem Gretel has solved. Many Trump-aligned figures in Silicon Valley have been warning about AI "censorship" -- the various weights and guardrails that companies put around the content created by generative AI. Do you think that's likely to be regulated? Should it be? Regarding concerns about AI censorship, the government has a number of administrative levers they can pull, and when there is a perceived risk to society, it's almost certain they will take action. However, finding that sweet spot between reasonable content moderation and restrictive censorship will be a challenge. The incoming administration has been pretty clear that "less regulation is better" will be the modus operandi, so whether through formal legislation or executive order, or less formal means such as [National Institute of Standards and Technology] guidelines and frameworks or joint statements via interagency coordination, we should expect some guidance. I want to get back to this question of what good AI regulation might look like. There's this big spread in terms of how people talk about AI, like it's either going to save the world or going to destroy the world, it's the most amazing technology, or it's wildly overhyped. There's so many divergent opinions about the technology's potential and its risks. How can a single piece or even multiple pieces of AI regulation encompass that? I think we have to be very careful about managing the sprawl of AI. We have already seen with deepfakes and some of the really negative aspects, it's concerning to see young kids now in high school and even younger that are generating deepfakes that are getting them in trouble with the law.
So I think there's a place for legislation that controls how people can use artificial intelligence in ways that don't violate what may be an existing law -- we create a new law that reinforces current law, but takes the AI component into account. I think we -- those of us that have been in the technology space -- all have to remember that a lot of this stuff we just consider second nature; when I talk to my family members and some of my friends that are not in technology, they literally don't have a clue what I'm talking about most of the time. We don't want people to feel like big government is over-regulating, but it's important to talk about these things in language that non-technologists can understand.
But on the other hand, you probably can tell just from talking to me, I am giddy about the future of AI. I see so much goodness coming. I do think we're going to have a couple of bumpy years as people get more in tune with it and understand it better, and legislation is going to have a place there, both to let people understand what AI means to them and to put some guardrails up around AI.
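To make the "flywheel with controls" idea from the interview concrete, here is a minimal, hypothetical Python sketch of a validated synthetic-data loop. The generator, the quality metric, and the acceptance threshold are all illustrative placeholders -- this is not Gretel's product or API, just one way to express "only keep a round of generated data if it is as good as or better than the last."

```python
import random
import statistics

def generate_synthetic(seed_data, n=1000):
    """Toy generator: draws from a normal distribution fit to the seed data.
    Stands in for a real synthetic-data model (purely illustrative)."""
    mu, sigma = statistics.mean(seed_data), statistics.stdev(seed_data)
    return [random.gauss(mu, sigma) for _ in range(n)]

def quality_score(real_data, synthetic_data):
    """Toy quality metric: how closely the synthetic mean tracks the real mean
    (higher is better). A real pipeline would use richer fidelity, privacy,
    and bias checks."""
    return -abs(statistics.mean(real_data) - statistics.mean(synthetic_data))

def validated_flywheel(real_data, rounds=5):
    """Each round generates new data from the last accepted round, but a round
    is only accepted -- and only feeds the next turn -- if quality holds or improves."""
    seed = list(real_data)
    best_score = float("-inf")  # the first generated round is always accepted
    for i in range(rounds):
        candidate = generate_synthetic(seed)
        score = quality_score(real_data, candidate)
        if score >= best_score:  # the control gate that keeps the flywheel from degrading
            seed, best_score = candidate, score
            print(f"round {i}: accepted (score {score:.4f})")
        else:
            print(f"round {i}: rejected (score {score:.4f}), keeping previous data")
    return seed

if __name__ == "__main__":
    real = [random.gauss(50, 10) for _ in range(1000)]
    validated_flywheel(real)
```

The important piece is the acceptance gate, not the toy generator: every turn of the flywheel is checked against the real data before it is allowed to seed the next turn, which is what keeps the loop from compounding its own errors.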
A comprehensive look at contrasting views on AI's future impact, from optimistic outlooks on human augmentation to concerns about job displacement and the need for regulation.
Reid Hoffman, co-founder of LinkedIn, presents an optimistic outlook on AI's future in his new book "Superagency: What Could Possibly Go Right With Our AI Future." Hoffman argues that AI can extend human agency, providing more knowledge, better jobs, and improved lives [1]. He advocates for "smart risk taking" and an "iterative deployment" process to ensure positive outcomes from AI technologies.
Hoffman emphasizes the importance of getting AI tools into people's hands and responding to their feedback:
"Part of the reason why cars can go faster today than when they were first made, is because ... we figured out a bunch of different innovations around brakes and airbags and bumpers and seat belts," Hoffman said. "Innovation isn't just unsafe, it actually leads to safety." 1
In contrast to Hoffman's optimism, Ed Zitron, a prominent AI skeptic, raises concerns about the "future that tech's elite wants to build" 2. Zitron criticizes media coverage of AI, arguing that many reporters accept claims about AI's capabilities and potential without sufficient scrutiny.
Zitron points out the disconnect between AI companies' valuations and their actual profitability:
"We have the largest generative AI company burning $5 billion or more a year for a product that is yet to really prove itself." 2
He also challenges tech leaders' vague pronouncements about AI's potential, such as Jeff Bezos's description of AI as a "horizontal enabling layer" 2.
Sebastian Siemiatkowski, CEO of Klarna, has taken a more controversial stance by openly discussing how AI is replacing human workers in his company 3. Klarna claims to have saved $10 million annually using AI for marketing needs and reduced its workforce from 5,000 to under 4,000, with plans to further decrease to about 2,000 employees 3.
However, some experts caution against overstating AI's current capabilities. Erik Brynjolfsson, a Stanford economist, notes:
"I think people exaggerate how much they can automate everything in the near term," although he acknowledges that more tasks could be automated as AI becomes more powerful over the next few years 3.
As AI's influence grows, there's an increasing focus on regulation. Mark Weatherford, a former government cybersecurity official, predicts that states will lead the way in AI regulation in 2025 4. He notes that over 400 different pieces of legislation related to AI have been introduced at the state level in the past 12 months.
Weatherford highlights the challenge of harmonizing these regulations:
"How do we harmonize all of these rules and regulations around these different things so that we don't have this [situation] of everybody doing their own thing, which drives companies crazy." 4
California has been particularly active in AI legislation, with Governor Gavin Newsom recently signing 12 pieces of AI-related legislation 4. However, the need for a balanced approach is evident, as Newsom vetoed a more comprehensive AI bill that would have required extensive testing and potentially slowed innovation 4.
Reference
[2]
[3]
[4]