2 Sources
[1]
Transcript: A power shift in Lebanon
Sonja Hutson Good morning from the Financial Times. Today is Tuesday, January 14th, and this is your FT News Briefing. Goldman Sachs is digging in deeper on private credit. And there's been a big political shift in Lebanon. Plus, Amazon is trying to give Alexa a new AI brain. Madhumita Murgia If it does work, it could really be a sort of entirely new type of product. Sonja Hutson Goldman Sachs announced yesterday that it's building a new unit to focus on private credit funds. The Wall Street giant faces fierce competition from private credit firms, which are increasingly financing large corporate transactions. And this growing industry leaves banks in a tricky spot for another reason, too. You see banks compete with them on financing while also serving them as clients. So this new unit will be tasked with wooing private credit, but also beating them at offering loans. [MUSIC PLAYING] Lebanon has a new prime minister. Lawmakers selected the International Court of Justice president Nawaf Salam yesterday. They also voted on a new president last week. Their choices are a clear sign of the waning influence of the militant group Hizbollah on the country's politics. I'm joined now by the FT's Raya Jalabi to explain. Hi, Raya. Raya Jalabi Hi. Sonja Hutson Can you explain a little bit about Lebanon's new president and prime minister? Who are they? Raya Jalabi So last week, Joseph Oun became Lebanon's president and it followed more than two years of political deadlock in which Lebanon's fractious political parties couldn't agree on a candidate. So, you know, he's a 40-year veteran of Lebanon's armed forces, which is seen as one of the only state institutions that is still respected and seen as independent because Lebanon is a country rife with corruption and a lot of its institutions are not trusted by most of its people as well as foreign partners. So Joseph Oun was a candidate who was in the running for about two years and yet he never made it across the finish line because Hizbollah, which until recently remained Lebanon's dominant political and military player, opposed his candidacy. And likewise with Nawaf Salam. So Nawaf Salam has always been seen as this sort of stalwart, political independent. A man who represented Lebanon on the international stage as a diplomat for many, many years. So both men are seen as relatively independent and not partisan to Lebanon's fractious politics. And that is very, very different. Sonja Hutson And how much power do you think that these two men will ultimately have in office? Raya Jalabi The prime minister does have a lot of power to change things to a certain extent, but we have to remember that we are in Lebanon. And in Lebanon you have complex political dynamics that are very much dominated by its sectarian politics. So the president is always a Christian. The prime minister is always a Sunni Muslim, and the speaker of parliament is always a Shia Muslim. And so that division of power sort of continues along all government posts. And so that sort of fractiousness leads to incredibly bitterly divided politics. Sonja Hutson Yeah. So tell me a little bit about some of the main tasks that this incoming administration is going to face. Raya Jalabi So chief among them is reconstruction because we are just coming out of a period of intense warfare with Israel. So much of Lebanon's south and east has been reduced to rubble. And so there's a huge reconstruction bill ahead. And, you know, the World Bank has estimated that that's going to cost more than $8bn. 
And that's money that Lebanon just doesn't have. And frankly, its society has been ripped apart in many ways, not just by conflict, but by decades of political stasis and mistrust. So there's a lot of work ahead. These two candidates, they have a working relationship, and I think there's mutual respect there. So I don't imagine it's going to be a difficult working relationship. It's more about whether or not, you know, the other political parties, Hizbollah and its allies, will be able to sort of support the new government's work. Sonja Hutson Well, where do these political developments leave the ceasefire with Israel, which, you know, does end later this month? Raya Jalabi The ceasefire is due to expire on January 26th. And by that point, you were supposed to have seen a Hizbollah withdrawal from southern Lebanon, the dismantling of its military infrastructure and Lebanon's armed forces moving into southern Lebanon and taking over those areas. You're also supposed to see a complete withdrawal of Israeli forces from Lebanese territory. Now, none of those things have happened, truly, and not to the satisfaction of any of the parties to the ceasefire. And so we don't really know what happens next. It's important to state that Joseph Oun is very much trusted by people to sort of get the army in the shape it needs to be in order to fulfil the terms of the ceasefire. But again, this comes back to the question of Hizbollah. How much is it willing to play ball right now given its diminished stature? And is it going to want to project power by making it harder for the ceasefire to come to fruition? Sonja Hutson Raya Jalabi is the FT's Middle East correspondent. Thanks, Raya. Sonja Hutson China's global trade surplus reached a new record last year: almost $1tn. More than a third of that is from trade with the US. The data was released a week before US president-elect Donald Trump takes office, and he's threatened to impose tariffs of up to 60 per cent on goods from China. Chinese producers have been stepping up exports to offset sluggish demand at home. They're also rushing to ship as much as possible before the Trump administration begins. Some Chinese manufacturers have also offshored production to parts of south-east Asia to try to avoid future US tariffs. [MUSIC PLAYING] Amazon is hoping to transform its Alexa voice assistant system using artificial intelligence, but there are big hurdles in the way. And Amazon is under pressure to make money from consumer AI products. Madhumita Murgia is the FT's artificial intelligence editor, and she joins me now to explain. Hi, Madhu. Madhumita Murgia Hi there. Sonja Hutson So how would you explain what Alexa does now? And how would introducing AI make it different? Madhumita Murgia Yeah. So this Alexa device, which is, it's a voice-activated assistant. Mostly people use it to do things like set a timer or play music on Spotify or connect it to their smart lights to turn all their lights on and off, things like that. But the way it works so far is that you have to ask the question in a specific way, and it sort of looks up the answers for you. But what Amazon is trying to do is sort of transplant the brain of that system with a large language model, something like the models that underpin, say, ChatGPT or Google's Gemini, for instance, and make it a lot more colloquial, chatty, as I said, and broader in terms of the types of queries it can respond to. 
So you can ask it a range of things like sort of sports scores or, you know, where should I go on my holiday to Morocco next month or make some restaurant reservations, things like that. Sonja Hutson OK. Well, what kind of hurdles has Amazon been facing while trying to make this idea into a reality? Madhumita Murgia Yeah, I mean, I think what they found over the sort of two years or two and a bit years that they've worked on it is that it's just not as simple as a chatbot that you use online. So what they've really struggled with I think is firstly the technical aspects of integrating a large language model into the current system. There's been the sort of cultural issues around safety. It's in people's homes, used by their children, by old people. So how can we make sure that an Alexa that is being used by a kid is still appropriate? But I think the core issue really is accuracy. And, you know, the ability that these generative AI models have to make things up. And when you have an assistant that you're used to giving you answers to questions that you act on, you know, you just can't have them making facts up. Sonja Hutson Yeah, that would not be helpful. Madhumita Murgia Exactly. So they need it to be accurate. They need it to not be unsafe because of the context in which it's being used. Sonja Hutson What about money? Is Amazon going to be able to profit off of this new AI Alexa endeavour? Madhumita Murgia So I think at the moment all the big tech companies are facing a lot of pressure from the market to kind of show ROI on AI technologies. And so you're seeing that this race dynamic is impacting all tech companies now who feel that they have to show that they're using AI and also making money from it. OpenAI is making money through its subscriptions, of course. So that seems to be the kind of primary business model. But I do think when it comes to Alexa, you know, it hasn't really been a product that generated a huge amount of money. But I think that what they're hoping is that they can charge something like a subscription model or take a cut of sales or build different services into it via these language models and turn it into a sort of new line of business and a new revenue generator. Sonja Hutson In that vein, how important is Alexa to Amazon's overall AI strategy? Madhumita Murgia I think it's a part of it. They have a massive cloud business, which is AWS or Amazon Web Services. And a big part of their AI strategy is allowing their customers to use lots of different AI models through their web services, to allow access to that. And they've actually recently released their own models as well, called the Nova models, which are also being, you know, integrated into Alexa. So I think, you know, building a consumer device with generative AI is a really . . . it's a tough problem. If it does work, it could really be a sort of entirely new type of product. So it's not something that they're going to, you know, be turning huge profits with in the short term. Sonja Hutson Madhumita Murgia is the FT's artificial intelligence editor. Thanks, Madhu. Madhumita Murgia Thank you. Sonja Hutson You can read more on all these stories for free when you click the links in our show notes. This has been your daily FT News Briefing. Check back tomorrow for the latest business news.
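The "brain transplant" Murgia describes for Alexa, routing a voice query through a large language model while guarding against made-up or inappropriate answers, can be sketched roughly as follows. This is a minimal illustration only: every function and class name here is a hypothetical stand-in rather than Amazon's actual architecture, and a production system would use a real model call and far more elaborate grounding and safety checks.

```python
# Minimal sketch of routing a voice-assistant query through an LLM with
# grounding and safety checks before anything is spoken aloud.
# All names are hypothetical stand-ins; this is not Amazon's architecture.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    sources: list[str]  # documents or device states the answer was grounded in

def call_llm(prompt: str) -> Draft:
    # Placeholder for a real large language model call.
    return Draft(text="Setting a timer for 10 minutes.", sources=["device:timer"])

def is_grounded(draft: Draft) -> bool:
    # Reject answers that cite no source: a crude guard against made-up facts.
    return bool(draft.sources)

def is_safe_for_household(text: str) -> bool:
    # Stand-in for a content filter tuned for homes with children.
    blocked = {"gamble", "explicit"}
    return not any(word in text.lower() for word in blocked)

def handle_voice_query(query: str) -> str:
    draft = call_llm(f"User asked: {query}. Answer briefly and cite your sources.")
    if not is_grounded(draft) or not is_safe_for_household(draft.text):
        # Fall back rather than risk an invented or inappropriate answer.
        return "Sorry, I'm not sure about that."
    return draft.text

if __name__ == "__main__":
    print(handle_voice_query("Set a timer for ten minutes"))
```

The point of the sketch is the order of operations: the model drafts an answer, but nothing reaches the speaker until the grounding and household-safety gates pass, which is the accuracy-and-safety hurdle Murgia describes.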
[2]
Transcript: Tech in 2025 -- Hi, I'm your AI-powered assistant
Dario Amodei I think what we're going to see is this kind of progression, right, where AI could do what a high school student could do, that it could do what an undergrad could do. And I think 2025 is going to be the year that AI can do what, like, a PhD student or an early degree professional in a field is able to do. Murad Ahmed That's Dario Amodei, boss of Anthropic, one of the companies at the vanguard of generative artificial intelligence. You'll be hearing a lot more from him in this episode of Tech Tonic. Ever since OpenAI launched ChatGPT two years ago, AI has dominated news headlines. And since then it's really inserted itself into daily life. ChatGPT is now a household name, so-called copilots are assisting workers in their daily tasks, and we all look at text, images and video and wonder, did AI create that? So if 2024 was the year that AI became everyday, where will artificial intelligence go in 2025? [MUSIC PLAYING] Welcome to Tech Tonic, the technology podcast from the Financial Times. With me, Murad Ahmed. I'm the tech news editor for the FT. I direct the work of our reporters in Silicon Valley, China, London and beyond to help shape our tech news coverage. As the editor, part of my job is looking ahead and asking myself, what are the stories that we should be telling? What's going to happen next in Big Tech, and how will that shape the world we live in? So in this series of Tech Tonic, I put my head together with our team of reporters to choose the questions that we believe will shape the tech world in the year to come. In this episode, what next for AI? [MUSIC PLAYING] To answer this question, I'm joined by the FT's artificial intelligence editor, Madhumita Murgia. Madhumita Murgia Hi, Murad. Murad Ahmed So, Madhu, you are the very first person that told me about ChatGPT almost two years ago. And I wasn't that impressed. And I've learned my mistake. And you've taught me that mistake over the last few months and years. But where are we with generative AI right now? Madhumita Murgia It's been only two years, but it feels, at least to me, like a decade or something because so much has changed and progressed in that time. There's been a huge uptake from consumers of these chatbot type of interfaces like ChatGPT, but others also, like Claude and Gemini, where you kind of ask a question and you have an AI software give you an answer. These tools can generate images based on your text prompts. They can even make videos now. And we're at the stage now where the companies who have built these tools are trying to find a way to make money using generative AI and trying to convince the rest of us -- customers, enterprises and consumers -- that we need this technology moving forward. Murad Ahmed And one of the chatbots or tools that you mentioned was Claude, which is made by Anthropic. And the voice that we heard at the start was of Dario Amodei, the chief executive of Anthropic. Why is his company at the heart of this revolution? Madhumita Murgia So, yeah, and Dario Amodei is very interesting because he worked for Google, then led research at OpenAI where he worked on one of the sort of early versions of the model that now powers ChatGPT and then left and founded Anthropic. So he's basically been at the heart of three of the most important AI companies of the past decade. Now we have just a handful of companies. Anthropic is one of them that are leading in this race to create more AI tools for consumers and companies. Murad Ahmed So, what ideas do they have? What is coming next? 
Madhumita Murgia The two core ways in which the generative AI companies themselves are thinking about it is either building copilots. So these would be tools that sort of help you to complete tasks, help you speed up those tasks. Sort of like a little assistant sitting on your shoulder with some form of intelligence that can write and analyse and create and draw things. And the second idea, which is sort of a derivative of this assistant, is what we're calling AI agents. And that's going to be the big idea from all of the tech companies for 2025. And these would be much more autonomous AI software bits that not only help you to complete a task, but actually you could just tell them to do something on your behalf, like go find your piece of information, send an email, record and transcribe an interview and so on. And they would go off and actually execute on your behalf rather than help you do that. This is the big idea that now is being espoused by all the Big Tech companies, ranging from Google and Microsoft, OpenAI, and Anthropic as well, which is something that I spoke with Amodei about. Dario Amodei You can tell it, "book me a reservation at this restaurant" or "plan a trip for this day", and the model will just directly use your computer, right? It'll look at the screen, it'll click at various positions on the mouse and it will type in things using the keyboard, like not physically, it's not a physical robot, but it's able to kind of automate and control your computer for you. Murad Ahmed So what Amodei appears to be describing is something way beyond where we are with chatbots at the moment, which is you ask it a question and comes up with a response. It's able to have actions. It can do things almost independently, it can connect between lots of different things and complete a task that you give it. Madhumita Murgia Well, essentially the agent becomes your interface between you and all your kind of digital world and starts to do actions for you that are not low-value but take up time. So we've seen that from Apple, with Google as well. You know, they have their agent that they call Project Astra, and you know, they want it to be a sort of phone-based or glasses-based agent that you can ask about the world around you. But they also have something you can integrate on to Chrome, which will, you can ask it, for example, to find you a recipe of something, get the ingredients and then add it to your supermarket shopping basket. So it will do all of those sort of incremental tasks one by one, and then eventually that ends up with you having a bunch of groceries in your basket that you can just click buy on. So much more, not just autonomous, but much more deliberate in terms of performing actions rather than just giving you the information you need. Murad Ahmed So forgive me, Madhu, I'm not completely overwhelmed by this idea, at least the way that AI agents have been set out here. This sounds relatively basic stuff, stuff that sure will save me some time, but it's not going to save me a huge amount of time and I might want to do most of this myself anyway. I was promised an AI revolution. You know, things are really going to change my life. So how is this technology really going to accelerate in ways that I really feel it and may even be excited by it? Madhumita Murgia So I think the reason that this could potentially for these companies be a big deal is it makes AI a part of our daily lives in a way that it hasn't been so far. 
So just as we now, you know, use our phones to take pictures or make videos, send messages, find information, all of this is kind of second nature to the way that we live, the way we work, the way we communicate. What they want is for these agents to take it one step further and do tasks and actions on our behalf, which means it frees up our time to do other things. And so I think, you know, the reason they're excited about it is for the first time, we'll have software that's kind of clever enough to do things that only humans were able to do. But it's not clear to me whether we actually need these tools yet, and I think that's what next year will be about. You know, can this become really second nature to us like emailing or texting has become? And what will be the killer features of this? For me personally, I think what could be great is asking my phone, you know, "Murad sent me something about filing a story next week, to do with AI agents. Can you just find me what he said and where he said it?" Because we communicate on multiple different platforms, right. So it basically becomes a way to kind of surface stuff from across your phone and your laptop in a very kind of natural way and maybe will help us all be more productive and reduce how much we stare at our screens. So that's the hope. But I think it's an open question. Murad Ahmed I would have hoped that any commission message I send to you is front of mind, Madhu. But at the moment, what the companies are talking about are relatively small decisions. But what are the sorts of big decisions that an agent might be able to make on our behalf into the future? Madhumita Murgia I think that there'll be a huge jump between those two because, you know, trust is such a big and key element of allowing these software tools to do actions for us, right? Even today, companies like Google have said, you know, they won't allow their agents to take financial decisions for you. For example, you can ask it to find you something to buy and put it in your basket, but it won't actually go and make the payment for you. That's something you'll still control. And this is actually one of the key issues that Amodei raised in our conversation. Dario Amodei I actually think the most challenging thing about AI agents is making sure they're safe, reliable and predictable. So as a thought experiment, you know, just imagine I have this agent and I basically say, do some research for me on the internet, you know, form a hypothesis and then go and buy some materials to build the thing I want to do or, you know, make some trades done or take my trading strategy. Once the models are doing things out there in the world, it opens up the possibility that, of course, the models could do things that I didn't want them to do, that I didn't have in mind, maybe they're changing the settings on my computer in some way. Maybe they're representing me when they talk to someone and they're saying something that I wouldn't endorse at all. So just the kind of wildness and unpredictability needs to be tamed. And we've made a lot of progress with that. But I think the level of predictability you need is substantially higher. And so this is what's holding it up. It's not the capabilities of the model. It's getting to the point where we're assured that we can release something like this with confidence and it will reliably do what people want it to do when people can actually have trust in the system. Madhumita Murgia Yeah. 
As you say, the stakes are a lot higher when it moves from, you know, it telling you something or giving you information that you can act on versus acting on something for you. And even if it isn't malicious, it can be annoying, which could actually be as big a barrier, I guess, to adoption if people are just annoyed by it not working the way they want it to work. Dario Amodei Yeah. Annoying or just it's not malicious but it does something randomly. Like, you know, do you want to let a gremlin loose in the internals of your computer to just kind of like change random things and you might never know it changed those things? Madhumita Murgia So when do you think we get to a point of enough predictability and mundanity with these agents that you can actually, you'd be able to put something out and people can use them routinely? Dario Amodei Yeah. I mean, I think it's not a binary, right? People have built things on the computer we use today that are, you know, that you would call agents, that take action on your behalf. And so to a small extent, we're kind of already putting them out and we're enabling others to build things. But the more you're turning over wider swaths of activity, right, if I want to have, like, I don't know, like a virtual employee, right, where I say go off for several hours, you know, do all this research, write up this report. You know, think of what, you know, a management consultant does or something like that or a programmer. I'd like us to get to the point where you can just give the AI system a task for a few hours, similar to a task you might give to a, you know, a human intern or an employee. Every once in a while it comes back to you, it asks for clarification, and then it completes the task. And that people have confidence, that it'll actually do what you said it would do and not some crazy other thing that I can trust that the outputs are correct and predictable. In terms of that, I think we'll make a lot of progress towards that by 2025. So I would predict that there will be products in 2025 that do roughly this. But again, it's not a binary. The skill level is going to go up and up and up. And for each level that the skill level goes up, a wider variety of tasks will be appropriate for the AI system to do in this way for you. And there will always still be tasks that you don't quite trust an AI system to do because it's not smart enough or not autonomous enough or not reliable enough. Murad Ahmed Given Amodei's answer there, how confident do you feel, Madhu, about the safety of AI agents? Madhumita Murgia I guess that's key, the question of how confident do I feel? Because even though I'm not an expert, I'm going to be a consumer of these products, as are you and as is everybody who's listening to this. And I think even just the perception of how safe they are is going to be really important for their adoption. On the question of the actual safety, you know, will they go off and do crazy things? One thing I will say is this is definitely a top priority for the companies building it. You heard from Amodei here, you know, that they don't want to release anything until they feel they've hit the sort of limit of safety. I've spoken to others, Google and others who, you know, for them, this is really key and not just because it's part of their responsibility, you know, putting so-called gremlins and setting them loose into your computer, but also because it affects adoption. People see that it's doing things they didn't want it to do or misrepresenting them. 
They're just going to stop using it. Murad Ahmed And that will affect their bottom line. Madhumita Murgia Well, exactly. So it all comes back to that. So safety is, it's something that you should do because it's the responsible thing to do. But also, more than anything, it's going to affect how and whether people use it and ultimately whether this is ever going to take off. [MONEY CLINIC AD PLAYING] Murad Ahmed So we've talked about how AI has developed at such an enormous rate over the last couple of years, but there are doubts out there about whether this sort of acceleration can continue at the same pace. Madhu, will 2025 be the year that the pace at which AI models develop actually slows down? Madhumita Murgia This is another sort of key debate question in the AI community and has really ramped up over the past few months. What it comes down to is this phenomenon called the scaling law, which essentially says that if you add more data and more computing power and more chips to these algorithms that you have, you will just get better and better capabilities and it just gets kind of exponentially more powerful. So over the last few years, all the AI companies have observed and been really excited about the fact that the scaling laws have continued to play out. And when they have thrown more data and more compute and chips at these algorithms, they have shown greater capabilities, more sort of so-called intelligence. But over the past few months, there's been more discussion about whether that will start to plateau and level off, partly because there are limitations in those resources. So you might be running out of public data, for example. Computing power is really expensive. How big a cluster of chips can you possibly afford or build? And the energy constraints as well. So there have been reports that it's not scaling in quite the same way and you might need other innovations. You might need other ways to get around things like a limitation in data. And this is something that I asked Amodei about, and he mentioned the concept of synthetic data. Dario Amodei As we run out of natural data, we start to increase the amount of synthetic data. So, for example, AlphaGo Zero, the model from DeepMind, was trained with synthetic data. This model learned to play Go at a level above the best human players without ever playing against a human player or training on data from human players. It just played against itself. So that's an example of synthetic data that you can use to kind of bootstrap things seemingly from something to nothing. I think this is going to be a big thing in the next couple of years. Murad Ahmed Absolutely fascinating because, you know, this is a big constraint. We're running out of how much of the internet we can pump into these models and get a response. And we're having to create data from other sources, including the AI itself. Madhumita Murgia Yeah, I thought that was really interesting. It was the first time I'd heard him talking about game playing as a way to generate that data. But the idea, as he says himself, isn't new. It's something that DeepMind has pioneered for many years, essentially getting your AI agent to play games and learn the rules of the games by themselves. And the reward is winning, right? Winning a point. So you've seen this, you know, technique used by others, and that's how they create data. But I've also talked to companies that generate synthetic data using their own AI systems. 
So you might have humans write a bunch of answers to questions and then get an AI system that sort of uses that as a starting point and generates more of that. So it's AI creating more data to then train more AI. So it sounds a bit circular, but every company is now using some volume of synthetic data to sort of pad out the real-world data that they have. Murad Ahmed Amazing to think about and could be problematic. But are there other limitations? I mean, we've mentioned some of them that cost a lot of money. One of them is chips, the other one's energy. Madhumita Murgia Yeah. You have certain limitations for these scaling laws that, you know, that are physical limitations, limited by the laws of physics. There's only so many transistors that can fit on a chip, only so many chips you can bring together to run these, only so much energy that you can have access to without bringing down some kind of massive grid. So these things are sort of unchangeable. You can't just use AI to generate more of them like you can with, say, data. So I think when Amodei was talking about this and what I hear broadly from the field is given those constraints, we'll have to find other sort of clever, ingenious ways and other breakthroughs to keep scaling up these capabilities rather than just sort of throwing more workhorse resources like chips or energy at it. And that's not impossible. You have some of the world's most intelligent computer scientists, biologists, physicists working on these problems. It's just that we don't know what those breakthroughs will look like because now we're coming down to fundamental science, not so much engineering, which is what we've been doing for a couple of years. Murad Ahmed If we go back to the companies themselves now, we said there was a handful of companies. I mean, we could name them, right? So there's OpenAI led by Sam Altman and Anthropic, as you mentioned. Then the Big Tech companies Google, Amazon, Microsoft, which has an association with OpenAI. There's a lot of incestuous stuff going on between them because they're investing in start-ups. If you had to pick your runners and riders, your winners and losers, who's ahead in this race to make agents the next big thing in AI? Madhumita Murgia That's really the key question, right? That's the prize that these companies that you've named are competing for. I think there's pros and cons that come with being a big incumbent consumer or enterprise company that's already embedded into our workflows and our daily lives. And that would be somebody like Google, of course, Amazon, Apple. You know, phones in all our hands or Microsoft, they already have the nerve endings to users, billions around the world. But then on the other hand, start-ups like OpenAI and Anthropic are singularly focused on generative AI tools, on agents, on creating these so they can be much more flexible, much more agile, and may be better products too. The big question will be who can make this stick? Who can turn this into a habit? So I don't think it's obvious who's going to win. Having a smartphone device, having search engines, having enterprise products that billions use anyway, or being a retail giant like Amazon or whatever, is a good way to reach people more quickly. But yeah, I think OpenAI and Anthropic have a lot of name recognition and a lot of love for their products today. So still an open race. Murad Ahmed And a lot of these companies, they don't really describe what they're trying to do as to make tons of money for themselves. 
I mean, that's probably the reason why they're doing it. But one of . . . they talk about missions and the key mission is getting to the point of artificial general intelligence. You know, this idea that we can get computers to have cognitive abilities just as good, if not better, than human beings. And what that can do for us. What are the other big problems that you think that AI, in the most optimistic view of what it can do, could help us in the near future? Madhumita Murgia Yeah, I think the big areas that people often talk about as AI being potentially transformative is education, science and healthcare and energy. So these are the four areas that, you know, also very lucrative, but also could have huge progress for us as a species. Amodei in particular started out as a biological physicist with a specialism in biology. And he writes and talks a lot about the potential of AI in biology. Dario Amodei I think that there are going to be many areas where amazing progress happens. Like I'm sure the legal field will be revolutionised. I'm sure the practice of finance will be revolutionised. And all those things, I think, you know, in their own way if done well, will be good for the world. But when I looked at it and I said, you know what makes the biggest difference to our everyday experience of our lives, what leads to misery or joy for humans? You know, I look at biology and I look at some of the problems that both relate to serious diseases that people have that destroy lives and take away people's loved ones. And biology is really complicated. And no matter how good, no matter how smart the AI is, it's going to have a hard time solving them. Or at least it's going to take time. But if we look at, you know, things like cancer and Alzheimer's, there's nothing magic about them. There's an incredible amount of complexity. But AI specialises in complexity. And so I just have this optimism that it's not going to happen all at once. But bit by bit, we're going to unravel this complexity that we couldn't deal with before. And I really have this belief that if we get it right, we will. Murad Ahmed So Madhu, from everything I've heard from you and Amodei, we've got a big year ahead. We've got agents coming soon that will help us with our everyday tasks. But there are these big, huge goals to be achieved as well. So what are your predictions about what the year will look like? How close will we get to artificial general intelligence? Or will it be closer to the mundane? Madhumita Murgia So I think how close will we get to AGI is kind of how long is a piece of string? I derive my prediction on that from, I guess, the experts in the field that I speak to as part of my day to day job. And while nobody thinks it's coming next year, those who are leading in the field believe it's no more than a decade away. But that relies on us making these breakthroughs, on us discovering new ways to develop these algorithms. So I think that's something that there will be progress. There'll be exciting new kind of research outcomes that we'll follow closely. On the more consumer-practical front, you know, there will be much more diffusion of these products, I believe, into our lives more than there has been in the past two years. Largely because it's going to be on devices that so many of us use, including people who are older and younger, not just those who are sort of early adopters, who know what ChatGPT is. We . . . 
pretty much everybody has a phone, a smartphone or a laptop on which these things are going to start to show up. So I think there will be more diffusion, more adoption and more awareness of AI going forward. And for me personally, I hope that we'll see some really interesting applications of this, particularly in healthcare, which is what I'm most optimistic about in terms of being transformed through these AI systems. And I think we'll see more kind of progress in diagnostics, maybe even clinical trials and treatment developments from AI. And that would be the sort of dream outcome from developing these systems. Murad Ahmed Well, I, for one, promise that next time you tell me there's been a big breakthrough in AI, I won't ignore it like I did ChatGPT two years ago. Thank you so much. Madhumita Murgia is the FT's artificial intelligence editor. Thank you. Murad Ahmed That's it for this episode of Tech Tonic. And next week we'll ask another big question: what the incoming Trump presidency will mean for the tech world. Everything from how he's going to influence crypto regulation to what Elon Musk is really getting up to in the White House. So tune in for that next week. Tech Tonic is presented by me, Murad Ahmed. Our senior producer is Edwin Lane and our producer is Persis Love. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Sam Giovinco. Music by Metaphor Music. Our global head of audio is Cheryl Brumley.
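The synthetic-data bootstrapping that Murgia and Amodei describe in the conversation above, seeding a model with a small set of human-written examples and having it generate more, with a filter deciding what gets folded back into the training pool, can be sketched roughly as follows. This is a toy illustration under those assumptions: the "generator" and the quality filter here are trivial stand-ins, not any lab's actual pipeline, where a trained model and far stricter checks would be used.

```python
# Toy sketch of synthetic-data bootstrapping: start from a handful of
# human-written question/answer pairs, have a "model" propose variations,
# keep only the ones that pass a quality filter, and fold them back into
# the training pool. The generator here is a trivial stand-in for an LLM.
import random

seed_pairs = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]

def generate_variation(question: str, answer: str) -> tuple[str, str]:
    # A real system would prompt a trained model; here we rephrase mechanically.
    templates = ["Could you tell me: {q}", "Quick question: {q}", "{q} Answer briefly."]
    return (random.choice(templates).format(q=question), answer)

def passes_filter(question: str, answer: str) -> bool:
    # Stand-in for quality checks (deduplication, answer verification, etc.).
    return len(question) > 10 and bool(answer)

def bootstrap(pairs, rounds=3, per_round=4):
    pool = list(pairs)
    for _ in range(rounds):
        candidates = [generate_variation(*random.choice(pool)) for _ in range(per_round)]
        pool.extend(c for c in candidates if passes_filter(*c))
    return pool

if __name__ == "__main__":
    for q, a in bootstrap(seed_pairs):
        print(f"Q: {q}  A: {a}")
```

The circularity Murgia mentions is visible in the loop: each round's accepted outputs become inputs for the next, which is why the filtering step carries most of the weight in real pipelines.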
Tech companies are gearing up to introduce AI agents in 2025, autonomous software that can perform tasks independently, marking a significant leap from current chatbot capabilities.
In the rapidly evolving landscape of artificial intelligence, tech giants are setting their sights on a new frontier for 2025: AI agents. These autonomous software entities represent a significant leap forward from the chatbots and generative AI tools that have become household names over the past two years 1.
Since the launch of ChatGPT by OpenAI two years ago, generative AI has become deeply integrated into daily life. Consumers have widely adopted chatbot interfaces like ChatGPT, Claude, and Gemini for various tasks, from answering questions to generating images and even videos 1.
As we look towards 2025, the focus is shifting from chatbots to more sophisticated AI agents. Dario Amodei, CEO of Anthropic, explains:
"You can tell it, 'book me a reservation at this restaurant' or 'plan a trip for this day', and the model will just directly use your computer... It'll look at the screen, it'll click at various positions on the mouse and it will type in things using the keyboard" 1.
These AI agents are designed to be more autonomous, capable of executing tasks on behalf of users rather than simply assisting them. They represent a significant advancement in AI capabilities, potentially serving as an interface between users and their digital world.
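Amodei's description of a model that looks at the screen, clicks and types corresponds to a simple observe-decide-act loop. The sketch below is a conceptual outline only: the capture, decision and execution functions are hypothetical placeholders (no real screen-capture or automation library is called), not Anthropic's implementation, and it includes the kind of human-confirmation guard the transcript describes for risky steps such as payments.

```python
# Conceptual observe-decide-act loop for a computer-using agent.
# capture_screen(), propose_action() and the "execution" print are hypothetical
# stand-ins; a real agent would pair a model with OS-level automation and
# strict guardrails.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type" or "done"
    target: str = ""   # e.g. a button label or text to type

def capture_screen() -> str:
    return "screenshot-placeholder"

def propose_action(goal: str, screen: str, history: list[Action]) -> Action:
    # A real agent would send the goal, screenshot and history to a model.
    return Action(kind="done") if history else Action(kind="click", target="Reservations")

def requires_confirmation(action: Action) -> bool:
    # Keep the human in the loop for risky steps such as payments.
    return "pay" in action.target.lower()

def run_agent(goal: str, max_steps: int = 10) -> None:
    history: list[Action] = []
    for _ in range(max_steps):
        action = propose_action(goal, capture_screen(), history)
        if action.kind == "done":
            break
        if requires_confirmation(action):
            print(f"Asking user before: {action}")
            continue
        print(f"Executing: {action}")  # stand-in for actually clicking or typing
        history.append(action)

if __name__ == "__main__":
    run_agent("Book me a table for two at 8pm")
```

The structure, rather than any individual call, is the point: the agent repeatedly observes the current screen, proposes one step towards the goal, and either executes it or pauses for the user.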
Major tech companies are at the forefront of this AI agent revolution:
Anthropic: Led by Dario Amodei, a veteran of Google and OpenAI, Anthropic is developing advanced AI agents 1.
Google: Their "Project Astra" aims to create a phone or glasses-based agent that can provide information about the user's surroundings 1.
Microsoft and OpenAI: Both companies are investing heavily in AI agent technology 1.
The introduction of AI agents could revolutionize how we interact with technology and perform daily tasks. Potential applications mentioned in the transcript include:
Booking restaurant reservations or planning a trip on a user's behalf 1.
Finding a recipe, gathering the ingredients and adding them to an online shopping basket (sketched below) 1.
Surfacing messages and information scattered across different apps and devices 1.
Completing longer research or drafting tasks the way a human intern might, checking in for clarification along the way 1.
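As a concrete illustration of the shopping example, an agent might decompose the request into explicit subtasks and deliberately stop short of payment, in line with the constraint discussed in the transcript that agents should not take financial decisions for users. The helper functions below are hypothetical placeholders, not any vendor's API.

```python
# Sketch of a task-decomposition agent for "get the ingredients for this recipe
# into my shopping basket". Every helper is a hypothetical placeholder; a real
# agent would call web services or drive a browser, and would stop before payment.

def find_recipe(dish: str) -> dict:
    return {"dish": dish, "ingredients": ["chickpeas", "tomatoes", "cumin"]}

def add_to_basket(item: str) -> bool:
    print(f"Added to basket: {item}")
    return True

def run_shopping_task(dish: str) -> None:
    recipe = find_recipe(dish)
    for item in recipe["ingredients"]:
        add_to_basket(item)
    # Deliberately hand control back to the user here: no automatic checkout.
    print("Basket ready. Review and pay when you choose.")

if __name__ == "__main__":
    run_shopping_task("chana masala")
```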
As AI agents become more autonomous, questions about privacy, security, and ethical use will likely come to the forefront. Ensuring these agents act in accordance with user intentions and societal norms will be crucial for their widespread adoption and success.
Amodei predicts that by 2025, AI could potentially match the capabilities of PhD students or early-career professionals in various fields 1. This rapid advancement suggests that AI agents could become a transformative force in both personal and professional spheres, reshaping how we interact with technology and manage our digital lives.