Curated by THEOUTPOST
On Sun, 15 Dec, 8:01 AM UTC
8 Sources
[1]
Google's push to re-establish lead in AI boosts investor confidence
Google's push to re-establish itself at the vanguard of technological research and artificial intelligence this month has driven its stock to a record high while quietening criticism it had fallen behind rivals. Over the course of December, the Big Tech group impressed investors with a more advanced version of its AI models and applications called Gemini 2.0, which has beaten rivals in benchmark testing. In a flurry of co-ordinated releases, the company also unveiled a new generation of its custom AI accelerator chip -- a Tensor Processing Unit (TPU) called Trillium -- which aims to challenge Nvidia's near monopoly on the market. Google also added the capability to act on users' behalf and compile complex research reports -- Project Mariner -- and answer real-time queries across text, video and audio -- Project Astra -- including via smart glasses. And it launched video and image generation models called Veo 2 and Imagen 3. "The last month has transformed the state of AI, with the pace picking up dramatically in just the last week," said Ethan Mollick, a professor at Wharton business school and author of a book on the technology, describing Google's releases, in particular Veo 2, as "astonishing". "This isn't steady progress -- we're watching AI take uneven leaps past our ability to easily gauge its implications," added Mollick. Additionally, Google confirmed last week that it had made a breakthrough in quantum computing with a chip called Willow. It can hold "qubits" stable for longer, reducing errors and allowing them to perform useful computations. The company claims it can complete tasks in 5 minutes that would take conventional supercomputers 10 septillion years, but the elusive technology remains years from commercial application. In further recognition of its research edge, in October Sir Demis Hassabis, founder of Google's AI research lab DeepMind, and his colleague John Jumper shared the chemistry Nobel Prize for predicting the structure of every known protein using AI software known as AlphaFold. The showcase of technological advances -- along with three consecutive quarters of double-digit profit growth -- has helped revive parent company Alphabet's share price. The stock is up 38 per cent this year and briefly touched a record high of $199.91 this week, giving it a market capitalisation of $2.3tn. Still, a $1tn gap remains to close with Microsoft. Since the release of ChatGPT in late 2022, Google appeared to squander its early advantage in AI -- despite having incubated the underlying research -- particularly after its arch-rival Microsoft partnered with OpenAI. It took Google a year to release its own comparable version, Gemini. "Alphabet has been under the microscope since ChatGPT was released," said Tiffany Hsia, a US equity portfolio manager at AllianceBernstein, a shareholder in the company. "Gemini 2.0 and the quantum chip gives investors renewed confidence that they are one of the leading tech powerhouses." In a sign of growing confidence, chief executive Sundar Pichai -- who had faced some of the harshest criticism in his nine-year tenure for the slow AI rollouts in the spring -- challenged his counterpart at Microsoft, Satya Nadella. "I would love to do a side-by-side comparison of Microsoft's own models and our models any day, any time," said Pichai during the DealBook Summit earlier this month. He added with a smile that, besides, "they're using someone else's models".
While the company demonstrates its technological chops, it must work out how to incorporate these innovations into its consumer and commercial applications without stifling the creativity of its engineers. Pichai has steadily sought to integrate AI into the company's search engine, while needing to placate investors concerned that such a move will cannibalise advertising earnings. The search giant still controls 90 per cent of the market, but for the first time in decades faces real competition from AI-powered products by groups including OpenAI, Anthropic and Perplexity that can provide comprehensive answers rather than links. Google's solution so far has been "AI Overviews", brief answers to queries at the top of its results page. Executives have said the feature is popular, but early evidence suggests users click on Overview ads at a lower rate -- down 8 per cent year-on-year in the third quarter, according to research by advertising platform Skai. Other threats remain. After Google lost a big antitrust case over its search business in August, the Department of Justice is seeking to force the sale of its Chrome browser, cancel its exclusive contract to be Apple's default search engine and make it share the trove of user data that underpins Google's proprietary webpage ranking algorithms, ad targeting systems and AI model training. The company is awaiting the results of another monopoly trial in the US, focused on its ad tech business, in which Alphabet's other core revenue source could be broken up. Another potential danger is Elon Musk. The world's richest man holds sway over president-elect Donald Trump after spending $250mn to help win last month's US election -- gaining the power to influence AI regulation and antitrust enforcement. Musk's xAI start-up has also built the world's largest supercomputer in Memphis, Tennessee, in record time. Nicknamed Colossus, it has networked 100,000 cutting-edge Nvidia graphics processing units -- and has plans to expand the data centre 10-fold to 1mn chips -- which should help xAI's chatbot Grok catch up with the competition in 2025. "Sundar does now seem more confident. Because the ethos of Google is to be perfectionist, we may see product launches at a more meticulous, calculated pace, but we mustn't be so impatient about it," said AllianceBernstein's Hsia. "This is a race, and recent developments show they're still in it."
[2]
Is tech industry already on cusp of artificial intelligence slowdown?
Demis Hassabis, one of the most influential artificial intelligence experts in the world, has a warning for the rest of the tech industry: Don't expect chatbots to continue to improve as quickly as they have over the past few years. AI researchers have for some time been relying on a fairly simple concept to improve their systems: the more data culled from the internet that they pumped into large language models -- the technology behind chatbots -- the better those systems performed. But Hassabis, who oversees Google DeepMind, the company's primary AI lab, now says that method is running out of steam simply because tech companies are running out of data. "Everyone in the industry is seeing diminishing returns," Hassabis said this month in an interview with The New York Times as he prepared to accept a Nobel Prize for his work on AI. Hassabis is not the only AI expert warning of a slowdown. Interviews with 20 executives and researchers showed a widespread belief that the tech industry is running into a problem many would have thought was unthinkable just a few years ago: They have used up most of the digital text available on the internet. That problem is starting to surface even as billions of dollars continue to be poured into AI development. On Tuesday, Databricks, an AI data company, said it was closing in on $10 billion in funding -- the largest-ever private funding round for a startup. And the biggest companies in tech are signaling that they have no plans to slow down their spending on the giant data centers that run AI systems. Not everyone in the AI world is concerned. Some, including OpenAI CEO Sam Altman, say progress will continue at the same pace, albeit with some twists on old techniques. Dario Amodei, CEO of AI startup Anthropic, and Jensen Huang, CEO of Nvidia, are also bullish. (The Times has sued OpenAI, claiming copyright infringement of news content related to AI systems. OpenAI has denied the claims.) The roots of the debate trace to 2020 when Jared Kaplan, a theoretical physicist at Johns Hopkins University, published a research paper showing that large language models steadily grew more powerful and lifelike as they analyzed more data. Researchers called Kaplan's findings "the Scaling Laws." Just as students learn more by reading more books, AI systems improved as they ingested increasingly large amounts of digital text culled from the internet, including news articles, chat logs and computer programs. Seeing the raw power of this phenomenon, companies such as OpenAI, Google and Meta raced to get their hands on as much internet data as possible, cutting corners, ignoring corporate policies and even debating whether they should skirt the law, according to an examination this year by the Times. It was the modern equivalent of Moore's Law, the oft-quoted maxim coined in the 1960s by Intel co-founder Gordon Moore. He showed that the number of transistors on a silicon chip doubled every two years or so, steadily increasing the power of the world's computers. Moore's Law held up for 40 years. But eventually, it started to slow.
The problem is: Neither the Scaling Laws nor Moore's Law are immutable laws of nature. They're simply smart observations. One held up for decades. The others may have a much shorter shelf life. Google and Kaplan's new employer, Anthropic, cannot just throw more text at their AI systems because there is little text left to throw. "There were extraordinary returns over the last three or four years as the Scaling Laws were getting going," Hassabis said. "But we are no longer getting the same progress." Hassabis said existing techniques would continue to improve AI in some ways. But he said he believed that entirely new ideas were needed to reach the goal that Google and many others were chasing: a machine that could match the power of the human brain. Ilya Sutskever, who was instrumental in pushing the industry to think big as a researcher at both Google and OpenAI before leaving OpenAI to create a new startup this past spring, made the same point during a speech last week. "We've achieved peak data, and there'll be no more," he said. "We have to deal with the data that we have. There's only one internet." Hassabis and others are exploring a different approach. They are developing ways for large language models to learn from their own trial and error. By working through various math problems, for instance, language models can learn which methods lead to the right answer and which do not. In essence, the models train on data that they themselves generate. Researchers call this "synthetic data." OpenAI recently released a new system called OpenAI o1 that was built this way. But the method only works in areas such as math and computing programming, where there is a firm distinction between right and wrong. Even in these areas, AI systems have a way of making mistakes and making things up. That can hamper efforts to build AI "agents" that can write their own computer programs and take actions on behalf of internet users, which experts see as one of AI's most important skills. Sorting through the wider expanses of human knowledge is even more difficult. "These methods only work in areas where things are empirically true, like math and science," said Dylan Patel, chief analyst for research firm SemiAnalysis, who closely follows the rise of AI technologies. "The humanities and the arts, moral and philosophical problems are much more difficult." People such as Altman say these new techniques will continue to push the technology ahead. But if progress reaches a plateau, the implications could be far-reaching, even for Nvidia, which has become one of the most valuable companies in the world thanks to the AI boom. During a call with analysts last month, Huang was asked how the company was helping customers work through a potential slowdown and what the repercussions might be for its business. He said that evidence showed there were still gains being made, but that businesses were also testing new processes and techniques on AI chips. "As a result of that, the demand for our infrastructure is really great," Huang said. Although he is confident about Nvidia's prospects, some of the company's biggest customers acknowledge that they must prepare for the possibility that AI will not advance as quickly as expected. "We have had to grapple with this. Is this thing real or not?" said Rachel Peterson, vice president of data centers at Meta. "It is a great question because of all the dollars that are being thrown into this across the board."
[3]
Is the Tech Industry Nearing an A.I. Slowdown?
Demis Hassabis, one of the most influential artificial intelligence experts in the world, has a warning for the rest of the tech industry: Don't expect chatbots to continue to improve as quickly as they have over the last few years. A.I. researchers have for some time been relying on a fairly simple concept to improve their systems: the more data culled from the internet that they pumped into large language models -- the technology behind chatbots -- the better those systems performed. But Dr. Hassabis, who oversees Google DeepMind, the company's primary A.I. lab, now says that method is running out of steam simply because tech companies are running out of data. "Everyone in the industry is seeing diminishing returns," Dr. Hassabis said this month in an interview with The New York Times as he prepared to accept a Nobel Prize for his work on artificial intelligence. Dr. Hassabis is not the only A.I. expert warning of a slowdown. Interviews with 20 executives and researchers showed a widespread belief that the tech industry is running into a problem many would have thought was unthinkable just a few years ago: They have used up most of the digital text available on the internet. That problem is starting to surface even as billions of dollars continue to be poured into A.I. development. On Tuesday, Databricks, an A.I. data company, said it was closing in on $10 billion in funding -- the largest-ever private funding round for a start-up. And the biggest companies in tech are signaling that they have no plans to slow down their spending on the giant data centers that run A.I. systems. Not everyone in the A.I. world is concerned. Some, like OpenAI's chief executive, Sam Altman, say that progress will continue at the same pace, albeit with some twists on old techniques. Dario Amodei, the chief executive of the A.I. start-up Anthropic, and Jensen Huang, Nvidia's chief executive, are also bullish. (The New York Times has sued OpenAI, claiming copyright infringement of news content related to A.I. systems. OpenAI has denied the claims.) The roots of the debate trace back to 2020 when Jared Kaplan, a theoretical physicist at Johns Hopkins University, published a research paper showing that large language models steadily grew more powerful and lifelike as they analyzed more data. Researchers called Dr. Kaplan's findings "the Scaling Laws." Just as students learn more by reading more books, A.I. systems improved as they ingested increasingly large amounts of digital text culled from the internet, including news articles, chat logs and computer programs. Seeing the raw power of this phenomenon, companies like OpenAI, Google and Meta raced to get their hands on as much internet data as possible, cutting corners, ignoring corporate policies and even debating whether they should skirt the law, according to an examination this year by The New York Times. It was the modern equivalent of Moore's Law, the oft-quoted maxim coined in the 1960s by the Intel co-founder Gordon Moore. He showed that the number of transistors on a silicon chip doubled every two years or so, steadily increasing the power of the world's computers. Moore's Law held up for 40 years. But eventually, it started to slow. The problem is, neither the Scaling Laws nor Moore's Law are immutable laws of nature. They're simply smart observations. One held up for decades. The others may have a much shorter shelf life. Google and Dr. Kaplan's new employer, Anthropic, cannot just throw more text at their A.I. 
systems because there is little text left to throw. "There were extraordinary returns over the last three or four years as the Scaling Laws were getting going," Dr. Hassabis said. "But we are no longer getting the same progress." Dr. Hassabis said that existing techniques would continue to improve A.I. in some ways. But he said he believed that entirely new ideas were needed to reach the goal that Google and many others were chasing: a machine that could match the power of the human brain. Ilya Sutskever, who was instrumental in pushing the industry to think big as a researcher at both Google and OpenAI before leaving OpenAI to create a new start-up this spring, made the same point during a speech last week. "We've achieved peak data, and there'll be no more," he said. "We have to deal with the data that we have. There's only one internet." Dr. Hassabis and others are exploring a different approach. They are developing ways for large language models to learn from their own trial and error. By working through various math problems, for instance, language models can learn which methods lead to the right answer and which do not. In essence, the models train on data that they themselves generate. Researchers call this "synthetic data." OpenAI recently released a new system called OpenAI o1 that was built this way. But the method only works in areas like math and computing programming, where there is a firm distinction between right and wrong. Even in these areas, A.I. systems have a way of making mistakes and making things up. That can hamper efforts to build A.I. "agents" that can write their own computer programs and take actions on behalf of internet users, which experts see as one of A.I.'s most important skills. Sorting through the wider expanses of human knowledge is even more difficult. "These methods only work in areas where things are empirically true, like math and science," said Dylan Patel, chief analyst for the research firm SemiAnalysis, who closely follows the rise of A.I. technologies. "The humanities and the arts, moral and philosophical problems are much more difficult." People like Mr. Altman of OpenAI say that these new techniques will continue to push the technology ahead. But if progress reaches a plateau, the implications could be far-reaching, even for Nvidia, which has become one of the most valuable companies in the world thanks to the A.I. boom. During a call with analysts last month, Mr. Huang, Nvidia's chief executive, was asked how the company was helping customers work through a potential slowdown and what the repercussions might be for its business. He said that evidence showed there were still gains being made, but that businesses were also testing new processes and techniques on A.I. chips. "As a result of that, the demand for our infrastructure is really great," Mr. Huang said. Though he is confident about Nvidia's prospects, some of the company's biggest customers acknowledge that they must prepare for the possibility that A.I. will not advance as quickly as expected. "We have had to grapple with this. Is this thing real or not?" said Rachel Peterson, vice president of data centers at Meta. "It is a great question because of all the dollars that are being thrown into this across the board."
[4]
Artificial Intelligence in 2030
At the DealBook Summit, ten experts in artificial intelligence discussed the greatest opportunities and risks posed by the technology. Modern artificial intelligence is expected to be one of the most consequential technologies in history. But there is a big debate over what those consequences will be: Will the technology power an age of prosperity, in which humans work less? Will it be used to wipe out humanity? In a discussion at the DealBook Summit moderated by Kevin Roose, a technology columnist for The Times and a co-host of the Times tech podcast, "Hard Fork," 10 experts discussed the greatest opportunities and risks. Here's what they said. The opportunities In a live poll, seven of the experts indicated they thought there was a 50 percent chance or greater that artificial general intelligence -- the point at which A.I. can do everything a human brain can do -- would be built before 2030. But most of the potential opportunities experts pointed out could materialize well before then. Josh Woodward, vice president of Google Labs, said A.I. could help humans create in different mediums, for example. Peter Lee, the president of Microsoft Research, pointed out a wide range of potential applications: "We might be able to do things like drastically speed up drug discovery or find targets for drugs that are currently considered undruggable. Or we could predict severe weather events days or even a couple of weeks in advance. Even mundane things like, I don't know, making your vegan food taste better or your skin tone fresher looking." A.I. can personalize lesson plans for students, said Sarah Guo, the founder of Conviction, a venture capital firm. She added that a similar approach could make everything from specialized medical services to legal advice more accessible. The technology could also have a broader impact on daily life, Guo said. "I think we're going to continue to have a market economy, and people will see a significant part of their value to be society and their identity be determined by their work," she said. But she added that expectations for what "an improved speed of scientific discovery and cheaper health care and education means in the world should be a little bit more positive." Of her own expectations, she said: "In a future where you have a high quality of life, where there is enough productivity, where you can do less work, and the work you do is what you choose, I think people learn and entertain and create." The risks Some of the top figures in A.I. have warned of its potential risks. Geoffrey Hinton, a former Google researcher who won the Nobel Prize this year, has pointed to potential hazards including misinformation and truly autonomous weapons. More than 1,000 A.I. leaders and researchers signed an open letter last year saying that A.I. tools posed "profound risks to society and humanity" and urged development labs to pause work on the most advanced A.I. systems. In a separate letter last year, leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs signed a one-sentence statement: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Geopolitics Dan Hendrycks, director of the Center for A.I. Safety, which released the statement signed by OpenAI, DeepMind and Anthropic executives, said his top fear about A.I. had changed. It used to be bioweapons, he said, but A.I. 
chatbots now have improved safety mechanisms that mostly stop them from providing instructions to make weapons. Now his biggest fear is geopolitics. "For instance, if China invades Taiwan later this decade, that's where we get all of our A.I. chips from," he said. "So that would be a fairly plausible world where the West would summarily fall behind." Some industry leaders, including Alexander Karp, the chief executive of Palantir Technologies, have argued that the U.S. needs a program to accelerate development of A.I. technology, similar to how it established the Manhattan Project to develop nuclear weapons, to keep it from falling behind the rest of the world. At the DealBook Summit, Marc Raibert, the founder of Boston Dynamics, the robotics company, disagreed. "It seems to me we have about three or four or five of them already if you look at the big companies who are investing 10s or 20s or 30s of billions of dollars in it," he said, referring to the handful of companies building generative A.I. models, which includes Meta, Google and OpenAI, which is spending more than $5.4 billion a year to develop A.I. Eugenia Kuyda, the founder of Replika, an A.I. companion company, said that if the U.S. government wanted to accelerate A.I. research, it should start by making it easier for A.I. scientists to immigrate. "It's almost impossible to bring people here," she said, adding of A.I. scientists, "it's actually much harder to get a visa if you're coming with one of those degrees." Economic insecurity In another live poll, six of the 10 panelists indicated they believed A.I. will create more jobs than it destroys. "A lens that I use to think about the A.I. revolution is that it will play out like the Industrial Revolution but around 10 times faster," said Ajeya Cotra, who leads grant making for research on potential risks posed by advanced A.I. at Open Philanthropy. But the vision of widespread economic prosperity that some think A.I. puts within reach isn't a given. "Things can go in different directions," said Tim Wu, a professor at Columbia Law School and former special assistant to the president for technology and competition policy in the Biden administration. "The plow made a lot of farmers able to be self-sustaining. But something like the cotton gin reinforced the plantation model and led a lot of workers who were enslaved to terrible life conditions." Wu said the A.I. revolution could lead to economic insecurity that has bigger geopolitical effects, similar to how the Industrial Revolution arguably led to World War I or World War II. Empowering bad actors Cotra refers to a future in which A.I. makes most of the decisions as "the obsolescence regime." In this future, she said, "to refuse to use A.I. or even to spend too much human time double checking its decisions would be like insisting on using pen and paper or not using electricity today." There may be danger in giving machines too much control, she argued: I think a big reason that we don't have nasty engineered pandemics every year is that there are some tens of thousands of human experts that could create such pandemics, but they don't want to. They choose not to, because they're not sociopathic. But in this future world that I'm imagining, you would have expertise at the push of a button that could potentially be perfectly loyal to any individual who has the power, the money to buy computers to run those experts on.
And I think about that in the context of, say, democracy. President Trump, in his previous term, tried to push the limits in a bunch of different ways, tried to tell people underneath him to do things that were norm violating or illegal, and they pushed back. If all those people under him were instead 10 times as brilliant, but perfectly loyal -- programmed to be perfectly loyal -- that could be a destabilizing situation. Our society is sort of designed around some give and some autonomy from humans. Another fear? That the machines could go rogue. If "they're running the economy," Cotra said, "they're running on our militaries, our governments, and if they were to kind of go awry, we wouldn't have very many options with which to stop that transition." "A.I. slop" One immediate fear cited by Hinton, the Nobel Prize-winning researcher, is that A.I. will flood the internet with so much false content that most people will "not be able to know what is true anymore." At the DealBook Summit, Woodward, of Google Labs, said that he thought "A.I. slop" could increase the value of things that are created by humans. "Even labeling it, marking it as human created or other things don't seem far-fetched to me," he said. "The value of taste, I think will go up," he added. "So taste from a user perspective, but also how companies like Google and others rank content and surface, discover, retrieve it."
[5]
Google's Sundar Pichai on Antitrust, Trump and A.I.
Google's chief executive spoke with Andrew Ross Sorkin at the DealBook Summit Google got a head start in the artificial intelligence race, and at the DealBook Summit on Dec. 4, its chief executive, Sundar Pichai, snapped back at suggestions that it should be more competitive considering its vast resources. Whereas A.I. startups rely on tech giants for processing power, Google uses its own. The company's products, like YouTube and Gmail, give it access to mountains of data, and its A.I. researchers have made huge breakthroughs, with two of them winning a Nobel Prize this year. That gives Google an advantage in all three of what Sam Altman, the chief executive of OpenAI, earlier in the day called "key inputs" to A.I. progress: compute, data and algorithms. Microsoft's chief executive, Satya Nadella, has said that Google should have been the "default winner" in A.I. At the DealBook Summit, Pichai responded, "I would love to do a side-by-side comparison of Microsoft's own models and our models any day, any time." Microsoft largely depends on OpenAI for its A.I. models. Pichai also defended his company's competitiveness. He said that although he thought A.I. progress would slow in the next year (speaking earlier, Altman had a different take), Google's search engine "will continue to change profoundly in '25." He said he expected search to become more, not less, valuable as the web is flooded with content generated by A.I. Pichai also touched on the company's antitrust lawsuits, the second Trump administration and how artificial intelligence is affecting the way he hires. Here are five highlights from the conversation. On Google's antitrust cases Google lost an antitrust case in August over its search dominance, and the company now faces the possibility that a federal judge will force it to divest from its Chrome web browser. It is also awaiting a decision in an antitrust case over its ad tech business. Pichai defended the company and said he had "deep faith in our judicial system." He also said that Google might eventually spin off units for other reasons: There are companies in our other bets, which they are set up with boards; we have outside investors. Just -- we take a long-term view, and you know, do I expect in a 10-year time frame some of those to be independent public companies? The answer is yes. He said that regardless of which businesses would ultimately spin off from Google's parent company, Alphabet, "I'm staying with the mothership." On Trump President-elect Donald Trump recently nominated Andrew Ferguson to replace Lina Khan as F.T.C. chair. While Ferguson is likely to be more lenient on mergers than his predecessors were, he told members of Trump's transition team that he would continue to scrutinize Big Tech companies. At the summit, which was held before Ferguson was announced as Trump's pick, Pichai expressed optimism about the Trump administration: Look, I think there's a real opportunity in this moment. One of the constraints for A.I. could be the infrastructure we have in this country, including energy. The rate at which we can build things. I think there are real areas where I think he's thinking about and committed to making a difference. So hopefully we can make progress there. Before and after the election, technology executives courted favor with Trump, and Amazon, Meta and Altman of OpenAI each plan to donate $1 million to the president-elect's inaugural fund. 
Other executives speaking at the summit expressed views similar to Pichai's, including Jeff Bezos, who said, "I'm very hopeful." On A.I. and hiring On a recent call with analysts, Pichai said that more than a quarter of Google's new code was now generated by A.I. but reviewed and accepted by engineers. Pichai said the technology would make engineers more productive than ever, and that more people will become programmers, not fewer: Just like blogging made the world of publishing, not everyone needs to be as good as you to get online and write something. And, you know, I feel the same with programming. I think 10 years from now, it will be accessible to millions more people. On whether Google will need more or fewer programmers in the future, he said: All of us as companies are thinking about how to be more productive. You have to do that. And A.I. is one of the most important ways we are thinking about how to make the company more efficient and productive across everything. So factored into our growth plans is an assumption that our software engineers will be more productive than ever before. So that may, on the margin, have an impact, but it's also being able to do more things. So it's not that you're looking to hire less people, but what can you accomplish with those people? On A.I. safety and regulation Geoffrey Hinton, a former Google engineer who won a Nobel Prize this year for his work in artificial intelligence, left the company last year and warned of the technology's dangers. Regulators around the world have taken vastly different approaches to regulating the technology, and the U.S. federal government has been slow to put any guardrails around A.I., even as executives like Altman have suggested that such regulation is necessary. Pichai said he was "definitely on the optimistic side" about the potential impact of A.I. and argued that existing regulation already covered a lot of the uses for artificial intelligence. For example, he said: It's not like you can bring a treatment in without going through all the regulatory approvals. So just because you're using A.I. doesn't change all of that, right? So you really want to be careful about what additional regulation, if anything, you need at all. You know, you have to get your drugs approved. There's the established process to do that. On employee activism A decade ago, Google was considered a hotbed of employee activism, but executives have made moves to discourage employees from expressing their political views in the office. This year, the company fired 28 workers after they participated in sit-ins at work to protest its cloud computing contract with the Israeli government, and Pichai wrote in a memo to employees that Google was not a place to "fight over disruptive issues or debate politics." Pichai said of the company's apparent shift: People come with a variety of personal opinions of workplaces, and where you can reconcile all those differences is that you're there because you believe in the mission. And the best way we can impact the world is through the products and services we build. And so getting our employees to be more mission-first and mission-focused. The company is not a personal platform, right? And I think for me, it's been a change for a while. On whether the power dynamic in companies has swung from employees back to employers, Pichai said: I don't see it as a power dynamic, necessarily. I actually think it's resonating with a lot of employees, too. On using copyrighted material to train A.I.
The business model around the enormous amounts of data used to create A.I. models is in flux. News sites like The New York Times are suing OpenAI and Microsoft for using articles without authorization, while other sites like The Associated Press have signed deals to license their data. Google pays to license data from Reddit, for example, but the Reddit users who created the data aren't paid. Pichai said: I think there'll be creators who will create for A.I. models, or something like that, and get paid for it. I definitely think that's part of the future.
[6]
Sam Altman reckons with a growing threat to OpenAI: Elon Musk
OpenAI's Sam Altman is reckoning with an unpredictable force that threatens his ambition of transforming the start-up into a trillion-dollar company: Elon Musk. Since Donald Trump was elected president in November, executives at the ChatGPT-maker have been preparing to deal with the incoming US administration -- a process complicated by Musk's emergence as a pivotal confidant of the president-elect. OpenAI has been among Musk's rivals who are trying to anticipate how the billionaire may use his new vantage point in Washington, from pushing for new regulations that target the company to influencing the award of lucrative government contracts that could boost Musk's own artificial intelligence start-up xAI. "I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power, to the degree that Elon has it, to hurt your competitors and advantage your own businesses," Altman told a New York Times conference last week. Trump himself has said Musk would put the national interest ahead of his companies, while Musk said on his social media platform X that rivals were "right" to expect him to be magnanimous. "No one believes that for a second," said a lawyer who has incurred Musk's wrath in the past. Since the pair founded OpenAI together in 2015, the relationship between Musk and Altman has collapsed. The Tesla chief has described Altman as "swindly Sam" and filed lawsuits against him and OpenAI accusing them of "deceit of Shakespearean proportions" while seeking to void its multibillion-dollar commercial partnership with Microsoft. Musk is "unique", according to OpenAI's policy chief Chris Lehane, a political veteran who has helped companies such as Airbnb and Coinbase navigate tricky regulatory obstacles. OpenAI's approach would be to "control what we can control", he added. The company was emphasising its importance to the Trump agenda on three fronts, according to Lehane: boosting US competitiveness, particularly against China, rebuilding the economy and bolstering national security. Altman is also donating $1mn of his own money to Trump's inaugural fund. "At the end of the day, every American, in or out of government [is] going to want to put the interest of the US first," Lehane said. "This administration talked over the campaign and since about the imperative of . . . US-led AI prevailing. If you want that to happen then OpenAI is going to have to be in the mix." OpenAI has been at the front of the pack of AI companies since launching ChatGPT in November 2022. It is currently changing its structure, in part to accommodate greater external investment in a bid to remain ahead -- a move which Musk's lawsuit alleges betrays OpenAI's original mission. On Friday, OpenAI fired back in a blog post, claiming Musk himself pushed for a similar structure in 2017, when he was still co-chair. Musk "should be competing in the marketplace rather than the courtroom," the company said. Reid Hoffman, founder of LinkedIn and board member at Microsoft, OpenAI's biggest backer, said he was "certainly worried" that Musk's animosity towards Altman would play out in Trump's AI policies. "Obviously [someone with] integrity and character would say, look, since I'm involved in these kinds of lawsuits and so forth, I should keep myself distinct from the operation of government in these things," said Hoffman.
Should Musk blur his personal views and larger geopolitical rules and structures, it "portends potentially dangerous myopias and dangerous conflicts of interest", he added. People close to Musk said he was too principled to use his new role to target OpenAI with onerous regulation, and it made no sense to do so given his remit as the co-chair of a new US "department of government efficiency" is to find ways to slash regulation. "You will see a bunch of red tape cut," said one person who has invested in Musk and Altman's companies. "OpenAI will have a streamlined process for getting their data centres up and running quickly. It will be equally applied across the competitor set," they added. Musk could, however, leverage his position as a central player in the incoming administration to boost xAI, according to an investor in one of his companies. "The US government is the biggest employer in the US," the person said. "As [Musk's] web of customers expands, does the government become a large customer [for xAI]?" Hoffman, a former OpenAI board member, speculated that Musk could use his position to slow down competitors to xAI. "You could just do all of that kind of thing if you're implementing government policy to try to privilege one company over others," he said, adding that it would be "frankly a very destructive thing to do. It's destructive for the industry, it's destructive for American society." For now, OpenAI's biggest challenge from Musk comes from direct competition from xAI, rather than political leverage. "Across Musk's companies they have probably the largest proprietary data set anywhere. They have satellite images from Starlink, videos from cars at Tesla and X data. They are having a serious crack at it," said a person who has worked with both entrepreneurs. xAI's latest chatbot offering Grok-2, released in August, has managed to compete with similar models from leading tech groups, and is on the tail of Google's Gemini, OpenAI's ChatGPT and Meta's Llama. Earlier this year, Musk started work on Colossus, a supercomputer based in Memphis, Tennessee. By September it was online and being used to train xAI's large language model, Grok, a rival to OpenAI's latest generative AI system, GPT-4. "From start to finish, it was done in 122 days," Musk wrote on X. The data centre houses more than 100,000 Nvidia H100 graphics processing units, more than any other individual AI compute cluster. Jensen Huang, chief executive of Nvidia, said in October that "there was only one person in the world who could do that", and has also referred to Colossus as "easily the fastest supercomputer on the planet as one cluster". "The one feather in his cap -- other than torturing Altman -- is the speed they put out Colossus," said a large investor in a number of Musk's companies, including SpaceX and xAI. "Nobody has the same compute power for AI and that's a big deal, but there's a lot to be determined." Regardless of Musk's new advantage gained through his proximity to the president-elect, the investor said the biggest threat to OpenAI remained his position at the helm of overlapping businesses, a vast personal fortune and the relentless working culture instilled at his companies. "Elon can manifest things in the real world that others can't," they said.
[7]
How OpenAI Plans to Move From Being a Nonprofit
Last fall, the nonprofit that controls OpenAI tried to fire the company's high-profile leader, Sam Altman. It failed. Ever since then, Mr. Altman has been trying to wrest control of the company away from the nonprofit. Under the watchful eyes of government regulators, the press and the public, Mr. Altman and his colleagues are working to sever the nonprofit's control while ensuring that the existing board is properly compensated for the changes, according to four people familiar with the negotiations who spoke on the condition of anonymity. Mr. Altman and his colleagues need to answer a question: What is a fair price for ceding control over a technology that might change the world? Proper compensation to the nonprofit is still being debated, but it could easily be in the billions of dollars, one person said. And the clock is ticking for OpenAI's board of directors. It has promised investors that it will restructure the organization within the next two years, according to documents reviewed by The New York Times. "We are and have been for a while looking at some changes," Mr. Altman said this month during an appearance at The Times's DealBook Summit in New York. "It is, as you can imagine, complicated." As it stands, OpenAI is a for-profit operation with hundreds of workers, millions of customers and billions of dollars in revenue that is overseen -- at least in theory -- by a high-minded nonprofit with just two employees. Mr. Altman wants to remove the nonprofit's control and let the for-profit business run itself. Without that new structure, OpenAI could struggle to raise the enormous amounts of money needed to build its technologies and keep pace with tech giants like Google, Meta and Amazon. Mr. Altman and his colleagues also have to redefine OpenAI's identity, without a nonprofit at its core. The maker of ChatGPT has prided itself on self-restraint: The nonprofit required it to put humanity first and profits second. "Any potential restructuring would ensure the nonprofit continues to exist and thrive, and receives full value for its current stake in the OpenAI for-profit with an enhanced ability to pursue its mission," Bret Taylor, chairman of the nonprofit board, said in a statement to The Times. The negotiations are complicated by the involvement of outside investors, including Microsoft. Microsoft's approval may be required to make the final change, one person said. They are further complicated by the involvement of Mr. Altman. He holds a position on the board of the nonprofit and is chief executive of the for-profit company, putting him effectively on both sides of this negotiation. But he has not recused himself, one person said. "We don't know what hat he's wearing," said Ellen Aprill, a professor at Loyola Law School in California who studies nonprofits and who has written about OpenAI's structure. "He has such a strong interest in the new structure that it seems very hard to believe that he could be acting solely in his fiduciary duty as a member of the board of the nonprofit." If the nonprofit is removed from OpenAI's chain of command, it could spin off into funding research on topics like ethics in artificial intelligence, one person said. But Mr. Altman and his colleagues have not yet assigned a dollar value to the nonprofit's potential loss of control. "Here, the asset in question is so unique and potentially so earth-shattering," said Alexander L. Reid, a lawyer advising nonprofits at the law firm BakerHostetler. 
"How much is it worth to control the power to bring the genie out of the bottle?" OpenAI sparked the A.I. boom with the release of ChatGPT in late 2022. It remains the market leader, with 300 million people using its chatbot each month. But despite its success, the company is still saddled by a decision that its founders made nearly a decade ago when they first decided to build an A.I. lab: They did not start a company. They started a charity. Mr. Altman founded OpenAI as a nonprofit in December 2015 alongside several A.I. researchers and entrepreneurs, including Elon Musk. Their concern was that Google, which was leading the race to build artificial intelligence, might obtain too much control over the future of humanity -- and that it would not see the potential harms as it sped toward more powerful profits. "Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity," Mr. Altman said then. But the arrangement lasted only three years. (The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit's claims.) By 2018, OpenAI's founders realized that building powerful A.I. technologies would require far more money than they could raise through a nonprofit. Early that year, Mr. Musk left the lab. And when Mr. Altman took over as chief executive, he created a new OpenAI: a for-profit company able to take on investors and promise them financial returns, while still answering to the nonprofit board. By the next year, OpenAI raised a billion dollars from Microsoft. And then $12 billion more. That arrangement lasted about five years, until the nonprofit board's attempt to remove Mr. Altman in November 2023. The board said it no longer trusted Mr. Altman to build artificial intelligence for the benefit of humanity. The ouster of Mr. Altman was exactly the kind of hard decision that the nonprofit was set up to make, putting OpenAI's ideals before the demands of the market. But the market won. After protests from investors and employees, Mr. Altman was reinstated and most of the board was replaced. Still, the episode left investors shaken. Microsoft was about to invest more money in OpenAI, but backed off from negotiations after Mr. Altman's temporary ouster, according to four people familiar with the talks. This fall, OpenAI raised $6.6 billion from many of the world's wealthiest companies and investment firms, including Microsoft, the chipmaker Nvidia, the Japanese tech conglomerate SoftBank and the United Arab Emirates investment firm MGX. But the money arrived with a footnote. If OpenAI did not change its corporate structure within two years, the investment would convert into debt, according to documents reviewed by The Times. That would put the company in a much riskier situation, saddling its balance sheet with even more red ink when it is already losing billions of a dollars a year. Kathy Jennings, Delaware's attorney general, oversees OpenAI's nonprofit because it is registered in her state. Ms. Jennings, a Democrat, told OpenAI in October that she wanted to review any potential changes, to be sure the nonprofit was not shortchanged. Facebook's parent company Meta -- one of OpenAI's main rivals in the A.I. race -- has also asked California's attorney general, Rob Bonta, to block these changes. Mr. Bonta, a Democrat, has jurisdiction over charities operating in his state. 
He did not respond to a request for comment. Mr. Reid, the lawyer, said the huge range of unknowns about the future of OpenAI reminded him of nuclear-fission research in the 1940s. It could power the world or destroy it. "You can't appreciate the value of this technology until everyone has used it, and understood it," he said. For now, the nonprofit also holds another key power: It can decide when OpenAI has reached "artificial general intelligence," or A.G.I. That would mean OpenAI's computers could perform most tasks that a human brain could. Reaching A.G.I. could also reshape OpenAI's business. When that declaration is made, Microsoft loses its rights to use OpenAI's technology, according to the investment contract it signed with OpenAI. If OpenAI severs its ties to Microsoft, it could consider partnerships with other tech giants. Already, OpenAI's for-profit company has used this potential declaration as leverage against Microsoft -- warning that if Microsoft will not agree to better terms, the nonprofit might issue this declaration and void their entire agreement, according to a person familiar with the company's negotiations. OpenAI must also satisfy another party: the public at large. In part because Mr. Altman has spent years publicly warning that A.I. could become dangerous, many individuals now share similar concerns. And many in the tech industry are publicly questioning whether OpenAI is prepared to guard against the risks its technologies will bring. Mr. Altman said this month that one of the options that OpenAI had explored was the creation of a "public benefit corporation" that would be partly owned by the original nonprofit. A "P.B.C." is a for-profit corporation designed to create public and social good. Its commitment to good is largely ceremonial, but this may be OpenAI's best option as it looks to please everyone.
[8]
Sam Altman on Microsoft, Trump and Musk
The OpenAI C.E.O. spoke with Andrew Ross Sorkin at the DealBook Summit.

Since kicking off the artificial intelligence boom with the launch of ChatGPT in 2022, OpenAI has amassed more than 300 million weekly users and a $157 billion valuation. Its C.E.O., Sam Altman, addressed whether that staggering pace of growth can continue at the DealBook Summit last week. Altman pushed back on assertions that progress in A.I. is becoming slower and more expensive; on reports that the company's relationship with its biggest investor, Microsoft, is fraying; and on concerns that Elon Musk, who founded an A.I. company last year, may use his relationship with President-elect Donald Trump to hurt competitors. Altman said that artificial general intelligence, the point at which artificial intelligence can do almost anything that a human brain can do, will arrive "sooner than most people in the world think."

Here are five highlights from the conversation.

On Elon Musk

Musk, who co-founded OpenAI, has become one of its major antagonists. He has sued the company, accusing it of departing from its founding mission as a nonprofit, and started a competing startup called xAI. On Friday, OpenAI said Musk had wanted to turn OpenAI into a for-profit company in 2017 and walked away when he didn't get majority equity. Altman called the change in the relationship "tremendously sad." He continued:

"I grew up with Elon as like a mega hero. I thought what Elon was doing was absolutely incredible for the world, and I'm still, of course, I mean, I have different feelings about him now, but I'm still glad he exists. I mean that genuinely. Not just because I think his companies are awesome, which I do think, but because I think at a time when most of the world was not thinking very ambitiously, he pushed a lot of people, me included, to think much more ambitiously. And grateful is the wrong kind of word. But I'm like thankful. You know, we started OpenAI together, and then at some point he totally lost faith in OpenAI and decided to go his own way. And that's fine, too. But I think of Elon as a builder and someone who -- a known thing about Elon is that he really cares about being 'the guy.' But I think of him as someone who, if he's not, just competes in the market and in the technology, and whatever else. And doesn't resort to lawfare. And, you know, whatever the stated complaint is, what I believe is he's a competitor and we're doing well. And that's sad to see."

Altman said of Musk's close relationship with Trump:

"I may turn out to be wrong, but I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors and advantage your own businesses. And I don't think people would tolerate that. I don't think Elon would do it."

On OpenAI's relationship with Microsoft

Microsoft, OpenAI's largest investor, has put more than $13 billion into the company and has an exclusive license to its raw technologies. Altman once called the relationship "the best bromance in tech," but The Times and others have reported that the partnership has become strained as OpenAI seeks more and cheaper access to computing power and Microsoft has made moves to diversify its access to A.I. technology. OpenAI expects to lose $5 billion this year because of the steep costs of developing A.I.

At the DealBook Summit, Altman said of the relationship with Microsoft, "I don't think we're disentangling. I will not pretend that there are no misalignments or challenges." He added:

"We need lots of compute, more than we projected. And that has just been an unusual thing in the history of business, to scale that quickly. And there's been tension on that."

Some of OpenAI's own products compete with those of partners that depend on its technologies. On whether that presents a conflict of interest, Altman said:

"We have a big platform business. We have a big first party business. Many other companies manage both of those things. And we have things that we're really good at. Microsoft has things they're really good at. Again, there's not no tension, but on the whole, our incentives are pretty aligned."

On whether making progress in A.I. development was becoming more expensive and slower, as some experts have suggested, he doubled down on a message he'd previously posted on social media: "There is no wall." Andrew asked the same question of Sundar Pichai, the Google C.E.O., which we'll recap in tomorrow's newsletter.

On A.G.I. and A.I. safety

Altman has a business interest in suggesting that A.G.I. is near: when OpenAI decides that it has reached the milestone, Microsoft's exclusive license to use OpenAI's raw A.I. technologies ends. That would be a big bargaining chip in its negotiations with Microsoft. At the DealBook Summit, Altman suggested that A.G.I. was closer than most people expect, but he downplayed safety concerns, some of which he has raised himself or have come from his own former employees:

"A lot of the safety concerns that we and others expressed actually don't come at the A.G.I. moment. It's like, A.G.I. can get built, the world goes on mostly the same way. The economy moves faster, things grow faster."

Altman said that he did expect there would be quick disruption of jobs in the relatively near term, but that the major upheaval would come with what he called "superintelligence."

"True superintelligence -- the system that is not just smarter than you and smarter than me, but smarter than all of us put together, just unbelievable capability -- even if we can make that technically safe, which I assume we'll figure out, we are going to have some faith in our governments. There are going to have to be some policy issues around that. There is going to have to be global coordination to a degree that I assume will rise to the occasion, but seems challenging."

On not having equity in OpenAI

OpenAI operates as a nonprofit with a for-profit arm, but plans to restructure as a for-profit company (an effort that Musk has asked a court to block). The plan has reportedly sparked discussions about giving Altman his first equity in the company. Altman currently makes a $76,000 salary as C.E.O. and has previously told employees that he has no plans to take a large equity stake. At the DealBook Summit, he insisted he is not interested in equity. "If I could go back in time, I would have taken it just some little bit just to never answer this question," he said, adding:

"No matter how many times I try to explain to people I have the most interesting, coolest job in the world -- this is like my retirement dream way to spend my time after what was a pretty good career, and people can work on art projects and not get paid for that, and no one thinks it's weird or whatever -- it just does not come across. So I wish I had taken some. I don't imagine I would work any harder or less hard. There would be, I think, something clearer about the alignment I would have had with investors. It definitely would have been easier to raise money. There are plenty of investors who have not invested because I didn't take equity."

On using copyrighted material to train A.I.

Several news organizations, including The New York Times, are suing OpenAI and Microsoft over the use of articles to train A.I. models. Others, like The Associated Press and News Corp, have struck deals with OpenAI to license copyrighted material to use as training data for its chatbots. When Andrew asked Altman about the use of books, articles, movies and other copyrighted material to train A.I., Altman said:

"I think we do need a new deal, standard protocol, whatever you want to call it, for how creators are going to get rewarded. I very much believe in the right to learn or whatever you want to call it, and if an A.I. reads a physics textbook and learns physics, it can use that for other things, like a human can. I think those parts of copyright law and fair use really need to keep applying, but I think there's additional things that we're starting to explore, and others are, where -- a particular passion of mine has always been, can we figure out how to do micro-payments where, if you generate a story in the style of Andrew Ross Sorkin, you can opt into that for your name and likeness and style to be used and get paid for it."

Later in the day, Pichai, the Google C.E.O., responded to a similar question. We'll share his response in tomorrow's newsletter.
Google showcases AI advancements, including Gemini 2.0 and new hardware, while industry experts debate the future of AI progress amid data scarcity concerns.
Google has made significant strides in reasserting its position at the forefront of artificial intelligence (AI) research and development. The tech giant's recent innovations have boosted investor confidence and quieted critics who claimed the company had fallen behind its competitors [1].
In December, Google unveiled Gemini 2.0, an advanced version of its AI models and applications that outperformed rivals in benchmark testing. The company also introduced a new generation of its custom AI accelerator chip, the Tensor Processing Unit (TPU) called Trillium, aimed at challenging Nvidia's dominance in the market [1].
Google's AI advancements extend beyond language models, with the company introducing several new projects and capabilities.
These developments have been described as "astonishing" by industry experts, with Ethan Mollick, a professor at Wharton business school, noting that "This isn't steady progress -- we're watching AI take uneven leaps past our ability to easily gauge its implications" [1].
In addition to AI advancements, Google confirmed a significant breakthrough in quantum computing with its Willow chip. This new technology can hold "qubits" stable for longer, reducing errors and allowing for useful computations. The company claims it can complete tasks in 5 minutes that would take conventional supercomputers 10 septillion years [1].
Google's showcase of technological advances, coupled with three consecutive quarters of double-digit profit growth, has revitalized its parent company Alphabet's share price. The stock has risen 38% this year, briefly touching a record high of $199.91 [1].
However, Google still faces stiff competition, particularly from Microsoft and its partnership with OpenAI. Google CEO Sundar Pichai has expressed confidence in the company's AI capabilities, challenging Microsoft to a "side-by-side comparison" of their respective models [5].
Despite the progress, the AI industry faces potential challenges. Some experts, including Demis Hassabis of Google DeepMind, warn of diminishing returns in AI development due to the scarcity of available data for training large language models [2][3].
This "peak data" phenomenon has led to debates about the future of AI progress. While some, like OpenAI's Sam Altman, remain optimistic about continued advancements, others believe new approaches will be necessary to achieve further breakthroughs [3].
As AI technology advances, questions of regulation and ethical use come to the forefront. Pichai has expressed optimism about potential collaboration with the incoming Trump administration on AI infrastructure and development [5].
Regarding AI safety and regulation, Pichai argues that existing regulations already cover many AI applications, particularly in sectors like healthcare. He cautions against hasty implementation of additional regulations, emphasizing the need for careful consideration of what, if any, new rules are necessary [5].
The AI industry is at a crossroads, with companies exploring new methods to sustain progress. These include teaching large language models to learn from their own trial and error and to generate "synthetic data" for further training [3]. However, these methods currently work best in areas with clear right and wrong answers, such as mathematics and computer programming.
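To make that approach concrete, here is a minimal sketch, not taken from any of the cited sources and built entirely on stated assumptions: the "model" is a stubbed-out random guesser, the tasks are simple arithmetic problems, and a programmatic checker keeps only candidate answers it can verify, which then serve as synthetic training examples. A real system would swap in an actual language model and far harder problems, but the generate-verify-keep loop is the same basic shape.

```python
import random

def propose_answer(question: str) -> str:
    """Hypothetical stand-in for a model call; here it simply guesses a small integer."""
    return str(random.randint(0, 20))

def verify(question: str, answer: str) -> bool:
    """Deterministic checker -- feasible because arithmetic has a clear right answer."""
    a, b = (int(part) for part in question.split(" + "))
    return answer.strip() == str(a + b)

def build_synthetic_dataset(questions, samples_per_question=64):
    """Sample many candidates per question and keep only the verified ones."""
    dataset = []
    for question in questions:
        for _ in range(samples_per_question):
            candidate = propose_answer(question)
            if verify(question, candidate):
                # A verified pair becomes a synthetic training example.
                dataset.append({"prompt": question, "completion": candidate})
                break  # one verified example per question is enough for this sketch
    return dataset

if __name__ == "__main__":
    problems = ["2 + 3", "7 + 8", "4 + 9"]
    print(build_synthetic_dataset(problems))
```

The reason mathematics and programming are the friendly cases is visible in verify(): a few lines of deterministic checking stand in for human labeling, which is precisely what open-ended domains lack.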
As the AI landscape evolves, the industry faces the dual challenge of pushing technological boundaries while addressing concerns about data scarcity, ethical implications, and regulatory frameworks.
Reference
[1]
[2]
[3]
[4]
[5]