Curated by THEOUTPOST
On Mon, 30 Dec, 4:04 PM UTC
13 Sources
[1]
Silicon Valley stifled the AI doom movement in 2024
For several years now, technologists have rung alarm bells about the potential for advanced AI systems to cause catastrophic damage to the human race. But in 2024, those warning calls were drowned out by a practical and prosperous vision of generative AI promoted by the tech industry - a vision that also benefited their wallets. Those warning of catastrophic AI risk are often called "AI doomers," though it's not a name they're fond of. They're worried that AI systems will make decisions to kill people, be used by the powerful to oppress the masses, or contribute to the downfall of society in one way or another. In 2023, it seemed like we were at the beginning of a renaissance era for technology regulation. AI doom and AI safety -- a broader subject that can encompass hallucinations, insufficient content moderation, and other ways AI can harm society -- went from a niche topic discussed in San Francisco coffee shops to a conversation appearing on MSNBC, CNN, and the front pages of the New York Times. To sum up the warnings issued in 2023: Elon Musk and more than 1,000 technologists and scientists called for a pause on AI development, asking the world to prepare for the technology's profound risks. Shortly after, top scientists at OpenAI, Google, and other labs signed an open letter saying the risk of AI causing human extinction should be given more credence. Months later, President Biden signed an AI executive order with a general goal to protect Americans from AI systems. In November 2023, the non-profit board behind the world's leading AI developer, OpenAI, fired Sam Altman, claiming its CEO had a reputation for lying and couldn't be trusted with a technology as important as artificial general intelligence, or AGI -- once the imagined endpoint of AI, meaning systems that actually show self-awareness. (Although the definition is now shifting to meet the business needs of those talking about it.) For a moment, it seemed as if the dreams of Silicon Valley entrepreneurs would take a backseat to the overall health of society. But to those entrepreneurs, the narrative around AI doom was more concerning than the AI models themselves. In response, a16z cofounder Marc Andreessen published "Why AI Will Save the World" in June 2023, a 7,000-word essay dismantling the AI doomers' agenda and presenting a more optimistic vision of how the technology will play out. "The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it," said Andreessen in the essay. In his conclusion, Andreessen gave a convenient solution to our AI fears: move fast and break things - basically the same ideology that has defined every other 21st-century technology (and its attendant problems). He argued that Big Tech companies and startups should be allowed to build AI as fast and aggressively as possible, with few to no regulatory barriers. This would ensure AI does not fall into the hands of a few powerful companies or governments, and would allow America to compete effectively with China, he said. Of course, this would also allow a16z's many AI startups to make a lot more money -- and some found his techno-optimism uncouth in an era of extreme income disparity, pandemics, and housing crises. While Andreessen doesn't always agree with Big Tech, making money is one area the entire industry can agree on. 
a16z's co-founders wrote a letter with Microsoft CEO Satya Nadella this year, essentially asking the government not to regulate the AI industry at all. Meanwhile, despite their frantic hand-waving in 2023, Musk and other technologists did not slow down to focus on safety in 2024 - quite the opposite: AI investment in 2024 outpaced anything we've seen before. Altman quickly returned to the helm of OpenAI, and a mass of safety researchers left the outfit in 2024 while ringing alarm bells about its dwindling safety culture. Biden's safety-focused AI executive order has largely fallen out of favor this year in Washington, D.C. - President-elect Donald Trump announced plans to repeal Biden's order, arguing it hinders AI innovation. Andreessen says he's been advising Trump on AI and technology in recent months, and a longtime venture capitalist at a16z, Sriram Krishnan, is now Trump's official senior adviser on AI. Republicans in Washington have several AI-related priorities that outrank AI doom today, according to Dean Ball, an AI-focused research fellow at George Mason University's Mercatus Center. Those include building out data centers to power AI, using AI in the government and military, competing with China, limiting content moderation from center-left tech companies, and protecting children from AI chatbots. "I think [the movement to prevent catastrophic AI risk] has lost ground at the federal level. At the state and local level they have also lost the one major fight they had," said Ball in an interview with TechCrunch. Of course, he's referring to California's controversial AI safety bill SB 1047. Part of the reason AI doom fell out of favor in 2024 was simply because, as AI models became more popular, we also saw how unintelligent they can be. It's hard to imagine Google Gemini becoming Skynet when it just told you to put glue on your pizza. But at the same time, 2024 was a year when many AI products seemed to bring concepts from science fiction to life. For the first time this year, OpenAI showed how we could talk with our phones and not through them, and Meta unveiled smart glasses with real-time visual understanding. The ideas underlying catastrophic AI risk largely stem from sci-fi films, and while there's obviously a limit, the AI era is proving that some ideas from sci-fi may not be fictional forever.
2024's biggest AI doom fight: SB 1047
The AI safety battle of 2024 came to a head with SB 1047, a bill supported by two highly regarded AI researchers: Geoffrey Hinton and Yoshua Bengio. The bill tried to prevent advanced AI systems from causing mass human extinction events and cyberattacks that could cause more damage than 2024's CrowdStrike outage. SB 1047 passed through California's Legislature, making it all the way to Governor Gavin Newsom's desk, where he called it a bill with "outsized impact." The bill tried to prevent the kinds of things Musk, Altman, and many other Silicon Valley leaders warned about in 2023 when they signed those open letters on AI. But Newsom vetoed SB 1047. In the days before his decision, he talked about AI regulation on stage in downtown San Francisco, saying: "I can't solve for everything. What can we solve for?" That pretty clearly sums up how many policymakers are thinking about catastrophic AI risk today. It's just not a problem with a practical solution. Even so, SB 1047 was flawed beyond its focus on catastrophic AI risk. The bill regulated AI models based on size, in an attempt to only regulate the largest players. 
However, that didn't account for new techniques such as test-time compute or the rise of small AI models, which leading AI labs are already pivoting to. Furthermore, the bill was widely considered an assault on open-source AI - and by proxy, the research world - because it would have prevented firms like Meta and Mistral from releasing highly customizable frontier AI models. But according to the bill's author, state Senator Scott Wiener, Silicon Valley played dirty to sway public opinion about SB 1047. He previously told TechCrunch that venture capitalists from Y Combinator and a16z engaged in a propaganda campaign against the bill. Specifically, these groups spread a claim that SB 1047 would send software developers to jail for perjury. Y Combinator asked young founders to sign a letter saying as much in June 2024. Around the same time, Andreessen Horowitz general partner Anjney Midha made a similar claim on a podcast. The Brookings Institution labeled this as one of many misrepresentations of the bill. SB 1047 did mention that tech executives would need to submit reports identifying shortcomings of their AI models, and the bill noted that lying on a government document is perjury. However, the venture capitalists who spread these fears failed to mention that people are rarely charged with perjury, and even more rarely convicted. YC rejected the idea that it spread misinformation, previously telling TechCrunch that SB 1047 was vague and not as concrete as Senator Wiener made it out to be. More generally, there was a growing sentiment during the SB 1047 fight that AI doomers were not just anti-technology, but also delusional. Famed investor Vinod Khosla called Wiener clueless about the real dangers of AI in October of this year. Meta's chief AI scientist, Yann LeCun, has long opposed the ideas underlying AI doom, but became more outspoken this year. "The idea that somehow [intelligent] systems will come up with their own goals and take over humanity is just preposterous, it's ridiculous," said LeCun at Davos in 2024, noting how we're very far from developing superintelligent AI systems. "There are lots and lots of ways to build [any technology] in ways that will be dangerous, wrong, kill people, etc... But as long as there is one way to do it right, that's all we need." Meanwhile, policymakers have shifted their attention to a new set of AI safety problems.
The fight ahead in 2025
The policymakers behind SB 1047 have hinted they may come back in 2025 with a modified bill to address long-term AI risks. One of the sponsors behind the bill, Encode, says the national attention SB 1047 drew was a positive signal. "The AI safety movement made very encouraging progress in 2024, despite the veto of SB 1047," said Sunny Gandhi, Encode's Vice President of Political Affairs, in an email to TechCrunch. "We are optimistic that the public's awareness of long-term AI risks is growing and there is increasing willingness among policymakers to tackle these complex challenges." Gandhi says Encode expects "significant efforts" in 2025 to regulate AI-assisted catastrophic risk, though she did not disclose specifics. On the opposite side, a16z general partner Martin Casado is one of the people leading the fight against regulating catastrophic AI risk. In a December op-ed on AI policy, Casado argued that we need more reasonable AI policy moving forward, declaring that "AI appears to be tremendously safe." "The first wave of dumb AI policy efforts is largely behind us," said Casado in a December tweet. 
"Hopefully we can be smarter going forward." Calling AI "tremendously safe" and attempts to regulate it "dumb" is something of an oversimplification. For example, Character.AI - a startup a16z has invested in - is currently being sued and investigated over child safety concerns. In one active lawsuit, a 14-year-old Florida boy killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual chats with. This case, in itself, shows how our society has to prepare for new types of risks around AI that may have sounded ridiculous just a few years ago. There are more bills floating around that address long-term AI risk - including one just introduced at the federal level by Senator Mitt Romney. But now, it seems AI doomers will be fighting an uphill battle in 2025.
[2]
How the Benefits -- and Harms -- of AI Grew in 2024
In 2024, both cutting-edge technology and the companies controlling it grew increasingly powerful, provoking euphoric wonderment and existential dread. Companies like Nvidia and Alphabet soared in value, fueled by expectations that artificial intelligence (AI) will become a cornerstone of modern life. While those grand visions are still far into the future, tech undeniably shaped markets, warfare, elections, climate, and daily life this year. Perhaps technology's biggest impact this year was on the global economy. The so-called Magnificent Seven -- the stocks of Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla -- thrived in large part because of the AI boom, propelling the S&P 500 to new highs. Nvidia, which designs the computer chips powering many AI systems, led the way, with its stock nearly tripling in price. These profits spurred an arms race in AI infrastructure, with companies constructing enormous factories and data centers -- which in turn drew criticism from environmentalists about their energy consumption. Some market watchers also expressed concern about the increasing dependence of the global economy on a handful of companies, and the potential impacts if they prove unable to fulfill their massive promises. But as of early December, the value of these companies showed no sign of letting up. Though not with the explosive novelty of ChatGPT's 2023 breakthrough, generative AI systems advanced over the past 12 months: Google's DeepMind achieved silver-medal results at a prestigious math competition; Google's NotebookLM impressed users with its ability to turn written notes into succinct podcasts; ChatGPT passed a Stanford-administered Turing test; Apple integrated new artificial intelligence tools into its newest iPhone. Beyond personal devices, AI played a pivotal role in forecasting hurricanes and powering growing fleets of driverless cars across China and San Francisco. A more dangerous side of AI, however, also came into view. AI tools, created by companies like Palantir and Clearview, proved central to the wars in Ukraine and Gaza, thanks to their ability to identify foreign troops and targets to bomb. AI was integrated into drones, surveillance systems, and cybersecurity. Generative AI also infiltrated 2024's many elections. South Asian candidates flooded social media with AI-generated content. Russian state actors used deepfaked text, images, audio, and video to spread disinformation in the U.S. and amplify fears around immigration. After President-elect Donald Trump reposted an AI-generated image of Taylor Swift endorsing him on the campaign trail, the pop star responded with an Instagram post about her "fears around AI" and an endorsement of Vice President Kamala Harris instead.
Read More: How Tech Giants Turned Ukraine Into an AI War Lab
Swift's fears were shared by many of her young fans, who are coming of age in a generation that seems to be bearing the brunt of technology's harms. This year, hand-wringing about the impact of social media on mental health came to a head with Jonathan Haidt's best seller The Anxious Generation, which drew a direct link between smartphones and a rise in teen depression. (Some scientists have disputed this correlation.) Social media platforms scrambled to address the issue with their own fixes: Instagram, for instance, set new guardrails for teen users. But many parents, lawmakers, and regulators argued that these platforms weren't doing enough on their own to protect children, and took action. 
New Mexico's attorney general sued Snap Inc., accusing Snapchat of facilitating child sexual exploitation through its algorithm. Dozens of states moved forward with a lawsuit against Meta, accusing it of inducing young children and teenagers into addictive social media use. In July, the U.S. Senate passed the Kids Online Safety Act (KOSA), which puts the onus on social media companies to prevent harm. Most tech companies are fighting the bill, which has yet to pass the House. The potential harms around generative AI and children are mostly still unknown. But in February, a teenager died by suicide after becoming obsessed with a Character.AI chatbot modeled after Game of Thrones character Daenerys Targaryen. (The company called the situation "tragic" and told the New York Times that it was adding safety features.) Regulators were also wary of the centralization that comes with tech, arguing that its concentration can lead to health crises, rampant misinformation, and vulnerable points of global failure. They point to the CrowdStrike outage -- which grounded airplanes and shut down banks across the world -- and the Ticketmaster breach, in which the data of over 500 million users was compromised. President Joe Biden signed a bill requiring TikTok's Chinese owner to sell the app or see it banned in the U.S. French authorities arrested Telegram CEO Pavel Durov, accusing him of refusing to cooperate in their efforts to stop the spread of child porn, drugs, and money laundering on the platform. Antitrust actions also increased worldwide. In the U.S., Biden officials embarked on several aggressive lawsuits to break up Google's and Apple's empires. A U.K. watchdog accused Google of wielding anticompetitive practices to dominate the online ad market. India also proposed an antitrust law, drawing fierce rebukes from tech lobbyists. But the tech industry may face less pressure next year, thanks in part to the effort of the world's richest man: Elon Musk, whose net worth ballooned by more than $100 billion over the past year. Musk weathered battles on many fronts. Tesla failed to deliver its long-awaited self-driving cars, agitating investors. X was briefly banned in Brazil after a judge accused the platform of allowing disinformation to flourish. In the U.S., watchdogs accused Musk of facilitating hate speech and disinformation on X, and of blatantly using a major public platform to put his finger on the scale for his preferred candidate, Donald Trump. Musk's companies face at least 20 investigations, from all corners of government.
Read More: How Elon Musk Became a Kingmaker
But Musk scored victories by launching and catching a SpaceX rocket and implanting the first Neuralink chip into a paralyzed patient's brain. And in the November election, his alliance with Trump paid off. Musk is now a prominent figure in Trump's transition team, and tipped to head up a new government agency that aims to slash government spending by $2 trillion. And while the owner of Tesla must navigate Trump's stated opposition to EVs, he is positioned to use his new perch to influence the future of AI. While Musk warns the public about AI's existential risk, he is also racing to build a more powerful chatbot than ChatGPT, which was built by his rival Sam Altman. Altman's OpenAI endured many criticisms over safety this year but nevertheless raised a massive $6.6 billion in October. Is the growing power of tech titans like Musk and Altman good for the world? 
In 2024, they spent much of their time furiously building while criticizing regulators for standing in their way. Their creations, as well as those of other tech gurus, provided plenty of evidence both of the good that can arise from their projects, and the overwhelming risks and harms.
[3]
The most important tech stories of 2024, and also my favorite ones
Today, we're looking at a few themes that will influence the online and offline worlds in 2025.
Last week, we looked back at how 2024 made Elon Musk the world's most powerful man. Today, we're looking at a few other important themes that will influence the online and offline worlds in 2025. Google: Ruled an illegal monopoly in August, Google could be broken up. The results are anybody's guess, but what seemed impossible for a company worth $2.5tn is now in play. The US has asked the judge in the case for a wholesale breakup of the giant, which would force it to divest Chrome, the world's most popular browser and one of Google's core businesses. TikTok: The US passed a law that will, in t-minus three weeks, either force TikTok parent company ByteDance to sell its popular video app, or see it go dark in the US entirely. Of the two possibilities, a ban is more likely, as ByteDance has said divestment is impossible, and Beijing has opposed a sale. As with Google, what seemed so implausible is now very possible: TikTok could be well and truly banned. Only the US supreme court now stands between TikTok and its closure. If you had told me a year ago that TikTok would disappear at the same time we published a story cheekily headlined How 2023 became the year Congress forgot to ban TikTok, I would have laughed. TikTok has become the center of American online culture, though perhaps Instagram Reels and YouTube Shorts will fill the absence. Social media companies likewise faced difficult legal headwinds in Australia. Read on to the next section for more on that. The dual revolutions of smartphones and social media have made their way to the youngest among us. Now, as their guardians, we're tasked with setting the boundaries for them. However, we don't yet know where those guardrails should be. The wildfire-hot debate over kids and social media began in earnest in March, when psychologist and social scientist Jonathan Haidt published The Anxious Generation. Haidt attributes today's crisis in teen mental health to social media and the loss of unstructured playtime. The book shot to the top of bestseller lists, where it remains; Haidt's explanation has hit a nerve. I think he's wrong; you can read why here. Schools across the US instituted policies that prohibit smartphone usage during the school day to varying degrees. Los Angeles's school district, the second-largest in the US, announced its phone prohibition in June. Parents fought back while simultaneously bemoaning how much time their children spend on their phones. The UK, which has banned phones from schools nationwide, looked on with perplexity. Australia took the most extreme step in November when it banned any child under 16 from social media, though the new law will only go into effect in a year, after tests of age-gating software in the country. Much remains unknown about the law - particularly, how it will be enforced - but it is the most restrictive in the world. Read more about it here. The Guardian's series on social media's role in the exploitation and trafficking of children has been shortlisted for a Fetisov Journalism award for outstanding contributions to civil rights. You can read the first story in the series here: 'If Instagram didn't exist, it wouldn't have happened': a mother's search for her trafficked daughter
Of any niche in tech, cryptocurrency had the worst 2023. 
Sam Bankman-Fried's multibillion-dollar fraud at FTX became worldwide news, and crypto's most famous man became the mascot for its worst impulses as he was convicted of wire fraud and sentenced to 25 years in prison. Then the US went after Binance, the world's largest cryptocurrency exchange. That company admitted to money laundering, paid a fine and lost its CEO, Changpeng Zhao. Bitcoin ended the year at a price of roughly $42,000. Then, like Elon Musk and AI, cryptocurrency had a great time in 2024. Bitcoin soared to $100,000. Chalk that up to the close alliance between Donald Trump and the cryptocurrency industry. He's the first US presidential candidate to accept donations in crypto, after all, and he's started his own cryptocurrency venture. Polymarket and Kalshi, prediction betting markets built on blockchain technology, rose to prominence as a new and influential type of political polling. Polymarket's CEO bragged that Trump had called him from Mar-a-Lago to talk about the odds on the site and to praise it as more reliable than traditional polling, which continues to lose trust. Listen to our podcast about the bromance between Trump and cryptocurrency here. Read about the success of prediction betting markets like Polymarket and Kalshi and how they're planning to capitalize on it in 2025. Nvidia, the maker of the chips most coveted for programming artificial intelligence, is the biggest financial winner of 2024. Its share price has tripled since the start of the year. Consider the headlines the Guardian ran in coverage of its earnings reports:
November: Nvidia earnings: AI chip leader shows no signs of stopping mammoth growth
August: Nvidia rides big tech's AI investment to beat Wall Street's sky-high expectations. (There was a brief but precipitous dip in the share price in late August that was quickly erased.)
May: Nvidia reports stratospheric growth as AI boom shows no sign of stopping
Google likewise reaped enormous benefits from the financial frenzy for AI. Even as it endured a flogging in US courts, its stock value climbed higher and higher with each announcement of each new AI thingamajig, more than doubling its share price since the start of 2024. There were multiple versions of Gemini, Google's flagship AI assistant capable of understanding images and words as well as generating them. The model underpins Google's AI-generated search result summaries, perhaps its most visible AI product. I would characterize the investor reaction to Google's announcements as such: "An AI that summarizes your notes and makes a podcast out of them? How neat! Hard to see that finding a long-term use case. Regardless, here's $50bn in market cap." Would AI podcasting software become a $50bn company on its own? No, certainly not. But in Google's hands, who knows? Google is likewise not known as a chipmaker, yet the announcement this month that the company had developed an AI chip in-house with improved performance and less energy consumption caused its stock to rise 12%, adding roughly $250bn to its market capitalization. 
A slew of other smaller companies have likewise financially drafted off the AI boom: OpenAI raised $6.5bn this year; Reddit, the beneficiary of deals for its archive of high-quality text data, debuted on the US stock market this year and saw its share price steadily climb to more than triple its debut price; Databricks, which provides software for storing and analyzing huge amounts of data, brought in $10bn in investment; Broadcom, which makes chips and data-center software, is worth $1tn as of 13 December. Taiwan Semiconductor, an integral part of the AI supply chain, hit the same market cap milestone in October. Pavel Durov's arrest in August put Telegram's lax approach to content moderation in the spotlight. The founder and CEO of the app, which boasts nearly a billion users, was detained in France and indicted on 12 charges, including "allowing criminal activity" on his app. French authorities accused him of complicity in the distribution of child sexual abuse material and drug trafficking, which he denies. His case is still pending as he remains on bail in France. Prior to the arrest, Telegram was known for how little it policed what its users said and what they sent. Now, it's publishing reports on how effective its moderation is. It announced it would crack down on harmful content like fraud and terrorism in September, and this month, it announced AI had enabled it to remove some 15m groups and channels dedicated to illegal behavior. The change, though confined to one corner of the internet, means that a billion people now speak to each other in quite a different landscape than before.
[4]
AI took giant strides in 2024, as AGI comes into view
Artificial intelligence enjoyed a banner year in 2024. The frontier technology captured awards, corralled investors, charmed Wall Street and showed that it could reason mathematically -- even explaining differential equations. It also drew the attention of global regulators, concerned about privacy and safety risks. Others worried that AI might soon evolve into artificial general intelligence (AGI) and then artificial superintelligence -- surpassing human cognitive abilities. Catastrophic scenarios were posited and discussed: bioterrorism, autonomous weapons systems and even "extinction-level" events. Generative artificial intelligence (GenAI), a subset of AI, is able to create something out of nothing (well, apart from its voluminous training data). Prompt it with a line of text, for instance, and it can generate a 500-word ghost story. GenAI took center stage in 2024. And it wasn't just ChatGPT, the AI-enabled chatbot developed by OpenAI. Google's Gemini, Microsoft's Copilot, Anthropic's Claude, and Meta's Llama 3 series also helped push the envelope, developing software that could read and generate not just text, but also audio, video and images. AI labs spent freely to fuel these advances. AI spending surged to $13.8 billion in 2024, more than six times the amount forked out in 2023, according to Menlo Ventures, in "a clear signal that enterprises are shifting from experimentation to execution, embedding AI at the core of their business strategies."
#2 AI captures Nobel prizes for physics, chemistry
Further evidence that AI is here to stay was provided in October when the Royal Swedish Academy of Sciences announced the 2024 Nobel Prizes. Geoffrey Hinton and John Hopfield took the physics prize "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Neural networks are a core technology in today's AI. Hinton, a British-Canadian computer scientist and cognitive psychologist -- i.e., not a physicist -- has often been called the "Godfather of AI." His path-breaking work on neural networks goes back to the 1980s, when he used tools from statistical physics like the Boltzmann machine to advance machine learning. Elsewhere, Demis Hassabis -- co-founder and CEO of Google DeepMind -- and John Jumper were honored with the Nobel Prize for chemistry for developing an artificial intelligence model that can predict proteins' complex structures.
#3 Nvidia overtakes Apple as world's most valuable company
It takes a special type of computer chip to train and run the massive large language models (LLMs) that were so dominant in 2024, and chipmaker Nvidia produced more of these special graphics processing units, or GPUs, than any company in the world. It isn't surprising, then, that Nvidia also became the world's most valuable company in 2024 -- reaching $3.53 trillion in market capitalization in late October, eclipsing Apple's $3.52 trillion. "More companies are now embracing artificial intelligence in their everyday tasks and demand remains strong for Nvidia chips," commented Russ Mould, investment director at AJ Bell. Will Nvidia keep its manufacturing dominance in 2025, and beyond? Nvidia's widely anticipated Blackwell GPUs, expected to launch in the fourth quarter, were reportedly delayed because of design flaws, but given Nvidia's enormous lead in GPUs -- it controlled 98% of the market in 2023 -- few expect it to be outduelled any time soon. 
Everyone wants an artificial intelligence that is safe, secure, and beneficial for society at large, but passing laws and implementing rules to ensure a responsible AI is no easy matter. Still, in 2024, global regulatory authorities took some first steps. The European Union's Artificial Intelligence Act came into force in August, introducing safeguards for general-purpose AI systems and addressing some privacy concerns. The act sets strict rules on the use of AI for facial recognition, for example, but it also seeks to address broader risks like automating jobs, spreading misinformation online and endangering national security. The legislation will be implemented in phases, stretching out until 2027. Regulating AI won't be easy, however, as California found out in 2024 with its proposed SB 1047 legislation that was sidelined (vetoed) by the state's governor in September. Described as the "most sweeping effort yet to regulate artificial intelligence," SB 1047 had support from some AI proponents like Geoffrey Hinton and Elon Musk, who argued that it provided badly needed guardrails for this rapidly evolving technology. But it also drew criticism from other technologists, like Andrew Ng, founder of DeepLearning.AI, because it imposed liability on AI developers, which could arguably stifle innovation.
#5 Emergence of small language models (SLMs)
Massively large AI models that are trained on billions of datapoints became commonplace in 2024. ChatGPT, for instance, was trained on 570 gigabytes of text data scraped from the internet -- about 300 billion words. But for many enterprises the AI future lies in smaller, industry-specific language models, some of which began to emerge in 2024. In April, Microsoft rolled out its Phi-3 small language models, while Apple presented eight small language models for its handheld devices. Microsoft and Khan Academy are now using SLMs to improve math tutoring for students, for example. "There is much more compute available at the edge because the models are getting smaller for specific workloads, [and] you can actually take a lot more advantage of that," Yorke Rhodes, Microsoft's director for digital transformation, blockchain and cloud supply chain, explained at a May conference. SLMs require less training data and computational power to develop and run, and their capabilities "are really starting to approach some of the large language models," he added.
#6 Agentic AI moved to the forefront
Chatbots like ChatGPT are all about asking questions and receiving answers on a wide breadth of topics -- though they can also write software code, draft emails, generate reports, and even write poetry. But AI agents go a step beyond chatbots and can actually make decisions for users, enabling them to achieve specific goals. In the healthcare industry, an AI agent could be used to monitor patient data, making recommendations when appropriate to modify a specific treatment, for instance. Looking ahead, tech consulting firm Gartner named Agentic AI as one of its "Top Strategic Technology Trends for 2025." Indeed, by 2028 as much as a third of enterprise software applications will include agentic AI, the firm predicts, up from less than 1% in 2024. AI agents could even be used to write blockchain-based smart contracts (technically they can already do so, but the risks of an errant bug and a loss of funds are too high at present). 
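To make the distinction between a chatbot and an agent concrete, here is a minimal, hypothetical sketch of the monitoring pattern described above: software that observes incoming data, decides on its own whether a threshold has been crossed, and drafts a recommendation for a human to review. It is not any vendor's actual agent framework; the patient data, threshold, and function names are invented for illustration.

```python
# Toy "agentic" loop: observe -> decide -> draft a recommendation for human review.
from dataclasses import dataclass

@dataclass
class PatientReading:
    patient_id: str
    systolic_bp: int  # hypothetical vital sign, chosen only for illustration

def draft_recommendation(reading: PatientReading) -> str:
    """Stand-in for a language-model call; here it just fills a template."""
    return (f"Patient {reading.patient_id}: systolic BP {reading.systolic_bp} mmHg. "
            "Consider reviewing the current treatment with the care team.")

def monitoring_agent(readings: list[PatientReading], threshold: int = 160) -> list[str]:
    """The agent's loop: check each reading, decide whether to escalate, draft a note."""
    notes = []
    for reading in readings:
        if reading.systolic_bp >= threshold:  # the (deliberately simple) decision rule
            notes.append(draft_recommendation(reading))
    return notes

# Example run on made-up data; a clinician would still review every suggestion.
sample = [PatientReading("A-001", 172), PatientReading("A-002", 128)]
for note in monitoring_agent(sample):
    print(note)
```

In a production system the drafting step would typically call a language model and the decision logic would be far richer, but the loop of observing, deciding, and acting on the user's behalf is what separates an agent from a question-and-answer chatbot.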
Blockchain project Avalanche has already begun building a new virtual machine at the intersection of AI and blockchains to let users write such contracts in a natural language. "You write your [smart contract] programs in English, German, French, Tagalog, Chinese [...] a natural language that your mother taught you in your mother's tongue," said Ava Labs founder Emin Gün Sirer. Smart contract programming as it stands today is really hard, so an easy-to-use AI agent could potentially bring in "billions of new [blockchain] users," Sirer predicted.
#7 Reasoning models for solving 'hard problems'
Chatbots have other limitations. They can struggle with simple math problems and software coding tasks, for instance. They aren't great at answering scientific questions. OpenAI sought to remedy matters in September with the release of OpenAI o1, a new series of reasoning models "for solving hard problems," like differential equations. The response was mostly positive. "Finally, an AI model capable of handling all the complex science, coding and math problems I'm always feeding it," tweeted New York Times columnist Kevin Roose. On tests, o1 performed as well as the top 500 students in the US in a qualifier for the USA Math Olympiad, for instance, and exceeded human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems, OpenAI reported.
#8 Zeroing in on AGI
Why do advances in structured problem solving, as described above, matter? They bring AI incrementally closer to providing human-like intelligence, i.e., artificial general intelligence, or AGI. OpenAI's o3 models, unveiled just before Christmas, performed even better than o1, especially on math and coding tests, while other projects like Google's Gemini 2.0 also made progress in 2024 on structured problem solving -- that is, breaking down complex tasks into manageable steps. However, AGI still remains a distant goal in the view of many experts. Today's advanced models still lack an intuitive understanding of physical concepts like gravity or causality, for instance. Nor can current AI algorithms think up questions on their own, or learn if and when scenarios take an unexpected turn. Overall, "AGI is a journey, not a destination -- and we're only at the beginning," Brian Hopkins, the vice president for emerging technology at consulting firm Forrester, declared recently.
#9 Signs of a looming training data shortage
Unquestionably, 2024 was an exciting year for AI developers and users alike, and few expect AI innovation to subside any time soon. But there were also suggestions in 2024 that AI's LLM sub-epoch may have already peaked. The reason is a looming data shortage. Companies like OpenAI and Google may soon run out of data, AI's lifeblood, used to "train" massive artificial intelligence systems. Only so much data can be scraped from the internet, after all. Moreover, LLM developers are finding they can't always gather publicly available data with impunity. The New York Times, for one, has sued OpenAI for copyright infringement with regard to its news content. It isn't likely to be the only major news organization to seek recourse from the courts. "Everyone in the industry is seeing diminishing returns," said Google's Demis Hassabis. One answer may be to train algorithms using synthetic data -- artificially generated data that mimics real-world data. AI developer Anthropic's Claude 3 LLM, for instance, was trained, at least in part, on synthetic data, i.e., "data we generate internally," according to the company. 
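As a toy illustration of the synthetic-data idea (not Anthropic's or any lab's actual pipeline), one can fit simple statistics to a handful of "real" records and then sample new records that mimic their distribution; the columns and numbers below are invented.

```python
# Toy synthetic-data generator: learn per-column statistics, then sample new rows.
import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend these are real measurements: columns = (age, resting heart rate).
real = np.array([[34, 62], [51, 71], [47, 68], [29, 59], [63, 75]], dtype=float)

# Fit per-column mean and standard deviation, then draw "synthetic" rows
# from a normal distribution with those statistics.
mean, std = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(loc=mean, scale=std, size=(100, real.shape[1]))

print(synthetic[:3].round(1))  # a few generated records resembling the originals
```

Real synthetic-data systems rely on far more capable generative models, but the principle is the same: learn the shape of the data you already have, then sample fresh examples from it rather than scraping more from the web.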
Even though the term "synthetic data" may sound like an oxymoron, scientists, including some medical experts, say creating new data from scratch holds promise. It could support medical AI by filling out incomplete data sets, for instance, which could help eliminate bias against certain ethnic groups.
#10 Emergence of a more ethical AI
Interestingly, Anthropic explains in some detail how it obtains its training data in the paper referenced above. Of particular note, it operates its website crawling system "transparently," which means that website content providers -- like The New York Times, presumably -- "can easily identify Anthropic visits and signal their preferences to Anthropic." The firm has gone to some lengths to prevent misuse of its technology, even creating a responsible scaling officer, whose scope was broadened in 2024 in an effort to create a "safe" AI. The company's efforts didn't go unnoticed. Time magazine named it one of the 100 most influential companies in 2024, extolling it as the "AI Company Betting That Safety Can Be a Winning Strategy." Given the drift of AI development in 2024 and public concerns about potential catastrophic risks from these new frontier systems, it seems entirely likely that more developers may soon embrace a more transparent and responsible AI.
[5]
IEEE Spectrum's Top 10 AI Stories of 2024
Eliza Strickland is a senior editor at IEEE Spectrum covering AI and biomedical engineering. IEEE Spectrum's most popular AI stories of the last year show a clear theme. In 2024, the world struggled to come to terms with generative AI's capabilities and flaws -- both of which are significant. Two of the year's most read AI articles dealt with chatbots' coding abilities, while another looked at the best way to prompt chatbots and image generators (and found that humans are dispensable). In the "flaws" column, one in-depth investigation found that the image generator Midjourney has a bad habit of spitting out images that are nearly identical to trademarked characters and scenes from copyrighted movies, while another investigation looked at how bad actors can use the image generator Stable Diffusion version 1.5 to make child sexual abuse material. Two of my favorites from this best-of collection are feature articles that tell remarkable stories. In one, an AI researcher narrates how he helped gig workers gather and organize data in order to audit their employer. In another, a sociologist who embedded himself in a buzzy startup for 19 months describes how engineers cut corners to meet venture capitalists' expectations. Both of these important stories bring readers inside the hype bubble for a real view of how AI-powered companies leverage human labor. In 2025, IEEE Spectrum promises to keep giving you the ground truth. Even as the generative AI boom brought fears that chatbots and image generators would take away jobs, some hoped that it would create entirely new jobs -- like prompt engineering, which is the careful construction of prompts to get a generative AI tool to create exactly the desired output. Well, this article put a damper on that hope. Spectrum editor Dina Genkina reported on new research showing that AI models do a better job of constructing prompts than human engineers. The New York Times and other newspapers have already sued AI companies for text plagiarism, arguing that chatbots are lifting their copyrighted stories verbatim. In this important investigation, Gary Marcus and Reid Southen showed clear examples of visual plagiarism, using Midjourney to produce images that looked almost exactly like screenshots from major movies, as well as trademarked characters such as Darth Vader, Homer Simpson, and Sonic the Hedgehog. It's worth taking a look at the full article just to see the imagery. The authors write: "These results provide powerful evidence that Midjourney has trained on copyrighted materials, and establish that at least some generative AI systems may produce plagiaristic outputs, even when not directly asked to do so, potentially exposing users to copyright infringement claims." When OpenAI's ChatGPT first came out in late 2022, people were amazed by its capacity to write code. But some researchers who wanted an objective measure of its ability evaluated its code in terms of functionality, complexity and security. They tested GPT-3.5 (a version of the large language model that powers ChatGPT) on 728 coding problems from the LeetCode testing platform in five programming languages. They found that it was pretty good on coding problems that had been on LeetCode before 2021, presumably because it had seen those problems in its training data. With more recent problems, its performance fell off dramatically: Its score on functional code for easy coding problems dropped from 89 percent to 52 percent, and for hard problems it dropped from 40 percent to 0.66 percent. 
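To give a sense of how that kind of functional-correctness testing works (a simplified sketch, not the study's actual harness; the problem, test cases, and function names are hypothetical), a generated solution can be executed against known input-output pairs and scored by its pass rate:

```python
def evaluate_solution(source_code, test_cases, func_name="solve"):
    """Load generated code, run it against (args, expected) pairs, return pass rate."""
    namespace = {}
    try:
        exec(source_code, namespace)   # code that fails to even load scores zero
        func = namespace[func_name]
    except Exception:
        return 0.0
    passed = 0
    for args, expected in test_cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass                       # crashes count as failures
    return passed / len(test_cases)

# Hypothetical "two sum" problem with three known test cases.
generated = """
def solve(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
"""
tests = [(([2, 7, 11, 15], 9), [0, 1]),
         (([3, 2, 4], 6), [1, 2]),
         (([3, 3], 6), [0, 1])]
print(evaluate_solution(generated, tests))  # 1.0 when every test case passes
```

A solution that fails to load, crashes, or returns the wrong answer simply lowers the score, which is roughly how a drop in performance on unseen problems can be quantified.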
It's worth noting, though, that the OpenAI models GPT-4 and GPT-4o are superior to the older model GPT-3.5. And while general-purpose generative AI platforms continue to improve at coding, 2024 also saw the proliferation of increasingly capable AI tools that are tailored for coding. That third story on our list perfectly sets up the fourth, which takes a good look at how professors are altering their approaches to teaching coding, given the aforementioned proliferation of coding assistants. Introductory computer science courses are focusing less on coding syntax and more on testing and debugging, so students are better equipped to catch mistakes made by their AI assistants. Another new emphasis is problem decomposition, says one professor: "This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve." Overall, instructors say that their students' use of AI tools is freeing them up to teach higher-level thinking that used to be reserved for advanced classes. This feature story was authored by an AI researcher, Dana Calacci, who banded together with gig workers at Shipt, the shopping and delivery platform owned by Target. The workers knew that Shipt had changed its payment algorithm in some mysterious way, and many had seen their pay drop, but they couldn't get answers from the company -- so they started collecting data themselves. When they joined forces with Calacci, he worked with them to build a textbot so workers could easily send screenshots of their pay receipts. The tool also analyzed the data, and told each worker whether they were getting paid more or less under the new algorithm. It found that 40 percent of workers had gotten an unannounced pay cut, and the workers used the findings to gain media attention as they organized strikes, boycotts, and protests. Calacci writes: "Companies whose business models rely on gig workers have an interest in keeping their algorithms opaque. This 'information asymmetry' helps companies better control their workforces -- they set the terms without divulging details, and workers' only choice is whether or not to accept those terms.... There's no technical reason why these algorithms need to be black boxes; the real reason is to maintain the power structure." Like a couple of Russian nesting dolls, here we have a list within a list. Every year Stanford puts out its massive AI Index, which has hundreds of charts to track trends within AI; chapters include technical performance, responsible AI, economy, education, and more. For the past four years, Spectrum has read the whole thing and pulled out those charts that seem most indicative of the current state of AI. In 2024, we highlighted investment in generative AI, the cost and environmental footprint of training foundation models, corporate reports of AI helping the bottom line, and public wariness of AI. Neural networks have been the dominant architecture in AI since 2012, when a system called AlexNet combined GPU power with a many-layered neural network to get never-before-seen performance on an image-recognition task. But they have their downsides, including their lack of transparency: They can provide an answer that is often correct, but can't show their work. This article describes a fundamentally new way to make neural networks that are more interpretable than traditional systems and also seem to be more accurate. 
When the designers tested their new model on physics questions and differential equations, they were able to visually map out how the model got its (often correct) answers. The next story brings us to the tech hub of Bengaluru, India, which has grown faster in population than in infrastructure -- leaving it with some of the most congested streets in the world. Now, a former chip engineer has been given the daunting task of taming the traffic. He has turned to AI for help, using a tool that models congestion, predicts traffic jams, identifies events that draw big crowds, and enables police officers to log incidents. For next steps, the traffic czar plans to integrate data from security cameras throughout the city, which would allow for automated vehicle counting and classification, as well as data from food delivery and ride sharing companies. In another important investigation exclusive to Spectrum, AI policy researchers David Evan Harris and Dave Willner explained how some AI image generators are capable of making child sexual abuse material (CSAM), even though it's against the stated terms of use. They focused particularly on the open-source model Stable Diffusion version 1.5, and on the platforms Hugging Face and Civitai that host the model and make it available for free download (in the case of Hugging Face, it was downloaded millions of times per month). They were building on prior research that has shown that many image generators were trained on a data set that included hundreds of pieces of CSAM. Harris and Willner contacted companies to ask for responses to these allegations and, perhaps in response to their inquiries, Stable Diffusion 1.5 promptly disappeared from Hugging Face. The authors argue that it's time for AI companies and hosting platforms to take seriously their potential liability. What happens when a sociologist embeds himself in a San Francisco startup that had just received an initial venture capital investment of $4.5 million and quickly shot up through the ranks to become one of Silicon Valley's "unicorns" with a valuation of more than $1 billion? Answer: You get a deeply engaging book called Behind the Startup: How Venture Capital Shapes Work, Innovation, and Inequality, from which Spectrum excerpted a chapter. The sociologist author, Benjamin Shestakofsky, describes how the company that he calls AllDone (not its real name) prioritized growth at all costs to meet investor expectations, leading engineers to focus on recruiting both staff and users rather than doing much actual engineering. Although the company's whole value proposition was that it would automatically match people who needed local services with local service providers, it ended up outsourcing the matching process to a Filipino workforce that manually made matches. "The Filipino contractors effectively functioned as artificial artificial intelligence," Shestakofsky writes, "simulating the output of software algorithms that had yet to be completed."
[6]
In 2024, artificial intelligence was all about putting AI tools to work
If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to get that wonder to do something useful without breaking the bank. There was a "shift from putting out models to actually building products," said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference." The first 100 million or so people who experimented with ChatGPT upon its release two years ago actively sought out the chatbot, finding it amazingly helpful at some tasks or laughably mediocre at others. Now such generative AI technology is baked into an increasing number of technology services whether we're looking for it or not -- for instance, through the AI-generated answers in Google search results or new AI techniques in photo editing tools. "The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them," said Narayanan. "What we're seeing this year is gradually building out these products that can take advantage of those capabilities and do useful things for people." At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced similarly performing AI large language models, these models have stopped getting significantly "bigger and qualitatively better," resetting overblown expectations that AI was racing every few months to some kind of better-than-human intelligence, Narayanan said. That's also meant that the public discourse has shifted from "is AI going to kill us?" to treating it like a normal technology, he said.
AI's sticker shock
On quarterly earnings calls this year, tech executives often heard questions from Wall Street analysts looking for assurances of future payoffs from huge spending on AI research and development. Building AI systems behind generative AI tools like OpenAI's ChatGPT or Google's Gemini requires investing in energy-hungry computing systems running on powerful and expensive AI chips. They require so much electricity that tech giants announced deals this year to tap into nuclear power to help run them. "We're talking about hundreds of billions of dollars of capital that has been poured into this technology," said Goldman Sachs analyst Kash Rangan. Another analyst at the New York investment bank drew attention over the summer by arguing AI isn't solving the complex problems that would justify its costs. He also questioned whether AI models, even as they're being trained on much of the written and visual data produced over the course of human history, will ever be able to do what humans do so well. Rangan has a more optimistic view. "We had this fascination that this technology is just going to be absolutely revolutionary, which it has not been in the two years since the introduction of ChatGPT," Rangan said. "It's more expensive than we thought and it's not as productive as we thought." Rangan, however, is still bullish about its potential and says that AI tools are already proving "absolutely incrementally more productive" in sales, design and a number of other professions.
AI and your job
Some workers wonder whether AI tools will be used to supplement their work or to replace them as the technology continues to grow. 
The tech company Borderless AI has been using an AI chatbot from Cohere to write up employment contracts for workers in Turkey or India without the help of outside lawyers or translators. Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they feared AI could reduce or eliminate job opportunities because it could be used to replicate one performance into a number of other movements without their consent. Concerns about how movie studios will use AI helped fuel last year's film and television strikes by the union, which lasted four months. Game companies have also signed side agreements with the union that codify certain AI protections in order to keep working with actors during the strike. Musicians and authors have voiced similar concerns over AI scraping their voices and books. But generative AI still can't create unique work or "completely new things," said Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech. "We can train it with more data so it has more information. But having more information doesn't mean you're more creative," he said. "As humans, we understand the world around us, right? We understand the physics. You understand if you throw a ball on the ground, it's going to bounce. AI tools currently don't understand the world." Saad pointed to a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI created a photo of a river with cut pieces of salmon found in grocery stores. "What AI lacks today is the common sense that humans have, and I think that is the next step," he said.
An 'agentic future'
That type of reasoning is a key part of the process of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Cisco's innovation and incubation arm, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI "agents" that can do more useful things on people's behalf. That could mean being able to ask an AI agent an ambiguous question and have the model reason and plan out steps to solve an ambitious problem, Pandey said. A lot of technology, he said, is going to move in that direction in 2025. Pandey predicts that eventually, AI agents will be able to come together and perform a job the way multiple people come together and solve a problem as a team rather than simply accomplishing tasks as individual AI tools. The AI agents of the future will work as an ensemble, he said. Future Bitcoin software, for example, will likely rely on the use of AI software agents, Pandey said. Those agents will each have a specialty, he said, with "agents that check for correctness, agents that check for security, agents that check for scale." "We're getting to an agentic future," he said. "You're going to have all these agents being very good at certain skills, but also have a little bit of a character or color to them, because that's how we operate."
AI makes gains in medicine
AI tools have also streamlined work in the medical field, and in some cases lent it a literal helping hand. This year's Nobel Prize in chemistry -- one of two Nobels awarded to AI-related science -- went to work led by Google that could help discover new medicines. Saad, the Virginia Tech professor, said that AI has helped bring faster diagnostics by quickly giving doctors a starting point to launch from when determining a patient's care. 
AI can't detect disease, he said, but it can quickly digest data and point out potential problem areas for a real doctor to investigate. As with other arenas, however, it poses a risk of perpetuating falsehoods. Tech giant OpenAI has touted its AI-powered transcription tool Whisper as having near "human level robustness and accuracy," for example. But experts have said that Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences. Pandey, of Cisco, said that some of the company's customers who work in pharmaceuticals have noted that AI has helped bridge the divide between "wet labs," in which humans conduct physical experiments and research, and "dry labs" where people analyze data and often use computers for modeling. When it comes to pharmaceutical development, that collaborative process can take several years, he said -- with AI, the process can be cut to a few days. "That, to me, has been the most dramatic use," Pandey said. © 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
[10]
In 2024, Artificial Intelligence Was All About Putting AI Tools to Work
If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to get that wonder to do something useful without breaking the bank. There was a "shift from putting out models to actually building products," said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell The Difference." The first 100 million or so people who experimented with ChatGPT upon its release two years ago actively sought out the chatbot, finding it amazingly helpful at some tasks or laughably mediocre at others. Now such generative AI technology is baked into an increasing number of technology services whether we're looking for it or not -- for instance, through the AI-generated answers in Google search results or new AI techniques in photo editing tools.

"The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them," said Narayanan. "What we're seeing this year is gradually building out these products that can take advantage of those capabilities and do useful things for people." At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced similarly performing AI large language models, these models have stopped getting significantly "bigger and qualitatively better," resetting overblown expectations that AI was racing every few months to some kind of better-than-human intelligence, Narayanan said. That's also meant that the public discourse has shifted from "is AI going to kill us?" to treating it like a normal technology, he said.

AI's sticker shock

On quarterly earnings calls this year, tech executives often heard questions from Wall Street analysts looking for assurances of future payoffs from huge spending on AI research and development. Building AI systems behind generative AI tools like OpenAI's ChatGPT or Google's Gemini requires investing in energy-hungry computing systems running on powerful and expensive AI chips. They require so much electricity that tech giants announced deals this year to tap into nuclear power to help run them. "We're talking about hundreds of billions of dollars of capital that has been poured into this technology," said Goldman Sachs analyst Kash Rangan. Another analyst at the New York investment bank drew attention over the summer by arguing AI isn't solving the complex problems that would justify its costs. He also questioned whether AI models, even as they're being trained on much of the written and visual data produced over the course of human history, will ever be able to do what humans do so well. Rangan has a more optimistic view. "We had this fascination that this technology is just going to be absolutely revolutionary, which it has not been in the two years since the introduction of ChatGPT," Rangan said. "It's more expensive than we thought and it's not as productive as we thought." Rangan, however, is still bullish about its potential and says that AI tools are already proving "absolutely incrementally more productive" in sales, design and a number of other professions.

AI and your job

Some workers wonder whether AI tools will be used to supplement their work or to replace them as the technology continues to grow.
The tech company Borderless AI has been using an AI chatbot from Cohere to write up employment contracts for workers in Turkey or India without the help of outside lawyers or translators. Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they feared AI could reduce or eliminate job opportunities because it could be used to replicate one performance into a number of other movements without their consent. Concerns about how movie studios will use AI helped fuel last year's film and television strikes by the union, which lasted four months. Game companies have also signed side agreements with the union that codify certain AI protections in order to keep working with actors during the strike. Musicians and authors have voiced similar concerns over AI scraping their voices and books.

But generative AI still can't create unique work or "completely new things," said Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech. "We can train it with more data so it has more information. But having more information doesn't mean you're more creative," he said. "As humans, we understand the world around us, right? We understand the physics. You understand if you throw a ball on the ground, it's going to bounce. AI tools currently don't understand the world." AI can mimic what it learns from patterns, he said, but can't "understand the world so that they reason on what happens in the future." That, he said, is where AI falls short. "It still cannot imagine things," he said. "And that imagination is what we hope to achieve later." Saad pointed to a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI created a photo of a river with cut pieces of salmon found in grocery stores. "What AI lacks today is the common sense that humans have, and I think that is the next step," he said.

An 'agentic future'

That type of reasoning is a key part of the process of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Cisco's innovation and incubation arm, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI "agents" that can do more useful things on people's behalf. That could mean being able to ask an AI agent an ambiguous question and have the model reason and plan out steps to solve an ambitious problem, Pandey said. A lot of technology, he said, is going to move in that direction in 2025. Pandey predicts that eventually, AI agents will be able to come together and perform a job the way multiple people come together and solve a problem as a team, rather than simply accomplishing tasks as individual AI tools. The AI agents of the future will work as an ensemble, he said. Future Bitcoin software, for example, will likely rely on the use of AI software agents, Pandey said. Those agents will each have a specialty, he said, with "agents that check for correctness, agents that check for security, agents that check for scale." "We're getting to an agentic future," he said. "You're going to have all these agents being very good at certain skills, but also have a little bit of a character or color to them, because that's how we operate."

AI makes gains in medicine

AI tools have also streamlined work in the medical field, or in some cases lent a literal helping hand.
This year's Nobel Prize in chemistry -- one of two Nobels awarded to AI-related science -- went to work led by Google that could help discover new medicines. Saad, the Virginia Tech professor, said that AI has helped bring faster diagnostics by quickly giving doctors a starting point to launch from when determining a patient's care. AI can't detect disease, he said, but it can quickly digest data and point out potential problem areas for a real doctor to investigate. As with other arenas, however, it poses a risk of perpetuating falsehoods. Tech giant OpenAI has touted its AI-powered transcription tool Whisper as having near "human level robustness and accuracy," for example. But experts have said that Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences. Pandey, of Cisco, said that some of the company's customers who work in pharmaceuticals have noted that AI has helped bridge the divide between "wet labs," in which humans conduct physical experiments and research, and "dry labs" where people analyze data and often use computers for modeling. When it comes to pharmaceutical development, that collaborative process can take several years, he said -- with AI, the process can be cut to a few days. "That, to me, has been the most dramatic use," Pandey said. Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
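To make the "ensemble of specialist agents" idea Pandey describes above more concrete, here is a minimal illustrative sketch. Everything in it is an assumption for demonstration purposes: the agent names, the toy checks, and the review() helper are hypothetical and are not drawn from any vendor's product or API; real agent systems would call language models and tools rather than simple string checks.

```python
# Illustrative only: a toy "ensemble of specialist agents" in the spirit of
# Pandey's description. Agent names, checks, and review() are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Agent:
    name: str                     # the agent's specialty, e.g. "correctness"
    check: Callable[[str], str]   # returns a short verdict about an artifact


def correctness_agent(artifact: str) -> str:
    # A stand-in for a real review: flag obviously unfinished logic.
    return "unfinished logic found" if "TODO" in artifact else "no obvious logic errors"


def security_agent(artifact: str) -> str:
    # A stand-in for a real scan: flag hard-coded credentials.
    return "hard-coded secret found" if "password=" in artifact else "no obvious secrets"


def scale_agent(artifact: str) -> str:
    # A stand-in for a real analysis: flag nested loops as a scaling risk.
    return "nested loops may not scale" if artifact.count("for ") > 1 else "no scaling red flags"


def review(artifact: str, agents: List[Agent]) -> Dict[str, str]:
    """Run every specialist over the same artifact and collect their verdicts."""
    return {agent.name: agent.check(artifact) for agent in agents}


if __name__ == "__main__":
    ensemble = [
        Agent("correctness", correctness_agent),
        Agent("security", security_agent),
        Agent("scale", scale_agent),
    ]
    snippet = "for i in range(n):\n    for j in range(n):\n        total += grid[i][j]"
    print(review(snippet, ensemble))
```

The design point is simply that each agent owns one narrow concern and a coordinator combines their outputs, which is the pattern Pandey contrasts with a single general-purpose tool doing everything itself.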
[11]
These Were The Biggest AI Announcements of 2024
OpenAI launched the GPT-4o AI model in May. Gemini 2.0 Flash was released by Google in December. Microsoft introduced Copilot+ PCs in June.

If 2023 was all about the rise of generative artificial intelligence (AI) and its entry into mainstream tech conversations, 2024 became the year when AI began displaying its transformative capabilities. What started as a text-based chatbot fad that could respond to users in a human-like fashion is today powering many major tech products and platforms offering practical use cases. New use cases of the technology were also seen in music and video generation as well as agentic capabilities. And contrary to the opinions of the naysayers, the AI bubble did not burst this year. The year 2024 marked the entry of large language models (LLMs) focused on advanced reasoning, the beginning of the era of AI PCs (Copilot+ PCs if you take Microsoft's word for it), and accelerated growth of the open-source AI space. However, these are just some of the major events that dominated the headlines this year. Let us take a look at the best and the biggest moments that shaped the AI space in 2024.

OpenAI might have started the generative AI trend with the launch of ChatGPT, built on its Generative Pre-trained Transformer (GPT) architecture, in late 2022, but by the end of 2023, it was clear that the tech giants were not going to stay out of the race for long. Google, Microsoft, Meta, and even Amazon released several AI models, trying to take the crown in benchmark scores.

OpenAI started the year big with the release of its multimodal GPT-4o AI model in May, which was followed by the GPT-4o Mini in July. The AI firm also ended the year on a high with the launch of the full version of the o1 model and the much-anticipated release of its text-to-video model Sora. The company also introduced its Advanced Voice Mode with Vision to the ChatGPT app, offering newer ways to interact with the chatbot. OpenAI also launched its own search engine dubbed ChatGPT Search, which was integrated within the chatbot platform. But the biggest coup for the AI firm came in the form of a partnership with Apple, which saw ChatGPT integrated with Apple Intelligence tools. Following the partnership, OpenAI also released standalone macOS and Windows apps for ChatGPT.

Google also kept up a rapid cadence of model releases. In February, the company introduced the Gemini 1.5 series of AI models, including Gemini 1.5 Pro with a context window of up to one million tokens. In December, it closed the year by releasing the Gemini 2.0 series, with the Flash model available to everyone in preview and a larger model reserved for paid subscribers. But that was not all the Mountain View-based tech giant did. Google DeepMind, the AI wing of the company, released the Imagen 3 image generation model and the Veo 2 video generation model, and previewed the MusicLM music generation model. Apart from this, the tech giant also released NotebookLM, an AI tool to process large documents that can also create engaging podcasts with two AI hosts. The company also introduced new features in Gemini: it added a two-way voice communication feature called Gemini Live and integrated the Gemini AI assistant into most of the Google Workspace apps, including Gmail, Docs, Slides, and Sheets.

Meta might have been known for its social media platforms before 2024, but this year the company showcased its capabilities by developing and releasing several small language models (SLMs), many of them open source.
The tech giant introduced several of its Large Language Model Meta AI (Llama) series models, including 70B and 30B coding-focused models, the largest open-source model Llama 3.1 405B, as well as multiple instruct models. However, the company's biggest announcement came with the global expansion of its native chatbot, Meta AI. Meta AI was added to Facebook's Messenger, Instagram, and WhatsApp and was expanded to several regions including India in April 2024 before becoming globally available by September. The AI-powered chatbot was also added to its Ray-Ban Meta glasses with real-time vision processing capabilities.

Even while relying on OpenAI's models, Microsoft succeeded in carving out an AI niche in the PC space. The Redmond-based tech giant quickly made its desires known when it partnered with Qualcomm's Snapdragon platform (and later with Intel and AMD) to introduce the AI PC classification, which had a mandatory requirement: the addition of a physical Copilot key on the keyboard. Thus came the era of the Copilot+ PC, where the company's native chatbot was integrated into desktops and laptops via the Windows operating system. Expanding its AI chatbot to millions of users would be considered a success in every business playbook; however, the tech giant was far from done. In 2024, it also integrated Copilot tools into Microsoft 365 products and added voice and vision capabilities to the chatbot. Additionally, it launched the AI-powered Recall feature (in beta), which lets PC users ask the AI questions about past device activity.

Many industry analysts had said that Amazon was late to enter the AI space, and while that might be true, the company took a unique route in 2024 to remain relevant. In terms of AI-based releases, the company did not have many standout moments. It did release the Rufus AI tool in the Amazon app, which acts as a shopping assistant. It also released the Titan series of AI models and a video generation model for enterprises. However, the company also quietly took on the role of an aggregator and began integrating AI models from a large number of third parties into its Amazon Web Services (AWS) platform. It also invested in AI tools that improve the efficiency of responses and reduce hallucinations, and bolstered its servers to handle large volumes of AI processing.

While the limelight was on the major AI players in 2024, smaller AI firms did not fail to impress either. Anthropic continued its success with Claude by releasing the Claude 3 series early in the year and the Claude 3.5 series towards the end. The company also launched a desktop app for Mac and Windows in beta, as well as standalone apps for Android and iOS. Additionally, its Tool Use and PDF understanding capabilities made Claude a more capable chatbot in 2024. Perplexity, the AI-powered search engine, launched a Pro mode that shows detailed responses for complex queries. It also launched a standalone Mac app this year. However, while there were positives, the company's decision to show ads even to premium subscribers drew some criticism. Mistral continued its consistent release of fully open-source AI models in 2024. It began with the release of its 8x22B Mixture of Experts (MoE) model and followed it up with the Mixtral Open 2 LLM. The company also surprised developers with the release of the Pixtral 12B AI model, which comes with computer vision capabilities.
While we have tried to capture all the major announcements in the AI space in 2024, it is quite impossible to mention every single notable release given the AI fever running through the tech industry. But now that the year is ending, we expect 2025 to be a similarly action-packed year for this technology. In the coming year, we expect to see the rise of agentic AI and its integration into platforms and devices. Imagine asking your chatbot to book a movie ticket or buy a product at the lowest price possible and having it complete the action without requiring any intervention; that is what AI agents can offer. We also believe the upcoming year will see better implementation of memory in chatbots, moving beyond the rudimentary retrieval-augmented generation (RAG) framework. This should lead to chatbots becoming better assistants and companions for users. Real-time video processing might also become more accessible in the coming year. And finally, we believe India will take major strides towards adopting AI in 2025.
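For readers unfamiliar with the retrieval-augmented generation (RAG) pattern mentioned above, a minimal sketch follows. It is illustrative only: the toy note store, the word-overlap scoring, and the build_prompt helper are assumptions for demonstration, not any particular chatbot's implementation; production systems typically use learned embeddings and a vector database rather than simple word matching.

```python
# A toy retrieval-augmented generation (RAG) loop: retrieve the notes most
# relevant to a question, then stuff them into the prompt sent to a model.
# Scoring is plain word overlap purely for illustration.
from collections import Counter
from typing import List

NOTES = [  # stand-in for a user's "memory" or document store
    "The user's favourite cinema is the one on 5th Street.",
    "The user prefers aisle seats for evening shows.",
    "The user's laptop budget is about $800.",
]


def score(query: str, doc: str) -> int:
    """Count words shared between the query and a stored note."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())


def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Return the k highest-scoring notes for this query."""
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]


def build_prompt(query: str, docs: List[str]) -> str:
    """Assemble the retrieved context and the question into one prompt."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."


if __name__ == "__main__":
    print(build_prompt("Book me a movie ticket for tonight", NOTES))
```

The point of the sketch is the shape of the loop, retrieve then generate; the "memory" improvements the article anticipates would replace or augment the retrieval step rather than change that overall shape.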
[13]
OpenAI Announcements in December 2024: From ChatGPT Pro to Sora
Introduction of ChatGPT Pro with advanced features. Sora model allows users to generate text-to-video content. Release of o3, the next-generation AI reasoning model.

Artificial Intelligence (AI) company OpenAI, just ahead of the holiday season, announced several updates through its "12 Days of OpenAI," a series it began on December 5, 2024. On December 4, OpenAI co-founder and CEO Sam Altman revealed that the company would demo or launch a new product or feature every weekday from December 5 onward. Accordingly, OpenAI hosted a 12-day live-stream series, unveiling new AI tools, features, updates, and developments around artificial intelligence technology. December thus became an action-packed month, informally dubbed the "12 Days of Shipmas" by OpenAI.

On the first day of its 12-day event, OpenAI introduced two major upgrades for its chatbot, ChatGPT: a new subscription tier called ChatGPT Pro and the full version of its o1 model. Priced at USD 200 per month, ChatGPT Pro offers unlimited access to OpenAI o1, o1-mini, GPT-4o, advanced voice mode, and an enhanced version of o1 known as o1 Pro Mode. The o1 Pro Mode uses more compute to tackle harder problems, providing superior answers, according to OpenAI. "As AI becomes more advanced, it will solve increasingly complex and critical problems. It also takes significantly more compute to power these capabilities," the company said when introducing the Pro tier. "OpenAI o1 is more concise in its thinking, resulting in faster response times than o1-preview. Our testing shows that o1 outperforms o1-preview, reducing major errors on difficult real-world questions by 34 percent," OpenAI announced on X. Additionally, OpenAI awarded 10 grants of ChatGPT Pro subscriptions to medical researchers at leading US institutions, with plans to expand Pro grants to other regions and research areas in the future. The company expects to add more powerful, compute-intensive productivity features to this plan over time.

On December 6, the company announced that OpenAI o1 had been fully rolled out to 100 percent of ChatGPT Plus, Team, and Pro users. OpenAI also provided an update on its model customisation offerings, allowing users to fine-tune OpenAI's models using their own data. Day 2 focused on developers, with the expansion of the "Reinforcement Fine-Tuning Research Program" and a new fine-tuning method called reinforcement fine-tuning, which enables the creation of domain-specific expert models with minimal training data. This is a preview release, with a public launch expected in 2025; in the meantime, OpenAI expanded alpha access to Reinforcement Fine-Tuning, inviting researchers, universities, and enterprises with complex tasks to apply for early access to the technology.

On December 9, 2024, OpenAI released Sora, its text-to-video generation model first introduced in February. The model allows users to generate realistic 1080p resolution videos, up to 20 seconds long, in widescreen, vertical, or square formats. Users can upload assets to blend, remix, or create entirely new content from text prompts. "Sora serves as a foundation for AI that understands and simulates reality -- an important step towards developing models that can interact with the physical world," OpenAI said. Sora, OpenAI's video product, is the company's holiday gift to users.
The company also developed new interfaces to make it easier to prompt Sora with text, images, and videos, including a storyboard tool for precise frame-by-frame input. OpenAI said Sora is available immediately to ChatGPT Pro and Plus subscribers in the US and most other countries, with the exception of the United Kingdom, Switzerland, and the European Economic Area. OpenAI also confirmed a significantly faster version of the model called Sora Turbo, released as a standalone product for ChatGPT Plus and Pro subscribers. Sora is included as part of Plus accounts at no additional cost: users can generate up to 50 videos at 480p resolution, or fewer videos at 720p, each month. OpenAI confirmed that the version of Sora currently being deployed has several limitations, such as generating unrealistic physics and struggling with complex actions over long durations. Additionally, all Sora-generated videos come with C2PA metadata, which identifies a video as coming from Sora to provide transparency and can be used to verify its origin. On December 10, Sam Altman posted on X that sign-ups would be temporarily disabled and video generations would be slower due to higher-than-expected demand.

On Day 4, OpenAI announced several updates to Canvas, a collaborative tool integrated into ChatGPT that provides an editable side panel for writing and coding tasks. Canvas offers tools for editing, highlighting, and inline feedback, and it enables targeted suggestions, version control, and customisable adjustments like changing tone or length, making complex projects easier to refine. On December 10, OpenAI made Canvas available by default for all users, both free and paid. The company also announced new capabilities, such as the ability to run Python code within Canvas, bring Canvas to custom GPTs, and support shortcuts. Canvas is available to all ChatGPT users on web and Windows platforms, with upcoming availability on Mac and mobile platforms (iOS, Android, and mobile web).

On Day 5, the live stream featured integration with Apple Intelligence, part of several features launched in iOS 18.2, which enhances Siri with ChatGPT responses across Apple's platforms. Demos showcased how users can ask ChatGPT through Siri to plan holiday parties, create Christmas playlists, and more. Apple iPhone owners do not need a ChatGPT subscription or account to use the technology.

On Day 6, OpenAI announced a major update to its advanced voice mode, a feature enabling users to have natural conversations with a selection of AI voices that sound remarkably human. Video and screen sharing, along with image upload capabilities, began rolling out on December 12 to iOS and Android apps, with full availability expected within the next week for all Team and most Plus subscribers. These features are already available to nearly all Pro subscribers, and OpenAI plans to make them available soon to Pro subscribers in the EU. Usage of video and screen-sharing capabilities is limited for eligible plans on a daily basis, while image uploads count toward the plan's usage limits. OpenAI also announced that Santa Mode is available to all users who have access to voice chats on ChatGPT.
For the month of December, users can click a snowflake icon in the ChatGPT app to begin a real-time voice conversation with a jolly Santa character.

On Day 7, OpenAI introduced Projects, a new interface for ChatGPT that makes it easier to organise chats that share a topic or context. A project keeps chats, files, and custom instructions together in one place, simplifying the process of returning to ongoing work. Users can upload files, set instructions specific to a project, and tailor multiple chats to the project itself. OpenAI confirmed that features like Search, Canvas, Advanced Data Analysis, and DALL-E also work within Projects. ChatGPT Projects are available to all Plus, Team, and Pro users, and will roll out to ChatGPT Enterprise and Edu early next year.

On Day 8, OpenAI brought ChatGPT's search abilities to all logged-in users in regions where ChatGPT is available. ChatGPT will automatically choose to search the web based on your queries, or users can manually select the option to search by clicking the web search icon. Search is also integrated into advanced voice mode, so users can ask the AI voices to search the web on their behalf during a conversation. OpenAI also added maps to ChatGPT in mobile apps, so users can search for and chat about local restaurants and businesses with up-to-date information. ChatGPT's enhancements, including faster searches and maps on mobile, are available to all users in ChatGPT's paid tiers.

On Day 9, December 17, OpenAI introduced developer-focused updates. The o1 reasoning model became available through OpenAI's API, replacing the previously available preview version. Pricing for the company's Realtime API was reduced by 60 percent, and a simple WebRTC integration for building low-latency conversational experiences was added. OpenAI also added GPT-4o mini to the API at roughly a tenth of previous prices, improved voice quality, and enhanced input reliability. Preference Fine-Tuning, a new model customisation technique that tailors models based on user and developer preferences, was introduced, along with two new official SDKs, Go and Java, in beta for developers.

On Day 10, OpenAI announced an experimental launch: a toll-free number that anyone can call to interact with ChatGPT. By dialing 1-800-CHAT-GPT (1-800-242-8478), anyone with a US phone number can call the AI model, engage with the advanced voice mode, and discuss anything for up to 15 minutes per month, free of charge. Users can also now start a conversation with ChatGPT through WhatsApp by texting the same number, though daily limits on WhatsApp messages apply.

On Day 11, December 19, OpenAI announced an update to its desktop apps. Users can now have the ChatGPT app work with other apps on their computers to complete tasks like debugging in terminals, working through documents, analysing data repositories, or getting feedback on speaker notes. Support for additional note-taking and coding apps, including Apple Notes, Notion, Quip, and Warp, was also added.
On the final day of OpenAI's 12-day series, OpenAI announced its o3 and o3-mini models, the successors to its o1 and o1-mini "reasoning" models. Sam Altman explained, "Out of respect to our friends at Telefonica, and in the grand tradition of OpenAI being truly bad at names, we decided to skip o2 and name the new model o3." "On several of the most challenging frontier evaluations, OpenAI o3 sets new milestones for what's possible in coding, math, and scientific reasoning. It also makes significant progress on the ARC-AGI evaluation for the first time," OpenAI shared on X. The first version of o3 is expected to launch publicly in early 2025. In the meantime, OpenAI opened early access applications for safety and security researchers to test these frontier models starting December 21, 2024.

On December 21, Sam Altman announced a special Sora bonus on X, stating, "Our GPUs get a little less busy during late December as people take a break from work, so we are giving all Plus users unlimited Sora access via the relaxed queue over the holidays!"

OpenAI announced on December 27 that its Board of Directors is evaluating its corporate structure in order to best support the mission of ensuring artificial general intelligence (AGI) benefits all of humanity. The plan involves transforming the for-profit entity into a Delaware Public Benefit Corporation (PBC), which will allow for conventional capital-raising while balancing shareholder and public interests. This change aims to make the non-profit more sustainable, with its significant interest in the PBC amplifying resources for charitable initiatives. The goal is to evolve OpenAI into a long-term, enduring organisation that continues advancing AGI safely and ethically for the benefit of all. The PBC will run and control OpenAI's operations and business, while the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science, the company said. "We've learned to think of the mission as a continuous objective rather than just building any single system. The world is moving to build out a new infrastructure of energy, land use, chips, datacenters, data, AI models, and AI systems for the 21st century economy. We seek to evolve in order to take the next step in our mission, helping to build the AGI economy and ensuring it benefits humanity," OpenAI said in a blog post.
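The Day 9 update above notes that the o1 reasoning model became available through OpenAI's API. As a rough illustration of what that looks like from a developer's side, here is a minimal sketch using the official openai Python SDK (v1.x). The prompt is invented for the example, the model name simply follows the article, and running it assumes an API key and an account with access to that model.

```python
# Minimal sketch: calling a reasoning model through OpenAI's API with the
# official openai Python SDK (v1.x). Requires the OPENAI_API_KEY environment
# variable; the "o1" model name is taken from the article and assumes your
# account has access to it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",  # the reasoning model the article says reached the API on Day 9
    messages=[
        {
            "role": "user",
            "content": "In two sentences, explain why caching helps a web service scale.",
        }
    ],
)

print(response.choices[0].message.content)
```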
In 2024, AI made significant strides in capabilities and adoption, driving massive profits for tech companies. However, concerns about safety, regulation, and societal impact also grew.
In 2024, artificial intelligence continued its rapid advancement, with generative AI models like ChatGPT, Google's Gemini, and others pushing the boundaries of what's possible [1]. These systems demonstrated improved abilities in text, audio, video, and image generation. Notable achievements included Google DeepMind's AI earning silver-medal-level results at the International Mathematical Olympiad and ChatGPT passing a Stanford-administered Turing test [2].
The AI boom fueled extraordinary growth for tech companies at the forefront of development. Nvidia, the leading manufacturer of AI chips, saw its stock price nearly triple, briefly overtaking Apple as the world's most valuable company with a $3.7 trillion market cap [3][5]. Other tech giants like Google and Microsoft also saw significant stock price increases tied to their AI advancements [4].
As AI capabilities grew, so did concerns about its potential risks and the need for regulation. The European Union's Artificial Intelligence Act came into force, introducing safeguards for general-purpose AI systems and addressing privacy concerns [5]. In the U.S., California's attempt at sweeping AI regulation (SB 1047) was vetoed, highlighting the challenge of balancing innovation with safety [5].
The role of AI in sensitive areas like elections and warfare became more prominent. AI-generated content was used in political campaigns, raising concerns about disinformation [2]. In conflicts such as those in Ukraine and Gaza, AI tools were integrated into military operations for surveillance, target identification, and cybersecurity [2].
The tech industry pushed back against "AI doom" narratives in 2024. Silicon Valley entrepreneurs and venture capitalists, like Marc Andreessen, promoted a more optimistic vision of AI's future and argued against heavy regulation [1]. This stance contrasted with earlier warnings from some technologists about potential catastrophic risks from advanced AI systems [1].
The widespread adoption of AI tools prompted changes in various sectors. In education, computer science professors altered their teaching approaches, focusing more on testing, debugging, and problem decomposition in light of AI coding assistants [5]. Concerns about AI's impact on employment persisted, though some hoped for the creation of new job categories like prompt engineering [5].
Investigations revealed potential copyright infringements by AI image generators, with Midjourney producing images nearly identical to copyrighted movie scenes and trademarked characters [5]. This raised questions about the training data used for these models and potential legal liabilities for their users.
As AI capabilities expanded, discussions about the potential development of Artificial General Intelligence (AGI) intensified. While AGI remains a future goal, the rapid progress in 2024 brought the concept closer to reality in the minds of many researchers and industry leaders [3].
In conclusion, 2024 was a pivotal year for AI, marked by significant technological advancements, economic impacts, and growing debates about safety, ethics, and regulation. As AI continues to evolve, balancing innovation with responsible development remains a key challenge for the industry and society at large.