Curated by THEOUTPOST
On Thu, 3 Oct, 12:03 AM UTC
7 Sources
[1]
Lessons on open-source AI from Hugging Face cofounder Thomas Wolf
In this edition...a Hugging Face cofounder on the importance of open source...a Nobel Prize for Geoff Hinton and John Hopfield...a movie model from Meta...a Trump 'Manhattan Project' for AI?

Yesterday, I had the privilege of moderating a fireside chat with Thomas Wolf, the cofounder and chief scientific officer at Hugging Face, at the CogX Global Leadership Summit at the Royal Albert Hall in London. Hugging Face, of course, is the world's leading repository for open-source AI models -- the GitHub of AI, if you will. Founded in 2016 (in New York, as Wolf reminded me on stage when I erroneously said the company was founded in Paris), the company was valued at $4.5 billion in its latest $235 million venture capital funding round in August 2023. It was fascinating to listen to Wolf speak about what he sees as the vital importance of open-source AI models in making sure AI is ultimately a successful, impactful technology. Here were some key insights from our conversation.

Smaller is better

Wolf argued that it was the open-source community that was leading the way in the effort to produce smaller AI models that perform as well as larger ones. He noted that Meta's newly released Llama 3.2 family of models includes two small models -- at 1 billion and 3 billion parameters, compared to tens of billions or even hundreds of billions -- that perform as well on many text-based tasks, including summarization, as much larger models. Smaller models, in turn, Wolf argued, would be essential for two reasons. One, they would let people run AI directly on smartphones, tablets, and maybe eventually other devices, without having to transmit data to the cloud. That was better for privacy and data security. And it would enable people to enjoy the benefits of AI even if they didn't have a constant, high-speed broadband connection. More importantly, smaller models use less energy than large models running in data centers. That's important for combating AI's growing carbon footprint and water usage.

Democratizing AI

Critically, Wolf sees open-source AI and small models as fundamentally "democratizing" the technology. He, like many, is disturbed by the extent to which AI has simply reinforced the power of large technology giants, such as Microsoft, Google, Amazon, and, yes, Meta, even though Meta has arguably done more for open-source AI than anyone else. While OpenAI and, to a lesser extent, Anthropic have emerged as key players in the development of frontier AI capabilities, they have only been able to do so through close partnerships and funding relationships with tech giants (Microsoft in the case of OpenAI; Amazon and Google in the case of Anthropic). Many of the other companies working on proprietary LLMs -- Inflection, Character.ai, Adept, and Aleph Alpha, to name just a few -- have pivoted away from trying to build the most capable models. The only way to ensure that just a handful of companies don't monopolize this vital technology is to make it freely available to developers and researchers as open-source software, Wolf said. Open-source models -- and particularly small open-source models -- also gave companies more control over how much they were spending, which he saw as critical to businesses actually realizing that elusive return on investment from AI.

Safer in the long run

I pressed Wolf about the security risks of open-source AI. He said other kinds of open-source software -- such as Linux -- have wound up being more secure than proprietary software because there are so many people who can scrutinize the code, find security vulnerabilities, and then figure out how to fix them. He said he thought that open-source AI would prove to be no different. I told Wolf I was less confident than he was. Right now, if an attacker has access to a model's weights, it is simple to create prompts -- some of which might seem like gibberish to a human -- designed to get that model to jump its guard rails and do something it isn't supposed to, whether that is coughing up proprietary data, writing malware, or giving the user a recipe for a bioweapon. What's more, research has shown that an attacker can use the weights from open-source models to help design similar "prompt injection" attacks that will also work reasonably well against proprietary models. So the open models are not just more vulnerable; they are potentially making the entire AI ecosystem less secure. Wolf acknowledged that there might be a tradeoff -- with open models being more vulnerable in the near term until researchers could figure out how to better safeguard them. But he insisted that in the long term, having so many eyes on a model would make the technology more secure.

Openness, on a spectrum

I also asked Wolf about the controversy over Meta's labelling of its AI software as open source, when open-source purists criticize the company for placing some restrictions on the license terms of its AI models and also for not fully disclosing the datasets on which its models are trained. Wolf said that it was best to be less dogmatic and to think of openness as existing on a spectrum, with some models, such as Meta's, being "semi-open."

Better benchmarks

One of the things Hugging Face is best known for is its leaderboards, which rank open-source models against one another based on their performance on certain benchmarks. While the leaderboards are helpful, I bemoaned the fact that almost none exist that seek to show how well AI models work as an aid to human labor and intelligence. It is in this "copilot" role that AI models have found their best uses so far. And yet there are almost no benchmarks for how well humans perform when assisted by different AI software. Instead, the leaderboards always pit the models against one another and against human-level performance -- which tends to frame the technology as a replacement for human intelligence and labor. Wolf agreed that it would be great to have benchmarks that looked at how humans do when assisted by AI -- and he noted that some early models for coding did have such benchmarks -- but he said these benchmark tests were more expensive to run since you had to pay human testers, which is why he thought few companies attempted them.

Making money

Interestingly, Wolf also told me Hugging Face is bucking a trend among AI companies: It's cashflow positive. (The company makes money on consulting projects and by selling tools for enterprise developers.) By contrast, OpenAI is thought to be burning through billions of dollars. Maybe there really is a profitable future in giving AI models away.

With that, here's more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news: If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my book, Mastering AI: A Survival Guide to Our Superpowered Future.
It's out now in the U.S. from Simon & Schuster, and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.

A Nobel Prize for neural network pioneers Hinton and Hopfield. The Royal Swedish Academy of Sciences awarded the Nobel Prize in physics to deep learning "godfather" Geoffrey Hinton and machine learning pioneer John Hopfield for their work on the artificial neural networks that underpin today's AI revolution. You can read more from my Fortune colleague David Meyer here.

Meta debuts movie generation AI model. The social media company unveiled Movie Gen, a powerful generative AI model that can create high-quality short videos from text prompts. Text prompts can also be used to edit the videos, and the model can automatically create AI-generated sound effects or music appropriate to the scene -- an advance over other text-to-video software that has so far only been able to create videos without sound, the New York Times reported. The model will compete with OpenAI's Sora, Luma's Dream Machine, and Runway's Gen 3 Alpha models.

Another OpenAI researcher jumps ship -- this time to Google DeepMind. Tim Brooks, who co-led the development of OpenAI's text-to-video generation model, Sora, announced on X that he was leaving OpenAI to join Google DeepMind. Brooks joins a growing list of prominent OpenAI researchers who have left the company recently. TechCrunch has more here.

Amazon deploys an AI HR coach. That's according to a story in The Information, which quotes Beth Galetti, Amazon's senior vice president of people experience and tech, from a conference. She said the company trained a generative AI model on employee performance reviews and promotion assessments to act as a coach for employees seeking advice on the best way to approach difficult conversations with managers or direct reports.

OpenAI is drifting away from Microsoft for its data center demands. The Information reports, quoting people who have heard OpenAI CEO Sam Altman and CFO Sarah Friar discussing plans to reduce the company's dependence on Microsoft's GPU clusters. OpenAI recently signed a deal to rent time on GPUs in a data center in Abilene, Texas, that's being developed by Microsoft rival Oracle. The publication said OpenAI is concerned Microsoft is unable to give OpenAI access to enough data center capacity for it to keep pace with competitors, particularly Elon Musk's xAI. Musk has recently boasted about creating one of the world's largest clusters of Nvidia GPUs.

Maybe next token prediction works for everything? Transformers that just predict the next token in a sequence have proven remarkably powerful for constructing large language models (LLMs). But for text-to-image, text-to-video, and text-to-audio generation, other methods have usually been used, often in combination with an LLM. For images, this is often a diffusion model, where the system learns to take an image that has been distorted and blurred with statistical noise and then remove that noise to restore the original crisp image. Sometimes it is what is called a compositional technique, where the model learns from images with text labels. But researchers at the Beijing Academy of Artificial Intelligence have published a paper showing that simply training a model to predict the next token on multimodal data that includes text, still images, and video can produce an AI model that is just as good as those trained in a more complicated way. The researchers call their model Emu3. You can read the research paper on arxiv.org here and see a blog with examples of its outputs here. (A toy sketch of the next-token-prediction idea appears at the end of this section.)

Meet the former Amazon VP driving Hershey's tech transformation -- by John Kell
Doctors and lawyers, need a side hustle? Startup Kiva AI pays crypto to overseas experts who contribute to its 'human-in-the-loop' AI service -- by Catherine McGrath
Why Medtronic wants every business unit to have a plan for AI -- by John Kell
Google DeepMind exec says AI will increase efficiency so much it's expected to handle 50% of info requests in its legal department -- by Paolo Confino
AI assistants are ratting you out for badmouthing your coworkers -- by Sydney Lake

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

If Trump wins, will we see a Manhattan Project to build AGI and ASI? Some people think so after noticing former President Donald Trump's daughter Ivanka post approvingly on social media about a monograph published by former OpenAI researcher Leopold Aschenbrenner. On Sept. 25, Ivanka posted on X that Aschenbrenner's book-length treatise, "Situational Awareness," was "an excellent and important read." In the document, which Aschenbrenner published online in June, he predicts that OpenAI or one of its rivals will achieve artificial general intelligence (AGI) before the decade is out, with 2027 being the most likely year. He also says the U.S. and its allies must beat China in the race to develop AGI and then artificial superintelligence (ASI), an even more powerful technology that would be smarter than all humanity combined. The only way to guarantee this, Aschenbrenner argues, is for the U.S. government to get directly involved in securing the leading AI labs and for it to launch a government-led and funded Manhattan Project-like effort to develop ASI. So far, the Republican Party's platform when it comes to AI has been heavily influenced by the Silicon Valley venture capitalists most closely affiliated with the e/acc movement, whose believers espouse the idea that the benefits of superpowerful AI so outweigh any risks that there should be no regulation of AI at all. Trump has promised to immediately rescind President Joe Biden's executive order on AI, which imposed reporting and safety requirements on the companies working on the most advanced AI models. It would be ironic, then, if Trump wins the election and, influenced by Ivanka's views and in turn Aschenbrenner's, actually winds up nationalizing the AGI effort. I wonder what Ivanka's brother-in-law, Joshua Kushner, the managing partner at Thrive Capital, which just led OpenAI's record-breaking $6.6 billion funding round, thinks about that idea?
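Returning to the Emu3 item above: here is a minimal, hypothetical PyTorch sketch of what "just predict the next token" looks like when text tokens and discretized image tokens share one vocabulary and are packed into a single sequence. Every name, size, and the random "data" below is invented for illustration; this is not the researchers' architecture or code.

```python
# Hypothetical sketch (not the Emu3 authors' code): one decoder-only
# transformer trained with plain next-token prediction over a sequence that
# mixes text tokens and discretized image tokens. All sizes are toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_VOCAB, IMAGE_VOCAB = 1000, 512          # toy sizes; real tokenizers are far larger
VOCAB = TEXT_VOCAB + IMAGE_VOCAB             # shared vocabulary: text ids first, then image ids
D_MODEL, SEQ_LEN = 128, 64

class TinyMultimodalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(SEQ_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):
        b, t = tokens.shape
        x = self.embed(tokens) + self.pos(torch.arange(t, device=tokens.device))
        causal = nn.Transformer.generate_square_subsequent_mask(t)  # each position sees only the past
        return self.lm_head(self.blocks(x, mask=causal))

model = TinyMultimodalLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Fake "multimodal" batch: a text prefix followed by image tokens that would
# normally come from a discrete visual tokenizer (here just random ids).
text = torch.randint(0, TEXT_VOCAB, (8, 16))
image = torch.randint(TEXT_VOCAB, VOCAB, (8, SEQ_LEN - 16))
tokens = torch.cat([text, image], dim=1)

logits = model(tokens[:, :-1])               # predict token t+1 from tokens up to t
loss = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(f"toy next-token loss: {loss.item():.3f}")
```

The notable thing is that nothing about the training objective changes when image or video tokens are mixed in; the heavy lifting moves into the tokenizers that map pixels to discrete ids and back.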
[2]
OpenAI Makes History Again, This Time by Scoring Billions From Investors
OpenAI, which turned generative AI from a sci-fi concept into a consumer reality when it released ChatGPT two years ago, made history again last week when it raised $6.6 billion in one of the largest venture capital funding rounds in US history. The startup is now valued by investors, including Thrive Capital, Microsoft and AI-chip maker Nvidia, at $157 billion -- about the market value of "publicly traded household names such as Goldman Sachs, Uber and AT&T," The Wall Street Journal noted. Apple, which signed a deal to offer ChatGPT to iPhone users as part of its new Apple Intelligence system, was also considering making an investment before talks fell apart, the Journal added.

The pressure is now on CEO Sam Altman to turn the San Francisco-based company from a charitable nonprofit into a for-profit company within two years, a shift that's raised concerns that, in its new quest to make money, OpenAI might rush out AI tools that could increase the risk to humanity. The company said in a statement to Bloomberg that it would use the money to advance AI research and to increase its computing capacity. OpenAI has already pushed the tech forward notably, announcing a version of its large language model in mid-September called o1 that aims to mimic human-level reasoning. Google also has a team working on an AI reasoning model as it works to challenge OpenAI in the market for software that's "more adept at solving multistep problems in fields such as math and computer programming," Bloomberg reported last week, citing anonymous sources.

Whether you're a fan of OpenAI or not, the funding news affirms that a gen AI future is inescapable. But will it be a good future? Far be it from me to question the wisdom of venture capitalists, but I'll remind you that MIT economist Daron Acemoglu isn't so sure, saying again last week that AI is being overhyped and won't deliver the productivity boost and return on investment that companies investing over a trillion dollars into the tech think it will. Why? Because, he thinks, AI in the next decade will be able to handle only a small percentage of today's jobs: 5%. "A lot of money is going to get wasted," Acemoglu told Bloomberg. For a deeper dive into his thinking, check out what he said to Goldman Sachs about this in June. Here are the other doings in AI worth your attention.

California lawmakers passed what would've been the most far-reaching AI safety and development bill, only to see SB 1047 go down in flames after California Gov. Gavin Newsom vetoed it. The bill called for safety testing on AI models costing more than $100 million and would've required companies to build in a kill switch to shut down systems in case of an impending disaster (with someone in California deciding what constitutes a disaster). Echoing language used by Silicon Valley tech companies and AI developers that the bill would stifle innovation -- California is home to 32 of the world's 50 leading AI companies -- the Democratic governor wrote in a Sept. 29 statement that he didn't think the proposal to "put certain safeguards in place to prevent catastrophic harm" was the right approach. "A California-only approach may well be warranted -- especially absent federal action by Congress -- but it must be based on empirical evidence and science," Newsom said. "The bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Though many AI companies say they welcome some form of AI regulation, the devil, as we all know, is in the details. In late September, more than 100 companies signed on to the European Union's AI Pact, including Amazon, Google and Microsoft, pledging that they'll agree to the law's provisions aimed at helping make gen AI safe and secure. However, Apple, Anthropic and Meta didn't sign on, after expressing concerns that the EU regulation put too many regulatory hurdles in place.

I've been writing a bit about how AI companies are going to rely on voice technology to convince you to feel more comfortable interacting with their AI assistants. Case in point: Meta last month licensed the voices of notable actors Awkwafina, Judi Dench, Kristen Bell, Keegan-Michael Key and John Cena to provide answers, and tell jokes, using Meta AI. But it isn't just AI makers that are hoping to woo you with their conversational agents. Priceline demoed Penny Voice, which uses OpenAI's GPT-4o to allow you to have a back-and-forth audio chat to help book your travel plans. "Penny will be able to 'hear' and understand complex consumer queries and anticipate needs based on preferences and past interactions, then respond in a real-time voice," the company said. You can hear what that will all sound like here.

If you're curious about how people are using AI, CNET offers some explainers on how to write a resume using ChatGPT; how AI can help you create a care plan for your plants; which AI tools you can use to summarize your Excel spreadsheets; and how to use gen AI for meal planning, or as author Corin Cesaric writes, to help you decide "what to cook for a week."

Former President Donald Trump was the subject of a fake, AI-generated photo posted on Facebook that shows the Republican presidential candidate wearing an orange life vest and wading along a flooded street while accompanied by another man. Though Trump did survey the damage from Hurricane Helene during a trip to Georgia, engineering professor Walter Scheirer of the University of Notre Dame told USA Today that you can tell the image isn't authentic. That's because of numerous artifacts in the low-res photo and the fact that "the clothing of the two men appears to be dry. If they were wading through water, they'd be soaked -- an effect that can be seen in the numerous real photos from the hurricane."

What happens when you ask popular AI text-to-image tools to create their versions of baby boomers, Gen Xers, millennials and Gen Zers? The stereotypes fly, according to a joint research project by Alport and Turing Post. They analyzed more than 1,200 images across four AI models, and you can see the results for yourself in their recap, titled Sad Boomers, Sober Zoomers, Sidelined Gen Xers (again), and No Avotoast for Millenials. The researchers note, "The one thing they all have in common is a love for beers."

Waymark, an AI-video-tech company, released the full-length version of what it says is one of the first AI-generated movies, a dystopian work called The Frost. The final, 23-minute version combines part one, a 13-minute short released in June 2023 and described by MIT Technology Review as "impressive" and "bizarre," with the previously unreleased part two. The film was created with OpenAI's DALL-E and tools from Runway and Luma Labs. You can watch the trailer here.
I found it a little unsettling -- we're just at the beginning of all this AI moviemaking, but you know it's only going to get better.
[3]
This Week in AI: OpenAI is stretched thin | TechCrunch
Hiya, folks, welcome to TechCrunch's regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

After a brief hiatus, we're back with a few show notes on OpenAI's DevDay. The keynote yesterday morning in San Francisco was remarkable for its subdued tone -- a contrast to the rah-rah, hypebeast-y address from CEO Sam Altman last year. This DevDay, Altman didn't bound up onstage to pitch shiny new projects. He didn't even make an appearance; head of platform product Olivier Godement emceed. On the agenda for this first of several OpenAI DevDays -- the next is in London this month, followed by the last in Singapore in November -- were quality-of-life improvements. OpenAI released a real-time voice API, as well as vision fine-tuning, which allows developers to customize its GPT-4o model using images. And the company launched model distillation, which takes a large AI model like GPT-4o and uses it to fine-tune a smaller model. (A rough sketch of the general distillation idea appears at the end of this section.)

The event's narrow focus wasn't unanticipated. OpenAI tempered expectations this summer, saying DevDay would focus on educating devs, not showcasing products. Nevertheless, what was omitted from Tuesday's tight, 60-minute keynote raised questions about the progress -- and status -- of OpenAI's countless AI endeavors. We didn't hear about what might succeed OpenAI's nearly year-old image generator, DALL-E 3, nor did we get an update on the limited preview for Voice Engine, the company's voice-cloning tool. There's no launch timeline yet for OpenAI's video generator, Sora, and mum's the word on Media Manager, the app the company says it's developing to let creators control how their content is used in model training. When reached for comment, an OpenAI spokesperson told TechCrunch that OpenAI is "slowly rolling out the [Voice Engine] preview to more trusted partners" and that Media Manager is "still in development."

But it seems clear OpenAI is stretched thin -- and has been for some time. According to recent reporting by The Wall Street Journal, the company's teams working on GPT-4o were only given nine days to conduct safety assessments. Fortune reports that many OpenAI staff thought that o1, the company's first "reasoning" model, wasn't ready to be unveiled. As it barrels toward a funding round that could bring in up to $6.5 billion, OpenAI has its fingers in many underbaked pies. DALL-E 3 underperforms image generators like Flux in many qualitative tests; Sora is reportedly so slow to generate footage that OpenAI is revamping the model; and OpenAI continues to delay the rollout of the revenue-sharing program for its bot marketplace, the GPT Store, that it initially pegged for the first quarter of this year. I'm not surprised that OpenAI now finds itself beset with staff burnout and executive departures. When you try to be a jack-of-all-trades, you end up being a master of none -- and pleasing nobody.

AI bill vetoed: California governor Gavin Newsom vetoed SB 1047, a high-profile bill that would've regulated the development of AI in the state. In a statement, Newsom called the bill "well-intentioned" but "[not] the best approach" to protecting the public from AI's dangers.

AI bills passed: Newsom did sign other AI regulations into law -- including bills dealing with AI training data disclosures, deepfake nudes, and more.

Y Combinator criticized: Startup accelerator Y Combinator is being criticized after it backed an AI venture, PearAI, whose founders admitted they basically cloned an open source project called Continue.
Copilot gets upgraded: Microsoft's AI-powered Copilot assistant got a makeover on Tuesday. It can now read your screen, think deeply, and speak aloud to you, among other tricks.

OpenAI co-founder joins Anthropic: Durk Kingma, one of the lesser-known co-founders of OpenAI, this week announced he'll be joining Anthropic. It's unclear what he'll be working on, however.

Training AI on customers' photos: Meta's AI-powered Ray-Bans have a camera on the front for various AR features. But it could turn out to be a privacy issue -- the company won't say whether it plans to train models on images from users.

Raspberry Pi's AI camera: Raspberry Pi, the company that sells tiny, cheap, single-board computers, has released the Raspberry Pi AI Camera, an add-on with onboard AI processing.

AI coding platforms have nabbed millions of users and attracted hundreds of millions of dollars from VCs. But are they delivering on their promises to boost productivity? Maybe not, according to a new analysis from Uplevel, an engineering analytics firm. Uplevel compared data from about 800 of its developer customers -- some of whom reported using GitHub's AI coding tool, Copilot, and some of whom didn't. Uplevel found that devs relying on Copilot introduced 41% more bugs and weren't any less susceptible to burnout than those who didn't use the tool. Developers have shown enthusiasm for AI-powered assistive coding tools despite concerns pertaining not only to security but also to copyright infringement and privacy. The vast majority of devs responding to GitHub's latest poll said they've embraced AI tools in some form. Businesses are bullish too -- Microsoft reported in April that Copilot had over 50,000 enterprise customers.

Liquid AI, an MIT spinoff, this week announced its first series of generative AI models: Liquid Foundation Models, or LFMs for short. "So what?" you might ask. Models are a commodity -- new ones are released practically every day. Well, LFMs use a novel model architecture and notch competitive scores on a range of industry benchmarks. Most models are what's known as a transformer. Proposed by a team of Google researchers back in 2017, the transformer has become the dominant generative AI model architecture by far. Transformers underpin Sora and the newest version of Stable Diffusion, as well as text-generating models like Anthropic's Claude and Google's Gemini. But transformers have limitations. In particular, they're not very efficient at processing and analyzing vast amounts of data. Liquid claims its LFMs have a reduced memory footprint compared to transformer architectures, allowing them to take in larger amounts of data on the same hardware. "By efficiently compressing inputs, LFMs can process longer sequences [of data]," the company wrote in a blog post. Liquid's LFMs are available on a number of cloud platforms, and the team plans to continue refining the architecture with future releases.

If you blinked, you probably missed it: An AI company filed to go public this week. Called Cerebras, the San Francisco-based startup develops hardware to run and train AI models, and it competes directly with Nvidia. So how does Cerebras hope to compete against the chip giant, which commanded between 70% and 95% of the AI chip segment as of July? On performance, says Cerebras. The company claims that its flagship AI chip, which it both sells direct and offers as a service via its cloud, can outcompete Nvidia's hardware. But Cerebras has yet to translate this claimed performance advantage into profits.
The firm had a net loss of $66.6 million in the first half of 2024, per filings with the SEC. And for last year, Cerebras reported a net loss of $127.2 million on revenue of $78.7 million. Cerebras could seek to raise up to $1 billion through the IPO, according to Bloomberg. To date, the company has raised $715 million in venture capital and was valued at over $4 billion three years ago.
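Back to one DevDay item mentioned earlier in this newsletter: OpenAI hasn't said publicly how its new model distillation tooling works under the hood, so the snippet below is only a generic, hedged sketch of the classic knowledge-distillation recipe, in which a small "student" model is trained to match a frozen "teacher" model's softened outputs. The tiny models, temperature, and random data are placeholders invented for illustration, not OpenAI's implementation.

```python
# Generic knowledge-distillation sketch (Hinton-style soft targets), not
# OpenAI's actual pipeline. The teacher and student are tiny toy classifiers
# standing in for a large and a small model.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, DIM, T = 10, 32, 2.0            # T is the softening temperature

teacher = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES))
student = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, NUM_CLASSES))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, DIM)                     # toy inputs
y = torch.randint(0, NUM_CLASSES, (64,))     # toy hard labels

with torch.no_grad():                        # the teacher is frozen
    teacher_logits = teacher(x)

student_logits = student(x)

# Soft-target loss: match the teacher's softened distribution (scaled by T^2,
# as in the original distillation paper), plus the usual hard-label loss.
soft_loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
hard_loss = F.cross_entropy(student_logits, y)
loss = 0.5 * soft_loss + 0.5 * hard_loss

loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.3f}")
```

For language models, "distillation" in practice often just means fine-tuning the smaller model on outputs generated by the larger one; the soft-target loss above is the textbook version of the same transfer idea.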
[4]
OpenAI sets a VC fundraising record - but can it fulfill the promise? - SiliconANGLE
If OpenAI could just monetize all the ink that gets spilled on the company, perhaps it could justify raising such a crazy amount of money this week. And get this: The $6.6 billion round, at a (gulp) $157 billion valuation, the biggest VC round ever, won't even be enough to relieve CEO Sam Altman from having to go out and raise more next year to fund all those Nvidia chips and the breakneck rate of new product development. Did someone say IPO?

But we've seen this kind of frenzy many times before, and it often doesn't end well. Speaking of IPOs, Nvidia rival Cerebras just filed for one. Macro issues may be a bigger determinant than one hot chipmaker, but struggling VCs are sure hoping Cerebras kicks off a much-needed IPO cycle. Meantime, plenty of picks-and-shovels AI companies -- er, platforms -- are already cashing in and spinning their own grand visions. VAST Data, for instance, aims to be the data operating system for AI. That's not all. Nvidia and Accenture see a lot of potential in the emergence of agentic AI, and so do others you may not have heard of. And though generative AI coding assistants aren't new at this point, they're still red-hot, as Poolside raises $500M for its own assistants. AI safety remains a political hot potato as Gavin Newsom shoots down one bill, but that won't be the end of it. The only question is whether any governmental oversight can keep pace with how fast AI is moving. TL;DR: No. Ads are coming to Google's AI Overviews. You knew this was coming, but it's significant and a corrective to the assumption that generative AI will kill Google Search and Google's lucrative search ads.

Here's the past week's enterprise and related tech news from SiliconANGLE and beyond:

OpenAI closes record $6.6B funding round at $157B valuation
California Gov. Gavin Newsom shoots down divisive AI safety bill SB 1047
Breaking Analysis: From LLMs to SLMs to SAMs, how agents are redefining AI
Living in the feed: Trust and AI drive strategic shifts for advertisers and content creators
AWS, Nvidia and others back new AI-enabled cancer research initiative

Coverage from Vast Data Cosmos:
Vast Cosmos: Powering AI and data innovation in real time
Vast Data announces InsightEngine, Cosmos community for faster AI adoption
With InsightEngine, Vast Data aims to be the data operating system for AI

And some interesting interviews and analysis from our Media Week NYC: theCUBE + NYSE Wired 2024 event, mostly on AI of course:
Causal AI: The next evolution in machine learning
Glean's AI-driven enterprise search aims to ease enterprise woes
AI trends shaping the future: The role of data engineering and scalable platforms in industry transformation
Unlocking a multimodal future: How computer vision is revolutionizing data analytics and decision-making

Microsoft enhances Copilot and Bing with new generative AI features
Kyndryl launches new consulting services for customers struggling with Copilot for Microsoft 365
OpenAI introduces new multimodal processing, AI fine-tuning tools at DevDay
OpenAI releases new ChatGPT tool for editing text and code
Nvidia and Accenture partner to foster development of AI agents
Google's lightweight Gemini 1.5 Flash-8B hits general availability
Google brings ads to its AI Overviews, Lens search tools
Google Cloud beefs up database services for generative AI workloads
Meta advances generative AI video creation with Movie Gen
Liquid AI debuts new LFM-based models that seem to outperform most traditional large language models
Mostly AI aims to overcome the AI training plateau with synthetic text based on proprietary datasets
Google brings new AI features to ChromeOS and announces two new Chromebook models
Dataiku launches LLM Guard Services to enhance generative AI deployment at scale
Credo AI debuts Integrations Hub for embedded responsible AI
Sonar now inspects AI-generated code for glitches
Anaconda brings the power of large language models to laptops
Moveworks rolls out automation platform for building AI agents
SoftBank expected to invest $500M in OpenAI as Apple exits funding round
Poolside raises $500M to take on GitHub and others with its AI coding assistants
Audio AI startup ElevenLabs may get a new round that could value it at $3B (per TechCrunch)
Eon raises $127M to simplify access to cloud data backups
Oracle to invest $6.5B+ in AI and cloud computing in Malaysia
Microsoft to invest $4.8 billion on AI, cloud infrastructure in Italy (per the Wall Street Journal)
Crescendo raises $50M for its AI-powered customer service platform
DataPelago raises $47M to optimize hardware for analytical workloads
Qodo raises $40M to advance AI-driven code integrity solutions
Resolve AI launched its AI tools for automating software operations as it announced a hefty $35M seed round from Greylock and Unusual Ventures.
Numa raises $32M to develop AI tools for car dealerships
Snowflake, Databricks back $20M round for AI data management startup Voyage AI
Harmonic Security raises $17.5M to protect sensitive data from leaking during AI use

There's more AI and big data news on SiliconANGLE:
Nvidia rival Cerebras Systems files to go public
VC market experiences broad decline in Q3 as deal volumes drop across US, Europe and Asia

Coverage from Anyscale Ray Summit:
The AI revolution unfolds: Outlining AI infrastructure scaling and complexity challenges
Uber's journey to hybrid cloud AI with Anyscale Ray and Kubernetes
AI-driven filmmaking: How Runway is transforming visual storytelling and creativity with Gen-3 Alpha
Three insights you might have missed from Ray Summit

DigitalOcean launches Nvidia H100 GPU-powered virtual machines for small developer teams
Intel and Google Cloud launch Confidential Computing Instances powered by 4th Gen Xeon processors
Submer raises $55.5M to scale sustainable immersion cooling for AI and data center servers
OpenStack releases 2024.2 Dalmatian with AI and security enhancements
SolarWinds launches next-generation observability platform with enhanced hybrid IT capabilities
ServiceTitan expands AI-powered solutions for contractors with new sales and contact center tools
The top five AI lessons learned from IBM's presence at SaaStr 2024

We have more news on cloud, infrastructure and apps:
Justice Department and Microsoft target Russian phishing campaigns with domain seizures
AUKUS nations impose sanctions on Russian cybercrime group Evil Corp
Elastic report highlights growing cyber risks from AI-driven phishing and deepfakes
Exabeam unveils first product updates and new service following LogRhythm merger
Akamai enhances API security with new native connector for Connected Cloud
Intel and Google Cloud launch confidential computing instances powered by 4th Gen Xeon processors
AuthenticID's new image comparison tool helps to stop fraudsters bypassing visual ID checks
Darktrace expands AI detection to Azure for faster, agentless deployment
Safe Security launches Safe X to help CISOs prioritize cyber risks with AI
Metomic unveils AI-powered data classification solution for Google Workspaces
Red Sift introduces Radar to help security teams address email and domain exposures
Silverfort's new incident response solution prioritizes identity security for faster attack containment
Dragos acquires Network Perception to enhance operational technology network visibility
Cloud security firm Apono raises $15.5M to expand AI-powered access management

More cybersecurity news here:
Epic Games sues Google, Samsung over 'Auto Block' mobile cybersecurity feature
Texas AG sues TikTok over breach of new child safety law
Autonomous warehouse robotics platform company Dexory raises $80M
3Laws nabs $4.1M in seed funding to make autonomous robots safer to operate
[5]
Data Sheet: OpenAI valuation, Google reasoning race, Apple budget phone
Good morning. Everyone in tech is talking about OpenAI's astonishing $6.6 billion fundraise, which got me wondering: What's the largest-ever round in tech? Leave it to the fine folks at PitchBook, who helped us with our inaugural Unicorn List in the go-go days of 2015, to track such things. (Back to 2006, anyway.) The answer as of last year? Didi Global. The Chinese ride-hailing company raised $5.5 billion in 2017 -- or $7.1 billion, adjusted for inflation -- four years before its IPO. Oh the humanity.

In February, OpenAI was valued at $80 billion. As of yesterday, it's worth $157 billion. (Yes, that makes it a "hectocorn.") The maker of ChatGPT announced Wednesday that it had raised $6.6 billion at the nosebleed valuation in one of the largest venture funding rounds in history. (Elon Musk's xAI raised $6 billion just four months ago.) The money is necessary, OpenAI explained, to carry out its sacred mission of ensuring "that artificial general intelligence benefits all of humanity." Inspiring words, to be sure. Especially when you consider the turbulent times that have defined the San Francisco startup lately. Less than a year ago, OpenAI CEO Sam Altman was booted by the board, only to make a near-immediate comeback. Since then, nearly all of OpenAI's top executives, including CTO Mira Murati, have left the building. Nearly half of its AGI safety team has left, too. And some staffers are complaining that the company's most recent model, o1, was released prematurely. Also: The company is undergoing a major transition from a nonprofit to a for-profit business. Investors seem to think it's all just a bunch of noise, pouring in record capital anyway. Thrive Capital led the round, and other investors reportedly include Khosla Ventures, Altimeter Capital, Fidelity, SoftBank, and Abu Dhabi-based MGX, as well as chip giant Nvidia. -- Jessica Mathews

We love a Big Tech arms race, and there's certainly one underway between Google and Microsoft- and Nvidia-backed OpenAI. Weeks after OpenAI released o1, a.k.a. "Strawberry," its first model with reasoning capabilities, a new Bloomberg report says that "multiple teams" at Google are hard at work on artificial intelligence reasoning software. (Google declined to comment.) Reasoning is the capability that allows an AI model to draw a logical conclusion, solve a complicated problem, or make a considered decision. It's one of the attributes that makes AI seem unnervingly human, and the attribute typically missing in your favorite TikTok bloopers. Reasoning also represents the next AI leap. Developers are using a technique called "chain of thought" prompting -- here's a detailed explanation of it from IBM Fellow Jerry Cuomo if you're interested -- to break down complex deductions into a series of smaller steps, approximating the way humans draw conclusions. (A short illustration of a chain-of-thought prompt appears at the end of this section.) You need not be a logician to see the commercial appeal. Will OpenAI or Google triumph? I asked a rudimentary reasoning tool from my youth. Its response: "Ask again later." -- AN

Not all buttons are created equal. The fourth generation of Apple's budget-friendly phone model, the iPhone SE, will drop the home button and add Face ID and Apple Intelligence capabilities, Bloomberg's Mark Gurman reports. The new phone is expected for release in early 2025. Apple's entry-level smartphone was last updated in 2022. The iPhone SE had kept a home button even as most top models ditched it. (The final flagship model with one: the iPhone 8 in 2017.)
The upcoming budget model, Gurman reports, will drop such differentiation and find savings elsewhere. It will look like an iPhone 14 and authenticate using Face ID. There are good reasons to unify the portfolio in this way: The iPhone SE 4 will reportedly be able to run the AI features that Apple unveiled earlier this year. Buyers of Apple's new iPhone 16 and 16 Pro are, of course, still waiting. The full Apple Intelligence suite isn't expected to hit devices until later this month. -- Jenn Brice

The Biden Administration is committing up to $100 million to the development of new, "sustainable" semiconductor materials and processes. The sum, announced Wednesday, is part of the administration's two-year-old effort to increase domestic chip production. The funds will go to universities and businesses partnering on ways to make chipmaking more environmentally friendly within the next five years. As part of the research, AI may be tapped to help reduce the vast amount of chemicals and water used in chip production. There would also no doubt be geopolitical benefits if the new materials reduce U.S. reliance on China-supplied rare earths. Wins all around, right? However, the feel-good nature of the funding may conflict with a bipartisan bill President Biden signed on Wednesday that's aimed at accelerating the construction of semiconductor plants receiving money from the so-called CHIPS Act. Critics say the new law will weaken federal environmental reviews of those plants and lead to more pollution, not less. -- David Meyer

What's it like defending a controversial decision that you didn't have much time to plan for? That's how Whole Foods CEO Jason Buechel sounded as he defended Amazon's controversial new RTO mandate in an all-hands meeting with the grocer's corporate employees this week, according to a recording reviewed by yours truly. Buechel attempted to reassure employees that he wanted the shift from a hybrid structure to a full, in-office one to involve more "carrot" than "stick." But he and his executive team were short on answers for what incentives might change the minds of those upset about the shift. Instead, Whole Foods corporate workers will have to wait for recommendations from a special "office experience task force." For real. -- Jason Del Rey

-- Greed! Extortion! Abuse of power! The Automattic-WP Engine drama escalates further in a new lawsuit.
-- R.I.P., World Wide Web Foundation. Tim Berners-Lee will carry on the fight against corporate control of the Internet through decentralization protocols.
-- Google doubles down on Gmail summary cards. Can we maybe just let email be email?
-- California's new deepfakes law hits a speedbump. A federal judge blocks it two weeks after it became law.
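A quick aside on the chain-of-thought technique mentioned in the Google reasoning item above: the simplest version is just a prompt that shows the model a worked example with explicit intermediate steps before posing a new question. The snippet below, with an invented example problem, only builds and prints such a prompt; it could then be sent to any chat-style model.

```python
# A toy chain-of-thought prompt. Showing the model one worked example with
# explicit intermediate steps nudges it to reason step by step on the new
# question. The content here is invented for illustration.
FEW_SHOT_EXAMPLE = """Q: A train travels 60 miles in 1.5 hours. What is its average speed?
A: Let's think step by step.
1. Average speed = distance / time.
2. Distance is 60 miles and time is 1.5 hours.
3. 60 / 1.5 = 40.
So the answer is 40 miles per hour."""

def build_cot_prompt(question: str) -> str:
    """Prepend the worked example, then ask the new question in the same format."""
    return (
        f"{FEW_SHOT_EXAMPLE}\n\n"
        f"Q: {question}\n"
        "A: Let's think step by step.\n"
    )

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A warehouse ships 240 boxes across 3 trucks, loaded equally. "
        "Each truck then drops off half its boxes. How many boxes remain per truck?"
    )
    print(prompt)  # send this string to any chat/completions model of your choice
```

Reasoning-focused models such as o1 are designed to do this kind of step-by-step decomposition on their own, rather than relying on the prompt to coax it out.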
[6]
Data Sheet: Blindsided
Good morning. I had the pleasure of co-hosting a Fortune x ServiceNow dinner last night in San Francisco and was struck by how CIOs are thinking about artificial intelligence at the moment. A Zagat-style summary to start your day: What AI models "are doing with my data" is the top related concern for most, and it's "hard to hire AI talent" as the landscape rapidly evolves. That can be "intimidating" because "no one knows everything" and then "it all changes tomorrow." The future will have "fewer jobs" for people "who don't think critically" because AI is "enabling us to become more creative." We need less scary "autopilot" AI and more useful "copilot" AI, and workers who "know what questions to ask." Education systems in this Intelligence Age must consider: "What will be 10 minutes into the future five years from now?" The news below. -- Andrew Nusca

P.S. A reminder that I'm moderating a Fortune x Accenture virtual conversation tomorrow about the ROI of generative AI. A phenomenal group of tech leaders will speak. Read more and join us -- it's free.

Is that an IPO on the horizon? And not just any IPO -- but an AI IPO? Cerebras, the AI chipmaker positioning itself as a rival to Nvidia, has filed its S-1, marking the start of its journey to go public. On the face of it, for the exit-starved private markets in the throes of the AI boom, this can only be good news. On a high level, Cerebras is a rocketship. Last year's revenue came in at $78.7 million, up from $24.6 million the year prior. But losses are large: $127.2 million in 2023 versus $177.7 million in the previous year. A deeper look at the filing tells a complex story. Cerebras, for example, is heavily dependent on a single customer, which accounted for a staggering 87% of its revenue in the first half of this year. (That customer was G42, an Abu Dhabi-based AI company.) Cerebras is clearly banking on a strong future for AI and demand for chips. And a home-run debut for Cerebras could spark a wave of feverishly hoped-for tech IPOs, which have been in short supply in recent years. -- Allie Garfinkle

Like The Devil Wears Prada's Andy Sachs after Nigel gets to her, Microsoft has given its AI-powered Copilot personal assistant a glow-up. The new Copilot is more personal, more capable, and more accessible than its predecessor thanks to a host of upgrades Microsoft announced Tuesday for Windows users. Among them is Copilot Vision, which processes the page you're viewing (in the Edge browser, naturally) so it can answer questions about the content or help you find things. There's also Copilot Voice, an improved spoken language layer (with four voices) so you can chatter to it the way you naturally would. Microsoft also upgraded Copilot's reasoning so it can think through more complex queries, such as difficult math problems, thanks to a new "Think Deeper" feature, which Microsoft told TechCrunch is powered by "the latest models from OpenAI, fine-tuned by Microsoft." Microsoft AI chief Mustafa Suleyman told WIRED he's excited about the "emotional support" aspect of this release. So by all means channel your inner Miranda Priestly. You know how it thrills me. -- Jenn Brice

Following "one of the most disastrous software releases by any consumer technology company in recent memory," according to Bloomberg Opinion, Sonos leadership is promising to do better. In April, the smart speaker company blindsided existing customers with a new app that was riddled with bugs and lacked basic functionality.
Customers revolted, while some employees claimed that their warnings about problems before the launch were ignored. So on Tuesday, Sonos' embattled CEO Patrick Spence unveiled a series of new internal guidelines and guardrails that he says should help prevent such a tremendous unforced error from happening again. The changes include product testing that is done more broadly and for a longer time period, debuting apps more gradually to customers, and the creation of an internal "ombudsperson" charged with elevating employee feedback before major launches. Candidly, it's wild that a 22-year-old public company didn't have some of these structures in place already. But, really, it'll be up to Sonos' most loyal customers to determine if this postmortem playbook is enough to firmly regain their trust. -- Jason Del Rey

Hurricane Helene's trail of destruction across the southeastern U.S. could disrupt the global tech sector. That's because a small North Carolina town called Spruce Pine (population 2,194) has been badly hit -- and Spruce Pine is one of the world's main sources of high-purity quartz, which is necessary for chip production. The owners of Spruce Pine's quartz mines shut them down last Thursday, as the storm approached, and don't know when they will be able to resume operation. "We have no visibility on when they will restart," a representative of The Quartz Corp told NPR on Tuesday. "This is [the] second order of priority. Our top priority remains the health and safety of our employees and their families." Sibelco, another mine owner, said its operations were also disrupted. Chipmakers should have stocks of high-purity quartz to tide them over for a while, but they'll be hoping that the mines are back in action soon, and that damaged transport infrastructure will be repaired to allow for shipping to resume. -- David Meyer

The head of Europe's largest software company is concerned that AI overregulation may hurt the continent's ability to compete with the U.S. SAP CEO Christian Klein tells the Financial Times in a new interview that he's "totally against regulating the technology" because "it would harm the competitiveness of Europe a lot." Shocking, I know. But the perspective comes at an opportune time. On Sunday, California Gov. Gavin Newsom vetoed a bill that would have enacted the U.S.'s most stringent AI rules, lifting the immediate regulatory threat to American businesses, while European execs are working through their own large-model restrictions, principally under the EU's new Artificial Intelligence Act and Digital Markets Act. How much regulation is too much regulation? Is Europe willing to break with the freewheeling U.S. to add additional protections? So far, that has been the case -- and U.S. tech companies (Meta, Apple) have pulled some AI-powered products as a result. Klein doesn't like the uneven playing field. "Don't regulate the technology," he told the FT. "Regulate the outcome." -- AN

-- GM's Cruise will pay a $1.5 million NHTSA fine. To channel Johnnie Cochran: You didn't lie, but you must comply.
-- Has social media created a teen mental health crisis? Suicide rates for young Americans jumped 62% between 2007 and 2021.
-- A Sony PlayStation outage for the ages. The gamers are, shall we say, displeased.
-- New Chromebooks have arrived. Lenovo and Samsung models come with AI baked in.
-- Voter registration databases: not secure! "Riddled with vulnerabilities" is not something we want to read six weeks before an election.
Fortune's Kali Hays overheard venture capitalist Marc Andreessen speak at Ray Summit, a conference for AI developers who work with the Ray framework, in San Francisco:

On the politicization of tech: "Tech has become very politically controversial. Basically it's our fault, I blame us and by us I mean myself and our firm and our companies and everyone in this room."

On tech seeking attention, pt. 1: "A lot of us grew up like, 'Wow, we think the stuff we're working on is really important, and we wish the rest of the world would understand how important it is' -- and now they do, right? They didn't for a long time, and now they do..."

On tech seeking attention, pt. 2: "I use the metaphor of the dog that caught the bus. The dog chases the bus never expecting to catch it, and God forbid the dog ends up with the tailpipe in its mouth. The reality is the bus just keeps on going and the dog gets dragged down the street. So that's us. We're the dog."
[7]
Corporate adoption is turbocharging the AI market, says top tech analyst
Good morning. Big tech has been making headlines this week. For starters, OpenAI is raising even more money and shaking up its management structure to become more corporate as it digs in for what is shaping up to be a bruising battle for AI dominance. Then there is Nvidia, whose CEO reminded businesses that his company is the only game in town when it comes to advanced AI chips -- including the new Blackwell line, which is a top priority for many firms.

And speaking of Nvidia, the chipmaker and the professional services company Accenture announced an expanded partnership on Wednesday. Accenture will create what it calls an "Nvidia Business Group," which will help clients lay the foundation for AI use with its AI Refinery platform built on the Nvidia AI stack. The Nvidia Business Group will consist of 30,000 employees focused on helping clients scale enterprise generative AI adoption.

I asked Dan Ives, managing director and senior equity research analyst at Wedbush Securities, what this move by Nvidia signifies. "We believe enterprise adoption of AI is one of the most important drivers of the market over the next few years," Ives told me on Thursday. And $1 trillion of AI CapEx is on the horizon in the next three years, with "the Godfather of AI Jensen and Nvidia torch bearers of this AI Revolution," according to Ives. During Nvidia's earnings call in August, CFO Colette Kress said that the "Enterprise AI wave has started." Enterprises drove sequential revenue growth for the company in the quarter that ended July 28, and Nvidia is working the most with Fortune 100 companies on AI initiatives, Kress said.

The fundraising news for OpenAI, the company that launched ChatGPT, came on Wednesday in the form of $6.6 billion in new funding, bringing the startup's valuation to $157 billion. The company did not disclose the investors, but venture capital firm Thrive Capital had invested in and led this latest round, Fortune reported. Microsoft and Nvidia were also among the investors, Wedbush Securities analysts wrote in a Thursday note. As a result of the funding round, OpenAI CFO Sarah Friar told employees on Wednesday that the company could offer a tender event to provide an option for liquidity, The Information reported. Friar, former CEO of Nextdoor, joined OpenAI in June. She also previously served as CFO of the tech company Square and oversaw its IPO.

The funding drive is happening as OpenAI overhauls its board structure to become a for-profit corporation accountable to shareholders, and as it contends with ongoing C-suite exits, including that of CTO Mira Murati. Wedbush analysts said in a Thursday note to investors that OpenAI has been the "linchpin" of AI success and adoption. "We also believe OpenAI now has a much more functional corporate structure which is a positive for the broader tech industry," the analysts wrote. Many CFOs are determining the business case for generative AI to help drive continuous change and create value, but the rapid pace of technological change may be speeding up that process. And the enterprise level is just the beginning of a broader AI revolution, with demand and adoption growing into 2025 and beyond, Wedbush analysts predict.

The following sections of CFO Daily were curated by Greg McKenna.

Some notable moves this week:

Selim Freiha was appointed CFO of Opendoor (Nasdaq: OPEN), an e-commerce platform for residential real estate, and will join the company's executive team in November. He arrives from Google-parent Alphabet, where he served as a vice president leading corporate financial planning and analysis and previously headed finance for Google Commerce and Payments.

Jonathan Frates was appointed executive vice president and CFO of SandRidge Energy (NYSE: SD), an oil and gas company, effective Oct. 21. He will succeed current CFO Brandon Brown, who will assume the role of senior vice president and chief accounting officer. Frates has stepped down as the company's chairman as he transitions to the new role.

Michelle Chang was appointed CFO of Zoom (Nasdaq: ZM), best known for its namesake video conference platform, effective Oct. 7. She will succeed current CFO Kelly Steckelberg, who will step down after seven years in the role following the company's earnings announcement for its third quarter, ending Oct. 31.

Cassandra Harris was appointed CFO of Genesco (NYSE: GCO), the parent of footwear and apparel brands such as Journeys and Johnston & Murphy, effective Oct. 7. She will succeed Thomas A. George, who will retire on Dec. 12 after over thirty years as a public company CFO.

Brad Garner was appointed CFO of flyExclusive (NYSEAM: FLYX), which owns and operates private jets, effective immediately. Garner joins from Hale Partnership Capital Management, where he served as CFO and chief compliance officer since 2015.

Han Choi was appointed CFO of Vor Biopharma (Nasdaq: VOR), a clinical-stage biopharmaceutical company developing treatments for blood cancers, effective immediately. Choi previously served as a principal at Oracle Investment Management, a hedge fund that specializes in healthcare investments, where he spent over two decades.

Brad Smith was promoted to CFO of Central Garden & Pet Company (Nasdaq: CENT), effective Sept. 29. He succeeds Niko Lahanas, who has been named the company's CEO. Smith joined Central in 2017 as CFO of the company's pet division after 12 years at Delhaize Group, where he eventually served as CFO of European operations.

Morgan Conn was appointed CFO of Century Therapeutics (Nasdaq: IPSC), a biopharmaceutical company, effective Oct. 14. He most recently served as chief business officer of Pharvaris and previously spent almost 16 years at PTC Therapeutics, where he started as a researcher before transitioning into several senior business roles.

Almost 90% of consumers are frustrated with rising prices across categories including groceries, gas, and restaurants, according to a new report from marketing and printing company RR Donnelley. The firm surveyed over 1,800 consumers across different generations and income levels in partnership with Prosper Insights & Analytics, as well as over 254 decision-makers from U.S.-based grocery, drug, and mass retailers. Those irritated by higher costs for everyday goods included 79% of households making over $100,000 per year, with 87% of baby boomers expressing concern over food and beverage prices. The report noted that consumers have adapted through several different strategies, including stocking up during sales and switching from name brands to private label products. Unsurprisingly, fair prices are the most important factor consumers weigh when choosing where to shop, with 58% of respondents saying they prioritize affordability. Thirty-nine percent, however, said data privacy concerns influence their retailer and brand selection, up 19 points from 2023.
"Price sensitivity is a collective concern, presenting both a challenge and an opportunity for brands," said Beth Johnson, director of client strategy in RRD's grocery & convenience division. "To address this, we recommend that brands focus on offering value through personalized offers and relevant discounts. These strategies will help them adapt to shifting priorities, stay relevant, and strengthen customer loyalty." Here are a few Fortune weekend reads: "WNBA commissioner Cathy Engelbert is leading the league to historic highs. Critics wonder if she's fully seizing the moment," by Emma Hinchliffe "Who has time for a power lunch? The real business happens at 4 p.m. 'power hour.'" by Jane Thier "There's already one clear election winner: The company making hats for every major candidate," by Sasha Rogelberg "AI could be giving natural gas a second lease on life," by Richard Lum "AI data centres will be built next to energy production sites that can produce gigawatt-scale, low-cost, low-emission electricity continuously. Basically, next to nuclear power plants."
OpenAI secures a historic $6.6 billion in funding, valuing the company at $157 billion. This massive investment comes amid concerns about AI safety, regulation, and the company's ability to deliver on its ambitious promises.
OpenAI, the company behind ChatGPT, has made history once again by raising $6.6 billion in one of the largest venture capital funding rounds in US history. The startup is now valued at an astounding $157 billion, putting it on par with publicly traded companies like Goldman Sachs, Uber, and AT&T [1]. This massive investment comes from a group of high-profile investors, including Thrive Capital, Microsoft, and AI-chip maker Nvidia [2].
The influx of capital brings with it immense pressure for CEO Sam Altman to transform OpenAI from a charitable nonprofit into a for-profit company within two years. This shift has raised concerns about the potential rush to release AI tools that could increase risks to humanity [2]. OpenAI has stated that the funds will be used to advance AI research and increase computing capacity [2].
Despite its meteoric rise, OpenAI faces several challenges:
Stretched resources: Recent reports suggest that OpenAI's teams are under significant pressure, with safety assessments for GPT-4o given only nine days [3].
Product delays: The company has delayed the rollout of its revenue-sharing program for the GPT Store [3].
Staff burnout and executive departures: OpenAI is reportedly experiencing burnout among staff and seeing key executives leave [3].
Performance concerns: Some of OpenAI's products, like DALL-E 3, are underperforming compared to competitors in certain tests [3].
The massive funding round comes at a time when AI regulation is a hot-button issue. California Governor Gavin Newsom recently vetoed SB 1047, a high-profile bill that would have regulated AI development in the state [4]. This decision highlights the ongoing debate between fostering innovation and ensuring public safety in AI development.
OpenAI's funding news is part of a larger trend of rapid AI development and investment across the tech industry:
Google is reportedly working on AI reasoning software to compete with OpenAI's o1 model [5].
Microsoft has enhanced its Copilot AI assistant with new features [3].
Meta is advancing generative AI video creation with its Movie Gen tool [3].
As AI continues to evolve rapidly, questions arise about its long-term impact and potential risks:
Economic concerns: Some economists, like MIT's Daron Acemoglu, argue that AI is being overhyped and may not deliver the expected productivity boost [2].
Democratization of AI: There are ongoing efforts to make AI more accessible through open-source models and smaller, more efficient systems [1].
Safety and security: The development of AI raises concerns about potential misuse and the need for robust safety measures [1].
As OpenAI moves forward with its ambitious plans, the tech world watches closely to see if the company can deliver on its promises and navigate the complex landscape of AI development, regulation, and commercialization.