Curated by THEOUTPOST
On Tue, 29 Apr, 4:05 PM UTC
12 Sources
[1]
Here's how to watch LlamaCon, Meta's first AI developer event | TechCrunch
On Tuesday, Meta is hosting LlamaCon, its first-ever AI developer event. It'll center around the company's Llama family of open AI models, and we're expecting some big updates for developers. Also on the agenda: keynotes from Meta executives and fireside chats with Big Tech CEOs and Meta's chief, Mark Zuckerberg.

Meta is hosting a small number of developers and journalists at its Menlo Park, California headquarters. The company is also livestreaming the big keynotes and firesides for people around the world to tune in. LlamaCon will stream on the Meta for Developers Facebook Page, as well as on the Meta Developers YouTube channel.

LlamaCon kicks off at 10:15 a.m. Pacific with a keynote speech delivered by Meta's Chief Product Officer, Chris Cox, the company's Vice President of AI, Manohar Paluri, and Meta Generative AI Research Scientist Angela Fan. At 10:45 a.m., Zuckerberg will sit down for a fireside chat with Databricks Co-founder and CEO Ali Ghodsi to chat about open source AI and AI-powered applications. What's the connection between Meta and Databricks? In January, Databricks announced that Meta was backing the data-focused AI startup as a "strategic advisor." Later in the day, at 4 p.m., Zuckerberg will participate in another fireside chat with Microsoft CEO Satya Nadella. The two CEOs are slated to discuss the latest trends in AI, and offer advice for how developers can stay ahead in the fast-moving AI space.

Stakes are high for Meta going into LlamaCon. The company recently launched Llama 4, a new generation of AI models that was met with a muted reaction from developers. The Llama 4 models were not state-of-the-art on certain benchmarks compared to leading AI models from DeepSeek, OpenAI, Anthropic, and Google. Soon after the launch, Meta had to fend off allegations that it cheated on a popular crowdsourced AI benchmark, LM Arena.
The company used a version of its Llama 4 Maverick model "optimized for conversationality" to achieve a high score on LM Arena, but released a different version of Maverick publicly. With LlamaCon, Meta is hoping to get back into the good graces of developers. We'll see if the company can achieve that.
[2]
Meta needs to win over AI developers at its first LlamaCon | TechCrunch
On Tuesday, Meta is hosting its first-ever LlamaCon AI developer conference at its Menlo Park headquarters, where the company will try to pitch developers on building applications with its open Llama AI models. Just a year ago, that wasn't a hard sell. However, in recent months, Meta has struggled to keep up with both "open" AI labs like DeepSeek and closed commercial competitors such as OpenAI in the rapidly evolving AI race. LlamaCon comes at a critical moment for Meta in its quest to build a sprawling Llama ecosystem. Winning developers over may be as simple as shipping better open models. But that may be tougher to achieve than it sounds. Meta's launch of Llama 4 earlier this month underwhelmed developers, with a number of benchmark scores coming in below models like DeepSeek's R1 and V3. It was a far cry from what Llama once was: a boundary-pushing model lineup. When Meta launched its Llama 3.1 405B model last summer, CEO Mark Zuckerberg touted it as a big win. In a blog post, Meta called Llama 3.1 405B the "most capable openly available foundation model," with performance rivaling OpenAI's best model at the time, GPT-4o. It was an impressive model, to be sure -- and so were the other models in Meta's Llama 3 family. Jeremy Nixon, who has hosted hackathons at San Francisco's AGI House for the last several years, called the Llama 3 launches "historic moments." Llama 3 arguably made Meta a darling among AI developers, delivering cutting-edge performance with the freedom to host the models wherever they chose. Today, Meta's Llama 3.3 model is downloaded more often than Llama 4, said Hugging Face's head of product and growth, Jeff Boudier, in an interview. Contrast that with the reception to Meta's Llama 4 family, and the difference is stark. But Llama 4 was controversial from the start. Meta optimized a version of one of its Llama 4 models, Llama 4 Maverick, for "conversationality," which helped it nab a top spot on the crowdsourced benchmark LM Arena. 
Meta never released this model, however -- the version of Maverick that rolled out broadly ended up performing much worse on LM Arena. The group behind LM Arena said that Meta should have been "clearer" about the discrepancy. Ion Stoica, an LM Arena co-founder and UC Berkeley professor who has also co-founded companies including Anyscale and Databricks, told TechCrunch that the incident harmed the developer community's trust in Meta. "[Meta] should have been more explicit that the Maverick model that was on [LM Arena] was different from the model that was released," Stoica told TechCrunch in an interview. "When this happens, it's a little bit of a loss of trust with the community. Of course, they can recover that by releasing better models." A glaring omission from the Llama 4 family was an AI reasoning model. Reasoning models can work carefully through questions before answering them. In the last year, much of the AI industry has released reasoning models, which tend to perform better on specific benchmarks. Meta's teasing a Llama 4 reasoning model, but the company hasn't indicated when to expect it. Nathan Lambert, a researcher with Ai2, says the fact that Meta didn't release a reasoning model with Llama 4 suggests the company may have rushed the launch. "Everyone's releasing a reasoning model, and it makes their models look so good," Lambert said. "Why couldn't [Meta] wait to do that? I don't have the answer to that question. It seems like normal company weirdness." Lambert noted that rival open models are closer to the frontier than ever before, and that they now come in more shapes and sizes -- greatly increasing the pressure on Meta. For example, on Monday, Alibaba released a collection of models, Qwen 3, which allegedly outperform some of OpenAI and Google's best coding models on Codeforces, a programming benchmark. 
To regain the open model lead, Meta simply needs to deliver superior models, according to Ravid Shwartz-Ziv, an AI researcher at NYU's Center for Data Science. That may involve taking more risks, like employing new techniques, he told TechCrunch. Whether Meta is in a position to take big risks right now is unclear. Current and former employees previously told Fortune that Meta's AI research lab is "dying a slow death." The company's VP of AI Research, Joelle Pineau, announced this month that she was leaving. LlamaCon is Meta's chance to show what it's been cooking to beat upcoming releases from AI labs such as OpenAI, Google, and xAI. If it fails to deliver, the company could fall even further behind in the ultra-competitive space.
[3]
Meta previews an API for its Llama AI models | TechCrunch
At its inaugural LlamaCon AI developer conference on Tuesday, Meta announced an API for its Llama series of AI models: the Llama API. Available in limited preview, the Llama API lets developers explore and experiment with products powered by different Llama models, per Meta. Paired with Meta's SDKs, it allows developers to build Llama-driven services, tools and applications. Meta didn't immediately share the API's pricing with TechCrunch. The rollout of the API comes as Meta looks to maintain a lead in the fiercely competitive open model space. While Llama models have racked up more than a billion downloads to date, according to Meta, rivals such as DeepSeek and Alibaba's Qwen threaten to upend Meta's efforts to establish a far-reaching ecosystem with Llama. The Llama API offers tools to fine-tune and evaluate the performance of Llama models, starting with Llama 3.3 8B. Customers can generate data, train on it, and then use Meta's evaluation suite in the Llama API to test the quality of their custom model. Meta said it won't use Llama API customer data to train the company's own models, and that models built using the Llama API can be transferred to another host. For devs building on top of Meta's recently released Llama 4 models specifically, the Llama API offers model-serving options via partnerships with Cerebras and Groq. These "early experimental" options are "available by request" to help developers prototype their AI apps, Meta said. "By simply selecting the Cerebras or Groq model names in the API, developers can [...] enjoy a streamlined experience with all usage tracked in one location," wrote Meta in a blog post provided to TechCrunch. "[W]e look forward to expanding partnerships with additional providers to bring even more options to build on top of Llama." Meta said it will expand access to the Llama API "in the coming weeks and months."
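Meta pitches the Llama API as OpenAI-SDK-compatible, with developers able to start "with one line of code." As a rough illustration only, here is a minimal sketch of what an OpenAI-style chat-completion request to such an API might look like; the base URL and model identifier below are illustrative assumptions, not documented values from Meta.

```python
import json

# Hypothetical values -- Meta has not published these in the coverage above.
LLAMA_API_BASE = "https://api.llama.example/v1"  # assumed base URL
MODEL = "Llama-4-Maverick"                       # assumed model identifier

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a Llama API call."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

# In practice the payload would be POSTed to
# f"{LLAMA_API_BASE}/chat/completions" with an API key in the
# Authorization header; here we only construct and print it.
print(json.dumps(build_chat_request("Summarize LlamaCon in one sentence.")))
```

Because the request shape mirrors OpenAI's chat-completions schema, the claimed compatibility with OpenAI's SDKs would amount to pointing an existing client at a different base URL.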
[4]
Meta's LlamaCon was all about undercutting OpenAI | TechCrunch
On Tuesday, Meta held its first-ever AI developer conference, LlamaCon, at its Menlo Park, California headquarters. The company announced the launch of a consumer-facing Meta AI chatbot app, which will compete with ChatGPT, as well as a developer-facing API for accessing Llama models in the cloud. Both releases aim to expand adoption of the company's open Llama AI models, but that goal may be secondary to Meta's true motive: beating OpenAI. Meta's AI ambition, in broad strokes, is to fuel a thriving open AI ecosystem that sticks it to "closed" AI providers like OpenAI, which gate their models behind services. Meta's AI chatbot app feels almost like a preemption of OpenAI's rumored social network. It has a social feed where users can share their AI chats, and offers personalized responses based on a user's Meta app activity. As for the Llama API, it's a challenge to OpenAI's API business. The Llama API is designed to make it simpler for developers to build apps that connect to Llama models in the cloud, using just a single line of code. It eliminates the need to rely on third-party cloud providers to run Llama models, and allows Meta to offer a fuller array of tools for AI developers. Meta, like many AI companies, perceives OpenAI to be a top rival. Court filings in a case against Meta reveal that the company's execs previously obsessed over beating OpenAI's GPT-4, which was once a state-of-the-art model. Undercutting proprietary AI model providers like OpenAI has long been core to Meta's AI strategy. In a July 2024 letter, Meta CEO Mark Zuckerberg sought to contrast Meta with companies like OpenAI, writing that "selling access to AI models isn't [Meta's] business model." Several AI researchers who spoke with TechCrunch ahead of LlamaCon were hoping Meta would release a competitive AI reasoning model like OpenAI's o3-mini. The company didn't end up doing so. But for Meta, it's not about winning the AI race necessarily.
During an onstage conversation with Databricks CEO Ali Ghodsi during LlamaCon, Zuckerberg said he sees any AI lab that makes its models openly available, including DeepSeek and Alibaba's Qwen, as allies in the fight against closed model providers. "Part of the value around open source is that you can mix and match. So if another model, like DeepSeek, is better -- or if Qwen is better at something -- then, as developers, you have the ability to take the best parts of the intelligence from different models and produce exactly what you need," said Zuckerberg. "This is part of how I think open source basically passes in quality all the closed source [models] ... [I]t feels like sort of an unstoppable force." Beyond stunting OpenAI's growth, Meta may also be trying to push its open models to satisfy a regulatory carveout. The EU AI Act grants special privileges to companies that distribute "free and open source" AI systems. Meta often claims its Llama models are "open source," despite disagreement on whether they meet the necessary criteria. Regardless of the reason, Meta seems content to kick off AI launches that strengthen the open model ecosystem and limit OpenAI's growth -- sometimes at the expense of failing to deliver cutting-edge models itself.
[5]
Meta's First LlamaCon Shows the Tech Giant's Still Playing Catch-Up
Katelyn is a writer with CNET covering social media, AI and online services. She graduated from the University of North Carolina at Chapel Hill with a degree in media and journalism. You can often find her with a novel and an iced coffee during her time off.

If you were like me and went into Meta's LlamaCon keynote expecting the company to drop the reasoning model it teased earlier this month or its teacher model Behemoth, prepare to be disappointed. The company's first AI developer conference was today, and while we didn't get any new models, there were a couple of announcements that helped Meta catch up in what's become an ultra-competitive race to build generative AI. But there wasn't much in the announcements to help it get ahead. Every big tech company is racing to build a model that can handle complex tasks without requiring a ton of computing power (and thus money) to run. Meta's approach to AI has focused on being open-source, which gives developers a peek behind the curtain at how models are built and trained. Chris Cox, Meta's chief product officer, dropped an updated stat, confirming that there have been 1.2 billion downloads of Llama models to date. Between that and the integrations of Meta AI in Facebook, Instagram and WhatsApp, Meta is certainly a major player in the space -- even if it's sometimes late to the party or does things differently. Here's everything Meta did release today and where that leaves the company going forward in the AI industry.

The company is rebranding its smart glasses Meta View app into a standalone app for its AI, CEO Mark Zuckerberg confirmed via Instagram a few hours before the keynote. You can download the app now. If you can't find it by searching for "Meta AI" like I couldn't, try searching for "Meta View" instead. The app is an extension of the company's chatbot, with voice-mode capabilities that let you talk to Meta AI, plus a social discover feed.
It's not the same as your Instagram or Facebook feeds; you can't find and follow your friends. Instead, you'll see posts from random users' experiences with Meta AI, including AI images they created and prompts they asked along with the chatbot's answers. CNBC reported the possibility of a standalone Meta AI app in February, but the choice to convert the Meta View app raises bigger questions about Meta's AI and VR future. "Meta making a play for another compelling phone app looks like a way to try to draw more people into the ecosystem faster than making a pitch to get glasses," my colleague and smart glasses expert Scott Stein wrote.

Meta did not round out its class of Llama 4 models at LlamaCon; instead, Cox mostly repeated information we already knew about Scout and Maverick. CNET reached out to Meta for the most up-to-date information on the release of Behemoth and the Llama 4 reasoning model Zuckerberg introduced earlier this month, but Meta declined to comment. The models available now in the Llama 4 family are Scout and Maverick. Scout is a smaller model designed to run on one Nvidia H100 GPU (with a 10 million token context window), and Maverick is the next level up with more power. There was some confusion back when Meta released the benchmarking scores for Llama 4. The company initially said Maverick outperformed GPT-4o from OpenAI. But eagle-eyed experts saw, and the benchmarking organization confirmed, that the Maverick model submitted for testing wasn't the same model people can actually use; it was "optimized for conversationality." Meta denied training the model on test data, a practice considered a big no-no because it could give a model an unfair edge in benchmarking tests and make those tests an inaccurate assessment of its performance. Meta's AI policy states that it does train its models on information shared on Meta platforms and on content you share with the chatbot. The company recently ended its opt-out option for European users, so this applies to them as well.
You can check out Meta's full privacy policy for more info.

Developers who want to build using Meta AI got good news on Tuesday when Meta announced a limited preview of its Llama API, an upcoming developer platform for Llama application development. Devs can request early, experimental access to Llama 4 fast inference now. "You should be able to take these custom models with you, whenever you want, no lock-in, ever," said Manohar Paluri, Meta's vice president of AI. He also called out speed, ease of use and customization as the intended hallmarks of the Llama API. The new Llama 4 models, Scout and Maverick, will be included in the API. Angela Fan, research scientist in generative AI at Meta, also highlighted that the API privacy policy is a bit different from the regular Meta AI policy. When you use the API, Meta will not train on the inputs (your prompts and things you upload) or on the outputs (what it spits out). This is good for developers who are building models for enterprises, or businesses, and need to ensure the data they upload stays secure.

The announcements at LlamaCon help Meta catch up to its competitors but don't put it ahead of the curve, which might spell trouble for the future. There's still no word on when Meta will release Behemoth or the reasoning model it promised in its Llama 4 drop. The Meta View app is fine, but it really just helps Meta even the playing field, as most of the major AI players already have mobile apps, including OpenAI, Claude and Perplexity. For Meta smart glasses users, the app's evolution might point to how AI is going to be at the forefront of those offerings. I left the keynote thinking that Meta is consistently late to the AI party -- OpenAI, Google and DeepSeek all have reasoning models out now. As I wrote in my review of Meta AI last year, there's nothing wrong with being behind if the company comes out swinging. But so far, that doesn't seem to be the case.
I think the most surprising thing was the social discover feed in the Meta AI app. Given all of Meta's expertise in building social platforms, the discover/explore page could become a promising (though unlikely) destination, with people flooding that feed with AI content instead of Facebook or Instagram. It's certainly something to keep an eye on, especially as Meta updates the app and moves forward with AI.
[6]
Meta introduces Llama application programming interface to attract AI developers
SAN FRANCISCO, April 29 (Reuters) - Meta Platforms (META.O) on Tuesday announced an application programming interface in a bid to woo businesses to more easily build AI products using its Llama artificial-intelligence models. Llama API, which was unveiled during the company's first-ever AI developer conference, will help Meta go up against APIs offered by rival model makers including Microsoft (MSFT.O)-backed OpenAI, Alphabet's (GOOGL.O) Google and emerging low-cost alternatives such as China's DeepSeek. "You can now start using Llama with one line of code," chief product officer Chris Cox said during a keynote speech onstage. APIs allow software developers to customize and quickly integrate a piece of technology into their own products. For OpenAI, APIs constitute the firm's primary source of revenue. Meta, which released the latest version of Llama earlier this month, did not share any pricing details for the API. In a press release, it said the new API was available as a limited preview for select customers and would roll out broadly in weeks to months. The company also released a standalone AI assistant app earlier on Tuesday. It plans to test a paid subscription service of its AI chatbot in the second quarter, Reuters reported in February. Meta releases its Llama models largely free-of-charge for use by developers, a strategy CEO Mark Zuckerberg previously stated will pay off in the form of innovative products, less dependence on would-be competitors and greater engagement on the company's core social networks. "You have full agency over these custom models, you control them in a way that's not possible with other offers," Manohar Paluri, a vice president of AI, said at the conference. "Whatever model you customize is yours to take wherever you want, not locked on our servers."
DeepSeek, which has also released partly open-source AI models, sparked a stock selloff in January amid concerns over the high costs of AI development needed by top U.S. firms. At the conference, Meta developers spoke about new techniques they used to significantly reduce costs and improve the efficiency of its newest Llama iteration. Zuckerberg welcomed increased competition that would steer the competitive ecosystem away from domination by a small number of leaders. "If another model, like DeepSeek, is better at something, then now as developers you have the ability to take the best parts of the intelligence from the different models and produce exactly what you need, which I think is going to be very powerful," Zuckerberg said. Reporting by Kenrick Cai in San Francisco and Deborah Sophia in Bengaluru; Editing by Matthew Lewis
[7]
How to watch and follow LlamaCon 2025, Meta's first generative AI developer conference, today
After a couple years of having its open-source Llama AI model be just a part of its Connect conferences, Meta is breaking things out and hosting an entirely generative AI-focused developer conference called LlamaCon on April 29. The event is streaming online, and you'll be able to watch along live on the Meta for Developers Facebook page. LlamaCon kicks off today at 1PM ET / 10AM PT with a keynote address from Meta's Chief Product Officer Chris Cox, Vice President of AI Manohar Paluri and research scientist Angela Fan. The keynote is supposed to cover developments in the company's open-source AI community, "the latest on the Llama collection of models and tools" and offer a glimpse at yet-to-be released AI features. The keynote address will be followed by a conversation at 1:45PM ET / 10:45AM PT between Meta CEO Mark Zuckerberg and Databricks CEO Ali Ghodsi on "building AI-powered applications," followed by a chat at 7PM ET / 4PM PT about "the latest trends in AI" between Zuckerberg and Microsoft CEO Satya Nadella. It doesn't seem like either conversation will be used to break news, but Microsoft and Meta have collaborated before, so anything is possible. We'll be liveblogging the keynote presentation today, along with some of the subsequent interviews and sessions between Zuckerberg and his guests. Stay tuned and refresh this article at about 1PM ET today, when we'll kick off the live updates. Should you prefer to watch the video yourself, LlamaCon will stream live through the Meta for Developers Facebook page. Meta hasn't traditionally waited for a conference to launch updates to Meta AI or the Llama model. The company introduced its new Llama 4 family of models, which excel at image understanding and document parsing, on a Saturday in early April. It's not clear what new models or products the company could have saved for LlamaCon.
Update, April 29 2025, 6:00AM ET: This story has been updated to include the details of Engadget's liveblog, and to correct a few typos in time zones.
[8]
LlamaCon 2025 live: Updates from Meta's first generative AI developer conference keynote
After a couple years of having its open-source Llama AI model be just a part of its Connect conferences, Meta is breaking things out and hosting an entirely generative AI-focused developer conference called LlamaCon on April 29. The event is streaming online, and you'll be able to watch along live on the Meta for Developers Facebook page. LlamaCon kicks off today at 1PM ET / 10AM PT with a keynote address from Meta's Chief Product Officer Chris Cox, Vice President of AI Manohar Paluri and research scientist Angela Fan. The keynote is supposed to cover developments in the company's open-source AI community, "the latest on the Llama collection of models and tools" and offer a glimpse at yet-to-be released AI features. The keynote address will be followed by a conversation at 1:45PM ET / 10:45AM PT between Meta CEO Mark Zuckerberg and Databricks CEO Ali Ghodsi on "building AI-powered applications," followed by a chat at 7PM ET / 4PM PT about "the latest trends in AI" between Zuckerberg and Microsoft CEO Satya Nadella. It doesn't seem like either conversation will be used to break news, but Microsoft and Meta have collaborated before, so anything is possible. Meta hasn't traditionally waited for a conference to launch updates to Meta AI or the Llama model. The company introduced its new Llama 4 family of models, which excel at image understanding and document parsing, on a Saturday in early April. It's not clear what new models or products the company could have saved for LlamaCon. We'll be liveblogging the keynote presentation today, along with some of the subsequent interviews and sessions between Zuckerberg and his guests. Stay tuned and refresh this article at about 1PM ET today, when we'll kick off the live updates. Update, April 29 2025, 6:00AM ET: This story was updated to include the details of Engadget's liveblog, and to correct a few typos in time zones.
[9]
Meta is making it easier to use Llama models for app development
Meta is releasing a new tool it hopes will encourage developers to use its family of Llama models for their next project. At its inaugural LlamaCon event in Menlo Park on Tuesday, the company announced the Llama API. Available as a limited free preview starting today, the tool gives developers a place to experiment with Meta's AI models, including the recently released Llama 4 Scout and Maverick systems. It also makes it easy to create new API keys, which devs can use for authentication purposes. "We want to make it even easier for you to quickly start building with Llama, while also giving you complete control over your models and weights without being locked to an API," the company said in a blog post published during the event. To that end, the initial release of the Llama API includes tools devs can use to fine-tune and evaluate their apps. Additionally, Meta notes it won't use user prompts and model responses to train its own models. "When you're ready, the models you build on the Llama API are yours to take with you wherever you want to host them, and we don't keep them locked on our servers," the company said. Meta expects to roll out the tool to more users in the coming weeks and months. Despite the fact that Meta's Llama models have been downloaded more than one billion times, the company typically isn't viewed as a leader in the AI space in quite the same way as OpenAI and Anthropic. It doesn't help that perception that the company was caught gaming LM Arena to make its Llama 4 models look better than they actually are.
[10]
Meta Still Sees OpenAI as a Competitor, But Not DeepSeek Anymore | AIM
Meta still positions itself as the open-source torchbearer. Llama recently crossed 1 billion downloads. The vibe at LlamaCon 2025 -- Meta's first developer summit -- was noticeably different. It wasn't about chasing headlines or claiming dominance in the AI race. Instead, Meta focused on building cost-efficient tools for developers and enterprises. At the event, the company launched a standalone Meta AI app powered by Llama 4 to compete with ChatGPT, and introduced the Llama API to help enterprises customise Llama models. Both announcements reflect Meta's strategy to go head-to-head with OpenAI, which is reportedly working on a social app. The Llama API offers one-click key generation and an interactive playground for exploring models like Llama 4 Scout and Llama 4 Maverick. "We provide a lightweight SDK in both Python and TypeScript," Meta said during the LlamaCon event, adding that the API is also compatible with OpenAI's SDK for easy migration. The company also rolled out tools for model fine-tuning and evaluation. Developers can customise the Llama 3.3 8B model, generate training data, and evaluate results directly through the API. It has partnered with Cerebras and Groq to support Llama 4 API inference. In a conversation with Meta chief Mark Zuckerberg, Databricks CEO Ali Ghodsi said the open-source nature of LLMs has people "super excited to mix and match the different models." "DeepSeek is better, Qwen is better at something. As developers, you have the chance to take the best parts of the intelligence from the different models and produce exactly what you need," said Zuckerberg. For instance, Alibaba's latest Qwen3 235B-parameter model outperforms OpenAI's o1 and o3-mini (medium) reasoning models on benchmarks that evaluate mathematical and programming ability.
"People are doing crazy things -- slicing, combining models, and getting better results. All of this is completely impossible if it wasn't open source," said Ali Ghodsi. "When it comes to model API business and serving LLMs, every model will be open source. You might not know it yet." Zuckerberg acknowledged that every time Meta releases a new Llama model, competitors' API prices drop. "Every time we do a Llama release, all the other companies drop their API prices," he said. Claude 3.7 Sonnet is priced at $3 per million input tokens and $15 for output. Gemini 2.5 Pro costs $1.25 for input and $10 for output. GPT-4.1 comes in at $2 and $8, respectively. Moreover, Ghodsi observed two emerging trends among customers. First, there is a shift towards smaller models designed for specific use cases, and second, there is an increased focus on inference-time compute and reasoning models. "The most common model people were using on Databricks was the Llama-distilled DeepSeek ones, where you took the R1 reasoning and distilled it on top of Llama." Ghodsi said that most organisations don't need a model that can do everything -- they just need a smaller model that performs well on a specific task they repeat often. He explained that by using distillation, they can retain the intelligence of the larger model but make it smaller, faster, and more cost-effective to run billions of times a day. Zuckerberg revealed that Meta is working on a new model, internally referred to as "Little Llama." However, it is worth noting that Meta hasn't released any reasoning model yet. Meanwhile, OpenAI chief Sam Altman recently confirmed that a powerful new open-weight model, with strong reasoning capabilities, will be shipped soon. Zuckerberg, in a recent podcast with Dwarkesh Patel, stated that comparing Llama 4 with DeepSeek R1 isn't fair, as Meta hasn't yet released its reasoning model. 
"We're basically in the same ballpark on all the text stuff that DeepSeek is doing, but with a smaller model. The cost-per-intelligence is lower with what we're doing for Llama on text," he said. Moreover, when Patel pointed out that Llama 4 models, including Maverick, haven't been that impressive on Chatbot Arena, lagging behind similarly sized models such as Gemini 2.5 Flash and o4-mini, Zuckerberg argued that open benchmarks like Chatbot Arena tend to evaluate language models on narrow or artificial tasks that don't reflect real-world use cases or how people interact with products. "As a result, these benchmarks can give a skewed or misleading view of a model's usefulness in real products," he noted. On licensing, Zuckerberg acknowledged concerns from open-source purists over the level of openness in Llama's license. However, he noted that most companies haven't raised objections, even with the clause requiring companies with more than 700 million users to contact Meta. He also suggested that it's reasonable for Meta to want large companies to discuss their needs with it before using a model that cost billions to train. "I think asking the other companies -- the huge ones that are similar in size and can easily afford to have a relationship with us -- to talk to us before they use it seems like a pretty reasonable thing," he said.
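Ghodsi's point about distillation, retaining a large model's intelligence in a smaller, cheaper one, rests on training the student to match the teacher's softened output distribution. The sketch below shows the generic soft-label KL objective commonly used for this; it is a textbook illustration, not Meta's or Databricks' actual recipe, and the logits are made up.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    Minimizing this trains the small student to mimic the large teacher's
    full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # prints 0.0
```

A higher temperature flattens both distributions, exposing more of the teacher's "dark knowledge" about which wrong answers are nearly right, which is what lets a distilled model stay small while keeping much of the larger model's behavior.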
[11]
LlamaCon: Meta's new AI app is a ChatGPT with a social feed
Meta held its first-ever AI developer conference, LlamaCon, at its Menlo Park, California headquarters on Tuesday, unveiling a consumer-facing Meta AI chatbot app and a developer-facing API for accessing Llama models in the cloud.

The new Meta AI chatbot app will compete with ChatGPT and features a social feed where users can share their AI chats, offering personalized responses based on a user's Meta app activity. The app's design feels like a preemption of OpenAI's rumored social network.

The Llama API is designed to simplify the process for developers to build apps that connect to Llama models in the cloud using just a single line of code. This eliminates the need to rely on third-party cloud providers to run Llama models and allows Meta to offer a fuller array of tools for AI developers, directly challenging OpenAI's API business.

Meta's AI strategy is centered around undercutting proprietary AI model providers like OpenAI. Court filings reveal that Meta's executives previously focused on beating OpenAI's GPT-4, a state-of-the-art model at the time. In a July 2024 letter, Meta CEO Mark Zuckerberg contrasted Meta with companies like OpenAI, stating that "selling access to AI models isn't [Meta's] business model."

During LlamaCon, Zuckerberg discussed Meta's vision for an open AI ecosystem, viewing any AI lab that makes its models openly available as allies in the fight against closed model providers. He cited DeepSeek and Alibaba's Qwen as examples, noting that open-source models allow developers to "mix and match" the best parts of different models to produce what they need.

Meta's efforts may also be driven by regulatory considerations, as the EU AI Act grants special privileges to companies that distribute "free and open source" AI systems. Meta claims its Llama models are "open source," despite some disagreement over whether they meet the necessary criteria.
Meta appears content to prioritize strengthening the open model ecosystem and limiting OpenAI's growth, even if it means not delivering cutting-edge models itself. The company's approach is to fuel a thriving open AI ecosystem that challenges "closed" AI providers.
[12]
Meta introduces Llama application programming interface to attract AI developers
SAN FRANCISCO (Reuters) - Meta Platforms on Tuesday announced an application programming interface in a bid to woo businesses to more easily build AI products using its Llama artificial-intelligence models.

Llama API, which was unveiled during the company's first-ever AI developer conference, will help Meta go up against APIs offered by rival model makers including Microsoft-backed OpenAI, Alphabet's Google and emerging low-cost alternatives such as China's DeepSeek. "You can now start using Llama with one line of code," chief product officer Chris Cox said during a keynote speech onstage.

APIs allow software developers to customize and quickly integrate a piece of technology into their own products. For OpenAI, APIs constitute the firm's primary source of revenue. Meta, which released the latest version of Llama earlier this month, did not share any pricing details for the API. In a press release, it said the new API was available as a limited preview for select customers and would roll out broadly in weeks to months.

The company also released a standalone AI assistant app earlier on Tuesday. It plans to test a paid subscription service of its AI chatbot in the second quarter, Reuters reported in February. Meta releases its Llama models largely free of charge for use by developers, a strategy CEO Mark Zuckerberg previously stated will pay off in the form of innovative products, less dependence on would-be competitors and greater engagement on the company's core social networks.

"You have full agency over these custom models, you control them in a way that's not possible with other offers," Manohar Paluri, a vice president of AI, said at the conference. "Whatever model you customize is yours to take wherever you want, not locked on our servers." DeepSeek, which has also released partly open-source AI models, sparked a stock selloff in January amid concerns over the high costs of AI development needed by top U.S. firms.
At the conference, Meta developers spoke about new techniques they used to significantly reduce costs and improve the efficiency of its newest Llama iteration. Zuckerberg welcomed increased competition that would steer the competitive ecosystem away from domination by a small number of leaders. "If another model, like DeepSeek, is better at something, then now as developers you have the ability to take the best parts of the intelligence from the different models and produce exactly what you need, which I think is going to be very powerful," Zuckerberg said. (Reporting by Kenrick Cai in San Francisco and Deborah Sophia in Bengaluru; Editing by Matthew Lewis)
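Cox's "one line of code" pitch suggests a hosted chat-completions interface in the style popularized by OpenAI's API. Since Meta had not published endpoint details or pricing at the time, the sketch below is purely hypothetical: the base URL, model identifier, and authentication scheme are placeholders, not documented values.

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Construct (but do not send) an OpenAI-style chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder values for illustration only; the real endpoint, model id,
# and auth scheme are whatever Meta's API documentation specifies.
req = build_chat_request(
    "https://llama.example.com/v1",
    "YOUR_API_KEY",
    "llama-4-maverick",
    "Summarize LlamaCon in one sentence.",
)
```

A production client would send this request (e.g. with `urllib.request.urlopen`) and parse the JSON response, but the request shape above is the part that typically collapses to "one line" once wrapped in an SDK.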
Meta hosts its first AI developer conference, LlamaCon, unveiling new tools and strategies to compete with rivals like OpenAI, while facing challenges in maintaining its position in the rapidly evolving AI landscape.
Meta, the tech giant formerly known as Facebook, hosted its first-ever AI developer conference, LlamaCon, at its Menlo Park headquarters on Tuesday. The event, centered around Meta's Llama family of open AI models, aimed to reinvigorate developer interest and showcase the company's AI capabilities amid fierce competition in the rapidly evolving AI landscape [1][2].
Meta unveiled several new offerings during LlamaCon:
Llama API: A cloud-based API for developers to access and build applications with Llama models. The API offers tools for fine-tuning and evaluating model performance, starting with Llama 3.3 8B [3].
Meta AI App: A standalone consumer-facing chatbot application, rebranded from the Meta View app for smart glasses. The app features a social feed for sharing AI chats and offers personalized responses based on user activity across Meta's platforms [4][5].
Llama 4 Models: Meta reiterated information about its recently released Llama 4 models, Scout and Maverick, designed for various computational requirements [5].
LlamaCon's announcements appeared to be strategic moves to challenge OpenAI and other AI competitors.
Despite the new releases, Meta faces several challenges in the AI space:
Benchmark Controversy: Meta recently faced allegations of using an optimized version of its Llama 4 Maverick model for benchmarking, which differed from the publicly released version [2][5].
Missing Reasoning Model: AI researchers noted the absence of a competitive AI reasoning model from Meta, which rivals like OpenAI, Google, and DeepSeek have already released [4][5].
Talent Retention: Reports suggest Meta's AI research lab may be struggling, with the recent departure of its VP of AI Research, Joelle Pineau [2].
While Meta's Llama models have garnered over a billion downloads, the company appears to be playing catch-up in some areas of AI development [3][5]. The success of LlamaCon and Meta's new offerings will be crucial in determining whether the company can maintain its position as a leader in open-source AI and effectively challenge closed-source competitors.
As the AI race intensifies, Meta's strategy of fostering an open AI ecosystem may face increasing pressure from rapidly advancing rivals like DeepSeek, Alibaba's Qwen, and established players like OpenAI and Google [2][4]. The coming months will be critical for Meta to deliver on its promises and regain momentum in the AI development community.