Curated by THEOUTPOST
On Tue, 25 Feb, 12:05 AM UTC
5 Sources
[1]
AI nonprofit CEO says 'closed nature' of most artificial intelligence research hinders innovation
A year before Elon Musk helped start OpenAI in San Francisco, philanthropist and Microsoft co-founder Paul Allen already had established his own nonprofit artificial intelligence research laboratory in Seattle. Their mission was to advance AI for humanity's benefit.

More than a decade later, the Allen Institute for Artificial Intelligence, or Ai2, isn't nearly as well-known as the ChatGPT maker but is still pursuing the "high-impact" AI sought by Allen, who died in 2018. One of its latest AI models, Tulu 3 405B, rivals OpenAI and China's DeepSeek on several benchmarks. But unlike OpenAI, it says it's developing AI systems that are "truly open" for others to build upon.

The institute's CEO Ali Farhadi has been running Ai2 since 2023 after a stint at Apple. He spoke with The Associated Press. The interview has been edited for length and clarity.

Why is openness important to your mission?

Our mission is to do AI innovation and AI breakthroughs to solve some of the biggest working problems facing humanity today. The biggest threat to AI innovation is the closed nature of the practice. We have been pushing very, very strongly towards openness. If you think about open-source software, the core essence was, 'I should be able to understand what you did. I should be able to change it. I should be able to fork from it. I should be able to use part of it, half of it, all of it. And once I build my thing, I put it out there and you should be able to do the same.'

What do you consider an open-source AI model?

It is a really heated topic at the moment. To us, open source means that you understand what you did. Open weights models (such as Meta's) are great because people could just grab those weights and follow the rest, but they aren't open source. Open source is when you actually have access to every part of the puzzle.

Why aren't more AI developers sharing training data for models they say are open?

If I want to postulate, some of these training data have a little bit of questionable material in them. But also the training data for these models are the actual IP. The data is probably the most sacred part. Many think there's a lot of value in it. In my opinion, rightfully so. Data plays a significant role in improving your model, changing the behavior of your model. It's tedious, it's challenging. Many companies spend a lot of dollars, a lot of investments, in that domain and they don't like to share it.

What are the AI applications you're most excited about?

As it matures, I think AI is getting ready to be taken seriously for crucial problem domains such as science discovery. A good part of some disciplines involves a complicated search for a solution -- for a gene structure, a cell structure or specific configurations of elements. Many of those problems can be formulated computationally. There's only so much you can do by just downloading a model from the web that was trained on text data and fine-tuning it. Our hope is to empower scientists to be able to actually train their own model.
[2]
AI nonprofit CEO says 'closed nature' of most artificial intelligence research hinders innovation
[3]
AI nonprofit CEO says 'closed nature' of most artificial intelligence research hinders innovation
[4]
AI nonprofit CEO says 'closed nature' of most artificial intelligence research hinders innovation
[5]
Ai2's Ali Farhadi advocates for open-source AI models. Here's why
Ali Farhadi, CEO of the Allen Institute for Artificial Intelligence (Ai2), argues that the closed nature of most AI research is hindering innovation and calls for more openness in AI development.
The Allen Institute for Artificial Intelligence (Ai2), founded by Microsoft co-founder Paul Allen, has been at the forefront of AI research for over a decade. Established a year before Elon Musk's OpenAI, Ai2 has maintained its commitment to advancing AI for humanity's benefit [1][2][3][4][5].
Ali Farhadi, CEO of Ai2 since 2023, emphasizes the importance of openness in AI development. He argues that the closed nature of most AI research is the biggest threat to innovation in the field [1][2][3][4][5].
Farhadi defines true open-source AI as having access to every part of the puzzle, not just the model weights. He states, "Open-source means that you understand what you did. Open weights models are great because people could just grab those weights and follow the rest, but they aren't open source" [1][2][3][4][5].
While advocating for openness, Farhadi acknowledges the challenges in sharing training data:
He notes, "Data plays a significant role in improving your model, changing the behavior of your model. It's tedious, it's challenging. Many companies spend a lot of dollars, a lot of investments, in that domain and they don't like to share it" [1][2][3][4][5].
Ai2 has developed Tulu 3 405B, an AI model that rivals those of OpenAI and China's DeepSeek on several benchmarks. Unlike many competitors, Ai2 is committed to developing "truly open" AI systems for others to build upon [1][2][3][4][5].
Farhadi is particularly excited about AI's potential in scientific discovery:
"As it matures, I think AI is getting ready to be taken seriously for crucial problem domains such as science discovery. A good part of some disciplines involves a complicated search for a solution -- for a gene structure, a cell structure or specific configurations of elements" 12345.
Ai2 aims to empower scientists to train their own models rather than relying on pre-trained models fine-tuned for specific tasks. Farhadi believes this approach will lead to more significant breakthroughs in various scientific disciplines 12345.
By promoting openness and collaboration in AI research, Ai2 continues to pursue the "high-impact" AI envisioned by its founder, Paul Allen, while addressing the challenges and opportunities in the rapidly evolving field of artificial intelligence.
The rise of open-source AI models is reshaping the tech landscape, with FTC Chair Lina Khan advocating for openness to prevent monopolies. Silicon Valley faces disruption as new models match industry leaders' capabilities.
4 Sources
OpenAI CEO Sam Altman admits the company has been on the "wrong side of history" regarding open-source AI development, as Chinese startup DeepSeek's success sparks industry-wide debate on AI strategies and market dynamics.
14 Sources
OpenAI, the company behind ChatGPT, plans to release its first open-weight language model since GPT-2 in 2019. This strategic shift comes as the AI industry faces increasing pressure from open-source competitors and changing economic realities.
20 Sources
The release of DeepSeek's open-source AI model, rivaling top proprietary systems, has ignited discussions about the future of AI development, its implications for global competition, and the need for effective governance.
3 Sources
OpenAI, once a non-profit AI research organization, is restructuring into a for-profit entity, raising concerns about its commitment to beneficial AI development and potential safety implications.
7 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved