Peter Wang is the CEO of Anaconda, a company he cofounded in 2012 with the goal of democratizing business-data analytics by making Python tools easier to use. Under Wang's leadership, Anaconda grew alongside Python's rise to prominence as one of the world's most popular programming languages.
With Python established as a leading language for AI workloads, Anaconda is broadening its focus to data science and artificial intelligence, with the goal of becoming a common software layer for high-performance AI.
Anaconda has introduced tools to help companies and people get started with AI large language models. One such tool is AI Navigator, a desktop app that can run AI models locally on Windows and Mac, with Linux support coming soon. The company has 300 full-time employees and 40 million users worldwide.
Business Insider spoke with Wang to learn more about how AI workloads are prompting companies to think about bringing more of their IT infrastructure on-premises -- and the hardware and software challenges that come with making such a move.
The following has been edited for clarity and length.
Can you talk about on-premises infrastructure and what that term means to you in 2024?
On-premises initially meant servers within the company's physical location. Now it's more about the governance of infrastructure, including data, networking, and servers. Who manages it? Who gets to say, "No, absolutely, you can't do that," or, "Yes, you can do that?"
At Anaconda, we see customers across the spectrum. Some have what we call air-gapped systems, not connected to the internet at all. That might be a box in a building somewhere, oftentimes guarded by people with machine guns, where you go in with a flash drive. That's the hardcore level of secure on-prem.
On the other end, we see businesses that use a lot of cloud resources. But even they need stricter boundaries. They work with cloud providers to set up virtual private clouds or to provision resources with specific governance rules and policies.
Why are companies interested in on-premises solutions for AI and large language models?
We see a lot of interest from companies wanting control over their own destiny.
They want to fine-tune models on their own data, connect them to internal databases for retrieval-augmented generation, and use agent-based models. If a company can only consume AI as a cloud endpoint, it has to tie all its internal systems to that cloud AI service, which is difficult.
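The retrieval-augmented generation pattern Wang mentions can be sketched in a few lines. This is an illustrative toy, not Anaconda's implementation: it scores internal documents by naive word overlap (real systems use embedding vectors) and prepends the best match to the prompt sent to a locally hosted model.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant internal document, then prepend it
# to the prompt for a locally hosted model.

def score(query: str, doc: str) -> int:
    """Count how many query words also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, documents: list[str]) -> str:
    """Pick the best-matching internal document and build an augmented prompt."""
    best = max(documents, key=lambda doc: score(query, doc))
    return f"Context:\n{best}\n\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12% driven by enterprise subscriptions.",
    "The on-call rotation schedule for the platform team.",
]
prompt = build_prompt("How did revenue change in Q3?", docs)
# `prompt` now carries the revenue document as context; it would be
# sent to a local LLM endpoint rather than a public cloud API.
```

The point of the pattern is that the sensitive documents never leave the company's own infrastructure; only the model runs against them.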
And many of these cloud AI companies, while they are well capitalized, are still relatively new as enterprise-software players. There are a lot of concerns about data leakage and compliance.
Running locally gives more control and reduces the risk of accidental data exposure. You don't have to worry about a junior IT person at a cloud startup accidentally misconfiguring something and causing a data breach.
The risks of a data breach are clear, but many companies seem concerned about any external use of their data for AI. Why is that?
People have said that data is the new oil. If your data is oil, LLMs are like the internal-combustion engine: they provide a much more interesting way to use that oil.
Companies want to use their sensitive "crown jewel" data with LLMs to gain insights and improve predictive analytics. These use cases are central to their business, so they're protective of this data. They don't trust putting it on external systems where it could leak valuable information like customer insights or product preferences.
When people think of "on-prem," they tend to focus on hardware. But much of what you are describing seems like software. Can you explain that in more detail?
The hardware for AI work is often similar across setups. It's typically high-end Nvidia GPUs, though not always the latest or most expensive versions. On top of that hardware stack, the software needed to run an LLM isn't exotic if you know what you're doing. But there's a big asterisk here.
The challenges often come from internal IT policies, organizational competencies, and the dynamic nature of AI workloads. For example, if your organization is familiar with Docker or Kubernetes, great. But if you're a Java shop used to deploying with Maven, or a Ruby shop unfamiliar with Python, that creates hurdles. When these companies want to start an internal LLM, that's where they can use help.
AI workloads require varying amounts of computing power at different times. When you're training, you might need a lot of GPUs; at other times you might need fewer, different kinds -- or even just CPUs.
This dynamic set of hardware requirements, sometimes very bursty in terms of when and how long you need it, creates an orchestration challenge. That becomes a software challenge, and then an organizational one.
Are you saying the challenge is optimizing software for efficient and dynamic use of hardware?
I actually think it's more of a broad competency challenge for a company.
In traditional software development, the IT group talks to the software-development group. The developers specify their needs for memory, bandwidth, and storage, and IT provisions for them.
But data scientists and machine-learning teams have dynamic needs. They require newer, more advanced hardware, and the software they run is Python with many dependencies, like specific GPU-driver versions.
The challenges organizations face in on-premises AI relate to the dynamic nature of server and machine orchestration and the open-source-software ecosystem they're tapping into. This is especially true for compliance and security requirements.
Speaking of open source, what are your thoughts on open versus closed LLMs?
I've tried to stay out of the fray on social media, but my take is that current LLMs, especially the frontier models, have a lot of overlap in capabilities. Some are better than others in certain aspects, but at their core, once you throw enough data at them, these models start to become similar to each other.
Making open models like Meta's Llama freely available is a game changer. The original Llama release was significant, and the latest Llama 3.1 model, with 405 billion parameters, is a massive step forward. It will increase interest in running models on-premises, especially for fine-tuning on sensitive data.
But while these models are often called open, they're not open in the traditional open-source sense. You can use them freely, but you can't rebuild them from scratch or modify them at will. The training data, scripts, and hyperparameters are often not disclosed. It's a complex issue that involves considerations of safety and licensing. The data used for training is a particularly big issue that no one really talks about.
How should a company looking to implement on-premises AI get started?
Anaconda has an AI Navigator tool that is a great way to get started. It's a simple graphical interface where you can download appropriate models for your computer. We're currently running this in beta, and we're eager to get feedback from users.
Our tool connects to our curated model repository. We've quantized models to make them smaller and more efficient for different machines, which is important because downloading models from public repositories can pose security risks.
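The quantization Wang describes can be illustrated with a toy example — this is not Anaconda's pipeline, just the underlying idea: map float32 weights to int8 plus a scale factor, shrinking storage roughly 4x at the cost of a little precision. Production quantizers (4-bit GGUF and the like) are far more sophisticated, but the trade-off is the same.

```python
# Toy symmetric int8 quantization: each weight w is stored as an
# integer q with w ≈ q * scale, where scale is chosen so the largest
# weight maps to 127.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Return int8-range values and the scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the quantized form."""
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 0.01]
q, s = quantize(w)
restored = dequantize(q, s)
# every restored value is within one quantization step of the original
assert all(abs(a - b) <= s for a, b in zip(w, restored))
```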
For example, we've seen attacks where someone uploads a fine-tuned code-generation model that hallucinates nonexistent Python packages. It generates code that tries to import or install these fake packages, and then the attacker creates malicious versions of these packages in the real world. When users try to run the generated code, they install these malicious packages.
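One simple defense against the attack described above — sketched here as an illustration, with a hypothetical internal allowlist — is to extract the packages a generated snippet imports and flag anything the organization hasn't vetted before any install step runs.

```python
# Sketch of a guard against hallucinated-package attacks: parse the
# LLM-generated code and check every top-level import against a
# curated allowlist before installing anything.

import ast

APPROVED = {"numpy", "pandas", "requests"}  # hypothetical internal allowlist

def imported_packages(source: str) -> set[str]:
    """Extract top-level package names imported by Python source code."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

generated = "import numpy\nimport totally_real_utils\n"
unvetted = imported_packages(generated) - APPROVED
# `unvetted` flags 'totally_real_utils' for human review instead of
# letting an auto-install step fetch whatever an attacker registered.
```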
Our tool helps users get past many initial hurdles in setting up and running AI models locally. It accelerates the process of getting the software running correctly for a given machine, making it easier and safer for businesses to start exploring on-premises AI solutions.