On Tue, 14 Jan, 4:01 PM UTC
2 Sources
[1]
Blasting AI into the past: modders get Llama AI working on an old Windows 98 PC
Remember when you were young, your responsibilities were far fewer, and you were still at least a little hopeful about the future potential of tech? Anyway! In our present moment, nothing appears to be safe from the sticky fingers of so-called AI -- and that includes nostalgic hardware of yesteryear.

Exo Labs, an outfit with the mission statement of democratising access to AI, such as large language models, has lifted the lid on its latest project: a modified version of Meta's Llama 2 running on a Windows 98 Pentium II machine (via Hackaday). Though not the latest Llama model, it's no less head-turning -- even for me, a frequent AI-naysayer.

To be fair, when it comes to big tech's hold over AI, Exo Labs and I seem to be of a similarly wary mind. So, setting aside my own AI-scepticism for the moment, this is undoubtedly an impressive project, chiefly because it doesn't rely on a power-hungry, very much environmentally unfriendly middleman datacenter to run.

The journey to Llama running on ancient-though-local hardware has some twists and turns: after securing the second-hand machine, Exo Labs had to contend with finding compatible PS/2 peripherals, and then figure out how to transfer the necessary files onto the decades-old machine. Did you know FTP over an Ethernet cable was backwards compatible to this degree? I certainly didn't! Don't be fooled, though -- I'm making it sound far easier than it was.

Even before the FTP finagling was figured out, Exo Labs had to find a way to compile modern code for the aging Pentium II machine. Longer story short-ish, the team went with Borland C++ 5.02, a "26-year-old [integrated development environment] and compiler that ran directly on Windows 98." However, compatibility issues persisted with modern C++, so the team fell back to an older dialect of C and had to deal with declaring variables at the start of every function. Oof.

Then there's the hardware at the heart of this project. For those needing a refresher, the Pentium II machine sports an itty-bitty 128 MB of RAM, while a full-size Llama 2 LLM boasts 70 billion parameters. Given those hefty constraints, the results are all the more interesting. Unsurprisingly, Exo Labs had to craft a comparatively svelte version of Llama for this project, now available to tool around with yourself via GitHub. The result of all that work is a retrofitted LLM with 1 billion parameters that spits out 0.0093 tokens per second -- hardly blistering, but the headline here really is that it works at all.
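For readers who have not written pre-C99 code, the constraint the team describes means every local variable must be declared at the top of a block, before any statements. A minimal illustrative fragment in that style (hypothetical, not taken from Exo Labs' repository) looks like this:

    /* C89-style code of the kind an old compiler such as Borland C++ 5.02
       accepts: declarations come first, statements follow. */
    #include <stdio.h>

    static float dot_product(const float *a, const float *b, int n)
    {
        float sum;   /* all locals declared up front ... */
        int i;

        sum = 0.0f;  /* ... statements only after the declarations */
        for (i = 0; i < n; i++) {   /* "for (int i = 0; ...)" would not compile */
            sum += a[i] * b[i];
        }
        return sum;
    }

    int main(void)
    {
        float x[3] = {1.0f, 2.0f, 3.0f};
        float y[3] = {4.0f, 5.0f, 6.0f};

        printf("dot = %f\n", dot_product(x, y, 3));
        return 0;
    }

Tedious for a codebase full of long functions, but entirely workable, as the project shows.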
[2]
Meta AI's Llama language model modded to run on decades-old Xbox 360
A hot potato: The open-source project llama2.c is designed to run a lightweight version of the Llama 2 model entirely in C code. This "baby" Llama 2 implementation is inspired by llama.cpp, a project created to enable LLM inference across a wide range of hardware, from local devices to cloud-based platforms. These compact code experiments are now being leveraged to run AI technology on virtually any device with a chip, highlighting the growing accessibility and versatility of AI tools.

After seeing Exo Labs run a large language model on an ancient Pentium II running Windows 98, developer Andrei David decided to take on an even more unconventional challenge. Dusting off his Xbox 360 console, he set out to force the nearly two-decade-old machine to load an AI model from Meta AI's Llama family of LLMs.

David shared on X that he successfully ported llama2.c to Microsoft's 2005-era gaming console. The process wasn't without significant hurdles, however. The Xbox 360's PowerPC CPU is a big-endian architecture, which required extensive endianness conversion for both the model's configuration and weights. He also had to make substantial adjustments and optimizations to the original code to get it running on the aging hardware.

Memory management posed yet another significant challenge. The 60 MB llama2 model had to be carefully structured to fit within the Xbox 360's unified memory architecture, where the CPU and GPU share the same pool of RAM. According to David, the Xbox 360's memory architecture was remarkably forward-thinking for its time, foreshadowing the memory management techniques now standard in modern gaming consoles and APUs.

After extensive coding and optimization, David successfully ran llama2 on his Xbox 360 using a simple prompt: "Sleep Joe said." Despite llama2.c being just 700 lines of C code with no external dependencies, David noted that it can deliver "surprisingly" strong performance when tailored to a sufficiently narrow domain.

David explained that working within the constraints of a limited platform like the Xbox 360 forces you to prioritize efficient memory usage above all else. In response, another X user suggested that the 512 MB of memory on Microsoft's old console might be sufficient to run other small LLM implementations, such as smolLM, created by AI startup Hugging Face. The developer gladly accepted the challenge, so we will likely see additional LLM experiments on the Xbox 360 in the not-so-distant future.
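To make the endianness hurdle concrete: llama2.c checkpoints exported on typical x86 hardware store their 32-bit configuration fields and float32 weights in little-endian byte order, so a big-endian CPU such as the Xbox 360's PowerPC has to reverse the bytes of every value it reads. The sketch below is a minimal, hypothetical illustration of that conversion, not code from David's actual port:

    /* Reverse the byte order of 32-bit values read from a
       little-endian llama2.c checkpoint on a big-endian host. */
    #include <stdint.h>
    #include <string.h>

    static uint32_t swap32(uint32_t v)
    {
        return ((v & 0x000000FFu) << 24) |
               ((v & 0x0000FF00u) << 8)  |
               ((v & 0x00FF0000u) >> 8)  |
               ((v & 0xFF000000u) >> 24);
    }

    /* Convert an array of little-endian float32 weights to the host's
       native (big-endian) representation, in place. */
    static void weights_to_host_endian(float *w, size_t count)
    {
        size_t i;
        for (i = 0; i < count; i++) {
            uint32_t bits;
            memcpy(&bits, &w[i], sizeof bits);  /* grab the raw bytes  */
            bits = swap32(bits);                /* reverse their order */
            memcpy(&w[i], &bits, sizeof bits);  /* store the result    */
        }
    }

The same swap has to be applied to the integer fields of the model's configuration header before they can be interpreted; doing the conversion once at load time keeps the inference loop itself unchanged.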
Innovative developers have successfully adapted Meta's Llama 2 AI model to run on outdated hardware, including a Windows 98 Pentium II PC and an Xbox 360 console, showcasing the potential for AI accessibility on diverse platforms.
In a remarkable display of technological ingenuity, developers have successfully adapted Meta's Llama 2 AI model to run on outdated hardware, pushing the boundaries of AI accessibility and demonstrating the potential for widespread AI integration across diverse platforms.
Exo Labs, an organization dedicated to democratizing AI access, has achieved a significant milestone by running a modified version of Meta's Llama 2 on a Windows 98 Pentium II machine [1]. This feat is particularly noteworthy as it operates without relying on power-hungry data centers, addressing environmental concerns associated with AI infrastructure.
The project faced numerous challenges, including:
- Sourcing compatible PS/2 peripherals for the second-hand machine
- Transferring the necessary files to the decades-old system, eventually accomplished over FTP via an Ethernet cable
- Compiling modern code with Borland C++ 5.02, a 26-year-old IDE and compiler that runs directly on Windows 98
- Falling back to an older dialect of C that requires variables to be declared at the start of every function
Despite the Pentium II's limited 128 MB of RAM, Exo Labs created a streamlined version of Llama 2 with 1 billion parameters, now available on GitHub for public experimentation [1].
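For a sense of scale, simple arithmetic on the figures reported above: at 0.0093 tokens per second, each token takes roughly 1 / 0.0093 ≈ 108 seconds, so even a 50-token reply would take about an hour and a half to generate. A 1-billion-parameter model is also far larger than the Pentium II's 128 MB of RAM, so the weights presumably cannot all sit in memory at once, which goes some way toward explaining the glacial pace.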
Building on Exo Labs' success, developer Andrei David took on the challenge of porting llama2.c, a lightweight C implementation of the Llama 2 model, to the Xbox 360 console [2]. This endeavor presented its own set of obstacles:
- Converting the model's configuration and weights for the console's big-endian PowerPC CPU
- Making substantial adjustments and optimizations to the original code for the aging hardware
- Fitting the 60 MB model within the Xbox 360's unified memory, where the CPU and GPU share the same pool of RAM
David successfully ran the llama2 model on the Xbox 360, demonstrating the potential for AI integration on gaming consoles [2].
These projects highlight several key points:
- Capable language models can run locally, without relying on power-hungry data centers
- Decades-old hardware remains usable for modern AI workloads, provided the models are small enough
- Compact models can deliver surprisingly strong results when tailored to a narrow domain
- Working within tight constraints forces efficient memory usage, a discipline that carries over to modern systems
As the experiments continue, we may see further adaptations of small LLM implementations, such as Hugging Face's smolLM, on various legacy platforms [2]. These developments could pave the way for more inclusive and diverse AI applications across a spectrum of devices, both old and new.