Software Engineer Runs Generative AI on 20-Year-Old PowerBook G4, Showcasing AI's Adaptability

A software engineer successfully ran a modern large language model (LLM) on a 2005 PowerBook G4, demonstrating the potential for AI to operate on older hardware, albeit with significant performance limitations.

Software Engineer Achieves AI Breakthrough on Vintage Hardware

In a remarkable demonstration of artificial intelligence's adaptability, software engineer Andrew Rossignol has successfully run a generative AI model on a 20-year-old PowerBook G4. The experiment, detailed in a recent blog post, pushes the boundaries of what is possible with older hardware in the age of AI [1].

The Experiment: Llama 2 on PowerBook G4

Rossignol's project involved running Meta's Llama 2 large language model on a 2005 PowerBook G4 equipped with a 1.5GHz PowerPC G4 processor and 1GB of RAM. This hardware, antiquated by today's standards, presents significant challenges for running modern AI models [1].

The experiment utilized the open-source llama2.c project, which implements Llama 2 inference in a single vanilla C file. Rossignol made several improvements to the project, including:

  1. Adding wrappers for system functions
  2. Organizing the code into a library with a public API (sketched below)
  3. Porting the project to run on a PowerPC Mac [2]
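
The article does not give the actual names in Rossignol's library, but a public API wrapped around llama2.c's inference loop would look roughly like this (all identifiers below are hypothetical):

    #include <stddef.h>

    /* Hypothetical public API for a llama2.c-style inference library;
       the real names in Rossignol's port are not given in the article. */
    typedef struct llm_ctx llm_ctx;  /* opaque handle: weights, tokenizer, run state */

    /* Load a model checkpoint and tokenizer; returns NULL on failure. */
    llm_ctx *llm_init(const char *checkpoint_path, const char *tokenizer_path);

    /* Generate up to out_len - 1 bytes of text from prompt; returns 0 on success. */
    int llm_generate(llm_ctx *ctx, const char *prompt, char *out, size_t out_len);

    /* Release all resources held by the context. */
    void llm_free(llm_ctx *ctx);

Hiding the run state behind an opaque handle like this is what lets the same inference core be reused unchanged across platforms, including a PowerPC port.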

Overcoming Technical Challenges

One of the main hurdles was the PowerBook G4's big-endian processor architecture, which conflicted with the little-endian byte order expected by the model checkpoint and tokenizer files. Rossignol had to resolve these byte-ordering issues to make the port functional [2].
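
Those checkpoint and tokenizer files store their values in little-endian order, so on a big-endian G4 each 32-bit value must be byte-swapped as it is read. A minimal sketch of the idea (not Rossignol's actual code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Assemble a little-endian 4-byte sequence into a float, regardless of
       the host's native byte order. */
    static float le32_to_float(const uint8_t b[4]) {
        uint32_t u = (uint32_t)b[0]
                   | (uint32_t)b[1] << 8
                   | (uint32_t)b[2] << 16
                   | (uint32_t)b[3] << 24;
        float f;
        memcpy(&f, &u, sizeof f);  /* reinterpret the bits as a float */
        return f;
    }

    /* Read n little-endian float32 weights from a checkpoint file. */
    static int read_le_floats(FILE *fp, float *out, size_t n) {
        for (size_t i = 0; i < n; i++) {
            uint8_t buf[4];
            if (fread(buf, 1, 4, fp) != 4) return -1;  /* short read */
            out[i] = le32_to_float(buf);
        }
        return 0;
    }

Because the shifts construct the value arithmetically rather than by copying raw bytes, the same code runs correctly on both little- and big-endian machines.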

Performance Benchmarks

To assess the PowerBook G4's performance, Rossignol compared it against a single Intel Xeon Silver 4216 core clocked at 3.2GHz, measuring throughput as tokens generated per second of wall-clock time (sketched below):

  • Xeon core: 26.5 seconds per query, 6.91 tokens per second
  • PowerBook G4 (initial): 4 minutes per query (9 times slower)
  • PowerBook G4 (optimized): 3.5 minutes per query (8 times slower)
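
A tokens-per-second figure of this kind is simply a count of generated tokens divided by elapsed wall-clock time; a minimal sketch of such a measurement, with a trivial stand-in for the real inference step:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for one step of the inference loop; the real
       version would run a full forward pass of the model. */
    static int generate_token(void) {
        volatile double x = 0.0;
        for (int i = 0; i < 1000000; i++) x += i * 1e-9;  /* simulate work */
        return 42;
    }

    static double now_seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        int n_tokens = 0;
        double start = now_seconds();
        while (n_tokens < 256) {
            generate_token();
            n_tokens++;
        }
        double elapsed = now_seconds() - start;
        printf("%d tokens in %.3f s: %.2f tok/s\n",
               n_tokens, elapsed, n_tokens / elapsed);
        return 0;
    }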

The optimization included using vector extensions like AltiVec, which improved performance slightly [2].
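
AltiVec operates on four 32-bit floats per instruction, which maps naturally onto the dot products that dominate transformer inference. A minimal sketch of a vectorized dot product (assumes 16-byte-aligned inputs and a length that is a multiple of four; not Rossignol's actual code):

    #include <altivec.h>

    /* Dot product of two float arrays using AltiVec intrinsics. */
    float dot_altivec(const float *a, const float *b, int n) {
        vector float acc = (vector float){0.0f, 0.0f, 0.0f, 0.0f};
        for (int i = 0; i < n; i += 4) {
            vector float va = vec_ld(0, &a[i]);  /* aligned 16-byte load */
            vector float vb = vec_ld(0, &b[i]);
            acc = vec_madd(va, vb, acc);         /* acc += va * vb, elementwise */
        }
        /* Horizontal sum of the four partial sums. */
        float tmp[4] __attribute__((aligned(16)));
        vec_st(acc, 0, tmp);
        return tmp[0] + tmp[1] + tmp[2] + tmp[3];
    }

Compiled with GCC's -maltivec flag, this performs four multiply-adds per iteration where scalar code would do one.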

Model Selection and Limitations

Due to hardware constraints, Rossignol used the TinyStories models, focusing on the 15-million-parameter (15M) and 110-million-parameter (110M) variants; the PowerBook G4's 32-bit address space ruled out larger models [2].
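
To put that constraint in numbers (assuming 32-bit float weights, as stock llama2.c uses): the 110M model occupies roughly 110 × 10⁶ parameters × 4 bytes ≈ 440 MB, which fits under a 32-bit process's 4 GB ceiling, whereas even the smallest official Llama 2 model, at 7 billion parameters, would need around 28 GB and could never be mapped on this machine.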

Implications and Future Prospects

While the experiment shows that older hardware can run modern LLM inference, the performance is far from practical for everyday use. Still, the demonstration opens up possibilities for repurposing older devices for AI applications, albeit limited ones [2].

Rossignol acknowledges that further significant improvements are unlikely given the hardware's limits. Nevertheless, he views the project as a valuable learning experience in understanding LLMs and how they operate [2].

As AI continues to evolve, this experiment highlights the potential for broader hardware compatibility. However, it also underscores the need for modern, powerful hardware to run cutting-edge AI applications efficiently.
