
Author Topic: I Put Modern LLMs on a 2002 Macintosh - No Internet Required  (Read 431 times)

Protools5LEGuy

  • Staff Member
  • 2048 MB
  • ******
  • Posts: 2851
I Put Modern LLMs on a 2002 Macintosh - No Internet Required
« on: March 20, 2026, 04:16:40 PM »

https://www.youtube.com/watch?v=W0kV_CCzTAM

Quote
This is MacinAI local, a vintage Macintosh AI inference program capable of running a 1.1 billion parameter language model locally on a PowerBook G4 (1GHz, 1GB RAM, Mac OS 9.2) - no internet, no server, no cloud. Just a Mac from 2002 and CDs.

Timestamps:
00:00 - Introduction
01:10 - MacinAI Tool Model (94M param)
04:10 - GPT-2 Model (124M param)
05:10 - SmolLM Model (360M param)
06:45 - Qwen 2.5 Model (500M param)
10:30 - TinyLlama (1.1B param)
33:45 - Conclusion

The inference engine is written entirely in C89 using the Mac Toolbox API, with AltiVec SIMD acceleration for the heavy matrix math. The TinyLlama 1.1B model requires 1.2 GB of weights but the machine only has 1 GB of RAM, so the engine pages transformer layers from the hard drive on demand - 14 layers in memory, 8 streamed from disk per token.
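The paging scheme described there can be sketched roughly as follows. This is a minimal illustration, not the engine's actual code: the struct layout, function names, and toy layer size are hypothetical; only the layer split (14 resident, 8 streamed per token) comes from the video.

```c
#include <string.h>

#define N_LAYERS    22   /* TinyLlama 1.1B has 22 transformer layers */
#define N_RESIDENT  14   /* layers kept permanently in RAM           */
#define LAYER_BYTES (4 * 1024) /* toy size; real layers are tens of MB */

/* One scratch buffer reused for every streamed layer. */
static unsigned char layer_buf[LAYER_BYTES];

/* Hypothetical loader: in a real engine this would fseek()/fread()
 * the layer's weights from the model file on the hard drive. */
static void load_layer_from_disk(int layer, unsigned char *dst)
{
    memset(dst, (unsigned char)layer, LAYER_BYTES); /* stand-in for fread */
}

/* Process one token: resident layers run from RAM; the remaining
 * N_LAYERS - N_RESIDENT layers are streamed from disk on demand. */
int forward_one_token(void)
{
    int layer, streamed = 0;
    for (layer = 0; layer < N_LAYERS; layer++) {
        if (layer >= N_RESIDENT) {
            load_layer_from_disk(layer, layer_buf);
            streamed++;
        }
        /* ... run attention + MLP for this layer ... */
    }
    return streamed; /* 8 disk reads per token, matching the video */
}
```

Since the streamed layers are re-read for every generated token, disk throughput dominates, which is consistent with the ~0.1 tok/s figure for TinyLlama versus ~2 tok/s for the fully-resident smaller models.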

Models demonstrated:

    MacinAI Tool 94M (custom) - 2.86 tok/s
    GPT-2 124M - 2.36 tok/s
    SmolLM 360M Instruct - ~1.5 tok/s
    Qwen2.5 0.5B Instruct - ~2 tok/s
    TinyLlama 1.1B Chat - ~0.1 tok/s (with disk paging)


It can be shipped on a 2-disc CD set with a custom installer - the first LLM ever distributed on CD (if people actually want it).

Export script supports any HuggingFace model (LLaMA-family, GPT-2-family) to the custom .bin format.

Hardware: PowerBook G4 Titanium 1GHz, 1GB RAM, Mac OS 9.2.2
Detailed implementation and downloads: https://oldapplestuff.com/blog/MacinAI-Local/

#MARCHintosh #RetroComputing #MacOS9 #AI #LLM
Logged
Looking for MacOS 9.2.4

Protools5LEGuy

  • Staff Member
  • 2048 MB
  • ******
  • Posts: 2851
Re: I Put Modern LLMs on a 2002 Macintosh - No Internet Required
« Reply #1 on: March 20, 2026, 04:24:07 PM »

https://oldapplestuff.com/blog/MacinAI-Local/

Quote
MacinAI Local: Building a Model-Agnostic LLM Inference Engine for Mac OS 9

Quote
By: Alex Hoopes | Published: March 19, 2026

How I built a complete AI platform (custom C89 inference engine, BPE tokenizer, AltiVec SIMD optimization, and a Python export pipeline) that runs GPT-2, TinyLlama, Qwen, and any HuggingFace model on a 2002 PowerBook G4.

Quote
Technical Reference

Source Language: C89 (ANSI C), compiled with CodeWarrior Pro 5

Target OS: System 7.5.3 through Mac OS 9.2.2

Target CPUs: Motorola 68000, 68030, 68040; PowerPC G3, G4

Supported Model Architectures:

    LLaMA-family: RMSNorm + SwiGLU + RoPE (LLaMA, Mistral, Qwen, Gemma, TinyLlama, SmolLM, StableLM)
    GPT-2-family: LayerNorm + GeLU + learned pos (GPT-2, OPT, Pythia, GPT-J, Falcon, Phi)

Quantization: Float32, Q8_0 (per-group int8, block size 32)
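A per-group int8 format with block size 32 presumably works along the lines of the GGML-style Q8_0 scheme sketched below: each block of 32 weights shares one float scale. The struct layout and function names here are illustrative, not the engine's actual .bin format.

```c
#include <math.h>

#define QK8_0 32  /* block size: 32 weights share one float scale */

typedef struct {
    float scale;          /* max|x| / 127 for this block */
    signed char q[QK8_0]; /* int8 quantized weights      */
} block_q8_0;

/* Quantize one block of 32 float weights to Q8_0. */
void quantize_q8_0(const float *x, block_q8_0 *b)
{
    int i;
    float amax = 0.0f;
    for (i = 0; i < QK8_0; i++) {
        float a = (float)fabs(x[i]);
        if (a > amax) amax = a;
    }
    b->scale = amax / 127.0f;
    for (i = 0; i < QK8_0; i++) {
        float s = (b->scale != 0.0f) ? x[i] / b->scale : 0.0f;
        b->q[i] = (signed char)(s < 0.0f ? s - 0.5f : s + 0.5f); /* round */
    }
}

/* Dequantize back to float (done on the fly during the matmul). */
void dequantize_q8_0(const block_q8_0 *b, float *y)
{
    int i;
    for (i = 0; i < QK8_0; i++)
        y[i] = b->scale * (float)b->q[i];
}
```

This cuts the weight footprint to roughly a quarter of Float32 (1 byte per weight plus one scale per 32 weights), which is what makes the ~100MB weight figure for the 100M-parameter model work out.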

Custom Model: ~100M parameters, 1.1GB Macintosh training corpus, 5,800+ SFT instruction pairs, DPO refinement

Inference Speed (PowerBook G4 1GHz, Q8): 2.66 tok/sec (0.38s/token) for custom model, 0.63 tok/sec for Qwen 0.5B

Memory Usage (100M Q8 model): ~124MB total (100MB weights, 23MB KV cache, 1MB overhead)

AltiVec Speedup: 6.3x over scalar baseline (demo build), up to 7.3x measured on earlier build

BPE Vocabulary: 8,205 tokens (8,192 BPE + 13 special + command tokens)
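A BPE tokenizer of that shape (learned merges applied greedily in rank order) can be sketched like this. The four-entry merge table is a toy stand-in for the engine's 8,192 learned merges, and the function is illustrative, not the actual implementation:

```c
#include <string.h>

/* Toy merge table, listed in training (rank) order.
 * The real vocabulary has 8,192 learned merges plus specials. */
typedef struct { const char *a, *b; } merge_t;
static const merge_t merges[] = {
    { "t", "h" },   /* -> "th"  */
    { "th", "e" },  /* -> "the" */
    { "a", "c" },   /* -> "ac"  */
    { "M", "ac" },  /* -> "Mac" */
};
#define N_MERGES (sizeof merges / sizeof merges[0])

/* Encode: split into single-char tokens, then repeatedly apply the
 * lowest-rank merge whose pair appears, until none applies. */
int bpe_encode(const char *text, char toks[][16], int max_toks)
{
    int n = 0, i, r, changed = 1;
    for (i = 0; text[i] && n < max_toks; i++) {
        toks[n][0] = text[i];
        toks[n][1] = '\0';
        n++;
    }
    while (changed) {
        changed = 0;
        for (r = 0; r < (int)N_MERGES && !changed; r++) {
            for (i = 0; i + 1 < n; i++) {
                if (strcmp(toks[i], merges[r].a) == 0 &&
                    strcmp(toks[i + 1], merges[r].b) == 0) {
                    strcat(toks[i], toks[i + 1]); /* merge the pair */
                    memmove(&toks[i + 1], &toks[i + 2],
                            (size_t)(n - i - 2) * sizeof toks[0]);
                    n--;
                    changed = 1;
                    break;
                }
            }
        }
    }
    return n; /* number of tokens produced */
}
```

With this table, "the" collapses to a single token via "t"+"h" then "th"+"e"; a small vocabulary like 8,192 merges trades longer token sequences for a much smaller embedding table, which matters on a 1GB machine.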
Logged
Looking for MacOS 9.2.4

IIO

  • Staff Member
  • 4096 MB
  • *******
  • Posts: 4818
  • just a number
Re: I Put Modern LLMs on a 2002 Macintosh - No Internet Required
« Reply #2 on: March 26, 2026, 01:38:52 PM »

Quote
The model generates AppleScript to automate Mac OS tasks. Ask it to copy a file, empty the trash, launch an application, or eject a CD, and it writes the script, shows you a confirmation dialog, and executes it via the Open Scripting Architecture. It's a text-activated Macintosh automation tool, not just a chatbot.

IIO's to-do list from 1997:

finally learn apple script
Logged
insert arbitrary signature here

Protools5LEGuy

  • Staff Member
  • 2048 MB
  • ******
  • Posts: 2851
Re: I Put Modern LLMs on a 2002 Macintosh - No Internet Required
« Reply #3 on: March 27, 2026, 08:54:30 AM »

Quote
finally learn apple script

I am sure that with the more than 1.5 GB of RAM a modded Mac OS 9 can use, it would run bigger models more quickly, but he isn't aware we can push memory that far.
Logged
Looking for MacOS 9.2.4

MacTron

  • Staff Member
  • 2048 MB
  • ******
  • Posts: 2120
  • keep it simple
Re: I Put Modern LLMs on a 2002 Macintosh - No Internet Required
« Reply #4 on: March 30, 2026, 07:47:41 AM »

Thank you. Awesome! I never imagined this could ever be possible! The main drawback is the slow speed - as we would expect - and the available models aren't very smart.

Quote
finally learn apple script
I am sure that with the more than 1.5 GB of RAM a modded Mac OS 9 can use, it would run bigger models more quickly, but he isn't aware we can push memory that far.

I don't know for sure. In my experience, the application tries to use the longest contiguous block of RAM, up to 900 MB, but no more, even if more RAM is available.
I tried TinyLlama and it takes up to 9 minutes to respond to a simple question like "Who are you?"
The same question takes 40 seconds with Qwen.
Logged
Please don't PM about things that are not private.

IIO

  • Staff Member
  • 4096 MB
  • *******
  • Posts: 4818
  • just a number
Re: I Put Modern LLMs on a 2002 Macintosh - No Internet Required
« Reply #5 on: March 31, 2026, 04:23:14 PM »

That is why I think AppleScript and Apple Events automation is the most interesting application for this.

Rather than generating HD video with something like chatgpt4, which altogether requires >5 TB of RAM and possibly a year or two to produce a 5-second result on a G4 processor. :) Even a 2012 Mac Pro is totally underpowered for that; it will cost you more money to do it locally than to pay for Hugging Face & co. GPU time.

GPT 3.1 is about 380 GB and will run locally on an M3 Ultra without swapping.
Logged
insert arbitrary signature here