The 14B Q4_K_M needs 9GB, while Q3_K_M is 7.3GB, and you also need some room for context. Still, using `--override-tensor` in llama.cpp might get you a 50% improvement over "naively" offloading layers to the GPU. Or possibly GPT-OSS-20B: it's 12.1GB in MXFP4, but it's a MoE model, so only part of it needs to be on the GPU. On my dedicated 12GB 3060 it runs at 85 t/s with a smallish context. I've also read claims on Reddit that Qwen3 4B 2507 might be better than 8B, because Qwen never released a "2507" update for 8B.
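For reference, the `--override-tensor` trick amounts to pinning the MoE expert tensors in system RAM so the dense layers and KV cache get the GPU. A hypothetical llama.cpp invocation (the model filename and the tensor-name regex are assumptions; check the actual tensor names in your GGUF first):

```shell
# Sketch only: keep the MoE expert tensors (".ffn_*_exps") in system RAM
# and offload everything else to the GPU. Model filename is a placeholder.
llama-server -m gpt-oss-20b-mxfp4.gguf \
  -ngl 99 \
  --override-tensor ".ffn_.*_exps.=CPU" \
  -c 8192   # smallish context to stay inside 12GB VRAM
```

The experts are the bulk of a MoE model's weights but only a few are active per token, so streaming them from RAM hurts much less than streaming dense layers would.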
Haven't tried GPT-OSS-20B yet — the MOE approach is interesting for keeping VRAM usage down while getting better reasoning. 85 t/s on a 3060 is impressive. I'll look into that.
I've been on Qwen3 8B mostly because it was "good enough" for the mechanical stages (scanning, scoring, dedup) and I didn't want to optimize the local model before validating the orchestration pattern itself. Now that the pipeline is proven, experimenting with the local model is the obvious next lever to pull.
The Qwen3 4B 2507 claim is interesting — if the quality holds for structured extraction tasks, halving the VRAM footprint would open up running two models concurrently or leaving more room for larger contexts. Worth testing.
Thanks for the pointers — this is exactly the kind of optimization I haven't had time to dig into yet.
I wasn't sure where I'd seen that "retiring" spiel before, but then I remembered someone was (still is) selling a handmade jewelry website claiming $4.3M revenue and $1.3M profit.
I use an even older MacBook and an even older macOS. Of course, the browsers no longer work with the latest JS, so occasionally, when I need to use some web app, I boot up a Linux VM and do what I need to do. With limited RAM even that's a pain, but it works for now.
Can confirm. I was trying to send the newsletter (with SES) and it didn't work. I was thinking my local boto3 was old, but I figured I should check HN just in case.
I have an RTX 3060 with 12GB VRAM. For simpler questions like "how do I change the modified date of a file in Linux", I use Qwen 14B Q4_K_M, which fits entirely in VRAM. If 14B doesn't answer correctly, I switch to Qwen 32B Q3_K_S, which is slower because it spills into system RAM. I haven't yet tried the 30B-A3B, which I hear is faster and close to 32B in quality. BTW, I run these models with llama.cpp.
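For anyone curious what that looks like in practice, a hypothetical llama.cpp invocation (model filename is a placeholder):

```shell
# -ngl 99 offloads all layers to the GPU; 14B at Q4_K_M (~9GB) fits in 12GB VRAM.
llama-cli -m Qwen3-14B-Q4_K_M.gguf -ngl 99 \
  -p "How do I change the modified date of a file in Linux?"
```

For the 32B that doesn't fit, you'd lower `-ngl` until the allocation succeeds and let the remaining layers run on CPU.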
For image generation, Flux and Qwen Image work with ComfyUI. I also use Nunchaku, which improves speed considerably.
# Runs the DB backup script on Thu at 22:00 -- I download the database backup for a few websites that get new data every week. I do this in case my host bans my account.
# Runs the IP change check on Mon - Sun at 09:00, 10:30, 12:00, 20:00 -- If the power goes out or the router reboots I get a new IP. On the server I use fail2ban and if I log into the admin panel I might get banned for making too many requests. So my IP needs to be "blessed".
# Runs the Let's Encrypt certificate expiry check on Sundays at 11:00 and 18:00 -- I still have a server where I update the certificates by hand.
# Runs the "daily" backup -- Just rsync
# Download Godaddy auction data every day at 19:00 -- I don't actively do this anymore but I used to check, based on certain criteria, for domains that were about to expire.
# Download the sellers.json on the 1st of every month at 19:00 -- I use this to collect data on websites that appear and disappear from the Mediavine and Adthrive sellers.json
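The schedules above could look something like this in a crontab (script names and paths are hypothetical; the daily backup time is assumed since it isn't stated):

```shell
# Hypothetical crontab; fields are: minute hour day-of-month month day-of-week
0 22 * * 4      /home/me/bin/db-backup.sh        # DB backup, Thu 22:00
0 9,12,20 * * * /home/me/bin/ip-check.sh         # IP change check, daily
30 10 * * *     /home/me/bin/ip-check.sh         # ...plus the 10:30 run
0 11,18 * * 0   /home/me/bin/cert-expiry.sh      # Let's Encrypt check, Sun
0 3 * * *       /home/me/bin/daily-backup.sh     # rsync backup (time assumed)
0 19 * * *      /home/me/bin/godaddy-auctions.sh # auction data, daily 19:00
0 19 1 * *      /home/me/bin/sellers-json.sh     # sellers.json, 1st of month
```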
I've known about this issue since Llama 1. I tried it with Llama 2 and Mistral when those models were released. LLMs are not databases.
The test I ran was to ask the LLM about an expired domain of a doctor (an obstetrician). I no longer remember the exact domain, but it was similar to annasmithmd.com. One LLM told me it used to belong to a doctor named Megan Smith. Another got the name right, Anna Smith, but when I asked which specialty, it answered pediatrician.
So the LLM had no clue, but from the name of the domain it could infer (I guess that's why they call it inference) that the "md" part was associated with doctors.
By the way, newer LLMs are very good at making domains more human readable by splitting them into words.
I can answer question 3. Prompt processing (how fast your input is parsed) is mostly compute-bound. Inference (how fast the LLM answers) is mostly memory-bandwidth-bound. So a good CPU might read your question faster, but it will answer pretty much as slowly as a cheap CPU with the same RAM.
I have a Ryzen 3 4100. Just tested Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf with llama.cpp.
CPU-only:
54.08 t/s prompt eval
2.69 t/s inference
---
CPU + 52/65 layers offloaded to GPU (RTX 3060 12GB):
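Numbers like these can be reproduced with llama.cpp's benchmark tool; a hypothetical invocation (model path assumed):

```shell
# pp = prompt processing speed, tg = text generation (inference) speed
llama-bench -m Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf -ngl 52
```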
Renting could be a good choice to get started. I used to rent a g4dn.xlarge instance from AWS (for Stable Diffusion, not LLMs). More affordable options are Runpod and Vast.ai.
I started with a local system using llama.cpp on CPU alone and for short questions and answers it was OK for me. Because (in 2023) I didn't know if LLMs would be any good, I chose cheap components https://hackernews.hn/item?id=40267208.
Since AWS was getting pretty expensive, I also bought an RTX 3060 (12GB), an extra 16GB of RAM (for a total of 32GB), and a superfast 1TB M.2 SSD. The total cost of the components was around €620.
Here are some basic LLM performance numbers for my system: