Hacker News | past | comments | ask | show | jobs | submit | batch12's comments

I have been needing to cut back on my subscription services, too. I also canceled.

Planted a few fruit trees, some strawberries, and vegetables. Every time I water them, I think about automated irrigation.


Is it a specific picture of the face or any picture of it?


This is a call center metric, similar to after call work or first call resolution. I believe this one is average handle time.
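For illustration, average handle time (AHT) is conventionally total handled time divided by calls handled; here's a minimal sketch with made-up numbers (the function name and inputs are illustrative, not from any particular system):

```python
def average_handle_time(talk_s, hold_s, acw_s, calls):
    """AHT = (talk time + hold time + after-call work) / calls handled, in seconds."""
    return (talk_s + hold_s + acw_s) / calls

# Made-up example: 3000s talk, 500s hold, 700s after-call work across 100 calls.
print(average_handle_time(3000, 500, 700, 100))  # 42.0 seconds per call
```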


Sincerely, TH FART


Someone should tell that to the people who publish the gas station mugshot magazines.


Could they have added some swap?


No, I just updated the parent comment. I added -c 4096 to cut down the context size, and now the model loads.

I'm able to get 6-7 tokens/sec generation with 10-11 tokens/sec prompt processing with their model. Seems quite good, actually—much more useful than llama 3.2:3b, which has comparable performance on this Pi.


> I added -c 4096 to cut down the context size

That’s a pretty big caveat. In my experience, using a small context size is only okay for very short answers and questions. The output looks coherent until you try to use it for anything, then it turns into the classic LLM babble that looks like words are being put into a coherent order but the sum total of the output is just rambling.
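For reference, the context-size flag discussed here is passed to llama.cpp on the command line; a sketch (the model path and prompt are placeholders, not taken from the thread):

```shell
# Reduced context window via -c/--ctx-size; ./model.gguf is a placeholder path.
./llama-cli -m ./model.gguf -c 4096 -p "Write a haiku about irrigation."
```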


Thanks for posting the performance numbers from your own validation. 6-7 tokens/sec is quite remarkable for the hardware.


After some more benchmarking, with larger outputs (like writing an entire relatively complex TODO list app), it seems to go down to 4-6 tokens/s. Still impressive.


I decided to run an actual llama-bench run and let it go for the hour or two it needs. I'm posting my full results here (https://github.com/geerlingguy/ai-benchmarks/issues/47), but in short: 8-10 t/s prompt processing and 7.99 t/s tg128, on a Pi 5 with no overclocking. Could probably increase the numbers slightly with an overclock.

You need a fan/heatsink to get that speed, of course; it's maxing out the CPU the entire time.
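A llama-bench run along these lines might look like the following; the model path and thread count are assumptions, not taken from the linked results (pp512/tg128 correspond to the prompt-processing and generation numbers above):

```shell
# Benchmark 512 prompt tokens and 128 generated tokens on 4 threads.
# ./model.gguf is a placeholder path.
./llama-bench -m ./model.gguf -p 512 -n 128 -t 4
```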


For some reason I only get 3-4 tokens/sec. I checked that the CPU isn't throttling or anything.


Sounds like something that could be weaponized. Order a bunch of 'gifts' to be shipped to a target via UPS/FedEx or whichever vendor helpfully pays the tariffs for you. Then your victim has to fight collections or pay up.


It's a cool idea, just beware: I saw some dead kids and some NSFW material among the otherwise interesting content.


Really sorry you had to experience that! I added an NSFW flag. I'm just pulling content randomly by date and didn't know the Archive had that kind of graphic content :(


Lighten up. People spend their time doing lots of things they enjoy regardless of the value others place on their efforts. Instead of projecting embarrassment, go save the world if that makes you happy.

