I love it for the things that I do. It's not comparable to Claude, which is the top coding experience right now, but for typing chunks of code, refactoring, etc., it's more than enough.
I don't have a full CLI agent; I just installed DeepSeek as a plugin in my Sublime editor: some Python code that takes the selected text or file and can do things to it on command. Sometimes I just enter a prompt without a selection so it can create code for me.
All in all, it's enough for me as a lone coder. It doesn't have continuous integration or a team edition, which I don't care about, and for the price (cents per month) I couldn't ask for more.
I’m sorry you have to make do with that setup. I’d upgrade to an M5 Pro 64GB right away. In fact, your old one has no value. I can safely dispose of it for you.
It’s hard to tell at this point. You’ll get two more cores and more memory, but they moved to chiplets, which could hurt performance on this first try. Best to wait for actual benchmarks.
A few things. I intentionally clone the repo, build it locally, and use it as my-omp. This way I can make customisations to oh-my-pi (skills, tools, anything) while retaining the ability to do a git pull from upstream, with cherry-picking if necessary.
I have this in my shell rc.
# bun
export BUN_INSTALL="$HOME/.bun"
export PATH="$BUN_INSTALL/bin:$PATH"
alias my-omp="bun /Users/aravindhsampathkumar/ai_playground/oh-my-pi/packages/coding-agent/src/cli.ts"
and do
1. git pull origin main
2. bun install
3. bun run build:native
every time I pull changes from upstream.
Until yesterday, this process was pure bliss: my own minimal custom system prompt, a minimal AGENTS.md, and a self-curated skills.md. One thing I was wary of when switching from pi to oh-my-pi was the use of the Rust tools (pi-native, via NAPI). Over the last couple of days, whatever changes I pulled from upstream have been causing the models to get confused about which tool to use, and how, when editing/patching files. They get extremely frustrated: I saw 11 iterations of a tool call to edit a single file, after which the model resorted to rewriting the whole file from memory, and we all know how that goes. This may not be a bug in oh-my-pi per se. My guess is that the agent built up its memory around prior usage of the tools, and my updating oh-my-pi changed how they are used. It might be fine if I wiped all agent memory and began again, but I don't want to.
I'm going to be more diligent about pulling upstream changes from now on, and only do so when I can afford a full session memory wipe.
Otherwise, the integrations work like a charm: Exa for search, LSP servers on the local machine, syntax highlighting, steering prompts, and custom tools (using trafilatura to fetch the contents of any URL as markdown, and a calculator instead of making the LLM do arithmetic). I haven't used the IPython integration, nor do I plan to.
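A calculator tool like the one mentioned above can be sketched as a safe arithmetic evaluator. This is an illustrative sketch, not oh-my-pi's actual tool: it walks a parsed expression tree and only permits plain arithmetic, so the LLM can delegate math without the host exposing `eval`.

```python
import ast
import operator

# Whitelist of permitted operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.Mod: operator.mod,
    ast.USub: operator.neg,
    ast.UAdd: operator.pos,
}


def calc(expression: str) -> float:
    """Safely evaluate a plain arithmetic expression like '2 * (3 + 4)'."""

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {ast.dump(node)}")

    return _eval(ast.parse(expression, mode="eval"))


print(calc("2 * (3 + 4)"))  # 14
```

Registered as a tool, the model passes the expression string and gets back an exact number instead of doing the arithmetic token by token.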
I definitely liked the physics simulator you created, and I don't understand why people think working with Claude is mostly prompting. It definitely isn't; it still requires effort. We just don't write the code ourselves anymore, since that's outsourced to Claude, but you are still planning and thinking about the problem you are trying to solve.
It's been fantastic. The mouse puts my hand in a natural "handshake" position, which has cut down on the wrist strain I used to get after long hours of work or browsing.