Great question! We explored local LLMs (including llamafile-type solutions) in our early development, but found that the reasoning capabilities and consistency weren't quite there yet for our specific needs.
That's why we currently optimize for cloud AI models while implementing intelligent plan caching to significantly reduce API costs. This approach gives you the best of both worlds: high-quality execution plans at minimal cost, plus much faster responses for repeated or similar actions.
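To make the idea concrete, here's a minimal sketch of plan caching (the names are hypothetical, not our actual implementation): normalize the requested action, hash it, and reuse a previously generated plan on a hit so the expensive LLM call only happens once per distinct action.

```python
import hashlib
import json

class PlanCache:
    """Cache LLM-generated execution plans keyed by a normalized action."""

    def __init__(self):
        self._store = {}

    def _key(self, action: dict) -> str:
        # Sort keys so logically identical actions hash to the same value.
        canonical = json.dumps(action, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def get_plan(self, action: dict, generate):
        """Return a cached plan, or call `generate` (the expensive LLM call) on a miss."""
        key = self._key(action)
        if key not in self._store:
            self._store[key] = generate(action)
        return self._store[key]

# Usage: the second, identical action never reaches the LLM.
calls = []
def fake_llm_plan(action):
    calls.append(action)
    return {"steps": ["validate", "execute", "confirm"]}

cache = PlanCache()
plan_a = cache.get_plan({"task": "refund", "amount": 10}, fake_llm_plan)
plan_b = cache.get_plan({"amount": 10, "task": "refund"}, fake_llm_plan)  # same action, reordered keys
assert plan_a == plan_b and len(calls) == 1  # only one "API call" was made
```

The key design choice is canonicalizing before hashing, so key order (or other cosmetic differences) doesn't cause cache misses.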
Running a 7B coder model on a laptop with a 4060 is possible, and with very good results. Orra looks like a very good tool to be integrated with any IDE. Take a look at this: https://github.com/huggingface/llm.nvim -- it has a backend option. Ollama exposes a REST API; I think you guys should support it :)
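For anyone curious how thin that integration could be: Ollama's generate endpoint just takes a small JSON payload. A sketch of the request shape, assuming a default local install on port 11434 (the helper name and model tag here are my own choices):

```python
import json
import urllib.request

def build_ollama_request(model: str, prompt: str, host: str = "http://localhost:11434"):
    """Build a request for Ollama's /api/generate endpoint (non-streaming)."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_ollama_request("qwen2.5-coder:7b", "Write a hello-world in Go")
# With a local Ollama running, you'd then do:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```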
The expectation: beautiful architecture diagrams where agents flow like poetry.
The reality: distributed chaos that made me question my life choices.
Who else has lived through:
• The "$$ - thinking!" messages multiplying faster than rabbits
• That one agent that decides to take a vacation mid-transaction
• The classic "rollback fail" that turns your elegant system into abstract art
• Users wondering why it's "so slow..." while you're debugging cascade failures
What started as "just three agents talking" somehow turned into material worthy of a distributed systems PhD thesis. It's wild how quickly "simple agent communication" evolves into a master class in distributed systems, eventual consistency, and prayer-driven development.
But here's the thing - if you've never had your multi-agent system burst into flames, are you even building something interesting?
What's your favourite "expectation vs reality" moment? Share your war stories!
Hey there, my co-founder and I have been hardcore podcast listeners since the early days of podcasting. The days when RSS was king :)
We've always thought podcast discovery was an issue, and we've tried to solve it many times. But we came to the conclusion that we were coming at the problem from the wrong angle.
Instead of helping listeners find podcasts, we think it's better if podcasters find listeners. More importantly, find the right listeners who would immediately care about their show.
That's how we came up with Grro - podcasters can find new listeners by cross promoting on podcasts that their audiences already love.
For now we're only starting with recommendations.
THE TECH
For a given podcast:
Our platform analyses listeners' habits to pinpoint cross-promotion opportunities with podcasts in its niche.
Then we build personalized recommendations using our graph algorithms, ensuring maximum audience overlap.
We also use a bit of NLP to figure out your audience's makeup.
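To make the overlap idea concrete, here's one simple way to score how much two shows' audiences overlap: Jaccard similarity over listener sets. This is a hedged sketch of the general technique, not necessarily the algorithm Grro uses in production.

```python
def audience_overlap(listeners_a: set, listeners_b: set) -> float:
    """Jaccard similarity: shared listeners / total distinct listeners."""
    if not listeners_a and not listeners_b:
        return 0.0
    shared = listeners_a & listeners_b
    total = listeners_a | listeners_b
    return len(shared) / len(total)

# Two shows sharing a third of their combined audience:
a = {"u1", "u2", "u3", "u4"}
b = {"u3", "u4", "u5", "u6"}
score = audience_overlap(a, b)  # 2 shared / 6 distinct = 1/3
```

In a real graph-based system you'd treat podcasts as nodes and shared listeners as weighted edges, then rank a show's cross-promotion candidates by edge weight.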
If you produce a podcast, we would love your feedback on how Grro works for you. We currently require a minimum audience size before we can offer recommendations.
We love docker-compose. Especially how it helps you get an app, with all its dependencies, up and running in no time.
We are familiar with Kubernetes and can manage all the configuration hiccups manually; however, many of our users aren't, and it's frustrating for them to migrate from Compose to Kubernetes without getting bogged down.
A solution that is often mentioned here is Kompose, which we've tried, but it left us with a lot of hoops to jump through. Mainly, we constantly had to patch bits of config here and there. It got really frustrating, fast.
Our idea was to use conventions and defaults to make working between docker-compose and Kubernetes as seamless as we can.
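As an illustration of what "conventions and defaults" means in practice, here's a simplified sketch (not our actual converter) that maps a Compose service to a minimal Kubernetes Deployment, filling gaps with conventional defaults instead of asking the user:

```python
def service_to_deployment(name: str, service: dict) -> dict:
    """Map a docker-compose service to a minimal Kubernetes Deployment,
    filling gaps with conventional defaults instead of extra config."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            # Convention: one replica unless compose's deploy.replicas says otherwise.
            "replicas": service.get("deploy", {}).get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": service["image"],
                        "ports": [
                            # Compose "8080:80" -> containerPort 80
                            {"containerPort": int(p.split(":")[-1])}
                            for p in service.get("ports", [])
                        ],
                    }]
                },
            },
        },
    }

dep = service_to_deployment("web", {"image": "nginx:1.25", "ports": ["8080:80"]})
```

The point is that the user never has to write the selector, labels, or replica count; the convention derives them from the service name and sensible defaults.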
We hope you find this useful, and let us know if you have any questions.
I've personally found the 'Talking to Humans' book to be a great resource for thinking about customer development from the customer's perspective.
My favourite concept there was "walk in your potential customer's shoes". Not rocket science, but it can get forgotten in all the excitement of Lean Canvases etc.
Thanks! I also love that book, as it's full of practical advice. I'd recommend reading Contagious too if you haven't; it really made me realise how important all the offline stuff is for a tech startup :)
Love: Ask people how they found you. Docs are the product. Be ruthlessly iterative.