Hacker News | ezodude's comments

Nice write up.

Love: Ask people how they found you. Docs are the product. Be ruthlessly iterative.


Agreed, it’s a trade-off between an easy solution and more complexity with better reliability.


Hi, yes, you can simply run Orra locally as a self-hosted service using Docker. Just clone the repo and follow the instructions here: https://github.com/orra-dev/orra?tab=readme-ov-file#2-get-or...

Lemme know if you have any issues.


I mean, local LLM, for example: can it be run with llamafile instead of cloud AI?


Great question! We explored local LLMs (including llamafile-type solutions) in our early development, but found that the reasoning capabilities and consistency weren't quite there yet for our specific needs.

That's why we currently optimize for cloud AI models while implementing intelligent plan caching to significantly reduce API costs. This approach gives you the best of both worlds: high-quality execution plans with minimal API costs, plus much faster performance for similar actions.

You might find our documentation on plan caching interesting - it explains how we maximize efficiency: https://github.com/orra-dev/orra/blob/main/docs/plan-caching...
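To illustrate the general idea behind plan caching (a toy sketch, not Orra's actual implementation): key the cache on the *shape* of an action rather than its exact values, so similar requests reuse the expensive LLM-generated plan. All names here are hypothetical.

```python
import hashlib
import json

class PlanCache:
    """Toy plan cache: reuse an LLM-generated execution plan
    when an action with the same shape is requested again."""

    def __init__(self):
        self._cache = {}

    def _key(self, action, params):
        # Key on the action name plus the sorted parameter names,
        # so similar requests with different values still hit the cache.
        shape = sorted(params.keys())
        return hashlib.sha256(json.dumps([action, shape]).encode()).hexdigest()

    def get_or_plan(self, action, params, plan_fn):
        key = self._key(action, params)
        if key not in self._cache:
            self._cache[key] = plan_fn(action, params)  # expensive LLM call
        return self._cache[key]

calls = []
def fake_llm_plan(action, params):
    """Stand-in for a real LLM planning call; records each invocation."""
    calls.append(action)
    return {"steps": [f"do {action}"]}

cache = PlanCache()
cache.get_or_plan("refund", {"order_id": 1}, fake_llm_plan)
cache.get_or_plan("refund", {"order_id": 2}, fake_llm_plan)  # cache hit
print(len(calls))  # the fake LLM planner only ran once
```

The point is the cost model: only the first request with a given action shape pays for an LLM round trip.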

We're always evaluating new LLM options though, so I'd be curious to hear about your specific use case.


Running a 7B coder model on a laptop with a 4060 is possible, with very good results. Orra looks like a very good tool to be integrated with any IDE. Take a look at this: https://github.com/huggingface/llm.nvim -- it has a backend option. Ollama exposes a REST API; I think you guys should support it :)
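For reference, Ollama's `/api/generate` endpoint takes a simple JSON body. A sketch of what an adapter would send, assuming Ollama on its default local port (the model name here is just an example):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False asks for one complete response instead of chunks."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_generate_request("qwen2.5-coder:7b", "Write hello world in Go")

# To actually call it (requires a running Ollama instance):
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
print(json.loads(body)["model"])
```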


Thanks! Will take a look.


Here's the launch blog post for more context: https://outgoing-icecream-a85.notion.site/The-Missing-Glue-L...


The expectation: beautiful architecture diagrams where agents flow like poetry.

The reality: distributed chaos that made me question my life choices.

Who else has lived through:

• The "$$ - thinking!" messages multiplying faster than rabbits

• That one agent that decides to take a vacation mid-transaction

• The classic "rollback fail" that turns your elegant system into abstract art

• Users wondering why it's "so slow..." while you're debugging cascade failures

What started as "just three agents talking" somehow turned into material worthy of a distributed systems PhD thesis. It's wild how quickly "simple agent communication" evolves into a master class in distributed systems, eventual consistency, and prayer-driven development.

But here's the thing - if you've never had your multi-agent system burst into flames, are you even building something interesting?

What's your favourite "expectation vs reality" moment? Share your war stories!


Hey there, my co-founder and I have been hardcore podcast listeners since the early days of podcasting. The days when RSS was king :)

We've always thought podcast discovery was an issue, and we've tried to solve it many times, but we came to the conclusion that we were coming at the problem from the wrong angle.

Instead of helping listeners find podcasts, we think it's better if podcasters find listeners. More importantly, find the right listeners who would immediately care about their show.

That's how we came up with Grro - podcasters can find new listeners by cross-promoting on podcasts that their audiences already love.

For now we're only starting with recommendations.

THE TECH

For a given podcast:

Our platform analyses listeners' habits to pinpoint cross-promotion opportunities with podcasts in its niche.

Then we build personalized recommendations using our graph algorithms, ensuring maximum audience overlap.

We also use a bit of NLP to figure out your audience's makeup.
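To make the overlap idea concrete (a toy sketch, not Grro's actual algorithm), cross-promotion candidates can be ranked by Jaccard similarity between listener sets; the show names and listeners below are made up:

```python
def jaccard(a, b):
    """Share of listeners two shows have in common (intersection over union)."""
    return len(a & b) / len(a | b)

listeners = {
    "go_time":    {"ann", "bob", "cat", "dan"},
    "changelog":  {"bob", "cat", "dan", "eve"},
    "true_crime": {"zoe", "yan"},
}

def recommend(show, catalogue):
    # Rank every other show by audience overlap with `show`.
    return sorted(
        (other for other in catalogue if other != show),
        key=lambda other: jaccard(catalogue[show], catalogue[other]),
        reverse=True,
    )

print(recommend("go_time", listeners))  # changelog overlaps most
```

A real system would work on a much larger graph and weight edges by listening habits, but the ranking principle is the same.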

WOULD LOVE YOUR FEEDBACK

Here's our (no sign in) demo: https://grro.xyz/demo

If you produce a podcast, then we would love your feedback on how Grro works for you. We currently require a minimal audience before we can offer recommendations.


Hello everyone,

We love docker-compose, especially how it helps you get an app, with all its dependencies, up and running in no time.

We are familiar with Kubernetes and can manage all the configuration hiccups manually; however, many of our users aren't, and they find it frustrating to migrate from Compose to Kubernetes without getting bogged down.

A solution that is often mentioned here is Kompose, which we've tried, but it left us with a lot of hoops to jump through. Mainly, we constantly had to patch bits of config here and there. It got frustrating really quickly.

Our idea was to use conventions and defaults to make working between docker-compose and Kubernetes as seamless as we can.
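A minimal sketch of the convention-over-configuration idea (the defaults here are assumptions for illustration, not the tool's real output): map each Compose service to a Kubernetes Deployment, filling in sensible defaults wherever Compose is silent.

```python
def service_to_deployment(name, spec):
    """Map one docker-compose service definition to a minimal
    Kubernetes Deployment manifest, applying defaults where
    the Compose file says nothing."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            # Default to a single replica unless deploy.replicas is set.
            "replicas": spec.get("deploy", {}).get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": spec["image"],
                        # "host:container" port strings -> containerPort
                        "ports": [
                            {"containerPort": int(p.split(":")[-1])}
                            for p in spec.get("ports", [])
                        ],
                    }]
                },
            },
        },
    }

compose_service = {"image": "redis:7", "ports": ["6379:6379"]}
manifest = service_to_deployment("cache", compose_service)
print(manifest["spec"]["template"]["spec"]["containers"][0]["image"])  # redis:7
```

The win is that users never see the boilerplate: labels, selectors, and replica counts are derived by convention instead of being hand-patched.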

We hope you find this useful, and let us know if you have any questions.


Great read.

I've personally found the 'Talking to Humans' book to be a great resource for thinking about customer development from the customer's perspective.

My favourite concept there was "walk in your potential customer's shoes". Not rocket science, but it can get forgotten in all the excitement of Lean Canvases etc.


Thanks! I also love that book, as it's full of practical advice. I really recommend reading Contagious if you haven't because it really made me realise how important all the offline stuff is for a tech startup :)


This podcast goes through some of the highlights of the book: https://changelog.com/gotime/28

I'm reading this now. Less dense than other interpreter/compiler books, and all about the code.

