netghost's comments

TIL about `gum`. I love this and will endeavor to integrate it into something sometime. Thank you for mentioning it!

Oh, you will love all of charmbracelet

https://github.com/charmbracelet


They might have opinions about it, but look at the pay for frontend engineers at the same company. It's not uncommon to see the same seniority paid 20% lower than a backend role.


I see a lot of people say you have to use methodology X, or that methodology Y is worthless.

In the end, I think we each have somewhat different uses for notes: journaling, scratchpads, to-do lists, research, etc.

Take a methodology with a grain of salt. If it doesn't fit, there's a good chance it's solving someone else's problem, but you can always inform your own approach with it.


Looks great on mobile in portrait mode. Bonus points for the audio.


I think you're operating at a scale that is small enough that there's little risk.

You'll be able to iterate if you run into anything that doesn't work. You should, however, be clear on what problem you and your team are solving, and not just "get some RAG".


Sure - I neglected to include the pain point itself. Right now we spend a large amount of time troubleshooting a problem (incident) or working on features related to these two systems, and we rely heavily on our existing internal documentation. Rather than combing through tons of those docs, a RAG chatbot made sense to me, and the team seems to agree. Will move forward; thanks for the input.


Bun ships with lots of tools built in. It has support for bundling JS, HTML, etc. for the browser.

I suspect that if you want the best results or need to hit all the edge cases you'd still want Vite, but Bun probably covers most needs.
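
As a rough sketch (the entry point and output dir here are made up), the built-in bundler can be driven from Bun's JavaScript API:

    // build.ts - run with `bun run build.ts`
    const result = await Bun.build({
      entrypoints: ["./src/index.tsx"], // hypothetical entry point
      outdir: "./dist",
      minify: true,
    });

    // surface bundler diagnostics if anything went wrong
    if (!result.success) {
      for (const log of result.logs) console.error(log);
      process.exit(1);
    }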


Thank you. This looks like a nice improvement on pgtyped, and another good option.

I'm curious if there are any good patterns for dealing with dynamic query building or composing queries?


Per "how to handle dynamic queries", it's admittedly pretty different b/c we're an ORM (https://joist-orm.io/) that "fetches entities" instead of adhoc SQL queries, but our pattern for "variable number of filters/joins" looks like:

const { date, name, status } = args.filter;

await em.find(Employee, { date, name, employer: { status } });

Where the "shape" of the query is static, but `em.find` will drop/prune any filters/joins that are set to `undefined`.

So you get this nice "declarative / static structure" that gets "dynamically pruned to only what's applicable for the current query", instead of trying to jump through "how do I string together knex .orWhere clauses for this?" hoops.
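
For contrast, a rough sketch of the hand-rolled knex version (table and column names made up) ends up looking something like:

    // each optional filter gets wired up imperatively
    let qb = knex("employees").join(
      "employers",
      "employers.id",
      "employees.employer_id"
    );
    if (date) qb = qb.where("employees.date", date);
    if (name) qb = qb.where("employees.name", name);
    if (status) qb = qb.where("employers.status", status);
    const rows = await qb.select("employees.*");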


I haven’t found a good way to handle dynamic queries in pg-typesafe yet.

For now, I type these manually, which is acceptable for my usage as they are pretty rare compared to static queries.
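
By "manually" I mean something like this sketch (plain node-postgres with a hand-written row type; the table and columns are made up):

    import { Pool } from "pg";

    // hand-maintained row type for this one dynamic query
    interface EmployeeRow {
      id: number;
      name: string;
      status: string | null;
    }

    const pool = new Pool();

    async function findEmployees(status?: string): Promise<EmployeeRow[]> {
      const params: unknown[] = [];
      let where = "";
      if (status !== undefined) {
        params.push(status);
        where = `WHERE status = $${params.length}`;
      }
      const res = await pool.query<EmployeeRow>(
        `SELECT id, name, status FROM employees ${where}`,
        params
      );
      return res.rows;
    }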


That seems like a reasonable tradeoff, thanks.


Out of curiosity, the post you linked mentions that it won't work for renames. What's the approach for these and other types of procedural migrations, such as data transformations (i.e. splitting a column, changing a type, etc.)?

With a declarative model, would you run the migration and follow immediately with a one off script?


For both data migrations and renames, there isn't really a one-size-fits-all solution. That's actually true when doing data changes or renames in imperative (incremental) migration tools too; they just don't acknowledge it, and at scale these operations aren't really viable. They inherently involve careful coordination alongside application deploys, which cannot be timed to occur at the exact same moment the migration completes, and you need to prevent the risk of user-facing errors or data corruption from intermediate/inconsistent state.

With row data migrations on large tables, there's also risk of long/slow transactions destroying prod DB performance due to MVCC impact (pile-up of old row versions). So at minimum you need to break up a large data change into smaller chunked transactions, and have application logic to account for these migrations being ongoing in the background in a non-atomic fashion.
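
As a rough illustration of the chunking idea (not a feature of Skeema; the table and columns are made up, using the mysql2 client):

    import mysql from "mysql2/promise";

    // Backfill in small primary-key ranges so each transaction stays short
    // and old row versions don't pile up.
    async function backfillInChunks(conn: mysql.Connection, chunkSize = 1000) {
      let lastId = 0;
      for (;;) {
        // upper bound of the next chunk of ids
        const [rows]: any = await conn.query(
          "SELECT MAX(id) AS maxId FROM (SELECT id FROM users WHERE id > ? ORDER BY id LIMIT ?) t",
          [lastId, chunkSize]
        );
        const maxId = rows[0]?.maxId;
        if (maxId == null) break; // nothing left to process
        // short transaction touching only this chunk
        await conn.execute(
          "UPDATE users SET full_name = CONCAT(first_name, ' ', last_name) WHERE id > ? AND id <= ?",
          [lastId, maxId]
        );
        lastId = maxId;
        await new Promise((r) => setTimeout(r, 100)); // throttle between chunks
      }
    }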

That all said, to answer from a mechanical standpoint of "how do companies using declarative schema management also handle data migrations or renames":

At large scale, companies tend to implement custom/in-house data migration frameworks. Or for renames, they're often just outright banned, at least for any table with user-facing impact.

At smaller scale, yeah you can just pair a declarative tool for schema changes with an imperative migration tool for non-schema changes. They aren't really mutually exclusive. Some larger schema management systems handle both / multiple paradigms.

For MySQL/MariaDB with Skeema in particular, a few smaller-scale data migration approaches are discussed in a separate post, https://www.skeema.io/blog/2024/07/23/data-migrations-impera...


Seems great for really small apps where you want your resource definitions colocated with the code using them. I'd imagine the benefits start to break down as your infrastructure gets more complicated.

The bigger answer is that if you're proficient and happy with CDK or anything else to wire resources up, you're probably not going to see much (if any) benefit.


Until there is a resource you need out of the 150+ AWS services that they don't support, or a new feature of an existing resource...


To be fair, AWS service teams don't always expose all features/options through CloudFormation, and you end up having to hit the API to manage them.


True, I have written my share of CloudFormation custom resources.

Funny anecdote: when I was at AWS, it was faster for an SA to create a Terraform module and have it merged into Terraform than it was for us to wait for AWS to add support for the same resource in CloudFormation. They are getting much better now.
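
For anyone who hasn't written one, the skeleton is mostly just the response protocol; here's a rough sketch (the actual resource logic is a placeholder):

    import * as https from "node:https";

    // Sketch of a Lambda-backed CloudFormation custom resource handler.
    // The "real work" below is a placeholder; the PUT back to event.ResponseURL
    // is the part CloudFormation actually requires.
    export async function handler(event: any) {
      let status = "SUCCESS";
      let reason = "";
      let data: Record<string, string> = {};
      try {
        if (event.RequestType === "Create" || event.RequestType === "Update") {
          // call whatever API CloudFormation doesn't cover yet
          data = { Placeholder: "value" };
        }
        // on Delete: clean up the resource here
      } catch (err) {
        status = "FAILED";
        reason = String(err);
      }

      const body = JSON.stringify({
        Status: status,
        Reason: reason || "See CloudWatch logs",
        PhysicalResourceId: event.PhysicalResourceId ?? event.LogicalResourceId,
        StackId: event.StackId,
        RequestId: event.RequestId,
        LogicalResourceId: event.LogicalResourceId,
        Data: data,
      });

      const url = new URL(event.ResponseURL);
      await new Promise<void>((resolve, reject) => {
        const req = https.request(
          {
            hostname: url.hostname,
            path: url.pathname + url.search,
            method: "PUT",
            headers: { "content-type": "", "content-length": Buffer.byteLength(body) },
          },
          (res) => {
            res.resume();
            res.on("end", resolve);
          }
        );
        req.on("error", reject);
        req.end(body);
      });
    }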


Wonderfully done, thanks for sharing!

