They might have opinions about it, but look at the pay for frontend engineers at the same company. It's not uncommon to see the same seniority level paid 20% less than a backend role.
I see a lot of people say you have to use methodology X, or that methodology Y is worthless.
In the end, I think we just have different uses for notes: journaling, scratchpads, to-do lists, research, etc.
Take a methodology with a grain of salt. If it doesn't fit, there's a good chance it's solving someone else's problem, but you can always inform your own approach with it.
I think you're operating at a scale that's small enough that there's little risk.
You'll be able to iterate if you run into anything that doesn't work. You should, however, be clear on what problem you and your team are solving, and not just "get some RAG".
Sure - I neglected to include the pain point itself. Right now we spend a large amount of time troubleshooting incidents or working on features related to these two systems, and we rely heavily on our existing internal documentation. Rather than combing through tons of those docs, a RAG chatbot made sense to me, and the team seems to agree. Will move forward; thanks for the input.
Per "how to handle dynamic queries", it's admittedly pretty different b/c we're an ORM (https://joist-orm.io/) that "fetches entities" instead of ad hoc SQL queries, but our pattern for "variable number of filters/joins" looks like:
const { date, name, status } = args.filter;
await em.find(Employee, { date, name, employer: { status } });
Where the "shape" of the query is static, but `em.find` will drop/prune any filters/joins that are set to `undefined`.
So you get this nice "declarative / static structure" that gets "dynamically pruned to only what's applicable for the current query", instead of trying to jump through "how do I string together knex .orWhere clauses for this?" hoops.
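To make the pruning concrete, here's a rough standalone sketch of the idea (my own illustration, not Joist's actual implementation): take a statically-shaped filter object and recursively drop any keys whose value is `undefined`, including nested join objects that prune down to nothing.

```typescript
// Sketch of "declarative shape, dynamically pruned" filtering.
// Keys set to undefined are dropped; nested objects (joins) that
// end up empty after pruning are dropped too, skipping the join.
type Filter = { [key: string]: unknown };

function pruneUndefined(filter: Filter): Filter {
  const out: Filter = {};
  for (const [key, value] of Object.entries(filter)) {
    if (value === undefined) continue; // filter not provided for this query
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      const nested = pruneUndefined(value as Filter);
      if (Object.keys(nested).length > 0) out[key] = nested;
    } else {
      out[key] = value;
    }
  }
  return out;
}

// Only `name` was provided, so `date` and the `employer` join both drop out:
const pruned = pruneUndefined({
  date: undefined,
  name: "Jane",
  employer: { status: undefined },
});
console.log(pruned); // { name: 'Jane' }
```

The win is that the query's full shape stays visible in one place in the code, while each individual request only pays for the filters it actually passed.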
Out of curiosity, the post you linked mentions that it won't work for renames. What's the approach for these and other types of procedural migrations, such as data transformations (i.e. splitting a column, changing a type, etc.)?
With a declarative model, would you run the migration and follow immediately with a one off script?
For both data migrations and renames, there isn't really a one-size-fits-all solution. That's actually true of data changes and renames in imperative (incremental) migration tools too; they just don't acknowledge it, and at scale these operations aren't really viable as single migration steps. They inherently involve careful coordination with application deploys, which cannot be timed to land at the exact moment the migration completes, and you need to prevent the risk of user-facing errors or data corruption from intermediate/inconsistent state.
With row data migrations on large tables, there's also risk of long/slow transactions destroying prod DB performance due to MVCC impact (pile-up of old row versions). So at minimum you need to break up a large data change into smaller chunked transactions, and have application logic to account for these migrations being ongoing in the background in a non-atomic fashion.
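As a sketch of that chunking approach (my own illustration, with assumed names; `runBatch` stands in for whatever DB client you use, executing something like a Postgres `UPDATE ... FROM (SELECT id ... LIMIT n FOR UPDATE SKIP LOCKED)` per call):

```typescript
// Driver for a chunked backfill: repeat a small batched UPDATE until no
// rows remain. Each runBatch call commits its own short transaction, so
// old row versions can be vacuumed between batches instead of piling up
// (the MVCC concern above), and locks are held only briefly.
async function backfillInBatches(
  runBatch: (limit: number) => Promise<number>, // returns rows updated
  batchSize = 10_000,
): Promise<number> {
  let total = 0;
  for (;;) {
    const updated = await runBatch(batchSize);
    total += updated;
    if (updated === 0) return total; // nothing left to migrate
    // Optionally sleep here to throttle load on the primary.
  }
}
```

Since the backfill runs in the background over many transactions, the application has to tolerate the half-migrated state the whole time, e.g. by reading from the old column and falling back to the new one (or vice versa) until the backfill completes.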
That all said, to answer from a mechanical standpoint of "how do companies using declarative schema management also handle data migrations or renames":
At large scale, companies tend to implement custom/in-house data migration frameworks. Or for renames, they're often just outright banned, at least for any table with user-facing impact.
At smaller scale, yeah you can just pair a declarative tool for schema changes with an imperative migration tool for non-schema changes. They aren't really mutually exclusive. Some larger schema management systems handle both / multiple paradigms.
Seems great for really small apps where you want your resource definitions colocated with the code using them. I'd imagine the benefits start to break down as your infrastructure gets more complicated.
The bigger answer is that if you're proficient and happy with CDK or anything else to wire resources up, you're probably not going to see much (if any) benefit.
True, I have written my share of CloudFormation custom resources.
Funny anecdote: when I was at AWS, it was faster for an SA to create a Terraform module and have it merged into Terraform than it was for us to wait for AWS to add support for the same resource in CloudFormation. They are getting much better now.