It's the definition of simple that's the problem. For any definition of simplicity you might have, someone has an equal and opposite definition.
Take these two alternatives:
    class UserService {
        PostgresDatabase db;
    }

    class UserService {
        IDatabase db;
    }
There are some coworkers who will veto the first example for being too complex, because it brings Postgres (and its state and connections and exceptions and mappings) into the scope of what otherwise could have been a service concerning Users.
There are some coworkers who will veto the second example for being too complex, because Postgres is all you use for now, and if you really need to use a second database, you can change the code then (YAGNI). Also the Interface gives you a pointless indirection that breaks IntelliSense so you can't just 'click-through' to follow the code flow.
I agree with your comment, but I disagree with both of the example opinions... "complex" is the discussion :D
I heard something that helps frame those discussions better: use "familiar" instead of "simple".
A highly abstract way to access a database table, with an ORM for example, can be simple because everyone is expecting it and knows how to do all the tasks (changing the schema, troubleshooting, managing transactions, etc.).
Doing userService.pgSql("select ....") in the same way can be simple.
One more opinion piece uselessly recommending "simplicity" with no code samples or concrete takeaways.
> It also shows up in design reviews. An engineer proposes a clean, simple approach and gets hit with “shouldn’t we future-proof this?” So they go back and add layers they don’t need yet, abstractions for problems that might never materialize, flexibility for requirements nobody has asked for. Not because the problem demanded it, but because the room expected it.
$100 says the "clean, simple" approach is the one which directly couples the frontend to the backend to the database. Dependencies follow the control flow exactly, so that if you want to test the frontend, you must have the backend running. If you want to test the backend, you must have the database running.
The "abstractions for problems that might never materialize" are your harnesses for running real business logic under unit-test conditions, that is, instantly and deterministically.
If you do the "simple" thing now, and push away pesky "future-proofing" like architecting for testing, then "I will test this" becomes "I will test this later" becomes "You can't test this" becomes "You shouldn't test this."
> The future we are barreling towards is one where the PM is the most important role in a company
Our productivity went up when our PM was absent. The impact was significant enough to be recorded and discussed at the following retro. And it's happened more than once.
We work on software systems. The PM larps as a software professional. When the PM is absent, our standups are rich with discussion about the state of the software and what the next step is. When the PM is present, he is an information sink, not an information source. He asks stupid questions and needs things dumbed down. The developer's goal is to make sure the software works for the customer, now and forever. The PM's goal is to report to his boss, who is even less in touch with what's going on.
There is no glamour in 'make the system fulfil its primary purpose', so the PM will always pull engineering focus away from that.
Israel will bomb a few hospitals and universities, and the US will park some battleships nearby and unfurl a "mission accomplished" banner.
After a few fake declarations of ceasefire, the Mullahs will donate to the Trump Presidential library, Trump will take credit for bringing peace to the Middle East, and then will speak as glowingly about the Mullahs as he does about Putin, Kim Jong Un and MBS.
OpenRouter is the leading place to go to get general purpose models of all sorts. It's fairly popular, and processes tens of trillions of tokens a year.
OpenRouter is valued at >$500m and processes >$100m/year, 5% of which goes to them. Not that large compared to e.g. OpenAI, but it's the largest that doesn't produce its own models & with the largest selection I'm aware of.
Validating during parsing is still parsing; there's a reason why `Alternative f` exists, after all: you have to choose between branches of possibilities and falsehoods. Now consider that there's another kind of validation that happens outside of program boundaries (where broader-than-needed data is being constrained in a callee rather than at the calling site) that should've been expressed as `Alternative f` during parsing instead. That's the main point of the article, but you seem to only focus on the literal occurrence of the word "validation" here and there.
So you are saying that if at a certain point in parsing the only expected terms are 'a', 'b' and 'c', one should not put the corresponding parsed entry in a `char` (after checking it is either of these aka validating), and instead it should be put in some kind of enum type (parsed via `Alternative f`). Right?
You put them however you like, be it in a char or a vector of them, but the bottom line is that your parsed items carry the "sanitized" label that allows you to either tuple-unpack or pattern-match (as long as it's near or literally zero-cost) without performing the same validation ever again for the lifetime of the parsed object. The callees that exclusively expect 'a', 'b' and 'c', and which perform an internal validation step, should be replaced with versions that permit inputs with sanitized labels only.

How you implement the labels depends on the language at hand; in Haskell they can be newtypes or labelled GADTs. But the crucial part is: the "validation" word is systematically pushed to the program boundaries, where it's made part of the parsing interface, with `Alternative f` and sanitization labels acting on raw data. In other words, you collapse validation into a process of parsing where the result value is already being assembled from a sequence of decisions to branch either with one of the possible successful options or with an error.
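A minimal Python sketch of the sanitized-label idea for the 'a'/'b'/'c' case above (the enum and function names are illustrative, not from the thread): parsing branches over the possibilities once, at the boundary, and callees then accept only the parsed type, so they never re-validate:

```python
from enum import Enum

class Tag(Enum):
    """Sanitized label: a value of this type is proof the check already ran."""
    A = "a"
    B = "b"
    C = "c"

def parse_tag(raw: str) -> Tag:
    # The one place where 'a'/'b'/'c' is checked; failure is an error here,
    # success carries the proof in the type.
    try:
        return Tag(raw)
    except ValueError:
        raise ValueError(f"expected one of a/b/c, got {raw!r}")

def handle(tag: Tag) -> str:
    # Callee accepts the sanitized label only -- no internal validation step.
    return {Tag.A: "alpha", Tag.B: "beta", Tag.C: "gamma"}[tag]

assert handle(parse_tag("a")) == "alpha"
```

A Python `Enum` is a weaker guarantee than a Haskell newtype or GADT, but the shape of the pattern is the same.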
> but the crucial part is: the "validation" word is systematically pushed to the program boundaries
Yea, so again. Isn't that freaking obvious?! The author seems to be experienced in Haskell, where this kind of thing is common knowledge, and for some reason it seems to be some kind of revelation to them...
Apparently not, as I keep finding snippets of patterns like this from my coworkers (and I've worked in many companies, including ones that require precision for legal compliance):
    def do_business_stuff(data):
        orders = data.get("orders")
        if not orders:
            return
        for order in orders:
            attr = order.get("attr")
            if attr and len(attr) < 5:
                continue
            ...
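For contrast, a sketch of the same snippet with the validation collapsed into a one-time parse at the boundary (names are hypothetical; note this version rejects malformed entries up front rather than silently skipping them mid-loop):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    attr: str  # guaranteed non-empty and len >= 5 by construction

def parse_orders(data: dict) -> list[Order]:
    # All validation happens here, once, at the program boundary.
    parsed = []
    for raw in data.get("orders") or []:
        attr = raw.get("attr")
        if not isinstance(attr, str) or len(attr) < 5:
            raise ValueError(f"bad order: {raw!r}")
        parsed.append(Order(attr=attr))
    return parsed

def do_business_stuff(orders: list[Order]) -> None:
    for order in orders:
        ...  # business logic can trust order.attr unconditionally
```

Whether to raise or to skip bad entries is a policy choice; the point is that the decision is made in exactly one place.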
The industry's awareness baseline is very low, and it's across tech stacks, Haskell is no exception. I've seen stuff people do with Haskell at 9 to 5 when the only thing devs cared about was to carry on (and preferably migrate to Go), and I wasn't impressed at all (compared to pure gems that can be found on Hackage). So in that sense having the article that says "actually parse once, don't validate everywhere" is very useful, as you can keep sending the link over and over again until people either get tired of you or learn the pattern.
> They help define and navigate the representation of composite things as opposed to just having dynamic nested maps of arbitrary strings.
What would you say to someone who thinks that nested maps of arbitrary strings have maximum compatibility, and using types forces others to make cumbersome type conversions?
If the fields of a structure or the string keys of an untyped map don't match then you don't have compatibility either way. The same is not true for restricting the set of valid values.
edit: To put it differently: To possibly be compatible with the nested "Circle" map, you need to know it is supposed to have a "Radius" key that is supposed to be a float. Type definitions just make this explicit. But just because your "Radius" can't be 0, you shouldn't make it incompatible with everything else operating on floats in general.
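A tiny Python illustration of that edit (names hypothetical): the typed version encodes exactly the knowledge the nested map leaves implicit, while the radius stays a plain float and so remains compatible with everything else that operates on floats:

```python
from dataclasses import dataclass

# Nested-map version: nothing states which keys exist or what types they hold.
circle_map = {"Circle": {"Radius": "2.5"}}  # oops -- is Radius a float or a string?

# Typed version: the same knowledge, made explicit at the definition site.
@dataclass
class Circle:
    radius: float  # a plain float, not a custom NonZeroRadius type

c = Circle(radius=2.5)
assert c.radius * 2 == 5.0  # interoperates with ordinary float code
```

The map version "works" until someone feeds the string `"2.5"` into arithmetic; the type definition just surfaces the contract that was always there.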
I'd wager a fair share of grads don't yet understand:
> The transaction didn't help. Postgres's default isolation level is READ COMMITTED — each statement sees all data committed before that statement started.
> work that could be done efficiently by the RDBMS
the RDBMS? You're only using one? Why not spread the work out a little? Even if you think you write all your queries efficiently, nothing stops your teammates from DoS'ing your efficient queries by writing inefficient queries themselves. Last week our team started piling up write timeouts because another team was modifying one of their tables. Not in their db, in the db.
> Queries become much more convoluted
Please, every ounce of effort invested in ORMs like EF/LINQ is to make code look less like querying and more like plain old object access. For the most part, devs want to work with objects and store objects. If you didn't go the RDBMS route, you wouldn't need EF/LINQs help in decomposing your objects and scattering their parts into separate tables. The least convoluted query possible is to just grab the object you wanted directly.
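A toy illustration of the trade-off, using only the standard library (schema and data invented for the example): the relational route decomposes the object across tables and reads it back with a join, while a document-style store fetches the object whole with a key lookup:

```python
import json
import sqlite3

# Relational route: the object is scattered across tables, so reading it back is a join.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses(user_id INTEGER, city TEXT);
    INSERT INTO users VALUES (1, 'Ada');
    INSERT INTO addresses VALUES (1, 'London');
""")
row = db.execute(
    "SELECT u.name, a.city FROM users u "
    "JOIN addresses a ON a.user_id = u.id WHERE u.id = ?",
    (1,),
).fetchone()

# Document route: the object is stored whole, so fetching it is a key lookup.
store = {1: json.dumps({"name": "Ada", "address": {"city": "London"}})}
obj = json.loads(store[1])

assert (row[0], row[1]) == (obj["name"], obj["address"]["city"])
```

This ignores everything the relational route buys you (ad-hoc queries, constraints, partial updates), which is of course the other side of the argument.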
- “Value capture,” as called out in the article. If new tools make engineers 10x more productive, that should be reflected in compensation
- End employment law workarounds like “unlimited PTO,” where your PTO is still limited in practice, but it’s not a defined or accruing benefit
- Protection against dilution of equity for employees
- A seat at the table for workers, not just managers, in the event of layoffs
- Professional ethics and whistleblower protections. Legally-protected strikes if workers decide to refuse to pursue an ethically or legally dubious product or feature.
I could go on. There are a lot of abuses we put up with because of relatively high salaries, and it is now abundantly clear that the billionaire capital-owning class is dead set on devaluing the work we do to “reduce labor costs.” We can decide not to go along with that.