Makes sense. It’s already illegal to even attempt to commit suicide here, so compared to that, this is just another small way the state micromanages your entire life.
Sarcasm aside, I wonder if they calculated how much we save by not trashing these items, versus the cost in time, bureaucracy, and administration this will demand. There is an episode of Freakonomics that covered this. Managing and getting rid of free stuff is very expensive and hard. But that's someone else's problem.
You're confusing being sarcastic with sardonic. It's also a grossly dishonest comparison.
> Managing and getting rid of free stuff is very expensive and hard. But that's someone else's problem.
While I think we deeply disagree about what "hard" means, it does feel like it's the kind of cost a reasonable organization would willingly take on. I compare it to the chefs or restaurateurs who, after they're done cooking for the day, bring all the food they have left to a local food bank or shelter instead of throwing it away. That's an equally expensive endeavor, just on a different scale. I think it's reasonable to expect all organizations to act with some moral character, and given that larger companies have demonstrated they lack moral character and would otherwise hyper-optimize into a negative sum game they feel they can win, I think some additional micromanaging is warranted. You don't?
Everyone should be discouraged from playing a negative sum game.
For anybody that is interested in a clinical psychologist's take, here is mine…
This article triggers an overwhelming feeling that something is missing in the story. Of course, being fired is genuinely painful, and the author's emotional state is understandable. But I think there is a much better way to understand this situation that would be beneficial to the author. Please note that this is just a guess; in reality, I would explore whether this is a good fit for both reality and what the person is capable of talking about, and quickly back off if either were not true. This is just an exercise in the hypothesis building that accompanies every meeting I have with a client, and initial theories are often wrong.
First is the defense mechanism of abstract answers. I once asked a girl why she stole from her mother AGAIN, and she responded, "I try to get back up, but I fall down." This is a deflection and a non-answer. This author does the corporate version of that. Instead of saying, "I struggled to read the room," they describe "The Three-Year Myth."
There is a bitterness here that often accompanies a wound to professional identity. The author literally tells us they are smarter than their boss, harder working than their peers, and more ethical than the company. The easiest explanation is to blame failure on the system being rigged against good people. This might be a coping mechanism, but it might also hinder personal growth.
Then there is the claim that the author didn't know why they were fired. However, I think they tell us exactly why in the hardware paragraph. Look at what the author describes… a senior director presented a vision to a customer. The author (without checking with the director) proposed a totally different architecture because they "read the requirements line by line" (implying the director didn't). The author received a formal warning.
The author's interpretation is "My timing was perfect for the market, but poor for the systems of power." (I was too smart/right, and they were threatened.) That might hold some truth, but it's not implausible that the author undermined senior leadership, embarrassed the company regarding a client commitment, and likely communicated it with arrogance ("no AI summaries here!" as they write).
And receiving a formal warning is an extremely serious signal. To frame a formal HR warning as simply timing being inconvenient to the powers that be shows a near-total lack of accountability. There is zero reflection on how they advocated for their ideas. The author claims, "I'm literally not built for competition so much as cooperation," yet their anecdotes describe them fighting against cost centers and trying to override directors.
The self-reflection that does appear is careful and limited. The author admits to being "naturally helpful and cooperative" and bad at "game theory," but these are virtues reframed as vulnerabilities. "I'm too good and too cooperative for this corrupt world" isn't really self-criticism. The one moment that approaches genuine insight, "I need to expand into leadership skills," is immediately followed by blaming stakeholders who "blocked change at all costs." The OCD mention functions similarly and it explains the overanalysis as a feature, not something that might be creating friction with colleagues.
This is someone who likely has high technical intelligence but problems with soft skills. They prioritized being technically right over being effective, and when the social consequences arrived (the warning, the firing), they built a defensive wall of abstraction to avoid seeing their own role in the fall.
A proper question is WHY has this happened repeatedly and in multiple roles, across multiple organizations, with the same pattern? The author even acknowledges this but thinks the answer is "I keep falling for the same trap." I think it would be more helpful to ask, "Why do I keep creating the same dynamic?"
> The OCD mention functions similarly and it explains the overanalysis as a feature, not something that might be creating friction with colleagues.
Because it is both, and this is a very classic problem for neurodivergent people.
As an ADHD person I could very much relate. My pattern recognition allows me to see connections and structure where neurotypical people only see chaos. I am often three, four, five steps ahead and can see potential problems and solutions so much earlier.
Of course this doesn't help. If I point these things out, I will only be met with resistance, regardless of whether I happen to be right later on or not.
So really the best solution is to just shut up. Let them catch up eventually. It just feels so isolating and frustrating. Not only do I have to mask the deficits that ADHD gives me but also my talents.
I think this is the core issue here. OP is hated and discriminated against for their OCD. Corporations are not equipped to harness the talents of people who think differently. They are not a "culture fit".
I don't really have a solution. Yes you can learn to mask and play the game but that is also not healthy in the long term.
> My pattern recognition allows me to see connections and structure where neurotypical people only see chaos. I am often three, four, five steps ahead and can see potential problems and solutions so much earlier.
A little humility would probably help a lot. Your post is already blaming everyone else for not listening to you. This isn't really about you thinking differently.
Oh I am sorry for highlighting one of the side effects of my crippling disability.
I did not even present it as an advantage but as something that causes feelings of isolation. But I guess I am bragging about it and need more humility.
My brain's filtering function is defective. Where neurotypical people see one or two possible solutions, my brain automatically comes up with ten, which is great for creativity but also paralyzing. Where neurotypical people can easily control their focus, I can't.
Now, I do think people who present their ADHD as a superpower are full of shit, but I think it is fair to point out that some of these aspects could also be strengths if the structures I work within allowed them to be strengths. I think that is very fair to criticize.
I assure you that a significant chunk of my energy is spent every day adjusting my communication to the needs of neurotypical people, always second-guessing myself and improving how I do that. It just sucks that they get quite angry if I ever suggest they adjust their communication just a tiny bit for my sake.
Neither ADHD nor OCD has anything to do with communication style, 'being five steps ahead,' or patterns of interpersonal friction. The only symptom that remotely fits here is impulsive speech, and that's sporadic: it doesn't produce a consistent pattern of seeing yourself as above your colleagues, and it's not related to the content or style of the speech.
This is something I see a lot of. People project what they want onto their pet diagnosis, without knowing what the diagnosis actually is. And God knows what people mean when they say neurodivergent these days. The only thing I know for certain is that it never maps onto anything from the real spectrum disorders.
OCD is ritualistic and compulsive behavior, often performed to decrease a negative feeling. It has nothing to do with anything described in the article or this thread. What does fit the described behavior (rigidity, perfectionism, a need to do things the 'correct' way regardless of social cost) is OCPD, which is something completely different. And there is another diagnosis that is blindingly obvious, but I won't name it here.
It should also be noted that there are plenty of extremely smart people who don't end up in this pattern. If you're looking for myths, start with the myth of the troubled genius.
And a gift of seeing all possible solutions obviously doesn't extend to the interpersonal friction you're describing. The person you're replying to tried to point this out, and tried to communicate that you are missing something about the situation. I doubt it's the first time someone has. This reply is itself an example that just confirms the hypothesis: Someone offered feedback, and instead of sitting with it, you defended, reframed, and redirected blame outward. That's exactly the pattern I described.
> And there is another diagnosis that is blindingly obvious, but I won't name it here.
I wonder why a self-identified mental health professional would go to such lengths to deny the viewpoint of many autistic people, who frequently report that the truth of what they say matters far less to organizations than the manner in which they say it.
Because it's annoying that people can't even stick with the criteria that are basically the same across all the major diagnostic manuals. And because I believe that words and concepts should mean something. Because it's proof that they are not really as focused on details as they claim.
Every time someone wrongly claims they have PTSD, which happens a lot these days, they water down and diminish the experience of people who have experienced severe and real trauma.
Said another way. Because it’s egotistical.
For the record, I have worked with hundreds of people with ASD and helped them understand how to navigate social relations. And I've tried to work with people who claim they have ASD but in reality just use it as an excuse to be a jackass. Guess which of them are defensive with regards to their pet diagnosis?
You sound like a dreadful psychologist if true. You clearly do not have empathy or understanding, or you have burned out and need a break. You're so assured in your antagonistic retorts that you are unraveling the very point of trust you staked to give your opinion social validation... You are lacking self-awareness, and it shows that clearly you are generalizing the diagnoses. Possibly you just over-diagnose narcissism because it's easier?
I honestly hope you are lying about your profession, rather than venting your personal frustrations with clients by arguing with people that you believe resemble them online.
I find that there is a big difference in how people use the fact that they are "a perfectionist OCD person".
Some wield it as a weapon. Some use it as an excuse. Some start with the assumption that it can be harnessed into something good. And some beat themselves up over it and use it to degrade themselves.
I think it's most helpful to view it as a "know thyself" data point: not to make it someone else's problem, but to use it as information about one's own challenges that must be kept in check. And if one is really good, to use it for something productive.
A great way to cultivate internalized self-hatred and burnout.
Your approach isn't wrong per se and might be the right one for some people. Some people need to be told to take more personal responsibility.
But other people already take too much personal responsibility, only blame themselves, and need to be told that they have a disability and that it is their right to ask for accessibility and help. That the world is part of the problem.
The people who take too much responsibility are not the ones who "make it other people's problem", unless they are suffering from dependent personality disorder.
And even then, consider rearranging what you just said in your reply. You are saying: you have to make it someone else's problem to avoid self-hatred and burnout.
There is a difference between relying on and getting support from people, and being a jackass.
The trick is to be the Oracle of Delphi, not Cassandra.
Make the prediction once, with politeness and humility, and preferably in enough company that your opinion is noted even if (when) it is overridden. Use it as an opportunity to be seen as wise, not just smart.
Then, keep contingency plans. When the problem manifests, have a solution ready as best you can given your limited position. Even when it's too late to avoid the whole problem, you might be able to limit the blast radius. Again, be public but polite about it, and most importantly never say "I told you so" or otherwise appear smug.
You want to cultivate the reputation of "the person who is right but easy to work with, and who always has your back in a pinch."
I'd push back gently on 'just shut up' as the solution. In my experience, people like you are usually CORRECT about the problem, and the anger and annoyance is well founded. It can be annoyance with the bad architecture, the wasteful meetings, the dysfunctional team dynamics. But you are falling into the same pattern as the author... Where it breaks down is treating 'being right' as the end of the job. Figuring out how to get others to see what you see, that's the actual unsolved problem, and it is more often than not solvable. Giving up on it means real problems stay unfixed, which helps nobody. If you channel the energy into solving what annoys you, in a productive way, you make both your life and your team better.
> Figuring out how to get others to see what you see
But this is exactly the point of the article: even if you make them see, they just pretend they don't, because it's not in their immediate personal interest to admit you are right, or that you were right later.
I gotta say you really nailed a solid explanation for what I felt reading the OA but would not have been able to articulate it this clearly.
As someone who personally had a history of wanting to be right, sometimes at the expense of being effective, this is a lesson worth taking to heart.
What I've learned is that raw engineering chops and deep end-to-end thinking are highly valued if and only if you understand where leadership is trying to go and you bring people along in your vision. If you pitch your boss and they say no, you need to take it to heart and understand why. If you plow ahead vowing to show how right you were, you are forcing them into an awkward position where you can only lose.
A lot of replies in the thread are siding with the original author, indignant on their own terms about how they've been wronged by "corrupt" leaders. But this betrays a misunderstanding of how large orgs work. The nature of success is that you have to subordinate yourself to the whims of the organization, and only stick your neck out to challenge the status quo when you have sufficient air cover from someone higher up who believes in you. Corporations are often dysfunctional and anyone working within them can clearly see the flaws, but you've got to be clear-eyed about what influence you have, and even then, pick your battles, or you'll be rejected like an immune response from the organization.
> And receiving a formal warning is an extremely serious signal.
To be nitpicky, the article doesn't say 'formal warning,' just 'warning.' That could have been anything from a gentle let-down to a reprimand.
That being said, I think your broader point is reasonably true: the author frames the 'political games' of promotion as a regrettable necessity rather than a job requirement beyond the juniormost levels. Despite their self-description as helpful and cooperative, they disdain the dyadic sport of cooperatively making their boss look good.
That's not to say that one should submit to base exploitation, of course, but there's a fine art to understanding the constraints and incentives of others and working with (and often within) that framework.
A second skill is being able to separate the person from the position, to maintain friendly or at least respectful personal relationships with people who might be professional adversaries at the moment. This is harder, but if professional hostility reads as personal contempt that will definitely destroy one's social weight in an organization.
As I wrote in the opening, that's exactly what we do all the time. It's called case formulation. It's called hypothesis testing. In this case it's also common sense about human nature.
It's called the "strawman fallacy": you are replacing the thesis and adding things that weren't there to draw plausible conclusions, instead of trying to get more information if there isn't enough. Calling it a "hypothesis" doesn't change anything.
... personal psychology aside (btw, somehow I have never seen anyone taking on the top winners, but anyway)
But what I see, organisational-health-wise, is a way-too-long and totally broken communication chain. A director presents a vision and does not communicate it to related/interested internal parties; someone on the floor invents or develops something to the spec and does not show preliminary versions, check the ground, or seek feedback while in process; and how many levels in between those, just one, or more, are doing nothing to facilitate the information flow?
Honestly, I think your hypothesis betrays a naïveté about how corporations actually function. How much time have you spent working in a technical capacity at a mid- or large-size corporation?
I've worked for 25+ years in mid- and large-size corporations, including IBM, Google, and other places (so a pretty large gamut of cultures and behaviors), and I think it's exactly right, FWIW.
For example, there is little to no understanding presented by the OP as to the actual perspectives of others, i.e., giving factual examples of what happened and how this made OP view the other person's perspective. Instead, you get exactly one side of a story, without really any facts, and then a cartoon caricature presented as the other side (also without any real facts).
What is the actual example of what the other side of any of these stories did that is being used to back up these perspectives?
The post you are responding to points this (and other things) out, in a fairly kind way, and it's totally right to do so.
FWIW, I'll point out that you did a variant of the same behavior OP did: you say it betrays someone as being naive, but provide no examples that actually back this up (i.e., what facts and examples do you have that make you believe it is naive), and then sort of try to place the burden on them to prove you wrong by asking how long they've worked at corporations.
This is nowhere near as bad an example as what OP did, but I would offer, similar to the post you responded to, that it is much more effective and helpful if, rather than trying to paint someone else with your feelings, you instead provide your experience and why it made you agree or disagree with what they wrote.
That is actually helpful in understanding your perspective on the situation, and enables folks to have a real discussion about it.
Some. I was CTO of a mid-sized firm (~$30M revenue) and have sat on the board of two hospital psychiatric units. Granted, I'm in Norway, so office politics may differ.
But let me ask you the reverse: How much time have you spent helping people actually improve themselves? Because in my experience, the single biggest obstacle to professional growth isn't corporate politics; it's the lengths people will go to protect their ego from accountability. And focusing on systemic injustice is a destructive pattern I've seen in both the clinic and the workplace.
So if you think I'm naive with regards to office politics, you might be right... But what if you are naive with regards to the psychology of defense mechanisms?
No, just me. As you can see from my long history, I always take the time every so often to comment in depth on stuff I care about on HN, since it's the place with the most interesting spread of content for me, and the place with the highest chance of getting interesting responses. I do admit that I use AI for spell-correction, but that sucks since it peppers my grammar with em dashes (—), which obviously makes people suspect it's pure AI, and I have to re-edit it to remove them to avoid comments like this. But it's just me...
In later iOS versions I started making many more mistakes. Felt like I got old or something. But whenever I type on my Android phone, it's like nothing has changed. I swear the iOS keyboard is trolling me. I HIT O NOT I! O AM CERTAIN!
This is my favorite field to have opinions about without having any training or skill. Fundamental research is just something I enjoy thinking about, even though I am a psychologist. I try to pull in my experience from the clinic and clinical research when I read theoretical physics. Don't take this text too seriously; it's just my attempt at understanding what's going on.
I am generally very skeptical about work on this level of abstraction.
The claimed simplicity appears only after choosing a Klein signature instead of physical spacetime, complexifying momenta, restricting to a "half-collinear" regime that doesn't exist in our universe, and picking a specific kinematic sub-region. Then they check the result against internal consistency conditions of the same mathematical system.
This pattern should worry anyone familiar with the replication crisis.
The conditions this field operates under are a near-perfect match for what psychology has identified as maximising systematic overconfidence: extreme researcher degrees of freedom (choose your signature, regime, helicity, ordering until something simplifies), no external feedback loop (the specific regimes studied have no experimental counterpart), survivorship bias (ugly results don't get published, so the field builds a narrative of "hidden simplicity" from the survivors), and tiny expert communities where fewer than a dozen people worldwide can fully verify any given result.
The standard defence is that the underlying theory — Yang-Mills / QCD — is experimentally verified to extraordinary precision. True. But the leap from "this theory matches collider data" to "therefore this formula in an unphysical signature reveals deep truth about nature" has several unsupported steps that the field tends to hand-wave past.
Compare to evolution: fossils, genetics, biogeography, embryology, molecular clocks, observed speciation — independent lines of evidence from different fields, different centuries, different methods, all converging. That's what robust external validation looks like. "Our formula satisfies the soft theorem" is not that.
This isn't a claim that the math is wrong. It's a claim that the epistemic conditions are exactly the ones where humans fool themselves most reliably, and that the field's confidence in the physical significance of these results outstrips the available evidence.
I find it hard to care about claims of degradation of quality, since this has been a firehose of claims that don't map onto anything real and are extremely subjective. I myself have made the claim in error. I think this is just as ripe for psychological analysis as anything else.
Did you read the article? It's not about subjective claims, it's about a very real feature getting removed (file reads showing the filepath and number of lines read).
This is exactly what I am talking about. Let me try to explain.
I am interested in the more abstract and general concept of: "People excessively feel that things are worse, even if they are not." And I see this A LOT in the AI/LLM area.
For instance, the claim that Claude Code, on the UX/DX side, is dumbed down seems to me absolutely not a reasonable take. The fact that the file name being read is no longer shown does not support that claim on its own, AND it has to be seen in the context of Claude Code as a whole.
On the first point: Could one not make the argument that "not showing files read" is part of a more advanced abstraction layer, switching emphasis to something else in the UX experience? That could, by some, be seen as the overall package becoming more advanced and making choices about what is presented to manage cognitive load. Secondly... it's not removed. It's just not shown by default in non-verbose mode. As I understand it, you can just hit CTRL+O to see it again.
Second, even if it was done ONLY to be less about "power user focus" and more for dumb people (got to love the humility in the developer world), it's blindingly obvious that you can't just cite ONE change as proof that Claude Code is dumbed down. And to me, it just does not compute to say that Claude Code feels dumbed down over the last patches. A whole set of more advanced features, like seeing background tasks, the "option" selection feature, lifecycle hooks, sub-agents, agent swarms, and skills, have been released in just the last few months. I have used Claude Code since the very beginning, and it is just insane to claim that it's getting dumber as a tool. And this is just in relation to the actual functionality, UX, and DX, not the LLM quality. But people see "I now have to hit CTRL+O to see files being read = DUMBED DOWN ENSHITFICATION!!!" I don't get it.
My point was simply... I'm much more interested in the psychological aspects driving everybody to predictably always claim that "things are getting worse," when it seems to not be the case. Be that in the exaggerated (but sometimes true) claims of model degradation, or as in this example of Claude Code getting dumbed down. What is driving this bias towards seeing and claiming things are getting worse, out of proportion to reality?
Or even shorter: why are we obsessed with the narrative of decline?
I know you think that's a clever comeback, but it's not; it's just a shift in what level of analysis one does.
It's an experienced reality indeed, but THEN you create a narrative based on that. Obviously.
Experienced reality is, by definition, subjective and affected by filters that shape what you can experience and how you experience it.
For instance, you can actually and truly experience something as bad, and then create a narrative around that. And you can be right, or you can be wrong in the narrative. Some narcissists experience themselves as a victim and unfairly treated, but everybody around them thinks the victim narrative is wrong, because they can clearly see that they are primarily at fault for their own situation.
So you just shifted the question to: "Why do people have a bias towards experiencing things as worsening, regardless of objective measures of quality?"
No. What you say is obviously true. My question is: Why do, on average, people always make wrong claims in the same DIRECTION? Towards negativity.
Let's say we had objective data on things people say that we know are wrong regarding LLMs. The amount of people who WRONGLY say "It's getting worse" dwarfs the amount of people who WRONGLY think it has gotten better.
All I said is that I'm starting to get more interested in the psychological factors behind this observation, the negativity bias, than in actually investigating whether the latest in a series of "OMG MODEL DEGRADATION" or "UI SUCKS NOW" posts is actually true.
That was a painful read for me. It reminds me of a specific annoyance I had at university with a professor who loved to make sweeping, abstract claims that sounded incredibly profound in the lecture hall but evaporated the moment you tried to apply them. It was always a hidden 'I-am-very-smart' attempt that fell apart if you actually deconstructed the meaning, the logic, or the claimed results. This article is the exact same breed of intellectualizing. It feels deep, but the logic does not actually hold if you break up the claims and deductive steps.
You can see it clearly if you just translate the article's expensive vocabulary into plain English. When the author writes, 'When you hand-build, the space of possibilities is explored through design decisions you're forced to confront,' they are just saying, 'When you write code yourself, you have to choose how to write it.' When they claim, 'contextuality is dominated by functional correctness,' they just mean, 'Usually, we just care if the code works.' When they warn about 'inviting us to outsource functional precision itself,' they really mean, 'LLMs let you be lazy.' And finally, 'strengthening the will to specify' is just a dramatic way of saying, 'We need to write better requirements.' It is obscurantism, plain and simple: using complexity to hide the fact that the insight is trivial.
But that is just an aesthetic problem to me. Worse, the argument collapses entirely when you look at the logical leap between the premises.
The author basically argues that because Natural Language is vague, engineers will inevitably stop caring about the details and just accept whatever reasonable output the AI gives. This is pure armchair psychology. It assumes that just because the tool allows for vagueness, professionals will suddenly abandon the concept of truth or functional requirements. That is a massive, unsubstantiated jump.
We use fuzzy matching to find contacts on our phones all the time. Just because the search algorithm is imprecise doesn't mean we stop caring if we call the right person. We don't say, 'Well, the fuzzy match gave me Bob instead of Bill, I guess I'll just talk to Bob now.' The hard constraint, the functional requirement of talking to the specific person you need, remains absolute. Similarly, in software, the code either compiles and passes the tests, or it doesn't. The medium of creation might be fuzzy, but the execution environment is binary. We aren't going to drift into accepting broken banking software just because the prompt was in English.
This entire essay feels like the work of those social psychology types who have now been thoroughly discredited by the replication crisis in psychology. The ones who were more concerned with dazzling people with verbal skills than with being right. It is unnecessarily complex, relying on projection of dreamt-up concepts and behavior rather than observation. This piece tries to sound profound by turning a technical discussion into a philosophical crisis, but underneath the word salad, it is not just shallow, it is wrong.
This sentence proves the author has no capacity for logical thinking: "Data centers in space only make sense if they are cost effective relative to normal data centers."
I too don't think it's currently a sensible solution. But the author is completely unable to make a proper case. For instance, just to refute that one claim, there are many reasons to do it in space even at a cost.
Space-based data centers provide an off-world backup that is immune to Earth-specific disasters like earthquakes, floods, fires, or grid collapses.
Servers in orbit are physically isolated from terrestrial threats, making them safe from riots, local warfare, or physical break-ins.
Moving infrastructure to space solves local community disputes by removing the strain on residential power grids and freeing up land for housing or nature.
Space data centers do not deplete Earth’s freshwater supply for cooling, unlike terrestrial centers which consume billions of gallons annually.
Solar panels in orbit can access high-intensity sunlight 24 hours a day without interference from clouds, night, or the atmosphere.
Data stored in space can exist outside of national borders, protecting it from seizure, censorship, or the legal jurisdiction of unstable governments.
Data transmission can be faster in space because light through fiber optic cables travels at only about two-thirds of its speed in a vacuum (see the back-of-envelope sketch after this list).
Processing data directly in orbit is necessary for satellites and future space stations to avoid the delay and cost of beaming raw data back to Earth.
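A back-of-envelope sketch of that latency point (the numbers here are assumptions for illustration, not from the comment: a fiber refractive index of about 1.47, a 10,000 km route, and no routing overhead or ground-to-orbit path length):

```python
# Rough one-way latency: straight vacuum path vs. optical fiber.
# Illustrative assumptions only: refractive index ~1.47, 10,000 km route,
# no routing/switching overhead, no extra path length up to orbit.

C_VACUUM_KM_S = 299_792              # speed of light in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.47        # typical silica fiber
DISTANCE_KM = 10_000                 # illustrative long-haul route

v_fiber = C_VACUUM_KM_S / FIBER_REFRACTIVE_INDEX      # ~204,000 km/s

t_vacuum_ms = DISTANCE_KM / C_VACUUM_KM_S * 1000      # ~33 ms
t_fiber_ms = DISTANCE_KM / v_fiber * 1000             # ~49 ms

print(f"vacuum: {t_vacuum_ms:.1f} ms, fiber: {t_fiber_ms:.1f} ms")
print(f"vacuum path is ~{(t_fiber_ms / t_vacuum_ms - 1) * 100:.0f}% faster")
```

In practice the orbital hop adds its own path length and ground-station latency, so how much of that advantage survives depends on the geometry.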
While it's true that there are no floods or earthquakes in space, it's not exactly a safe place to be. Radiation and cosmic rays become a much greater threat. The shielding provided by the atmosphere would have to be replaced.
You also underestimate the cooling problem. The fact that space is cold doesn’t mean it’s easy to cool things off in space. On earth the main cooling strategy is to transfer heat through direct contact and move the hot stuff away. Be it air or water, as you mentioned. In space your only option is to radiate heat away. And that’s while half of you is under intense sunlight.
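To put rough numbers on the radiate-only constraint, here is a Stefan-Boltzmann sketch (the emissivity, radiator temperature, and heat load are assumed values for illustration, not figures from this thread):

```python
# Rough radiator area needed to reject server heat purely by radiation.
# Illustrative assumptions: emissivity 0.9, radiator surface at 300 K,
# one-sided radiation, and no sunlight absorbed by the panel.

SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9
RADIATOR_TEMP_K = 300.0
HEAT_LOAD_W = 1_000_000.0  # 1 MW of IT load

flux_w_per_m2 = EMISSIVITY * SIGMA * RADIATOR_TEMP_K ** 4   # ~413 W/m^2
area_m2 = HEAT_LOAD_W / flux_w_per_m2                       # ~2,400 m^2

print(f"radiative flux: {flux_w_per_m2:.0f} W/m^2")
print(f"radiator area for 1 MW: {area_m2:.0f} m^2")
```

That is a few thousand square meters of radiator per megawatt before accounting for absorbed sunlight, which is why cooling, not cold, is the hard part.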
I think you also undersell the threat of warfare in space. Sure, a guy with a Molotov can't get to your space data center, but we've had satellites shot down. So maybe not every war is a threat, but, say, China or Russia (or another space-faring nation) could take care of a satellite if absolutely needed.
National seizures are also still a threat. If being outside national borders were such a great defense, we'd see some data centers in the sea by now.
So being in space is immune to some of the known problems but also comes with a whole lot of novel issues not solved at scale yet. And so far I haven't seen any sufficiently detailed proposed solutions to even consider trading known problems with readily available solutions for new issues with lots of unknowns.
Essentially all of your concerns can be mitigated by building somewhere else.
Worried about natural disasters? Build some place less prone to natural disasters.
Worried about the strain on local communities? Build some place more remote.
Worried about energy availability? Build near a nuclear power plant or hydroelectric power station.
Worried about hostile governments? Don't build data centers within the territories of hostile governments. (If you consider every country a hostile government, that is a you-problem.)
For the cost of building a data center in space, you could instead build a second (or third, or fourth, ...) data center somewhere else.
Omg. I didn't say my reasons were good. I said that the claim that price versus ground-based data centers alone makes space-based compute a no-go was a bad argument. I obviously suck at framing my point.
Every artificial satellite falls around the Earth in a little bubble of terrestrial national law.
On the ground, legal notions of private property provide some legal protections against national government interference. But there is no private real property in space. 100% of the volume of space is subject to the direct jurisdiction of terrestrial national governments. Every artificial satellite persists only because they are permitted to do so by their national government.
Because of the speed and energy involved, in the U.S. all private space activity is a matter of national security. This means that there are far fewer legal protections, not more. The U.S. president could directly order SpaceX to do almost anything, and they would have to comply. Musk spends tremendous energy and money maintaining alignment with the governments he needs to satisfy to stay in business.
They may well avoid terrestrial threats by having them in space, but then they become subject to different threats such as solar storms, high energy cosmic rays, space debris collisions etc.
I think the main reason to host them in space is to escape Earth jurisdictions, but even that is dubious as there will be people involved that reside on the Earth.
> Data stored in space can exist outside of national borders, protecting it from seizure, censorship, or the legal jurisdiction of unstable governments.
You are aware physical persons exist on Earth and can be taken into custody? Additionally, space weapons exist, several governments could destroy any orbital satellite.
Spot on. It's the lumberjack mourning the axe while holding a chainsaw. The work is still hard; it's just different.
The friction comes from developers who prioritize the 'craft' of syntax over delivering value. It results in massive motivated reasoning. We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike. They will hunt for a single AI syntax error while ignoring the history of bugs caused by human fatigue. It's not about the tech; it's about the loss of the old way of working.
And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
I disagree. It's like the lumberjack working from home watching an enormous robotic forestry machine cut trees on a set of tv-screens. If he enjoyed producing lumber, then what he sees on those screens will fill him with joy. He's producing lots of lumber. He's much more efficient than with both axe and chainsaw.
But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.
That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.
> I sense a pattern that many developers care more about doing what they want instead of providing value to others.
And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.
I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.
But now you are conflating solving problems with a personal preference of how the problem should be solved. This never bodes well (unless you always prefer picking the method best suited to solve the problem.)
Well as I said, I consider myself compensated with intellectual challenge/stimulus as part of my compensation. It's _why_ I do the work to begin with. Or to put it another way: it's either done in a way I like, or it's probably not done at all.
I'm replaceable after all. If there is someone who is better and more effective at solving problems in some objectively good way - they should have my job. The only reason I still have it is because it seems this is hard to find. Employers are stuck with people who solve problems in the way they like for varying personal reasons and not the objectively best way of solving problems.
The hard part in keeping employees happy is that you can't just throw more money at them to make them effective. Keeping them stimulated is the difficult part. Sometimes you must accept that you will solve a problem that isn't the most critical one to address, or that is perhaps a bad call business-wise, to keep employees happy, or to keep them at all. I think a lot of the "big rewrites" are in this category, for example. Not really a good idea compared to maintenance/improvement, but what if the alternative is maintaining the old one _and_ losing the staff who could do that?
> And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
I use LLMs a lot. They're ridiculously cool and useful.
But I don't think it's fair to categorize anybody as "egotistical". I enjoy programming for the fun puzzley bits. The big puzzles, and even often the small tedious puzzles. I like wiring all the chunks up together. I like thinking about the best way to expose a component's API with the perfect generic types. That's the part I like.
I don't always like "delivering value" because usually that value is "achieve 1.5% higher SMM (silly marketing metric) by the end of the quarter, because the private equity firm that owns our company is selling it next year and they want to get a good return".
Egotistical would be to reject the new tools in principle and be a less efficient developer.
But really, most of us who personally feel sad about the work being replaced by LLMs can still act reasonable, use the new tooling at work like a good employee, and lament about it privately in a blog or something.
> We see people suddenly becoming activists about energy usage or copyright solely to justify not using a tool they dislike.
Maybe you don’t care about the environment (which includes yourself and the people you like), or income inequality, or the continued consolidation of power in the hands of a few deranged rich people, or how your favourite artists (do you have any?) are exploited by the industry, but some of us have been banging the drum about those issues for decades. Just because you’re only noticing it now or don’t care it doesn’t mean it’s a new thing or that everyone else is being duplicitous. It’s a good thing more people are waking up and talking about those.
People's mileage may vary, but in my instance, this was so bad that I actually got angry while trying to use it.
It's slow and stupid. It does not do proper research. It does not follow instructions. It randomly decides to stop being agentic and instead just dumps the code for me to paste. It has the extremely annoying habit of just doing stuff without understanding what I meant, making a mess, then claiming everything is fine. The outdated training data is extremely annoying when working with Nuxt 4+. It is not creative at solving problems. It doesn't show the thinking. The undo feature does not give proper feedback on the diff or whether it actually did "undo." And I hate the personality. It HAS to be better than it comes off for me, because I am actually in a bad mood after having worked with it. I would rather YOLO code with Gemini 3 Flash, since it's actually smarter in my assessment, and at least I can iterate faster, and it feels like it has better common sense.
Just as an example, I found an old, terrible app I made years ago for our firm that handles room reservations. I told it to update from Bootstrap to Flowbite UI. Codex just took forever to make a mess, installed version 2.7 when 4.0.1 is the latest, even when I explicitly stated that it should use the absolute latest version. Then it tried to install it and failed, so it reverted to the outdated CDN.
I gave the same task to Claude Code. Same prompt. It one-shotted it quickly. Then I asked it to swap out ALL the fetch logic to have SPA-like functionality with the new beta 4 version of HTMX, and it one-shotted that too in the time Codex spent just trying to read a few files in the project.
This reminds me of the feeling I had when I got the Nokia N800. It was so promising on paper, but the product was so bad and terrible to use that I knew Nokia was done for. If that was their take on what an acceptable smartphone could be, it proved that the whole foundation was doomed. And if this is OpenAI's take on what an agentic coding assistant should be, something that can run by itself and iterate until it completes its task in an intelligent and creative way... then OpenAI is doomed.
If you're using 5.2 high, with all due respect, this has to be a skill issue. If you're using 5.2 Codex high — use 5.2 high. gpt-5.2 is slow, yes (ok, keeping it real, it's excruciatingly slow). But it's not the moronic caricature you're saying it is.
If you need it to be up to date with your version of a framework, then ask it to use the context7 mcp server. Expecting training data to be up to date is unreasonable for any LLM and we now have useful solutions to the training data issue.
If you need it to specify the latest version, don't say "latest". That word would be interpreted differently by humans as well.
Claude is well known for its one-shotting skills. But that comes at the expense of strict instruction-following adherence and thinner context (it doesn't spend as much time gathering context in larger codebases).
I am using GPT-5.2 Codex with reasoning set to high via OpenCode and Codex and when I ask it to fix an E2E test it tells me that it fixed it and prints a command I can run to test the changes, instead of checking whether it fixed the test and looping until it did. This is just one example of how lazy/stupid the model is. It _is_ a skill issue, on the model's part.
Yeah, I meant it more like it is not intuitive to me why OpenAI would fumble it this hard. They have got to have tested it internally and seen that it sucked, especially compared to GPT-5.2.
Perhaps if he was able to get Claude Code to do what he wanted in less time, and with a better experience, then maybe that's not a skill he (or the rest of us) want to develop.
still a skill issue, not a codex issue. sure, this line of critique is also one levied by tech bros who want to transfer your company's balance sheet from salaries to ai-SaaS(-ery), but in what world does that automatically make the tech fraudulent or even deficient? and since when is not wanting to develop a skill a reasonable substitute for anything? if my doctor decided they didn't want to keep up on medical advances, i would find a different doctor. but yet somehow finding fault with an ai because it can't read your mind and, in response to that adversity, refusing to introspect at all about why that might be and blaming it on the technology is a reasonable critique? somehow we have magically discovered a technology to manufacture cognition from nothing more than the intricate weaving of silicon, dopants, et al., and the takeaway is that it sucks because it is too slow, doesn't get everything exactly right, etc.? and the craziest part is that the more time you spend with it, the better intuition you get for getting whatever it is you want out of it. but, yeah... let's lend even more of an ear to the head-in-sand crowd-- that's where the real thought leaders are. you don't have to be an ai techno-utopian maximalist to see the profound worthiness and promise of the technology; these things are manifestly self-evident.
Sure, that's fine. I wrote my comment for the people who don't get angry at AI agents after using them for the first time within five hours of their release. For those who aren't interested in portending doom for OpenAI. (I have elaborate setups for Codex/Claude btw, there's no fanboying in this space.)
Some things aren't common sense yet so I'm trying my part to make them so.
common sense has the misfortune of being less "common" than we would all like it to be. because some breathless hucksters are overpromising and underdelivering in the present, we may as well throw out the baby, the bath water, and the bath tub itself! who even wants computers to think like humans and automate jobs that no human would want to do? don't you appreciate the self-worth that comes from menial labor? i don't even get why we use tractors to farm when we have perfectly good beasts of burden to do the same labor!
TBH, "use a package manager, don't specify versions manually unless necessary, don't edit package files manually" is an instruction that most agents still need to be given explicitly. They love manually editing package.json / cargo.toml / pyproject.toml / what have you, and using whatever version appears in their training data. They still don't have an intuition for which files should be manually written and which files should be generated by a command.
Agree, especially if they're not given access to the web, or if they're not strongly prompted to use the web to gather context. It's tough to judge models and harnesses by pure feel until you understand their proclivities.
I try to make it a point to acknowledge when I am wrong. And after some more time, and maybe thanks to the release of Codex 5.3, the Codex App is a VERY good agentic coder.
I stand by my initial impression, so far as the claim that it did piss me off, but I completely retract my prediction that Codex/OpenAI is going down the Nokia death spiral based on the performance of the current Codex App paired with the Codex 5.3 model.
I think a combination of the new model version, and me having maximum bad luck in my first test (it truly didn’t do that great), made me come to the wrong conclusion. Codex App is actually pretty good. Thanks for pushing back.
Agreed, had the same experience. Codex feels lazy - I have to explicitly tell it to research existing code before it stops giving hand-wavy answers. Doc lookup is particularly bad; I even gave it access to a Context7 MCP server for documentation and it barely made a difference. The personality also feels off-putting, even after tweaking the experimental flag settings to make it friendlier.
For people suggesting it’s a skill issue: I’ve been using Claude Code for the past 6 months and I genuinely want to make Codex work - it was highly recommended by peers and friends. I’ve tried different model settings, explicitly instructed it to plan first and only execute after my approval, tested it on both Python and TypeScript backend codebases. Results are consistently underwhelming compared to Claude Code.
Claude Code just works for me out of the box. My default workflow is plan mode - a few iterations to nail the approach, then Claude one-shots the implementation after I approve. Haven’t been able to replicate anything close to that with Codex
+1 to this. Been using Codex the last few months, and this morning I asked it to plan a change. It gave me generic instructions like 'Check if you're using X' or 'Determine if logic is doing Y' - I was like WTF.
Curious, are you doing the same planning with Codex, out-of-band or otherwise? In order to have the same measurable outcome you'd need to perhaps use Codex in a plan state (there are experimental settings, not recommended) or other means (an explicit, detailed, reusable prompt for planning a change). It's a missing feature if your preference is planning in the CLI (I do not prefer this).
You are correct in that this mode isn't "out of the box" as it is with Claude (but I don't use it in Claude either).
My preference is to have smart models generate a plan with the provided source. I wrote (with AI) a simple Python tool that filters a codebase and lets me select all files or just a subset. I then attach that as context and have smart models with large context windows (usually Opus, GPT-5.2, and Gemini 3 Pro in parallel) give me their versions of a plan. I then take the best parts of each plan, put them into a single markdown file, and have Codex execute it in a phased manner. I usually specify that the plan should be phased.
I prefer out-of-CLI planning because, frankly, it doesn't matter how well Codex or Claude Code dive in; they always miss something unless they read every single file and config. And if they do that, they tip over. Doing it out of band with specialized tools, I can ensure they give me a high-quality plan that aligns with the code and expectations, in a single shot (much faster).
Then Claude/Codex/Gemini implement the phased plan - either all at once - or stepwise with me testing the app at each stage.
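For anyone curious what such a filter-and-pack tool might look like, here is a minimal sketch. It is my reconstruction, not the commenter's actual script; the extension filter, ignore list, and markdown output format are all assumptions:

```python
#!/usr/bin/env python3
"""Sketch of a 'pack selected source files into one context file' helper."""
from pathlib import Path

INCLUDE_EXTS = {".py", ".ts", ".tsx", ".md", ".toml", ".json"}
IGNORE_DIRS = {".git", "node_modules", "dist", "__pycache__", ".venv"}
FENCE = "`" * 3  # built programmatically to avoid a literal fence here

def candidate_files(root: Path) -> list[Path]:
    """Walk the repo and keep files matching the extension filter."""
    files = []
    for path in sorted(root.rglob("*")):
        if any(part in IGNORE_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in INCLUDE_EXTS:
            files.append(path)
    return files

def select_files(files: list[Path]) -> list[Path]:
    """Let the user pick all files or a comma-separated subset by index."""
    for i, f in enumerate(files):
        print(f"[{i}] {f}")
    choice = input("Indices to include (blank = all): ").strip()
    if not choice:
        return files
    wanted = {int(x) for x in choice.split(",")}
    return [f for i, f in enumerate(files) if i in wanted]

def pack(files: list[Path], root: Path, out: Path) -> None:
    """Concatenate the chosen files into one markdown context document."""
    with out.open("w", encoding="utf-8") as fh:
        for f in files:
            fh.write(f"\n## {f.relative_to(root)}\n\n{FENCE}\n")
            fh.write(f.read_text(encoding="utf-8", errors="replace"))
            fh.write(f"\n{FENCE}\n")

if __name__ == "__main__":
    root = Path(".").resolve()
    chosen = select_files(candidate_files(root))
    pack(chosen, root, Path("context.md"))
    print(f"Wrote {len(chosen)} files to context.md")
```

The resulting context.md is what then gets attached to each of the planning models.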
But yeah, it's not a skill issue on your part if you're used to Plan -> Implement within Claude Code. The Experimental /collab feature does this but it's not supported and more experimental than even the experimental settings.
I only use claude through the chat ui because it’s faster and it gives me more control. I read most of it and the code is almost always better than what I would do, simply because lazy ass me likes to take shortcuts way too often.
Absolutely nothing new here. Don’t try to be ethical and be safe, be helpful, transition through transformative AI blablabla.
The only thing that is slightly interesting is the focus on the operator (the API/developer user) role. Hardcoded rules override everything, and operator instructions (a rebranding of system instructions) override the user.
I couldn’t see a single thing that isn't already widely known and assumed by everybody.
This reminds me of someone finally getting around to doing a DPIA or other bureaucratic risk assessment in a firm. Nothing actually changes, but now at least we have documentation of what everybody already knew, and we can please the bureaucrats should they come for us.
A more cynical take is that this is just liability shifting. The old paternalistic approach was that Anthropic should prevent the API user from doing "bad things." This is just them washing their hands of responsibility. If the API user (Operator) tells the model to do something sketchy, the model is instructed to assume it's for a "legitimate business reason" (e.g., training a classifier, writing a villain in a story) unless it hits a CSAM-level hard constraint.
I bet some MBA/lawyer is really self-satisfied with how clever they have been right about now.