
> generates far more in economic activity

The LVT focus on profit above all else is why it is an unsatisfactory solution.

If the most important goal for every plot of land is to maximize its economic activity & tax revenue, that's going to be a miserable place to live.

All of the space uses that make a town nice to live in are also underutilizing the land, if the sole goal is to maximize economic activity.

Open space with native vegetation, parks, playgrounds, sports fields of all kinds like soccer fields, community pools, hiking trails... all of that is wasted land if viewed through the lens of LVT maximization. All that space should be crammed full of high-rise offices and apartments.


> Open space with native vegetation, parks, playgrounds, sports fields of all kinds like soccer fields, community pools, hiking trails... all of that is wasted land if viewed through the lens of LVT maximization.

No, because all of that would be open to the community. It would only be wasted if it were locked up for use by certain people.


> It is really easy for a Downtown to go into a downward spiral if you take away the ability of people to get there.

I've seen this sad downward spiral multiple times, it is not a good outcome.

I used to live not too far from a town with a mellow but nice downtown center. Not a huge draw, but it had many nice small restaurants and shops, and there was steady business. Seeing a profit machine, the city filled all the streets with parking meters. Turns out that while it was a nice area, it wasn't so irreplaceable, so nobody goes anymore. Business collapsed. I drove by last summer and everything is closed; the parking meters sit empty.

The same is happening now to the downtown one town over. It used to be a very vibrant, awesome downtown, although small. Bars, restaurants, music venues, fun shops. I was there every night for something or other. Loved it. Easy free parking all around. Some of the parking lots have office buildings now and the city lots have become very expensive. Much less activity there now, about a third of the venues are closed and the remaining ones are saying they can't last very long with fewer people going. While in its heyday this downtown was far more active than my first example, it turns out it wasn't irreplaceable either. People just don't go anymore.

Point is that this tactic works only when the downtown is so established and so dense that people are going to go anyway even if parking is hard, like Manhattan.


> Point is that this tactic works only when the downtown is so established and so dense that people are going to go anyway even if parking is hard, like Manhattan.

Or catering to cars has now made it less attractive for people to go and hang out there, even if it is easier to drive to.


> Some of the parking lots have office buildings now and the city lots have become very expensive. Much less activity there now, about a third of the venues are closed and the remaining ones are saying they can't last very long with fewer people going.

Sounds to me like they found a valuable use for their land and got rid of the low-value things you really enjoyed...

Of course to you this is bad, and the city lost the nightlife, but that might or might not be worse overall. It seems to be a denser area despite it, for whatever that's worth.


> Isn't this true of any greenfield project?

That is a good point and true to some extent. But IME with AI, both the initial speedup and the eventual slowdown are accelerated vs. a human.

I've been thinking that one reason is that while AI coding generates code far faster (on a greenfield project I estimate about 50x), it also generates tech debt at an astonishing rate.

It used to be that tech debt started to catch up with teams after a few years, but with AI-coded software it's only a few months before the tech debt is so massive that it slows progress down.

I also find that I can keep the tech debt in check by using the bot only as a junior engineer: I specify the architecture and the design precisely, down to object and function definitions, and I only let the bot write one function at a time.

That is much slower, but also much more sustainable. I'd estimate my productivity gains are "only" 2x to 3x (instead of ~50x), but tech debt accumulates no faster than in a purely human-coded project.
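
To make that concrete, the skeleton I write myself looks roughly like this (a made-up toy example, not code from a real project), and the bot only gets to fill in the marked body:

    # Illustrative toy example only; names and project are made up.
    from dataclasses import dataclass

    @dataclass
    class Item:
        sku: str
        quantity: int

    def reserve(inventory: dict[str, Item], sku: str, count: int) -> bool:
        """Reserve `count` units of `sku`.

        Return False and leave `inventory` untouched if the SKU is unknown
        or the stock is insufficient; otherwise decrement and return True.
        """
        # --- only this body is delegated to the bot ---
        item = inventory.get(sku)
        if item is None or item.quantity < count:
            return False
        item.quantity -= count
        return True

Everything above the marked comment (the types, the signature, the contract in the docstring) is mine; the bot never invents structure, it only fills in bodies against that spec.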

This is based on various projects only about one year into it, so time will tell how it evolves longer term.


In your experience, can you take the tech-debt-riddled code and ask Claude to come up with an entirely new version that fixes the tech debt/design issues you've identified? Presumably there's a set of tests that you'd keep the same, but you could leverage the power of AI in greenfield scenarios to just do a rewrite (while letting it see the old code). I don't know how well this would work; I haven't got to the heavy tech debt stage in any of my projects, as I do mostly prototyping. I'd be interested in others' thoughts.

I built an inventory tracking system as an exercise in "vibe coding" recently. I built a decent spec in conversation with Claude, then asked it to build it. It was kind of amazing - in 2 hours Claude churned out a credible looking app.

It looked really good, but as I got into the details the weirdness really started coming out. There are huge functions that interleave many concepts, and there are database queries everywhere. Huge amounts of duplication. It makes it very hard to change anything without breaking something else.
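
To give a flavor of what I mean, here is a contrived sketch of the pattern (not the actual generated code): a single handler doing validation, raw SQL, stock updates, and notifications all inline:

    # Contrived sketch of the pattern, not the actual generated code.
    import sqlite3

    def handle_checkout(db_path, user_id, cart, email_client):
        conn = sqlite3.connect(db_path)   # raw DB access right in the handler
        cur = conn.cursor()
        total = 0.0
        for sku, qty in cart.items():
            row = cur.execute(
                "SELECT price, stock FROM items WHERE sku = ?", (sku,)
            ).fetchone()
            if row is None or row[1] < qty:        # validation mixed in
                conn.close()
                return {"ok": False, "error": "unavailable: " + sku}
            total += row[0] * qty
            cur.execute(
                "UPDATE items SET stock = stock - ? WHERE sku = ?", (qty, sku)
            )
        conn.commit()
        conn.close()
        email_client.send(user_id, "Order total: %.2f" % total)  # side effects too
        return {"ok": True, "total": total}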

You can of course focus on getting the AI to simplify and condense. But that requires a good understanding of the codebase. Definitely no longer vibe-coded.

My enthusiasm for the technology has really gone in a wave. From "WOW" when it churned out 10k lines of credible looking code, to "Ohhhh" when I started getting into the weeds of the implementation and realising just how much of a mess it was. It's clearly very powerful for quick and dirty prototypes (and it seems to be particularly good at building decent CRUD frontends), but in software and user interaction the devil is in the details. And the details are a mess.


At the moment, good code structure for humans is good code structure for AIs and bad code structure for humans is still bad code structure for AIs too. At least to a first approximation.

I qualify that because hey, someone comes back and reads this 5 years later, I have no idea what you will be facing then. But at the moment this is still true.

The problem is, people see the AIs coding, I dunno, what, 100 times faster minimum in terms of churning out lines? And it just blows out their mental estimation models, and they substitute an "infinity" for the capability of the models, either today or in the future. But they are not infinitely capable. They are finitely capable. As such they will still face many of the same challenges humans do... no matter how good they get in the future. Getting better will move the threshold, but it can never remove it.

There is no model coming that will be able to consume an arbitrarily large amount of code goop and integrate with it instantly. That's not a limitation of Artificial Intelligences, that's a limitation of finite intelligences. A model that makes what we humans would call subjectively better code is going to produce a code base that can do more and go farther than a model that just hyper-focuses on the short-term and slops something out that works today. That's a continuum, not a binary, so there will always be room for a better model that makes better code. We will never overwhelm bad code with infinite intelligence because we can't have the latter.

Today, in 2026, providing the guidance for better code is a human role. I'm not promising it will be forever, but it is today. If you're not doing that, you will pay the price of a bad code base. I say that without emotion, just as "tech debt" is not always necessarily bad. It's just a tradeoff you need to decide about, but I guarantee a lot of people are making poor ones today without realizing it, and will be paying for it for years to come no matter how good the future AIs may be. (If the rumors and guesses are true that Windows is nearly in collapse from AI code... how much larger an object lesson do you need? If that is their problem they're probably in even bigger trouble than they realize.)

I also don't guarantee that "good code for humans" and "good code for AIs" will remain as aligned as they are now, though it is my opinion we ought to strive for that to be the case. It hasn't been talked about as much lately, but it's still good for us to be able to figure out why a system did what it did, and even if it costs us some percentage of efficiency, having the AIs write human-legible code into the indefinite future is probably still a valuable thing to do, so we can examine things if necessary. (Personally, while there will probably be some efficiency gain from letting the AIs make their own programming languages, I doubt it'll ever be more than some more-or-less fixed percentage gain rather than a step-change in capability that we're missing out on... and if it is, maybe we should miss out on that step-change. As the moltbots prove, whatever fiction we may have told ourselves about keeping AIs in boxes is total garbage in a world where people will proactively let AIs out of the box for entertainment purposes.)


Perhaps it depends on the nature of the tech debt. A lot of the software we create has consequences beyond a particular codebase.

Published APIs cannot be changed without causing friction on the client's end, which may not be under our control. Even if the API is properly versioned, users will be unhappy if they are asked to adopt a completely changed version of the API on a regular basis.

Data that was created according to a previous version of the data model continues to exist in various places and may not be easy to migrate.

User interfaces cannot be radically changed too frequently without confusing the hell out of human users.


> ask claude to come up with an entirely new version that fixes the tech debt/design issues you've identified?

I haven't tried that yet, so not sure.

Once upon a time I was at a company where the PRD specified that the product needs to have a toggle to enable a certain feature temporarily. Engineering implemented it literally, and it worked perfectly. But it was vital to be able to disable the feature too, which should've been obvious to anyone. Since the PRD didn't mention that, it was not implemented.

In that case, it was done as a protest. But AI is kind of like that, although out of sheer dumbness.

The story is meant to say that with AI it is imperative to be extremely prescriptive about everything, or things will go haywire. So doing a full rewrite will probably work well only if you manage to have very tight test coverage for absolutely everything. Which is pretty hard.
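
By tight coverage I mean plain behavior-pinning tests that stay untouched while the code underneath gets rewritten; a made-up sketch of the idea (pytest style, with a hypothetical stand-in for the old code):

    # Hypothetical example: `legacy_total` stands in for whatever old
    # behavior the rewrite must preserve exactly.

    def legacy_total(amount: float, loyal: bool) -> float:
        """Stand-in for the old implementation being rewritten."""
        return round(amount * 0.9, 2) if loyal and amount >= 100 else amount

    def test_discount_applies_at_threshold():
        assert legacy_total(100.0, loyal=True) == 90.0

    def test_no_discount_below_threshold():
        assert legacy_total(99.99, loyal=True) == 99.99

    def test_loyalty_flag_ignored_when_false():
        assert legacy_total(500.0, loyal=False) == 500.0

The rewrite only gets accepted once a suite like that passes unchanged.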


Take Claude Code itself. It's got access to an endless amount of tokens and many (hopefully smart) engineers working on it and they can't build a fucking TUI with it.

So, my answer would be no. Tech debt shows up even if every single change made the right decisions, and this type of holistic view of projects is something AIs absolutely suck at. They can't keep all that context in their heads, so they are forever stuck in a local maximum. That has been my experience at least. Maybe it'll get better... any day now!


> well, your background just changed, didn't it?

The First Amendment is still in the constitution and has not been formally repealed (yet).

So, no.


See also this metric, showing how fast the US is falling away from a democracy:

https://www.ft.com/content/b474855e-66b0-4e6e-9b73-7e252bd88...


Well yes, but the US was supposed to have three separate branches of government to keep each other in check.

Unfortunately, it turns out that in practice two of the three don't actually have any power at all when push comes to shove.


I think Congress does have power, it's just chosen not to wield it to control this presidency.

It took a long period of voting for totalitarians. Checks and balances worked by design: preventing immediate radical changes. And they also worked by design: allowing changes gradually over time if people keep voting for the same thing. And now it's here.


No such studies can exist, since AI coding has not been around long enough.

Clearly AI is much faster and good enough to create new one-off bits of code.

Like I tend to create small helper scripts for all kinds of things both at work and home all the time. Typically these would take me 2-4 hours and aside from a few tweaks early on, they receive no maintenance as they just do some one simple thing.

Now with AI coding these take me just a few minutes, done.

But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed.

I've also been running a couple experiments vibe-coding larger apps over the span of months and while initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special case exceptions that a human wouldn't have done that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code.

How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see.


Absolutely!

On my desk I have an HP-28S and use it nearly every day.

In the office I have a newer HP, which isn't quite as nice to use as the 28S but still quite good.

The ergonomics of these are so far superior to using software apps that there is no comparison.


> Tesla secrecy is likely due to avoid journalists taking any chance they can to sell more news by writing an autonomous vehicles horror story

That would mean their secret data, if published, supports writing horror stories about it.

OTOH if the data turned out to show spectacularly safe operations, that would shut off any possible negative articles.

Of all people, how likely is it that Musk is intentionally avoiding good publicity by keeping a lot of data secret?


> I think the humans in London at least do not adjust their behaviour for the perceived risk!

Sure they do, all humans do. Nobody wants to get hurt and nobody wants to hurt anyone else.

(Yes, there are a few exceptions, people with mental disorders that I'm not qualified to diagnose; but the vast majority of people don't.)

Humans are extremely good at moderating behavior to perceived risk, thank evolution for that.

(This is what self-driving cars lack; machines have no self-preservation instinct.)

The key part is perceived, though. This is why building the road to match the true level of risk works so well. No need for artificial speed limits or policing; if people perceive the risk as what it truly is, they adjust instinctively.

This is why it is terrible to build wide 4 lane avenues right next to schools for example.

