Veserv's comments | Hacker News

When did we enter the twilight zone where bug trackers are consistently empty? The limiting factor in bug reduction is remediation, not discovery. Even developer smoke testing usually surfaces bugs far faster than they can be fixed, let alone actual QA.

To be fair, the limiting factor in remediation is usually finding a reproducible test case, which a vulnerability necessarily is. But I would still bet most systems have plenty of bugs in their trackers that are accompanied by a reproducible test case and are still bottlenecked on remediation resources.

This is of course orthogonal to the fact that patching systems that are insecure by design into being secure has so far been a colossal failure.


Bugs are not the same as (real) high severity bugs.

If you find a bug in a web browser, that's no big deal. I encounter bugs in web browsers all the time.

You figure out how to make a web page that when viewed deletes all the files on the user's hard drive? That's a little different and not something that people discover very often.

Sure, you'll still probably have a long queue of ReDoS bugs, but the only people who think those are security issues are people who enjoy the ego boost of having a CVE in their name.


Eh, with browsers you can tell the user to go to hell if they don't like a secure but broken experience. The problem in most software is that you commit to bad ideas and then have to upset people who have higher status than the software dev that would tell them to go to hell.

That might have been true pre-LLMs, but you can literally point an agent at the queue until it's empty now.

You literally cannot, since ANY changes to code tend to introduce unintended (or at least not explicitly requested) new behaviors.

Eventual convergence? Assuming each defect fix has a 30% chance of introducing a new defect, we keep cycling until done?

Assuming you can catch every new bug it introduces.

Both assumptions being unlikely.

You also end up with a code base you let an AI agent trample until it is satisfied: ballooned in complexity and full of redundant, brittle code.


You can have an AI agent refactor and improve code quality.

But has any code been vetted and verified to see if this approach works? This whole agentic code quality claim is an assertion, but where is the literal proof?

If it can be trained with reinforcement learning then it will happen

It’s agents all the way down - until you have liability. At some point, it’s going to be someone’s neck on the line, and saying “the agents know” isn’t going to satisfy customers (or in a worst case, courts).

Sure it can. It's not like humans aren't already deflecting liability or moving it to insurance agencies.


Why would it converge?

Which still means a single person with Claude can clear a queue in a day versus a month with a traditional team.

Your example must have incredible users or really trivial software.

I’ve had mine on a Ralph loop no problem. Just review the PR.

The fact that KiCad still has a ton of highly upvoted missing features and the fact that FreeCAD still hasn't solved the topological naming problem are existence proofs to the contrary.

Shouldn't be downvoted for saying this. There are active repos where this is happening.

"BuT ThE LlM iS pRoBaBlY iNtRoDuCiNg MoRe BuGs ThAn It FiXeS"

This is an absurd take.


It probably is introducing more bugs, because I think some people don't understand how bugs work.

Very, very rarely is a bug a mistake. As in, something unintentional that you just fix and boom, done.

No no. Most bugs are intentional, and the bug part is some unintended side effect that is a necessary, but unforeseen, consequence of the main effect. So, you can't just "fix" the bug without changing behavior, changing your API, changing guarantees, whatever.

And that's how you get the 1-month one-liner. Writing the one line is easy. But you have to spend a month debating whether you should do it, and what will happen if you do.


Oh geez. Legal did not give them the go-ahead to make the unqualified statement “We are not aware of any successful spyware attacks”; they had to explicitly qualify it with “mercenary”.

There are more weasel words. "We are not aware" means they actually don't know whether such an attack was successful. And "successful": what is the definition of success? Maybe attackers got access but didn't find anything interesting?

Apple is digging itself into a hole.


I think you are; the words make perfect sense. They know of a lot of attack attempts, and so far they have no reason to believe any were successful. Success can mean a lot of different things, so why list it all out (were they able to extract data, install malicious software, encrypt files with ransomware, delete any data, etc.)?

They have a legal department carefully directing what they say. In a court of law, their lawyers will successfully argue that they are beholden to only the precise letter of their statement. Are you arguing that their lawyers are incompetent and imprecise in their wording? If so, what evidence do you have that their lawyers are incompetent?

In light of the correct legal interpretation of their words, being only the specific letters, we can see that your interpretation is incorrect.

> They know of a lot of attack attempts

No, their statement says nothing about attack attempts.

> so far they have no reason to believe any were successful

No, their statement says nothing about their belief, only their explicit knowledge. Their statement says nothing about their investigation practices or whether they even attempted to investigate and learn about attacks. Their statement says nothing about non-mercenary attacks.

Their statement is technically correct as long as any successful attacks they know about are not explicitly known to be committed by mercenaries.


> No, their statement says nothing about attack attempts.

That's a good point. The best way not to know about any successful attacks is not to know about any of them. I also can definitively state that I'm not aware of any successful attacks, but for obvious reasons this is a basically meaningless statement. Without more data, it's not clear how meaningful the statement they gave is, and while it probably is more meaningful than mine, it doesn't make sense to jump from what they said to "there have definitively been no successful attacks" based on it.


I'm just going to ignore your entire first paragraph that tries to use hostility to overcome a clear willful misunderstanding, or strong evidence of a recent stroke.

> No, their statement says nothing about attack attempts.

Exactly, they're keeping the statement brief and correct. They have sent multiple batches of notifications to users on previous attacks.

The statement is clear, covers their primary use case for the product, and I'm sure is legally sound. You're grasping at straws trying to think up ways they can be lying to you. I would be very surprised if you ever have used their lockdown mode with any actual cause.


I am glad that you agree that their legal department’s explicit and intentional exclusion of known successful non-mercenary attacks is precise and legally sound.

It is advisable to not grasp at straws to think up ways that highly paid lawyers are not saying exactly the words they have approved. That is literally their job and they are good at it.

If they meant something more expansive they can do so. It is not the public’s job to do it for them while letting them retreat to the legally binding interpretation at their pleasure.


They can be perfectly aware of nation-state hacks. These are exactly the weasel qualifiers used by the NSA when they were claiming not to be watching the communications of US citizens. "No intercepts were made under program X" specifically sidesteps all the shady stuff under program Y.

> no reason to believe any were successful.

They have very good reason to believe that: shareholders and public perception. Apple maintains the image of their phone being secure, and that is far from the truth. As long as the general public doesn't know their phones have holes like Swiss cheese, the shareholders will be happy.


How do you know their definition isn't only "received extortion letters" and "exfiltrate data" is fine as long as it didn't lead to the former?

>"successful" - what is the definition of success?

At risk of stating the obvious, isn't success "hacked it and no one ever found out (at the time)"? By definition, Apple could probably only be aware of unsuccessful attacks. Though that's not guaranteed either, considering all the myriad failure modes that there must be.


Automatic turret-mounted anti-air shotguns. Blow up 100 $ drones for the cost of a 0.50 $ shotgun shell.

I bet you could do aiming and firing in less than 0.1 seconds with nearly 100% accuracy within a 50 meter range, which would enable ~10 destroyed drones per unit if the drones are going 150 km/h.

Shotgun pellets are also basically entirely safe when shot into the air, as their low falling velocity enables use over populated areas.
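The engagement-rate claim can be sanity-checked with back-of-envelope arithmetic. All figures below are the rough assumptions from the comment (50 m range, 0.1 s per engagement, 150 km/h drones), not measured data:

```python
# Back-of-envelope check of the turret engagement claim.
# All figures are the comment's assumptions, not real data.

drone_speed_kmh = 150
drone_speed_ms = drone_speed_kmh * 1000 / 3600  # ~41.7 m/s

engagement_radius_m = 50   # assumed effective shotgun range
time_per_shot_s = 0.1      # assumed aim-and-fire cycle

# A drone flying straight at the turret is inside the 50 m
# envelope for radius / speed seconds before impact.
window_s = engagement_radius_m / drone_speed_ms  # 1.2 s

drones_per_pass = window_s / time_per_shot_s
print(f"engagement window: {window_s:.1f} s")
print(f"drones engageable in one window: {drones_per_pass:.0f}")
```

This comes out around a dozen per head-on pass, the same ballpark as the ~10 per unit claimed above.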


> Blow up 100 $ drones for the cost of a 0.50 $ shotgun shell.

Then two drones approach from opposite sides at 200 MPH. Your emplacement costs more than $200 and can only fire in one direction at a time.

Or, as we've seen in Ukraine, once your disposable low-cost drones have precisely identified a high-value, high-effectiveness static emplacement, you send in a cruise missile to clear it out, and then the drones continue sweeping forward.


Drones that can move that fast have extremely little cargo capacity for explosive charges and it's not fast enough to simply use the kinetic energy of the drone for much.

> Then two drones approach from opposite sides at 200 MPH.

A drone that can go 300 km/h is way more than 100 $, you are in the thousands of dollar range at that point. Turret wins if it blows up one.

Also, it could probably blow up more than one since at 300 km/h you would get 0.5 seconds to respond and I was arguing 0.1 seconds per target anywhere in a full 360. 0.25 seconds for anywhere on a full 360 would be enough for 2 and that is within human capability.

> you send in a cruise missile to clear it out

Cool, you sent in a hundred thousand dollar cruise missile to blow up a thousand dollar turret. Turret wins. Also you can put wheels on the turret, so it might not even be there.

Now you are probably going to argue about a drone that goes 1000 km/h at which point what you have is a cruise missile which costs tens to hundreds of thousands of dollars. At that point the entire argument about drones being too cheap to cost-effectively stop is moot.

Or you might argue that the drones just go high. 50 m is a ludicrously low flight ceiling. But then your drone can not explode on contact. You could use a drone that drops explosives, but that still requires flying over the target. High flying drones are easier to detect, and you could counter that with flying shotgun drones or turret mounted machine guns which have ranges in the hundreds to thousands of meters and would still only cost a few dollars of ammo per kill.

My main point is that bullets can easily disable a cheap drone and are much cheaper than a cheap drone. You just need a cost-effective way of deploying mass bullets against mass drones. Logical answers are ground deployments around targets or drones with bullets that cost-effectively shoot down drones without bullets.

You will then likely get into an arms race of fighter drones to protect your bomber drones, and scale up your drones until they are not easily bullet-destroyable. But then your drone costs have likely increased to the point where anti-air cannons shooting 100 $ explosive shells are cost-effective. And so on and so forth.


> Cool, you sent in a hundred thousand dollar cruise missile to blow up a thousand dollar turret. Turret wins.

Nope. The calculus is not about individual components, but about the overall cost of the entire system and all of its associated support. What was the material, labor, and opportunity cost to install the turret? What was it protecting (which is now presumably destroyed by drones, or captured by the enemy)? You're also still assuming that you're facing off against guerrillas fighting an asymmetrical war on a shoestring budget, but that's not the case. Whatever force you're fighting can be trivially bankrolled by a peer power who is happy to make you bleed to death. China will be happy to build plenty of cruise missiles, and plenty more drones.


The argument is literally that it is problematic to send 100 k$ interceptors to stop 1 k$ drones, and then you turn around and argue you can send 100 k$ cruise missiles to stop 1 k$ turrets. Your argument is inconsistent with the entire premise.

You have presented no evidence as to the overall cost of this mystical unstoppable drone swarm. In contrast, we do know that shotguns, machine guns, and bullets are cheap, mass-produced, and mass-deployed by the tens of millions.

The key unknown of my proposal is the bulk cost and production of a small automated turret or fighter drone that can economically and flexibly deploy cheap bullet interceptors to asymmetrically defeat expensive drones. However, the operational requirements for such devices are simple and within the range of existing technology.

There is no clear evidence that cheap explosive drone swarms are magically cheaper than cheap fighter drone swarms or cheap ground drone swarms. It could easily go either way and without a rigorous actual analysis you and I are both unqualified to determine what is actually dominant.


Which only protect a small area, so drones just need to target less obvious things. Meanwhile your guns shoot birds and, once in a while, an occasional bystander. Attackers are always advantaged, since you have to protect _everything_ and they only need to target what's left unprotected. Some drones just drop grenades; I somehow don't see your shotgun hitting either the drone (too high) or a grenade (too fast and small).

> Which only protect a small area

We have these things called wheels. Or you could mount it on a drone.

> Meanwhile your guns shoot birds and once in a while - an occasional bystander

We are discussing protecting military bases or military assets.

> Some drones just drop grenades

That requires flying above the target. See counter-point 1.

Please put in the minimal effort needed to follow through at least a few steps of argument and counter-argument in your head. I assure you I am not putting in as little effort into my arguments as you did.


How many shotguns? How do they reload? What happens when they run out of ammo?

Can they be hacked, or duped into firing at friendly aircraft?

How will they deal with the enemy adapting their drones to have camouflage?

There's no way automatic turret mounted shotguns are the solution to this problem.

It simply isn't economical to produce, install and maintain all of these things, and now you've sunk a massive amount of resources into this infrastructure when the enemy doesn't even really have to launch a real attack.


I suspect they will run out of ammo much after the enemy runs out of drones.

What's their supply chain for being restocked with ammo? Is that supply chain susceptible to drone attacks along any part? Then you still lose eventually.

I don’t think so, as shotgun shells are cheaper and smaller than drones.

When drones become cheaper then it’s a problem.


Great questions, I will reinstall Factorio for research purposes and get right back to you.

They might reload the same way semi-automatic shotguns reload.

Without writing an essay, I can definitely see automatic turret-mounted shotguns as an effective solution.


Imagine you're playing tower defense.

Now picture an American military base. They're pretty big, right?

Now imagine how many of these shotgun towers you need to secure the perimeter based on the firing range of these weapons, then imagine how many shotgun towers you need to defend the interior of the base from drones that don't attack from the side but instead come in from the middle, because they can fly.

How much ammunition can each of these shotgun towers hold? What happens when it runs out? Does a human have to go over there and refill it? What kind of equipment do they use to do that? How much time does this take and how much fuel does it consume? What is the opportunity cost of this?

Now that's just one military installation. How many does the US have? Are you going to put these shotgun towers outside the homes of high ranking military officers? The roads that they take to go to work?

What's stopping someone from doing this kind of drone attack on the highway to the military installation timed with the morning or evening commute? What's the counter to that?

Automated shotguns are not an economically viable defense to the threats that I described in my previous post.


And some Canada Gooses too?

How long till Canada wires up gooses' brains and straps them with bombs for the ultimate biodrones? They already swarm naturally in attack formation!

trust the gooses

> Automatic turret-mounted anti-air shotguns. Blow up 100 $ drones for the cost of a 0.50 $ shotgun shell.

Yeah, doable. I went to a clay pigeon range last week (company outing). These are targets that move quite fast. They don't spring out from the same spot and some roll over the ground. I had never handled a gun before. I am 50, with the attendant poor eyesight and lack of twitch reflexes.

And yet, I still nailed 20/25 moving targets. A turret with a shotgun is going to hit much more than that.


Clay shooting is fun. What happens when all the clays are released at the same time, not one at a time as you shoot? And if you miss one, you die.

Then how come drones rule on the Ukraine/Russia front? That would not be the case if they were so easy to shoot down.

Just to elaborate:

Teslas actually have zero binocular vision coverage, because the cameras have different focal lengths and are too close together even if they did have the same focal lengths.

They are also below minimum vision requirements for driving in many states.


How is this a kernel issue? The code that deadlocked was entirely written by Superluminal, who grabbed a shared lock from an interrupt handler. Not doing that is literally the very first lesson of writing interrupt handlers, and if you do not know that you have no business writing them.

The only way this could be considered an issue is that the Linux kernel added the rqspinlock, which is supposed to automatically detect incorrect code at runtime and kind of “un-incorrect” it. That piece of code did not correctly detect callers who were blindly using it incorrectly in ways that the writers probably expected to detect.

However, this entire escapade is absurd. Not only does this indicate that eBPF has gotten extensions that grossly violate any concept of sandboxing that proponents claim, I do not see how you can effectively program in the rqspinlock environment. Any lock acquire can now fail with a timeout because some poorly written eBPF program decided that deadlocks were an enjoyable activity. Every single code path that acquires more than one lock must be able to guarantee global consistency before every lock acquire.

For instance, you can not lock a sub-component for modification and then acquire a whole component lock to rectify the state since that second lock acquire may arbitrarily fail.

Furthermore, even if you do that all it does is turn deadlocks due to incorrect code into incredibly long multi-millisecond denials of service due to incorrect code. I mean, yes, bad is better than horrible, but it is still bad.


Leaving aside the vitriol...

> The code that deadlocked was entirely written by Superluminal who grabbed a shared lock from a interrupt handler

We don't "grab a shared lock". We call a kernel-provided eBPF helper function `bpf_ringbuf_reserve`, which, we now know, internally grabs a lock. The spinlock usage is entirely internal to the eBPF ringbuffer implementation and is not exposed to or controlled by the eBPF program at all.

The whole design behind eBPF is that it is a very controlled and constrained environment, backed by a verifier to ensure safety within the kernel context. It has a specific, limited kernel API in the form of eBPF helper functions and data structures that are guaranteed to succeed in that environment. If it compiles, passes the verifier, and loads, it should work. It is not feasible to know as a developer which of the many eBPF helpers[1] are and aren't safe to call in which contexts.

If `bpf_ringbuf_reserve` is unsafe to use from an interrupt context, then that would be one thing, but if so, it should be rejected by the verifier. There are other eBPF helper functions that only work within specific eBPF program types and are rejected outside of those contexts, so the verifier already knows how to make this distinction.

> The only way this could be considered a issue is that it appears that the Linux kernel added the rqspinlock which is supposed to automatically detect incorrect code at runtime and kind of “un-incorrect” it. That piece of code did not correctly detect callers who were blindly using it incorrectly in ways that the writers probably expected to detect.

Yeeeeah....that is, in fact, what the kernel did, and what the entire article is about. It's not about "incorrect code" though. Our use of `bpf_ringbuf_reserve` is, again, perfectly valid. It is more about giving the internal kernel helpers a way to deal with unexpected locking situations other than deadlocking.

> I do not see how you can effectively program in the rqspinlock environment. Any lock acquire can now fail with a timeout because some poorly written eBPF program decided that deadlocks were a enjoyable activity. Every single code path that acquires more than one lock must be able to guarantee global consistency before every lock acquire.

It is not "any lock", it is "any usage of rqspinlock within eBPF". This is intentional and already accounted for throughout eBPF. In this particular case, `bpf_ringbuf_reserve` is specified to return NULL on failure, and the verifier already forces you to deal with that in your eBPF program. The lock failing to acquire is one of the reasons why it returns NULL, but as the consumer of the API, you don't (or shouldn't) have to care about that. That's the explicit design contract.
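The contract being described, that reservation is fallible and the caller must handle failure before writing, can be sketched as a purely illustrative Python analogy. The real API is the C helper `bpf_ringbuf_reserve`; the `reserve` function below and its capacity check standing in for lock contention are invented for illustration:

```python
# Illustrative analogy of the bpf_ringbuf_reserve contract:
# reservation can fail (returning None instead of NULL), and
# the caller is required to check before touching the slot.

def reserve(ringbuf: list, capacity: int, size: int):
    """Return a writable slot, or None if the reservation cannot
    be satisfied (a full buffer standing in for lock contention)."""
    used = sum(len(slot) for slot in ringbuf)
    if used + size > capacity:
        return None
    slot = [0] * size
    ringbuf.append(slot)
    return slot

rb = []
sample = reserve(rb, capacity=8, size=4)
if sample is None:
    # Mirrors the NULL check the verifier forces on the eBPF
    # program: on failure the sample is simply dropped.
    pass
else:
    sample[0] = 42  # write the sample; it would then be submitted
```

The point of the analogy is that the failure path is part of the API's normal contract, not an exceptional condition the caller can ignore.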

> Furthermore, even if you do that all it does is turn deadlocks due to incorrect code into incredibly long multi-millisecond denials of service due to incorrect code

It doesn't turn them into "incredibly long multi-millisecond denials of service"...as long as the bugs are fixed. That is, again, what the entire article is about; with the fixes, it now recovers instantly in this scenario.

You should read the article. I hear it's good.

[1] https://docs.ebpf.io/linux/helper-function/


One thing wasn't clear for me in the article: is there only one such ringbuffer defined by the kernel or can the eBPF program specify as many ringbuffers as it wants?

Geez, your company really needs to not be writing code in interrupt context until you learn how it works.

bpf_ringbuf_reserve() is perfectly fine to call from interrupt context. The problem is that you are manipulating the same data structure from non-interrupt and interrupt context. Your code was deadlocking with itself. You wrote every side of that deadlock.

For that matter, how are you even handling the deadlock detected return code? If the sampling event gets a deadlock error, that deadlock cause can not resolve until the context switch code you interrupted resolves. That means you can not reserve the space to store your sample. Are you just naively dropping that sample?


Not OP: how would you handle the second interrupt during the interrupt handler here then? I can see how you could use two separate ring buffers for different contexts, but I don't see how to handle the nested interrupt. Also indeed they just drop all these samples that get deadlocked.

Actually, as long as you use different ring buffers for interrupt/non-interrupt context, it should be fine to just drop if you encounter a deadlock due to interrupting an already running interrupt handler.


The code described is not nested interrupt handlers. It is eBPF code executing during a context switch which is interrupted by the sampling NMI which is also configured to execute eBPF code.

NMIs will not nest, so there is no risk of arbitrary nesting. So, there should be at most three nesting levels: regular, interrupt (I suspect they do not do logging during interrupts so this may not even exist in their use case), non-maskable interrupt.

Off the top of my head I can think of at least 5 unique ways to not drop the sample with your idea of separate ring buffers being one of them.


Indeed I misread. I figured the NMI must have been nested or otherwise they wouldn't have gone through all this trouble just to drop samples :)

To explain the mechanism simply.

Suppose you had an index of 100 companies, each with a market cap of 1 G$, for a total of 100 G$. You have passive investors owning 20 G$ of that index, amounting to 20% of the total, 20% of each company, and 200 M$ per company.

You then rotate out a company for a new one also worth 1 G$. The index is still 100 G$, but to match the index you are contractually required to sell your 20% ownership of the old company and buy 20% ownership of the new one.

However, the newly added company only released 5% of its shares to the public and the founder kept hold of the remaining 95%. Those fund managers are contractually obligated to buy 20% of the newly added company, but only 5% is available. Like a short squeeze, where the squeezer buys and holds supply so there are not enough purchasable shares to cover the shorts (obligated ownership), this is a financial divide by zero.

To get the remaining 15%, which they are contractually obligated to acquire, they must purchase from the founder. As they are in violation of their contract if they fail to acquire the remaining 15%, the founder now has complete control to dictate any price they want.

That is the scheme described: how to short squeeze retirement funds who do not even have shorts for fun and profit.
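The arithmetic of the squeeze can be laid out in a few lines (all figures are the simplified ones from the example above):

```python
# Simplified squeeze arithmetic from the example above.

index_total = 100e9          # 100 G$ index: 100 companies at 1 G$ each
passive_owned = 20e9         # passive funds hold 20 G$ of the index
passive_share = passive_owned / index_total  # 20%

new_company_cap = 1e9        # newly added company, also 1 G$
public_float = 0.05          # founder kept 95%; only 5% is public

# Passive funds must buy their 20% slice of the new company,
# but only 5% of it is purchasable on the open market.
required = passive_share * new_company_cap   # 200 M$
available = public_float * new_company_cap   # 50 M$

shortfall = required - available             # 150 M$ held by the founder
print(f"required: {required/1e6:.0f} M$, public float: {available/1e6:.0f} M$")
print(f"shortfall to buy from insiders: {shortfall/1e6:.0f} M$")
```

The shortfall is what the funds are contractually obligated to acquire from the founder, who can therefore name the price.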

Note that this is a minor variation on my post on the same underlying topic here: https://hackernews.hn/item?id=47392325


This is wrong in multiple ways.

First: 5x5 is 25, not 20. So it's 25% rather than 20%

Second: they only have to buy the 25% of the listed shares.

To take your 1 Trillion example: if SpaceX has a total market cap of 1T, but only 500b get listed on NASDAQ, and the free float is 5%, the index will weigh SpaceX at 25% of the listed shares, which means it will be weighted at 500 * 0.25 = 125b.

And also note that index ETFs have tracking errors all the time (that's why arbitrage traders still have business!), and the ETFs themselves could also track the performance of SpaceX via derivatives instead of buying the stock. And I think, there are many investors of SpaceX who would like to sell some shares. Fund managers won't have an issue finding their phone numbers.


It is amazing that you can complain about a simplified example and then both misunderstand it and get literally every single one of your "corrections" wrong.

1. As I made abundantly clear, 20% is the passive ownership of the index. It has no relation to the index weighting which you are mentioning.

2. They have to buy 20% of the weighted value. The actual weight is 5x the float. I chose to use a weight of 100% instead of a multiple of the float as a simplification since any weighting greater than the float could result in a squeeze given a large enough passive/obligated ownership pool. However, since I was expecting this sort of "correction", I chose 20% passive ownership of the index (i.e. 1/5) so that they would have to buy 20% of the 25% which is 5%, the same amount as the 5% float. This would result in the passive investors having to purchase all of publicly traded stock which is the divide by zero point that spikes the stock. So, even if your correction was not wrong, I also already countered it.

3. Tracking errors are distinct from intentionally not tracking the index you are contractually obligated to match. You are insinuating that the target of these financial manipulations will defend their clients by ignoring their legal obligations and blaming it on "tracking error". While that is possible, I see no reason to assume that will be the case upfront or to do anything other than apply blame to the entity attempting to financially manipulate retirement accounts into lining their own pockets.

4. Yes, there are other insiders with shares. I used a simplified example where there is a single insider, the founder, to highlight the power that the insiders have over the pricing in such a squeeze. However, you also got this wrong, because insiders usually have lockup periods after the IPO that are longer than the 15 days expected for index inclusion. As such, the fund managers would not be able to purchase any shares other than the public shares until after the first rebalance.


I don’t think Nasdaq is free float based.

Also, I would be a lot more pessimistic of the index tracking fund managers’ ability or willingness to find extra shares: their goal is to match the index, not beat it. If the index includes the new firm at a blown-up price because everyone sent their buy orders at the same closing auction, then all the index-tracking funds still track their underlying index. They do not care that after that closing auction, the price of the new firm—and likely the index itself—is going to drop.


> I don’t think Nasdaq is free float based.

I recommend the NDX proposal from February which the whole discussion is based upon:

"To balance index integrity and investability, Nasdaq proposes a new approach for including and weighting low-float securities (those below 20% free float). Each low-float security’s weight will be adjusted to five times its free float percentage, capped at 100%. Securities with more than 20% free float will continue to be weighted at full, eligible listed market capitalization, while those below 20% free float will be weighted proportionally to preserve investability."

The document includes a scenario with the rules applied to SpaceX. "Company C" in the table is SpaceX (with some estimated numbers).

https://indexes.nasdaqomx.com/docs/NDX_Consultation-February...
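The proposed rule quoted above reduces to a small function. The SpaceX numbers below are the illustrative ones used elsewhere in this thread (500 G$ listed, 5% float), not official figures:

```python
def ndx_weight_factor(free_float: float) -> float:
    """Nasdaq's proposed low-float adjustment: securities under 20%
    free float are weighted at 5x their float, capped at 100%;
    those at or above 20% get full weight."""
    if free_float >= 0.20:
        return 1.0
    return min(5 * free_float, 1.0)

# Illustrative SpaceX-like example from the thread:
listed_cap = 500e9   # 500 G$ listed
free_float = 0.05    # 5% free float
weighted_cap = listed_cap * ndx_weight_factor(free_float)
print(f"weight factor: {ndx_weight_factor(free_float):.0%}")
print(f"weighted cap: {weighted_cap/1e9:.0f} G$")
```

With a 5% float, the factor is 5 x 5% = 25%, giving the 125 G$ weighted cap mentioned upthread.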


> also note that index ETFs have tracking errors all the time (that's why arbitrage traders still have business!)

I call bullshit. We are talking about tracking errors of single basis points for well-structured ETFs with good liquidity. This spread is still (at least!) 10x less than what a normie retail trader could achieve on their own, trading the basket manually.


I didn't mention retail traders anywhere. With arbitrage traders I mean those companies who do hft all the time and are directly connected to exchanges. They still do business.


Yes, when SpaceX gets added to the index, it's going to skyrocket for just that reason. The other reason why SpaceX stock is going to skyrocket is because of the "infinite potential". After all, Elon is going to be God-Emperor of Mars, and how much is a piece of that worth?

The OP knows this and wants a window to profit from this squeeze. For the general public index owners, the sooner it's added to the index the better, minimizing the time that traders can front run this squeeze ahead of them.

Perhaps better it's not added to the indices at all, but as long as it's inevitable, the sooner the better.


Being added to the index is literally the only thing causing "the squeeze" according to this description, though, so how does that benefit either the author or the index holder?

If the stock was added to the index at a normal period then all the shares would be available.


The author wants to buy ahead of the indexes and benefit from the squeeze; he wants the normal rules of waiting a year before SpaceX is eligible to join the indexes to apply.


this is hackernews.hn

Do you think there's some grand domino effect at play? If he's trying some combo pump-and-dump scheme, there are much better places for it.

Also, you offer zero counterargument, so why should your word carry any weight?


It's substack, not ycombinator. The article is obviously not targeted at ycombinator.

And I don't think he's doing a pump and dump. He's just doing the very human act of ranting about things that affect him. His self-interest colors the piece.


How will a colony on Mars be profitable?


How many Earth dollars would you pay to go live on Mars?

Zero. The beauty of nature on Earth far exceeds anything of that on Mars.

Come on man, either we ship the unmentionables, or the billionaires get to live with their robot love slaves.

Obviously, you don't have enough imagination to keep Musk's ego-based cost proposition elevated.


SpaceX has always been about convincing private industry to fund the militarization of space.

See https://en.wikipedia.org/wiki/Golden_Dome_(missile_defense_s...

Mars is a thin cover story to get the engineers to feed the War machine. "National security" / nuclear threat is a great excuse to get politicians to sell out the country.

How about we focus on global security?


I thought it was obvious that "God Emperor of Mars" was a satirical answer. There are a whole bunch of new markets that cheap access to space opens up. Like Bezos' dream of in-space manufacturing. Or Musk's dream of data centres in space. Or power gen in space. Or the "cis-lunar economy". Or space tourism. Or He3 on the moon. People will buy SpaceX stock for the potential, even if that potential is pretty much worthless and the chance of SpaceX capturing the gains rather than some other company is fairly low.

"National Security" is just one more in a big list.


No, those other "dreams" were either developed or refined by, https://en.wikipedia.org/wiki/Citizens%27_Advisory_Council_o... as pretexts to pursue a space militarization agenda. The history is clear but the New Space propaganda is being fed to the younger generation.


I wouldn’t really mind seeing the SpaceX IPO flop initially. The God Emperor of Mars has quite the ego.

However, I’m pretty sure the opposite will happen and the stock valuation will go past the moon to mars and beyond.


That seems like cutting off your nose to spite your face. SpaceX is more important than whatever issue you disagree with Musk about. After graduating with a degree in aerospace engineering in the aughts, I switched to software because the practical alternatives were building missiles for Raytheon or going to GE and trying to figure out how to make gas turbines 1% more efficient. SpaceX jump-started a commercial aerospace industry that was utterly moribund as recently as when Hacker News started up.


Sorry to burst your bubble, but SpaceX is Raytheon now. You should look at what they're doing with Starshield, SDA, Golden Dome, NRO, etc. The commercial stuff was small-potatoes stepping stones made more palatable to engineers, but the pivot has already occurred.


To be clear, I have great respect for military work. I used to work at a defense contractor. But in terms of building a career, it's a heavily regulated industry with little room for growth. SpaceX is doing defense work, but it has not pivoted to being merely a defense contractor. SpaceX's valuation is triple that of Raytheon and Lockheed put together. The market expects it to continue pushing forward on commercial space.


No, the market does not expect Musk to be mining Mars or selling Moon motels...

It expects Musk's connection with JD Vance and SDI insiders will give them the bulk of the $2-$4 trillion GD contract.


What’s your basis for saying that? It makes no sense. Even if Golden Dome was a trillion dollars, which it isn’t, that wouldn’t support a $1 trillion valuation. Defense contractors average around 10% profit. Raytheon got $24 billion in government contracts in 2023. Its revenue is about $90 billion, and its valuation is $277 billion.

Funding for Golden Dome was $24 billion in 2025 and 13 billion in 2026. Even if SpaceX got all that money, it wouldn’t move the needle on SpaceX’s valuation.


Traditional defense contractors have low profit margin because of the cost plus pricing on the contracts. They literally are only allowed to charge the cost they incur plus some fixed profit percentage. As such, they have incentive to drive up the costs, so that their profit, while low percentage, is on high base.

SpaceX wouldn’t need to do that. Companies like Anduril already are trying to win contracts on a fixed-price model, and if they succeed, they’ll have much higher profit margins than Raytheon et al.


The estimates that have Golden Dome at anything close to a trillion dollars are posited on the assumption that it will be much more expensive to build than the administration believes it will take. If it ends up as fixed price bids and costs less than people think, it will be well under $200 billion.


There are multiple estimates, including by Republican members of Congress and think-tanks that put it in the many trillions of dollars.


That's right... and Golden Dome (which is definitely a multi-trillion dollar program if space-based weapons are employed) has a bunch of convenient oligarch properties like built-in planned obsolescence with orbital decay that amplifies a launch monopoly.


> which is definitely a multi-trillion dollar program

The program already exists and you can see how much has been allocated to it.


Sure let's pretend the first year budget of the program represents its entire future.

Even so, it is already 2.2% of the entire federal budget. And multiple estimates put the total Golden Dome cost in the trillions of dollars.


> To get the remaining 15%, which they are contractually obligated to acquire, they must purchase from the founder. As they are in violation of their contract if they fail to acquire the remaining 15%, the founder now has complete control to dictate any price they want.

This is not correct and I'm surprised this comment is upvoted to the top. The float is the float, nobody goes to buy shares that aren't available in the float.


Thanks ++1


> To get the remaining 15%, which they are contractually obligated to acquire, they must purchase from the founder. As they are in violation of their contract if they fail to acquire the remaining 15%, the founder now has complete control to dictate any price they want.

I can't imagine "any price they want" is quite right here. At the very least, shouldn't we expect underwriters and other stakeholders (in this case Nasdaq, Inc.) to negotiate option-contracts as part of the IPO deal to cover their future obligations?

Yes, it might be a "worse" deal than those initial 5% -- though we don't even know that -- but then institutional investors' time horizons are typically much longer than 6 months. Unless you think SpaceX goes straight down to 0, it seems like a risky but calculated long-term investment.

I agree they could be more transparent about it, but maybe they will send out a notice in the prospectus update?


Index funds have a variety of ways to replicate an index beyond physical replication, including options, buying "similar things", sampling, etc.

So yeah, they don't really need to stick to 100% of the presented issue.


Index funds and ETFs also have strict replication rules limiting the amount of non-physical replication in their legally binding prospectus...

The more physical a tracker is, the lower the tracking error, but also the more fees you have to pay. "Good" ETFs/IFs are often 98% physical. This makes for higher fees, but more safety for subscribers in case of large swings.

So it's not like they are _free_ to replicate however they see fit, the replication mechanism is part of the product.


What does physical mean in this context?


It means holding the actual stocks in the underlying index, as opposed to synthetic replication, which aims to achieve returns matching the index via derivatives or other techniques.

It's physical in the sense that literal means not literal nowadays.


ETF and index arb traders use the term physical to describe securities that require full margin. Example: Sell stocks, buy index futures (and reverse) is the classic EFP equity trade. To be clear, futures are highly leveraged, thus do not require full margin.


Such a bold claim. Since we are talking about stock indices here... Can you provide a well known (liquid) non-leveraged example that does not directly trade the underlying stocks? It would probably make the create/redeem process more complex for market makers.


Invesco S&P 500 UCITS ETF

100% synth replication

edit: ISIN: IE00B3YCGJ38


Hat tip! I was not aware that Europe has very particular laws (different from the US) about how ETFs need to treat dividends. As a result, using an underlying equity swap is more tax efficient than owning the shares directly. For US-listed ETFs, I believe that my original point still stands: Well-known (liquid) non-leveraged ETFs hold physical shares instead of replicating returns with derivatives (equity swaps).

But the real scenario is going to be different in two ways: the market capitalization of the new company will only be a small fraction of the index total, even after it's been inflated as indicated. And not all investors in companies on the index are index funds, which brings down the number of shares needed to align a fund.

Maybe they propose the rule change because it adjusts for some other problematic effect of the existing index rules? The discontinuity might seem acceptable because it is unlikely to be reached according to their simulations.


>That is the scheme described: how to short squeeze retirement funds who do not even have shorts for fun and profit.

How many retirement funds use the Nasdaq 100 as the benchmark? The only thing that's really objectionable is the 5x multiplier, and as far as I can tell that's confined to the Nasdaq 100 index. If the funds use a sane index without such shenanigans, they won't be affected nearly as much, and the whole debate just turns into the perennial question of whether [company] is overvalued and whether passive investors are being taken for a ride.


Most indexes will be affected. Two of the most common indices - the S&P500 and DJIA - are cross-exchange and include Nasdaq stocks. The biggest market cap companies on the market (MAG7) are all on the Nasdaq exchange and comprise about 35% of the S&P.


Is this grey because it's wrong? They are all on Nasdaq, and also around 35% of the S&P. What am I missing? Is it that the "Most indexes" part is wrong (because there are more than a few thousand ETFs)?


Yeah, it's wrong.

Nasdaq, Inc. is a company with a stock market ("the NASDAQ") and an index ("Nasdaq 100"). They want SpaceX to be listed on their market, because they like having more things on their market for all the usual reasons. They are, apparently, offering to manipulate their index to win the listing.

Accordingly, anything that uses or tracks this particular index (Nasdaq 100), such as the QQQ fund, will potentially have to pay for this manipulation.

Anybody not holding or indexing to the Nasdaq 100 index contents will not particularly care and will not really gain or lose any more money than on an ordinary trading day. In particular, this will have zero effect on stocks that merely trade on the NASDAQ exchange.

Indexing to the Nasdaq 100 is pretty uncommon, outside of QQQ, so most people will not care.


What?! This absolutely affects more than Nasdaq 100 / QQQ.

The index is just a function of the stocks. It only moves if the underlying stocks move. Rebalancing Nasdaq will cause selling in the 100 companies that aren’t SpaceX. And those stocks are held elsewhere too…

The Nasdaq 100 shares 79/100 stocks with the S&P. So if those stocks move (probably down because they’re being sold so SpaceX can get bought) pretty sure that's gonna affect anyone exposed to those companies. Whether that’s directly or through other index ETFs. Many of which have a huge concentration in Mag7 right now, for example.


What you're saying is 100% correct, I fail to see how people are not aware of it.

We're talking about a $1.75 trillion (per the article) company that is about to enter (part of) the most important capital market in the world at a distorted price. Of course the market as a whole is going to become distorted. Money and capital (and the accompanying price signals) are among the most liquid things in a modern economy, if not the most liquid. Once you start putting a wrong price tag on them, those signals will surely start doing their thing; imo that was one of the main lessons we should have taken from what happened back in 2008-2009.


Sorry, a lot of the comments around this have been really badly written and it's been hard to tell what they're actually arguing.

I countered a different argument (which does appear elsewhere in this thread). You are absolutely right that there will be general price distortion from this mess. I disagree that it will be extremely bad, but I do agree that it's a problem and needs attention. It's just been difficult to tell that this is what some comments have meant to discuss, instead of the more basic issues others have been talking about.


Ah, I re-read my original comment with that in mind, and I see how it can go a few directions depending on the context - thanks!


Actually these two indices will not be affected, as the article explains


I don’t see that in the article. The only thing I see is about S&P is where they mention that the S&P 500’s rules would prevent this manipulation if SpaceX were added to that index. But that’s not being proposed.


Who is contractually obligated to buy?


I have an index fund for the Nasdaq with my broker. When I bought into the fund, the broker promised me that with my money, they will buy shares in companies traded on that exchange according to the specific formula that SpaceX is manipulating here. My broker is obligated to buy. They could open a new fund that has a contract like "we'll keep doing what we had been doing except for the whole SpaceX thing," but they would need my permission to move the money. And I'm only in this fund because it was recommended by my 401k provider -- I don't know anything about any of this. That's the messed-up thing here -- the people being screwed are not sophisticated investors; it's nurses and school teachers who hope to retire.


Yeah, basically this. These shenanigans water down the value of QQQ. The bottom line is if you don't like QQQ, then don't buy it. Buy the stocks separately or a different index. But for people who don't pay attention, or for people whose 401ks limit their investment options, it is difficult or impossible to avoid the shenanigans.


If the rules used to compute the index change (as opposed to the index composition of course), are index funds obliged to follow them no matter what? I assume this is very fund dependent, but would be interesting to know what most guarantee.


and that's why sector-specific indexes are not "good" - only broad market (heck, even global) indexes are worth passive investing in.

A Nasdaq index is no different from any other thematic index (like an oil index, or a robotics index). Thematic indexes tend to fail the investor at capturing beta in the long term. But because retail investors lack knowledge of the _actual_ academic research, a lot of clever marketeers sell the idea of a thematic index as though it were similar to a broad market index ("safety" and diversification).

Caveat emptor.


Some funds promise to track the Nasdaq. I guess the idea is they can't sorta track it and they can't artificially track it through some financial proxy. They have to own real shares?



Suppose you had an index of 100 companies, each with a market cap of 1 G$, for a total of 100 G$.

You have passive investors owning 20 G$ of that index, amounting to 20% of the total, 20% of each company, and 200 M$ per company.

You then rotate out a company for a new one. The index is still 100 G$, but to match the index you are contractually required to sell your 20% ownership of the old company and are contractually required to buy 20% ownership of the new company.

However, the newly added company only released 5% of its shares to the public and the founder kept hold of the remaining 95%. Those fund managers are contractually obligated to buy 20% of the newly added company, but only 5% is available. Like a short squeeze, where the squeezer buys and holds supply so there are not enough purchasable shares to cover the shorts (obligated ownership), this is a financial divide by zero.

To get the remaining 15%, which they are contractually obligated to acquire, they must purchase from the founder. As they are in violation of their contract if they fail to acquire the remaining 15%, the founder now has complete control to dictate any price they want.

That is the scheme described: how to short squeeze pensions who do not even have shorts for fun and profit.
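The arithmetic in the scenario above, using the made-up numbers from this comment:

```python
# Hypothetical index of 100 companies at $1B each; passive funds
# hold 20% of the index (all numbers illustrative, from the comment).
passive_share = 0.20
company_cap = 1e9
per_company_demand = passive_share * company_cap  # $200M of each company

# The newly added company floats only 5% of its $1B market cap.
floated = 0.05 * company_cap                      # $50M available to the public

# Demand the float cannot satisfy: it can only come from the founder,
# at whatever price the founder dictates.
shortfall = per_company_demand - floated
print(f"obligated: ${per_company_demand/1e6:.0f}M, "
      f"float: ${floated/1e6:.0f}M, shortfall: ${shortfall/1e6:.0f}M")
```
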


It seems the S&P 500 indices only take the free-float shares into account when calculating weights:

S&P DJI’s market cap-weighted indices are float-adjusted – the number of shares outstanding is reduced to exclude closely held shares from the index calculation because such shares are not available to investors.

page 6 of https://www.spglobal.com/spdji/en/documents/methodologies/me...


Yes, but not the Nasdaq-100 [1] which added the "Fast Entry" rule listed right there in section 2 literally this February because SpaceX is demanding immediate inclusion into the Nasdaq-100 as a condition for listing on the Nasdaq instead of the NYSE.

[1] https://indexes.nasdaqomx.com/docs/NDX_Consultation-February... Section 3


> Yes, but not the Nasdaq-100 [1] which added the "Fast Entry" rule listed right there in section 2 literally this February because SpaceX is demanding immediate inclusion into the Nasdaq-100 as a condition for listing on the Nasdaq instead of the NYSE.

...Honestly, Nasdaq's to blame for bending the knee in this case. If they choose to chase that pie, then they alone should be the ones to bear the burden, including any/all reactions from passive investors.

Passive investors should (ideally) switch to another index if they wish to not be involved in this IPO. The decision of how they should invest is theirs, and if they're not happy with this index, they should move to another index: Their money talks louder than any posturing that could ever be put out.


Needing to punch holes in NAT is one of the most idiotic own-goals in the entire field of networking.

NAT is effectively your router doing DHCP with a 17-bit suffix (16-bit port + 1 bit for UDP vs TCP) for each of your applications and then not telling you the address it gave you or how long it is good for (which is what a regular DHCP lease does). This is in addition to it, most likely, already doing regular DHCP and allocating you an IP address that it does tell you about, but which is basically worthless, since routing to just that prefix without the hidden suffix goes into a black hole.

If you could just ask your router for a lease on a chunk of IP+NAT addresses that you could allocate to your applications and rotate them as they expire, you would not need this horrifying mess.

The router would just need to maintain the last-leg routing table (what a concept, a router doing routing with routing tables) just like it already does DHCP.

The applications would have short-term stable addresses that they could just tell their peers and just directly tell the router/firewall to block anybody except the desired peer short-term address.


> If you could just ask your router for a lease on a chunk of IP+NAT addresses

The “just” is doing a lot of lifting there. I’m glad the various port mapping protocols didn’t really take off and it looks like IPv6 is going to actually make it instead. Much less complexity in most parts of the stack and network.


It is always a mystery how people just randomly misinterpret what I write. At literally no point did I mention port mapping.

I am pointing out how the problem NAT “solves” is just dynamic address configuration. They have implemented a N+K bit address where the N-bit prefix is routed and allocated using IP and the low K-bits are routed and allocated like a custom fever dream.

You can just do it all the same way instead of doing it differently and worse for the low bits.

To be clear, the router should rewrite zero bits in the packet under the scheme I am describing just like how routers have no need to rewrite any bits when routing to a specific globally-routable IP address.

You get a lease for a /N+K address. /N routes to your router, which routes the last K bits just like normal, as if it had a /N-M to a /N route. This is a generic description of homogeneous hierarchical routing.


If I understand it correctly, you're suggesting formalizing a way to make parts of the (host-specific) port canonically part of the network-wide address, no?

This still sounds like a very bad mixing of layers, even if done in a perfectly standardized and uniform way.

> It is always a mystery how people just randomly misinterpret what I write.

If this is intended literally and not as a general complaint: My main problem of understanding your suggestion is that I don't know what you mean by "IP+NAT address". NAT is a translation scheme, not an address.

Maybe it would be clearer if you could provide an example?


I did provide an example:

> You get a lease for a /N+K address. /N routes to your router which routes the last K bits just like normal as if it had a /N-M to a /N route.

> This still sounds like a very bad mixing of layers, even if done in a perfectly standardized and uniform way.

No, I am describing a generalization of IP to arbitrary concatenated routing prefixes.

NAT has the same problems as if we lived in an alternate world where we decomposed IPv4 into 4 8-bit layers and then used a different protocol for each layer. That is obviously stupid because the subdivision of a /8 into /16s and a /16 into /24s is fractally similar. You can just use the same protocol 4 times. Or even better, use one protocol (i.e., IP) that just handles arbitrary subdivision.

In the IPv4 (no NAT) world your application has a 49-bit address. Your router is running a DHCPv4 server and allocates your computer a /32 and your computer is “running” a DHCPvPort server that allocates a 17-bit prefix to your applications.

In the IPv4+NAT world your application has a 49-bit address. Your router is “running” a DHCPv4+Port server and allocates your applications a /49, but only tells them their /32 and then rewrites the packets because the applications do not know their address because the stupid router did not tell them.

In good world your application has a 49-bit address. Your router is “running” a DHCPv4+Port server and allocates your applications a /49 and tells them their /32 prefix and 17-bit segment. No packet rewriting is necessary.

Your router could also choose to allocate your computer a /32 subnet and leave DHCPvPort to your computer. Or it could give your computer a /31 if you have 8 interfaces. Or a /34 as a /32 subnet with 2-bit port prefix. Each node routes as much or as little routing prefix as it understands/cares about.

This is a generalization of IP that can handle arbitrary-length, arbitrarily-concatenated routing in a completely uniform manner and all the pieces are basically already there, just over-specialized.
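As a toy sketch of the 49-bit address being described (32-bit IPv4 prefix + 1 protocol bit + 16-bit port), treating the whole thing as one hierarchical prefix. The packing here is purely illustrative, not any standard:

```python
def app_address(ipv4: int, is_udp: bool, port: int) -> int:
    """Pack the 49-bit 'application address': a /32 IPv4 prefix
    followed by a 17-bit suffix (1 protocol bit + 16-bit port)."""
    assert 0 <= ipv4 < 2**32 and 0 <= port < 2**16
    return (ipv4 << 17) | (int(is_udp) << 16) | port

def split_address(addr: int) -> tuple[int, bool, int]:
    """Recover the pieces each routing level would act on."""
    return addr >> 17, bool((addr >> 16) & 1), addr & 0xFFFF

# Routers route on the high /32 exactly as today; the last hop routes
# the 17-bit suffix the same way, so no packet rewriting is needed.
```
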


The original SOCKS proxy specification was something like this. You'd LD_PRELOAD a library that would make the application think it was running directly on the proxy server, and it supported both connecting outbound and listening.


I didn’t see it as mysterious. 25 years ago, the problem as stated went through lots of consensus to become IPv6. It took a few years for SLAAC to emerge. But we don’t need it to be homogeneous; the router advertises different feature levels via ICMPv6.


NAT allocates ports. If you reserve a port, that's good old port forwarding.


Assuming IPv6 kills NAT is optimistic; plenty of orgs still stack private addressing and firewalls on top.


Firewalls aren't nearly as bad as NAT.


Basically the same thing. If you legitimately need to establish a connection then put a firewall rule in, whether that needs nat or pat is a function of your available addresses.

If you are tying to work around your firewall because it isn’t yours, that’s not a legitimate use.


Love it when random people tell me whether my use case is legitimate or not without apparently even knowing it exists!

Take mobile data connections, for example: Most people don't want to pay for metered (by the byte) inbound traffic they didn't ask for that also drains their battery, but do want to be able to establish P2P connections for lower latency VoIP etc.

This is a firewall that's definitionally "not theirs", but that still also serves their interests, yet usually doesn't offer any user-accessible management interface.

So may I please traverse this firewall now, or is my use case still illegitimate?


If you are trying to break through a firewall you don’t own then that’s not legitimate.

If you are buying firewall as a service then request a user interface or change your service provider.


Are you even acknowledging my example? Where does it exist in your bimodal model of reality of "my firewall" and "somebody else's firewall"?

What provider would you suggest somebody wanting to make VoIP calls on their smartphone switch to that allows port forwarding of the kind you describe? And which popular VoIP app would support statically forwarded ports like that?


You're assuming that the firewall was configured correctly or that the firewall admin is cooperative. That's a big ask.

On the other hand, there is plenty of badly written networked software. I bet most of the networked software developers have no idea how to correctly plumb their software. They just open whatever connection, e.g. sockets, their OS provides and just run with it without care of the underlying layers. The OSI model theory in fact encourages this ignorance.


If I get someone else to administer my firewall then they had better be cooperative or I will replace them.

Typically these complaints come from people using other people’s firewalls against the policy of that firewall.


P2P traffic is illegitimate according to you? Like Skype calls? You think Skype should not exist? (Well it doesn't exist any more, but whatever replaced it)


I have no problem with p2p traffic as I can open whatever holes I want in my firewall.

I can’t do it on someone else’s network, as they have granted me limited access and do not allow me to open such holes.


You think Skype shouldn't work on other people's networks?


That’s up to the person letting you use their network

Skype works fine on my network, and I have no problem with any networks I peer with.


Why not use plain IPv6 instead?


Even with IPv6 you still might have stateful firewalls allowing only for outbound connection at both ends (e.g. a CPE a.k.a. “WiFi router”) and to establish communication you’d need to punch a hole in those firewalls.


That’s true we won’t get rid of hole-punching with IPv6. But at least it will get rid of TURN.


The hole punching is so much simpler because you don't need to guess your own address and port - you just know it


IPv6 still allows proper NAT (prefix translation), but even then finding your global address wouldn’t need TURN, just STUN, actually not even that, just a service like “What’s My IP.”


It does allow it in the sense that it's possible, and even useful in some scenarios, but then you're on a weird experimental network and not a normal one.


Yes, you are right, quite literally, as RFC 6296 is marked ‘experimental.’


Doesn't that assume that your machine is given its own world-routable (and unfiltered) v6 address?


That's how it works in ipv6. If your network doesn't give you an address, it's broken. We do not assume unfiltered since we are talking about hole punching.


How will it get rid of TURN? Can't IPv6 addresses still be firewalled by your carrier like they do already for IPv4?


I thought TURN was for symmetrical PAT, not for proper NAT (which just needs STUN for address determination) or full/restricted cone PATs (which need STUN for address and port determination, and then, in case of restricted cone, performs a hole punch).

Standard-conforming IPv6 at most allows prefix translation (i.e., proper NAT, not PAT), which wouldn’t need it.


V6 adoption has reached 46.82%[1]. So it is increasingly viable for this.

[1] https://www.google.com/intl/en/ipv6/statistics.html


It's already been done; ISPs just don't properly implement it (NAT-PMP and its relatives)


Hole punching is doing exactly what you describe, just in a non-standardized way.

We could have a standard for doing that directly at the NAT box level instead of relying on a third party STUN server, it simply didn't happen (and in fairness, the benefits would be quite minimal).


If only router manufacturers could be trusted to implement UPnP safely, then none of this bullshit would be necessary.

At least with IPv6 this crap becomes a little easier because you no longer have randomized source ports (which this article just ignores because some devices indeed maintain the same source port) and the IP address contains all the routing information you need. A simple simultaneous open is all you need.


If you use UDP transport you don't even need to try to make it simultaneous.
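A minimal sketch of the UDP case: both peers just fire datagrams at each other's known address and port, learned out of band (the function and message contents are made up for illustration):

```python
import socket

def punch(local_port: int, peer_addr: tuple[str, int]) -> socket.socket:
    """Open a UDP socket and send a datagram toward the peer.
    The outbound packet creates the NAT/firewall mapping that lets
    the peer's inbound packets through -- no timing coordination
    needed, unlike a TCP simultaneous open."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", local_port))
    s.sendto(b"punch", peer_addr)  # creates the outbound mapping
    return s

# Both sides call punch() with each other's public address (learned
# via a rendezvous server or similar); after that, datagrams flow
# directly in both directions.
```
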


I do not understand the point you are trying to make. The person you replied to showed how to evaluate it with simple math.

400 Gb/s is 50 GB/s. RTT of 300 ms would only require 15 GB of buffers. That would not even run a regular old laptop out of memory let alone a server driving 400 Gb/s of traffic. That would be single-digit percents to possibly even sub-percent amounts of memory on such a server.
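That estimate is just the bandwidth-delay product:

```python
# Worst-case in-flight data = bandwidth * RTT (bandwidth-delay product).
link_bps = 400e9   # 400 Gb/s link
rtt_s = 0.300      # 300 ms round-trip time
buffer_bytes = link_bps / 8 * rtt_s
print(f"{buffer_bytes / 1e9:.0f} GB")  # -> 15 GB
```
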


I introduced the concept of bandwidth * delay product to the conversation...

The question was about why use dynamic allocation. In this branch of the thread we were discussing the question "Are there TCP/IP stacks out there in common use that are allocating memory all the time?"

We'd not be happy to see the server or laptop statically reserving this worst-case amount of memory for TCP buffers when it is not in fact slinging around the max number of TCP connections, each with a worst-case bandwidth*delay product. Nor would we be happy if the laptop or server only supported small TCP windows that limit performance by capping the amount of data in flight to a low number.

We are happier if the TCP stack dynamically allocates the memory as needed, just like we're happier with dynamic allocation on most other OS functions.


Do you have citations for sustained 0.5 core-cycles/byte (80 Gb/s)? The benchmarks I have seen are closer to ~20-30 Gb/core-s though I have heard claims of 40-50 Gb/core-s.

