Those shoes are gonna sell like crazy now, but it would be hilarious if they were later found to give an unfair advantage because of some mechanical property of the shoe.
Reviews say that they have very, very good, but not record-breaking, energy return and shock absorption. But what they are is insanely light, at sub-100g.
For a while it was all about getting the lightest shoes, because picking up heavy shoes slowed you down. Then the energy return (pebax foam, carbon plates/rods) became the main focus because the weight didn't matter as much when the shoe was literally springy. Surely this is now going to spark a race for the optimal balance between weight and energy return.
I can absolutely imagine that the "correct" balance varies from one person to another, and yet that this is both measurable and also irrelevant for non-pro athletes.
Like the number one endurance runner in the world will get a minute off their marathon time because a shoe manufacturer spent $1M making custom shoes for that athlete which don't even have a size; they're just "for this one specific person, now". But then some guy on Reddit wants better shoes because he's sure his four-hour marathon would have been "more like three" if he had those elite shoes instead of the $100 Nikes he wore...
The big improvement then was a carbon plate. Adidas (and others) followed suit. The improvements since then have been marginal, but the margins are thin at that level. In this case the big advancement has been the weight of the shoe.
EDIT: Also it's worth noting these shoes are $500 retail. Adidas will for sure get a boost in sales from this, but there's definitely competition in the $200-$300 marathon running shoe space, so not everyone will be drawn solely to Adidas.
Well, the marathon record has been broken 53 times since the early 1900s. So, there are a lot of factors at play. Better training, better nutrition, better tactics, and, yes, better shoes.
The advancements in shoes have made a measurable impact, but there are lots of optimizations being worked on.
Also population and access. In the time since the early 1900s a lot more humans exist, and more of them have the opportunity to attempt this record. Population in Africa exploded in that time and access improved significantly.
If you're bloody quick and born in Birmingham (either of them) in 1900, you can probably find out about and get yourself a chance to attempt that world record, but if you're born in Kapsabet (in Kenya) in 1900, good luck; even in Nairobi I wouldn't bet on it.
Indeed, I've seen LLMs begin to cite their sources, which is a commendable advancement and something that I've been asking for since the beginning of this craze: LLMs as librarians, not as summarizers. But if the commenter had a reputable data source, they should have quoted it and linked it.
I’d be surprised if it were a lot. At that time (open to corrections) not a lot of scientific research was done on consumer Intel platforms.
Obviously it was found by a mathematician, but I still suspect it either wasn't apparent in published research or didn't cause deviations significant enough to make researchers revisit their calculations.
My team ran into some interesting but very small deviations when we moved our iterative solar wind model from 32-bit to 64-bit, but the changes weren't significant enough to revisit or re-do prior research wholesale.
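To give a sense of the kind of effect I mean, here's a toy illustration (nothing to do with our actual model; the update rule, step size, and iteration count are made up): the same naive iterative update run at 32-bit and 64-bit precision slowly drifts apart.

    import numpy as np

    # Toy example only: one iterative update, run at two precisions.
    def iterate(dtype, steps=200_000):
        x = dtype(1.0)
        dt = dtype(1e-7)
        for _ in range(steps):
            x = x + dt * x  # simple explicit-Euler-style growth step
        return x

    print(iterate(np.float32))  # disagrees with the 64-bit result after a few digits
    print(iterate(np.float64))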
Like my team in the 2000s, I suspect anyone whose data was crunched by this bug also revisited it and either concluded it wasn't significant enough or redid the work and found it didn't change the conclusions.
I am curious now whether this bug was cited in any papers at the time, to give a rough idea of how aware or affected academics were.
> At that time (open to corrections) not a lot of scientific research was done on consumer Intel platforms.
We had researchers doing what I suppose might be called HPC on Sequent Symmetrys, which were i386s in the mid-80s and Pentiums by the mid-90s. There were other high-performance x86 SMP boxes that were roughly equivalent (e.g. NCR 3550). That plus some pretty good x86 FORTRAN compilers (e.g. Lahey) made this reasonable. I also know a lot of folks who had desktop/deskside SMP PPros + FORTRAN to save grant money on the big iron and got useful work out of them.
Basically, x86 was way cheap and had useful amounts of FP. There's a reason x86 displaced RISC; this is one. I'm sure they would have rather used something like an X-MP/48, but one plays the hand one is dealt.
None of the science being sabotaged was being published in peer-reviewed journals, was it? (Besides the Portuguese hydrodynamic modeling stuff, but it could have been accidental or had other uses.)
And yes, to be clear, I don’t consider it contributing to “science” if it’s not published, reviewed, and reproducible.
> That kind of notation, called SCCS/RCS, is the equivalent of finding a rotary phone in a modern office. Nobody uses it in 2005 Windows kernel code unless their programming background goes back decades, to government and military computing environments.
The astrophysics lab I worked at in 2006 was still using svn and had a bunch of Fortran with references to systems from the 70s and 80s. The code ran perfectly well thanks to modern optimizing compilers and to having moved from VAX to Linux in the 90s; it was a surprisingly seamless transition.
It reminds me of a conference talk I've referenced before, "do over or make do", basically arguing that rewriting large amounts of mostly functioning code was not worth the effort if it could be taped together with modern tools.
Ha, I worked for a company that until ~2012 still used an RCS-backed SCM: an absolute hack job on a shared file share that wrapped RCS with a "project file" to allow a tree of specific revisions for a "project". "MKS" it was called. And by the sound of it the "old" '90s version, not the Java EE rewrite.
That meant the files had the entire "$Revision: 1.3 $" nonsense and "file changelog" at the top too - though many newer files never bothered to include the tags to actually get RCS to replace them. Inconsistent as hell.
And while the "family" of devices the software was for traces its origin to the mid '90s, functionally none of the code was older than ~5 years at that time.
Naturally, even with only a few tens of engineers it regularly messed up: commits stepped on each other's toes and the entire tree got corrupted over and over. For fun I wrote a script that read it all and imported the entire history into git - you only had to go back a few years before the entire thing was absolute nonsense.
I have no idea why that was still being used then, but I assume it had been in use from the very start of that entire hardware family. Perhaps as it was fundamentally a "hardware" company - which until surprisingly recently seemed to consider "source control" to be "shared folders on remote machines" - "software" source control wasn't considered a priority.
The issue was that the RCS files were simply corrupt - no matter what tool you used, the older deltas were just bad. People just didn't notice/care as they were "old" revisions.
And I couldn't find any tool that supported the MKS "project" files that linked multiple RCS revisions into a single "commit", so something a little custom was needed anyway. At least for the ancient MKS version used.
Quite a bit of effort was put into it during the "official" migration, but they eventually gave up too as even the oldest backup archives they could find had the same issues.
If you're using R in 2026, you're probably invoking code compiled from Fortran from the 70s/80s somewhere along the line. It's a foundation for a lot of numerical computing.
Yeah, I used to be skeptical of the government provenance of things like Stuxnet (I am not any more, I'm fully sold, like everyone else), and notes like this were why. People used RCS well into the 2000s! RCS as a tool had virtues over SVN and CVS.
My favorite part of the paper is that the “attack” isn’t just exploiting a bug — it’s exploiting how different components interpret the same input. Modifying an executable as it’s loaded into memory is one example, but the deeper pattern is the mismatch.
What’s interesting about the malware in this post is that it goes one step further: instead of exploiting mismatches, it corrupts the computation itself — so every infected system agrees on the same wrong answer!
More broadly: any interpretive mismatch between components creates a failure surface. Sometimes it shows up as a bug, sometimes as an exploit primitive, sometimes as a testing blind spot. You see it everywhere — this paper, IDS vs OS, proxies vs backends, test vs prod, and now LLMs vs “guardrails.”
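To make the pattern concrete, here's a minimal sketch of the proxy-vs-backend flavor (hypothetical function names, not from the paper): both components receive the same path string, but only one of them normalizes it, so they disagree about what was actually requested.

    import posixpath

    def gateway_allows(path):
        # Hypothetical edge check: blocklist applied to the raw string.
        return not path.startswith("/admin")

    def backend_route(path):
        # Hypothetical backend: normalizes the path before routing.
        return posixpath.normpath(path)

    req = "/public/../admin/delete_user"
    print(gateway_allows(req))  # True: the gateway sees "/public/..."
    print(backend_route(req))   # "/admin/delete_user": the backend routes to what the gateway thought it blocked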
Fun HN moment for me: as I was about to post this, I noticed a reply from @tptacek himself. His 1998 paper with Newsham (IDS vs OS mismatches) was my first exposure to this idea — and in hindsight it nudged me toward infosec, the Atlanta scene, spam filtering (PG's bayesian stuff) and eventually YC.
The paper starts with this Einstein quote "Not everything that is counted counts and not everything that counts can be counted", which seems quite apt for the malware analyzed here :)
On Linux, with a compose key, it's <compose><-><-><.> (at least with the settings I have, I don't think I overrode that one). "⸻" is even more fun. You can even make your own sequences, e.g. I've got <compose><O><h><m> for "Ω", and <compose><m><u> for "μ", very handy for electrical stuff like "160μA at 1.8V needs a resistance of 1.25kΩ, dissipating 288μW".
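If anyone wants to set up their own: on X11 the custom sequences live in ~/.XCompose (exact behavior can vary with your input method, so treat this as a sketch); the two sequences I mentioned look roughly like:

    include "%L"
    <Multi_key> <O> <h> <m> : "Ω"
    <Multi_key> <m> <u>     : "μ"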
On a Mac, at least, the "correct combination of buttons" is trivial and easy to remember, even for someone like me who rarely uses em-dash. (But, I want to start using it more because I'm sick to death of people treating it as a scarlet letter.)
> used to be skeptical of the government provenance
Do you mean skeptical on which government was responsible or that it was in fact a government effort?
I can see how attribution could be debatable (between two main suspects mainly), but are / were there any good arguments against this being a gov effort? I would find it highly unlikely that someone other than a gov could muster up so much domain knowledge, source pristine 0days and be so stealthy at the same time.
I didn't want to give bumptious government CNE teams that much credit, and also a lot of the indicators people were giving of state origin didn't seem all that predictive. I don't agree with your premise that it takes a state-level adversary to collect the domain knowledge needed to do this stuff, and I certainly don't agree about the "pristine zero days".
I do wonder if these breadcrumbs were also left intentionally. “Oh look, we are using old stuff, don’t be afraid!” Or for some other reason. It is a little surprising to pull off such a sophisticated attack and miss details you could find running ‘strings’ unless I’m missing something and this part was encrypted.
I think that in the time period we're talking about, RCS wasn't really even all that old. Like, RCS is old, sure, but it was also in common use especially by Unix systems people; it's what you might have reached for by default to version your dotfiles, for instance.
Yes, but even back then I was aware of the sections in executables (wasn't this where it was found?) and any neckbeard from the 70s and 80s might be even more so aware. That said, yeah, sure, it's a very possible and understandable oversight, but I'm wary because of all the text in viruses and such as indicators. Seems like a pass over 'strings' would be obvious. Though, TIL: strings doesn't necessarily scan the entire executable.
The same binary has encrypted strings, so I assume there was a pass, but if you look at the source control strings they seem to decrease the appearance of maliciousness; even today they are out of place for malware.
Does that mean that three-letter agencies were/are able to recruit from the relevant fields for each type of malware? For example, fast16 might actually be written by someone who used to write scientific calculation software, while Stuxnet was written by someone who used to work for Siemens?
Don't think of it as a materials simulation engineer being recruited and trained on how to write complex malware.
Rather, this was developed by a team of 6-8 people. Maybe two or three of them working on the implant, another engineer handling the exploits and propagation, and yet another building the LP and communications channels. They are supported by a scientist with deep knowledge of the process they are messing around with (say, developing nuclear weapons), and a mathematician who knows how to introduce subtle and undetectable errors.
I doubt you will find an answer here, but a few bits of anecdata:
1. CIA had recruiting events that invited STEM majors at my university; I suspect they do this very broadly.
2. Our funding came partially from the Air Force, and part of the rules was that our data and source had to be open. We know from conversations and other details from integrating with Air Force partners that they had models like ours that were an order of magnitude more accurate, because they amalgamated models from all academics in our field and had their own career scientists on staff (often coming up through military ranks).
Try to remember how hypothetical everything tended to be before Snowden. And 'twas a meager pittance that was revealed. They have toys that'd blow minds and people yee'd swear weren't people. It's all fun and games to poke fun, but holy shit those guys are NTBF'dW.
Every academic institution, every school, all under the radar of recruitment and more. It's difficult to believe, but the network is real.
There are certainly people here on HN who've been solicited, most who'll never mention it.
It's fun to imagine, though, what tight groups of highly motivated, stupidly intelligent people can do when they collectively commit to doing so - and with a hefty budget to assist.
Fun to imagine that and painful to think of what we could have if such efforts and budgets were put toward education, healthcare, social welfare, public infrastructure + reliability, etc.
Exactly. But there's ideology, and there's reality. You know how pervasive and colossal the black budget is. We could be, as a society, almost unimaginably advanced of where we are, sans such things, sans the modern patent system, sans greed, sans corruption.
But we are, precisely where we are
Edit: I thought it prudent to leave a reminder, that the US military operates beyond patent regulations. If they want or need something, the silly games end there. And they do what they will.
Re-factoring code is a _panacea_ -- it's more likely that the factors which made the code need re-factoring in the first place are still very much in place, ready to produce the same condition again, and around you go for another round. The factors that produce the causes of re-factoring usually border on psychological causes embedded deeply within the brains of the developer or developers who own the code: habits, beliefs, convictions, even "professional traumas". Related here is Conway's Law, where the team, for all individual capacity and capability, cannot but build software that mimics the structure of the developers' ultimate (larger) organisation, thus tying the success of the former to the success of the latter. Re-factoring will largely just repeat the outcome if the organisation hasn't changed.
The exception being, obviously, a team approaching someone else's codebase -- including that of their predecessor, if they can account for Conway's Law -- to re-factor it.
But the same person or persons announcing re-factoring? I always try to walk away from those discussions, knowing very well they're just going to build a better mouse trap. For themselves.
Don't get me wrong, iteration of your own then-brain's product is all well and good, but it takes _more_ to escape the carousel. It takes sitting down and noting down primary factors driving poor architecture and taking a long hard look in the mirror. Not everything is subjective or equivalent, as much as many a developer would like to believe. It's very attractive to stick to "as long as we're careful and diligent, even sub-optimal design can be implemented well". No, it won't be -- this one is a poster-child exception to the rule if there ever was one -- your _design_ is the root and from it and it alone springs the tree that you'll need to accept or cut down, and trimming it only does so much.
I meant to write "not a panacea", my bad. It's not the universal cure people think it is. And people _do_ think that re-factoring will magically solve problems, while it doesn't do all that much in practice, less so when you factor in the costs spent on re-factoring.
We used cvs, but switched to svn before/around 2006, though I could be mixing that up. We had not switched to git even by 2012 when I left.
The reference to the 70s and 80s code didn't imply it was version-controlled before svn/cvs, if that's what you meant; by that time it was, and it still had old timestamps commented in the text files.
I just wanted to say that "still using svn in 2006" sounds odd when talking about a version control system that had existed for only a few years, and whose eventual replacement was only a year old at the time.
gcc, for example, transitioned to subversion in 2006 and switched to git only in 2019 [0]
I miss the days of knowing who last touched every source file and precisely what version it was:
$ what /usr/bin/file
/usr/bin/file:
PROGRAM:file PROJECT:file-106
$File: apprentice.c,v 1.309 2021/09/24 13:59:19 christos Exp $
$File: apptype.c,v 1.14 2018/09/09 20:33:28 christos Exp $
$File: ascmagic.c,v 1.109 2021/02/05 23:01:40 christos Exp $
$File: buffer.c,v 1.8 2020/02/16 15:52:49 christos Exp $
$File: cdf_time.c,v 1.19 2019/03/12 20:43:05 christos Exp $
$File: cdf.c,v 1.120 2021/09/24 13:59:19 christos Exp $
$File: compress.c,v 1.129 2020/12/08 21:26:00 christos Exp $
$File: der.c,v 1.21 2020/06/15 00:58:10 christos Exp $
$File: encoding.c,v 1.32 2021/04/27 19:37:14 christos Exp $
$File: fsmagic.c,v 1.81 2019/07/16 13:30:32 christos Exp $
$File: funcs.c,v 1.122 2021/06/30 10:08:48 christos Exp $
$File: is_csv.c,v 1.6 2020/08/09 16:43:36 christos Exp $
$File: is_json.c,v 1.15 2020/06/07 19:05:47 christos Exp $
$File: is_tar.c,v 1.44 2019/02/20 02:35:27 christos Exp $
$File: magic.c,v 1.115 2021/09/20 17:45:41 christos Exp $
$File: print.c,v 1.89 2021/06/30 10:08:48 christos Exp $
$File: readcdf.c,v 1.74 2019/09/11 15:46:30 christos Exp $
$File: readelf.c,v 1.178 2021/06/30 10:08:48 christos Exp $
$File: softmagic.c,v 1.315 2021/09/03 13:17:52 christos Exp $
$File: file.c,v 1.190 2021/09/24 14:14:26 christos Exp $
...
WHAT(1) General Commands Manual WHAT(1)
NAME
what - show what versions of object modules were used to construct a file
SYNOPSIS
what [-qs] [file ...]
DESCRIPTION
The what utility searches each specified file for sequences of the form
"@(#)" as inserted by the SCCS source code control system. It prints the
remainder of the string following this marker, up to a NUL character,
newline, double quote, `>' character, or backslash.
The following options are available:
-q Only output the match text, rather than formatting it.
-s Stop searching each file after the first match.
EXIT STATUS
Exit status is 0 if any matches were found, otherwise 1.
SEE ALSO
ident(1), strings(1)
STANDARDS
The what utility conforms to IEEE Std 1003.1-2001 ("POSIX.1"). The -q
option is a non-standard FreeBSD extension which may not be available on
other operating systems.
HISTORY
The what command appeared in 4.0BSD.
BUGS
This is a rewrite of the SCCS command of the same name, and behavior may
not be identical.
macOS 26.4 December 14, 2006 macOS 26.4
I wonder if this app is also an “app clip” - the small size applets that can bypass download size restrictions (and maybe, because of that, other restrictions).
How big is the installed size, and can you go to settings and delete app data before deleting the app? You may also check to see if it’s installed on other devices connected to your iCloud account or if you have any devices you never disconnected including Apple Watch as they have a watch app.
My wife works for MacRumors, and she once found an incomprehensible HomeKit bug; we had an Apple engineer come to our house to diagnose it (we lived near Cupertino).
It turned out to be a bug in the legacy HomeKit after upgrading on the backend. Totally opaque to the end user.
I think it likely speaks to how much more common they are as exotic pets than they have been in the past. That she found it before it died is surprising, and the longer I think about this story, the more I wonder if they just bought it as a pet and the river discovery was a gag for online clout.
Wales is a lot smaller than the continental United States. What do you expect them to say? "Cardiff is part of Wales, unceded territory of the Welsh"? That would be entirely performative. If you feel strongly about this topic, you ought to demand more meaningful steps, such as the use of Welsh language place names.
Yeah: and none for Wales. There's a database (https://historical-boundaries-of-wales-rcahmw.hub.arcgis.com...), but I've never ever seen or heard anyone using this information to make land acknowledgements, nor can I find examples online. The last major genocide in Wales, to my knowledge, was over a thousand years ago: none of the cultures involved still exist in any meaningful form. The closest we have in modern times is events like Boddi Tryweryn (the 1965 destruction of Capel Celyn), and everybody knows about those.
You can't just take a cultural practice (land acknowledgements) from one part of the world (Abya Yala, Ahitereiria, Aotearoa, Te Waipounamu), and copy-paste it onto a different part of the world (Wales), and expect it to have the same significance. That's cultural appropriation, and it is frowned upon: by reducing the significance of an act to empty symbology and meaningless facade, you do harm in the trappings of a revolutionary activism intended, by its actual practitioners, to reduce and mitigate harm.
Ironically, I'm right now appropriating the vocabulary of those activists – probably incorrectly – in order to try to win an argument with no real significance, on a forum run by Venture Capital. But this isn't a particularly harmful thing for me to be doing (except perhaps to my reputation: what if the children call me cringe?), so I think it's alright.
"We inhabitants of Dry Land must acknowledge that we all descended from these superior beings of Water Worlds. All the salt water in our veins is a debt and homage to the Water Beings from whom we stole it. We Dry Landers will forever devote ourselves to lifting up on a pedestal, these Water Beings, as long as that pedestal is submerged deep underwater. We solemnly pledge and promise the payment of reparations, in the form of Sea Monkeys for breakfast."
From the article it doesn't appear they've ever been found alive in the wild anywhere but their natural habitat. This was likely a remarkable chance happening where an owner released one and she found it soon after; otherwise it likely would have died very quickly.
If there is a wild population, that would be an even more amazing story.
I did think it was strange they didn't spell that out though. Maybe they thought 'Mexican' makes it clear, but it reads too easily like a species name.
Yeah, I didn't want to spoil the article with my comment, it was a good read, but it did immediately make sense why they're so popular now. I've met multiple people in passing who own axolotls. I used to think I was super special that I met a guy who owned one, and I assumed it was because he was a famous neuroscientist and had some special permission, but now they're relatively common as pets (to a degree).
Sure, but odds are that if you think you've seen an axolotl in the wild, it's far more likely to be a juvenile salamander or one of the other more populous species of salamanders exhibiting neoteny.
Unfortunately, the whole Minecraft thing caused a lot of people to buy them with little understanding of proper care, so I suspect there's some "that's cool, but please don't rush in unprepared" in the hard-to-keep message. There are also some misconceptions around water quality requirements: they really don't like chemical pollutants, but I have no issues with my local municipal water; other areas could have issues and require RO water, etc., but there are plenty of tropical fish keepers in this same situation.
And then there's the water temp thing; that caught me off-guard, and I was using frozen water bottles for a few weeks until my chiller arrived. If the tank had been located in a different part of the house, it might have been required.
From another comment here: "you need to be able to keep the water below 24 Deg C, this means spending some money on chillers even in sub-tropical countries"
I think people anticipate needing heaters for certain types of fish, but I'd never have expected to buy a cooling unit for aquatic life.
Yeah, adding in a chiller makes things way more complicated than just adding a resistive heater. A decent-looking chiller for an aquarium is ~$1,000, plus you need temp sensors and control wiring to maintain the setpoint properly, and then you need to pray the electricity doesn't go out. A 1/3 HP chiller draws around 1 kW including the circ pump.
An aquarium backup battery for a simple pump is like $50 for something that'll last a few hours of outage, but for a chiller with that kind of draw, it's a bit more expensive.
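Rough back-of-envelope (assuming a 12 V lead-acid bank behind an inverter, ~85% inverter efficiency, and only ~50% of the battery capacity usable):

    1 kW x 1 h of outage         = 1 kWh delivered
    1 kWh / 12 V                 ≈ 83 Ah drawn from the bank
    83 Ah / 0.85 inverter eff.   ≈ 98 Ah
    98 Ah / 0.5 usable capacity  ≈ 200 Ah of battery per hour of runtime

So it's roughly a couple of large deep-cycle batteries for every hour you want to ride out, which is a very different proposition from the $50 simple-pump backup.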
Yeah, you’d need a 1500 VA UPS to back it up, plus a decent amount of batteries (I don’t know the math on those, someone else figures that part out for me haha)
Aquarium circ pumps can probably be powered directly by 12VDC? That would make sense if it’s only $50 for battery backup.