
Here is the xv user manual, which contains plenty of screenshots:

https://dav.lbl.gov/archive/NERSC/Software/xv/help/xvdocs.pd...

I think the "Miscellaneous Ramblings" section on the final page really captures his personality:

Section 13.3: Miscellaneous Ramblings

And, of course, thanks to everyone else. If you contributed to the development of xv in some way, and I somehow forgot to put you in the big list, my humble apologies. Documentation and careful record-keeping are not my strong suits. “Heck,” why do you think it takes me a year and a half to come up with a minor new release? Because, while I love to add new features to the code, I dread documenting the dumb things. Besides, we all know that writing the documentation is the hardest part of any program. Particularly when the good folks at id Software insisted upon releasing DOOM II...

And finally, thanks to all the folks who’ve written in from hundreds of sites world-wide. You’re the ones who’ve made xv a real success. (Well, that’s not actually true. My love of nifty user-interfaces, all the wonderful code I’ve gotten from the folks listed above, and the fact that xv actually serves a useful purpose (albeit “displaying pictures of naked women”) are the things that have made xv a real success. You folks who’ve written in have given me a way to measure how successful xv is.) But I digress. Thanks!

By the way, when I last counted (in October 1992), xv was in use at 180 different Universities, and dozens of businesses, government agencies, and the like, in 27 countries on 6 of the 7 continents. Since then, I’ve received messages from hundreds of new sites. And xv has been spotted in Antarctica, bringing the total to 7 of 7 continents, and allowing me to claim that xv is, in fact, truly global software. That’s probably a good thing. Does anybody know if there’s a Unix workstation in the Space Shuttle?... :-)


I listened to the podcast linked in the article, and my understanding of the timeline is:

- The owner originally had two dogs. Both disappeared from her backyard one day. One dog returned home. The other vanished without a trace.

- Eleven years later, a random girl found the missing dog outside. She befriended the dog and brought him home. She talked with her parents and contacted ACCT Philly, who in turn found the original owner through a microchip.

Does this make sense? To me, this story managed to be a rare mix of heartwarming, insightful and frustrating.


Eleven years seems like a very long time to be a Philly street dog - kinda makes you wonder if it wasn't adopted by somebody in the interim before ending up with the girl somehow.

To fill in the details, here is the code used for the measurement:

https://github.com/lemire/counters/blob/main/include/counter...

It fetches the number of mispredicted branches from Linux's perf subsystem, which in turn gathers the metrics from the CPU's PMU (Performance Monitoring Unit) interface.
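
The linked header does this in C++ via the perf_event_open() syscall. For a rough sense of the same measurement from Python, here is a hypothetical sketch that shells out to `perf stat` instead (the wrapper and its parsing are my own illustration, not code from the linked repo, and it assumes a Linux box with perf installed):

```python
import re
import subprocess

def parse_branch_misses(perf_stderr: str) -> int:
    """Extract the branch-misses count from `perf stat` output.

    perf prints counts with thousands separators, e.g.
    "     1,234,567      branch-misses".
    """
    m = re.search(r"([\d,]+)\s+branch-misses", perf_stderr)
    if m is None:
        raise ValueError("no branch-misses line found")
    return int(m.group(1).replace(",", ""))

def measure_branch_misses(cmd: list) -> int:
    """Run a command under `perf stat` and return its branch-miss count."""
    result = subprocess.run(
        ["perf", "stat", "-e", "branch-misses"] + cmd,
        capture_output=True, text=True,
    )
    # perf writes its statistics to stderr, not stdout
    return parse_branch_misses(result.stderr)

# Example (requires perf): measure_branch_misses(["ls", "-R", "/usr/lib"])
```

The direct perf_event_open() route that the header uses avoids the subprocess overhead and lets you scope the counter to a specific code region rather than a whole process.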


Internally, Python holds a string as an array of uint32. A utf-8 representation is created from it on demand (and cached). So pansa2 is basically correct [1].

IMO, while this may not be optimal, it's far better than the more arcane choice made by other systems. For example, due to reasons only Microsoft can understand, Windows is stuck with UTF-16.

[1] Actually it's more intelligent. For example, Python automatically uses uint8 instead of uint32 for ASCII strings.


There is no caching of a "utf-8 representation". You may check, for example:

  >>> x = '日本語'*100000000
  >>> import time
  >>> t = time.time(); y = x.encode(); time.time() - t # takes nontrivial time
  >>> t = time.time(); y = x.encode(); time.time() - t # not cached; not any faster
Generally, the only reason this would happen implicitly is for I/O; actual operations on the string operate directly on the internal representation.

Python uses either 8, 16 or 32 bits per character according to the maximum code point found in the string; uint8 is thus used for all strings representable in Latin-1, not just "ASCII". (It does have other optimizations for ASCII strings.)
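
You can observe the per-character width from pure Python with sys.getsizeof: growing a string by N characters adds roughly N, 2N, or 4N bytes depending on the widest code point in it. A quick check (CPython-specific; the exact object overheads are an implementation detail):

```python
import sys

def bytes_per_char(ch: str, n: int = 1000) -> float:
    """Estimate storage per character by growing a string of `ch`.

    Differencing two lengths cancels out the fixed object header.
    """
    return (sys.getsizeof(ch * 2 * n) - sys.getsizeof(ch * n)) / n

print(bytes_per_char('a'))   # ASCII        -> 1.0
print(bytes_per_char('é'))   # Latin-1      -> 1.0 (not just ASCII)
print(bytes_per_char('日'))  # BMP          -> 2.0
print(bytes_per_char('𝄞'))  # astral plane -> 4.0
```

Note that 'é' also gets the 1-byte representation, illustrating that the compact form covers all of Latin-1, not just ASCII.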

The reason for Windows being stuck with UTF-16 is quite easy to understand: backwards compatibility. Those APIs were introduced before there were supplementary Unicode planes, such that "UTF-16" could be equated with UCS-2; then the surrogate-pair logic was bolted on top of that. Basically the same thing that happened in Java.


> There is no caching of a "utf-8 representation".

No, there certainly is. This is documented in the official C API documentation:

    UTF-8 representation is created on demand and cached in the Unicode object.

    https://docs.python.org/3/c-api/unicode.html#unicode-objects
In particular, Python's Unicode object (PyUnicodeObject) contains a field named utf8. This field is populated when PyUnicode_AsUTF8AndSize() is first called and reused thereafter. You can check the exact code I'm talking about here:

https://github.com/python/cpython/blob/main/Objects/unicodeo...

Is it clear enough?
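
For what it's worth, you can even watch the cache from Python by calling that C function through ctypes and comparing the returned buffer addresses; the second call hands back the same pointer instead of re-encoding. A CPython-only sketch:

```python
import ctypes

# PyUnicode_AsUTF8 returns a pointer to the string object's cached
# UTF-8 buffer, encoding (and caching) it on first use.
as_utf8 = ctypes.pythonapi.PyUnicode_AsUTF8
as_utf8.restype = ctypes.c_void_p      # raw address, so we can compare pointers
as_utf8.argtypes = [ctypes.py_object]

s = '日本語' * 1000
p1 = as_utf8(s)  # first call: encodes to UTF-8 and caches the buffer
p2 = as_utf8(s)  # second call: returns the cached buffer
assert p1 == p2  # same address -> cached, not re-encoded
```

(This is the C-API cache; plain str.encode() still builds a fresh bytes object each time, which is consistent with the timing test upthread.)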


The C API may provide for it, but I'm not seeing a way to access that from Python. This sort of thing is provided for people writing C extensions who need to interface to other C code.

(And the code search seems to be broken; it can't find me the definition of `unicode_fill_utf8` although I'm sure it's obvious enough.)



Coincidentally, Chi-kawa is a very popular anime character in Japan.

https://en.wikipedia.org/wiki/Chiikawa

It's a portmanteau of "Chiisai" (small) and "Kawaii" (cute).


The evolution of software engineering is fascinating to me. We started by coding in thin wrappers over machine code and then moved on to higher-level abstractions. Now, we've reached the point where we discuss how we should talk to a mystical genie in a box.

I'm not being sarcastic. This is absolutely incredible.


And I've been around long enough to go through that whole progression -- actually from the earlier step of writing machine code. It's been, and continues to be, a fun journey, which is why I'm still working.


Among the interviews, one with the former engineering director was the most eye-opening for me.

https://data.ntsb.gov/Docket/Document/docBLOB?ID=17236880&Fi...

It appears that all the engineers -- system designer, material engineer and structural analyst -- thought that the OceanGate CEO was going to kill himself:

    If you ever find <name-of-the-engineer>, he’s not going
    to have a whole lot of nice to say. He was very frustrated
    with the company. (...) And I understand why. He thought
    Stockton was going to kill himself.
And the director himself declined to dive on Titan when asked:

    Now, the question is, why wouldn’t the engineer get inside
    his own vehicle? It was because of what I felt -- and I have a
    background in Navy diving in EOD operations. I knew firsthand
    that the operations group was not the right group for that role,
    and I told him as much, that I don’t trust operations and who he
    has there.


The number of stupid decisions that went into the design and construction of the Titan is astonishing. One of my favorites was that, after putting on the carbon fiber around the tube, they would sand imperfections to make the surface perfectly smooth, severing layers in the process! It shouldn't require an engineering degree from MIT to recognize this as ill-advised.


Even without that, the material is just wrong. It’s strong in tension, not so much compression. Tends towards sudden brittle fractures. Doesn’t like impacts, as it tends to have issues with delaminating.

It’s just not what you ever want as a sub hull. It’s dumb.

And weight is not even a huge issue for a sub!


Yes, using carbon fiber was also a very bad decision; it was known for a very long time that it was only good for a single-use sub, because after the first dive it was too damaged to continue. In 2014, Virgin Oceanic, which had similar plans with similar technology, closed shop because it didn't make economic sense to build a new sub for each dive.

But weight is absolutely an issue; the basic and tried-and-true metal sphere design allows for only three people. Since size and thickness grow exponentially, making a sphere for more than three people becomes more and more difficult. And it should also be possible to lift the vehicle with a crane.

But if you want to carry paying passengers (like Oceangate did), having only two per dive is very limiting. That's why they went with a tube design, and carbon fiber to limit weight. But it couldn't work, and it didn't.


> size and thickness grow exponentially
It's a [reverse] pressure vessel, so it follows pressure vessel scaling. Mass scaling is linear with internal volume.


It’s funny how “literally” often means “figuratively” now, and “exponentially” means “polynomially”.



Ok yes "exponentially" was hyperbolic. Mass scales linearly with volume, but volume is proportional to the cube of the radius (not linear).

Also, in practice, small imperfections can have a disproportionate impact on the resistance of the sphere so design codes typically apply conservative reductions that can have a big impact on actual thickness requirements.
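
To put rough numbers on it: for a thin-walled sphere, the required wall thickness grows linearly with radius (t ≈ P·r/(2σ)), so hull mass goes as r²·t ∝ r³, i.e. linearly with enclosed volume. A back-of-the-envelope sketch with illustrative (not design-grade) numbers for a titanium hull at roughly Titanic depth:

```python
import math

def sphere_hull_mass(radius_m, pressure_pa, yield_pa, density):
    """Thin-wall estimate: t = P*r/(2*sigma), mass = 4*pi*r^2 * t * rho."""
    t = pressure_pa * radius_m / (2 * yield_pa)
    return 4 * math.pi * radius_m**2 * t * density

P = 380e5       # ~380 bar at ~3800 m depth
sigma = 800e6   # illustrative allowable stress for a titanium alloy, Pa
rho = 4500      # titanium density, kg/m^3

m1 = sphere_hull_mass(1.0, P, sigma, rho)
m2 = sphere_hull_mass(2.0, P, sigma, rho)
print(m2 / m1)  # ~8.0: doubling the radius gives 8x volume and 8x hull mass
```

So the scaling itself is "only" cubic, but as noted, imperfection-driven buckling knockdowns push real designs well above the thin-wall minimum.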


Did this thing meet any design codes though? I doubt it.


I read the report when it came out. From memory, no. It never had any components or certification for human pressure vessels. IIRC there are no existing regs for carbon fiber, and it would have cost something like $50M to do the design and test work. They did buy some things, like the viewport, from companies who make certified parts, but opted for the same design minus any test certs to save money. The craft was never certified or inspected by the USCG. It did have a registration for a while, but they had to play find-a-new-district-sign-off shell games for a while, then… just stopped bothering.


Thanks for the detailed answer! It doesn't surprise me at all.


“Strong in tension, not compression” is a meme, and obviously wrong. It is certainly stronger in tension, but it is also remarkably strong in compression. That’s why it’s used - yes, in compression - in modern passenger aircraft. You don’t even need to know that, though; the simple fact is that the Titan had a double-digit number of deep dives. If it was weak in compression it would not have survived diving to 3.7 kilometers deep or even a fraction of that depth _once_.

That said, yes, it’s a difficult material to use properly, and they were a bunch of cowboys slapping things together. It’s no surprise that they missed several critical steps and created a sub doomed to fail.

N.b. all of this was kickstarted by James Cameron saying that carbon fiber has “no strength in compression” in a New York Times “science” article, which the Times printed directly.


Aircraft fuselages are typically loaded in tension. It’s a key part of the design.

Carbon fiber compressive strength is only ~ 30-50% of its tensile strength because of the way the fibers and the epoxy interact. When compressed, the carbon fibers don’t do as much. [https://www.sciencedirect.com/science/article/abs/pii/S02638...]

But don’t believe me, actually read a useful paper on the subject.

In fact, it’s a major factor limiting its wider use. As is its fatigue behavior, which would probably also explain why it eventually imploded!

I never followed James Cameron’s interview, but it sounds like he knows what he is talking about!


James Cameron certainly knows a lot about submarines, but if he says something factually incorrect then it’s factually incorrect, period. Carbon fiber does not have “no strength in compression” and it is used in compression in countless applications, for example airplane wings. Again, the fact that the sub - built at absurdly low cost for its size, built by a bunch of cowboys that didn’t know what they were doing - DID survive to 3.7 km deep on several occasions is proof sufficient. If CF had no compressive strength then the whole thing would have failed at a tiny fraction of that depth. If CF had no compressive strength then what kept the sub together during the successful dives? Hopes and dreams?

I’m not here to defend the decision to use carbon fiber, and as I’ve said I completely agree that there are many issues with using it in this application. Delamination, water ingress, bonding the titanium to the carbon fiber, difficulty of manufacture including varying layer thickness and voids, sensitivity to impact, the list goes on. But _those_ are the issues, not the compressive strength.


Moved the goalposts again eh? While completely ignoring the cites and discussion? What, were you a major shareholder? Family member?


Speaking of which, I heavily recommend reading the interview linked by the prime ancestor comment of this chain. It’s really clear he knows what he is talking about.


I don't like this interpretation of things. It's worthwhile to experiment and try things. They were basically mentally ill as a group and rejected genuine concern. Everyone wants to shit on the build, but it was the human relations that killed it.


Also, honestly, the build. That “genuine concern” they ignored was that the build was critically flawed. I don’t think anyone here would have these takes if a small group of curious engineers tried their hand at a composite submersible; it was when they kept doing it after all the qualified engineers had said, “This is crazy, I’m out.”

The build was kind of dumb, and I’m hardly an engineer. It’s common sense. Carbon fiber composites are interesting because they’re strong relative to their weight. Remove either of those features and they become pointless.

Who cares if a submarine is heavy?


A submarine needs to be light to be neutrally buoyant in order to fly through the water properly. Otherwise you have a bathyscaphe, which has some other not-nice failure modes (in addition to the "implodes if you f it up" one), is much less maneuverable, and arguably the whole system is less durable and more costly for high-tempo operations. "Just build a tube strong enough and big enough to not need all that" is a better answer, if you can pull it off.


'And the director himself declined'

An anecdotal personal story, as it aligns with this exact statement -- although no one got killed, data breaches certainly occurred.

Many years ago now I was approached to join the board of a financial technology company, and they spared no expense in literally rolling out the red carpet for my arrival. I found it all very laughable, being solely focused on the business and technical details, as I was not being fooled by all the schmoozing. After hearing all the unrealistic business objectives and the promise of having the Philadelphia Flyers involved, I asked to meet the technology team that built the product and to see a demo. They brought in one young guy who had built it all -- the executives still present, mind you -- and allowed me to ask any and all questions about the platform, which nearly no one in management comprehended. After seeing the demo, which involved several blatant security issues, I asked only one more question of the sole developer: "Would you put your financial information into this system?"

He provided his answer in front of the company's executive board, and I can still see their reactions to this very day. I then stood up, thanked everyone for the opportunity, and left.


Wow! Man, an insider with these kinds of concerns isn't exactly exonerating or excusing themselves with such a testimony. Whistle-blowing to any relevant authority as hard as possible seems like the bare minimum? And if there's no governing agency to pass the responsibility over to, I think you gotta quietly approach the first customer (or victim) with these concerns, if not a newspaper.


I read that the pilot was also basically suicidal. His wife had died, and he was completely fine with the danger because he would die doing what he loved, and he didn't really want to live anymore.


Wasn't the pilot Stockton Rush? His wife was alive. Who are you referring to? I tried to check your claim but I couldn't verify it.


They're talking about Nargeolet.

> Wreck expert Paul-Henri "P.H." Nargeolet, who was also onboard, told me he wasn't worried about what would happen if the structure of the Titan itself were damaged when at the bottom of the ocean. "Under that pressure, you'd be dead before you knew there was a problem." He said it with a smile.

(as recounted by Arnie Weissmann, in Travel Weekly article published June 22, 2023)


Yeah, I got the roles of pilot and guide mixed up.

https://www.newyorker.com/news/a-reporter-at-large/the-titan...


There was only one other crew member in that vessel (well, actual crew and not paper "mission specialist"). He was an older gentleman, and it's quite common for older people to have lost their spouse. Was that so hard to figure out?


Yeah, but he remarried.


> No data with a timestamp after May 16th was found on the camera, so it is likely that none of the data recorded on the SD Card were of the accident voyage or dive.

Evidently the camera data was being recorded to an external SSD in the mission computer when the accident occurred.

The investigation team actually managed to salvage the PC as well:

https://data.ntsb.gov/Docket/Document/docBLOB?ID=19169363&Fi...

Sadly it turned into a compressed ball of metal...


From the report:

> To conduct the CT scans, the large mass was evaluated by a third-party laboratory under NTSB supervision. This facility had a range of scanners with different power and energy levels and could scan large masses using a rotating table, avoiding the need to rotate the mass itself. Ultimately, the third-party laboratory attempted to image the large mass at a power as high as 320 kilovolts (kV). The scans conducted at 320 kV were not powerful enough to penetrate the object, and as a result, no internal structures or voids were visible, and no memory devices could be identified. The NTSB evaluated using another laboratory with higher power and energy CT scan devices, however, there was concern that increased CT scan energy could damage data stored on any surviving NVM chips. Consequently, higher-energy scans were not pursued.

I'm no expert, but remember reading about neutron imaging ([1]). I'm curious if that was deemed unfeasible, too expensive, or having little chance of success? From Wikipedia:

> X-rays are attenuated based on a material's density. Denser materials will stop more X-rays. With neutrons, a material's likelihood of attenuation of neutrons is not related to its density. Some light materials such as boron will absorb neutrons while hydrogen will generally scatter neutrons, and many commonly used metals allow most neutrons to pass through them.

[1] https://en.wikipedia.org/wiki/Neutron_imaging#Neutron_radiog...


That's a striking image! Thanks for sharing - that really hits home on the pressures involved.


You can just make out the heatsink fins of the three PCs there, stacked atop (and now kind of inside) each other.

That truly is one of those “let God sort them out” situations.


Pretty sure tech exists to recover data from flash memory with cracked dies...

I guess they decided it wasn't worth pursuing.


> Pretty sure tech exists to recover data from flash memory with cracked dies...

If you have any more on this, I would love to see any relevant materials.


> What goes into a mount that makes it so expensive? Its essentially just a piece of metal, right?

I think this graph sheds some light on your question:

https://www.nrel.gov/solar/market-research-analysis/solar-in...

What happened in the last decade was that solar panels ("Module" in this graph) got very very cheap. They used to cost $3 per watt in 2010, but now only cost $0.3 per watt.

This extreme price drop happened thanks to technical innovations (such as the commoditization of PERC cells) and large-scale production in the PRC.

Metal components ("Hardware - BOS" in this graph) did get cheaper in the same time frame ($0.6 per watt to $0.5 per watt), but their cost cannot be reduced as much, precisely because they are just a piece of metal, i.e. there is no low-hanging fruit in metallurgy.


> You can take TWO screenshots, moments apart, open in GIMP, paste one over the other and choose any one of these laying modes:

You actually don't need any image editing skill. Here is a browser-only solution:

1. Take two screenshots.

2. Open these screenshots in two separate tabs on your browser.

3. Switch between tabs very, very quickly (use CTRL-Tab)

Source: tested on Firefox
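
The blink trick can also be automated: diff the two captures pixel by pixel, and only the changed spots survive. A toy sketch on plain 2-D grayscale arrays standing in for real screenshots (a real version would decode PNGs with an imaging library first):

```python
def blink_diff(a, b, threshold=0):
    """Return a mask of pixels that differ between two same-sized frames.

    `a` and `b` are 2-D lists of grayscale values; the mask marks
    where the "blink" would flicker when switching tabs.
    """
    return [
        [abs(pa - pb) > threshold for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(a, b)
    ]

frame1 = [[0, 0, 0],
          [0, 9, 0],
          [0, 0, 0]]
frame2 = [[0, 0, 0],
          [0, 9, 7],   # one pixel changed between "screenshots"
          [0, 0, 0]]

mask = blink_diff(frame1, frame2)
print(mask[1][2])  # -> True: the changed pixel stands out
```

A nonzero threshold helps ignore antialiasing noise when the two captures aren't byte-identical.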



This was used in some early-20th-century astronomical setting, I think to detect supernovae. I can't find any documentation now, but my memory is that it was called "blink testing" or something similar, where one switched rapidly between two images of a star field so that changes due to a supernova would stand out.



That's it exactly! Thanks.


I went cross-eyed on my screenshot, and I couldn't read the word, but I did notice some artifacts.


What does that accomplish? You can just read the web page as-is...

Are you going to share your two screenshots, and provide those instructions, with others? That seems impractical.

Video recording is a bit less impractical, but there you really need a short looping animation to avoid ballooning the file size. An actual readable screenshot has its advantages...


Or use Blink Comparison from F-Droid.


> use CTRL-Tab

Thank you forever for this, I only ever used Ctrl-PageUp/PageDown for that.


You could also just record a video.


Hah, indeed, that was my first thought. This is clearly for fun though, it’s a cool project idea

