Hacker News | new | past | comments | ask | show | jobs | submit | bobbylarrybobby's comments

I think “the fifth revision of that URL routing library that everyone uses” is a much less common case than “crate tried to explore a problem space, five years later a new crate thinks it can improve upon the solution”, which is what Rust’s conservatism really helps prevent. When you bake a particular crate into std, competitor crates now have a lot of inertia to overcome; when they're all third-party, the decision is not “add a crate?” but “replace a crate?” which is more palatable.

Letting an API evolve in a third-party crate also provides more accurate data on its utility; you get a lot of eyes on the problem space and can try different (potentially breaking) solutions before landing on consensus. Feedback during a Rust RFC is solicited from a much smaller group of people with less real-world usage.


Is there that much to explore in a given problem space? I believe a lot of people will take the good enough but stable API over the unstable one that is striving for an unknown state of perfection. The customers of a library are programmers; they can patch over stuff for their own use case. A v2 can be released once enough pain points have been identified, but there should be a commitment to support v1 for a while.

Interesting, my sonnet 4.6 starts with the following:

The classic puzzle actually uses *eight 8s*, not nine. The unique solution is: 888+88+8+8+8=1000. Count: 3+2+1+1+1=8 eights.

It then proves that there is no solution for nine 8s.
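The claim is easy to brute-force yourself. A quick sketch, assuming the classic statement (only addition of the repdigit terms 8, 88, 888; 8888 already exceeds 1000):

```javascript
// Which counts of 8s can sum to 1000 using the terms 8, 88, and 888?
const counts = new Set();
for (let a = 0; a * 888 <= 1000; a++)
  for (let b = 0; a * 888 + b * 88 <= 1000; b++) {
    const rest = 1000 - a * 888 - b * 88;
    // "a" 888s use 3 eights each, "b" 88s use 2, and the rest must be lone 8s
    if (rest % 8 === 0) counts.add(3 * a + 2 * b + rest / 8);
  }
console.log(counts.has(8)); // true  (888 + 88 + 8 + 8 + 8)
console.log(counts.has(9)); // false (no nine-8s solution)
```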

https://claude.ai/share/9a6ee7cb-bcd6-4a09-9dc6-efcf0df6096b (for whatever reason the LaTeX rendering is messed up in the shared chat, but it looks fine for me).


Yeah, earlier in the GPT days I felt like this was a good example of LLMs being "a blurry jpeg of the web": you could give them something that was very close to an existing puzzle common on the web, and they'd regurgitate an answer from that training set. It was neat to see the question get solved consistently by the reasoning models (though often only after churning through a bunch of tokens trying to count 888 + 88 + 8 + 8 + 8 as nine digits and verifying that it isn't).

I wonder if it's a temperature thing or if things are being throttled up or down by time of day. I was signed in to a paid Claude account when I ran the test.


Select All always appears if you have no text selected and never appears if you have some text selected. Insane UI decision by Apple, but that's how it is.


Which means you can't select all on text which isn't editable - insane!


I have a JavaScript share sheet shortcut that forces a select all on any page. Really useful.

Something like this:

    // Select the entire page body
    var body = document.body;
    var sel = window.getSelection();
    var range = document.createRange();
    range.selectNodeContents(body);
    sel.removeAllRanges();
    sel.addRange(range);
    var selString = sel.toString();
    // Call completion() to hand the result back to the Shortcuts action
    completion(selString);


It honestly doesn't surprise me. Apple is not some bastion of good design. They are mediocre at best, always have been.

It was pretty hilarious to me that for so many years the keyboard on iOS only had CAPITAL letters. No matter the state of the shift key, the letters on the keyboard just stayed the same. After many years they finally figured it out, but it's one example of many of how Apple just doesn't have the great UX people claim they do.


I have some kind of mental block that prevents me from figuring out the state of touchscreen controls.

"Is that a Play button because it's currently playing, or because it is paused/stopped, and will play when I tap it?"

"Is Bluetooth on or off? That depends on whether Dark Mode is on."

I end up tapping the control 3 times or so. The latter dilemma could sometimes be worked out by surveying the state of every surrounding control, but tunnel vision and impatience keep winning.


I actually prefer the all-caps keyboard and switch it on on iOS. It looks like a physical keyboard, and the constant flicking between upper and lower case is distracting and annoying.


As bfinn once said on IRC, as he wrote in caps:

<BFINN/#debian> ALL BIG LETTER ON KEYBOARD HERE!!

<CosmicRay/#debian> haha

<BFINN/#debian> TO NO LITTLE LETTER!

https://groups.google.com/g/comp.sys.amiga.misc/c/7AdXvE7KQz...


There are dozens of us!


Well good for you, I guess?


Apple has fantastic UX people. Also really bad ones. It’s a mistake to think a company that size is homogeneous.

In general, IMO they are better than most companies but far from perfect. Maybe 80th percentile. I’m hard pressed to think of a top 10 tech company that’s better. Lots of smaller companies are.


They are not a bastion of good design; they are the bastion of intentional, opinionated design. Meaning they don't listen to feedback ("we don't have focus groups" - Steve Jobs).


Looking at every UI/UX implementation around me and on my devices... I'm not sure anyone does anymore. Not in a haha way; I actually see so many trivial issues all around that I don't understand how they survived any contact with testing and user feedback.


This was not poor design, but a decision to restrict the user from copy-pasting entire articles and the like. Most unfair, and this iPhone 3G to iPhone 17 Pro user is seriously considering ditching them over Select All.


Yeah that doesn’t surprise me. You can’t copy magnet links either.

where did you hear this?


I’m reading between the lines

> They are mediocre at best, always have been.

Come on. OSX was a paradigm shift in desktop usability and intuitive design.

My 85 year old grandpa asked me about 20 years ago how he should go about learning how to use computers. We were a windows family at home but I was using Macs in school and OSX was relatively new and I thought it blew Windows out of the water as far as usability.

Didn’t take long for my grandpa to be sending me emails and news links, and becoming an overall competent and comfortable computer user, in his late 80s, and I credit that to Apple’s fantastic design.

I think maybe we forget how using Windows 98 and XP was day-to-day.


This is accurate. Apple’s been losing its soul ever since they spent a billion dollars making a headquarters that’s shaped like the “Home” button that they then immediately got rid of.


So what I’m reading is that they are able to dumb down the design to fit less tech-savvy people.

When tailoring for one audience you usually do take away something from other audiences.


I understand your point and have a long list of bitter grievances against Apple, but OS X triggered a large influx of geeks to the Mac world. It was a Unix that just worked, and there were all kinds of important ways that appealed to key tech people.

>Come on. OSX was a paradigm shift in desktop usability and intuitive design.

OSX was born by moving from a real crap OS that couldn't even multitask properly to slapping the same UX paradigms on a Unix base.

The first release of OSX wasn't meaningfully different from OS9 in UX. They had the same goofy window gadgets for minimizing and maximizing a window, and still couldn't resize a window from any corner/side.

Finder is still just as much garbage as it ever was; nothing has really changed there. "About this software" is still the first thing on the first menu, because of course the most important thing a user could do with MacOS software is look at what version they are using.

There's a reason MacOS has never gone above 15% market share - part of that is the extortionate cost of Apple hardware, as well as their shitty UX.

I will gladly take Windows XP over any version of MacOS.


Not always, if we go back to the 1980s. But in very modern times, they've lost all the learnings from back then.


Old school Apple design stubbornness: I remember they insisted on putting the grooves on the "D" and "K" keys instead of the "F" and "J" keys. So you had to find home base on the keyboard with your middle fingers on an Apple rather than your index fingers like on everything else. No, that place has always been a design shop run amok.


It made sense because the numeric keypad had the dot on the 5. Early IBM keyboards (Model F) didn't have home markers, IIRC. But the PC world standardized on F and J, and eventually everyone else, too.


echo "It made sense because the numeric keypad had the dot on the 5." | sed 's/had/has/'


lol, no, they sucked even more in the 1980s.

Did you ever notice that "About this software" is the first thing on the first menu of every application? Is that because people have to know what version of the software they are using every time they start it? It's still like that today, and it's very very stupid. Other OSs get it right and put the version information on the last menu, where it doesn't clutter up the most prominent area in the most used menus.

Finder was crap in the 1980s. Still is crap, but it used to be crap too.

The window system in the 80s and 90s was also crap. Could not resize a window from any side or corner of the window except the lower right. Windows has had resizing from any edge or corner since forever.

Apple "design" is just not as good as people seem to think it is.

They've also had plenty of weird and unloved hardware designs... the infamous trash can, the clamshell laptop, the weird anniversary macs, a mouse with a charging port on the bottom so that you can't use the mouse while it's charging, and the list goes on and on and on.


As someone who has switched from Windows to Apple recently, my God the Finder is terrible. I can't understand how people aren't flipping tables over how bad it is.


Finder has to be used with the Miller columns; otherwise, it doesn't make sense.

But since the switch to the new filesystem, it's kinda slow and annoying.

They have built some proprietary stuff around their filesystem to increase their walled garden height. Which is kind of stupid in the era of cloud computing, because you cannot use any of it if you share files/directories with other people who don't use Macs.


Because Mac OS X Finder has always been kinda terrible. There was a lot of talk about this in the early 2000s and it's just faded away since the people using macOS now probably never experienced the good old Mac OS 9 Finder.

And its Windows competition Windows Explorer has likewise gotten worse and worse each revision of Windows.


Oh... Finder is the name of the default file browser? I always thought it was the search results that popped down from the top right search area.

Last Mac I was on still had OSX on it.

Thank goodness for Dopus.


lol, directory opus? I was using that on the Amiga way back in the day. I tried it like a decade ago, but it didn't stick for me. It doesn't seem to run on Linux, and it costs $$$, so no chance I'll try it again.


I can't think of a better rationale for the ubiquitous worsening of local search than increasing ignorance of comp sci fundamentals.

There's no reason a senior at undergrad level shouldn't be able to write an efficient, fast, deterministic, precomputed search function.

... and yet, professional developers at major companies seem completely incapable.

Minimum acceptance criteria for any proposed shipping search feature should be "There is no file / object in the local system that fails to show up if you type its visible name" ffs.
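That acceptance criterion really is undergrad-level work. A minimal sketch (names hypothetical; a real implementation would precompute the index once and keep it updated as files change):

```javascript
// Precomputed, deterministic local name search: any item whose visible
// name contains the query must appear in the results.
function buildIndex(items) {
  return items.map(item => ({ key: item.name.toLowerCase(), item }));
}

function search(index, query) {
  const q = query.toLowerCase();
  return index.filter(e => e.key.includes(q)).map(e => e.item);
}

const idx = buildIndex([{ name: "Quarterly Report.pdf" }, { name: "notes.txt" }]);
console.log(search(idx, "report").map(f => f.name)); // [ "Quarterly Report.pdf" ]
```

Linear scan over a precomputed lowercase key list is already fast enough for a local file count, and crucially it never misses an exact name.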


The whole window management system is an exercise in contrarianism. They basically chose to do things in the opposite manner of their competitor and mostly against what intuition would dictate for the sole reason of being different.

macOS is very frustrating to use without utility apps that provide the necessary improvements. But they are never as well integrated, cost money or are a hassle to set up.

Apple just wins because they make good-looking, well-built hardware, and sometimes they win on some performance metrics (in the Apple Silicon era, it's mostly about efficiency and single-core speed, which is not as useful as some like to believe).


Apple only "wins" by charging exorbitant prices that idiots are willing to pay to have a digital status symbol. What they have not "won" is market share. They have always been an "also-ran" in market share.

Android (70%) beats iOS (30%). Windows (68%) beats MacOS (13%).


Well, I agree with that if we are talking about the general population. But Apple does have some niches it serves very well that make the prices worth it for some. But of course, this is a very tiny minority of their customers.

For example, they always have been focusing on video editing since the PPC days, starting with the iMac DV. And nowadays, Macs are still quite good for video editing; even when you factor in the price, it's not that bad of a deal. Previously it was about DTP and desktop graphics generally.

But it's always the same playbook; they are first to offer the possibilities of a new usage, but that comes with their high price; over time they lose competitiveness, and they end up switching to something else.

The question is always if the asking price is going to be worth it for whatever you try to accomplish with a computer at the moment. If you are doing work that doesn't require being on the bleeding edge, the answer is probably no.

However, in general, people buy Apple stuff for the status, very often as an ego trip (to prove they are better) and not infrequently out of ignorance/incompetence (it's crazy how much stupid shit Apple fans believe).


What makes you think the first menu is one of the most used menus?


Well it probably isn't because Apple doesn't put useful things there, which is completely stupid from a UX perspective.


Heh, you're going to mention a mouse without bringing up the puck?!


I'm a little surprised they never came out with some oversized mouse pad and a mouse that charges from it.

Always seemed like an apple sort of idea.


> a mouse with a charging port on the bottom so that you can't use the mouse while it's charging

I'm surprised you went for that over the puck. At least when you unplugged it, you could use it. The puck was just terrible. And old.


>This was not poor design, but a decision to restrict the user from copy pasting entire articles

do you have a source for that?


The physical keyboard on your computer is also always in capital letters. Is that bad design too?


The advantage of software is the 'soft' part i.e. it's much easier to change than hardware.

Unless physical keyboards had mini displays for every key, they're a good design given the 'physical' limitation of their design.

A touchscreen displays 'soft'ware that's easy to change and make smarter than physical items.


In editable text fields you can tap a word a few times and it'll select the whole paragraph, if that's any help.

What drives me insane though, is double tapping a word is supposed to select that word. But I think starting in iOS 18 it started selecting the word and a random amount of surrounding words, but only about half the time. I couldn't tell you what it could possibly be trying to do but it's maddening.


It’s using AI to try to determine if it’s a proper noun or some other scenario where multiple words are really one semantic term. Except it’s really, really bad at it, and it’s almost never the behavior I want, but there’s no way to turn it off. (I vaguely remember a WWDC talk from a couple of years ago where they went into how this works.)


Word segmentation has been a longstanding problem in CJK languages too. Coupled with the terrible text selection in iOS it makes it really hard to select substrings.


I know when I was on Android they'd do some smarts to detect stuff like that (handy for copying links)

But I swear if that's what they're trying to do here, I've never seen it work properly once. It's always just a random substring of the sentence.


It works surprisingly well on Android; expanding to grab a full address, for instance, or a complete phone number. Sometimes it needs tweaking, but mostly it's directionally correct and helpful rather than harmful.


Just keeping my finger on the word works for me every time to select it. Double tap only works in edit fields. It also works reliably for me here in the Hacker News post editor, as long as I do it in the middle of the word.


You can just type the text to find in the address bar — “find on page” will be at the very bottom of the list of suggestions.


I agree, the x-axis labels are not helpful! Thankfully, the first example is “buttons with corrected icon spacing”, and the image on the right looks much better than the one on the left (a bigger difference in quality than in the other two examples), which is visible when the slider is on the left.

Suggestion to devs: put the label “material-style” in the lower left of its image and “liftkit” in the lower right of its image, and cover them appropriately as the slider moves, and then it'll be clear which framework the current image (or portion of it) belongs to.
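The covering logic could be as simple as toggling each label once the divider crosses it. A sketch with hypothetical names and positions:

```javascript
// Hypothetical helper: given the divider's x position, decide which
// framework label ("material-style" at the lower left, "liftkit" at the
// lower right) should currently be visible.
function labelVisibility(dividerX, labelLeftX, labelRightX) {
  return {
    materialStyle: dividerX > labelLeftX, // left label is revealed
    liftkit: dividerX < labelRightX,      // right label is revealed
  };
}

console.log(labelVisibility(150, 40, 360)); // { materialStyle: true, liftkit: true }
```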


Thanks for the tip! That actually was the first idea but I didn't end up doing it, for some reason. Thanks for the suggestion.


... just to be a (hopefully helpful) pedant:

If you were going to do this for the slider approach, you could arrange the labels at the `block-start` and `block-end` of the image and support non-LTR scripts/languages natively.


> the first example is “buttons with corrected icon spacing”, and the image on the right looks much better than the one on the left

For me the better image appears on the left.

The left image has the icon in the centre of the radius and the right image has it in a random place.


Are accelerometers not sufficient for pose determination? I would assume they'd work as well as cameras if not better.


For your head, more than enough; for the position of the arm you need something else.


Why would this own a server? ls lists itself, but listing itself shouldn't cause it to run again? Where's the infinite loop that brings the server down?


I think the parent comment means "cp badthing ls" and leaving it latent for someone to run. Maybe $PATH has CWD first for convenience?
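A sketch of that latent-`ls` trap, assuming `.` ends up ahead of the system directories in `PATH` (run it in a scratch directory, not somewhere you care about):

```shell
# Plant a fake "ls" that shadows the real one when "." precedes /bin in PATH.
mkdir -p /tmp/pathdemo
cd /tmp/pathdemo
printf '#!/bin/sh\necho "owned"\n' > ls
chmod +x ls
PATH=".:$PATH" ls    # runs ./ls, not /bin/ls
```

Nothing malicious runs until some later victim types `ls` in that directory with the unsafe `PATH`, which is why keeping `.` out of `PATH` is the standard advice.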


They're not talking about the same scenario. Owning isn't denial of service. And they didn't say the `ls` lists things (though it probably will do that at the end).


I really like that Claude feels transactional. It answers my question quickly and concisely and then shuts up. I don't need the LLM I use to act like my best friend.


I love doing a personal side project code review with claude code, because it doesn't beat around the bush for criticism.

I recently had it review a data processor class I wrote for a side project, one with quite horrible temporal coupling.

Gemini - ends up rating it a 7/10, some small bits of feedback etc

Claude - Brutal dismemberment of how awful the naming convention, structure, coupling etc, provides examples how this will mess me up in the future. Gives a few citations for python documentation I should re-read.

ChatGPT - you're a beautiful developer who can never do anything wrong, you're the best developer that's ever existed, and this class is the most perfect class I've ever seen


This is exactly what got me to actually pay. I had a side project with an architecture I thought was good. I fed it into Claude and ChatGPT. ChatGPT made small suggestions but overall thought it was good. Claude shit all over it, and after validating its suggestions, I realized Claude was what I needed.

I haven't looked back. I just use Claude at home and ChatGPT at work (no Claude). ChatGPT at work is much worse than Claude in my experience.


I feel like this anecdote represents the differing incentives / philosophies of each group rather well.

I've noticed ChatGPT is rather high in its praise regardless of how valuable the input is, Gemini is less placating but still largely influenced by the perspective of the prompter, and Claude feels the most "honest", but humans are rather poor at judging this sort of thing.

Does anyone know if "sycophancy" has documented benchmarks the models are compared against? Maybe it's subjective and hard to measure, but given the issues with GPT 4o, this seems like a good thing to measure model to model to compare individual companies' changes as well as compare across companies.


The issue, I think, is that to measure sycophancy you'd need another model that can detect signs of sycophancy - it's turtles all the way down.


Weirdly I feel like partially because of this it feels more "human" and more like a real person I'm talking to. GPT models feel fake and forced, and will yap in a way that is like they're trying to get to be my friend, but offputting in a way that makes it not work. Meanwhile claude has always had better "emotional intelligence".

Claude also seems a lot better at picking up what's going on. If you're focused on tasks, then yeah, it's going to know you want quick answers rather than detailed essays. Could be part of it.


as a problem, it means you need a ralph loop on top of it, if you want it to finish a problem without it waiting on a checkpoint


fyi in settings, you can configure chatGPT to do the same


where?


Settings > Personalization > Custom Instructions.

Here's what I use:

    WE ARE PROFESSIONALS. DO NOT FLATTER ME. BE BLUNT AND FORTHRIGHT.


Then why are they advertising to people that are the complete opposite of you? Why couldn’t they just … ask an LLM what their target audience is?


Quickly and concisely? In my experience, Claude drivels on and on forever. The answers are always far longer than Gemini's, which is mostly fine for coding but annoying for planning/questions.


They do understand, that's why they're doing this. This is a fundamentally anti-fact administration — when facts aren't known, you can fabricate reality for the masses, which is what they want.


You’re replying to someone who is suggesting that the CIA can manipulate the facts and fabricate reality.


For “looking at a text file with pretty print”, try CotEditor. https://coteditor.com

