Hacker News | kstrauser's comments

My hometown got busted making yellow lights shorter than the legally required duration, then hitting drivers with tickets for running a red light they couldn't have safely and reasonably avoided.

There are standards for this kind of thing: if a light is on a road with a speed limit of X, then the yellow light has to last Y seconds. Imagine a yellow light that lasted 0.5s: you'd have to stand on your brakes and risk a rear-end collision from the car behind you to even have a chance of not getting fined. That's the opposite of safety. My place wasn't that bad, but a defendant successfully demonstrated that the yellow light that tricked him was illegally short, and a judge basically threw out all the tickets from that light and others like it.
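For the curious, the minimum yellow interval is usually derived from the commonly cited ITE kinematic formula. Here's a sketch in Python using typical textbook parameter values (1.0 s perception-reaction time, 10 ft/s² deceleration), not any particular jurisdiction's legal numbers:

```python
# Minimum yellow interval from the commonly cited ITE kinematic formula:
#   Y = t + v / (2a + 2Gg)
# The default parameters below are typical textbook values, not any
# specific jurisdiction's legal requirement.

def min_yellow_seconds(speed_mph, reaction_s=1.0, decel_ftps2=10.0, grade=0.0):
    g = 32.2                      # gravitational acceleration, ft/s^2
    v = speed_mph * 5280 / 3600   # approach speed converted to ft/s
    return reaction_s + v / (2 * decel_ftps2 + 2 * grade * g)

for mph in (25, 35, 45):
    print(f"{mph} mph -> {min_yellow_seconds(mph):.1f} s")
```

The formula captures exactly the scenario above: shave the interval below what the approach speed requires, and a driver physically cannot stop comfortably before the light turns red.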

I mention this as just one example of specific light setups that suck. I bet you're right, and this is just a money grab from the local gov't.

Read this if you want to be angry today: https://ww2.motorists.org/blog/6-cities-that-were-caught-sho...


In some states they also mark the start of the intersection where the curb ends, not where the crosswalk starts, so you think that since you passed the crosswalk under yellow you're safe to proceed, when in fact you haven't yet entered the intersection.

Is this the case where instead of admitting to it, the municipality attempted to have the complainant prosecuted for practicing engineering without a licence?

No, that was Oregon's turn to be Embarrassment of the Week: https://ij.org/press-release/oregon-engineer-wins-traffic-li...

In my city they synchronized the lights so that each one turns red just as the pack of cars reaches it. To be clear, the obvious implication I'm making is that they did this to increase the chance someone would run the light, and thereby increase revenue.

This does mean that if you're in the front of the pack and go about 15 over the speed limit, you won't "catch" the red light.

When you're not in the front of the pack it can be frustrating trying to travel just 3 or 4 miles with the red lights not even a full half mile from each other. Even late at night if you follow the speed limit, you are penalized. You will sit at every red light and look at the vast stretch of nothingness that has the right of way.

If they didn't do this to generate red light revenue, they could have done this to generate more revenue from the gas tax they collect by making people start & stop more often, and from sitting in traffic longer. But I suppose both things could be true. And no, I won't accept any other plausible explanations (/s, but holy heck is government awful here).


I haven't run into those (I mostly drive in rural areas; in fact, there's no stoplight in my county), but I do run into some lights that just change in the middle of the night, for no reason, and then take a really long time to change back to green, despite not a single car being present or going through.

Lights with sensors have a backup pattern of timed changes so that you won’t get stuck at a light where the sensor isn’t seeing your car.

Why do you add FreshRSS in instead of subscribing directly in NNW? (Real question, not leading.) What am I missing by not doing that?

> cross-device sync

FreshRSS acts first as your central backend, managing your subscriptions, refreshes, and read status.

You can use its web UI to act on it, or you can use any reader app you want (NNW, Reeder...)


Gotcha. I guess where our use cases differ is that I use only one RSS reader so don’t care about sharing its state with other readers. That’d be super handy if I did, though.

Algorithms other than FIFO are fine when they serve you. Way back when, I had a mail reader (Gnus) that used a Bayesian classifier to predict which emails I might especially want to read, based on past reading experience. That was nifty! An RSS reader could do the same, on my own machine, based on my own preferences and not some marketer’s. I’d like that an awful lot.
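A toy sketch of that idea: score incoming headlines with a naive Bayes log-likelihood ratio trained on which past titles you opened versus skipped. All names and training data here are made up for illustration; Gnus's actual classifier differed in detail.

```python
# Naive Bayes headline scoring: positive score means "probably worth
# reading", based only on your own local history. Illustrative data.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(read_titles, skipped_titles):
    read = Counter(w for t in read_titles for w in tokenize(t))
    skip = Counter(w for t in skipped_titles for w in tokenize(t))
    return read, skip

def score(title, read_counts, skip_counts):
    # Log-likelihood ratio with add-one smoothing; > 0 leans "read".
    r_total, s_total = sum(read_counts.values()), sum(skip_counts.values())
    vocab = len(set(read_counts) | set(skip_counts))
    s = 0.0
    for w in tokenize(title):
        s += math.log((read_counts[w] + 1) / (r_total + vocab))
        s -= math.log((skip_counts[w] + 1) / (s_total + vocab))
    return s

read_c, skip_c = train(
    ["rust compiler internals", "postgres query planner deep dive"],
    ["celebrity gossip roundup", "ten gadgets you must buy"],
)
print(score("new rust compiler release", read_c, skip_c) > 0)
```

Everything stays on your own machine, which is the point.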

You sort of can with very little work. When I used RSS more I had a "primary" folder and a number of secondary folders. I always looked at the primary; I'd dip into various secondaries when I had the time.

I do largely agree with the general premise, though. The social media that more or less replaced RSS is largely dead.


Why do you keep posting here? Asking seriously. You open a new account, immediately get it banned, then move on to the next. Doesn’t that get boring?

My sister liked to make her own clothes. One time I asked her if she did that to save money. No, she replied. She often spent more to make her dress than it would have cost her at the store. But her version was nicer, and better fitting, using nicer materials, in exactly the style she wanted.

I think there’s a lot of overlap between that and modern PC builders. It’s not necessarily cheaper, but is likely to be a lot nicer at the same price.


If I discovered that were my patient ID, I would laugh myself into unconsciousness and buy the staff a pizza.

Those are all single byte characters in UTF-8.
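You can see the size difference directly in Python: ASCII characters are one byte each in UTF-8, but two bytes each in UTF-16, which is the nvarchar encoding under discussion.

```python
# ASCII-only text: one byte per character in UTF-8, two in UTF-16.
s = "PATIENT-12345"
print(len(s.encode("utf-8")))      # 13 bytes: one per character
print(len(s.encode("utf-16-le")))  # 26 bytes: two per character
```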

We're talking about nvarchar here. Yes, UTF-8 solves this issue completely, and MSSQL supports it nowadays with varchar.

But nvarchar is UTF-16

No. Look closer.

The first draft of Unicode was in 1988. Thompson and Pike came up with UTF-8 in 1992, made an RFC in 1998. UTF-16 came along in 1996, made an RFC in 2000.

The time machine would've involved Microsoft saying "it's clear now that UCS-2 was a bad idea, so let's start migrating to something genuinely better".


I don't think it was clear at the time that UTF-8 would take off. UCS-2 and then UTF-16 was well established by 2000 in both Microsoft technologies and elsewhere (like Java). Linux, despite the existence of UTF-8, would still take years to get acceptable internationalization support. Developing good and secure internationalization is a hard problem -- it took a long time for everyone.

It's now 2026; everything always looks different in hindsight.


I don’t remember it quite that way. Localization was a giant question, sure. Are we using C or UTF-8 for the default locale? That had lots of screaming matches. But in the network service world, I don’t remember ever hearing more than a token resistance against choosing UTF-8 as the successor to ASCII. It was a huge win, especially since ASCII text is already valid UTF-8 text. Make your browser default to parsing docs with that encoding and you can still parse all existing ASCII docs with zero changes! That was a huge, enormous selling point.
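That backward compatibility is easy to demonstrate: every ASCII byte sequence is already valid UTF-8 and decodes to identical text.

```python
# ASCII is a strict subset of UTF-8: the same bytes decode to the same
# text, so a UTF-8 default parses existing ASCII documents unchanged.
ascii_doc = b"Hello, world!"
print(ascii_doc.decode("utf-8"))  # Hello, world!
# Every single ASCII code point round-trips identically:
print(all(bytes([i]).decode("utf-8") == chr(i) for i in range(128)))  # True
```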

Windows is far from a niche player, to be sure. Yet it seems like literally every other OS was going with one encoding for everything, while Microsoft went in a totally different direction that drew complaints even then. I truly believe they thought they’d win that battle and eventually everyone else would move to UTF-16 to join them. Meanwhile, every other OS vendor was like, nah, no way we’re rewriting everything from scratch to work with a non-backward-compatible encoding.


Microsoft did the hard work of supporting Unicode when UTF-8 didn't exist (and mostly when UTF-16 didn't exist).

Any system that continued with only ASCII well into the 2000s could mostly just jump into UTF-8 without issue. Doing nothing for non-English users for almost two decades turned out to be a solid plan long term. Microsoft certainly didn't have that option.


At the time it was introduced it was understandable, and Microsoft also needed some time to implement it before that of course. But by about 2000 it was clear that UTF-8 was going to win, and Microsoft should have just properly implemented it in NT instead of dithering about for the next almost 20 years. Linux had quite good support of it by then.

Blame Java: their use of UTF-16 is the sole reason that Microsoft chose it.

Sun sued Microsoft in 1996 for making nonportable extensions to Java (a license violation). Microsoft lost, and created C# in 2000.

At the time, “Starting Java” was the most feared message on the internet. People really thought that in-browser Java would take over the world (yes, Java, not JavaScript).

Sun chose UTF-16 in 1995, believing that Unicode would never need more than 64k characters. In 1996 that changed: UTF-16 got variable-length encoding and became a white elephant.
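Concretely, once Unicode outgrew 16 bits, anything above U+FFFF needs a surrogate pair in UTF-16, i.e. two 16-bit code units, so UTF-16 ended up variable-length just like UTF-8:

```python
# A character outside the Basic Multilingual Plane needs a surrogate
# pair (two 16-bit code units) in UTF-16, and four bytes in UTF-8 too.
treble_clef = "\U0001D11E"  # MUSICAL SYMBOL G CLEF, U+1D11E
print(len(treble_clef.encode("utf-16-le")))  # 4 bytes: one surrogate pair
print(len(treble_clef.encode("utf-8")))      # 4 bytes in UTF-8 as well
```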

So Microsoft chose UTF-16 knowing full well that it had no advantages. But at least they can say code pages were far worse :)


MS could easily have added proper UTF-8 support in the early 2000s instead of the late 2010s.

Yep. It would've been a better landing pad than UTF-16 since they had to migrate off UCS-2 anyway.

> UTF-8 is a relatively new thing in MSSQL and had lots of issues initially, I agree it's better and should have been implemented in the product long ago.

Their insistence on making the rest of the world go along with their obsolete pet scheme would be annoying if I ever had to use their stuff for anything. UTF-8 was conceived in 1992, and here we are in 2026 with a reasonably popular database still considering it the new thing.


I would be more critical of Microsoft choosing to support UCS-2/UTF-16 if Microsoft hadn't completed their implementation of Unicode support in the 90s and then been pretty consistent with it.

Meanwhile Linux had a years long blowout in the early 2000s over switching to UTF-8 from Latin-1. And you can still encounter Linux programs that choke on UTF-8 text files or multi-byte characters 30 years later (`tr` being the one I can think of offhand). AFAIK, a shebang is still incompatible with a UTF-8 byte order mark. Yes, the UTF-8 BOM is both optional and unnecessary, but it's also explicitly allowed by the spec.
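The shebang problem is a bytes-level one: the kernel identifies an interpreter script by its first two bytes being `#!`, and a UTF-8 BOM in front defeats that check. A small illustration:

```python
# The kernel's shebang check looks at the literal first two bytes of the
# file; a UTF-8 BOM (EF BB BF) ahead of "#!" hides the interpreter line,
# even though the BOM is explicitly allowed by the Unicode spec.
plain = b"#!/bin/sh\necho hi\n"
with_bom = b"\xef\xbb\xbf" + plain
print(plain.startswith(b"#!"))     # True: recognized as a script
print(with_bom.startswith(b"#!"))  # False: no shebang as far as exec cares
```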


It's not really a Linux vs MS thing though. When Unicode first came out, it was 16-bit, so all the early adopters went with that. That includes Java, Windows, JavaScript, the ICU libraries, LibreOffice and its predecessors, .NET, the C language (remember wchar_t?), and probably a few more.

UTF-8 turned out to be the better approach, and it's slowly taking over, but it was not only Linux/Unix that pushed it ahead; the entire networking world did, especially HTTP. Props also to early Perl for jumping straight to UTF-8.

Still... UTF-8's superiority was clear enough by 2005 or so; MS could and should have seen it by then instead of waiting until 2019 to add UTF-8 collations to its database. Funny to see SQL Server falling behind good old MySQL on such a basic feature.


Database systems are inherently conservative -- once you add something you have to support it forever. Microsoft went hog wild on XML in the database and I haven't seen it used in over a decade now.

In 92 it was a conference talk. In 98 it was adopted by the IETF. Point probably stands though.

The data types were introduced with SQL Server 7 (1998), so I'm not sure it's accurate to say it's considered the new thing.


And pervasive in Rust.
