
For me it is an open question whether coding training data "purity" matters. Python dwarfs Go on volume, but within that volume is a ton of API changes, language changes, etc. Is that free regularization, or does it poison the dataset? As the author points out, Go code is unusually uniform because basically all published Go code looks the same and the library APIs are frozen in time to some degree.

I actually spent some time trying to get to the bottom of what a logical extension of this would be: an entirely made-up spec for an idealized language the model has never seen, and therefore has no bad examples of. Go is likely the closest, for the many reasons people call it boring.


I find the dependency creep for both Rust and Node unfortunate. Almost anything I add explodes the deps and makes me sweat over maintenance, vulnerabilities, etc. I also feel perpetually behind, which I think is basically frontend default mode. Go does the one thing I wish Rust had more of, which is a pretty darn great standard library with total backwards compatibility promises. There are awkward things with Go, but man, not needing to feel paranoid, and seeing how much can be built with so little, feels good. But I totally understand just getting crap done and taking off the tin foil. Depends on what you prioritize. Solo devs don't have the luxury.

Those deps have to come from somewhere, right? Unless you're actually rolling your own everything; and with languages that don't have package managers, what you end up doing is just adding submodules of various libraries and running their cmake configs, which is at least as insecure as NPM or crates.io.

Go is a bit unique as it has a really substantial stdlib, so you eliminate some of the necessary deps, but it's also trivial to rely on established packages like Tokio etc., vendor them into your codebase, and not have to worry about it in the future.


> Those deps have to come from somewhere, right? Unless you're actually rolling your own everything

The point is someone needs to curate those "deps". It's not about rolling your own, it's about pulling standard stuff from standard places where you have some hope that smart people have given thought to how to audit, test, package, integrate and maintain the "deps".

NPM and Cargo and PyPI all have this disease (to be fair NPM has it much worse) where it's expected that this is all just the job of some magical Original Author and it's not anyone's business to try to decide for middleware what they want to rely on. And that way lies surprising bugs, version hell, and eventually supply chain attacks.

The curation step is a critical piece of infrastructure: think of things like the Linux maintainer hierarchy, C++ Boost, Linux distro package systems, or in its original conception the Apache Foundation (though they've sort of lost the plot in recent years). You can pull from those sources, get lots of great software with attested (!) authorship, and be really quite certain (not 100%, but close) that something in the middle hasn't been sold to Chinese intelligence.

But the Darwinian soup of Dueling Language Platforms all think they can short circuit that process (because they're in a mad evangelical rush to get more users) and still ship good stuff. They can't.


I mean, somebody could make a singular Rust dependency that re-packages all of the language team's packages.

But what's the threat model here? Does it matter that the Rust std library doesn't expose, say, "Regex" functionality, forcing you to depend on regex [1], which is also written by the same people who write the std library [2]? Like, if they wanted to add a backdoor into regex they could add a backdoor into Vec. Personally I like the idea of having a very small std library so that it stays focused (and if they need something new, it has to be expressible in the language itself, unlike, say, Go generics or Elm).

Personally I think there's just some willful blindness going on here. You should never have been blindly trusting a giant binary blob from the std library. Instead you should have been vendoring your dependencies, and at that point it doesn't matter if it's 100 crates totaling 100k LOC or a singular std library totaling 100k LOC; it's the same amount to review (if not less, because the crates can only interact along `pub` boundaries).

[1]: https://docs.rs/regex/latest/regex/

[2]: https://github.com/rust-lang/regex


> I mean somebody could make a singular rust dependency that re-packages all of the language team's packages.

That's not the requirement though! Curation isn't about packaging, it's about independent (!) audit/test/integration/validation paths that provide a backstop to the upstream maintainers going bonkers.

> But what's the threat model here.

A repeat of the xz-utils fiasco, more or less precisely. This was a successful supply chain attack that was stopped because the downstream Debian folks noticed some odd performance numbers and started digging.

There's no Debian equivalent in the soup of Cargo dependencies. That mistake has bitten NPM repeatedly already, and the reckoning is coming for Rust too.


> A repeat of the xz-utils fiasco

Wasn't that a suspected state actor? Against that threat model your best course of action is a prayer and some incense.

Notably, xz-utils didn't use any package manager à la NPM; it relied on package management by hand.

> because the downstream Debian folks

Not sure what you mean by this, but this was discovered by a Postgres dev running bleeding edge Debian. No Debian package maintainer noticed this.

> There's no Debian equivalent

How would the Debian approach help? Not even their maintainers could sniff this one out.

There exists a sort of extended std library of Rust deps. But no one is using it.


> Wasn't that a suspected state actor? Against that threat model your best course of action is a prayer and some incense.

No? They caught it! But they did so because the software had extensive downstream (!) integration and validation sitting between the users and authors. xz-utils pushed backdoored software, but Fedora and Debian picked it up only in rawhide/testing and found the issue.

> Notably, xz utils didn't use any package manager ala NPM and it relied on package management by hand.

With all respect, this is an awfully obtuse take. The problem isn't the "package manager"; it's (and I was explicit about this) the lack of curation.

It's true that xz-utils didn't use NPM. The point is that NPM's lack of curation is, from a security standpoint, isomorphic to not having any packaging regime at all, and equally dangerous.

> a Postgres dev running bleeding edge Debian

Exactly. Not sure how you think this makes the point different. Everything in Debian is volunteer; the fact that people do other stuff is a bonus. The point is that the Debian community is immunized against malicious software because everyone is working on validation downstream of the authors.

No one does that for NPM. There is no Cargo Rawhide or NPM Testing operated by attested organizations where new software gets quarantined and validated. If the malicious authors of your upstream dependencies want you to run backdoored software, then that's what you're going to run.


> No? They caught it!

No? Who else but a state actor has 2-3 years' worth of time to become a contributor and maintainer for obscure OSS utils?

Plus they made sockpuppets to put pressure on the OG maintainer to give Jia Tan maintainer privileges.

> Exactly. Not sure how you think this makes the point different. Everything in Debian is volunteer, the fact that people do other stuff is a bonus.

What do you mean, exactly? This isn't curation working as intended. This is some random dev discovering it by chance, while it snuck past the maintainers and curators of both Debian and Red Hat.

> Everything in Debian is volunteer, the fact that people do other stuff is a bonus. Point is the debian community is immunized against malicious software because everyone is working on validation downstream of the authors.

You can do the same in NPM and Cargo. Release a v1.x.y-rc0, give everyone a trial run, and see if anyone complains. If they do, it's downstream validation working as intended.

Then yank RC version and publish a non-RC version. No one is preventing anyone from making their release candidate version.

> No one does that for NPM. There is no Cargo Rawhide or NPM Testing

Because it makes no more sense to have a Cargo Rawhide than to have an xz-utils sid.

Cargo isn't an integration point, it's infra.

Bevy, which integrates many different libs, has a Release Candidate. But a TOML/XYZ library it uses doesn't.


Isn't xz-utils exactly why you would want a lot of dependencies over a singular one?

If, say, Serde gets compromised, then only the projects depending on that version of Serde are compromised, as opposed to: if Serde were part of the std library, then every Rust program is compromised.

> That mistake has bitten NPM repeatedly already, and the reckoning is coming for Rust too.

Eh, the only thing that's coming is that using software expressly provided without a warranty will (expectedly) mean that software causes you problems at an unknown time.


The tradeoff Go made is that certain code just cannot be written in it.

Its stdlib exists because Go is a language built around a "good enough" philosophy, and it gets painful once you leave that path.


> The tradeoff Go made is that certain code just cannot be written in it.

Uh... yeah? That's true of basically all platforms, and anyone who says otherwise is selling something.

> it gets painful once you leave that path

Still less painful than being zero-day'd by a supply chain attack.


> > The tradeoff Go made is that certain code just cannot be written in it.

> Uh... yeah? That's true of basically all platforms, and anyone who says otherwise is selling something.

What code can you not write in C?

Might be painful for some (many) cases, but there is nothing you can't write in C.


SIMD code.

And if you are going to point out compiler extensions, they are extensions exactly because ISO C cannot do it.


> What code can you not write in C?

This falls under the "selling something" angle I mentioned. Yes yes yes, generality and abstraction are tradeoffs, and higher-level platforms lack primitives for things the lower levels can do.

That is, at best, a ridiculous and specious way to interpret the upthread argument (again c.f. "selling something").

The actual point is that all real systems involve tradeoffs, and one of the core ones for a programming language is "what problems are best solved in this language?". That's not the same question as "what problems CAN be solved in this language", and trying to conflate the two tells me (again) that you're selling something. The applicability of C to problem areas it "can" solve has its own tradeoffs, obviously.


Not really on-topic, but constant-time crypto primitives are considered hard to write in any compiled language with a lot of optimizations.

It is more of a cultural thing. Package managers encourage lots of dependencies, while programmers using languages with no package manager will often pride themselves on having as few dependencies as possible. When you consider the complete graph, it has an exponential effect.

It is also common in languages without package managers to rely on the distro to provide the package, which adds a level of scrutiny.


Technically it's the same. But behaviorally it's not. When pulling in more dependencies is so easy, it's very hard to slow down and ask the question do we need all of this?

Mucking around with cmake adds enough friction that everyone can take a beat for thoughtful decision-making.


> Go is a bit unique as it has a really substantial stdlib

It’s not that unique, though. I can say that Python and, hell, even PHP have pretty complete and well-documented stdlibs.

Java is meh tier but C# is also pretty good in this aspect.

It’s totally a choice for Rust not to have a real stdlib, and actually I feel like having one would maybe make Rust the best language overall.


Java didn't have an HTTP client (I guess it had a URL 'stream') for the longest time and STILL doesn't have an HTTP server.

It has one - it’s been a part of the JDK for a while https://docs.oracle.com/en/java/javase/11/docs/api/jdk.https...

The sun packages are typically deprecated or not available anymore in the JRE, or it's early access, but fair point.

I get what you’re saying but this is actually a public api in HotSpot. It was provided in JEP 408: https://openjdk.org/jeps/408

So this is what it feels like to be old...

But to clarify: this was about a year ago, when I struggled to find an auto-completion for HttpServer, and when I searched it up, the JDK HttpServer was simply not in the results, so I made assumptions that were wrong.


> java didn't have an http client [...] and STILL doesn't have an http server.

Wow.

How long has it been since you guys have used Java?

Serious question?


I tried to implement a minimal server, just to realize that there is still no way to do so in Java 21... I stand corrected: I guess it was recently added: https://docs.oracle.com/en/java/javase/25/docs/api/jdk.https..., but it's a sun package instead of the standard runtime - probably because it is still early.

see comments above for correction

When I run into things like this that I know are wrong, I try to remember when reading things I don't know about...

Not really? IIRC `HttpUrlConnection` has been around since the 90s?

I did mention that, but for a lot of things it is not enough compared to the full HTTP client most stdlibs have. HttpClient was introduced for a reason.

Python used to have a great standard library, too. But now it's stuck with a bunch of obsolete packages and the packaging story for Python is awful.

In a decade or so the awkward things about Go will have multiplied significantly and it'll have many of the same problems Python currently has.


> the packaging story for Python is awful.

Big caveat that this is just my experience, but uv has fixed this for me personally. Game-changing improvement for Python. Appropriately, uv is written in Rust.


The fact that you have to know to use uv rather than any of the other package managers is kind of the point.

Lots of removals have already happened and uv took over packaging in Python-land.

Which, ironically, is written in rust

Well, Python is largely written in C, so there's that.

I just ported (this week) a 20-year-old Python app to uv/polars. (With AI it took two days). App is now 20x faster.

that's polars for ya

uv should not impact runtime performance at all


uv just happens to make installation trivial; a 20-year-old app, much less so.

Both uv and polars are technically Rust, too.

> In a decade or so Go the awkward things about Go will have multiplied significantly and it'll have many of the same problems Python currently has.

The stdlib packages are far better designed in Go than in Python. “The standard library is where packages go to die” is literally not a thing in Go, in fact quite the opposite.


So far. But Google's in charge, and Google is the place where things go to die.

These are two sides of the same coin. Go has its quirks because they put things in the standard library so they can't iterate (in breaking manners), while Rust can iterate and improve ideas much faster as it's driven by the ecosystem.

Edit: changed "perfect" to "improve", as I meant "perfect" as "betterment" not in terms of absolute perfection.


There is a moral hazard here. By accepting that APIs are forever, you tend to be more cautious and move toward getting it right the first time. Slower is better... And also faster in the long run, as things compose. Personally, I do believe that there is one best way to do things quite often, but time constraints make people settle.

At least it is my experience building some systems.

Not sure it is always a good calculus to defer the hard thinking to later.


The golang.org/x/ namespace is the other half of the standard library in all but name. That gets iterated often.

For stuff in the standard library proper, the versioning system is working well for it. For example, the json library is now at v2. Code relying on the original json API can still be compiled.


Iterating often is not helpful for stable systems over time.

I like Go's library; it's got pretty much everything needed out of the box for web server development. Backwards compatibility is important too.


The cost of "perfecting" an idea here is ruining the broader ecosystem. It is much much better for an API to be kinda crappy (but stable) for historical reasons than dealing with the constant churn and fragmentation caused by, for example, the fifth revision of that URL routing library that everyone uses because everyone uses it. It only gets worse by the orthogonal but comorbid attitude of radically minimizing the scope of dependencies.

Which has been working great for Go, right? They shipped "log" and "flag" stdlib packages, so everyone uses... well, not those. I think "logrus" and "zap" are probably the most popular, but there's a ton of fragmentation in Go because of the crappy log package, including Go itself now shipping two logging packages in the stdlib ("log/slog").

Rust on the other hand has "log" as a clear winner, and significantly less overall fragmentation there.


I think you underestimate how many programs use log and flag, if you just focus on the few (bloated) popular projects.

> It is much much better for an API to be kinda crappy (but stable) for historical reasons

But this does more than just add a maintenance burden. If the API can't be removed, architectural constraints it imposes also can't be removed.

e.g. A hypothetical API that guarantees a callback during a specific phase of an operation means that you couldn't change to a new or better algorithm that doesn't have that phase.


Yes you can, and Go has done exactly that.

Realize the "log" api is bad? Make "log/slog". Realize the "rand" api is bad? Make "rand/v2". Realize the "image/draw" api is bad? Make "golang.org/x/image/draw". Realize the "ioutil" package is bad? Move all the functions into "io".

The stdlib already has at least 3 different patterns for duplicating API functionality with minor backwards-incompatible changes, and you can just do that and mark the old things as deprecated, but support them forever. Easy enough.


> mark the old things as deprecated, but support it forever

Is that 'supported'? Take a library that uses a callback that exists in 'log' but not in 'slog': it'll compile forever, but it'll never work.

'Compiles but doesn't work' does not count as stable in my book. It's honestly worse than removing the API: both break, but one of them is noticed when the break happens.


I think “the fifth revision of that URL routing library that everyone uses” is a much less common case than “crate tried to explore a problem space, five years later a new crate thinks it can improve upon the solution”, which is what Rust’s conservatism really helps prevent. When you bake a particular crate into std, competitor crates now have a lot of inertia to overcome; when they're all third-party, the decision is not “add a crate?” but “replace a crate?” which is more palatable.

Letting an API evolve in a third-party crate also provides more accurate data on its utility; you get a lot of eyes on the problem space and can try different (potentially breaking) solutions before landing on consensus. Feedback during a Rust RFC is solicited from a much smaller group of people with less real-world usage.


Is there that much to explore in a given problem space? I believe a lot of people will take the good-enough but stable API over the unstable one striving for an unknown state of perfection. The customers of a library are programmers; they can patch over stuff for their own use case. A v2 can be released once enough pain points have been identified, but there should be a commitment to support v1 for a while.

The dependency creep keeps on happening in web frameworks wherever you look.

I was thinking of this quote from the article:

> Take it or leave it, but the web is dynamic by nature. Most of the work is serializing and deserializing data between different systems, be it a database, Redis, external APIs, or template engines. Rust has one of the best (de)serialization libraries in my opinion: serde. And yet, due to the nature of safety in Rust, I’d find myself writing boilerplate code just to avoid calling .unwrap(). I’d get long chain calls of .ok_or followed by .map_err. I defined a dozen of custom error enums, some taking other enums, because you want to be able to handle errors properly, and your functions can’t just return any error.

I was thinking: This is so much easier in Haskell.

Rather than chains of `ok_or()` and `map_err()` you use the functor interface

Rust:

```
call_api("get_people")
    .map(get_first_name)
    .map(get_name_frequency)
    .unwrap_or(0)
```

Haskell:

```
get_name_frequency . get_first_name <$> callApi "get_people"
```

It's just infinitely more readable, and using the single `<$>` operator spares you an endless number of `map_or` and `ok_or` and other error-handling calls.

However, having experience in large commercial Haskell projects, I can tell you the web apps also suffer from the dreaded dependency explosion. I know of one person who got fired from a project in no small part because building the system he was presented with took > 24 hours when a full build was triggered, and this happened every week. He was on an older system, and the company failed to provide him with something newer, but ultimately it is a failing of the "everything and the kitchen sink" philosophy at play in dependency usage.

I don't have a good answer for this. I think aggressive dependency reduction and tracking transitive dependency lists is one step forward, but it's only a philosophy rather than a system.

Maybe the ridiculous answer is to go back to php.


Rust has trouble supporting higher-kinded types like Functor (even though an equivalent feature is available, namely Generic Associated Types) due to the distinctions it makes between owned and referenced data that have no equivalent in Haskell. Whether these higher abstractions can still be used elegantly despite that complexity is something that should be explored via research, this whole area is not ready for feature development.

> I know of one person who got fired from a project due to no small fact that building the system he was presented with took > 24 hours when a full build was triggered, and this happened every week.

Incremental Nix builds can take less than 1 minute to build everything, including the final deployable Docker image with a single binary, on very large Haskell codebases. The fact that the person was fired for everybody around him systematically failing to admit and resolve a missing piece of supportive infrastructure for the engineering effort of one person tells a lot about the overall level of competence in that team.

> but ultimately it is a failing of the "everything and the kitchen sink" philosophy at play in dependency usage.

Not really, as the kitchen sink only has to build once per its version change, for all future linkage with your software for the entire engineering team doing the builds in parallel.


24 hours? Is the Haskell compiler written in JavaScript, running in a Python JS interpreter written in bash?

PHP is the only popular language that regularly removes insane legacy cruft (to be fair, they have more insane cruft than almost any other language to begin with).

I've found Go's standard library to be really unfortunate compared to rust.

When I update the rust compiler, I do so with very little fear. My code will still work. The rust stdlib backwards compatible story has been very solid.

Updating the Go compiler, I also get a new stdlib, and suddenly I get a bunch of TLS version deprecations, implicit HTTP/2 upgrades, and all sorts of new runtime errors which break my application (and always at runtime, not compile time). Bundling a large standard library with the compiler means I can't just update the TLS package or just the image package; I have to take it or leave it with the whole thing. It's annoying.

They've decided the go1 promise means "your code will still compile, but it will silently behave differently, like suddenly 'time1 == time2' will return a different result, or 'http.Server' will use a different protocol", and that's somehow backwards compatible.

I also find the go stdlib to have so many warts now that it's just painful. Don't use "log", use "log/slog", except the rest of the stdlib that takes a logger uses "log.Logger" because it predates "slog", so you have to use it. Don't use the non-context methods (like 'NewRequest' is wrong, use 'NewRequestWithContext', don't use net.Dial, etc), except for all the places context couldn't be bolted on.

Don't use 'image/draw', use 'golang.org/x/image/draw' because they couldn't fix some part of it in a backwards compatible way, so you should use the 'x/' package. Same for syscall vs x/unix. But also, don't use 'golang.org/x/net/http2' because that was folded into 'net/http', so there's not even a general rule of "use the x package if it's there", it's actually "keep up with the status of all the x packages and sometimes use them instead of the stdlib, sometimes use the stdlib instead of them".

Go's stdlib is a way more confusing mess than rust. In rust, the ecosystem has settled on one logging library interface, not like 4 (log, slog, zap, logrus). In rust, updates to the stdlib are actually backwards compatible, not "oh, yeah, sha1 certs are rejected now if you update the compiler for better compile speeds, hope you read the release notes".


Man, I've been using Go as my daily driver since 2012 and I think I can count the number of breaking changes I've run into on one finger, and that was a critical security vulnerability. I have no doubt there have been others, but I've not had the misfortune of running into them.

> Don't use "log", use "log/slog", except the rest of the stdlib that takes a logger uses "log.Logger" because it predates "slog", so you have to use it.

What in the standard library takes a logger at all? I don't think I've ever passed a logger into the standard library.

> the ecosystem has settled on one logging library interface, not like 4 (log, slog, zap, logrus)

I've only seen slog since slog was added to the standard library. Pretty sure I've seen logrus or similar in the Kubernetes code, but that predated slog by a wide margin and anyway I don't recall seeing _any_ loggers in library code.

> In rust, the ecosystem has settled on one logging library interface

I mean, in Rust everyone has different advice on which crates to use for error handling and when to use each of them. You definitely don't have _more standards_ in the Rust ecosystem.


> I don't think I've ever passed a logger into the standard library.

`net/http.Server.ErrorLog` is the main (only?) one, though there are a lot of third-party libraries that take one.

> I've only seen slog since slog was added to the standard library

Most go libraries aren't updated yet, in fact I can't say I've seen any library using slog yet. We're clearly interfacing with different slices of the go ecosystem.

> in Rust everyone has different advice on which crates to use for error handling and when to use each of them. You definitely don't have _more standards_ in the Rust ecosystem.

They all are still using the same error type, so it interoperates fine. That's like saying "In go, every library has its own 'type MyError struct { .. }' that implements error, so go has more standards because each package has its own concrete error types", which yeah, that's common... The rust libraries like 'thiserror' and such are just tooling to do that more ergonomically than typing out a bunch of structs by hand.

Even if one dependency in rust uses hand-typed error enums and another uses thiserror, you still can just 'match' on the error in your code or such.

On the other hand, in Go you end up having to carefully read through each dependency's code to figure out if you need to be using 'errors.Is' or 'errors.As', and with what types, but with no help from the type-system since all errors are idiomatically type-erased.


> When I update the rust compiler, I do so with very little fear. My code will still work. The rust stdlib backwards compatible story has been very solid.

This is not always true, as seen with rustc 1.80 and the time crate. While it only changed type inference, that still caused some projects like Nix a lot of trouble.


That caused compilation errors though, which are alright in my book, and don't increase my fear to update.

Silent runtime changes are what spook me and what I've gotten more often with Go.


FYI: Deno includes:

1. The web standard APIs themselves
2. Its own standard library, inspired by Go's standard library (plus some niceties like TOML, minus some things not wanted in a JS/TS standard library since they're already in the web standard APIs)
3. Node's standard library (AKA built-in modules), to maintain backwards compatibility with Node

Bun has 1 and 3, and sort of has its own version of 2 (haphazard, not inspired by Go, and full of Bun-isms which you may or may not like, but standard database drivers are nice).


Honestly this is one of the biggest reasons I stick with Elixir. Between Elixir's standard library, the BEAM/OTP, and Phoenix (with Ecto), I honestly have very few dependencies for web projects. I rarely, at this point, find the need to add anything to new projects except for maybe Mox (mocking library) and Faker (for generating bits of test data). And now that the Jason (JSON) library has been more or less integrated into OTP I don't even have to pull it in. Elixir dev experience is truly unmatched (IMHO) these days.

Same. That’s why Go is such a great tool.

A Tauri hello world app has about 500(?) deps out of the box, always makes me laugh.

I get that cross platform desktop app is a complicated beast but it gives off those creepy npm vibes.


With Go it's good to keep in mind the Proverbs, which includes this gem:

  A little copying is better than a little dependency.

Good luck, if the little copying is ICU-based localization.

The whole point of "a little copying" is when there's self-contained code that can be copypasta'd -- for situations that are appropriate.

I have run into a surprising number of basic syntax errors with this one. At least in the few runs I have tried, it's a swing and a miss. I wonder if the pressure of the Claude release is pushing these stopgap releases.

I have run into that a lot, which is annoying. Even though all the code compiles, because Go is backwards compatible, it all looks so different. Same issue for Python, but in that case the API changes lead to actual breakage. For this reason I find Go to be fairly great for codegen, as the stability of the language is hard to compete with and the standard lib is a powerful enough tool to support many, many use cases.


I find it fascinating to give the LLMs huge stacks of reflective context. It's incredible how good they are at digesting huge amounts of CSV-like data. I imagine they would be good at trimming their own context down.

I did some experiments exposing the raw latent states (using hooks) of a small 1B Gemma model to a large model as it processed data. I'm curious whether it is possible for the large model to nudge the smaller model's latents to get the outputs it wants. I desperately want to get thinking out of tokens and into latent space. Something I've been chasing for a bit.


Yes - I think there is untapped potential in figuring out how to understand and use the latent space. I'm still at the language layer. I occasionally stumble across something that seems to tap into something deeper, and I'm getting better at finding those. But direct observability and actuation of those lower layers is an area that I think is going to be very fruitful if we can figure it out.


Trying to use ESNs as a random projection for audio data, and potentially rendered text data, for some AI workflows. Seeing if I can use the echo states, running both forward and backward through the data, as a holographic representation which would act as a temporally dense token for potential use in LLM or audio encoder inputs.
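A minimal numpy sketch of that forward/backward idea, with made-up sizes: run a fixed random reservoir over a signal start-to-end and end-to-start, then concatenate the two final states into one fixed-size vector for the whole clip.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 64, 1

# Fixed random reservoir, rescaled toward the echo state property
# (spectral radius below 1 so past inputs fade out).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(n_res, n_in))

def final_state(signal, leak=0.3):
    """Run the leaky reservoir over a 1-D signal and return the last state."""
    x = np.zeros(n_res)
    for u in signal:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.array([u]))
    return x

signal = np.sin(np.linspace(0, 8 * np.pi, 200))
fwd = final_state(signal)            # reads the signal forwards
bwd = final_state(signal[::-1])      # and backwards
token = np.concatenate([fwd, bwd])   # one dense vector for the whole clip
print(token.shape)                   # (128,)
```

Whether a vector like this is actually useful as an LLM or encoder input is exactly the open question; the sketch only shows the projection mechanics.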


I feel like wishing for UI innovation is using the monkey's paw. My web experience feels far too innovative and not nearly consistent enough. I go to the Internet to read and do business, not to explore the labyrinth of concepts UI designers feel I should want. Take me back to standards, shortcuts, and consistency.


Yes! I don't want a car with an "innovative" way of steering. I don't want a huge amount of creativity to go into how my light switches work. I don't want shoes that "reinvent" walking for me (whatever the marketing tagline might say).

Some stuff has been solved. A massive number of annoyances in my daily life are due to people un-solving problems with more or less standardized solutions due to perverse economic incentives.


> I don't want a car with an "innovative" way of steering.

99.5% agree, because I would love to try Saab's drive-by-wire concept from 1992: https://www.saabplanet.com/saab-9000-drive-by-wire-1992/


The reason this was only a research project and never went into mass production was regulatory stuff, IIRC? (Most EU countries still require, to this day, a "physical connection between steering wheel and wheels" in their traffic regulations.)


This was a few years before Sweden joined the EU, but yes, I think the lack of a physical connection was one of the main problems.

From what I've read the test drivers also thought the car was too difficult to drive, with the joystick being too reactive. I wonder how much of that could be solved today with modern software and stability control tech.

I can't find it now, but I do remember a similar prototype with mechanical wires (not electrical) that was supposed to solve the regulatory requirements. That joystick looked more like a cyclic control from a helicopter.


Having played enough video games that use joysticks for steering, I don't want to drive a real car with one. Crashing in Mario Kart or Grand Theft Auto because I sneezed is fine, but not in real life.


Exactly. The control needs to require both intentional and large motor movements from the driver. Modern steering wheels have the same benefit as the original iPod wheel: easy for small movements, even accidental ones; possible for big movements.

Also funny that they had the ability to swap control to the passenger. So acceleration/brake for one person, steering for another? Really not a good idea.


I think there's a ton of innovation left to be done regarding steering and light switches.

You're right that it's not going to be better designs, but paradigm shifts.

We still don't know what it means to provide input to a mostly self-driving car. It hasn't been solved and people continue to complain about attention fatigue and anxiety. Is the driving position really optimal for that? Are accident fatalities reduced if the driver is sitting somewhere else? Even lane assist still sucks on traditionally designed cars. Is having to fight a motorized wheel to override steering really all that safe?

Light switches may be reliable and never go away, but we have many well-established everyday examples of automatic lights: door switches, motion sensing, proximity sensing, etc. You never think about it and that's the point.


> Yes! I don't want a car with an "innovative" way of steering.

You might, but you'll never really know.

I mean, steering wheels themselves were once novel inventions. Before those there were "tillers" (essentially a rod with a handle)[0], and before those: reins, to pull the front in the direction you want.

[0]: https://en.wikipedia.org/wiki/Benz_Patent-Motorwagen


I highly doubt there's a steering input device so superior to the current wheel shape that it's worth throwing out the existing standard. Yes, at one point how steering should work (or how you should navigate the Web) was uncertain, but eventually everyone settled on something that worked well enough that it was no longer worthwhile to mess with it.

Although, one thought I had is that there's nothing wrong with experimenting with non-standard interfaces as long as you still have the option to still just buy, say, a Toyota with a standard steering wheel instead of 3D Moebius Steering or whatever. The problem is when the biggest manufacturers keep forcing changes by top-down worldwide fiat, forcing customers to either grin and bear it or quit driving (or using the Web) entirely.


I sympathise with the frustration, but I think the issue isn't innovation itself: it's that we've lost the ability to distinguish between solving actual problems and just making things different.

Take mobile interfaces. When touchscreens arrived, we genuinely needed new patterns. A mouse pointer paradigm on a 3.5" screen with fat fingers simply doesn't work. Swipe gestures, pull-down menus, bottom navigation—these emerged because the constraints demanded it, not because someone thought "wouldn't it be novel if..."

The problem now is that innovation has become cargo-culted. Companies innovate because they think they should, not because they've identified a genuine problem. Every app wants its own navigation paradigm, its own gesture language, its own idea of where the back button lives. That's not innovation, that's just noise.

However, I'd have to push back on the car analogy: steering wheels were an innovation over tillers, and a crucial one. Tillers gave you poor mechanical advantage and required constant two-handed attention. The steering wheel solved real problems: better control, one-handed operation, more space for passengers. It succeeded because it was genuinely better, and then it standardised because there was no reason to keep experimenting.

The web needs more of that approach: innovate when there's a genuine problem, then standardise when you've found something that works. The issue isn't innovation, it's the perverse incentive to differentiate for its own sake.


Leaving aside the externalities of constantly breaking everyone's workflow and potentially introducing disastrous bugs, there's an opportunity cost to innovating where there isn't a clear need. Google and others are wasting massive resources endlessly tweaking browsers and the Web because that's all they know how to do, their users are locked in and without recourse, and they don't feel threatened by any competitors or upstarts. I would argue the web and smartphones and similar tech are boring now, but because the market is controlled by only a few huge companies, the tech hasn't been allowed to become a low-margin, standardized, cookie-cutter commodity. Instead these attempts to make this old boring tech seem exciting are getting to the point where it's sad and comical.


Your last paragraph reminded me of HTML5 and the WHATWG which led to official W3C adoption.

Back when that started W3C was still strongly embedded in the XML hellhole.


You need to be careful here, because we have a real tendency to get stuck in local maxima with technology. For instance, the QWERTY keyboard layout exists to prevent typewriter keys from jamming, but we're stuck with it because it's the "standardized solution" and you can't really buy a non-QWERTY keyboard without getting into the enthusiast market.

I do agree changing things for the sake of change isn't a good thing, but we should also be afraid of being stuck in a rut.


I agree with you, but I'm completely aware that the point you're making is the same point that's causing the problem.

"Stuck in a rut" is a matter of perspective. A good marketer can make even the most established best practice be perceived as a "rut", that's the first step of selling someone something: convince them they have a problem.

It's easy to get a non-QWERTY keyboard. I'm typing on a split ortholinear one now. I'm sure we agree it would not be productive for society if 99% of regular QWERTY keyboards deviated a little in search of that new innovation that will turn their company into the next Xerox or Hoover or Google. People need some stability to learn how to make the most of new features.

Technology evolves in cycles: there's a boom of innovation and mass adoption which inevitably levels out into stabilisation and maturity. It's probably time for browser vendors to accept it's time to transition into stability and maturity. The cost of not doing that is that things like adblockers, noscript, justthebrowser, etc. will gain popularity and remove any anti-consumer innovations they try. Maybe they'll get to a position where they realise their "innovative" features are being disabled by so many users that it makes sense to shift dev spending to maintenance and improvement of existing features, instead of "innovation".


> For instance, the QWERTY keyboard layout exists to prevent typewriter keys from jamming, but we're stuck with it because it's the "standardized solution" and you can't really buy a non-QWERTY keyboard without getting into the enthusiast market.

So, we are "stuck" with something that apparently seems to work fine for most people, and when it doesn't there is an option to also use something else?

Not sure if that's a great example

Sometimes good enough is just good enough


> the QWERTY keyboard layout exists to prevent typewriter keys from jamming

even if it is true (is it a myth by any chance?), it does not mean that alternatives are better at, say, typing speed


As someone who makes my own keyboard firmware, 100% agree. For most people, typing speed isn't a bottleneck. There is a whole community of people who type faster than 250wpm on custom, chording-enabled keyboards. The tradeoff is that it takes years to relearn how to type. It's the same as being a stenographer at that point. It's not worth it for most people.

Even if there was a new layout that did suddenly allow everyone to type twice as fast, what would we get with that? Maybe twice as many social media posts, but nothing actually useful.


I'd imagine at this point that most social media posts are done by swiping or tapping a phone's virtual keyboard (if one is used at all).


One doesn't need to be a scientist to look at one's own hands and fingers and see that they are not crooked to the left. An ortholinear keyboard would be objectively better, even with the same keymap as QWERTY, but we don't produce those for the masses for a variety of reasons. Same with many other ideas.


> to see that they are not crooked to the left

How does that make ortholinear keyboards better?


If I recall correctly, QWERTY was designed to minimize jamming. The myth is that it was designed to slow people down.

Whether it does slow people down, as a side effect, is not as well established since, as another person pointed out, typing speed isn't the bottleneck for most people. Learning the layout and figuring out what to write is. On top of that, most of the claims for faster layouts come from marketing materials. It doesn't mean they are wrong, but there is a vested interest.

If there was a demonstrably much faster input method for most users, I suspect it would have been adopted long ago.


It's been debunked by both research (no such mention at the time) and practice on extant machines.


These days QWERTY keyboards are optimal because programs, programming languages and text formats are optimized for QWERTY keyboards.


Depends on the language, no? QWERTY isn't great for APL.


I have a QWERTZ keyboard!

Is my digital life at a natural end now?


If you mean the default German keyboard layout then, yes, putting backslashes, braces and brackets behind AltGr makes it sub-optimal in my book. Thankfully what's printed on the keys is not that important, so you too can have a QWERTY keyboard if you want.


I wish for browser ui innovation.

The labyrinth of ways to interact with the temporal path between pages is a cluster. History, bookmarks, tabs, windows, tab groups.

There are many different reasons to have a tab, bookmark, or history entry. They don't all mean the same thing. Even something as simple as comparison shopping could have a completely different workflow of sorting and bucketing the results, including marking items as leading candidate, candidate, no, no but. Contextualizing why I am leaving something open vs closing it is information stored ONLY in my head, that would be useful to have stored elsewhere.

Think about when you use the back button vs the close tab button. What does the difference between those two concepts mean to you? When do you choose to open a new tab vs click? There is much to be explored and innovated. People have tried radical redesigns; haven't seen anything stick yet.


If you expect the browser to help you manage your various workflows beyond generic containers (tabs, tab groups), then you become tied into the browser's way of doing things. Are you sure you want that?

I'm not saying your hopes are bad, exactly. I'm interested in what such workflows might look like. Maybe there _is_ a good UX for a web shopping assistant. I have an inkling you could cobble something interesting together quite fast with an agentic browser and a note-taking webapp. But I do worry that such an app will become yet another way for its owner to surveil their users in some of the more accurate and intimate areas of their lives. Careful what you wish for, I reckon.

In the meantime, what's so hard about curating a Notepad/Notes/Obsidian/Org mode file, or Trello/Notion board to help you manage your projects?


The shopping assistant was a specific example, but in the process of research, brainstorming, etc., there's a bunch of different ways I'd like to see visualization and a record of how I got somewhere, what was discarded, a summary of what was retained, what's coming next, and options for branching.

The web is a document structure, but browsing it doesn't need to be linear.


We had that ability in Firefox, through XUL. Then it was removed. Tree Style Tab addon doesn't work properly to this day because of this.

We had that ability in Chrome, through Chrome Apps. You could make a browser app, load pages in webviews, with the whole browser frame customizable. Then it was removed.

We had an ability to make a new innovative browser, until Google infested all the standardization committees and increased the complexity of standards on a daily basis for well over a decade. Now they monetize their effort on making Chrome by removing adblockers and enforcing their own ads, knowing full well that even keeping a fork that supports Manifest V2 is infeasible for a free open-source project.

There is no way forward with the web we have right now. No innovation will happen anymore.


Kinda yeah, kinda no. Big-thinking drastic UI experiences are usually shit. But small, thoughtful touches made with care can still make a big difference between a website that just delivers the data you need and one that's pleasant to interact with.

There's a similar amateurs-do-too-much effect with typography and design. I studied typography for four semesters in college, as well as creative writing. The best lessons I learned were:

In writing, show, don't tell.

In typography, use the type to clarify the text - the typography itself should be transparent and only lead to greater immersion, never take the reader out of the text.

Good UI follows those same principles. Good UX is the UI you don't notice.


It definitely feels like it is gone. Of course I'm largely talking about the applications that I use, e.g. MS Word which is still using the searchless 1980s character map and has a crazy esoteric add-on installation process. It's hilariously bad when we consider the half-screen UI which obscures a considerable amount of the ribbon.

The UX is also awful.

But I think this is a compounding problem that spans generations of applications. Consider the page convention — a great deal of the writing content we typically publish, at a societal level, will be digital-only so why are we still defaulting to paper document formats? Why is it so fucking hard to set a picture in?

And it's that kind of ossification and familiar demand that reinforces the continuum that we see, I think. And when a company does get creative and sees some breakthrough success it is constrained to nascency before it gets swallowed by conglomerate interests and strangled.

And Google's alternative ecosystem has all of these parallels. It's crazy to see these monolithic companies floundering like this. That's what I don't understand.


I want to explore the space of audio encoding and GPT-like understanding of audio. I'm highly interested in how a simple 1-D signal must go through so much processing to be understood by language models, and am curious what tradeoffs occur. Would also be fun to make a TTS library and understand it.


I'm trying to make a neural audio codec using a variety of misguided methods. In one I am using ESNs wrong, spreading leak rates in a logarithmic fashion so the reservoir acts like a digital cochlea. The other is trying to do the same with a complex mass-spring-damper system, to simulate the various hairs of the cochlea as well. Both approaches make super interesting visuals and appear to cluster reasonably well, but I am still learning about RVQ and audio loss (which involves GANs and spectral loss). I kinda wanna beat SNAC if I can.
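A rough numpy sketch of the log-spaced leak-rate idea (sizes and feedback made up; each unit here only has simple self-feedback rather than a full reservoir): fast-leaking units track high frequencies, slow-leaking units integrate low ones, loosely like positions along the cochlea.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units = 32

# Leak rates spread logarithmically across the units, from slow (0.01)
# to fast (1.0), analogous to frequency tuning along the cochlea.
leaks = np.logspace(-2, 0, n_units)
W_in = rng.normal(size=n_units)

def cochlea_states(signal):
    """Run every unit over the signal; return the state trajectory."""
    x = np.zeros(n_units)
    states = []
    for u in signal:
        x = (1 - leaks) * x + leaks * np.tanh(W_in * u + 0.5 * x)
        states.append(x.copy())
    return np.array(states)          # shape (time, units)

t = np.linspace(0, 1, 500)
chirp = np.sin(2 * np.pi * (5 + 40 * t) * t)   # frequency sweep as test input
S = cochlea_states(chirp)
print(S.shape)  # (500, 32)
```

Plotting `S` as an image (time on one axis, units on the other) gives the spectrogram-like visuals described, with different leak rates lighting up at different points of the sweep.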


Do you have a log available somewhere?


I keep everything in my self hosted gitea. Just made it public.

https://gitter.swolereport.com/robviren/cspace


Thanks, I’ll check it out

Edit: timed out


Reminds me of https://github.com/RobViren/kvoicewalk where people take voice clips and train a text to speech using random walks.

Not related, misguided methods :D


Well, it’s the same author so it is kind of related.


Love to see the Pi getting some rather creative use! The most use I got out of one was as a health check endpoint for power in my garage which was holding frozen milk for my newborn, but the circuit kept tripping. Had another server email me if it couldn't reach the Pi for some reason. Just used some real simple Go code. It was not production but it worked. Not everything needs to change the world, maybe just make your day easier.


Exactly. When it helps your daily life, the whole build process is way more exciting. I really liked your project as well.

