it's probably a good thing to have domestic advanced manufacturing if only to have real-world testbeds for development of advanced automation technology.
it's cool and all that boston dynamics can do what they do, but i wonder if one reason why the chinese robotics industry is so advanced is because they've been able to test in production on real production lines, experiment with dark factories and learn a ton in the process.
it's kind of funny when you think about it. both the west and east are facing down the same set of potential problems that come with real automation of industries that have served as true economic dynamos for decades.
> it's probably a good thing to have domestic advanced manufacturing if only to have real-world testbeds for development of advanced automation technology.
Yes, it's a good thing to have domestic advanced manufacturing, but this probably doesn't qualify.
According to the article, it's a site where they already assemble servers for Apple's own use, and will now start assembling Mac Minis as well. Electronics assembly is, for the most part, a pretty low-value part of the supply chain.
It's not nothing, but it pales in comparison to the scientific and technological sophistication and financial value of wafer fabs and IC test and packaging facilities. (I worked at Intel's flagship fabs in Oregon, and have worked as a consultant with other semi fabs around the world.)
maybe it would be interesting to include a lecture on how to interact with the open source community and successfully contribute to an open source project while respecting maintainer time and energy (and the other unwritten rules of (n)etiquette).
edit: already in the "beyond the code" section... cool!
orcad is the commercial classic for doing schematics with a spice backend. (spice is an oss circuit-simulation engine out of berkeley. for dc analysis it just solves classic nodal analysis; for ac you can feed in signals from an idealized signal generator and capture waveforms at various nodes in the circuit.) there's also a pretty cool-looking commercial web thing now that maintains netlists with real-time prices and lets you swap parts out, set minimum quantities, etc.
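a minimal sketch of the dc nodal-analysis idea mentioned above, for a hypothetical resistive divider (the component values are made up for illustration): one unknown node voltage, so the nodal equation solves directly.

```python
# toy dc nodal analysis, the same idea behind spice's dc operating-point solve.
# assumed circuit: 10 V source through R1 = 1k into node A, R2 = 2k from
# node A to ground. kcl at node A gives v*(1/R1 + 1/R2) = Vs/R1.

def dc_node_voltage(vs, r1, r2):
    """Solve the single nodal equation for a resistive divider."""
    g1, g2 = 1.0 / r1, 1.0 / r2   # conductances
    return (vs * g1) / (g1 + g2)  # v = injected current / total conductance

v_a = dc_node_voltage(10.0, 1_000.0, 2_000.0)
print(v_a)  # divider: 10 * 2k / (1k + 2k) ≈ 6.667 V
```

real spice builds the same kind of equation for every node and solves the resulting matrix; this is just the 1x1 case.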
kicad is the oss orcad, but i never got good at it. (to be fair, orcad was weird to learn as well)
I think altium has taken over as the top tier commercial offering in this space.
I always disliked OrCAD, especially because Cadence had excellent software that predated OrCAD and, for reasons I cannot fathom, chose to promote OrCAD after the acquisition instead of the better software.
Here's a specific example from the interface: the keyboard shortcut to draw a wire in the old software was 'w', but OrCAD required 'ctrl + w'. Why force me to hold Ctrl when 'w' doesn't do anything on its own? It was filled with similar tiny annoyances that just slowed things down. (Admittedly, it's been years since that was my primary work, and free stuff is good enough for what I do now.) I sincerely hope OrCAD has continued to improve over the years.
Altium has taken over a lot of small-to-medium-sized shops, mostly because the price is right for its capability. It also has a history, going back to the Protel days, of being the least-bad compromise between the odd mixtures of excellence and user-hostility that Cadence and Mentor tend to come up with, and they've done a good job over the last decade-plus of marketing it to those shops. Cadence and Siemens née Mentor (and maybe Zuken? I've never seen Zuken in the wild, but it always makes these lists) have meanwhile been neglecting the entry level and aggressively trying to move smaller organizations to their higher-tier offerings.

But while it's Altium's flagship product, it is not top tier. It is really entry-level for a professional PCB-level design package, in the same class as PADS and OrCAD rather than Xpedition and Allegro.
> To me that implies the input isn't deterministic, not the compiler itself
or the system upon which the compiler is built (as well as the compiler itself) has made some practical trade-offs.
the source file contents are usually deterministic. the order in which they're read and combined and build-time metadata injections often are not (and can be quite difficult to make so).
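a tiny sketch of one such ordering problem, assuming a hypothetical build step that globs for source files: directory iteration order is filesystem-dependent, so sorting is the usual fix.

```python
from pathlib import Path

def source_files(root):
    # Path.rglob yields entries in filesystem order, which can differ
    # between machines and even between runs. sorting by path removes
    # this one common source of build nondeterminism.
    return sorted(Path(root).rglob("*.c"))
```

the injected build-time metadata (timestamps, hostnames, random build ids) is the harder half to pin down.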
I mean, if you turn off incremental compilation and build in a container (or some other "clean room" environment), it should turn out the same each time. Local builds are very non-deterministic, but CI/CD shouldn't be.
Either way it's a nitpick though, a compiler hypothetically can be deterministic, an LLM just isn't? I don't think that's even a criticism of LLMs, it's just that comparing the output of a compiler to the output of an LLM is a bad analogy.
> I mean, if you turn off incremental compilation and build in a container (or some other "clean room" environment), it should turn out the same each time. Local builds are very non-deterministic, but CI/CD shouldn't be.
lol, should. i believe you have to control the clock as well, and even then non-determinism can still be introduced by scheduler noise. maybe it's better now, but it used to be very painful.
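a minimal sketch of the check reproducible-build pipelines run: build the same thing twice (clean environment, pinned toolchain, and conventionally a fixed `SOURCE_DATE_EPOCH` for the clock) and require the artifact hashes to match bit-for-bit. the build paths here are hypothetical.

```python
import hashlib

def artifact_digest(path):
    """Hash a build artifact so two 'identical' builds can be compared."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# a reproducibility check would run the build twice and compare, e.g.:
# assert artifact_digest("build1/app") == artifact_digest("build2/app")
```

any mismatch means some entropy (a timestamp, an unordered hash map, scheduler-dependent output) leaked into the artifact.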
> Either way it's a nitpick though, a compiler hypothetically can be deterministic, an LLM just isn't? I don't think that's even a criticism of LLMs, it's just that comparing the output of a compiler to the output of an LLM is a bad analogy.
llm inference is literally sampling a distribution. the core distinction is real though, llms are stochastic general computation where traditional programming is deterministic in spirit. llm inference can hypothetically be deterministic as well if you use a fixed seed, although, like non-trivial software builds on modern operating systems, squeezing out all the entropy is a non-trivial affair. (some research labs are focused on just that, deterministic llm inference.)
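a toy illustration of the fixed-seed point, with a made-up four-word vocabulary standing in for a real model's output distribution: given the same seed, sampling is bit-for-bit repeatable. (real inference adds floating-point and scheduling nondeterminism on top, which is the hard part to squeeze out.)

```python
import random

# toy "token sampler" over an assumed vocabulary and distribution
VOCAB = ["the", "cat", "sat", "mat"]
PROBS = [0.4, 0.3, 0.2, 0.1]

def sample_tokens(seed, n=5):
    rng = random.Random(seed)  # private generator with a fixed seed
    return [rng.choices(VOCAB, weights=PROBS)[0] for _ in range(n)]

# same seed, same distribution -> identical output every run
assert sample_tokens(42) == sample_tokens(42)
```

greedy decoding (always take the argmax) is the degenerate case where no sampling happens at all, which is why it's often the first step toward reproducible inference.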
it's been 14 years since i've used msvc for anything real. iirc the philosophy back then was yearly versioned releases with rolling intermediate updates.
this seems to go down the road toward deterministic-ish builds, which i think is probably a bad idea: the whole ecosystem is built on rolling updates, and a partial move toward pinning dependencies (using bespoke tools) could get complicated.
this highlights the saddest thing about this whole generative ai thing. beforehand, there was opportunity to learn, deliver and prove oneself outside of classical social organization. now that's all going to go away and everyone is going to fall back on credentials and social standing. what an incredible shame for social mobility and those who for one reason or another don't fit in with traditional structures.
Vouch is a good quick fix, but it has some properties that can lead to collapsed states, discussed in the article linked here: https://hackernews.hn/item?id=46938811
it's also going to kill the open web. nobody is going to want to share their ideas or code publicly anymore. with the natural barriers gone, the incentives to share will go to zero. everything will happen behind closed doors.
GitHub has never been a good method of clout chasing. In decades of being in this industry, I've seen fewer than 1% of potential employers care about FLOSS contributions beyond your having some stuff on your GH.
The origin of the problems with low-quality drive-by requests is github's social nature[0]. AI doesn't help, but it's not the cause.
I've seen my share of zero-effort drive-by "contributions" so people can pad their GH profile, long before AI, even on the tiny obscure projects I've published there; larger and more prominent projects have always been spammed.
If anything, the AI-enabled flood will force the reckoning that was a long time coming.
I feel this is a bit too pessimistic. For example, people could make tutorials that auto-certify in vouch. Or others could write agent skills that teach etiquette, which agents must demonstrate usage of before PRs can be created.

Yes, there's room for deception, but this is mostly about superhuman skills, newcomer ignorance, and a new eternal September that we'll surely figure out.
Vouch is forge-agnostic. See the 2nd paragraph in the README:
> The implementation is generic and can be used by any project on any code forge, but we provide GitHub integration out of the box via GitHub actions and the CLI.
And then see the trust format, which allows for a platform tag. There isn't even a default-GitHub approach; the GitHub actions just default to GitHub via the `--default-platform` flag (which makes sense, because they're being invoked ON GITHUB).
So I can choose from github, gitlab or maybe codeberg? What about self-hosters, with project-specific forges? What about the fact that I have an account on multiple forges, that are all me?
This seems to be overly biased toward centralized services, which means it's just serving to further reinforce Microsoft's dominance.
The platform is just a text string, so it can be anything you want; then use the vouch CLI (or parse the format yourself) to do whatever you want. We don't do identity mapping, because cross-forge projects are rare, maintaining that mapping would centralize the system, and it's not what we're trying to do. The whole thing is explicitly decentralized: tiny, community-specific trust networks that you build up.
i always got the sense that spinlocks were about maximum portability and reliability in the face of unreliable event-driven approaches. the dumb, inefficient thing that makes the heads of the inexperienced explode, but that actually just works and makes the world go 'round.
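a minimal spinlock sketch to make the "dumb but works" point concrete. here a non-blocking `threading.Lock.acquire` stands in for the hardware test-and-set instruction a real spinlock would use; the counter workload is made up for illustration.

```python
import threading

class SpinLock:
    """Busy-wait on an atomic test-and-set instead of sleeping on an event."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # spin: burn cpu until the flag is free. no scheduler events,
        # no wakeup to miss, nothing clever to go wrong.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments lost under contention
```

the inefficiency (wasted cpu while spinning) is the price for having no dependence on wakeup notifications, which is exactly why it's the portable fallback.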