And HTML/CSS/JS are far more powerful for designing than any of SwiftUI/IB on Apple, Jetpack/XML on Android, or WPF/WinUI on Windows, leaving aside that this is what designers, design platforms and AI models already work best with. Even if all the major OSes converged on one solution, it still wouldn't compete on ergonomics or declarative power for designing.
Lol SwiftUI/Jetpack/WPF aren’t design tools, they’re for writing native UI code. They’re simply not the right tool for building mockups.
I don’t see how design workflows matter in the conversation about cross-platform vs native and RAM efficiency since designers can always write their mockups in HTML/CSS/JS in isolation whenever they like and with any tool of their choice. You could even use purely GUI-based approaches like Figma or Sketch or any photo/vector editor, just tapping buttons and not writing a single line of web frontend code.
Who said anything about mockups? Design goes all the way from concept to real-world. If a designer can specify declaratively how that will look, feel, and animate, that's far better than a developer taking a mockup and trying their hardest to approximate some storyboards. Even as a developer working against mockups, I can move much faster with HTML/CSS than I can with native, and I'm well experienced at both (yes, that includes every tech I mentioned). With native, I either have to compromise on the vision, or I have to spend a long time fighting the system to make it happen (...and even then)
Yikes. I spent 15 years developing native on both mobile and desktop. If you think that native has the same design flexibility as HTML/CSS, you're objectively wrong.
By design, each operating system limits you to its particular design language, and the styling of components is hidden behind the API, making forward-compatible customisation impossible. There's no escaping that. And if you acknowledge that fact, you can't then claim native has the same design flexibility as HTML/CSS. If you don't acknowledge it, you're unhinged from reality.
There are pros and cons to the two approaches, of course. But that's not what's being debated here.
The real disconnect is that the user doesn't really care all that much; it's mostly the designers who care. And Qt, for example, but also WPF, let you style components almost beyond recognition (often to unusable results). So if everyone will need to make do with 8GB for the foreseeable future, designers might just be told "No.", which admittedly will be a big shock to some of them. Or maybe someone finally figures out how to do HTML+CSS in a couple of megabytes.
I recently switched from Spotify (well known Electron-based app) to Apple Music (well known native app). The move was mostly an ethical one, but I must say, the UI functionality and app features are basically poverty in comparison. One tiny example, navigating from playlist entry to artist requires multiple interactions. This is just one of many frustrations I've had with the app. But hey, it has beautiful liquid glass effects!
In short: iteration time matters. The time from design to implementation, to internal review, to real user feedback, and back to design from each phase should be as fast as possible. You don't get the same velocity in native as you do with the web stack. Add to that that you have to design and implement in quadruplicate: iOS design for iOS, Android for Android, macOS for Mac, Windows design for Windows. All that is why people use Electron.
Are there any polls (or any educated guesses) gauging what proportion of people who identify as Zionists want equal status with all Palestinians (particularly democratic rights) within the bounds of what was once Mandatory Palestine?
A) Wasn't the article suggesting that it would be 4 bits end-to-end in this hypothetical photonic matrix multiplication co-processor? I.e., the weights are 4-bit.
B) Power consumption and speed. Essentially, chips are limited by the high resistance (and hence heat loss) of the semiconductor. Photonics can encode multidimensionally, and data processing is as fast as the input light signal can be modulated and the output light signal can be interpreted. I'd guess this would favour heavy computations that require small inputs and outputs, because eventually you're bottlenecked by conventional chips.
While power consumption and speed are indeed the advantages, depending on the concrete application it may be very difficult or even impossible to make an optical computing device of a size comparable to an electronic one.
The intrinsic size of the optical computing elements is much larger, being limited by wavelength. Then a lot of additional devices are needed, for conversion between electrical and optical signals and for thermal management.
Optical computing elements can be advantageous only in the applications where electronic devices need many metallic interconnections that occupy a lot of space, while in the optical devices all those signals can pass through a layer of free space, without interfering with each other when they cross.
This kind of structure may appear when doing tensor multiplication, so there are indeed chances that optical computing could be used for AI inference.
Nevertheless, optical computing is unlikely to ever be competitive in implementing general-purpose computers. Optical computers may appear but they will be restricted to some niche applications. AI inference might be the only one that has become widespread enough to motivate R&D efforts in this direction.
Bold claim to say these challenges will never be surmounted. Either a more economic technology would have to mature first, or civilisation would have to halt progress for that to be true. If scientific advances could yield miniaturised photonics offering a significant cost/benefit over any contemporary technology, the concept will still be pursued. Unless you're suggesting it's theoretically and physically impossible?
The tokens still land in the context window either way. Prompt caching gives you a discount on repeated input, but only for stable prefixes like system prompts. Git output changes every call, so it's always uncached, always full price. Nit reduces what goes into the window in the first place.
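To make the prefix-caching point concrete, here's a toy Python sketch. The token lists and the billing split are purely illustrative (no provider's actual tokenizer or pricing API), but the mechanism is the same: only the unchanged leading prefix is eligible for the discount, so volatile git output after the stable prompt is always billed in full.

```python
def split_cached(prev_tokens, new_tokens):
    # Tokens up to the first difference hit the cache (discounted rate);
    # everything from the first change onward is billed at full price.
    shared = 0
    for a, b in zip(prev_tokens, new_tokens):
        if a != b:
            break
        shared += 1
    return shared, len(new_tokens) - shared

# A stable system prompt caches; git output that changes each call does not.
prev = ["SYS", "PROMPT", "git", "@@ -1 +1 @@", "-old", "+new"]
curr = ["SYS", "PROMPT", "git", "@@ -2 +2 @@", "-foo", "+bar"]
cached, full_price = split_cached(prev, curr)  # cached=3, full_price=3
```

Note the discount ends at the first differing token, even though later tokens may coincide again.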
I was thinking more that if you write a prompt into an IDE with first-party LLM-platform integration (e.g. VS Code with GitHub Copilot), it would make sense on their end to reduce and remove redundant input before ingesting the tokens into their models, to increase throughput (more customers) and decrease latency (lower costs). They would be foolish not to do this kind of optimisation, so surely they must be doing it. Whether they'd pass those token savings on to the user, I couldn't say.
No, because tool calls are generally all client-side. Unless you mean using a remote environment where Claude Code is running separately, but those usually aren't charged by the token.
Except for the times you do want it to run the CI.
LLM issues can often be solved by being more and more specific, but at some point being specific enough is just as time consuming as jumping in and doing it yourself.
Looking at their misrepresentations and exaggerations regarding Erlang, it now seems like a long lead-up to a sales pitch. Their motivation for exaggerating deficiencies in existing approaches is to lend chapter 7 more rhetorical punch. All the same, I'm keen to hear what they have to say.
A really interesting read as someone who spends a bit of time with Elixir. I wasn't aware of the atomics and counters Erlang features that break isolation.
They do say that race conditions are mitigated purely by discipline at design time, but then mention race conditions found via static analysis:
> Maria Christakis and Konstantinos Sagonas built a static race detector for Erlang and integrated it into Dialyzer, Erlang’s standard static analysis tool. They ran it against OTP’s own libraries, which are heavily tested and widely deployed.
> They found previously unknown race conditions. Not in obscure corners of the codebase. Not in exotic edge cases. In the kind of code that every Erlang application depends on, code that had been running in production for years.
I imagine the 4th issue, protocol violation, could possibly be mitigated by a type-safe language on the BEAM like Gleam (or Elixir once types are fully implemented).
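To illustrate what a type system buys here: a rough Python sketch of a closed message protocol (Gleam would express this as a typed message variant; the message names and handler below are entirely made up). A checker like mypy rejects any call site that sends a message outside the declared set, which is exactly the "protocol violation" class of bug.

```python
from dataclasses import dataclass
from typing import Union

# The closed set of messages this "process" accepts.
@dataclass
class Deposit:
    amount: int

@dataclass
class Withdraw:
    amount: int

Msg = Union[Deposit, Withdraw]

def handle(balance: int, msg: Msg) -> int:
    # A static checker flags any caller passing something outside Msg,
    # so the protocol violation is caught before the code ever runs.
    if isinstance(msg, Deposit):
        return balance + msg.amount
    if isinstance(msg, Withdraw):
        return balance - msg.amount
    raise TypeError(f"protocol violation: {msg!r}")

balance = handle(handle(100, Deposit(50)), Withdraw(30))  # 120
```

The runtime `TypeError` is the untyped-Erlang behaviour; the point of the type annotation is that you never reach it.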
> They found previously unknown race conditions. Not in obscure corners of the codebase. Not in exotic edge cases. In the kind of code that every Erlang application depends on, code that had been running in production for years.
If these race conditions are in code that has been in production for years and yet the race conditions are "previously unknown", that does suggest to me that it is in practice quite hard to trigger these race conditions. Bugs that happen regularly in prod (and maybe I'm biased, but especially bugs that happen to erlang systems in prod) tend to get fixed.
True. And that the subtle bugs were then picked up by static analysis makes the safety proposition of Erlang even better.
> Bugs that happen regularly in prod
It depends on how regular and reproducible they are. Timing bugs are notoriously difficult to pin down. Pair that with let-it-crash philosophy, and it's maybe not worth tracking down. OTOH, Erlang has been used for critical systems for a very long time – plenty long enough for such bugs to be tracked down if they posed real problems in practice.
Erlang has "die and be restarted" philosophy towards process failures, so these "bugs that happen to erlang systems in prod" may not be fixed at all, if they are rare enough.
As of now, the post you're replying to says "bugs that regularly happen ... in prod"
Now, if it crashes every 10 years, that is technically regular, but I think the meaning is that it happens often. Back when I operated a large distributed cluster, yes, some rare crashes happened that never got noticed, or the triage was 'wait and see if it happens again' and it didn't. But let-it-crash-and-restart-from-a-known-good-state is a philosophy about structuring error checking more than an operational philosophy: always check for success, and if you don't know how to handle an error, fail loudly and return to a good state to continue.
Operationally, you are expected to monitor for crashes and figure out how to prevent them in the future. And, IMHO, be prepared to hot load fixes in response... although a lot of organizations don't hot load.
Not all races are bugs. Here's an example that probably happens in many systems without anyone noticing, because sometimes you just don't care: database setup races against the setup of another service that needs the database. In 99% of cases you get a faster bootup; in 1% of cases the database setup is slow, the dependent service gets restarted by your application supervisor, and it connects on the second try.
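A minimal Python sketch of that benign startup race, with the supervisor reduced to its essence, a restart-once loop (the service names and one-retry policy are illustrative, not any real OTP configuration):

```python
def connect(db_ready):
    # The dependent service crashes if the database isn't up yet.
    if not db_ready:
        raise ConnectionError("db not ready")
    return "connected"

def supervised_boot(db_setup_is_slow):
    # Supervisor semantics, simplified: on crash, restart the child once.
    for attempt in (1, 2):
        try:
            # On the fast path the DB wins the race; otherwise it is
            # ready by the time the restarted child tries again.
            return connect(db_ready=(not db_setup_is_slow) or attempt == 2)
        except ConnectionError:
            continue  # let it crash; the supervisor restarts the service

fast = supervised_boot(db_setup_is_slow=False)  # connects on the first try
slow = supervised_boot(db_setup_is_slow=True)   # connects after one restart
```

Both paths converge on the same end state, which is why nobody bothers to "fix" this race.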
More succinctly, Carmack only contributes his code to OSS, but not his time, and shouldn't impose his values on the wider community that contribute both.
> technically correct, but in context I think that's somewhat beside the point
Talking past people to argue on semantics and pedantry is an HN pastime. It may even be its primary function.
As pointed out in the OP comment, it's basically 'money for jam' by the point he releases the source code:
> It's an entirely different thing; he made a thing, sold it, and then when he couldn't sell more of it, gave it away. That's nice!
Carmack has extracted as much profit as he cares to from the source code. Releasing the code buys warm fuzzy feelings at zero cost, while keeping it closed-source yields him zero benefit.
If that was the intent don’t you think it would be stated somewhere, or in the faq?
>“Talking” past
It’s only text, there’s no talking past. You can’t talk past someone when the conversation isn’t spoken. At best, you might ignore what they write and go on and on and on at some length on your own point instead, ever meandering further from the words you didn’t read, widening the scope of the original point to include the closest topic that isn’t completely orthogonal to the one at hand, like the current tendency to look for the newest pattern of LLM output in everyone’s comments in an attempt to root out all potential AI-generated responses. And eventually exhaust all of their rhetoric and perhaps, just maybe, in the very end, get to the
Unless you recruit mainly via this new job board, I can hardly see why employers would go for it; it would just show a very small percentage, because you fill most openings via LinkedIn or whatever, damaging the company's branding.