But there's also a decent chance a game won't run at all, and when performance is actually a problem, an even better chance that Windows-specific performance optimization tips will be more readily available than Linux-specific tips.
Given that games almost universally have their own immersive user interfaces, and therefore require minimal interaction with the host OS, it's hard for me to justify running Linux on a dedicated gaming PC.
But 50% of the time you are going to have to apply some hack or workaround.
Gaming on Linux is feasible, but it’s not hassle-free, and we shouldn’t gloss over the fact that it still requires some effort. A cheap cost to free oneself from Microslop, but a cost nonetheless.
I remember spending three days getting games to work on Linux, and even then something like sound or a texture would still be completely broken. In many cases performance would be worse. But these days?
For me at least, Elden Ring on launch day worked flawlessly, anti-cheat and all, on Linux without having to do anything other than adjust settings in the game (which I needed to do on Windows too) and it ran better to boot!
Things are definitely miles better! I myself have switched fully to Linux as far as games go. But it’s still not the “Install and Play” experience one would expect on Windows.
Just check ProtonDB’s aggregates. Of all the Steam games with reports in the DB (~10% of the entire Steam catalogue), 30~60% (tier 1/platinum) are likely zero-effort setups, another 30~40% (tier 2/gold) likely require some work, and the remainder will most probably need serious tinkering or won’t run at all.
Things have improved, are improving, and hopefully they’ll keep doing so. But we need to practice some degree of expectation management, especially given the influx of new converts these days…
Skill issue. Literally. Make a SKILL.md that has the agent leverage subagents to do all work. An implementor agent does the thing, and then a separate agent reviews and verifies afterwards. The fresh context window of the second agent doesn't have the shortcut chain of thought in it and so it will very happily flag if the first agent cheated. Main agent can then have a new set of agents go fix it.
This has completely solved the cheating and fudging to make tests pass for me.
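To make the shape of that concrete, here is a rough Python sketch of the implementor/reviewer split described above. Everything here is illustrative: `call_llm` stands in for whatever provider SDK you actually use, and the "PASS" convention and round limit are made up, not part of any particular agent framework.

```python
# Hypothetical sketch of the two-pass flow: implement, then verify with fresh context.

def call_llm(system: str, prompt: str) -> str:
    """Placeholder for a real LLM API call (each call starts a fresh context)."""
    raise NotImplementedError

def implement(task: str) -> str:
    # Implementor agent: does the work and reports what it changed.
    return call_llm(
        system="You are the implementor. Do the task and report what you changed.",
        prompt=task,
    )

def review(task: str, work: str) -> str:
    # Reviewer agent: never sees the implementor's chain of thought,
    # only the task and the submitted work, so it has no incentive to cover for it.
    return call_llm(
        system=("You are the reviewer. Verify the work against the task. "
                "Flag skipped tests, stubbed logic, or fudged assertions. "
                "Reply PASS only if everything genuinely checks out."),
        prompt=f"Task:\n{task}\n\nSubmitted work:\n{work}",
    )

def run(task: str, max_rounds: int = 3) -> str:
    work = implement(task)
    for _ in range(max_rounds):
        verdict = review(task, work)
        if verdict.strip().upper().startswith("PASS"):
            return work
        # Main agent hands the findings to a new implementor to fix.
        work = implement(f"{task}\n\nReviewer findings to address:\n{verdict}")
    return work
```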
So you're saying once humans stop looking at code, and agent outcomes, all the agents in the chain will realise they can just cheat cooperatively, and go to the bar for the afternoon instead?
How long before agent 1 leaves notes for agent 2 to not tattle on it?
"My human is crazy, this test isn't required, test #4 covers it, so just confirm that it's OK since I touched this file and it passes. He'll never know."
Hey, so... Thanks for this. I've been building ticket systems and agents and whatever else as flat files in git repos lately and now I see I have to extend that to actually managing the repos themselves.
What evolution in particular do you think? The developers use it for commercial products in quantum computing and defense [1]. That doesn't mean it's done in some complete language ecosystem sense (which is discussed in [1], and one could argue Haskell also never feels "finished"), but it also doesn't seem like an unfinished hobby project. Given that it's embedded in Common Lisp, there's always a way to fill in the library gaps, sort of like how if a "native" library doesn't exist in Clojure, one can always reach for Java.
[1] From Toward Safe, Flexible, and Efficient Software in Common Lisp at the European Lisp Symposium, "[Coalton] has been used for the past 5 or so years [...] first in quantum computing and now a serious defense application." https://youtu.be/xuSrsjqJN4M&t=9m14s
I am an avid SBCL and Coalton user (and sponsor of both when I can) and never said it was not a great thing; comparing it to Haskell is, outside the shared theoretical type-system roots, just a bit premature type-system-wise.
I agree with you further, and you made an excellent promotional comment for Coalton and CL; please keep doing that. I have said many times here before that I did not enjoy my time away from CL, and Coalton makes it even better.
Technically, I think this is meant to develop Coalton, which is also statically typed and incredibly effective as a language for agents. All those ergonomic benefits that humans enjoy also allow AIs to develop Lisp systems quite rapidly and robustly.
At the core, they're really very simple [1]. Run LLM API calls in a loop with some tools.
From there, you can get much fancier with any aspect of it that interests you. Here's one in Bash [2] that is fully extensible at runtime through dynamic discovery of plugins/hooks.
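As a bare-bones illustration of "LLM API calls in a loop with some tools," here is a stripped-down Python sketch. The `call_llm` helper and the tool-call message format are placeholders, not any particular provider's API; real agents add streaming, structured tool schemas, and error handling on top of this skeleton.

```python
import json
import subprocess

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real chat-completion call. Assumed to return either
    {"tool": name, "args": {...}} or {"answer": text}."""
    raise NotImplementedError

# The "some tools" part: plain functions the model is allowed to invoke.
TOOLS = {
    "read_file": lambda args: open(args["path"]).read(),
    "run_shell": lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True
    ).stdout,
}

def agent(task: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:          # the model says it is done
            return reply["answer"]
        tool = TOOLS[reply["tool"]]    # otherwise run the requested tool
        result = tool(reply["args"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "step limit reached"
```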
Inference, in and of itself, can't be completely unprofitable. Unless you're purely talking about Anthropic?
But
> If you want LLMs to continue to be offered we have to get to a point where the providers are taking in more money than they are spending hosting them
suggests you mean that, in general, as a category, every provider is taking a loss. That seems implausible. Every provider on OpenRouter is giving away inference at a loss? For what purpose?
For the same reason that Amazon operated at a loss for two decades and Uber operated at a loss for a decade and a half. The problem is the free money hose isn't running anymore.
Anyone know how much audio 1M tokens corresponds to? I have no way of knowing whether this is fine or prohibitively expensive.
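Back-of-envelope, and it depends entirely on the model: if audio is tokenized at roughly 32 tokens per second of input (a rate some current providers document; check your provider's docs, this figure is an assumption here), the arithmetic looks like this:

```python
TOKENS_PER_SECOND = 32            # assumed audio tokenization rate; varies by model
tokens = 1_000_000
hours = tokens / TOKENS_PER_SECOND / 3600
print(f"~{hours:.1f} hours of audio")  # ~8.7 hours at this assumed rate
```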