99% is usually the best you can do, so all you can do is layer multiple defences together; this makes sense to me as one such layer.
I have an issue with security layers that are inherently nondeterministic. You can't really reason strongly about what this tool provides as part of a security model.
But also, it's in an area where real security seems extremely hard. I think at some point everyone will have a situation where they want to give an agent some private information and access to the web. You just can't do that in a way that's deterministically safe. But if there are use cases where making it probabilistically safer is enough to tip the balance, well, fine.
> And to agree with others on this thread, the folks who push for war should 100% be required to participate in them and lead from the front
I agree but I don't think it goes far enough. Leading from the front of the best equipped military in the world doesn't balance your incentives against the misery you are inflicting on the innocent denizens of the poor country you're pointlessly destroying.
There's also the economic destruction back home to balance against. So, those who call for war should be forbidden to privately fund their healthcare and children's education.
Agree. I believe during WW2 the government put rules in place to prevent companies from making too much profit from the war. From what I recall from history class, taxes were raised significantly as well.
War is a mighty economic engine, this cannot be denied. But if we take an entire country to war, then it stands to reason that the entire country should benefit from the spoils (to the extent that there are any).
I may be misunderstanding, but I don't think so. War forces people's hand in terms of having to make progress, because during a war progress can be measured in the number of body bags returning from the front, and the reduction thereof.
Our modern world was born out of scientific advancements made during WW2. Could these same achievements have occurred in peace time? Obviously the answer is yes. However during war, everything becomes accelerated and things that normally would take a long time can happen very quickly.
I agree that paying for scientific progress with human lives is a bad thing.
Not just with human lives but with staggering amounts of economic growth. In a globalised system there's absolutely no way the stimulation of war pays for the destruction and disruption.
Yes; WWII was an economic disaster for huge swaths of the world. The US is pretty much the only industrialized country at the time where it wasn't a complete economic disaster, because it was separated by oceans from nearly all the fighting and destruction.
If there's a shootout in a town that ends up with most peoples' windows getting shot out, the one town glazier will make money off of this, even though it's a net-negative for the town as a whole.
You can use a reverse proxy and still have working app auth, I have set this up via Authelia with the OIDC Jellyfin plugin.
However:
- This is EVEN MORE complex than "just" a reverse proxy.
- I'm not really sure it wins much security, because...
- at least I'm not relying on Jellyfin's built-in auth, but I'm now relying on its/the plugin's OIDC implementation to not be completely broken.
- attackers can still access unauthenticated endpoints.
Overall I really wish I could just do dumb proxy auth which would solve all these issues. But I dunno how that would work with authing from random clients like Wii (and more importantly for me, WebOS).
Ha, I had a similar story with Jekyll but my build wasn't containerised. At some point it stopped being compatible with the latest [something. Ruby? Gems? I don't care, just build my fucking HTML templates please] so I just migrated to Hugo.
I've stuck around on Hugo for quite some time and haven't had any such issues yet, but now I've also wrapped the build in Nix. So yeah, I'll do the same - if it ever stops working I'll just pin the build inputs at the last version that worked.
I _think_ the Hugo folks seem to understand the "just build my fucking HTML templates" principle. I.e. for most use cases the job of a static site generator is simple enough that breaking compatibility is literally never justified. So hopefully pinning won't be necessary.
Just last week updating Hugo broke my templates. That's been happening every few months. They deprecate and then remove or rename template variables like crazy.
Yeah, I really don't understand why some developers have an extreme compulsion to constantly deprecate and rename things like this, causing massive upgrade headaches to users.
In addition to Hugo, it happens constantly in GoReleaser. In both cases, they're excellent tools, but the unending renaming spree is just awful. Weirdly, both are in the Go ecosystem, which generally values backwards compatibility.
Damn, that's interesting; I haven't run into that at all after about 4 years.
Maybe it's just that my site is extremely dumb? I forked an "ultra minimal" theme and deleted most of its code. So perhaps I just use such a tiny subset of the template system that I haven't been affected.
The article itself acknowledges that the headline is bullshit:
> The change isn't about the core operating system becoming resource-hungry. Instead, it reflects the way people use computers today—multiple browser tabs, web apps, and multitasking workflows
Basically the change reflects the fact that, at this level of analysis (how much RAM do I need in my consumer PC), the OS is irrelevant these days. If you use a web browser then that will dominate your resource requirements and there's nothing Linux can do about that.
If this really works there would seem to be a lot of alpha in running the expensive model in something like caveman mode, and then "decompressing" into normal mode with a cheap model.
I don't think it would be fundamentally very surprising if something like this works, it seems like the natural extension to tokenisation. It also seems like the natural path towards "neuralese" where tokens no longer need to correspond to units of human language.
But it can't: we see models get larger, and larger models perform better. <Thinking> made such huge improvements because it produces more text for the language model to process. Cavemanising (lossily compressing) the output compresses the input as well, since outputs get fed back in.
But are some tokens not really needed? This is probably bad because it's mismatched with the training set, but if you trained a model on a dataset with all prepositions removed (or whatever caveman speak strips out), would performance degrade compared to the same model trained on the same dataset without the caveman translation?
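As a toy illustration of the kind of lossy "caveman" compression being discussed (the stopword list here is made up for illustration; a real experiment would operate at the tokeniser level):

```python
# Toy sketch of "caveman" lossy compression: drop common function words
# (articles, prepositions, auxiliaries). STOPWORDS is an illustrative,
# made-up list, not taken from any real tokeniser or paper.

STOPWORDS = {"a", "an", "the", "of", "in", "on", "to", "is", "are",
             "was", "were", "with", "for", "and", "that"}

def cavemanise(text):
    # Keep only words that aren't in the stopword set, ignoring case
    # and trailing punctuation when checking membership.
    kept = [w for w in text.split()
            if w.lower().strip(".,") not in STOPWORDS]
    return " ".join(kept)

original = "The robot was in the basement with a basket of wet laundry."
compressed = cavemanise(original)
# compressed == "robot basement basket wet laundry."
```

The compressed form clearly keeps less information; the open question in the thread is how much of that lost information the model actually needed.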
There was actually a post on here a few months back where someone claiming robotics expertise posted exactly what you asked for: a list of things they didn't think robots were close to being able to do.
IIRC the list included folding textiles, and soon after a video was released of a robot folding textiles. But it was very janky, so it's not clear to me whether it proved the original article wrong or was more of an "exception that proves the rule".
Personally I have my washing machine in the basement, you need a key to access it (and I can't modify it, it's a shared space in a building I don't own). I'm always thinking about that. A robot that can do my laundry and open locked doors doesn't seem to be on the horizon yet.
Trust me, plenty of millionaires are doing their laundry in a shared Waschküche in Zürich!
Current Chinese dev bots cost something like $15k. Vapourware startups are claiming they'll ship their humanoid robot products at $20k. I'd pay that in a heartbeat for a robot that could actually do my laundry.
(But more impactfully surely there are loads of Californians with a utility room in their garage, or a basement that can't be accessed from inside the house)
(Also... I just realised, if there were robots that could do laundry, but couldn't navigate to my basement, I would move. I think laundry bots would genuinely be that desirable)
The companies servicing that echelon would replace staff as soon as they could. In an apartment building, the owner would put one in the shared laundry room and charge tenants an optional fee to use it.
I wonder if we'll start to see gimmicks in home appliances for taking advantage of variable prices.
Like for EV charging I assume it's a basic requirement, you simply wouldn't buy a car that didn't let you adjust the charging schedule based on cost.
But what about... Freezers? Maybe there are scenarios where your freezer could drop 20° below its usual temp while prices are low, and thereby avoid running the compressor for several hours while prices are high.
What about a tumble dryer button that says "these clothes are fine to stay wet for up to 8 hours, dry them at the cheapest moment during that window"?
TBH I doubt these things would really pay for themselves but as a consumer I'd still be tempted by the "lol, neat" factor.
Also I assume the local-LLM heads are already finding ways to have their agents do useful work while the GPU can churn tokens for almost-free.
Also makes me think of fun Home Assistant workflows. Like, "when energy is expensive, just try to keep the house between 16-26°. When energy approaches free, I want to live at exactly 20°". (I assume heat pumps also have ways to take advantage of this in more roundabout ways).
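That "wide band when expensive, exact setpoint when cheap" rule could be sketched as a tiny function (the thresholds and temperatures below are made-up examples; a real Home Assistant setup would express this as an automation rather than Python):

```python
# Sketch of a price-aware comfort band. All numbers are illustrative
# assumptions, not defaults from any real thermostat or Home Assistant API.

def thermostat_band(price_per_kwh, cheap_threshold=0.05):
    """Return the allowed (min_temp, max_temp) in degrees C for the current price."""
    if price_per_kwh <= cheap_threshold:
        # Energy is (nearly) free: hold the ideal temperature exactly.
        return (20.0, 20.0)
    # Energy is expensive: accept a wide band so heating/cooling can idle.
    return (16.0, 26.0)

band_cheap = thermostat_band(0.01)   # (20.0, 20.0)
band_costly = thermostat_band(0.30)  # (16.0, 26.0)
```

A heat pump controller could then simply refuse to run while the indoor temperature sits inside the returned band.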
I think freezers would definitely be a gimmick as they don't really use that much power.
I can see it being a nice feature for higher-load tasks though, e.g. my dishwasher uses about 1.8kWh for a cycle. On this tariff it's trivial to compute the best start-end time based on the 30 minute price windows, so if the dishwasher could do that it would be pretty sweet. Right now my dishwasher just supports a 3h delay function. I wouldn't mind if my dishwasher had a (local) API you could hit to control its schedule. Sadly this usually comes with some cloud requirement though.
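The "trivial to compute" part really is trivial; here's a minimal sketch (illustrative names and made-up prices, assuming the cycle draws its energy evenly across its slots):

```python
# Sketch: pick the cheapest start slot for a fixed-length appliance cycle,
# given half-hourly electricity prices. Function name, prices, and the
# even-draw assumption are all illustrative.

def cheapest_start(prices_per_kwh, cycle_slots, energy_kwh):
    """prices_per_kwh: price for each 30-min window, in order.
    cycle_slots: cycle length in 30-min slots (e.g. 4 = 2 hours).
    energy_kwh: total energy used by the cycle, spread evenly.
    Returns (best_start_slot, estimated_cost)."""
    per_slot_kwh = energy_kwh / cycle_slots
    best_slot, best_cost = None, float("inf")
    for start in range(len(prices_per_kwh) - cycle_slots + 1):
        cost = per_slot_kwh * sum(prices_per_kwh[start:start + cycle_slots])
        if cost < best_cost:
            best_slot, best_cost = start, cost
    return best_slot, best_cost

# e.g. a 2-hour, 1.8 kWh dishwasher cycle against tonight's prices:
prices = [0.30, 0.28, 0.12, 0.10, 0.09, 0.11, 0.25, 0.27]  # per kWh, made up
slot, cost = cheapest_start(prices, cycle_slots=4, energy_kwh=1.8)
# slot 2: starting an hour in hits the cheap overnight trough.
```

The same sliding-window search covers the tumble-dryer "dry within 8 hours" idea: just restrict the price list to that window.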
Our Mitsubishi heat pump water heater has integration with some solar systems and weather services to attempt to time hot water production to solar peak.
We don't have solar panels but have a solar energy plan (cheap during the day, expensive during dawn/sundown, average overnight with fixed hourly schedule independent of actual production) so for us it's just programmed to follow a fixed schedule of prioritizing filling up during the solar hours and 100% forbidden from filling up during the expensive "duck curve" hours.
Things like freezers don't take a huge amount of power. It's definitely about things that do space heating/cooling. The traditional approach is to put your electric water heater on a timer. That way you can schedule your hot water use on a consistent schedule but only heat the water at night when you can be sure the rates are lower.
In my case I've got a contactor on the house power board that disconnects the hot water at night, so it's only heating off solar power. You can also get specialised solar diverters that do the same thing, but they cost literally ten times as much and only squeeze a tiny bit of extra efficiency out of the system.
If you do go down this path, make sure you insulate the crap out of the cylinder and the surrounding water pipes; mine only heats once a day, and only once the sun's providing the power for it.
I second this. I live in Spain and put solar panels on my roof. I have no batteries. In the last 2 years, the price of energy returned to the grid has been 0 or even negative. So I started running automations using Home Assistant, and I'm stuck on the appliances that can't simply be turned on and off with a power switch (e.g. my freezer, the AC, the washing machine). If someone could produce a refrigerator that I could control via an open protocol (e.g. Zigbee), it would be awesome!
You can basically do that today if you wanted to by buying consumer grade batteries and smart switches. A whole house battery would be better, but it's more expensive to install.
For the tumble drier and dishwasher, those usually come with time delay features. That's usually good enough if your goal is to timeshift a load.
I have a battery for my fridge not for this purpose, but because I'd rather not have a power outage spoil my food.
With "smart" appliances that can be controlled, there's often a community integration to HomeAssistant ... and then there's the free EMHASS addon which will optimise for profit, or self consumption based on energy prices (both incoming and outgoing) as well as any on-site generation (e.g. Solar PV) batteries etc. etc.
Yeah, but that's strictly worse for some of these examples. You can't overcome the energy lost going into the battery and coming back out, and then there's the huge cost of the battery itself.
The freezer example would require like $10 of electronics assuming there isn’t already a WiFi chip in it.
Many homes in Norway have this. It's a smart plug in your fuse box. For me it shifts EV charging until electricity is at its cheapest, and it also cuts down on heating during the peak hours, etc.