There are often controllers which do indeed just mimic the signals. It doesn't work with every appliance; it depends on how it's implemented and whether the manufacturer wanted to make that approach infeasible.
But there absolutely are options to record such signals and then replay them via Home Assistant - I used them before to control a ceiling fan and various infrared devices (same idea, but with an infrared "blaster", I think it was called, instead of a radio).
I didn't set it up again after my last move though, as I couldn't mount the ceiling fan in this apartment and the infrared devices were just my media center (TV, audio), which is hardly in use currently.
Personally I usually just create a devcontainer.json; the VS Code support for that is great, and I don't really mind if it fucks up the ephemeral container.
Which, for the record, hasn't actually happened since I started using it like that.
Hey thanks for this! I hadn't thought about leveraging devcontainer.json, but it's a damn good idea. I'm building yoloAI for exactly this use case so I hope you don't mind if I steal it ;-)
One thing to be aware of with the pure devcontainer approach: your workspace is typically bind-mounted from the host, so the agent can still destroy your real files. Network access is also unrestricted by default. The container gives you process isolation but not file or network safety.
I'm paranoid about rogue AIs, so I try to make everything safe-by-default: the agent works on a copy of your workdir, you review a unified diff when it's done, and you apply only what you want. So your originals are NEVER touched until you explicitly say so, and network can be isolated to just the agent's required domains.
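A minimal sketch of that copy/diff/apply flow (the file names and the simulated "agent edit" are illustrative; this shows the idea, not yoloAI's actual implementation):

```shell
set -eu
rm -rf project project-agent changes.patch
mkdir project && echo "original" > project/main.txt

# 1. The agent works on a copy, never on your real files
cp -r project project-agent
echo "agent edit" >> project-agent/main.txt   # stand-in for the agent's changes

# 2. You review a unified diff of everything it did
diff -ruN project project-agent > changes.patch || true
cat changes.patch

# 3. Originals change only when you explicitly apply the patch
patch -p1 -d project < changes.patch
cat project/main.txt
```

Until step 3 runs, `project/` is byte-for-byte untouched, which is the whole point.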
Anyway, here's what I think will work as my next yoloAI feature: a --devcontainer flag that reads your existing devcontainer.json directly and uses it to set up the sandbox environment. Your image, ports, env vars, and setup commands come from the file you already have. yoloAI just wraps it with the copy/diff/apply safety layer. For devcontainer users it would be zero new configuration :)
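For reference, the fields such a flag would care about in a typical devcontainer.json might look like this (the sample is illustrative; the property names are from the Dev Container spec):

```json
{
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
  "forwardPorts": [3000],
  "containerEnv": { "NODE_ENV": "development" },
  "postCreateCommand": "npm install"
}
```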
If your threat model includes the USA government, then honestly you can only go with obscurity - preferably self-hosted, with a completely locked-down system that cannot initiate any network communication besides the relevant mail protocol ports, a completely immutable filesystem beyond the mail data, and encryption at rest.
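As a rough sketch of that "nothing but mail ports" lockdown, a default-drop nftables egress policy might look like this (table name and port list are illustrative, and you'd still need DNS to resolve MX records):

```
table inet mailguard {
  chain output {
    type filter hook output priority 0; policy drop;

    oif "lo" accept                      # local traffic
    ct state established,related accept  # replies to inbound connections
    udp dport 53 accept                  # DNS, needed for MX lookups
    tcp dport { 25, 465, 587 } accept    # SMTP, SMTPS, submission
  }
}
```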
And with all of that, they'll still be able to pwn you through the network equipment which relays your mail, e.g. some router or switch which they backdoored and which mirrors all traffic to their datacenter.
> This morning I asked Claude to use a library to load a toml file in .net and print a value.
Legit this morning Claude was essentially unusable for me
I could explicitly state things it should adjust and it wouldn't do it.
Not even after specifying it again, eventually reverting everything and reprompting from the beginning, etc. Even for super trivial frontend things like "extract [code] into a separate component".
After 30 minutes of that I relented and went on to read a book. After lunch I tried again and its intelligence was back to normal.
It's so uncanny to experience how much its performance changes - I strongly suspect Anthropic is doing something whenever its intelligence drops so much, especially because it's always temporary - but repeatable across sessions when it's occurring... until it's back to normal.
But ultimately just speculation, I'm just a user after all
Here's an evil business idea: Use the LLMs to identify the users most likely to be "vocal influencers" and then prioritize resources for them, ensuring they get the best experience. You can engineer a bubble this way.
And then the next step is to dynamically vary resources based on predictions of user stickiness. User is frustrated and thinking of trying a competitor -> allocate full resources. User is profiled as prone to gambling and will tolerate intermittent rewards -> can safely forward requests to gimped models. User is a resolute AI skeptic and unlikely to ever preach the gospel of vibecoding -> no need to waste resources on him.
This is part of why running open models on hardware you control is valuable. They may trail SOTA by 6-12 months (often less for many use cases), but you get more reliability, control, etc.
> Here's an evil business idea: Use the LLMs to identify the users most likely to be "vocal influencers" and then prioritize resources for them, ensuring they get the best experience. You can engineer a bubble this way.
It's quite likely this is already happening, buddy...
The 'random' degradation across all LLM-based services is obvious at this point.
> Legit this morning Claude was essentially unusable for me
> I could explicitly state things it should adjust and it wouldn't do it.
Honestly, this is my experience. Every now and again it just completely self-implodes and gives up, and I'm left to pick up the pieces. Look at the other replies making sure I'm using the agentic loop / correct model / specific-enough prompt - I don't know what they're doing, but I would love to try the tools they're using.
Last year I had a friend chatting with me about how Claude had rather quickly transformed their small coding shop, except that they noticed after 3pm it consistently became incredibly dumb. I kind of laughed at the time but you know what, who knows. There's very likely some load-balancing shenanigans going on behind the scenes.
See, I thought they were the same thing; considering the Queensland Health payroll database issues, I assumed someone coined the term knowing it would clobber Health acronyms.
> Before AI, I worked at a B2B open source startup, and our users were perpetually annoyed by how often we asked them to upgrade and were never on the latest version.
And frankly, they had a point.
Especially in the B2B context, stability is massively underrated on the product side.
There is very little I hate more than starting my work week on a Monday morning and finding out someone changed the tools I'm using for daily business again.
Even if it's objectively minor, like Apple's last pivot to the Windows Vista design... it just annoys me.
But I'm not the person paying the bills for the tools I'm using at work, and the person who is almost never actually uses the tools themselves - hence shiny redesigns and pointless features galore.