Hacker News | AgentK20's comments

How does ECH make it impossible for parents to control their children's access to computers? Sure they can't block sites at the router level, just like your ISP won't be able to block things at the ISP level, but you (the parent) have physical access to the devices in question, and can install client-side software to filter access to the internet.

The only thing this makes impossible is the laziest, easiest-to-bypass method of filtering the internet.


Because there are network operators who have mal-intent, increasingly no network operators are permitted to exercise network-level control. A parent who wants to filter the network access in their house is treated the same as a despotic regime practicing surveillance and censorship on its citizens.

Given that it's pretty much the norm for consumer embedded devices not to respect the owner's wishes, network-level filtering is the best thing a device owner can do on their own network.

It's a mess.

I'd like to see consumer regulation to force manufacturers to allow owners complete control over their devices. Then we could have client side filtering on the devices we own.

I can't imagine that will happen. I suspect what we'll see, instead, is regulation that further removes owner control of their devices in favor of baking ideas like age or identity verification directly into embedded devices.

Then they'll come for the unrestricted general purpose computers.


If you have a device you don't trust, don't allow it on your network, or have an isolated network for such devices. Meanwhile, devices are right to not allow MITMing their traffic and to treat that as a security hole, even if a very tiny fraction of their users might want to MITM it to try to do adblocking on a device they don't trust or fully control, rather than to exploit the device and turn it into a botnet.

Along similar lines, a security hole you can use for jailbreaking is also a security hole that could potentially be exploited by malware. As cute as things like "visit this webpage and it'll jailbreak your iPhone" were, it's good that that doesn't work anymore, because that is also a malware vector.

I'd like to see more devices being sold that give the user control, like the newly announced GrapheneOS phones for instance. I look forward to seeing how those are received.


> If you have a device you don't trust, don't allow it on your network...

That's what I do. That means large swaths of potentially interesting "smart" devices are unavailable to me (since they won't work without Internet access and I'm unable to inspect their traffic). I'm not too heartbroken about it, but it does make me a little sad that I don't get to use some of this "we're living in the future" tech.

> ...devices are right to not allow MITMing their traffic and to treat that as a security hole...

> ...a security hole you can use for jailbreaking is also a security hole that could potentially be exploited by malware...

Yes. Complete agreement. Devices are right not to allow unauthorized parties to MITM their traffic, tinker with their innards, etc. I would never suggest otherwise.

Owners, with physical access, should be permitted to MITM the traffic, tinker with the innards, etc. They're authorized parties.

Device manufacturers should be compelled by regulation to allow device owners, with physical access, to examine and manipulate the device internals. I'm thinking of the "developer mode" physical switches on Chromebook devices. If I own it, I should have the same access to the device that the manufacturer does.

If a manufacturer's business / security model isn't compatible with this regulation the manufacturer should be required to deal with any e-waste concerns and it should clearly be marketed as a rental and not a sale.

None of this will ever happen. I know I'm tilting at windmills. The tech world will continue to get more locked-down, the public will lose unfettered access to general purpose computers, and the personal computer revolution will become a distant memory. We already lost and could never really win because "normies" don't care about this stuff.


> If a manufacturer's business / security model isn't compatible with this regulation the manufacturer should be required to deal with any e-waste concerns and it should clearly be marketed as a rental and not a sale.

I would be generally in favor of this. I don't like the idea of forbidding building a device that's locked down; there are potential use cases for such a thing. I do like the idea of saying "either allow tinkering or you are subject to numerous other things, like warranty / liability laws".


Network segmentation does nothing for the types of attacks these devices perform (e.g. content recognition for upload to their tracking servers, tracking how you navigate their UI, ad delivery). I'm not worried about them spreading worms on my network. The problem is their propensity to exfiltrate data or relay propaganda. The solution to that is a legal one, or barring that, traffic filtering.


That was my motivation for the "or" (don't allow it on your network, or put it on an isolated network); it depends on your threat model and what the device could do. Some devices (like "smart" TVs) shouldn't have network access at all.


"Sure, you can use my wifi while you're over. Just enroll in MDM real quick".

As brought up in another thread on the topic, you have things like web browsers embedded in the Spotify app that will happily ignore your policy if you're not doing external filtering.


Fair point.

I guess it (network-level filtering) just feels like a dragnet solution that reduces privacy and security for the population at large, when a more targeted and cohesive solution like client-side filtering, or having all apps that use web browsers funnel into an OS-level check, would accomplish the same goals with improved security.


I think the population at large generally needs to get over their hangups (actually, maybe they have, and it's just techies). No one in a first world country cares if you visit pornhub just like no one cares if you go to amazon. Your ISP has had the ability to see this since the beginning of the web. It does not matter, but we can also have privacy laws restricting their (and everyone else like application/service vendors) ability to record and share that information. If you really want, you can hide it with a VPN or Tor. As long as not everything is opaque, it's easy to block that traffic if you'd like (so e.g. kids can't use it). In a first world country, this works fine since actually no one cares if you're hiding something, so you don't need to blend in. At a societal level, opaque traffic is allowed.

You could have cooperation from everyone to hook into some system (California's solution), which I expect will be a cover for more "we need to block unverified software", or you could allow basic centralized filtering as we've had, and ideally compel commercial OS vendors to make it easy to root and MitM their devices for more effective security.


Yes well some of us live in first world countries that are at risk of declining into third world status, where some states DO actually care what sites you visit and would jump at the chance to further restrict traffic.

Rather than “get over” it I think we need to fight. You seem to insist that monitoring/control is a done deal and we only need to argue about the form it takes, but this is not correct. Centralized monitoring/control can be resisted and broken through a combination of political and technical means. While you may not want this, I do. (And many others are being swayed back in my direction as they start to feel the effects of service enshittification, censorship under the guise of “fighting misinformation”, and media consolidation.)


I think you misunderstand what I mean by "centralized". I mean e.g. at your gateway/firewall/router. As in a single place for you to enforce policy on your network.

At least in the US, what happens outside of your network is mostly irrelevant (except perhaps that free, open wifi should be liable for any lack of filtering). Centralized (as in e.g. government) control is non-existent, and centralized monitoring is easily defeated if you'd like with a variety of methods (though like I said we could have laws against the monitoring).


There's nothing technical stopping device manufacturers from making this easy for parents to do. They choose not to.


A lot of endpoint protection products rely on SNI sniffing. E.g., Apple's Network Extension content filters look at TLS handshakes.
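As a sketch of why SNI sniffing works on classic TLS (and why ECH breaks it): the hostname travels in cleartext in the ClientHello's server_name extension (RFC 6066). The builder below is synthetic and minimal, and the helper names are made up — this is not any product's actual implementation:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Build a minimal, synthetic TLS 1.2 ClientHello carrying an SNI
    extension (just enough structure for the parser below)."""
    name = hostname.encode()
    # server_name extension body: list length, entry type 0 (host_name), name length, name
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list
    extensions = struct.pack("!H", len(ext)) + ext
    body = (
        b"\x03\x03"            # client_version: TLS 1.2
        + b"\x00" * 32         # random (zeroed for the sketch)
        + b"\x00"              # session_id length: 0
        + b"\x00\x02\x13\x01"  # cipher_suites: one suite
        + b"\x01\x00"          # compression_methods: null only
        + extensions
    )
    handshake = b"\x01" + struct.pack("!I", len(body))[1:] + body  # type 1, 3-byte length
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record: bytes):
    """Pull the hostname out of a ClientHello, the way an SNI sniffer would."""
    if record[0] != 0x16:          # not a TLS handshake record
        return None
    hs = record[5:]
    if hs[0] != 0x01:              # not a ClientHello
        return None
    pos = 4 + 2 + 32               # skip handshake header, version, random
    pos += 1 + hs[pos]             # skip session_id
    pos += 2 + struct.unpack_from("!H", hs, pos)[0]   # skip cipher_suites
    pos += 1 + hs[pos]             # skip compression_methods
    ext_end = pos + 2 + struct.unpack_from("!H", hs, pos)[0]
    pos += 2
    while pos < ext_end:
        etype, elen = struct.unpack_from("!HH", hs, pos)
        pos += 4
        if etype == 0x0000:        # server_name extension: hostname in cleartext
            name_len = struct.unpack_from("!H", hs, pos + 3)[0]
            return hs[pos + 5 : pos + 5 + name_len].decode()
        pos += elen
    return None

print(extract_sni(build_client_hello("example.com")))  # example.com
```

With ECH, the real ClientHello (including this extension) is encrypted inside an outer one, so a filter at this layer only ever sees a cover name.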


Then they would drop connections that use ESNI.


“concerns about confidentiality or respect for the person's family” Sooo clearly you didn't even click the link, given this is a post BY the family to raise awareness of scummy corporate behavior. While discussing mental health and self-harm can be distressing, this post seems totally in line with other HN discussions calling out malicious corporate behavior?


Uh.....what planes are you on where you, the passenger, can simply "pull down the oxygen mask"? Also, wouldn't the P95 only help with particulates (e.g. soot), but not with the actual toxic fumes?


OP has no idea what he's talking about. Passenger masks are for depressurisation events, and their oxygen supplies last 15 minutes - enough time for the pilots to descend. Pilots have a separate, longer-lasting oxygen supply. In many (most older?) planes, a single passenger activating their mask will activate the chemical oxygen generator that feeds all passenger masks.


> but not with the actual toxic fumes?

3M 8577 has a bonus carbon layer for this purpose. Its protection is not complete, but it can limit the damage. You should also carry spares in case the carbon layer is exhausted.


How will you know that you should change the mask?


The mask itself starts to smell. You will start sneezing and also dripping from the nose, but this will stop if you remove or replace the mask.

Also, you start to smell the diesel fumes. The more you wear a clean mask, the more sensitive your nose becomes to mild fumes.


Oh, very cool. TIL. Thanks!


Really, that's the problem - anticheat. Sure, at this point most games work on Linux. The problem is, most people don't play most games. Most people play a handful of games, and where the players go, the cheaters follow. In response, the game studios deploy more and more aggressive anticheat measures, ultimately breaking things for the tiny minority of people who would've otherwise been able to play the game on Linux/Proton.

Take a look at https://areweanticheatyet.com and see how some of the biggest games on the planet don't support Linux or Proton.


A CVSS 10.0 is bonkers for a project this widely used.


The packages affected, like [1], literally say:

> Experimental React Flight bindings for DOM using Webpack.

> Use it at your own risk.

311,955 weekly downloads though :-|

[1]: https://www.npmjs.com/package/react-server-dom-webpack


That number is misleadingly low, because it doesn't include Next.js which bundles the dependency. Almost all usage in the wild will be Next.js, plus a few using the experimental React Router support.


As far as I'm aware, transitive dependencies are counted in this number. So when you npm install next.js, the download count for everything in its dependency tree gets incremented.

Beyond that, I think there is good reason to believe that the number is inflated due to automated downloads from things like CI pipelines, where hundreds or thousands of downloads might only represent a single instance in the wild.
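A sketch of the mechanism, with a made-up lockfile: npm's package-lock records every package in the resolved tree, direct or transitive, and each of those gets fetched (and its download count incremented) on a fresh install. Package names here are illustrative, not real:

```python
import json

# A toy package-lock.json (v3-style "packages" map). The app only asks for
# "some-framework", but the lockfile pins its dependency too.
lockfile = json.loads("""
{
  "name": "my-app",
  "lockfileVersion": 3,
  "packages": {
    "": { "dependencies": { "some-framework": "^1.0.0" } },
    "node_modules/some-framework": {
      "version": "1.0.0",
      "dependencies": { "experimental-bindings": "0.0.1" }
    },
    "node_modules/experimental-bindings": { "version": "0.0.1" }
  }
}
""")

# Every key except the root "" entry is a package npm will fetch,
# whether the app asked for it directly or not.
downloads = [p for p in lockfile["packages"] if p]
print(len(downloads))  # 2 - the direct dep and its transitive dep
```

Bundling is the exception to this: if the code is vendored into the publishing package's own tarball, it never appears as a separate lockfile entry and never hits that package's download counter.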


It's not a transitive dependency; it's literally bundled into Next.js, I'm guessing to avoid issues with fragile builds.


Why is it not normal for CI pipelines to cache these things? It's a huge waste of compute and network.


It's certainly not uncommon to cache deps in CI. But at least at some point CircleCI was so slow at saving+restoring the cache that it was actually faster to just download all the deps. Generally speaking, for small/medium projects installing all deps is very fast and bandwidth is basically free, so it's natural that many projects don't cache any of it.
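For what it's worth, dependency caching can be close to one line these days. A hypothetical GitHub Actions job using setup-node's built-in npm cache (job names and versions are illustrative):

```yaml
# setup-node's `cache: npm` keys a cache on package-lock.json,
# so runs with an unchanged lockfile restore deps instead of re-downloading.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm test
```

Whether the cache restore beats a plain `npm ci` still depends on the CI provider's cache throughput, as noted above.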


These often do get cached at CDNs inside of the consuming data centers. Even the ISP will cache these kind of things too.


The subjects of these types of posts should report the CVSS severity as 10.0 so the PR speak can't simply deflect from what needs to be done.


Unfortunately, CVSS scores are gamified hard. Companies pay more money for higher-severity findings in bug bounty programs, so there's an incentive for bug bounty hunters to talk up the impact of their discovery. The CVSS v3 calculation especially can produce some unexpected super-high or super-low scores.

While scores are a good way to bring this stuff to people's attention, I wouldn't use them to enforce business processes. There's a good chance your code isn't even affected by this CVE even if your security scanners all go full red alert on this bug.
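To make the "unexpected scores" point concrete, here's a sketch of the CVSS v3.1 base-score formula (scope-unchanged case only, metric weights from the spec; function names are mine). Note how coarse metric buckets snap small input changes to big score jumps:

```python
# CVSS v3.1 metric weights (Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def roundup(x):
    """Spec-defined rounding: up to one decimal place."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Network / low complexity / no privileges / no interaction, full C-I-A impact:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
# Identical vulnerability, but only locally reachable:
print(base_score("L", "L", "N", "N", "H", "H", "H"))  # 8.4
```

The formula says nothing about whether your deployment actually exposes the vulnerable code path, which is why a scanner's 9.8 often doesn't mean your app is exploitable.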


It’s possible to create a scoring system based on actual root-cause analysis and impact scores.

Surprised there isn’t more talk about a solution like that, rather than just downplaying CVSS.

Downplaying CVSS alone can smell a little like PR talk, however unintentional.


A CVSS score of 10.0 may be warranted in this case, but so many other CVSS scores are wildly inflated, that the scores don't mean a lot.


Regardless, it can still provide some context and adjustment vs. none.

The above could be seen as spin too; how could CVSS be more accurate so you’d feel better?


React is widely used, react server components not so much.


Next.js is still pretty damn widely used.


And here we see a system that was already stretched to the breaking point BEFORE the shutdown, put under an incredible strain and failing. A more robust system can handle sudden shocks, but when you’ve spent years whittling away at it there’s no slack.


Still seeing issues on the OAuth flow despite a "a fix [having] been implemented". Looks like whatever happened probably trashed the session database since it's forcing Claude Code to re-auth.


The market can stay irrational longer than you can stay solvent.


I don’t think most people are arguing against the concept, or even implementation, of the system as developed. Obviously it’s both a publicity stunt and beta test as they learn how to build and operate a tunnel system like this. The concern is that much of the environmental harm that’s being done (according to the EPA) is repetitive, and that The Boring Company (TBC) actively pledged to hire an environmental inspector three years ago and is now being fined for having not done so. Given that, who knows how many violations that don’t leave a permanent mark are going unnoticed.

Do you think that they are going to ignore environmental laws for JUST this project, or do you think that is their modus operandi? I’d be happy to have a tunnel system installed near my home, even if there’s temporary disruption during the construction process. What I wouldn’t tolerate is active, and unmonitored (by TBC’s insistence on “self-monitoring”), pollution occurring near my home. Fines only cover so much, and un-polluting something after the fact costs far more than the fines that are being levied and, when it comes to pollutants that harm humans (like improper disposal of chemicals from digging, as they have been fined for), you can’t just “undo” the human harm with a fine.


What I think is that environmental review rules are so convoluted that almost any project you would investigate breaks plenty of them. I also don't trust the definition of "environmental" when it comes to environmental regulations. When you hear "environmental" you think dumping toxic chemicals, but in reality environmental reviews have components like a building casting a shadow on a playground for 1 hour a day. And on top of that I don't trust journalists for counts of number of violations. In this case they get to 800 by counting one real violation 700 times:

> The letter also accuses the company of failing to hire an independent environmental manager to regularly inspect its construction sites. State regulators counted 689 missed inspections.


> as they learn how to build and operate a tunnel system like this.

Yes, why do they even do that? Not that there are never any improvements, but this is pretty much a solved problem. They have a stupid amount of NIH syndrome, and applying that to the physical world always results in fatalities.


There were tens of thousands of riders _when you were there?_ Or there were tens of thousands of riders over the lifetime of the system?

Most videos I've seen recently show a system that, while functional, typically only has a handful of vehicles running simultaneously, each with carrying capacity for one party of up to 3 people.


When I was there. During SEMA, the world's biggest automotive show.

