Hacker News | past | comments | ask | show | jobs | submit | xoa's comments

>It’s very disputable whether BEVs are industry’s future

It is not disputable (unless you're including Old Auto lobbyists I suppose). Without government imposed restrictions keeping the public from buying Chinese BEVs without an outrageous markup (or at all) the ICE industry would already be imploding. The government could and should require that all vehicles be under full control of their owners with no remote telematics required, or even allowed necessarily (and heavily restricted even then). That'd resolve concerns about Chinese kill switches or gathering intel data or whatever. But of course the Western industry hates that too because they want to fully enshittify cars next and turn them into locked-in subscription revenue and advertising data sources. So they can't even compete on trustworthiness. Total embarrassment and also long term ruin.

The present gas price mess and global instability Trump has kicked off is just going to draw an even bigger line under both the personal and the national security value of not being tied to any single source of energy for mechanized transportation. BEVs are simply fundamentally superior particularly in a risky world.


Not to mention regenerative braking: it recovers something like 30% of the energy that was previously just wasted as heat, which is worth mentioning in the context of energy independence.

I'll admit I've still stuck with the original FreeBSD-based TrueNAS, and am still kinda bummed they swapped it. So it's interesting to see a direct example of someone for whom the new Linux-based version is clearly superior. I'm long since far, far more at the "self-hosted" vs "homelab" end of the spectrum at this point, and in turn have ended up splitting my roles back out again vs all-in-one boxes. My NAS is just a NAS, my virtualization is done via Proxmox on separate hardware with storage backing to the NAS via iSCSI, and I've got a third box for OPNsense to handle the routing functions. When I first compared, the new TrueNAS was slower (presumably that's at parity or better now?) and missing certain things from the old one, but was already much easier to run Synology- or Docker-style "apps" on all-in-one. That didn't interest me because I didn't want my NAS to have any duty but being a NAS, but I can see how it'd be far friendlier to someone getting going, or to many small business setups. A sort of better, truly open and supported "open Synology" (as opposed to the xpenology project).

Clearly it's worked for them here, and I'm happy to see it. Maybe the bug will truly bite them but there's so much incredibly capable hardware now available for a song and it's great to see anyone new experiment with bringing stuff back out of centralized providers in an appropriately judicious way.

Edit: I'll add as well, that this is one of those happy things that can build on itself. As you develop infrastructure, the marginal cost of doing new things drops. Like, if you already have a cheap managed switch setup and your own router setup whatever it is, now when you do something like the author describes you can give all your services IPs and DNS and so on, reverse proxy, put different things on their own VLANs and start doing network isolation that way, etc for "free". The bar of giving something new a shot drops. So I don't think there is any wrong way to get into it, it's all helpful. And if you don't have previous ops or old sysadmin experience or the like then various snags you solve along the way all build knowledge and skills to solve new problems that arise.


One of the most helpful realizations I had as I played around with self-hosting at home is that there is nothing magical about a NAS. You don't need special NAS software. You generally don't need wild filesystems, or containers or VMs or this-manager or that-webui. Most people just need Linux and NFS. Or Linux and SMB. And that's kind of it. The more layers running, the more that can fail.
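As a concrete sketch of that "Linux and NFS" baseline (the export path, subnet, and hostname below are examples, not anything from a real setup):

```shell
# --- Server side ---
# /etc/exports: share one directory with the LAN, read-write, root squashed.
#   /srv/data  192.168.1.0/24(rw,sync,root_squash)
sudo exportfs -ra                        # reload the export table
sudo systemctl enable --now nfs-server  # Debian/Ubuntu service name

# --- Client side ---
sudo mount -t nfs nas.lan:/srv/data /mnt/data
# or persist it in /etc/fstab:
#   nas.lan:/srv/data  /mnt/data  nfs  defaults,_netdev  0  0
```

That really is the whole stack; UIs, containers, and the rest are optional layers on top.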

Just like you don't really need the official Pi-hole software. It's a wrapper around dnsmasq, so you really just need dnsmasq.
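For instance, the core blocking behavior is a few lines of dnsmasq configuration pointed at a hosts-format blocklist (file paths and upstream resolvers here are illustrative):

```conf
# /etc/dnsmasq.d/blocking.conf
addn-hosts=/etc/dnsmasq.d/blocklist.hosts   # hosts file mapping ad domains to 0.0.0.0
no-resolv                                   # ignore /etc/resolv.conf
server=9.9.9.9                              # upstream resolvers
server=1.1.1.1
cache-size=10000
log-queries                                 # handy while testing, noisy otherwise
```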

A habit of boiling your application down to the most basic needs is going to let you run a lot more on your lab and do so a lot more reliably.


Kind of expanding on this, it feels like a huge chunk of specialized operating systems are just someone putting their own skin over Debian. The vast majority of services and tools they wrap aren't any more complicated than the wrapper.

Hardware is kind of the same deal; you can buy weird specialty "NAS hardware" but it doesn't do well with anything offbeat, or you can buy some Supermicro or Dell kit that's used and get the freedom to pick the right hardware for the job, like an actual SAS controller.


>it feels like a huge chunk of specialized operating systems are just someone just putting their own skin over Debian. The vast majority of services and tools they wrap aren't any more complicated than the wrapper.

That's exactly what TrueNAS is these days: it's Debian + OpenZFS + a handy web-based UI + some extra NAS-oriented bits. You can roll your own if you want with just Debian and OpenZFS if you don't mind using the command line for everything, or you can try "Cockpit".

The nice thing about TrueNAS is that all the ZFS management stuff is nicely integrated into the UI, which might not be the case with other UIs, and the whole thing is set up out-of-the-box to do ZFS and only ZFS.


There are exceptions to this, such as Proxmox, which can actually be added to an existing Debian install. I must admit that when I first encountered it I didn't expect much more than a glorified toy. However, it is so much more than that, and they do a really good job with the software and the features. If anybody is on the fence about it, give it a go. If you do, I recommend using the ISO to install, picking ZFS as the filesystem (much, much more flexible), and running PBS (Proxmox Backup Server) somewhere (even on the same box, as an LXC with a ZFS-backed dir).

Same with a router. Any Linux box with a couple of (decent) NICs is a powerful router. You just need to configure it.
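A minimal configuration sketch, assuming eth0 faces the WAN and eth1 the LAN (interface names are examples; a real setup also wants a firewall and DHCP):

```shell
# Turn the box into a router: enable IPv4 forwarding persistently.
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-router.conf
sudo sysctl --system

# NAT (masquerade) LAN traffic out the WAN interface with nftables.
sudo nft add table ip nat
sudo nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'
sudo nft add rule ip nat postrouting oifname "eth0" masquerade
```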

But for my own sanity I prefer out of the box solutions for things like my router and NAS. Learning is great but sometimes you really just need something to work right now!


> splitting my roles back out again more

The fiasco you can cause when you try to fix, update, change, etc. makes this my favourite too.

Household life is generally in some form of ‘relax’ mode in evening and at weekends. Having no internet or movies or whatever is poorly tolerated.

I wish Apple was even slightly supportive of servers and Linux, as the mini is such a wicked little box. I went to it to save power. Just checked: it averaged 4.7 W over the past 30 days. It runs Ubuntu Server in UTM, which notably raises power usage, but has the advantage that Docker Desktop isn't there.


>The fiasco you can cause when you try to fix, update, change, etc. makes this my favourite too.

I think some of the difference between "self-hosted" vs "homelab" is in the answer to the question of "What happens if this breaks end of day Friday?" An answer of "oh merde of le fan, immediate evening/weekend plans are now hosed" is on the self-hosted end of the spectrum, whereas "eh, I'll poke at it on Sunday when it's supposed to be raining, or sometime next week, maybe" is on the other end. Does that make sense? There are a few pretty different ways to approach making your setup reliable/redundant, but I think throwing more metal at the problem features in all of them one way or another. Plus if someone moves up the stack it can simply be a lot more efficient and performant; the sort of hardware suited for one role isn't necessarily as well suited for another, and trying to cram too much into one box may result in something worse AND more expensive than breaking out a few roles.

But probably a lot of people who ended up doing more hosting started pretty simple, dipping their toes in the water, seeing how it worked out and building confidence. And having everything virtualized on a single box is a pretty easy and highly flexible way to get going and experiment. Also, having it on a ZFS backing makes "reset/rollback world" quite straightforward with minimal understanding, given you can just use the same snapshot mechanism for that as you do for all other data. Issues with circular dependencies and the like, or what happens if things go down when it's not convenient for you to be around in person, don't really matter that much. I think anything that lowers the barrier to entry is good.
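"Reset world" on ZFS really is just the ordinary snapshot commands; the pool/dataset name here is hypothetical:

```shell
# Snapshot the dataset backing your VMs before experimenting.
sudo zfs snapshot tank/vms@pre-experiment

# ...experiment, break things...

# Roll back (-r also destroys any snapshots newer than the target).
sudo zfs rollback -r tank/vms@pre-experiment

# Or pull individual files out of the read-only snapshot directory:
ls /tank/vms/.zfs/snapshot/pre-experiment/
```

Note that rollback operates per dataset, so it pays to keep the "experiment" data under one dataset.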

Of course, someone can have some of each too! Or be somewhere along the spectrum, not at one end or another.


> And having everything virtualized on a single box is a pretty easy and highly flexible way to get going and experiment. Also, having it on a ZFS backing makes "reset/rollback world" quite straightforward with minimal understanding, given you can just use the same snapshot mechanism for that as you do for all other data.

Docker Compose isn’t a backup, but from a fresh Ubuntu Server install, it’ll have me back in 20 mins. Backing up the entire VM isn’t too hard either.
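For reference, that restore path is short enough to keep in a note (package names are from the Ubuntu archive; the compose file location is an example):

```shell
# Fresh Ubuntu server: install Docker and the compose v2 plugin.
sudo apt-get update
sudo apt-get install -y docker.io docker-compose-v2

# Restore compose files + volume data from backup, then bring it all up.
cd /opt/stack
sudo docker compose up -d
```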

I was in a really sweet spot and then ESXi became intolerable. Though in fairness their website was always pure hell.



I'm similar to you[0]. I still run FreeBSD TrueNAS, and it's just a NAS. Although I do run the occasional VM on it as the box is fairly overprovisioned. I run all my other stuff on an xcp-ng box. I'm a little more homelab-y as I do run stuff on a fairly pointless kubernetes cluster, but it's for learning purposes.

I really prefer storage just being storage. For security it makes a lot of sense. Stuff on my network can only access storage via NFS. That means if I were to get malware on my network and it corrupted data (like ransomware), it won't be able to touch the ZFS snapshots I make every hour. I know TrueNAS is well designed and they are using Docker etc, but it still makes me nervous.
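The hourly-snapshot part of that can be a tiny cron script; the dataset name and retention count are examples:

```shell
#!/bin/sh
# /etc/cron.hourly/zfs-snap: hourly snapshots, keep the most recent 48.
# NFS clients only see the live data; the .zfs snapshots stay out of reach.
DATASET=tank/shares

zfs snapshot "$DATASET@auto-$(date +%Y%m%d-%H%M)"

# Prune: list this dataset's auto- snapshots oldest first, drop all but 48.
zfs list -H -t snapshot -o name -s creation "$DATASET" \
  | grep '@auto-' \
  | head -n -48 \
  | xargs -r -n1 zfs destroy
```

(`head -n -48` is the GNU coreutils form; adjust on BSD userlands.)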

I guess when I finally have to replace my NAS I'll have to go Linux, but it'll still be just a NAS for me.

[0] https://blog.gpkb.org/posts/homelab-2025/


I also regret that change.

Big downgrade after moving to Linux:

- https://vermaden.wordpress.com/2024/04/20/truenas-core-versu...


Fair point! When I first started on this I went down a deep rabbit hole exploring all the ways I could set this up. Ultimately, I decided to start simple with hardware that I had laying around.

I definitely will want to have a dedicated NAS machine and a separate server for compute in the future. Think I'll look more into this once RAM prices come back to normal.


There was just not a good reason to stay with BSD, especially with the NAS -> homeserver evolution.

Really, we should rename that kind of device to HSSS (Home Service Storage Server).


I'll second this, also adding that while it remains more of a project to set up, Frigate has made significant advances over the last few years. So if you previously looked at it and were put off, it might be worth looking at again.

Also FWIW, if someone is willing to spin up a Windows VM or is running that stack anyway, then Blue Iris is probably the default contender for local security camera software, and it's well polished. I know a few people who still keep a single remaining W10 install with GPU passthrough just for that, not even for games anymore now that Linux has gotten good enough in the last few years.

All of this though benefits a lot from already having some sort of homelab and/or self-hosted stack. If you do then the marginal investment may be pretty minimal and value quite high as you use it for a lot of other stuff. If starting from scratch it's a lot more of a haul which of course is precisely why a lot of people use other solutions.


I dabbled earlier but started using 1Password in earnest in 2010 or so with 1PW3. There are plenty of things that could be argued about when it comes to the switch from a native Mac application to Electron, degradations in the GUI, etc.; some of us may be more sensitive than others. But one major objective thing you're apparently missing was the shift to a forced subscription, including deactivating previously supported sharing methods, and with the typical-for-the-VC-driven-feudalism-model eye-wateringly, outrageously expensive and inferior multi-user support. Pure, proud rent seeking. And then naturally the artificial segregation of simple features like custom templates began too.

I hope someday that's made illegal. In the meantime there's Vaultwarden.


As another alternative, rather than using Touch ID you can set up a YubiKey or similar hardware key for login to macOS. Then your login does indeed become a PIN with 3 tries before lockout. That plus a complex password is pretty convenient but not biometric. It's what I've done for a long time on my desktop devices.


Thanks for sharing; I hadn't seen it, but at almost the same time he made that post I too was struggling to get decent NAS<>NAS transfer speeds with rsync. I should have thought to play more with rclone! I ended up using iSCSI, but that is a lot more trouble.
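For anyone hitting the same wall: rclone's parallelism flags are usually what make the difference over a single rsync stream (the remote name here is a placeholder for whatever you configured with `rclone config`):

```shell
# Several files in flight at once, plus multi-stream copies of big files.
rclone sync /tank/media nas2:/tank/media \
  --transfers 8 \
  --checkers 16 \
  --multi-thread-streams 4 \
  --progress
```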

>In fact, some compression modes would actually slow things down as my energy-efficient NAS is running on some slower Arm cores

Depending on the number/type of devices in the setup and usage patterns, it can sometimes be effective to have a single more powerful router and then use it directly as a hop for security or compression (or both) to a set of lower power devices. Like, I know it's not E2EE the same way to send unencrypted data to one OPNsense router, WireGuard (or Nebula or whatever tunnel you prefer) to another over the internet, and then from there to a NAS. But if the NAS is in the same physically secure rack, directly attached by hardline to the router (or via an isolated switch), I don't think in practice it's enough less secure at the private service level to matter. If the router is a pretty important linchpin anyway, it can be favorable to lean more heavily on it so one can go cheaper and lower power elsewhere. Not that more efficiency, hardware acceleration etc. are at all bad, and conversely it sometimes might make sense to have a powerful NAS/other servers and a low power router, but there are good degrees of freedom there. Handier than ever in the current crazy times, where sometimes hardware that was formerly easily and cheaply available is now a king's ransom or gone, and one has to improvise.
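Expressed as a plain wg-quick config for clarity (OPNsense does this through its GUI; keys, ports, and subnets below are placeholders):

```conf
# /etc/wireguard/wg0.conf on router A
[Interface]
PrivateKey = <router-A-private-key>
Address    = 10.8.0.1/24
ListenPort = 51820

[Peer]
# Router B, fronting the NAS's physically secured rack.
PublicKey           = <router-B-public-key>
AllowedIPs          = 10.8.0.2/32, 192.168.50.0/24   # tunnel IP + remote LAN
PersistentKeepalive = 25
```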


My personal favorite is iHP48 (previously I used m48+ before it died) running an HP 48GX with MetaKernel installed, as I used through college. Still just so intuitive and fast to me.


I still have mine. Never use it though as I'm not handy with RPN anymore. :'(


>Is this going to be like that submarine that guy built to bring people to the Titanic?

Doubtful. It might seem counterintuitive, but in some ways space is an easier problem than underwater, at least once you get up there. The pressure differential between ~vacuum and 1 atmosphere is obviously just one atmosphere, and outward instead of inward, whereas you reach 1 atm of pressure differential (14.7 psi) in water at almost exactly the 10 m mark (in salt water). The Titanic wreck (which is what the sub you're referring to was designed to reach) is at 3800 m, at which point the pressure is around 380 atm (~5600 psi). Any failures are going to be absolutely catastrophic with no time to react. Whereas a space station can handle small leaks just fine for quite a while (as the ISS has had to [0]) if it has some buffer; it's "just" a supply loss, and if it became too much it would mean people having to get into a safe area or suits and eventually abandon it in the worst case, but it doesn't go pop like a soap bubble. And such things can definitely be patched. Assuming normal proven safety procedures are followed (most importantly having some margin and constant backup lifeboats or rooms sufficient for all humans on board until all can get to Earth), an impact or mistake or the like might put the station out of business but should be very survivable.
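The pressure figures check out with the basic hydrostatic formula (seawater density is approximate):

```python
# P = rho * g * h: water pressure from depth alone (gauge pressure).
RHO = 1025.0     # kg/m^3, typical seawater
G = 9.81         # m/s^2
ATM = 101325.0   # Pa per standard atmosphere
PSI = 6894.76    # Pa per psi

def pressure(depth_m: float) -> tuple[float, float]:
    """Return water pressure at depth as (atmospheres, psi)."""
    p = RHO * G * depth_m
    return p / ATM, p / PSI

print(pressure(10))     # ~1 atm of differential at 10 m
print(pressure(3800))   # Titanic depth: roughly 380 atm / 5500+ psi
```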

At any rate, nothing like the Titan, where IIRC the collapse front went supersonic and thus they literally didn't even know what did them in, because it was faster than the speed of human nerve signal propagation (120 m/s at best, usually lower).

----

0: https://www.scientificamerican.com/article/the-international...


> "That's over 150 atmospheres of pressure!"

> "How many atmospheres can the ship withstand?"

> "Well, it's a spaceship. So I'd say anywhere between zero and one."


>But it's not always the periapsis.

>But since the impact point isn't guaranteed to be the periapsis or apoapsis, the above mentioned diametrically-opposing point also cannot be guaranteed to be an apsis.

You're correct on the generalized case of the math here, no argument at all, but this also feels like it's getting a bit away from the specialized sub-case under discussion: that of an existing functional LEO satellite getting hit by debris. Those aren't in wildly eccentric orbits but rather station-kept, pretty circular ones (probably not perfectly of course, but +/- a fraction of a percent isn't significant here). So by definition the high and low points are the same, which means we can say that the new low point of generated debris in eccentric orbits will be at worst no higher than the current orbit of the satellite (short of a second collision higher up, the probability of which is dramatically lower): the impact point stays on each fragment's new orbit, so its perigee is at or below that altitude. All possible impact points on the path of a circular orbit are ~the same. And in turn, if the satellite is at a point low enough to have significant atmospheric drag, the debris will be as well, which is the goal.
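A quick numerical sanity check of that geometry, as a sketch (the gravitational parameter and the ~400 km circular orbit are example values): the impact point remains on every fragment's new orbit, so each fragment's perigee ends up at or below the parent's circular radius.

```python
import math
import random

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R0 = 6_771_000.0      # ~400 km altitude circular orbit radius, m

def periapsis(vr: float, vt: float) -> float:
    """Perigee radius of the orbit passing through radius R0 with the
    given radial/tangential velocity components (bound orbits only)."""
    energy = (vr * vr + vt * vt) / 2 - MU / R0   # specific orbital energy
    h = R0 * vt                                  # specific angular momentum
    a = -MU / (2 * energy)                       # semi-major axis
    ecc = math.sqrt(max(0.0, 1 + 2 * energy * h * h / (MU * MU)))
    return a * (1 - ecc)

v_circ = math.sqrt(MU / R0)
random.seed(0)
# Random in-plane debris kicks up to 500 m/s (keeps every orbit bound).
worst = max(
    periapsis(random.uniform(-500, 500), v_circ + random.uniform(-500, 500))
    for _ in range(10_000)
)
print(f"highest fragment perigee: {worst/1000:.1f} km (orbit radius {R0/1000:.1f} km)")
```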


I'm sitting behind one of these right now, got it back at the start of last November, attached to an Ergotron monitor arm. It's worth noting that there are a number of screens coming out that seem to be using roughly this panel but with different price points and feature mixes. MacRumors (and no doubt others) maintains a nice little dedicated thread [0] on 6K screens with the info in easily digestible form. And on a meta note, the most exciting thing to me is simply that we're finally seeing a big leap forward all at once in the screen fundamentals (resolution, refresh, color) after a long and frustrating (to me anyway) period of stagnation. Apple did the first iMac 5K in 2014, twelve(!) years ago, and I thought at the time it wouldn't be long before we had a range of higher res options. Instead Apple eventually did a standalone, then long after LG did a release, there were a couple of rando ones from Dell that got dropped... and that was it. Now we've got lots of 5K and 6K options, 8K ones are coming, and while it's currently 60Hz, CES has seen higher refresh announced; the next few years are looking good. While the LG doesn't take advantage of TB5's full bandwidth, having 120 Gbps on tap means that we also have plenty of headroom for everything: high resolution, high refresh, and higher color bit depths without having to compromise. So that's all pretty nice.
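The headroom claim is easy to ballpark. Assuming a 6016x3384 panel (Pro Display XDR's 6K resolution; the LG's may differ slightly) and ignoring blanking/encoding overhead:

```python
# Raw uncompressed pixel bandwidth vs. Thunderbolt 5's 120 Gb/s.
def raw_gbps(w: int, h: int, hz: int, bits_per_px: int) -> float:
    """Gigabits per second of raw pixel data (no blanking, no DSC)."""
    return w * h * hz * bits_per_px / 1e9

print(raw_gbps(6016, 3384, 60, 30))    # 6K @ 60 Hz, 10-bit RGB: ~37 Gb/s
print(raw_gbps(6016, 3384, 120, 30))   # even 120 Hz fits well under 120 Gb/s
```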

As far as this one specifically, on a physical level it's perfectly decent. I actually like that, unlike the previous LG and most screens nowadays, there is no camera at all. The only real irritation is the ginormous power brick, which is bigger and heavier than a Mac Mini, and on top of that has a fixed cord (a SHORT fixed cord), which I hate. I prefer having power be integrated and just using a normal power cable, but if nothing else it's irritating that even on high end electronics OEMs still don't use GaN and shrink everything a lot.

I'm no longer doing significant graphics work, so I haven't invested in updating color calibration hardware; none of my old stuff works with current higher bit-depth/HDR screens. I'm mostly doing coding, CAD, light non-print graphics, etc., so my impressions are purely subjective. In no particular order, vs the older 5K and other screens I've used:

• Whether good luck or just (not) bad luck, quality control on the physical parts hasn't been an issue. There isn't any banding, no dead pixels, light/dark patches or the like that other comments report.

• It claims to be cutting edge in terms of IPS displays, "nano IPS black" blah blah, but there isn't any significant noticeable contrast increase vs the old. It's definitely excellent for a standard IPS display, but OLED/µLED it is not (though conversely I have no concerns about it being on hours a day displaying static GUI elements).

• Matte instead of glossy doesn't really do anything for me since I reoriented my office space long ago due to everything being glossy. There is a slight shimmer if I focus on it that bothered me a little when new, but I don't notice it after a few months. I don't think it's quite as good as Apple's treatment, and for myself I'd probably just go back to glossy given the choice. YMMV based on lighting.

• The software situation is mediocre. I have not been able to get LG's software to perform a firmware update; it fails with odd error messages, so I haven't been able to experiment at all with some of the modes it was advertised with. Their software wants a lot of invasive permissions and is wonky. LG support has not been helpful. Newer screens will presumably come with current firmware out of the box at some point, but this was disappointing.

• Also on software, at least under macOS 15 the HDR story seems a bit odd. It's the first desktop Mac screen I've used that has an HDR toggle in System Settings, and enabling it does make HEIC photos and a few other workflows I surveyed work more like an MBP screen. However, it also causes the Mac GUI colors to get all washed out and strange; there isn't compensation with just the toggle. That may be improved in macOS 26, or might be something one of the Studio modes will help with if I can ever get access to them, but it isn't plug-and-play here.

• If I do toggle it on, having HDR support with true 10-bit is noticeable when working with high bit depth photos, including everything from any iPhone in a while.

• Having TB bandwidth out of the hub doesn't matter much to me but does work and means the TB5 input isn't totally wasted. Sometimes convenient to have an extra port. This would probably be of more value for someone using a notebook which is clearly the intended use-case.

Anyway, it's fine. I needed a new screen, and it gives me a noticeably improved amount of screen space for my aging eyes, but is still on the right side (for me) of not being so big that I'd need a curve, though it's right on the edge. I've run 2- and 3-screen (1920x1200) primary-use setups in the past (ie, all for regular system use vs having a secondary proof/video screen like I do now), and there are pluses and minuses, particularly with having one be vertically oriented, but it's not bad to have so much space all as a single unified thing.

I think most people would be better off waiting; this was clearly not all baked yet when I got it, and there is plenty of competition here or coming, but I'm not returning it either. I'm looking forward to hopefully finally seeing screens that will arguably be "done", basically hitting the limits of human visual acuity in all respects (or at least to the many-9s level of diminishing returns), in the next few years. And I'm also kind of curious longer term about what effects that might have on the industry; for my entire life, progress in video, unlike audio, has been constant and there was always clearly more to do. Once resolution and refresh stop and monitors are "finished", I wonder if that might be interesting for media in terms of reducing the technical rat race?

----

0: https://forums.macrumors.com/threads/the-complete-list-of-6k...

