Redoing all my home networking (jessfraz.com)
304 points by msh on Dec 3, 2017 | hide | past | favorite | 182 comments


I've found that the home lab is a great place for cast-off corporate equipment either bought from ebay or scrounged from data center upgrades. Mine, assembled at little to no cost, consists of:

42u IBM-branded cabinet

Cisco ASA 5510 firewall

Dell R610 running ESXi

Old Dell 2950 full of 2TB drives running FreeNAS

Cisco 'Small Business' 50 port managed switch

A couple of 4U server cases containing the guts of obsolete gaming systems, repurposed as assorted servers

If you've got the space and can deal with the noise, I can't say too many good things about the R610s - available dirt cheap, and though they're about four generations old at this point, still perform decently since Intel/AMD have, until recently, pretty much stagnated on CPU performance.

Luckily I have a basement.


I've found that enterprise gear is not exactly power friendly -- the Cisco ASA 5510 is rated at 150W steady state; even if it draws only half that in practice (75W), that's around $90/year where I live.

In contrast, I paid around $120 for a PCEngines APU2 (running PFSense), and it uses around 8W of power (measured).

Old servers are the same -- the amount of power they use for the performance makes them expensive to run 24x7.
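The back-of-the-envelope math is easy to check; a sketch, assuming a hypothetical 14c/kWh rate (substitute your own):

```shell
#!/bin/sh
# Back-of-the-envelope annual electricity cost for an always-on box.
# The 14 c/kWh rate is an assumption; plug in your local rate.
watts=75
rate_cents_per_kwh=14
kwh_per_year=$(( watts * 24 * 365 / 1000 ))                 # 657 kWh/year
cost_dollars=$(( kwh_per_year * rate_cents_per_kwh / 100 ))
echo "${kwh_per_year} kWh/year, roughly \$${cost_dollars}/year"
```

At that same rate, the 8W APU2 works out to under $10/year -- an order of magnitude less.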


This is pretty much what I ended up doing. It's a colossal waste of power (and money) to run enterprise equipment in my area for a home network that is nowhere near enterprise-level in terms of utilization.


I also used to have more gear, but have really scaled back and consolidated. I now have one fairly beefy server (storage array, decent Xeon, lots of RAM), run a few VMs (dev, Plex, etc.), and export storage. Then I have a pair of NUCs doing various things.


Plus, hard drive capacity seems to be back on a steeply increasing curve, with 12TB drives coming to market and offering further opportunities for consolidation.


Someone shared this website with me a while back to find used servers on ebay https://www.labgopher.com/

What do you think of that?

I am trying to put together a render farm where I need a lot of parallel CPUs to run Unreal Engine Swarm [0] but I just don't have the time or experience to execute on it. My goal is 16 Xeon chips across maybe 8 'boxes' to get to around 64/128 cores.

Anyone know where I could look to find someone I could contract to put something like this together?

[0] - https://docs.unrealengine.com/udk/Three/Swarm.html


Have you looked into one of these guys [1]? They go for ~800USD for 16/32 cores/threads. You can get a good high-end build for ~1500USD. It behaves like a modern system with good power profiles and single threaded and multithreaded performance.

Or dual epyc for a single box 64/128 system?

1: https://www.newegg.com/Product/Product.aspx?Item=N82E1681911...


I had done some very basic research on it and found that performance for 'Lightmass' (the Unreal Engine lighting builder that Swarm distributes) was not ideal on Threadripper: https://forums.unrealengine.com/community/off-topic/1388243-...

From that thread: "Definitely not the threadripper - there were some benchmarks scattered around in other topics and it's actually slower at compiling and lighting builds than my 7820X, while costing more to build a system with."

But not sure if I should trust those opinions from the dev community as the benchmarks aren't too sophisticated.

I'll continue doing more research on those thank you! I think dual epyc could be interesting for the cost.


> I am trying to put together a render farm where I need a lot of parallel CPUs to run Unreal Engine Swarm [0] but I just don't have the time or experience to execute on it. My goal is 16 Xeon chips across maybe 8 'boxes' to get to around 64/128 cores.

These days, one dual-socket server can give you that, for a pretty reasonable cost compared to the time, expense, and power usage of running a rack full of systems.


Yeah, I think the per-core cost savings may not make sense in the long run once you factor in setup/teardown and power usage.

If you had to put together a copy of the Amazon X1 instance https://aws.amazon.com/ec2/instance-types/x1/ any recommendation on a mobo/memory/case to use?


I put together a few dual E5-2683v3 machines for a colo lab earlier this year. Pricing looks a little less favorable today, but I'd ballpark around $1500 per 28 cores/56 threads before RAM and disk.

As for contracting, there's lots of options, but everything depends heavily on your usage model. Let's say your render jobs take the shape of "press button, enqueue 10,000 CPU-minutes of work". How often do you press the button? How valuable are low completion times? How penalizing are high completion times? Are there ever times when there is no work?

Cloud computing is a good fit for some problems: you can press a button, run your code on 10,000 CPUs, and finish the job in a minute. Cloud computing is expensive for other problems: in terms of hardware, the $1500 machine above compares favorably versus a cc2.8xlarge instance which costs about $1500/month at on-demand pricing. Server hardware is not the only cost -- you still need space and power and disks and network and time for setup and ongoing maintenance -- but there's definitely a break-even point.

PaaS, IaaS, dedicated servers, colocation, and in-house datacenters all make sense for users seeking different tradeoffs. It's difficult to give useful advice without knowing more about the tradeoffs you want to make and the relative values you place on setup time versus run time versus money.


You might be better off using EC2 or similar. Do the work to configure everything once and package it as an AMI; then, when you need your swarm, spin up as many instances as you need, render whatever needs rendering, and turn the instances off. It should be reasonably cheap, and it lets you decide on the speed-vs-cost tradeoff (more servers cost more but get your work done faster; fewer servers cost less but take longer). I don't think a rack full of hardware is a very good choice for this.


I will probably try my luck with one of these X1 instances https://aws.amazon.com/ec2/instance-types/x1/

My only fear is that the time/cost of spooling instances up and down would keep us from iterating as fast as we could if we owned the hardware and had it running 24/7, dedicated to just-in-time use.

So rather than rent an X1 instance indefinitely we might just pony up the $ upfront and buy our own.

"Each X1 instance is powered by four Intel® Xeon® E7 8880 v3 (codenamed Haswell) processors and offers up to 128 vCPUs."

Anyone know if the X1 instance is 1 motherboard with 4 sockets or 2 separate mobos with 2?


The time to spin up is on the order of seconds.

IMO, if your core competency isn't sourcing and buying hardware in order to build render farms, you shouldn't do it. It's just not worth it. AWS brings a lot of value to the table, not the least of which is the fact that you won't have to maintain/fix hardware when you're just trying to render something. If the instance is broken, you can just kill it and spin up a new one.

Seriously, this will make your life so much easier.

> Anyone know if the X1 instance is 1 motherboard with 4 sockets or 2 separate mobos with 2?

This shouldn't matter. It's abstracted away from you by design.


You can have a look at Xeon E5-2670 v2 CPUs; they're relatively cheap, around $200, and have 8 cores/16 threads, so in a dual-CPU configuration you could get 64 cores from only 4 machines.


I'm not a fan of running a hardware firewall that's long since passed its end-of-life date. There have been a number of critical vulnerabilities on these ASAs in the past. I'd rather stick with software that's kept up to date, like PFSense or OpenBSD.

Definitely agree on using other cast-off hardware, though.


Or you could build a super quiet desktop server, which will both have more CPU and use less power. The trick is to use large heatsinks with large, silent fans. When the CPUs crank up to 100%, all you'll hear is a subtle breeze.


Sys Admin/Hardware dork here.

Absolutely this. No need to run enterprise-level hardware at home. You can easily run a huge environment with 128GB of RAM and an 8-core Ryzen processor. If you want to get crazy you could even get an Epyc CPU with a SuperMicro motherboard, put it into an eATX whitebox case, and run silent fans with a huge heatsink while still barely using any power. You can even run all the networking OSes in VMs, either with PFSense or some other VM template, or use GNS3. No need to purchase a ton of hardware.

It really makes no sense to me why people drop all this money on R710s and make their house sound like the Hartsfield-Jackson Delta terminal.


Do you have more details on this? I've been looking for a way to build a fanless server but I've been having a difficult time finding info on it. The only stuff I can find is one or two commercial sites that sell custom builds, so I wonder if it's difficult to do on one's own, or if fanless tech is still very niche and undeveloped.


You could go with an Intel NUC; though they use mobile CPUs, you can still get a lot of performance out of one, at least considering the form factor. If you want more power you need fans, though; try the Noctua fans, they are close to silent. There's really no point going fanless.


Check out the builds here [1]. I used this mobo for a NAS build recently since it has ECC support.

1: https://pcpartpicker.com/builds/by_part/fk98TW


What about the benefit of using ECC RAM for a file server? I don't see that available in consumer grade H/W. (As far as network H/W I'm in full agreement.)


AMD h/w pretty much always supports ECC DIMMs; it just depends on whether the mobo routes the extra data lines to the DIMM slots, as ECC takes more traces.


It matters if you plan on running a ZFS file server at home, which enthusiasts prefer because it's one of the only filesystems that checksums all data to prevent bit rot.

Anyway, the ZFS approach is to inherently trust the data that's in RAM over the data on the drive if there's a discrepancy, so ECC RAM is required to maintain the integrity of data on RAM.


All filesystems inherently trust the data that is in RAM. There's nothing any piece of software can reasonably do about memory it can't trust. It's therefore a good idea to run ECC RAM, regardless of which filesystem you run. You can be running your filesystem on Ext4 or the legacy BSD Unix File System, and it would still be a good idea to use ECC.

That ZFS needs ECC more so than other filesystems is an often-repeated misconception. It's just that ZFS (and e.g. Btrfs) is often run on file servers, on which ECC is recommended.


> the ZFS approach is to inherently trust the data that's in RAM over the data on the drive if there's a discrepancy, so ECC RAM is required to maintain the integrity of data on RAM.

This is absurd. Please read up (and understand) this before talking about it again. The benefits of ZFS and the benefits of ECC are orthogonal, though if your fileserver has both, it's a pretty darn robust system with respect to integrity -- assuming that (for example) the disk controllers don't lie about something being synced.


I used to have an ASA as my home firewall, but the problem was that its ethernet ports were 100 megabit, which meant that it eventually became the bottleneck in my internet connection. I ended up replacing it with a Ubiquiti router, which was faster, far cheaper (than even a used market ASA), still getting security updates (without a support contract), and a lot easier to manage (and I say this as someone with extensive experience with Cisco gear, dating back to the 90s). The Ubiquiti might not look as nice or be as cool of hardware, but in practice it's better in every way that matters to me.


You can get some good deals here too: https://forums.servethehome.com/index.php?forums/for-sale-fo....

I've been wiring up 10G throughout the house, and managed to get an ER-8-XG (beta version) on this site for under $500 (eight 10-gig SFP+ ports). My wife made me get rid of it because it was too loud. :( Got a Dell T1700 instead (they're practically giving them away on eBay). Quad-core Xeon E3, plenty of headroom to do firewall/routing in software.


That’s really cool! I have a bunch of old hardware, unfortunately I can only bring them online when I want to play with them and don’t have any permanent space.

Do you know how many watts you consume? Are alternatives to old enterprise gear from eBay going to be cheaper from an electricity point of view?


I'm kind of guessing here but:

-Cisco ASA 5510 firewall - 100W

-Dell R610 running ESXi - 400W

-Old Dell 2950 full of 2TB drives running FreeNAS - 400W

-Cisco 'Small Business' 50 port managed switch - 100W

-A couple of 4U server cases containing the guts of obsolete gaming systems, repurposed as assorted servers - 2 x 300W

So, maybe 1600W under medium load? A 15A circuit can deliver 1800W max., so it would be occupying one whole circuit.

Hardly worth it, considering the noise, heat, space, and power. Old enterprise hardware gets given away for a reason.
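Summing those guesses against a single circuit (assuming US 120V mains; per-device numbers copied from the list above):

```shell
#!/bin/sh
# Total the per-device wattage guesses and compare to one 15A/120V circuit.
total=0
for w in 100 400 400 100 300 300; do
    total=$(( total + w ))
done
circuit_watts=$(( 15 * 120 ))   # breaker amps x mains volts
echo "load: ${total}W of a ${circuit_watts}W circuit"
```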


Yeah, I considered holding onto a pair of old 2950s but ditched them, considering my peak electricity rate is 43c/kWh. This setup makes me shudder.


How do you find data centers that trash gear? I want a setup like this eventually. There's a lot of cool things (in my opinion) I'd want to do if I had serious compute power.


Coming from an IT/Ops background, often the answer is, know the Ops/sysad folk in a small shop, and be around when they're upgrading.

Apart from that, places like www.weirdstuff.com might exist in your area, and have stuff like this laying around for pennies on the dollar.


Look for government auctions. I bought a pallet of LCD screens for my son’s school for like $75.

A few years ago, they sold 4 pallets of Extreme 10/100 switches for $20! The catch is they sell in lots and you have to pick everything up at one time.


And even a few years ago, 100 mbit switches were obsolete.


Can you tell me more about how you did the Dell 2950? I'm looking at NAS server options and would love to roll my own. I have 2-3 of these sitting around.


This was more like the setup I was expecting when I clicked on the link, not just buying some UniFi gear and setting up a few VLANs and SSH keys.

I agree with you that they are great value, but I find the noise and power consumption of that generation a bit much, so I just built my own boxes.


One of my favorite pieces of equipment in my home lab has to be an APU2 running OpenBSD as a router. It's cheap, feels solid, and pf lets me configure complex network routing rules with quite a nice interface. Hasn't choked yet. IPsec built into the OS!

https://www.pcengines.ch/apu2.htm
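For a taste of the pf interface mentioned, a minimal home-router pf.conf might look something like this (interface names and rules are illustrative assumptions, not a drop-in config):

```
# NAT the LAN out the WAN interface; default-deny inbound from the internet
ext_if = "em0"   # WAN
int_if = "em1"   # LAN

set skip on lo
match out on $ext_if from $int_if:network to any nat-to ($ext_if)

block in on $ext_if all
pass out all keep state
pass in on $int_if all
```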


I've been running OpenBSD on an APU2 as a router for over a year (and on the ALIX it replaced many more years).

OpenBSD makes it so dead simple to setup the basics that (unlike with my NAS) I've never been tempted to try a GUI distribution such as pfSense, yet it also gives you the flexibility to do more esoteric things like forwarding DNS using DNSCrypt, or allowing UPnP, but only to the PlayStation on a VLAN that can't talk to your main network. And the PC Engines gear works perfectly with OpenBSD.
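That PlayStation-on-a-VLAN setup is only a few lines on OpenBSD; a hedged sketch (interface names, VLAN tag, and subnets are made up):

```
# /etc/hostname.vlan10 -- a guest VLAN stacked on the LAN NIC
parent em1 vnetid 10
inet 192.168.10.1 255.255.255.0

# pf.conf additions: the VLAN gets the internet, but never the main LAN
block in quick on vlan10 from any to 192.168.1.0/24
pass in on vlan10
```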


My OpenBSD router (a VM running on ESXi) is probably the most satisfying piece of modern technology I have. A joy to configure, update (every 6 months), and does exactly what I need it to with zero problems.


Can that APU2 do filtering at full wire speed on those 1Gb NICs? In my experience, many of the smaller single-board routers can't keep up. I'm running a Netgate 4860 at home and it keeps up with my gigabit fiber connection, but several of my previous attempts would not (Soekris, etc.).


I also have gigabit, and in my research it seems that the APU2 tops out around 600mbit. Newer hardware like the Celeron 3855U should handle gigabit fine though.


I did the same and have the exact sentiments about the APU2+OpenBSD.


One of the best possible things you can do for your home network, if you have the knowledge to do it properly, is to separate the functions of modem, router, and WAP into three separate devices. The cablemodems provided by Comcast, Charter, and others that have built-in NAT and wifi are atrocious, bug-ridden security nightmares.

Example of a basic separated setup:

DOCSIS 3.0 cablemodem that is a dumb layer 2 bridge: https://www.amazon.com/TP-Link-Download-Certified-Spectrum-T...

router good for up to a 150-200 Mbps class cablemodem connection, $50: https://www.amazon.com/Ubiquiti-Networks-ER-X-Router/dp/B014...

if you want to mess around with stuff from the CLI, a ubiquiti edgerouter is actually a very tiny debian system. their edgerouter OS is a fork of vyatta and is developed by a team of people they hired away from vyatta when brocade acquired them.

802.11ac 2x2 MIMO dual band WAP: https://www.amazon.com/Ubiquiti-Unifi-Ap-AC-Lite-UAPACLITEUS...

or go for the more expensive 3x3 MIMO and 802.11ac wave2 WAPs if you really feel the need for it.

set up the unifi controller in a virtualbox VM that runs on your laptop and use it to do the initial setup/provisioning. bring up the VM whenever you need to make changes.


What's the benefit of separating router and WAP vs. running OpenWRT on a wireless router?


I really like Mikrotik hardware (https://mikrotik.com/products) for home routers and wireless access points. Their products are not very expensive, the hardware is great, and the OS is highly configurable.


I set up a couple of Mikrotik routers at my work and it's done wonders. The CCRs (Cloud Core Routers) are absolutely amazing, though probably overkill for most home users. When I first started at my job the network was an absolute nightmare: they were using a NetGear Nighthawk AC1900 (N7000) for around 70-80 users at any given time across 3 separate buildings, and every desk has 2 or 3 connected devices (computer, VoIP phone, and some people have laptops as well, plus everyone wants their cellphone on the wifi). Now they treat me like a goddamn wizard in disguise. They went from massive internet congestion, slow speeds, and constantly dropping VoIP calls to smooth sailing. The previous IT guy still works there, but he's more of a media guy.

Now, for the most part, I do general IT support and a ton of web development for the company, without a ton of oversight.


I did the same when I joined my last job. They were running everything off two BT business hubs. I've put both of them into a brain-dead modem mode, with one as a failover WAN, bought a Mikrotik RB2011UIAS-RM (which has been wonderful, 100% uptime), replaced the 10/100 switches with gigabit equipment and installed a set of four Ubiquiti WiFi access points around the building.

We've got a nice guest wireless network for our clients with token based auth and a really reliable and consistent office network.

I recently bought a Mikrotik HAPlite for home which has also been great, I can VPN in and check my security cams without any horrifying cloud service getting a livestream of my home.


pcengines APU2 is also a pretty sweet platform: https://www.pcengines.ch/apu2.htm

Also, for homelabbers the https://www.supermicro.com/products/system/Mini-ITX/SYS-E300... is like the "ultra-NUC": really small, but with 2x 10GbE NICs built in, plus IPMI.


The Xeon Ds are nice (8 physical / 16 logical cores at less than 45W for the D-1541), but I wouldn't recommend buying Supermicro's pre-assembled machines for home lab use; they are noisy. It would be better to buy their bare motherboards (e.g. http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV...) and self-install into a low-noise mini-ITX case instead.


How loud is that little Supermicro, though?


Yeah definitely noisier. https://tinkertry.com/supermicro-superserver-sys-e200-8d-and... shows about 60db(!) with fans at full blast during spinup but 42db or so when dialled down. So not too bad.

The difference with IPMI tho is you can safely put them in the attic or wherever.


How is Mikrotik's software? I find UBNT's to be excellent for someone who doesn't want to do everything via CLI.


MikroTik's WebFig (web GUI), WinBox (native Windows exe), and the SSH/Telnet CLI have feature parity.

The CLI is nice in ways Cisco IOS isn’t, like a safe mode, auto-complete and in-line tab suggestions.

As if feature parity between UIs weren't enough, the feature set itself is huge. RouterOS runs on PCs, and there is a VM version as well, but you can get a performant SOHO RouterBoard unit for stupid cheap, and they even have the actual RouterBoard hardware available in bare-PCB form factor for OEM integration.


Perhaps it's an acquired taste. Some of our older office/test network routing is Mikrotik hardware, but since I started pitching in, all the new stuff has been Ubiquiti. I think it's night and day; the UBNT stuff is great, but I really dislike the Mikrotik UIs - I find them really confusing, and in my experience the documentation is generally poor (unless you only use the CLI).

The Ubiquiti hardware generally seems much higher performance and is also low cost. But on the other hand, I do know some guys who really like the Mikrotik stuff so maybe it's just me...


It's a "pretty" UI around what is basically Linux. E.g. you won't get far without understanding Linux's iptables, but both MT's command line and GUI are a heck of a lot more discoverable and user-friendly than the iptables command line.
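As an illustration of that discoverability gap (the rule itself is made up for the example, but both syntaxes are standard), dropping invalid forwarded packets looks like this in each world:

```
# Linux iptables
iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP

# RouterOS, where the same idea reads almost like a sentence
/ip firewall filter add chain=forward connection-state=invalid action=drop
```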

If you want to do anything more than very basic single AP home scenario, you have to be willing to get your hands dirty. BUT they are very configurable, make for great learning experiences, and (from my experience) are very reliable, both hardware and data-plane software.

That said, the configuration software DOES have bugs from time to time, but MT is decently responsive.


Some things are better via the gui (web or a windows exe that's wineable), but I prefer ssh access.

I like the fact that everything is included: OSPF, BGP, PIM, MPLS/VPLS, you name it, it's there (aside from decent user management). I used to run BGP at home but moved to OSPF recently.


Yeah it's pretty impressive to have all of those protocols included in such "low cost" routers (less than a hundred dollars).

Why do you need bgp/ospf for your home network? Are you just experimenting?

I used to manage the network of my school campus, we connected about 1000 persons, and ospf was working great! We didn't use mikrotik though, we used extremenetworks routers


Multiple subnets to keep the wireless domain, wired domain, protected domains and dmz apart. I used to run the backbone over wireless when I rented (couldn't run cat5 through the walls), so didn't vlan everything back to a single router.

OSPF makes it far easier to manage those, but I did used to run BGP as it made more sense to me when I learned about routing protocols and was more forgiving of wireless issues (which could have been me using the wrong OSPF mode to be honest)

In addition to the normal fixed infrastructure, I have a cluster of 5 cheap mikrotiks that I use for a little experimentation with things like failover time, but that's mainly for work purposes.


Mikrotik does in fact support VLAN bridging over WiFi (only between Mikrotik devices), unless you're running in CAPsMAN mode.

When I switched to CAPsMAN, I ended up using VPLS for my wireless backbone, thus giving me proper layer 2 isolation, without fragmentation (MPLS not being limited by the L3 MTU).


One goal was to keep layer 2 multicast traffic off wireless where possible. Mikrotiks don't do IGMP snooping (until very recently). The wireless backbone had to support multi-network traffic of course, but local traffic (between a couple of devices in one room) could be kept off the link.


Looking at the Unifi controller setup, I'm a bit perplexed as to why they would choose to use MongoDB, although I'll admit I lack experience with everything related to networking. I'd think that using SQLite would allow for a simpler setup while providing better performance. Am I missing or misunderstanding something?

Does anyone have suggestions for beginner friendly guides to home networking? My home setup is pretty hacky, and I'd love to setup something more secure as well as improving my understanding. Right now I have an ASUS RT-AC66U router with asuswrt-merlin, and it runs an OpenVPN server, so I can remotely access my home network.

How do people setup their home network domain name and device hostnames? I have the router set to update a public DDNS entry each time it connects to the internet, and a LAN DHCP Server with manually assigned IP and hostname for each known MAC address. This works alright for home devices, but it gets awkward for mobile devices and laptops. How do you restrict sharing functionality to VPN connections? Should your hostname remain the same regardless of what network you're connected to, or should it vary?
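For what it's worth, asuswrt-merlin's DHCP/DNS is dnsmasq underneath, so the static-lease approach you describe boils down to entries like these (domain, MAC, and addresses are hypothetical):

```
# local domain for the LAN, with hostnames expanded under it
domain=home.lan
expand-hosts
# dynamic pool for guests and mobile devices
dhcp-range=192.168.1.100,192.168.1.199,12h
# pin a known MAC to a hostname and fixed address
dhcp-host=aa:bb:cc:dd:ee:ff,nas,192.168.1.10
```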

Do you use IPv6? My ISP supports IPv6, and I had enabled it on my router for a few months, but eventually ended up turning it off since I felt uncertain that everything was configured securely. Am I just being paranoid?

Is it ever worth setting up a RADIUS server for WPA2-Enterprise wireless security? I kinda like the idea of having a centralized location for handling authN/authZ, but the relationship between the different technologies (Kerberos, LDAP, Active Directory, etc.) is pretty confusing, and nobody seems to do a good job of explaining how it all ties together. Right now I just generate an SSH key per device and import it to each device that should allow connections. But that increases the friction of deploying home services, which gets a bit demotivating.

What monitoring tools do people setup for their home network? Do you handle updates manually or do you have that process automated? Every time I consider setting up a network monitoring tool, I kinda end up going down the rabbit hole and getting overwhelmed by the huge number of options. Many of these tools kinda assume that the user is already familiarized with best practices and that they know exactly what they want, which couldn't be further from the truth in my case.


Unifi uses MongoDB because somebody at Ubiquiti thought it was cool. It's not an open-source project; you don't get to see their IRC chat logs or commit messages.

If you're uncomfortable with IPv6, you are right to turn it off. Learn it, become comfortable, then configure it properly. This advice holds for everything.

Is it worth setting up RADIUS? Only if you want the experience. One of the possible functions of a home lab is to gain experience.

It sounds like you want to learn how to do operations. It's always advisable to start by gathering requirements, generating threat models, listing resources, and then trying to construct a plan. None of this needs to be fixed in stone - as you research one thing, you should learn of alternatives with different trade-offs. Keep track of all that. There is never just one set of "best practices".


Just letting you know that the reason you are probably not receiving many responses is that nearly every question you ask is either easily Googleable or asks for opinions rather than facts.

Want to know if IPv6 is secure? Do research to find out. Then you will know for sure.


This comment adds very little to the discussion. The original poster had plenty of valid questions that are very hard to find answers to, as it's mainly enterprise/business territory.

I would say you will have a hard time figuring anything out for yourself when your infrastructure uses business-class solutions and is cross-vendor.


Pulling your SSH keys from GitHub into your authorized_keys file is a terrible idea, even with 2FA enabled on the GitHub side. You're trusting GitHub to manage access to your home network and all of its SSH-able machines!


True, but it would be pretty nifty if there were a thing that would download an authorized_keys file, check the PGP signature against a key I had specified, and copy the file to the SSH dir if the signature was okay.


Uh, why couldn't you just sign your authorized_keys file and post it somewhere public? Then you just literally download, verify signature, and 'import' it (overwrite or append to existing file).


Yes, that's exactly what I said "would be nifty" above. I meant a client-side bash script or similar.


Here's a start: https://bpaste.net/show/049673c13cbf

You can clear sign an authorized_keys file with "gpg --clearsign <authorized_keys>", then just pass the resulting *.asc file to this script. It will verify the signature and 'import' it by copying it to ~/.ssh.
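In case the paste link rots, a minimal sketch of such a script (the function name is mine; it assumes gpg is installed and the signer's public key is already trusted in your keyring):

```shell
#!/bin/sh
# Verify a clearsigned authorized_keys file, then install it into ~/.ssh.

import_signed_keys() {
    asc="$1"
    [ -f "$asc" ] || { echo "usage: import_signed_keys authorized_keys.asc" >&2; return 1; }
    # Refuse to touch ~/.ssh unless the signature checks out.
    gpg --verify "$asc" || { echo "bad signature, refusing to import" >&2; return 1; }
    # --decrypt on a clearsigned file verifies again and strips the armor.
    gpg --decrypt "$asc" > "$HOME/.ssh/authorized_keys" &&
        chmod 600 "$HOME/.ssh/authorized_keys"
}

[ "$#" -ge 1 ] && import_signed_keys "$@"
```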


Looks good, thanks! Good way to update all my machines' SSH keys.


What about something like monkeysphere? http://web.monkeysphere.info/getting-started-ssh/


Ignorant dev here: why would I want to buy custom hardware when whatever off-the-shelf 2013ish netgear/linksys router I have can do port forwarding and max out my connection?


In my case (also a dev), because my 2015 Fritz!Box wasn't coping in a dense WiFi area (I can see over 200 other APs from my apartment). I was also running into issues while really pushing my network: I couldn't get much over 100Mbps in a multi-client environment. A single client-to-client test could go much higher, but I was running into CPU load issues when using multiple clients, possibly related to NAT rewriting.

The main advantage is following the UNIX philosophy (albeit in a limited manner). The security gateway does NAT, VPN and firewalling, the APs do wireless and the switches switch packets. The CPUs can't get overloaded with other tasks.

I don't think I'm anything like an average user though. I have over 20 devices on my home network continually and 100Mbit fibre to my home - though with my new gear I could increase that to 800Mbit.


My Netgear WND4000 was updated to a build that makes it trivial to permanently lock yourself out of the router. In fact, it's nearly guaranteed to happen if you install their most recent firmware update from a wireless client.

My Mikrotik and Netgear both regularly shut down when I max out my Internet connection. I've had days where I've had to manually restart them numerous times trying to push large docker images to AWS.

They both have spotty upnp implementations, which I need to work because there are game consoles in this apartment and upnp is necessary to get them online reliably. The Mikrotik's AP is pretty terrible. Doesn't even span a small 1BR in Seattle.

Netgear's models are notorious for being vulnerable to stupid exploits even after numerous hacky patch series.

The most handwavey one: I find consumer routers just sorta need to be replaced every few years.

I spent less doing the whole Unifi thing recently than I've spent on routers and such in the last 6 years. I don't think I'll need to replace this gear any time soon, and Unifi isn't really custom hardware at all. The non-rack-mount stuff is relatively affordable if you're willing to make a moderate investment.


> The Mikrotik's AP is pretty terrible. Doesn't even span a small 1BR in Seattle.

Which AP do you have? They sell many, which vary greatly in transmit power. Some (like the RB951G-2HnD) support up to the legal maximum of 30 dBm, but most (like the cAP lite) run at 20 dBm. The notorious RB951-2n (my first) ran at only 15 dBm, which is underpowered.

The 20 dBm models are designed to be used in a multiple-AP scenario, e.g. one AP per room. They definitely won't span multiple rooms, by design. (This is for several reasons, both to properly segregate client devices onto separate APs, and to ensure TX power parity between the AP and devices, which often operate in the 17 dBm range.)

I've run exclusively MT for years and never had one lock up under load. What model do you have and what sort of bandwidth are you talking about? I regularly run 200+ Mbps transfers to/from my NAS (across a wAP ac) and never have trouble. (I did once own a hAP ac that would – after an electrical event that destroyed some other equipment of mine – reboot occasionally.)


I'm not sure which one I have, and I'm not at home, sadly. Most of the problems were at my first apartment, where I had symmetric gigabit.


For higher-end home connections (100mbit - gigabit) those older "Wi-Fi aisle at Best Buy" type of routers may or may not be able to saturate the connection.

By separating the router and wireless APs you also get the ability to place multiple APs throughout your home for improved signal.


I have an okay homelab, and while I'm still looking into a custom router box, I did replace my router's built in wireless access point with a standalone dedicated Ubiquiti AP. Part of this was that my router only had wireless N, and like you say, it does do port forwarding and max out my connection fine. Also I do plan to add a second AP in the coming week, and Ubiquiti can handle them both easily enough.


> why would I want to buy custom hardware when whatever off-the-shelf 2013ish netgear/linksys router I have can do port forwarding and max out my connection?

for me it was because

- it could not max out my connection: it was maxing out at 70Mb/s on a 100Mb/s connection

- those off-the-shelf routers do not support VLANs, and I like to segment my home LAN into stuff I trust and stuff I don't trust.


I had two commodity APs (NetGear, TP-Link) barely providing enough coverage for my two-story house. I replaced them with one UniFi AP that gets better coverage. It also gets regular software updates, something most commodity kit doesn't.


I wanted to share another killer feature of Mikrotik routers. It's called MetaRouter, and it's basically a VM hypervisor on the router; you can run other routers, or OpenWrt with basically anything you want, limited only by the router's hardware.


Unifi gear is great until you need something that the UI can’t do and figure out you’ve got to load a custom JSON config file from the controller. Their support is also atrociously bad.

The Unifi product is getting better, but I sometimes wish I had gone with their EdgeRouter line instead of Unifi for routing and switching.

I would say it’s certainly good enough for home, but I’d hesitate to use it in a lab environment where configs might get a little more unique.


Doing stuff from the terminal is easy. Finding anything on their forum is easy. And they answer all the questions. I got all things UniFi, same as the guy that made the post, and I am not a network engineer. Setting up the networking was easy and fun.


With the Unifi line it doesn't matter how easy the terminal is, since the config gets overwritten by the controller on each re-provision. So not only do you get to figure out what config works via the terminal, you also get to insert it into the config via JSON and find the right way to provision it.
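For anyone unfamiliar with the mechanism being described: the controller merges a hand-written `config.gateway.json` from the site's data directory into the config it provisions to the gateway. A minimal, hypothetical example (the host-record value is made up for illustration; the schema mirrors the EdgeOS config tree and may vary by controller version):

```json
{
  "service": {
    "dns": {
      "forwarding": {
        "options": ["host-record=nas.example.lan,192.168.1.10"]
      }
    }
  }
}
```

Getting the JSON nesting right usually means first testing the change in the gateway's shell, then dumping and hand-translating the working config — which is exactly the round trip being complained about.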

It's certainly not rocket science, but I don't know too many junior sysadmins who could do it without hours of research and pain. Whereas making a change on an EdgeRouter is more like: find, apply, save. Done.
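By contrast, the EdgeRouter workflow referred to here is the stock Vyatta-style CLI session, roughly like this (a sketch; the particular `set` path is an assumed example):

```shell
configure
set service dns forwarding options host-record=nas.example.lan,192.168.1.10
commit    # apply to the running config
save      # persist across reboots
exit
```

The change takes effect at `commit` and survives reboot after `save` — no controller re-provisioning to fight with.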


If I can do it easily with 20 minutes of research, and I am just a software engineer without a networking background, then any sysadmin should be able to do it too. Btw, I have a Unifi AP, switch, and router, and am also rocking a backup 4G link on WAN2 and some NAS storage.


Gal


I was under the impression that they were going to decommission the EdgeRouter line as more and more features get added to their UniFi line.

(I have both and my EdgeRouter X feels definitely in “maintenance mode”).


No. They're trying to position the UniFi line as their 'enterprise networking' line and the EdgeMax line as their 'carrier networking' products (a lot of their customers are running small ISPs).

Software wise, the UniFi line does seem a little more actively developed, but new EdgeRouter products are and have been coming out (like the 10Gbit EdgeRouter Infinity a little while ago, and the new EdgeRouter 4 etc.). I think a lot of the software slowness might have been that some of the devs on the EdgeMax team left (judging by the staff posting on the forums), so it's probably been taking a while for the new devs to get up to speed.


I agree, I've got all UniFi at home except for my router, which is an EdgeRouter X.


The best thing about the home lab is that I can buy whatever I want and I don't have to sit there arguing with a sysadmin at work about whether this is going to be a problem on the network.


I have:

o Zotac dual nic with pfsense

o netgear 24 port switch with vlans (no PoE, too expensive)

o i5 NUC

o quadcore atom ZFS storage box

o 2x unifi AC APs

o Many raspberry pis for environmental sensors/control and the like

There are a number of VLANs, some for public things like cheapy Chinese CCTV (don't want that seeing the internal network), media, and the spouse(s).

Everything was controlled with Puppet, now migrated to Ansible. I was tempted to replace most of it with an old HP Z600, but that would blow the power budget: all the compute, when on, consumes < 45 watts, while a Z600 with dual CPUs pulls about 100-130.

There is a hosted Atom box that provides the website, VPN, and backup coordination.


Both Ubiquiti and Mikrotik gear are great for home use, especially for us techies, and the Ubiquiti stuff especially can often be hacked on a bit. Just remember to protect your management interfaces and keep everything up-to-date.


I often ask people I interview what their home network looks like and what their visions for it are. This often opens doors to inner motivations and thoughts you don't find elsewhere in an ordinary tech interview.


Asking about home labs/home networking runs the risk of being a very discriminatory question if you take a negative answer as a mark against the candidate. I highly recommend not asking about it.

Particularly, it is discriminatory in that it penalizes people based on their home situation, both from a time and financial perspective. People with children, especially single parents, are often not in positions to actively maintain a home lab or complex home network, due to time constraints.

Others just might not have the inclination, and it doesn't say anything at all about their ability to perform the job.

Even if you're not going to take the lack of a home lab as a negative, it can throw a candidate off - candidates are very conditioned to feel that a hard no to an interview question is going to be taken as a mark against them, and this is going to affect their ability to answer other questions as effectively.

A much better question is to ask how someone approaches learning new technologies or keeping up with the changes in technology in general. For some people that will be work related, for some it will be home labs, and is an important skill regardless of how much time you have available. People with home labs will almost universally talk about it, and you can get the same discussion with them, without risking discriminating against people that have to keep their skills sharp through other methods. It's also more likely to be directly relevant to the skills related to the job, rather than random projects that might be tech related but not relevant to the position being interviewed for.


I was interviewing someone who, on their CV, had no real qualifications for the role they were applying for (sysadmin; this was back in 2012). Basically they were a social worker for both young and old in Spain (this was a UK position).

However, they were reasonably promising on the test, so as part of the "getting to know you" part of the interview we went over where they got their knowledge: home labs, eBay, second-hand stuff. Do I expect _everyone_ to have a home lab? Fuck no. Do I explicitly ask about them? No.

I must take issue with "discriminatory": the whole point of an interview is to discriminate against people who are not capable of performing the job role.

What you are talking about is social bias, and that frankly has nothing really to do with home labs, and a lot to do with people being arseholes.


I'm not taking a side on whether asking about home labs in an interview is a good idea or bad idea but it's not discriminatory in a legal sense.


This statement is so generic, you could replace "home labs/home networking" with literally anything.


Not literally anything - but things outside of work. In which case, yes, it still all applies.

You should focus on the things required for the job. Not what people do in their free time.


The reality is that spending your free time on job-related stuff is likely correlated with improved job-related skills, though. It's not a correlation of 1 obviously, but it's probably not 0 either, so it is relevant.


Except that studies show reduced capacity to work after a certain number of hours. Does doing the equivalent activity to "work" at home count against those hours? I don't know the answer there.

We do know that diversity increases the effectiveness and problem solving ability of a team - at worst, getting a bunch of people that have home labs is actively detrimental to this, and at best does nothing to improve it.

There's limited time available in interviews, and you're probably better suited asking relevant questions.

(And from a personal perspective, asking about home lab details would benefit me. I've got ten gig fiber throughout the house and close to a terabyte of RAM consumed by my VMs... But none of that is realistically making me a better employee)


Interesting, I'm not sure if it counts against one's ability to work, but it seems like it might, since it's a similar activity. And you're right that other activities can also yield helpful skills.

I agree that competency testing, especially project based, is better. The only reason I'd ask about their off time is to get a sense of what they like and better getting to know what kinds of projects they might gravitate to, given the choice.


Why psychoanalyze when you can simply test?


Good point, I agree.


What are some example replacements that would make the statement less applicable?


Interesting! I mostly focus on whether they're capable of performing the functions of the job I'm hiring for.


I hope this isn't unusual. I like to think I'm at least OK at software development, but my home network consists of a wifi router that I bought in late 2005 from PC World.


Ah, someone who doesn't feel the need to overcomplicate things. When can you start?


While I don’t believe advanced home networking is a good indicator of software engineering skills, I’d at least hope you occasionally check for firmware updates/vulnerabilities and replace it if the manufacturer stopped supporting it. Consumer routers (especially from that era) aren't known for good security (drive-by pharming etc).


You can interview for that? (Not a joke)

Very few of the job functions we rely on in my space can be reasonably assessed in 45 minutes. As a result, every interview question is indirect. We either reduce problems to what we hope are aggregate indicators (e.g. coding questions) with little to no certainty that we've correctly aggregated the skillset, use fast-search limit test questions that can spook candidates, or resort to fuzzy indicators that end up serving as pet questions.

For me, I check for things like accurate self assessment and subtle asshole cues, but largely indirectly, since everybody lies...


One response is, no you can’t. Technical interviews almost always devolve into proxy questions & those proxy questions rarely actually test for what the interviewer thinks they do.

“Tell me about your home network” is one of the odder proxy questions I’ve seen, having spent a lot of time looking at this problem.

What I’ve recommended for years is to replace technical interviews with take home work sample tests. Those also have their downsides and have recently gotten a bad name because of how poorly many people execute them, but replacing your interview process with them almost always leads to better results in my experience.


[flagged]


Well, you're quite right, but I disagree that it's a problem. Snark, sarcasm, and underhanded jabs are absolutely valid rhetorical techniques; and, if you don't like them, there are plenty of good responses in kind that will reflect on you far better than if you simply tell on them to teacher.

In case you were having difficulty guessing, I'm minded to agree with tptacek. (And not just because of my sibling comment, in which I admit that my home network is indistinguishable from that of a 90-year-old technophobe who got his not-daft-but-still-technically-ignorant grandson to set it up for him.) Is the interviewee's home network, assuming they even have one, really relevant?

If they're going to be setting up a network for you, wouldn't you rather they had experience doing something significant on a professional basis, not just setting something up at home?

If they're going to be doing something else, what relevance does any of this have?


I'm not sure what is exactly supposed to be "underhanded" about it.


It was underhanded because you were giving a compliment but meant it as an insult. Your reply here is willful ignorance, and it's done with malicious intent.

The fact that you are being defended shows that HN is an echo chamber that ultimately supports bullies like you, even if they break the rules.


Read the first paragraph after "In comments": https://hackernews.hn/newsguidelines.html. It doesn't matter if you agree; snark is not supported here.

On the other hand, it's fine if you agree with tptacek's intent, but you were more clear and civil with your point. tptacek's comment comes off as pretentious and insulting, and doesn't promote good-faith discussion.


It's strange you're so familiar with the forum's norms and guidelines and would yet start a completely off-topic, pointless meta-subthread. Those are much worse than tptacek's fairly tame response. If you disagree with the comment, you can downvote or if you feel it's that terrible, you can flag it so it gets moderator attention. If those options aren't available yet for your account, you can just participate a little longer until they are.


I wish someone had asked me about my homelab. Trying to run a homelab from a sailboat presented some unique challenges (power, no physical internet) and opportunities (low interference, environmental data). While most homelabbers are hacking on pfSense or other networking tools, I was hacking our boat's NMEA sensor network.


Another HN liveaboard! But from your wording, I have a feeling your boat was larger, with lots of people living aboard. What did you end up doing with the homelab/sensor network?


It was a great life. Now we're domesticated bluewater sailors waiting to take off again. It was a 38ft catamaran. So there were boats larger than us and many more smaller than us. For us it was the right size, comfortably took a family of 3 (+2 crew) across the Med and Atlantic.

The network was pretty tightly integrated into the boat, a mix of Seatalkng, RS485 and RS232. All connected to my PC through wifi. Sadly, my homelab suffered from saltwater exposure and high humidity. My Mikrotik router and LR wifi antenna stayed with me. Since coming back stateside I've replaced most of my gear with Azure, IFTTT, and Alexa. 3 things we didn't have access to while sailing.


At this point in the interview, I make up a story involving a FreeBSD box doing the firewall duties and segregated guest network, failover to 4G hotspot, caching proxy, fancy graphs and logs...and also pretend to be a little embarrassed about it.


What percentage have done anything beyond the default wifi-router that comes with their provider?


Who even uses the provided router? DOCSIS is a horrible spec and basically a perma-backdoor. Use your own cable modem if you can manage it with your ISP, turn off everything and pass through to your own edge firewall, and keep the WiFi router separate, preferably on a DMZ.


Who cares if the ISP controls the router? They control my connection to the Internet anyway. I just assume my LAN is untrustworthy, and authenticate & encrypt everything inside it.


No idea but I'll submit myself for counting. I replaced my provider's router with some Unifi gear and am running DHCP/DNS/Storage/Backups/Misc. off a small server running KVM. I would be excited to talk about my lab in an interview.


Here on HN likely a much higher percentage than in the general population.

In my case, the default wifi-router that came from the provider sits unplugged and idle.


I was asked this before, and I enjoyed sharing my network; I'm not sure how it related to what I was hired for. Perhaps debugging my home network provided tcpdump experience, which is always handy for a job with networking. It also led up to a load-balancing discussion, which was interesting. I'm now good friends with that interviewer, but I'm still not too sure it's a good question.

Maybe it's useful for 'can this person describe something complex they worked on' with lots of places to dig into 'how much low level networking knowledge does this person have,' but a large number of my peers use the WiFi that comes with the modem from their ISP, and don't even realize they're doing everything wrong.


My answer is a phone with a cellular plan. Would I get extra points for my minimalism?


I love this -- thank you for the tip. I'll start using this immediately :)


> I always have some random side project I am working on, whether it is making the world’s most over engineered desktop OS all running in containers

Identified the author immediately by the opening sentence. The "overengineered" CoreOS desktop was great; I wish it had become an actual project.


I'm actually very puzzled why more people don't use Mikrotik/RouterOS switches. They seem super powerful at a fraction of the cost of "enterprise switches". It almost feels like Cisco is aspirational.


Over the summer I upgraded to gigabit symmetric fibre and wanted something a bit more 'enterprise' at home that wouldn't have any issues keeping up (compared to consumer wifi router running OpenWRT I'd used before). I bought a Mikrotik RB750Gr3 "hEX" and Unifi UAP-AC-Lite - it was around €180, so the same as buying a good consumer router.

I admit I haven't had any performance issues with the router (although I'm not running any complicated routing rules), and out of the box it was set up to run as a NAT router, so it was just plug and play. However, once I started digging into it, I started reaching its limitations. As others have said, the documentation is pretty bad, and the only way to tell if something is supported by your hardware is usually to try it and see what happens (or read a 40-page thread in the forums). Here are a few things I had issues with:

- Using a USB LTE dongle as a backup WAN connection. At first the device wasn't recognised; I then connected it through a powered USB hub and it worked, but after rebooting it wasn't recognised again, though if I physically reconnected it then it worked - for a 'backup' that kind of sucked (this was later fixed in a software update).

- VLANs. I wanted some ports to be tagged on a certain VLAN no matter what the device was sending (I wanted to have a 'media' VLAN); after a while I found out the hardware doesn't support that, so I gave up on that idea. I set up the AP to have a guest network running on a separate VLAN, and wanted that to have no access to my network; after trying for a few hours (and usually locking myself out and having to factory-reset the router each time) I gave up. This was only a stop-gap anyway, as I'd planned to get a managed PoE switch.

- VPNs. I wanted to setup an outgoing OpenVPN client, and route some traffic through it based on IP (Netflix). I also wanted to setup an incoming OpenVPN server, so I could access my network from the outside. RouterOS has built in support for OpenVPN, but the features are somewhat limited, for example as a client it doesn't support certificate authentication (but as a server it does).

The Unifi was a breeze in comparison, like the Apple of networking gear, but in the same way the options are somewhat limited (and having to install an application to configure networking gear feels weird). To be honest, I liked OpenWRT better than both of these: the documentation, the ease of use, and, being a developer, I feel right at home editing configuration files. I assume pfSense would be a good choice too.


> I set up the AP to have a guest network running on a separate VLAN, and wanted that to have no access to my network; after trying for a few hours (and usually locking myself out and having to factory-reset the router each time) I gave up.

It probably took me weeks to get a working guest network setup on my MT. That is one thing I wish they'd automate. I don't regret it, as I'm partly into MT so I can learn more about networking, but it makes it hard to recommend to others who want a basic home router.

(CAPsMAN helps a little, in that you can direct it to isolate all clients on a given SSID from each other, even across APs, but that still leaves you with configuring routing. It also has a packet fragmentation problem in that mode…)

I had my own VLAN fun when I tried to configure VLANs through the switch menu. Turns out that doing so overrides port master/slave configuration with no warning. I ended up bridging my WAN and LAN ports for a few days… (Now MT is thankfully starting to move away from direct configuration of the switch chip and instead using it to transparently accelerate bridges.)
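For anyone attempting the same guest network, the rough shape of the config on modern RouterOS looks something like the following — a hedged sketch assuming the newer bridge-based VLAN approach mentioned above; interface names, addresses, and VLAN IDs are all placeholders, and older switch-chip-centric models need a different recipe:

```
# Guest VLAN interface on the bridge, with its own subnet and DHCP
/interface vlan add interface=bridge vlan-id=20 name=vlan-guest
/ip address add address=192.168.20.1/24 interface=vlan-guest
/ip pool add name=pool-guest ranges=192.168.20.10-192.168.20.250
/ip dhcp-server add interface=vlan-guest address-pool=pool-guest disabled=no
/ip dhcp-server network add address=192.168.20.0/24 gateway=192.168.20.1

# Let guests reach the internet, but drop anything aimed at the trusted LAN
/ip firewall filter add chain=forward in-interface=vlan-guest \
    dst-address=192.168.88.0/24 action=drop comment="guest -> LAN blocked"
```

The firewall rule is the part that's easy to get wrong (and the part that locks you out if you drop the wrong interface), which is presumably where those lost hours went.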


I love playing with this sort of stuff, but my home network works really well with one AP and really cheap gear. If I had a few more people using it maybe then I could justify that sort of investment.


TIL that you can run an SSH agent using an OpenPGP smartcard on a non-rooted Chromebook now.

EDIT: Tried following the linked instructions but nassh just hung while trying to connect, with no prompt from Smart Card Connector :( Oh well, at least it's possible in principle.

https://chromium.googlesource.com/apps/libapps/+/master/nass...


For all the people asking "why?" It's the same reason people pour money into home theater, cars, or any other hobby. People do this because they enjoy it.


I like to have the "perfect" setup. Of course, that definition is relative and ever-changing. In reality, I end up spending an insane amount of time adjusting, switching to different software, or trying to get the last 1% of a feature working. It's not always enjoyable :-)


Worth it for the link to Carolyn's my little pony k8s cluster, mastered by Twilight Sparkle. Obvs.

http://carolynvanslyck.com/blog/2017/10/my-little-cluster/


I'm currently doing the exact opposite: burning all my MKVs to DVD so I can have a physical collection, with kunaki.com to print the DVD case and cover.

The main issue is that DVD covers are ugly as sin, and there's no archive of the disc image that isn't a literal photo of the disc (though the case covers are scanned on http://www.cdcovers.cc/), so I've been creating my own.

Also, DVD authoring is a pain in the ass if you want soft-coded subs and multiple audio tracks.

But since my target player is a PS3, which apparently supports AVI with XSUB subtitles and MPEG-4, I'm planning to switch to that: video quality is much better for the disc size with MPEG-4 versus MPEG-2 (Kunaki only produces CD and DVD5, so a Blu-ray archive is out).


Honest question - why go the other way (unless the answer is that you acquired them with questionable legality)? I have stacks of DVDs that I bought over the last 15 years, and I'm ripping them all because I hate having the discs around and would rather have them in Plex. Going deliberately in that direction isn't something I'd find desirable, but that's only my point of view.


Same reason I have a bookshelf instead of just a list of PDFs and ebooks; it's a lot nicer to browse through a curated collection physically.

I don't like caring about individual copies though, so having a way to recreate them cheaply on loss/damage/borrowed-but-never-returned is pretty useful, IMO. I get to treat the collection both digitally and physically, as I prefer.

Of course, all the stuff I haven't watched, or like but wouldn't recommend, etc. will never leave digital. The ideal scenario is just that I can pull a DVD off a shelf when I recommend something, instead of scrambling for a USB stick or Mega or scp; where possible, physical sharing is a lot nicer than digital.


Because why wouldn't you drop multiple grand on NUCs to encode your DVDs.


It's weird the things that people do when discovering personal computing. No sense in transcoding one's DVDs (and even buying more machines to do it faster?!) when they can be torrented quicker and with higher quality even! But we've all got to start somewhere, I suppose.

The simple approach is to make a list of your physical media collection, stow it away for backup purposes, and download the collection over time. Then eventually you can even move towards torrenting the rest of your media and drop that netflix subscription that's supporting the destruction of net neutrality!


"we've all got to start somewhere" is pretty condescending.

If the author is in the united states, torrenting the videos would violate copyright (when you torrent (if you don't have uploading disabled somehow), you redistribute without authorization). Transcoding DVDs you've already bought is not distribution.


In the United States, ripping DVDs to a drive violates the DMCA. You're breaking the law either way; by transcoding you are just violating a grossly unjust law instead of a disproportionate but justifiable one. And no one will catch you for ripping discs, except possibly if you brag about it on the internet.


.... IANAL but yeah I think you're right. walp


Okay, putting aside the legality (although in Australia there's some provisions for "format shifting"), are you really happy with the "backup" quality provided by most scene groups?

I find I can do a better job (at least to my eyes) than most 4.7/8GB movie rips, and the standards for x265 aren't set yet.

Ripping your own media gives you a lot more flexibility.


If you're part of the large segment of the world population that needs subtitles to watch most movies, it can be very difficult to impossible to find well-made subtitles that match your video.


They have a 1TB ingress cap (Comcast)


Unless you pay them $50 more.


I pay $79 for uncapped gigabit fiber. I can understand why they wouldn't want to pay such a premium.


I currently have an Apple AirPort Time Capsule (latest 2 TB version with AC support), but I've been looking at Google Wifi. Honest question, why spend all the extra cash for Unifi gear? Google Wifi is mesh, and should all just work.


I had a few simple reasons:

1) PoE wired backhaul.

2) Way more management and configuration control.

3) Separability of routing, switching, and wireless.

4) Simple L2TP VPN endpoint support.

I'm very happy with my choice, and I just installed only APs at my folks' place over Thanksgiving. Simple remote management may be overkill, but I'm willing to wager that the lifecycle of this gear will be much better than the once-every-18-months dance that they've been doing with extenders, routers, etc.


I, for one, don't want google anywhere near my home network.


Range.

I have a 1930s solid-brick-built house, and a 50-metre garden with a "work shed" at the other end.

With two, I have the entire house, garden, and shed covered at a decent speed.

I wanted to find out more about the author, because the post looked interesting. I should not have started with Twitter. Why do people in this industry feel the need to wear their political agenda on their sleeve so much?

"I love writing opinionated blog posts that make dudes go all ape shit and territorial if I didn’t do it the same way as them. Think for yourselves :) there can be more than one way ya doofuses it’s all about what your personal tradeoffs are."

Is this the kind of response you want to have to people sharing their opinions on a public forum, having a productive discussion that you started?


I've been experimenting with pfSense running via KVM on my Linux workstation. Was considering buying a Ubiquiti ER-4, but I don't think I can go back.

I'm considering buying a CompuLab fitlet2 to give pfSense a permanent home, but may end up leaving it on my workstation. The increase in power consumption from leaving my workstation on 24/7 is probably negligible. But the fitlet2 seems simpler.


Had great experience with sub-$50 TP-Links + LEDE. I can hardly justify $149 and can't imagine buying one for $549.


The home lab is dead. With EC2 spot instances you can get machines for far cheaper than it costs to power them at home, unless you live in a place with super cheap power, like less than $0.08/kWh.
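The economics are easy to sanity-check with a back-of-the-envelope calculation (the $0.08/kWh threshold is from the comment; the 150 W draw and $0.12/kWh rate are illustrative assumptions):

```python
def annual_power_cost(watts: float, usd_per_kwh: float) -> float:
    """Dollar cost of running a device 24x7 for a year."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

# A 150 W rack server consumes ~1314 kWh/year:
print(round(annual_power_cost(150, 0.12), 2))  # at a typical residential rate
print(round(annual_power_cost(150, 0.08), 2))  # at the 'super cheap' threshold
```

Even at cheap rates that's roughly $9-13/month per box before hardware cost, so a small always-on cloud instance can indeed come out ahead — though the comparison ignores storage, bandwidth, and the hobby value of owning the gear.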


I don't think I'd spend the money for UAC-AC-SHD ($600?) vs. UAC-AC-PRO ($100). Maybe UAC-AC-HD ($250) if I had a lot of 4-stream devices, but enh.


I have an irrational fetish for rack-mount gear in my home lab. Are there any good rack-mount NUCs, or similar?


Some older workstations (e.g. Dell T7500) are rack mountable. If you use Windows 10, they can suspend and wake on LAN to save you power, and they should cost you less than a NUC on eBay. It's what I use: Hyper-V on a domain-joined W10 Pro box which can be remote-managed, so no monitor is necessary. It's suspended most of the time - WOL and my VMs are ready to go!


So how much does it cost to set up UniFi for the home and what are the benefits?


I'm a huge fan of using UniFi products in the home. I run a UniFi Switch 8 and UniFi UAP-AC-Pro, covering around 2300sqft. The setup cost around $250.


How quickly do they patch vulnerabilities, and what guarantees do they provide to continue doing so for the life of the product? As "sexy" as these products look, they look to me like yet another walled garden.


Ubiquiti had patches for the KRACK vulnerability before public disclosure.

https://help.ubnt.com/hc/en-us/articles/115013737328-Ubiquit...


Given that these products are catered more towards enterprise and prosumer users, I would think they'd be inclined to release patches in a timely manner. After taking a quick peek at their software changelogs going back to ~2013, they consistently release monthly patches.


Like someone above, I use an ER-X (with hw offload enabled) and a UAP-AC-PRO. Cost is about $200 total.

Rock solid reliability. I have never had to reboot my ER-X or UAP-AC-PROs except for firmware updates. You'll also get updates for a long time compared to consumer gear.


Unifi kit is relatively cheap for managed networking equipment; exact prices will depend on what country you're in.

I've got 3 Unifi APs, a 24-port switch, and a security gateway.

Personally I like managed kit 'cause it provides some insight into what's happening on your network: if you see a slowdown, what's causing it, what devices do I have connected, etc.

Additionally, Ubiquiti has a nice manager interface which makes updating the firmware pretty easy, which is nice when security patches are needed - making those easy to apply is a good thing.


My setup was pretty cheap: an EdgeRouter X for £50 and an AP AC Lite for about £70 (I think). The X is surprisingly good for such a small/low price router. Took me a couple of months to get it configured how I wanted (split ISP/VPN networks across both cabling and wifi) but it has been very solid ever since.

When I move to a bigger place I'll probably put the router behind a big PoE switch and buy some of the 48v PoE compatible APs. We use them in work and they've been very good so far.


If you’re interested in advanced home networking, a dual-NIC NUC running pfSense is going to give you a ton more options while at the same time teaching you real networking.


Jessie Frazelle is not somebody just getting started, I am all but certain that she knows about pfsense and chose to not use it.

There are a lot of reasons not to use pfSense: basically no SQM (e.g. fq_codel), everything is a giant PHP script running as root, tons of simple features (e.g. backup) have to be implemented as plugins unless you pay for commercial support, and most important of all: because it's not a normal FreeBSD system, you don't know exactly where to look when something breaks. You don't just have to find the broken config file; you have to find the script responsible for generating that broken config file. I am not a fan of pfSense.
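
For contrast, on a plain Linux router the SQM piece mentioned above is a couple of `tc` commands (interface name is a placeholder; run as root):

```shell
# Replace the default egress queue with fq_codel to tame bufferbloat
tc qdisc replace dev eth0 root fq_codel

# Inspect queue stats (drops, ECN marks, per-flow state)
tc -s qdisc show dev eth0
```

A full SQM setup would also shape to just under the uplink rate so the fq_codel queue, rather than the modem's buffer, is the bottleneck.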

It's more likely that she'd go with a simple Linux system considering the projects she's worked on, but I think it's cool that she didn't do that, either. There's something to be said for not having a bunch of different systems to maintain. Ubiquiti equipment is a bargain. You pay a one-time fee for equipment that is probably twice marked up from what it costs to make, and in return you get immediate patching of things like that recent WPA2 bug, great throughput via hardware acceleration, advanced features, etc., for a very long time. Set it and forget it.


Are there dual NIC Intel NUCs now? Last time I looked I couldn't find one.


I run a NUC as my router, with only one NIC. I use tagged vlans to separate the networks, and have a cheapo switch that understands tagged vlans. Unless you have a really nice external network connection, you don't really need multiple NICs for your router.
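
A sketch of that single-NIC layout on a Linux box (interface name, VLAN IDs, and addresses are illustrative; run as root):

```shell
# Create tagged subinterfaces on the one physical NIC:
# VLAN 10 carries WAN traffic from the modem, VLAN 20 is the LAN.
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20

ip addr add 192.168.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up

# The WAN address typically comes from the ISP via DHCP on the WAN VLAN:
# dhclient eth0.10
```

The switch then needs matching configuration: the router's port tagged for both VLANs, the modem's port untagged in VLAN 10, and the LAN ports untagged in VLAN 20.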


Nope, but there are certainly some articles discussing the use of USB NICs and their performance.

http://www.virten.net/2016/06/additional-usb-nic-for-intel-n...


You can hook up an extra NIC on the mini-PCIe slot, but that is a bit hacky.

You can also use one NIC and VLANs.


Are there any disadvantages to using a single NIC with VLANs rather than two NICs for a router?


It's also easier for misconfigurations to lead to security issues: if your switch resets to its defaults, your network is now connected directly to the internet. I had this happen after the power went out last time; luckily I'm on a PPPoE connection, so it wasn't really exposed to the net. I replaced my NUC + managed switch setup after that.


My one complaint is the lack of plug sensing: if you unplug a cable modem and plug it back in, that typically does things like triggering a DHCP request, which doesn't happen when the cable modem is plugged into a switch port on a vlan.


Mostly just bandwidth; if you have a dedicated storage device it's pretty easy to saturate a 1GbE NIC.
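
The rough arithmetic behind this: in a router-on-a-stick setup, every routed packet crosses the single physical link twice (in on one VLAN, out on another), so routed throughput tops out around half the NIC's line rate. A toy calculation:

```python
def routed_ceiling_mbps(nic_mbps: float, nics: int = 1) -> float:
    """Approximate ceiling for routed throughput in Mbit/s.

    With one NIC, routed traffic traverses the link twice, halving
    the usable rate; with two NICs each direction gets its own link.
    """
    return nic_mbps if nics >= 2 else nic_mbps / 2

print(routed_ceiling_mbps(1000))     # single 1GbE NIC -> 500.0
print(routed_ceiling_mbps(1000, 2))  # dual NIC -> 1000
```

So for a typical home internet connection well under 500 Mbit/s this ceiling never bites, which is why the single-NIC approach works fine in practice.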


Zotac makes ZBoxes with two NICs, they're about the same price.


I have one of these; they have the advantage over the NUC that they are silent and cheaper.

The only NUCs that are worth it are the i5/i7s, but they run rather warm and need a fan. (I have a first-gen i5; it's fucking awesome though.)


This is the route I’ve just started down - a NUC with as much memory as I can shove into it. Doesn’t cost a fortune to run, and is basically silent. Love it.



