Hacker News | past | comments | ask | show | jobs | submit | dmayle's comments

I run two 1.5TB Optanes in RAID-0 with XFS (I picked them up for $300 each on sale about two years ago). They're limited to PCIe 3.0 x4 (about 4 GB/s max each). I also have a 64GB Optane drive I use as my boot drive.

It's hard to give a direct comparison, because it's subjective; I don't swap back and forth between an SSD and the Optane drives. My old system has a 2TB Samsung 980 Pro NVMe drive (PCIe 4.0 x4, or 8 GB/s max) as root, and a 4TB Sabrent Rocket 4 Plus (also PCIe 4.0) as secondary, so I ran sysbench on both systems so I could share the differences. (Old system: 5950X; new system: 9950X3D.)

It feels snappier, especially when doing compilations...

Sequential reads: I started with a 150GB fileset, but it was being served from the kernel cache on my newer system (256GB RAM vs 128GB on the old), so I switched to 300GB of data. The Optanes gave me 5000 MiB/s sequential read, as opposed to 2800 MiB/s for the 980 Pro and 4340 MiB/s for the Rocket 4 Plus.

Random writes alone (no read workload): the Optane system gets 2184 MiB/s, the 980 Pro gets 32 MiB/s, and the Rocket 4 Plus gets 53 MiB/s.

Mixed workload (random read/write): the Optanes get 725/483 MiB/s, as opposed to 9/6 for the 980 Pro and 42/28 for the Rocket 4 Plus.
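As a sanity check (my own arithmetic, not part of the sysbench output): the reported reads/s and read MiB/s figures in the dumps are consistent with each other at the 16 KiB block size sysbench uses:

```python
# Throughput (MiB/s) divided by the 16 KiB sysbench block size
# should reproduce the reported operations/s for each drive.
BLOCK = 16 * 1024  # bytes

results = {
    # drive: (read MiB/s, reported reads/s)
    "2x1.5TB Optane RAID-0": (725.34, 46421.95),
    "Samsung 980 Pro":       (9.29,     594.34),
    "Rocket 4 Plus":         (42.04,   2690.28),
}

for drive, (mib_s, reads_s) in results.items():
    derived = mib_s * 1024 * 1024 / BLOCK
    print(f"{drive}: {derived:.0f} derived ops/s (reported {reads_s})")
```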

2x1.5TB Optane RAID-0: Prep time: `sysbench fileio --file-total-size=150G prepare` 161061273600 bytes written in 50.41 seconds (3047.27 MiB/sec).

    Benchmark:
    `sysbench fileio --file-total-size=150G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`
    WARNING: --max-time is deprecated, use --time instead
    sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)

    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time

    Extra file open flags: (none)
    128 files, 1.1719GiB each
    150GiB total file size
    Block size 16KiB
    Number of IO requests: 0
    Read/Write ratio for combined random IO test: 1.50
    Periodic FSYNC enabled, calling fsync() each 100 requests.
    Calling fsync() at the end of test, Enabled.
    Using synchronous I/O mode
    Doing random r/w test
    Initializing worker threads...

    Threads started!

    File operations:
        reads/s:                      46421.95
        writes/s:                     30947.96
        fsyncs/s:                     99034.84

    Throughput:
        read, MiB/s:                  725.34
        written, MiB/s:               483.56

    General statistics:
        total time:                          60.0005s
        total number of events:              10584397

    Latency (ms):
             min:                                    0.00
             avg:                                    0.01
             max:                                    1.32
             95th percentile:                        0.03
             sum:                                58687.09

    Threads fairness:
        events (avg/stddev):           10584397.0000/0.00
        execution time (avg/stddev):   58.6871/0.00
2TB NAND Samsung 980 Pro: Prep time: `sysbench fileio --file-total-size=150G prepare` 161061273600 bytes written in 87.15 seconds (1762.53 MiB/sec).

    Benchmark:
    `sysbench fileio --file-total-size=150G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`
    WARNING: --max-time is deprecated, use --time instead
    sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)

    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time

    Extra file open flags: (none)
    128 files, 1.1719GiB each
    150GiB total file size
    Block size 16KiB
    Number of IO requests: 0
    Read/Write ratio for combined random IO test: 1.50
    Periodic FSYNC enabled, calling fsync() each 100 requests.
    Calling fsync() at the end of test, Enabled.
    Using synchronous I/O mode
    Doing random r/w test
    Initializing worker threads...

    Threads started!

    File operations:
        reads/s:                      594.34
        writes/s:                     396.23
        fsyncs/s:                     1268.87

    Throughput:
        read, MiB/s:                  9.29
        written, MiB/s:               6.19

    General statistics:
        total time:                          60.0662s
        total number of events:              135589

    Latency (ms):
             min:                                    0.00
             avg:                                    0.44
             max:                                   15.35
             95th percentile:                        1.73
             sum:                                59972.76

    Threads fairness:
        events (avg/stddev):           135589.0000/0.00
        execution time (avg/stddev):   59.9728/0.00
4TB Sabrent Rocket 4 Plus: Prep time: `sysbench fileio --file-total-size=300G prepare` 322122547200 bytes written in 152.39 seconds (2015.92 MiB/sec).

    Benchmark:
    `sysbench fileio --file-total-size=300G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`
    WARNING: --max-time is deprecated, use --time instead
    sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)

    Running the test with following options:
    Number of threads: 1
    Initializing random number generator from current time

    Extra file open flags: (none)
    128 files, 2.3438GiB each
    300GiB total file size
    Block size 16KiB
    Number of IO requests: 0
    Read/Write ratio for combined random IO test: 1.50
    Periodic FSYNC enabled, calling fsync() each 100 requests.
    Calling fsync() at the end of test, Enabled.
    Using synchronous I/O mode
    Doing random r/w test
    Initializing worker threads...

    Threads started!

    File operations:
        reads/s:                      2690.28
        writes/s:                     1793.52
        fsyncs/s:                     5740.92

    Throughput:
        read, MiB/s:                  42.04
        written, MiB/s:               28.02

    General statistics:
        total time:                          60.0155s
        total number of events:              613520

    Latency (ms):
             min:                                    0.00
             avg:                                    0.10
             max:                                    8.22
             95th percentile:                        0.32
             sum:                                59887.69

    Threads fairness:
        events (avg/stddev):           613520.0000/0.00
        execution time (avg/stddev):   59.8877/0.00

Fun...

This is something I have been thinking about and researching for a while, because there is so much confusing language out there.

Your quote says over the last century, so I'm going to use roughly 1920 as the baseline. It also refers to a per-capita increase in meat consumption of 100 pounds, or about 45.4 kilograms (to make the math easier). This is roughly an increase of 124g of meat per person per day (or about 4.4oz if that makes more sense to you).

This equates to a daily increase in per-capita protein intake of 25-30g (depending on which meat and how lean it is).

In 1920, the average American adult male was about 140 pounds, and ate about 100g of protein per day, which works out to roughly 0.71 grams per pound of body weight (or about 1.6 grams per kilogram).

In 2025, one century later, the average American adult male is 200 pounds, and if he ate the same ratio of protein to body weight, you would expect him to eat around 140g of protein per day, which is slightly higher than the increase in per-capita meat consumption over the same period.

However, if you look at actual statistics of what people are eating, you'll see that the average American adult male is actually eating about 97g of protein per day, or about 0.49 grams per pound (1.1 grams per kg), which is much less than we ate a century ago. That means the increase in meat consumption doesn't match the change in protein, so it is offset by either less non-meat protein, meat with lower protein content (e.g. more fat), or both.

There was some discussion lower in the thread about bodybuilders vs normal people, and about basing your calculations on lean body weight vs full bodyweight. Lean body weight calculations are often used for bodybuilders, but those numbers are elevated (typically 1 gram of protein per pound of lean body weight). For someone who is sedentary to lightly active (e.g. daily walks), the calculation is based on full body weight, not lean body weight, and is about 0.7 gram per pound (or 1.5 grams per kilogram), which matches this recommendation exactly.

Hitting these targets has been shown to greatly increase satiation and reduce appetite, but it does not by itself make you lose weight, and the effect is not permanent (reducing your protein intake removes it, which makes sense). However, long-term studies show that people who increase their protein intake to these levels and lose weight (through calorie reduction or fasting) keep that weight off.

Finally, from what I've been able to cobble together, a high protein intake combined with high fat and high sugar intake does not have the same effect as a diet that matches the recommendations here (i.e. it's not just about higher protein intake, it's about the percentage of calories from protein, which should be around 20-25%: a 200 pound, sedentary to lightly active adult male eats 140g of protein, or 560 calories, in a total diet of 2250-2800 calories, depending on activity level).
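The arithmetic above can be checked directly (numbers from the comment; 4 kcal/g is the standard protein conversion):

```python
weight_lb = 200          # sedentary-to-lightly-active adult male
g_per_lb = 0.7           # recommendation discussed above
kcal_per_g_protein = 4   # standard Atwater factor

protein_g = weight_lb * g_per_lb               # 140 g/day
protein_kcal = protein_g * kcal_per_g_protein  # 560 kcal

for total_kcal in (2250, 2800):
    share = protein_kcal / total_kcal
    print(f"{total_kcal} kcal diet: {share:.0%} of calories from protein")
```

This reproduces the 20-25% range quoted above.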


The sad part of all of this was that the company that does this tried to poach me back in 2013 or 2014, but I was disgusted by the practice, so I refused to even interview.

Since then, I've made sure every single TV I own has this turned off (I go through the menu extensively to disable it, and search on Google and Reddit when it's not obvious how, as was the case with Samsung).

I have an LG Smart TV, and just a week or two ago I was going through the settings and found Live Plus enabled, which means either they renamed the setting (and defaulted it to on), or they overrode my original setting.

Either way, I'm super annoyed. I want to switch to firewalling the TV and preventing any updates, but I need a replacement streaming device to connect to it.

Does anyone have recommendations for a streaming device to use (presumably one with HDMI CEC, that supports 4k and HDR)? I use the major streaming services (Netflix, Prime, Hulu, Apple TV) and Jellyfin.


An Apple TV 4k: https://www.apple.com/apple-tv-4k/

It will just work. You will occasionally get an ad or two from Apple about Apple services, but it's very rare and easy to ignore.

Otherwise you only get ads if your service (Netflix, etc.) delivers ads.

Apple won't share your data with anyone, and generally does a fairly decent job (compared to other giant tech companies) of not collecting much.


The only acceptable number of ads is zero.


Good luck with that, no company anywhere offers no ads.


Which is why things like The Pirate Bay remain popular.


My main peeve with the Apple TV (device) is that the home button keeps sending me into Apple TV (App) instead of to the main screen.

I have to click it twice to get back to the home screen.


As also commented, within the device settings you can change the behaviour to be a home button.

You should also be able to hold the ‘menu’ or ‘<’ button, depending on which remote you have, to go directly to the home page.


Just dig into the menu; that's an option, if I remember correctly.


I used to manage a team working on the news feed at Facebook (main page).

We did extensive experimentation, and later user studies to find out that there are roughly three classes of people:

1) Those that use interface items with text
2) Those that use interface items with icons
3) Those that use interface items with both text and icons.

I forget the details of the user research, but the mental model I walked away with is that these items increase "legibility" for people, and by leaving either off, you make that element harder to use.

If you want an interface that is truly usable, you should strive to use both wherever possible, and when you can't, try to economize in ways that reduce the mental load less (e.g. grouping interface elements by theme and cutting text or icons from only some of the elements in that theme, so that some of the extra "legibility" carries over from other elements in the group).


Sounds like me:
1. For a new UI/tool, I depend on text to navigate.
2. Once I'm more familiar, I scan using icons first, then text to confirm.
3. With enough time, I use just icons.
4. Why the ** do they keep moving it/changing the icons?


Hooray, actual user research and data!! This is what I tell all my clients: "We can speculate all day long, but we don't have to. The users will tell us the correct answer in about 5 minutes."

It's amazing that even in a space like this, of ostensibly highly analytical folks, people still get caught up arguing over things that can be settled immediately with just a little evidence.


> Those that use interface items with icons

This is the bane of my existence since icons aren't standardized* and the vast majority of people suck at designing intuitive ones. (*there are ISO standard symbols but most designers are too "good" to use them)


Cite the cliché: the only intuitive user interface is the nipple; everything else is learned.

Having done my share of UI work, my value system transitioned from esthetics to practicalities, such as "can you describe it?" Because a siloed UI, independent of docs, training, and tech support, is awful.

All validated by usability testing, natch. It's hard to maintain strong opinions about UI after users shred your best efforts. Humiliating.

Having said all that... If stock icons work (with target user base), I'm all for using them.

PS I do have one strong opinion: less is more.


I recently learned about the (ancient?) greek concept of amathia. It's a willful ignorance, often cultivated as a preference for identity and ego over learning. It's not about a lack of intelligence, but rather a willful pattern of subverting learning in favor of cult and ideology.


The only actual problem with cheating is leaderboards.

When you have accurate matchmaking, you will be playing against other players of a similar skill level. If you were playing in single-player mode, it wouldn't bother you that some of the players were better than others.

Whether the person you're playing against is as good as you because they have aim assist, while you have a 17g mouse and twitch reflexes shouldn't matter. You're both playing at equivalent skill levels.

The only reason it matters to anyone is that they want their skills to be recognized as better than someone else's. Take down the leaderboards, and bring back the fun.

I say, let the people cheat.


Comments like this just make me upset to the point I can't cohere an appropriate argument. It's so out-of-touch with reality and completely ignores the core problem that I have to believe you're just fucking with us.

No, it is not fun to play against smurf accounts using hacks. They aren't doing it for the leaderboards, they actively downrank themselves to play against worse players!

And no, it's not fun to play against cheaters who are so bad at situational awareness their rank is still low, but who instantly headshot you in any tense 1v1 and ruin your experience.

And no, I actually do care that people are cheating in multiplayer games because it's not fair. Since when do we reward immoral fuckwits who can't or won't get better at the game?

Why don't we just start letting basketball players kick each other and baseball players tar their hands while we're at it. Who cares if the sanctity of the sport or competition is ruined - we're a community of apathetic hacks.


I play online FPS with friends for fun. I don't care about leaderboards, but I know people that do and don't want to take them away from them.

You can't have accurate matchmaking and allow cheating. People cheat for a variety of reasons; a lot of cheaters are just online bullies that enjoy tormenting other players. In low ELO lobbies, you would have cheaters with top-tier aim activated only when they lose too much, making the experience very inconsistent.

Top-tier ELO would revolve around how the server handles peeker advantage and which cheater has the fastest cheating software. It's an interesting technical challenge, but not a fun game. As soon as a non-cheating player is in view of a cheating player, the non-cheating player dies. That doesn't make for a fun game mechanic.


>"Top-tier ELO would revolve around how the server handles peeker advantage and which cheater has the fastest cheating software. It's an interesting technical challenge, but not a fun game"

Fun fact, this does exist. There used to be old CS:GO servers that were explicitly hack v hack, would make it abundantly clear to any new visitors that stumbled upon the servers that you would NOT have any fun without a "client", and it was a bunch of people out-config'ing each other. It was actually kinda cool for those people, it would NEVER be fun for anyone else.


Pretty sure HvH is still alive and well in CS2 and high rank Premier is still basically Valve-hosted HvH.


Try playing Rust without anti-cheat and you will immediately change your tune. It isn't fun playing a game where you can lose everything to a guy who can cause bullets to bend around objects.


>"Try playing Rust [...] and you will immediately change your tune."

In general, really.


Yea, that's one game that's more fun to watch than play I will admit, so mostly I'm a "pro rust watcher with over 300 hours watching rust" (this is a bit of an in-joke, sorry) who sees the annoyance and lack of fun people have when they get destroyed by cheaters. I did play one wipe, and spent 25 hours over 3 days in the game, so I chose to quit right there instead of doing that on a regular basis.


Oh for sure, and even now I occasionally watch videos as background noise during work - but it's just not a fun game for the vast majority of times I log in. I'm about 175 hours in now; and for the one time a month I play, I can't be assed joining anything besides a 3x anymore with a wipe at least 10 days away, it's just too much sunk cost and wasted time even getting a 2x1 down in official or vanilla servers.

It's fun when I get something down (and fully use a starter kit, if a server has it) and have neighbors to dick around with, or there's not enough traffic to actually enjoy landmarks and underground, or occasionally when I meet someone that's down to team up or just talk on mic while pacing around one of our bases. But when that doesn't happen, I'm just not having fun haha


I am a casual watcher of Willjum' Rust single and cooperative plays whenever it rarely pops into my YT feed.


It's almost a meme in Willjum's comments that people watch Willjum's channel but don't play Rust!

I am also a WJ watcher.


There was plenty of cheating when there were no global leaderboards. People will happily do things just to ruin other people's day.


Not just single player. Even in competitive multiplayer a lot of the complaints about "cheating" are actually complaints about matchmaking, and "cheating" is a giant red herring (griefing is a different matter, of course, that gets lumped into the umbrella term of "cheating"). But trying to explain this is typically like pissing against the wind, because people already believe in the existing status quo (no matter how irrational it is) and no one wants to change their beliefs unless it obviously and immediately short-term benefits them.


At least in the world of chess (which has the OG matchmaking system, ELO), cheating is genuinely a problem.

The problem is that it doesn't matter how good you are: you will not beat a computer. Ever. Playing against someone who is using a computer is just completely meaningless. Without cheating control, cheaters would dominate the upper echelons of the ELO ladder, and good players would constantly be running into them.
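The Elo mechanics the thread keeps referring to are easy to state (a minimal sketch; K=32 is a common but arbitrary choice):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float, k: float = 32) -> float:
    """New rating for A after a game (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A player who wins every game gains rating each time, so an unbeatable
# cheater keeps climbing until they sit at the top of the ladder --
# exactly where the strongest honest players have to meet them.
print(update(1500, 1500, 1.0))  # equal ratings, win: 1516.0
```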


> Without cheating control, cheaters would dominate the upper echelons of the ELO ladder, and good players would constantly be running into them.

...and, even worse, if they ever got to the very top of the ladder and started only playing against other cheaters, then they'd actually weaken their cheats so that they could drop down in ranking to play against (and stomp) non-cheaters again, and/or find creative ways to make new accounts.

Cheaters ruin games. The fact that the GP is so deluded as to claim that "The only actual problem with cheating is leaderboards." suggests that they've never actually played a competitive matchmade game on a computer before.


I worked at a company once that didn't use any anti-cheat. I once asked why, and they said the matchmaking system solved the problem for them. The matchmaking was good enough that cheaters only ever played with other cheaters, and it kept the numbers up.

Honest players never really complained, so I guess it worked for them.


This is fine if you are low level, because the cheaters will be too good to play in the low level games.

If you are in the higher skill levels, you might end up playing too many cheaters who are impossible to beat. If the cheat lets you be better than the best human players, the best human player will end up just playing cheaters.


> If you are in the higher skill levels, you might end up playing too many cheaters who are impossible to beat.

It's almost kind of worse than this. If you are in higher skill levels, you end up getting matched with cheaters who lack the same fundamental understanding of the game that you do and make up for it with raw mechanical skill conferred by cheats.

So you get players who don't understand things like positioning, target priority, or team composition, which makes them un-fun to play with, while the aimbots and wallhacks make them un-fun to play against.

And as a skilled player, you are much better equipped to identify genuine cheaters in your games. Whereas in low skill levels cheaters may appear almost indistinguishable from players with real talent so long as they aren't flat out ragehacking with the aimbot or autotrigger.


You're saying it's not a problem when cheaters completely ruin the experience for top-10% players.


I've been using an 8k 65" TV as a monitor for four years now. When I bought it, you could buy the Samsung QN700B 55" 8k, but at the time it was 50% more than the 65" I bought (TCL).

I wish the 55" 8k TVs still existed (or that the announced 55" 8k monitors were ever shipped). I make do with 65", but it's just a tad too large. I would never switch back to 4k, however.


What do you watch on an 8K TV?

There's no content

The average bitrate from anything that's not a Blu-ray is not good even for HD, so you do not benefit from more pixels anyway. Sure, you are decompressing and displaying 8K worth of pixels, but the actual resolution of your content is more like 1080p, especially in color space.

Normally, games are the place where arbitrarily high pixel counts could shine, because you could literally ensure that every pixel is calculated and make real use of it, but that's actually stupidly hard at 4k and above, so nvidia just told people to eat smeary and AI garbage instead, throwing away the entire point of having a beefy GPU.

I was even skeptical of 1440p at higher refresh rates, but bought a nice monitor with those specs anyway and was happily surprised with the improvement, but it's obvious diminishing returns.


>There's no content

This is exactly why 8K TVs failed in the market, but the point here is that your computer desktop is _great_ 8K content.

The TVs that were sold for sub-$1000 just a few years ago should be sold as monitors instead. Drop the TV tuners, app support, network cards and such, and add a DisplayPort.

Having a high-resolution desktop that basically covers your useable FOV is great, and is a way more compelling use case than watching TV on 8K ever was.


What standard reliably works to drive 8K at 60 Hz, and how expensive are the cables?

How far away do you sit from it? Does it sit on top of your desk? What do you put on all this space, how do you handle it?

I don’t think you’re maximizing one browser window over all 33 million pixels.


HDMI 2.1 is required, and the cables are not too expensive now.

For newer GPUs (Nvidia 3000 series or later, or equivalent) and high-end (or M4+) Macs, HDMI 2.1 works fine, but Linux drivers have a licensing issue that makes HDMI 2.1 problematic.

It works with certain Nvidia drivers, but I ended up getting a DP-to-HDMI 8K cable, which was more reliable. I think it could work with AMD and Intel too, but I haven't tried.

In my case I have a 55 and sit normal monitor distance away. I made a "double floor" on my desk and a cutout for the monitor so the monitor legs are some 10cm below the actual desk, and the screen starts basically at the level of the actual desk surface. The gap between the desk panels is nice for keeping usb hubs, drives, headphone amps and such. And the mac mini.

I usually have reference material windows upper left and right, coding project upper center, coding editor bottom center, and 2 or 4 terminals, Teams, Slack, and mail on either side of the coding window. The center column is about twice as wide as the sides. I also have other layouts depending on the kind of work.

I use layout arrangers like fancyzones (from powertoys) in windows and a similar mechanism in KDE, and manual window management on the mac.

I run double scaling, so I get basically 4K desktop area but at retina (ish) resolution. 55 is a bit too big but since I run doubling I can read stuff also in the corners. 50" 8K would be ideal.

Basically the biggest problem with this setup is it spoils you and it was only available several years ago. :(
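For context on why HDMI 2.1 is the cutoff (my back-of-the-envelope numbers, not from the comment, and ignoring blanking overhead): uncompressed 8K60 RGB sits right at HDMI 2.1's 48 Gbit/s FRL ceiling, and 10-bit color pushes past it, which is why older standards can't drive this resolution and deeper color depths rely on DSC:

```python
width, height, fps = 7680, 4320, 60
pixels = width * height            # ~33.2 million pixels per frame

for bits_per_pixel in (24, 30):    # 8-bit and 10-bit RGB
    gbit_s = pixels * fps * bits_per_pixel / 1e9
    print(f"{bits_per_pixel} bpp: {gbit_s:.1f} Gbit/s (HDMI 2.1 FRL max: 48)")
```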


What is the model number and how has the experience been?

I've mostly read that TVs don't make great monitors. I have a TCL Mini LED TV which is great as a TV, though.


Definitely not useless!

I run a ttyd server to get a terminal over HTTPS, and I have used carbonyl over that to get work done (to access resources not exposed via the public internet). But that's limited to a web browser, so having full GUI support is very useful.


I am truly shocked by this.

I looked it up, and it turns out you're right. Both the iPhone 17 and the iPhone Air use USB2.

USB3 was introduced in 2008 (!!!). That is 17 years ago.

I already wasn't interested in this tech, to be fair, but I've had to support family phones synchronizing/backing up over the cable, and even at full theoretical speed, the transfer takes over an hour vs. just under 7 minutes. Which, considering the flash most likely supports the read in under a minute, is crazy.
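Those rough times check out (my own arithmetic; the 256 GB backup size is an assumption, not from the comment, and real-world USB throughput is lower than the line rate):

```python
backup_bytes = 256e9                      # assumed backup size

rates = {
    "USB 2.0 (480 Mbit/s)": 480e6 / 8,    # bytes/s, theoretical line rate
    "USB 3.0 (5 Gbit/s)":   5e9 / 8,
}

for name, bytes_per_s in rates.items():
    minutes = backup_bytes / bytes_per_s / 60
    print(f"{name}: {minutes:.1f} minutes")
```

That works out to roughly 71 minutes over USB 2.0 vs. under 7 minutes over USB 3.0, matching the "over an hour vs. just under 7 minutes" comparison above.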


Which is literally what Apple announced in this video:

"and the 2x telephoto has an updated photonic engine, which now uses machine learning to capture the lifelike details of her hair and the vibrant color of her jacket"

"like the 2x telephoto, the 8x also utilizes the updated photonic engine, which integrates machine learning into even more parts of the image pipeline. we apply deep learning models for demosaicing"


They've been using that terminology for like a decade. They take multiple photos and use ML to figure out how to layer them together into a final image where everything is adequately exposed, and applies denoising. Google has done the same thing on Pixels since they've existed.

That's very different from taking that final photo and then running it through generative AI to guess what objects are. Look at the images in that article. It made the stop sign into a perfectly circular shiny button. I've never seen artifacting like that on a photo before.

