
I went to a new dentist when I was 18 because my regular dentist was unavailable. He declared that I had two cavities and filled one, but didn't use enough anesthetic. Given my bad experience with him, I went back to my regular dentist, who was flabbergasted that there was supposedly a cavity in the other tooth. No sign of any decay.

These kinds of experiences are why I try to vet a new dentist very hard before trusting them, even going so far as to get a second opinion if the new one finds anything.


When I was 12, I was scheduled by my regular dentist to have two cavities filled. It was the first time I had anything negative in a dental checkup. We were very poor, so my dad was pissed that it was going to be almost $400 to get them filled. He found a different dentist that was supposed to be a bit cheaper, and I went to that one instead. He was shocked to hear that I had been scheduled for two fillings. Since I was a new patient, he did x-rays, which showed zero decay. The dentist that lied about me having cavities is still in practice today more than 20 years later, and has 4.5 stars on Google.

I fear there's not really a good way to vet dentists effectively, since most people probably never find out that they've been scammed for years. I'd love to learn some new strategies though.


> I fear there's not really a good way to vet dentists effectively, since most people probably never find out that they've been scammed for years. I'd love to learn some new strategies though.

It's something the government should be doing: running sting operations against dentists with complaints against them. Unfortunately, dentists and prosecutors are in the same social circles.


I like the idea of that proposal, but I'm not sure how it would work in practice.

The problem is that the crooked dentist will argue that the "bait" patient does in fact have cavities. And then, if the prosecutor somehow finds convincing evidence that the patient does not actually have cavities, the crooked dentist can change tactics and say it was an honest mistake on their part.

With other crimes where "sting operations" work, the situation is much more clear-cut. The target of a drug sting is either selling drugs or not selling drugs. If you find drugs, you can easily prosecute them. With the dental scam, even if you manage to catch them red-handed once, it is still a long and complicated process to prove it was a scam and not a mistake.

Alternatively, we could legislate to make mistaken dental diagnoses illegal, the same way possessing large batches of drugs is illegal. That would make prosecution easier, but it would have all kinds of other negative consequences.


It could be done. You do a first pass on a significant number of dentists, using patients with confirmed healthy teeth, and then a second pass on every dentist who recommends fillings. Caught scamming twice? License suspension. Repeat offender? Jail time. The odds of such a program putting an innocent dentist in jail gotta be near-nil.
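Roughly speaking, and treating the per-exam misdiagnosis rate p as a purely illustrative assumption rather than a measured number, the chance of an honest dentist failing two independent stings on verified-healthy teeth is

    \[
      P(\text{two false positives}) \;=\; p^2
    \]

So even a generous assumed p = 0.05 gives p^2 = 0.0025, roughly 1 in 400, before any appeal or third review.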

There's no shortage of people who would damn-near volunteer for the work, given how many of us have had multiple run-ins with crooked dentists.

Even if it cost, say, $10k to catch each scumbag dentist, the ROI to society would be tremendous. Catch enough dentists in the space of a few months, apply appropriate consequences, and the whole culture will change.

That said, in reality the dentists would 'hire lobbyists' to kill anything like this.


Hmmmm. I like this.

Could it be done as a non-profit perhaps? I mean we clearly can’t do the licence suspension and the jail time that way but we could maybe shame the scammers?


Hahaha I love it. Extinction Rebellion but for dentists? ... How about "Extraction Action"?

The main issue I foresee is that there's just so much to be outraged about in the world now. People are using shameful behavior to distract from worse shames - it's been weaponized!

So, making a significant impact in a crowded 'shame market', across a tightly controlled media landscape, generally requires a highly skilled dedicated team.

I've been wondering about this problem since this thread started; specifically, wouldn't it be rather easy to use big data techniques to catch dentists who are way outside the norms?

And after looking into it, it seems that insurance companies, Medicaid etc have been doing exactly this, and catching some pretty big fish. It's new for me to give insurance companies much credit for doing anything good, and I feel weird now. Real enemy of my enemy stuff.


> dentists and prosecutors are in the same social circles

What? Are there dentist/prosecutor cocktail parties us software grunts are missing out on?


I wonder if there is room for a service that just performs X-rays and passes them through some kind of AI model as a kind of "dental fizz-buzz." Surely they wouldn't have any perverse incentives in that case.

Oh man. I would dearly love to see scumbag dentists lose their ability to easily scam vulnerable people, often desperate and in pain.

That said - I'm sure there are tight regulations on who is allowed to X-ray teeth, and diverse ways to keep their own in line should they upset the apple cart.

Ever heard of nano silver fluoride? ... Exactly. (Unless you saw the HN story on it here recently [0].)

0 - https://news.ycombinator.com/item?id=41474080


You’d think they were breaking a law or something, recommending unnecessary medical procedures.

Exactly the same thing happened to me when I was in my teens, and the next thing I knew my dentist had drilled most of my good teeth in the name of "cavities." I'm in my 40s now and I am still paying for it, as I have to keep visiting a dentist every year because of my constantly breaking fillings. I have paid a lot out of pocket, and the insurance has paid a lot on my behalf to the dentists.

Each filling nets the dentist $100+, and each patient becomes a repeat customer who serves the dental industry for life. There are no ethics in this space, and it's unfortunately a BIG SCAM.


I instantly distrust all dentists. I assume most of them are involved in organized crime.

> doesn't use enough anesthetic

My school dentist always botched the anesthesia, and afterwards I had to grind my teeth for three days to make them fit together again.

I never told anyone because adults kept saying dentistry hurts so I assumed it was normal. I didn't realize how fucked up this was until I went to college and experienced a competent dentist for the first time.


Not sure if the anesthetics have got better or if it's just a skill issue, injecting it in precisely the right place?

I had problems way back in the 90s with them not working too well on me. But my current dentist gets it perfect every time - properly numb very fast, but remaining fairly localised.


I had a similar experience -- went to a new dentist, they found two "cavities", tried to hard-sell me into getting them filled right then. I declined, never went back to that practice, and 10 years later my teeth are perfectly fine.

#notalldentists, of course, but there are certainly unscrupulous ones out there, and not just a few.


I had a rougher life in my 20s.

I once went to a dentist and they told me they want to pull 13 of my teeth and give me dentures.

I knew they were in bad shape, but this absolutely frightened me. Four of them were my wisdom teeth, but I still thought it was nuts.

15 years later, I still have all of the 13 they wanted to pull.

I did lose two unrelated molars and the matching wisdom teeth basically slid into place replacing them. Then two root canals + crowns.

That experience turned me off dentists for a long time.

My current dentist is great. They do all they can to save a tooth and only extract as a last resort.


This is fascinating!!

So the molars are a back up replacement system?


That was apparently their purpose back when we lost teeth more regularly - it's only with modern dentistry that they've become a problem for being "extra".

It was not an intentional replacement or anything the dentist did. I had my back molars pulled, and then a few years later, while trying to start flossing again, I noticed they were back and basically in the same position. For a moment I wondered if my memory was failing. It was just such a slow, long process.


Exact same thing happened to me as well. I now travel very far to go to a dentist that I can trust. It never even crossed my mind that this would be a possibility when I was younger.

I had this experience a couple years ago. I hate dentists man

but you don't hate dentists woman? /jk

I am pretty sure I've had cavities taken care of that were not cavities. Ultimately it's a small procedure and nets the dentist a few hundred bucks - and the patient can't be bothered to get a second opinion.

Maybe this is where AI helps with analysis of x-rays. Is there really an urgent issue? Or can it wait?


I'm unclear on who is using AI in this scenario. Are you going to use your own AI on your X-rays, or expect the dentist to use a new tool that tells them not to do the procedure that would earn them more money?

Probably the latter scenario. It's a hypothetical - but it could happen if health records move increasingly online and if enough patients demand that level of control.

Insurers could potentially require specific tools be used to cover a procedure.

The dentist earns a few hundred, and the patient's health is permanently damaged if the tooth was, in fact, healthy.

Maybe insurance companies would be interested in AI review on the basis of future costs. Informed patients might be, too.


Unfortunately, it'll just be great for insurance to deny necessary prescriptions/procedures/scans because an AI review found nothing wrong.

Some dentists want to fill "crevices" that may become a problem later, others wait until there is a problem. I've been fortunate to mostly have dentists that were happy to just do the semi-annual cleaning and annual xrays and nothing more than that unless I had a complaint or they spotted obvious decay.

I've been to some awful, painful, overly-expensive dentists that acted more like car salesmen than dentists.

I finally found an honest one that prioritizes my comfort and doesn't charge me an arm and a leg, and I've been a customer for over 20 years.

I moved to a different city a few years ago, but I will still drive 1.5 hours in traffic to go see my dentist (I'll try to book outside of rush hour though).

I did try one dentist close to my new house, and it was awful. It reinforced my confidence in my regular dentist. Never going to anyone else as long as I live.

Once you find a good one, stay with them as long as you can. Not all dentists are the same, it's no joke, some are just there to rip you off.


Dentists rushing with the anesthetic is my biggest pet peeve with them. They always try to blame you, saying you're special and need extra. I know there is some element of that, but it's mostly rushing.

Not had that myself, but have known people in the UK reporting the same.

Me, I had mine "professionally cleaned" for the first time in my life about 9 months back, and they've felt permanently a bit off ever since.


you must have loads of money to be going around getting multiple opinions

isn't that exactly what insurance is for?

I do wonder if PC desktops will eventually move to a similar design. I have a 7800x3d in my desktop, and the thing is a beast, but between it and the 3090 I basically have a space heater in my room.

A game I play with friends introduced a Mac version. I thought it would be great to use my Apple Silicon MacBook Pro for some quiet, low-power gaming.

The frame rate wasn’t even close to my desktop (which is less powerful than yours). I switched back to the PC.

Last time I looked, the energy efficiency of nVidia GPUs in the lower TDP regions wasn’t actually that different from Apple’s hardware. The main difference is that Apple hardware isn’t scaled up to the level of big nVidia GPUs.


I sincerely believe that the market for desktop PCs has been completely co-opted by gaming machines. Gamers do not care one whit about machine size or energy efficiency; they have only one concern in mind: bare performance. This means they buy ginormous machines with incredibly inefficient CPUs and GPUs, with cavernous internals to chuck heat out and no care for decibels.

But they spend voraciously. And so the desktop PC market is theirs and theirs alone.


Desktop PCs have become the Big Block V8 Muscle Cars of the computing world. Inefficient dinosaur technology that you pour gasoline through and the output is heat and massive raw power.

Desktops are actually pickup trucks: very powerful, capable of everyday tasks but less efficient at them. Unbeatable at their specialty, though.

Yeah. It's been the case for a while now that if someone just wants a general computer, they buy a laptop (even commonly a mac).

That's why the default advice if you're looking for 'value' is to buy a gaming console to complement your laptop. Both will excel at their separate roles for a decade without requiring much in the way of upgrades.

The desktop pc market these days is a luxury 'prosumer' market that doesn't really care about value as much. It feels like we're going back to the late 90's, early 2000's.


Unless you play games where you stare at the map while balancing Excel spreadsheets.

That's okay, Factorio has awesome Apple Silicon support.

What about Paradox games? genuinely curious about that.

I played a bunch of EU4 and HOI4 without any issues. But I think those use emulation under the hood.

That's the thing with macs, all the strategy games tend to release there because the market for mac users and strategy gamers is a circle.


Stellaris is great on my M2

Well, because that's the audience that upgrades before something breaks, and it also lets you capture the high-end market of professionals.

The price of a high end gaming pc (7800x3d and 4080) is around 2k USD. That's comparable to the MacBook Pro.

Yeah sure, if you start buying unnecessary luxury cases, fans, and custom water loops, it can jump up high, but that's more for clueless rich kids or enthusiasts. So I wouldn't call PC gaming an expensive hobby today, especially considering Nvidia's money-grubbing practices, which won't last forever.


It would make sense, but it depends heavily on Windows / Linux support, compatibility with nvidia / amd graphics cards, and exclusivity contracts with Intel / AMD. Apple is not likely to make their chips available to OEMs at any rate, and I haven't heard of any 4th party working on a powerful desktop ARM based CPU in recent years.

I just bought a Beelink SER9 mini pc, about the same size as the Mac Mini. It's got the ridiculously named AMD Ryzen AI 9 HX 370 processor, a laptop CPU that is decently fast for an X64 chip (2634/12927 Geekbench 6 scores) but isn't really competition for the M4. The GPU isn't up to desktop performance levels either but it does have a USB4 port capable of running eGPUs.

It would be nice. Similarly have a 5950X/3080Ti tower and it’s a great machine, but if it were an option for it to be as small and low-noise as the new Mini (or even the previous mini or Studio), I’d happily take it.

For what it's worth, I'm running that with open-loop water cooling. If your chassis has the space for it, a rig like mine won't even need to turn on its fans for large parts of the day. (The loop was sized for a Threadripper, which wasn't really around for home builders at the time.) Size is an issue, however :)

That 3090 uses about 5x more power than the 7800x3d.

After having my PC for (almost) 4 years, I can say that this beast is the last large form computer I will buy.

I get where both sides are coming from.

On the one hand, buying a console and a reasonably spec'd laptop is clearly the better value. I did this during college, and both my laptop and my console lasted about a decade without requiring any upgrades. I did this again with the PS4. You wind up spending far less than you otherwise would trying to keep a gaming PC reasonably up to date, and both devices are optimized for their usage.

On the other hand? At some point most of us will realize that we've been successful enough that we don't have to optimize for value, and we can choose 'all of the above'. I now have a PS5 AND a stupidly overspec'd AI / gaming desktop. I've enjoyed having both.


Part of it is reflexes too. I used to love fast-paced FPS games as a teen and was actually pretty good at them until my early 30s. As time went on, I started noticing I was doing consistently worse in 1v1 firefights. I started gravitating towards games that had a 'slower' way to contribute, like playing vehicles in the Battlefield series.

As time goes on I've gotten more and more into single player games, especially games that let me build stuff.


>Laptops in general are just better than they used to be, with modern CPUs and NVMe disks.

I've had my XPS 13 since 2016. Really the only fault I have with it nowadays is that 8GB of RAM is not sufficient to run IntelliJ anymore (hell, sometimes it even bogs down my 16GB MBP).

Now, I've also built an absolute beast of a workstation with a 7800x3d, 64GB RAM, 24GB VRAM, and a fast SSD. Is it faster than both? Yeah. Is my old XPS slow enough to annoy me? Not really. YouTube has been sluggish to load / render lately, but I think that's much more Google making changes that worsen the Firefox / uBlock experience than any fault of the laptop.


Regarding YouTube, Google is also waging a silent war against Invidious. It's at the point where even running helper scripts to trick YouTube isn't enough (yet). I can't imagine that battling active and clever adversaries speeds up YouTube page loads as it runs through its myriad checks to block Invidious.

I am excited for 8k monitors in the future, because they give you a lot more options for integer scaling than current 4k displays.

I know this is a nerdish hill to die on, but I hate fractional scaling with the blazing fury of a thousand suns. To get a 1440p-sized UI on a 27" 4k display, you can't just divide by 1.5; the OS has to render at 3x and scale down by 2 for every frame. OS X does this best, as they've had retina displays for a while, but no OS does it well, and it leads to all sorts of performance issues, especially when dealing with viewports. Linux is especially bad.
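A rough sketch of the pixel math, assuming the macOS-style approach of rendering a 2x backing store of the logical resolution and resampling it down to the panel (other OSes differ in the details; the resolutions are just the 27" 4k example above):

    # Hedged sketch: cost of a "looks like 1440p" mode on a 4k panel.
    panel = (3840, 2160)      # native 4k pixels
    logical = (2560, 1440)    # desired UI size ("looks like" 1440p)

    # Fractional path: render a 2x backing store of the logical resolution,
    # then resample the whole frame down to the panel.
    backing = (logical[0] * 2, logical[1] * 2)   # 5120 x 2880
    backing_px = backing[0] * backing[1]         # 14,745,600 pixels rendered
    panel_px = panel[0] * panel[1]               # 8,294,400 pixels actually shown

    print(f"rendered vs shown: {backing_px / panel_px:.2f}x")  # ~1.78x
    # plus a non-integer resample every frame, which is the part that hurts
    # in viewports and on weaker GPUs

So roughly 1.8x the pixels get rendered compared to a clean 1:1 or 2:1 mode, before the per-frame downscale even starts.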

Having said all that, I absolutely will not be using an 8k TV as a display. I'm currently using a 27" 1440p monitor, and while I could probably handle a 32" 8k display, that is the absolute max size I'd tolerate. You start to get into all sorts of issues with viewing distance and angle going larger.

My 27" 1440p is fine for now. I sit far enough away from it that I don't really 'see the pixels' unless I go looking for them. It was also a crazy good deal as it's a 144hz monitor that also has a built in KVM switch that's very useful for WFH.


I am curious as to which OSes you've tried. Fractional scaling is flawless on Windows and KDE 6 with Wayland, in my experience.

I wouldn't describe any OS as 'flawless'; they're all doing what I described under the hood. Qt does have better support than GTK at the moment. I've also seen bad behavior on Windows, especially with older apps. OS X is about the best out there, but even it can have issues with applications that have a viewport (i.e. video editors, etc.).

I'd prefer to skip all that so I'm happy staying on 1440p until 8k monitors are where 1440p monitors are today with regard to price and quality.


It may well be doing what you described under the hood, but I've never seen any evidence of a performance problem as a result.

That's why we have so-called 5k monitors in the 27" size class: exactly 2x the pixel density of conventional 1440p.

27” 1440p at 100% is too small for me, so 5K at 200% has the same problem. More generally, the available PPIs combined with integer scaling only yield relatively few options at a given viewing distance. More choice would be nice.

Yep pretty much, and 6k 32" monitors. Both are fringe monitors mainly used by mac people.

This is huge for AI / ML, at least for inference. Apple chips are among the most efficient out there for that sort of thing; the only downside is the lack of CUDA.

Lack of CUDA is not a problem for most ML frameworks. For example, in PyTorch you just tell it to use the "mps" (Metal Performance Shaders) device instead of the "cuda" device.
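For what it's worth, a minimal sketch of what that looks like (the fallback order and the toy linear layer are my own illustration, not anything PyTorch mandates):

    import torch

    # Pick the best available backend: CUDA if present, otherwise Apple's
    # Metal backend ("mps"), otherwise plain CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    model = torch.nn.Linear(128, 64).to(device)   # toy model, just to show .to(device)
    x = torch.randn(8, 128, device=device)
    y = model(x)                                  # runs via Metal when device is "mps"
    print(device, y.shape)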

That simply isn't true in practice. Maybe for inference, but even then you're running up against common CUDA kernels such as FlashAttention which will be far from plug and play with PyTorch.

Cuda Apple license it from nVidia?

Cude, er, cute, but... no.

I tried training some models using tensorflow-metal a year ago, and I was quite disappointed. Using a ReLU activation function led to very poor accuracy [0], and training time was an order of magnitude slower than just using the free tier of Google Colab.

[0] https://github.com/keras-team/tf-keras/issues/140


So, do you think that when the Mac Studio gets upgraded, it will also come with less max RAM, but be unified?

Is the whole "unified" RAM a reason that the iMac and Mini are capped at 32G?


The Mac Studio has always had unified memory

Fun fact: any PC with integrated graphics has also had unified memory (yep, including Intel Macs), for at least the past decade!

True, but PCs are 128 bits wide; Apple lets you go up to 256 bits wide (M4 Pro), 512 bits wide (M3 Max), or 1024 bits wide (M2 Ultra).

Unified memory is much more useful when you can get more bandwidth to it.
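As a rough sketch of why that width matters, peak bandwidth is roughly bus width times transfer rate. The memory speeds below are my own assumptions about what each class of chip pairs with (roughly LPDDR5-6400 vs LPDDR5X-8533), so treat the outputs as ballpark figures:

    # Hedged back-of-the-envelope: peak bandwidth ~= bus width (bytes) x transfer rate.
    def bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
        """Peak memory bandwidth in GB/s."""
        return bus_bits / 8 * mt_per_s / 1000

    # Assumed pairings, for illustration only:
    print(bandwidth_gbs(128, 6400))    # typical dual-channel PC:   ~102 GB/s
    print(bandwidth_gbs(256, 8533))    # M4 Pro class (256-bit):    ~273 GB/s
    print(bandwidth_gbs(512, 6400))    # M3 Max class (512-bit):    ~410 GB/s
    print(bandwidth_gbs(1024, 6400))   # M2 Ultra class (1024-bit): ~819 GB/s

The wider buses are what put the Apple figures quoted elsewhere in this thread (273 GB/s, ~800 GB/s) so far above a typical desktop's system memory.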


Yep, there are no high-performance x86 CPUs on the market with ambitious GPUs, only laptop chips. Games are optimized for discrete GPUs; Apple didn't have that software inertia to deal with.

Sort of, obviously quite a few games are optimized for the PS5 and Xbox series X.

GPU cores are generally identical between the iGPUs and the discrete GPUs. Adding a PCIe bus (high latency and low bandwidth) and having a separate memory pool doesn't create new opportunities for optimization.

On the other hand having unified memory creates optimization opportunities, but even just making memcpy a noop can be useful as well.


GPUs are all about bandwidth (compute and memory). Using the same building blocks of compute units doesn't by itself make it go fast. You need a lot of compute units and a lot of bandwidth to feed them.

The performance dependency on dGPUs doesn't come from the existence of a PCIe bus and partitioned memory, but from the fact that the software running on the dGPU is written for a system with high-bandwidth memory like GDDR6X or HBM. It creates opportunities for optimization the same way hardware properties and machine balances tend to: the software gets written, benchmarked, and optimized against hardware with certain performance properties and constraints (like compute/bandwidth balance, memory capacity, and whether the CPU and GPU share memory).


All "Apple Silicon" products, going back to the first one, which was the iPhone 4.

I consider that a plus. Maybe the AI community will start to wake up and realize that going all in on CUDA is ridiculously stupid.

To be totally honest, there's enough money in the ML / AI / LLM space now that I fully expect some companies to put forward alternative cards specifically for that purpose. Why Google does not sell their TPUs to consumers and datacenters, instead of just letting you rent them, is beyond me.

The ceiling is too low on alternative hardware for that.

Until Apple can bang out something as good as an H100, it's no competition.

CUDA thrives because of the hardware offering too.


Not if you're an NVDA shareholder!

> Apple chips are among the most efficient out there for that sort of thing

Not really? Apple is efficient because they ship moderately large GPUs manufactured on TSMC's leading nodes. Their NPU hardware is more or less entirely ignored, and their GPUs use the same shader-based compute that Intel and AMD rely on. It's not efficient because Apple does anything different with their hardware the way Nvidia does; it's efficient because they're simply using denser silicon than most competitors.

Apple does make efficient chips, but AI is so much of an afterthought that I wouldn't consider them any more efficient than Intel or AMD.


For inference, Apple chips are great due to their high memory bandwidth. The Mac Studio is a popular choice in the local Llama community for this particular reason. It's a cost-effective option if you need a lot of memory plus high bandwidth. The downside is poor training performance, and Metal being a less polished software stack compared to CUDA.

I wonder if a little cluster of Mac Minis is a good option for running concurrent LLM agents, or a single Mac Studio is still preferable?


The memory bandwidth on Apple silicon is only sometimes comparable to, and in many cases worse than, that of a GPU. For example, an nVidia RTX 4060 Ti 16GB GPU (not a high-end card by any means) has memory bandwidth of 288GiB/sec, which is more than double that of the M4 CPU.

On the higher end, building a machine with 6 to 8 24GB GPUs such as RTX 3090s would be comparable in cost (as well as available memory) to a high-end Mac Studio, and would be at least an order of magnitude faster at inference. Yes, it's going to use an order of magnitude more power as well, but what you probably should care about here is W/token which is in the same ballpark.
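To make the W/token point concrete (really energy per token), here is a sketch with purely illustrative placeholder numbers; neither the power draws nor the throughputs below are measurements:

    # Hedged sketch: energy per token = power draw / tokens per second.
    def joules_per_token(watts: float, tokens_per_sec: float) -> float:
        return watts / tokens_per_sec

    # Placeholder numbers for illustration only, not benchmarks:
    mac_studio = joules_per_token(watts=200, tokens_per_sec=10)    # 20 J/token
    gpu_rig = joules_per_token(watts=2000, tokens_per_sec=100)     # 20 J/token
    print(mac_studio, gpu_rig)

An order of magnitude more power and an order of magnitude more speed roughly cancel, which is the "same ballpark" claim above.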

Apple silicon is a reasonable solution for inference only if you need the most amount of memory possible, you don't care about absolute performance, and you're unwilling to deal with a multi-GPU setup.


Note that they said the _Mac Studio_ which in the M2 model has between 400GB/s and 800GB/s memory bandwidth.

https://www.apple.com/mac-studio/specs/

Edit: since my reply you have edited your comment to mention the Studio, but the fact remains that the M2 Max has at least ~40% greater bandwidth than the number you quoted as an example.


Exactly, the M2 Ultra is competitive for local inference use cases given the 800 GB/s bandwidth, relatively low cost, and energy efficiency.

The M4 Pro in the Mini has a bandwidth of 273 GB/s, which is probably less appealing. But I wonder how it'd compare cost-wise and performance-wise with several Minis in a little cluster, each running a small LLM and exchanging messages. This could be interesting for a local agent architecture.


See my sibling reply below, but I disagree with your main point here. M2 Ultra is only competitive for very specific use cases, it does not really cost less than a much higher-performing setup, and if what you care about is true efficiency (meaning, W/token, or how much energy does the computer use to produce a given response), a multi-GPU setup and Mac Studios are on about equal footing.

For reference comparing to what the big companies use, an H100 has over 3TB/s bandwidth. A nice home lab might be built around 4090s — two years old at this point — which have about 1TB/s.

Apple's chips have the advantage of being able to be specced out with tons of RAM, but performance isn't going to be in the same ballpark of even fairly old Nvidia chips.


The cheapest 4090 is EUR 110 less than a complete 32GB M2 Max Mac Studio where I live. Spec out a full Intel 14700K computer (avoiding the expensive 14900) with 32GB RAM, NVMe storage, case, power supply, motherboard, 10G Ethernet … and we are approaching the cost of the 64GB M2 Ultra, which has memory bandwidth more comparable to the Nvidia card's, but with more than twice the RAM available to the GPU.

Apple will let you buy more RAM for cheaper than Nvidia, but it won't be the same speed — it'll be ~20% slower than a 4090.

That's my point. I would absolutely be willing to suffer a 20% memory bandwidth penalty if it means I can put 200% more data in the memory buffer to begin with. Not having to page in and out of disk storage quickly makes that 20% irrelevant.

If you have enough 4090s, you don't need to page in and out of disk: everything stays in VRAM and is fast. But it's true that if you just want it to work, and you don't need the fastest perf, Apple is cheaper!

How _exactly_ do I keep 50+ Gigabytes of data in the 4090's VRAM without paging back and forth to disk?

Yeah, sorry, I realized that as well so I edited my post to add a higher end example with multiple 3090s or similar cards. A single 3090 has just under 1TiB/sec of memory bandwidth.

One more edit: I'd also like to point out that memory bandwidth is important, but not sufficient for fast inference. My entire point here is that Apple silicon does have high memory bandwidth for sure, but for inference it's very much held back by the relative slowness of the GPU compared with dedicated nVidia/AMD cards.


It's still "fast enough" for even 120b models in practice, and you don't need to muck around with building a multi-GPU rig (and figuring out how to e.g. cool it properly).

It's definitely not what you'd want for your data center, but for home tinkering it has a very clear niche.


> It's still "fast enough" for even 120b models in practice

Is it? This is very subjective. The Mac Studio would not be "fast enough" for me on even a 70b model, not necessarily because its output is slow, but because the prompt evaluation speed is quite bad. See [0] for example numbers; on Llama 3 70B at Q4_K_M quantization, it takes an M2 Ultra with 192GB about 8.5 seconds just to evaluate a 1024-token prompt. A machine with 6 3090s (which would likely come in cheaper than the Mac Studio) is over 6 times faster at prompt parsing.

A 120b model is likely going to be something like 1.5-2x slower at prompt evaluation, rendering it pretty much unusable (again, for me).

[0] https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...
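For a sense of scale, the arithmetic from those cited numbers (rounded; the 6x multiplier is simply the ratio quoted above, and the 4096-token prompt is a hypothetical example):

    # Rough arithmetic from the cited benchmark: a 1024-token prompt takes ~8.5 s
    # on an M2 Ultra for Llama 3 70B Q4_K_M; a multi-3090 rig is cited as ~6x faster.
    prompt_tokens = 1024
    m2_ultra_sec = 8.5

    m2_ultra_tps = prompt_tokens / m2_ultra_sec   # ~120 tokens/s prompt eval
    rig_tps = m2_ultra_tps * 6                    # ~720 tokens/s (assumed 6x)

    # Hypothetical 4096-token prompt, for illustration:
    print(4096 / m2_ultra_tps)   # ~34 s before the first output token on the Mac
    print(4096 / rig_tps)        # ~5.7 s on the multi-GPU rig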


And yet the GPU costs about as much as the whole Mac Mini and wouldn't even come close to fitting inside one.

You're mostly correct, though a 4060Ti 16GB is 20-30% cheaper than the cheapest Mac Mini. More importantly though, "fits inside a Mac Mini" is not a criterion I'm using to evaluate whether a particular solution is suitable for LLM inference. If it is for you, that's fine, but we have vastly different priorities.

Does it use the GPU? I was under the impression that it uses the CPU. It's only faster because of the massive memory bandwidth compared to DDR4/5

The AI features use all three of NPU ("ANE"), GPU, CPU, mostly depending on model size.

https://machinelearning.apple.com/research/neural-engine-tra...


Frankly you’re very wrong. NPUs and GPUs aside, 16gb of GPU memory is very rare in consumer hardware

I'm not sure what you mean. RTX 4060 Ti/4070 Ti Super/3090/4090 cards can be easily purchased at any major electronics store in person or online and have 16GB or 24GB depending on model. Once you get up to 32GB, your point would stand, but 16-24GB GPUs are common.

Yes but the average user is not purchasing those, let alone putting together a system with one for $600

You said nothing about price in your initial comment, and the cards I listed are some of the most popular GPUs of the last several years.

You can't use all 16GB because it's unified, so it's shared with the system, SSD controller etc. You can use something like 12-14GB though.

Sure, still incredibly rare in a $600 device


>The disappointment in his eyes when he saw that dumb, useless, cat button was priceless.

I once wrote a P2P file-sharing app in college for a networking class, and I had put an easter egg in there: if you requested a certain file (lenin.jpg?), it displayed ASCII art of a communism meme and started playing a MIDI of the Soviet national anthem.

My little old networking prof sits down to test my program, and of course, that's the file she requests. She slowly turns to me, looks me in the eye and says, 'Be honest, you spent more time on that than the actual assignment, didn't you?' ... 'Yes.' She shakes her head and mutters 'nerds!' under her breath, lmao.


The question of consequences is real. I graduated in 2008 at the beginning of the great recession, and I watched some of my classmates go off and do startups. I asked them 'Dude, what if you fail?!' and one of them just shrugged his shoulders and said he'd move back home with his parents, and get a job at his dad's company. No big deal.

That's when I realized it was much easier to take those risks when you knew you had a safety net to fall back on and you didn't have to worry about winding up homeless sleeping under an overpass.


I did some real estate deals with guys from wealthy families a long time ago. We failed but the difference in consequences was enormous. I lost my savings of 10 years and struggled getting out of the situation. The other guys got bailed out by dad, did more deals and are now very successful business people.


Sorry to hear that. For the benefit of others, I read this as the importance of sizing the bet relative to your own portfolio and nobody else's, which is broadly applicable.

If someone bets a million bucks on stock A, but is worth a billion bucks, then that’s not $1m conviction, it’s <1% conviction. And that information is then factored into the size of my bet.

Unfortunately I also learned this the hard way.


The problem is that your bets have to be of a certain size to make a difference. Making 100% on $100 doesn't do much for you. So the guy with less money has to take much higher relative risks if he wants to get somewhere.


that's get rich quick mentality. get rich quick mentality is bad.

Sometimes it's exactly the needed mentality. You can't always assume that you have all the time in the world to get to where you need to be.

If $100 doesn’t mean much to you, it probably isn’t 100% of your portfolio. Consider future cash flow from your labor in your portfolio.

Kelly Criterion

I really like the word 'conviction' here for this concept, and this use is new to me; has it been used this way before? A casual search doesn't turn up anything for me.


41 Jesus sat down opposite the place where the offerings were put and watched the crowd putting their money into the temple treasury. Many rich people threw in large amounts. 42 But a poor widow came and put in two very small copper coins, worth only a few cents.

43 Calling his disciples to him, Jesus said, “Truly I tell you, this poor widow has put more into the treasury than all the others. 44 They all gave out of their wealth; but she, out of her poverty, put in everything—all she had to live on.”


There's a formalism in math/information theory describing this idea, called the Kelly criterion. It's not a common or colloquial phrase, but it captures a similar idea: sizing each bet as a fraction of available cash based on the odds and risk of the bet.
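For reference, the usual statement for a simple binary bet (a sketch of the textbook form; applying it to real portfolios takes more care):

    \[
      f^{\ast} \;=\; \frac{bp - q}{b} \;=\; p - \frac{1 - p}{b}, \qquad q = 1 - p
    \]

where p is the win probability, b the net odds received on a win, and f* the fraction of the bankroll to stake. For example, p = 0.6 at even odds (b = 1) gives f* = 0.2, i.e. stake 20% of the bankroll.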


I really appreciate this comment! (I think that word just popped into my head given some professional experience recently rather than being sourced from the field.)


Matching risk profiles with business partners or co-founders is vital and often overlooked.

Mostly, folk match expectations, i.e. upsides.

Fear of the downside is a very great motivator.

A lack of consequential downsides can make folk sensibly choose to cut their losses and move on.

Sometimes this is invisible, as parents bailing someone out is not known beforehand and may be a private discussion.

All-or-nothing downsides can mean folk will work themselves close to death to win (or just not lose).

It also affects how much someone will gamble to get a unicorn rather than settle for a comfy income.

Have the downside discussion early on.


Family is easier, but you can also build up a support network with close friendships too. It doesn't necessarily come for free, but it's certainly possible. That isn't to say it's equal, especially when it comes to money.


[flagged]


Dig a little deeper into your dataset and you will find that the reason these people didn't end up homeless is because most of them have well-off / supportive families, trust funds, or Ivy League degrees.

That said I agree that people give in to fear too much, which prevents them from taking risks.


Homelessness is not happening. If you have the skills to even attempt a startup, you can for sure get a job.


I saw a bunch of people who had the skills to get a job take months to get one in 2023 and 2024. I've also seen people who thought they had the skills to attempt a startup but did not.

All it takes is a downturn at the wrong time and a low capacity for risk.


I'll take that guarantee and raise you: someone who posts on here was in that situation.

If you risk homelessness on a lottery ticket you are making a bad choice.


Homelessness != sleeping outdoors or in a homeless shelter. If you have to move out of your home and live with family or friends because you lost your job, you have lost your home and are homeless.

