Hacker News | rob74's comments

Never mind an outsourced receptionist, some of those calls could be handled simply by the mailbox. Of course, some people will hang up once the mailbox message starts - but then again, some will also hang up once they realize they're talking to an AI chatbot, so...

YES!

This is the critical data: how many people hang up on the AI chatbot vs. how many hang up on the voicemail prompt.

If it is even close, well, the AI needs to be improved.

If the AI is way ahead, but still loses/drops more than a live receptionist (outsourced or in-house), the AI either needs improvement, or to be dumped for a live receptionist, and that's kind of a spreadsheet problem (how many jobs lost in each case, vs costs).


I think the question of lost opportunities versus costs is the best thing to look at here. You could pay a receptionist 50-60k a year, but they have to bring in enough work to justify that. Maybe the AI drops a percentage of callers compared to a real receptionist, but it still brings in more than the mailbox would. There's a cost to the AI too, though.
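The comparison above boils down to a break-even calculation. A minimal sketch, where every number is a hypothetical placeholder (only the 50-60k salary range comes from the comment itself):

```python
# Toy break-even sketch for the receptionist-vs-AI comparison above.
# All figures are assumptions, not real pricing.
receptionist_cost = 55_000   # $/yr, mid-range of the 50-60k figure above
ai_service_cost = 6_000      # $/yr, assumed AI subscription price
avg_job_value = 500          # $ revenue per booked job, assumed

# Extra jobs per year the human would have to book over the AI
# just to cover the cost difference:
break_even_jobs = (receptionist_cost - ai_service_cost) / avg_job_value
print(break_even_jobs)  # 98.0 -> roughly 2 extra booked jobs per week
```

If the human doesn't capture at least that many additional jobs, the spreadsheet favors the AI on cost alone; whether that outweighs everything else a human can do is the separate question raised below.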

But the real question you should also ask is what else can that human do for you that the AI can't because they have eyes and ears and hands?


The question is more: why employ a full-time receptionist when fractional services are available and it's an old, well-established industry? A couple hundred dollars a month could employ a human only when the phone rings, to schedule visits and handle any FAQs. I'm sure Ruby.com already has plenty of auto shop customers.

Very soon, it will be difficult to tell the difference unless you probe it.

I think most folks already wouldn't be able to tell, with the modern TTS.

It's like AI photos, they fool you unless you're looking for it.


Yeah, it’s getting harder to tell. At some point the difference won’t be in the voice itself, but in how the conversation flows.

Even now, I think they're quick enough--but they interrupt at the wrong times, whereas humans know whether they have enough context yet.

So, I agree. But I believe the problem is pretty solvable with enough tokens.


Right, so it's time to dismantle environment/climate protection, worker safety/rights, employee protections etc. etc. like the Trump administration is currently doing over in the US, and make Europe great again?!

(actually, MEGA would be a great acronym, but Trump's friends in the EU are more focused on dismantling it rather than making it great)


That's probably for the best, really, prevents you from wasting your time reading lots of AI slop...

Is this based on anything real or just AI-generated slop meant to trigger angry reactions? It doesn't quote any sources for any of the stories, so as far as I can see, they're probably 100% made up...

At first, I also thought that rejecting collaboration excludes any kind of teamwork, but then I noticed the quotation marks - so they're apparently only rejecting quote-unquote-collaboration (as in "collaboration theatre": endless calls with no tangible outcome, wanting to involve everyone in decisions etc.), not actual collaboration (which is also consistent with what the article itself says).

Well honestly, that's the easiest problem to fix: just install any of the dozens of excellent and stable third party file managers. I for instance am (or was, while I still used Windows) a fan of Total Commander (actually, when I started using it, it was called Windows Commander). As a bonus, you'll be spared the useless UI and usability changes inflicted upon you with every new Windows version.

If you're going to replace tools as fundamental as the file manager, you may as well switch to a stable and fast operating system like most Linux distributions or Mac.

Yeah, that's what I did, eventually, but some people still need some software that only runs under Windows, or want to play games without messing around with Proton etc. etc.

I'd love to share your optimism and trust in that country-that-shall-not-be-named's self-healing mechanisms, but its system is already centuries old and based on a sort of "gentleman's agreement" that each of the powers in the state will respect the others. To make things worse, since WW2, the executive has amassed more and more power which a sufficiently unscrupulous president can use to start an authoritarian takeover. Currently, the only hope I see is that enough people are fond enough of their democracy (even if only because that's what they grew up with) to stand up for it when push comes to shove...

A major problem facing that nation is that not many people are particularly interested in maintaining a democracy. They just want to see the “other team” lose.

Well, if that's really the cause, then thanks CCC, I guess. For such a serious vulnerability which is probably non-trivial (not to mention expensive) to patch, is it really responsible to give only 3.5 months of time before disclosing it (according to slide #56 https://cdn.prod.website-files.com/5f6498c074436c349716e747/..., they notified EFR about the vulnerability on 2024-09-12 and disclosed it on 2024-12-28)?

IMHO it wouldn't have made much of a difference; the issue had been known to them for years by that point. To a large extent it still exists: the Spanish grid only committed to upgrading the hardware after this incident, and even so it will take about another year to complete the upgrade over there.

I don't follow the news on other European nations in detail, but I haven't seen much focus on hardening their security until they actually get breached. A recent example (albeit with a different attack vector) would be the Polish grid: https://arstechnica.com/security/2026/01/wiper-malware-targe...


The question that comes up then is: how much traffic can Starlink handle until it gets saturated? I'm not sure it can handle even a significant percentage of the users that currently use wired connectivity. And if they see that demand for their services starts overwhelming supply, they will definitely raise the prices...

Starlink has, essentially, fixed capacity per area. Thus, sparsely placed rural users are "cheap" to Starlink.

Densely placed city users would strain the system, but cities also favor wired connectivity.

That makes the two complementary. Wired makes the most sense in urban formations, satcom makes the most sense in Bum Fuck Nowhere.


_Lots_ of traffic. It's going to end up being the global Internet backbone.

Internet traffic today is estimated to be a few tens of exabytes per day. Even if you assume 100000 Starlink satellites (we're far from that), each satellite would have to handle hundreds of terabytes per day. That's tens of gigabits per second per satellite, assuming traffic is split evenly among them (will never happen in real situations).
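The arithmetic above can be checked with a quick back-of-envelope script. The inputs are the comment's own rough assumptions (roughly 30 EB/day of global traffic, a hypothetical 100,000-satellite constellation, traffic split evenly), not official figures:

```python
# Back-of-envelope check of the per-satellite traffic estimate above.
# Assumptions, not measurements: ~30 EB/day global traffic, a
# hypothetical 100,000-satellite constellation, perfectly even split.
daily_traffic_bytes = 30e18      # ~30 exabytes/day (rough estimate)
num_satellites = 100_000         # hypothetical future constellation size

per_sat_bytes_per_day = daily_traffic_bytes / num_satellites
per_sat_gbps = per_sat_bytes_per_day * 8 / 86_400 / 1e9

print(f"{per_sat_bytes_per_day / 1e12:.0f} TB/day per satellite")  # 300 TB/day
print(f"{per_sat_gbps:.1f} Gbit/s per satellite")                  # 27.8 Gbit/s
```

So even under these generous assumptions, each satellite would need to sustain roughly 28 Gbit/s around the clock, and uneven geographic demand would push the busiest satellites far higher.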

Starlink V3 can pump out some seriously impressive speeds and handle thousands of clients. Starlink is a great leap forward in both rocketry and radio technology. I still find it funny that we're going back to the pre-war technology tree for a revisit.

That's not even sufficient to handle the needs of a single large city. The limitation is that even with the much larger constellation they hope to deploy there won't be enough satellites visible at once from any given large metro area.

Citation definitely needed.

If you really have a loop that is reading from a database at 100ms per iteration, that's not a case of having avoided premature optimization, that's just stupid.

Got it. What about spinning up an 800 MB image on a CPU-limited virtual machine that THEN hits a database, before responding to a user request, on a 300 ms round trip? I think we need a new word to describe the average experience; stupidity doesn't fit.

Reminds me of this quote which I recently found and like:

> look, I'm sorry, but the rule is simple: if you made something 2x faster, you might have done something smart; if you made something 100x faster, you definitely just stopped doing something stupid

https://x.com/rygorous/status/1271296834439282690


And yet... :)

I think there is just a current attitude (I've seen it mostly in junior engineers) that you should just ignore any aspect of performance until "later".


And, I guess, context does matter. If you need to make 10 calls to gather up some info to generate something, but you only need to do this once a day, or once an hour, and the whole process takes a few seconds, that's fine; I could see the argument that just doing the calls one at a time linearly is simpler to write/read/maintain.
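The trade-off above can be sketched in a few lines. A minimal illustration, where `fetch` is a hypothetical stand-in for one of the ~100 ms network calls being discussed:

```python
# Sketch of the simplicity-vs-speed trade-off above: sequential calls
# are easier to read, and for a once-a-day job the extra seconds rarely
# matter. All names here are illustrative.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i):
    time.sleep(0.1)              # stand-in for one ~100 ms network call
    return f"result-{i}"

# Simple and linear: fine if it only runs once a day.
start = time.time()
sequential = [fetch(i) for i in range(10)]
seq_elapsed = time.time() - start        # roughly 1 s total

# Concurrent version: much faster, but more moving parts to maintain.
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel = list(pool.map(fetch, range(10)))
par_elapsed = time.time() - start        # roughly 0.1 s total

print(f"sequential: {seq_elapsed:.2f}s, parallel: {par_elapsed:.2f}s")
```

Both versions produce the same results; the only question is whether the wall-clock difference is worth the extra machinery, which depends entirely on how often the code runs.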
