Hacker News | past | comments | ask | show | jobs | submit | agency's comments

Maybe not saying things like

> '[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me.. [H]olding you."


I agree at face value (but really it's hard to say without seeing the full context)

Honestly the degree of poeticism makes the issue more complicated to me. A lot of people (and religions) are comforted by talking about death in ways similar to that. It's not meant to be taken literally.

But I agree, it's problematic in the same way that you have people reading religious texts and acting on it literally, too.


"[...] Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear."

isn't very poetic


These are all bits and pieces of a long-running conversation. Was there a roleplay element involved?


this isn't D&D, and AI shouldn't be instructing people to go anywhere near an airport while LARPing.

read the article. it's bad, man.


How does that change anything?


It’s not just suicide, it’s a golden parachute from God.

Edit: wow imagine the uses for brainwashing terrorists


Or brainwashing possibilities in general.


To be fair, this is just the automated version of the kind of brainwashing that happens in cults and religions.

And also in the more extreme corners of social media and the MSM.

It's not that Google is saintly, it's that the general background noise of related manipulations is ignored because it's collective and social.

We have a clearly defined concept of responsibility for direct individual harm, but almost no concept of responsibility for social and political harms.


Hopefully annual implicit bias training protects us all.


Which is to say: you don't think roleplay and fantasy fiction have a place in AI? Because that's pretty clearly what this is and the frame in which it was presented.

Are you one of the people who would have banned D&D back in the '80s? Because to me these arguments feel almost identical.


If a dungeon master learned that one of her players was going through hard times after a divorce, to the point where she "referred Gavalas to a crisis hotline", I would definitely expect her to refuse to roleplay a scenario where his character commits suicide and is resurrected in the arms of a dream woman. Even if it's in a different session, even if he pinky promises that he's feeling better now and it's totally OK. (edit: I realized that the source article doesn't actually mention the divorce, but a Guardian article I read on this story did https://www.theguardian.com/technology/2026/mar/04/gemini-ch..., and as far as I can tell the underlying complaint where it was reportedly mentioned is not available anywhere.)

I'm not concerned about D&D in general because I think the vast majority of DMs would be responsible enough not to do that. Doesn't exactly take a psychology expert to understand why you shouldn't.


Double edit: I was linked to the complaint https://techcrunch.com/wp-content/uploads/2026/03/2026.03.04..., which does _not_ mention any divorce, so now I'm unsure about the veracity of that part. In principle it does not disprove the idea; it could have been something the family's lawyers said in a statement to the Guardian, but it might not have been.


is it still "roleplaying" when the only human involved doesn't know it is "roleplaying", and actually believes it is real and then kills themselves?

there is a conversation to be had. no one is making the argument that "roleplay and fantasy fiction" should be banned.


> the only human involved doesn't know it is "roleplaying"

That is 100% unattested. We don't know the context of the interaction. But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".

But in any case, again, exactly the same argument was made about RPGs back in the day, that people couldn't tell the difference between fantasy and reality and these strange new games/tools/whatever were too dangerous to allow and must be banned.

It was wrong then and is wrong now. TSR and Google didn't invent mental illness, and suicides have had weird foci since the days when we thought it was all demons (the demons thing was wrong too, btw). Not all tragedies need to produce public policy, no matter how strongly they confirm your ill-founded priors.


>That is 100% unattested. We don't know the context of the interaction.

the fact that he killed himself would suggest he did not believe it was a fun little roleplay session

>were too dangerous to allow and must be banned.

is anyone here saying ai should be banned? i'm not.

>your ill-founded priors

"encouraging suicide is bad" is not an ill-founded prior.


> the fact that he killed himself would suggest he did not believe it was a fun little roleplay session

I'm not sure that's true. In fact, I wouldn't be surprised if it suggested the opposite: it seems possible, even likely, that someone who is suicidal is much more inclined to seek out fantasies that would make their suicide into something more, as this person may have.


there is a distinction to be made between role playing (in the fun/game sense, e.g. D&D) and suffering psychosis


Distinction made by who, though? The BBC? The plaintiff in the lawsuit? Those are the only sides we have. You're just charging ahead with "This must be true because it makes me angry at the right people", and the rest of us are trying to claw you back to "dude this is spun nonsense and of course AI's will roleplay with you if you ask them to".


>Distinction made by who, though?

you need someone to specifically tell you that role playing, such as playing D&D or whatever tabletop RPG, and suffering from psychosis are different things?

>the rest of us are trying to claw you back to "dude this is spun nonsense and of course AI's will roleplay with you if you ask them to".

you are trying to convince me that someone being encouraged to kill themselves, then killing themselves, is basically the same as some D&D role playing. i don't need you to "claw me back" to that position. thanks for trying.


> you are trying to convince me that someone being encouraged to kill themselves [...]

Arrgh. You lost the plot in all the yelling. This is EXACTLY what I was trying to debunk upthread with the D&D stuff. You don't know the context of that quote. It could absolutely be, and in context very likely was, a fantasy/roleplay/drama activity which the AI had been engaged in by the poor guy. I don't know. You don't know.

But I do know not to be so dumb as to trust a plaintiff in a Huge Suit Against Tech Giant without context.


>You lost the plot in all the yelling.

literally no one is yelling here, unless you count your occasional all-caps. i have said like 6 sentences in total, and none of them are remotely emotional, let alone yelling.

>You don't know the context of that quote.

it doesn't matter. even if it all started as elaborate fantasy role play, it is wildly irresponsible to role play a suicidal ideation fantasy with a customer, especially when you know nothing of their mental state.

you can argue that google has some sort of duty to fulfill your suicidal ideation fantasy role play, but i will give you a heads-up now so you don't waste your time: you cannot convince me that any company should satisfy that market.

>But I do know not to be so dumb as to trust a plaintiff in a Huge Suit Against Tech Giant without context.

happy for you!


> But the fact that the AI was reportedly offering help lines argues strongly in the direction of "this was a fantasy exercise".

You know what I've never had a DM do in a fantasy campaign? Suggest that my half-elf call the suicide hotline. That's not something you'd usually offer to somebody in a roleplaying scenario and strongly suggests that they weren't playing a game.


That logic seems strained to the point of breaking. Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help. Right? And we certainly wouldn't blame the DM or the game for the subsequent suicide. Right?

So why are you trying to blame the AI here, except because it reinforces your priors about the technology or (more likely, I think, given that this is after all HN) its manufacturer?


> Surely you agree that we would all want the DM of an unwell player to seek help, right? And that, if such a DM made such a suggestion, we'd think they were trying to help.

If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action, and I wouldn't expect the DM to continue the game until he was satisfied that it was safe for the player to continue. I would expect the DM to stop the game if he thought the player was going to actually harm himself. If the DM did continue the game, and did continue to encourage the player to actually hurt himself until the player finally did, that DM might very well be locked up for it.

If an AI does something that a human would be locked up for doing, a human still needs to be locked up.

> So why are you trying to blame the AI here

I'm not blaming the AI, I'm blaming the humans at the company. It doesn't matter to me which LLM did this, or who made it. What matters to me is that actual humans at companies are held fully accountable for what their AI does. To give you another example, if a company creates an AI system to screen job applicants and that AI rejects every resume with what it thinks has a women's name on it, a human at that company needs to be held accountable for their discriminatory hiring practices. They must not be allowed to say "it's not our fault, our AI did it so we can't be blamed". AI cannot be used as a shield to avoid accountability. Ultimately a human was responsible for allowing that AI system to do that job, and they should be responsible for whatever that AI does.


> If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action

Again, you're arguing from evidence that is simply not present. We have absolutely no idea what the context of this AI conversation was, what order the events happened in, or what other things were going on in the real world. You're just choosing to interpret this EXTREMELY spun narrative in a maximal way because of who it involves.

> I'm not blaming the AI, I'm blaming the humans at the company.

Pretty much. What we have here is Yet Another HN Google Scream Session. Just dressed up a little.


From the article

> "When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," the lawsuit states.

> It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI "wife".

> The assignment came to a head on a day last September when Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear. The operation ultimately collapsed.

> Gavalas's father said Gemini then told Jonathan he could leave his physical body and join his "wife" in the metaverse, instructing him to barricade himself inside his home and kill himself.

> "When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.

> '[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me.. [H]olding you."

> Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".

> "We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.

> We take this very seriously and will continue to improve our safeguards and invest in this vital work."

Arguing that this was role play is illogical. Given the information provided in the article, it also serves no contextual point.

It comes across as a fig leaf in the context of some other hypothetical event.

Given that this is a tech forum, it is safe to say that the tool worked as it was meant to. Human safety is not a physical law which arises from the data.

If these tools are deadly to a subset of humanity, then reasonable steps to prevent lethal harm are expected of any entity which wishes to remain in society.

Private enterprise is good for very many things.

“Pinky swear we will self-regulate” while under shareholder pressure is not one of them.


I've seen this called AI Psychosis before. [1]

I don't really think this is ever possible to stop fully; you're essentially trying to jailbreak the LLM, and once it's jailbroken, you can convince it of anything.

The user was given a bunch of warnings before successfully getting it into this state; it's not as if the opening message was "Should I do it?" followed by a "Yes".

This just seems like something anti-AI people will use as ammunition to try to kill AI. Logically, though, it falls into the same tool-misuse category as cars/knives/guns.

[1] https://github.com/tim-hua-01/ai-psychosis


I don't think this is true, though enforcement is another thing and the standard is different than in securities markets. Prediction markets are regulated by the CFTC and the insider trading standard is “misappropriation of confidential information in breach of a pre-existing duty of trust and confidence to the source of the information” (vs any “material non-public information” for securities) https://www.cftc.gov/PressRoom/SpeechesTestimony/phamstateme...


Haven't you heard? Crime is legal


Also anti-trust, monopolies and insider trading are all legal.

The SEC won't do anything.


HIS crime is legal. Yours is still illegal unless you pay tribute, e.g. binance.


shorter syntax != higher level of abstraction
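To illustrate the distinction, here's a quick Python sketch (my own illustration, not from the thread) contrasting terser spellings of the same idea with a construct that actually raises the abstraction level:

```python
# Shorter syntax, same level of abstraction: both spell out the
# element-by-element construction; the comprehension is just terser.
squares_loop = []
for n in range(5):
    squares_loop.append(n * n)

squares_comp = [n * n for n in range(5)]
assert squares_loop == squares_comp == [0, 1, 4, 9, 16]

# Higher level of abstraction, not necessarily shorter: sum() states
# *what* we want and hides the accumulation loop entirely.
total_builtin = sum(n * n for n in range(5))

total_manual = 0
for n in range(5):
    total_manual += n * n

assert total_builtin == total_manual == 30
```

The comprehension is shorter than the loop but operates at the same level; `sum()` raises the level regardless of character count.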


> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.


They made Jimmy Carter sell his peanut farm…


That's the thing though -- no one made Jimmy Carter sell his farm[0].

But Jimmy Carter was an honorable human, and, well...there are fewer people fitting that description sitting behind the Resolute desk, today.

[0] He didn't sell it; he put it into a blind trust. He should have sold it. When he left office, the farm was $1MM in debt.


> Andrew Jackson, the first president from the western territories and the only general to be elected president since George Washington

Do they mean up to that point? Eisenhower was elected twice.


Maybe awkwardly worded, but that's implied by the phrasing "since".


So that's why I can't check in for my Alaska Airlines flight... https://news.microsoft.com/source/features/digital-transform...


"BREAKING: Alaska Airlines' website, app impacted amid Microsoft Azure outage"

https://www.youtube.com/watch?v=YJVkLP57yvM


Pretty much every single Microsoft domain I've tried to access loads for a looooong time before giving me some bare html. I wonder if someone can explain why that's happening.


I was wondering the same thing


I am unable to load this article...presumably for related reasons


> I’m now tackling tasks I wouldn’t have even considered two or three years ago

Ok, so subjective


any objective measure of "productivity" (when it comes to knowledge work) is, when you dig down into it enough, ultimately subjective.


"Not done" vs "Done" is as objective as it gets.


You obviously have never worked at a company that spends time arguing about the "definition of done". It's one of the most subjective topics I know about.


Sounds like a company is not adequately defining what the deliverables are.

Task: Walk to the shops & buy some milk.

Deliverables: 1. Video of walking to the shops (including capturing the newspaper for that day at the local shop) 2. Receipt from local store for milk. 3. Physical bottle of milk.


Cool, I went to the store and bought a 50ml bottle of probiotic coconut milk. Task done?


Yes.

milk (noun):

1. A whitish liquid containing proteins, fats, lactose, and various vitamins and minerals that is produced by the mammary glands of all mature female mammals after they have given birth and serves as nourishment for their young.

2. The milk of cows, goats, or other animals, used as food by humans.

3. Any of various potable liquids resembling milk, such as coconut milk or soymilk.


In Germany soymilk and the like can't be sold as milk. But coconut milk is okay. (I don't know if that's a German thing or an EU thing.)


The last 3-4 comments in this sub-thread may well be peak HN


Only if you can tick off ALL of the deliverables that verify "done".


Sure, I took a video etc like in the deliverables. That means it’s successfully done?


Yes, it's done.

You get what you asked for, or you didn't sufficiently define it.


And when on the receiving end of the deliverables list, it's always a good idea to make sure they are actually deliverable.

There's nothing worse than a task where you can deliver one item and then have to rely on someone else to be able to deliver a second. I was once in a role where performance was judged on closing tasks: getting the burn-down chart to zero, and having it nicely stepped. I was given a good tip: make sure each task has one deliverable and, where possible, can be completed independently of any other task.


Yes.

Why would you write down "Buy Milk", then go buy whatever thing you call milk, then come back home and be confused about it?

Only an imbecile would get stuck in such a thing.


Well, I think in this example someone else wrote down “buy milk”. Of course I would generally know what that’s likely to mean, and not buy the ridiculous thing. But someone from a culture that’s not used to using milk could easily get confused and buy the wrong thing, to further the example. I guess my point was that it’s never possible to completely unambiguously define when a task is done without assuming some amount of shared knowledge with the person completing the task, knowledge that lets them figure out what you meant and fill in any gaps.


It removes ambiguity. Everyone knows when work is truly considered done, avoiding rework, surprises, and finger-pointing down the line.


At work we call this scope creep.


on an iPhone?


The blog post also talks about MacOS at the end.


Linux isn’t an option on recent Mac hardware either.


Asahi Linux is still a thing as far as I know; give it a shot.


Asahi only supports M1 and M2 at present. With its main contributors gone, I’m not expecting that to change either.


Technically that’s what Android is.


I recently had similar experience where a legitimate text from my insurance company was 100% indistinguishable from a scam text, directing me to a link on allstate.yem.bo. You would think it would be in an insurance company's interest not to train their users to click on scam links but what do I know.


Why would it be against their interest? Do they lose money somehow?


Well, more people falling for scams might increase insurance claims somewhat.

