
One bothersome aspect of generative assistance for personal and public communication, not mentioned here, is that it introduces a lazy hedge, where a person can always claim that "Oh, but that was not really what I meant" or "Oh, but I would not express myself in that way", and use it as a tool to later modify or undo their positions, effectively reducing honesty instead of increasing it.




> where a person can always claim that "Oh, but that was not really what I meant"

That already happens today: before AI, people would blame autocorrect or the spell checker instead.

I don't accept these excuses as valid (even if they're true). Regardless of the source of the text, it doesn't give them a valid out to change their mind.


Arguably, excusing oneself because of autocorrect is comparable to the classic "Dictated but not read" [0] disclaimer of old. Excusing oneself because an LLM wrote what was ostensibly your own text is more akin to confessing that your assistant wrote the whole thing and you tried to pass it off as your own without even bothering to read it.

[0] https://en.wikipedia.org/wiki/Dictated_but_not_read


Yep! However, the problem will grow by orders of magnitude as the volume of generated content far surpasses what autocorrect mechanisms produce. Autocorrect is also a far more local modification that doesn't generate entire paragraphs or sections, which makes it harder to use as an excuse for large changes in meaning.

I agree that they make for poor excuses - but as generative content seeps into everything I fear it will become more commonly invoked.


> I fear it will become more commonly invoked.

Yep, but invoking it doesn't force you to accept it. The only thing you get to control is your own personal choices. That's why I am telling you not to accept it, and I hope that people reading this will consider it their default stance.


Never in my life would I accept that as a valid excuse. If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.

Are you embracing the fundamental attribution error?

Good question. I certainly commit that error sometimes, like everyone else. But the issue here is people using LLMs to write, e.g., emails and then not taking responsibility for what they write. That has nothing to do with attribution, only accountability.

"I was having a bad day, my mother had just died" is a very valid explanation for a poorly worded email. "It was AI" is not.


You must be a delightful person to work with.

> If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.

Have you discussed this with your therapist?


I mean, he put it in what is IMO too harsh a way (e.g. "pathetic"), but I do think it raises the point: if you don't own up to your actions, then how can you be held accountable for anything?

Unless we want to live in a world where accountability is optional, I think taking responsibility for your actions is the only choice.

And to be honest, I don't know where we stand on this today. It seems a lot of people don't care enough about accountability, but then again a lot of people do. That's just my take.


Yes, thank you. I used "pathetic" to mean something that makes me feel sorry for them, not something despicable. I fully expect people to stand by what they write and not blame AI, but my comment came across as too aggressive.

I mean, we're only human. We all make mistakes. Sure, some mistakes are worse than others, but in the abstract, even before AI, who hasn't sent an email that they later regretted?

Yes, we all make mistakes. But when I make a mistake in an email, you can be damn sure it is my own mistake, and I take full accountability for it.

Making mistakes and regretting them is of course perfectly OK!

What I reacted to was blaming the LLM: "I am sorry, I meant it like this ..." versus "it wasn't me, it was the AI".


Therapists are also supposed to take responsibility for their work.

I guess you got hung up on the word "pathetic". See my comment below: I used it not as "despicable" but rather as "something to feel sorry for". Indeed, people writing emails with LLMs and then blaming the AI for the consequences is something that makes me feel sorry for them.

Implying mental health issues? That makes me think you were triggered by my comment.



