I'm far too lazy to be able to responsibly use a machine that can give me semi-sensible answers.
I saw the danger of it as a form of learned helplessness down the line and swore off using LLMs for that reason — that, and I feel no need to delegate my thinking to a machine that can't think. I like thinking.
Same reason snacks are upstairs in the kitchen and not in my office on the ground floor - I'm too lazy and if they are easily available I'll eat them.
This resonated with me. It's really easy to hit that tiny cognitive speedbump of needing to put a bit of effort into recalling some API detail, and instead of reaching for the trusty old man pages, to tab over to the spammy chatbot for a quick fix.