Hacker News | past | comments | ask | show | jobs | submit | iambateman's comments

Do you have something to say about it?

I work with these tools every day, and I’d say my productivity increase has been astronomical.

This isn’t true of everyone, but depending on pre-existing skill sets and context, there’s a group of people who are actually multiples more productive than they were in 2022.


I had a friend with plenty of experience in HR get laid off.

He looked for a job for 13 months. One of the top 3 smartest people I’ve ever known looked for seven months and had to take a big step back in his career, despite having Amazon and Home Depot on his resume.

Both of them said that even getting an interview was almost impossibly hard.

These are people in different parts of the country, and in different industries.

I think we have a serious problem on our hands with employment that’s probably not getting better any time soon.


Don't want to discredit or invalidate your/your friend's experience, but I do want to offer some hope during these times where a lot of what you see on the web is doomer posts.

From Google: The Jevons paradox occurs when technological improvements increase the efficiency of a resource, but the resulting drop in cost causes demand to rise so much that total consumption of that resource increases rather than decreases.

The message being: it may seem that, because the friction of creating software is now so minimal, there will be no need for software engineers in the future. But historically, when friction has been reduced, we have seen an increase in demand that outweighs the efficiency gain, increasing total consumption. I believe software won't be an exception to this millennia-old pattern.

While what "software engineering" may look like might change, I still believe strongly that people who understand software will actually be in higher demand than ever in the future.


To paraphrase Keynes: in the long run we are all dead. His friend is suffering now, not in some theoretical far-off future where the magic market has been blessed by the invisible hand.

So much pre-employment screening and automated filtering. Getting to the interview stage is like having your paper refereed instead of desk rejected.

Cold applications are very difficult, especially because of the sheer volume of applicants.

Unless I have a referral, it’s such a low probability exercise it’s not worth it for me.

Whenever I see “100+ candidates have applied” on LinkedIn, I just ignore the job posting.


HR, particularly recruitment, was one of the hardest-hit departments when the blood-letting started at Meta in 2022. The hiring freeze that followed didn't help.

When was he looking and when did he get the job?

Two different friends…

First spent most of 2025 looking.

Second started last week.


Giving an LLM write access is insane, but I gave LLMs read-only access to our database and it’s been a huge productivity win.

Executives who wouldn’t take the time to build a report are happy to ask an AI agent to do so.
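For what it's worth, the read-only grant is the easy part to sketch. A minimal, hypothetical illustration using SQLite's `mode=ro` URI flag (standing in for a SELECT-only role on whatever warehouse is actually in use; all names here are made up):

```python
import sqlite3

# Stand-in for a real warehouse: a throwaway SQLite file.
rw = sqlite3.connect("agent_demo.db")
rw.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL)")
rw.execute("INSERT INTO sales VALUES (1, 9.99)")
rw.commit()

# Hand the agent a connection opened read-only via a URI:
# SELECTs work, but any write it attempts raises an error.
ro = sqlite3.connect("file:agent_demo.db?mode=ro", uri=True)
print(ro.execute("SELECT SUM(amount) FROM sales").fetchone()[0])

try:
    ro.execute("UPDATE sales SET amount = 0")
except sqlite3.OperationalError as err:
    print("write blocked:", err)
```

On Postgres, the equivalent would be a dedicated role with only SELECT granted on the reporting schema.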


I would hope that you're running this on a replica so the massive table scans don't choke writes to the main DB. Even then it's possible to bring the replica down and, depending on the technology, still create a problem (WAL piling up, for instance).

Another way to bring prod down, even with read-only access, depending on your atomicity settings: start a transaction and don’t commit or abort it, just leave it dangling. That’s a cute one.
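The dangling-transaction failure mode is easy to reproduce in miniature. A sketch with SQLite (file name hypothetical): session A leaves a write transaction open forever, so session B's writes stall behind it and time out:

```python
import sqlite3

# Session A holds a write transaction and never commits it.
a = sqlite3.connect("dangling_demo.db", timeout=0.5, isolation_level=None)
a.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
a.execute("BEGIN IMMEDIATE")          # dangling: no commit, no rollback
a.execute("INSERT INTO t VALUES (1)")

# Session B tries to write, waits 0.5 s for the lock, then errors out.
b = sqlite3.connect("dangling_demo.db", timeout=0.5)
try:
    b.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as err:
    print("blocked by the dangling transaction:", err)
finally:
    a.rollback()
```

The server-side guard is a timeout that kills idle transactions rather than hoping every client cleans up; on Postgres, `idle_in_transaction_session_timeout` exists for exactly this.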


How do you validate that the reports are correct? What if an executive makes a wrong business decision because the LLM wrote a wrong SQL query?


> What if an executive makes a wrong business decision

I jokingly tell students, "We all know executives are gonna make bad decisions no matter what the data says. Might as well give them the random numbers more quickly."


The same way we've always done it - glance at it and see if the numbers look like they're within an order of magnitude of what looks reasonable.

So what if some numbers in the report are actually an order of magnitude or two outside what you think is reasonable, because something went wrong, but the AI agent reports something that looks normal?

So as long as the LLM only makes errors in the single-digit percentage range, everything is peachy. Make number go up, but not by too much.

If you already know the report's numbers, why are you asking an LLM to generate it?

Usually because you need something vaguely technical and authoritative-sounding to push for a decision you've already made.

How do you prevent your customer data being used for training?

The same way everyone does, by not using free LLMs, but instead paying OpenAI/Microsoft/Anthropic for an enterprise subscription?

I thought the way is not feeding customer data to the LLM.

At some point, the market for markets will demand better regulation than this. But I find it a little absurd that anyone is still putting their money into these markets with story after story of obvious fraud.

> I find it a little absurd that anyone is still putting their money into these markets with story after story of obvious fraud.

Then you do not understand gamblers. They make emotional decisions not rational decisions. That's why the lottery works (i.e. "someone has to win")


They spent a lot on marketing, it doesn't surprise me at all

Occam would disagree.


The most Occam-safe explanation is not insider trading but actually hard work in analytics to rapidly surface intelligence from X and other alternative data sources.


Occam doesn't work in financial markets.

Because if it did, you would print money by following Occam.


But it does work in conspiracy theories, which is what we have here.


I also live about half an hour from Congaree. I wish it was a state park…it’d be on everyone’s list of “coolest state parks.”

It doesn’t have the same wow factor as other national parks, but it’s a special place for sure.

See you at the fireflies!


Microsoft Teams: not as bad as people say, except for this situation.

I have accidentally sent so many messages trying to get to a new line.


It's because Enter does different things at different times in the exact same text box.

Write a code snippet or a block of text. Does [enter] insert a newline, exit the block, or send the message?

What about in a bulleted or numbered list?

And my 2 biggest pet peeves with MS Teams:

1. Trying to edit the first letter in a `preformat block`. It's not possible: the cursor will either exit the block or land on the second letter.

2. Inconsistency with bold/italics. Bold a selection of text, then backspace once. Are you going to type bold or normal? What does Ctrl-B do? Any time you backspace into a bolded section, it converts your typing back to bold, and you cannot disable bold.


I have a very small Kevin Bacon number to the "guy who runs Teams". The message from them is "please use the built-in feedback tool to tell us about these things".


I also sent a LOT of Slack messages prematurely for the same reason. Used to it now, though. The more an interface emphasises the single-line nature of a text input, the better. Multi-line should never submit on enter, single-line always should.
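That rule fits in a few lines. A sketch of it as a predicate (names hypothetical; a real handler would read the event's key and Cmd/Ctrl modifier flags):

```python
def should_submit(multiline: bool, key: str, mod_held: bool = False) -> bool:
    """Enter submits only in single-line inputs; in multi-line inputs
    it inserts a newline and only a modifier chord submits."""
    if key != "Enter":
        return False
    if not multiline:
        return True      # single-line input: Enter always submits
    return mod_held      # multi-line input: only Cmd/Ctrl+Enter submits

assert should_submit(False, "Enter")                    # single-line: send
assert not should_submit(True, "Enter")                 # multi-line: newline
assert should_submit(True, "Enter", mod_held=True)      # chord: send
```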


Same, but it's configurable in Slack, so now I have it set up so that Enter inserts a line break and Cmd+Enter submits the message.


While I haven't changed it, it seems you can configure that behavior in the current version of Teams (Settings > Chats and channels).


They buried the lede…

Participation in sports betting appears to make people about 2x more likely to be delinquent on their loans.

Whether you think that’s “bad enough” is another question, but the article doesn’t make it very clear what the effect size is.


I wonder if this is just selection bias

People who are bad with money are bad with money


Well, at least you'd want to be careful about correlations vs causation, yes.


I mean…keep reading.

For the affected population, it’s around 10 percentage points—or double.

So people who sports bet are twice as likely to be delinquent as those who don’t. I’ll give you that the effect is smaller than I expected.

Here’s the thing though…it’s not like that trend is slowing down. The finalization of prediction markets and the continued normalization of betting as a pro-social behavior are headed to the moon…so we should ask whether it’s causing major side effects.

Smoking makes someone 25x more likely to develop lung cancer. Right now it looks like sports betting makes you 2x more likely to be delinquent on your car loan. At what incidence does that become anti-social enough to try to curb?
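As a sanity check on that arithmetic (the ~10% baseline is an assumption inferred from "10 percentage points, or double" upthread, not a figure from the article):

```python
# Back-of-envelope: +10 percentage points on an assumed ~10% baseline
# is a 2x relative risk.
baseline_delinquency = 0.10                       # assumed non-bettor rate
bettor_delinquency = baseline_delinquency + 0.10  # +10 percentage points

relative_risk = bettor_delinquency / baseline_delinquency
print(relative_risk)  # 2.0, i.e. "twice as likely"
```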


Sports betting is regulated, prediction markets aren't though. That's a pretty stark difference


In the US, the CFTC regulates prediction markets. They are more regulated (at a federal level) than gambling.


There's plenty of regulation around them. But sure, you can ask for even more, or different regulation.

