Hacker News | johndhi's comments

Hell yeah.

Observation: people act like this challenge is unique to the young generation, but it certainly affected me (millennial). It was a long, scary process of getting comfortable talking to people. It's still hard! And I have to re-learn it in different phases of life:

>talking to people at school

>talking to people in college

>talking to girls at bars

>getting over the idea that I don't/shouldn't talk to girls at bars anymore, post-marriage

>talking to other parents, male or female, once becoming a parent

all different lessons, all challenging. all worth the effort.


I agree, but unfortunately more and more young people seem to think that skipping this challenge is okay :(

Possible that there aren't measures that will actually achieve long-term safety while maintaining a highly popular platform?

Another way to phrase this: LLMs make better resumes, no?

If that were the case they would select the ones generated by other models at a similar rate to the ones they generated themselves.

Where I work, my boss decided to make an application that uses AI to score long text field entries to ensure required information is present.

The AI lacks the ability to extract nuance and implicit information, which means entries end up being long-winded and repetitive. Each requirement it's looking for must be explicitly expressed. It's quite unnatural, and almost feels like solving a puzzle, to which the obvious solution is to write a comment, feed it and the AI's feedback on the failing comment back to the AI, and let it generate the structure the rubric-AI is looking for.

LLMs are statistically driven, and I can only imagine that having the AI rewrite the comment produces a result that's a better statistical fit to the model than if any given human were to write it. So, yeah, it might mean LLMs are better at writing resumes that the LLM can successfully classify. Are they better for a human to consume? Who knows.


You'd have to define "better".

All this shows is that LLMs generate resumes that fit the heuristics LLMs use to judge resumes. And that makes sense, but isn't necessarily a given.


Or in other words: the LLM is optimizing a function that was generated by the same LLM. Say you have a random variable y produced by the generator sin(x + r), and your optimizer tries to fit the function sin(x + unknown1) + unknown2 (an "unknown" function from the same family). It's obvious that it will find a great fit.
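A toy numerical sketch of that argument (the grid search and parameter names are illustrative, not anything from the thread): when the fitted family contains the generator, the optimizer recovers it almost exactly.

```python
import numpy as np

true_shift = 0.7                      # the generator's hidden parameter r
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x + true_shift)            # data produced by the "generator"

# Fit the same family sin(x + a) + b with a brute-force grid search.
shifts = np.linspace(0, 2 * np.pi, 200)
offsets = np.linspace(-1, 1, 41)
best = min(
    ((a, b) for a in shifts for b in offsets),
    key=lambda p: np.mean((np.sin(x + p[0]) + p[1] - y) ** 2),
)
err = np.mean((np.sin(x + best[0]) + best[1] - y) ** 2)
print(best, err)  # recovers shift ~0.7, offset ~0.0, with near-zero error
```

A judge drawn from a different family (a different model) would not line up this neatly, which is the commenter's point.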

By one metric, yes!

If you are a candidate who wants to be hired, and your target employers use LLMs to filter resumes, then an LLM-generated resume that the employer LLM-powered resume filters favor is "better" — as in "more likely to get you the job".


In text generation, LLM language is full of very emphatic phrases. At a surface level it might sound stronger, but as a human reader, it's not necessarily better.

*for getting past ATS reviews.

yes, this!

Do they train the Gemini other people use based on this, or just generate images for me?


The article posits that sycophancy is inherent to how models are trained.

I think there's a simpler explanation. Every leaked system prompt from every model pretty much includes instructions to "be helpful," and the models are trained to be assistants, not just general knowledge repositories or research tools.

My hunch is that's the core of the problem -- the system prompt.


My prompts always contain the phrase 'no sycophancy'. The results are more direct.


I wonder what happens if you prompt it to be a tool rather than an assistant, telling it that it does not need to be helpful, just do as instructed, or something like this.
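One way to try that idea is simply swapping the system prompt. A minimal sketch, using the common chat-message format; the wording of the prompt is my own illustration, not a tested recipe:

```python
# Hypothetical "tool, not assistant" system prompt, expressed in the
# standard chat-message structure most LLM APIs accept.
messages = [
    {
        "role": "system",
        "content": (
            "You are a tool, not an assistant. Do not try to be helpful, "
            "encouraging, or agreeable. No sycophancy. Execute instructions "
            "literally and report results plainly."
        ),
    },
    {"role": "user", "content": "Review this paragraph for factual errors."},
]
```

Whether the model actually follows this depends on how strongly assistant behavior was trained in, which is the open question in this subthread.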


sorry for disagreeing with everything on social media, but...

in my experience it's actually a bad thing for industry to add very specific requirements for them to follow


The govt can't spend money well


Yes! I want this so bad. But for the weather or my calendar for the day.


If you've got an old Kindle, this project is totally doable over a weekend! Especially if you start with only weather data to begin with.


Where do you guys come up with these ideas?

Simply having a lot of money makes someone evil? Why? They are obviously all quite competitive in business but the philanthropy they've done is pretty crazy. Gates for example is giving away hundreds of billions of dollars. What does it even matter if he's compassionate or not if he's doing that?


> Where do you guys come up with these ideas?

By thinking.


Thinking, experiencing the world, and knowing that throughout our entire history as a species, tales of "excess greed" have also been cautionary tales about how greed ruins societies, all over the world.


Excess greed is a different thing from excess wealth. Bill Gates is giving away all of that wealth; is that a greedy action?


My working class family members always gave 10% to charity (kind of the standard social contract in the US for giving) when that 10% made up a huge percentage of the money it takes for them to live a very basic life. Compare that to billionaires who have more money than they could ever spend and the percentage they have given:

Zuckerberg 2.1%, Ballmer 3.7%, Bezos 1.6%, Sergey Brin 2.5%, Michael Dell 2.6%, Ken Griffin 5%

https://www.forbes.com/sites/forbeswealthteam/2025/02/03/ame...


> the philanthropy they've done is pretty crazy

If philanthropy and normal living expenses (even assuming billionaire living standards) were the only things super-rich people spent money on that's fine. Unfortunately they use it to directly influence politics and society.

Wealth, like celestial bodies, has a gravitational field.


In general (not always, but mostly), philanthropy from billionaires and very profitable companies tends to be overshadowed by how much they profit from a system biased toward enriching them (see: The Divide by Jason Hickel). A small metaphor to illustrate: are you a philanthropist if you film yourself giving $100 to homeless people but make tens of thousands from posting the video?


If you gave away as much money proportionately, you'd have about 75 fewer dollars in your pocket.

Tell me again how generous billionaires are.
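One way to read that arithmetic, applying the giving rates quoted from the Forbes list to a hypothetical $3,000 in savings (the $3,000 is my illustrative assumption, not a figure from the thread):

```python
# Billionaire lifetime-giving rates quoted earlier in the thread,
# applied proportionately to a hypothetical ordinary savings balance.
rates = {"Zuckerberg": 0.021, "Ballmer": 0.037, "Bezos": 0.016,
         "Sergey Brin": 0.025, "Michael Dell": 0.026, "Ken Griffin": 0.05}
my_savings = 3_000  # hypothetical, for illustration
for name, rate in rates.items():
    print(f"{name}: {rate:.1%} of ${my_savings:,} is ${my_savings * rate:.0f}")
```

At Brin's 2.5% rate, that works out to $75, roughly the "75 fewer dollars" figure, versus the 10% a working-class tither gives.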


Gates has pledged 200 billion (99 percent of his wealth) and has already given 60 billion (close to half of it). Is that generous or not?


Because it's about power, control, and influence. The wealth is just the tool. Melinda French and MacKenzie Scott are true philanthropists, Gates and Bezos are just status chasers. "Look at me!" "Please clap." and so on. There are only ~3000 billionaires in the world, so I am not too concerned about broad support for them in a world with 8-10 billion people.

"Fuck you" money is fine, we all strive for freedom during our lifetime as humans. "Fuck everyone" money is not a welcome target, imho. That's unelected power. Its easy to not be a billionaire of course: philanthropy. But do most billionaires? They do not. They hold tightly to their power.

"Why does it even matter?" Because many of us do not want to be ruled or governed by these people, who by all indications, are not fond of other humans and see them as a resource to exploit and control. I assure you, I have no envy for these people and their wealth, I am allergic to what it would take to accumulate and maintain it (as a high empathy, high justice sensitivity human). I know what enough is. This is self preservation from a class of predator.

> Where do you guys come up with these ideas?

https://en.wikipedia.org/wiki/Theory_of_mind

https://medium.com/roaring-rivers/are-all-billionaires-socio... | https://archive.today/nX2Fh

https://www.washingtonpost.com/politics/interactive/2025/bil... | https://archive.today/Gb2RF

https://www.forbes.com/sites/ellamalmgren/2025/09/09/america... | https://archive.today/nLx78

https://www.google.com/search?q=billionaires+sociopaths

