Hacker News | past | comments | ask | show | jobs | submit | YeezyMode's comments

It likely helps to take in the cultural moment or context around the statements, or the nature of the statements you're making. It's fine to state a fact, but it's also helpful to make it clear whether you are saying "it is what it is", "I wish things were different", or "I am doing X, Y, and Z to try to help, and I recommend others do the same". Jokes are an exception, and I think misunderstandings are fine there. But it's unreasonable to expect that, on the Internet, people will "check to see if you are serious".


The comment was serious. It didn't feel the need to take a side.

The DoD declaration reflects a certain context: we had the Patriot Act, a whistleblower exiled in Russia for defending the constitution, etc. We didn't need to wait for the MAGA movement to expect such a statement from the DoD.

If Hacker News threads turn into mouthpieces for opinions, then there's no use posting anything here.

The comments are naively claiming that commercial agreements make Anthropic right, as if contracts carried more weight than the constitution.

I would rather call out as "virtue signalling" an entity in the Valley simply standing for something aligned with civil liberties, and using it as a political stance in what nobody would deny is an unfortunately polarized political climate.

What to make of OpenAI, then? Should I give my opinion that they took a falsely constitutional stance, or that they simply made a for-profit move to land a juicy government contract while letting the public think they kept the same red lines as their main competitor?

Or just stick to the facts: the DoD will, as always, get away with its liberticidal demands and get what it wants, because the other big tech companies will fall in line.


Would you say that this in itself is due to how incomplete human reasoning is in the first place? That as a result, our ideas of logic and of what perfect logic looks like are bound to fail? Or are you saying that the purest mathematical representations of logic cannot scale to a point where they can model and predict real-world relationships successfully?


The second. Mathematical logic thrives on precision, clear definitions, and unambiguous axioms, but real-world systems are often marked by vagueness, uncertainty, and dynamic change.

Gödel’s Incompleteness Theorems also demonstrate that in any sufficiently powerful mathematical system, there are true statements that cannot be proven within the system. This implies that no matter how refined a logical system you devise, it will invariably be incomplete or inconsistent when grappling with real-world phenomena.
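For reference, the first theorem is usually stated along these lines (a standard textbook formulation, not anything specific to this thread):

```latex
% First Incompleteness Theorem (standard statement):
% if $T$ is a consistent, effectively axiomatized theory that
% interprets basic arithmetic, then there is a sentence $G_T$ with
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
```

That is, $G_T$ is undecidable within $T$ itself, even though (under the standard interpretation) $G_T$ is true.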


Gödel didn't say anything about real world phenomena. He was talking about formal languages and mathematics.


Of course. But if you truly cannot model every true statement in any formally devised system, then you are by definition going to have to reject valid rules that your logic cannot verify, if you intend your system to be perfectly logical.


I believe that's right but only in a deductive setting, and as long as there's a requirement for soundness. Inductive and abductive logical inference are not sound and they are very useful for real-world decision-making. But that is a developing field and there are still many unknowns there.
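To make the contrast concrete, here is a toy sketch of abductive inference: given an observation, pick the hypothesis whose predicted effects best explain it. The hypotheses and scoring rule are invented for illustration; real abductive systems are far more subtle.

```python
# Toy abductive inference: choose the hypothesis whose predicted
# effects best overlap the observed facts (Jaccard similarity).
# All hypothesis names and effect sets below are made up.

def abduce(observation, hypotheses):
    """Return the hypothesis that best explains the observation."""
    def score(name):
        predicted = hypotheses[name]
        return len(observation & predicted) / len(observation | predicted)
    return max(hypotheses, key=score)

hypotheses = {
    "rain":      {"wet_lawn", "wet_street", "clouds"},
    "sprinkler": {"wet_lawn"},
    "flood":     {"wet_lawn", "wet_street", "wet_basement"},
}

observed = {"wet_lawn", "wet_street", "clouds"}
print(abduce(observed, hypotheses))  # -> 'rain'
```

Note that this inference is not sound in the deductive sense: "rain" is merely the best available explanation, and a new observation (say, a dry street) can overturn it, which is exactly the defeasibility that makes such reasoning useful in the real world.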


It isn't misinformation. Here is a study that basically rebuts every implication of the study you linked: https://twitter.com/VirusesImmunity/status/17063329657922727...

Another link questioning the process behind the entire study itself: https://www.sciencemediacentre.org/expert-reaction-to-an-ana...


Assuming the scenario where ISPs end up blocking access to the archive, the fact that this is happening alongside rapid advances in LLMs is pretty insane. Comparison with historical snapshots of the web is probably one of the best ways to combat disinformation and misinformation generated using AIs. There are of course other risks and dangers that people have talked about re: restricted access to knowledge and precedents for future publications, but this one in particular seems especially dangerous right now - almost like a setup for very clear and obvious disasters surrounding propaganda, memes, and social chaos.
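As a small illustration of what that comparison workflow relies on, historical snapshots can be listed through the Internet Archive's public CDX API. This sketch only builds the query URL (parameter names follow the public CDX interface; no request is made here):

```python
from urllib.parse import urlencode

def cdx_query(url, from_ts, to_ts, limit=10):
    """Build a CDX API query listing archived snapshots of a URL.

    Timestamps are yyyyMMddhhmmss prefixes, so "2015" means
    "anything from 2015 onward" for the lower bound.
    """
    params = urlencode({
        "url": url,
        "from": from_ts,
        "to": to_ts,
        "output": "json",
        "limit": limit,
    })
    return f"https://web.archive.org/cdx/search/cdx?{params}"

print(cdx_query("example.com", "2015", "2020"))
```

If ISPs blocked web.archive.org, this kind of lookup (and every fact-checking tool built on it) would stop working for affected users.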


This is a possibility that shouldn't be dismissed. Trees use mycorrhizal networks to communicate and have been around for a very long time on this planet. They model the environment and use either a set of micro-decisions or a set of larger, slower moves that are made across longer timescales than humanity is used to. You can argue whether they possess sentience or not, but when discussing models, decisions, and consequences - trees seem to act with plenty of coordination and understanding and self-interest.


I've seen kids respond the same way, and I totally did not see the disparity in reactions until you pointed it out. It definitely looks like people who have spent years priming themselves to find a singularity, intelligence, or consciousness at every corner are far more susceptible to treating the recent advances as parallels to the conscious experience of humans. I read a highly upvoted post on the Bing subreddit titled "Sorry, You Don't Actually Know the Pain is Fake" that argued Sydney could be just like a brain, experiencing conscious pain. It was disturbing to see the leaps the OP made, and the commenters who agreed as well. That said, I do think we should avoid being purposefully toxic to a chatbot, if only because of the consequences for our own spirit and mind.


I've rarely come across photography online that completely transports me. Great work on the Baikal blog!


This is beautiful.


Straight from Wikipedia: Kinopio, which is a mixture of the word for mushroom (“kinoko”) and the Japanese version of Pinocchio (“pinokio”). Those blend to be something along the lines of “A Real Mushroom Boy.”


> “A Real Mushroom Boy.”

Sometimes that's how I feel about myself


It seems so unlikely to me that large swaths of people who act this way can make it to a place in any industry where they're surrounded by more resourceful peers. How does this even happen? I just want to make sure that I'm not creating a view out of reading exaggerated anecdotes, and that this actually reflects reality.


College degrees or plain salesmanship conferring an inappropriate amount of merit. This is the unfortunate corollary of impostor syndrome: some people really are impostors.

I think if we want to start pushing more people into trades and de-emphasize college education, we should (1) make it much harder to graduate from college and (2) make college much cheaper, so that taking that risk isn't financially ruinous. Then a student can explore a college path and, if it doesn't work out, pivot to something more suited to their aptitude and skills while remaining a productive member of society.


Worth reading this book: https://en.wikipedia.org/wiki/Peter_principle

Combine that with the Pareto Principle and you start to get a fairly accurate picture of "why the world is the way it is." And this applies to all industries, not just tech.

If you look at it through a nihilist lens, it can bum you out quick. But, if you look at it like "I was given the gift/responsibility to do this well" (sans God-complex, elitist "useless eaters" attitude), it makes it a bit more fun/tolerable.


I've personally encountered this quite a bit circling around tech; in most of the cases I've seen, it was due to _deep-seated_ insecurity more than anything else. Usually it corresponds with folks who find themselves in a position they're a poor fit for but for whatever reason can't move on. They bounced into a position they can't exit, and their 'value add' is applying this sort of tactic under the guise of being externally helpful.

