Not really, no, at least not with Claude. It seems to already understand the UE5 way of doing things, but there were a couple of edge cases for new features beyond its cutoff date where I had to refer Claude to the UE5 documentation. Once it read the documentation, however, it understood and continued without issue. Also, for any compilation errors, I just copy and paste the error messages into Claude Code and it usually fixes them immediately.
Russian here. Speaking of Dugin, I'm not a fan of his ideas about an archaic way of living, and I don't personally know anyone who likes them. I consider him a hypocrite who talks the talk but doesn't walk the walk – unlike German Sterligov, who actually lives without modern technology in a village that he founded and built himself.
However, there's a sad fact about Dugin. He experienced a personal tragedy – his adult daughter was murdered, likely for political reasons. So, while I'm not a supporter of his ideas, I can never judge him for them, considering what he went through. Maybe advocating for these ideas is his way of surviving his tragedy.
As a Russian how would you characterize Dugin? Over here he’s usually considered a fascist (which I don’t think is technically accurate, though he is authoritarian) or grouped with neo-monarchist type “trads.”
I’ve always had a dark view of him. His writing gives me a veiled nihilist vibe. He seems like someone who is bitter about something. He lives in a fantasy world and wants people punished if it can’t be real.
Not a unique thing. This is the dark side of many idealists and romantics. The more someone lives in a dream, the more they tend to hate the real world.
I agree that the attempt on his life is sad and I’m sure further radicalized him, though most of his ideas predate that. Who do you think it was? I always had three possibilities: Ukraine, anti war Russians, or the Putin regime itself for some reason.
I wonder, is an electronic system capable of doing anti-entropy work on itself (the way life does) necessarily AGI-complete? It turns out that there are many complex behaviors (like drawing or generating sensible text) that don't require AGI-completeness.
(Stumbled upon the answer while formulating the question – no, being capable of doing anti-entropy self-maintenance work isn't AGI-complete because there's plenty of life that's perfectly capable of that without being generally intelligent.)
Unlike the elephant / mammoth pairing, there aren't any marsupials similar to the thylacine (as far as I know). What surrogate options did you have in mind?
Thanks to ChatGPT, I'm much less hesitant to delve (haha) into unfamiliar topics. "Hi! I'm a beginner programmer. I'm interested in learning Idris but I know next to nothing about dependent types. Could you explain them in a couple of sentences?"
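(If a concrete taste helps: the standard first example of a dependent type is a list whose length is part of its type. Here's a minimal sketch in Lean 4 – Idris's version looks very similar, and all names here are my own, illustrative choices:)

```lean
-- Vec α n: a list of α whose length n lives in the type itself.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- head only accepts vectors of length n + 1, so calling it on an
-- empty vector is a type error, not a runtime crash.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x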
Then, after the answer, I ask follow-up questions. I also try to check the answers against other sources, e.g. the docs or Wikipedia, to spot hallucinations.
Steam works in Russia, but it doesn't accept payments. Free and previously owned games can be played, but new ones cannot be bought. The usual way to circumvent this is gifting from another account that has a working payment method, e.g. a Georgian or Armenian bank card, or simply playing the purchased games on that other account.
I feel that long, unexplainable, hard-to-comprehend proofs may become commonplace when we get to AIs that are capable of discovering new proofs on their own.
I think that’s very likely and in fact even necessary to advance the field. Proofs won’t even be stored in a human-readable format. It will just be a collection of data necessary for an automated theorem prover (or verifier in this case) to connect the truth of a formal statement back to its axiomatic basis.
I don’t see any problem with that. Most of the human work will shift to designing the meta-algorithms that efficiently search “proof space” for interesting results.
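(To make "proof as data for a verifier" concrete: in today's proof assistants this is already literally true. A toy sketch in Lean 4 – the stored object is a term the kernel checks, with no obligation to be readable:)

```lean
-- The "proof" is just a term the kernel verifies; rfl computes
-- both sides and checks they are definitionally equal.
theorem two_plus_two : 2 + 2 = 4 := rfl

#print two_plus_two  -- displays the stored proof term, not prose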
Abstract math is a human hobby. Machines doing it on their own is an interesting idea, but not satisfying to humans. You may as well conjecture whatever you want and not worry about proof at all.
How so? You can just assume that some conjecture is true and proceed with your work. (For some conjectures that is acceptable, for others, not so much.)
An interesting proof would have to show something more than just the truth: maybe it's constructive and shows how to compute something, or it shows a connection between fields previously seen as barely related. Or it uses a new trick or language that could be applied elsewhere. But I think all that requires the proof to be a bit more transparent than just having a formally verifiable representation.
I expect that we will move in a direction where all of the proof – not just individual steps – is done by computers. The only human input remaining will be to ask what to prove.
It is quite easy to come up with questions that have well-defined answers but insanely complex solutions. Nobody guarantees that just because the question is short, and the answer as well, the path from A to B will be short too.
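A toy illustration in Python (`collatz_steps` is my own name for it): the Collatz rule fits in one line, yet the question's size says nothing about how long the path to the answer is.

```python
# The Collatz rule: halve if even, else 3n + 1. The question
# "how many steps until n reaches 1?" is tiny; the path may not be.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # a two-digit start already takes 111 steps
```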
Yes, the same way computers generate strategies for games that seem incomprehensible at first, but after further analysis humans discover deeper underlying principles that went unnoticed.
To me, both Outer Wilds and The Witness were absolutely savourable, and I made a point of never looking up anything on the Internet while playing these games. That would rob me of the feelings these games were designed to impart.
Tunic is so good. Although I've yet to complete it, I think I've gotten about 90% of the way there after runs of 20-50%. Maybe I'll finally complete it this year, because I adore the aesthetic, the vibe, and that little fox.
The game definitely opens up and becomes deeper and more mysterious than I thought it would, super cool.