This article tries to paint the fact that the US struck a military base as proof that it also struck a school, with no evidence connecting the two events. The title is pure clickbait.
Man, the overwhelming majority of your comments over the past several months are you whining about AI or being extremely salty about anything remotely AI related. You bash AI content, people who use AI to make cool stuff, AI companies, people who say anything positive about said companies... I really wonder what exactly you think your negative attitude contributes to these discussions.
It contributes far more than yet another low effort AI-generated Show HN on top of the dozens already submitted every day.
If you think you made "cool stuff" with AI, great, enjoy it, but please keep it to yourself: anyone else can generate the exact same thing if they want it, you are not special, and you are actively drowning out real human effort and passion.
I would say this doesn't actually work well for UX, because people are more likely to know their street address and city than their zip code. Personally, every time I've moved over the years, it took a few weeks for me to internalize my new zip code.
I have no idea what I'm doing differently because I haven't experienced this since Opus 4.5. Even with Sonnet 4.5, providing explicit instructions along the lines of "reuse code where sensible, then run static analysis tools at the end and delete unused code it flags" worked really well.
I always watch Opus work, and it is pretty good at "add code, re-read the module, realize some pre-existing code (either code it wrote earlier, or code that was already there) is no longer needed, and delete it", even without explicit prompts from me.
>> Even speaking from a pure statistical perspective, it is quite literally impossible for "AI" that outputs world's-most-average-answer to be better than "most engineers". In fact, it's pretty easy to conclude what percentage of engineers it's better than: all it does is it consumes as much data as possible and returns the statistically most probable answer
Yeah, you come across as someone who thinks that the AI simply spits out the average of the code in its training data. I don't think that understanding is accurate, to say the least.
>> I no longer review every line, but I also have not yet gotten to the point, where I can just "trust" the LLM.
Same here. This is also why I haven't been able to switch to Claude Code, despite trying multiple times. I feel like its mode of operation is much more "just trust the generated code" than Cursor's, which lets you review and accept/reject diffs with a very obvious and easy-to-use UX.
Most of the folks I work with who uninstalled Cursor in favor of Claude Code switched back to VSCode for reviewing stuff before pushing PRs. Which... doesn't actually feel like a big change from just using Cursor, personally. I tried Claude Code recently, but, like you, preferred the Cursor integration.
I don't have the bandwidth to juggle four independent things being worked on by agents in parallel, so the single-IDE "bottleneck" is not slowing me down. That workflow seems to work a lot better for heavy-boilerplate or heavy-greenfield stuff.
I am curious whether, if we refactored our codebase the right way, more small/isolatable subtasks would become parallelizable with lower cognitive load. But I haven't found that yet.
Elixir has a LangChain implementation of the same name. In my opinion, as a user of both the Python and Elixir versions, the Elixir one is vastly superior, and more reliable too.
This agentic framework can co-exist with LangChain if that's what you're wondering.
I went down this path a bit the other night, curious what OP's answer is. My mental model was that they could be complementary: Jido for agent lifecycle, supervision, state management, etc., and LangChain for the LLM interactions, prompt chains, RAG, etc. It looks like you could do everything in Jido 2.0, but if you like or are familiar with LangChain, it seems they could work well together.
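To make the split concrete, here's a minimal sketch of what "Jido owns the action/lifecycle, LangChain owns the LLM call" could look like. Treat it as pseudocode: the `Jido.Action` callback shape and the `LLMChain`/`ChatOpenAI`/`Message` module names are assumptions from memory and may not match the current versions of either library.

```elixir
defmodule MyApp.SummarizeAction do
  # Sketch: a Jido action whose job is plumbing/lifecycle;
  # the actual LLM interaction is delegated to an Elixir LangChain chain.
  use Jido.Action,
    name: "summarize",
    description: "Summarize text via a LangChain LLM chain"

  alias LangChain.Chains.LLMChain
  alias LangChain.ChatModels.ChatOpenAI
  alias LangChain.Message

  @impl true
  def run(%{text: text}, _context) do
    # LangChain handles the prompt + model call...
    {:ok, chain} =
      %{llm: ChatOpenAI.new!(%{model: "gpt-4o-mini"})}
      |> LLMChain.new!()
      |> LLMChain.add_message(Message.new_user!("Summarize:\n" <> text))
      |> LLMChain.run()

    # ...while Jido supervises the agent that schedules/retries this action.
    {:ok, %{summary: chain.last_message.content}}
  end
end
```

The appeal of this split is that each library stays in its lane: OTP-style supervision and agent state live on the Jido side, and prompt chains/RAG stay portable LangChain code.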