Hacker News | new | past | comments | ask | show | jobs | submit | spzb's comments

Can't say for sure, but the first commit was only four days ago and has a .gitignore mentioning Claude, so probably yes. https://github.com/hectorvent/floci/blob/main/.gitignore

Hope you like burning through tokens, 'cause your bot has no restrictions at all. It just gave me Python code for calculating pi and then a detailed itinerary for a three-day trip to Vancouver.

Haha, good catch! I focused on the AEC logic but didn't put enough into the general guardrails. Your Vancouver and pi questions, along with someone else's about marine vessels, seem to have slipped through. I also got questions about refining uranium and the like, so I'll be tightening the system prompt to manage this. Appreciate the stress test!

In a normal world, price per terabyte would fall as a consequence of greater storage density and better power efficiency. A world with AI and a brewing oil crisis is not like that.

That is not true. HDD price per TB hasn't dropped for a very long time, since well before the AI craze.

Is this a paid ad placement? I'm seeing a load of breathless "commentary" on Taalas and next to no serious discussion about whether their approach is even remotely scalable. A one-off tech demo using a comparatively ancient open source model is hardly going to be giving Jensen Huang sleepless nights.

Probably being astroturfed by people with a financial interest in it working. The critical commentary in this thread is what to watch for.

If you can’t be bothered to write it, why should I bother to read it?

Because it contains information of value to you ? I mean if it doesn’t, just don’t read it.


To quote another HN comment recently made:

> Using AI to write content is seen so harshly because it violates the previously held social contract that it takes more effort to write messages than to read messages. If a person goes through the trouble of thinking out and writing an argument or message, then reading is a sufficient donation of time.

> However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.


So to a large extent I appreciate that argument. However, I feel it applies more to throwaway comments or sales outreach: writing with low information density. In this case a lot of work went into it, and it would otherwise be lost or inaccessible to me. I am genuinely grateful someone stuck their work in an LLM, said "tidy this up to post", and hit enter.

Claude vibe-coded. No thanks.

New account: check

Dot ai domain: check

Slop project: check

Even sloppier LLM-generated blog post: check


This account is new, but I'm a long-time participant. All three blog posts were written by me, FWIW.


Mathematical proof vs a web app that doesn't actually run? Not much of a contest.

Never mind the fact that AIs of the LLM-variety haven't and aren't going to find solutions to mathematical problems.


> Never mind the fact that AIs of the LLM-variety haven't and aren't going to find solutions to mathematical problems.

This is empirically wrong as of early 2026.

Since Christmas 2025, 15 Erdos problems have been moved from "open" to "solved" on erdosproblems.com, 11 of them crediting AI models. Problems #397, #728, and #729 were solved by GPT-5.2 Pro generating original arguments (not literature lookups), formalized in Lean, and verified by Terence Tao himself. Problem #1026 was solved more or less autonomously by Harmonic's Aristotle model in Lean.

At IMO 2025, three separate systems (Gemini Deep Think, an OpenAI system, and Aristotle) independently achieved gold-medal performance, solving 5 of 6 problems.

DeepSeek-Prover-V2 hits 88.9% on MiniF2F-test. Top models solve 40% of postdoc-level problems on FrontierMath, up from 2%.

Tao's own assessment as of March 2026: AI is "ready for primetime" in math and theoretical physics because it "saves more time than it wastes."

You can disagree about where this is heading, but "haven't and aren't going to" doesn't survive contact with the data.


Indeed. And adding on to this, in a slightly different realm, Donald Knuth's conjecture that he solved with Claude: https://www-cs-faculty.stanford.edu/%7Eknuth/papers/claude-c...

> solved more or less autonomously

So, not autonomously.


q.e.d.

You got really specific to help prove your point. We were generalising about projects built by AI, not web apps that don't run, so that isn't relevant, since LLMs can clearly build fully working projects.

Also, how does getting into the specifics of which type of AI can solve mathematical problems help the comparison here?


You were the one who made the comparison.

Same on a Mac

You've just made all of that up, haven't you?

