Hacker News | new | past | comments | ask | show | jobs | submit | dr_dshiv's comments

“The impotence of naive idealism in the face of economic incentives.”

“The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.”

Amen


AI;DR

DigiD in the Netherlands is amazing — it works super well and is central to many services. Then it was bought by an American company — oops.

Why is having a digital ID that captures your biometric information, that is central to accessing many services so amazing? Please elaborate. I guess if it was allowed to be purchased by a foreign actor, it's not so amazing after all...

I agree, it should not be purchasable.

But, it beats going to physical places to show physical ids for the same service.


I understand the convenience factor, but the privacy / potential ethical concerns with digital IDs connected to private sector services like banking transactions, internet access, sim card availability, etc... outweigh the convenience factor in my mind. Too easy for governments to do bad things.

One hour of Claude code— well, I’d guess it would be comparable to an hour of driving an electric car. How to know?

OP says one query uses 0.3 Wh. Driving an electric car for 10 miles = 3,000 Wh, which is roughly 10,000 Wh per hour of driving.

I'm not sure how many queries an hour of Claude code use is equivalent to, but maybe one every 5 seconds, which means an hour of continuous use = 720 × 0.3 = 216 Wh, or ~50x less than an electric car.
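The arithmetic can be sketched in a few lines. Note that the 0.3 Wh per query, the one-query-per-5-seconds rate, and the EV figure are the thread's guesses, not measurements:

```python
# Rough energy comparison: an hour of Claude code vs. an hour of EV driving.
# Assumed inputs (from the thread, not measured):
WH_PER_QUERY = 0.3       # OP's estimate for one LLM query
SECONDS_PER_QUERY = 5    # guessed query rate during agentic coding
EV_WH_PER_HOUR = 10_000  # ~3,000 Wh per 10 miles, at roughly 33 mph

queries_per_hour = 3600 / SECONDS_PER_QUERY            # 720 queries
claude_wh_per_hour = WH_PER_QUERY * queries_per_hour   # ~216 Wh

ratio = EV_WH_PER_HOUR / claude_wh_per_hour
print(f"Claude: {claude_wh_per_hour:.0f} Wh/h, "
      f"EV: {EV_WH_PER_HOUR} Wh/h, ~{ratio:.0f}x less")
```

The ratio comes out around 46x, which rounds to the "~50x less than an electric car" claim above.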

OP has a longer article about LLM energy usage: https://hannahritchie.substack.com/p/ai-footprint-august-202...


Beside the point, but 10,000 Wh per hour is kind of an insane unit. It's 10,000 watts. Or 10 kW if you're really into the whole brevity thing.

My point is that Claude might easily be about 50x more energy intensive than normal ChatGPT prompting.

A coding agent runs near-constantly, so of course it'd require a lot more compute than running even, say, a multi-minute query with a thinking model every hour. How much exactly is pretty hard to calculate because it requires some guesswork, but...

For a long input of n tokens from a model with N active parameters, the cost should scale as O(N n^2) (this is due to computing attention - for non-massive n, the O(N n) term is bigger, which is why API costs per token are fixed until a certain point and then start to rise). From the estimates in [1], it's around 40 Wh for n=100k, N=100B. I multiply by 2.5 to account for Opus probably being ~2.5x larger than gpt-4o, and also multiply by 2 to pessimistically assume we're always close to Opus's soft context limit of 200k (it's possible to get a bigger context for extra cost, but I suspect people compact aggressively to avoid using it). That gets me 7.2 J/t, which at a rough throughput estimate of 20 t/s gives a power draw of 144 W. Like a powerful CPU or a mediocre GPU, and still orders of magnitude lower than a car.

[1] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
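For what it's worth, the chain of estimates multiplies out as follows. Every factor here is the comment's own assumption (the 40 Wh epoch.ai figure, a 2.5x model-size factor, a 2x context factor, and a 20 t/s throughput guess):

```python
# Reproducing the back-of-envelope estimate for agentic Opus usage.
BASE_WH = 40         # epoch.ai estimate for n=100k tokens, N=100B params [1]
SIZE_FACTOR = 2.5    # assumption: Opus ~2.5x larger than gpt-4o
CONTEXT_FACTOR = 2   # pessimistic: always near the 200k soft context limit
TOKENS = 100_000
THROUGHPUT_TPS = 20  # rough decoding speed, tokens per second

energy_wh = BASE_WH * SIZE_FACTOR * CONTEXT_FACTOR  # 200 Wh per 100k tokens
joules_per_token = energy_wh * 3600 / TOKENS        # 7.2 J/t
power_watts = joules_per_token * THROUGHPUT_TPS     # 144 W
print(f"{joules_per_token} J/t -> {power_watts:.0f} W")
```

Sustained for an hour, 144 W is ~144 Wh, the same order of magnitude as the simpler per-query estimate upthread and still far below the ~10 kW of EV driving.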


It is not only about raw power consumption. Comparing driving an electric car with using AI purely in kWh hides a major point: hyperscale datacenters are massively centralised, which brings its own problems; a lot of energy is used for cooling, and water consumption is enormous. Charging electric cars at home is distributed and does not suffer from the same problems as the centralised hyperscalers do. Also, running AI models at home is not much different from a gaming session :)

This is an incredible sequence of assertions, every single one of which is incorrect.

"A lot of energy used for cooling": hyperscale data centers use the least cooling per unit of compute capacity, 2-3x less than small data centers and 10-100x less than a home computer.

"Water consumption is enormous": America withdraws roughly 300 billion gallons of fresh water daily (over 100 trillion gallons per year), of which data center IT loads are expected to grow to 35-50 billion gallons annually by 2028. Data center water demands are less than a rounding error.

"distributed and does not suffer from the same problems": technically correct I guess but distributed consumption has its own problems that are arguably more severe than centralized power consumption.


If you want to try quantum vibecoding, I threw up a site at https://www.haiqu.org where you can mcp with the quantum computer at TU Delft. Free, after you make an account.

I have a PhD student working on EEG audio decoding. We are presently focused on a simpler subtopic: the detection of consonance and dissonance in the brain as it listens to music.

It does make me wonder how advanced remote sensing devices are now. With more advanced hardware, can you remotely capture EEG level signals with any accuracy?

As an aside, I briefly read that as the detection of cognitive dissonance. Which I think would be a much more difficult topic.


Could you link some of your works? I’m very curious about reliability of EEG in terms of consistency between sessions.

Sounds awesome!

Thanks for all the work you do! I had used the terminal just a few dozen times before November, and now I am in the terminal more than any app (even more than the web browser).

It’s common for me to have 15-25 different terminal windows open for using Claude code. I shifted to Ghostty because I was looking for more features.

Unfortunately, none of the features I wanted are available anywhere (though I’ve come to appreciate Ghostty anyway). Here’s what I had wanted:

1. Basic text editing features (ie click to place cursor in the text input field; highlight to delete)

2. Change colors or fonts mid session (to make it easier to find particular windows)

3. Window management and search (eg, a way to find my windows when I lose them and to otherwise control them)

Apparently, it is really hard to develop features like these for terminal emulators. I’d love to understand why…


The next release includes a way to use a command palette to search for and jump between surfaces (windows, panes), which sounds like it partially addresses your third point. I had a small hand in it, by building the initial UI for the Linux version.

IMO this isn’t the job of the emulator. You can do this all in `tmux` for example.

As for editing text, ghostty+tmux most definitely supports editing text with the mouse (even an in-terminal right-click menu!), although it sounds like your intended use of select-to-delete isn't common, so you'll need to do some customization.


What makes you say that isn't the job of the emulator? Sure it is. In fact, tmux itself is a terminal emulator that you just so happen to run inside of another terminal emulator that you want to multiplex.

I’ve been using scroll back search for 15+ years with Terminal.app and iTerm2, and there’s no way that’s not the job of the terminal. You don’t know how good that is until you use it.

Tmux does it better

What am I missing?


For 1, take a look at https://github.com/alex-903/zsh-mouse-and-flex-search. It runs as a zle-line-init hook, so it works in any terminal. A full terminal app implementation would be cleaner, however.

Cool! Isn’t this what cursor initially tried to do before they pivoted? Hence cursor?

Must have been really hard. What was the breakthrough?


Another pattern is:

1. First vibecode software to figure out what you want

2. Then throw it out and engineer it


I made getinput.io to make it easy to edit copy and make comments.

