Hacker News | past | comments | ask | show | jobs | submit | ezfe's comments | login

A crypto miner needs consent because it burns your battery and CPU power with no benefit to you. This AI model would only be used when you invoke it so the only problem is disk space, which the comment you're replying to acknowledges as a point of issue.

Or some website decides for you that you now want to talk to your local AI chatbot using google chrome prompt api.

https://developer.chrome.com/docs/ai/prompt-api


> This AI model would only be used when you invoke it

You sure about that? How explicit is the invocation? Assuming it’s only run when the user does something (a big assumption), does the user know clicking that summarize button is going to bog their system down and crank up their electricity use?


Indeed. Trusting that it will only be processing the user's queries - as opposed to, say, becoming part of a distributed grid of AI processing nodes - isn't a bet I'd be willing to place much money on.

You would be right if a popup box with two buttons appeared before installing the model and before every time it's used by some site.

Button 1: "Stop the AI now to save X GB of RAM".

Button 2: "Erase all browser AI to save X GB of RAM and Y GB of disk"

This isn't asking for consent; it's simply informing the user which oversized resources are optional and providing an honest way to reclaim them.

The only alternative to that is formal consent.


A crypto miner generates revenue needed to run the service, similar to ads.

9to5mac writes clickbait headlines, but MacRumors violates journalistic integrity. They’ve always been worse; this is nothing new.

LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

To imply it could be conscious requires something else; here the comment uses the word "magic" to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
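The "just matrix math" framing above can be made concrete with a toy sketch. Everything here is hypothetical — a four-word vocabulary and hand-picked weights, nothing like a real model — but it shows how next-token "autocomplete" reduces to a lookup, a matrix row, a softmax, and an argmax:

```python
import math

# Hypothetical toy vocabulary and hand-picked weights (illustrative only).
vocab = ["the", "cat", "sat", "mat"]
# One weight row per current token; each column scores a candidate next token.
W = [
    [0.1, 2.0, 0.1, 0.3],   # after "the": "cat" scores highest
    [0.2, 0.1, 2.5, 0.1],   # after "cat": "sat" scores highest
    [1.8, 0.2, 0.1, 0.4],   # after "sat": "the" scores highest
    [0.5, 0.3, 0.2, 0.1],
]

def softmax(xs):
    # Convert raw scores into a probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def next_token(token):
    # The whole "model": pick the weight row, softmax it, take the argmax.
    probs = softmax(W[vocab.index(token)])
    return vocab[probs.index(max(probs))]
```

Training (including RLHF) only changes the numbers in `W`; the mechanism at inference time stays the same.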



A human brain is not conscious on its own.

Many things the human brain does don’t rise to the level of conscious awareness.

It remains to be seen whether a human brain can be conscious in a jar. If it can, then I’d still argue that some sub-unit of the whole brain is not conscious on its own. Similarly, a GPU running a GPT probably isn’t conscious, but there may be some scale of GPUs running software that could give rise to consciousness as an emergent ability.

GPTs have exhibited emergent abilities as scale increased dramatically.


It sounds like you believe in magic, then? What is this "something else" to consciousness that can't be done with sufficiently advanced math?

Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?

This is definitely complicated—I’m not a neuroscientist but worked for some and married one, so I’ve heard quite a few entries from the genre of how our brains fool ourselves or make our conscious experience seem more coherent and linear than it actually is—but the big ones I see are the inability to learn from experience or have a generalized sense of conceptual reasoning.

For the latter, I’m not just thinking about the simple “count the r’s in strawberry” things companies have put so much effort into masking but the way minor changes in a question can get conflicting answers from even the best models, indicating that while there’s something truly fascinating about how they cluster topics it is not the same as having a conceptual model of the world or a theory of mind.

This is the huge problem in the field: all of these companies would love to have a model which is safe to use in adversarial contexts because then the mass layoffs could begin in earnest, but the technology just isn’t there.

This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.


> minor changes in a question can get conflicting answers from even the best models

Humans are notorious for doing this.


Not unless you’re referring to significant mental illness, no. Answers may vary between individual people if, say, I ask around for health advice, but if I ask the same doctor, they’re not going to flip the answer based on whether I use medical or wellness-influencer phrasing. That consistency lets them build a reputation other people can rely on.

This especially applies to mistakes: the junior developer who drops a database by mistake is unlikely to ever do that again, whereas the same AI companies’ models keep doing that to a small but non-zero number of customers because they don’t have that higher-level learning process or anything like fear of consequences.


Humans can't reliably subitize more than five-ish objects, while chimps can actually do this task better than us. That's our "can't count the R's in strawberry" (a task flagship models can now do reliably, including general letter counting).

https://en.wikipedia.org/wiki/Subitizing
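For what it's worth, the letter-counting task itself is trivial once a model writes code (or calls a tool) instead of guessing token-by-token; a minimal sketch:

```python
def letter_count(word: str, letter: str) -> int:
    # Exact counting, which token-based sampling historically fumbled
    # because the model sees subword tokens, not individual letters.
    return word.lower().count(letter.lower())

print(letter_count("strawberry", "r"))  # → 3
```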


That’s not a valid analogy: humans reliably perform that task billions of times daily. It’s still routine to find cases which reveal that while models may have improved on some basic tasks (or learned to call a tool) there isn’t a deeper understanding of the underlying concept or the problem they’re being asked to solve.

And AI agents reliably-ish do tasks billions of times a day that humans struggle with, namely regurgitating information at incredible rates across wide breadths of topics. I see it as merely a matter of degree, not category.

How do you measure "deeper understanding" in humans? You usually do it by asking them to show their work, show how the dots connect. Reasoning models are getting there, and when they do, I'm sure the goalposts will move yet again.


Brains run on physical processes like dendritic integration, electromagnetic fields, and chemical diffusion; it isn’t binary logic. Brains are a different substrate, and metabolic power efficiency affects cognition too.

I came here to say this. But your neurons are faster than mine.

They stopped being autocomplete years ago with RLHF.

I am very familiar with how they are trained. That doesn't change the fact that they are matrix math based on pre-trained weights. Something like RLHF makes those weights more effective but it doesn't change the fact it's autocomplete.

This is reinforced (pun not intended) by the continued issues with things like "should I walk or drive to the car wash"


Is someone saying otherwise here?

YahooTube’s comment is heavily implying that the market systems in place stifled invention via patent restrictions.

Yes, but isn’t that working as intended?

As we can see by this thread… It’s heavily debated whether the intentions we should be following are those of long-dead forebears or the will of the people, and in the latter case, which people.

What do you mean by that?

> Gawiser filed a “writ of execution” (another $240 in court fees) just yesterday, which would allow Texas law enforcement to seize and sell off enough of Tesla’s property as would be required to pay the judgment against them.

Since they probably won't do business with anyone who owns that car anymore, take a battery and run.


The assumption that telemetry is not allowed by GDPR is flawed.

https://gdpr-info.eu/recitals/no-26/


Anonymous telemetry is allowed – and I don’t have a problem with that.

Unfortunately there's no such thing as anonymous telemetry. There are multiple techniques to re-identify scrubbed data, and some [seemingly innocuous] data is inherently identifying.

https://techcrunch.com/2019/07/24/researchers-spotlight-the-... | https://www.eff.org/deeplinks/2023/11/debunking-myth-anonymo...
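A minimal sketch of why "scrubbed" data can still identify people, using entirely hypothetical records and the standard k-anonymity measure (k = 1 means at least one person is uniquely re-identifiable from the remaining fields):

```python
from collections import Counter

# Hypothetical "anonymized" telemetry: names removed, but ZIP code,
# birth year, and gender kept as quasi-identifiers.
records = [
    {"zip": "02139", "birth_year": 1985, "gender": "F"},
    {"zip": "02139", "birth_year": 1991, "gender": "M"},
    {"zip": "02139", "birth_year": 1985, "gender": "M"},
    {"zip": "94103", "birth_year": 1985, "gender": "F"},
]

def k_anonymity(rows, keys):
    """Smallest group size sharing one combination of quasi-identifiers."""
    groups = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(groups.values())

# Every record's (zip, birth_year, gender) combination is unique → k = 1,
# so each "anonymous" record maps back to exactly one person.
print(k_anonymity(records, ["zip", "birth_year", "gender"]))  # → 1

# Dropping identifying fields raises k (here gender alone gives k = 2).
print(k_anonymity(records, ["gender"]))  # → 2
```

Doing anonymous telemetry well means keeping k high, which usually requires coarsening or dropping exactly the fields that make the data interesting.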


I disagree with your premise (I’ve worked on anonymous telemetry and it can be done well.)

Not every company will do it well. Simpleanalytics.com seems to be one of the better ones.

But it’s still way better than the alternatives which don’t even try to be anonymous.


Absolutely is

Web-based AI tools are remarkably helpful these days since they no longer try to do math themselves and instead write Python to do it.
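A rough sketch of that pattern (the function name and the sample expression here are hypothetical): the model emits an arithmetic expression as code, and the host evaluates it exactly instead of trusting the model's token-by-token arithmetic.

```python
import ast
import operator

# Whitelisted arithmetic operators only; anything else is rejected.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr):
    """Evaluate an arithmetic-only expression produced by the model."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

# Hypothetical model output for "what is 12.5% of 4,096?":
model_code = "4096 * 0.125"
print(safe_eval(model_code))  # → 512.0
```

Real tools run model-written code in a sandboxed interpreter rather than an AST walker, but the division of labor is the same: the model decides *what* to compute, the host computes it.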

I used mine constantly in high school (10 years ago).

Same. But I agree with the parent, I always got the vibe it was a giant racket between public schools and TI. Writing code for it was probably cool back in the 80s-90s but it's so dated now.

I used mine in high school (20 years ago) and still use one today.

Same except mine was over 30 years ago (an OG TI-85). Still on my desk, still use it almost every day for something or other.

I don't know how the TI-85 compares to the other models without looking it up, but there's a forever soft spot in my heart for mine. It got me through a comp sci degree and still works flawlessly today.

I use mine constantly in high school (now).

> disable the eSIM card in the vehicle

Disabling a SIM card almost certainly means no connection to the network.


Your phone still connects to the cellular network without a SIM card or eSIM. It is mandated by law in the US. The only way to prevent your phone from connecting to, pinging, or being pinged by the cellular network is to put it in airplane mode.

(https://grapheneos.org/faq#cellular-tracking)

Whether a SIM is enabled/disabled/installed is irrelevant. The question is whether this feature is Airplane Mode or just disabling cellular data.


Ah, I thought you were likening it to the disable cellular data button which does not disconnect the cellular network.

Instead you are referring to the fact that the radio may remain on even if it has no active SIM card.

Given that the primary concern of connected vehicles is changes over time and manufacturer control, I don’t see any reason to make that distinction for most people.


