A crypto miner needs consent because it burns your battery and CPU power with no benefit to you. This AI model would only be used when you invoke it, so the only problem is disk space, which the comment you're replying to already acknowledges as an issue.
> This AI model would only be used when you invoke it
You sure about that? How explicit is the invocation? Assuming it’s only run when the user does something (a big assumption), does the user know that clicking the summarize button is going to bog their system down and crank up their electricity use?
Indeed. Trusting that it will only be processing the user's queries - as opposed to, say, becoming part of a distributed grid of AI processing nodes - isn't a bet I'd be willing to place much money on.
LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.
To imply it could be conscious requires something else; here the comment uses the word “magic” to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
Many things the human brain does don’t rise to the level of conscious awareness.
It remains to be seen whether a human brain can be conscious in a jar. If it can, then I’d still argue that some sub-unit of the whole brain is not conscious on its own, similarly a GPU running a GPT probably isn’t conscious, but there may be some scale of number of GPUs running software that might give rise to consciousness as an emergent ability.
GPTs have exhibited emergent abilities as scale increased dramatically.
This is definitely complicated—I’m not a neuroscientist but worked for some and married one, so I’ve heard quite a few entries from the genre of how our brains fool ourselves or make our conscious experience seem more coherent and linear than it actually is—but the big ones I see are the inability to learn from experience or have a generalized sense of conceptual reasoning. For the latter, I’m not just thinking about the simple “count the r’s in strawberry” things companies have put so much effort into masking but the way minor changes in a question can get conflicting answers from even the best models, indicating that while there’s something truly fascinating about how they cluster topics it is not the same as having a conceptual model of the world or a theory of mind. This is the huge problem in the field: all of these companies would love to have a model which is safe to use in adversarial contexts because then the mass layoffs could begin in earnest, but the technology just isn’t there.
This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.
Not unless you’re referring to significant mental illness, no. Individual people may vary if, say, I ask different people for health advice, but if I ask the same doctor they’re not going to flip the answer based on whether I use medical or wellness-influencer phrasing, and that consistency allows them to build a reputation which other people can rely on.
This especially applies to mistakes: the junior developer who drops a database by mistake is unlikely to ever do that again, whereas the same AI companies’ models keep doing that to a small but non-zero number of customers because they don’t have that higher-level learning process or anything like fear of consequences.
Humans can't reliably subitize more than five-ish objects, while chimps can actually do this task better than us. That's our version of "can't count the R's in strawberry" (a task flagship models can now reliably do, along with general letter counting).
That’s not a valid analogy: humans reliably perform that task billions of times daily. It’s still routine to find cases which reveal that while models may have improved on some basic tasks (or learned to call a tool) there isn’t a deeper understanding of the underlying concept or the problem they’re being asked to solve.
And AI agents reliably-ish do tasks billions of times a day that humans struggle with, namely regurgitating information at incredible rates across wide breadths of topics. I see it as merely a matter of degree, not category.
How do you measure "deeper understanding" in humans? You usually do it by asking them to show their work, show how the dots connect. Reasoning models are getting there, and when they do, I'm sure the goalposts will move yet again.
Brains rely on physical processes like dendritic integration, electromagnetic fields, and chemical diffusion; it isn’t binary logic. Brains are a different substrate. Metabolic power efficiency affects cognition too.
I am very familiar with how they are trained. That doesn't change the fact that they are matrix math based on pre-trained weights. Something like RLHF makes those weights more effective but it doesn't change the fact it's autocomplete.
This is reinforced (pun not intended) by the continued issues with things like "should I walk or drive to the car wash"
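The "matrix math based on pre-trained weights" claim above can be made concrete with a toy sketch. Everything here is invented for illustration (the vocabulary, the weights, and the averaging step standing in for attention); a real LLM is vastly larger, but the mechanical skeleton of "autocomplete" is the same: fixed weights, a matrix multiply, a softmax, pick a token.

```python
import numpy as np

# Toy illustration (not a real LLM): next-token "autocomplete" reduced to
# matrix math over fixed, "pre-trained" weights. Vocabulary and weights
# are made up for this example.
vocab = ["walk", "drive", "to", "the", "car", "wash"]
rng = np.random.default_rng(0)

d = 8                                   # embedding dimension
E = rng.normal(size=(len(vocab), d))    # frozen token embeddings
W = rng.normal(size=(d, len(vocab)))    # frozen output projection

def next_token(context_ids):
    # Average the context embeddings (a crude stand-in for attention),
    # project to vocabulary logits, softmax, and take the most likely token.
    h = E[context_ids].mean(axis=0)
    logits = h @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return vocab[int(np.argmax(probs))]

context = [vocab.index(t) for t in ["to", "the", "car"]]
print(next_token(context))  # deterministic given the fixed weights
```

RLHF and similar post-training, as noted above, only adjust `E` and `W`; the inference loop itself never changes.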
As we can see by this thread… It’s heavily debated as to whether the intentions we should be following are those of long-dead forbears, or the will of the people, and in the latter, which people.
> Gawiser filed a “writ of execution” (another $240 in court fees) just yesterday, which would allow Texas law enforcement to seize and sell off enough of Tesla’s property as would be required to pay the judgment against them.
Since they probably won't do business with anyone who owns that car anymore, take a battery and run.
Unfortunately there's no such thing as anonymous telemetry. There are multiple techniques to re-identify scrubbed data, and some [seemingly innocuous] data is inherently identifying.
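One of the re-identification techniques alluded to above is a linkage attack: joining "anonymized" records against a public dataset on quasi-identifiers that survive scrubbing. A minimal sketch, with entirely made-up data (the field names and records are hypothetical):

```python
# "Anonymized" telemetry/records: names removed, but quasi-identifiers
# (ZIP, birthdate, sex) remain.
anonymized = [
    {"zip": "02139", "dob": "1964-07-28", "sex": "F", "diagnosis": "flu"},
    {"zip": "94103", "dob": "1980-01-02", "sex": "M", "diagnosis": "asthma"},
]

# A public roster (e.g. a voter roll) that does include names.
public_roster = [
    {"name": "Alice", "zip": "02139", "dob": "1964-07-28", "sex": "F"},
    {"name": "Bob",   "zip": "94110", "dob": "1980-01-02", "sex": "M"},
]

def reidentify(anon, roster):
    # Join the two datasets on the quasi-identifier tuple.
    keys = ("zip", "dob", "sex")
    matches = []
    for record in anon:
        for person in roster:
            if all(record[k] == person[k] for k in keys):
                matches.append((person["name"], record["diagnosis"]))
    return matches

print(reidentify(anonymized, public_roster))  # [('Alice', 'flu')]
```

The combination of ZIP code, birthdate, and sex is famously identifying for a large share of the population, which is why removing names alone doesn't make telemetry anonymous.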
Same. But I agree with the parent, I always got the vibe it was a giant racket between public schools and TI. Writing code for it was probably cool back in the 80s-90s but it's so dated now.
I don't know how the TI-85 compares to the other models without looking it up, but there's a forever soft spot in my heart for mine. It got me through a comp sci degree and still works flawlessly today.
Your phone still connects to the cellular network without a SIM card or eSIM; this is mandated by law in the US. The only way to prevent your phone from connecting to, pinging, or being pinged by the cellular network is to put it in airplane mode.
Whether a SIM is enabled, disabled, or installed is irrelevant. The question is whether this feature is Airplane Mode or whether it merely disables cellular data.
Ah, I thought you were likening it to the disable cellular data button which does not disconnect the cellular network.
Instead you are referring to the fact that the radio may remain on even if it has no active SIM card.
Given that the primary concern of connected vehicles is changes over time and manufacturer control, I don’t see any reason to make that distinction for most people.