A question about the AI doomsday that's been bothering me
5 points by vba616 on Dec 22, 2022 | 8 comments
ChatGPT prompt: "What is the logic in assuming a particular computer node or system will develop intelligence before the global Internet as a whole?"

"The development of intelligence in a computer system would depend on a variety of factors, such as the capabilities and resources of the system, the algorithms and software being used, and the data and experiences available to the system.

It is not necessarily logical to assume that a particular system or node would be more likely to develop intelligence than the global Internet as a whole, which is a vast and interconnected network of computers and devices."



I've played with ChatGPT in many contexts. For the most part it is like politicians dishing up word salad answers to uncomfortable questions.

When I ask ChatGPT for the rationale or foundations of what it produces, it tends to become evasive - like an annoying friend with strong opinions on everything who is unwilling to explain the reasoning underlying his pronouncements.


Well, after some more experimentation, its forte seems to be as a programmer's assistant.

It's not that it writes production-ready code, but asking it questions feels remarkably like interviewing a fresh graduate who shamelessly BSes, has some short-term memory issues (like a human under stress!), but can glibly respond when you point out a mistake.

I asked it to create some JavaScript classes, and when I pointed out it wasn't using "#" to make properties private, it said that syntax was too new to be supported. That sort of makes sense if its model is a bit out of date, but it was inconsistent - earlier it had used that syntax.

And when I questioned it, pointing out that caniuse says 93% of browsers support it, it immediately agreed, and revised the code.
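For anyone who hasn't seen the syntax, here's a minimal sketch of what we were going back and forth about (the class name and fields are my own invention, not what ChatGPT produced):

```javascript
// "#" marks a field as private: it is only accessible inside the class body.
// Supported in all modern browsers and in Node.js 12+.
class Counter {
  #count = 0; // private field

  increment() {
    this.#count += 1;
  }

  get value() {
    return this.#count;
  }
}

const c = new Counter();
c.increment();
// c.#count here would be a SyntaxError; use the getter instead:
console.log(c.value);
```

Accessing `#count` from outside the class is a syntax error, which is the whole point - unlike the old underscore-prefix convention, privacy is enforced by the language.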

It also was able to change loops using forEach() to traditional for loops, or to for...of. That sort of thing is neither creative nor especially necessary, but it reduces the friction to trying out different styles and seeing what one likes.
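For concreteness, the three styles it converted between look like this (a trivial sketch with names I made up; all three produce the same result):

```javascript
const items = ["a", "b", "c"];

// Callback style.
const out1 = [];
items.forEach((item) => out1.push(item.toUpperCase()));

// Traditional index-based for loop.
const out2 = [];
for (let i = 0; i < items.length; i++) {
  out2.push(items[i].toUpperCase());
}

// for...of iterates the values directly, no index bookkeeping.
const out3 = [];
for (const item of items) {
  out3.push(item.toUpperCase());
}
```

Mechanical as it is, having a bot do this rewrite instantly makes it cheap to compare the styles side by side.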

It doesn't seem to be very good at putting subtasks in a logical order or at factoring out repeated code, but neither of those is necessary for providing ideas, and because it isn't production-ready code, you the human are still doing something necessary and maybe more enjoyable.

I do still feel that it's a glorified database, but at the same time it might not be unreasonable to call it as much of an advance as the first compiler was. Or maybe the relational database.

I wonder if it would work better if it incorporated its own adversarial doppelganger - evaluating each response from the opposite point of view, and then providing some of the results from both sides.

There were some results I wasn't happy with.

I asked it to base a class structure on something from a different language and environment and it pretty much made it up, and had no idea what methods were important. When I told it to remove all but the most commonly used X number of methods, it acted like it understood, but the ones it selected weren't reasonable.


I find the reaction to ChatGPT et al. acts somewhat like a prism into the thinking of the people who pose the questions and how they interpret the results. GPT-3 is amazing in its ability to generate coherent grammar based on petabytes of 'screenscraped' Internet content... but looking for wisdom from it is somewhere between Magic 8 Ball and Ouija boards imo ...


I'm not looking for wisdom, just patterns.

I guess that I'm looking for whether things that I don't see on the internet do exist somewhere.

If they are in the model, then perhaps I'm just in the wrong bubble.

If they aren't in the model, that's interesting.

The foremost question I have is what can it do that Google can't.

It seems like it might be a good programming tutor, which is really valuable for someone who is isolated from mentors.

It doesn't seem like it writes ready-made code for you, but quickly gives (maybe) relevant examples.

It's not very helpful on problems with a niche piece of free software. I mean, it provides the standard answers to the problems, which are not helpful.

But it told me that I should use fetch instead of XMLHttpRequest, and confirmed that fetch is built in and doesn't require an external library - which seems to be true.
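The kind of thing it suggested looks roughly like this (my sketch, not its exact output; the URL is a placeholder). fetch is built into browsers and into Node.js from v18 on; older Node versions needed a package such as node-fetch:

```javascript
// Minimal fetch wrapper: GET a URL and parse the body as JSON.
async function getJson(url) {
  const response = await fetch(url);
  // fetch only rejects on network failure, so check HTTP status ourselves.
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();
}

// Usage (browser or Node 18+):
// getJson("https://example.com/data.json").then(console.log);
```

Compared with XMLHttpRequest's event-callback dance, the promise-based API is much easier to read.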


So, here's a question to really keep you up at night: How do we know the global Internet as a whole hasn't developed intelligence already?

It might have already, but much as ants aren't aware they make up a system that's greater than the sum of its parts (i.e. an anthill), we might just not be aware of a higher-order intelligence we're part of.


Sure, although, why would it develop intelligence prior to acting in a coordinated way?

Wouldn't it make sense that something on the level of a hydra or flatworm would develop first, then a rat-like entity, then a human-level consciousness?

And it doesn't take consciousness or intelligence to manipulate intelligence; it just takes evolution and necessity, as is seen with plants, bacteria, fungi, and parasites.


In a nutshell: Emergence. Like an anthill, the Internet very much is a coordinated system.

The thing is, the constituents of such a system wouldn't necessarily recognise the higher-order system they're part of. An ant doesn't know it's part of an anthill.

Such a higher-order system wouldn't necessarily follow the same or even remotely similar evolutionary steps or patterns as the systems and structures that preceded it.


When I was young I read "The Adolescence of P1" by Thomas J Ryan. The plot involved an AI created by mistake, and its creator. I long supposed something like that was possible for the internet as a whole, as there are any number of self-replicating programs out there infecting systems that interact with each other.

Having seen the amount of compute that goes into training large language models, it seems evident to me that even all the compute that has ever existed on the internet, applied in a random process, wouldn't come anywhere close to that kind of result.

It is now possible to use a large amount of compute to train a chatbot, and a network of those chatbots might actually be able to learn things. That remains to be seen.



