
I used to think this but now I'm fairly convinced that it "knows" somewhat less than someone who was locked in a tiny dark room with no input except the ability to read a lot of text from the internet would know if that was their whole life. I don't believe it has a sense of self or consciousness, just that it possesses whatever knowledge is embedded in written text. Maybe a better analogy would be if you could cut out someone's language centre and put that into a jar hooked up to text input and output. It's not a whole mind but it sure feels like it's a piece of a mind that can do mind-like stuff.


You can compare GPT-4's limitations to Helen Keller's. Someone who is deaf and blind can still reason as well as someone with "all inputs enabled". Helen Keller still had a "full mind."


That's why I included the part about being locked in a small room with only text for input. People who are deaf and blind still interact with the world through their other senses. GPT-4 has no other senses and no body.


That’s just the illusion of the LLM drawing you in deeper. Knowing the correct thing to say is not the same as knowing things.


And what is “knowing”?

Everyone repeats the retort you gave, but I’ve yet to see a clear definition of “knowing”.


Are you talking about the difference between memory and reasoning? It's a bit hard to understand what you mean by knowing the correct thing vs knowing things. Both mean you know things, correct or not.


When you know something you know it in multiple different contexts and forms, not just how it relates in response to a stimulus, or a prompt.


Still not sure what you're talking about that's different from what GPT can do. It's very good at transferring from one context to another while retaining the same intent or meaning. Could you give an example of something you think it can't do?


Innovation.

Could GPT be given some screenshots of a game you want to play and then code it up?

Could you run through a demo of some competitor’s app and have it make something similar but better?


I'm using it right now to help write a game I had an idea for. I'm writing in a programming language that isn't the one I use daily, with a graphics library I've only used once before to make a small game, and GPT has been a massive help. It's helped me solve some tricky problems, like getting a shader to work the way I wanted, and I've used it to create first drafts of all of the code so far. I guess that's not pure innovation, but it sure as hell has a better grasp on a lot of the stuff it's writing than I did at first.

It can't just look at a picture and produce the exact game you want, but neither could I. I'd have to ask you a bunch of questions about what you wanted the gameplay to be like and whether you wanted to release it for PC, console, or both; I'd have to get an artist to create a bunch of concept art and ask you to approve the pieces you liked; then I'd need to implement all the code, playtest it with you, make changes, and so on. It's a bit unfair to demand that this tool do more than a single person could just to prove that it "knows something". Just because it isn't 100% autonomous doesn't mean it has zero knowledge or ability.



