It describes the present moment as a series of causal events, like event x led to y, which led to z. It doesn't matter whether you ask it for English or code, or ask it not to use any tenses; those conditions don't affect its baseline understanding. I might be missing your point, though.
For the second thing: from any point in history, saying "coming soon", well, the current moment is the most accurate time to say it. Especially with events x and y and ChatGPT right behind us. ChatGPT has basically been a problem since before I was born too, but stating as much a few months ago would have been just as pessimistic as the statement you made. Only because I think the LLM hallucination problem may be simple. But it's only a hunch, based on our wetware.
Grammar parsers have been able to do this since the 90s. There is no reason to believe that it's not just a slightly-fancier grammar parser: the kinds of errors it makes are those you'd expect from a pre-biased stochastic grammar parser.
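To make "pre-biased stochastic grammar parser" concrete (a toy sketch only, nothing like ChatGPT's actual architecture or scale): even a bigram Markov chain, trained on a tiny hypothetical corpus, emits locally plausible sequences with no model of truth behind them. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy sketch, NOT ChatGPT's architecture: a bigram Markov chain.
# It generates text by sampling from learned next-word frequencies,
# which is why "fluent output" and "hallucination" come from the
# same mechanism.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count word -> next-word transitions ("pre-biased" by the corpus).
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: no observed continuation
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output was seen in the corpus, so it always "sounds" grammatical locally; whether the whole sentence is true or even coherent is outside the model entirely.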
> But it's only a hunch, based on our wetware.
Our "wetware" fundamentally does not work like a GPT model. We don't build sentences as a stream of tokens. (Most people describe a "train of thought", and we have reason to believe there's even more going on than is subjectively accessible.) ChatGPT does not present any kind of progress towards the reasoning problem. It is an expensive toy, built using a (2017, based on 1992) technology that represented progress towards better compression algorithms, and provided some techniques useful for computational linguistics and machine translation. The only technological advance it represents is "hey, we threw a load of money at this!".
The "LLM hallucination problem" is not simple. It's as fundamental as the AI-upscaler hallucination problem. There is no difference between a GPT model's "wow amazing" and its "hallucinations": eliminate one, and you eliminate the other.
These technologies are useful and interesting, but they don't do what they don't do. If you try to use them to do something they can't, bad things will happen. (The greatest impact will probably not be on the decision-makers.)
> well the current moment is the most accurate time to say it.
This is true of every event that is expected to happen in the future.
The take that it's a sophisticated grammar parser is fine. Could be, lol. But when it gets better than humans, the definitions can just get tossed as usage changes. You can't deny its impact (or you can, but it's a bit intellectually dishonest to call it just old tech plus money and nothing special, on impact alone). But that's your experience, so it's fine.
For the stuff about it being a hard problem: I know you aren't deliberately making a false equivalence, right? But I said simple, not easy. You are saying hard, not complex.
I think there's too much digression here. You're clearly smart and knowledgeable but think LLMs are overrated; fine.
And yes, I know it's always the best time to say it. That's the point: a glass half full, some sugar in the tea, or anything else nice.
(It's not just a grammar parser, for the record: that was imprecise of me. The best description of the thing is the thing itself. But, when considering those properties, that's sufficient.)
> But when it is better at humans then the definitions can just get tossed as usage changes.
I'm not sure what this means. We have the habit of formally specifying a problem, solving that specification, then realising that we haven't actually solved the original problem. Remember Deep Blue? (We could usually figure this out in advance – and usually, somebody does, but they're not listened to.) ChatGPT is just the latest in a long line.
> You are saying hard not complex.
Because reasoning is simple. Mathematical reasoning can be described in, what, two-dozen axioms? And scientists are making pretty good progress at describing large chunks of reality mathematically. Heck, we even have languages like (formal dialects of) Lojban, and algorithms to translate many natural languages into it (woo! transformers!).
… Except our current, simple reasoning algorithms are computationally-intractable. Reasoning becomes a hard problem with a complex solution if you want it to run fast: you have to start considering special-cases individually. We haven't got algorithms for all the special-cases, and those special-cases can look quite different. (Look at some heuristic algorithms for NP-hard problems if you want to see what I mean.)
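As one concrete instance of the kind of heuristic I mean (my example, not anything from the thread): nearest-neighbour for the travelling salesman problem. Exact search is exponential; this runs in O(n²) but can return tours far from optimal, which is exactly the trade you make when you want intractable reasoning to run fast.

```python
import math

# A fast heuristic for an NP-hard problem: nearest-neighbour TSP.
# Greedily visits the closest unvisited point. Quick, simple, and
# with no guarantee of optimality -- a special-case shortcut rather
# than a general solution.

def nearest_neighbour_tour(points):
    unvisited = set(range(1, len(points)))
    tour = [0]  # arbitrarily start at the first point
    while unvisited:
        last = points[tour[-1]]
        # pick the closest remaining point
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

pts = [(0, 0), (0, 1), (2, 0), (2, 1)]
print(nearest_neighbour_tour(pts))  # [0, 1, 3, 2]
```

Different NP-hard problems need entirely different heuristics (simulated annealing, branch-and-bound, problem-specific pruning), which is the "special-cases look quite different" point above.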
> but think LLMs are overrated,
I think they're not rated. People look at the marketing copy and the hype, have a cursory play with OpenAI's ChatGPT or GPT-4, and go "hey, it does what they say it can!" (even though it can't). Most discussion seems to be about that idea, rather than the thing that actually exists (transformer models, but BIG). … but others in this thread seem to be actually discussing transformers, so I'll stop yelling at clouds.