LeCun is the main author of the paper "Backpropagation Applied to Handwritten Zip Code Recognition" (1989), the earliest real-world application of a neural net trained end-to-end with backpropagation. "AI godfather" is fair enough.
LeCun’s research into Convolutional Neural Networks contributed to modern AI, so the nickname is quite appropriate. It’s because of this that he was pursued by Facebook.
Turing is a Godfather of AI. But you can't say there will never be another. In light of recent achievements there are bound to be new pioneers and Godfathers. It's the nature of innovation.
No, the incumbents think it should continue. The xrisk folks are not the AI companies. The AI companies pay lip service to xrisk and try to misdirect towards regulation they think they can work with. Judging by this thread, the misdirection is working.
(I'm a lot more optimistic than such doomers, but I can at least see that their position is coherent. LeCun holds the opposite position, and while I want it to be correct, I find it the same kind of wishful thinking as a factory boss who thinks "common sense" beats health and safety regulations.)
How well GPTZero can detect ChatGPT-generated text by measuring perplexity (how unpredictable the word choices are) and burstiness (how much the sentence structure varies) shows that whatever algorithm our brain uses has stronger creative capabilities than this LLM.
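The two signals are simple to state: perplexity is the exponential of the average negative log-probability a language model assigns to each token, and burstiness is often proxied by how much sentence lengths vary. Here is a minimal, self-contained sketch using a toy unigram model (GPTZero's actual model and features are not public; this only illustrates the two metrics):

```python
import math
from collections import Counter

def perplexity(tokens, probs):
    """exp of the mean negative log-probability of each token.
    Low perplexity = very predictable text, one signal of machine generation."""
    nll = -sum(math.log(probs[t]) for t in tokens) / len(tokens)
    return math.exp(nll)

def burstiness(sentence_lengths):
    """Population std dev of sentence lengths. Human writing tends to mix
    short and long sentences (high burstiness); LLM output is often more uniform."""
    n = len(sentence_lengths)
    mean = sum(sentence_lengths) / n
    return math.sqrt(sum((x - mean) ** 2 for x in sentence_lengths) / n)

# Toy unigram model fit on a tiny corpus (illustration only).
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())
model = {w: c / total for w, c in counts.items()}

print(perplexity("the cat sat".split(), model))  # in-distribution text: low
print(burstiness([5, 23, 8, 31]))                # varied sentence lengths: high
```

A real detector would score tokens with a large pretrained LM rather than unigram counts, but the arithmetic of the two metrics is the same.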
GPT-3.5 isn’t a great writer the way AlphaGo is a great Go player. Maybe one day an AI will generate better scripts and novels than humans, but not this model.
Medium-quality writing is fine for informative content, but it’s problematic when the model doesn’t know fact from fiction. That’s the important complaint.
Is it dangerous? Maybe.
But is it useful? Not if it’s wrong too often.
You’re right that this tech should be taken seriously, but so should the hallucination problems. These problems can be solved. And maybe they should be solved before anyone trusts it with serious questions.
> it’s problematic when the model doesn’t know fact from fiction.
This is in no way unique to an AI. Have you ever interacted with humans? Half the population thinks the other half can't tell fact from fiction. The other half thinks the same about the first half. We're all wrong all the time.
There's a fundamental difference between "some people are wrong some of the time"—or even "half the population has trouble telling fact from fiction some of the time", if we grant that as true for the sake of argument—and "ChatGPT (and similar ML algorithms) don't even have a metric to determine truth from fiction; they just predict what's the most likely set of words to stick together in response to your prompt."
ChatGPT fundamentally cannot ever know when it's wrong. I should hope it goes without saying that that's not true of humans.
> LangChain uses pre-trained models from Hugging Face, such as BERT, GPT-2, and XLNet. For more information, please see the Getting Started Documentation[0].
That links to a 404, but I did find the correct link[1]. Oddly, that doc only mentions an OpenAI API wrapper; I couldn’t find anything about the other Hugging Face models.
Does LangChain have any tooling around fine tuning pre-trained LLMs like GPTNeoX[2]?
> We're predisposed to seeing meaning, and perhaps "intelligence", everywhere.
I’m guilty of this with my dog. I can’t help it with her head tilts and deep stares! Her inner monologue is probably less sophisticated than I like to think it is.
Yes on all three points! Especially search. Notion’s search is so painfully slow. Whenever I try to link a row to another row, the loading spinner runs for 2+ seconds, even if I’m searching a table with <10 rows.
The first use case in the demo is generating a blog post, so I’d assume that they’re trying to win the “cynical marketers wanting to use GPT-3 to churn out keyword nonsense to game Google Search” market. I know of some marketing teams that use notion for content management.
The concept of AI has been around since Turing and if anyone deserves a title like “Father of AI” it’s him.
LeCun is Chief AI Scientist at Meta. They can just leave it at that.