peterth3's comments | Hacker News

Can we stop labeling prominent AI researchers as “AI Godfather”? It’s so silly and barely truthful.

The concept of AI has been around since Turing and if anyone deserves a title like “Father of AI” it’s him.

LeCun is Chief AI Scientist at Meta. They can just leave it at that.


LeCun is not called an AI godfather because of his job at Meta, but because he pioneered CNNs in 1989.


LeCun is the main author of the 1989 paper "Backpropagation Applied to Handwritten Zip Code Recognition", the earliest real-world application of a neural net trained end-to-end with backpropagation. "AI godfather" is fair enough.


That term usually refers to one of the three people who shared the 2018 Turing Award: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.


LeCun’s research into Convolutional Neural Networks contributed to modern AI, so the nickname is quite appropriate. It’s because of this that he was pursued by Facebook.


According to the dictionary, godfather is defined as: "a man who is influential or pioneering in a movement or organization"

Aren't these prominent AI researchers seen as prominent precisely because they influenced the movement around AI?


That’s definition #2 with #1 being:

> a man who presents a child at baptism and promises to take responsibility for their religious education.

So, the original definition is a gendered and religious term.

Sounds silly to me. Especially in this context.


Clearly #1 has no applicability here and the intent is to convey #2, which is quite apt. Words have multiple meanings. Such is life.


Turing is a Godfather of AI. But you can't say there will never be another. In light of recent achievements there should be new pioneers and Godfathers. It's the nature of innovation.


Licensing can definitely turn into regulatory capture if it expands enough. It’s effectively a barrier to entry defined by the incumbent.


They're saying don't do licensing, just make it illegal.


Sorry, who’s “they” here? The incumbents OpenAI / DeepMind / Anthropic?


No, the incumbents think it should continue. The xrisk folks are not the AI companies. The AI companies pay lip service to xrisk and try to misdirect towards regulation they think they can work with. Judging by this thread, the misdirection is working.


Yudkowsky and similar.

(I'm a lot more optimistic than such doomers, but I can at least see their position is coherent; LeCun has the opposite position, but unfortunately, while I want that position to be correct, I find his position to be the same kind of wishful thinking as a factory boss who thinks "common sense" beats health and safety regulations).


In this case I meant the GP comment from JoshTriplett.


The Design of Everyday Things, by Don Norman is a good place to start.

https://www.amazon.com/Design-Everyday-Things-Revised-Expand...


Glad to hear we’re not the only ones having an emergency LangChain hackathon.

So far what we've been able to build feels brittle, but LLMs are fun to play with.


Why does Bing AI sign every message with an emoji?

ChatGPT doesn’t do it and it comes off so strange.


It was programmed in to make it more "friendly" or "personable".


Maybe it's GPT-4 internally and it now has the ability to express emotions.


The fact that GPTZero can detect ChatGPT-generated text by measuring perplexity (how unpredictable the word choices are) and burstiness (how varied the sentence structure is) shows that whatever algorithm our brains use has stronger creative capabilities than this LLM.
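Roughly, those two signals could be sketched like this (a toy illustration of the general idea, not GPTZero's actual implementation; the function names and the sentence-length proxy for burstiness are my own assumptions):

```python
import math
import statistics

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-probability per token.
    Low perplexity means a model found the text highly predictable."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def burstiness(sentences):
    """Crude burstiness proxy: spread of sentence lengths. Human writing
    tends to mix short and long sentences; LLM output is often uniform."""
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

# Uniform 0.25 probability per token -> perplexity is exactly 1/0.25 = 4
print(perplexity([math.log(0.25)] * 4))  # 4.0

print(burstiness(["Short one.",
                  "This sentence runs quite a bit longer than the first."]))
```

Text that scores low on both measures (predictable words, uniform sentences) is what these detectors flag as machine-generated.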

GPT-3.5 isn't a great writer the way AlphaGo is a great Go player. Maybe one day AI will generate better scripts and novels than humans, but not this model.

Medium-quality writing is ok for informative content though, but it’s problematic when the model doesn’t know fact from fiction. That’s the important complaint.

Is it dangerous? Maybe.

But is it useful? Not if it’s wrong too often.

You’re right that this tech should be taken seriously, but so should the hallucination problems. These problems can be solved. And maybe they should be solved before anyone trusts it with serious questions.


> it’s problematic when the model doesn’t know fact from fiction.

This is in no way unique to AI. Have you ever interacted with humans? Half the population thinks the other half can't tell fact from fiction. The other half thinks the same about the first half. We're all wrong, all the time.


There's a fundamental difference between "some people are wrong some of the time"—or even "half the population has trouble telling fact from fiction some of the time", if we grant that as true for the sake of argument—and "ChatGPT (and similar ML algorithms) don't even have a metric to determine truth from fiction; they just predict what's the most likely set of words to stick together in response to your prompt."
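To make "just predict the most likely set of words" concrete, here is a toy bigram sketch (my own illustration, nothing like ChatGPT's actual scale or architecture): the model's only criterion is frequency in its training data, and no notion of truth appears anywhere.

```python
from collections import Counter, defaultdict

# Toy "training corpus": the model only ever sees co-occurrence counts.
corpus = "the sky is blue . the sky is green . the grass is green .".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def most_likely_next(word):
    """Return the highest-count continuation -- likelihood only, no fact check."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("is"))  # "green": more frequent in the corpus, not more true
```

Scaling this up to a neural network with billions of parameters makes the predictions far better, but the objective is still "likely continuation", never "true statement".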

ChatGPT fundamentally cannot ever know when it's wrong. I should hope it goes without saying that that's not true of humans.


From what I've read, an LLM can't reliably detect its own output.


What LLMs does LangChain support?

Btw I asked chat.langchain.dev and it said:

> LangChain uses pre-trained models from Hugging Face, such as BERT, GPT-2, and XLNet. For more information, please see the Getting Started Documentation[0].

That links to a 404, but I did find the correct link[1]. Oddly that doc only mentions an OpenAI API wrapper. I couldn’t find anything about the other models from huggingface.

Does LangChain have any tooling around fine tuning pre-trained LLMs like GPTNeoX[2]?

[0]https://langchain.readthedocs.io/en/latest/getting_started.h...

[1]https://langchain.readthedocs.io/en/latest/getting_started/g...

[2]https://github.com/EleutherAI/gpt-neox


Here are the docs on the built in models at the moment: https://langchain.readthedocs.io/en/latest/reference/modules...

One of their examples is

    from langchain import NLPCloud
    nlpcloud = NLPCloud(model="gpt-neox-20b")

So it looks like you're good to go.


> We're predisposed to seeing meaning, and perhaps "intelligence", everywhere.

I’m guilty of this with my dog. I can’t help it with her head tilts and deep stares! Her inner monologue is probably less sophisticated than I like to think it is.


Why are you inclined to think that way?


Yes on all three points! Especially search. Notion's search is so painfully slow. Whenever I try to link a row to another row the loader spins for 2+ seconds, even if I'm searching a table with <10 rows.


The first use case in the demo is generating a blog post, so I'd assume that they're trying to win the "cynical marketers wanting to use GPT-3 to churn out keyword nonsense to game Google Search" market. I know of some marketing teams that use Notion for content management.

