Hacker News | jug's comments

And this is it. This is why we are where we are today. That it is seen as taking a religious zeal to realize how flying very frequently is disastrous for the climate. That's our bar and what we have to work with. Yes, we are properly fucked.

I think it has more to do with LLMs being statistical models than with human creativity lacking in the input. The creativity and millions of voices and tones may be there, but since these models tend to go for the most likely next words, polishing this away becomes a feature.

A text by a human mind may be seen as a jagged crystal with rough edges and character. Maybe not perfectly written but it's special.

An LLM takes a million crystals and trims the most likely tokens into what would rather appear as a smooth pebble; the common core of all crystals. And everyone using the LLM will get very similar pebbles because, regardless of who is speaking to it, the LLM will provide the same most likely next tokens. It's not that creativity is lacking in the input, but that the LLM picks the words most commonly chosen by all humans in given contexts.

For that to sound imaginative and great as you go, it would have to not only exist in the data, but be a common dominating voice among humans. But if it was, it wouldn't be seen as creative because it would be the new normal.

So I'm not sure there's a good way out of this. You could push the LLM temperature high so that it becomes more "creative" by picking less popular tokens as it writes, but this instead tends to make it unpredictable and prone to picking words it shouldn't have. I mean, we are still dealing with statistical models here rather than brains, and temperature is a rough tool for that job.
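To make the temperature point concrete, here's a minimal sketch of temperature-scaled sampling. The logit values are made up for illustration, and this is not any particular model's implementation: dividing the raw scores by the temperature before normalizing flattens the distribution (temperature > 1, riskier picks) or sharpens it (temperature < 1, safer "pebble" picks).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_token(logits, temperature=1.0):
    """Sample a token index from raw model scores (logits).

    Higher temperature flattens the distribution, so less popular
    tokens get picked more often; lower temperature sharpens it
    toward the single most likely token.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy next-token scores: token 0 is the "smooth pebble" choice.
logits = [5.0, 2.0, 1.0, 0.5]
low_t  = [sample_token(logits, temperature=0.5) for _ in range(1000)]
high_t = [sample_token(logits, temperature=2.0) for _ in range(1000)]
# At low temperature nearly every draw is token 0; at high
# temperature the rarer tokens show up far more often.
```

The trade-off the comment describes falls straight out of this: raising the temperature is the only knob here, and it buys variety by spending coherence.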


> I think it has more to do with LLMs being statistical models than with human creativity lacking in the input. The creativity and millions of voices and tones may be there, but since these models tend to go for the most likely next words, polishing this away becomes a feature.

I have always thought this is a rather misguided view as to what LLMs do and indeed what statistical models are. When people describe something as 'just statistics' I feel like they have a rather high-school-ish view of what statistics represents and are transferring this simplistic view to what is going on inside a LLM. Notably they do not find the most probable next word. They find the probability of every word that could come next. That is a far richer signal than most imagine.

And ultimately it's like saying that human brains are just chemical bonds changing and sometimes triggering electrical pulses that cause some more chemicals to change. Complex arrangements of simple mechanisms can produce human thought. Pointing at any simple internal mechanism of an entity without taking into account the structural complexity would force you to conclude that both AIs and humans are incapable of creativity.

Transformers are essentially multi-layer perceptrons with a mechanism attached to transfer information to where it is needed.


> They find the probability of every word that could come next.

If we're being pedantic, they find a* probability for every token (which is sometimes a word) that could come next.

What actually ends up being chosen depends on what the rest of the system does, but generally it will just choose the most probable token before continuing.

* Saying *the* probability would be giving a bit too much credit. And calling it a probability at all, when most systems would choose the same word every time, is a bit of a misnomer as well. During inference the number generally acts as a priority, not a probability.


I was using the term word to be consistent with the previous comment. It need not be a word, or even text at all.

Most systems choosing the high-probability option is exactly what probability describes.

They're just relative scores. If you assume they add to one and select one based on that it's a probability.
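The "relative scores that sum to one" point is the softmax step. A minimal sketch (the scores are invented for illustration, not from any real model): raw next-token scores are exponentiated and normalized, which turns them into a distribution; greedy decoding then just takes the index with the highest value.

```python
import math

def softmax(scores):
    """Normalize raw next-token scores into values that sum to 1."""
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate tokens.
scores = [3.2, 1.1, 0.4, -0.7]
probs = softmax(scores)

# The full distribution is the "far richer signal": every candidate
# keeps a nonzero weight, even though greedy decoding collapses it
# to a single choice.
greedy = probs.index(max(probs))
```

Whether you call the normalized values probabilities or priorities, the ordering is identical; the distinction only matters once you sample instead of taking the argmax.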


As a community alpha tester of GW1, this was a fun read! Such an educational journey and what a well organized and fruitful one too. We could see the game taking shape before our eyes! As a European, I 100% relied on being young and single with those American time zones. :D Tests could end in my group at like 3 am, lol.

Oh yeah, those were some good times. It was great getting early feedback from you & the other alpha testers, which really changed the course of our efforts.

I remember in the earlier builds we only had a “heal area” spell, which would also heal monsters, and no “resurrect” spell, so it was always a challenge to take down a boss and not accidentally heal it when trying to prevent a player from dying.


Huh? Servers aren't people and thus have completely different expectations, or what am I missing here?

Yes, 10-15 years ago the MBP felt more prosumer to me, but nowadays it has monstrous performance and price points, like a true luxury item or an enterprise device, so I'm happy to see good base specs on the MBA. The base spec on that device matters a lot. Also, Apple will probably release a cheaper MacBook this week, and if the rumor holds, it'll be good enough for most consumers.

The base 15" MacBook Pro was $2,399 10 years ago ($3,251.07 adjusted for inflation); today it is $2,699.

https://everymac.com/systems/apple/macbook_pro/specs/macbook...


Even with the $100 price bump, I think this is a win. 16/512 is a very nice base spec on Mac.

That works for a LOT of people. Not me, but the everybody else in my family.

These are kinda popular today in the form of AI Dungeon, NovelAI, FictionLab etc.

Basically you create characters with bios and traits, then a setting/context. Now, as you write your story, you can have multiple characters involved and they'll act from their own perspectives and traits.

Then these also have lorebooks with triggers, so if you mention The Barking Dog Inn, the AI and the characters will know what you mean and your characters with an outgoing personality type will be more eager to go there than others etc.

Finally, these systems usually have a long term memory where key events are saved and the AI remembers.

So a lot of this already exists!


What's your favorite game of this type?

Also apparently eating 2 GB of RAM or so to run an entire virtual machine even if you've disabled Cowork. Not sure which of these is worse. Absolute garbage.

This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI on the values of AI safety and principles.

AI companies yes, RAM manufacturers no.
