Yeah, Epstein (like a lot of people) was obsessed with Trump, who was the President at the time. But what’s the smoking gun? It seems like if there was one that would be your lede.
You're smart. You can figure out why those 3M files they were legally required to release weren't released. Also, the article specifically mentions this.
Not one person in the current administration or its party (or you yourself) would have agreed with you a few years ago, when the administration of the day was communicating with Twitter/Meta over a laptop story, regardless of whether any strongarming actually took place.
I don't remember that detail, but I do remember most people not treating their inbox as an archive at the time. So there was less friction to switch to Gmail, and more reason to do so thanks to Gmail's "real time" ticking storage counter, and then it became an archive (again, for most people).
> I do remember most people not treating their inbox as an archive at the time.
Indeed. For me, the step was gmail. With its humongous 1GB of storage, that was the moment when I stopped having to delete stuff to save space. It’s funny because a lot of people I know who were already older at that point kept the habit of deleting emails, even today.
You could merely be convinced by one. That is sadly an unpaid position though.
But more seriously, this discussion has come up so many times on this site, that I could instantly find myself talking about it a handful of times at least:
And that doesn't even go into whether sites actually need to ask for cookie consent at all if they aren't collecting user data outside of functional necessity (they don't).
> this might be misleadingly interpreted as an LLM having "thought out an answer"
I'm convinced that that is exactly what happens. Anthropic confirms it:
"Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so."
This is about reasoning tokens, right? I didn't mean that; nanogpt doesn't do that. Nanogpt inference just outputs letters directly, no intermediate tokens.
No, this is about normal tokens. While a SOTA LLM outputs a token at a time, it already has a high level plan of what it is going to say many tokens ahead. This is in reply to the GP who thinks that an LLM can somehow produce coherent and thoughtful sentences while never seeing more than one token ahead.
That's actually an interesting way to look at it. However, I just posted that because I often see articles expressing amazement at how far training an LLM on next-token prediction can take it, seemingly contrasting the simplicity of the training task with the complexity of the outcome. The insight is that the training task was in fact "predict the next book" just as much as it was "predict the next token". So every time I see that "predict the next token" characterization of the training task, it rubs me the wrong way. It's not wrong, but it is misleading.
I didn't mean to suggest that this is how it "thinks ahead", but I believe you can see it that way to a degree. Because it has been trained to "predict all the following tokens", it learned to guess the end of a phrase just as much as the beginning. I consider the mechanism of feeding each output token back in to be an implementation detail that distracts from what it actually learned to do.
I hope this makes sense. FYI, I'm no expert in any way, just dabbling.
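For anyone unfamiliar, that "feed each output token back in" implementation detail looks roughly like the loop below. This is just an illustrative sketch, not nanoGPT's actual code: `toy_model` is a made-up stand-in for a real network's forward pass, and here a "token" is a single character.

```python
# Minimal sketch of the autoregressive loop being discussed: the model is
# only ever asked for the single next token, and each sampled token is fed
# back into the context for the next step.

def toy_model(context):
    # Hypothetical stand-in for a trained network: it "predicts" the next
    # character of a fixed phrase, given whatever has been generated so far.
    phrase = "hello world"
    return phrase[len(context)] if len(context) < len(phrase) else None

def generate(model, prompt="", max_new_tokens=20):
    context = prompt
    for _ in range(max_new_tokens):
        nxt = model(context)   # ask for exactly one next token
        if nxt is None:        # stand-in for an end-of-sequence signal
            break
        context += nxt         # the "feed it back in" implementation detail
    return context

print(generate(toy_model))  # -> hello world
```

The point being: nothing in this loop stops the model from having internally planned the whole phrase; the one-token-at-a-time interface is just how the output gets serialized.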
Ben Folds co-composing the album, with songs featuring Henry Rollins, Lemon Jelly, etc., as well as Nick Hornby (and I believe Aimee Mann?) on "That's Me Trying", is incredible. That's also probably my favorite song on it.
I generally agree with the concept of what you describe, but I think the crucial variable (and it very much is variable) is the "extremely broad training set" and whether that will be tainted by slop (human or otherwise). I wouldn't make any assumptions either way here.