I went to the recent re-screenings of the Lord of the Rings movies -- and maybe 30 minutes into the first film, I realized I'd never actually seen it in a theater before. It was glorious.
There seems to be a lot of gloating in this thread from people who haven't been to a theater in many, many years -- lots of disgust at the idea of a movie ever being a communal, social experience. I get the annoyance of other moviegoers talking or otherwise disturbing the movie, but you have no idea what you're missing when big films don't get big communal moments to match.
We can't go two hours without picking up our phones. We don't deserve the great experiences movie theaters once gave us.
I fully agree that, when it works, the cinema is a wonderful experience. It's amazing, very different from watching something on a home screen.
Yes, it can get ruined by obnoxious moviegoers, but wow... when it works, it's amazing.
My main criticism is mostly aimed at the kind of movies that do reach the big screen where I live: big franchise releases, since the small arthouse cinemas are dying and almost completely gone, and their screens and sound were never good to begin with. I don't even complain about the price of the ticket. And the food: who on earth needs popcorn and soda to watch a movie? Eat before, or after.
Yeah. I feel like I recall Matt Damon, or some similarly famous actor, saying that the shift is largely due to the loss of physical media, which used to give every movie a second "push" of reach and sales.
Related (a little): I recommend everyone go to a film festival at least once in their lives. Doesn't have to be a huge one, something regional works. It's a great opportunity to experience some of what I'm describing.
They detect bots but let a ton of them run free, because any character with a membership means revenue, and a very significant chunk of active characters are bots. They nuked them all around 2011, I think, and the game was nearly empty.
SirPugger's YouTube channel has loads of videos monitoring various bot farms.
It's early in the news, but I've also read about the US working directly on having the nation transition. It gives me bad vibes as someone who lived through the invasion of Iraq. TBH, I'm not very knowledgeable, but I assume there's less sectarianism and less of an infrastructure gap, so it's a different situation. Although, as with all things Trump, what matters is his execution and competence in following through after the quick initial decision.
I like the idea of using vintage LLMs to study explicit and implicit bias: e.g. text from before the mid-19th century that takes racial superiority, gender discrimination, imperial authority, or slavery for granted, compared with text written since then. I'm sure there are more ideas once you put temporal constraints on training data.
They also search online and return links, though? And you can steer them, when they do that, to seek out more "authoritative" sources (e.g. news reports, publications by reputable organizations).
If you pay for it, ChatGPT can spend upwards of 5 minutes going out and finding you sources if you ask it to.
Those sources can then be separately verified, which is up to the user, of course.
Right, but now you are not talking about an LLM generating from its training data - you are talking about an agent doing web search, and hopefully not messing things up when it summarizes them.
Yes, because most of the things that people talk about (ChatGPT, Google SERP AI summaries, etc.) currently use tools in their answers. We're a couple years past the "it just generates output from sampling given a prompt and training" era.
It depends - some queries will invoke tools such as search, some won't. A research agent will be using search, but then summarizing and reasoning about the responses to synthesize a response, so then you are back to LLM generation.
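That routing-then-synthesis loop can be sketched in a few lines. Everything here is a hypothetical stand-in (the function names, the keyword-based routing, the fake search results); a real system has the model itself decide when to call a tool, but the overall shape is the same: optionally search, then fall back to plain LLM generation over whatever came back.

```python
def needs_search(query: str) -> bool:
    # A real model makes this routing decision itself; here we fake it
    # with a crude keyword check (purely illustrative).
    return any(w in query.lower() for w in ("latest", "news", "today"))

def web_search(query: str) -> list[str]:
    # Stand-in for a real search tool.
    return [f"result snippet about {query!r}"]

def generate(prompt: str) -> str:
    # Stand-in for LLM generation from training data alone.
    return f"answer synthesized from: {prompt}"

def answer(query: str) -> str:
    if needs_search(query):
        snippets = web_search(query)
        # Back to plain LLM generation: summarize and reason over
        # the retrieved snippets to synthesize a response.
        return generate(query + " | sources: " + "; ".join(snippets))
    # No tool invoked: pure sampling from the prompt and training data.
    return generate(query)
```

The point of the sketch is the branch: two queries that look similar to the user can take entirely different paths, one grounded in retrieved sources and one not, which is exactly why it's hard to know from the outside how reliable a given answer is.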
The net result is that some responses will be more reliable (or at least coherently derived from a single search source) than others. But to the casual user, maybe to most users, it's never quite clear what the "AI" is doing, and it's right enough, often enough, that they tend to trust it, even though that trust is only justified some of the time.