Hacker News - filoleg's comments

To answer your question: LLMs don't have free speech, because they aren't companies/businesses, they are a tool (that is used by companies/businesses).

Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it, not on the basis of how that piece of writing was produced.


I appreciate the open-minded, thoughtful answer.

I don't have much meaningful info to contribute to this, but it is interesting to observe how the rollout of the red light cams happens in different places, and how it eventually turns out.

IIRC there was a point in time roughly around ~2017 when it happened in Redmond WA (i.e., in the town that the Microsoft HQ is in). I might be off by a year or two, but it doesn't really change the overall point.

TLDR: in under 2 years, that whole red light cam initiative got canceled and reverted, because the local stats showed that it just made things more dangerous on the roads (by significantly increasing the rate of rear-ending accidents at traffic lights).


I generally agree with your position overall, but the person in the OP was 36 years old. I don't think that his parents can be blamed for not doing their job here.

> Try taking a photo of somebody with your phone. Usage will definitely look like you are snapping a picture, nobody walks around with phones straight up.

I urge you to visit any big city and see for yourself how wrong you are. I see it every single day, just during my barely 20-25min subway commute to work.

And that's the most unremarkable, most uninteresting place and scenario here. Any big park, any even remotely touristy location, any public square, any concert/sports venue, and even an overwhelmingly large proportion of restaurants are like that.


> I see it every single day, just during my barely 20-25min subway commute to work.

Guess it's not that subtle then.


I mean, it is about as subtle as a middle school student or someone wearing a suit+tie on the subway. I would notice them, but absolutely nobody would mind or care about it.

People holding their phone up get pretty much the exact same treatment. I.e., being something that you would notice, but pay no mind to it as being something entirely unremarkable.


Those two hypothetical scenarios you listed don’t necessarily work the way you are describing it, which is why the whole logic and mechanisms behind the US copyright laws might seem incomprehensible or illogical to you.

In reality, it is way more complex and less clear-cut. Which makes sense, because oversimplifying it will lead to silly-sounding conclusions and an almost entirely incorrect understanding of how this works.

For those who don’t want to read the actual full explanation (which is a totally normal position, as the explanation goes fairly into the weeds), I will just put a TLDR summary at the end. I suggest everyone check out that summary first, and then come back here if there is interest in a more detailed explanation.

----------------------------

First, we gotta settle on 3 key concepts (among many) the US copyright law relies on.

1. Human authorship - self-explanatory; you cannot assign authorship to a fish or your smartphone.

2. Original/minimal creativity - some creative choices, not just "I pressed the button."

3. Fixation - the content needs to be recorded on a tangible medium; you cannot copyright a "mood" or a thought, since those aren’t tangible media.

Now onto your hypothetical scenarios:

1) "Initialize an algorithm to point your camera at the street and write those bytes to disk and you are the author of a perpetual stream of data."

Writing bytes to disk satisfies fixation, but it doesn’t automatically make you the author of a copyrightable work. You gotta satisfy the minimum creativity requirement too (e.g., camera positioning, setup, any other creative choices/actions, etc.). Otherwise you are just running a fully automated security cam feed with zero human input, and those videos aren’t easily copyrightable (if at all). You might own copyright in a video work if there’s sufficient human creative authorship - but mere automated recording doesn’t guarantee that.

2) "Initialize an algorithm to point your camera at the street and describe those bytes in words and you are no longer the author of a perpetual stream of data."

This is just close to being plainly incorrect. If you (a human) write a textual description, that text is typically copyrightable as a literary work (assuming it’s not purely mechanical like "frame 1: car, frame 2: another car, etc." with no expressive choices). Creating a description doesn’t erase any copyright you may or may not have had in the underlying recording. They’re just different works (audiovisual work vs. text work).

Important to note: neither makes you the author or owner of the underlying "data" of reality, because copyright protects expression, not the underlying facts.

----------------------------

TLDR:

* Recording the street can produce a copyrightable work if there is human authorship and minimal creativity in how the recording is made. Pure automated capture may fail that.

* Describing the street in words is usually a separate, independently copyrightable work (e.g., a text or audio version of those words), but it doesn’t change the status of the underlying recording.


But how does that apply to photography vs AI photo generation?

Photo (w/ camera):

1. MET: Human authorship - somebody picked the tools (lens, body) and used them.

2. MET: Creativity - somebody chose a subject, lighting, etc.

3. MET: Fixation - film (or SD card).

Photo (w/ AI):

1. MET: Human authorship - somebody picked the tools (models, etc.) and used them.

2. MET, maybe?: Creativity - somebody wrote the prompt, provided inputs, etc. (how is this substantially different from my wife taking a random snapshot on her phone?)

3. MET: Fixation - written to disk, same as with a digital camera.


The camera analogy breaks at one specific point: who determines the expressive elements of the final work.

With photography, the human determines framing, angle, timing, lens, exposure. The camera just records light from a scene the human selected and composed. Even a random photo reflects where the photographer stood and when they pressed the shutter. The device doesn’t invent the composition.

With AI image generation, the user provides high-level instructions, but the system determines the actual composition, lighting, geometry, textures, etc. The expressive details of the final image are generated by the model, not directly controlled by the user.

That’s why the US copyright laws currently treat them differently. It is less a question of "tool vs. tool", and more about whether the human determined the expressive content (or whether the system did). Prompting can be creative (in a legal sense), but giving instructions is not the same as controlling the expression.

If I tell a human painter “paint XYZ in an expressionist style,” I don’t become the author of the painting. The painter does, because they determined the expression. And since the "painter" (in the case of AI image generation) is not a human, that work usually cannot be copyrighted.

There is an important caveat to all of this: it’s not binary or perfectly clear-cut. If someone iteratively refines prompts, controls seeds, manually inpaints, selects and arranges outputs, heavily edits the result, etc., then those human contributions can be protected. But purely AI-generated output, where the system determines the expressive elements, is not considered human-authored under the current US copyright laws.

Mind you, none of this is perfectly settled, as this is a very rapidly evolving/adapting area of law (as it pertains to AI usage). I am not claiming that this is the end-all of how it should be legislated or that there are no ways to improve it. But the current reasoning within the US copyright law used to address this type of scenario (at the present moment) doesn't strike me as illogical or unreasonable.


Not needing to charge as much due to much better battery capacity and/or usage efficiency is objectively a good thing, full stop.

How that additional time is actually spent is a whole separate story, but that's entirely tangential to assessing the impact of battery life improving.


Is there any evidence of this being an actual pattern? I cannot speak for the rest of Americans, but I, personally, haven’t noticed it because it didn’t seem to be the case to me at all.

Asking because from my perception over the past 12 months, US ambassadors got more friendly and cordial with some countries (e.g., Japan[0]/Taiwan/South Korea[1]) and less cordial with others (e.g., certain European countries, like the UK, that attempt to [imo unjustly] press American businesses that don’t even have any business presence within their jurisdiction).

0. U.S. Ambassador George Glass participated in remarks emphasizing the “new golden age” of U.S.-Japan relations, underlining partnership. (https://jp.usembassy.gov/ambassador-glass-remarks-at-yomiuri...)

1. The U.S. signed Technology Prosperity Deals with both Japan and South Korea in late 2025, advancing shared technology and innovation goals. (https://www.whitehouse.gov/articles/2025/10/the-united-state...)



That's Jared Kushner's dad. The one who went to prison for setting his brother-in-law up with a prostitute to break up his sister's marriage. I am sure he will approach this new mission with the same finesse he demonstrated in the previous ones.


> Your package can explode, these torrents cannot (as far as I am aware).

Sure, but what if the scenario was slightly modified, with explicit 100% guarantees regarding the package you would receive in the mail:

1. It could only contain either an SSD/hard drive or a USB drive. The storage device has not been tampered with. It was only ever used as a regular storage device out of the box.

2. There is no malware or any malicious executables on the storage device. The only types of data that it could contain would be text/html, structured data/document files (json, csv, office suite files, pdf, etc.), and media files (audio, video, images, etc.). None of those files will exploit any vulnerabilities in the software that opens them (neither through the parser nor anything else).

This makes it nearly a perfect 1:1 analogy to the torrenting scenario, with both involving the same set of what are imo the most important dangers.

Which, for me personally, is the fear of ending up with illegal content (CSAM, stolen credit card dumps, etc.) on a storage device in my possession through no fault of my own.

Even if it could be a winnable battle in the end, it would be pretty much over reputationally way before it gets to the legal resolution. Just being accused of having any illegal content of that nature is not something I would want to ever deal with at all.

You gotta realize how it would sound and how you would appear to most uninvolved average people in real life, when your legal defense isn’t even something like statement #1 below, and is way closer to statement #2:

> “I am not guilty, the accusations are false, those files were never present on any of my storage devices.”

> “I am not guilty, despite those files being actually present on a storage device in my possession. That’s all due to how torrents inherently work, so, let’s start from the basics…” [and now we gotta explain simplified basics of torrent technology and how it works to the DA, the judge, as well as anyone else observing the trial, and pray they will try to actually understand]


> Anthropic's community, I assume, is much bigger. How hard it is for them to offer something close enough for their users?

Not gonna lie, that’s exactly the potential scenario I am personally excited for. Not due to any particular love for Anthropic, but because I expect this type of tight competition to be very good for trying a lot of fresh new things and for the subsequent discovery of new ideas and what works.


My main gripe is that it feels more like land grabbing than discovery

Stories like this reinforce my bias


Yeah, it is still there, and there is a pretty clear cut logic for when it will appear.

If you tap while a word is selected, it won’t appear. If you tap on the cursor while a word isn’t selected, it will appear.


If by "clear cut logic" you mean a consistent process, then sure. But if you mean intuitive, I must disagree.


Especially because it was working fine and understandable in older iOS versions.

Also for some reason autocorrect seems to have gotten a lot worse. It has become nearly impossible to type a grocery list without all kinds of annoying wrong corrections.


Yeah, good point. Your guess was right, I meant it in the sense of consistency, not in the sense of it being intuitive without knowing about it

