> Fewer people applying for patents, because the minute you apply for the patent, it's available to everybody, which means every model can train on it
We know LLM companies have, for lack of a better word, "sidestepped" the copyright on millions of works with their "transformative fair use" arguments. Are LLMs also a way to sidestep patents?
LLMs are accelerants. They enable people to commit patent and copyright infringement at a much larger scale. As we know from previous examples, if a company breaks the law at a large enough scale, eventually they have to let it keep doing it.
I don’t see how. You can train on something with a pending patent, but what’s the benefit? If it gets granted, you open yourself up to being sued; I don’t see how AI works around that, since the idea itself is still patented. I think I’m missing something for the argument to make sense. Or is the idea that if enough people use your patented idea, you won’t be able to enforce it? That sounds risky to me.
Patents are public. Ingesting and innovating on them is the intended use. If you use an LLM to then make and market something that infringes on a patent, that isn’t the LLM doing any infringing, it’s you.
> are you even aware that you're infringing a patent?
Plenty of folks first learn they’re infringing when they get a demand letter. Unless you ask it, I’m not sure it’s on the LLM to search for prior art and patent conflicts.
What a funny perspective - they didn’t sidestep copyright, they blatantly infringed without financial consequence. The interesting “upside” is that none of the generated works are protected by copyright. So it’s a bizarre conundrum that goes to show the complete disconnect between the original intent - to protect authors and creators - and the warped capitalist mechanics of “rights holders” like Disney buying political influence for regulatory market capture.
Sugar-coating the discussion is for children and dishonest ethical rationalization, in my view.
Mostly because determinism is an all-or-nothing proposition. Either EVERYTHING in the game logic is perfectly deterministic and isolated from everything else, or it's pretty much as if nothing were. So if you want to commit to determinism, you have to be constantly vigilant, debugging these maddening types of bugs. Whether that investment is worth it is up to each dev.
Sometimes you can find small areas of the game that can be deterministic and are worth it. In a basketball game I worked on in the 90s, I designed the ball physics to be deterministic (running at 100 Hz). The moment the ball left the player's hands it ran deterministically; we knew whether the shot was going in and, if not, where the rebound would go.
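The key ingredient in that kind of setup is a fixed timestep with no wall-clock or random input, so the release state fully determines the outcome. Here's a minimal sketch of the idea in Python; the names (`simulate_flight`, the 100 Hz `DT`) and the simple projectile model are illustrative assumptions, not the actual 90s engine:

```python
# Illustrative sketch: deterministic ball flight at a fixed 100 Hz step.
# Names and the physics model are hypothetical, not from any real engine.

DT = 1.0 / 100.0   # fixed timestep: 100 Hz, never tied to wall-clock time
GRAVITY = -9.8

def simulate_flight(x, y, vx, vy, floor=0.0):
    """Integrate the ball from release until it reaches the floor.

    Because dt is fixed and there is no wall-clock or random input,
    the same release state always yields the same landing point --
    so at the moment of release the game already knows where the
    ball will end up.
    """
    steps = 0
    while y > floor:
        vy += GRAVITY * DT          # semi-implicit Euler: velocity first
        x += vx * DT
        y += vy * DT
        steps += 1
    return x, steps

# Same release state -> identical result, every run.
landing_a = simulate_flight(0.0, 2.0, 5.0, 6.0)
landing_b = simulate_flight(0.0, 2.0, 5.0, 6.0)
assert landing_a == landing_b
```

On real hardware you'd typically also want fixed-point (or carefully controlled floating-point) math so the result matches across machines, not just across runs on the same machine.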
Doom and Wolf3d and many other multiplayer games of the 90s (including some I worked on) were deterministic/lockstep and machines only needed to exchange inputs (in a deterministic manner ofc).
Quake was completely different. The client/server terminology was meant to describe that the game state is computed on the server, updated based on client inputs sent to the server, and then sent from the server to the clients for display. Various optimizations apply.
Deterministic/lockstep games more often used host/guest terminology to indicate that one machine was acting as coordinator/owner of the game, but none of them were serving state to the others. The terminology isn't strict, and anyone could use those terms however they wanted, but it's a good ballpark.
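The lockstep model above can be sketched in a few lines: machines exchange only inputs, and because every machine runs the identical deterministic simulation over those inputs, full state never needs to cross the wire. The names here (`advance`, the dict-of-tuples state) are hypothetical, just to show the shape:

```python
# Hypothetical lockstep sketch: peers exchange only inputs; each runs
# the same deterministic simulation, so state is never transmitted.

def advance(state, inputs_by_player):
    """Deterministic step: apply every player's input in a fixed order."""
    for player in sorted(inputs_by_player):  # fixed iteration order matters
        dx, dy = inputs_by_player[player]
        x, y = state[player]
        state[player] = (x + dx, y + dy)
    return state

# Both machines start from the same state and see the same input stream...
frames = [{0: (1, 0), 1: (0, 1)}, {0: (1, 1), 1: (-1, 0)}]
machine_a = {0: (0, 0), 1: (10, 10)}
machine_b = {0: (0, 0), 1: (10, 10)}
for inputs in frames:
    advance(machine_a, inputs)
    advance(machine_b, inputs)
assert machine_a == machine_b  # ...so they stay in perfect sync
```

The catch, as the comments above note, is that ANY divergence (unordered iteration, uninitialized memory, floating-point differences) silently breaks the sync, which is exactly why desync detection matters.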
Typical deterministic game engines will do this: send it to every machine as part of the initial game state, and also check the seed across machines on every simulation frame (or periodically) to detect desyncs.
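A common way to implement that per-frame check is to hash a canonical serialization of the simulation state and compare hashes across peers. A minimal sketch, assuming integer fixed-point positions and CRC32 as the checksum (the function names are illustrative):

```python
# Hypothetical sketch of lockstep desync detection: each peer hashes its
# simulation state every frame and compares against the others' hashes.

import struct
import zlib

def state_checksum(frame, entities):
    """Hash the authoritative simulation state for one frame.

    entities: list of (x, y) integer fixed-point positions. Only
    deterministic simulation state goes in -- never render-side data.
    """
    buf = struct.pack("<I", frame)
    for x, y in entities:
        buf += struct.pack("<ii", x, y)
    return zlib.crc32(buf)

def check_desync(frame, local_entities, remote_checksum):
    """Return True if this machine has diverged from the remote peer."""
    return state_checksum(frame, local_entities) != remote_checksum

# Two machines simulating the same inputs agree...
a = state_checksum(42, [(100, 200), (300, 400)])
b = state_checksum(42, [(100, 200), (300, 400)])
assert not check_desync(42, [(100, 200), (300, 400)], b)
# ...until one drifts by a single unit: desync detected.
assert check_desync(42, [(100, 200), (300, 401)], a)
```

Checking only a seed or a cheap checksum every frame keeps the cost low; a full state hash can be reserved for periodic deep checks or post-desync diagnosis.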
> a photo of a sunset can be art
...
> But if all the person contributes is a prompt, the text of that prompt is the extent of their art.
The natural question to pose is: does that mean that for the person who pressed the shutter button on that camera, the button press was the extent of their art? Of course not; intent, sensibility, timing, recognizing there's something special in front of you, preparing a composition, orchestrating poses, framing, manipulating the medium via shutter speed or exposure to create an appropriate texture... all of those and more can play a part, and the button press is just the delivery method.
Millions of photos per day are not art even if they show a pretty thing, and nobody has a problem with that. Even when we actively try to capture something special, most people will later look at their photo and say "it shows the place, but it doesn't communicate anything like what being there made me feel".
So in the same way, I think the interesting discussion isn't whether AI images are art. Millions of prompts per day will not be art, and nobody (except grifters) has a problem with that. The question is how AI can become another vehicle for people to produce interesting art. Perhaps there's nothing special there. But I hope people with the drive to explore and the need to communicate keep giving it a shot, and prove or disprove the notion that "it's just a prompt".
> Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.
I have no idea how to read this and not go blind. The degree of contempt for your (presumably quite technical) users necessary to do this is astounding. From the article:
> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.
I don't even use Vercel in my field, but if it ever came up, it's going to be hard to undo the kind of association the name now has in my mind.
If you’re letting Claude Code just handle secrets like this, you’re already fucked from a security standpoint, so I don’t really see the big deal here.
Today it was the Vercel plugin, but if you’re letting an LLM agent with access to bash and the internet read truly sensitive information, then you’re already compromised.
Plans are useless, but planning is essential. IIRC Nintendo had been operating for decades before it shifted to video games. And Glitch (the MMO that gave birth to Slack) was also very much a product with a plan. The plan failed, or the execution failed, or the industry shifted, or something else, or all of the above. But for sure it was not just "a bunch of talented people."
Nintendo was founded in 1889 and basically predates electricity in the home. I think they did a very successful job pivoting to new forms of entertainment as they arose over the years. Not a planning failure in any sense.