It's not an intern, because you can speak at a much higher level of abstraction - one that, in the old world, you could only use with an architect.
In the new world this has become the potential expectation at an intern level, which means: forget leetcode - learn to deal with higher-level architecture concepts and practice them.
Please explain this in more detail. I don't understand. The level of abstraction seems straightforward, even for an intern.
Properly understanding this level of abstraction in context, without lengthy explanations, is a completely different matter. Beginners struggle with this, just as AI systems struggle with it (or rather, find it impossible). Of course, I'm talking about the conceptual, ontological context here, not the earlier, textual "LLM context".
Prior to LLMs, the concept of "Open Source" could co-exist with "Free Software" - one was a more pragmatic view of how to develop software, the other a political activist position on how the code powering our world should be.
AI has laid bare the difference.
Open Source is significantly impacted. Business models based on it are affected. And those who were not taking the political position find that they may not prefer the state of the world.
Free software finds itself, at worst, a bit annoyed (it needs to figure out the slop problem), and at best, with an ally in AI - the amount of free software being built right now for people to use is very high.
Your question has nothing to do with the GPL. If your concern is that the code may count as derivative work of existing code then you also can't use that code in a proprietary way, under any license. But that probably only applies if the LLM regurgitated a substantial amount of copyrighted code into your codebase.
Fair; that was an example instance. People interested in “Free software” rather than “open source” seem to often favor the GPL, though other licensing options also count as “free software”.
But in any case, the question really refers to, can the LLM-generated software be copyrighted? If not, it can’t be put under any particular license.
Is your concern the potential for plagiarism or the lack of creative input from the human? If the latter, it would depend on how much intellectual input was needed from the human to steer the model, iterate on the solution etc.
If it can't be copyrighted, then no. Licenses rely on the copyright holder's right to grant the license. But that would also mean it'd be essentially public domain. I'm not sure there's really settled legal opinion on this yet. IIRC, it can't be patented.
The way the world is currently working is code created by someone (using AI) is being dealt with as if it was authored by that someone. This is across companies and FOSS. I think it's going to settle with this pattern.
Very fresh take on APIs. With proliferation of compute types (container, lambda, cloudflare worker, VM, offline-first apps), there's something to be said for a common interface.
This raises an ethical dilemma, partly explored by a Black Mirror episode, where AI can call upon gig workers. What if a rogue agent gets things done like this: it asks gigworker1 to call a person to meet under a bridge at 4, asks gigworker2 to place a rock on the bridge, and asks gigworker3 to clear the obstruction by dropping the rock off the bridge at 4.
None of the three technically knew they were complicit in a larger illegal plan made by an agent. Has something like this occurred already?
The world is moving too fast for our social rules and legal system to keep up!
This was explored a bit in Daniel Suarez’s Daemon/Freedom (tm) series. By a series of small steps, people in a crowd acting on orders from, essentially, an agent assemble a weapon, murder someone, then dispose of the weapon with almost none of them aware of it.
The recent show Mrs. Davis also has a similar concept in which an AI would send random workers with messages to the protagonists, unbeknownst to the workers.
I'd say, abstracting it away from AI, Stephen King explored this type of scenario in 'Needful Things'. I bet there is a rich history in literature of exactly this type of thing, as it basically boils down to an exploration of free will vs determinism.
Not AI, but there was the 2017 assassination of Kim Jong-nam, which was a similar situation and something that could have been organised by an AI.
Two women thought they were carrying out a harmless prank, but the substances they were instructed to use combined to form a nerve agent which killed the guy.
Not AI, but I've heard car thieves operate like this - as a loose network of individuals who each do just one part of the process, where each part on its own is either legal or less punishable by law than stealing the car.
One guy scouts the vehicle and observes it, another guy is called to unlock it and bypass the ignition lock, and yet another guy picks it up and drives away, with each given a veneer of deniability about what they're doing.
Extrapolate a bit to when AI is capable of long-term, complex planning, and you see why AI alignment and security are valid concerns, despite the cynicism we often see regarding the topic.
If you are asked, or paid, to drop a rock off a bridge, you are responsible for checking that there's no one underneath first. It doesn't matter whether you're being asked to do it by an AI or by another person.
Investigators would need to connect the dots. If they weren't able to connect them, it would look like a normal accident, of the kind that happens every day. So why would an agent call gigworker1 to that place in the first place? And why would the agent feel the need to kill gigworker1? What could the reasoning be?
Edit: I thought about that. Gigworker 3 would be charged. You should not throw rocks from a bridge if there are people standing under it.
Or just don't throw rocks from a bridge, at all. /s
Who's at fault when: Your CloowdBot reads an angry email that you sent about how much you hate Person X and jokingly hope AI takes care of them, only for it to orchestrate such a plan.
How about when your CloowdBot convinces someone else's AI to orchestrate it?
The AI can hire verifiers too. It of course turns into a recursive problem at some point, but that point is defined by how many people predictably do the assigned task.
The essay is directionless. For someone looking for a cogent take, I think the recent post by Anthropic's CEO is one: https://www.darioamodei.com/essay/the-adolescence-of-technol.... I'm still processing it, but it's put across logically, albeit from a biased viewpoint.
I'd like to point Simon and others to 2 more things possible in the browser:
1) WebContainer allows Node.js frontend and backend apps to be run in the browser. This is readily demonstrated by the (now sadly unmaintained) bolt.diy project.
2) JSLinux and x86-in-WASM examples like v86 allow running a complete Linux environment in WASM, with 2-way communication. A thin extension adds networking support to Linux.
So technically it's possible to run a pretty full-fledged agentic system with the simple UX of visiting a URL.
My eventual goal with that is to expand it so an LLM can treat it like a filesystem and execution environment and do Claude Code style tricks with it, but it's not particularly easy to programmatically run shell commands via v86 - it seems to be designed more for presenting a Linux environment in an interactive UI in a browser.
It's likely I've not found the right way to run it yet though.
On the second tab (which is a text/browser interface to the VM) here: https://copy.sh/v86/?profile=buildroot , you can start an sh shell, run arbitrary commands, and see the output. Making a programmatic I/O stream is left as an exercise (for Claude, perhaps :).
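A minimal sketch of what that programmatic I/O stream could look like. The method names (serial0_send and the "serial0-output-byte" listener) are from v86's documented serial interface; the prompt-matching heuristic is my own assumption and would need tuning per guest image:

```javascript
// Drive a v86 guest shell by treating the serial console as an I/O stream.
// Assumes `emulator` is a started v86 instance with a serial console shell.
function makeShell(emulator, prompt = "# ") {
  let buffer = "";
  let waiter = null;
  emulator.add_listener("serial0-output-byte", (byte) => {
    buffer += String.fromCharCode(byte);
    // Naive heuristic: a command is "done" when the shell prompt reappears.
    if (waiter && buffer.endsWith(prompt)) {
      const out = buffer;
      buffer = "";
      const resolve = waiter;
      waiter = null;
      resolve(out);
    }
  });
  return {
    // Send a command, resolve with everything printed up to the next prompt.
    run: (cmd) =>
      new Promise((resolve) => {
        waiter = resolve;
        emulator.serial0_send(cmd + "\n");
      }),
  };
}
```

This only handles one command in flight at a time and will break on programs that print the prompt string themselves, but it's enough to give an LLM a run-command/read-output loop.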
One of the very first experiments I did with AI was trying to build a browser-based filesystem interface and general API provider. I think the first attempts were with ChatGPT 3.5. I pretty quickly hit a wall, but GPT-4 got me quite a lot further.
I see the datestamp on this early test https://fingswotidun.com/tests/messageAPI/ is 2023-03-22. Thinking about the progress since then, I'm amazed I got as far as I did. (To get the second window to run its test you need to enter aWorker.postMessage("go") in the console.)
The design used IndexedDB to make a very simple filesystem, and a transmittable API:
    importScripts("MessageTunnel.js"); // the only dependency of the worker

    onmessage = function (e) {
      console.log("Worker: Message received from main script", e.data);
      if (e.data.apiDefinition) {
        // The host sends a description of its API plus a MessagePort to reach it
        installRemoteAPI(e.data.apiDefinition, e.ports[0]);
      }
      if (e.data == "go") {
        go();
        return;
      }
    };

    async function go() {
      const thing = await testAPI.echo("hello world");
      console.log("got a thing back ", thing);
      // fs is provided by installRemoteAPI
      const rootInfo = await fs.stat("/");
      console.log(`stat("/") returned `, rootInfo);
      // fs.readDir returns an async iterator that awaits on an iterator on the host side
      const dir = await fs.readDir("/");
      for await (const f of dir) {
        const stats = await fs.stat("/" + f.name);
        console.log("file " + f.name, stats);
      }
    }
I distinctly remember adding a Serviceworker so you could fetch URLs from inside the filesystem, so I must have a more recent version sitting around somewhere.
It wouldn't take too much to have a $PATH analog and a command executor that launches a worker from a file on the system if it finds a match on the $PATH. Then an LLM would be able to make its own scripts from there.
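The $PATH lookup could be as small as this sketch, assuming the fs.stat from the worker code above rejects on a missing path (an assumption; the helper and its behaviour are mine, not from the original project):

```javascript
// Resolve a command name against a list of PATH-style directories,
// using a stat-like async call that rejects when the file is absent.
async function resolveCommand(fs, name, path = ["/bin", "/usr/bin"]) {
  for (const dir of path) {
    const candidate = dir + "/" + name;
    try {
      await fs.stat(candidate); // assumed to throw/reject if missing
      return candidate; // first hit wins, like a real $PATH
    } catch {
      // not in this directory, try the next one
    }
  }
  return null; // command not found
}
```

The executor would then do `new Worker(resolvedPath)` (via the ServiceWorker URL scheme) for whatever resolveCommand returns.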
It might be time to revisit this. Polishing everything up would probably be a piece of cake for Claude.
Isn't webcontainers.io a proprietary, non-open-source solution with paid plans? Mentioning it alongside open source, auditable platforms seems really strange to me.
Technically, it runs in Chrome, so making an open source version is viable. The bolt.diy project was giving opencontainers a shot, which is a partial implementation of the same. But broadly, if this method works, then a FOSS equivalent is not a worry - it should come soon enough.
No, for every established company that may be hesitant, there's an up-and-comer with nothing to lose who will jump on the opportunity, and the industry will continue moving forward.