I see the argument for whoever paid for the tokens. Or, in the case of free AI usage, the person who sent the prompt (or whoever they're acting on behalf of, i.e. the company they're working for at the time).
The primary issue is that it's all built on stolen data in the first place.
Even taking the least generous interpretation of what LLMs do and saying they're just "copy/pasting others' code," it's still not stealing, because the original still exists and presumably still makes money. The original has to be gone for theft to have occurred.
In order to have a sane conversation about this we have to all agree not to lie.
Headlamps for sensitive applications will ALWAYS involve a replaceable or external battery. You can't have your light going out when you're in a dangerous situation. When I used to rock climb, I always kept 3x AAAs taped to the strap in case the one I had in died. Never needed to use them, but it made me much more comfortable.
A lithium-ion headlamp plus a power bank does the same thing for the same weight, with more flexibility. You can also charge a phone, an InReach, or cameras, and decide what the most important use of the power is. That's the standard these days.
Another approach is to bring 2 charged headlamps. Again, same weight as an old headlamp + 3 AAAs, but it covers additional failure modes like breaking or dropping one.
You should look into how other companies and tools that use FFmpeg handle this situation.
I wonder if you can keep your application itself closed source, but make an open-source adapter that handles the interaction with FFmpeg.
I'm not super familiar with open source licensing, and IANAL, so make sure to do your own research :)
As an example, I believe Audacity required me to install FFmpeg manually and add it to my PATH. This is slightly different since Audacity itself is also open source, but it could be helpful to reference.
Another option: prompt the user to install FFmpeg, or auto-install it separately, rather than bundling the binary. The problem is that FFmpeg can be configured in so many ways; the general install binaries are usually LGPL-compatible, though.
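If you go the detect-rather-than-bundle route, here's a minimal Python sketch of the idea (the function names are my own, and IANAL either): locate a system FFmpeg on PATH, then inspect the `configuration:` line of `ffmpeg -version` for the configure flags that pull in GPL/nonfree code.

```python
import shutil
import subprocess

def find_system_ffmpeg():
    """Return the path to an ffmpeg binary on PATH, or None if not installed."""
    return shutil.which("ffmpeg")

def ffmpeg_build_flags(version_output):
    """Extract the ./configure flags from the output of `ffmpeg -version`."""
    flags = set()
    for line in version_output.splitlines():
        if line.startswith("configuration:"):
            flags.update(line.split()[1:])
    return flags

def is_lgpl_only_build(version_output):
    """True if the build avoids the flags that switch FFmpeg to GPL/nonfree terms."""
    flags = ffmpeg_build_flags(version_output)
    return "--enable-gpl" not in flags and "--enable-nonfree" not in flags

if __name__ == "__main__":
    path = find_system_ffmpeg()
    if path is None:
        print("FFmpeg not found -- prompt the user to install it")
    else:
        out = subprocess.run([path, "-version"],
                             capture_output=True, text=True).stdout
        print(path, "LGPL-only build:", is_lgpl_only_build(out))
```

Whether you need this check at all depends on how you invoke FFmpeg; running it as a separate process (the Audacity-style approach mentioned above) keeps the licensing boundary much cleaner than linking its libraries.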
I was surprised to learn that Copilot is priced per request, not by token count. I've only used models through work before, so I hadn't looked into how things are priced; this is all somewhat new to me.
While this is certainly cool to see, and I love seeing how fast webservers can go, the counter-question comes to mind: "Do you even need 25,000 RPS and sub-ms latency?"
I don't choose a DB over a flat file for its speed. I choose a DB for the consistent interface and redundancy.
You mention the client-server architecture of opencode. Is that a local server or is it calling home to opencode servers?
One of the ideas I like about opencode is the ability to prompt and such from a web browser. So I'm curious if that is the client-server architecture you are talking about, or if it's something else.
For reference, I used replit for some vibe-ish coding for a little bit and really liked that I could easily prompt and view output on my phone when hanging out away from my computer. Or while waiting at the airport for example.
(RIP OG replit by the way. They've pretty much completely pivoted from a REPL to AI, which is pretty hilarious to me given the company name xD)
> One of the ideas I like about opencode is the ability to prompt and such from a web browser. So I'm curious if that is the client-server architecture you are talking about, or if it's something else.
Yes, this is what I meant. And yes it's ok that you like this about opencode :)
> For reference, I used replit for some vibe-ish coding for a little bit and really liked that I could easily prompt and view output on my phone when hanging out away from my computer. Or while waiting at the airport for example.
I use Google Jules and also appreciate being able to nudge it forward when not at the computer. In general I often appreciate when things run on other people's machines. However, if I'm to run a thing on my machines, it better be minimalist!
The harness (the tool itself) is OK, but the defaults in the paid pieces of the tool make really bad privacy decisions. They offer Zen, a pay-as-you-use credit system with access to the models they think work best with the harness. Their own stealth model in it, along with a number of the leading new models, is always-on sharing your data for training purposes. They don't make this immediately obvious, either; you have to click through links on their website to see the breakdown.
I am not usually super privacy-minded, but if they've already made it non-obvious that this is happening, I don't really trust the underlying tool.
Above is the link. The front page says your privacy is important and that they don't do training on your data except for the following exceptions, which links to this page. Then even their own model is training on your data, except there is no opt-out. So if you pay for Zen and select one of these models in the tool, you have no clue it's auto-training on your data.