From that perspective, which is totally correct, it makes you wonder what other domains of knowledge look like when pushed to the boundaries of our capabilities as a species.
"Backed by" as in "running git under the hood", not as in "supported by the git organization". I'd probably use "powered by" in this case to avoid confusion
I think it means parallel branches. Normally in git you can use one branch at a time. With agentic coding you want agents to build multiple features at the same time, each in a separate branch
Can agents not check out different branches and then work on them? That's what people do too. I have a hard time understanding what problem is even being solved here.
To be entirely fair, while git is getting better, the tooling UI/UX is still designed with the expectation that someone has read the git book and understands exactly how it works.
Which should be a basic skill for anyone dealing with code, but Git hasn't been just a programmer's tool for a long time now, so a better UI is welcome
claude can use worktrees.. so if you have a system with say 10 agents, each one can use a worktree per session.. no need to clone the repo 10 times or work on branches. Worktreeees.
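For anyone who hasn't used them, a rough sketch of that workflow (the branch and directory names here are made up):

    git worktree add ../agent-1 -b feature-a   # new branch, its own directory
    git worktree add ../agent-2 -b feature-b   # a second simultaneous checkout
    git worktree list                          # show every checkout of this repo
    git worktree remove ../agent-2             # clean up when a session ends

All the worktrees share a single object database, which is the whole point versus cloning the repo ten times.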
Seconding others here, what you're bringing up as distinct features of Gitbutler seems to just be stuff git can do.
- One local copy of a repo with multiple work trees checked out at once, on different branches/commits? Git does that.
- "Add a patch to any commit in any branch" I can't think of a way of interpreting this statement (and I can think of a couple!) that isn't something git can do directly.
Maybe it adds some new UI to these, but those are just git features. Doesn't mean it's a bad product (I have no idea, and "just UI" can be a good product) but these seem to be built-in git features, not Gitbutler features.
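For what it's worth, one reading of "add a patch to any commit in any branch" is plain git too, via fixup commits. A minimal sketch, with a hypothetical target hash (check out the branch first, or use a worktree):

    git add -p                           # stage just the change you want
    git commit --fixup=abc1234           # mark it as a fix for that commit
    git rebase -i --autosquash abc1234^  # fold it into the target commit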
Does it check out different branches at the same time, provide an in-memory representation to be modified by another API, or do multitasking checkouts? The first is already native in Git. I guess the others are innovation, although the second sounds unnecessary and the third like comedy.
Lol. Unfortunately VCs and ever-so-earnest founders are impervious to irony. Best to just let them get their grift on and be happy it isn't your money they're boondoggling.
> especially for dealing with rebase/merge conflicts where I would say Git is mediocre.
It seems like everyone who holds this opinion wants Git to be some magical tool that will guess their intent and automatically resolve the conflict. The only solutions other than surfacing the conflict are locking (transactions) or using some consensus algorithm (maybe powered by logical clocks). The first sucks, and no one has been able to design the second (code is an end result, not the process of solving a problem).
> It seems like everyone who holds this opinion wants Git to be some magical tool that will guess their intent and automatically resolve the conflict.
Absolutely not. There are plenty of fairly trivial situations where Git's default merge algorithm gives you horrible diffs. Even for a case as simple as adding a function to a file, it will get confused and put closing brackets in different parts of the diff. Nobody is asking for perfection, but if you think it can't be improved you lack imagination.
There are a number of projects to improve this, like Mergiraf. Someone looked at fixing the "sliders" problem 10 years ago, but sadly it didn't seem to go anywhere, probably because there are too many core Git developers who have the same attitude as you.
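For the curious, a minimal illustration of the slider problem on a hypothetical C-style file: inserting b() between a() and c() admits two equally valid hunks, and the default heuristic can pick the confusing one.

    slid hunk (confusing):
     int a() {
         return 1;
    +}
    +
    +int b() {
    +    return 2;
     }

    readable hunk (same result):
     int a() {
         return 1;
     }
    +
    +int b() {
    +    return 2;
    +}

Both apply to the exact same result; the first just reads as if a()'s closing brace belongs to b().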
Well, yeah, but Git is basically UNIX/POSIX or JPEG: good enough to always win against something better, like Plan 9 or JPEG XL (though I think that last one may win in the long term).
I will speculate that the DDoS attacks are funded by companies and governments that benefit from not being held accountable for their past deeds. I suspect X, Google, China, the DPRK, Hungary, etc.
Thankfully, I am nearing the end of my career with software after 25 years well spent. If I had been born in a different decade, I would be facing the brunt of the AI shift, and I don’t think I would want to continue in the industry. Obviously, this is a personal decision, but we are in a totally different domain now, where, at best, you’re managing an LLM to deliver your product.
I'm surprised Python is on that list. TypeScript doesn't seem like a terrible choice, as it can leverage vast ecosystems of packages, has concurrency features, a solid type system, and decent performance. C++ lacks as robust a package ecosystem, and Python doesn't have enforced static types, which makes it a non-starter for larger projects for me. Rust would have been a great choice for sure.
Python and C++ have been used for countless large projects, each for many more than TypeScript. It's all about trade-offs that take into account your tasks, the coders available at the project's commencement, the environment, etc.
People like to put companies that are household names on pedestals, but the choices they make are mostly guided by what their people can do and which options give them the most value for free. They mostly operate the way smaller companies do, but with a bigger R&D budget to address issues like scale that the larger market has little incentive to solve.
Also, this product is like a year old… it has barely hit its teething phase. I wouldn’t be surprised if the core is still the prototype someone whipped up as a proof of concept.
I reckon some believe these companies are basically magical, and are utterly astonished when they’re shown to be imperfect in relatively uninteresting ways. I’m a lot more concerned about the sanity of the AI ecosystem they operate in than the stability of some front-end Anthropic made.
I mostly mentioned it because it is pre-installed on some (Linux) systems. Though of course, if you're trying to obfuscate the source code, you need to bundle an interpreter with the code anyway.
But it has historically been used for big programs, and there are well-established methods for bundling Python programs into executables.
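For example, with PyInstaller (one common tool; app.py is a made-up script name):

    pip install pyinstaller
    pyinstaller --onefile app.py   # emits a single-file executable in dist/

It bundles the interpreter into the binary, which also covers the "you need to bundle an interpreter anyway" point above.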
Yeah, wasn't the median wage something like $10/week at the turn of the 20th century? I agree that we have big problems now, but I feel like this analysis is deeply flawed without the inclusion of wage data.
Wage data, population growth, overall consumption, credit (and guarantees against it) are all drivers of inflation.
Look at student loans vs the cost of college:
1958: Federal program to encourage science and engineering.
1976: Restrict bankruptcy discharge of this debt.
2005: Same rules for private loans.
Today college has (as someone here so eloquently put it) a cruise-ship aesthetic, and has far more "administration" than "education" in terms of raw staff.
TV went from an expensive box (fixed cost) to cable (monthly fee) to on-demand programming (several monthly fees, and with ads).
A phone used to be a single item in your house with a monthly fee. It was an item so durable that you could beat a robber with it and still call the police (see the old AT&T black rotary phone). Now it's an item per person in a household that you can easily lose, that might break if you drop it, and that costs anywhere from $200 to $1,500.
None of this is inflation in the traditional sense, but it does impact the velocity of all money in the system, and it puts pressure on individual spending in a way that isn't even accounted for in this chart.
Actually, a past that never existed. It's pretty typical for authoritarian regimes to create idealized versions of the past as they attempt to rewrite history to better fit their talking points and agendas.
It's a direct violation of the fourth amendment. The worst thing you can do is just accept it, as that normalizes it. This is an end-around to avoid going through judicial channels to obtain information about private citizens, full stop. I'd love to hear about such brazen examples in the past, as right now, we have Kash Patel openly admitting to this activity either out of ignorance or hubris, either of which is terrible.