Hacker News | dubi_steinkek's comments

That looks pretty good to me. Every `unsafe` function has clearly stated safety requirements, and every `unsafe` block justifies why the requirements are met.


Mainly, but it also supports alternative codegen backends (Cranelift, rustc_codegen_gcc)


Why is this a perf footgun? As someone who doesn't write a lot of C++, I don't see anything intuitively wrong.

Is it that iterating over a map yields something other than `std::pair`, but which can be converted to `std::pair` (at nontrivial cost), and that the result is bound by reference?


Close, it is a std::pair, but it differs in constness. Iterating a std::map<K, V> yields std::pair<const K, V>, so you have:

  std::pair<const std::string, int>
vs

  std::pair<std::string, int>


And what does casting const change that would involve runtime inefficiencies?


It is not a cast. std::pair<const std::string, ...> and std::pair<std::string, ...> are different types, although there is an implicit conversion between them. So a temporary is implicitly created and bound to the const reference. Not only is there a copy; you also have a reference to an object that is destroyed at the end of the scope, when you might expect it to live longer.


I guess this is one of the reasons why I don't use C++. Temporaries are a topic where C++ on one side, and C and I on the other, have had disagreements in the past. Why does changing the type even create another object at all? Why does it allocate? Why doesn't the optimizer use the effective type to optimize that away?


> Why does changing the type even create another object at all?

There's no such thing as "changing the type" in C++. A function returns an object of type A, your variable is of type B, and the compiler checks whether there is a conversion from the value of type A to a new value of type B.


Each entry in the map will be copied. In C++, const T& is allowed to bind to a temporary object (whose lifetime will be extended). So a new pair is implicitly constructed, and the reference binds to this object.


Maybe they can with postinstall scripts, but they usually don't.

For the most part, installing packaged software simply extracts an archive to the filesystem, and you can uninstall using the standard method (apt remove, uv tool remove, ...).

Scripts are way less standardized. In this case it's not an argument about security, but about convenience and not messing up your system.


> there is no difference as jj is only a frontend to git.

That's not really true in this case, as the worktree feature from jujutsu is not implemented on top of git worktrees.


This is kind of unfortunate, as it breaks some tooling: since the extra trees are not colocated with git, things like editor inline history/blame, or agents that know to look in git history to fix their mistakes, no longer work.


The fix for having worktrees be colocated is in progress. Not sure when it’ll be done but it’s coming.


Isn’t it strictly better to use your editor’s jj support and ask Claude to use jj?

(I personally think jj shouldn’t support colocated repositories, but happy to learn what others see in them…)


I think the biggest benefits of colocation are, in rough approximation of the order I encounter them:

1) Various read-only editor features, like diff gutters, work as they usually do. Our editor support still just isn't there yet, I'm afraid.

2) Various automation that tends to rely on things like running `git` -- again, often read-only -- still works. That means you don't have to go and do a bunch of bullshit, or write a patch your coworker has to review, just to make your ./run-all-tests.sh scripts work locally, or whatever.

3) Sometimes you can do some nice things like run `git fetch $SOME_URL` and then `git checkout FETCH_HEAD`, and jj handles it fine. But I find this rare; I sometimes use it to check out GitHub PRs locally, though. This could be replaced 99% for me by having a `jj github` command or something.

The last one is very marginal, admittedly. Claude I haven't had a problem with; it tends to use jj quite easily with a little instruction.


jj support in editors is way behind git support.


wtf? it can diverge from git?

wasn't git compatibility its main pro?


To be technical, it's more that it can read and write the on-disk Git format directly, like many other tools can.

I think the easiest way to conceptualize it is to think of Git and jj as being broken down into three broad "layers": data storage, algorithms, user interface. Jujutsu uses the same data storage format as Git -- but each of them has its own algorithms and user interface built atop that storage.


Yes, jj is its own VCS with pluggable backends.

Google uses it with Piper, their centralized VCS.

Being compatible and being purely a frontend aren’t the same thing.


No, that's just a practical approach to supporting migration to jj.

If we want to improve systems, we need to be able to migrate conveniently from older systems to better ones.


You can still use git worktrees in a colocated repository. jj workspaces are a different but similar capability that provides some extra features, at the cost of some others.


git is the storage layer. JJ's commands do not have to, and never did, match git one-to-one.


I don't think that's a fair characterization.

It's more that there isn't really a big difference between the workflow of

    # you're on
      staging area
    @ commit A
    # make some untracked changes and console logs you don't wanna commit
    git add -p && git commit # select only what you want
vs

    # you're on
    @ empty commit 
    | commit A
    # make some local changes (which are tracked in @)
    jj commit -i # select only what you want
You're still "in charge of what gets tracked" if you treat the last @ commit as your local playground, exactly the same as the staging area. The only difference is that you can manipulate your "staging area" with exactly the same tools as any other commit.


Wait, what happens when there is a multi-GB file lying around and a jj command is invoked? Does it start to scan it, or is there some threshold? And what does it do with cyclic hard links or symlinks?


There's a (configurable, easily bypassed) limit for newly created file size to catch that common mistake.


It's possible, but I would prefer to just pass &str at that point, leading to fewer monomorphizations and a simpler API.


This seems to be a somewhat common occurrence. When I was first learning programming with Python, I wanted multiple balls in a pong-like game, so after some googling I ended up writing something like `globals()["ball" + i]`, until someone mentioned that arrays were a thing I could use for that.


It’s a recurring thing because programming at its fundamental level is extremely simple. If you have variables, conditionals, and loops/goto, you can do anything. It’s pretty easy to learn the basics and get much more empowered than you had any right to be.

Armed with such power, you will build a slow, unreadable and unmaintainable mess, but you don’t know that yet, because it’s all working after all.

But as your lines multiply you begin to misstep on your own code and wonder if there’s a better way to do this. And you then stumble upon data structures, functions, classes, etc.

The self-taught road is always more or less like that.


I believe this is the way. That is why I sometimes wonder whether programming could be taught using the shell environment at first. Every command is like a program, and you can do a lot without loops.

Piping also helps. But maybe piping is not a good way to start, dunno.


Shell (and its odd cousin the .BAT file) is interesting because it’s got such wildly different sets of “easy things” and “complex things” compared to almost any language.

Write some ASCII to a file - one character in Bash, 2-?? lines of annoying boilerplate in every other language

Pick the second item in an array - in Bash, a series of random-looking punctuation I can never recall to even create a true array. In any other language, roughly myArray[1]

So I worry that in some cases the student forced to only learn using shell might be bitter once they learned other languages.

PS: it’s funny you mentioned loops as a “do without” — I’d say bash is pretty okay with that — as long as you’re looping over a list of files ;)

