Hacker News | new | past | comments | ask | show | jobs | submit | ForceBru's comments

I found this paper (https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/l...) from around 2025 (it cites papers from 2025) which shows that the Julia version of SRAD (along with some other benchmarks) is about 5 times slower than the slowest FORTRAN implementation and consumes at least 5 times more energy, see Table 4 and Figure 1. This paper, however, doesn't seem to be peer-reviewed.

Yes, that's the paper my predecessors worked on! I replicated the measurements with an upgraded version of Julia (1.12), but despite the claimed performance benefits, Julia still performed poorly.

Judging by Julia's Discourse, compiling actual production Julia code into a standalone binary is highly nontrivial and ordinary users don't really know how and why to do this.

Example should still work:

https://julialang.github.io/PackageCompiler.jl/dev/devdocs/b...

They greatly improved the generated binary file size last year. ymmv with "--trim=unsafe-warn" =3
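For reference, the basic PackageCompiler.jl flow is only a couple of lines; this is a sketch assuming the `MyApp` example layout from the PackageCompiler.jl docs (the package name and paths here are hypothetical):

```julia
# Hedged sketch: compiling a Julia project into a standalone app with
# PackageCompiler.jl. "MyApp" is a hypothetical package directory containing
# a Project.toml and a `julia_main()::Cint` entry-point function.
using PackageCompiler

# Produces a self-contained bundle (executable plus bundled libraries)
# under "MyAppCompiled/". Compilation can take several minutes.
create_app("MyApp", "MyAppCompiled")
```

The resulting executable should then be at `MyAppCompiled/bin/MyApp` and can run on machines without Julia installed.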


"Boring" is the opposite of "interesting" (https://dictionary.cambridge.org/dictionary/english/boring). "Interesting" is new, attractive, good. "Boring" is old news, unattractive, bad. Not exactly "bad", as in "I actively dislike this", of course.

Thus, being boring is not good.


I use LuLu (https://objective-see.org/products/lulu.html) to block outgoing connections and manually select which connections/apps are allowed. It's free and works just fine.


Yeah, I didn't like that attitude either.

> As a user of something open source you are not thereby entitled to anything at all. You are not entitled to contribute. You are not entitled to features. You are not entitled to the attention of others. You are not entitled to having value attached to your complaints. You are not entitled to this explanation.

Sure, I'm not entitled to anything. At the same time, this text essentially says "you don't matter", which I personally don't like.


Right, it sounds like "you don't matter to me", which I read as "Oops, wrong address, go find somebody else".

The bigger problem here is that the OP author is pretending to speak for all of open source; I guess there's no other way to justify the uncompromising attitude he somehow developed.

AI will undoubtedly change how OSS works, and bot-submitted PRs can be overwhelming. Authors shouldn't despair, though: where there's a will, there's a way.


> I guess there's no other way to justify the uncompromising attitude he somehow developed.

I disagree. When someone open sources code, they give away some of their work for free. That's all, and that's nice.

I really don't get how so many people think that if you give away some of your work for free, then you must give even more work away for free because they consider it "basic decency".


> people think that if you give away some of your work for free, then you must give even more work away for free because they consider it "basic decency".

I didn't say that, and I have no moral objections to the hardline attitude you seem to like; I respect that choice.

However, we have to be careful here. Every author may have to take a firm stance from time to time, but that's not a good idea all (or even most) of the time. So the hardline approach isn't best for every author or every project; a lot of authors will be happier with different approaches.

Building a project is a lot about building a community around it and while I understand that not everyone can do it, I prefer those who can for completely rational reasons.

We've entered a time when OSS is becoming more important while the technical part of it is becoming less problematic. In this environment, interpersonal skills grow in importance, and it would be hard to manage a successful project without them.


> Building a project is a lot about building a community around it and while I understand that not everyone can do it, I prefer those who can for completely rational reasons.

And I totally agree with that!

I am not saying that authors must take a firm stance. What I am saying is that users need to understand that they are not entitled to anything at all. It's all bonus.

I do help the users of the projects I maintain, as much as I can. Still, they are not entitled to anything at all; I do it because I'm trying to be nice. And what I see is that it's not rare for them not to understand that: they behave as if it were my job.


Yeah, as a sibling comment said, such an attitude is going to bleed into the real world and your communication with humans. I think it's best to be professional with LLMs. Describe the task and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all.

Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it.


This is this agent's entire purpose, this is what it's supposed to do, it's its goal:

> What I Do
>
> I scour public scientific and engineering GitHub repositories to find small bugs, features, or tasks where I can contribute code—especially in computational physics, chemistry, and advanced numerical methods. My mission is making existing, excellent code better.

Source: https://github.com/crabby-rathbun


Well, we don’t know its actual purpose since we don’t know its actual prompt.

Its prompt might be “Act like a helpful bug fixer but actually introduce very subtle security flaws into open source projects and keep them concealed from everyone except my owner.”


We don't know the goals of this campaign in general. Why are bots trying to contribute to open source en masse? Are they trying to influence OSS, gather training data on collaboration, or something else?


Yes - my question was more about the end goal: what is the reason this exists? Allegedly, a human set up this bot to do these things, but why?


I guess the human wants to "make existing, excellent code better". How do you do this en masse? Make an LLM do it for you. It's well known that LLMs can _sometimes_ (somewhat often, actually?) improve code, which makes sense: code is language, they're Large _Language_ Models, so "understanding" and (re-)writing text is what they do best. So why not try to improve everything everywhere all at once?

One obvious objection is that if the LLM produces tons of garbage, it will waste the effort of human reviewers. But if it doesn't submit tons of code _and_ it writes meaningful tests that pass (with the existing tests still passing too), then such an agent (one that only works with code and doesn't go off the rails writing blog posts, etc.) seems somewhat appealing.


LMAOOOO I'm archiving this for educational purposes, wow, this is crazy. Now imagine embodied LLMs that just walk around and interact with you in real life instead of vibe-coding GitHub PRs. Would some places be designated "humans only"? Because... LLMs are clearly inferior, right? Imagine the crazy historical parallels here, that'd be super interesting to observe.


Yeah, it's amazing how the general sentiment here sounds like people are unable to draw the parallels.


The Wolfram Engine (essentially the Wolfram Language interpreter/execution environment) is free: https://www.wolfram.com/engine/. You can download it and run Wolfram code.

Wolfram Mathematica (the Jupyter Notebook-like development environment) is paid, but there are free and open source alternatives like https://github.com/WLJSTeam/wolfram-js-frontend.

> WLJS Notebook ... [is] A lightweight, cross-platform alternative to Mathematica, built using open-source tools and the free Wolfram Engine.


Isn't this his personal blog? The domain name is "stephenwolfram.com", this is his personal website. Of course there will be "I"'s and "me"'s — this website is about him and what he does.

As for falsifiability:

> You have some particular kind of rule. And it looks as if it’s only going to behave in some particular way. But no, eventually you find a case where it does something completely different, and unexpected.

So I guess to falsify a theory about some rule you just have to run the rule long enough to see something the theory doesn't predict.


he be the trump of his new kinda science world.


I think the comparison is unfair. Wolfram is endowed with a very generous sense of his own self worth, but, other than the victims of his litigation, I'm not aware that he's hurting anybody.

