Hacker News | kakwa_'s comments

> I usually find it easier to take their branch, do all of that work myself (attributing authorship to commits whenever appropriate), push it to the master branch and close the PR than puppeteering someone halfway across the globe through GitHub comments into doing all of that for me.

The PR system is great for reviewing changes within a dev team.

But, indeed, when it's an external contribution, it kind of falls apart. It's unrealistic to expect an external contributor, often a one-time one, to know the ins and outs of a project (code standards, naming tastes, documentation, tests, ...)

I really like your workflow, and I have often done something similar in my projects (or merged into main, with subsequent fixes/realignments).

But I'm wondering if it could be smoothed out and normalized by GitHub/GitLab/Forgejo, or maybe at the version control software level.
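For what it's worth, the workflow from the quoted comment can already be sketched with plain git commands; the PR number, branch name, and contributor identity below are all made up for illustration:

```shell
# Hypothetical sketch: adopt an external contributor's PR branch locally,
# finish the work yourself, and land it on master with authorship preserved.
git fetch origin pull/123/head:contrib-fix   # GitHub exposes PRs as refs
git checkout master
git cherry-pick contrib-fix                  # replays the commit; "Author" stays the contributor's
# ...add your own cleanup/realignment commits on top, then:
git push origin master
```

Note that cherry-pick keeps the original author field while recording you as the committer, so attribution survives without extra flags.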


IMHO, the law tries to target the last entity that has practical control over the OS design and implementation, i.e. the final developer/integrator.

For example in the Linux world, it's the distributions.

Where it gets murky is with Android (and to a lesser extent Windows).

IMHO, the entities which should be responsible are Google and Microsoft.

But since vendors, especially in the Android world, can heavily tweak the OS, there is a case to be made that device manufacturers like Samsung are the ones responsible.

The relevant interpretation in practice will usually happen naturally, and the most ambiguous stuff will be set by jurisprudence if necessary.


> IMHO, the entities which should be responsible are Google and Microsoft.

So let's say someone in e.g. South America publishes an Android ROM. >99% of the code was written by Google, but the author isn't using Google Play or any form of Google service for updates; they're only using the base Android code with their own updater.

If someone in California uses this ROM and the person in the other jurisdiction made modifications so that it doesn't comply with this law, who is in trouble? It's pretty unreasonable to claim that it's Google, but then is it no one, since the party responsible is outside the jurisdiction?


That one is murky.

I would argue it falls more on a case like the Linux distributions one and the guy in South America is responsible.

Google, in my opinion, can be held accountable for Android because they deeply control the ecosystem (the Play Store, APIs, services) and de facto prevent significant modifications, especially where the Android security framework and mechanisms are concerned (if you modify that stuff too deeply, apps could break).

But if a distribution cuts ties with all that, then it's the author or the entity behind it who is responsible, and if it's deemed illegal, downloading their ROM should be blocked in the US.

In truth, if I were a defender of this law (which I'm not), I would not worry too much about it. This text is here to force mainstream OS vendors to provide an API for age verification. The micro-subset of people flashing custom ROMs onto their phones or recompiling a piece of OSS with some flag disabled is in practice so small that it's not really an issue.

While I do agree that this law could be better written, properly categorizing within it the cases of MS, Apple, Google, and maybe entities like Linux distributions is honestly kind of a nightmare.


More like an idea decently likely to be resold for more.

Good ideas are a decent subset, but you also get a fair share of "Greater Fool Theory"-compliant ideas.


Sure, but that doesn't really change anything. The poster plainly states:

> Money is not given to good ideas (though, it doesn’t hurt). Money is given to friends.

I have an obvious counterexample. I'm sure money is invested for all sorts of reasons in all sorts of people. I'm also sure that money is not exclusively invested based on friendships, and I'm quite sure that money is at times invested based on the merits of an idea. Obviously, those merits have to correspond to the ability to form the basis of a successful company, unless it's a philanthropic investment.


What I meant is that yes, good ideas will get funding, if they like you and if you represent a good ROI (though not all of these are required). This may also allow you to enter the clique/network. However, a lot of this money circulates within the same network. Convincing the right person of the value of your idea can enable you to join the network and access that money at a much, much lower threshold later on.

Obviously, it is not that cut and dried, but it is kind of impressive how much of the money circulating around stays between the same people. I'm not really condemning it; I think it is a natural consequence of humans trusting other humans they know. People should be more aware of it and make sure they keep it in check. Otherwise, you eventually start getting high on your own supply.


While I do get why CMake is a scripted build system, I cannot help but notice that other languages don't need it.

In Rust, you have Cargo.toml; in Go, it's a rather simple go.mod.

And even in embedded C, you have PlatformIO, which manages to make do with a few .ini files.

I would honestly love to see the C++ folks actually standardize a proper build system and dependency manager.

Today, just building a simple Qt app is usually a daunting task, and other compiled ecosystems show us it doesn't have to be.
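For contrast, a complete Cargo manifest for a small Rust project really is just a handful of declarative lines; the crate name and the single dependency here are made-up examples:

```toml
# Hypothetical Cargo.toml for a small binary crate; names are illustrative.
[package]
name = "hello-app"
version = "0.1.0"
edition = "2021"

[dependencies]
serde = "1.0"
```

There is no script logic anywhere: `cargo build` resolves the dependency, fetches it, and compiles everything from this file alone.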


PlatformIO is not simple by any means. Those few .ini files generate a whole bunch of Python, and this in turn relies on SCons as the build system.

That's a nice experience as long as you stay within the predefined, simple abstractions that somebody else provided. But it is very much a scripted build system; you just don't see it in trivial cases.

For customizations, let alone a new platform, you will end up writing Python scripts and digging through 200 pages of documentation when things go wrong.


The graphics stack in NT is done in a microkernel fashion: it runs in kernel space but doesn't (generally) crash the whole OS in case of bugs.

There are a few interviews with Dave Cutler (NT's architect) around where he explains this far better than I can here.

Overall, you have classic needs, and if you don't care about OSS (whether for auditability, customizability, or a philosophical commitment to open source), it's a workable option with its strengths and weaknesses, just like the Linux kernel.


Parts of the kernel can be made more resilient against failures, but that won't make it a microkernel. It'll still run in a shared address space without hardware isolation. It's just not possible to get the benefits of microkernels without actually making it one.

Also, Linux being OSS can't be dismissed: it means it'll have features that Microsoft isn't interested in adding to Windows.


That's not enough by a long shot.

There are already plenty of devices, from old phones to vacuum robots, where we have that or near enough.

Technically, we know how we could maintain/re-flash these devices.

Yet, we don't. Why? Lack of standardization, especially of the boot process on non-x86 platforms.

Having to maintain per-device images is not really practical at scale.


Well, not mass produced enough.

Manufacturers of common mass-produced products have incentives not to mess up too badly: recalls or warranty claims at such scale are a nightmare.

With military contracts, it's a paid maintenance opportunity.


Just a few bits about that.

I would recommend looking into the chroot based build tools like pbuilder (.deb) and mock (.rpm).

They greatly simplify the local setup, including targeting different distributions or even architectures (<3 binfmt).

But I tend to agree, these tools are not easy to remember, especially for occasional use. And packaging complex software can be a pain if you fall down the dependency rabbit hole while trying to honor distros' rules.

That's why I ended up spending quite a bit of time tweaking this set of ugly Makefiles: https://kakwa.github.io/pakste/ and why I often relax things, allowing network access during build and the bundling of dependencies, especially for Rust, Go, or Node projects.


Fragile against upgrades, tons of unmaintained plugins, an admin panel UX that is a mess where you struggle to find the stuff you are looking for, a half-baked transition to a nicer UI (Blue Ocean) that has been ongoing for years, too many ways to set up jobs and integrate with repos, poor resource management (disk space, CPU, RAM), and sketchy security patterns inadvertently encouraged.

This stuff is a nightmare to manage, and with large code bases/products, you need a dedicated "devops" person just to babysit the thing and keep it from becoming a liability for your devs.

I'm actually looking forward to our migration from on-prem to GHEC, just because GitHub Actions, as shitty as they are, are far less of a headache than Jenkins.


Maybe I have low standards, given I've never touched what GitLab or CircleCI have to offer, but compared to my past experiences with Buildbot, Jenkins, and Travis, it's miles ahead of those, in my opinion.

Am I missing a truly better alternative, or are CI systems simply all kind of a PITA?


I don't have enough experience with Buildbot or Travis to comment on those, but Jenkins?

I get that it got the job done and was the standard at one point, but every single Jenkins instance I've seen in the wild is a steaming pile of ... unpatched, unloved liability. I've come to understand that it isn't necessarily Jenkins at fault; it's teams 'running' their own infrastructure as an afterthought, coupled with the risk of borking the setup at the 'wrong time', which is always. From my experience, this pattern seems nearly universal.

GitHub Actions definitely has its warts and missing features, but I'll take managed build services over Jenkins every time.


Jenkins was just built in a pre-container way, so a lot of stuff (unless you specifically make your jobs use containers) is dependent on the setup of the machine running Jenkins. That does make some things easier, just harder to make repeatable, as you pretty much need a configuration management solution to keep the Jenkins machine's config repeatable.

And yes, "we can't be arsed to patch it till it's a problem" is pretty much standard for any on-site infrastructure that doesn't have ops people yelling at devs to keep it up to date, but that's more a SaaS vs. on-site benefit than a Jenkins failing.


My issue with GitHub CI is that it doesn't run your code in a container. You just have a github-runner-1 user, and you need to manually check out the repository, do your build, and clean up after you're done. Very dirty and unpredictable. That's for self-hosted runners.


> My issue with Github CI is that it doesn't run your code in a container.

Is this not what you want?

https://docs.github.com/en/actions/how-tos/write-workflows/c...

> You just have github-runner-1 user and you need to manually check out repository, do your build and clean up after you're done with it. Very dirty and unpredictable. That's for self-hosted runner.

Yeah, checking out every time is a slight papercut, but it gives you control, as sometimes you don't need to check out anything, or you want a shallow/full clone. If it checked out for you, there would be other papercuts.

I use their runners, so I never need to do any cleanup and get a fresh slate every time.
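For the container point, the `container:` key from the linked docs looks roughly like this in a workflow file; the job name, image, and test command are made-up examples:

```yaml
# Hypothetical minimal workflow: run a job's steps inside a container.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest          # also works with self-hosted runner labels
    container:
      image: node:20                # all run steps execute inside this image
    steps:
      - uses: actions/checkout@v4   # checkout is still an explicit step
      - run: npm test
```

With a self-hosted runner, this should give you the isolation described above, at the cost of requiring Docker on the runner machine.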


GitLab is much better.

