You can only prove that all contributions are pushed by those humans, and you quite clearly cannot prove that those humans didn't use any AI before pushing.
I absolutely do not care what autocomplete tools someone used, only that they as humans own and sign what is submitted, so it is attached to their very expensive reputations, which they do not want to lose.
That’s great, and I also don’t care. But I think all that people are saying is that by most definitions you cannot “prove all contributions to stagex are by humans”.
Or are you saying you can prove that aliens and cats didn’t make them? Because I’m not sure that’s true either.
And once you find out someone has trained their dog to commit something, how exactly do you revoke your trust?
I think if you answer these questions you’ll see pretty quickly why this solution isn’t the silver bullet you think it is.
It is not a silver bullet by itself, but when combined with the other tactics in stagex I believe it gives us a very strong supply chain attack defense.
I cannot prove which tools were used, but I can prove that multiple humans signed off on code with keys they stake their personal reputations on, keys I have confirmed they maintain on smartcards.
While nothing involving humans is perfect, I feel this is a best effort with existing tools and standards, and it makes us one of the hardest projects to deploy a successful supply chain attack against today.
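For anyone curious what "prove multiple humans signed off" looks like mechanically, here is a minimal sketch using git's built-in signature verification; the repo path is hypothetical, and this assumes the maintainers' public keys are already in your GPG keyring:

    import subprocess

    def unsigned_commits(repo, rev_range):
        """Return commits in rev_range whose signatures fail to verify."""
        # List every commit hash in the range.
        hashes = subprocess.run(
            ["git", "-C", repo, "rev-list", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        bad = []
        for h in hashes:
            # git verify-commit exits nonzero for unsigned or untrusted commits.
            result = subprocess.run(
                ["git", "-C", repo, "verify-commit", h],
                capture_output=True,
            )
            if result.returncode != 0:
                bad.append(h)
        return bad

    # Hypothetical usage: flag anything on main without a valid signature.
    print(unsigned_commits("/path/to/stagex", "origin/main"))

Note this only checks one signature per commit; enforcing multiple sign-offs would mean also verifying signed review tags or similar.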
The point is that you’re deluding yourself if you think that there is any difference in terms of relative “unfathomability” between 3 billion and 300 billion.
$3 billion generates more in interest per day than 99.99% of people make in a year. That’s an unfathomable volume of wealth even for the very rich.
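Back-of-the-envelope, assuming a 5% annual yield (the yield is illustrative):

    principal = 3_000_000_000             # $3 billion
    annual_yield = 0.05                   # assumed 5% per year
    daily_interest = principal * annual_yield / 365
    print(f"${daily_interest:,.0f}/day")  # ~$410,959 per day

Very few salaried people anywhere earn $400k in a year, let alone per day.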
Over the last ~15 years I have been shocked by the amount of spam on social networks that could have been caught with a Bayesian filter. Or in this case, a fairly simple regex.
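For the curious, a minimal sketch of the Bayesian idea, with a toy corpus standing in for real labeled data (every message here is illustrative):

    import math
    import re
    from collections import Counter

    def tokenize(text):
        return re.findall(r"[a-z']+", text.lower())

    # Toy labeled corpus; a real filter would train on thousands of messages.
    spam_msgs = ["free crypto giveaway click now", "win a free iphone click here"]
    ham_msgs  = ["are we still meeting tomorrow", "here are the meeting notes"]

    spam_counts = Counter(w for m in spam_msgs for w in tokenize(m))
    ham_counts  = Counter(w for m in ham_msgs  for w in tokenize(m))
    vocab = set(spam_counts) | set(ham_counts)

    def spam_score(text):
        # Sum of log-likelihood ratios with add-one smoothing.
        score = 0.0
        for w in tokenize(text):
            p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
            p_ham  = (ham_counts[w]  + 1) / (sum(ham_counts.values())  + len(vocab))
            score += math.log(p_spam / p_ham)
        return score  # positive leans spam, negative leans ham

    print(spam_score("click here for free crypto"))    # positive: spam
    print(spam_score("notes from tomorrow's meeting")) # negative: ham

The whole thing fits in a page, which is why the volume of surviving spam is so surprising.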
Well, large companies/corporations don't care about spam, because in a way they actually benefit from it: it boosts their engagement metrics.
The spam just can't get bad enough that advertisers leave the platform, and I think they sort of succeed at keeping it below that threshold.
Think about it: if Facebook shows you AI-slop ragebait or rage-inducing comments from bots designed to farm attention (or for malicious purposes in general), and you fall for it and engage with it, letting Facebook show you ads, do you think it has any incentive to take a stance against that form of spam?
> Well, large companies/corporations don't care about spam, because in a way they actually benefit from it: it boosts their engagement metrics.
I'm not sure that's actually true. It's just that at scale this is still a hard problem that you don't "just" fix by running a simple filter, as there will be real people / paying customers getting caught up in the filter who then complain.
Having "high engagement" doesn't really help you if you are optimizing for advertising revenue, bots don't buy things so if your system is clogged up by fake traffic and engagement and ads don't reach the right target group that's just a waste.