> the path of least resistance is very different in the US, Europe and Asia
My theory is that in US compared to Europe, you are going to need the path of least resistance more often. If you are working two part-time jobs with variable hours and schedules to make ends meet, then you are going to reach for the easy & fast food options. Whereas if you have the stability of 40 hour work weeks, regular schedule and social safety nets - regardless of the total income - then you have the time and mental energy to eat healthier.
This seems more interesting, as it's not a code for a physical address but a lookup key for one.
You can update your code to point to a new address when you move:
> Their digital addresses will not change even if their physical addresses change. Their new addresses will be linked to the codes if they submit notices of address changes.
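The lookup-key model can be sketched in a few lines of Python. Everything here (the code format, the function names, the addresses) is invented for illustration; the real system is a government-run registry, not a dict:

```python
# Hypothetical sketch of a lookup-key address system: the code is a stable
# key into a registry, and only the registry entry changes when someone moves.
registry = {}

def issue_code(code, physical_address):
    registry[code] = physical_address

def move(code, new_address):
    # The code itself never changes; only the mapping behind it does.
    registry[code] = new_address

def resolve(code):
    return registry[code]

issue_code("AB7-XK2", "1-2-3 Example-cho, Tokyo")
move("AB7-XK2", "4-5-6 Sample-ku, Osaka")
print(resolve("AB7-XK2"))  # prints the new address for the same code
```

The indirection is the whole point: anyone holding the code always resolves to the current address, which is what makes it more interesting than a code that merely encodes a fixed location.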
> later investors effectively collude with founders
> a small startup that never found product-market fit. The economy was bad, and they were running out of money, and they took - as I understood it - a dubious Series B led by a dubious investor
The unfortunate reality is that if a startup cannot survive for long on its own, the economy is bad, and investment interest is low - then past invested effort from founders and employees and money from early investors is a sunk cost. They have together created something with almost no independent economic value.
The later investors can buy the assets created so far at near zero cost (the alternative is a bankruptcy auction). They can reasonably argue that the future value of the business is all from their investment, together with a deal to hire the founders and current employees to invest future effort into it.
I mean, yes, that's exactly the argument that the bigger, later investors make, and their lawyers are happy to back them up on that for money.
But consider this. If that were truly the case, why would the later investors work so hard to maneuver their way into this allegedly worthless startup? Why not hire an entirely separate team to build an entirely separate app, so they can own the whole thing with no fuss? If they value the founding team, why not tempt them away to a new venture and shed all the baggage? Economics has an idea called "revealed preferences" - words can be deceiving, but costly behaviors are honest - and this does look like the revealed preference of the investors.
In other words, just because the later investors can use the threat of insolvency to get their way doesn't mean what's already there doesn't have value.
Not sure where you get 75% for Bandcamp. They take a 15% cut for digital sales, 10% for physical, plus processing fees.
Also, they’re not really a streaming service: you can preview a lot of music on the platform, but it’s primarily about buying music. It’s not really a good comparison to Spotify at all.
One of the (only) things I think Spotify gets wrong as a service is they’re too cheap. I pay for prime, Spotify and Netflix in my house - (we occasionally sub Netflix for Disney). A price rise to Netflix or prime would cause us to reconsider, but I think I would stomach Spotify doubling their price quite easily with no change in service.
Counterexample: if they raised their price by more than $1-2, I'd cancel it. The music discovery hasn't been great and it mostly suggests playlists of the songs I already listen to. Inertia is the only reason I haven't cancelled and bought all the songs directly.
See I think how you feel about Spotify is exactly how I feel about Netflix. I don’t use the discovery of Spotify much, if at all. The value is the catalog.
We have Duo for my wife and me, so Spotify is £8.50/month each. Apple Music is £11/mo.
60% of my listening is through a pair of AirPods over Bluetooth, 30% on a Sonos system and 10% using semi decent wired headphones. Lossless isn’t something that is a differentiator for me.
I have some music on both Spotify and Apple Music - the reality is that even with a few thousand streams per month we haven’t even made back the cost of 2 hours of rehearsal space. The reality is that for artists making a living off this, the problem isn’t the difference between 75 and 80% that Spotify holds onto, it’s the fact that the artist only sees 15-20% of what’s left over.
Spotify is already steep in my eyes when I compare it to Netflix. Probably because I'm looking at video bandwidth vs audio bandwidth but paying for music more than you pay for movies feels weird in my monkey brain. No shiny picture, less money monkey say.
For me the value proposition is that there’s zero fragmentation. I know that by paying what I do for Spotify, I have access to pretty much everything. That’s worth a decent premium to me.
The problem with Netflix is the same as console exclusives in video games - fragmenting the ecosystem means I look at the service for the content it has vs the other services. But with Spotify it fills that niche entirely.
I mean, compared to video streaming sites - Netflix and Prime have vastly different libraries. If Spotify and Apple Music had different libraries to the same degree, I'd probably bounce between them both and be more price sensitive. The fact that Spotify (and Apple and Tidal) have the full catalog means the network effect is likely to be my main decider.
(a) is the real problem for many of the musicians who have vocally complained about this. If you look at most songs produced by record labels, you will see 5 songwriter credits, 10 producers, and a whole band to pay. Not to mention the army of recording engineers and the marketing staff.
Those people are doing real work; it's normal that they're paid too. If musicians want more for themselves, they could cut out the middlemen and produce and commercialize their music themselves.
Absolutely, and they are pretty much all equal participants in the creation of the sound you are hearing. It's just one worker (the headline artist) who gets all the attention.
> Theoretically it should be possible to do that using two cameras connected to some kind of image processing unit
That "some kind of image processing unit" in humans has an awful lot of compute power and software.
If you remove $100k of sensors but have to add $200k of compute to run more advanced computer vision software, then it's a bad tradeoff to use only cameras, even if in theory that software is possible.
This is probably a question about classic CRDTs as much as eg-walker:
Do all possible topological sorts of the event graph result in the same final consensus document? If yes how do we know that, and if no, how do they resolve the order in which each branch is applied?
> Do all possible topological sorts of the event graph result in the same final consensus document?
Yes. That's usually referred to as the "convergence property".
> If yes how do we know that
Usually: careful design, mathematical proofs and randomized (fuzz) testing. Fuzz testing is absolutely essential - in over a decade of working on systems like this, I don't know if I've ever implemented something correctly on the first try. You shouldn't trust the correctness of any system which hasn't been sufficiently fuzzed. (Luckily, fuzzers are easy to write, and the convergence property is very easy to test for.)
For Eg-walker, I think we've pumped around 100M randomly generated events (in horribly complex graphs) through our implementation to flush out any bugs.
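The testing pattern is easy to illustrate with a toy fuzzer in Python. The "CRDT" here is just a grow-only set, whose merge is trivially commutative, so this only demonstrates the shape of a convergence fuzz test, not anything like a real editor's event graph:

```python
import random

# Toy convergence fuzz test: apply the same events in many different
# orders and assert the final states are identical. A real test would
# generate concurrent event graphs and check every valid topological sort.
def apply_events(events):
    state = set()
    for e in events:
        state.add(e)  # grow-only set: insertion order can't matter
    return frozenset(state)

rng = random.Random(42)
events = [f"op{i}" for i in range(20)]
reference = apply_events(events)

for _ in range(1000):
    shuffled = events[:]
    rng.shuffle(shuffled)
    assert apply_events(shuffled) == reference, "divergence found!"
print("all orderings converged")
```

For a grow-only set the assertion can never fire; the value of the pattern shows up when the data structure has non-commutative-looking operations (inserts at positions, deletes, moves) that the algorithm must make order-independent.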
This seems to be a field perfect for theorem proving, I think I've seen some work by Kleppmann using Isabelle.
I once tried to understand the Yjs paper, but I came to the conclusion that their proof is just wrong! They do some impressive-looking logical reasoning in the paper, but they define some order in terms of itself, so they don't really show anything, if I remember correctly. If you tried that in Isabelle, it would stop you right at the start of all that nonsense.
I talked to Kevin Jahns (the author of the YATA paper & Yjs) about his paper a few years ago. He said he found errors in the algorithm described in the paper, after it was published. The algorithm he uses in Yjs is subtly different from YATA in order to fix the mistakes.
He was quite surprised the mistakes went unnoticed through the peer review process.
There have also been some (quite infamous) OT algorithm papers which contain proofs of correctness, but which later turned out to actually be incorrect. (I.e., the algorithms don't actually converge in some instances.)
I'm embarrassed to say I don't know Isabelle well enough to know how you would use it to prove convergence properties. But I have gotten very good at fuzz testing over the years. It's wild how many bugs in seemingly-working software I've found using the technique.
Ah, that makes sense! I thought that Yjs must be doing something differently than described, because it seems to work well in practice, but I couldn't see how YATA would. Anyway, I learnt a lot by thinking through that paper :-)
Fuzz testing and proofs are complementary, I think - each catches things the other might miss. The advantage of fuzz testing is that it tests the real thing, not a mathematical model of it.
> Overall I've taken eight weeks throughout the year and never been questioned with it.
If you have taken eight weeks of vacation per year, and have not even seen push-back on it, you are definitely on "unlimited" compared to most of the world.
> If I could give my employees a $50k cash bonus and it got taxed at 24% or I could gift them a $50k car "for testing" and it was tax free, everyone would be getting paid in cars.
Belgium has exactly that (use of a company car is tax-free) and as a result company cars are wildly popular. Getting rid of this tax loophole has been unpopular, so as a compromise the exemption will only apply to electric cars in the future.
> Was the backdoor used on obscure build servers or obscure pieces of build infrastructure somewhere?
And developer machines. The backdoor was live for ~1 month on testing releases of Debian and Fedora, which are likely to be used by developers. Their computers can be scraped for passwords, access keys and API credentials for the next attack.
Variability in software runtime arises mostly from other software running on the same system.
If you are looking for a real-world, whole-system benchmark (like a database or app server), then taking the average makes sense.
If you are benchmarking an individual algorithm or program and its optimisations, then taking the fastest run makes sense - that was the run with least external interference. The only exception might be if you want to benchmark with cold caches, but then you need to reset these carefully between runs as well.
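That distinction is easy to see in a small sketch. `bench` is a made-up helper and the workload is arbitrary; the point is just reporting both statistics from the same samples:

```python
import statistics
import time

# Time a function several times and report both the minimum and the mean.
# The minimum approximates the run with the least external interference
# (good for micro-benchmarking an algorithm); the mean better reflects
# what a whole system would see in practice.
def bench(fn, runs=20):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.mean(samples)

fastest, average = bench(lambda: sum(range(100_000)))
print(f"fastest: {fastest:.6f}s  average: {average:.6f}s")
```

This is also why Python's own `timeit` documentation recommends looking at the minimum of `repeat()` rather than the mean when timing a snippet in isolation.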