I disagree. They should bring quality back before introducing more changes. Okay, maybe that means dropping Liquid Glass. But they should also readopt the HIG, increase stability and performance, and reduce the attack surface.
It has been on iPhones for quite a while, and on Androids even longer. Before that it existed in the form of smart charging schemes that would only finish charging right before the moment the phone predicted you would unplug it.
The added cost of providing room and food for an additional person far exceeds that €300/month, especially when you take into consideration that you might have to extend or renovate the house to lodge another person. Adding an extra bedroom and possibly a bathroom is not cheap.
Even if you assume the cost of lodging were €1000 (which it isn't), the au pair would still be significantly underpaid.
A normal full-time employee costs at least €2000 a month (salary, taxes, pension plan, health insurance, etc.). If you are paying less than that, you are definitely exploiting them.
Off-topic: are you sure delivery is free? When comparing online prices against my local supermarket of the same chain, the online prices trend higher. Locally the store also has more products on sale than are available online. Only recently has online shopping become slightly cheaper, because they now offer “bulk” deals with a 5-20% discount.
In my perception there is a difference between 1 req/s as a rate limit and 60/min. The difference has to do with bucketing. If we agree that the rate limit is 1/s, I expect to be able to do exactly that, and sometimes 2 within the same second. However, if we agree on 60/min, then it should be fine to spend all 60 in the first second of a minute, spread them out evenly, or use some other distribution.
This also helps with the question I always get when discussing rate limits: “but what about bursts?”. 60/min already conveys that you are okay with receiving bursts of 60 at once, in contrast to 1/s.
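A minimal sketch of the distinction, using a hypothetical sliding-window check (the function name and shape are my own, not from any particular library): the same policy check, parameterised as 60/60s versus 1/1s, accepts a burst of 60 simultaneous requests in the first case and only the first request in the second.

```python
def count_accepted(arrival_times, limit, window):
    """Return how many requests a (limit per window-seconds) sliding-window
    policy would accept, given sorted arrival timestamps in seconds."""
    accepted = []
    for t in arrival_times:
        # Count already-accepted requests still inside the trailing window.
        recent = [a for a in accepted if a > t - window]
        if len(recent) < limit:
            accepted.append(t)
    return len(accepted)

burst = [0.0] * 60  # 60 requests arriving at the same instant

per_minute = count_accepted(burst, limit=60, window=60)  # whole burst fits
per_second = count_accepted(burst, limit=1, window=1)    # only the first fits
```

Same nominal average rate, very different burst behaviour.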
In my experience it is exactly the low-rate services that care about rate limits, as they are the most likely to break under higher load. Services that already handle 100k req/s typically don’t sweat a couple of extra requests once in a while.
An effective rate limiting system has multiple tiers in my experience, depending on what the goal is. But I usually implement the configuration as a list where you define how many requests are allowed at most per how many units of time.
E.g. to prevent fast bursts you limit it to 1 request per 1 second, but to avoid someone sending out 86400 requests a day you also cap them at 100 per 86400 seconds (24 hours) and 1000 per 3600 seconds (1 hour).
Whichever limit they hit first will stop the request. That isn't hard to implement if you know how to deal with arrays, and it prevents long-term abuse while still allowing fast retries if something went wrong.
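The scheme above can be sketched roughly like this (a hypothetical implementation under my own naming, injecting a clock so it can be tested; a production version would want persistence and per-client state): a list of (max_requests, window_seconds) pairs, and a request is accepted only if it passes every one of them.

```python
import time

class MultiWindowRateLimiter:
    """Sketch of a multi-tier limiter: limits is a list of
    (max_requests, window_seconds) pairs, e.g.
    [(1, 1), (1000, 3600), (100, 86400)]."""

    def __init__(self, limits, clock=time.monotonic):
        self.limits = limits
        self.clock = clock
        self.timestamps = []  # timestamps of accepted requests

    def allow(self):
        now = self.clock()
        # Prune timestamps older than the largest window to bound memory.
        horizon = now - max(window for _, window in self.limits)
        self.timestamps = [t for t in self.timestamps if t > horizon]
        for max_requests, window in self.limits:
            recent = sum(1 for t in self.timestamps if t > now - window)
            if recent >= max_requests:
                return False  # whichever limit is hit first blocks the request
        self.timestamps.append(now)
        return True
```

Usage: `MultiWindowRateLimiter([(1, 1), (1000, 3600), (100, 86400)])` encodes the limits from the comment above; call `allow()` on each incoming request and reject when it returns `False`.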
That is why you would do it before you let your phone go out of sight. I used to even turn off my electronics to prevent damage by scanners. Now I don’t bother anymore but it could be a plausible excuse.
One of my favourite experiences coming up as an engineer was working with a very senior engineer right at the beginning. Whenever he had a task or problem, he would start out thinking, maybe doodling a bit on paper, go for a walk, and only then sit down at his computer and start typing. He would type everything in one go, compiling only at the end, and it would work. (Even typos were rare.)
All this to say that it is extremely useful to have the program and the problem space in your head and to be able to reason about them beforehand. It makes it clearer what you expect and easier to catch when something unexpected happens.
I see some of the value in planning, but experimentation is so cheap, there's also a lot of value in trying it, seeing what works, and learning from it. The main drawback I see from experimentation is failing to understand why something worked.
I second that. There is a world of difference between the sorry excuses for burgers that the Big Tasty and McCrispy are in the Netherlands, the already better-proportioned and fresher ones you get in Germany, and the even better ones in Italy.
Besides the bun, it is noticeable in every part: the amount and quality of the sauce, vegetables, and meat, and finally how the burger is presented.
So if this difference can occur within 1000km of each other in the same continent, I fully accept that it is even more varied in the whole world.
Maybe your sample size is too small? I've lived close to the NL/D border for a while and the McD quality was indistinguishable on both sides of the border. The variation between restaurants in the same country and also between different days/times in the same restaurant was much greater than between countries.
That is, if you happen to go to a random McD in some country and the Big Mac is great that day, and you go to a different restaurant in a different country on a different day and the Big Mac is bad, then that difference likely has little to do with them being in different countries. It's not like they actually use different recipes.
Okay, granted, maybe it is. In NL it is mostly in cities (Amsterdam, Utrecht, Rotterdam, Dordrecht, Lelystad), in Germany in smaller places, and in Italy only in touristic places like Siena and Genoa. So maybe it is just a problem with McDonald's in Dutch cities.