Microservices are risk mitigation. If you know what you are doing, a monolith is the optimal way.
We use microservices because we acknowledge that we don't know what the final product will look like, and we mitigate the risks associated with adding and removing an unknown number of features.
Facebook was creating a clone of Twitter: a well-known product, a well-known path. There was no need for microservices.
I would actually put it exactly the other way round.
If you don't have a crystallized architecture yet, prototyping with a monolith is faster. You don't have to make the cuts ("this belongs to this service") that are expensive to change later (unlike moving classes between modules).
Doing internal Java/.NET/whatever interfaces is easier than introducing an external API (with its associated infrastructure complexity: authorization, routing...), and they are much easier to tweak. You have transactions, and you don't have to deal with a lot of asynchronicity, network overhead, etc. For a prototype, I'd always rather do a monolith.
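A minimal sketch of the point, with made-up names (nothing here is from a real codebase): the in-process version is a plain method call, while the same operation across a service boundary picks up serialization, a transport, and its own failure modes.

```python
import json

class Inventory:
    # In a monolith this is just a class behind an internal interface.
    def __init__(self) -> None:
        self.stock = {"widget": 2}

    def reserve(self, item: str, qty: int) -> bool:
        if self.stock.get(item, 0) >= qty:
            self.stock[item] -= qty
            return True
        return False

def place_order(inv: Inventory, item: str, qty: int) -> str:
    # Direct call: shared transaction, and refactoring is a rename away.
    return "ok" if inv.reserve(item, qty) else "out of stock"

def fake_inventory_service(payload: str) -> str:
    # Stands in for the remote side; a toy stock check "across the wire".
    req = json.loads(payload)
    ok = req["qty"] <= 2
    return json.dumps({"status": "ok" if ok else "out of stock"})

def place_order_remote(item: str, qty: int) -> str:
    # The same call across a service boundary: every hop adds a concern
    # (serialization here; in reality also HTTP, auth, retries, timeouts).
    payload = json.dumps({"item": item, "qty": qty})
    return json.loads(fake_inventory_service(payload))["status"]
```

Even in this toy form, moving the boundary means changing a wire format instead of a method signature, which is exactly the kind of cut that's cheap inside a monolith and expensive between services.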
How do you lower the risks of adding and removing a feature if you need to not only add/remove the feature but also decide which service it goes in, set up coordination, and manage the network between the services? That's more work, and more surface area.
Actually, for a big codebase there are always reasons to use microservices: much better scalability, easier development with small teams, easier maintenance, and more robust software.
There are even more reasons to use a monolith, or to simply use a service oriented architecture.
Probably not necessary to repeat here, but microservices add a lot of complexity due to asynchronicity, limited interfaces, and more complex error handling.
If you don't know how to design schemas (extensible, modular, ...) and don't know how to design coherent APIs around them, then yes, go microservices to mitigate the risk. The risk here being the likely chance that you got the schema wrong and will end up hacking around it in your "monolith" [almost always sic].
Nothing except their own arrogance. See svnhub.com. It was registered by GitHub to block potential competition long before SVN support on GitHub became available.
Preventing some other party from riding on the coattails of your trademark is reasonable behavior as far as I'm concerned. This doesn't prevent anyone from offering an SVN host, just from stealing free publicity from GitHub in the process. That isn't arrogant at all.
I've tried that link in three browsers (Safari, Firefox, Brave) and there's no content in any of them.
Nevertheless, I'll assume you're not playing some weird game and that git is trademarked, in which case I stand corrected. It's therefore safe to assume that GitHub and GitLab are allowed to use "git" in their trade names, according to the holder of the trademark. Unlike with a notional "SVNHub", the trademark violation in those two names is quite clear, so they either have explicit permission and pay for a license, or Linus (presumably) is OK with their existence.
I'm sorry that the link seems to have expired. It would show that the trademark is held by Software Freedom Conservancy, Inc.
I invite you to repeat my search. Unfortunately, the USPTO trademark database search interface is not very intuitive, so be prepared for some trial and error if you do.
I wonder if customer support and warranty are part of the calculation. If they sell an open platform, they will have to provide support and potentially a warranty for it.
But I guess most people are not interested in an open platform; only tinkerers like us are. And if they make it an officially closed platform, they don't have to deal with our complaints when something doesn't work or, more likely, underperforms.
The article talks a lot about technology choice, but not about the given product. I do tend to agree that SPAs are not necessary, but I would not use this article as an argument.
And if somebody presented this article to me, I would not take it seriously. The fact that the author uses very interesting language when referring to React and Angular does not help. I'd love to hear customer-centric or product-centric reasoning:
* Don't use an SPA, because customers hate it when you change the entire app on them.
* Don't use an SPA, because customers of our particular product require faster turnaround.
* Don't use an SPA, because SPAs are harder to test in our organisation.
I had a similar problem with my Windows gaming PC after I upgraded the CPU cooler. The motherboard/BIOS decided that I had upgraded my CPU and would not proceed without me pressing the "F2" key on my keyboard.
Obviously my BT keyboard had not yet connected to the computer, since this was before the BT drivers loaded or something, so I had to buy a wired keyboard to press "F2" and proceed. It would be nice to have 2-3 buttons right on the case to interact with the BIOS. Or, you know... load BT drivers.
I think some BIOSes do have support for Bluetooth keyboards. There is a crazy thing where Windows will send the BIOS the necessary encryption keys, so while the BIOS cannot pair with a keyboard, it can use a previously paired one.
I doubt it works with Linux, and I bet it depends on using exactly the right config even on Windows.
I think I'm in the minority, but I just would not order delivery in 90% of cases. If I remember correctly, back in 2005-2010 I would order pizza once every 1.5 months, and I was ready to come downstairs to receive the package.
These days I order food knowing that it will allow me uninterrupted work (I'll just receive the package and get back to work). If I have to interrupt my flow-ish state anyway, I'll just go out or make an omelette in 10 minutes.
Context: I live 10 minutes away from a high street in London.
Indeed. They should just state "It's an ad". I doubt they are actually going to try every product they advertise to make the wording "suggestion" even close to the truth.
That said, I don't understand why somebody would use Firefox instead of Chrome, Edge, or Safari if they add in-browser ads.
But I'm a software engineer, and I didn't know about it. I don't think most people (as in 95%+, even in tech) would know the difference enough to care. My point is that, visibly, Firefox seems just as ad-hungry as the competition.
They could be better behind the scenes, but the customer wouldn't know it.
You know what I miss? Software I can pay for. I'd happily shell out a couple of bucks a year for a browser. Or a social network. I'd be more than happy to pay regularly for something that requires regular, constant maintenance, upkeep, servers, and salaries. But no, somehow humanity landed on free + ads. Ugh.
True, and it's already been happening with TVs from what I recall.
You know what I wonder? Where's the hatred threshold for ordinary people? When you see an ad in the middle of something and you're so pissed that you'll go out of your way to specifically avoid the product that's being shoved down your throat. I know mine's been reached, but, hey, I'm just a grumpy asshole ;)
Wasn't there some google cookie in firefox that couldn't be deleted at some point, related to safe browsing?
Then when people kept complaining they just hid it from the UI?
I honestly don't know what the situation is now though.
Yes, Google sends a cookie in the responses from its Safe Browsing service. As of Firefox 27 (released in February 2014), Firefox has sandboxed the Google Safe Browsing cookie in a separate cookie jar, isolated from normal web browsing.
Giving Google the power to decide what users are and are not allowed to download is another thing that Mozilla should not be doing.
Guess what Mozilla's response is when Google lists something it shouldn't? "Take it up with Google". And do you think Google supports those it defames better than its usual customers? No. This is not a hypothetical scenario.
Because all the browsers you mention are controlled by megacorps who want to own the web for their own gain. As mismanaged as Mozilla is, they are in it to make the web a decent place for users.
This said, of course, I hate this "feature" and I'll make sure I disable it.
"No new data is collected, stored, or shared to make these new recommendations."
"When contextual suggestions are enabled, Mozilla receives your search queries. When you see or click on a Firefox Suggest result, Mozilla collects and sends your search queries and the result you click on to our partners through a Mozilla-owned proxy service. The data we share with partners does not include personally identifying information and is only shared when you see or click on a suggestion."
Doesn't sound like "datamining everything you search for".
> "When contextual suggestions are enabled, Mozilla receives your search queries. (...)"
This, right here. They get those regardless of whether you click on anything. What happens with those queries afterwards?
> "No new data is collected, stored, or shared to make these new recommendations."
If that's true, it would imply search queries were already being sent to Mozilla. I hope it isn't true. I feel incredibly dumb that I never bothered to verify it, that I trusted them. If it turns out the queries were sent, I'll look into filing a GDPR complaint, because I sure as hell didn't give consent for my queries - intended for the search engine of my choice, and which might contain PII - to be processed by Mozilla.
> Mozilla approaches handling this data conservatively. We take care to remove data from our systems as soon as it’s no longer needed. When passing data on to our partners, we are careful to only provide the partner with the minimum information required to serve the feature.
> A specific example of this principle in action is the search’s location. The location of a search is derived from the Firefox client’s IP address. However, the IP address can identify a person far more precisely than is necessary for our purposes. We therefore convert the IP address to a more general location immediately after we receive it, and we remove the IP address from all datasets and reports downstream.
>> When passing data on to our partners, we are careful to only provide the partner with the minimum information required to serve the feature.
That's hogwash without access to details of actual cases. What is the definition of "minimum" for a given partner here?
Reminds me of the UX of Android a couple years ago:
- Android: "I'm a better system than desktops, I offer fine-grained permissions that ensure apps only have access to what they need, nothing more."
- Every single app, upon installation: "I need every single permission enumerable in the current SDK version."
>> A specific example of this principle in action is the search’s location. (...)
Oh, that's nice, I feel a bit more relaxed - this means they can't enable this feature for me at all, because they first have to seek informed consent from me for this kind of processing. They'd better remember to ask.
I think I can confidently assume that, despite not providing IP or accurate location data, there are enough features in the data for their partners to fingerprint individuals. It might require a lot more work, but when advertisers go out of their way to identify individuals based on their browser/OS/hardware settings, they'll attempt it on just about anything they can get their hands on.
I wonder how containers affect this behavior? Since the same history seems to pop up regardless of which container I'm in, I wonder if this effectively makes containers permeable?
That's unsustainable. It's what some projects already try to do with Chromium, and it just doesn't work: every time Google steers a bit more towards evil, the downstream burden increases significantly, and sooner or later the bad patches creep in anyway.
There is an institutional problem at Mozilla, we should focus on fixing that rather than trying to come up with even more complex sticking plasters.
What's needed is a rebirth: basically, the Mozilla team quits, creates a new Mozilla Inc., and leaves the old one behind to go full Chrome apocalypse.
It's basically what nature does when the elder generation acquires too many parasites and too much DNA damage: spawn a new generation and die, taking most of the parasites with it.
> There is an institutional problem at Mozilla, we should focus on fixing that rather than trying to come up with even more complex sticking plasters.
How do you fix that from the outside, if not by forking, or at least by using a different browser to make it clear that the current behavior is unacceptable?
Forking is an option only insofar as the forking team is actually capable of leading development. Purely reactive forks, like most of the ones we see in the Chromium world, are not sustainable in the long run. Is there a team, out there, who could fork FF and take it into a new direction? Maybe. But that has happened only once in the history of Mozilla, and it was paid for by Mozilla itself, because it's a huge task.
I couldn't agree more. I shifted from software engineering to software architecture exactly to pursue autonomy.
I was so tired of completing user stories that I knew would be either too expensive to maintain or plain useless that I decided to climb up toward the source and help people set up requirements and measurement loops.
So yeah, why do people hire software engineers who are extremely good at analytical thinking, and then try to pre-digest every bit of requirements for them?
Because when you don't, at least ~50% will complain that the ticket isn't well specced enough.
For every person who wants autonomy there is another who would rather be on auto-pilot, mindlessly coding up arbitrary ticket JIRA-1234 with a Twitch window open on their 2nd screen.
It's not really faith, it's an educated guess:
The CEO (or a director of a department) makes an educated guess that if they can release Product A in the next 18 months, they will earn $1,000,000.
That means they need to know that:
1) Product A will be released in the next 12-14 months.
2) They will spend less than $700,000; otherwise they might be better off buying bonds or something.
It's effectively the job of an exec/director to make bets. Hopefully they are educated guesses, but there are times when somebody just has to flip a coin, too.
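The bet above can be written down as simple expected-value arithmetic. The revenue and cost figures come from the comment; the success probability and bond rate are assumptions picked for illustration:

```python
def project_return(revenue: float, cost: float, p_success: float) -> float:
    """Expected profit of the bet: win (revenue - cost) with probability
    p_success, lose the sunk cost otherwise."""
    return p_success * (revenue - cost) - (1 - p_success) * cost

def bond_return(principal: float, rate: float, years: float) -> float:
    """Profit from parking the same money in bonds instead."""
    return principal * ((1 + rate) ** years - 1)

# With the comment's numbers and an assumed 80% chance of shipping:
bet = project_return(1_000_000, 700_000, p_success=0.8)  # = $100,000
safe = bond_return(700_000, rate=0.04, years=1.5)        # ~  $42,000
# At p_success = 0.7 the expected profit of the bet drops to zero,
# and the bonds win; that's the threshold the exec is really guessing at.
```

So the "educated" part of the guess is the probability estimate: nudge it by ten points and the rational decision flips.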
There is a lot of education in those guesses. We know what previous sales are, we know what industry sales are. We know population growth rates. We know many other things. Everything we know puts bounds on what we can sell.
This is literally why a significant portion of high-end intellectuals refuse to participate in this charade, and instead build quality technology in the open. Open source technology, built largely without artificial deadlines and fake drama, defines the execution environment that this exchange is taking place on (in many cases).
The old industrial plant planning algebra in 1950s textbooks says something like "if we make widget N in six months with a team of 6 and a $100,000 budget, we have a 30% chance of breaking even in two years, but if we make widget P in 12 months with a team of 4 and $20,000, we have a 60% chance of breaking even in 24 months", etc. This is slightly useful in certain environments, but is anyone here going to argue that the "future" in 12 or 24 months is stable, technically?
The capacity of certain mental models to "Reductio ad Absurdum" is amazing to me, in an age of moving literally one billion+ conversations at once across wires and through space.
In finance, it's called risk appetite. Different people and different entities have a different appetite for risk, and they will act accordingly. There is nothing intrinsically wrong with guesswork if the parties understand the context and don't lie to themselves.
>12 or 24 months is stable, technically?
Hm. Faster iterations are usually better, again, if we understand the context, but I see a lot of companies that are too quick to go into full panic mode. An example would be a recession: if you plan to invest for the next 60 months and a recession hits you in month 20, the right choice would be to ignore the inputs and keep investing. Those who pull their assets when prices are low usually lose.
Neither quick nor long iterations are be-all solutions; one always has to keep the context in mind and use first principles to understand what it is that we are doing and trying to achieve.