With all the discussion about what the “big trick” is that makes the M1 seem like such a breakthrough, I can’t help but wonder if the M1 is more like the iPhone: The sum of a large number of small engineering improvements, coupled with a lot of component integration detail work, topped off by some very shrewd supply chain arrangements.
Analogous to the iPhone being foreshadowed by the iPod without most experts believing Apple could make a mobile phone from that, the M1 was foreshadowed by the A-series chips for mobile devices, with many (most?) experts not forecasting how much they could be the base for laptops and desktops.
It seems the M1 includes numerous small engineering advances, and the near-term lockup of the top-of-the-line fab in the supply chain also reminds me of how Apple had secured exclusivity for some key leading-edge iPhone parts (was it the screens?).
So the M1 strikes me as the result of something that Apple has the ability to pull off from time to time.
And that is rather hard to pull off financially, organizationally, and culturally. And it more than makes up for some pretty spectacular tactical missteps (I’m looking at you, puck mouse, G4 Cube, butterfly keyboard).
> The sum of a large number of small engineering improvements, coupled with a lot of component integration detail work, topped off by some very shrewd supply chain arrangements.
I think the vertical integration they have is a major advantage too.
I used to work at Arm on CPUs. One thing I worked on was memory prefetching, which is critical to performance. When designing a prefetcher, you can do a better job if you have some understanding of, or guarantees about, the behaviour of the wider memory system (better yet if you can add prefetching-specific functionality to it). The issue I faced is that the partners (Samsung, Qualcomm etc) are the ones implementing the SoC and hence controlling the wider memory system. They don't give you detailed specs of how that works, nor is there a method where you can discuss with them appropriate ways to build things to enable better prefetching performance. You end up building something that's hopefully adaptable to multiple scenarios, and no one ever gets a chance to do some decent end-to-end performance tuning. I'm working with a model of what the memory system might be, and Qualcomm/Samsung etc. engineers are working with the CPU as a black box, trying to tune their side of things to work better. Were we all under one roof, I suspect we could easily have got more out of it.
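To make the tuning problem concrete, here is a toy stride prefetcher in Python. This is a minimal sketch, not Arm's actual design: the table structure and the `distance` parameter are invented for illustration. The point is that `distance` only has a right value relative to the SoC's memory latency, which is exactly the spec the IP vendor doesn't get.

```python
# Toy stride prefetcher: per "PC" it remembers the last address and
# stride, and once a stride repeats it prefetches some distance ahead.
class StridePrefetcher:
    def __init__(self, distance):
        self.distance = distance   # lines to run ahead of the demand stream
        self.last_addr = {}        # pc -> last address seen
        self.stride = {}           # pc -> last observed stride

    def access(self, pc, addr):
        """Observe a demand access; return addresses worth prefetching."""
        issued = []
        if pc in self.last_addr:
            stride = addr - self.last_addr[pc]
            if stride != 0 and stride == self.stride.get(pc):
                # Stable stride confirmed: run ahead of the stream.
                issued = [addr + stride * i
                          for i in range(1, self.distance + 1)]
            self.stride[pc] = stride
        self.last_addr[pc] = addr
        return issued

# If DRAM takes ~100 cycles, `distance` must be large enough to hide
# that latency, but too large just pollutes the cache. Without specs
# from the SoC side, it ends up tuned against a guessed memory model.
pf = StridePrefetcher(distance=4)
for addr in range(0, 64 * 8, 64):   # a streaming loop over 64-byte lines
    issued = pf.access(pc=0x400, addr=addr)
    if issued:
        print(f"access {addr:#06x} -> prefetch {[hex(a) for a in issued]}")
```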
You also get requirements based on targets to hit for some specific IP, rather than requirements around the final product, e.g. silicon area. Generally Arm will be keen to keep area increase low or improve the performance/area ratio without any huge shocks to overall area. If you're Apple, you just care about the final end-user experience and the potential profit margin. You can run the numbers and realise you can go big on silicon area and get where you want to be. With a multi-company/vendor chain, each link is trying to optimise for some number it controls, even if that overall has a negative impact on the final product.
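A back-of-the-envelope version of "run the numbers" (every figure here is invented purely for illustration): the same design change can fail an IP vendor's perf/area metric while clearly paying off at the whole-product level.

```python
# Hypothetical trade-off: a beefier core buys +15% performance for
# +30% core area. Judged as an isolated IP block, perf/area gets worse.
perf_gain = 0.15
area_gain = 0.30
print(f"perf/area: {(1 + perf_gain) / (1 + area_gain) - 1:+.1%}")  # -11.5%

# But if the core is only ~20% of the die, the whole-chip area (and
# roughly cost) increase is small next to what the speedup is worth.
core_share = 0.20
print(f"whole-die area: {core_share * area_gain:+.1%} "
      f"for {perf_gain:+.0%} performance")                         # +6.0%
```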
Very interesting comment. You see some of the same things with companies like Tesla, which is also pushing vertical integration.
A lot of the examples you see are similar to what you describe: you can cut down on the friction between the different parts of the chain.
I remember an example of software controlling a blinking icon on the dashboard: a 10-minute code change for Tesla, but a 2-3 month update cycle for a traditional automaker because the dashboard hardware came from a supplier.
If we're comparing the M1 to x86, though, then all the prefetching and other memory shenanigans are on the CPU die. Apple's A-series chips had an advantage over the SoCs used in Android phones here, but the M1 doesn't have an advantage over Intel and AMD CPUs.
> the partners (Samsung, Qualcomm etc) are the ones implementing the SoC and hence controlling the wider memory system.
And I assume the partners also do some things differently, for at least somewhat legitimate reasons, and no one ARM design can be optimal for everyone.
You use the word partner with the proper noun Qualcomm, but there are no quotes around it. Qualcomm's only focus is making money while delivering the worst experience in every direction. They are often stuck in local maxima, and they are too big to just flow around.
Apple has used exclusive access to advanced hardware as a differentiator several times. With screens it was Retina. They funded the development and actually owned the manufacturing equipment and leased it to the manufacturing subcontractors.
Also, in 2008 they secured exclusive access to a then-new laser cutting technology that they used to etch detail cuts into the unibody enclosures of their MacBooks, and later their iPads. This enabled them to mill and then precision-cut the device bodies out of single blocks of aluminium.
They’ve also frequently bought small companies to secure exclusive use of their specialist tech: Anobit for their flash memory controllers, PrimeSense for the face-mapping tech in Face ID, and many more. For Apple, simply having the best isn’t enough; they want to be the only people with the best.
Retina is a very interesting example of how Apple works. They identified the necessary resolution (200+ ppi) for this technology and worked towards it across their whole product range. The technology isn't exclusive to Apple, but they are the only company that pushes it, even if it sometimes means quite odd display resolutions.
Other manufacturers seem to be completely oblivious to it. They still equip their laptops with either full-HD or 4K screens, so the resulting ppi are all over the place: sometimes way too low (bad quality), sometimes way too high (4K in a 13" laptop halves the battery runtime). Same with standalone screens: there is a good selection around 100 ppi, but for "high res" the manufacturers just offer 4K at whatever size, so the ppi are all over the place again.
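The arithmetic behind that complaint is simple enough to sketch (the panel configurations below are just the common ones mentioned above):

```python
import math

def ppi(h_px, v_px, diagonal_in):
    """Pixels per inch for a panel of the given resolution and diagonal."""
    return math.hypot(h_px, v_px) / diagonal_in

panels = [
    ('13.3" FHD',    1920, 1080, 13.3),  # ~166 ppi: too coarse for 2x scaling
    ('13.3" 4K',     3840, 2160, 13.3),  # ~331 ppi: overkill that costs battery
    ('13.3" Retina', 2560, 1600, 13.3),  # ~227 ppi: Apple's "odd" resolution
    ('27" 1440p',    2560, 1440, 27.0),  # ~109 ppi: the classic desktop density
]
for name, h, v, d in panels:
    print(f'{name:14s} {ppi(h, v, d):4.0f} ppi')
```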
Retina is a great example of how Apple operates in general. They care about outcomes, not spec sheets. Sure, they'll take the time to spec dunk when the opportunity presents itself. It's just rarely the reason they do something, whereas Dell/HP/whatever want to say "4K SCREEN OMG" on the box regardless of whether that actually leads to a better experience.
Apple realizes there are diminishing returns in upping the resolution beyond what your eyes can see. So they hit Retina and then move on to focus on color accuracy, brightness, and contrast.
They do this throughout their product stack. Only as much RAM as their phones actually need, because RAM both adds to the BOM cost and consumes battery life.
Using fewer Megapixels than competing phones because it's about the quality, not quantity, when other manufacturers trip over themselves to have the most Megapixels.
> Using fewer Megapixels than competing phones because it's about the quality, not quantity, when other manufacturers trip over themselves to have the most Megapixels.
When your customers are picking a product based off a spec sheet, that's the trap you easily fall into.
> Apple realizes there are diminishing returns in upping the resolution beyond what your eyes can see. So they hit Retina and then move on to focus on color accuracy, brightness, and contrast.
It's still not enough resolution for the 2x upscaling that Apple argues is optimal.
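To put numbers on that: clean 2x scaling of the classic ~109 ppi desktop density needs ~218 ppi, which 5K delivers at 27" and 4K does not (the resolutions are standard; the framing is mine):

```python
import math

def ppi(h, v, d):
    return math.hypot(h, v) / d

print(f'27" 5K: {ppi(5120, 2880, 27):.0f} ppi')  # ~218: an exact 2x of ~109
print(f'27" 4K: {ppi(3840, 2160, 27):.0f} ppi')  # ~163: forces fractional scaling
```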
> The technology isn't exclusive to Apple, but they are the only company that pushes it, even if it sometimes means quite odd display resolutions.
Apple's focus and commitment can be a pain to users who want something different, but overall it's a huge strategic advantage.
Software developers are building their apps for the M1 because they know beyond a shadow of a doubt that Apple isn't going to keep Intel around any longer than they have to, whereas Microsoft has had, and will continue to have, a hard time persuading anyone to adopt ARM Windows because the opposite is true.
That would imply split-brain development. Yes, I agree that devs will support the M1 out of the necessity of supporting a minority-market-share OS that will expand thanks to the M1's apparent superiority....
But Wintel will own business desktops for probably a decade, unfortunately.
There are at least a few PC manufacturers offering 3K laptops (Lenovo and Huawei, when I looked). For monitors it’s nuts: only Iiyama has a 5K screen with multiple inputs (the LG has only one input, so it's useless for switching between a PC and a Mac).
Perhaps I'm lucky, but my LG 5K has aged really well. Fantastic upgrade in 2016 and still going strong. I'm surprised there aren't more 5K 27" monitors because they are great.
Personally, I hope Dell makes their own screen with the panel from the 6K Apple display; I would probably grab it in a heartbeat. I would consider the Apple display, but I don't need the reference video quality, and I would especially like to have more than one input on a display that expensive.
Apple, what are you thinking? People who can afford that display might want to connect both their desktop Mac and their laptop to it.
I believe this is also the only consumer 5nm chip currently available. Ryzen gen 3 is still on 7nm. I'd be interested to see how general-purpose compute on the M1 stacks up against Ryzen gen 3 mobile.
The thing is, M1 performance isn't really the point in itself. This is the lowest-performance core architecture Apple will ever produce for macOS, aimed at their lowest-end, cheapest hardware. It's only one data point towards what we can expect when they take the gloves off and go after the performance end of the market for real.
Apple calls those “universal binaries”; Rosetta is more like running different-arch binaries through qemu. A brief look through Google says there was a FatELF specification created years back, but it never really went anywhere, presumably because Linux users tend to know what arch they are using.
Fat binaries would make distribution easier, but they would double (or triple) the size of a binary. I doubt it would be worth the size trade-off.
I’m thinking of all the small utilities and command line programs that make up a stock Linux distro. Those don’t have many resources other than the binaries themselves. Sure, the size of each is not much in absolute terms, but combined they would add up to a pretty significant increase if they were all fat binaries.
That said, I don’t know what Apple does. For example, in the main download for Big Sur, is (say) zsh a universal binary, or are there separate x86 and M1 downloads? I haven’t looked.
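It's easy to check without Apple's tools: universal ("fat") Mach-O files start with a distinctive magic number. A minimal sketch in Python; the magic values come from Apple's <mach-o/fat.h> and <mach-o/loader.h>, and /bin/zsh is just an example path:

```python
import struct

FAT_MAGIC    = 0xCAFEBABE  # universal (fat) header, stored big-endian
FAT_MAGIC_64 = 0xCAFEBABF  # fat header variant with 64-bit offsets
MH_MAGIC_64  = 0xFEEDFACF  # thin 64-bit Mach-O, big-endian byte order
MH_CIGAM_64  = 0xCFFAEDFE  # thin 64-bit Mach-O, little-endian byte order

def binary_kind(path):
    """Classify a file by its first four bytes, read big-endian."""
    with open(path, "rb") as f:
        magic, = struct.unpack(">I", f.read(4))
    if magic in (FAT_MAGIC, FAT_MAGIC_64):
        # Caveat: Java .class files share the 0xCAFEBABE magic; real
        # tools also sanity-check the architecture count that follows.
        return "universal (fat) binary"
    if magic in (MH_MAGIC_64, MH_CIGAM_64):
        return "thin Mach-O binary"
    return "not a Mach-O binary"

print(binary_kind("/bin/zsh"))
```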