For those who don't know, that's why big and little endian were named that way: because the debate was so frivolous. It's a reference to the book Gulliver's Travels by Jonathan Swift, in which an island's people are split over which end you should crack a boiled egg from. (I'm a big-endian, for example.)
Try to find (or even better, write) a big-endian arbitrary-precision arithmetic implementation if you're convinced that the difference is frivolous. You'll easily see why one is sometimes called "logical endian" and the other "backwards endian".
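To make that concrete, here's a minimal sketch of the inner loop of a bignum adder with limbs stored least-significant-first (the function name and 32-bit limb size are mine for illustration, not any particular library's API):

    #include <stddef.h>
    #include <stdint.h>

    /* add two n-limb numbers; limbs are stored least significant first */
    void bignum_add(const uint32_t *a, const uint32_t *b,
                    uint32_t *sum, size_t n)
    {
        uint64_t carry = 0;
        for (size_t i = 0; i < n; i++) {   /* walk toward higher limbs */
            uint64_t t = (uint64_t)a[i] + b[i] + carry;
            sum[i] = (uint32_t)t;          /* keep the low 32 bits */
            carry = t >> 32;               /* carry rides along with the loop */
        }
    }

The carry propagates in the same direction the loop walks the arrays. With big-endian limb order you'd have to iterate backwards, which is where the "backwards endian" jab comes from.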
I used to be big endian. Then I changed my mind. Have a look at the following:
- 123
- 12
- 1
Here are three numbers. If we read them from left to right, they all begin with 1, but we can't tell what that 1 means: in one case it means 100, in one it means 10, and in one it means 1. We have to parse the entire number before we know what the first digit means. If we parse from right to left, each position always means the same thing: the first digit is how many ones, the second how many tens, and so on.
So it makes sense to store the smallest digit first. On a little endian architecture, if I read an 8, 16, 32 or 64 bit word from the same address, each byte always means the same thing, as long as the value fits (the type is just padded out with zeros at the top). So little endian is right, and Arabic numerals are wrong.
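You can see this on a real machine with a quick sketch (output depends on your host's byte order; memcpy is used instead of pointer casts to keep the reads well-defined):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint64_t v = 66;  /* 0x42, padded out with zeros above */
        uint8_t  b;
        uint16_t h;
        uint32_t w;

        /* read the same starting address at three narrower widths */
        memcpy(&b, &v, sizeof b);
        memcpy(&h, &v, sizeof h);
        memcpy(&w, &v, sizeof w);

        /* little endian prints "66 66 66 66": the byte at offset 0 is
           always the ones byte.  Big endian would print "0 0 0 66". */
        printf("%u %u %u %llu\n", (unsigned)b, (unsigned)h, (unsigned)w,
               (unsigned long long)v);
        return 0;
    }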
Indic scripts are LTR, so the numerals were big endian, and they stayed that way when medieval Islamic mathematicians borrowed them. Numerals in modern spoken Arabic are big endian too (mostly, with an exception for the teens that also exists in English).
Watch as every pitchfork gets pointed at you when you talk about middle endian.
Which is a real thing. There are systems that store, say, a 32-bit word as two 16-bit words that are big endian relative to each other, but little endian within each 16-bit word.
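The classic example is the PDP-11 family, which stored 32-bit values exactly this way ("PDP-endian"). A sketch of that layout, with a made-up helper name for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical helper, named here just for illustration */
    static void to_pdp_endian(uint32_t v, uint8_t out[4])
    {
        uint16_t hi = (uint16_t)(v >> 16);      /* most significant half  */
        uint16_t lo = (uint16_t)(v & 0xFFFFu);  /* least significant half */
        out[0] = (uint8_t)(hi & 0xFF);  /* halves in big endian order...    */
        out[1] = (uint8_t)(hi >> 8);
        out[2] = (uint8_t)(lo & 0xFF);  /* ...but little endian inside each */
        out[3] = (uint8_t)(lo >> 8);
    }

    int main(void)
    {
        uint8_t buf[4];
        to_pdp_endian(0x0A0B0C0Du, buf);
        printf("%02X %02X %02X %02X\n", buf[0], buf[1], buf[2], buf[3]);
        /* prints "0B 0A 0D 0C": neither big nor little endian */
        return 0;
    }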