> What Postgres calls a server is a multi-process instance on a single machine, often a commodity cloud VM. What Exadata calls a server is a dedicated bare-metal machine with at least half a terabyte of RAM and a couple hundred CPU cores, running a gazillion processes, linked to the other cluster nodes by 100Gb networks. Often the machines are bigger.
You can also have a PG cluster of beefy machines linked by a 100Gb network; I don't see what the difference is in that case.
This is again an issue of terminology. A PG cluster is at most one write master plus some full read replicas. You don't necessarily need a fast interconnect for that, but your write traffic can never exceed the power of the single master, and storage isn't sharded but replicated (unless you host all the PG files on a SAN, which would be slow).
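For concreteness, the one-master-plus-replicas setup is just a few lines of standard streaming-replication config (these are real PG settings; the hostname and user are placeholders):

```
# on the primary (postgresql.conf)
wal_level = replica
max_wal_senders = 5

# on each replica (postgresql.conf, PG 12+)
hot_standby = on
primary_conninfo = 'host=primary-host user=replicator'
```

Every replica holds a full copy of the data and can serve reads, but all writes still funnel through the one primary.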
A RAC cluster is multi-write master, and each node stores a subset of the data. You can then add as many read caches (not replicas) as you need, and they'll cache only the hot data blocks.
So they're not quite the same thing in that sense.
The PG ecosystem actually has multiple shared-nothing cluster implementations, for example www.citusdata.com. That's unlike RAC, where masters need to sync to accept writes, so technically the write load is not distributed across servers.
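For illustration, distributing a table across worker nodes in Citus is a one-liner (`create_distributed_table` is Citus's actual API; the table and column here are made up):

```sql
-- hypothetical table; rows are hash-distributed across worker nodes
-- by user_id, so writes for different users land on different workers
CREATE TABLE events (user_id bigint, payload jsonb);
SELECT create_distributed_table('events', 'user_id');
```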
Citus is great, but it's just sharding. Only some very specific use cases fit within those limits, and there are cost issues too (being HA requires replicas of each node, etc.). RAC masters don't need to sync writes to each other; that's not how it works. Every node can write independently. They only communicate when one node has exclusive ownership over a block that another node needs to write to, and the transfer then occurs peer to peer over RDMA. If writes are scattered, the nodes work independently.
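The ownership idea can be modeled in a few lines. This is a toy sketch of the general concept (not Oracle's actual protocol): each block has at most one exclusive owner, writes to blocks you already own cost nothing, and only a write to a block held by another node triggers a transfer.

```python
class Cluster:
    """Toy model of exclusive per-block ownership across cluster nodes."""

    def __init__(self):
        self.owner = {}     # block id -> node currently holding exclusive ownership
        self.transfers = 0  # inter-node block transfers (communication events)

    def write(self, node, block):
        holder = self.owner.get(block)
        if holder is not None and holder != node:
            # Another node holds the block exclusively: ship it over
            # (peer to peer over RDMA in the real system).
            self.transfers += 1
        self.owner[block] = node

c = Cluster()
# Scattered writes: each node touches its own blocks -> no communication.
c.write(0, "blk-a"); c.write(0, "blk-a"); c.write(1, "blk-b")
assert c.transfers == 0
# Contended write: node 1 wants a block node 0 owns -> one transfer.
c.write(1, "blk-a")
assert c.transfers == 1
```

The point of the model is the assertion in the middle: as long as nodes write to disjoint sets of blocks, the transfer counter never moves.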
> they only communicate when one node has exclusive ownership over a block another node needs to write to
And it needs to communicate that it has exclusive ownership. Also, after each write you need to invalidate the cached data on the other nodes and read the new data from some transactionally consistent store, which will do all the heavy lifting (syncing/reconciling writes), which is kinda like FDB.