I’m interested in how this works (this thread is now getting old so I’m not sure there will be replies…)
One of the important aspects of a stable ABI is that clients keep working even after the size of a type defined in another binary changes… For instance, if Foo is a type defined in Foo.dll and I link to it, and my code does `Foo *f = new Foo()`, the compiler emits code that allocates `sizeof(Foo)` bytes on the heap, with that size baked in when *my* code is compiled. If a later version of Foo gains a new member variable `m`, it grows in size, and code inside Foo.dll that accesses `this->m` on an instance an old client allocated will read or write past the end of the allocation and crash.
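To make the size mismatch concrete, here's a minimal sketch (the names `FooV1`/`FooV2` are hypothetical, standing in for the header the client was compiled against and the header the DLL was later rebuilt with):

```cpp
#include <cstdio>

// What the client's copy of Foo.h said when the client was compiled (v1):
struct FooV1 {
    int a;
};

// What Foo.dll was later rebuilt with (v2), after adding a member:
struct FooV2 {
    int a;
    int m;   // new member variable added in a later release
};

int main() {
    // The client's `new Foo()` bakes in sizeof from the old header...
    std::printf("client allocates %zu bytes\n", sizeof(FooV1));
    // ...but the code inside the DLL now expects this larger layout:
    std::printf("dll expects      %zu bytes\n", sizeof(FooV2));
    // Any DLL function that touches this->m on a client-allocated object
    // is reading/writing past the end of the allocation.
    return 0;
}
```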
COM solves this by basically not letting you do `new Foo`: instead you ask COM itself to allocate and return an instance for you, with something like `CoCreateInstance`. That lets DLLs evolve even if the size of the object changes, because the size information is abstracted away and clients never depend on it. ObjC solves it similarly with `[SomeClass alloc]`, where the client just asks the class to allocate an instance and gets back a pointer (the size is looked up at runtime rather than baked into the emitted code), and Swift solves it with value witness tables, which defer the lookup of size/stride information until runtime.
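The common thread is a factory: allocation (and destruction) happen inside the binary that owns the type, so `sizeof` never leaks into client code. A rough sketch of that idea in plain C++ (hypothetical names, not the real COM API; `IFoo`, `CreateFoo`, etc. are made up for illustration):

```cpp
// --- Foo.h: all a client ever sees is an abstract interface, no data members ---
class IFoo {
public:
    virtual int value() const = 0;
    virtual void destroy() = 0;   // destruction also happens inside the DLL
protected:
    ~IFoo() = default;            // clients can't `delete` through the interface
};

// Exported factory: the only way for a client to obtain an IFoo.
extern "C" IFoo *CreateFoo();

// --- Foo.cpp, compiled into Foo.dll: the concrete type is free to grow ---
class Foo final : public IFoo {
    int a = 0;
    int m = 0;   // added in a later version; clients neither know nor care
public:
    int value() const override { return a + m; }
    void destroy() override { delete this; }
};

extern "C" IFoo *CreateFoo() { return new Foo(); }

// --- Client code: never mentions sizeof(Foo), only the stable interface ---
int main() {
    IFoo *f = CreateFoo();
    int v = f->value();
    f->destroy();
    return v;
}
```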
I don’t understand how you can write DLLs in plain vanilla C++ that have not only a stable ABI but the ability to actually evolve your types… I think the language has painted itself into a corner that prevents things like this.