I haven't found a holistic integration-testing framework where you can effortlessly mock out upstreams as running services, so that you don't have to ham-fist integration-testing code into your app or rely on actual, live upstreams, which are probably prone to breakage anyway.
One way to balance these concerns is contract testing, where you run a test suite against your mock/stub and the same test suite against the real thing. Your mock/stub would usually just have hardcoded data that aligns with the test suite so it passes.
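A minimal sketch of that idea in Python with pytest; `StubPaymentsClient`, `RealPaymentsClient`, and the `PAYMENTS_URL` opt-in variable are hypothetical names standing in for your own stub and client:

```python
# Contract test sketch: the identical assertions run against both the
# stub and (when opted in) the real service, so the stub can't silently
# drift away from what the upstream actually returns.
import os

import pytest


class StubPaymentsClient:
    """Hardcoded data that aligns with the test suite."""

    def get_balance(self, account_id):
        return {"account_id": account_id, "balance_cents": 5000}


class RealPaymentsClient:
    """Talks to the live upstream; only exercised when opted in."""

    def __init__(self, base_url):
        self.base_url = base_url

    def get_balance(self, account_id):
        import requests  # assumed to be installed

        resp = requests.get(f"{self.base_url}/accounts/{account_id}/balance")
        resp.raise_for_status()
        return resp.json()


CLIENTS = [StubPaymentsClient()]
if os.environ.get("PAYMENTS_URL"):  # opt in to hitting the real upstream
    CLIENTS.append(RealPaymentsClient(os.environ["PAYMENTS_URL"]))


@pytest.mark.parametrize("client", CLIENTS)
def test_balance_contract(client):
    body = client.get_balance("acct-123")
    assert set(body) >= {"account_id", "balance_cents"}
    assert isinstance(body["balance_cents"], int)
```

Running with `PAYMENTS_URL` unset gives you the fast stub-only run; setting it in CI gives you the same assertions against the real service.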
Another way is creating separate code libraries to abstract away external services. For instance, you might write an ObjectStorage library that has real integration tests against S3, but in your application you just stub out the ObjectStorage library. A lot of frameworks have verifying doubles, "magic mocks", or similar that stub the method signatures, which can help make sure you're calling methods correctly (but won't catch data/logic issues).
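In Python this is what `unittest.mock.create_autospec` gives you: the double copies the real class's method signatures, so a call with the wrong arguments fails in the unit test instead of in production. The `ObjectStorage` class here is a hypothetical wrapper around S3:

```python
# Verifying-double sketch: create_autospec mirrors ObjectStorage's
# signatures, so misuse of the API is caught even though no real
# storage is involved.
from unittest.mock import create_autospec


class ObjectStorage:
    def put(self, bucket: str, key: str, data: bytes) -> None: ...
    def get(self, bucket: str, key: str) -> bytes: ...


def test_app_calls_storage_correctly():
    storage = create_autospec(ObjectStorage, instance=True)
    storage.get.return_value = b"hello"

    storage.put("my-bucket", "greeting.txt", b"hello")
    assert storage.get("my-bucket", "greeting.txt") == b"hello"

    # A call that doesn't match the real signature raises TypeError:
    # storage.put("my-bucket", "greeting.txt", body=b"hello")
```

As the comment above notes, this catches calling-convention bugs but not wrong data or logic; that's what the real integration tests on the library itself are for.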
Live upstreams are also prone to inconsistent state and race conditions, especially when you try to run more than one instance of the test suite at once.
For web services, I also like using "fixtures": you manually run against a real service and copy the requests/responses to a file (removing any sensitive data), then have your stub load the data from the file and return it. If you hit any bugs/regressions, you can just copy the results that caused them into a second file and add another test case.
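A small sketch of that replay pattern in Python; the `FixtureStub` class, the fixture file names, and their JSON layout are all hypothetical:

```python
# Fixture-replay sketch: responses recorded from the real service are
# saved as scrubbed JSON files and replayed by the stub, keyed by
# (method, path).
import json
from pathlib import Path

FIXTURE_DIR = Path("tests/fixtures")


class FixtureStub:
    """Returns canned responses loaded from a recorded fixture file."""

    def __init__(self, fixture_file: str):
        raw = json.loads((FIXTURE_DIR / fixture_file).read_text())
        self._responses = {(r["method"], r["path"]): r["body"] for r in raw}

    def request(self, method: str, path: str):
        return self._responses[(method, path)]


def test_user_lookup():
    api = FixtureStub("users_happy_path.json")
    body = api.request("GET", "/users/42")
    assert body["id"] == 42


def test_regression_missing_email():
    # Second fixture file, captured from the request that triggered a bug.
    api = FixtureStub("users_missing_email.json")
    body = api.request("GET", "/users/99")
    assert body.get("email") is None
```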
Shameless plug: this is exactly the spirit behind what we're building at Kurtosis (https://www.kurtosistech.com/)! You get an isolated environment running real services connected to each other, so if you have a container with your mock, then from your app's perspective it looks just like the actual upstream.
I have looked at your docs and don't quite get it. Is the idea that I have, say, 10 microservices that the app under test will/may use, and then I set up the 10 instances (somehow all in one container?) and can then test against that?
That seems sensible, but how is it different from running 10 containers? Is the issue something about simpler network management inside a container?
I've done things like this in ad-hoc setups with Docker Compose and environment variables, managed with shell scripts. I would be very interested in a more thoughtful alternative that was integrated with testing.
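For reference, the closest I've gotten to "integrated with testing" is folding the compose lifecycle into the test run itself. A sketch, assuming `docker compose` (v2) is installed, a `docker-compose.yml` describing the upstream stack exists, and the app exposes a health endpoint on port 8080 (all assumptions):

```python
# Session-scoped pytest fixture that brings the whole Compose stack up
# before any test runs and tears it down afterwards, replacing the
# ad-hoc shell scripts.
import subprocess
import urllib.request

import pytest


@pytest.fixture(scope="session")
def compose_stack():
    # --wait blocks until containers report healthy (Compose v2).
    subprocess.run(["docker", "compose", "up", "-d", "--wait"], check=True)
    try:
        yield
    finally:
        subprocess.run(["docker", "compose", "down", "-v"], check=True)


def test_end_to_end(compose_stack):
    with urllib.request.urlopen("http://localhost:8080/health") as resp:
        assert resp.status == 200
```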
I've done a thing automating whole environment setups with VMware cloning (an undocumented feature, since VMware has its own product built on top of it) and Docker via CI, in a bank. Basically, we made golden images of VMs with databases containing all the accounts for the test cases, plus prepared images for the services to be deployed, pulling all the necessary variables from CI and vSphere and writing them into configs and Consul/Vault via scripts. Initial version control for the env could be done via shared prefixes in the Git branches of the services; after deployment, each env had a way to deploy whatever you wanted via the CI's web UI or API.
Running automated integration tests on the whole env is just an added step in the pipeline.