The resources are usually capped in a "soft" way. So if the scene uses too much RAM for a console to handle, it will trigger a chain of emails sent to everyone involved, and well... either I, as a programmer, will be given a new task to somehow fit all of it in memory, or a designer will be given a task to remove some stuff to get below the limit. But framerate can be really tricky to handle, because ultimately, it's not up to me whether fps drops are acceptable or not.
The problem is that fps stays below 30 for 90% of the development cycle. Only in the last few months of a project does everyone rip out useless code, debug code, debug overlays, loggers, and the extra network connections for statistics and performance servers, and finally you can produce a "nearly" final version of the game that doesn't have any debug code in it - and only in that state can you see how it will run on actual hardware and start optimizing from there. So basically you arrive at a situation where you have 3-6 months before release, and you have this game running at 20fps on a console, and you have to somehow make it run at 30 or 60fps. Obviously profilers help with that a lot, but it's rarely a process you can afford to be doing during most of development.
This is why we see all these Early Access games that run like garbage. They are still in the "develop new features and fix bugs" phase, and have not yet reached the "optimize and strip out debug code" phase.
And yet, I wonder if they couldn't release a production build without all that (except maybe crash handling/reporting). Early access games are to games what agile development is to software - or should be: going to production several times a day without giving up on the application's performance.
Early access game devs pushing unoptimized releases should set up their release system better.
This exactly - there's no reason you couldn't #ifdef everything that is debug-only, if your team is consistent with it from the very start.
Backfilling existing code with ifdefs and dealing with compile breaks and other weirder issues can be intimidating and time-consuming, with hard-to-define ROI, so I can empathize if someone doesn't do it.
That sounds like a very, very risky software development process. I think best practice is to make sure release builds are produced and used for testing from day one. QA doesn't test with all this debug code and these debug overlays, do they?
They do. The project I'm on has been in development for 5 years, has hundreds of people working on it globally, and not a single #ifdef DEBUG in it. It's currently my job to add that in, and it's going to take weeks to finish.