I kind of get the sentiment about openness, but I think it's far more nuanced than you're making it out to be.
There are very good reasons for withholding SOTA models, primarily the info hazard angle and avoiding an escalation of the capabilities race, which is arguably the biggest risk we face right now.
Google / Deepmind have actually made some good decisions to try and slow down the race (such as waiting to publish).
I'm not saying they're doing a good enough job, but that doesn't mean their approach is entirely without merit.
Even ignoring the infohazard angle, publishing everything immediately would escalate the race. By sitting on their capabilities and waiting for others to publish first (e.g. PaLM and Imagen following GPT-3 and DALL-E), they're at least only playing catch-up.