Weights are more valuable than training code in one regard: even with the training code, you may not have the dataset, and reproducing the weights from scratch requires a massive GPU cluster that few can afford.
Weights are more valuable to random individuals who want to mess around with the model. Training code is more valuable to other companies that have the resources to use it, because then they can tweak and modify the model however they want. But even then, you still need the training data, which in the case of OpenAI and DeepMind is a big part of the secret sauce (not just the raw data but also the process for cleansing and de-duplicating it).
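To make the contrast concrete: with released weights alone, an individual can run (or fine-tune) the model on a single machine with no dataset or cluster involved. A minimal sketch, assuming the weights are published in a Hugging Face-compatible format; the model identifier here is hypothetical, not a real release:

```python
# Sketch: using open weights for inference only. The model name below
# is a placeholder, not an actual published checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/open-weights-model"  # hypothetical identifier

# Download the weights and tokenizer; no training data or GPU cluster needed.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text locally from the pretrained weights.
prompt = "Explain why open weights matter:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Training code without the weights offers nothing this cheap: you would still need the data pipeline and the compute to turn that code into a usable model.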