The inconvenient truth might be that the other models score higher than OLMo because they aren't restricted to purely "open and accessible" training data. Who knows what private or ethically dubious data went into training Mistral or Llama, for example.
Exactly. If we really wanted to benchmark the various models on the merits of their individual implementations, we would compare them all on the same open dataset.
Yeah. Reasoning models like R1 tend to be good at architecting changes and less suited to actually writing code, so this gives you the best of both worlds.
Apple's marketing team must be so good if they can convince HN users that an iPad is a suitable replacement for a "real" computer. No one who knows what a terminal or root access even means should be fooled by this.
The book covers some earlier aspects of the strategy. And I think the "spirit" of those strategies survives today, though in tangible terms they look very different and aren't directly actionable.
I believe they were doing ML-based trading. Their edge was data collection, cleaning, and standardisation, plus the ability to trade in large volumes at very low borrowing costs. This was long before computers, let alone ML, were commonplace in trading.
Me too. I've been trying to find the piece of art depicting Earth's kingdoms of life at macro scale surrounding the planet. It used to be my wallpaper, but I lost track of it. I'd put some of those artists' works on my wall.