SpaceX's valuation is also going to be interesting. Talking about CapEx, SpaceX has deorbiting assets on top of depreciating ones. And without Starlink, the space launch market is pretty small.
> SpaceX has deorbiting assets on top of depreciating ones
The deorbiting part is redundant. Their satellites are just that, a depreciating asset. Their lifetime seems to be 5 to 7 years. The important question is whether the total cost, including the launch, can be recouped over that lifetime or not.
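As a toy illustration of that break-even question, with entirely made-up numbers (the per-satellite build cost, the launch cost share, and the 20-satellites-per-launch split below are hypothetical, not SpaceX figures; only the 5-7 year lifetime comes from the comment above):

```python
# Hypothetical numbers -- NOT actual SpaceX figures.
build_cost = 250_000                    # USD per satellite (assumed)
launch_cost_share = 1_000_000 / 20      # assumed launch cost, split across 20 satellites
lifetime_years = 6                      # midpoint of the 5-7 year estimate

# Revenue each satellite must earn per year just to recoup its cost.
total_cost = build_cost + launch_cost_share
required_revenue_per_year = total_cost / lifetime_years
print(round(required_revenue_per_year))  # 50000
```

The point is just the shape of the calculation: the shorter the lifetime, the higher the per-year revenue bar, which is why the deorbit clock matters for the valuation.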
It's possible that the training data (and research data) is already out there, just not (yet) combined into a single open source CAD kernel.
Then again, the success of such a project might depend on other factors. Given the complexity of the task, I can imagine that just "lucking into" the right design decisions early on could have a major impact.
The marketing game is already moving to game LLMs. Somehow you have to get what you want to have into the training data or the context window.
Currently it is probably mostly quantity that does the trick w.r.t. training data. So e.g. spam the Internet with "product comparisons" featuring your product as the winner.
Shifting the balance on training data seems like the wrong approach vs focusing on showing up in agent search tool results and swaying them there.
It’s been a long time since agents couldn’t conduct web search and could only riff off their base model. The examples in this thread are things an agent would search for immediately, and agents are leaning harder on tool calls and external info over time, not lighter.
LLM research will go back to (government funded) research labs with government funded supercomputers. All AI investment will need to be written off.
Running the LLMs that the research generated will of course still be possible, e.g. via AWS Bedrock or alternatives. Initially there will be no "flat rate" subscriptions like today (similar to the early Internet); those will come once prices are low enough further out.
Running the LLMs is a low-margin business not justifying high multiples.
Luckily we have a digital ID system in Germany that would preserve your privacy, so you'd not have to scan your face. All the social media site would get is "is over 18".
I concede that you'd have to trust that the ID system's real-identity => per-site pseudo-ID mapping is not disclosed to anyone.
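One common way such a mapping is built (a sketch of the general technique, not the actual German eID protocol) is to derive the per-site pseudonym as a keyed hash over the real identifier and the site, so two sites can't link the same user while each site still sees a stable ID:

```python
import hashlib
import hmac

def site_pseudonym(real_id: str, site: str, secret_key: bytes) -> str:
    """Derive a stable per-site pseudonym; different sites get unlinkable IDs."""
    msg = f"{real_id}|{site}".encode()
    return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()

# The secret key is the trust assumption from the comment above:
# whoever holds it can reverse-map pseudonyms by brute force over known IDs.
key = b"held-by-the-id-provider"
a = site_pseudonym("DE-1234", "social-a.example", key)
b = site_pseudonym("DE-1234", "social-b.example", key)
assert a != b                                               # unlinkable across sites
assert a == site_pseudonym("DE-1234", "social-a.example", key)  # stable per site
```

All names and the ID format here are illustrative; the real system presumably does this in hardware with additional protections.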
Last time I read into this was when they introduced the first generation of these passports. You probably still need custom government-certified hardware and some Java application to make use of it?
Yes, indeed. The list operation is expensive. The S3 spec says that the list output needs to be sorted.
1. All filenames are read.
2. All filenames are sorted.
3. Pagination applied.
It obviously doesn't scale, but works OK-ish for a smaller data set. It is difficult to do this efficiently without introducing complexity. My applications don't use listing, so I prioritised simplicity over performance for the list operation.
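The three steps above can be sketched as follows (names and the marker-based pagination scheme are illustrative, not the actual implementation; S3 itself paginates with continuation tokens):

```python
def list_objects(keys, prefix="", marker="", max_keys=1000):
    """Naive S3-style listing: read everything, sort, then paginate."""
    # 1. All filenames are read (filtered by the requested prefix).
    matching = [k for k in keys if k.startswith(prefix)]
    # 2. All filenames are sorted (the S3 spec requires lexicographic order).
    matching.sort()
    # 3. Pagination applied: keys strictly after the marker, capped at max_keys.
    after_marker = [k for k in matching if k > marker]
    page = after_marker[:max_keys]
    truncated = len(after_marker) > max_keys
    return page, truncated

page, truncated = list_objects(["b/2", "a/1", "b/1"], prefix="b/", max_keys=1)
# page == ["b/1"], truncated == True
```

The O(n log n) sort over the full key set on every request is exactly why this works for small buckets but falls over at scale.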