Because if SpaceX were valued like a normal company, they would lose their money.
SpaceX, as technologically awesome as it is, simply cannot be that big of a company because the market for space launches is relatively small.
SpaceX is targeting an IPO at a valuation of 500x earnings. They need to jump on the "AI" / datacenter bandwagon to even hope to sell that kind of valuation.
The whole "datacenters in space" thing is an answer to the question "what could require 1000x the satellite launches that we have now?"
It has nothing to do with what makes sense economically for datacenters!
Radiator size scales linearly with power but, crucially, coolant pumping power, pumps, etc. do not.
Imagine the capillary/friction losses, the force required, and the energy use(!) needed to pump ammonia through a football-field-sized radiator panel.
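For a rough sense of scale, here's a back-of-the-envelope sketch (my numbers, purely illustrative, not the parent's): a radiator in space can only reject heat by radiation, so the required area follows from the Stefan-Boltzmann law. The temperature, emissivity, and two-sided-panel assumptions below are guesses, not figures from any actual design, and solar loading is ignored.

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
# Assumptions (mine): panel at ~300 K, emissivity 0.9, radiating from
# both faces to deep space, no solar input.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_watts, temp_k=300.0, emissivity=0.9, faces=2):
    """Panel area needed to reject `heat_watts` purely by radiation."""
    flux_per_m2 = emissivity * SIGMA * temp_k**4 * faces  # W per m^2 of panel
    return heat_watts / flux_per_m2

if __name__ == "__main__":
    for megawatts in (1, 10, 100):
        area = radiator_area_m2(megawatts * 1e6)
        print(f"{megawatts:>4} MW -> ~{area:,.0f} m^2 of radiator")
        # 1 MW already needs on the order of a quarter of a football field;
        # pumping coolant across that much panel is where the losses pile up.
```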
Amazon stock is flat over the past year. Compare the rest of the "magnificent seven" and a couple of e-commerce comps:
- Google: +70%
- Nvidia: +49%
- Apple: +7%
- Meta: Flat
- SHOP (closest comp): +19.41%
- Mercado Libre (international comp): +20.73%
So basically, the "tech world" is dividing itself, in the eyes of investors, into two camps: companies that will benefit from AI tailwinds and companies that will not. And all the money is going to the companies that will.
Amazon is increasingly seen as part of the latter group.
This is especially concerning for Amazon because it seems like AWS--the cash cow--has somehow missed becoming the cloud provider for AI compute needs.
As such, Amazon needs to give investors some reason to hold Amazon stock. If you're not part of a rising tide, the only reason left is "we are very profitable."
So yeah, Amazon will have to cut costs to show more profitability and stay investable.
So yes, the layoffs have to do with AI...but not the way they are spinning it.
On one hand, he claims that he "fixed problems that had been sitting untouched because no one else could untangle them." On the other hand, he blames his layoff on "a global labor market with almost no guardrails."
So which is it: did he really work on problems no one else could solve, or was he replaced by cheap foreign labor?
Probably neither. The truth is most likely one of two things:
a) Amazon made a mistake by firing him. They laid off someone truly valuable.
b) He wasn't as valuable as he thinks he was. Those problems were not worth paying him a meaningful fraction of a million dollars a year (what an L7 makes at Amazon).
What I can guarantee is that he wasn't replaced by a cheap, foreign, plug-and-play replacement.
It all makes sense when you realize the point of his tweet is to plug his run for Congress: so yeah, of course he's tapping into the absolute worst nationalistic sentiment. Shame on him.
A lesson it took me far too long to learn about what "the best" is. Bona fides: I'm no titan of industry, but I've worked with many, across many industries.
I've seen "the best." I've had what could be considered "life changing" success by most metrics (but irrelevant by SV-billionaire standards).
The lesson:
There are, in general, two groups of people you can work with: people who do what they say they are going to do, and people who don't.
People who don't do what they say they are going to do outnumber those who do by 20 to 1.
If you surround yourself with the first group, you're going to be ok. If you don't, most of your time and your organization's time will be spent not-doing, not-measuring, and not-advancing.
"The best" really is that simple, and the bar really is that low.
Of course, if you do what you say you're going to do, and you're incredibly smart, and you have vision, and (insert whatever you care for here) then yeah, you'll be the "best of the best"...but those things are legitimately not necessary for success.
This is the most accurate comment in response to this article. I've discovered pretty much exactly the same thing over a 20+ year career across multiple startups that had successful exits. The most important traits, by far, are accountability, honesty, and willingness to learn. If you have these three traits you will be one of the best people on your team regardless of what you do. I have these traits, and they're why I've been successful in /many/ different kinds of roles over the years: I'm willing to be honest about what I don't know, I listen and learn, I hold myself accountable for both successes and failures, and when I commit to do something I actually do it.
I hate being negative, but it sounds like par for the course for Fly.
Incredible (truly, incredible, world-class) engineers that somehow lack that final 10% of polish/naming/documentation that makes things...well, seriously usable.
I remember the bizarre hoops/documentation around database creation the last time I tried them. I _think_ they solved that, but at the time it felt almost like I was being looked down upon as a user. Ugh, you need clarity? How amateurish!
+1. This thread, the thread about documentation, and the thread about turning off Sprites, when taken together, thoroughly illustrate why I'm not currently a Fly user.
When it comes to electronic music, there are basically two types of clubs.
One type is the mega club. These are see-and-be-seen places. Ibiza. Vegas. Miami. The DJs are very famous, but often because they have a few EDM "hits"--not because of their selection or mixing skills. These places have tables, bottle service, and beautiful people. The DJs here often do nothing: they may have a USB drive that they press "play" on and then pretend to mix.
Then there's another class of club: the "indie" electronic music club. These places have DJs whose skill is more about selection, mixing, and crowd/vibe management. The DJs here often have respected production careers, but not always.
What you do at these clubs is simply dance. Usually by yourself, listening to music. The best of these clubs will not blow out your hearing because the sound system is exquisitely tuned.
If you're an introvert, the second class of club sounds like it would be perfect for you!
As an aside, if you've never experienced a truly world-class DJ (of the second type, not the first), it's an incredible experience. Even if you find electronic music "boring", these people are absolute masters at taking you on an emotional journey.
The best way to experience this is at one of the top festivals. The second best way is at a club. The third best way is at home, with great headphones, and soundcloud/youtube dj sets. NOT spotify.
> Even if you find electronic music "boring", these people are absolute masters at taking you on an emotional journey. The best way to experience this is at one of the top festivals. The second best way is at a club. The third best way is at home
Hell I LOVE The Midnight, Com Truise, Essenger, Empire of the Sun, Robert Parker, Gunship, FM-84, Молчат Дома (Molchat Doma) and other Slavsynth groups I can't even pronounce the names of..
but the best way to jam out to that is to go on an actual journey ^^ a long drive, an open road, no stops.. Being stationary even if dancing just doesn't fit in with the images those sounds give me (picture those synthwave/outrun posters with purple suns and ringed planets in the background sky)
I really wish there were some modern games and movies made around that vibe.
> The best way to experience this is at one of the top festivals. The second best way is at a club.
I would respectfully disagree and swap these, unless the festival is something like Freerotation. Festivals tend to bring out the more consumer friendly, hands-in-the-air side, and more often than not, force everyone to condense their sets, losing a lot of the "storytelling", risk-taking and deeper cuts.
What remains of techno has largely rotted in the last 5+ years due to the high-visibility, high-octane, arms-race energy of "business techno" festivals, for example.
> People constantly assert that LLMs don't think in some magic way that humans do think,
It doesn't matter anyway. The marquee sign reads "Artificial Intelligence" not "Artificial Human Being". As long as AI displays intelligent behavior, it's "intelligent" in the relevant context. There's no basis for demanding that the mechanism be the same as what humans do.
And of course it should go without saying that Artificial Intelligence exists on a continuum (just like human intelligence as far as that goes) and that we're not "there yet" as far as reaching the extreme high end of the continuum.
Is the substrate important? If you made an accurate model of a human brain in software, in silicon, or using water pipes and valves, would it be able to think? Would it be conscious? I have no idea.
Me neither, but that's why I don't like arguments that say LLMs can't do X because of their substrate, as if that were self-evident. It's like the aliens saying surely humans can't think because they're made of meat.
I am just trying to make the point that the machines we make tend to end up rather different to their natural analogues. The effective ones, anyway. Ornithopters were not successful. And I suspect that artificial intelligences will end up very different to human intelligence.
Okay... but an airplane in essence is modelling the shape of a bird. Where do you think the inspiration for the shape of a plane came from? lmao. come on.
Humans are not all that original, we take what exists in nature and mangle it in some way to produce a thing.
The same thing will eventually happen with AI - not in our lifetime though.
I recently saw an article about LLMs and Towers of Hanoi. An LLM can write code to solve it. It can also output steps to solve it when the disk count is low like 3. It can’t give the steps when the disk count is higher. This indicates LLMs inability to reason and understand. Also see Gotham Chess and the Chatbot Championship. The Chatbots start off making good moves, but then quickly transition to making illegal moves and generally playing unbelievably poorly. They don’t understand the rules or strategy or anything.
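(For reference, the solver the LLM can reproduce is just the textbook recursion; a minimal Python sketch, with names of my choosing. The move count also shows why enumerating the steps blows up at higher disk counts.)

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal move list for n disks: 2**n - 1 moves."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
    else:
        hanoi(n - 1, source, spare, target, moves)  # park n-1 disks on spare
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # re-stack n-1 onto target
    return moves

print(len(hanoi(3)))   # 7
print(len(hanoi(10)))  # 1023 -- the step list grows exponentially with
                       # disk count, which is what the enumeration test probes
```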
Could the LLM "write code to solve it" if no human ever wrote code to solve it? Could it output "steps to solve it" if no human ever wrote about it before to have in its training data? The answer is no.
Could a human code the solution if they didn't learn to code from someone else? No. Could they do it if someone didn't tell them the rules of towers of hanoi? No.
A human can learn and understand the rules, an LLM never could. LLMs have famously been incapable of beating humans in chess, a seemingly simple thing to learn, because LLMs can't learn - they just predict the next word and that isn't helpful in solving actual problems, or playing simple games.
I think if you tried that with some random humans you'd also find quite a few fail. I'm not sure if that shows humans have an inability to reason and understand although sometimes I wonder.
It's not some "magical way"--the ways in which a human thinks that an LLM doesn't are pretty obvious, and I dare say self-evidently part of what we think constitutes human intelligence:
- We have a sense of time (ie, ask an LLM to follow up in 2 minutes)
- We can follow negative instructions ("don't hallucinate, if you don't know the answer, say so")
We only have a sense of time in the presence of inputs. Stick a human into a sensory deprivation tank for a few hours and then ask them how much time has passed afterwards. They wouldn't know unless they managed to maintain a running count throughout, but that's a trick an LLM can also do (so long as it knows generation speed).
The general notion of passage of time (i.e. time arrow) is the only thing that appears to be intrinsic, but it is also intrinsic for LLMs in a sense that there are "earlier" and "later" tokens in its input.
Sometimes LLMs hallucinate or bullshit, sometimes they don't, sometimes humans hallucinate or bullshit, sometimes they don't. It's not like you can tell a human to stop being delusional on command either. I'm not really seeing the argument.
If a human hallucinates or bullshits in a way that harms you or your company you can take action against them
That's the difference. AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, therefore it cannot be trusted
It's more that "thinking" is a vague term that we don't even understand in humans, so for me it's pretty meaningless to claim LLMs think or don't think.
There's this very cliched comment to any AI HN headline which is this:
"LLM's don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."
or its cousin:
"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time UNLIKE humans who generate text one character a time by typing with their fleshy fingers"
Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought. A biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, seek to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.
In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.
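To make "a mathematical function provided by humans" concrete, here is roughly what the standard next-token training objective looks like. This is a toy sketch, not any particular lab's code; the vocabulary and probabilities are made up for illustration.

```python
import math

def cross_entropy_loss(predicted_probs, target_token_id):
    """Next-token objective: penalize low probability on the token
    the human-written training text actually contained."""
    return -math.log(predicted_probs[target_token_id])

# Toy vocabulary and a model's (made-up) probabilities for the next token.
vocab = {"cat": 0, "dog": 1, "mat": 2}
probs = [0.1, 0.2, 0.7]  # model thinks "mat" is most likely

print(cross_entropy_loss(probs, vocab["mat"]))  # low loss, ~0.357
print(cross_entropy_loss(probs, vocab["cat"]))  # high loss, ~2.303

# Training just nudges the weights to lower this number: the "goal" is
# entirely specified by humans via the data and this function.
```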
In my view, absolutely yes. Thinking is a means to an end. It's about acting upon these motivations by abstracting, recollecting past experiences, planning, exploring, innovating. Without any motivation, there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".
Again the problem is that what "thinking" is totally vague. To me if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.
But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely with the purpose to mimic human language use. It is the humans that produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".
Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
That's creativity which is a different question from thinking.
I disagree. Creativity is coming up with something out of the blue. Thinking is using what you know to come to a logical conclusion. LLMs so far are not very good at the former but getting pretty damn good at the latter.
> Thinking is using what you know to come to a logical conclusion
What LLMs do is using what they have _seen_ to come to a _statistical_ conclusion. Just like a complex statistical weather forecasting model. I have never heard anyone argue that such models would "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.
I think people misunderstand when they see that it's a "statistical model". That just means that out of a range of possible answers, it picks in a humanlike way. If the logical answer is the humanlike thing to say then it will be more likely to sample it.
In the same way a human might produce a range of answers to the same question, so humans are also drawing from a theoretical statistical distribution when you talk to them.
It's just a mathematical way to describe an agent, whether it's an LLM or human.
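A minimal sketch of what "picks in a humanlike way" means mechanically (illustrative only, not any real model's code): the model assigns scores to candidate next tokens, the scores become a probability distribution, and a weighted draw follows, so the "logical" continuation is simply the most likely one to be sampled.

```python
import math, random

def sample_next_token(logits, temperature=1.0):
    """Softmax over the model's scores, then a weighted random draw."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)                                  # for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Toy scores for the next word after "The capital of France is".
logits = {"Paris": 9.0, "Lyon": 4.0, "purple": 0.5}
print(sample_next_token(logits))  # almost always "Paris": the sensible
                                  # answer is also the most probable one
```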
They're linked, but they're very different. Speaking from personal experience, it's a whole different task to solve an engineering problem that's been assigned to you, where you need to break it down and reason your way to a solution, vs. coming up with something brand new like a song or a piece of art, where there's no guidance. It's just a very different use of your brain.
I guess our definition of "thinking" is just very different.
Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.
But no human would ever perform like that, without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.
Thinking is better understood than you seem to believe.
We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings, and use that to make choices, it isn't usually considered to be intelligence.
Organizational complexity is one of the requirements for intelligence, and an LLM does not reach that threshold. They have vast amounts of data, but organizationally, they are still simple - thus "ai slop".
Who says what degree of complexity is enough? Seems like deferring the problem to some other mystical arbiter.
In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal. A human went and put minimal effort into making something with an AI and put it online, producing slop, because the actual informational content is very low.
> In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal
And you'd be disagreeing with the vast amount of research into AI. [0]
> Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.
But it does mention that prompt complexity is not related to the output.
It does say that there is a maximal complexity that LLMs can have - which leads us back to... Intelligence requires organizational complexity that LLMs are not capable of.
This seems backwards to me. There's a fully understood thing (LLMs)[1] and a not-understood thing (brains)[2]. You seem to require a person to be able to fully define (presumably in some mathematical or mechanistic way) any behaviour they might observe in the not-understood thing before you will permit them to point out that the fully understood thing does not appear to exhibit that behaviour. In short you are requiring that people explain brains before you will permit them to observe that LLMs don't appear to be the same sort of thing as them. That seems rather unreasonable to me.
That doesn't mean such claims don't need to be made as specific as possible. Just saying something like "humans love but machines don't" isn't terribly compelling. I think mathematics is an area where it seems possible to draw a reasonably intuitively clear line. Personally, I've always considered the ability to independently contribute genuinely novel pure mathematical ideas (i.e. to perform significant independent research in pure maths) to be a likely hallmark of true human-like thinking. This is a high bar and one AI has not yet reached, despite the recent successes on the International Mathematical Olympiad [3] and various other recent claims. It isn't a moved goalpost, either - I've been saying the same thing for more than 20 years. I don't have to, and can't, define what "genuinely novel pure mathematical ideas" means, but we have a human system that recognises, verifies and rewards them so I expect us to know them when they are produced.
By the way, your use of "magical" in your earlier comment, is typical of the way that argument is often presented, and I think it's telling. It's very easy to fall into the fallacy of deducing things from one's own lack of imagination. I've certainly fallen into that trap many times before. It's worth honestly considering whether your reasoning is of the form "I can't imagine there being something other than X, therefore there is nothing other than X".
Personally, I think it's likely that to truly "do maths" requires something qualitatively different to a computer. Those who struggle to imagine anything other than a computer being possible often claim that that view is self-evidently wrong and mock such an imagined device as "magical", but that is not a convincing line of argument. The truth is that the physical Church-Turing thesis is a thesis, not a theorem, and a much shakier one than the original Church-Turing thesis. We have no particularly convincing reason to think such a device is impossible, and certainly no hard proof of it.
[1] Individual behaviours of LLMs are "not understood" in the sense that there is typically not some neat story we can tell about how a particular behaviour arises that contains only the truly relevant information. However, on a more fundamental level LLMs are completely understood and always have been, as they are human inventions that we are able to build from scratch.
[2] Anybody who thinks we understand how brains work isn't worth having this debate with until they read a bit about neuroscience and correct their misunderstanding.
[3] The IMO involves problems in extremely well-trodden areas of mathematics. While the problems are carefully chosen to be novel they are problems to be solved in exam conditions, not mathematical research programs. The performance of the Google and OpenAI models on them, while impressive, is not evidence that they are capable of genuinely novel mathematical thought. What I'm looking for is the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. That isn't here yet, and if and when it arrives it really will turn maths on its head.
LLMs are absolutely not "fully understood". We understand how the math of the architectures work because we designed that. How the hundreds of gigabytes of automatically trained weights work, we have no idea. By that logic we understand how human brains work because we've studied individual neurons.
And here's some more goalpost-shifting. Most humans aren't capable of novel mathematical thought either, but that doesn't mean they can't think.
We don't understand individual neurons either. There is no level on which we understand the brain in the way we very much do understand LLMs. And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs. As I mentioned in [1] what we can't do is "explain" individual behaviours with simple stories that omit unnecessary details, but that's just about desiring better (or more convenient/useful) explanations than the utterly complete one we already have.
As for most humans not being mathematicians, it's entirely irrelevant. I gave an example of something that so far LLMs have not shown an ability to do. It's chosen to be something that can be clearly pointed to and for which any change in the status quo should be obvious if/when it happens. Naturally I think that the mechanism humans use to do this is fundamental to other aspects of their behaviour. The fact that only a tiny subset of humans are able to apply it in this particular specialised way changes nothing. I have no idea what you mean by "goalpost-shifting" in this context.
> And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs
We understand it on that low level, but through training LLMs converge to something larger than the weights: a structure emerges within those weights that allows them to perform functions, and that part we do not understand. We just observe it as a black box, experimenting at the level of "we put this kind of input into the black box and get this kind of output."
My favourite game is to try to get them to be more specific - every single time they manage to exclude a whole bunch of people from being "intelligent".
Yes, and the name for this behaviour is called "being scientific".
Imagine a process called A, and, as you say, we've no idea how it works.
Imagine, then, a new process, B, comes along. Some people know a lot about how B works, most people don't. But the people selling B, they continuously tell me it works like process A, and even resort to using various cutesy linguistic tricks to make that feel like it's the case.
The people selling B even go so far as to suggest that if we don't accept a future where B takes over, we won't have a job, no matter what our poor A does.
What's the rational thing to do, for a sceptical, scientific mind? Agree with the company, that process B is of course like process A, when we - as you say yourself - don't understand process A in any comprehensive way at all? Or would that be utterly nonsensical?
Again, I'm not claiming that LLMs can think like people (I don't know that). I just don't like that people confidently claim that they can't, just because they work differently from biological brains. That doesn't matter when it comes to the Turing test (which they passed a while ago btw), just what it says.
When I write a sentence, I do it with intent, with a specific purpose in mind. When an "AI" does it, it's predicting the next word that might satisfy the input requirement. It doesn't care if the sentence it writes makes any sense, is factual, etc., so long as it is human-readable and follows grammatical rules. It does not do this with any specific intent, which is why you get slop and just plain wrong output a fair amount of the time. Just because it produces something that sounds correct sometimes does not mean it's doing any thinking at all. Yes, humans do actually think before they speak; LLMs do not, cannot, and will not, because that is not what they are designed to do.
Actually LLMs crunch through half a terabyte of weights before they "speak". How are you so confident that nothing happens in that immense amount of processing that has anything to do with thinking? Modern LLMs are also trained to have an inner dialogue before they output an answer to the user.
When you type the next word you also put a word that fits some requirement. That doesn't mean you're not thinking.
"crunch through half a terabyte of weights" isn't thinking. Following grammatical rules to produce a readable sentence isn't thought, it's statistics, and whether that sentence is factual or foolish isn't something the LLM cares about. If LLMs didn't so constantly produce garbage, I might agree with you more.
They don't follow "grammatical rules", they process inputs with an incredibly large neural net. It's like saying humans aren't really thinking because their brains are made of meat.