For me the freedom to own my computer means I can run any software I want on it.
Self hosting is predicated on some openness of computing in general. Interestingly, it still does not practically let you replace certain services like Google Maps: the end user gets great benefit for free precisely because they give their data back.
I share your preference for Organic Maps over OsmAnd, and while I haven't been daily-driving CoMaps for long (nor has anyone, really), I already significantly prefer it over Organic Maps. I need to use it long enough to see what the edge cases are like, but after using it across three time zones' worth of rural places and dense cities, it has worked well.
The Sammy Jankis link was certainly interesting. Thanks for sharing.
Whether or not AGI is imminent, and whether or not Sammy Jankis is or will be conscious... it's going to become so close that for most people, there will be no difference except to philosophers.
Is AGI 'right around the corner' or already achieved? I agree with the author: no. We have something like 10 years to go, IMO. At the end of the post he points to the last 30 years of research, and I would accept that as an upper bound. In 10 to 30 years, 99% of people won't be able to distinguish between an 'AGI' and another person when not in meatspace.
After looking for a top level comment pointing to why instead of how, I logged in just for this as I could not find one. Extremely bullish on this move, let me try to explain.
As most engineers realize right away, it is not going to be profitable to operate a regular datacenter in space, per the article (and I agree), so something else is going on here. Almost all the discussion is about feasibility, which is not by itself going to explain the situation.
It is clearly somewhat feasible to build Starlink level infrastructure and operate it profitably. I would posit that the narrative is a funding vehicle for a more conservative, incremental objective.
The very fact that the infrastructure is in space places the datacenter on the legal and geopolitical high ground. It's hard to raid servers if they are in orbit. It's hard to disable, audit, or arm-wrestle into submission. It doesn't have to have the scale we've come to expect in 2026 to be useful. And it's for inference, not training, of course. Useful levels of inference are computationally cheap. There are implications for the financial system as well.
In combination with PLTR technology, what I see is another intelligent and strategic move by Musk to enable and be part of hegemony. He is a central player not making decisions in isolation. They are playing a game with different rules, and therefore different unit economics.
If you are interested in UX, a YouTube series I found enjoyable and thought-provoking is "Liber Indigo" (sorry, on mobile)
What comes after the desktop metaphor and mobile? There is VR but... no one is sure it will get anywhere. It's cool but probably won't supplant tradition.
Maybe the ability of AI to accept somewhat imprecise inputs will help us get away from text. Multimodal gesture, voice, and touch, perhaps? So we would all be sort of body-acting like players on a stage, conveying to the machine where we want it to turn its attention.
I think this is an excellent point. I believe the possibility of 'computing' a conscious mind is proportional to the capability of computing a meaningful reality for it to exist in.
So you are raising the question: is it possible to compute a textual, or purely symbolic, reality that is complex enough for consciousness to arise within it?
Let's assume yes again.
Finally the theory leads us back to engineering. We can attempt to construct a mind and expose it to our reality, or we can ask "What kind of reality is practically computable? What are the computable realities?"
Perhaps herein lies the challenge of the next decade.
LLM training is costly, lots of money poured out into datacenters. All with the dream of giving rise to a (hopefully friendly / obedient) super intelligent mind. But the mind is nothing without a reality to exist in. I think we will find that a meaningfully sophisticated reality is computationally out of reach, even if we knew exactly how to construct one.
Is anybody working on learning? My layman's understanding of AI in the pre-transformers world was centered on learning and the ability to take in new information, put it in context with what I already know, and generate new insights and understanding.
Could there be a future where the AI machine is in a robot that I can have in my home and show it how to pull weeds in my garden, vacuum my floor, wash my dishes, and all the other things I could teach a toddler in an afternoon?
You can show an LLM how you expect your problem to be solved, and it will adhere to the example you demonstrated within the context. If it can be done with textual AI, I don't see why it shouldn't be possible for embodied ones.
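The in-context pattern described here is just few-shot prompting: the "teaching" lives entirely in the prompt, with no weight updates. A minimal sketch, assuming a hypothetical singular-to-plural task (the model call itself is omitted; only the prompt construction is shown):

```python
# Minimal sketch of in-context (few-shot) learning: demonstrations are
# placed directly in the prompt, and the model is asked to continue the
# pattern. No training or fine-tuning happens; the task is hypothetical.
def build_few_shot_prompt(examples, query):
    """Format (input, output) demonstrations followed by a new query."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Demonstrate the desired behavior by example, then pose a new case.
demos = [("cat", "cats"), ("dish", "dishes")]
prompt = build_few_shot_prompt(demos, "box")
print(prompt)
```

An embodied version would presumably replace the textual demonstrations with recorded observations of the task, but the principle (condition on examples, then generalize) is the same.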
This is where the robotics industry wants to go. Generalist robots that have an intelligence capable of learning through observation without retraining (in the ML sense). Whether and when we'll get there is another question entirely.
This is very fascinating as a limit case, which always serves as a good example of the bound. I think it highlights that “efficiency isn’t everything,” just like in so many other systems such as healthcare and justice. In this case we could work out the activation functions by analysis, which is impossible for problems of higher dimensionality. The magic of AI isn’t its efficiency; it’s in making things computable that simply aren’t by other means.
I don’t know why there aren’t full-fledged computers in a GPU-sized package. Just run Windows on your GPU, Linux on your main CPU. There are some challenges to overcome, but I think it would be nice to be able to extend your ARM PC with an x86 expansion, or extend your x86 PC with an ARM extension. Ditto for graphics, or other hardware accelerators.
There are computers that size, but I guess you mean with a male PCIe plug on them?
If the card is running its own OS, what's the benefit of combining them that way? A high speed networking link will get you similar results and is flexible and cheap.
If the card isn't running its own OS, it's much easier to put all the CPU cores in the same socket. And the demand for both x86 and Arm cores at the same time is not very high.
You may be interested in SmartNICs/DPUs. They're essentially NICs with an on-board full computer. NVIDIA makes an ARM DPU line, and you can pick up the older gen BlueField 2's on eBay for about $400.
Very much agree, I first encountered the idea of an engineering team ‘diva’ or ‘prima donna’ on the zipcpu blog. The archetype immediately resonated with me because I could see it in myself and in others, especially the high potential, high talent people. Fortunately I work for a team full of high performers and I can learn a lot from them and get along just fine, because above all we are kind to one another. We also happen to kick ass and ship systems but it’s a lot more fun to do it with a team you can genuinely like working for. I am the team lead but all the important results come from ICs so I work for them, really.