Hacker News | past | comments | ask | show | jobs | submit | yashvg's comments

This thread offers a really elegant way to think about special relativity. The key insight is that everything moves through spacetime at speed c. It's just a question of how that velocity is distributed between spatial and temporal dimensions.

When you're at rest, you're traveling through time at maximum speed (c). Start moving through space, and you "trade" some of that temporal velocity for spatial velocity, which is why time dilation occurs. Light is special only because it's massless. All of its velocity budget goes toward spatial movement, leaving none for temporal movement.

This framing makes time dilation much more intuitive than the usual "moving clocks run slow" explanation. You're not moving slower through time because of some mysterious effect. You're moving slower through time because you're now also moving through space, and your total spacetime speed is conserved.
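This "velocity budget" bookkeeping can be made concrete in a few lines of Python (a sketch of the Euclidean split used in this framing, in units where c = 1; the function name is mine):

```python
import math

C = 1.0  # work in natural units where c = 1

def spacetime_speed_components(v):
    """Split the fixed spacetime speed c into temporal and spatial parts.

    Temporal component: c * dτ/dt = c * sqrt(1 - v²/c²)
    Spatial component:  v
    In this framing their Euclidean sum of squares is always c².
    """
    temporal = C * math.sqrt(1.0 - (v / C) ** 2)
    return temporal, v

for v in (0.0, 0.6, 0.8, 0.999):
    t, s = spacetime_speed_components(v)
    print(f"v={v}: temporal={t:.3f}, spatial={s:.3f}, total²={t*t + s*s:.3f}")
```

At rest (v = 0) the whole budget is temporal; at v = 0.999 almost all of it is spatial, which is exactly the time-dilation trade-off described above.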


> The key insight is that everything moves through spacetime at speed c

That's the key error in thinking. The key insight of special relativity is that there is no such thing as absolute time or absolute space. You have a perception of your own passage of time and movement through space, but it differs from that of observers in other inertial frames. There's simply no way to say what your "real" passage of time and movement through space is; that's a meaningless concept. Therefore there is no distribution of your velocity between time and space.


But there is absolute spacetime. Observers will disagree about the distance between events in space or the time between them, but they will all agree on the interval computed with the Minkowski metric, which combines space and time.


And they'll all disagree on your spatial velocity and how much time has passed for you. The only thing they'll agree on is the spacetime interval between any two events, calculated using the Minkowski metric. That's a fundamental invariant, but it's not absolute spacetime in the sense of a fixed, unchanging background. Instead, it highlights that space and time themselves are relative to the observer's motion.
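A quick numeric illustration of that invariance (a sketch in units where c = 1; the helper names are mine): Lorentz-boost the same event into differently moving frames and recompute the interval each time.

```python
import math

C = 1.0  # natural units, c = 1

def boost(t, x, v):
    """Lorentz-boost event (t, x) into a frame moving at velocity v."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return g * (t - v * x / C**2), g * (x - v * t)

def interval2(t, x):
    """Squared Minkowski interval s² = c²t² - x² (signature +,-)."""
    return (C * t) ** 2 - x ** 2

t, x = 3.0, 2.0
for v in (0.0, 0.5, 0.9):
    tb, xb = boost(t, x, v)
    # tb and xb change with v, but the interval does not
    print(f"v={v}: t'={tb:.3f}, x'={xb:.3f}, s²={interval2(tb, xb):.6f}")
```

The boosted coordinates differ from frame to frame, but every frame reports the same s², which is the invariant the comments above are pointing at.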


TL;DR: YC just filed an amicus brief in the Google-search antitrust case. They tell the judge that (1) Google’s default-search/pay-to-play deals created a “kill-zone” around search, keeping VCs away; and (2) the coming wave of AI-search/agentic tools will suffer the same fate unless the court imposes forward-looking remedies: open Google’s index, ban exclusive data+distribution deals, bar self-preferencing, add anti-circumvention teeth, and even spin off Android if Google backslides. YC frames this as the 2025 equivalent of the 1956 Bell Labs decree and the 2001 Microsoft decision: rip open the gate so the next Google can be born.


Summarized by o3:

1. Why YC cares

- VC “kill-zone.” YC says Google’s decade of default-search contracts (Apple, carriers, OEMs) froze half of all U.S. search queries, scaring investors away from search/AI startups.

- AI inflection point. Generative/query-based/agentic AI could disrupt search—but only if newcomers can reach users and train on data Google hoards. Without action, Google will “pull the ladder up” again.

2. What YC wants the court to order

- Open index & dataset access. Force Google to license its search index + anonymized click/embedding data on fair, reasonable terms so rivals can build ranking stacks + AI models.

- No self-preferencing in AI results. Google can’t boost Gemini-style tools or demote rivals. No exclusive AI-training corpora access either.

- Ban pay-to-play defaults. Outlaw “billions to be the default” (search, voice, browser, OS, car). No payments for choice-screen placement.

- Anti-circumvention & retaliation guardrails. Independent monitoring, fast dispute resolution, steep fines, and—if Google cheats—possible Android spinoff.

3. Historical playbook YC cites

- 1956 AT&T consent decree opened Bell Labs patents → semiconductor boom.

- 2001 Microsoft browser decree → Firefox, Chrome, Google itself.

- 2022 Nvidia-Arm acquisition block → both companies exploded in AI.

- YC says same pattern can unlock “the next Stripe/Airbnb—but for search/AI.”

4. Why HN should care

- Open Index ≈ ultimate dev API. Lets you build retrieval-augmented AI agents without a nation-state’s crawl budget.

- Distribution shake-up. Killing default deals revives mobile/browser competition; could birth real alt-search on phones.

- VC signal. YC telling a judge “give us a level field and we’ll bankroll challengers” means real capital is ready.

- Policy trend. Regulators now want to pre-wire markets (index access, AI data parity, Android contingency) before the next moat forms.

5. Bottom line: This isn’t about a fine. It’s about cracking open the data + distribution bottlenecks that froze search since 2009. If Judge Mehta adopts YC/DOJ’s plan, the door opens for real search/AI innovation—and VCs are ready to sprint through it.


This behavior likely stems from RLHF training - at least for the part after the images of the scatter plot are given to the models. Models were probably heavily penalized during training for pattern-matching that could lead to problematic racial misclassifications, similar to the issues Google faced with their image recognition systems in 2015. The tendency to be overly cautious about recognizing primate shapes, even in abstract data visualizations, could be an emergent behavior from these training constraints.


If you had the most knowledgeable entity ever at your disposal, would you only use it to ask questions within your 'field of expertise'?

AI used this way isn't replacing expertise - it's helping people explore ideas, generate hypotheses, and find plausible answers to questions that lack definitive solutions.

Many research problems, especially in niche areas, suffer from limited attention due to scarce expert resources. Curious amateurs can now do initial exploratory work using tools like o1. Maybe it'll surface interesting directions for experts to examine.


A curious experiment: o1's attempt at decoding the Indus Valley script. It constructed a systematic analysis comparing patterns to known ancient writing systems and proposed translations. Looking for feedback from archaeologists and computational linguists on the methodology.


Imo you have to really lack imagination not to see use cases. I can tell you one use case off the top of my head that you may like: AI tutoring, especially for kids. I built a voice-based language-teaching bot for a young cousin of mine, and giving the bot the face of a character she likes (e.g. Rey from Star Wars) made her so much more engaged and excited to learn.

You'd be surprised by the number of people just talking to chat/voice bots. We make voice calling bots, and some people love endlessly talking to our bots for hours. Most of these people are perfectly normal. Check out: https://x.com/deedydas/status/1806352948328583221 .


While I understand the desire to reassure developers, I think this perspective seriously underestimates the pace of progress in AI. Just 3-4 years ago, the idea of AI writing any functional code seemed far-fetched. Now they can handle many coding tasks competently.

The author lists specific tasks LLMs can't do today. But there's no fundamental reason they won't be able to in the near future. Domain expertise, understanding downstream effects, configuring CI pipelines - these are all learnable patterns. As models grow larger, train on more diverse datasets, gain longer context windows, and adopt new architectures, these capabilities will come online rapidly. The jump from GPT-3 to GPT-4 was substantial, and we should expect continued leaps.

This doesn't mean human developers will become obsolete overnight. But it does mean the nature of software development work is likely to change significantly. Lower-level coding tasks may be increasingly automated, shifting focus to higher-level design, architecture, and problem framing.

Rather than dismissing the potential impact, we should be preparing for a world where AI significantly augments or even replaces many current development tasks. This might involve focusing more on skills that complement AI capabilities or exploring new areas where human creativity and insight remain critical.


Current AI is already stressing the power grid, and much of the grid will need to be redeveloped and improved just to keep pushing the limits of LLMs. Power is the limiting factor with scaling here, so I'm rather unconvinced by your hypothesis. The improvements in the last 2-3 years are in no way indicative of the next 2-3 years.

I agree with your sentiment, by the way: developers should find ways to use LLMs to improve their development process. But the drama is getting old.


Maybe we should instead do those things when that time actually comes. Premature optimization and all that.


Very cool. I have been using Phonecall.bot which has given me great results.


Oh yeah that looks very cool as well. Thanks for the pointer. I'd love you to try CallFast too and let me know how you get on.


By swiping you force people to ‘rate’ everyone they see. You can then use a method like the Elo rating system to match musicians of similar skill levels (or desirability?) more easily.
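A minimal sketch of how each swipe could feed a standard Elo update (the function is hypothetical; a conventional K-factor of 32 is assumed):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Standard Elo update after one comparison.

    score_a: 1.0 if A "wins" the swipe (is preferred), 0.0 otherwise.
    Returns the updated ratings for A and B.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# A 1500-rated musician gets a right-swipe against a 1600-rated one:
print(elo_update(1500, 1600, 1.0))
```

An upset (the lower-rated side winning) moves both ratings more than an expected result, so after enough swipes musicians of similar skill cluster together, which is what makes the matching easier.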


Privacy.com - lets you easily make virtual burner cards that you can use for free trials without worrying about having to cancel the subscription on time.


Please be aware that while this might (!) work in the US, in many other jurisdictions not canceling a subscription still leaves you liable to pay, even if your card is gone; welcome to the wonderful world of debt collection.

IANAL

