
I used to work at Distributive (formerly "Kings Distributed Systems") on its DCP compute platform, which is entirely what you're describing. You can deploy a JS/WASM-based workload, and it will be "sliced" and served to browser-based compute nodes. With WebGPU you can sort of have inference executing in the browser too. Incredible people there with an awesome project.
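
Roughly the idea (a hypothetical Python sketch, not DCP's actual API): the job is split into self-contained slices, and any node, e.g. a browser tab, can execute a slice in isolation:

```
# Hypothetical sketch, not DCP's API: a job becomes independent
# (work function, input) slices that any node can execute in isolation.
def slice_job(work_fn, inputs):
    """Yield self-contained slices of the job."""
    for i in inputs:
        yield (work_fn, i)

def node_execute(job_slice):
    """What a single compute node does with one slice."""
    work_fn, datum = job_slice
    return work_fn(datum)

job = slice_job(lambda x: x * x, range(5))
print([node_execute(s) for s in job])  # results gathered back by the scheduler
```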

I added Python execution support via Pyodide (CPython compiled to WASM) and worked on a bunch of other random stuff like WebLLM inference during my time there.

Apart from Distributive, there's also the "Golem network", "Salad", "Koii" and various other similar projects.

---

I'm not sure I'm convinced by the "Uber for compute" use case with compute buyers and compute workers (sellers), but if you're a university with 1000 Windows machines across all your computer labs, it'd be nice to leverage that compute for running research or something, especially with the price of RAM / cloud offerings these days...


> but if you're a university with 1000 Windows machines across all your computer labs, it'd be nice to leverage that compute for running research or something, especially with the price of RAM / cloud offerings these days...

This reminds me of the DevOps guy who made the developer laptops part of a Jenkins "swarm" on the theory that the machines were beefy and underutilized most of the time.


Awesome project!

Dumb question: could you run this in frontend JS using the browser's JS engine and WASM environment, similar to WebContainers? Maybe `fs` is just in-memory, and some things like forking are disabled. It'd be cool to have "nodejs" on the web!


I work on a project that does exactly that (and more): https://browserpod.io/.

Currently it supports Node, but we plan to add Python, Ruby, git, and more.

You can see it in action in this demo: https://vitedemo.browserpod.io

More info here: https://labs.leaningtech.com/blog/browserpod-10

Ah and kudos to Syrus and his team for this release. Edge.js's architecture seems to have many similarities with BrowserPod. I see it as proof that we are both going in the right direction!


Thanks Yuri. Keep up the good work


It’s not a dumb question at all.

And yes, it will allow running Node.js apps fully in the browser, in a way that's more compatible than any other alternative!

Stay tuned!


Do you have any specific test case that you would consider "very challenging" on the compatibility side? I'd be curious to check if BrowserPod can support that already.


>in a way that’s more compatible than any other alternative

I can see where that's going.

Awesome. I want to message you on LinkedIn but can't.


I worked with Jason (creator of Om) at my last job. He's awesome!


Is it his first language design?


> you can run an initial VDiff, and then resume that one as you get closer to the cutover point.

VDiff (v2) only compares the source and destination at a specific point in time, and resume only compares rows with a PK higher than the last one compared before the pause. I assume this means:

1. VDiff doesn't catch updates to rows with a PK lower than the point where it was paused, which could have become corrupted (toy sketch below), and

2. VDiff doesn't continuously validate CDC changes, meaning (unless you enforce extra downtime to run / resume a VDiff) you can never be 100% sure your data is valid before SwitchTraffic
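
A toy sketch of point 1 (made-up rows and values, not Vitess code): a resumed diff never revisits rows below its PK checkpoint, so a corrupted low-PK row goes unreported:

```
# Toy illustration, not Vitess code.
source = {1: "a", 2: "b", 3: "c"}
dest   = {1: "a", 2: "b", 3: "c"}

checkpoint = 2      # diff paused after comparing PK 2
source[1] = "a2"    # row 1 is updated upstream during the pause...
dest[1]   = "oops"  # ...and the replicated copy lands corrupted

resumed = [pk for pk in source if pk > checkpoint and source[pk] != dest[pk]]
full    = [pk for pk in source if source[pk] != dest[pk]]

print(resumed)  # [] -- the resumed diff reports clean
print(full)     # [1] -- only a fresh, full diff catches it
```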

I'm curious if this is something customers even care about, or is point-in-time data validation sufficient to catch any issues that could occur during migrations?


You are correct about resuming. If you do an initial VDiff and then resume that same VDiff, say, 1 month later, it would only diff rows with a higher PK value.

But there's also nothing stopping you from doing a new VDiff to cover all data at that later point in time.


"But there's also nothing stopping you from doing a new VDiff to cover all data at that later point in time." --- isn't this just pushing the same issue forward in time? How is data consistency maintained if a customer reverts back to original while having served a few request from new one already?


It's open source. If you really want to know these things, I would encourage you to look at the code and read the documentation. As noted in the blog post, reverse vreplication is set up when you switch. You can switch back and forth and nothing is lost.

https://github.com/vitessio/vitess

https://vitess.io/docs/reference/vreplication/

"isn't this just pushing the same issue forward in time?" I don't understand what you are trying to say here. You can only compare the two sides / databases at the same logical point in time. While you are doing this comparison at that point in time, the timeline continues to progress. Unless you want to stop the world and prevent writes for the full duration of the diff (which can be days or even weeks).


Thanks for responding!!

I think it's still the same issue: data modified after the VDiff point in time isn't validated before SwitchTraffic. I'm mostly curious how Vitess users handle this case, or if any users even care about this case in the first place?

Is there no demand for continuous data validation similar to what TiDB offers?

Do people who care about 100% correct data validation just accept the downtime required to run a full VDiff before SwitchTraffic?


Why is there an expectation for social media services to have such high uptime? It's not an ISP or a cloud provider; why does it matter if it goes down occasionally?


In my mind, it isn't about any specific expectation. Events like this are interesting because the cracks are starting to show when a company follows the "fire as many people as possible, run lean, integrate AI" strategy. The trend I'm seeing is that downtimes are becoming more common, which in turn does not speak well to that strategy.


Yes, I think we’d all benefit from that perspective. Of course, the revenue implications for the ownership of one of these pointless brain rot factories being down for a half hour are enormous. That’s a little notable but only for their shareholders.


Because many people (including high profile CEOs and VCs) are social media and X addicts. They spend literally all waking hours interacting with it.


There's nothing more fun than making a DSL; the only annoying part is finding an excuse to make one.


It would be cool to encode the chess board state into the URL, so you could hurl URLs back and forth over Slack and play chess just by clicking on them.

But there's something charming about the ASCII art over Slack in this project that this would lose.


This is the way to go!

Making a move should automatically copy the new URL to your clipboard. You can still keep the ASCII charm by server-side rendering the ASCII chess board as the og:description.
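
A minimal sketch of that flow (hypothetical URL and names; Python here, though the real thing would likely be JS): pack the FEN into a query param for the link, and render the ASCII board for the og:description:

```
# Hypothetical sketch: game state rides along in the URL as a FEN string,
# and the ASCII board is rendered server-side for the og:description.
from urllib.parse import urlencode

START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def board_url(fen, base="https://example.com/chess"):
    """Pack the full game state into a shareable link."""
    return f"{base}?{urlencode({'fen': fen})}"

def ascii_board(fen):
    """Render the piece-placement field of a FEN as an ASCII diagram."""
    rows = []
    for rank in fen.split()[0].split("/"):
        row = []
        for ch in rank:
            row.extend(["."] * int(ch) if ch.isdigit() else [ch])
        rows.append(" ".join(row))
    return "\n".join(rows)

print(board_url(START_FEN))   # paste this into Slack
print(ascii_board(START_FEN)) # goes into the og:description
```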


Made a non-ASCII version of this where nice og:images of the board are generated!

https://correspondence-chess-production.up.railway.app/



> `seapie.breakpoint()` opens a working `>>>` REPL at the current execution state. Any changes to variables or function definitions persist. Debugger state is exposed via built-ins (e.g. `_magic_`), and stepping/frame control/etc is handled via small `!commands`.

This is largely what `pdb` does already, no? Example:

```
(Pdb) list
  1   something = 100
  2   import pdb; pdb.set_trace()
  3  -> print(f"value is: {something}")
(Pdb) something = 1234
(Pdb) c
value is: 1234
```

I do like that you use `!<cmd>` to avoid the naming collision issue in pdb between commands and Python code!


Pdb has the same escape, just inverted: a `!` prefix forces the line to run as Python instead of being parsed as a command.

And the `interact` command will give you a working >>> REPL.
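
For example, picking up at the breakpoint in the example above (Ctrl-D drops you back to pdb):

```
(Pdb) interact
*interactive*
>>> something
100
```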


And ipdb if you want an IPython REPL.


Code examples can be executed as unit tests to prevent documentation regressions / bitrot in ways human language can't.
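
For example, Python's doctest runs the examples embedded in docstrings, so a drifting example fails the test suite instead of silently rotting (the function here is made up):

```
def slugify(title):
    """Lowercase a title and join the words with hyphens.

    >>> slugify("Hello World")
    'hello-world'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # fails loudly if the docstring example drifts
```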


But now with LLMs, maybe they could begin to check API documentation, for instance.

