Hacker News | new | past | comments | ask | show | jobs | submit | remyM's comments

Holy lord, this is awesome.

One thing - will this eventually shift to a paid service or are you hosting it for free forever? I wouldn't mind paying for a slightly more polished version of this in-browser.


Thanks! I'm mulling over a web app version with cloud syncing for a small monthly fee. If you've got any suggestions, feel free to reach out: matt at mochi.cards


Definitely the same here.

That's also why I end up here half the time.. back to work.


I've become a heavy user of the phone app and browser extension called Forest that lets you blacklist a set of sites for a variable amount of time.


Whoops, my phone connected to someone on a train that just went past, and my call dropped.


Think of it the opposite way. You have no coverage, are trying to get a message out, and the nearest gateway is miles away. A moving car/train is close to you for plenty of time to transfer a few hundred bytes, and someone in the car/train has a mesh-aware widget. It stores a copy and waits until it's near a gateway, where it uploads it for you.

Sure, it's not as nice as a WAN connection, but the average cellular contract is pretty expensive: something like $10 per GB, often on top of a base rate of $30 and up per month.

So sure, long-distance multi-hop mesh stinks for real-time voice, but it could be quite usable for other use cases.
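The store-and-forward idea above can be sketched as a toy simulation (all names and numbers here are hypothetical, just to make the mechanics concrete): a stranded sender hands a message to a passing mesh-aware carrier, which holds it until it comes within range of a gateway.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    payload: bytes
    delivered: bool = False

@dataclass
class Carrier:
    """A mesh-aware device on a moving vehicle: stores messages it
    picks up and uploads them when it passes a gateway."""
    position: float          # 1-D position along a road, in km
    speed: float             # km per time step
    buffer: list = field(default_factory=list)

    def step(self, sender_pos, gateway_pos, msg, radio_range=0.1):
        self.position += self.speed
        # Within radio range of the stranded sender: pick up a copy.
        if msg not in self.buffer and abs(self.position - sender_pos) <= radio_range:
            self.buffer.append(msg)
        # Within range of the gateway: upload everything we carry.
        if abs(self.position - gateway_pos) <= radio_range:
            for m in self.buffer:
                m.delivered = True

msg = Message(b"SOS, send pizza")           # a few hundred bytes at most
car = Carrier(position=0.0, speed=0.05)     # 50 m per step, driving past both
for _ in range(200):                        # sender at km 1, gateway at km 8
    car.step(sender_pos=1.0, gateway_pos=8.0, msg=msg)
print(msg.delivered)
```

The message gets delivered even though sender and gateway were never in radio range of each other, which is the whole point; the cost is latency measured in however long the car takes to get there.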


Ah, but now you're talking about solving an additional problem, delay-tolerant networking, on top of mesh networking. That adds a whole new layer of complexity to the mesh network, and would probably only work, as you say, for the subset of services built to handle this kind of unreliable network.

Also, many common delay-tolerant network implementations rely on message replication to increase the probability that a message is delivered. This puts additional bandwidth strain on the inter-node hops of the network, which, as some of the other commenters pointed out, isn't all that high to begin with.
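The replication trade-off can be made concrete with a toy calculation (the 30% per-copy success chance is a made-up number): if each independently routed copy reaches the destination with probability p, then n copies deliver with probability 1 - (1 - p)^n, but they also put n times the traffic on the mesh.

```python
def delivery_probability(p: float, copies: int) -> float:
    """Probability that at least one of `copies` independent
    replicas reaches the destination, each with success chance p."""
    return 1 - (1 - p) ** copies

# Hypothetical per-copy success chance of 30%:
for n in (1, 2, 4, 8):
    print(f"{n} copies: delivery {delivery_probability(0.3, n):.0%}, "
          f"bandwidth cost {n}x")
```

The returns diminish quickly: going from 4 to 8 copies doubles the load again but only nudges delivery probability from roughly 76% to 94%, which is exactly the strain on inter-node bandwidth described above.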


Indeed, it seems only practical for SMS/IM-type traffic, where even a long/lat plus a 30-character message every few minutes would be quite useful.


Why would your call be routed just via one single phone?


Are you going to send the message over more than one route? That doubles the required bandwidth over the entire mesh.


You could do something smarter with error-correcting codes. Using a rateless code, you split your message into an effectively unlimited stream of encoded chunks, send those chunks out over your multiple paths, and once enough chunks have been received you ACK each path. No bandwidth wasted.

I think specifically with voice it should be possible to send two chunks such that if they both arrive then you get your audio, and if only one arrives then you still get your audio but at a lower quality.
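A rough sketch of the rateless idea (a toy random linear fountain over GF(2), not a production code like LT or Raptor): the encoder emits an endless stream of symbols, each the XOR of a random subset of the k source chunks tagged with that subset as a bitmask, and the decoder recovers the message by Gaussian elimination as soon as any full-rank set of symbols arrives, no matter which path they came down.

```python
import random

def encode_stream(chunks, rng):
    """Rateless encoder: yields an unlimited stream of (mask, symbol)
    pairs, where symbol is the XOR of the chunks selected by mask."""
    k = len(chunks)
    while True:
        mask = rng.randrange(1, 1 << k)          # nonzero random subset
        sym = 0
        for i in range(k):
            if mask >> i & 1:
                sym ^= chunks[i]
        yield mask, sym

def decode(symbols, k):
    """Gaussian elimination over GF(2). Returns the k source chunks
    once the received symbols have full rank, else None."""
    pivots = {}                                  # leading bit -> (mask, sym)
    for mask, sym in symbols:
        while mask:
            lead = mask.bit_length() - 1
            if lead in pivots:                   # eliminate against pivot row
                pmask, psym = pivots[lead]
                mask ^= pmask
                sym ^= psym
            else:
                pivots[lead] = (mask, sym)
                break
    if len(pivots) < k:
        return None                              # not enough info yet
    out = [0] * k                                # back-substitute, low bits first
    for lead in sorted(pivots):
        mask, val = pivots[lead]
        for i in range(lead):
            if mask >> i & 1:
                val ^= out[i]
        out[lead] = val
    return out

rng = random.Random(0)
chunks = [0x48, 0x69, 0x21, 0x00]                # four one-byte source chunks
stream = encode_stream(chunks, rng)

received, result = [], None
while result is None:
    mask, sym = next(stream)
    if rng.random() < 0.5:                       # simulate lossy paths: half arrive
        received.append((mask, sym))
        result = decode(received, len(chunks))
print(result == chunks)
```

Because every symbol is interchangeable, the sender never needs to know which path dropped what; it just keeps emitting until the ACK comes back, which is what makes the "no bandwidth wasted" claim plausible.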


This is similar to Multipath TCP, for anybody interested: https://en.wikipedia.org/wiki/Multipath_TCP


These look surprisingly sweet, especially the black colorway.

I'd like a 13in model as depicted in the article, and I'd like a bigger battery too, as stock Ryzen doesn't exactly sip power.

Wonder what international availability is like.


With the current state of internet connectivity in many countries, this is simply impossible.

Nothing can match the ~25ms input latency of local gaming, but when you can't game locally, it'll do.


Seems to lose in MobileNet and win in other benches, according to Nvidia's own benchmarks.

I'll wait for independent testing before I drop $100.

https://devblogs.nvidia.com/jetson-nano-ai-computing/


Horseshit in that bench right off the bat: I have a Google Edge TPU board right in front of me and its perf on SSD300 is 70fps, not 48. That's with the browser demo, which (as far as I can tell) includes realtime encoding of h264 for streaming. Almost twice as much as Jetson, and likely in a much more modest power envelope. NVIDIA is known for dishonesty in their benchmarks. Although TPU is, of course, a quantized play, and Maxwell will really suck for that, unless it's been tweaked specifically for this board.

OTOH, fp32 models are _much_ easier to work with, and this thing has more RAM so you can waste it on 32 bit weights, and NVIDIA's software toolkit is second to none. So the Jetson looks pretty tempting as well. I just wish they didn't try to insult my intelligence.


Oh yeah - I never said to trust their benches. Take OEM benchmarks, especially ones against competitors, with the biggest grain of salt.

Once people start getting their hands on them we'll see independent benches, and I think AnandTech already got one. Hopefully soon™.


Interesting. I hadn't seen that, but the NVidia numbers on their own products seem credible. I do agree that the flexibility of having real CUDA cores is nice.


Couldn't this play games, then? Seems like a pretty nice platform for a homebrew video game console.


Definitely could. RetroArch would run great under Linux, and a wide variety of cores are available.

But I'm more interested in it as a cuDNN box.


a) The A57 is the "beefy" version of the A53; it's significantly faster in raw performance, so this should stomp an RPi. Here are ARM's performance numbers at release: https://community.arm.com/cfs-file/__key/communityserver-blo...

b) Above the HDMI port is DisplayPort.

c) USB is 3.0 all around, great for NAS-style devices.

d) The production module (for final products) uses 16GB eMMC, whereas the devkit uses microSD like the RPi.


PCIe should be able to do SATA here - they showed a reference design running PCIe-based SATA devices in their blog post, which was recording 8x 1080p30 H.264 streams to an HDD.


Hopefully they've actually fixed this this time around.

According to their blog post it has driver support for the 8MP RPi Camera Module v2 sensor (IMX219), and they'll be releasing their own Nvidia-sanctioned cameras through their partners.

It should hopefully just work. No low-light options at this time, however, which means external CCTV is out of the question :(


Cool. Well hopefully some third parties will now create cameras all in the same form factor with the same pinout so that the choice of carrier board and camera can be independently made.


Uber is great in the suburbs (Sydney, AU), and I'd be fine waiting 5-10 minutes more and being put on low priority so I can get it cheap. It'd also be pretty good during peak periods.

