One thing - will this eventually shift to a paid service or are you hosting it for free forever? I wouldn't mind paying for a slightly more polished version of this in-browser.
Thanks! I'm mulling over a web app version with cloud syncing for a small monthly fee. If you've got any suggestions, feel free to reach out: matt at mochi.cards
Think of it the opposite way. You have no coverage, are trying to get a message out, and the nearest gateway is miles away. A moving car/train is close to you for plenty of time to transfer a few hundred bytes, and someone in the car/train has a mesh-aware widget. It stores a copy and waits until it goes near a gateway, where it uploads it for you.
Sure, it's not as nice as a WAN connection, but the average cellular contract is pretty expensive: something like $10 per GB, often on top of a base rate of $30 and up per month.
So sure, long distance multi-hop mesh stinks for real time voice, but could be quite usable for other use cases.
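The store-carry-forward idea above can be sketched in a few lines. All the names here are invented for illustration; real delay-tolerant networking stacks (e.g. Bundle Protocol implementations) are far more involved:

```python
# Toy sketch of store-carry-forward: a sender with no coverage hands a
# message to a passing "mule" node over the short-range mesh link; the
# mule buffers it and uploads it later, once it passes near a gateway.
# Class and method names are hypothetical, for illustration only.
class Gateway:
    """Stand-in for an internet-connected gateway node."""
    def __init__(self):
        self.uploaded = []

    def upload(self, msg):
        self.uploaded.append(msg)


class MuleNode:
    """A node on a moving vehicle that carries messages for others."""
    def __init__(self):
        self.buffer = []

    def receive(self, msg):
        # Called over the short-range mesh link while the sender is
        # nearby; a few hundred bytes fits easily in the contact window.
        self.buffer.append(msg)

    def on_gateway_contact(self, gateway):
        # Flush everything we carried once a gateway is in range.
        delivered, self.buffer = self.buffer, []
        for msg in delivered:
            gateway.upload(msg)
        return len(delivered)


mule = MuleNode()
mule.receive(b"SOS: stuck at trailhead")  # sender has no coverage
gw = Gateway()
mule.on_gateway_contact(gw)               # much later, near town
```

The key property is that delivery latency is bounded by the mule's travel time, not by radio range, which is exactly why it suits messaging but not real-time voice.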
Ah, but now you're talking about solving an additional problem, delay-tolerant networking, on top of mesh networking. That adds a whole new layer of complexity, and would probably, as you say, only work for the subset of services built to handle this kind of unreliable network.
Also, many common delay-tolerant networking implementations rely on message replication to increase the probability of delivery. This puts additional bandwidth strain on the inter-node hops of the network, whose capacity, as some of the other commenters pointed out, is not actually all that high.
You could do something smarter with error-correcting codes. With a rateless code you can generate an effectively unlimited stream of encoded chunks from your message, send those chunks out over your multiple paths, and then, once enough chunks have been received, ACK each path. No bandwidth wasted on full duplicates.
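A minimal sketch of the rateless idea, using random linear combinations over GF(2). The chunk format (a seed plus an XOR payload) and all names are my own illustration; real fountain codes (LT, Raptor) use much smarter degree distributions, but the decode-from-any-sufficient-subset property is the same:

```python
# Rateless-code sketch: each chunk is the XOR of a seed-chosen random
# subset of the source blocks. Any set of chunks with full rank decodes.
import random

def chunk_mask(seed, n_blocks):
    """Deterministically derive which source blocks a chunk XORs together."""
    rng = random.Random(seed)
    mask = 0
    for i in range(n_blocks):
        if rng.random() < 0.5:
            mask |= 1 << i
    return mask

def make_chunks(blocks):
    """Endless stream of (seed, payload) chunks; blocks must be equal length."""
    seed = 0
    while True:
        mask = chunk_mask(seed, len(blocks))
        if mask:  # skip the useless all-zero combination
            payload = bytes(len(blocks[0]))
            for i in range(len(blocks)):
                if mask & (1 << i):
                    payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
            yield seed, payload
        seed += 1

def decode(received, n_blocks):
    """Gaussian elimination over GF(2); returns blocks, or None if short."""
    solved = {}  # pivot bit -> (mask, payload); mask's highest bit == pivot
    for seed, payload in received:
        mask = chunk_mask(seed, n_blocks)
        payload = bytearray(payload)
        while mask:
            pivot = mask.bit_length() - 1
            if pivot not in solved:
                solved[pivot] = (mask, payload)
                break
            m2, p2 = solved[pivot]
            mask ^= m2  # eliminate: clears the pivot bit
            payload = bytearray(a ^ b for a, b in zip(payload, p2))
    if len(solved) < n_blocks:
        return None  # not enough independent chunks yet
    blocks = [None] * n_blocks
    for pivot in sorted(solved):  # back-substitute, lowest pivot first
        mask, payload = solved[pivot]
        payload = bytearray(payload)
        for q in range(pivot):
            if mask & (1 << q):
                payload = bytearray(a ^ b for a, b in zip(payload, blocks[q]))
        blocks[pivot] = bytes(payload)
    return blocks
```

The receiver simply keeps feeding chunks into `decode` until it returns; it doesn't matter which path each chunk arrived on, so no path's bandwidth is wasted carrying redundant copies.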
I think specifically with voice it should be possible to send two chunks such that if they both arrive then you get your audio, and if only one arrives then you still get your audio but at a lower quality.
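One simple way the two-chunk voice idea could work is a "multiple description" split (my own illustration, not any particular codec): even-indexed samples in one chunk, odd-indexed in the other. Both chunks give full-rate audio; one chunk alone still gives audio, with the missing samples interpolated:

```python
# Toy two-description audio split. If only one description arrives, the
# lost samples are filled in by linear interpolation, so you get audio
# at reduced quality instead of silence.
def split_descriptions(samples):
    return samples[0::2], samples[1::2]

def reconstruct(even=None, odd=None):
    if even is not None and odd is not None:
        # Both chunks arrived: re-interleave for full quality.
        out = []
        for i in range(len(even) + len(odd)):
            out.append(even[i // 2] if i % 2 == 0 else odd[i // 2])
        return out
    got = even if even is not None else odd
    out = []
    for i, s in enumerate(got):
        out.append(s)
        nxt = got[i + 1] if i + 1 < len(got) else s
        out.append((s + nxt) // 2)  # interpolate the lost neighbour
    return out
```

(A real scheme would also handle the half-sample phase offset when only the odd description survives; this sketch just shows the graceful-degradation property.)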
Horseshit in that bench right off the bat: I have a Google Edge TPU board right in front of me and its perf on SSD300 is 70fps, not 48. That's with the browser demo, which (as far as I can tell) includes realtime encoding of h264 for streaming. Almost twice as much as Jetson, and likely in a much more modest power envelope. NVIDIA is known for dishonesty in their benchmarks. Although TPU is, of course, a quantized play, and Maxwell will really suck for that, unless it's been tweaked specifically for this board.
OTOH, fp32 models are _much_ easier to work with, and this thing has more RAM so you can waste it on 32 bit weights, and NVIDIA's software toolkit is second to none. So the Jetson looks pretty tempting as well. I just wish they didn't try to insult my intelligence.
Interesting. I hadn't seen that, but the NVidia numbers on their own products seem credible. I do agree that the flexibility of having real CUDA cores is nice.
SATA should be doable over PCIe here - they showed a reference design running PCIe-attached SATA devices in their blog post, recording 8x 1080p30 H.264 streams to an HDD.
Hopefully they've actually fixed this this time around.
According to their blog post it actually has driver support for the 8MP Raspberry Pi Camera Module v2 (IMX219), and Nvidia-sanctioned cameras will be available from their partners.
It should hopefully just work.
No low-light options at this time, however, which means external CCTV is out of the question :(
Cool. Well hopefully some third parties will now create cameras all in the same form factor with the same pinout so that the choice of carrier board and camera can be independently made.
Uber is great in the suburbs (Sydney, AU), and I'd be fine waiting 5-10 mins more and being put on a low priority so I can get it cheap. It'd also be pretty good during peak periods.