Yeah, I'd say the misinformation about the original crime's perpetrator was the main cause of all these issues. There were already widespread anxieties about immigrants and asylum seekers, which had been weaponised by far-right media and online fake-news sources, and associating the recent attack with someone from that group lit the spark that set it all off.
Could drugs have had an effect here? Maybe, but I suspect the same issues would have occurred regardless, due to deep-seated problems with extremism and misinformation in society.
For folks willing to build from source, I have an additional commit on top of the PR (linked in the comments) that enables support for Android's constrained high speed capture mode, allowing 120fps/240fps camera streaming. Not the most useful for meetings, but enables things like capturing high frame rate mixed reality VR footage. As far as I'm aware, there's no other Android webcam app, proprietary or open source, that can do anything above 60fps.
I don't think this is possible, at least not directly. My understanding is that there's a special optimized path for CameraConstrainedHighSpeedCaptureSession recording to MediaCodec surfaces backed by a hardware encoder. If you capture to a different type of surface, the frame rate will likely drop to 60 or lower (assuming it's even allowed; I haven't tested).
I'm not too familiar with image analysis, but if it's acceptable to work on lossy frames and to take a small hit to latency, you could record to a MediaCodec hardware encoder and then immediately send the output to another MediaCodec instance to decode it.
Obviously, should your wife be non-technical, you'd put that script in a file and tell her to double-click it. Or just set the whole thing up for her so it magically works.
Looks like you have quite the adb arsenal there, so maybe you have suggestions for the two things missing from mine:
1. setting a static ADB/TCP port or automatically sending it to the PC (Android now changes it every time)
2. enabling ADB/TCP on mobile hotspot (Android complains "Please connect to WiFi")
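For the first point, here's a rough workaround sketch (untested across Android versions; the helper names and the wlan0 interface are my own assumptions): while the phone is plugged in over USB, adb tcpip pins the classic ADB-over-TCP port to a value you choose, and you can scrape the phone's Wi-Fi address from ip route to reconnect.

```shell
# Sketch, assuming a USB-connected device and a wlan0 interface.
# Helper names (device_ip, enable_tcpip) are made up for this example.

# Pull the phone's Wi-Fi address out of `adb shell ip route` output,
# e.g. "192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.42".
device_ip() {
  awk '/dev wlan0/ { for (i = 1; i <= NF; i++) if ($i == "src") print $(i + 1); exit }'
}

# While USB is attached: re-enable ADB over TCP on a fixed port,
# then reconnect wirelessly.
enable_tcpip() {
  adb tcpip 5555
  ip=$(adb shell ip route | device_ip)
  adb connect "${ip}:5555"
}
```

Run enable_tcpip once per boot with the cable attached; after that, a plain adb connect to that address keeps working until the phone reboots.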
Other than those two things, between scrcpy, KDE Connect and Waydroid, the Linux-Android integration is basically perfect and makes even the Applers jealous.
You have to run adb tcpip over USB once per boot to enable wireless adb. I actually just use ssh to forward all the PC's ports to localhost, and I have a bash script that uses nmap to find the phone's IP.
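The nmap-discovery part of such a script could look roughly like this (a sketch; the subnet, port, and function names are my assumptions):

```shell
# Sketch: scan the LAN for a host with the ADB-over-TCP port open,
# then connect to the first hit. Subnet and port 5555 are assumptions.

# Parse nmap's greppable output, which prints lines like:
# "Host: 192.168.1.42 ()  Ports: 5555/open/tcp//..."
parse_adb_host() {
  awk '/5555\/open/ { print $2; exit }'
}

# -p 5555 --open: only report hosts with 5555 open; -oG - : greppable output
find_adb_host() {
  nmap -p 5555 --open -oG - 192.168.1.0/24 | parse_adb_host
}

connect_phone() {
  ip=$(find_adb_host)
  [ -n "$ip" ] && adb connect "${ip}:5555"
}
```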
Is there a way to do the reverse, i.e., seeing the computer screen on an Android device?
Specifically, I would love a way to use my Android device as a remote control for PDF presentations. For now I can do this with KDE Connect, but the remote control is blind: I have no way to see the current slide on the Android device, much less use it to annotate the slides live.
This is particularly helpful for people using Asahi Linux who want to connect an external monitor. That can be done with something as simple as a Raspberry Pi.
EDIT: apparently this project helps achieve the same thing on other OSes, but using WebRTC and a browser:
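For the "reverse" direction on Linux, one low-tech option is a plain VNC server on the computer plus any VNC client app on the Android device. A sketch (which server applies depends on your display stack; the port is the VNC default):

```shell
# Sketch: share the Linux desktop to an Android VNC client.
# wayvnc covers wlroots compositors (e.g. Sway); x11vnc covers X11.

start_vnc() {
  if command -v wayvnc >/dev/null; then
    wayvnc 0.0.0.0 5900          # Wayland (wlroots) desktops
  elif command -v x11vnc >/dev/null; then
    x11vnc -display :0 -forever  # X11 desktops, serves port 5900
  else
    echo "install wayvnc or x11vnc first" >&2
    return 1
  fi
}
# Then point an Android VNC client at <pc-ip>:5900.
```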
This is a bit out of left field, but if you're a web developer you can use Browsersync to control a web page across multiple browsers (even on different devices). So really you "just" need a good PDF viewer on that web page.
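In case it's useful, the Browsersync side is roughly this (a sketch; the slides/ folder with an index.html embedding your PDF viewer is an assumption):

```shell
# Sketch: serve a folder and let Browsersync's "ghost mode" mirror
# clicks and scrolls across every connected browser.
serve_slides() {
  # Assumes ./slides contains an index.html embedding a PDF viewer.
  npx browser-sync start --server slides --port 3000
}
# Open http://<pc-ip>:3000 on both the laptop and the phone; navigation
# in one browser is replayed in the others.
```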
I've been thinking a lot about this kind of use case, and I'm more and more convinced we just need better ways of handling web content on the LAN.
Like, no matter what, the UX of simply mirroring screens will probably be meh. Instead, the presentation app could show you a QR code containing a TLS cert, and you could scan it to get a secure connection to its mobile-optimized web remote UI.
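A rough sketch of the QR-code idea with off-the-shelf tools (the hostname, validity period, and URL format are made up; qrencode is one common CLI for the last step): put the cert's SHA-256 fingerprint in the QR code, so the phone can pin the presenter's self-signed TLS endpoint on the LAN.

```shell
# Sketch: self-signed cert whose SHA-256 fingerprint gets pinned via QR.
openssl req -x509 -newkey rsa:2048 -nodes -days 7 \
  -subj "/CN=presenter.local" \
  -keyout key.pem -out cert.pem

# Colon-separated hex fingerprint of the cert.
fp=$(openssl x509 -in cert.pem -noout -fingerprint -sha256 | cut -d= -f2)

# Hypothetical URL format the remote-control app would understand.
url="https://presenter.local:8443/remote#fp=${fp}"

if command -v qrencode >/dev/null; then
  qrencode -o remote-qr.png "$url"   # scan this with the phone
else
  echo "$url"                        # fall back to typing it in
fi
```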
We are really nowhere near using the full potential of all the devices and connectivity we have.
Unfortunately, I recently became a macOS (ARM) user and have started to notice small misfortunes here and there, like your solution not being applicable to macOS.
I've never heard anyone say that we need to summarize and/or take notes on sections and quotes of movies and shows. I just want to enjoy my books as I read them. That's it.
I feel like nobody is talking about how LLMs could be used to train humans to teach better. Fundamentally, the process of getting a desired outcome from such a model involves nudging it to ask and answer better questions, right? So why don't we use it as a training tool across domains? Teacher training across multiple subjects could really use this!