I would also call it a perverted use of the term super-resolution. NNs perform image manipulations that go beyond resolution enhancement: they add features based on their training sources.
And? Do a search on scholar for "Example-based Super Resolution" or "Markov Networks for Superresolution". The term has been used this way in literature for maybe 20 years now.
This was the illusory hope many of us had at the beginning of the IT revolution: work from anywhere, connect with everybody, and an equal distribution of wealth.
I'm pleasantly surprised that Facebook openly admits this project is a complete failure. From other companies I hear whitewashed NLP results almost daily.
What would a smashing success look like to Facebook?
Tinfoil hat on: a smashing success for them is controlling everyone's online life.
See: the Whatsapp acquisition - burning a double-digit number of billions on a profitable platform just to remove its one source of income and USP, so as to get even more juicy metadata and eyeballs.
(Yes, I was a huge fan of Whatsapp. Yes, so much so that I expected them to manage to stay true to their ideals after the acquisition. Yes, I tried to believe Facebook actually just wanted a part in what Whatsapp was about to become.)
My follow-up question would be: is Scala still competitive with modern languages like Swift and Rust?
I experienced several drawbacks regarding library quality and code safety. There are too many small details that left me with an unpleasant feeling (take ScalaNLP/Breeze as an example).
But my central issue is the VM/GC dependency. In the age of GPU computing, native interface programming becomes a burden. Knowing who owns an object improves code readability, and I feel uncomfortable leaving object lifetimes to a background process.
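To make the ownership point concrete, here is a minimal Rust sketch (my own illustration; `Buffer`, `fill`, and `consume` are hypothetical names standing in for, say, a GPU-side resource). The point is that the deallocation site is visible in the source, rather than decided later by a collector:

```rust
// A hypothetical resource whose lifetime we want to reason about in the source.
struct Buffer {
    data: Vec<f32>,
}

// Mutable borrow: the caller keeps ownership, so the buffer outlives this call.
fn fill(buf: &mut Buffer, value: f32) {
    for x in buf.data.iter_mut() {
        *x = value;
    }
}

// Takes ownership: `buf` is freed deterministically when this function returns.
fn consume(buf: Buffer) -> usize {
    buf.data.len()
}

fn main() {
    let mut buf = Buffer { data: vec![0.0; 4] };
    fill(&mut buf, 1.0);
    let n = consume(buf); // `buf` is moved here; using it afterwards is a compile error
    println!("consumed buffer of length {}", n);
}
```

With a GC'd VM language you get none of these compile-time guarantees; the runtime decides when (and whether) the native resource behind the object is released.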
It depends how you define "competitive". If you are trying to collect hipster points, then Scala is not going to help you, but let's be real: nobody is writing big systems in either Rust or Swift; at the moment these are playgrounds.
I am a big fan of Swift and Scala... and, to be honest, Swift has a long way to go before a big system can be deployed... the tooling and libs are not there, and the lack of ABI stability means the source code is necessary for any external dependency.
This proposal makes much more sense than Apple's "Metal only" solution (https://webkit.org/blog/7380/next-generation-3d-graphics-on-...). I liked Metal at the beginning, but over time it turned out to be just another OpenGL/DX flavor with the same old driver/performance problems. Also, using SPIR as the IL allows everyone to use their preferred shader language. WebGL Next is about parallelism and abstraction; otherwise there would be absolutely no benefit.
Can you comment a bit more on the limitations of Metal?
I've been spending the last few days reading through the docs and learning how to use Metal. So, I'm curious what you (and anyone else, I guess) perceive its shortcomings to be.
My biggest personal concern was the re-invention of OpenGL/DX features without a proper understanding of why and how those features are used. For example, I never managed to apply a skeletal animation to an indirect indexed draw call, or to use transform feedback for ray casting or other deferred shading techniques.
It's trustworthy, it's just misleading. Grsecurity is a whole suite of security-related patches/modules that cover things SELinux/AppArmor don't do and have never claimed to do, because they're 'just' about MAC.
With RBAC you effectively have to create your own policy for the whole system. I'm not aware of any successful projects providing a full, generic profile.
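To give a sense of the effort involved: a grsec RBAC policy is written per role and per subject, roughly like the fragment below (illustrative only; the paths, modes, and restrictions are my own assumptions, not a working profile, and the real policy has to cover every subject on the system):

```
role user u
subject /
    /           h
    /bin        rx
    /etc        r
    /home       rwcd
    -CAP_ALL
    connect disabled
    bind    disabled
```

Multiply that by every role and every daemon on a real system and it becomes clear why no full, generic profile has emerged.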
Ideally, though, you could use Grsec without RBAC but with SELinux.
In Germany it is also called a "Katzentisch" (cat's table). The practice became notorious during the privatization of state enterprises. It was later supplemented with deliberately unsuitable task assignments; for example, women had to handle heavy packages without proper equipment.