Some criticisms of Jenkins I'm aware of (and some I share):
* The UI is a bit ugly
* Plugins are a minefield. Some are excellent, others are undocumented and unmaintained. The UI treats them equally so it's easy to install a bunch of bad plugins.
* It doesn't have a good security track record.
But it's extremely popular as a self-hosted / corporate / behind-the-firewall solution because:
* It's extremely flexible.
* It can be self-administered by developers; they can get "root-like" privileges, with all configuration set from the web GUI.
* And of course, it's free software.
It's easy for an admin to deploy Jenkins, set some basic authentication / security, and then let developers do whatever they want with it.
Most fish farms force fish to swim through a current of running water. I think most are concrete “runs” that continuously pump water through; at least the ones I’ve seen in the US are built that way. There are now many overseas fish farms as well.
Ditto - these are exactly the questions that must be asked, frequently:
> he asked “one of the good ones, or one of the bad ones?” A designer on his team — a “bad one” — asked too many questions. Questions like “Why are we building this now?” “Are we sure this is the right problem to solve?” “Why don’t we approach the problem in a different way?”
Calling those questions 'doubts' just doesn't seem quite correct.
In the best teams I've worked with/on, designers [1] aren't afraid to frequently ask these questions. I have yet to see a case where that's been a problem. Quite the opposite - far too often, there aren't enough deep / 'stupid' questions asked, especially at the outset & middle of projects.
I had seen this, but it wouldn't work well for my case because the link leads to the article URL, whereas it should lead to the "expanded" snippet saved from that page. Having a link pointing to the original URL is still useful, though.
Evernote does something like this (web clipper). In the left pane it shows the first lines of the saved fragment; in the right pane it shows the full saved fragment, and it also includes a link to the original work.
Evernote works well for saving, but not for presentation, and its search is sluggish with a bad UX.
I like search on HN (by Algolia), it feels snappy and I can find what I'm looking for relatively fast.
In other words, I want to curate my Evernote notes list in a web app where design is a first class citizen.
That doesn't mean pompous or flashy stuff; for example, I find HN's design serves its content well, and pg's site does the same for essays.
Sometimes I think about taking a Wordpress theme with masonry and stripping it down so that it has no visual distractions other than the content (and then adding the extra functionality needed). I'm not sure if that's better than building something from the ground up.
Do you have any pointers to get started on this for someone whose last practice of scientific method was a middle-school science fair project? (Asking for a friend)
I picked up the book "Uncontrolled" by Jim Manzi, which is ostensibly about this subject. Would love to hear any recommendations you picked up from learning about this material and applying it.
I just created a gmail account you can contact me at ( anongraddebt@gmail.com ). I'll give you the syllabus, assorted materials, and a bulleted list of take-aways that I've written up for others in the past.
I've been interested in (U)GA and related subjects for a while. It looks really promising, and its advocates make it seem like it solves all kinds of problems. But I hesitated to dive in because the learning curve looks rather steep, and it's not clear whether the cost-benefit of climbing that curve leaves one in the black.
How useful is (U)GA, really? In other words, what are the concrete benefits relative to traditional vectors, matrices, quaternions? Are those benefits worth the additional complexity (or is GA actually a simplification)? Are there applications where GA is the clear win, or is it simply an alternative way of expressing what can already be expressed through the 3 primitives I mentioned?
On another tack, how would you characterize the power / utility of (U)GA? For instance, it _seems_ like (U)GA is a super-powerful tool that's just misunderstood and everyone would benefit from learning it (like Lisp in the early 2000s). On the other hand, papers often extol the virtues of (U)GA by showing how it can solve (seemingly) esoteric higher-order math problems.
> what are the concrete benefits relative to traditional vectors, matrices, quaternions?
Traditional vectors are fine, but things get ugly once you introduce the cross product. People should really just use bivectors: they are conceptually simpler and don't entail arbitrary sign conventions.
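To make the correspondence concrete, here is a minimal sketch (plain Python, function names are my own): the three components of the cross product a × b are exactly the three bivector coefficients of the wedge product a ∧ b, just interpreted as oriented plane elements instead of a normal vector.

```python
# Compare the classic cross product with the wedge (exterior) product.
# Coefficients of the wedge product are ordered (e23, e31, e12).

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay*bz - az*by,
            az*bx - ax*bz,
            ax*by - ay*bx)

def wedge(a, b):
    """Return the bivector a ^ b as its (e23, e31, e12) coefficients."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay*bz - az*by,   # e2^e3 plane
            az*bx - ax*bz,   # e3^e1 plane
            ax*by - ay*bx)   # e1^e2 plane

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
assert cross(a, b) == wedge(a, b)  # same numbers, different interpretation
```

The point of the bivector view is that a ∧ b directly names the oriented plane spanned by a and b; the cross product encodes the same data as a vector normal to that plane, which is where the right-hand-rule sign convention sneaks in.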
Matrices are way too general a tool to criticize them as a whole. Using them makes sense for some problems, and much less sense for others. As far as I understand it though, GA is not really competing with matrices as a tool to do linear algebra. GA is mainly about geometry, and not all linear algebra is about geometry.
Quaternions really are bivectors behind the scenes. Exposing them as such would make them much easier to understand, and I don't think we would necessarily lose performance. It could just be a matter of notation, really. Instead of naming them i, j and k, we should name them jk, ki and ij respectively. At the cost of a sign change, we would make them simpler to understand imho.
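A small numeric sketch of that "sign change" claim (my own representation, no libraries): multiply quaternions the usual way, multiply elements of the even subalgebra of Cl(3,0) spanned by {1, e23, e31, e12}, and check that mapping i, j, k to −e23, −e31, −e12 turns one product into the other.

```python
# Quaternions as (w, i, j, k); even-subalgebra elements as
# (scalar, e23, e31, e12). Each bivector squares to -1, but
# e23*e31 = -e12, whereas i*j = +k: hence the sign flip below.

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def even_mul(p, q):
    """Geometric product restricted to span{1, e23, e31, e12}."""
    ps, p1, p2, p3 = p
    qs, q1, q2, q3 = q
    return (ps*qs - p1*q1 - p2*q2 - p3*q3,
            ps*q1 + p1*qs - p2*q3 + p3*q2,
            ps*q2 + p1*q3 + p2*qs - p3*q1,
            ps*q3 - p1*q2 + p2*q1 + p3*qs)

def to_even(q):
    """Map quaternion (w, i, j, k) to scalar + (-e23, -e31, -e12)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

# i * j = k in quaternion land...
assert quat_mul((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)

# ...and the map respects products for arbitrary elements.
p, q = (1.0, 2.0, 3.0, 4.0), (5.0, 6.0, 7.0, 8.0)
assert to_even(quat_mul(p, q)) == even_mul(to_even(p), to_even(q))
```

So the quaternion algebra and the bivector picture differ only by that relabeling with flipped signs, which is what makes exposing quaternions as bivectors a notation change rather than a performance trade-off.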
> how would you characterize the power / utility of (U)GA?
I'm afraid I'm not qualified to make the case for GA. People much smarter than me did, and since you're interested you probably know them (David Hestenes, Chris Doran...). Me, I'm convinced and charmed, but I'm just a hobbyist who would like to do 3D graphics elegantly.
Google (and others) gave them so much money because Magic Leap is aiming to build a platform that is not only the thing after smartphones but the thing after the Web: spatialized information seamlessly interleaved with reality, everywhere.
At that time (>2 years ago), Magic Leap seemed to have achieved the holy grail of Mixed Reality: optics that selectively block out light in real-time. Combined with demos, mockups, and sketches of real-world applications, this tech provided compelling evidence that ML was going to unlock true Mixed Reality real soon.
As erikpukinskis commented, even if Magic Leap fails to deliver, it will likely end up with extremely valuable IP in an emerging space. Magic Leap did not invent Mixed Reality, but they filed patents on many first-order applications of the tech (there are countless others).
Just a quick note, not to debate the whole premise of this post or its parent - but I don't believe Magic Leap (or anyone else) has developed the ability to selectively block out light.
The most I've seen done in this space uses very bulky electrochromic glass whose 'pixels' are ~32x the size of the actual pixels on the AR display, with extremely "fuzzy" borders.
The Lua community experienced a similar "Cambrian explosion" of module standards followed by standardization. This stemmed from Lua's "mechanisms, not policy" design philosophy:
"Despite our “mechanisms, not policy” rule — which we have found valuable in guiding the evolution of Lua—we should have provided a precise set of policies for modules and packages earlier. The lack of a common policy for building modules and installing packages prevents different groups from sharing code and discourages the development of a community code base." [1]
The same problem occurs when implementing commonly-used patterns where standards aren't defined, like OOP class hierarchies and object instances. History repeating, etc.