dsego's comments | Hacker News

It has average taste based on the code it was trained on. For example, every time I attempted to polish the UX it wanted to add a toast system; I abhor toasts as a UX pattern. But it also provided elegant backend designs I hadn't even considered.

Can I tag a bugfix that goes in after a feature was already merged into main? Basically out of order. Or do I need to tag the bugfix branch? In that case the main branch is no longer the release, so we need to ensure the bugfix ends up in the remote main branch as well as in the release. Seems like it could cause further conflicts.

git doesn't care what order or from which branch you tag things in. If you need to hotfix a previous release, you branch from that previous release's tag, make your bugfix, tag that bugfix, and then merge the whole thing back to main.
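Roughly, something like this (branch and version names are just for illustration; assuming the release you need to patch was tagged v1.2.0):

    git checkout -b hotfix-1.2.1 v1.2.0    # start a branch from the old release's tag
    # ...commit the bugfix here...
    git tag -a v1.2.1 -m "hotfix release"  # tag the fixed state itself
    git checkout main
    git merge hotfix-1.2.1                 # carry the fix back to main as well
    git push origin main v1.2.1            # push main plus the new tag

The v1.2.1 tag keeps pointing at the hotfixed commit no matter what lands on main later, so that release stays pinned.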

Presumably you are maintaining the ordering of these releases with your naming scheme for tags. For instance, using semver tags with your main release being v1.2.0 and your hotfix tag being v1.2.1, even while you've got features in flight for v1.3.0 or v1.4.0 or v2.0.0. Keeping track of the order of versions is part of semver's job.

Perhaps the distinction is that v1.2.0 and v1.2.1 are still separate releases. A bug fix is a different binary output (for compiled languages) and should have its own release tag. Even if you aren't using a compiled language but are using a lot of manual QA, different releases have different QA steps and tracking that with different version numbers is helpful there, too.


I'm not sure what you mean. What does "tag a bugfix", "tag the bugfix branch" or "ensure the bugfix ends up in the remote main branch as well as the release" even mean?

What are you trying to achieve here, or what's the crux? I'm not 100% sure, but it seems you're asking how to apply a bug fix, while QA is testing a tag, that you'd like to be part of the eventual release but not on top of other features? Or is it about something else?

I think one misconception I can see already is that tags don't belong to branches; they're on commits. If you have branch A and branch B, with branch B having one extra commit and that commit carrying tag T, then once you merge branch B into branch A, the tag still points to the same commit, and the tag has nothing to do with branches at all. Not that you'd use this workflow for QA/releases, but it should at least get the point across.
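A quick way to convince yourself, in a throwaway repo (branch and tag names as in the example above, purely illustrative):

    git checkout -b B A                          # branch B starts where branch A is
    git commit --allow-empty -m "extra commit"   # the one extra commit on B
    git tag T                                    # T now points at that commit, not at branch B
    git checkout A
    git merge B
    git rev-parse T                              # still the exact same commit as before the merge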


It means you need a bugfix on your release and you don't want to carry in any other features that have been applied to master in the meantime.

In that case one can just branch off a stable-x.y branch from the respective x.y release tag as needed.

It really depends on the whole development workflow, but in my experience it was always easier and less hassle to develop on the main/master branch and create a stable release or fix branch as needed. That way one also prioritizes fixing on master first and then cherry-picks that fix directly to the stable branch, with potential adaptations for the potentially older code state there.
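A rough sketch of that flow (branch, tag and commit names are only placeholders):

    git checkout -b stable-1.2 v1.2.0        # cut the stable branch from the release tag, only when needed
    git cherry-pick <sha-of-fix-on-master>   # bring over the fix that already landed on master
    # adapt if the older code differs, then tag the result from here, e.g. v1.2.1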

With stable branches created only as needed, the git history gets less messy and stays more linear, which makes it easier to follow and feels more like an "only pay for what you actually use" model.


"with potential adaptions relevant for the potential older code state there"

And there it is. Not "potential adaptations": they will be a 100% necessity for some applications. There are industries outside webdev where the ideals of semver ("we do NOT break userland", "we do NOT break existing customer workflows", https://xkcd.com/1172/) are strongly applied and cherry-picking backports is not a simple process. Especially with the pace of development that TBD/develop-on-main usually implies, the "potentially older code state" is a matter of fact, and collapsing the backport process into "just cherry-pick it" as you did is simply not viable.


Usually what I've seen is one of two solutions, with the former slightly favored: A) hide any new feature behind feature flags, essentially separating "what's in the code" from "how the application works", or B) have two branches, one for development (master) and one for production. The production branch is what QA and releasers work with, master is what developers work with, and cherry-picking stuff and backporting become relatively trivial.

We've been using feature flags, but mostly for controlling when things get released. Feature flags carry their own issues, though: they complicate the code, introduce parallel code paths, and if they're not maintained properly it gets difficult to introduce new features and have everything working together seamlessly. Usually you want to remove the flag soon after release, otherwise it festers. The production branch is also OK, but committing out of order can break references if commits are not in the same order as on master, and patching something directly to prod can cause issues when promoting changes from master to prod; it requires some foresight to not break builds.

The second one you described is basically GitFlow, just substitute "master branch" for "production branch" and "dev branch" for "master branch". I mean, you literally said "master is what developers work with", so why not call it the "development branch"?

With (B) you've just reconstructed the part of git-flow that was questioned at the start of this thread. Just switch the two branches from master/production to develop/master.

B is basically Gitflow with different branch names: “one for development” is called develop, “one for production” is called main.

Pianos are even tuned with stretched tuning to match the harmonics better.

When they ditched all the ports and added the butterfly keyboard?

While I get the butterfly keyboard hate (though mine is so far still perfectly fine), the USB-C ports were amazing. I have a 2016 MBpro and that thing still cooks really well. As somebody who worked in video production, those ports were a godsend. No more waiting around for footage to transfer all the damn time. Complete game changer. Plus with one or two quality docks I could plug in literally anything I ever needed. With the AMD GPU I could also edit pretty beefy 4K with no proxies most of the time. In 2016/2017 that was pretty awesome. Plus it was the last good Intel machine they made IMO, so good compatibility with lots of software, target display mode for old iMacs, Windows if I wanted it, etc.

Probably my favorite laptop I’ve ever owned. Powerful machine, still sees work, runs great.


It introduced USB-C before it was ubiquitous even on smartphones, at least in my area. All the peripherals still needed a dongle; it was the dongle era. The keyboard was okay to type on once I got used to the short travel, but the keycaps broke off easily, and dust would get in and the keys wouldn't register. Also, the whole laptop would get very hot, at least the 13" Pro without the Touch Bar. I prefer the older 2015 model, before the butterfly keyboard; that's the one I had at work but had to give up, and I regret waiting for the new models instead of purchasing the same one.

Like I said, totally get the keyboard hate. Mine just turned out perfectly fine.

People hated the dongles but again I could hook up everything. Dozens of connections with throughput I could never get before. It was fantastic for my needs and still is!


> I figured out

Or you could maybe learn how to use the OS; in Linux lingo, RTFM. I don't want to be rude, but the critique was very flippant, the arguments vague, all about expectations based on years using a different OS, and it doesn't seem you want to give it a fair chance.


This is pretty funny.

> the arguments vague

I gave both generalized and highly specific cases where I felt the UX failed. I referenced principles of UX as well as literal "here is what my experience was in a concrete story".

> , all about expectations based on years using a different OS

No? I mean, again, funny. I explained how I've been using MacOS for years. Actually a decade, now that I count it out.

> doesn't seem you want to give it a fair chance.

a decade lol


Plenty of people use an OS for years without really learning it. And you admitted to spending time in the terminal, which indicates lack of will to try and learn macos shortcuts, gestures, windowing model, spaces, and so on. And the comment used sweeping generalizations, without referring to any specific principles broken which aren't just personal dislikes or unfamiliarity with a different way of doing things.

> I gave both generalized and highly specific cases where I felt the UX failed.

No guidelines named, no principles defined. No comparison standard is established.

The earlier fullscreen story is a specific case, maybe a discoverability argument, but not that the UX violates every principle. macOS Spaces and fullscreen apps follow a workspace concept; it's not a window resize mode.

> Asymmetric user experiences

What’s asymmetric is not the command but the spatial context, so the claim that a principle is violated is arguable.

> Heavily reliant on gestures

Not sure which guidelines this breaks, but every gesture has a keyboard shortcut alternative, plus there's a Mission Control key, the menu bar, and the Dock.

> Ridiculous failure modes

No failure mode is defined.


> And you admitted to spending time in the terminal, which indicates lack of will to try and learn macos shortcuts, gestures, windowing model, spaces, and so on.

It indicates no such thing, other than that my preferred UX on a mac has landed on the terminal. It doesn't indicate whatsoever that I never tried to learn, or that I haven't learned, unless you presuppose that learning would necessitate using the computer a specific way.

Indeed, I have learned quite a lot of the various gestures, spaces, etc, unsurprisingly. I avoid them because they suck, and the learning experience was shit.

> And the comment used sweeping generalizations, without referring to any specific principles broken which aren't just personal dislikes or unfamiliarity with a different way of doing things.

All design principles are going to boil down to personal dislikes, lol. But no, nothing was "unfamiliarity", you can stop saying that, thanks.

> No guidelines named, no principles defined. No comparison standard is established.

I could cite guidelines if you think it would help. Microsoft released UX guidelines years ago explaining why magic corners etc. are a bad idea. Of course, they obviously don't follow that guide these days. What would you like?

I'm not interested in debating this. I'm perfectly fine with how I've expressed myself; I'm just not motivated enough this late on a Friday to get more detailed, so you'll have to try to decipher what I've said and decide whether there's value in it for you, or reject it, which I think is your prerogative.


Right now you can just tell Claude to generate an ASCII diagram, or even an SVG. I did that a few days ago when I wanted to share a diagram of one particular flow in our app.

Can this be applied to camera shake/motion blur? At low shutter speeds the slight shake of the camera produces this type of blur. This is usually resolved with IBIS to stabilize the sensor.

The ability to reverse is very dependent on the transformation being well known; in this case it is deterministic and known with certainty. Any algorithm to reverse motion blur will depend on the translation and rotation of the camera in physical space, and the best the algorithm could do will be limited by the uncertainty in estimating those values.
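In the textbook formulation (nothing specific to this article, and assuming the blur is roughly uniform across the frame), the blurred photo is

    b = k * s + n

where * is convolution, s is the sharp scene, n is noise, and k is the blur kernel traced out by the camera's motion during the exposure. Deblurring is deconvolving by k, so the result can only ever be as good as the estimate of k, i.e. of that translation and rotation.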

If you apply a fake motion blur like in Photoshop or After Effects, then that could probably be reversed pretty well.


> and the best the algorithm could do will be limited by the uncertainty in estimating those values

That's relatively easy if you're assuming simple translation and rotation (simple camera movement), as opposed to a squiggle movement or something (e.g. from vibration or being knocked), because you can simply detect how much sharper the image gets and home in on the right values.


I recall a paper from many years ago (early 2010s) describing methods to estimate the camera motion and remove motion blur from blurry image contents only. I think they used a quality metric on the resulting “unblurred” image as a loss function for learning the effective motion estimate. This was before deep learning took off; certainly today’s image models could do much better at assessing the quality of the unblurred image than a hand-crafted metric.

Probably not the exact paper you have in mind, but... https://jspan.github.io/projects/text-deblurring/index.html

Record gyro motion at time of shutter?

I believe Microsoft of all people solved this a while ago by using the gyroscope in a phone to produce a de-blur kernel that cleaned up the image.

It's somewhere here: https://www.microsoft.com/en-us/research/product/computation...


I wonder if the "night mode" on newer phone cameras is doing something similar. Take a long exposure, use the IMU to produce a kernel that tidies up the image post facto. The night mode on my S24 actually produces some fuzzy, noisy artifacts that aren't terribly different from the artifacts in the OP's deblurs.

The missing piece of the puzzle is how to determine the blur kernel from the blurry image alone. There's a whole body of literature on that, called blind deblurring.

For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...


Absolutely, Photoshop has it:

https://helpx.adobe.com/photoshop/using/reduce-camera-shake-...

Or... from the note at the top, had it? Very strange; features are almost never removed. I really wonder what the architectural reason was here.


Just guessing, patent troll.

Oof, I hope not. I wonder if the architecture for GPU filters migrated, and this feature didn't get enough usage to warrant being rewritten from scratch?

Does subgrid work for this layout?


It's very reminiscent of the "if you don't like it, leave" attitude in politics.


It can never be user-friendly enough if how Windows does things is the yardstick. Windows users bemoan how terrible Macs are all the time just because things are done differently, and they don't even try to figure it out. If it doesn't work like Windows, it's not good enough.


Why can't we make Linux work like Windows? Modifiability is supposed to be a benefit of open-source.


If you install a distro that uses KDE Plasma, you're already most of the way there. Not just because of its design similarities to Windows; it's also the desktop environment that's been getting the most financial support lately and has seen the most rapid improvement.

I personally prefer it at this point. Dolphin blows away Explorer, window management is slicker and more flexible out of the box, and it also happens to be deeply customizable.


The ASUS laptop I bought has a litany of issues: blue screens, audio dying, won't wake from sleep. MSFT, Nvidia and ASUS all blame each other.

I have a feeling modern Linux on this machine wouldn't be worse than what it shipped with. The days of fighting for 3 days with audio or printer drivers after an install are mostly behind us.


If Linux isn't uploading their FDE keys to Microsoft servers by default, Windows users will get scared and start crying. Needless to say, their tastes and desires should never be entertained.

