Hacker News | new | past | comments | ask | show | jobs | submit | srean's comments

That's a wonderful set of Knotes. Thank you.

BTW, is your book out? (On algebra and programming)


In my grad school days, a couple of decades ago, I had written a library for my own use to facilitate chaining of different kinds of numeric operations on data vectors and sequences. Essentially, for a very simple form of deforestation [0,1], equipped with intermediate buffers to facilitate SIMD.
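For anyone unfamiliar with deforestation, the fusion idea can be sketched with Python generators (an illustrative analogue, not the original C++ library): each stage is lazy, so the whole chain is consumed in a single pass and no intermediate vector is ever materialized.

```python
# Hypothetical sketch of deforestation via generator chaining.

def scaled(xs, c):
    # lazily scale each element; nothing runs until iteration
    for x in xs:
        yield c * x

def shifted(xs, b):
    # lazily shift each element
    for x in xs:
        yield x + b

def pipeline(xs):
    # composing generators builds a description of the computation;
    # the single loop inside sum() drives it, touching each element once
    return sum(shifted(scaled(xs, 2.0), 1.0))

print(pipeline(range(4)))  # 0,2,4,6 -> 1,3,5,7 -> 16.0
```

The C++ expression-template version plays the same trick at compile time, which is what lets the optimizer see through the chain and vectorize it.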

GCC, surprisingly, was quite good at generating SIMD code from it and eliminating temporary data vectors. GCC was even quite good at explaining why it didn't emit the SIMD I wanted it to emit. Much to my surprise, GCC was better in this regard than Clang. From what I had read about Clang at that time, it should have been the other way round. The error messages were better too (wonders of competition).

I quite liked it. It was loads of fun. I would, however, be wary of using it in anger.

The problem is that this sublanguage feels more like a dynamically typed language, where errors are thrown deep in the instantiation chain when something ultimately fails to instantiate.

There was no type system to warn you ahead of time that what you were trying to do would eventually fail to instantiate.

The code got lost when Bitbucket stopped its support for Mercurial. I still likely have it somewhere in my old files.

Later I wanted to rewrite this in D but never started (thrown out of my university because I graduated).

[0] https://en.wikipedia.org/wiki/Deforestation_(computer_scienc...

[1] https://en.wikipedia.org/wiki/Expression_templates


It is pretty well known that C++ templates really are dynamically typed; it's compile-time duck typing.

What you did sounds like Eigen, which probably takes expression templates much further than you did due to the long time it's been around. It is weirdly absent from its documentation, but Eigen does have explicit SIMD support for several architectures and SIMD instruction sets. https://libeigen.gitlab.io/


Yes, I am familiar with Eigen. And yes, I am/was that lunatic you mention in your other comment.

This thing I wrote predated Eigen (by a few years, I think) and had better support for pipelines and dataflow-oriented programming (at least in the context of what I needed) than Eigen. It was more like the itertools and functools of Python. Likely older than Eigen, but positively after Blitz++, because I remember studying the Blitz++ code.

Unlike Eigen, mine had no explicit support for intrinsics; for that I relied on a smart enough compiler, and GCC was surprisingly good.

Too bad MSVC was quite bad at optimising and inlining away the expression trees that Eigen (and mine) generated.


I readily admit that I am an absolute novice in this space, but I have a few questions. The question I have always had is: why do we not model it closer to the actual tangible Physics and Biology going on?

For example, the Physical reality is the different frequencies of light. The Biological reality is that different types of cells on our retina respond with differing intensity to each of those frequencies.

So, to my naive mind, a way of modeling color is to have (i) a forward model that maps light frequencies to response intensities of the different types of cellular light receptors, and (ii) an inverse model that estimates the frequency mix of light from the cellular responses.

That is, have two spaces: (i) the light frequency space (a list of (frequency, intensity/power) tuples) and (ii) the cellular response space.

Once we have these, we can go from a pigment or excited phosphor to a biological response in a two step process.

From (a) pigment/phosphor (+ frequency mix of the illuminating light) to output light frequencies, and (b) from those frequencies to the cellular response.

For all processing, make frequencies the base space to work in (allowing us to change/personalize the forward model).

Yes, the inverse model leads to an ill-posed inverse problem, but we are now very knowledgeable about how to solve these.

The frequencies may need to be discretized for convenience.
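As a toy sketch of the two-space idea (the sensitivity numbers below are made up for illustration, not real cone data), the discretized forward model is just a matrix, and a minimum-norm inverse estimate falls out of least squares:

```python
import numpy as np

# Rows of S are hypothetical response curves of three receptor types
# over 5 discretized frequency bins; the forward model is linear.
S = np.array([
    [0.0, 0.1, 0.5, 1.0, 0.4],   # "long" receptor
    [0.1, 0.6, 1.0, 0.4, 0.1],   # "medium" receptor
    [1.0, 0.5, 0.1, 0.0, 0.0],   # "short" receptor
])

spectrum = np.array([0.2, 0.0, 0.7, 0.1, 0.0])  # power per frequency bin
response = S @ spectrum                          # forward model: R^5 -> R^3

# Inverse model: ill-posed (5 unknowns, 3 equations), so take the
# minimum-norm spectrum consistent with the observed response.
estimate, *_ = np.linalg.lstsq(S, response, rcond=None)

# The estimate need not equal the true spectrum (metamerism),
# but it reproduces the same receptor response.
assert np.allclose(S @ estimate, response)
```

With finer bins and measured sensitivity curves, the same two matrices (forward and pseudo-inverse) would be the personalizable model described above.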

I am obviously a novice and don't know much about modeling color, but this way of modeling seems more grounded in the tangibles. This also gives a way to model how a color-blind person might perceive a picture.

Is the reason that we do not do it this way its complexity?

Eager to be illuminated (pun intended).


My understanding is that you're describing the CIE 1931 color space:

https://en.wikipedia.org/wiki/CIE_1931_color_space#Color_vis...

You'll see it is based on the physics and human biology, and is the basis for everything else.

The thing is, this color space isn't very useful for color calculations in the perceptual/subjective sense (e.g. if I just want to change one characteristic, like luminosity, without affecting the chromaticity), so we have transformations to more useful spaces like XYZ, Lab, etc.

There's also the fact human vision is a subjective/psychological phenomenon, so only frequency response is not enough: our vision can map different frequency responses to the same perceptual color (metamerism), our vision adapts color perception based on light source, etc.


Thanks for the pointer.

EDIT: Thanks again. This is exactly what I had in mind, in spirit at least.


The problem is that this will only work in one direction. You can calculate the stimulation of the photoreceptors for a certain spectrum, but not the other way around. For example, the eye cannot distinguish between purple light consisting of one specific wavelength and purple light mixed from red and blue wavelengths, because both give the same stimulation of the receptors. So there is an infinite number of possible spectra for any given stimulation of the photoreceptors. All we can do is take the stimulation values (X, Y and Z) and convert from there to all kinds of color models and back.

Your approach would make a lot of sense for sensors that are full spectrum analyzers, but the eye isn't one.
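This non-invertibility is easy to demonstrate with a made-up sensitivity matrix: any null-space direction of the matrix gives a pair of physically different spectra (metamers) with identical receptor stimulation.

```python
import numpy as np

# Hypothetical sensitivity matrix: 3 receptor types x 5 frequency bins
# (invented overlapping curves, not real cone data).
S = np.array([
    [0.0, 0.2, 0.6, 1.0, 0.5],
    [0.2, 0.7, 1.0, 0.5, 0.1],
    [1.0, 0.6, 0.2, 0.0, 0.0],
])

# Any vector in the null space of S can be added to a spectrum without
# changing the receptor response.
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[-1]            # S @ null_vec ~ 0, since rank 3 < 5 bins

spectrum_a = np.array([0.3, 0.1, 0.4, 0.2, 0.5])
spectrum_b = spectrum_a + 0.1 * null_vec

# Physically different spectra, identical stimulation:
assert np.allclose(S @ spectrum_a, S @ spectrum_b)
```

With more frequency bins than receptor types the null space only grows, which is why the set of metamers for any given stimulation is infinite.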


You are talking about the inverse problem. https://en.wikipedia.org/wiki/Inverse_problem

Yes, because it's not a one-to-one map we cannot invert it uniquely. But that's OK; we can maintain a distribution over the possible frequencies consistent with the response. That's how it's done in other areas of mathematics where similar non-bijections arise.
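As a sketch under toy assumptions (invented sensitivity curves, not real cone data), the set of spectra consistent with a given response is an affine set, and "maintaining a distribution" over it amounts to sampling coefficients along the null-space directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 receptor types, 5 frequency bins (made-up numbers).
S = np.array([
    [0.0, 0.2, 0.6, 1.0, 0.5],
    [0.2, 0.7, 1.0, 0.5, 0.1],
    [1.0, 0.6, 0.2, 0.0, 0.0],
])
response = np.array([0.9, 1.1, 0.5])

# The inverse image of a response is one particular solution plus
# anything in the null space of S.
x0, *_ = np.linalg.lstsq(S, response, rcond=None)   # min-norm solution
_, _, Vt = np.linalg.svd(S)
null_basis = Vt[3:]            # 2 basis vectors spanning the null space

# Sample the null-space coefficients to get a cloud of spectra, all of
# which are consistent with the observed receptor response.
samples = [x0 + null_basis.T @ rng.normal(size=2) for _ in range(100)]

assert all(np.allclose(S @ s, response) for s in samples)
```

A prior (e.g. smoothness or non-negativity of spectra) would then reweight this set, which is the standard regularized-inverse-problem recipe.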

Many thanks for answering though, because I suspect I am asking a very basic question.


You're correct, for what it's worth. I too have always wished that light was modeled based on physics, not on how humans happen to see.

Unfortunately the problem is data acquisition (cameras), and data creation (artists). You need lots of data to figure out e.g. what a certain metal's spectrum is, and it's not nearly as clear-cut as just painting RGB values onto a box in a game engine.

For better or worse, all our tools are set up to work in RGB, regardless of the color space you happen to be using. So your physics-based approach would have the monumental task of redefining how to create a texture in Photoshop, and how to specify a purple light in a game engine.

I think the path toward actual photorealism is to use ML models. They should be able to take ~any game engine's rendered frame as input, and output something closer to what you'd see in real life. And I'm pretty sure it can be done in realtime, especially if you're using a GAN based approach instead of diffusion models.


No need for ML. This already exists, the keyword to look for is "spectral rendering".

To add to the general thread: the diverse color spaces are there to answer questions that inherently involve how a typical human sees colors, so they _have_ to include biology, that's their whole point. For example:

- I want objects of a specific color (because of branding), how to communicate that to contractors, and how to check it?

- What's a correct processing chain from capturing an image to display/print, that guarantees that the image will look the same on all devices?


I see. Makes sense.

Good comment. For people with physics and mathematics skills and intuition, learning or discussing color or music theory can be a little bit frustrating. I.e., a lot of it is about traditions and some famous people's ideas, and less about how it actually works under the hood. I guess similar things exist in most fields when you dig a little bit deeper: "We always apply this fudge factor to get the correct results." Even in something considered hard science.

Of course artists can be very effective with whatever toolsets they have learned; they can sort of transcend all the obfuscations and actually express themselves. It's a bit hard to change everything then, as the "user's transformations" are baked in.

It's also true that mathematical models are often simplifications, and one has to consider the whole end-to-end pipeline, where a more "accurate" transformation can yield a worse end result if applied blindly. I'm always reminded of the anecdote about a TV channel that switched to a more accurate weather forecast model. The audience got worse forecasts, since the resident meteorologist was used to certain errors the old model produced and could compensate for those, while the new model had maybe fewer errors in total but they were different. Happens all the time, and it's why an engineer thinking something is objectively better might not actually make things better for the customer...


> For example, the Physical reality is the different frequencies of light. The Biological reality is that different types of cells on our retina respond with differing intensity to each of those frequencies.

Because this isn't true. And there are multiple ways with completely different combinations of intensities of light at different frequencies to have a color that most people will see as the same color. This is because the color receptors of our eye overlap.


> And there are multiple ways with completely different combinations of intensities of light at different frequencies to have a color that most people will see as the same color. This is because the color receptors of our eye overlap.

Of course, if you read back you will notice I am quite aware that the mapping is not a one to one bijection. Hence the need to solve the inverse problem.

One can maintain the inverse image (a set), or maintain a distribution over it, or certain statistics of that distribution. I have a link in my post about what inverse problems are.


> We see different colors not because our eyes can tell what frequency a particular spectral color is, but because we have three types of color receptors, each of which is excited by different broad ranges of overlapping frequencies.

Either my English is very bad or your comprehension is. My understanding is no different from what you say in this comment.

Many of these, like RGB or YUV, are intermediate "simplifications", not what you're necessarily supposed to be working in to generate them. It's like a physics calculation where everyone just decides to randomly stop halfway through and use that as a transport medium. RGB exists because that's how the display physically operates: it's how bright those literal subpixels should be. Hence the RGB intermediate transport. YUV (particularly 4:2:2 and 4:2:0) is essentially a lossy compression format that's very easy to compress/decompress, and that's why it exists.
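To make the "easy lossy compression" point concrete, here is a rough sketch of a BT.601-style RGB-to-YUV conversion with 4:2:0 chroma subsampling (simplified: real pipelines add offsets, clamping, and quantization):

```python
import numpy as np

def rgb_to_yuv(rgb):
    # BT.601-style coefficients; rgb is an (H, W, 3) float array.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: kept at full resolution
    u = 0.492 * (b - y)                      # chroma: to be subsampled
    v = 0.877 * (r - y)
    return y, u, v

rgb = np.random.default_rng(1).random((4, 4, 3))
y, u, v = rgb_to_yuv(rgb)

# 4:2:0 keeps one chroma sample per 2x2 block: average each block.
u420 = u.reshape(2, 2, 2, 2).mean(axis=(1, 3))
v420 = v.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Storage drops from 3 samples per pixel to 1 + 2*(1/4) = 1.5,
# while full-resolution luma preserves most perceived detail.
print(y.size + u420.size + v420.size)  # 24 samples instead of 48
```

The decompressor just upsamples the chroma planes and inverts the linear transform, which is why codecs standardized on this layout.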

If you're doing rendering work, like in a game, those do operate in a more physical domain. That's the so-called "Physically Based Rendering" or PBR that you might see if you hang out in game dev circles.


Most display technologies output RGB light, which means you’re already forced into an output methodology that’s relying on deception purpose-built for human biology and psychology.

Because of that, the layer of your model concerned with the physical reality of light doesn’t seem super useful for basic image input and output.

But of course this layer is incredibly useful for stuff like computer graphics, where it is indeed the case that physically based rendering is widely used for offline CGI rendering (and increasingly for realtime video game rendering).


Short answer

We do model it according to perception, but only when it is useful. Other colour models allow different outputs: printing with coloured ink, displaying on screen by mixing coloured lights, categorising colour, finding how bright it is, etc. all need different representations in numbers.


I think this is closer to that than you think. Lab has three intensity vectors, one for each of the photoreceptors in your eye. Anything more than Lab is unnecessarily complex IMO.

The three Lab values don’t map onto the three different wavelengths captured in the retina.

It’s more like: L is an intensity/brightness factor, and then the a and b values correspond to the two dimensions of opponent color that neurons capture in the thalamus, one step after the eye.
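The standard XYZ-to-CIELAB formulas make that structure explicit (a minimal sketch using the D65 white point): L* depends only on luminance Y, while a* and b* are opponent differences rather than per-receptor intensities.

```python
# Standard CIELAB conversion (D65 reference white), for illustration.

D65 = (95.047, 100.0, 108.883)  # reference white (Xn, Yn, Zn)

def _f(t):
    # piecewise cube root used by the CIELAB definition
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

def xyz_to_lab(x, y, z, white=D65):
    fx, fy, fz = (_f(c / w) for c, w in zip((x, y, z), white))
    L = 116 * fy - 16          # lightness: a function of Y only
    a = 500 * (fx - fy)        # green <-> red opponent axis
    b = 200 * (fy - fz)        # blue <-> yellow opponent axis
    return L, a, b

# The white point itself maps to L=100, a=0, b=0:
print(xyz_to_lab(*D65))  # (100.0, 0.0, 0.0)
```

Note a and b mix the X, Y, Z channels, so neither axis corresponds to a single cone type.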


In fact, to support your point, it is perhaps questionable from first principles whether 3 dimensions, not 4, is right. Leaving out tetrachromats and the (partially) color blind, normal human light perception involves 1 kind of rod and 3 kinds of cones (i.e. 4 photoreceptors, plus some light-sensitive ganglia that don't seem to participate in vision, but in diurnal regulation).

So, sure, this "4th dimension" (for normals) might be as simple as "candelas" - truly orthogonal, but one does hear an awful lot about "ambient" or "candela contrastive" (a term I just made up) kinds of effects. (EDIT: e.g. in color calibration of projectors in dark rooms vs. living rooms, for example, but I'm sure there are many.) I am just one person, but it feels like candela brightness matters for color perception. So, maybe luminous intensity is not actually exactly orthogonal. Maybe this is all covered in the 1931 CIE documents, though.


I have always been intrigued by the similarity of Italian naming conventions and that of the Arabs and Persians.

Resident of, son of, father of, family of. Leonardo of Pisa of the family of Bonacci being another well known one.

I suppose it is not specific to those cultures and was a more widespread convention.


Very interesting. Are there any technical hindrances that prevent Android from being the same?

Slightly out of my depth, hopefully others weigh in.

Getting a very good lockdown mode requires owning the entire stack (apps + OS + silicon), being willing to sacrifice repairability (swapping chips/cameras/displays/touch controllers is a good way to help hack into a phone), and being willing to spend a lot of money on something that few people would actually pay for. Apple is the only company that's even positioned to take on this challenge.

AndroidOS has to work with a bunch of core-functionality chips that Google/Samsung don't make. Having a bunch of different code paths/interfaces for a bunch of different SoCs, cellular modems, touch controllers, and cameras is not a winning recipe for security. Both Google and Samsung also use their own SoCs (Google Tensor G5, Samsung Exynos), but Samsung also uses a lot of Qualcomm Snapdragons ... and if you're using someone else's SoC there's no chance in hell of coming up with a proper "Lockdown Mode". Samsung or Google might be able to come up with a fully integrated solution someday; each has invested in parts of this. Beyond SoCs, Samsung has custom silicon that helps it lock down security for its combo touch/display controller. Samsung has also invested a lot into customizing its Knox Secure Folder solution (and everything else branded "Knox" as well, which is mostly industry-leading among Android options). Google has the Pixel with its own Titan M2 security chip, and obviously it owns the OS.

But it's a lot of work when so much of your engineering is dealing with changes that other companies are making. Google has to keep up with Samsung's hardware changes, because the tail wags the dog there, and Samsung spends a lot of engineering time figuring out how to deal with / customize / fork changes to AndroidOS that Google pushes (while the dog still wags the tail, too). Both have to deal with whatever Qualcomm throws at them for cellular modems, and it required a monumental effort/expense from Apple to only just recently bring up a replacement for Qualcomm's modems.


Thanks for replying. Such a comprehensive and well thought comment ought to have been a top standalone comment.

I don't think so. I use GrapheneOS, and I think I can't even use the USB-C port for anything other than charging (which should be configurable).

It is configurable. It can be used to charge (either way), for data transfer, or for remote control. You can set it up with a fixed behavior, or to request permission every time you plug in a data cable.

Yes, all Android phones except for GrapheneOS are vulnerable to something, so they'll just copy the flash storage and hand it back to you.

There was a good corporate bullshit generator posted here on HN, probably before ChatGPT became a thing. Can't seem to find it.

Love ? That's for plebs. The right thing is to leverage wholistic synergizing paradigms.


I find the technical side of the hobby very interesting, but the thought that it requires having a conversation with strangers, and synchronously at that, is a personal deterrent.

I don't know if there are others like me.

EDIT: Glad to see that there are others around. Happy to meet you. Async acks are great. So is the joy of engaging with something intellectually challenging.


Humans can be so different; it's fascinating and awesome. I'm quite the opposite. Whereas I wanted to talk to people from around the world and have some interesting conversations about everything, the fact that they are mostly "Hello, I'm calling from x, using radio y, and your signal is z. 73" is a big letdown for me. I somewhat like the technical side of it, but just calling to say "Hello, x, y, z, bye" ends up feeling like a waste of time to me (in the sense that I could be using this time for other, more interesting things; I still find it somewhat interesting though). But if you enjoy the technical aspect of it, then I think it's fine: you can focus more on CW, or the digital modes where the contacts are automated.

edit: I know there are ragchews all the time, but it's still mostly about equipment.


Oh I love meeting and knowing people from different parts of the world, have curiosity about their human stories, about how they think, what they find funny, their food, their music, their culture, their interests. Oh yes their food, music and humor.

That's one reason I love New York so much.

However my pace is much slower. I take my time.

On the technical side, the added attraction is to do some homebrewed amateur radio astronomy.


I've been licensed for a couple of years. All my QSOs are short and sweet, mostly activating and hunting POTA/SOTA contacts, where the activator prefers to have short QSOs in order to activate the park/summit. I have no interest in having a conversation with the other amateur, at most I would exchange what gear I'm using and what power I'm running. That is it.

I like building kits, QRP, CW, and building my own antennas. I only make contact with other people to be able to improve my skills and validate my gear.

My father has been licensed for nearly 50 years and loves the technical side of the hobby; I've made more contacts over the last 2 years than he has in 15. There are others like you.

Oliver, M7OCL


There are dozens of us, friend. I got my license for the challenge and learning that came with. Tuned in to signals near and far but never sent my own voice over the air.

Totally the same. I got my license 7 years ago just for fun. I challenge myself to have at least one QSO per year which always takes an hour of prepping myself mentally. But I love the technical side and am always happy to join practical meetups.

Yeah, I’ve got a UK Foundation licence but have never actually made a call - it was more of a “have transceiver might as well be legal to use it” thing.

(Also, HF antennas - just didn’t anticipate how difficult they were to set up properly)


I've made all of a single contact via CQ. The tech has taught me a lot and continues to teach, but listening to folks rag chew doesn't stay interesting for very long.

The Bible is quite permissive of killing if it's in the name of God. Genocide is quite a recurring theme.

Even God told Abraham to kill his own son. Like, really?

Don't worry, it was just a test of Abraham's loyalty. God was never going to let him kill Isaac. It's the perfect example of a completely ethical thing to do to another person...

Some religious people would be nodding along in agreement not realising this is satire.

That's for sure, it seems to be a pretty straightforward case of Poe's law

https://en.wikipedia.org/wiki/Poe%27s_law


Is it satire if there are no fools?

Not unlike a cartel head that rules by a mix of fear and gaslighting.

Many religious texts, not just the Bible, start making a lot of sense when looked at like psyops.


Yes exactly.

Golden rule does not need the existence of any god.

There are godless religions too that have strong ethical traditions. They are not religions in the Abrahamic sense.


Oops I commented the same before reading yours.
