1. understand weighted least squares and how you can update an initial estimate (prior mean and variance) with a new measurement and its uncertainty (i.e. inverse variance weighted least squares)
2. this works because the true mean hasn't changed between measurements. What if it did?
3. KF uses a model of how the mean changes to predict what it should be now based on the past, including an inflation factor on the uncertainty since predictions aren't perfect
4. after the prediction, it becomes the same problem as (1) except you use the predicted values as the initial estimate
There are some details about the measurement matrix (when your measurement is a linear combination of the true value -- the state) and the Kalman gain, but these all come from the least squares formulation.
Least squares is the key, and you can prove it's optimal under certain assumptions (e.g. it's the Bayesian MMSE estimator in the linear-Gaussian case).
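The steps above can be sketched in a few lines for a scalar state. This is a minimal illustration, not a reference implementation: the model F, process noise Q, and measurement variance R are made-up numbers, and the state is 1-D so the measurement matrix is just 1.

```python
def update(prior_mean, prior_var, meas, meas_var):
    # Step 1: inverse-variance weighted least squares. The Kalman gain K
    # is the measurement's share of the total precision, so the posterior
    # mean is the precision-weighted average of prior and measurement.
    K = prior_var / (prior_var + meas_var)
    post_mean = prior_mean + K * (meas - prior_mean)
    post_var = (1.0 - K) * prior_var
    return post_mean, post_var

def predict(mean, var, F=1.0, Q=0.1):
    # Step 3: propagate the mean through the model F, and inflate the
    # variance by the process noise Q since the prediction isn't perfect.
    return F * mean, F * var * F + Q

# Steps 2 and 4: when the mean can change, predict first, then treat the
# prediction as the prior for the same weighted-least-squares update.
mean, var = 0.0, 1.0              # initial estimate (prior mean and variance)
for z in [1.2, 0.9, 1.1]:         # measurements, each with variance R = 0.5
    mean, var = predict(mean, var)
    mean, var = update(mean, var, z, 0.5)
```

With a vector state, K and the variances become matrices and the measurement matrix H maps the state into measurement space, but the structure is the same least-squares update.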
I was expecting something about the morphological erosion operator but this was pretty cool.
Some of the techniques here seem to be motivated by physical processes (e.g. rain). I wonder if that could be taken further to derive the whole process?
Criticize gatekeeping all you want, but I feel it’s safer to recommend a Mac or iPhone to an older, non-technical person than the equivalent Windows / Android machine.
And I’m still able to install any app I want with minimal fuss.
Can anyone explain why Mamba models start with a continuous time SSM (and discretize) vs discrete time?
I know the step size isn't fixed, but I'm not sure why that's important. Is that the only reason? There also seems to be a parameterization advantage with the continuous formulation.