

Hi, I'm going through the textbook and it's really great! I'm on the chapter 5 section "Multivariate Normal Distributions" and found it hard to understand how adding covariance elements affects the resulting PDF, since the figure is static. So at the top I changed

`%matplotlib inline`

to `%matplotlib notebook`

and the figure became interactive! I could rotate the PDF to see more angles.
Not sure if it's worth changing, but I figured I'd make the suggestion.

Another suggestion: in the chapter 1 function `plot_estimate_chart_3()`, the residual line is black and hard to see. I updated my local copy to make it green, though something brighter like magenta could be better. Change line 107 of `code/gh_internal.py` to:

`ax.annotate('', xy=[1,159], xytext=[1,164.2], arrowprops=dict(arrowstyle='-', ec='g', lw=1, shrinkA=8, shrinkB=8))`

There is a tension that I am trying to balance with the charts - I want more interactivity, but some people use the PDF version of the book. The interactive charting software has not implemented the hooks that allow nbconvert (which converts the notebooks into LaTeX, which I then convert to PDF) to extract the graphs. I'm sort of thinking about parsing the notebooks prior to running nbconvert, replacing the interactive charts with boring old matplotlib, and then executing the cells.

Also, colors are tricky. I'm talking to publishers, and print will be black and white. I keep changing the plot settings to make the plots readable in B&W, but that seems like a bit of a waste of time - I can solve that problem when it comes time.

@rlabbe or anyone listening: Is there a way of getting the log-likelihood of a model on a set of data out from the Kalman filter class? Does the class handle missing data, e.g., by marking missing using Numpy's ma.array? Finally, aside from studying the code, does the implementation use the SVD way of calculation, per R's dlm (see https://hypergeometric.wordpress.com/2015/07/29/comprehensive-and-compact-tutorial-on-petris-dlm-package-in-r/), or square root filter, or some other way? Thank you!

Hi. Those are good additions for the library. Here's the current status:

Missing data is handled by setting `z=None`. If using batch_filter, you might call it with `kf.batch_filter(zs=[1., 2., 3., None, 5.])`. That is probably not 'canonical' Python behavior, and I will add it to the issues.
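For instance, a minimal untested sketch of the `None` convention, using a trivially configured 1D filter:

```
import numpy as np
from filterpy.kalman import KalmanFilter

# minimal 1D constant-position filter, just to illustrate the convention
kf = KalmanFilter(dim_x=1, dim_z=1)
kf.x = np.array([[0.]])   # initial state
kf.F = np.array([[1.]])   # state transition
kf.H = np.array([[1.]])   # measurement function
kf.P *= 10.               # initial uncertainty
kf.R *= 1.                # measurement noise

# None marks a missing measurement; update() is skipped for that epoch,
# so only the prediction is applied
zs = [1., 2., 3., None, 5.]
means, covs, _, _ = kf.batch_filter(zs)
```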

I am working on log-likelihood, and metrics like NEES, NIS, etc., for the next release of FilterPy.

I do not currently have an SVD filter. It is on the backlog.

The Kalman filter class uses the standard linear Kalman filter equations; this makes it more pedagogical in nature, though I have used it plenty of times in less demanding situations. The only concession I made to real world engineering is in the computation of P - the published (I-KH)P equation is unstable.
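For reference, a sketch of the kind of stabilized update I mean - the Joseph form, which preserves symmetry and positive-definiteness under round-off (variable names here are illustrative):

```
import numpy as np

def joseph_update(P, K, H, R):
    # Joseph-form covariance update: a numerically stable
    # replacement for the textbook P = (I - KH)P
    I_KH = np.eye(P.shape[0]) - np.dot(K, H)
    return np.dot(I_KH, P).dot(I_KH.T) + np.dot(K, R).dot(K.T)
```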

A square root filter is implemented by the class SquareRootKalmanFilter, in the filterpy.kalman module. Read the documentation carefully - this is more a reference implementation, and I have not used it in production. Brown suggests that square root filters are no longer needed with modern hardware unless P is going to vary by 20 orders of magnitude. His reasoning seems strong, but I do not have empirical evidence to back that up.

To round out the descriptions, there is also a fading memory and information filter implemented for the linear filters. I have an EKF and UKF, but not with the square root variants.

If you want to compute the log-likelihood yourself, you can. This link gives the equation for the computation: http://www.econ.umn.edu/~karib003/help/kalman_example1.htm. Their 'C_t' can be accessed with `kf.S` in my code after calling `update()`.
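In case that link goes stale: with innovation $y_t = z_t - H\bar{x}_t$ and innovation covariance $S_t = H\bar{P}_t H^\mathsf{T} + R$ (their $C_t$), the measurement log-likelihood for an $m$-dimensional measurement is

$$\log L_t = -\frac{1}{2}\left[\, m \log(2\pi) + \log\lvert S_t \rvert + y_t^\mathsf{T} S_t^{-1} y_t \,\right]$$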

Here is some code for the likelihood. I haven't really tested it:

```
import numpy as np
from numpy import dot, log, exp
import scipy.linalg as la
from scipy.stats import multivariate_normal

def gaus_pdf(X, M, S):
    # evaluate the multivariate Gaussian N(M, S) at X by hand
    DX = X - M
    d = M.shape[0]
    E = 0.5 * dot(DX.T, la.solve(S, DX))                  # Mahalanobis term
    E += 0.5 * d * log(2 * np.pi) + 0.5 * log(la.det(S))
    return exp(-E)

def kf_likelihood(x, P, z, H, R):
    IM = dot(H, x)                          # predicted measurement
    S = dot(H, P).dot(H.T) + R              # innovation covariance
    print(gaus_pdf(z, IM, S))               # hand-rolled, as a sanity check
    print(multivariate_normal.pdf(z, mean=IM, cov=S))
    return multivariate_normal.pdf(z, mean=IM, cov=S)
```

Here is a simpler version, letting scipy do all the work (also not really tested):

```
import numpy as np
from scipy.stats import multivariate_normal

def likelihood(x, P, z, H, R):
    IM = np.dot(H, x)                   # predicted measurement
    S = np.dot(H, P).dot(H.T) + R       # innovation covariance
    return multivariate_normal.pdf(z, mean=IM, cov=S)
```
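A toy usage example with a 1D state (position only); the numbers are made up:

```
import numpy as np

x = np.array([10.])     # state mean
P = np.array([[4.]])    # state covariance
H = np.array([[1.]])    # measurement function
R = np.array([[1.]])    # measurement noise
z = np.array([11.])     # measurement

print(likelihood(x, P, z, H, R))
```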

When I install the package filterpy, I get the following error:

C:\WINDOWS\system32>conda install --channel https://conda.anaconda.org/phios filterpy

Fetching package metadata: ......
Solving package specifications: ..........

Error: Unsatisfiable package specifications.
Generating hint:
[ COMPLETE ]|##################################################| 100%

Hint: the following packages conflict with each other:

- filterpy
- python 3.5*

Use 'conda info filterpy' etc. to see the dependencies for each package.

Note that the following features are enabled:

- vc14

I want to know whether filterpy can be installed with Python 3.5, and how I can solve this problem. Thanks.

Sorry, I just saw this. I don't know who phios is; it is some random person who put filterpy on conda. filterpy works in Python 3.5 - I do all development on the latest releases of Python 3. Try

`pip install filterpy`

instead; that should work. Recently I tried to get filterpy working with conda; `conda install filterpy` might work for Windows, but I have reports that it doesn't work for Mac and Linux yet. I need to put more effort into this.
Just before I forget, here is the first one I spotted: "But sitting down and *trying* to read many of these books is a dismal and trying experience if you do not have the necessary background." [chapter 00]. I think you meant *tiring*.

Just noticed that I marked the wrong *trying* - I meant the second one, in *dismal and trying experience*. Let me know if I should continue doing that or not :). Thanks for the great book!

Yes, that is how most people have done it. Once or twice we have had issues where just about every line is for some reason marked as a change (I suspect line-ending differences between Windows and Linux), and we end up with something like 50,000 changed lines. Of course I had to reject those, because there was no reasonable way for me to review the code/wording before accepting. Just keep the repo up to date and there should be no problem. My pace of change has really slowed down, so that shouldn't be an issue.

Hi rlabbe, just to let you know: it seems that filterpy is missing again from your Binder build (http://app.mybinder.org/2637687816/notebooks/03-Gaussians.ipynb)

Hi Roger! Thanks for this book; so far my read through it has been very helpful. There is one concept that I am having trouble understanding, related to a problem I am working on: how does the UKF deal with control inputs to the system? The system I am looking at has a fairly simple A matrix (diagonal, mostly unity), with a full B matrix and a nonlinear measurement function. Does the UKF deal with this well?
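One common way to handle this with FilterPy's UKF is to fold the control into the state-transition function, since recent versions of `predict()` forward extra keyword arguments to `fx`. A rough, untested sketch with illustrative matrices:

```
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

A = np.diag([1., 0.9])        # simple, mostly-unity state transition
B = np.array([[0.5], [1.0]])  # full control matrix

def fx(x, dt, u=0.):
    # linear dynamics with a control input; the UKF does not care
    # that fx happens to be linear
    return np.dot(A, x) + (B * u).ravel()

def hx(x):
    # nonlinear measurement, e.g. range to the origin
    return np.array([np.sqrt(x[0]**2 + x[1]**2)])

points = MerweScaledSigmaPoints(n=2, alpha=.1, beta=2., kappa=1.)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=0.1,
                            fx=fx, hx=hx, points=points)

ukf.predict(u=1.0)            # extra kwargs are passed through to fx
ukf.update(np.array([1.2]))
```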

I want to run code from the book for robot motion using the EKF, but I get this error:

File "/usr/local/lib/python2.7/dist-packages/scipy/_lib/_util.py", line 231, in _asarray_validated
    raise ValueError('object arrays are not supported')
ValueError: object arrays are not supported

What should I do?

@rlabbe: I am reading through your book online and so far it has been amazing, thanks for publishing all this!

Although I have not read all the chapters yet, it came to my understanding that outliers (measurements containing no information) are pretty bad for a 'standard' Kalman filter. Just by searching through the PDF version I didn't find any mention of outliers. So I was wondering if you could point me to any resources that introduce outlier handling within Kalman filtering? Of course I googled, but as a beginner I cannot really judge the quality of all the different results I get back...

I intend to add a section on just this topic in a few days. In the meantime, I suggest looking up 'Mahalanobis distance', which is a measure of how far a measurement is from the KF's prior. You can use this to 'gate' your data - discard data that is "too far away". Theory says throw away anything > 3 std, but in practice you may find 4, 5, or even 6 std to be a better gating distance.

If you throw the data away, you just don't call update for that time period. You will thus call predict twice in a row, and your estimate will gain uncertainty because of the consecutive predictions.

That's the general idea. "kalman filter gating" is also a fruitful search term.
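To make the gating concrete, here is a small untested sketch written against FilterPy's `KalmanFilter` attributes (`kf.x`, `kf.P`, `kf.H`, `kf.R`):

```
import numpy as np
import scipy.linalg as la

def gated_update(kf, z, gate_std=4.):
    # call kf.update(z) only if z falls within gate_std standard
    # deviations (Mahalanobis distance) of the predicted measurement
    y = z - np.dot(kf.H, kf.x)                  # innovation
    S = np.dot(kf.H, kf.P).dot(kf.H.T) + kf.R   # innovation covariance
    d = np.sqrt(float(np.dot(y.T, la.solve(S, y))))
    if d < gate_std:
        kf.update(z)
    # else: discard z; the next predict() simply adds uncertainty
```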

I wish to congratulate you for your excellent book on Kalman and Bayesian filters.

It is clear, didactic and well-documented.

I had to build my very first Kalman filter in a quite complex configuration (7 state variables, strong non-linearities, and a very low signal-to-noise ratio).

Your book brought me tremendous help in doing that - although, as a Scilab user, I found the recourse to Python more troublesome than helpful.

It is rare to find such a thorough, simple, user-oriented while scientifically sound presentation of the Kalman filter.

I think you are a born pedagogue.

Lots of thanks.

@rlabbe I just started working through the Jupyter notebooks. Your intuitive approach to the subject matter is very refreshing. I do have a question/observation about the material in Chapter 3... I find the discussion of the product vs sum of "Gaussians" a bit confusing. It seems that you are discussing the sum of Gaussian random "variables" and the product of Gaussian probability "distributions". The sum of two independent Gaussian random variables is also Gaussian-distributed. The product of two Gaussian random variables is not, in general, Gaussian-distributed.

Having now made it through Chapter 4... I think the source of the confusion is that, in both cases, we are really talking about operations on Gaussian "distributions" rather than random "variables" . The mathematical operation involved in the "prediction" step is really a convolution, rather than a sum, of Gaussian "distributions", which can be shown to be a Gaussian "distribution" with mean and variance as described in Chapter 3. At least, that's what I think after reading Chapter 4... Looking forward to further enlightenment in the upcoming chapters... :-)