Roger Labbe
Here is some code for likelihood. Haven't really tested it
import numpy as np
from numpy import log, exp
import scipy.linalg as la
from scipy.stats import multivariate_normal

def gaus_pdf(X, M, S):
    # evaluate the multivariate Gaussian N(M, S) at X
    DX = X - M
    E = 0.5 * np.dot(DX.T, la.solve(S, DX))  # Mahalanobis term
    d = M.shape[0]
    E += 0.5 * d * log(2 * np.pi) + 0.5 * log(la.det(S))
    return exp(-E)

def kf_likelihood(x, P, z, H, R):
    IM = np.dot(H, x)              # predicted measurement
    S = np.dot(H, P).dot(H.T) + R  # innovation covariance
    print(gaus_pdf(z, IM, S))      # sanity check against scipy
    print(multivariate_normal.pdf(z, mean=IM, cov=S))
    return multivariate_normal.pdf(z, mean=IM, cov=S)
Here is code for likelihood that I haven't really tested
import numpy as np
from scipy.stats import multivariate_normal

def likelihood(x, P, z, H, R):
    IM = np.dot(H, x)              # predicted measurement
    S = np.dot(H, P).dot(H.T) + R  # innovation covariance
    return multivariate_normal.pdf(z, mean=IM, cov=S)
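A quick usage sketch of that likelihood function (the 1-state numbers here are made up for illustration, not from the book):

```python
import numpy as np
from scipy.stats import multivariate_normal

def likelihood(x, P, z, H, R):
    # likelihood of measurement z given the prior (x, P)
    IM = np.dot(H, x)              # predicted measurement
    S = np.dot(H, P).dot(H.T) + R  # innovation covariance
    return multivariate_normal.pdf(z, mean=IM, cov=S)

# hypothetical 1-state, 1-measurement filter
x = np.array([2.0])    # state estimate
P = np.array([[1.0]])  # state covariance
H = np.array([[1.0]])  # measurement function
R = np.array([[0.5]])  # measurement noise
z = np.array([2.3])    # measurement

L = likelihood(x, P, z, H, R)  # N(2.3; mean 2.0, var 1.5)
```

A measurement close to the predicted measurement gives a high likelihood; one far away (relative to S) gives a likelihood near zero.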
Roger Labbe

FilterPy 0.0.26 changes:

  • Added likelihood and log-likelihood to the KalmanFilter

  • Added an MMAE filter bank class.

  • Added function to compute NEES

Soheil Yasrebi
Roger Labbe
FilterPy 0.0.27 changes:
  • Added function to compute the update in the presence of correlated process and measurement noise.
  • Added IMM filter.
  • Added tests for IMM and MMAE filters.
  • Added display of semi-axes for covariance ellipses.
  • Various bug fixes.

When I tried to install the package filterpy, I got the following error:
C:\WINDOWS\system32>conda install --channel https://conda.anaconda.org/phios filterpy
Fetching package metadata: ......
Solving package specifications: ..........
Error: Unsatisfiable package specifications.
Generating hint:
[ COMPLETE ]|##################################################| 100%

Hint: the following packages conflict with each other:

  • filterpy
  • python 3.5*

Use 'conda info filterpy' etc. to see the dependencies for each package.

Note that the following features are enabled:

  • vc14
I want to know whether filterpy can be installed with Python 3.5 and how I can solve this problem. Thanks.
Roger Labbe
Sorry, I just saw this. I don't know who phios is; it's some random person who put filterpy on conda. filterpy works in Python 3.5 - I do all development on the latest releases of Python 3. Try pip install filterpy instead; that should work. Recently I tried to get filterpy working with conda; conda install filterpy might work for Windows, but I have reports it doesn't work for Mac and Linux yet. I need to put more effort into this.
Mateusz Sadowski
hi @rlabbe! Just started reading your book and it looks great so far! A quick question: would you like me to let you know about any typos I find and if so is this chat room the right place?
Just before I forget here is the first one I spotted: But sitting down and trying to read many of these books is a dismal and trying experience if you do not have the necessary background. [chapter 00]. I think you meant tiring
Just noticed that I marked the wrong trying, I meant the next one dismal and trying experience . Let me know if I should continue doing that or not :). Thanks for the great book!
Roger Labbe
Hi @msadowski , sure, typo reports are great! You can do it here or type up a github issue, whichever is easier.
Mateusz Sadowski
would it make your life easier if I corrected them as I go and then make commits chapter by chapter and then make pull requests for the changes? I would be quite happy to do that since I'm planning to read the book 'cover to cover'
Roger Labbe
Yes, that is how most people have done it. Once or twice we have had issues where just about every line is for some reason marked as a change (I suspect line-ending differences between Windows and Linux), and we end up with something like 50,000 lines changed. Of course I had to reject those because there was no reasonable way for me to review the code/wording before accepting. Just keep the repo up to date and there should be no problem. My pace of change has really slowed down, so that shouldn't be an issue.
Mateusz Sadowski
Hi rlabbe, just to let you know: it seems that filterpy is missing again from your binder build (http://app.mybinder.org/2637687816/notebooks/03-Gaussians.ipynb)
Roger Labbe
sorry. just saw this. I seem to have it fixed now
Adam Milner
Hi Roger! Thanks for this book, so far my read through it has been very helpful. There is one concept that I am having problems understanding related to a problem I am working on. How does the UKF deal with control inputs to the system? The system I am looking at has a fairly simple A matrix (diagonal, mostly unity), with a full B matrix and a non-linear measurement function. Does the UKF deal with this well?
Does anyone know of smoothing techniques that can be used with an EKF? It looks like all the smoothing techniques in FilterPy are for linear problems. Am I right?
Suresh Sharma
Hi @rlabbe! Just getting started with your book, thanks for your work!
I want to run some code about robot motion using an EKF. The code is in the book, but I get this error:
File "/usr/local/lib/python2.7/dist-packages/scipy/_lib/_util.py", line 231, in _asarray_validated
raise ValueError('object arrays are not supported')
ValueError: object arrays are not supported
What should I do?
@rlabbe: I am reading through your book online and so far it has been amazing, thanks for publishing all this!
Although I did not read all the chapters yet, it came to my understanding that outliers (measurements containing no information) are pretty bad for a 'standard' Kalman filter. Just by searching through the pdf version I didn't find any mention of outliers. So I was wondering if you could point me to any resources that introduce outlier handling within Kalman filtering? Of course I googled, but as a beginner I cannot really judge the quality of all the different results I get back...
Roger Labbe
@maluethi , sorry, I haven't logged in here for a long time.
I intend to add a section on just this topic in a few days. In the meantime, I suggest looking up 'Mahalanobis distance', which is a measure of how far a measurement is from the KF's prior. You can use this to 'gate' your data - discard data that is "too far away". Theory says throw away anything > 3 std, but in practice you may find 4, 5, even 6 std to be a better gating distance.
If you throw the data away, you just don't call update for that time step. You will thus call predict twice in a row, and your estimate will gain uncertainty because you did two predictions in a row.
That's the general idea. "kalman filter gating" is also a fruitful search term.
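That gating idea can be sketched like this (a minimal sketch with made-up numbers; `mahalanobis` here is hand-rolled for illustration, not FilterPy's implementation):

```python
import numpy as np
from scipy.linalg import solve

def mahalanobis(z, z_pred, S):
    # distance of measurement z from the predicted measurement z_pred,
    # in units of standard deviations of the innovation covariance S
    dz = np.atleast_1d(z) - np.atleast_1d(z_pred)
    return float(np.sqrt(dz @ solve(S, dz)))

GATE = 4.0                 # theory says 3 std; practice often prefers 4-6
z_pred = np.array([10.0])  # measurement predicted from the KF prior
S = np.array([[0.25]])     # innovation covariance

d_in = mahalanobis(np.array([10.2]), z_pred, S)   # ≈ 0.4: inside the gate, run update()
d_out = mahalanobis(np.array([14.0]), z_pred, S)  # ≈ 8.0: outside the gate, predict() only
```

When a measurement is gated out you simply skip update() for that step, exactly as described above, so the filter's uncertainty grows until a good measurement arrives.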
Roger Labbe
@maluethi the book is updated with a section on outliers. See Chapter 8.
@rlabbe Thanks a lot for letting me know. I will certainly look into the new chapter!
I wish to congratulate you for your excellent book on Kalman and Bayesian filters.
It is clear, didactic and well-documented.
I had to build my very first Kalman filter in a quite complex configuration (7 state variables, strong non-linearities and very low signal to noise ratio).
Your book has brought me tremendous help in doing that (although, as a Scilab user, I have found the recourse to Python more troublesome than helpful).
It is rare to find such a thorough, simple, user-oriented while scientifically sound presentation of the Kalman filter.
I think you are a born pedagogue.
Lots of thanks.
Miguel Oyarzun
@rlabbe I just started working through the Jupyter notebooks. Your intuitive approach to the subject matter is very refreshing. I do have a question/observation about the material in Chapter 3... I find the discussion of the product vs sum of "Gaussians" a bit confusing. It seems that you are discussing the sum of Gaussian random "variables" and the product of Gaussian probability "distributions". The sum of two independent Gaussian random variables is also Gaussian-distributed. The product of two Gaussian random variables is not, in general, Gaussian-distributed.
Miguel Oyarzun


Having now made it through Chapter 4... I think the source of the confusion is that, in both cases, we are really talking about operations on Gaussian "distributions" rather than random "variables" . The mathematical operation involved in the "prediction" step is really a convolution, rather than a sum, of Gaussian "distributions", which can be shown to be a Gaussian "distribution" with mean and variance as described in Chapter 3. At least, that's what I think after reading Chapter 4... Looking forward to further enlightenment in the upcoming chapters... :-)
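That distinction can be checked numerically (a Monte Carlo sketch with made-up parameters, not from the book): summing independent Gaussian random variables convolves their distributions, and the result is Gaussian with the means and variances added.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(1.0, 2.0, 200_000)  # N(1, 2^2): variance 4
b = rng.normal(3.0, 1.5, 200_000)  # N(3, 1.5^2): variance 2.25
s = a + b                          # convolution of the two distributions

# expect mean ≈ 1 + 3 = 4 and variance ≈ 4 + 2.25 = 6.25
print(s.mean(), s.var())
```

The product of the two pdfs, by contrast, is proportional to a narrower Gaussian, which is exactly what the update step exploits.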

Thank you for writing this book. I hope it is valuable to students, as I wish that I had it when I was a student.
Douglas Daly
Wow - what an amazing book! You offer a very clear and conceptual approach to these topics without glossing over the foundations. And of course, presenting as Jupyter notebooks is a huge help and removes the mysteries of tuning parameters and such.
Thank you very much for providing the library and the book, both have been of immense value for me to understand how Kalman filters work.
Is there anyone here interested in forming a study group for this book? I'm working my way through it and understand it for the most part, but think studying together would make learning better.
Rachel Cohen Yeshurun
Wow, just amazing notebooks, I hope one day to work through all of this. In the meantime, this notebook series is my inspiration for my own work. Question for you @rlabbe, or anyone who knows: I haven't been able to find out how to show all output on startup. All your notebooks, even when rendered on Binder, show up with all the plots rendered and no cell output numbers. It looks so clean; how do you do that??? My cells just show the code. Markdown is rendered, but code is not run, even though I ran all cells and saved before committing.
Hi @rlabbe, just starting to read your book (in chapter 1). It is FANTASTIC, I am learning so much, it is a delight! About filters, yes, but I also didn't know I could run Jupyter notebooks on Azure and Binder. Very cool! THANK YOU so much!!
Hi @rlabbe , I'm trying to implement a particle filter (SIR) like the one in chapter 12, but I'm having difficulty with it. I'm using it for bearings only tracking of multiple targets to differentiate between true and false targets, and I haven't been able to adapt that code to work without landmarks. Do you have any advice?
Hi @rlabbe, you say the difference between the measurement and the prediction is called the residual; I don't quite agree with you. The difference between the measurement and the prediction is called the innovation; the difference between the measurement and the posterior is called the residual.
@rlabbe Nit: Chapter 3.2 is titled "Mean, Variance, and Standard Deviation", however none of these are mentioned in 3.2. Suggest 3.2 be renamed to "Random Variables" and 3.2.1 be removed.
@rlabbe Dear author, I have a question: if the measurement function in the state-space model is unknown and treated as a parameter, which method can be used to estimate it?
Prashant Dandriyal
Hi guys.
Prashant Dandriyal

In chapter 6: Multivariate Kalman Filters, the part

what if x = ẋΔt? (set F00 to 0, the rest at defaults)

leads to the following plot of P = FPFᵀ.

But I can't get the idea behind it, I mean how is the variance for x (position) reduced to a minimum? On using x0 = 0, the resulting matrix for P is:

P = [[σ_ẋ²Δt²  σ_ẋ²Δt]
     [σ_ẋ²Δt   σ_ẋ²  ]]

Prashant Dandriyal
Can anybody help?
Prashant Dandriyal

Hi @rlabbe, I came across the part "Stable Computation of the Posterior Covariance" and couldn't understand how the covariance derived from the Joseph equation mitigates a non-optimal K (Kalman gain). I see that as possible only when a different method is used to compute K.

Can you please help?
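For context, the Joseph form that section discusses can be sketched as follows (a minimal sketch, not the book's exact code). It remains a valid, symmetric covariance for any gain K, whereas the shortcut P = (I - KH)P holds only for the optimal gain:

```python
import numpy as np

def joseph_update(P, K, H, R):
    # Joseph form of the posterior covariance:
    #   P' = (I - KH) P (I - KH)^T + K R K^T
    # numerically stable and correct even for a suboptimal gain K
    I_KH = np.eye(P.shape[0]) - K @ H
    return I_KH @ P @ I_KH.T + K @ R @ K.T

# 1-state example: with the optimal gain both forms agree
P = np.array([[1.0]])
H = np.array([[1.0]])
R = np.array([[0.5]])
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)     # optimal gain = 2/3
P_post = joseph_update(P, K, H, R) # ≈ [[1/3]], same as (I - KH) P here
```

The point is that the Joseph form propagates whatever K you actually used, so a rounding-corrupted or deliberately detuned gain still yields a consistent covariance.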

Shiladitya Biswas
@rlabbe in the g-h filter chapter, when we introduce a change in the gain_rate as well, why does the update step use only the residual? That is, why is the gain_rate equation given by "gain_rate = gain_rate + gain_scale * (residual/time_step)", and not "gain_rate = gain_rate + gain_scale * ((residual - gain_rate)/time_step)"? Shouldn't the residual give us the measured gain_rate (from the sensor), while we already have our previous gain_rate as the prediction, and we should use both of these to update the next gain_rate?
Please clarify.
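For reference, the update step being discussed can be sketched like this (assumed variable names, not the book's exact code); note that the residual is computed against the prediction, so it already accounts for the current gain_rate:

```python
def gh_update(x_est, gain_rate, z, g, h, dt):
    # one step of a g-h filter with a varying gain_rate
    x_pred = x_est + gain_rate * dt   # predict forward one step
    residual = z - x_pred             # measured against the prediction
    # the equation in question: the raw residual drives both updates
    gain_rate = gain_rate + h * (residual / dt)
    x_est = x_pred + g * residual
    return x_est, gain_rate

# example with made-up numbers
x, rate = gh_update(x_est=0.0, gain_rate=1.0, z=2.0, g=0.5, h=0.1, dt=1.0)
```

Because residual = z - x_pred and x_pred already contains gain_rate * dt, dividing the residual by dt gives the error in the rate estimate itself, which is why gain_rate is not subtracted a second time.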