
Hi @dlast-dstl @sglvladi, my name is Jason. I am a student at Carleton University and I am working alongside Dr. Balaji @bhashyambalaji from the DRDC. When you are free, could you add me as a code reviewer for the Run Manager and Matlab wrapper pull requests on the Stone Soup GitHub?

I'm working on a Gromov Particle Flow Filter, but having issues with Cholesky decomposition for some not quite positive-definite matrices. I've implemented a version of Cholesky decomposition that tries to handle this (will be pushing to GitHub soon), but still having some issues. What common techniques are there to approach this issue?

@sdhiscocks I'm not familiar with this filter. Is the matrix symmetric? If not, you could add the matrix to its conjugate transpose and divide by 2. Making the matrix symmetric at least guarantees real eigenvalues. To force it to be positive definite, you diagonally load the matrix, that is, add eps*eye(N), where eps is some small number. I wouldn't say this is my area, but those are some quick ideas.
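A minimal numpy sketch of both suggestions (symmetrise, then diagonally load before factorising); the function name and default `eps` are my own choices, not Stone Soup code:

```python
import numpy as np

def loaded_cholesky(A, eps=1e-9):
    """Symmetrise A and diagonally load it before attempting Cholesky.

    Sketch of the ideas discussed above; eps is an arbitrary small value.
    """
    A_sym = (A + A.T) / 2                        # remove any asymmetry
    A_loaded = A_sym + eps * np.eye(A.shape[0])  # diagonal loading
    return np.linalg.cholesky(A_loaded)          # lower-triangular factor
```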

@DaveKirkland I did try diagonally loading, but seemed to get similar results to the current approach. I'm not sure if the issue I've noticed is caused by the Cholesky decomposition, to be honest. I've created #132, which includes this and the particle flow updater. The particle flow updater can be used the same as in the particle filter examples, just swapping out the updater (the existing particle predictor can be used).

@sdhiscocks You might see if you could use the LDLt decomposition (see scipy.linalg.ldl()). It should work for positive semi-definite matrices and avoid having to maintain our own Cholesky code. You would probably have to handle the cases where elements of the diagonal matrix are either zero or negative. If the matrix is symmetric, then you should only have to handle the zero case, unless there's some numerically pathological case that leads to negative values.
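For reference, a small example of what scipy.linalg.ldl() returns; the matrix here is a made-up positive semi-definite (singular) one that a plain Cholesky routine can reject:

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric positive semi-definite but singular (rank-1) matrix,
# which numpy.linalg.cholesky may reject as not positive definite.
A = np.array([[4.0, 2.0],
              [2.0, 1.0]])

lu, d, perm = ldl(A, lower=True)  # A == lu @ d @ lu.T
print(np.diag(d))  # one entry is zero: the semi-definite direction
```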

@snaylor20 I was taking a brief look at your switching model code. Have you dealt with the case of models of different orders or types, and mixing/converting the state vectors between the models? For example, when switching from a CV model to CA you need to add states, and conversely you need to drop states when going from CA to CV. It gets more complicated when switching from Coordinated Turn to CA or CV.

We have a similar issue in the IMM, GPB1 and GPB2 that we are looking at. We also have to handle the case of a user adding their own motion models that we don't know about.

@sdhiscocks You could find the LDL decomposition, then make adjustments to the D matrix, i.e. convert the zero elements to a small positive quantity, then create a new Cholesky decomposition. Set $Q = BB^T = LDL^T$, where $B = LD^{1/2}$. In this case it might make more sense to set the square root to a small number rather than take the square root of a small number.

I'm not sure how this would compare to what you're doing now; it just keeps the main numerical part in the scipy library and adds a bit of extra logic around it.
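A sketch of that recipe, assuming the D returned by scipy.linalg.ldl comes back truly diagonal (no 2x2 pivot blocks, which holds for the symmetric near-PSD case discussed here); the function name and floor value are arbitrary:

```python
import numpy as np
from scipy.linalg import ldl

def ldl_sqrt_factor(Q, floor=1e-12):
    """Return B with B @ B.T ~= Q, via Q = L D L^T and B = L D^{1/2}.

    Per the note above, for non-positive entries of D the square root
    itself is set to a small floor, rather than taking the square root
    of a small number. Assumes D is diagonal (no 2x2 pivot blocks).
    """
    lu, d, _ = ldl(Q, lower=True)
    diag = np.diag(d)
    sqrt_diag = np.where(diag > 0, np.sqrt(np.clip(diag, 0, None)), floor)
    return lu @ np.diag(sqrt_diag)
```

For an exactly singular Q, B @ B.T then differs from Q only by the floor value.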

@DaveKirkland Not at the moment, I wasn't sure how to deal with it. There's a quick bodge for it, whereby if you want to go from CV to CA, for example, you would use a CV model with a random walk bolted on the end, which fixes the difference in dimension. This obviously introduces some large inaccuracies, owing to the fact that the acceleration should be 0, not kept at whatever it was post-acceleration. A more complicated workaround would be to have an intermediate Singer model with a negative decay constant, and another one with a positive decay constant, to go between CV and CA. So, for example, a transition from CV to CA and back to CV would look something like this: CV => SING+ => CA => SING- => CV.

So I think the issue I'm having in the particle flow filter is that I'm calculating the prior covariance from the particles, and this is then causing an issue with the calculation of $Q$. It seems the common approach is to run an EKF/UKF in parallel, using the resultant covariance but replacing the state with the mean of the particles.
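For context, the prior covariance here is the weighted empirical covariance of the particle set, which is what can end up numerically rank-deficient; a minimal sketch (the function name is mine, not the Stone Soup API):

```python
import numpy as np

def particle_covariance(particles, weights):
    """Weighted empirical covariance of a particle set.

    particles: (n, d) array of particle states; weights: (n,) normalised.
    With few distinct particles this can be rank-deficient, hence the
    Cholesky trouble discussed above.
    """
    mean = weights @ particles              # (d,) weighted mean
    centred = particles - mean
    return (weights[:, None] * centred).T @ centred
```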

@snaylor20 Ok, thanks. At this point I'm more concerned with how to handle the conversion of state vectors and covariance matrices between the different models. I think it's fairly straightforward when you know beforehand what models you are going to use. Trying to handle it in a generic way is more problematic.

@sdhiscocks For the IMM, GPB1 etc. I need the user to supply conversion routines to convert to/from the different state vectors & covariance matrices. I think this has to be done at the transition model level. I was thinking of adding these functions as properties to the transition models. The conversion routines would then be called inside the IMMpredictor class as needed. If the user didn't define the functions, they would just default to a pre-defined function (the identity transform).

I thought of providing a conversion function to the IMMpredictor class, but then I don't think we can distinguish between different types of models. For example, if a user creates multiple instances of CombinedLinearGaussianTransitionModel, then we can't distinguish between these different models.

Are you ok with this design? Comments?
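A hypothetical sketch of that design (the names and the 1-D CV-to-CA conversion are made up for illustration, not the Stone Soup API): each transition model would carry conversion functions defaulting to the identity.

```python
import numpy as np

def identity_transform(state_vector, covar):
    """Default conversion: leave state and covariance unchanged."""
    return state_vector, covar

def cv_to_ca_1d(state_vector, covar):
    """Illustrative 1-D conversion: CV state [x, vx] -> CA state [x, vx, ax].

    The unknown acceleration gets zero mean and a large variance
    (the 1e3 here is an arbitrary placeholder).
    """
    sv = np.vstack([state_vector, [[0.0]]])
    cov = np.zeros((3, 3))
    cov[:2, :2] = covar
    cov[2, 2] = 1e3
    return sv, cov

# A predictor could then look up a model's conversion property and
# fall back to identity_transform when the user supplies nothing.
```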

@DaveKirkland I guess you'll need some form of mapping, like we have used for the state-to-measurement-space mapping in the measurement models, but in this case mapping the different state spaces of each transition model to some sort of "super" state space? If you're looking at some sort of multi-transition model, you could provide it with a list of models, so that the list indices allow you to distinguish the different models. I think having some special multi-model class probably makes sense, as you could then use it for simulation, like in the classes that @snaylor20 created.