ap
@apiszcz
Is it possible that a perfect set of horizontal or vertical detections would cause this type of error? dstl/Stone-Soup#387
ap
@apiszcz
Disregard 2/25/2021 14:07 post
ap
@apiszcz
Thank you for 0.1b5
Steven Hiscocks
@sdhiscocks
Interesting Drone tracking challenge on Kaggle, which we've created a Stone Soup example for (see in code section).
ap
@apiszcz
How can I pass in additional arguments or objects to my own version of the detections_gen method?
```
@BufferedGenerator.generator_method
def detections_gen(self):
```
Steven Hiscocks
@sdhiscocks
So the detections_gen() method is designed to be called when you iterate over an instance of the class it belongs to, so you typically wouldn't pass arguments to it. The best approach is to add additional attributes/properties to the class, and then access them via self in detections_gen().
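The pattern described above can be sketched in plain Python (the class and attribute names here are illustrative, not Stone Soup API, and the `@BufferedGenerator.generator_method` decorator is omitted so the sketch is self-contained):

```python
# Hypothetical sketch: extra configuration is stored as attributes on
# the reader class, then read via self inside the generator method.
class MyDetectionReader:
    def __init__(self, path, threshold):
        self.path = path            # extra argument stored as an attribute
        self.threshold = threshold  # accessed via self in detections_gen

    # In Stone Soup this method would additionally be wrapped with
    # @BufferedGenerator.generator_method
    def detections_gen(self):
        for value in range(5):           # stand-in for reading self.path
            if value >= self.threshold:  # use the attribute via self
                yield value

reader = MyDetectionReader(path="data.csv", threshold=3)
print(list(reader.detections_gen()))  # [3, 4]
```

Iterating the reader then picks up whatever configuration was set at construction time, with no per-call arguments needed.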
ap
@apiszcz
thanks, will do.
ap
@apiszcz
Interested in testing the possibility of speeding up large data sets with respect to class Mahalanobis(Measure). One function that dominates the runtime is np.linalg.inv(cov). CovarianceMatrix is a class with testing for 2 dimensions; is there a preferred method to convert a CovarianceMatrix to a standard numpy ndarray?
ap
@apiszcz
This works. Testing the function with CuPy (lots of overhead to move to the GPU and back):
```
(Pdb) cp.linalg.inv(cp.array(cov))
array([[3.30009659e-07, 0.00000000e+00],
       [0.00000000e+00, 3.30009659e-07]])

(Pdb) np.linalg.inv(cov)
CovarianceMatrix([[3.30009659e-07, 0.00000000e+00],
                  [0.00000000e+00, 3.30009659e-07]])
```
ap
@apiszcz
Update:
```
vi = CovarianceMatrix(cp.linalg.inv(cp.array(cov)).get())
```
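On the conversion question itself: since CovarianceMatrix is a numpy ndarray subclass, `np.asarray` (which drops subclass information) gives a plain ndarray view without copying. A self-contained sketch, using a stand-in subclass rather than the real Stone Soup class:

```python
import numpy as np

# Stand-in for Stone Soup's CovarianceMatrix (an ndarray subclass),
# so this sketch runs without Stone Soup installed.
class CovarianceMatrix(np.ndarray):
    pass

cov = np.diag([3.3e-07, 3.3e-07]).view(CovarianceMatrix)

plain = np.asarray(cov)     # np.asarray drops the subclass: plain ndarray view
# equivalently: plain = cov.view(np.ndarray)
inv = np.linalg.inv(plain)  # operating on the view returns a plain ndarray

print(type(plain) is np.ndarray, type(inv) is np.ndarray)  # True True
```

This avoids the subclass round-trip entirely, whereas calling `np.linalg.inv` on the CovarianceMatrix directly preserves the subclass, as shown in the pdb session above.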
ap
@apiszcz
CuPy has a stream interface, and hypothesizer\distance.py is where the detection iteration occurs. Perhaps the CuPy stream can be initiated there?
ap
@apiszcz
Is this the proper parameter assignment to delete the last point if it is a prediction? Initial tests are not deleting the last predicted point. I am using the composite deleter.
```
CovarianceBasedDeleter(covar_trace_thresh=del1, delete_last_pred=True)
```
I see, this is a new feature in 0.1b5; I'm running b4. Thanks for this new capability!
oharrald-Dstl
@oharrald-Dstl
Hi ap, yes that's correct. Have you tried it on b5?
ap
@apiszcz
I just turned it on for all three deleters, however I am still seeing the predicted points at the end of the track. I'm sure it is me; I'm also using the composite deleter and did not add it there. Do I need to add it there? The answer, according to the great documentation, is 'yes', so I'll do that in the next test. I'm speculating I ONLY need the parameter on the setup for the composite deleter, and not on the individual deleter setups prior.
I'm interested in the gater setup and use too; I need to review this. Again, thanks!
ap
@apiszcz
I see the Gating KDTree changes; will this impact performance in a good way? Any benchmarks/test runs on speed?
Steven Hiscocks
@sdhiscocks
@ap I was just cleaning up the branch. Ran out of time to merge. Should be merged this week, and also we'll be doing another release.
Jonas Åsnes Sagild
@jonassagild
Hi. Does anybody have a source for the implemented track-to-track data associator (https://stonesoup.readthedocs.io/en/v0.1b5/stonesoup.dataassociator.html#stonesoup.dataassociator.tracktotrack.TrackToTrack)? I've tried in vain to find a paper describing it or, hopefully, comparing the approach to the commonly used hypothesis test. Is this a well-known approach to track-to-track association?
Steven Hiscocks
@sdhiscocks
@jonassagild Unfortunately the original author of that component isn't working with us anymore, so we've been unable to ask him. We've looked, but aren't aware of any reference for this. We believe that the implementation was an intuitive approach he took.
We would certainly be interested in adding more well-known and referenced techniques. We would welcome your suggestions, or contributions if you're able.
Jonas Åsnes Sagild
@jonassagild
I see. Thank you for looking. I was searching for a non-probabilistic approach to the problem and found this implementation last year. I evaluated the Stone Soup method and found that it performed well in terms of false-positive and true-positive rates. Seems like good intuition. I guess the single-scan hypothesis test (ignoring cross-covariances) could be a natural next step. Unfortunately, I'm not able to contribute anything at the moment, but I will consider it in the future. Thanks again.
ap
@apiszcz
0.1b6, thank you for Speed up track to truth associator by avoiding searching of all times #431 (sdhiscocks), Add tree data structures for gating #145 (sdhiscocks)
ap
@apiszcz
Did I miss the release note for 0.1b7.dev2?
Steven Hiscocks
@sdhiscocks
@apiszcz No. We use setuptools_scm with the default versioning scheme, which automatically generates versions relative to tags (dev versions). This allows us to more easily check which versions people are using.
ap
@apiszcz
Version, got it. Has anyone considered a standard benchmark data set for the build/test process, to monitor performance changes and features (JPDA, rtree, gater, etc.)? Thanks
ap
@apiszcz

JPDA question, based on the tutorial example.

Using the Kalman filter:
```
for timestamp, tracks in kalman_tracker.tracks_gen():
    kalman_tracks.update(tracks)
    detections.update(kalman_tracker.detector.detections)

# then perform JPDA
for n, measurements in enumerate(all_measurements):
    hypotheses = data_associator.associate(tracks,
                                           measurements,
                                           start_time + timedelta(seconds=n))
...
```

I assume the JPDA (fc....) should be run after the Kalman update?

ap
@apiszcz
Gater question/use: is this sequence correct? (d is a dictionary with the values specified by the parameter)
```
hypothesiser = DistanceHypothesiser(predictor, updater, Mahalanobis(),
                                    missed_distance=d['hypothesizer_mahalanobis_distance'])
###
measure = measures.Mahalanobis()
gaterhypothesiser = DistanceGater(hypothesiser, measure=measure,
                                  gate_threshold=d['gater']['gater_distance'])

data_associator = GNNWith2DAssignment(gaterhypothesiser)
```
Steven Hiscocks
@sdhiscocks

> JPDA question, based on the tutorial example.

Are you looking at the JPDA tutorial? Data association is done before updates.

> Gater question/use, is this sequence correct?

Yes, that looks correct.

ap
@apiszcz
Gater working now, I'll review JPDA soon.
ap
@apiszcz
Getting a sense of the gater Measure.measure value.
ap
@apiszcz
kalman_tracker.detector.detections may be empty at the start of the program; however, the error here is that a Detection is not iterable?
```
# CASE 1 (works, no issue)
for datetimeobj, ktracks in kalman_tracker.tracks_gen():
    kalman_tracks.update(ktracks)
    detections.update(kalman_tracker.detector.detections)


# CASE 2 (stack trace starts in the jpda_data_associator call)
for datetimeobj, ktracks in kalman_tracker.tracks_gen():
    for n, measurements in enumerate(kalman_tracker.detector.detections):
        hypotheses = jpda_data_associator.associate(ktracks,
                                                    measurements,
                                                    datetimeobj + timedelta(seconds=n))
        .
        . (rest of the JPDA 'Running the JPDA filter' code block)
        .

    kalman_tracks.update(ktracks)
    detections.update(kalman_tracker.detector.detections)
```

Traceback:
```
Traceback (most recent call last):
  File "testjpda\lib\tracktest\tracking.py", line 5951, in <module>
    main()
  File "testjpda\lib\tracktest\tracking.py", line 5864, in main
    tpd = track_jpda()
  File "testjpda\lib\tracktest\tracking.py", line 2972, in track_jpda
    hypotheses = jpda_data_associator.associate(ctracks,
  File "testjpda\lib\lib\site-packages\stonesoup\dataassociator\probability.py", line 64, in associate
    hypotheses = self.generate_hypotheses(tracks, detections, timestamp, **kwargs)
  File "testjpda\lib\lib\site-packages\stonesoup\dataassociator\base.py", line 26, in generate_hypotheses
    return {track: self.hypothesiser.hypothesise(
  File "testjpda\lib\lib\site-packages\stonesoup\dataassociator\base.py", line 26, in <dictcomp>
    return {track: self.hypothesiser.hypothesise(
  File "testjpda\lib\lib\site-packages\stonesoup\hypothesiser\probability.py", line 120, in hypothesise
    for detection in detections:
TypeError: 'Detection' object is not iterable
```

```
(Pdb) type(kalman_tracker.detector.detections)
<class 'set'>
```
Steven Hiscocks
@sdhiscocks
@apiszcz For JPDA tracker, you should use the same core components as your Kalman, but use the PDA Hypothesiser, the JPDA data associator, and then use the MultiTargetMixtureTracker. I'll add something into the tutorials about this.
ap
@apiszcz
Thanks, I was not aware of MTMT. PDA and JPDA are set up. Here is the latest:
```
for datatimeobj, ktracks in kalman_tracker.tracks_gen():

    # Loop through each track, performing the association step with weights adjusted according to JPDA.
    for track in ktracks:
        track_hypotheses = hypotheses[track]

        posterior_states = []
        posterior_state_weights = []
        for hypothesis in track_hypotheses:

            if not hypothesis:
                posterior_states.append(hypothesis.prediction)
            else:
                posterior_state = updater.update(hypothesis)
                posterior_states.append(posterior_state)
            posterior_state_weights.append(hypothesis.probability)

        means = StateVectors([state.state_vector for state in posterior_states])
        covars = np.stack([state.covar for state in posterior_states], axis=2)
        weights = np.asarray(posterior_state_weights)

        # Reduce mixture of states to one posterior estimate Gaussian.
        post_mean, post_covar = gm_reduce_single(means, covars, weights)

        # Add a Gaussian state approximation to the track.
        track.append(GaussianStateUpdate(
            post_mean, post_covar,
            track_hypotheses,
            track_hypotheses[0].measurement.timestamp))


    kalman_tracks.update(ktracks)
    detections.update(kalman_tracker.detector.detections)
```
Stack trace:
```
Traceback (most recent call last):
  File "jpdatest\lib\_tracktest2\tracktest.py", line 5927, in <module>
    main(oc,ol,ou,modes,src)
  File "jpdatest\lib\_tracktest2\tracktest.py", line 5840, in main
    tpd = otracktest.track_jpda(oc, ol, ou, modes, src, topicname, tpd)
  File "jpdatest\lib\_tracktest2\tracktest.py", line 2952, in track_jpda
    for datatimeobj, ktracks in self.kalman_tracker.tracks_gen():
  File "jpdatest\lib\lib\site-packages\stonesoup\tracker\simple.py", line 203, in tracks_gen
    tracks |= self.initiator.initiate(unassociated_detections, time)
  File "jpdatest\lib\lib\site-packages\stonesoup\initiator\simple.py", line 191, in initiate
    state_post = self.updater.update(hypothesis)
  File "jpdatest\lib\lib\site-packages\stonesoup\updater\kalman.py", line 227, in update
    predicted_state = hypothesis.prediction
AttributeError: 'MultipleHypothesis' object has no attribute 'prediction'
```
Steven Hiscocks
@sdhiscocks
All the logic for the tracking loop is inside MTMT. Code should be something like this:
```
jpda_tracker = MultiTargetMixtureTracker(
    initiator=initiator,  # Can be same as Kalman tracker
    deleter=deleter,  # Can be same as Kalman tracker
    detector=detector,  # Can be same as Kalman tracker
    data_associator=jpda_data_associator,  # Different, as this returns multiple hypotheses (to be mixed)
    updater=updater,  # Can be same as Kalman tracker
)
```
then same as usual:
```
jpda_tracks = set()
for datatimeobj, jtracks in jpda_tracker:
    jpda_tracks.update(jtracks)
```
ap
@apiszcz
Thanks, that is what I 'think' I have.
```
detection_reader  = NRTDetectionReader()
transition_model  = CombinedLinearGaussianTransitionModel((ConstantVelocity(tmcv), ConstantVelocity(tmcv)))
measurement_model = LinearGaussian(ndim_state=4, mapping=[0, 2], noise_covar=np.diag([mmnc, mmnc]))
predictor         = KalmanPredictor(transition_model)
updater           = KalmanUpdater(measurement_model)


hypothesiser = PDAHypothesiser(predictor                = predictor,
                               updater                  = updater,
                               clutter_spatial_density  = tpd['jpda']['jpda_clutter_spatial_density'],
                               prob_detect              = tpd['jpda']['jpda_probability_of_detection'])

data_associator = JPDA(hypothesiser)

deleter1          = CovarianceBasedDeleter(covar_trace_thresh=del1)
deleter2          = UpdateTimeDeleter(datetime.timedelta(minutes=del2))
deleter3          = UpdateTimeStepsDeleter(del3)
composite_deleter = CompositeDeleter([deleter1, deleter2, deleter3], intersect=False)

initiator = MultiMeasurementInitiator(
    prior_state         = GaussianState([[0], [0], [0], [0]], np.diag([0, 1, 0, 1])),
    measurement_model   = measurement_model,
    deleter             = composite_deleter,
    data_associator     = data_associator,
    updater             = updater,
    min_points          = 2,
)

self.kalman_tracker = MultiTargetMixtureTracker(
                                            initiator       = initiator,
                                            deleter         = composite_deleter,
                                            detector        = detection_reader,
                                            data_associator = data_associator,
                                            updater         = updater
                                        )
```
ap
@apiszcz
The hypothesis object has no prediction property.
```
(Pdb) dir(hypothesis)
['__abstractmethods__', '__annotations__', '__class__', '__class_getitem__', '__contains__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', '__weakref__', '_abc_impl', '_properties', '_property_normalise', '_property_single_hypotheses', '_property_total_weight', '_subclasses', 'get_missed_detection_probability', 'normalise', 'normalise_probabilities', 'single_hypotheses', 'total_weight']
```
ap
@apiszcz
On startup I have no initial detections or tracks, and that appears to be the issue. In the Kalman velocity tracker, having no detections does not cause this stack trace. AttributeError: 'MultipleHypothesis' object has no attribute 'prediction'
wowoyoho
@wowoyoho
@apiszcz I found there is something wrong with MultiMeasurementInitiator in the case of PDA. I changed MultiMeasurementInitiator to SimpleMeasurementInitiator, which works.
Steven Hiscocks
@sdhiscocks
Good spot @wowoyoho. You're right that the multi measurement initiator won't work with the JPDA data associator. You could also use GNN in MultiMeasurementInitiator, whilst still using JPDA in the tracker.
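A sketch of that suggestion (untested), reusing the component names from the configuration posted earlier in this thread, with a DistanceHypothesiser driving the GNN inside the initiator while JPDA remains in the tracker:

```
# GNN needs a single-best-hypothesis hypothesiser, e.g. the
# DistanceHypothesiser used earlier in this thread.
gnn_associator = GNNWith2DAssignment(
    DistanceHypothesiser(predictor, updater, Mahalanobis(), missed_distance=3))

jpda_associator = JPDA(hypothesiser)  # PDAHypothesiser, as configured above

initiator = MultiMeasurementInitiator(
    prior_state         = GaussianState([[0], [0], [0], [0]], np.diag([0, 1, 0, 1])),
    measurement_model   = measurement_model,
    deleter             = composite_deleter,  # composite deleter still usable here
    data_associator     = gnn_associator,     # GNN inside the initiator
    updater             = updater,
    min_points          = 2,
)

tracker = MultiTargetMixtureTracker(
    initiator       = initiator,
    deleter         = composite_deleter,
    detector        = detection_reader,
    data_associator = jpda_associator,        # JPDA in the tracker itself
    updater         = updater,
)
```

This keeps the composite deleter inside the initiator (answering the deleter concern below) while avoiding the MultipleHypothesis problem in initiation.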
ap
@apiszcz
@wowoyoho thank you; however, I need to use a composite deleter, and SimpleMeasurementInitiator does not appear to support deleters?
os17712
@missliv89
Would anyone be able to explain how the noise=True functionality was built? I.e. the noise profile/parameters/scalability. I'm in the process of trying to manually generate my own noise profiles and am trying to get them as close/as similar as possible to the inbuilt noise generated for various other distributions. Many thanks!
Steven Hiscocks
@sdhiscocks
@missliv89 So our models all currently (for now at least) use a Gaussian model, so when noise is True, the rvs method is called. This uses the covariance and a zero mean to generate noise, which is then added to the state_vector.
jmbarr
@jmbarr
Hi @missliv89. More specifically, when noise=True the rvs method from the multivariate_normal (i.e. Gaussian) class out of numpy.random or scipy.stats (can't remember which) is called. rvs is used analogously in other distributions, so one could replace multivariate_normal with an alternative from numpy.random or scipy.stats. (The broader Stone Soup community might consider working out how to pass in different distributions.)
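The effect described above can be sketched with plain numpy (the names `covar`, `state_vector`, etc. are illustrative, not Stone Soup API; this is what the noise draw amounts to for a Gaussian model):

```python
import numpy as np

# Sketch of what noise=True effectively does in a Gaussian model:
# draw zero-mean noise from the model covariance and add it to the
# state vector.
rng = np.random.default_rng(0)
covar = np.diag([0.5, 0.5])          # model noise covariance
state_vector = np.array([1.0, 2.0])  # noiseless state

noise = rng.multivariate_normal(mean=np.zeros(2), cov=covar)
noisy_state = state_vector + noise

# Swapping in another distribution only requires replacing the draw
# (e.g. a per-dimension rng.laplace(...)), keeping the "add to state" step.
print(noisy_state.shape)  # (2,)
```

Replacing the `multivariate_normal` draw with another zero-mean sampler is how one would approximate non-Gaussian noise profiles while keeping the rest of the model unchanged.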