ap
@apiszcz
They are core to the SIAP values; explicit knowledge of them is of value in my case.
Steven Hiscocks
@sdhiscocks
@apiszcz Yes, I see what you mean: we already calculate a number of those values as parts of other metrics (e.g. count of tracks), but they aren't returned as a metric set. @oharrald-Dstl is currently looking to add some ID based SIAP metrics, so we can also look at updating those to return the core values used in the calculations as well.
I've updated #257
ap
@apiszcz

The Moving_Platform_Simulation example defines radar mountings.

Questions:

  1. Is the mounting offset an x, y, z offset from the physical center of the aircraft in meters? (assume yes)

  2. Is the radar_rotation_offsets order Heading (Yaw), Pitch, Roll? Are the units degrees or radians? (assume degrees)
    If I want to orient a sensor for a perpendicular look of 30 degrees, for example, is this the correct state vector: 0, 0, 30?

https://stonesoup.readthedocs.io/en/latest/auto_examples/Moving_Platform_Simulation.html

radar mountings

radar_mounting_offsets = StateVector([10, 0, 0]) # e.g. nose cone
radar_rotation_offsets = StateVector([0, 0, 0])

Steven Hiscocks
@sdhiscocks
@apiszcz
  1. This will be from the physical centre (or centre of mass/rotation). Units aren't defined, so they're flexible. Obviously you need to be consistent so that the state vector and other distance calculations assume the same units.
  2. Rotation is defined as "viewed by an observer looking along the respective rotation axis, towards the origin". So the order is Roll, Pitch, Yaw. Angles are in radians. Please note there is a bug highlighted in #310, where a mix of conventions has been used.
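To make the units concrete: a 30-degree look to one side goes into the yaw element, converted to radians. A minimal sketch in plain Python (in Stone Soup these values would go into a StateVector; the Roll, Pitch, Yaw ordering is the one described above, and remember the #310 caveat about mixed conventions):

```python
import math

# Rotation offsets are [roll, pitch, yaw] in radians.
# A 30-degree yaw offset for a perpendicular-ish look:
radar_rotation_offsets = [0.0, 0.0, math.radians(30)]

print(radar_rotation_offsets[2])  # ~0.5236 rad
```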
ap
@apiszcz
Thank you, did I miss "viewed by an observer looking along the respective rotation axis, towards the origin" in the documentation?
Arizona Dad
@ryanofarizona
@bradh @apiszcz Using the commit hash isn't acceptable in my case because there is not a static .zip or .tar.gz of the entire repo that resides at a fixed URL for each commit hash (like there is for each label). (I realize that from a technical perspective, it is simple to clone/checkout a certain commit and zip it up, but that does not comply with the approval process for the other network, which I have minimal/no ability to change.) If there are no plans to make a new tag before the end of this year, I'll try the approach of making a label on my fork and see if that will get approved.
Steven Hiscocks
@sdhiscocks
@ryanofarizona We are looking at doing another release soon. In case it helps, you can download a zip or tar of any commit id. Format is "https://github.com/dstl/Stone-Soup/archive/<commit_sha>.tar.gz" (or zip). e.g. https://github.com/dstl/Stone-Soup/archive/77c0a032346a397fccefbb97ccca6adb1c7d9ae4.zip
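For scripting, the archive URL can be built directly from the commit SHA; a small sketch (the SHA is the one from the example link above):

```python
# Build a download URL for a snapshot of a given commit.
# The same pattern works for a tag or branch name, and .tar.gz
# can be substituted for .zip.
sha = "77c0a032346a397fccefbb97ccca6adb1c7d9ae4"
url = f"https://github.com/dstl/Stone-Soup/archive/{sha}.zip"
print(url)
```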
Arizona Dad
@ryanofarizona
@sdhiscocks That actually really does help! I was not aware that each commit was archived that way. Thanks.
Brad Hards
@bradh
@ryanofarizona So it isn't "archived" - I believe it's generated on demand. Think REST API rather than a static file on the other end of the HTTP GET.
Arizona Dad
@ryanofarizona
@bradh Good to know. It may still check the box for my purposes.
asimkhattak
@asimkhattak
@sdhiscocks Hi sir, could this be modified to support distributed fusion architectures for multi-sensor multi-track problems instead of only centralized architectures? I know that track to track fusion isn't offered but if it was developed then could this be possible?
Steven Hiscocks
@sdhiscocks
@asimkhattak We don't have track-to-track fusion yet, as you mention, but it is something we are interested in adding, and any contributions in this area would be welcome. There shouldn't be anything that precludes this; it's more a case of adding extra features.
Steven Hiscocks
@sdhiscocks
@/all We are currently conducting a user survey for Stone Soup and it would be great if you could spare 5 mins to complete it.
buckeye17
@buckeye17
Hello Stone Soup team! I'm new to this package. I'm studying the Multi-Sensor Moving Platform Simulation Example. It generates detections based on ground truth states. Alternatively, is it possible to generate detections based on bespoke update rates for each sensor? The example effectively makes the sensors synchronized, but I'd like to simulate asynchronous sensors. Thanks for your help!
Steven Hiscocks
@sdhiscocks
@buckeye17 There are multiple ways you could approach this: have a Detection Simulator that knows to only call sensors at certain times/intervals; a sensor which uses the ground truth timestamp to decide whether to return detections or not; create a feeder which manipulates the detections before they go into the tracker; and maybe other ways.
As an example, I've created a notebook which has a "Mix In" class which can be used with any sensor to only yield detections after a minimum time interval from the last. This is based on the Multi-Sensor Platform Simulation Example, with the radar now only measuring every 2 seconds, and the imager every 3 seconds.
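The idea behind such a mix-in can be sketched in plain Python, independent of Stone Soup's actual sensor API (the class and method names here are illustrative, not the ones in the linked notebook):

```python
from datetime import datetime, timedelta

class MinIntervalMixin:
    """Illustrative mix-in: suppress measurements until a minimum
    interval has elapsed since the last accepted measurement."""

    def __init__(self, min_interval):
        self.min_interval = min_interval
        self._last_time = None

    def should_measure(self, timestamp):
        # Accept the first measurement, then only those at least
        # min_interval after the last accepted one.
        if self._last_time is None or timestamp - self._last_time >= self.min_interval:
            self._last_time = timestamp
            return True
        return False

# A "radar" that only measures every 2 seconds:
radar = MinIntervalMixin(timedelta(seconds=2))
t0 = datetime(2021, 1, 1)
results = [radar.should_measure(t0 + timedelta(seconds=s)) for s in range(5)]
print(results)  # [True, False, True, False, True]
```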
buckeye17
@buckeye17
@sdhiscocks Thanks! That was a huge help! I've now stumbled into another point of confusion: how can a ConstantAcceleration maneuver get incorporated into a MultiTransitionMovingPlatform model? The issue I'm encountering is that when I use ConstantAcceleration instead of say ConstantVelocity, the dimensions of the transition model increase. Since MultiTransitionMovingPlatform doesn't have an acceleration_mapping parameter, it seems the ConstantAcceleration dimensionality needs to be reduced somehow. I couldn't find any examples that use ConstantAcceleration....
Steven Hiscocks
@sdhiscocks
For background, the platform needs the position mapping for sensor offsets and the velocity mapping to calculate orientation (current platform implementations assume orientation is the direction of travel). It currently doesn't require an acceleration mapping and will ignore other elements of the state vector. Therefore you should be able to use any transition models/state space, as long as they at least contain position and velocity.
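To illustrate the point about mappings: with three ConstantAcceleration models combined, the state vector becomes 9-dimensional, so the platform's position and velocity mappings just need to pick out the right indices. A plain-Python sketch (the per-axis [position, velocity, acceleration] ordering is the assumed convention here; the tuples are what you'd pass as position_mapping and velocity_mapping):

```python
# Example 9D state for a 3x ConstantAcceleration combined model:
# [x, vx, ax, y, vy, ay, z, vz, az]
state_vector = [10.0, 1.0, 0.1, 20.0, 2.0, 0.2, 30.0, 0.0, 0.0]

position_mapping = (0, 3, 6)   # picks x, y, z
velocity_mapping = (1, 4, 7)   # picks vx, vy, vz; az etc. are ignored

position = [state_vector[i] for i in position_mapping]
velocity = [state_vector[i] for i in velocity_mapping]
print(position)  # [10.0, 20.0, 30.0]
print(velocity)  # [1.0, 2.0, 0.0]
```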
ap
@apiszcz
Noted PR on SIAP metrics improvements, thank you.
buckeye17
@buckeye17
[attached image: image.png]
Could someone look over my code and figure out why my radar detections seem to follow the first CombinedLinearGaussianTransitionModel within the targets' MultiTransitionMovingPlatform transition_models list?
laurelstrelzoff
@laurelstrelzoff
Hi, I'm doing this (https://stonesoup.readthedocs.io/en/latest/auto_tutorials/08_JPDATutorial.html) tutorial in the stone soup docs, and I keep getting an error on the lines where transition models get appended. "Unexpected keyword argument 'time_interval' in method call"
for k in range(1, 21):
    truth.append(GroundTruthState(
        transition_model.function(truth[k-1], noise=True, time_interval=timedelta(seconds=1)),
        timestamp=start_time+timedelta(seconds=k)))
truths.add(truth)

truth = GroundTruthPath([GroundTruthState([0, 1, 20, -1], timestamp=start_time)])
for k in range(1, 21):
    truth.append(GroundTruthState(
        transition_model.function(truth[k-1], noise=True, time_interval=timedelta(seconds=1)),
        timestamp=start_time+timedelta(seconds=k)))
truths.add(truth)
Steven Hiscocks
@sdhiscocks
@buckeye17 I'm not 100% sure just looking at that code snippet, but I suspect the issue is you're moving the platforms in your first loop, and then the sim will be moving the platforms again when generating the detections. This will cause odd behaviour. (We should probably add a check for negative time that would create a more obvious error message.) A solution could be to get the platform positions inside the second loop.
Steven Hiscocks
@sdhiscocks
@laurelstrelzoff So the example is working on latest master. Could you try installing the latest version: python -m pip install --no-cache --no-deps --force-reinstall https://github.com/dstl/Stone-Soup/archive/master.zip#egg=stonesoup
laurelstrelzoff
@laurelstrelzoff
Fixed it, thank you!
buckeye17
@buckeye17
@sdhiscocks You were right. The issue was that under the hood something was being iterated over in each of my code blocks. This meant that the detections were relying on the last CombinedLinearGaussianTransitionModel (not the first, as I suspected earlier). I fixed this by using a deep copy of the simulation to obtain detections in my second code block. Thanks for your help!
buckeye17
@buckeye17

Returning to my earlier question, I still can't figure out how to utilize ConstantAcceleration as a maneuver in my simulations. If I use CombinedLinearGaussianTransitionModel([ConstantAcceleration(0.), ConstantAcceleration(0.), ConstantAcceleration(0.)]), I get an error stating:

ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 6 is different from 9)

If I change the Z component model with CombinedLinearGaussianTransitionModel([ConstantAcceleration(0.), ConstantAcceleration(0.), ConstantVelocity(0.)]), then I get the following error message:

ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 6 is different from 8)

Notice that the reported mismatch has dropped from 9 to 8. This suggests that ConstantAcceleration is incompatible with my model. You could reproduce this behavior by modifying the target platform in the Moving Platform Simulation. Hopefully this clarifies my issue.

Steven Hiscocks
@sdhiscocks
@buckeye17 I've pushed a commit: dstl/Stone-Soup@fe718c2. I've tried to include some explanation in the commit message. Note the gotchas with the Constant Turn model (which doesn't expect acceleration), and I hit a bit of an issue with the plotting, so changes there are more significant than you'd first expect.
buckeye17
@buckeye17
@sdhiscocks FYI, I get the same error even when the full list of transition_models are ConstantAcceleration models, so I don't think the problem only occurs going from turning to acceleration.
buckeye17
@buckeye17

Sorry for the flurry of questions, but here's another issue I've run into. I'm trying to write my simulation to a YAML file using the following code:

YAMLWriter("config.yaml", groundtruth_source=sim.groundtruth, detections_source=sim.detections)

But this gives the error message, "AttributeError: 'PlatformDetectionSimulator' object has no attribute 'current'." When I use the same line of code in the Moving Platform Simulator example, it works fine. So somehow the tracker object is modifying the simulator object, adding the current attribute, but I can't find the source code which does this. How can I save a simulator without using a tracker? Or do I need to use a dummy tracker?

Steven Hiscocks
@sdhiscocks
So the current attribute is set as each component is run; this is part of the BufferedGenerator which wraps the iterable part of readers, and another sensibly named attribute is pointed at this current variable (detection_gen() being the iterable method and detections being the attribute in this example). With the YAMLWriter, the *_source variables should be set to the readers themselves, rather than the attribute where detections will be available. So in this case detections_source should be set to sim.
YAMLWriter("config.yaml", groundtruth_source=sim.groundtruth, detections_source=sim)
The YAMLWriter class is immature and not well documented, unfortunately. I'll raise an issue for that.
Another option, if you want to write out data, or even your entire tracker, is to use stonesoup.serialise.YAML
e.g.
tracker = MultiTargetTracker(...)
from stonesoup.serialise import YAML
with open('config.yaml', 'w') as config_file:
    YAML().dump(config_file, tracker)
This is what we typically use. The YAMLWriter is good for writing out the data (truth, tracks, etc.), rather than a configuration.
buckeye17
@buckeye17
@sdhiscocks Thanks again for the quick response! I'm working on a project which is separating the ground truth & detection simulation from the tracker implementation, hence I am trying to avoid using trackers.
Steven Hiscocks
@sdhiscocks
Sure. The YAML() serialiser should work with all components, so can also do:
from stonesoup.serialise import YAML
with open('config.yaml', 'w') as config_file:
    YAML().dump(config_file, [sim.groundtruth, sim])

Or in fact just:

from stonesoup.serialise import YAML
with open('config.yaml', 'w') as config_file:
    YAML().dump(config_file, sim)

would work fine, as it'll serialise the groundtruth attribute.

buckeye17
@buckeye17
Well, I guess I spoke too soon. I am able to generate the yaml file following your correction for detections_source, but the file is blank. When I use your suggested YAML().dump() methods, I get an error message stating, "ruamel.yaml.error.YAMLStreamError: stream argument needs to have a write() method"
Steven Hiscocks
@sdhiscocks

Ah....sorry. I gave you the arguments the wrong way around:

from stonesoup.serialise import YAML
with open('config.yaml', 'w') as config_file:
    YAML().dump(sim, config_file)

What you want to output should be first argument, and file 2nd.

ap
@apiszcz
+1 on the YAML writer; I found it very helpful in understanding all the data elements. (object, outputpath)
ap
@apiszcz
I am seeing issues with the version number not changing to match what is in the repositories. Is there any outlook/plan on implementing version labels for the package yet?
Steven Hiscocks
@sdhiscocks
@apiszcz I've been looking at using setuptools_scm, but haven't got round to reading about it in more depth.
buckeye17
@buckeye17
I've got another question about the Moving Platform Simulation. Looking at the last code cell in the notebook, it seems that the tracker object has duplicated data. If you add a print statement to the for loop as shown below, you will get two sets of (X, Y) coordinates for each time step. Why are there two sets of coordinates? The coordinate values are very similar but not exact matches.
    for track in ctracks:
        X = [state.state_vector[0] for state in track]
        Y = [state.state_vector[2] for state in track]
        print(time, X[-1], Y[-1])
        artists.extend(ax.plot(X, Y, color='k'))
Steven Hiscocks
@sdhiscocks
The first state on the track at a timestamp will be from the first sensor update (Radar in that example), and the second from the second sensor update (Imager in that example). These could also be predictions if no detection has been produced or associated with a track (in the case of the 2nd state at the timestamp, this will match the 1st state at the timestamp, as the prediction will be over zero seconds). This is useful for understanding how each sensor contributes to the state space; however, we probably should only be plotting the last state at each timestamp, to be honest.
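Keeping only the last state at each timestamp, as suggested, can be sketched with a dict keyed by timestamp (plain Python; the (timestamp, value) pairs here are illustrative stand-ins for track states):

```python
# Track states arrive in order; a later state at the same timestamp
# supersedes an earlier one (e.g. radar update then imager update).
states = [("t1", "radar"), ("t1", "imager"),
          ("t2", "radar"), ("t2", "imager")]

last_per_timestamp = {}
for timestamp, state in states:
    last_per_timestamp[timestamp] = state  # later entries overwrite earlier

plot_states = list(last_per_timestamp.values())
print(plot_states)  # ['imager', 'imager']
```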
buckeye17
@buckeye17
Can anyone offer any tips on how to tune the parameters of an EKF tracker? I'm using a DistanceHypothesiser with Mahalanobis distance, GlobalNearestNeighbour data associator, CovarianceBasedDeleter, MultiMeasurementInitiator and MultiTargetTracker. The tracker often initiates a track that is significantly isolated from all detections, even when min_detections > 100. It also never initiates simultaneous tracks (once one track is initiated, a new one will only be initiated when the current track is deleted), even when there are 5 targets whose trajectories never come near each other.
ap
@apiszcz
@buckeye17 You may want to review Optuna; it requires creating a cost function for automated parameter selection. Other methods include grid search in scikit-learn and other tools.
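At its simplest, a grid search over tracker parameters is just a loop over candidate values with a cost function. A toy sketch (`run_tracker_cost` is a hypothetical stand-in for whatever you'd actually compute, e.g. a SIAP metric over a tracking run; the parameter names echo the ones being tuned in this thread):

```python
import itertools

def run_tracker_cost(missed_distance, covariance_limit):
    """Hypothetical stand-in: run the tracker with these parameters
    and return a cost (lower is better). Toy quadratic here."""
    return (missed_distance - 4) ** 2 + (covariance_limit - 0.2) ** 2

missed_distances = [2, 3, 4, 5]
covariance_limits = [0.1, 0.2, 0.5]

# Evaluate every combination and keep the cheapest.
best = min(itertools.product(missed_distances, covariance_limits),
           key=lambda params: run_tracker_cost(*params))
print(best)  # (4, 0.2) for this toy cost
```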
Steven Hiscocks
@sdhiscocks
@buckeye17 I'd probably start in checking your transition model is representative of your target, check your measurement model covariance is representative, and then check your prior in your initiator (should have large error for parts of the state space your sensor doesn't measure).
buckeye17
@buckeye17

@apiszcz Thanks for the suggestions. My current dilemma is that the tracker is behaving too poorly to turn over to an optimizer. I need to get it to produce at least one track for each target before optimization can help.

@sdhiscocks Thanks for your advice. My simulation has targets doing combinations of constant velocity and constant turn, and my predictor model is constant velocity, so it should be capable. My measurement covariances are in spherical coordinates, so I've done some post hoc analysis of my simulation to find the Cartesian variances. I've tried more than 100 simulations with various tracker parameters. I'm able to get decent performance if I only simulate 1 target, but once I go to 2 targets, I can't get a decent result. The fact that there are never simultaneous tracks makes me wonder if my tracker is fundamentally flawed rather than poorly tuned.

If it helps, here's my tracker code:

    bodies_all = target_ls + [sensor_platform]
    sim_all = PlatformDetectionSimulator(groundtruth=truths, platforms=bodies_all)
    target_transition_model = CombinedLinearGaussianTransitionModel(
        [ConstantVelocity(.005), ConstantVelocity(.005), ConstantVelocity(0)]) 
    predictor = ExtendedKalmanPredictor(target_transition_model)
    updater = ExtendedKalmanUpdater()
    hypothesiser = DistanceHypothesiser(predictor, updater, measure=Mahalanobis(), missed_distance=3)

    covariance_limit_for_delete = .1
    min_detections = 50
    X_tar, Y_tar = sum(x_pos_rng)/2, sum(y_pos_rng)/2
    s_prior_state=GaussianState([[X_tar], [0], [Y_tar], [0], [8000*feet_to_deg], [0]],
        np.diag([0, 0.002, 0, 0.002, 0, 0]))

    #data_associator = GNNWith2DAssignment(hypothesiser)
    data_associator = GlobalNearestNeighbour(hypothesiser)
    deleter = CovarianceBasedDeleter(covariance_limit_for_delete)
    initiator = MultiMeasurementInitiator(
        prior_state=s_prior_state,
        measurement_model=None,
        deleter=deleter,
        data_associator=data_associator,
        updater=updater,
        min_points=min_detections
    )

    # Create an EKF Multi-target tracker
    tracker = MultiTargetTracker(
        initiator=initiator,
        deleter=deleter,
        detector=sim_all,
        data_associator=data_associator,
        updater=updater
    )
buckeye17
@buckeye17
Well, it turns out the fundamental flaw I was sensing was not in my tracker setup but in my visualization pipeline, so I was only seeing the last track in each set of tracks :-/ Thanks again for your help @apiszcz & @sdhiscocks!