Randy Mackay
@rmackay9
I wonder, is the jump detection logic simply looking for a jump of 10cm in the position? If that's the case, it would trigger whenever the vehicle moves at more than 3m/s (i.e. 300cm / 30 = 10cm per sample). I think better logic might be to incorporate the velocity.. so new position - (previous position + (new velocity * dt)) > X. And maybe X should be larger.. maybe 1m?
Randy Mackay
@rmackay9
.. or maybe (new position - (old position + ((new velocity + old velocity)/2 * dt))) > X?
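
The velocity-aware check suggested above could be sketched like this (illustrative Python only, not the actual script code; the function name and the 1m threshold follow the suggestion in the messages):

```python
def is_pose_jump(new_pos, old_pos, new_vel, old_vel, dt, threshold=1.0):
    """Flag a jump when the new position deviates from the position
    predicted by the average velocity over the interval dt (1D sketch)."""
    predicted = old_pos + 0.5 * (new_vel + old_vel) * dt
    return abs(new_pos - predicted) > threshold

# Smooth motion at 3 m/s sampled at 30 Hz: not flagged.
print(is_pose_jump(new_pos=10.1, old_pos=10.0, new_vel=3.0, old_vel=3.0, dt=1/30))  # False
# A sudden 2 m offset with near-zero velocity: flagged.
print(is_pose_jump(new_pos=12.0, old_pos=10.0, new_vel=0.1, old_vel=0.1, dt=1/30))  # True
```

A fixed position threshold alone cannot distinguish fast-but-smooth motion from a genuine tracking glitch; subtracting the velocity-predicted position makes the test speed-independent.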
Thien Nguyen
@thien94
@rmackay9 I will change the vision-speed-estimate message to on by default then.
The jump test is being done at the raw position data frequency (200Hz), not at the output frequency (30Hz, which is set as a parameter), so the threshold is effectively 20 m/s (10cm x 200Hz = 2000cm/s).
I think for most imaginable cases that would be sufficient. Still, it is just my guess and we can change the logic if there is a better one.
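
The arithmetic behind those two thresholds, using the numbers from the discussion above:

```python
jump_limit_cm = 10    # per-sample jump threshold (10 cm)
raw_rate_hz = 200     # raw pose data rate where the check actually runs
output_rate_hz = 30   # output rate set as a parameter

# Velocity implied by the per-sample limit, in m/s:
print(jump_limit_cm * raw_rate_hz / 100)     # 20.0 m/s at 200 Hz
print(jump_limit_cm * output_rate_hz / 100)  # 3.0 m/s at 30 Hz
```

So the same 10 cm per-sample limit tolerates 20 m/s at the raw rate, but would have tripped at only 3 m/s had it run at the output rate, which is the concern Randy raised.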
Randy Mackay
@rmackay9
ok thanks. This user has the camera mounted downwards and I think the vehicle's yaw controller is oscillating; perhaps this is highlighting the T265's weaknesses.
Do we think that the best orientation is facing forward? This is how I mount it and how most people seem to mount it.. I've heard pointing forward and down at 45deg is also good, although I think @patrickpoirier51 mentioned that this led to the vehicle drifting forward (?)
Randy Mackay
@rmackay9
Ah, the position resets all happened as the vehicle landed.. so the camera lens was probably just a few cm from the ground..and the propellers were still spinning..
Thien Nguyen
@thien94

Do we think that the best orientation is facing forward?

I think facing forward is the most stable and most tested configuration. There are some recent changes in the librealsense library to improve some existing issues with downward-facing configuration, so maybe the user can update to the latest version and see if it helps.

Bo Shang
@cnpcshangbo
Hi Dr. Nguyen, we have mounted a ZED 2 camera backwards. Do you know what kinds of changes in parameters or code are needed if we want to use the existing work?
Thien Nguyen
@thien94
@cnpcshangbo What code are you referring to, specifically? Also, I am just a student, no doctor yet :)
Bo Shang
@cnpcshangbo
Hi @thien94 , I am thinking of three options:
Option 1: https://ardupilot.org/copter/docs/common-vio-tracking-camera.html
Option 2: https://ardupilot.org/dev/docs/ros-vio-tracking-camera.html#ros-vio-tracking-camera
Option 3: I think you also have a script that can work without ROS.
I hope we don’t need to flash the system since we want to keep some existing work. We have a Jetson TX2, ZED and ZED2 on hand. We’ll get RPi 4 and RealSense T265 later. Do you have some recommendations?
Is your boss Prof. Lihua Xie?
Thien Nguyen
@thien94
Assuming you are using only the tracking data with the ZED2 (similar to the T265), the easiest option would be 2: you just need to feed in the correct ROS topic and test out different configurations until you get the desired result.
Option 3 is actually the same as 1 (running the Python script)

I hope we don’t need to flash the system since we want to keep some existing work.

As long as you can install all the required packages it should be ok
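
For the backward-mounted camera discussed above, the core change is a frame rotation: the camera frame is yawed 180 degrees relative to the vehicle body frame. A hedged sketch of that transform (illustrative only; the function name is mine, and the real scripts/ROS nodes handle this via their orientation parameters):

```python
import math

def camera_to_body_backward(x_cam, y_cam, yaw_cam):
    """Rotate a camera-frame 2D position and yaw by 180 deg about the
    vertical axis, as needed for a camera facing straight backwards."""
    x_body = -x_cam
    y_body = -y_cam
    # shift yaw by pi and wrap back into [-pi, pi]
    yaw_body = math.atan2(math.sin(yaw_cam + math.pi),
                          math.cos(yaw_cam + math.pi))
    return x_body, y_body, yaw_body

x, y, yaw = camera_to_body_backward(1.0, 2.0, 0.0)
print(x, y, yaw)  # -1.0 -2.0 ~pi
```

This is only the simplest planar case; a full solution also handles roll/pitch offsets, which is why testing different orientation configurations (as suggested above) is the practical route.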

Bo Shang
@cnpcshangbo
Thanks Thien. I’ll try the ROS method first.
Randy Mackay
@rmackay9
I've started working on a new EKF feature to allow switching between the external nav (i.e. T265 or similar) and GPS for the position. It's very much still a work-in-progress but the code is here if anyone is interested: https://github.com/rmackay9/rmackay9-ardupilot/commits/ekf-pos-source1. At the moment the position source can be manually changed using an auxiliary switch but eventually I hope to make it automatic based on some heuristics of how reliable the two sources are.
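
The manual-switch-now, heuristics-later idea could look roughly like this (purely illustrative Python, not ArduPilot's actual C++; every name and threshold here is invented):

```python
def select_position_source(gps_ok, gps_hdop, extnav_ok, extnav_innovation,
                           manual_override=None):
    """Return 'GPS' or 'EXTNAV'. manual_override mimics the aux switch;
    the other arguments stand in for per-source health heuristics."""
    if manual_override in ("GPS", "EXTNAV"):
        return manual_override
    if extnav_ok and extnav_innovation < 0.5:   # external nav looks consistent
        return "EXTNAV"
    if gps_ok and gps_hdop < 2.0:               # GPS quality acceptable
        return "GPS"
    # fall back to whichever source is at least reporting healthy
    return "EXTNAV" if extnav_ok else "GPS"

print(select_position_source(True, 0.8, True, 0.1))         # EXTNAV
print(select_position_source(True, 0.8, True, 0.1, "GPS"))  # GPS (switch wins)
print(select_position_source(True, 0.8, False, 9.9))        # GPS
```

The open design question in the messages below is exactly what signals should feed such heuristics, and whether the switching happens per-EKF-lane or globally.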
Randy Mackay
@rmackay9
I've discussed this with Tridge and PaulR already and have their agreement that the approach is OK. There are some open discussions including:
1. what is the minimum feature set we need before it can be merged? i.e. is switching between external and GPS enough or do we want a bigger change?
2. Tridge suggests we should move the sensor buffers into this new class.
3. Paul wants the altitude (aka Z-axis) moved into the new class to allow more defined behaviour of switching between the sources
patrickpoirier51
@patrickpoirier51
@rmackay9 is this work complementary to the GSoC project with @tridge ?
Randy Mackay
@rmackay9
@patrickpoirier51, Txs for that, I had not considered this. I've now discussed with Tridge and we think they are complementary and shouldn't conflict too badly. So the GSoC sensor affinity project allows the 1st and 2nd GPS (for example) to be used by different EKF lanes, while my new class would allow specifying whether to use GPS or External Nav for both lanes. Discussing this makes it clear that it should also be possible to have one EKF lane using GPS and the other using External Nav.. this is where the two projects meet perhaps.
bj-neilson
@bj-neilson
@rmackay9 that’s a very promising feature. We’ve got a ROS-controlled mower that’s currently only using ArduPilot for actuating servos. Being able to fall back on ArduRover GPS pose/position and use the geofence would make for great failsafes.
Stam Athiniotis
@stam9_gitlab
@rmackay9 Yes, I am sending yaw values as well as position. I also send linear velocity when providing LOCAL coordinate setpoints (I know this is somewhat out of the scope of my question but I was hoping to do the same in BODY coordinates once the current issue is fixed)
RoboBil
@RoboBil
While I complete some overdue maintenance on my rover (before I UL my logs), I did notice a message "POSE JUMP DETECTED". Could this be a reason for my home position shift?
soldierofhell
@soldierofhell
Talking about EKF, there's a new ROS package called fuse, successor of robot_localization. A must-have for those interested in special motion models, e.g. Ackermann. https://github.com/locusrobotics/fuse
Vinicius Juvinski
@juvinski
Hi guys, I updated my Jetson with the latest NVIDIA image (Ubuntu 18.04.4) and both cameras (D435 and T265) are resetting the USB. Anyone have a clue what could be happening?
Vinicius Juvinski
@juvinski
I'm testing this: https://support.intelrealsense.com/hc/en-us/community/posts/360037261873-d435-and-t265-together-on-nvidia-jetson-tx2 I think the issue is solved for now - 30 minutes without a reset
patrickpoirier51
@patrickpoirier51
@juvinski Great, keep us updated !
Randy Mackay
@rmackay9
@stam9_gitlab, ok, I think the issue may be in the yaw targets that are being sent. I cannot see what values are being sent in the logs but maybe you could check those. Make sure that the values are in radians... maybe just don't send them at all and see if it works better.
@RoboBil, yes, the "pose jump" message means that Thien's scripts noticed that the position was moving impossibly quickly. This points towards a camera issue and is probably not an ArduPilot or script issue.
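
The radians sanity check suggested above can be sketched like this (the helper name and the degrees-detection heuristic are mine, not from the actual scripts):

```python
import math

def normalize_yaw_rad(yaw):
    """If the yaw value is far outside [-pi, pi] it was almost certainly
    sent in degrees by mistake; convert, then wrap into [-pi, pi]."""
    if abs(yaw) > 2 * math.pi:          # e.g. 90 or 180 - looks like degrees
        yaw = math.radians(yaw)
    return math.atan2(math.sin(yaw), math.cos(yaw))

print(normalize_yaw_rad(180.0))  # ~pi: the value looked like degrees
print(normalize_yaw_rad(1.57))   # ~1.57: already radians, passed through
```

A yaw of "180" fed in as radians would wrap to a near-random heading, which matches the kind of misbehaviour being debugged here.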
Vinicius Juvinski
@juvinski
hi Patrick, 2 hours running without problems - the battery dropped to 10.7 volts and then I had the power reset, so I think compiling librealsense with -DBUILD_WITH_TM2=true on the Nano or other Jetson boards can help
I'm testing with a rover; I was doing some other things today but I will test tomorrow and get back to you
patrickpoirier51
@patrickpoirier51
Sorry Vin, I really don't get it:
"...Assuming you received from Intel an engineering sample of T265(TM2) device, enable support for it in the SDK.."
Does TM2 stand for a very particular case?
patrickpoirier51
@patrickpoirier51

Oh...on the installation step 5
TM1-specific:

Tracking Module requires hid_sensor_custom kernel module to operate properly. Due to TM1's power-up sequence constraints, this driver is required to be loaded during boot for the HW to be properly initialized.

Vinicius Juvinski
@juvinski
Interesting, I saw in a lot of places not to patch the kernel to use the RealSense cameras; on the RealSense page about the Nano they just tell you to install using pre-compiled packages.
I'm running more than an hour and a half without a USB reset
Thien Nguyen
@thien94
I think TM1 is Intel's internal code name for T150, a tracking module using a monocular camera and IMU that was only available for "big customers" https://github.com/IntelRealSense/librealsense/issues/1269#issuecomment-374635795 and has been discontinued. The "remnant" of this module can still be found in the D4xx datasheet.
TM2 is then the code name for T265.
Vinicius Juvinski
@juvinski
hi @thien94 that makes sense. I'm running 2:45 now without a reset
Huibean Luo
@Huibean
@thien94 I have created a script that can stream the T265's fisheye frames to RTSP via pyrealsense2. Sharing the link in case it can help: https://github.com/VimDrones/realsense-helper/blob/master/fisheye_stream_to_rtsp.py
Thien Nguyen
@thien94
@Huibean thanks for sharing!
Randy Mackay
@rmackay9
@Huibean, that's very interesting. We are hoping to work with ANU (Australian National University), which is also working on a visual odometry system (open source of course), and Tridge suggested it would be good to get it working in parallel with the T265's built-in algorithm so we can compare performance. I'll pass a link to ANU.
Randy Mackay
@rmackay9
@thien94, @patrickpoirier51 (and anyone else who is interested), I realised that our APSync scripts are hardcoded to grab librealsense 2.33.1. https://github.com/ArduPilot/companion/blob/master/Common/Ubuntu/librealsense/install_librealsense.sh#L16
I think using a stable release instead of some random version (aka master) is probably a good idea, but I'll have to remember from now on to update this version as Intel releases improvements.
I was also thinking that it might be good if the scripts were enhanced to show the firmware version being used. I guess for the T265 it's somehow updated automatically so the librealsense SDK version always matches the camera; this may be different for other cameras. Anyway, not urgent at all, just a thought..
Ed Muthiah
@ed-muthiah
@rmackay9 out of curiosity who or which team at ANU are you working with? Rob Mahony?
Randy Mackay
@rmackay9
@ed-muthiah, yes, it's Rob Mahony. He did a presentation at our developer unconference this year. https://www.youtube.com/watch?v=nO_y6BRBBOg
Ed Muthiah
@ed-muthiah
@rmackay9 awesome thanks for sharing :) I'm starting post grad study at the ANU and he's the robotics lecturer!
Randy Mackay
@rmackay9
@ed-muthiah, cool!