Those of us at UIUC who work on ILLIXR decided we should do more of our deliberation in public in addition to our research meetings. There are a couple of venues for this:
Gitter is best for realtime communication. We can deliberate over design decisions and go back-and-forth on specific bugs here. However, Gitter is less discoverable, so we have other options too.
Once we have consensus around a plan for some bug or feature (deliberated on Gitter), whatever we decided can go into a GitHub issue. More specifics and details can be worked out in the issue's comment thread. Contributors can log progress through checklists and our GitHub Project board.
If the details remain relevant in the future (e.g. as the rationale for an important decision), they can be recorded via a pull request: either in the high-level documentation (docs/), in comments in the code, or (in special cases) in the root of the repo. This way the information is permanent and discoverable.
Following my own suggestion, I am asking for feedback on the plan laid out above. Then I will roll it into ILLIXR/ILLIXR#60, which will record our policies in the root of the repo.
If you are working on master, skim the documentation for the new build process. If you are working on the latest release (v2-latest), there's no change yet; we're going to push out a release soon.
I am trying to run the open-vins component by itself using the “./run_illixr_msckf” script. I just want to double-check that my usage is correct b/c I ran it overnight, no text or GUI appeared, and it was still running.
I wanted to test whether everything was installed and working properly so I went to The EuRoC MAV Dataset and downloaded the ASL Dataset format for Machine Hall 01 as a test (I used the ASL and not the ROS bag b/c the instructions say this is running w/o roscore). This is the command I used: “./run_illixr_msckf ../../mav0/cam0 ../../mav0/cam1 ../../mav0/imu0 ../../mav0/cam0/data ../../mav0/cam1/data.”
I cloned just the ILLIXR/open_vins repo, not the whole thing. I'm on Ubuntu 16.04 on my x86 desktop.
Is my usage correct or is there something I’m missing?
Hi @armandb_gitlab , thanks for checking out ILLIXR! Your usage is just slightly off. The first three parameters need to point to the data files inside the folders; e.g.,
/mav0/cam0/data.csv for the first camera. The final two parameters look good. Once you make the fix, you should see a lot of prints regarding tracking, initialization, and so on.
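Putting that together, the corrected command would look something like this (assuming the dataset sits at the same relative location as in your original command):

```shell
# First three args: the CSV files themselves; last two: the image folders.
./run_illixr_msckf \
    ../../mav0/cam0/data.csv \
    ../../mav0/cam1/data.csv \
    ../../mav0/imu0/data.csv \
    ../../mav0/cam0/data \
    ../../mav0/cam1/data
```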
And thank you for bringing this up. We should fix this issue on our end by checking the arguments and failing instead of silently hanging.
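A minimal sketch of the kind of check the launcher could do up front — the names and layout here are hypothetical, not the actual run_illixr_msckf code, and it assumes C++17 for std::filesystem:

```cpp
#include <filesystem>
#include <string>
#include <vector>

// True when `path` names a regular file, e.g. mav0/cam0/data.csv,
// rather than the mav0/cam0 directory passed by mistake.
bool usable_data_file(const std::string& path) {
    std::error_code ec;
    return std::filesystem::is_regular_file(path, ec) && !ec;
}

// Validate the five arguments: three CSV files, then two image directories.
// Returning false lets the caller print a usage message and exit
// instead of silently hanging.
bool check_args(const std::vector<std::string>& args) {
    if (args.size() != 5) return false;
    for (int i = 0; i < 3; ++i)
        if (!usable_data_file(args[i])) return false;
    for (int i = 3; i < 5; ++i) {
        std::error_code ec;
        if (!std::filesystem::is_directory(args[i], ec) || ec) return false;
    }
    return true;
}
```

The point is just to fail fast with a clear message when an argument points at a directory (or nothing at all) instead of the expected CSV file.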
Could I get a similar screenshot from a run on the Intel Xeon E-2236 CPU? I want to work backwards from the specs to figure out what's causing the difference in runtime on each of the different kernels.
Also, on a slightly unrelated note, I noticed that KimeraVIO is the default VIO algorithm rather than openVINS. What was the reasoning behind this? I noticed that openVINS only takes OpenCV 3 while Kimera works w/ OpenCV 4, and kernels like the LK tracker seem to behave quite differently (and are probably better) on OpenCV 4. Is this part of the reason?
Hey @kaiming-uw! Thanks for checking out ILLIXR. ILLIXR does indeed support running multiple applications together, courtesy of Monado. You can choose ILLIXR as the device, start the Monado service, and then launch multiple OpenXR applications together. We've tested that this works. However, I would like to mention two caveats:
Let us know if you have questions!
I was trying to profile openVINS with perf to find function hotspots and was running into some issues b/c there weren’t enough symbols. That led me to modify the CMakeLists of ov_msckf w/ -fno-omit-frame-pointer and configure and compile OpenCV w/ the same compiler flag. Now I have more symbols, but I’m getting nonsensical results, e.g. cv::detail::LKTrackerInvoker::operator() using ~40% of execution time (perf data attached). I’ve tried creating the perf call graph with lbr and dwarf as well, with similar results.
In the paper, you all say you used perf and vtune to find hotspots. Did you mostly use perf to get the hotspots and did you run into similar issues? Was the solution to compile all the related libraries (opencv, boost, eigen) w/ frame pointers? Does vtune help avoid some of these challenges?
Any help/advice would be appreciated.
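Not one of the authors, but for what it's worth: once the hot libraries are built with -fno-omit-frame-pointer, the unwinder that actually benefits from that flag is perf's frame-pointer mode. A sketch of the combination (paths and build layout illustrative, not a prescription):

```shell
# Rebuild with debug symbols and frame pointers
# (as already done for ov_msckf and OpenCV above):
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo \
      -DCMAKE_CXX_FLAGS="-fno-omit-frame-pointer" ..
make -j"$(nproc)"

# With frame pointers present, unwind via them rather than lbr/dwarf:
perf record --call-graph fp -- ./run_illixr_msckf <args...>
perf report --children
```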
Two quick questions:
Also, for anyone with their GPUs attached to a server who wants to run the OpenGL programs w/ GUIs: I had trouble getting things to work with "ssh -X" and ended up using VirtualGL (https://virtualgl.org/Main/HomePage). Would recommend.
Do you guys know if there are any reputable datasets with visual-inertial data for head-mounted displays?
From what I understand, most of the datasets like EuRoC are for micro-aerial vehicles and self-driving cars. I was able to find others that use phones (NEAR: The NetEase AR...) or emulate head movements by moving a camera by hand. A lot of the 360 video datasets just have the head pose (not the IMU and camera readings).