Hey @abhishek-ep19, raising a PR shows that you have gone through the code base and are willing to contribute to its improvement. It is always a good gesture, and it counts in your favour if you do so. However, mentioning work you have already done on the project you are applying for, along with related research, is of equal importance. In the end, it is about how you are contributing to the organisation and how willing you are to keep doing it.
Are we supposed to install the build tools for RoboComp separately?
I don't think so. Why?
The CMakeListsSpecific.txt file explicitly links qmat and innermodel to the targets. Just wanted to clarify whether this is something to be added to the component's CMakeLists or an issue with the build system itself.
Can anyone help me with using RoboComp with CoppeliaSim? I followed the instructions shown here. Running the viriatoPyRep bridge throws an IndexError:
Traceback (most recent call last):
  File "src/viriatoPyrep.py", line 279, in <module>
    worker.compute()
  File "/home/ashwin/opt/robocomp/components/dsr-graph/robots_pyrep/viriatoPyrep/src/specificworker.py", line 191, in compute
    self.read_laser()
  File "/home/ashwin/opt/robocomp/components/dsr-graph/robots_pyrep/viriatoPyrep/src/specificworker.py", line 251, in read_laser
    ldata = self.compute_omni_laser([self.hokuyo_base_front_right,
  File "/home/ashwin/opt/robocomp/components/dsr-graph/robots_pyrep/viriatoPyrep/src/specificworker.py", line 382, in compute_omni_laser
    imat = np.array([[m,m,m,m],[m,m,m,m],[m,m,m,m],[0,0,0,1]])
IndexError: index 4 is out of bounds for axis 0 with size 4
Am I missing something?
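For what it's worth, an "index 4 is out of bounds for axis 0 with size 4" at that spot usually means `m` is not the flat 12-element transform list the code expects but already a 4x4 array, so indexing element 4 along the first axis runs off the end (newer PyRep versions return object matrices as 4x4 NumPy arrays). A minimal sketch of a defensive conversion, assuming that diagnosis (the helper name `to_homogeneous` is mine, not from the component):

```python
import numpy as np

def to_homogeneous(m):
    """Return a 4x4 homogeneous matrix whether `m` is a flat
    12-element transform (older PyRep style) or already a 4x4
    array (newer PyRep style). Hypothetical helper, not part
    of viriatoPyrep."""
    m = np.asarray(m)
    if m.shape == (4, 4):      # already homogeneous: pass through
        return m
    m = m.reshape(3, 4)        # 12 flat values -> 3x4 transform
    # Append the constant bottom row of a homogeneous transform.
    return np.vstack([m, [0, 0, 0, 1]])
```

With something like this in place, `compute_omni_laser` could build `imat` from `to_homogeneous(m)` regardless of which PyRep version is installed.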
Thank you @Varun270. Have you followed the installation process for RoboComp, and the tutorials?
Those would be the first steps to understanding RoboComp.
I have sent this message to the RoboComp chat group, but no one replied; I don't know why. Maybe my idea does not fit with the RoboComp library.
I see a lot of projects in GSoC this year using deep learning models. What do you guys think about it? Are there any problems with this idea?
I think a lot of people here want to integrate deep learning models into their own robots. I think it would be better to have base code in RoboComp for this purpose. Could we build something like an adapter component, so that people can integrate a deep learning model without the overhead of building everything from scratch?
After checking a lot of components on the RoboComp GitHub, I see that there are two ways of integrating:
- build a standalone component for the deep learning model;
- build the model directly into the component.
==>> Which one do you prefer? Or do you have other ways?
About the generic adapter component, in my opinion it needs to have these features:
1) a log (inference time, memory consumption, input data, etc.);
2) an inference part, designed to adapt to all frameworks (TensorFlow, PyTorch, ONNX, Keras) and to run them on edge devices. Specifically:
- load the model, with a framework-specific way to feed data into and get data from the model;
- an option to run on CPU or GPU;
- an option to run with NVIDIA® TensorRT or another inference acceleration library;
- preprocess the input and post-process the output (abstract functions -> can be specialised polymorphically for each specific purpose);
3) meta-params for inference provided via a text file (JSON, XML, etc.), such as the path to the model and manual inference parameters.
===> Do you need other things?
This component would be built as an abstract class, => so everyone can inherit it and specify:
1) their preprocessing,
2) and the inference function for their own project.
I think it would be like PyTorch Lightning is to PyTorch => a wrapper for high-performance inference.
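To make the idea concrete, here is a minimal sketch of what such an adapter base class could look like. Everything here is hypothetical (none of these names exist in RoboComp): meta-params come from a JSON file, the inference time is logged, and subclasses fill in the framework-specific parts.

```python
import json
import time
from abc import ABC, abstractmethod

class InferenceAdapter(ABC):
    """Hypothetical framework-agnostic wrapper for running a
    deep-learning model inside a RoboComp component."""

    def __init__(self, config_path):
        # Meta-params (model path, device, ...) come from a JSON file.
        with open(config_path) as f:
            self.params = json.load(f)
        self.model = self.load_model(self.params["model_path"],
                                     self.params.get("device", "cpu"))

    @abstractmethod
    def load_model(self, path, device):
        """Load the model with the chosen framework (PyTorch, ONNX, ...)."""

    @abstractmethod
    def preprocess(self, raw_input):
        """Turn sensor data into the input format the model expects."""

    @abstractmethod
    def infer(self, batch):
        """Run the forward pass on CPU/GPU (or TensorRT, if available)."""

    @abstractmethod
    def postprocess(self, outputs):
        """Turn raw model outputs into something the component can use."""

    def run(self, raw_input):
        # Log the inference time, as suggested in the feature list above.
        start = time.perf_counter()
        result = self.postprocess(self.infer(self.preprocess(raw_input)))
        print(f"inference took {time.perf_counter() - start:.4f}s")
        return result
```

A component author would then subclass `InferenceAdapter`, implement the four abstract methods for their framework, and call `run()` from `compute()`.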
For people who have experimented with robots and RoboComp: do you think this AI component is feasible?
I am thinking about coding this, so feel free to share your opinion about it.