Erik Smistad
@smistad
That example does not show the raw output, which should be a tensor. Change SegmentationNetwork to NeuralNetwork and you will get a Tensor out instead
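A minimal sketch of that swap in pyFAST 3.x (streamer and the model path are illustrative; exact API details may vary between versions):

    import fast

    # NeuralNetwork gives the raw Tensor output instead of a post-processed Segmentation
    network = fast.NeuralNetwork.New()
    network.load('model.onnx')                        # illustrative model path
    network.setInputConnection(streamer.getOutputPort())
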
André Pedersen
@andreped

Another reason why no segmentation is "rendered" might be that the network predicts everything to be background.

Perhaps take a look at the output of the NeuralNetwork class instead, as @smistad mentioned.

Giannis Dimaridis
@gdims

Hi guys, I've been looking into it for a while today, no luck so far.
I realized that the output of my segmentation model is zero everywhere. I tried changing it to a NeuralNetwork but, when I call .setInputConnection(), I get:

ERROR [140707514742592] Terminated with unhandled exception: NeuralNetwork has no input port with ID 0

I made no other changes to the code.

Anyway, I suspect that there's a problem with my model, so I'll try getting it to work with OpenVINO but outside of FAST first.

André Pedersen
@andreped

@gdims When running this example with pyFAST 3.2.0 on your machine, it rendered the US with predicted segmentation, right(?):
https://fast.eriksmistad.no/neural_network_image_segmentation_8py-example.html

If yes, then there is likely something wrong with the model and/or you are doing something different in preprocessing now than you did before/during training. What did you do in preprocessing when training the model? Perhaps you used standardization (Z-score normalization) instead of regular 0-1 normalization?
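
If the model was trained on images normalized to 0-1, the corresponding setting on the FAST side would be along these lines (a sketch, assuming the network object from above and 8-bit input images; scaleFactor multiplies the input intensities before inference):

    # Match the 0-1 normalization used during training
    network.setScaleFactor(1.0 / 255.0)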

Giannis Dimaridis
@gdims

I have made some progress. I successfully loaded an .onnx model after using opset <11.
To make .setInputConnection() not throw the aforementioned error, I realized that for NeuralNetwork you have to call .load() before .setInputConnection().
I then confirmed that the correct scaleFactor for my model is 1, since thresholding the output tensor at 0 gave a perfectly valid segmentation mask.

So, I now figured that I should pass this tensor output to a TensorToSegmentation() object, with a threshold of 0, and I'll be good to go.
Sadly, the output image of the TensorToSegmentation() object is still all zeros.

Is there something more I am missing here about TensorToSegmentation()? Does it expect the input to be in some range? My model's raw output is somewhere in [-50, 50], very roughly.
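
For reference, a sketch of the ordering described above (setThreshold is assumed to be the name of the threshold setter mentioned here; paths and names are illustrative):

    nn = fast.NeuralNetwork.New()
    nn.load('model.onnx')                         # load() must come before setInputConnection()
    nn.setInputConnection(streamer.getOutputPort())

    toSeg = fast.TensorToSegmentation.New()
    toSeg.setThreshold(0.0)                       # assumed setter for the threshold of 0 used here
    toSeg.setInputConnection(nn.getOutputPort())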

André Pedersen
@andreped

[-50,50]? Did you forget to add an activation function at the end of the network? Seems like you are using a linear activation at the end, which is not optimal for performing semantic segmentation. If you are using a linear activation, you could try adding a softmax activation layer and then saving/exporting the frozen graph again.

That would also explain why FAST struggles to understand your predictions.

How did you train your model? Did you use PyTorch? There it is quite common to apply the activation inside the loss and have the network output raw logits. However, for deployment, I would recommend including the activation in the graph itself.
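
A sketch of what that could look like in PyTorch (model, the input shape and file names are illustrative):

    import torch
    import torch.nn as nn

    # Append the activation so it becomes part of the exported graph
    deploy_model = nn.Sequential(model, nn.Sigmoid())   # or nn.Softmax(dim=1) for multi-class
    deploy_model.eval()

    dummy = torch.randn(1, 1, 256, 256)                 # illustrative input shape
    torch.onnx.export(deploy_model, dummy, 'model_with_activation.onnx', opset_version=10)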

Erik Smistad
@smistad
Sounds to me like you only have 1 channel on your output @gdims? TensorToSegmentation/SegmentationNetwork assumes class 0 is background... This would then make your segmentation all zeros
I think we need to add a check for this @andreped, and an option for "no background class"
André Pedersen
@andreped
Should definitely add an option for "no background class". That would be relevant for regression models, among others. But I think @gdims' main problem is that he has forgotten to add an activation function at the end. That should solve his issues, no?
Erik Smistad
@smistad
Linear activation is not a problem as long as you set the threshold correctly... Which might be a bit random when not using softmax
Giannis Dimaridis
@gdims

Hello again guys, have a nice week!

@andreped
Exactly, I'm using PyTorch and my loss function contains the sigmoid. I might now change this according to what you are saying.
My thoughts on this are the following: since the activation function is not trainable, it should make no difference to the model's performance whether it is included in the graph.
I mean, the model learns the weights so that the sigmoided output maximizes the Dice score against the ground truth. Assuming a 0.5 threshold on the sigmoid output, we may just as well remove the sigmoid and threshold the raw logits (right before the sigmoid) at 0, and it should make no difference. Am I misunderstanding something fundamental?

@smistad
That's right too, I only have one output channel as I'm doing binary segmentation and I thought it would be okay.

So I'll move my output to a second channel now and see what happens :)

Giannis Dimaridis
@gdims
Update: moving the output of the network to a second channel indeed solves the problem! So it's what @smistad said. Many thanks to both of you, I'll be in touch.
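
One way to produce such a two-channel output in PyTorch (a sketch; the wrapper and names are illustrative):

    import torch
    import torch.nn as nn

    class TwoChannelWrapper(nn.Module):
        """Wraps a single-channel foreground model so class 0 is background."""
        def __init__(self, model):
            super().__init__()
            self.model = model

        def forward(self, x):
            fg = torch.sigmoid(self.model(x))   # foreground probability, Bx1xHxW
            bg = 1.0 - fg                       # background as channel 0
            return torch.cat([bg, fg], dim=1)   # Bx2xHxW
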
André Pedersen
@andreped

Since you used a sigmoid in the loss function, appending a sigmoid to the graph will not affect model performance. The model is trained using logits, so that shouldn't be an issue. Thresholding at 0 for the linear activation should also work fine, but I agree with @smistad that this might result in uncontrolled and odd behaviour during inference. Using sigmoid/softmax also makes the output more easily interpretable, especially for your case.

Nice! Glad you finally got it working :) Let us know if you have any future issues. Happy to help!

Giannis Dimaridis
@gdims

Alright, now this is a different issue.
I'm trying to run inference on the Intel Neural Compute Stick 2, which I have set up and on which I have successfully run the included Intel demos.
However, in FAST I get the following:

INFO [140355918460736] OpenVINO: Node setup complete.
E: [ncAPI] [ 204881] [python] getFirmwarePath:637 Firmware not found in: /home/longjon/anaconda3/envs/echo/lib/python3.8/site-packages/fast/bin/../lib/../lib/../lib/usb-ma248x.mvcmd
E: [ncAPI] [ 204881] [python] ncDeviceOpen:915 Can't get firmware, error: NC_ERROR
INFO [140355918460736] Failed to get GPU/VPU plugin for OpenVINO inference engine: Can not init Myriad device: NC_MVCMD_NOT_FOUND
INFO [140355918460736] Trying CPU plugin instead..
INFO [140355918460736] OpenVINO: Inference plugin setup for device type CPU
INFO [140355918460736] OpenVINO: Network loaded.

So it doesn't have the firmware.
usb-ma248x.mvcmd does not exist in that directory.

Has anyone tried something like this?

Giannis Dimaridis
@gdims
By the way, if you believe C++ is a better option overall, let me know. I'm not really attached to Python for this project.
Erik Smistad
@smistad
I didn't know it was necessary to distribute the firmware file for the device... I have only tried the compute stick once. I can send you the file later today.
Giannis Dimaridis
@gdims
Sure Erik, please send it over if it's possible
Erik Smistad
@smistad
Hm, there are several of these mvcmd files, but none with the exact name you wrote. I can see usb-ma2x8x.mvcmd, pcie-ma248x.mvcmd and usb-ma2450.mvcmd
Click the link above to download the file with the closest name.
Giannis Dimaridis
@gdims
I found usb-ma2x8x.mvcmd in my official OpenVINO installation (is that where you found those?) but it didn't do the job - should I post the errors, or is there no point?
Erik Smistad
@smistad
And it should be pasted to /home/longjon/anaconda3/envs/echo/lib/python3.8/site-packages/fast//lib/
Tried changing its name?
Giannis Dimaridis
@gdims
Yes, done that
Erik Smistad
@smistad
Do you get a different error after you copied the firmware file?
It seems only a copy is necessary, according to the OpenVINO page: "The device firmware must be available: The Intel® Movidius™ NCS and Intel® NCS 2 firmware are the files MvNCAPI-ma2450.mvcmd and MvNCAPI-ma2480.mvcmd files respectively. These files are downloaded onto the build system during the build process and can be copied from the build system here <directory where cloned>/dldt/inference-engine/bin/<arch>/Release/lib to the deployment system ~/benchmark/lib directory."
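
For reference, the copy boils down to a single command along these lines (the source path is illustrative and depends on your OpenVINO install; the destination is the FAST lib folder from the error message above):

    cp /path/to/openvino/firmware/usb-ma2x8x.mvcmd /home/longjon/anaconda3/envs/echo/lib/python3.8/site-packages/fast/lib/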
Giannis Dimaridis
@gdims
Hmm, I've only tried with usb-ma2x8x.mvcmd, let me try and get the ones mentioned here.
Giannis Dimaridis
@gdims
The file you sent (unrenamed!) did the job. Thanks Erik. I think I'm all good for now.
Erik Smistad
@smistad
Great 🥳
Maybe the firmware has to match the OpenVINO version
Giannis Dimaridis
@gdims

Hey guys, I hope you're doing well!
I'd like to report one thing and ask another:
1) For ONNX models, exporting them with opset v11 gives issues (they can't be loaded in FAST). I tested this for the deeplabv3 and fcn_resnet50 architectures. Opsets v9 and v10 work OK.

2) I'd like to do some real-time processing on my segmentation output. I suppose Erik has faced this in the past. I have a model that is able to segment the LV endocardium (ENDO) and epicardium (EPI) and output the masks in different channels. If I want to obtain a segmentation of the myocardium only, I need to subtract ENDO from EPI. I tried including this procedure in the forward function of my model, but such a model could only be exported with opset v11 in ONNX (the error is not clear). So, I guess I could do it post-inference (see the sketch below).

So, given a model that outputs 2 segmentation masks, how would you go about obtaining a third mask, derived from the already available ones, and display it realtime?
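
A post-inference sketch of that subtraction with numpy (endo and epi are assumed to be the two thresholded masks as boolean arrays):

    import numpy as np

    # Myocardium = inside the epicardium but outside the endocardium
    myocardium = np.logical_and(epi, np.logical_not(endo))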

Giannis Dimaridis
@gdims

Also, does FAST assume that multi-channel segmentation is two-channel only? My model currently outputs a Bx3xWxH tensor, where B is the batch size and W and H are the image width and height.

Of the 3 channels, channel 1 is background, channel 2 is the LV endocardium, and channel 3 is the LV epicardium.

However, for some reason, when I do:

segmentationRenderer.setColor(1, fast.Color.Red()) # ENDO
segmentationRenderer.setColor(2, fast.Color.Blue()) # EPI

, the two masks come out almost identical, both segmenting the epicardium. They are only different in some frames, at some pixel locations.

It's as if FAST cannot discriminate between the two outputs.

I have tested the model outside of FAST and the outputs look correct.

Erik Smistad
@smistad
First off, regarding the ONNX opset: it is up to the third-party inference engines which opsets and layers they support
I think OpenVINO and TensorRT in FAST 3.2 support most layers up to opset 10
I think you are asking whether FAST supports multi-label segmentation: meaning that a pixel can have multiple labels
And the answer is: not quite. A Segmentation data object in FAST only supports one label per pixel, which is the standard, I would say
Thus I have always had 1 class for endo and 1 class for myocardium
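In other words, a Segmentation holds a single integer class label per pixel, something like this illustrative numpy sketch:

    import numpy as np

    # 0 = background, 1 = endocardium, 2 = myocardium; exactly one label per pixel
    labels = np.array([[0, 0, 1],
                       [0, 2, 1],
                       [2, 2, 0]], dtype=np.uint8)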
Erik Smistad
@smistad
Still, a tensor in FAST can represent a multi-label segmentation. And I believe you can render that with the HeatmapRenderer instead
But I'm not sure... I will have to check
Giannis Dimaridis
@gdims

Thanks for the quick replies Erik!

Oh, I understand. But shouldn't I be able to only render the endocardium mask by selecting it via setColor?

Anyway, then my model has to output 2 non-overlapping masks. I'm thinking, and maybe you can answer, it's better to train for epicardium and then subtract endocardium rather than train directly for myocardium, right? This way I can ensure that, where endocardium ends, the myocardium begins.

I'll try to see why my implementation of the above fails to export in ONNX format.

Erik Smistad
@smistad
Another option is to split the tensor into two segmentation images and render them both
I have trained A LOT of segmentation models for the heart, and the space between the endo and the myocardium has never been a problem for me. But yeah, you don't have a guarantee that it will not happen. Then again, you don't have that guarantee with your approach either, if you think about it
Giannis Dimaridis
@gdims
I'm sure you have ;) I've been reading the papers and it's very nice work.
So you mean I shouldn't care about the space between? Or that it doesn't occur? I thought my approach should force the two masks to fit perfectly together :/
Erik Smistad
@smistad
With either approach, you don't have any guarantee that the model won't generate a hole that is just background. But some simple post-processing can fix that if it becomes a problem
For the CAMUS paper we just did binary_fill_holes in Python
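That post-processing step is essentially a one-liner with SciPy (mask is assumed to be a binary segmentation array):

    from scipy.ndimage import binary_fill_holes

    filled = binary_fill_holes(mask)   # fills interior holes in the binary mask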
Giannis Dimaridis
@gdims
Alright, thanks again. Maybe I'll train the second channel directly on the myocardium. I'll work on it and see.
Giannis Dimaridis
@gdims

Another option is to split the tensor into two segmentation images and render them both

Regarding this, is there a way to do it (realtime) in Python? I've been doing some digging in the code and as far as I can understand, one has to go through C++ to get this level of control. Please correct me if I'm wrong. Good evening!

Erik Smistad
@smistad
Yes, you can do it with Python by creating a process object in Python and injecting it into the pipeline. Here is an example: https://fast.eriksmistad.no/python_process_object_8py-example.html
You would need two output ports, and to call getInputTensor instead of getInputImage
You can then split it with numpy
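A rough sketch of such a process object, modeled on the linked example. The PythonProcessObject methods used here (createInputPort, createOutputPort, addOutputData, fast.Image.createFromArray) are assumed from that example and may differ between pyFAST versions, and the channel layout is also an assumption:

    import fast
    import numpy as np

    class SplitChannels(fast.PythonProcessObject):
        def __init__(self):
            super().__init__()
            self.createInputPort(0)
            self.createOutputPort(0)   # endocardium
            self.createOutputPort(1)   # epicardium

        def execute(self):
            tensor = self.getInputTensor()          # raw network output
            data = np.asarray(tensor)               # assumed HxWxC layout

            # Threshold each class channel and emit two segmentation images
            endo = (data[..., 1] > 0.5).astype(np.uint8)
            epi = (data[..., 2] > 0.5).astype(np.uint8)
            self.addOutputData(0, fast.Image.createFromArray(endo))
            self.addOutputData(1, fast.Image.createFromArray(epi))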
Giannis Dimaridis
@gdims
I should have found this, next time I'll search more before asking :)