Martin Rajchl
@mrajchl
@negative72 I cannot spot anything from the traceback or the original example itself, but I would just diff the original code against your new one; that should give hints on what to look at. Typically, people make mistakes when adapting the reader. Did you handle the evaluation case in the reader func?
negative72
@negative72
@mrajchl I didn't handle the evaluation case in the reader, but I'll take another look. Thanks for looking into this.
npardakhti
@npardakhti
Hi guys. I'm having trouble with TensorFlow. I want to save the actual input values of the last layer; it should be a vector of size 256. I'm going to use this vector in another regressor to compare the results. Could anyone help me with the TensorFlow code needed to do that?
Martin Rajchl
@mrajchl
@npardakhti you mean you would like to save all activations going into the last layer? For each training example?
Why don't you build the regressor in the same graph and input these activations to both the last layer and the regressor?
Unless I am not seeing something here
npardakhti
@npardakhti
@mrajchl I'm going to save the feature vector of the trained network. I think it should be of size 256. While debugging, I found the last layer's tensor is: {Tensor} Tensor("pool/global_avg_pool:0", shape=(?, 256), dtype=float32)
Martin Rajchl
@mrajchl
You said before you wanted "to save the actual input values of the last layer". Anyway, you can access all trainable variables in a TensorFlow graph; if you have named the op correctly, it should be easy to retrieve. Btw, as a side note: since every run will have a different random weight init, you will not end up with the same weights (at least not in the same order). What do you actually need help with? The tensor you describe here is the output activations of the last pooling layer, not the weights. Anyway, if you point me to a line in the code it is easier to help.
npardakhti
@npardakhti
@mrajchl How do I "access all trainable variables in a tensorflow graph"? I'm not so familiar with TensorFlow. I'm using the age prediction code. Let me explain more: we have a network with 5 layers, which have 16, 32, 64, 128, and 256 filters, respectively. The output of each layer is a feature vector of size 16, 32, 64, 128, or 256, respectively. Now I want to save the vector just before regression, to use another regression approach.
If it is possible to change the regression method in the program itself, I'm glad to know how.
Filipa Marques
@FilipaMarques_gitlab

Hello everyone!
I have been experimenting with https://github.com/DLTK/DLTK/tree/master/examples/applications/IXI_HH_sex_classification_resnet. I made all the necessary changes in both reader.py and train.py in order to run my .csv and my .nii images, and it was going great.
Then I realized I had the option 'extract_examples' set to True in reader_params in train.py. Since I want my 3D network to be trained on the full images, I switched it to False, and now I am having a "dimensions problem".
I have CT scans with dimensions [X, 512, 512], where X, the number of slices, can be any number between 400 and 800.
The message preceding the exit code 1 is the following:

It is my understanding that the network only takes certain defined image sizes; the problem is that, even after reading the 2 papers referenced in the documentation, I still can't find these sizes. Can anyone help?

Tony Reina
@tonyreina
Are there any pre-trained models I could use for benchmarking computer hardware? I'm looking for something like MLPerf but specifically for medical imaging workloads/models
Martin Rajchl
@mrajchl
First of all, apologies for the delayed response, busy times.
@npardakhti You do not want the trainable variables in TensorFlow; what you want are the activations from the layer you choose. Please read up on Stack Overflow on how to do this, but here is a hint for DLTK regression: https://github.com/DLTK/DLTK/blob/master/dltk/networks/regression_classification/resnet.py#L116 (x is the activation you need).
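(For reference, a minimal sketch of pulling such an activation out of a trained TF 1.x graph by tensor name; the checkpoint paths and the input name "x:0" are placeholders, and "pool/global_avg_pool:0" is the tensor from the debugger output above:)

import numpy as np
import tensorflow as tf  # TF 1.x API, as used by DLTK

saver = tf.train.import_meta_graph("/path/to/model.ckpt.meta")  # placeholder
graph = tf.get_default_graph()
x = graph.get_tensor_by_name("x:0")                        # assumed input name
feats_t = graph.get_tensor_by_name("pool/global_avg_pool:0")

example = np.zeros([1, 64, 64, 64, 1], np.float32)         # dummy input volume

with tf.Session() as sess:
    saver.restore(sess, "/path/to/model.ckpt")             # placeholder path
    feats = sess.run(feats_t, feed_dict={x: example})      # shape (1, 256)
    np.save("features.npy", feats)                         # reuse in a regressor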
Martin Rajchl
@mrajchl
@FilipaMarques_gitlab You will need to ensure that all your training examples have the same size; since you have a variable dimension here, it breaks. I would try either padding up to 512 or 800, or cropping down to 400, so all examples have the same size. The second question is why you would want to input an entire image at all: just so you do not have to broadcast the network across patches/image subsets, as we do? I would check out this function here, it simplifies a lot for you: https://github.com/DLTK/DLTK/blob/master/dltk/utils.py#L83
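(A minimal numpy sketch of the padding option; the target shape and names are made up for illustration:)

import numpy as np

def pad_to_shape(volume, target_shape):
    # Zero-pad a 3D volume up to target_shape
    # (assumes no dimension already exceeds the target).
    pad = [(0, max(t - s, 0)) for s, t in zip(volume.shape, target_shape)]
    return np.pad(volume, pad, mode="constant")

ct = np.zeros((437, 512, 512), np.float32)      # e.g. a CT with 437 slices
ct_fixed = pad_to_shape(ct, (512, 512, 512))    # now a fixed-size example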
@tonyreina I don't think we have uploaded trained examples, as they are merely for showcasing, but there are some models uploaded in the DLTK model zoo: https://github.com/DLTK/models. However, if you are benchmarking (I assume compute performance, not predictive performance here), you would not need any pre-trained weights anyway, or am I missing something?
HTH
Tony Reina
@tonyreina
@mrajchl Thanks. Yes, it's true that I could just use random weights, but much recent hardware uses INT8 precision for inference. That requires a trained model and a small amount of actual data to make sure that the quantized model's accuracy (or whatever metric is used) is relatively unchanged.
FJUNESS
@FJUNESS
Hi Martin! @mrajchl I ran the code at https://github.com/DLTK/DLTK/blob/master/examples/applications/MRBrainS13_tissue_segmentation/train.py with my data, but I came across the error: Dimensions must be equal, but are 23 and 22 for 'enc_unit_2_0/sub_unit_add/add' (op: 'Add') with input shapes: [?,160,23,25,64], [?,160,22,25,64]. Do you know what causes this problem and how to solve it? Of course, I wrote my own reader.
Martin Rajchl
@mrajchl
@FJUNESS Yes, you are not supplying inputs with shapes that can be divided by 2 (for pooling), so the second sub_unit has an issue adding them correctly. Try not to input full images for training, but rather examples of size [64, 64, 64] or similar. Also, check the strides in all dimensions. The images in the segmentation example are highly anisotropic, i.e. have quite different voxel dimensions. Because of this, we supply irregularly sized examples to the CNN (c.f. https://github.com/DLTK/DLTK/blob/master/examples/applications/MRBrainS13_tissue_segmentation/train.py#L135).
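(A quick sanity check for this: with n stride-2 pooling stages, every spatial dimension of a training example must be divisible by 2**n; n = 3 here is an assumption about the network depth:)

def fits_pooling(shape, n_downsamples=3):
    # Each spatial dim must divide evenly by 2**n_downsamples.
    factor = 2 ** n_downsamples
    return all(s % factor == 0 for s in shape)

print(fits_pooling([64, 64, 64]))   # True
print(fits_pooling([160, 23, 25]))  # False: 23 and 25 break, as in the error above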
FJUNESS
@FJUNESS
@mrajchl Thank you for your help~ In the beginning, I tried to input the examples, but I need the segmentation of the full image. Can I get it after inputting the examples? After I input examples of size [64, 64, 64] and use the image as the label, this error happens: rank of labels (received 5) should equal rank of logits minus 1 (received 5). I think I may be inputting the wrong label. My data doesn't have labels. Can I train the network with data without labels?
Martin Rajchl
@mrajchl
@FJUNESS Whenever you get an error, it would be good to paste the stack trace (or at least the line of code); in this case, I know where to look. To answer your questions:
  1. "But I need the segmentation of the full image. Can I get it after inputting the examples?": What you train on and what you infer on are two different things. You can train on small excerpts of images (e.g. 64, 64, 64) and infer on a full image later (e.g. 256, 256, 120). A conv net broadcasts across the inputs (crops or full image). However, sometimes the GPU memory does not suffice to process a full image. This is why we have this helper function here: https://github.com/DLTK/DLTK/blob/master/dltk/utils.py#L9 (see the sketch after this list).
  2. "After I input examples of size [64, 64, 64] and use the image as the label, the error happens: rank of labels (received 5) should equal rank of logits minus 1 (received 5). I think I may be inputting the wrong label": Correct. Your labels should be of shape [batch_size, x, y, z], not one-hot encoded.
  3. "My data doesn't have labels. Can I train the network with data without labels?": No. What would you be learning then?
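(For reference, a rough numpy sketch of the train-on-crops/infer-on-full-volume idea behind that helper; the real dltk.utils function also handles padding, batching, and overlapping windows, so treat this as an illustration only:)

import numpy as np

def sliding_window_predict(volume, predict_fn, window=(64, 64, 64)):
    # Tile the volume into non-overlapping windows, run predict_fn on each
    # crop, and stitch the per-window predictions back together.
    out = np.zeros(volume.shape, np.int32)
    wz, wy, wx = window
    for z in range(0, volume.shape[0] - wz + 1, wz):
        for y in range(0, volume.shape[1] - wy + 1, wy):
            for x in range(0, volume.shape[2] - wx + 1, wx):
                crop = volume[z:z + wz, y:y + wy, x:x + wx]
                out[z:z + wz, y:y + wy, x:x + wx] = predict_fn(crop)
    return out

# e.g. with a dummy "segmenter" that thresholds each crop:
# seg = sliding_window_predict(image, lambda c: (c > 0.5).astype(np.int32))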
FJUNESS
@FJUNESS
@mrajchl Thanks, Dr. Martin. I see: I can input the full image to the model after I train the network on small excerpts. I want to get the segmentation of my 3D image, so it seems I first need to find a method to create the one-hot coded labels for my 3D images. And, haha, I will remember to paste the stack trace next time.
Martin Rajchl
@mrajchl
Sounds good, good luck
FJUNESS
@FJUNESS
@mrajchl Thank you~
Riccardo Samperna
@riccardosamperna
Hi @mrajchl, I have a question that is bugging me. I hope I can get an answer here.
Riccardo Samperna
@riccardosamperna
The question is not specific to the DLTK tool; it is more generic, about the spatial dimension order accepted by TensorFlow. I have seen from one of the examples on this page, https://dltk.github.io/DLTK/0.1.1/user_guide/reader.html, that the reader should "# Create a 5D image array with dimensions [batch_size, x, y, z, channels] ...". But from the documentation of TensorFlow's 3D convolution I read that the accepted format is instead (z, y, x). Is there a reason for this choice? Am I missing something?
Martin Rajchl
@mrajchl
@riccardosamperna The tf docs say nothing of that sort (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3D); if you can point me to where you read that, maybe that helps clear things up. I know that in recurrent units you have to put time in a certain dimension, but that is it. In fact it does not matter how you orient your image, as long as you are consistent with it (meaning medical images have orientations, e.g. right-anterior-inferior (RAI) or left-anterior-inferior (LAI)). If you need more info, just google a tutorial (e.g. http://www.grahamwideman.com/gw/brain/orientation/orientterms.htm). Btw, we also wrote a tutorial for the tensorflow blog that might clear up some aspects (https://medium.com/tensorflow/an-introduction-to-biomedical-image-analysis-with-tensorflow-and-dltk-2c25304e7c13). HTH
Riccardo Samperna
@riccardosamperna
Thanks a lot for your reply @mrajchl. Sure, I can point you to where I read that. Please look at this line in the documentation: "kernel_size: An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window...". I interpret this line as saying that the convolution kernel works in the order (z, y, x) and not (x, y, z). My interpretation might be wrong, but the TensorFlow documentation is not really clear on this. I agree with you that, from the few resources online, it looks like everyone is swapping axes to (x, y, z) after reading the images into numpy arrays with SimpleITK.
Martin Rajchl
@mrajchl
@riccardosamperna The conv kernel dimensions (e.g. z, x, y) and the tensor dimensions (dim0, dim1, dim2, etc.) should correspond, no matter what they are named. I think you are referring to the functional interface (https://www.tensorflow.org/api_docs/python/tf/nn/conv3d), where the k_size is named like that. In the class API (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv3D), these are named differently again. Just know this: a conv kernel of size (x, y, z) will apply to the physical dimensions of an input tensor of size (batch_size, x, y, z, features). This is how it works across the board; everything else would be confusing. The only special case I am aware of is that recurrent convolutions are required to have the 'time' dimension at a certain index, e.g. [batch_size, time, y, z, features], c.f. https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM2D. Hope that clears things up.
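(A tiny sketch confirming this correspondence with tf.keras; the axis sizes are arbitrary:)

import tensorflow as tf

# Kernel dim i slides along spatial tensor dim i + 1, whatever the axes
# are called; the spatial order of the input is preserved in the output.
inp = tf.keras.Input(shape=(32, 48, 64, 1))              # (x, y, z, channels)
out = tf.keras.layers.Conv3D(8, kernel_size=3, padding="same")(inp)
print(tf.keras.Model(inp, out).output_shape)             # (None, 32, 48, 64, 8)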
Riccardo Samperna
@riccardosamperna
@mrajchl thank you for the clarification. I guess it resolves my dilemma. Wish you a nice one.
Martin Rajchl
@mrajchl
ty
Manuel A. Rivas
@marivascruz
Hi @channel - any recommendations on how to get started with DICOM images?
In addition, any recommendations for OCT slice images?
Really looking forward to working with dltk
Manuel A. Rivas
@marivascruz
To be more specific
I have 40k DICOM images I'd like to process to be used as input
to DLTK
The DICOM images are currently zipped as well
Manuel A. Rivas
@marivascruz
I'm trying to use ./dcm2niix to convert the files to the nii file format
Manuel A. Rivas
@marivascruz
@mrajchl looks like you may be able to help :-)
Martin Rajchl
@mrajchl
Hi @marivascruz, in practice it does not matter which format the images are stored in, as long as the headers are consistent. SimpleITK can read DICOM series too, but we find nii more organised. How about you take a look at our tutorial to get started? https://medium.com/tensorflow/an-introduction-to-biomedical-image-analysis-with-tensorflow-and-dltk-2c25304e7c13
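(For reference, a minimal SimpleITK sketch of reading a DICOM series and saving it as NIfTI; the paths are placeholders:)

import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
files = reader.GetGDCMSeriesFileNames("/path/to/dicom_dir")  # placeholder path
reader.SetFileNames(files)
image = reader.Execute()                                     # one 3D volume
sitk.WriteImage(image, "/path/to/output.nii.gz")             # save as NIfTI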
Manuel A. Rivas
@marivascruz
Thanks @mrajchl - I tried dcm2niix, but it is taking too long to convert the DCM series. What are the functions that can read in DICOM series?
Manuel A. Rivas
@marivascruz
Thank you!!
Martin Rajchl
@mrajchl
HTH!
Yan Jin, PhD
@yjinbhhs_twitter

Hi everyone, I am very new to DLTK. I tried to run the tutorial 02_reading_data_with_dltk_reader.ipynb (https://github.com/DLTK/DLTK/blob/master/examples/tutorials/02_reading_data_with_dltk_reader.ipynb) on my local machine; I can run it in Jupyter without any problem. But when I saved it as a .py file and ran it locally, it threw an error:

File "C:/Users/yjin1/Downloads/Software/dltk/examples/Brats/test.py", line 144, in <module>
features, labels = input_fn()
....
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3401, in _create_op_internal
self._check_not_finalized()

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 2998, in _check_not_finalized
raise RuntimeError("Graph is finalized and cannot be modified.")

RuntimeError: Graph is finalized and cannot be modified.

Could anybody please tell me why the error occurred and how to fix it? I'm really stuck here. What is wrong with the line "features, labels = input_fn()"? Thanks!

Martin Rajchl
@mrajchl
It seems that you are attempting to modify the graph after it has been finalized. This is something you will need to look up on Stack Overflow. However, the tutorials were only meant to be run as notebooks, or as code chunks to inspire new development, rather than being copied 1:1 into scripts.
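(One common way this error arises with the Estimator API, plus a sketch of a workaround; input_fn here is a stand-in for the tutorial's reader-based one:)

import tensorflow as tf  # TF 1.x, as in the tutorial

def input_fn():  # stand-in for the tutorial's reader-based input_fn
    ds = tf.data.Dataset.from_tensor_slices(({"x": [[1.0], [2.0]]}, [0, 1]))
    return ds.batch(1).make_one_shot_iterator().get_next()

# tf.estimator finalizes the default graph after setting up training, so a
# later top-level call to input_fn() tries to add ops to a finalized graph
# and raises this RuntimeError. Building in a fresh graph avoids that:
with tf.Graph().as_default():
    features, labels = input_fn()
    with tf.Session() as sess:
        print(sess.run([features, labels]))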
mogs
@mogendienoch
Hello everybody, I'm glad to join this place to learn more and share.
MiftahulJannat77
@MiftahulJannat77

To whom it may concern,
I am following the DLTK ResNet architecture for binary classification. My dataset is highly imbalanced. I tried your Sparsed_balanced_cross_entropy, which did not help in my case.
So, I want to use tf.nn.weighted_cross_entropy_with_logits to assign class weights. I am using the following code:
labels = tf.reshape(labels['y'], [-1, NUM_CLASSES])
labels = tf.cast(labels, tf.float32)
loss = tf.nn.weighted_cross_entropy_with_logits(
    targets=labels,
    logits=net_output_ops['logits'],
    pos_weight=1,
    name=None)

Using this, I get the following error:

File "D:\Classi\code\Train.py", line 409, in <module>
train(args)

File "D:\Classi\code\Train.py", line 312, in train
steps = EVAL_EVERY_N_STEPS)

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1154, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow_estimator\contrib\estimator\python\estimator\replicate_model_fn.py", line 225, in single_device_model_fn
local_ps_devices=ps_devices)[0] # One device, so one spec is out.

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow_estimator\contrib\estimator\python\estimator\replicate_model_fn.py", line 566, in _get_loss_towers
**optional_params)

File "D:\Classi\code\Train.py", line 226, in model_fn
eval_metric_ops = {"accuracy": acc(labels['y'], net_outputops['y']),

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow\python\ops\array_ops.py", line 618, in _slice_helper
_check_index(s)

File "C:\Users\Interne\Anaconda3\envs\3dclassification\lib\site-packages\tensorflow\python\ops\array_ops.py", line 516, in _check_index
raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))

TypeError: Only integers, slices (:), ellipsis (...), tf.newaxis (None) and scalar tf.int32/tf.int64 tensors are valid indices, got 'y'

And, if I use one_hot_labels, I get the same error. I suppose that for weighted_cross_entropy_with_logits I am supposed to use normal labels, not one_hot_labels. Please correct me if I am wrong.

Looking forward to hearing the reason and, if possible, the solution to the error.
Thank you.
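(For what it's worth, the traceback points at Train.py line 226, where eval_metric_ops indexes labels['y']: after labels = tf.reshape(labels['y'], ...), the name labels is bound to a tensor rather than the original dict, so indexing it with 'y' raises exactly this TypeError. Note also that pos_weight=1 applies no reweighting at all. A minimal sketch of keeping the dict intact; NUM_CLASSES, POS_WEIGHT, and the placeholders below are stand-ins:)

import tensorflow as tf  # TF 1.x, where the argument is named `targets`

NUM_CLASSES = 2
POS_WEIGHT = 5.0  # hypothetical: > 1 up-weights the rare positive class

labels = {"y": tf.placeholder(tf.int32, [None, NUM_CLASSES])}        # stand-in
net_output_ops = {"logits": tf.placeholder(tf.float32, [None, NUM_CLASSES])}

# Bind the reshaped labels to a NEW name so the dict survives for
# eval_metric_ops, which still needs labels['y'] later in model_fn.
y_true = tf.cast(tf.reshape(labels["y"], [-1, NUM_CLASSES]), tf.float32)
loss = tf.nn.weighted_cross_entropy_with_logits(
    targets=y_true, logits=net_output_ops["logits"], pos_weight=POS_WEIGHT)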