    TacticBlack
    @TacticBlack
    I completed that but there is nothing beyond the cluster analysis. No image creation or warping, no info on aligning existing segmentation/registration data with the cluster classifications. I just guessed that the segmented cells were in order by row, but that still leaves me confused.
    Daniel Fürth
    @tractatus
    1) The tutorial and the figure use Mclust to perform the clustering.
    https://cran.r-project.org/web/packages/mclust/vignettes/mclust.html
    Mclust itself comes with a range of tools to determine the optimal clustering for a Gaussian mixture model.
    The most common way is to use the Bayesian Information Criterion (BIC) for expectation-maximization. This is very similar to the elbow method of variance explained: briefly, you check at how many clusters the BIC index peaks or approaches an asymptote (the "knee"). If you use plot() on the Mclust object you should see such a plot. Here is a blog post on how to determine cluster size:
    https://datascienceplus.com/finding-optimal-number-of-clusters/
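    A minimal sketch of this in R; the snr matrix here is simulated stand-in data, not from the tutorial:

    ```r
    library(mclust)

    # simulated two-cluster stand-in for per-cell SNR values
    set.seed(1)
    snr <- rbind(matrix(rnorm(200, mean = 0), ncol = 2),
                 matrix(rnorm(200, mean = 5), ncol = 2))

    mod <- Mclust(snr)        # fits Gaussian mixture models over a range of cluster counts
    plot(mod, what = "BIC")   # BIC curve; look for the peak or "knee"
    summary(mod)              # reports the selected model and number of clusters
    ```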
    2) The tutorial starts from already segmented and registered output. If you start with your own data you basically have to segment each channel and then register using a single channel. In the case of the tutorial I registered using the Pvalb channel because it had the best autofluorescence, so that anatomical features could be clearly distinguished.
    Daniel Fürth
    @tractatus
    The output of the clustering will be a vector assigning each cell to a class. In the tutorial you can see that vector as mod$classification. This is also the vector you would use if you want to plot either the SNR values with cluster colors (shown at the last line) or, to show it directly on your dataset, something like this:
    master.data<-rbind(dataset, datasetNPY, datasetPvalb)
    plot(master.data$x, master.data$y, pch=16, cex=0.5, ylim=rev(range(master.data$y)), asp=1, col = mod$classification )
    For more advanced plotting both superimposed on the original image as well as in atlas space see:
    https://github.com/tractatus/wholebrain-examples/tree/master/schematic_registration_plots
    This last tutorial should probably answer any questions you have regarding wireframing. But let me know if anything is unclear @TacticBlack
    TacticBlack
    @TacticBlack

    Thanks for the help here. I was further along than I thought. I understand the cluster analysis for the most part and the registration steps. I need to know how you pulled the mod$classification from the cluster analysis object and aligned these clusters with their respective cells in a single dataset. It looks like they are linked by their row in the data frame. I wrote something very similar to what you have above with some naming tweaks for compatibility with other objects I have in my workspace:

    Combined_Regi_Seg_Class_Data<-rbind(datasetLhx6EGFP, datasetNPY, datasetPvalb)
    Combined_Regi_Seg_Class_Data$classification<-mod$classification
    write.csv(Combined_Regi_Seg_Class_Data, file="Combined_Regi_Seg_Class_Data.csv")

    and generated the image below.
    I also wonder how you determined that a given cluster is a combination of 2 or more signals, and which specific signals those are. That is my "bigger picture" question, I guess. I will definitely look into these resources. Thanks so much!

    Clusters_over_parvalbumin.png
    Daniel Fürth
    @tractatus
    Yes, it looks like you got the hang of it. The key when combining new information is either to have a common row index, as in this case, or a separate index to merge on.
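    A toy base-R illustration of the second approach, merging on an explicit id column so the join does not depend on row order (all names and values here are made up for the example):

    ```r
    # cells with an explicit id column
    cells  <- data.frame(id = c(1, 2, 3), x = c(10, 20, 30))

    # cluster labels in a different row order
    labels <- data.frame(id = c(3, 1, 2), classification = c(2, 1, 1))

    # merge() joins on the shared "id" column, so row order does not matter
    merged <- merge(cells, labels, by = "id")
    ```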
    Ah, I see. So your question is, for example, how I determine that there exists a subpopulation of Lhx6-GFP cells that are also positive for Pvalb? As in, you want a statement like that printed as text in the console?
    TacticBlack
    @TacticBlack

    Sorry for the delay. I was looking for how you checked that a subpopulation of Lhx6-EGFP + Pvalb was truly colocalized so you could (reasonably) confidently name the whole cluster Lhx6-EGFP + Pvalb. Was this done manually? Did you go in and check some of the cells that were supposedly co-labeled?

    Also, I am having trouble saving the plots that show up from schematic.plot(). I am on a Windows machine (no quartz.save for me) and using

    png(filename="filename.png", width = 1600, height = 1200, units='px')
    schematic.plot(dataset)
    dev.off()

    It saves a file that is blank.

    schematic.plot(dataset)
    dev.copy(tiff,'tester1.tiff')
    dev.off()

    Saves a plot, but I can't alter the size of it in the code. At least, not that I know of.

    TacticBlack
    @TacticBlack
    Daniel Fürth
    @tractatus
    @TacticBlack for Windows, when the R package loads it will write a quartz() function for you (https://github.com/tractatus/wholebrain/blob/master/R/onAttach.R). On Windows that function essentially just calls windows(). See ?windows or directly savePlot() https://www.rdocumentation.org/packages/grDevices/versions/3.4.1/topics/savePlot
    Daniel Fürth
    @tractatus
    so one way of saving is:
    savePlot(filename = "filename.pdf", type = "pdf")
    dev.copy() will also work, but note that it uses the currently open device, so if you don't see any output make sure you are not running multiple devices in parallel. For example, it is common to have both RStudio and R running devices. savePlot() will specifically save from the windows() device.
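    Since dev.copy() passes extra arguments on to the device function, the output size can be set when copying; a minimal sketch (the filename is made up):

    ```r
    # draw something on the current device
    plot(1:10)

    # copy to a png device with an explicit size in pixels
    dev.copy(png, filename = "plot_copy.png", width = 1600, height = 1200)
    dev.off()   # closes the png device and writes the file
    dev.off()   # closes the original device
    ```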
    TacticBlack
    @TacticBlack

    Awesome! Thank you. We plan to run several thousand images through this. When using your code to analyze in batches, we set the periodic spacing between different images in mm from anterior to posterior.

    1. How well does this process handle missing coronal sections from skipping wells when slicing?
      Does this section of code help to mitigate these issues:

      coord<-map.to.atlas(image.number=inspected, 
      coordinate=smp.coord, 
      sampling.period=smp, 
      number.of.sections=length(images)
      )
    2. How can I make the whole batch-running process compatible with colocalization?

    Daniel Fürth
    @tractatus
    @TacticBlack this depends on what type of "missing" we are talking about. In my work I usually section tissue and then only mount a subset (let's say every second section) and use the remainder for staining etc. In those cases the "missing" is extremely systematic. So if I cut at 40 micron thickness and only mount every second well, then my sampling.period is 80 microns.
    TacticBlack
    @TacticBlack
    If we are using a subset split between 4 wells while cutting, we have every 4th section at 40 microns (4*40 = 160 microns between). This makes sense; however, we inevitably make mistakes while sectioning the tissue. For example, one might accidentally skip the 2nd well and place that tissue in the 3rd well, then continue sectioning without noticing. So you have a missing section in well 2, while wells 3, 4, and 1 now each have a section that is only 120 microns from the last correct one due to the original error in well 2. How well does the current method deal with this particular type of common error?
    Daniel Fürth
    @tractatus

    If, on the other hand, you have chunks missing in a non-systematic way, what you could do is remove those coordinates from the resulting vector coord. It is very easy to subset vectors in R and remove things based on a boolean condition. Here are some ways of doing that:

    #remove a fixed set of indexes
    remove <- c(1, 4, 7, 8)
    coord.new <- coord[-remove]
    
    #remove values that exist in a given matching vector.
    dont.use <- c(-4.13, 4.2, 3.28, 0.6)
    remove <- coord %in% dont.use
    coord.new <- coord[!remove]

    You can quickly view the coordinate in the atlas like this when uncertain how good the match is.

    schematic.plot(dataset = NULL, coordinate = coord)
    Daniel Fürth
    @tractatus
    The method deals pretty well with certain missing values etc. What I normally would do is run everything as is (with 160 micron spacing in your case). Then in your pipeline make sure to write output with makewebmap(); see the readme page: https://github.com/tractatus/wholebrain (scroll down to makewebmap()). A brain with 200 sections or so usually takes 20-30 min to process depending on your computer. You can then quickly inspect the results with glassbrain() in 3D to see if any given section looks odd. If something looks odd, I go to the HTML output from makewebmap() and check in more detail why it went wrong. If it is related to a bad coordinate match it can be worth reprocessing those sections. But if the sections themselves look problematic it is usually better to just remove them from the final dataset, using a procedure similar to the one above but on the dataset object.
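    A toy sketch of removing problematic sections from the dataset object, assuming (as in wholebrain output) an image column naming the source section; the data here are made up:

    ```r
    # made-up stand-in for a combined dataset with an image column
    dataset <- data.frame(image = c("s01.tif", "s02.tif", "s02.tif", "s03.tif"),
                          x = c(10, 20, 30, 40))

    # sections flagged as odd in glassbrain() or the webmap
    bad.images <- c("s02.tif")

    # keep only rows whose source section is not flagged
    dataset.clean <- dataset[!(dataset$image %in% bad.images), ]
    ```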
    TacticBlack
    @TacticBlack

    Thanks again for your help and patience. I am not as advanced in R coding as I would like to be. I am having trouble looping the code over multiple folders for multiple animals. I am terrible with for loops. Do you have some sample code for nesting the loop you talk about here: http://www.wholebrainsoftware.org/cms/tutorials/ If you have any good resources concerning for loops, I would gladly use them!

    Also, how can I make the whole batch-running process compatible with colocalization? As it is now, the code provided will perform registration each time it loops and requires that specific registration for creating the dataset. It also overwrites each object (seg & regi) each time it loops. I would like to keep those objects in the workspace and assign names to the objects based on values in a dataframe that correspond to the image being processed. I'm not even sure where to begin.

    Sorry for all the questions. I really am trying here.

    Daniel Fürth
    @tractatus
    @TacticBlack you probably want to check out this tutorial on some basic R and how to handle folders, files, and writing pipelines with loops:
    http://www.wholebrainsoftware.org/cms/directories-files-and-paths/
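    A minimal sketch of such a nested loop over per-animal folders (the folder layout and names are hypothetical, and the wholebrain calls are left as comments):

    ```r
    # one subfolder per animal under a top-level data folder (hypothetical layout)
    animal.folders <- list.dirs("data", recursive = FALSE)

    for (folder in animal.folders) {
      images <- list.files(folder, pattern = "\\.tif$", full.names = TRUE)
      for (img in images) {
        # per-image processing would go here, e.g.:
        # seg  <- segment(img, filter = myfilter)
        # regi <- registration(img, coordinate = coord, filter = myfilter)
        cat("processing:", img, "\n")
      }
    }
    ```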
    tg-benson
    @tg-benson
    Screenshot from 2021-09-09 14-26-25.png
    Daniel Fürth
    @tractatus
    Hi @tg-benson
    tg-benson
    @tg-benson

    Hi, I hope you're doing well! I ran into an error when running the code that reads: "Error in transformationgrid$nx[index] : negative values are not allowed in a matrix subscript" (see attached screenshot for full context). The image I'm attempting to process is in a series and doesn't have altered brightness or anything (i.e. bubbles) that should cause a bug. From my understanding, the transformation grid assigns the appropriate atlas image to my input image, as it stops right before registration. Any idea why I would be getting this? Thanks in advance!

    Screenshot from 2021-09-09 14-26-25.png

    Daniel Fürth
    @tractatus
    The error indicates that registration is trying to extend the atlas contours outside of the image you are registering to. That is not a valid operation, so it throws that error. You can solve this by checking myfilter$resize: see for that image if you can increase the parameter until it runs. If not, check that your brain section doesn't lie on the border of the image.
    So increase myfilter$resize and check if it runs. 0.05 or 0.08 are usually OK values if the pixel size is somewhere around 0.5 micron.
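    In code, that adjustment is just the following (the filter values here are a made-up stand-in for your own myfilter object):

    ```r
    # made-up stand-in for an existing segmentation/registration filter
    myfilter <- list(resize = 0.04, downsample = 0.25)

    # increase resize until registration runs; 0.05-0.08 for ~0.5 micron pixels
    myfilter$resize <- 0.08

    # then rerun, e.g.:
    # regi <- registration(image, coordinate = coord, filter = myfilter)
    ```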
    piv1k
    @piv1k

    @tractatus hi again. Thank you for your previous support. I've successfully analyzed a lot of samples. I really like the WholeBrain flexibility! Although I've been using it for a couple of months, I still find new features of WholeBrain from time to time, and that's great! Everything is fine, but I am haunted by one question. My slices are not perfect; sometimes the cells are too close to each other and look like one huge bright spot. I can separate the cells by eye, but the algorithm does not. On the other hand, sometimes cells are too dark, and decreasing the intensity threshold leads to false-positive results. Is it possible to add or erase cells manually?

    And one more question. Recently I found out that WB may be used for coexpression analysis (https://github.com/tractatus/wholebrain-examples/tree/master/coexpression)
    But I didn't find any example scripts. Could you help with this? This feature would help me greatly! Looking forward to it. Many thanks in advance!

    Daniel Fürth
    @tractatus
    @piv1k if you manually want to add, remove, or inspect things, the best way to do that is by making an interactive webmap of the slice with makewebmap(). Scroll down on the readme page here: https://github.com/tractatus/wholebrain and you'll find an example. Open the webmap in the browser and click on any of the cells and you will get the info, including an id # which shows the row number of that cell; removing rows from a data frame with an index in R is easy: index<-c(3, 7, 9); dataset[-index, ]. If you want to add features you have all the drawing tools available in the browser interface. Draw what you need and then click the download icon. That will save a txt file from your browser with an R script that, when run in R, will add those objects to your workspace. For co-expression check out: http://wholebrainsoftware.org/examples/coloc/colocalization.zip
    piv1k
    @piv1k

    @tractatus thank you for your answer. Although this way takes a lot of time, it's better than nothing. I did as you said, but the script didn't download. I got a file with the text

    manual.output <-list ( list (
    type = "Marker", coords = c (6310, 926), stereo.coords = c (1.530, 1.305)),

    but not any scripts.

    Daniel Fürth
    @tractatus
    @piv1k that is a script. When executed it will create a list object called manual.output. If, on the other hand, you want better segmentation, albeit at a higher processing time, I would recommend either CellPose or StarDist: use either the binary mask as input to segment() or take the centroids and import them into a seg$soma object.
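    A toy sketch of importing external centroids into a seg-like object; the centroid values and the minimal seg structure are made up for illustration, and a real wholebrain seg object has more fields:

    ```r
    # made-up centroid coordinates standing in for a CellPose/StarDist export
    centroids <- data.frame(x = c(120.5, 340.2, 512.0),
                            y = c(88.1, 410.7, 263.4))

    # minimal stand-in for a wholebrain seg object
    seg <- list(soma = list(x = numeric(0), y = numeric(0)))

    # append the centroids as soma positions
    seg$soma$x <- c(seg$soma$x, centroids$x)
    seg$soma$y <- c(seg$soma$y, centroids$y)
    ```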
    piv1k
    @piv1k
    @tractatus thank you for your advice. CellPose looks nice, I will try it. One more question: is it possible to get the image of my sample with the registration grid superimposed on it? Like the right-hand image resulting from the inspect.registration() function. Is there any command like schematic.plot() that can do that?
    Daniel Fürth
    @tractatus
    @piv1k yes, you want to try plot.registration(regi); you can turn the grid on or off, as well as the outlines, and change the colors etc.
    Marta Graziano
    @marta.graziano_gitlab
    Hello @tractatus, I was trying to add some cells from the webmap, as you suggested, and it worked perfectly. I have one question: once I have imported the txt as an R script and run it, how do I add it to the segmented cells in the dataset? I also have a question about the function glassbrain(): when I try to run it, R gives me back this error: Error in -dataset$DV : invalid argument to unary operator. What does it mean? Thank you very much for your help and for this tool you designed!
    Daniel Fürth
    @tractatus
    @marta.graziano_gitlab Hi Marta, let's take one thing at a time, starting with the error you are getting from glassbrain(). Could you show me what your code looks like? Maybe run something like this so I can get a feeling for your input:
    head(dataset)
    dim(dataset)
    glassbrain(dataset)
    The glassbrain() needs a dataset object to render.
    Marta Graziano
    @marta.graziano_gitlab

    Hello @tractatus, thank you for getting back to me so quickly. I ran what you suggested and the output is this:

    head(dataset1_10)
    animal AP x y intensity area id color right.hemisphere acronym name image
    3 NA 1.84 4162.222 5544.445 8269.800 15 465 #80cdf8 FALSE OT2 Olfactory tubercle, pyramidal layer 364479_rabies_esr1_LHA_1_10_GFP_V5 mcherry_overview_10x_5stack_GFP_ORG
    4 NA 1.84 4155.000 5255.000 18198.000 10 465 #80cdf8 FALSE OT2 Olfactory tubercle, pyramidal layer 364479_rabies_esr1_LHA_1_10_GFP_V5 mcherry_overview_10x_5stack_GFP_ORG
    13 NA 1.84 3758.889 4068.889 9713.725 75 56 #80cdf8 FALSE ACB Nucleus accumbens 364479_rabies_esr1_LHA_1_10_GFP_V5 mcherry_overview_10x_5stack_GFP_ORG
    16 NA 1.84 3698.485 3873.940 9534.529 55 56 #80cdf8 FALSE ACB Nucleus accumbens 364479_rabies_esr1_LHA_1_10_GFP_V5 mcherry_overview_10x_5stack_GFP_ORG
    24 NA 1.84 2187.544 3604.912 8150.682 95 800 #219866 FALSE AIv5 Agranular insular area, ventral part, layer 5 364479_rabies_esr1_LHA_1_10_GFP_V5 mcherry_overview_10x_5stack_GFP_ORG
    50 NA 1.84 4356.667 2553.333 5345.600 5 304 #2fa850 FALSE PL2/3 Prelimbic area, layer 2/3 364479_rabies_esr1_LHA_1_10_GFP_V5 mcherry_overview_10x_5stack_GFP_ORG
    dim(dataset1_10)
    [1] 11 12
    glassbrain(dataset1_10)
    Error in -dataset$DV : invalid argument to unary operator
    This is only a dataset of 1 slice, not of the entire brain.

    Daniel Fürth
    @tractatus

    @marta.graziano_gitlab if you use three backticks (grave accents) at the start and end of a code chunk it will be formatted with syntax highlighting here. It's easier to read that way.

    Anyway, I can see that you are missing the DV and ML variables; without them glassbrain() won't know the 3D atlas coordinates to plot in. These variables are only computed if you turn on the forward.warp option during registration, since that transforms the location of the cells back to the atlas (in the forward direction). The default registration transforms the atlas to the tissue (backward direction), which gives you the names of the brain regions your cells are in.

    To get the DV and ML added to the dataset in the fastest way use:

    dataset <- get.cell.ids(regi, dataset, forward.warp = TRUE)
    glassbrain(dataset)
    Marta Graziano
    @marta.graziano_gitlab
    @tractatus thank you very much! It works perfectly, thank you again! Can I bother you again about the adding-cells-to-the-webmap issue? What are the next steps after I have imported the txt as an R script and run it? How do I add it to the segmented cells in the dataset? Thank you very much again!
    Marta Graziano
    @marta.graziano_gitlab

    Hi @tractatus, I am attaching here the file regarding this message:
    **Hi Marta, can you post in the main channel an example of the code you currently have? Ideally post a .RData file with your manual.output, seg and regi objects.

    Also, at what stage do you want to add it: to the seg object or to the dataset?
    The seg object has seg$soma$x and seg$soma$y, and you can always append to these and rerun get.cell.ids() or inspect.registration().**
    Hi! Thank you for your answer. Ideally it would be super to be able to add it to dataset, because we have already removed, with the webmap tool, all the segmented objects that are not cells, and if we add this later to the seg file and rerun inspect.registration() we lose this step, unless there is a way to avoid it. But of course, if it's cleaner to append these coordinates to seg$soma$x and seg$soma$y then we will do this. I will post a .RData file as you said on the general channel, copying this message so you can find it. Thank you very much for your help and your kindness!

    Daniel Fürth
    @tractatus
    @marta.graziano_gitlab Hi Marta. This is how you can import into the seg object and rerun get.cell.ids() to update your dataset with the manually annotated points.
    #function to import the pixel coordinates into seg$soma
    add.points.to.seg <- function(manual.output, seg){
      pixel.coordinates <- do.call("rbind", lapply(manual.output, function(x){x$coords}))
      seg$soma$x <- c(seg$soma$x, pixel.coordinates[,1])
      seg$soma$y <- c(seg$soma$y, pixel.coordinates[,2])
      #use get.pixel.intensity() if you want to get that information after segmentation. here just append NA.
      seg$soma$intensity <- c(seg$soma$intensity, rep(NA, nrow(pixel.coordinates)))
      seg$soma$area <- c(seg$soma$area, rep(NA, nrow(pixel.coordinates)))
      return(seg)
    }
    #update seg object
    seg <- add.points.to.seg(manual.output, seg)
    #get the information on location in stereotactic coordinates and brain regions etc.
    dataset31_2 <- get.cell.ids(regi, seg, forward.warp = TRUE)
    Marta Graziano
    @marta.graziano_gitlab
    Super, thank you very much!
    TacticBlack
    @TacticBlack

    Hey @tractatus, your tutorial for loops and file stuff was super helpful! I am writing and understanding loops pretty well now thanks to your tutorial. I am building a pipeline for multi-channel/multi-fluorophore cell counting and colocalization for my lab. My goal is to automate as much of the process as I can. In order to more effectively use loops, I am storing data, names, filepaths, etc. in list objects with multiple levels: channel (fluorophore), mouse, and image, respectively. Thus, my list object image.names[[1]][[1]][[1]] would call channel 1, mouse 1, image 1, and myfilter has a filter setup for each channel, etc. I was able to get segmentation to run just fine!

    seg <- list()
    mouse <- list()
    seg.data <- list()
    
    for(i in seq_along(image.paths)){
      for(j in seq_along(image.paths[[i]])){
        for(k in seq_along(image.paths[[i]][[j]])){
          seg[[k]] <- segment(image.paths[[i]][[j]][[k]], filter = myfilter[[i]], display=FALSE)
          names(seg)[[k]] <- paste0("img", k)
        }
        mouse[[j]] <- seg
        names(mouse)[[j]] <- paste0("m", j)
        seg <- list()
      }
      seg.data[[i]] <- mouse
      names(seg.data)[[i]] <- paste0(channels$name[[i]])
      mouse <- list()
    }
    
    remove(seg, mouse, i, j, k)

    Thanks for the help there! And this may help someone else troubleshoot who wants to do this too so I included the code.

    On to the issue:
    I am just beginning to test registration using the following code for just one mouse (4 images total for testing) and the images are the images from your colocalization tutorial so they should register just fine. I just repeated the image from each channel of your tutorial for each animal. I have already adjusted the 'coord' vector to repeat the same coordinate so this will work for testing. However, I am running into a set of errors during registration. Image, code, and output below.

    CODE:

    for(i in seq_along(image.paths$channel1[[1]])){
      regi<-registration(image.paths$channel1[[1]][[i]], coordinate=coord[i], display=FALSE, filter = myfilter[[1]])
      dev.copy(pdf, paste0(image.paths$channel1[[1]][[i]], '.pdf'))
      dataset<-inspect.registration(regi, seg.data[[1]][[1]][[i]], soma = TRUE, forward.warps = TRUE, batch.mode = TRUE)
      dev.off()
      save(file=paste0(image.names$channel1[[1]][[i]], '.RData'), seg.data[[1]][[1]][[i]], regi, dataset)
      datasets<-rbind(datasets, dataset)
    }

    OUTPUT:

    output_XX##_m01_DAPI_01 exists in .../Wholebrain/Project Working Directory/Inputs/imgs/channel1 but is a file
    output_XX##_m01_DAPI_01 does not exist
    Loading image:.../Wholebrain/Project Working Directory/Inputs/imgs/channel1/XX##_m01_SAL_EXP_DAPI/XX##_m01_DAPI_01.tif
    ====== LOADING DONE ======
    Resizing to: 0.04% of original size.
    Image type: CV_16U_2
    65] - Area: 2.00 - Length: 5.66  - Parent: 38   - Nested: -1  
     * Contour[66] - Area: 14.00 - Length: 13.66  - Parent: 38   - Nested: -1  
    ... [abridged for length]
    Saved as: C:\Users\PJMXeon3\AppData\Local\Temp\Rtmp8mcDCn/100960324.tif 
    Loading image:C:\Users\PJMXeon3\AppData\Local\Temp\Rtmp8mcDCn/100960324.tif
    ====== LOADING DONE ======
    Resizing to: 1% of original size.
    Image type: CV_8UC3_16
    Image type: CV_8U_0
    Loading image:.../Wholebrain/Project Working Directory/Inputs/imgs/channel1/XX##_m01_SAL_EXP_DAPI/XX##_m01_DAPI_01.tif
    Error in transformationgrid$mx[index] : subscript out of bounds
    In addition: Warning messages:
    1: In rbind(coordinates, getIntersection(line, contour[indexExtrapolate[i,  :
      number of columns of result is not a multiple of vector length (arg 2)
    2: In rbind(coordinates, getIntersection(line, contour[indexExtrapolate[i,  :
      number of columns of result is not a multiple of vector length (arg 2)
    3: In rbind(coordinates, getIntersection(line, contour[indexExtrapolate[i,  :
      number of columns of result is not a multiple of vector length (arg 2)
    I couldn't upload the image; however, it is the PvalbCy5 image from the colocalization tutorial.
    Daniel Fürth
    @tractatus

    @TacticBlack the images in the co-localization tutorial have been registered with a custom set of correspondence points. I can't see what filter you are using, but the error:

    Error in transformationgrid$mx[index] : subscript out of bounds

    indicates that the resize and downsample parameters are not properly set in your filter.

    TacticBlack
    @TacticBlack
    resize = 0.04
    downsample = 0.25
    Daniel Fürth
    @tractatus
    Increase it to 0.05 or 0.08. As you increase the resize, the atlas outlines become smaller and will eventually fit your image. The error indicates that the atlas outlines are outside of your image (subscript out of bounds).
    TacticBlack
    @TacticBlack
    That may also have caused the warnings. I am glad you said that, because I assumed it was the same thing. Thanks!
    Daniel Fürth
    @tractatus
    Yes, so you can probably check the parameters used in the seg$filter objects provided in the tutorial.