    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Thank you, @shobhitpatel101
    Shobhit Patel
    @shobhitpatel101

    Dear Students,

    I hope you are having a good time working on the projects in the framework of SOCIS,
    as listed here: https://socis.esa.int/projects/

    Could you please provide a link to your public code repository to be posted
    on the website (if available)?

    Thanks
    Artur

    @vicente-gonzalez-ruiz can I give this repo: https://github.com/vicente-gonzalez-ruiz/Super-resolution_Imaging or my fork of it?
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Yes @shobhitpatel101, provide both (your fork and my repo). No problem.
    Shobhit Patel
    @shobhitpatel101
    Ok thank you
    BTW, how is the work going?
    Shobhit Patel
    @shobhitpatel101
    @vicente-gonzalez-ruiz training part is approximately done.
    Thank you for the resources.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    OK, please, could you provide some details about how you are performing the training?
    Shobhit Patel
    @shobhitpatel101

    @vicente-gonzalez-ruiz sir, I have used a set of images, generated their 0.5x downsampled versions, and trained my model to reconstruct the original (2x) images from the downsampled ones.

    Basically, this involves 3 steps:

    1) converting the original images to 0.5x (256x256) downsampled images,

    2) generating the 2x (512x512) images with the model,

    3) comparing them with the original images and updating the model to reduce the loss.
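The three steps above can be sketched end-to-end. This is a minimal illustration, not the project's actual code: random arrays stand in for the images, and a nearest-neighbour upscaler stands in for the trained model (both are assumptions for illustration).

```python
import numpy as np

def downsample_2x(img):
    """Step 1: halve the resolution by averaging 2x2 blocks."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def toy_model_2x(img):
    """Step 2: stand-in for the trained model; nearest-neighbour 2x upscale."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def mse_loss(pred, target):
    """Step 3: the reconstruction loss a real trainer would minimise."""
    return float(np.mean((pred - target) ** 2))

original = np.random.default_rng(0).random((512, 512, 3))  # stands in for a 512x512 image
low_res = downsample_2x(original)    # 256x256x3
restored = toy_model_2x(low_res)     # back to 512x512x3
loss = mse_loss(restored, original)  # value that drives the model update
```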

    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    OK, sounds good. When do you think we can do some tests?
    Shobhit Patel
    @shobhitpatel101
    Thanks, by the end of this month!
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Hi @shobhitpatel101, what is the difference between the images "original_2X" and "processed_2X"?
    Shobhit Patel
    @shobhitpatel101
    Both images are 2x, but in processed_2x some features of the images are more dominant, like edges and colour contrast.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    OK, thanks. However, remember that the idea is to use an image of half-resolution (0.5x) to get the full-resolution one (1x). In this way, we can compare with the original (1x).
    Shobhit Patel
    @shobhitpatel101
    OK, we can take an image and downsample it (1x -> 0.5x), then use the model to generate 1x from 0.5x again, which is 2x the model's input.
    I will write a function that converts the input image from 1x to 0.5x.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Please, use PyWavelets
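One possible way to do the 1x -> 0.5x step with PyWavelets is to take one 2-D DWT level and keep only the approximation (LL) subband. This is a sketch under the assumption that the Haar wavelet is acceptable; with Haar, LL equals twice the 2x2 block average, so it is divided by 2 to stay in the original intensity range.

```python
import numpy as np
import pywt

def dwt_halve(img):
    """Halve the resolution with a one-level 2-D DWT, keeping only the
    approximation (LL) subband; details (LH, HL, HH) are discarded."""
    cA, _details = pywt.dwt2(img, 'haar', axes=(0, 1))
    return cA / 2.0  # Haar LL = 2 * local average; rescale to image range

img = np.random.default_rng(0).random((512, 512, 3))  # stands in for lena
half = dwt_halve(img)                                 # 256x256x3
```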
    Shobhit Patel
    @shobhitpatel101
    Ok
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Maybe this can help you
    Shobhit Patel
    @shobhitpatel101
    thank you
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Hi, @shobhitpatel101, I would like to test the image up-sampler. How can I do that?
    Shobhit Patel
    @shobhitpatel101
    I am developing a function that outputs an accuracy score based on the above method, but we can already check the output by visual inspection: place test images in the input folder and run run_model.py; the results appear in the output folder.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    OK, so, can I run run_model.py?
    Shobhit Patel
    @shobhitpatel101
    Yes
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    OK, I have run the code. I see that by default it reads input/lena.png and outputs output/lena_original_2X.png and output/lena_processed_2X.png. My questions are: why is the resolution of the output images 2048 x 1536? And ... what is the difference between output/lena_original_2X.png and output/lena_processed_2X.png? Thanks!
    Shobhit Patel
    @shobhitpatel101
    Sir, images in the /input folder get downscaled (0.5x) and are then used to generate the 2x images in /output, so that we get an image of the same size as the input, which we can compare with it to calculate the accuracy.
    Both images are 2x, but in processed_2x some features of the images are more dominant, like edges and colour contrast.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    OK, thanks. For example, the lena image in /input (512x512) is down-scaled 2x and then up-scaled 2x. Right? But then why is the resolution of the output lena the following:
    [vruiz@pluton output] (master) pwd
    /home/vruiz/Super-resolution_Imaging/model/output
    [vruiz@pluton output] (master) file lena_original_2X.png 
    lena_original_2X.png: PNG image data, 2048 x 1536, 8-bit/color RGB, non-interlaced
    Shobhit Patel
    @shobhitpatel101
    Thanks, I have corrected the bug.
    Shobhit Patel
    @shobhitpatel101
    This was because the dimensions got changed during processing; now I store them in an extra variable and use it when saving.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Hello @shobhitpatel101, thank you for your commit. I have tested it and I think there is still some misunderstanding, because the resolutions are still wrong (for the experiments I want to do the code is probably OK; only the configuration is wrong). I have run the module run_model.py (I'm always referring to lena; if you can, please ignore the other image for now, it consumes a lot of CPU and at the moment I'm only interested in lena). The input image of the module should be lena at 256x256 and the output lena at 512x512 (now the output is 1024x1024). As I commented before, you should use the DWT to obtain the 256x256 version. Is this so?
    Shobhit Patel
    @shobhitpatel101
    Sorry for the confusion; in this run I used 512x512 as input, that's why the output is 1024x1024. I will commit a new update with 256x256.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    No problem. Thank you very much!
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Hello @shobhitpatel101, I have checked your PR (and accepted it, obviously). A question: is the file output/lena_pywt_1X.png the input of the up-scaler? If so, why is it a grayscale image? I don't see how you are able to input a grayscale image and output a color one ...
    Shobhit Patel
    @shobhitpatel101
    Grayscale is only for testing purposes; we can use it in further processing to improve accuracy.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Ok, thanks. However, could you create (as you have done with the grayscale one) the 256x256 version of lena (in color) in the output dir?
    Shobhit Patel
    @shobhitpatel101
    Ok, I will update the repository. Thanks for suggesting.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    :-)
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Hi @shobhitpatel101. A technical question about your implementation: can the super-resolution-2x coder be applied to the images of the Laplacian pyramid?
    Shobhit Patel
    @shobhitpatel101
    OK, I will check whether we get the desired results.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Thanks. In order to compute the LP (Laplacian pyramid), please use PyWavelets again ... see here.
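One way to build an LP level with PyWavelets is sketched below, assuming the Haar wavelet: the coarse image is the LL subband, and the detail image is the residual against the LL-only reconstruction, which gives perfect reconstruction by construction.

```python
import numpy as np
import pywt

def lp_level(img):
    """One Laplacian-pyramid level from the Haar DWT: return the coarse
    half-resolution image (LL subband) and the full-resolution residual."""
    cA, _details = pywt.dwt2(img, 'haar')
    # Reconstruct from the approximation band alone (details set to zero).
    pred = pywt.idwt2((cA, (None, None, None)), 'haar')
    return cA, img - pred

img = np.random.default_rng(0).random((64, 64))  # toy grayscale image
coarse, residual = lp_level(img)
# Reconstruction: expand the coarse image the same way and add the residual.
restored = pywt.idwt2((coarse, (None, None, None)), 'haar') + residual
```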
    Shobhit Patel
    @shobhitpatel101
    Ok, thank you
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Another requirement of the 2x-upscaler is that it should work with "any" input resolution, right?
    Shobhit Patel
    @shobhitpatel101
    Yes, we can do 2x upscaling with any input resolution: if the input is (x, y), then the upscaled output will be (2x, 2y).
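The (x, y) -> (2x, 2y) shape contract can be checked independently of the model. Below, a nearest-neighbour upscaler (an assumption, standing in for the trained network) shows it holds for odd, non-power-of-two sizes too.

```python
import numpy as np

def upscale_2x(img):
    """Nearest-neighbour 2x upscale, standing in for the learned model,
    just to illustrate the (x, y) -> (2x, 2y) shape contract."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

odd = np.zeros((123, 77, 3))  # odd, non-power-of-two input
out = upscale_2x(odd)         # shape (246, 154, 3)
```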
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Right!
    Shobhit Patel
    @shobhitpatel101
    I have read about the Laplacian pyramid method and I think we can implement it in our existing project with some minor changes.
    Vicente González Ruiz
    @vicente-gonzalez-ruiz
    Cool!