Tara Prasad Mishra
@Quantumstud
@lerandc. Thanks a lot for your clarification. Yeah! The Prismatic command-line demos work fine with CUDA 10.2. However, there is a problem with the pyprismatic GPU version: it exits with the error message "'GCC' failed with exit status 1". Is there any Python wrapper for the GPU version of prismatic?
Eric R. Hoglund
@erh3cq
Hi all! I’m new to prismatic and am currently using the GUI on windows. I managed to get everything installed and started following the superSTEM tutorial. Everything worked great until the full calculation. When I click full calculation a progress window flashes on the screen very fast then nothing happens. No output. I clicked save parameters hoping that I could close the program and reload. When I reloaded I received an error that the parameter file is formatted incorrectly, but reading the documentation online it looks fine (and is UTF-8). Now if I click “calculate potential”->”calculate” the program crashes. I have tried uninstalling and it seems to remember the “incorrectly formatted” parameter file location and still crashes. Any thoughts or fixes?
Ok. I just went through and manually input every parameter again, and now the potential portion works. The output still just flashes on the screen.
Luis RD
@lerandc
@Quantumstud pyprismatic should also work for the GPU version. Is there a more specific error message you are getting when compiling? There might be some helpful log files in the main build directory/pyprismatic build subdirectory (assuming you are building pyprismatic with CMake)

@erh3cq These are familiar issues with the windows version. I think your first issue (the full calculation resulting in a crash) can most likely be solved by running the GUI with administrator privileges, or by setting the output to a directory that does not require administrator write access.

The parameter file issues might be related to where it tries to save/read the file from, which often is a temp file directory—if there are spaces in the directory name, it might load the wrong folder up/try to load a non-existent file when you reopen the GUI, which could result in the incorrect format error you see. If you specify the full path to the file, this should fix it, if this is what caused the error.

Colin Ophus
@cophus

@erh3cq Yes I think @lerandc is correct. I usually need to run the windows version "as administrator." Also it's important to use a good directory path! If you can run potentials correctly and then it crashes on the full calculation, very high chance you either don't have admin privileges to make the output file, or the directory / filename path is bad. Look out for \ slashes vs / slashes! Windows directory paths will always look like "C:\Users\cophus\prismatic\outputs\datafile01.hdf"

I have another suggestion too - you can try running the Prismatic GUI manually from a command prompt window (or the new windows powershell maybe). This way if the program crashes, you can check the command prompt window to see what went wrong.

Colin Ophus
@cophus
looks like we have a bug in Prismatic potential calculation: the atomic RMS displacements are being rounded to integer values
Colin Ophus
@cophus
Simple workaround while we fix it - when saving the atomic coordinate files, make sure to add at least one extra sig fig to the atomic RMS displacement values, i.e.
0.080 not 0.08
0.10 not 0.1
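In Python, for example, this workaround amounts to writing the column with a fixed number of decimals (the helper name below is mine, not part of Prismatic):

```python
# Hypothetical helper: pad atomic RMS displacement values with trailing
# zeros so a buggy reader cannot round e.g. 0.08 or 0.1 down to 0.
def format_urms(u):
    return f"{u:.4f}"

print(format_urms(0.08))  # 0.0800
print(format_urms(0.1))   # 0.1000
```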
Eric R. Hoglund
@erh3cq
Thank you @lerandc and @cophus . That and a fresh download resolved the issue.
Now another question: using PyPrismatic, is it possible to perform a probe calculation like on the "probe analyzer" tab of the GUI? Regardless of the answer, is it possible to reuse the S-matrix calculation if I need to simulate two separate regions of interest in the same supercell, or is it just as efficient to simulate the full image? I am asking because, from my understanding, a strong benefit of the prismatic algorithm is that it can simulate a full image with a single S-matrix iteration.
Luis RD
@lerandc

@erh3cq Currently PyPrismatic can only run a full calculation and can't calculate individual steps, so you would be unable to perform a probe calculation in the same way that the GUI handles it. Without rewriting things, the easiest way to mimic the probe analyzer in pyprismatic would be to do the following:
1) Set the scan window around the part of the cell you want to run a probe through
2) Set the probe step in X and Y larger than the window, so only one probe is calculated
3) Run a full simulation in multislice and PRISM with your desired interpolation factor with 4D output turned on, and compare the results manually.
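The three steps above might be sketched with pyprismatic's Metadata interface roughly as follows; the attribute names are my best reading of the pyprismatic parameter list and should be double-checked against your installed version:

```python
import pyprismatic as pr

# Hypothetical file names; the Metadata attributes below are assumed to
# mirror the prismatic command-line parameters.
meta = pr.Metadata(filenameAtoms="atoms.xyz", filenameOutput="probe_prism.h5")
meta.algorithm = "prism"
meta.interpolationFactorX = meta.interpolationFactorY = 4

# 1) a tight scan window (fractional coordinates) around the probe position
meta.scanWindowXMin = meta.scanWindowYMin = 0.50
meta.scanWindowXMax = meta.scanWindowYMax = 0.51

# 2) probe steps larger than the window, so only one probe is calculated
meta.probeStepX = meta.probeStepY = 1.0

# 3) full simulation with 4D output saved for a manual comparison
meta.save4DOutput = True
meta.go()

# then repeat with meta.algorithm = "multislice" and compare the outputs
```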

This will be slower than the probe analyzer on the GUI, since the potential can't be reused, but should achieve the same effect. For similar reasons, the S-matrix can't be reused in pyprismatic, nor can it be saved and re-used in the command-line version. Whether it would be faster to simulate the full image or two separate regions of interest depends largely on your specific simulation settings. I would recommend running a simulation on a small region of interest with a single frozen phonon and timing the two main steps in the PRISM calculation, PRISM02 (S-matrix) and PRISM03 (output). PRISM02 will take the same time no matter the size of your region of interest (it depends on supercell size, interpolation factor, and real-space pixel size), while PRISM03 should be roughly linear in the number of probe positions you calculate.

If the S-matrix step is significantly slower and you do not care about the extra output (esp. if you aren't saving 4D output), then I would probably just simulate the full image.
Eric R. Hoglund
@erh3cq
@lerandc Thank you. The probe analyzer workaround makes complete sense. It would be nice to have the S-matrix reusable via some type of load or “already run” flag. I will try some of the suggestions here and get back to you if I have more questions. Thank you!
AJ Pryor, Ph.D.
@apryor6
FWIW, it’s quite possible to build out pyprismatic to enable this. It’s just a thin wrapping C extension, so it is possible to invoke any of the underlying C++ or CUDA code from it
The probe analyzing code is executed from the GUI as a Qt slot, so a similar entry point can be made from python
Luis RD
@lerandc
@erh3cq Agreed, I'll create an issue on the GitHub page so that we keep it documented; as @apryor6 points out, it wouldn't be the hardest thing to implement
Shuai Wang
@wangshuai1212_gitlab
@cophus One question. When I use prismatic-gui, GPU resources are fully used, while when I use the prismatic command line, the GPU is not used at all. I set parameters like -g and -S in the command line the same as I did for prismatic-gui. Could you help me figure out why the GPU is not in use? Thank you
Tara Prasad Mishra
@Quantumstud

@lerandc, I still haven't figured out pyprismatic in GPU mode. Sorry to bother you on a similar topic:

Below is the error I get while installing pyprismatic:

In file included from ./include/meta.h:20:0,
from ./include/params.h:23,
from pyprismatic/core.cpp:15:
./include/defines.h:37:10: fatal error: cuComplex.h: No such file or directory

 #include "cuComplex.h"
          ^~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1

It seems it is not able to find the cuComplex.h file.
I have checked that I compiled with CMake in python_gpu mode.
Furthermore, cuComplex.h is available in the CUDA 10.2 library.
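If the compiler genuinely cannot see the CUDA headers, one common workaround (a guess on my part, not something confirmed in this thread) is to add the CUDA include and library directories to gcc's search paths before building:

```shell
# Assumed CUDA install location -- adjust to wherever cuComplex.h
# actually lives on your system.
export CPATH=/usr/local/cuda-10.2/include:$CPATH
export LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LIBRARY_PATH
```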

reiszadeh
@reiszadeh
Hi
I have seen a bug when using the PRISM method with interpolation factor 2. I started with a 3x3x3 nm^3 sample and increased it to 6x6x9 nm^3, and it works. When I increase the box cell further, the PRISM calculation completes (while requesting a lot of memory), but the code returns zero for every element of the "realslice" matrix
Tara Prasad Mishra
@Quantumstud

Hi @lerandc ! It seems I found the problem: cuprismatic is not being compiled properly. When I use

cmake -DPRISMATIC_ENABLE_GPU=1 -DPRISMATIC_ENABLE_CLI=1 -DPRISMATIC_ENABLE_PYTHON_GPU=1 ../

Everything compiles fine, but at the end I get:

CMake Warning:
               Manually-specified variables were not used by the project:
               PRISMATIC_ENABLE_PYTHON_GPU


            -- Build files have been written to: /home/tara/apps/prismatic/prismatic-1.2.1/build
Luis RD
@lerandc
Ah, I see the issue. Sorry for not checking in earlier. I updated the flag to -DPRISMATIC_ENABLE_PYPRISMATIC, because it was a little more consistent to build pyprismatic with CMake and without the setup.py script once the HDF5 libraries were added in
Could you try this command instead and let me know how it works out for you?
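With the renamed flag, the configure step from the earlier message would presumably become (same paths as before; only the Python flag changes):

```shell
# The old -DPRISMATIC_ENABLE_PYTHON_GPU flag is no longer recognized;
# it was renamed to -DPRISMATIC_ENABLE_PYPRISMATIC.
cmake -DPRISMATIC_ENABLE_GPU=1 -DPRISMATIC_ENABLE_CLI=1 -DPRISMATIC_ENABLE_PYPRISMATIC=1 ../
```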
Tara Prasad Mishra
@Quantumstud

Thanks, @lerandc! Yeah, that indeed solves the CMake problem. Sorry, it seems I ran into another problem while pip installing; an error is generated. I think it is because the cuprismatic library is not found.
Can you tell me what I am doing wrong?

 /home/tara/anaconda3/envs/4dsim/compiler_compat/ld: cannot find -lcuprismatic

Thanks a lot!!
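One possible cause of a "cannot find -lcuprismatic" linker error (my guess, not a fix confirmed in this thread) is that the linker does not know where the freshly built library lives; pointing it at the build directory might help:

```shell
# Hypothetical paths -- substitute your actual prismatic build directory,
# i.e. wherever libcuprismatic was written by the CMake build.
export LIBRARY_PATH=$HOME/apps/prismatic/prismatic-1.2.1/build:$LIBRARY_PATH
export LD_LIBRARY_PATH=$HOME/apps/prismatic/prismatic-1.2.1/build:$LD_LIBRARY_PATH
```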

Tara Prasad Mishra
@Quantumstud
Thank you very much @lerandc. I figured out that the pip install method doesn't work; however, the setup.py method mentioned on the website works flawlessly. Wonderful!! It would be great if the website could be updated to note this.
Tara Prasad Mishra
@Quantumstud
@wangshuai1212_gitlab Are you using a Linux system? If so, did you compile Prismatic with -DPRISMATIC_ENABLE_GPU? If you have compiled correctly, Prismatic prints out "Using GPU codes" while running the calculations.
Tara Prasad Mishra
@Quantumstud
Putting it out in case someone has already done it. Is there any module or script that converts the VASP POSCAR or LAMMPS output file into the PRISM input file format?
Colin Ophus
@cophus
@Quantumstud Here are a few matlab scripts for importing VASP POSCAR files and writing Prismatic xyz outputs. Note that POSCARs come in different flavours so you might need some modifications.
To write a prismatic .xyz file, you might use a command line like:
writeXYZprismatic('filename.xyz','some comment',cellDim,atomsID(:,4),atomsID(:,1:3),1,0.0999999);
Note I use an occupancy of 1 in all sites (can be a scalar or vector input) and a constant uRMS of 0.1 for all sites ( can also be a vector) though I write it with the extra trailing 9s to get around the read bug where u = 0.1 can be read in as u = 0.
i.e. make sure your Debye-Waller uRMS values are correct in the output file please!
Tara Prasad Mishra
@Quantumstud
Thank you @cophus .
Tara Prasad Mishra
@Quantumstud
Hello all! I was trying prismatic (the conda package) on a supercomputing cluster while varying the number of CPUs. Strangely, it seems a 48-core calculation takes more time than a 24-core calculation. I have attached the output files below; the .o files, produced after the calculation is over, show the wall time and the CPU time used. I am benchmarking with the SI100.XYZ example structure. I have also attached the scratch parameters. Is there something I am doing wrong?
Tara Prasad Mishra
@Quantumstud
Hi! I had a question regarding -batchsizeTargetCPU in Prismatic. The description states: "number of probes/beams to propagate simultaneously for both CPU and GPU workers." I am confused about what the optimum batch size should be for a given number of cores. For example, if I am running a calculation on 24 cores, how should I determine the optimum batch size? Similarly, for a 48-core calculation?
yobc401
@yobc401
Hello, I'm trying to install pyprismatic in supercomputing clusters that I'm able to access. I used "conda install pyprismatic -c conda-forge" but it doesn't work properly. The error message appears in the following: EnvironmentNotWritableError: The current user does not have write permissions to the target environment.
environment location: /mpcdf/soft/SLE_12_SP4/packages/x86_64/anaconda/3/2019.03
uid: 30905
gid: 11300
I'm not able to access the cluster by root or sudo account. Could you help me to solve this critical problem? Thanks in advance!
Luis RD
@lerandc
@Quantumstud The batch size determines how many probes an individual worker thread will calculate before requesting more probes from the main host thread. Essentially, every time a worker thread goes back to the host, it requires a memory transfer—for both CPU and GPU workers. The optimum batch size is dependent on the size of your calculation and your specific architecture, so it’s a little hard to say. For multislice, if there are many slices, it doesn’t matter too much anyway, since the propagation is the slowest step. For PRISM, I would increase the batch size until your GPU ram gets fully occupied. If you are working across multiple cores on a cluster, though, I would first verify that every core is actually being utilized—I’m not sure that prismatic sends off work across multiple nodes very naturally.
@yobc401 This might be a better question for whoever runs the supercomputing cluster/is your designated admin on how to install prismatic for your local user environment
Tara Prasad Mishra
@Quantumstud
@lerandc Thanks for the reply!! Yes, on the multicore cluster, prismatic doesn't parallelize across the different physical nodes. Hence the maximum number of cores that can be used on my cluster is 24. Even if I request 48 cores, only 24 are used and the remaining 24 sit idle. Is there any workaround for that?
@yobc401 Hi! I am also using a supercomputing cluster to run Prismatic. The easiest way is the conda package that is now available: you can install Anaconda locally in your account and then use 'conda install -c conda-forge prismatic'. This way, you don't need root or sudo access!
Thomas Aarholt
@thomasaarholt
@yobc401 Even simpler: In the anaconda distribution you are using, create a new environment. This will automatically be created in your local directories, where you have writing access. conda create --name pris -c conda-forge prismatic will do what you want.
zhantaochen
@zhantaochen
Hi there, I was wondering whether Prismatic requires the unit cell (xyz file) to have alpha, beta, gamma = 90 degrees? Are we supposed to convert coordinates to an orthogonal basis, and could any of you suggest effective ways to do this type of work? I tried searching the Prismatic website and this chat room but could not find much related information (except the graphene one, but there seems to be no xyz file available to confirm my thoughts). I will appreciate any guidance!
Colin Ophus
@cophus
@zhantaochen Yes, Prismatic requires alpha = beta = gamma = 90 degrees. I always convert cells to orthogonal form using rational or pseudo-rational approximants. What cell do you need to transform? If you send it to me, I might be able to provide you with a worked example
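As a worked illustration of one such conversion (my own sketch, not one of Colin's scripts): graphene's hexagonal cell retiles exactly into an orthogonal rectangle of dimensions a by sqrt(3)*a containing four carbon atoms, so in this particular case no strain or rational approximation is needed.

```python
import math

def orthogonal_graphene_cell(a=2.46):
    """Re-express graphene's hexagonal cell (lattice constant a, Angstrom)
    as an exact orthogonal cell of dimensions (a, sqrt(3)*a) with four
    carbon atoms, returned in fractional coordinates."""
    cell = (a, math.sqrt(3.0) * a)
    fractional = [(0.0, 0.0), (0.0, 1.0 / 3.0), (0.5, 0.5), (0.5, 5.0 / 6.0)]
    return cell, fractional
```

A quick sanity check that the retiling is exact: every atom ends up with neighbours at the graphene bond length a/sqrt(3) ≈ 1.42 Å. Cells with irrational aspect ratios (the general case mentioned above) instead require picking a rational approximant and accepting a small strain.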
zhantaochen
@zhantaochen
@cophus Thank you for the response, Dr. Ophus! I do not have a target material for now, but am generally curious how to deal with non-orthogonal unit cells. I think graphene would be a great starting point (I have attached a cif file downloaded from the Materials Project). Could you please elaborate a bit more, or just throw me a link, about the (pseudo-)rational approximations? I could not find related information after a quick search. I was also wondering whether Prismatic directly tiles input unit cells; if so, I will pay additional attention to the boundaries to make sure periodicity is not interrupted between tiled UCs. Sorry for so many questions...