Thank you so much, Colin, for your guidance.
Dear Colin, how can I tell whether the accuracy of the PRISM method is good enough to switch from multislice to PRISM? Does the difference in probe intensities between the two methods (with all parameters the same in both) tell us the accuracy? Or do we have to compare the intensities in Fourier space? Or should we change other parameters (such as tile-uc) in the PRISM method for a correct comparison with multislice?
Thomas Aarholt
@thomasaarholt
@reiszadeh the accuracy depends a bit on what you're interested in. I would monitor the property that you are interested in (let's say, HAADF intensity), and then perform the same kind of convergence tests that @ophus describes, but varying the PRISM interpolation factor instead (or maybe the pixel size).
Hi Thomas. Yes, the HAADF image is my target. Suppose that, within my computational limits, I have done all the convergence tests for the PRISM method. These tests can reduce boundary-cropping artifacts on the probes and the probe wraparound error due to periodicity.
But if we treat the multislice image (with the same parameters) as the precise simulation, there are still differences between the multislice and PRISM images in real space. The question is: what is the criterion for the precision of the PRISM method? Is comparing PRISM images (after the convergence tests) to multislice images the criterion? The GUI tutorial mentions that when the difference between the PRISM and multislice images of a single atomic column, viewed in Fourier space on a log scale, is visually minimal, the precision of PRISM is high enough. Is there any way to access the Fourier-space images based on probe intensity?
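One way to quantify the Fourier-space, log-scale comparison mentioned in the GUI tutorial is with plain numpy. This is only a sketch of such a metric; `fourier_log_difference` is a hypothetical helper, not part of Prismatic or PyPrismatic:

```python
import numpy as np

def fourier_log_difference(probe_ms, probe_prism, eps=1e-12):
    """Compare two probe-intensity images in Fourier space on a log scale.

    probe_ms and probe_prism are 2D numpy arrays (e.g. multislice and
    PRISM probe intensities sampled on the same grid). Returns the mean
    absolute difference of log10 |FFT| -- a rough convergence metric,
    not an official Prismatic quantity.
    """
    f_ms = np.log10(np.abs(np.fft.fftshift(np.fft.fft2(probe_ms))) + eps)
    f_pr = np.log10(np.abs(np.fft.fftshift(np.fft.fft2(probe_prism))) + eps)
    return float(np.mean(np.abs(f_ms - f_pr)))
```

Tracking this number while tightening the interpolation factor (or pixel size) gives a quantitative version of the "visually minimal difference" check.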
@Quantumstud
Hi everyone! I was trying to reinstall Prismatic after my workstation crashed. I see that the current Prismatic v1.2 runs on CUDA 10.1. According to the CUDA 10.1 documentation, the compatible GCC version is 7.3. However, the Prismatic website says the required version to run PyPrismatic is GCC 4.7-4.9. So what is the preferred GCC version currently for installing Prismatic?
Luis RD
@lerandc

@Quantumstud the recommendations on the website are mostly for guaranteed-stability reasons, and the PyPrismatic requirement wasn't updated. You should use the compiler version that is compatible with the version of CUDA your intended system has installed, for your operating system. The same compiler should work just fine for building both the command-line version and PyPrismatic. The full compatibility guide for CUDA 10 is here: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

older toolkit documentation is here: https://developer.nvidia.com/cuda-toolkit-archive

@Quantumstud
@lerandc Thanks a lot for the clarification. The Prismatic command-line demos work fine with CUDA 10.2. However, there is a problem with the PyPrismatic GPU version: it exits with the error message "'gcc' failed with exit status 1". Is there any Python wrapper for the GPU version of Prismatic?
Eric R. Hoglund
@erh3cq
Hi all! I’m new to prismatic and am currently using the GUI on windows. I managed to get everything installed and started following the superSTEM tutorial. Everything worked great until the full calculation. When I click full calculation a progress window flashes on the screen very fast then nothing happens. No output. I clicked save parameters hoping that I could close the program and reload. When I reloaded I received an error that the parameter file is formatted incorrectly, but reading the documentation online it looks fine (and is UTF-8). Now if I click “calculate potential”->”calculate” the program crashes. I have tried uninstalling and it seems to remember the “incorrectly formatted” parameter file location and still crashes. Any thoughts or fixes?
Ok. I just went through and manually input every parameter again and now the potential portion works. The output still just flashes on the screen
Luis RD
@lerandc
@Quantumstud pyprismatic should also work for the GPU version. Is there a more specific error message you are getting when compiling? There might be some helpful log files in the main build directory/pyprismatic build subdirectory (assuming you are building pyprismatic with CMake).

@erh3cq These are familiar issues with the windows version. I think the most likely cause for your first issue (full calculation resulting in a crash) could be solved by running the GUI with administrator privileges, or setting the output to a directory that does not require administrator write access.

The parameter file issues might be related to where it tries to save/read the file from, which often is a temp file directory—if there are spaces in the directory name, it might load the wrong folder up/try to load a non-existent file when you reopen the GUI, which could result in the incorrect format error you see. If you specify the full path to the file, this should fix it, if this is what caused the error.

Colin Ophus
@cophus

@erh3cq Yes I think @lerandc is correct. I usually need to run the windows version "as administrator." Also it's important to use a good directory path! If you can run potentials correctly and then it crashes on the full calculation, very high chance you either don't have admin privileges to make the output file, or the directory / filename path is bad. Look out for \ slashes vs / slashes! Windows directory paths will always look like "C:\Users\cophus\prismatic\outputs\datafile01.hdf"

I have another suggestion too - you can try running the Prismatic GUI manually from a command prompt window (or the new windows powershell maybe). This way if the program crashes, you can check the command prompt window to see what went wrong.

Colin Ophus
@cophus
looks like we have a bug in Prismatic potential calculation: the atomic RMS displacements are being rounded to integer values
Colin Ophus
@cophus
Simple workaround while we fix it - when saving the atomic coordinate files, make sure to add at least one extra sig fig to the atomic RMS displacement values, i.e.
0.080 not 0.08
0.10 not 0.1
Eric R. Hoglund
@erh3cq
Thank you @lerandc and @cophus . That and a fresh download resolved the issue.
Now another question. Using PyPrismatic is it possible to perform a probe calculation like on the "probe analyzer" tab of the GUI? Regardless of the answer, is it possible to reuse the S-matrix calculation if I need to simulate two separate regions of interest in the same supercell or is it just as efficient to simulate the full image? I am asking because from my understanding a strong benefit of the prismatic algorithm is that it can simulate a full image with a single S-Matrix iteration.
Luis RD
@lerandc

@erh3cq Currently PyPrismatic can only run a full calculation and can't calculate individual steps, so you would be unable to perform a probe calculation in the same way that the GUI handles it. Without rewriting things, the easiest way to mimic the probe analyzer in pyprismatic would be to do the following:
1) Set the scan window around the part of the cell you want to run a probe through
2) Set the probe step in X and Y larger than the window, so only one probe is calculated
3) Run a full simulation in multislice and PRISM with your desired interpolation factor with 4D output turned on, and compare the results manually.
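Steps 1 and 2 can be sanity-checked with a toy model of the scan sampling (a simplification for illustration only; the real probe-position logic lives inside Prismatic):

```python
def n_probe_positions(window_min, window_max, step):
    """Rough model of how many probe positions fall inside a
    (fractional) scan window along one axis, given a probe step.
    Assumes probes start at window_min and advance by step while
    still inside the window -- a simplification of Prismatic's
    actual sampling.
    """
    n = 0
    x = window_min
    while x < window_max:
        n += 1
        x += step
    return n

# Steps 1-2 above: a small window around the feature of interest,
# with a probe step larger than the window width (0.5 > 0.2):
nx = n_probe_positions(0.4, 0.6, 0.5)
ny = n_probe_positions(0.4, 0.6, 0.5)
# nx * ny == 1, so only a single probe is calculated
```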

This will be slower than the probe analyzer in the GUI, since the potential can't be reused, but it should achieve the same effect. For similar reasons, the S-matrix can't be reused in pyprismatic, nor can it be saved and reused in the command-line version. Whether it would be faster to simulate the full image or two separate regions of interest depends largely on your specific simulation settings. I would recommend running a simulation on a small region of interest with a single frozen phonon and timing the two main steps in the PRISM calculation, PRISM02 (S-matrix) and PRISM03 (output). PRISM02 will take the same time no matter the size of your region of interest (it depends on supercell size, interpolation factor, and real-space pixel size), while PRISM03 should be roughly linear in the number of probe positions you calculate.

If the S-matrix step is significantly slower and you do not care about the extra output (esp. if you aren't saving 4D output), then I would probably just simulate the full image.
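The timing argument above can be turned into a rough back-of-the-envelope cost model (purely illustrative; the numbers below are made up):

```python
def prism_time_estimate(t_smatrix, t_per_probe, n_probes, n_regions=1):
    """Toy cost model: PRISM02 (S-matrix) is paid once per simulation
    run, and PRISM03 (output) scales roughly linearly with the probe
    count. Separate region-of-interest runs each pay the S-matrix
    cost again.
    """
    return n_regions * t_smatrix + n_probes * t_per_probe

# Hypothetical timings: an expensive S-matrix, cheap per-probe output.
full = prism_time_estimate(t_smatrix=100.0, t_per_probe=0.01,
                           n_probes=256 * 256)
two_rois = prism_time_estimate(100.0, 0.01,
                               n_probes=2 * 32 * 32, n_regions=2)
# With these numbers, two small ROIs win despite paying the
# S-matrix cost twice.
```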
Eric R. Hoglund
@erh3cq
@lerandc Thank you. The probe-analyzer workaround makes complete sense. It would be nice to have the S-matrix reusable via some type of load or "already run" flag. I will try some of the suggestions here and get back to you if I have more questions. Thank you!
AJ Pryor, Ph.D.
@apryor6
FWIW, it’s quite possible to build out pyprismatic to enable this. It’s just a thin wrapping c-extension, so it is possible to invoke any of the underlying C++ or CUDA code from it
The probe analyzing code is executed from the GUI as a Qt slot, so a similar entry point can be made from python
Luis RD
@lerandc
@erh3cq Agreed. I'll create an issue on the GitHub page so we keep it documented; as @apryor6 points out, it wouldn't be the hardest thing to implement.
Shuai Wang
@wangshuai1212_gitlab
@cophus One question: when I use prismatic-gui, GPU resources are fully used, but when I use the Prismatic command line, the GPU is not used at all. I set parameters like -g and -S on the command line the same as I did in prismatic-gui. Could you help me figure out why the GPU is not in use? Thank you.
@Quantumstud

@lerandc, I still haven't figured out the pyprismatic in GPU mode. Sorry to bother in a similar topic:

Below is the error I get while installing pyprismatic:

In file included from ./include/meta.h:20:0,
from ./include/params.h:23,
from pyprismatic/core.cpp:15:
./include/defines.h:37:10: fatal error: cuComplex.h: No such file or directory

#include "cuComplex.h"
^~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1

It seems it is not able to find the cuComplex.h file.
I have checked that I configured CMake in python_gpu mode.
Furthermore; the cuComplex.h is available in the Cuda-10.2 library.

Hi,
I have run into what looks like a bug when using the PRISM method with interpolation factor 2. I started with a 3x3x3 nm^3 sample and increased it up to 6x6x9 nm^3, and it works. When I increase the box cell further, although the PRISM calculation completes (while requesting a lot of memory), the code returns zero for every element in the "realslice" matrix.
@Quantumstud

Hi @lerandc ! It seems I found out the problem. The cuprismatic is not being compiled properly. When I use

cmake -DPRISMATIC_ENABLE_GPU=1 -DPRISMATIC_ENABLE_CLI=1 -DPRISMATIC_ENABLE_PYTHON_GPU=1 ../

Everything compiles fine, but at the end I get:

CMake Warning:
Manually-specified variables were not used by the project:
PRISMATIC_ENABLE_PYTHON_GPU

-- Build files have been written to: /home/tara/apps/prismatic/prismatic-1.2.1/build
Luis RD
@lerandc
Ah, I see the issue; sorry for not checking in earlier. I updated the flag to -DPRISMATIC_ENABLE_PYPRISMATIC, because it was more consistent to build PyPrismatic with CMake, without the setup.py script, once the HDF5 libraries were added.
Could you try this command instead and let me know how it works out for you?
@Quantumstud

Thanks, @lerandc, that indeed solves the CMake problem. Sorry, I ran into another problem: while pip installing, an error is generated. I think it is because the cuprismatic library is not found.
Can you say what I am doing wrong?

/home/tara/anaconda3/envs/4dsim/compiler_compat/ld: cannot find -lcuprismatic

Thanks a lot!!

@Quantumstud
Thank you very much @lerandc. I figured out that the pip install method doesn't work, but the setup.py method mentioned on the website works flawlessly. Wonderful! It would be great if the website could be updated with the tag.
@Quantumstud
@wangshuai1212_gitlab Are you using a Linux system? If so, did you compile Prismatic with -DPRISMATIC_ENABLE_GPU? If you have compiled correctly, Prismatic prints "Using GPU codes" while running the calculations.
@Quantumstud
Putting it out in case someone has already done it. Is there any module or script that converts the VASP POSCAR or LAMMPS output file into the PRISM input file format?
Colin Ophus
@cophus
@Quantumstud Here are a few matlab scripts for importing VASP POSCAR files and writing Prismatic xyz outputs. Note that POSCARs come in different flavours so you might need some modifications.
To write a prismatic .xyz file, you might use a command line like:
writeXYZprismatic('filename.xyz','some comment',cellDim,atomsID(:,4),atomsID(:,1:3),1,0.0999999);
Note I use an occupancy of 1 for all sites (can be a scalar or vector input) and a constant uRMS of 0.1 for all sites (can also be a vector), though I write it with the extra trailing 9s to get around the read bug where u = 0.1 can be read in as u = 0.
i.e. make sure your Debye-Waller uRMS values are correct in the output file please!
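For anyone converting POSCAR/LAMMPS data in Python instead of MATLAB, here is a minimal writer sketch. It assumes the Prismatic .xyz layout (a comment line, the cell dimensions, one "Z x y z occupancy uRMS" line per atom, terminated by -1); double-check against the Prismatic file-format documentation before relying on it:

```python
def write_prismatic_xyz(path, comment, cell_dim, Z, xyz,
                        occ=1.0, u_rms=0.0999999):
    """Minimal Python analogue of the MATLAB writeXYZprismatic above.

    Z is a list of atomic numbers, xyz a list of (x, y, z) positions
    in Angstroms, cell_dim the three cell dimensions. uRMS defaults to
    0.0999999 to dodge the rounding-on-read bug mentioned above; the
    seven decimal places keep the trailing 9s intact.
    """
    lines = [comment, "    {:.4f} {:.4f} {:.4f}".format(*cell_dim)]
    for i, z_num in enumerate(Z):
        lines.append("{:d} {:.4f} {:.4f} {:.4f} {:.4f} {:.7f}".format(
            z_num, xyz[i][0], xyz[i][1], xyz[i][2], occ, u_rms))
    lines.append("-1")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```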
@Quantumstud
Thank you @cophus .
@Quantumstud
Hello all! I was trying Prismatic (the conda package) on a supercomputing cluster, varying the number of CPUs. Strangely, the 48-core calculation takes longer than the 24-core one. I have attached the output files below; the .o files, written after the calculation finishes, show the wall time and CPU time used. I am benchmarking with the SI100.XYZ example structure. I have also attached the scratch parameters. Is there something I am doing wrong?
@Quantumstud
Hi! I had a question regarding -batchsizeTargetCPU in Prismatic. The description states: "number of probes/beams to propagate simultaneously for both CPU and GPU workers." I am confused about how the optimum batch size relates to the number of cores used in the calculation. For example, if I am running a calculation on 24 cores, how should I determine the optimum batch size? Similarly, for a 48-core calculation?
yobc401
@yobc401
Hello, I'm trying to install pyprismatic on a supercomputing cluster that I'm able to access. I used "conda install pyprismatic -c conda-forge" but it fails with the following error: EnvironmentNotWritableError: The current user does not have write permissions to the target environment.
environment location: /mpcdf/soft/SLE_12_SP4/packages/x86_64/anaconda/3/2019.03
uid: 30905
gid: 11300
I'm not able to access the cluster by root or sudo account. Could you help me to solve this critical problem? Thanks in advance!
Luis RD
@lerandc
@Quantumstud The batch size determines how many probes an individual worker thread will calculate before requesting more probes from the main host thread. Essentially, every time a worker thread goes back to the host, it requires a memory transfer—for both CPU and GPU workers. The optimum batch size is dependent on the size of your calculation and your specific architecture, so it’s a little hard to say. For multislice, if there are many slices, it doesn’t matter too much anyway, since the propagation is the slowest step. For PRISM, I would increase the batch size until your GPU ram gets fully occupied. If you are working across multiple cores on a cluster, though, I would first verify that every core is actually being utilized—I’m not sure that prismatic sends off work across multiple nodes very naturally.
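The batch-size trade-off boils down to how many times workers must go back to the host thread; a toy calculation (illustrative only, not Prismatic's actual scheduling code):

```python
import math

def host_round_trips(n_probes, batch_size):
    """Each worker pulls batch_size probes per request to the main
    host thread, so the total number of dispatches (and associated
    memory transfers) is the ceiling of probes over batch size."""
    return math.ceil(n_probes / batch_size)

# Larger batches mean fewer host transfers, at the cost of more
# memory held per worker. E.g. a 24x24 scan:
#   batch_size=1  -> 576 round trips
#   batch_size=16 ->  36 round trips
```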
@yobc401 This might be a better question for whoever runs the supercomputing cluster (your designated admin): ask them how to install Prismatic in your local user environment.