@reiszadeh 3 - You don't necessarily need to test the self-interstitial site itself - just a reasonable stand-in for it. For example, suppose your bulk material is without the defects - if you find converged simulation parameters for this cell, odds are high they will be sufficiently accurate for the defected cell. However, if your bulk material is aluminum but the interstitial defect site is, for example, gold, this might not be true. Gold scatters a lot more strongly than aluminum, and so might require higher accuracy.
If in doubt, run more simulations! If you think it looks good but want to check, say, tiling, repeat with one extra layer on each side. Is the result the same? If so, you are likely well-converged.
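To make the "is the result the same?" check concrete, here is a minimal sketch assuming your two simulation outputs load as NumPy arrays; the tolerance is illustrative, not a recommended value:

```python
import numpy as np

def converged(img_a, img_b, tol=1e-3):
    """Relative RMS difference between two simulated images."""
    diff = np.sqrt(np.mean((img_a - img_b) ** 2))
    scale = np.sqrt(np.mean(img_a ** 2))
    return diff / scale < tol

# toy example with a synthetic "image" and a tiny perturbation of it
a = np.random.default_rng(0).random((64, 64))
b = a + 1e-5
print(converged(a, b))  # -> True: well below the tolerance
```

The same function works for comparing the baseline tiling against the run with one extra layer on each side.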
@Quantumstud the recommendations on the website are mostly there for guaranteed-stability reasons/the PyPrismatic requirement wasn't updated. You should use the compiler version that is compatible with your version of CUDA, for whatever CUDA version and operating system your intended system has installed. The same compiler should work just fine for building both the command-line version and PyPrismatic. The full compatibility guide for CUDA 10 is here: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Older toolkit documentation is here: https://developer.nvidia.com/cuda-toolkit-archive
@erh3cq These are familiar issues with the Windows version. I think your first issue (the full calculation resulting in a crash) is most likely a write-permissions problem; it could be solved by running the GUI with administrator privileges, or by setting the output to a directory that does not require administrator write access.
The parameter file issues might be related to where it tries to save/read the file from, which is often a temp file directory. If there are spaces in the directory name, it might load the wrong folder or try to load a non-existent file when you reopen the GUI, which could result in the incorrect-format error you see. If you specify the full path to the file, that should fix it, assuming this is what caused the error.
@erh3cq Yes I think @lerandc is correct. I usually need to run the windows version "as administrator." Also it's important to use a good directory path! If you can run potentials correctly and then it crashes on the full calculation, very high chance you either don't have admin privileges to make the output file, or the directory / filename path is bad. Look out for \ slashes vs / slashes! Windows directory paths will always look like "C:\Users\cophus\prismatic\outputs\datafile01.hdf"
I have another suggestion too - you can try running the Prismatic GUI manually from a command prompt window (or the new windows powershell maybe). This way if the program crashes, you can check the command prompt window to see what went wrong.
@erh3cq Currently PyPrismatic can only run a full calculation and can't calculate individual steps, so you would be unable to perform a probe calculation in the same way that the GUI handles it. Without rewriting things, the easiest way to mimic the probe analyzer in pyprismatic would be to do the following:
1) Set the scan window around the part of the cell you want to run a probe through
2) Set the probe step in X and Y larger than the window, so only one probe is calculated
3) Run a full simulation in multislice and PRISM with your desired interpolation factor with 4D output turned on, and compare the results manually.
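The three steps above can be sketched as a parameter setup. The field names below mimic pyprismatic's Metadata conventions but should be treated as assumptions - check them against your installed version; the cell size and window are made-up example values:

```python
import math

cell_dim_x, cell_dim_y = 40.0, 40.0   # example supercell size (Angstroms)

# 1) scan window (fractional coordinates) around the feature of interest
params = {
    "scanWindowXMin": 0.45, "scanWindowXMax": 0.55,
    "scanWindowYMin": 0.45, "scanWindowYMax": 0.55,
}

# 2) probe step (Angstroms) larger than the window, so only one probe fits
window_x = (params["scanWindowXMax"] - params["scanWindowXMin"]) * cell_dim_x
window_y = (params["scanWindowYMax"] - params["scanWindowYMin"]) * cell_dim_y
params["probeStepX"] = 1.5 * window_x
params["probeStepY"] = 1.5 * window_y

n_x = max(1, math.ceil(window_x / params["probeStepX"]))
n_y = max(1, math.ceil(window_y / params["probeStepY"]))
print(n_x * n_y)  # -> 1 probe position

# 3) run this setup twice (multislice, then PRISM with your interpolation
#    factor and 4D output on) and compare the saved datacubes manually.
```

In pyprismatic these values would be set on a Metadata object before launching the run; treat that workflow description as an assumption too and follow the package's own examples.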
This will be slower than the probe analyzer in the GUI, since the potential can't be reused, but should achieve the same effect. For similar reasons, the S-matrix can't be reused in pyprismatic, nor can it be saved and re-used in the command-line version. Whether it would be faster to simulate the full image or two separate regions of interest depends largely on your specific simulation settings. I would recommend running a simulation on a small region of interest with a single frozen phonon and timing the two main steps in the PRISM calculation, PRISM02 (S-matrix) and PRISM03 (output). PRISM02 will take the same time no matter the size of your region of interest (it depends on supercell size, interpolation factor, and real-space pixel size), while PRISM03 should be roughly linear in the number of probe positions you calculate.
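That timing advice can be turned into a quick back-of-envelope model. The timings below are made-up illustrative numbers, not measurements from any real run:

```python
# Cost model for PRISM: PRISM02 (S-matrix) is a fixed cost per run,
# PRISM03 (output) scales ~linearly with the number of probe positions.
def prism_time(t_smatrix, t_per_probe, n_probes):
    return t_smatrix + t_per_probe * n_probes

t_smatrix, t_per_probe = 120.0, 0.05   # seconds, hypothetical test-run values

full_image  = prism_time(t_smatrix, t_per_probe, 256 * 256)    # one run
two_regions = 2 * prism_time(t_smatrix, t_per_probe, 64 * 64)  # S-matrix rebuilt each run

print(f"full image: {full_image:.0f} s, two ROIs: {two_regions:.0f} s")
```

With your own measured PRISM02 and PRISM03 timings plugged in, this tells you which option wins for your probe counts.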
@lerandc, I still haven't figured out pyprismatic in GPU mode. Sorry to bother you in a similar topic:
In file included from ./include/meta.h:20:0,
./include/defines.h:37:10: fatal error: cuComplex.h: No such file or directory
 #include "cuComplex.h"
          ^~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
It seems it is not able to find the cuComplex.h file.
I have checked that I configured CMake in python_gpu mode:
Furthermore, cuComplex.h is available in the CUDA 10.2 library.
Hi @lerandc! It seems I found the problem. cuprismatic is not being compiled properly. When I use
cmake -DPRISMATIC_ENABLE_GPU=1 -DPRISMATIC_ENABLE_CLI=1 -DPRISMATIC_ENABLE_PYTHON_GPU=1 ../
everything compiles fine, but at the end I get:
CMake Warning:
  Manually-specified variables were not used by the project:
    PRISMATIC_ENABLE_PYTHON_GPU

-- Build files have been written to: /home/tara/apps/prismatic/prismatic-1.2.1/build
Thanks @lerandc, yeah, that indeed solves the cmake problem. Sorry, I ran into another problem: while pip installing, an error is generated. I think it is because the cuprismatic library is not found.
Can you say what I am doing wrong?
/home/tara/anaconda3/envs/4dsim/compiler_compat/ld: cannot find -lcuprismatic
Thanks a lot!!
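For anyone hitting the same linker error: a common cause is that libcuprismatic was built but the linker isn't told where it lives. A hypothetical diagnosis, using the build directory from the CMake output above (adjust paths to your own setup):

```shell
# Look for the built library under the prismatic source tree
find /home/tara/apps/prismatic -name "libcuprismatic*" 2>/dev/null || true
# If it exists but wasn't installed to a standard location, point the
# linker (and the runtime loader) at the build directory:
export LIBRARY_PATH="/home/tara/apps/prismatic/prismatic-1.2.1/build:$LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/tara/apps/prismatic/prismatic-1.2.1/build:$LD_LIBRARY_PATH"
```

Alternatively, installing the library to a standard prefix (e.g. via the build system's install step) avoids needing the environment variables at all.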