    Zhihui Du
    @zhihuidu
    @bradcray Hi Brad, our tests show that, on a shared-memory multicore computer, a program takes much longer when run with two locales than with one. I wonder what Chapel does when it cannot have multiple physical resources to execute a program?
    Brad Chamberlain
    @bradcray
    Hi @zhihuidu — Sorry not to have noticed this earlier. I believe that you're correct that, today by default, Chapel processes assume they own the full compute node, and so will (for example) create a thread per core and pin those threads to the cores. When both processes do this, it can hurt performance of course.
    There are some advanced features that can be used to try to carve up a compute node (i.e., a shared-memory system like this one) between multiple Chapel processes, and we've had some success with them, but it's still not optimal.
    I expect that we'll be doing more here to support a user-facing "locales per node" setting that will cause the processes to be more aware of each other and cooperate better, but that doesn't exist today.
    As far as best practices for the advanced features I refer to above, Elliot Ronaghan (@ronawho on gitter) is the resident expert. I think he was giving tips on this to others at Arkouda recently, let me see if I can find them.
    Zhihui Du
    @zhihuidu
    @bradcray thanks and now I understand it!
    Brad Chamberlain
    @bradcray

    I’m not finding what I was remembering, but did find these two settings:

    CHPL_RT_OVERSUBSCRIBED=yes
    CHPL_RT_NUM_THREADS_PER_LOCALE=8  # or whatever

    That said, I’m not confident that this is sufficient. Specifically, I don’t recall whether anything will make sure that the two processes won’t use the same cores for each of their 8 threads. Elliot’s definitely the expert here, so I’d suggest checking with him next week (though actually he may still be online now… even though it’s late on a Friday on the east coast. Of course, this is true for you as well… :) )
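    As a rough illustration only (nothing verified here), these settings could be applied when launching a multi-locale run on a single shared-memory node; the binary name, thread cap, and locale count below are placeholders:

    # Hedged sketch: run a Chapel program with two co-located locales on one node.
    # Assumes ./myProgram is a hypothetical Chapel binary built for multi-locale execution.
    import os
    import subprocess

    env = dict(os.environ)
    env["CHPL_RT_OVERSUBSCRIBED"] = "yes"        # tell the runtime the node is shared
    env["CHPL_RT_NUM_THREADS_PER_LOCALE"] = "8"  # cap threads per locale (adjust as needed)

    # "-nl 2" requests two locales; both will land on this shared-memory node.
    subprocess.run(["./myProgram", "-nl", "2"], env=env, check=True)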

    Zhihui Du
    @zhihuidu
    I appreciate your quick reply on the weekend! Since the distributed-memory computer is not available right now, we want to do some experiments on a shared-memory computer using multiple locales.
    Brad Chamberlain
    @bradcray
    Not a problem (it’s not quite the weekend here yet :) ). I’ll be curious if the trick above improves things at all. It probably won’t be worse, but it probably also won’t be optimal.
    Zhihui Du
    @zhihuidu
    Let's try your method and see if there is any performance difference.
    Michael Merrill
    @mhmerrill
    Does anyone have a topic for today's Arkouda Weekly Call?
    Michael Merrill
    @mhmerrill
    Since I haven't heard from anyone, I think today's Arkouda Weekly Call topic will be the new https://github.com/Bears-R-Us/arkouda-contrib repo
    Michael Merrill
    @mhmerrill
    I am going to cancel tomorrow’s Arkouda Weekly Call
    Michael Merrill
    @mhmerrill
    today on the Arkouda Weekly Call we will discuss the structure of the arkouda-contrib repo
    Engin Kayraklioglu
    @e-kayrakli
    Hello all! In case you missed it, the Chapel Implementers and Users Workshop (CHIUW) call for submissions is out with an April 15 deadline. If you have Arkouda/Chapel-related work, we encourage you to submit it there. Even if you don’t, we hope to see you at CHIUW 2022 on June 10th. It is free and virtual.
    Michael Merrill
    @mhmerrill
    Anyone have a topic for today's Arkouda Weekly Call ?
    Michael Merrill
    @mhmerrill
    I am going to cancel today's call if no one has a subject to talk about.
    Michael Merrill
    @mhmerrill
    Today's meeting is cancelled.
    Michael Merrill
    @mhmerrill
    I am going to cancel today's Arkouda Weekly Call unless someone has a topic to discuss. Please put topics in this channel.
    Michael Merrill
    @mhmerrill
    Today's meeting is canceled.
    Zhihui Du
    @zhihuidu
    When I compile the latest arkouda version, it gives the following warnings.
    WARNING: ParquetMsg module declared in ServerModules.cfg but ARKOUDA_SERVER_PARQUET_SUPPORT is not set.
    WARNING: ParquetMsg module will NOT be built.
    Any suggestions for removing the warning safely?
    Michael Merrill
    @mhmerrill
    @zhihuidu this is just informational: if you want Parquet support to be built, set the env variable; otherwise leave it unset
    @zhihuidu you could also remove the ParquetMsg module from the ServerModules.cfg file; we should probably rework the build script to not include the Parquet module if the env var is not set
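    A small sketch of those two options, assuming it is run from the arkouda source directory before building (the module and file names come from the warning above; anything else is an assumption and may differ across versions):

    # Hedged sketch of the two options described above.
    import os
    from pathlib import Path

    if os.environ.get("ARKOUDA_SERVER_PARQUET_SUPPORT"):
        # Option 1: the env variable is set, so ParquetMsg should be built; do nothing.
        print("Parquet support requested; leaving ServerModules.cfg alone.")
    else:
        # Option 2: drop ParquetMsg from ServerModules.cfg so the warning goes away.
        cfg = Path("ServerModules.cfg")
        modules = [m for m in cfg.read_text().splitlines() if m.strip() != "ParquetMsg"]
        cfg.write_text("\n".join(modules) + "\n")
        print("Removed ParquetMsg from ServerModules.cfg.")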
    Zhihui Du
    @zhihuidu
    Got it and thanks!
    Zhihui Du
    @zhihuidu

    @mhmerrill
    Now the arkouda-njit directory is organized as follows.
    client: all Python code
    arkouda_graph: graph extension
    suffix_array: suffix array extension
    benchmarks: Python code to test the different extension functions
    server: the different Chapel code modules
    UniTestCh: Chapel unit test code (we have not implemented this part yet)

    After compiling the server binary using Kyle's Python script, we take the following steps to call the extended functions:
    (1) Under the master arkouda directory, copy the arkouda-njit directory there and rename it arkouda_njit, or create an arkouda_njit symlink to the arkouda-njit directory.
    (2) In the benchmark Python code, import arkouda_njit as njit.
    (3) Call the extended functions as njit.function (see the sketch below).

    This way the extended code can be maintained independently, while the extended functions are still used just like before under the master directory.
    We have done some preliminary tests and it works.
    If you have any suggestions, please let me know.
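    A minimal usage sketch of steps (2) and (3), assuming the arkouda_njit copy or symlink from step (1) is in place and an arkouda_server is already running (the extension function name below is a placeholder):

    # Hedged sketch: call arkouda-njit extensions alongside regular arkouda calls.
    import arkouda as ak
    import arkouda_njit as njit   # resolves via the arkouda_njit copy/symlink from step (1)

    ak.connect()                  # connect to a running arkouda_server

    a = ak.randint(0, 10, 100)    # ordinary arkouda call
    # result = njit.some_extension_function(a)   # placeholder: extensions are called as njit.<name>

    ak.disconnect()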

    Michael Merrill
    @mhmerrill
    @zhihuidu we are getting much closer to something I am happy with; I have proposed a directory structure for the arkouda-contrib repo https://github.com/Bears-R-Us/arkouda-contrib
    @zhihuidu there is an issue there with a preliminary directory structure Bears-R-Us/arkouda-contrib#3
    Zhihui Du
    @zhihuidu
    @mhmerrill Got it and thanks!
    I can follow your style. The major difference is the test directory; I think your layout makes it easy to recognize the testing code.
    Zhihui Du
    @zhihuidu
    I have updated the directory based on your examples. Please check https://github.com/Bears-R-Us/arkouda-njit
    Michael Merrill
    @mhmerrill
    @zhihuidu very good start. I think we are going to discuss the structure of the contrib directories for arkouda and generic I/O naming a bit at today's Arkouda Weekly Call

    Zoom Invite
    Michael Merrill is inviting you to a scheduled Zoom meeting.

    Topic: Arkouda Weekly Zoom Meeting
    Time: recurring meeting, Tuesdays @ 1pm ET

    Join Zoom Meeting https://us04web.zoom.us/j/77717000423?pwd=TGlmaUN3L2hScFovTy9NRXNnUTE5dz09

    Meeting ID: 777 1700 0423
    Passcode: kjM3WS

    Michael Merrill
    @mhmerrill
    do we have a topic for today's Arkouda Weekly Call?
    Michael Merrill
    @mhmerrill
    If there are NO topics for today's meeting then we'll cancel
    pierce314159
    @pierce314159
    Arkouda v2022.04.15 was just released! Thanks to everyone who contributed!
    https://github.com/Bears-R-Us/arkouda/releases/tag/v2022.04.15
    Michael Merrill
    @mhmerrill
    unless we have a topic I am going to cancel today's Arkouda weekly call
    sorry for the late notice
    pierce314159
    @pierce314159
    Hi everyone! Today's Arkouda weekly call is canceled because Mike has another meeting
    pierce314159
    @pierce314159
    Arkouda v2022.05.05 was just released! Thanks to everyone who contributed!
    https://github.com/Bears-R-Us/arkouda/releases/tag/v2022.05.05
    Chris Long
    @compiling-is-winning
    Hi all, stupid questions as I haven't built Arkouda since January, both related to Apache Arrow (I assume for the new Parquet support):
    1) On my Mac laptop, I've had no success building dependencies locally with “make install-deps”. However, I can get each of Arrow, HDF5, and ZeroMQ from Brew. I added lines to Makefile.paths with the /usr/local/Cellar paths for each of these. It compiles just fine, and I can run a single-locale Arkouda server with basic functionality. However, if I run “make test-all”, I get a bunch of failed tests related to Parquet. So am I pointing Arkouda to the right place to build with Arrow via that line in Makefile.paths?
    2) On Sherlock, I build hdf5 and zeromq fine with “make install-deps”. However, I get a CMake error when trying to build Arrow. Initially, the error was that it wasn’t finding Boost. So I installed a later version of CMake, but am now getting errors to the effect that CMake is looking for modules in the wrong directory and cannot find CMAKE_ROOT. Has anyone else encountered this?
    Brad Chamberlain
    @bradcray
    For #1, I have not tried to build Parquet / Arrow on my Mac but have not had to manually add paths to brew packages for other things. I guess my main thought there is whether some additional brew step is required to add the brew paths to your environment’s search path variables or the like? E.g., would a normal C compile find the headers and libraries without additional help? Some brew install steps ask you to add some lines to your .bashrc to enable them, though it’s been a while since I’ve had to do one of those, so I’m not certain.
    For the Parquet question, I’m tagging @bmcdonald3 who’s the resident expert.
    I have not seen that specific cmake error you’re seeing, though in the plain-old Chapel context, I have seen cases where our build process will embed cmake paths into some files, and I’ve had to do a fairly extensive clobber to clean things up and rebuild once I’ve switched cmake versions. I haven’t yet taken the time to figure out why that is, or whether our Makefiles could be updated to take care of it automatically. I wonder whether the same could be true with your Sherlock build as well, though the context is definitely different…?
    pierce314159
    @pierce314159
    hey @compiling-is-winning, @Ethan-DeBandi99 says he'll sync with you about this tomorrow, but we think the failed tests might be solved by PR Bears-R-Us/arkouda#1392
    Chris Long
    @compiling-is-winning
    @bradcray thanks! My guess is that it would. If I just run "which h5ls" (or something analogous for the zeromq and arrow dependencies), I get a path to a system directory that, on further inspection, contains a symbolic link to the directory in Cellar where it was installed with Brew. For the CMake issues on Sherlock, I will try clobbering all of my existing installs later tonight.
    @pierce314159 cool, thanks! If I run "make test-all", I do indeed get one error in dataframe_test.py and two in io_test.py, with the same routines as mentioned in the PR you linked above
    Chris Long
    @compiling-is-winning
    Looks like with "make test-chapel" I get one error with UnitTestParquetCpp
    Michael Merrill
    @mhmerrill
    @compiling-is-winning I just checked in the PR that fixes the problem, I think