    maxernsts
    @maxernsts
    The project that I'm working on is the deep ocean currents. Thanks a lot
    Can you send me the version that's working fine?
    Can I run the non-released version in the online launcher?
    Erik van Sebille
    @erikvansebille
    The version is on the oceanparcels github. And no, you can’t run it in the online launcher
    maxernsts
    @maxernsts
    Send me the link to download it please.
    Erik van Sebille
    @erikvansebille
    please see the oceanparcels.org website
    maxernsts
    @maxernsts
    caaa.PNG
    Please help me, I got this error while trying to install it!
    maxernsts
    @maxernsts
    Capture.PNG
    I installed the non-released version but I got this error!
    Alexander Kier Christensen
    @christensen5
    Hi all. Am I right in thinking that the to_write = "once" flag in a custom particle Variable is intended to speed up runs by not repeatedly writing particle information that doesn't change with time (e.g. something like species name)?
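    (As a minimal illustration of the flag under discussion; the class and variable names below are hypothetical, not from this conversation:)

    import numpy as np
    from parcels import JITParticle, Variable

    # A per-particle constant (e.g. a species code) that should be written to the
    # output file only once, rather than at every output timestep
    class SampleParticle(JITParticle):
        species = Variable('species', dtype=np.float32, initial=0., to_write='once')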
    Ignasi Vallès
    @ignasivalles
    Hi Alexander! I was doing it for that purpose.
    Alexander Kier Christensen
    @christensen5
    Hi Ignasi. Did you directly compare your runtimes between to_write = "once" and to_write = "True"? I was recently playing around with this flag to try and debug a separate issue and I've noticed a major slowdown when running with to_write = "once" variables.
    Alexander Kier Christensen
    @christensen5
    I've investigated a little further by adding a custom variable to the particle class in example_stommel.py and running a few sims with this variable set to write either "once" or "True", on both my laptop and my university cluster. On both systems the to_write="once" runs seem to spend significantly longer in pset.execute() than the to_write="True" runs; at 100,000 particles it's about an order of magnitude difference.
    runtimes.png
    Here's a quick plot of what I found. Solid lines are to_write="once" runs, dashed lines are to_write="True" runs. Note the log scale on the y-axis.
    (CX1 is the university cluster, the other results are on my laptop).
    Ignasi Vallès
    @ignasivalles
    that's curious ... I would expect the opposite. But do you still have the same error when you set to_write='once'? I can't test it myself, but it would be nice to see if this difference in execution time also happens with older versions.
    Alexander Kier Christensen
    @christensen5
    I'm not certain I understand your question - there are no errors per se, it's just that setting to_write='once' results in simulations taking much longer than if I set to_write=True
    Erik van Sebille
    @erikvansebille
    Thanks @christensen5 for reporting this bug. Could you put it in an Issue on the github page, so that we can track it there? This gitter is more for quick questions, not for reporting bugs.
    Alexander Kier Christensen
    @christensen5
    Understood.
    Alexander Kier Christensen
    @christensen5
    Quick question relating to manual choice of chunksize - in the jupyter notebook tutorial you specify chunksizes as a tuple with sizes for each dimension respectively - e.g. (time, dep, lon, lat). Do you know how Dask interprets being given just one value for chunksize - e.g. chunksize = 512 rather than chunksize = (1, 1, 512, 512)?
    I've just realised I have been testing the chunksize effect this way rather than specifying tuples, and I'm wondering if that's why I'm seeing slightly surprising results.
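    (For reference, a quick way to check how Dask expands a scalar chunksize; this is a generic Dask sketch, not Parcels-specific:)

    import numpy as np
    import dask.array as da

    data = np.zeros((1, 1, 2048, 2048), dtype='float32')

    # A scalar chunksize is applied to every dimension (capped at each dimension's length)
    print(da.from_array(data, chunks=512).chunks)
    # ((1,), (1,), (512, 512, 512, 512), (512, 512, 512, 512))

    # A tuple gives explicit control per dimension (time, depth, lat, lon)
    print(da.from_array(data, chunks=(1, 1, 512, 512)).chunks)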
    Frank Gonzalez
    @FrankGonzalez_gitlab
    Hello. I'm trying to install and use parcels on my MacBook Pro, OSX 10.15.3 (Catalina). I first selected "Launch Binder" to run tutorial_delaystart, which executed perfectly. Encouraged, I followed the installation instructions on http://oceanparcels.org/. I installed all the examples, opened a jupyter notebook and ran the same example, i.e. tutorial_delaystart.ipynb. It worked okay down to Cell 5: plotTrajectoriesFile('DelayParticle_time.nc', mode='movie2d_notebook'). Instead of a plot, it printed the following error message:

    TclError Traceback (most recent call last)
    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/IPython/core/formatters.py in __call__(self, obj)
    343 method = get_real_method(obj, self.print_method)
    344 if method is not None:
    --> 345 return method()
    346 return None
    347 else:

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/matplotlib/animation.py in _repr_html_(self)
    1387 fmt = rcParams['animation.html']
    1388 if fmt == 'html5':
    -> 1389 return self.to_html5_video()
    1390 elif fmt == 'jshtml':
    1391 return self.to_jshtml()

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/matplotlib/animation.py in to_html5_video(self, embed_limit)
    1328 bitrate=rcParams['animation.bitrate'],
    1329 fps=1000. / self._interval)
    -> 1330 self.save(str(path), writer=writer)
    1331 # Now open and base64 encode.
    1332 vid64 = base64.encodebytes(path.read_bytes())

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
    1154 progress_callback(frame_number, total_frames)
    1155 frame_number += 1
    -> 1156 writer.grab_frame(**savefig_kwargs)
    1157
    1158 # Reconnect signal for first draw if necessary

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/matplotlib/animation.py in grab_frame(self, **savefig_kwargs)
    378 # user. We must ensure that every frame is the same size or
    379 # the movie will not save correctly.
    --> 380 self.fig.set_size_inches(self._w, self._h)
    381 # Tell the figure to save its data to the sink, using the
    382 # frame format and dpi.

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/matplotlib/figure.py in set_size_inches(self, w, h, forward)
    910 manager = getattr(self.canvas, 'manager', None)
    911 if manager is not None:
    --> 912 manager.resize(int(canvasw), int(canvash))
    913 self.stale = True
    914

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/site-packages/matplotlib/backends/_backend_tk.py in resize(self, width, height)
    530
    531 def resize(self, width, height):
    --> 532 self.canvas._tkcanvas.master.geometry("%dx%d" % (width, height))
    533
    534 if self.toolbar is not None:

    ~/opt/anaconda3/envs/py3_parcels/lib/python3.6/tkinter/__init__.py in wm_geometry(self, newGeometry)
    1839 """Set geometry to NEWGEOMETRY of the form =widthxheight+x+y. Return
    1840 current value if None is given."""
    -> 1841 return self.tk.call('wm', 'geometry', self._w, newGeometry)
    1842 geometry = wm_geometry
    1843 def wm_grid(self,

    TclError: can't invoke "wm" command: application has been destroyed

    <matplotlib.animation.FuncAnimation at 0x1209d66a0>

    Not sure where to go from here, so any help will be much appreciated. Thanks. --Frank
    Erik van Sebille
    @erikvansebille
    Hi @FrankGonzalez_gitlab; this seems to be a bug with your installation of jupyter/matplotlib/cartopy. Not much we can do here from the parcels side. Are you sure you followed the installation guidelines on the oceanparcels.org webpage?
    HaydenSchilling
    @HaydenSchilling

    Hi, I'm trying to run some simulations but am having trouble getting the BrownianMotion2D kernel to work. The simulations run fine without this kernel, but with this kernel it gives the following error:

    INFO: Compiled SampleParticleAdvectionRK4SampleTempSampleBathySampleDistanceSampleAgeBrownianMotion2D ==> C:\Users\hayde\AppData\Local\Temp\parcels-tmp\77ee8631be77ff20ca9291b9f11c3f36_0.dll
    Traceback (most recent call last):
    
      File "<ipython-input-102-e5ab9088a748>", line 162, in <module>
        endtime=end_time)
    
      File "C:\Users\hayde\.conda\envs\py3_parcels\lib\site-packages\parcels\particleset.py", line 527, in execute
        self.kernel.execute(self, endtime=time, dt=dt, recovery=recovery, output_file=output_file)
    
      File "C:\Users\hayde\.conda\envs\py3_parcels\lib\site-packages\parcels\kernel.py", line 341, in execute
        self.execute_jit(pset, endtime, dt)
    
      File "C:\Users\hayde\.conda\envs\py3_parcels\lib\site-packages\parcels\kernel.py", line 250, in execute_jit
        fargs = [byref(f.ctypes_struct) for f in self.field_args.values()]
    
      File "C:\Users\hayde\.conda\envs\py3_parcels\lib\site-packages\parcels\kernel.py", line 250, in <listcomp>
        fargs = [byref(f.ctypes_struct) for f in self.field_args.values()]
    
      File "C:\Users\hayde\.conda\envs\py3_parcels\lib\site-packages\parcels\field.py", line 964, in ctypes_struct
        if not self.data_chunks[i].flags.c_contiguous:
    
    AttributeError: 'NoneType' object has no attribute 'flags'

    Here are (I think) the relevant input parameters:

    # Set diffusion constants.
    Kh_zonal = 100
    Kh_meridional = 100
    
    fieldset = FieldSet.from_nemo(filenames, variables, dimensions, indices, allow_time_extrapolation=True)#, transpose=True)
    
    # Create field of Kh_zonal and Kh_meridional, using same grid as U
    #[time, depth, particle.lon, particle.lat] # Think this order is correct for here
    size4D = (30,30,fieldset.U.grid.ydim, fieldset.U.grid.xdim)
    fieldset.add_field(Field('Kh_zonal', Kh_zonal*np.ones(size4D), grid=fieldset.temp.grid))
    fieldset.add_field(Field('Kh_meridional', Kh_meridional*np.ones(size4D), grid=fieldset.temp.grid))

    Thanks so much for any advice

    Erik van Sebille
    @erikvansebille
    Thanks @HaydenSchilling. Could you put this as an Issue on the github page? I think I'd be able to help, but it's more visible and traceable if it's on there
    (I think it's because the fieldset.temp uses dask chunking, so its grid has all kinds of additional flags that you don't need for the Kh). A quick workaround would be to not use grid= but instead set the dimensions yourself
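    (A rough sketch of that workaround, assuming the Kh fields can be defined on the U grid's lon/lat arrays; the variable names follow the snippet above:)

    import numpy as np
    from parcels import Field

    # Build the Kh fields on explicit dimension arrays instead of reusing the
    # dask-chunked temp grid; taking lon/lat from the U field is an assumption here.
    lon = fieldset.U.grid.lon
    lat = fieldset.U.grid.lat
    fieldset.add_field(Field('Kh_zonal', Kh_zonal * np.ones((len(lat), len(lon))), lon=lon, lat=lat))
    fieldset.add_field(Field('Kh_meridional', Kh_meridional * np.ones((len(lat), len(lon))), lon=lon, lat=lat))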
    HaydenSchilling
    @HaydenSchilling
    Thanks @erikvansebille . I have now created an issue on Github. I'll also have a go at setting the dimensions myself!
    Olmo Zavala
    @olmozavala

    Hello, I'm running a simple model with RK4 and Diffusion, but it uses a large number of particles (290k). I'm trying to run it now on a cluster with MPI, so I have a couple of questions:
    1) Is there anything extra that I need to do to my code besides the mpirun -np # part?
    2) When I run with MPI, multiple processes try to write the output at the same time in the line out_parc_file.export(), so I get a permission denied error. If I run this with a single processor everything is fine. I have been using parcels for a couple of weeks now, so I may be doing something wrong.

    Thanks a lot for your help.

    Erik van Sebille
    @erikvansebille
    Thanks @olmozavala for your question. Could you log it on https://github.com/OceanParcels/parcels/issues so that others can comment on it too?
    Amaru Marquez
    @amarux

    Hello,

    I am new to OceanParcels and I would like to simulate the trajectory of an Argo float just as in the tutorial "Argo sampling Kernel", but using the currents available from HYCOM model data. I installed parcels version 2.1.5, and the ncdump -h output of a file with the data is:

    netcdf HYCOM_GLBy0.08_expt_93.0_uv3z_20191201T000000 {
    dimensions:
        depth = 33 ;
        lat = 201 ;
        lon = 119 ;
        time = UNLIMITED ; // (1 currently)
    variables:
        double depth(depth) ;
            depth:_FillValue = NaN ;
            depth:long_name = "Depth" ;
            depth:standard_name = "depth" ;
            depth:units = "m" ;
            depth:positive = "down" ;
            depth:axis = "Z" ;
            depth:NAVO_code = 5 ;
        double lat(lat) ;
            lat:_FillValue = NaN ;
            lat:long_name = "Latitude" ;
            lat:standard_name = "latitude" ;
            lat:units = "degrees_north" ;
            lat:point_spacing = "even" ;
            lat:axis = "Y" ;
            lat:NAVO_code = 1 ;
        double lon(lon) ;
            lon:_FillValue = NaN ;
            lon:long_name = "Longitude" ;
            lon:standard_name = "longitude" ;
            lon:units = "degrees_east" ;
            lon:modulo = "360 degrees" ;
            lon:axis = "X" ;
            lon:NAVO_code = 2 ;
        double time(time) ;
            time:NAVO_code = 13 ;
            time:axis = "T" ;
            time:calendar = "gregorian" ;
            time:long_name = "Valid Time" ;
            time:time_origin = "2000-01-01 00:00:00" ;
            time:units = "hours since 2000-01-01" ;
        float water_u(time, depth, lat, lon) ;
            water_u:NAVO_code = 17 ;
            water_u:coordinates = "time" ;
            water_u:long_name = "Eastward Water Velocity" ;
            water_u:missing_value = -30000s ;
            water_u:standard_name = "eastward_sea_water_velocity" ;
            water_u:units = "m/s" ;
        float water_v(time, depth, lat, lon) ;
            water_v:NAVO_code = 18 ;
            water_v:coordinates = "time" ;
            water_v:long_name = "Northward Water Velocity" ;
            water_v:missing_value = -30000s ;
            water_v:standard_name = "northward_sea_water_velocity" ;
            water_v:units = "m/s" ;

    After running an example similar to those in the tutorial, I got the following lines:

    WARNING: Casting lon data to np.float32
    WARNING: Casting lat data to np.float32
    WARNING: Casting depth data to np.float32
    WARNING: Trying to initialize a shared grid with different chunking sizes - action prohibited. Replacing requested field_chunksize with grid's master chunksize.
    INFO: Compiled ArgoParticleArgoVerticalMovementAdvectionRK4 ==> /tmp/parcels-1000/5ea591ab2da8acc99f87aa32ccc5f9b0_0.so
    INFO: Temporary output files are stored in out-LFFTNCNT.
    INFO: You can use "parcels_convert_npydir_to_netcdf out-LFFTNCNT" to convert these to a NetCDF file during the run.
    100% (432000.0 of 432000.0) |###############| Elapsed Time: 0:00:00 Time:  0:00:00
    Exception ignored in: <function ParticleFile.__del__ at 0x7f32c59fbd08>
    Traceback (most recent call last):
      File "/home/usuario/anaconda3/lib/python3.7/site-packages/parcels/particlefile.py", line 196, in __del__
      File "/home/usuario/anaconda3/lib/python3.7/site-packages/parcels/particlefile.py", line 201, in close
      File "/home/usuario/anaconda3/lib/python3.7/site-packages/parcels/particlefile.py", line 365, in export
      File "/home/usuario/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 428, in load
    NameError: name 'open' is not defined

    Could you help me to understand what I missed?

    Thank you!

    Erik van Sebille
    @erikvansebille
    That last error is not really a problem; it's because the netcdf file only gets closed while the python script itself is shutting down. You can avoid it by calling pfile.close() before the end of your script. But the output file should be the same
    (see also OceanParcels/parcels#794 for some background)
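    (In code, that would look roughly like the following; the file name, kernels and time arguments are placeholders for whatever your script already uses:)

    from datetime import timedelta as delta

    output_file = pset.ParticleFile(name='argo_floats.nc', outputdt=delta(hours=6))
    pset.execute(kernels, runtime=delta(days=5), dt=delta(minutes=5), output_file=output_file)
    output_file.close()  # close explicitly before the script ends, so the export to NetCDF is finalised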
    Amaru Marquez
    @amarux
    Thank you! file.close() solved the error and also the output netcdf is produced without problems.
    HaydenSchilling
    @HaydenSchilling
    Hi, I'm trying to use the Bluelink ReANalysis (BRAN) ocean model which is accessible online (http://dapds00.nci.org.au/thredds/catalog/gb6/BRAN/BRAN_2016/OFAM/catalog.html). Is there a way to pass a list of URLs (e.g. OPeNDAP) in place of local filenames when initializing the fieldset? I have managed to get it running with a subset of files I downloaded, but am wondering if it would be possible to run simulations without downloading all the files to my computer?
    Willi Rath
    @willirath
    Yes, you can use xarray datasets with Parcels. And xarray can load OPeNDAP data in a lazy way.
    HaydenSchilling
    @HaydenSchilling

    Thanks so much, I think that's exactly what I was looking for but I'm not sure I'm loading the files correctly (note separate u and v files) as it is overloading the memory with just the same two files I was able to load locally. Do you have any suggestions?

    import xarray as xr
    
    files = ['http://dapds00.nci.org.au/thredds/dodsC/gb6/BRAN/BRAN_2016/OFAM/ocean_u_1994_01.nc',
             'http://dapds00.nci.org.au/thredds/dodsC/gb6/BRAN/BRAN_2016/OFAM/ocean_v_1994_01.nc']
    
    ds = xr.open_mfdataset(paths = files, combine = "by_coords") 
    
    variables = {'U': 'u',
                 'V': 'v',
                 'time': 'Time'}
    
    dimensions = {'lat': 'yu_ocean',
                  'lon': 'xu_ocean',
                  'time': 'Time',
                  'depth': 'st_ocean'}
    
    indices = {'depth': [50]} # surface layer only - note this doesn't seem to make any difference at this stage.
    
    fieldset = FieldSet.from_xarray_dataset(ds, variables, dimensions, indices)

    Results in:

    MemoryError: Unable to allocate 15.9 GiB for an array with shape (31, 51, 1500, 3600) and data type int16
    Willi Rath
    @willirath

    I think that's because xarray will, by default, use Dask chunks that span the whole file. Try, e.g.,

    ds = xr.open_mfdataset(
        paths=files,
        chunks={"Time": 1, "st_ocean": 1}
    )

    to have chunks spanning only the horizontal plane.

    Then, you'll also be able to just select for depth level 50 and have Xarray only retrieve the requested level.
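    (Building on that, a sketch of how a single depth level could then be selected lazily before constructing the FieldSet; index 50 mirrors the indices={'depth': [50]} from the snippet above:)

    # Keep only depth index 50 (as a length-1 dimension) so that only this level is fetched
    ds_level = ds.isel(st_ocean=[50])
    fieldset = FieldSet.from_xarray_dataset(ds_level, variables, dimensions)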
    HaydenSchilling
    @HaydenSchilling
    Thanks @willirath I think it's now running. It no longer crashes but the FieldSet.from_xarray_dataset(ds, variables, dimensions, indices) command has now been running for over two hours. Would this be because of a rubbish internet connection or is there something else I should be concerned about?
    JG-2020
    @JG-2020

    Hi,

    I am new to Parcels, so I apologize if this question is trivial. The following loop is part of our code used to set up our initial conditions. The particle depths depend on the local bathymetry (Bathy) and mixed layer depth (MLD), the 2D fields for which have already been loaded and added to the fieldset. latp and lonp are the arrays of initial particle latitudes and longitudes, respectively, for our approximately 50,000 particles.

    for i in range(0,len(lonp)):
        bathyp[i] = fieldset.Bathy[0,0,latp[i], lonp[i]]
        MLDp[i] = fieldset.MLD[0,0,latp[i], lonp[i]]

    This loop therefore requires ~50,000 iterations and as such it takes a long time to compute. Instead of computing this element by element as the above loop does, we would like to compute the values for the arrays as a whole. We know in MATLAB that using vectorized calculations is much faster than for loops, and so we were wondering if it was possible to take advantage of such efficiencies here. I tried coding this similarly to how I would code it in MATLAB, as shown here:

    bathyp = fieldset.Bathy[0,0,latp, lonp]
    MLDp = fieldset.MLD[0,0,latp, lonp]

    Which returns the error:

    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

    Is fieldset not capable of interpolating my Bathy and MLD arrays for the entire latp and lonp coordinate arrays without the use of a loop?

    Thanks for the help.

    vpavlo21
    @vpavlo21

    Hi all
    I am new to Parcels and Python in general and I just have a hopefully very quick question. I am trying to run a simulation that runs over multiple days, and for each day I have a new dataset. So I have created a list with all the files I am using, and then when I try to initialize my fieldset I run into the following:

    file_names ={'U':files,
                 'V':files,
                 'W':files}
    variables = {'U':'uo',
                 'V':'vo',
                 'W':'w'}
    dimensions =  {'lat':'latitude',
                   'lon':'longitude',
                   'depth':'depth',
                   'time':'time'}
    fset = FieldSet.from_netcdf(file_names,variables,dimensions)

    which results in the error

    ValueError: Could not convert object to NumPy datetime

    Normally when I run a simulation with just one day of data I don't include the time dimension but here if I don't put it in I get an error telling me it's required.
    Also when I execute the simulation on a particle set will it automatically switch to a new set of data after a day has passed?

    Thank you very much

    Erik van Sebille
    @erikvansebille
    Hi @JG-2020. For your question about slowness of setting the Fields: it's probably easiest to create a Kernel that does this interpolation and then execute it once (with dt=0). If you use JITParticles, then the interpolation should be very fast
    Hi @vpavlo21. To answer your question, I need to know a bit more about the time dimension in your netcdf files. It may be that it uses a calendar that cannot be converted by xarray's cftime handler. Also, where exactly (on which line in the parcels code) do you get the ValueError? It's always a good idea to submit the entire error log
    JG-2020
    @JG-2020

    Hi @erikvansebille
    Thanks very much for your reply.

    I believe we are using JITParticles, although we created a new class of particle that samples our field properties, following the tutorial example. Our goal is to speed up the particle initialization because it is currently taking about an hour, and we plan to run simulations with larger domains and many more particles. As I am a beginner, I am hoping you could expand a bit on what you are recommending so I know exactly how to make the code more efficient. Specifically:

    1) Do you mean we should define a kernel that is used separately from our pset.execute kernels, since our issue is with slow initialization of variables before the tracking starts?

    2) Do you mean we should still use Parcels' fieldset.X, where X is one of our variables (e.g. MLD, Bathy etc.), to interpolate the variable to the particle location for each individual particle (one at a time) inside a for loop, or is there a way to do many particles at once and/or another way to interpolate?

    Many thanks

    Erik van Sebille
    @erikvansebille
    Hi @JG-2020. Yes, so what I suggest is something like
    def initSample(particle, fieldset, time):
        particle.bathyp = fieldset.Bathy[particle.time, particle.depth, particle.lat, particle.lon]

    pset.execute(initSample, dt=0)  # run without incrementing the time, to initialise particle.bathyp

    pset.execute(…)
    Where (…) is your ‘normal’ kernel execution