Reza K Ghazi
@dataadvisor
Hey @drbitboy. Thanks for the tip. Actually, for Windows, we should use escape characters. The best approach, as you mentioned, is to go step by step: first load kernels from the same directory, then one level deeper, and so on.
kidpixo
@kidpixo

Hi, I got this error using spiceypy v2.3.1

Toolkit version: N0066

SPICE(KERNELPOOLFULL) --

There is no room available for adding another character value to the kernel pool.  The character values buffer became full at line 6592 of the text kernel file '~/mertis_output/bepi_spice/kernels/fk/bc_mpo_v20.tf'.

furnsh_c --> FURNSH --> ZZLDKER --> LDPOOL --> ZZRVAR

any idea? I'm a beginner with SPICE/spiceypy, maybe I forgot to deallocate something from memory?

kidpixo
@kidpixo
Uhm, I wasn't using any kclear or unload ...
kidpixo
@kidpixo
silly question: what's the difference between kclear and unload?
Jesse Mapel
@jessemapel
I don't think there is any. Kclear just doesn't need the kernel file paths.
Brian Carcich
@drbitboy
Jesse is correct: unload removes data from the kernel pool one file at a time; kclear clears the entire kernel pool with one call. Were you FURNSHing the same kernel multiple times? What other kernels were already loaded? Can you provide a metal kernel, if that is what you were using?
Whoops, metal kernel => meta-kernel
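[To make the unload-vs-kclear distinction concrete, here is a toy model of the kernel-pool semantics; the functions are plain-Python stand-ins for the spiceypy calls named in the comments, not real CSPICE calls.]

```python
# Toy model of the SPICE kernel pool (illustration only, not real CSPICE)
pool = []

def furnsh(path):
    pool.append(path)    # spiceypy.furnsh: add one kernel's data to the pool

def unload(path):
    pool.remove(path)    # spiceypy.unload: remove that one kernel's data

def kclear():
    pool.clear()         # spiceypy.kclear: wipe the whole pool, no paths needed

furnsh('a.tf')
furnsh('b.tls')
unload('a.tf')
assert pool == ['b.tls']   # unload removed only the named kernel
kclear()
assert pool == []          # kclear emptied everything in one call
```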
kidpixo
@kidpixo

Hi,
thanks, now it is clearer.
import pathlib
import spiceypy as spice

metakernel = pathlib.Path(configuration_global.get("spice_path")) / 'kernels/mk/bc_ops.tm'
spice.furnsh(str(metakernel))
# ... some calculations in a loop ...
spice.unload(str(metakernel))

metakernel is a pathlib.Path and furnsh doesn't like it. @AndrewAnnex is pathlib support, instead of string paths, planned?
The metakernel is here > https://gist.github.com/kidpixo/d350d29c0b5790bb5ca5daeb40cad4b7

Brian Carcich
@drbitboy
Do you have the ability to FURNSH that file in C or Fortran? It is unlikely that SpiceyPy is the problem; more likely the underlying CSPICE library is having the issue.
Also, is it possible you are calling that FURNSH more than once?
Jesse Mapel
@jessemapel
pathlib support is currently an open issue AndrewAnnex/SpiceyPy#292
Andrew Annex
@AndrewAnnex
@kidpixo I agree with brian, IIRC there are some params in cspice that could be increased but I would be very hesitant to touch any of the default settings in spice... maybe try managing your kernels to take less space?
@jessemapel @kidpixo yeah pathlib support has been an idea, I am also going to have spiceypy participate in Google Summer of Code so there are a number of small improvement ideas I have that will be added as issues to the repository over the next few weeks, if you have ideas for self contained, small improvements that would be good for students/beginning python programmers to do let me know
@kidpixo I'm curious, how big are all of the spice kernels in that metakernel combined? is it many gb?
kidpixo
@kidpixo
@drbitboy no way I'll do it in FORTRAN or C, because I cannot use those languages ...
Yes, you got it right:
I'm reading an instrument data stream, decoding it, and have at least three routines to produce different data products, each one calling FURNSH on its own :-( I have to refactor: import spiceypy in the main routine, FURNSH there, pass the furnished state to each subroutine, and clean up at the end in the main routine.
I'm not that good at programming under pressure, I presume…
@AndrewAnnex pathlib is one , just nice to have. I'm starting now to use it, I will have something more for sure.
The "basic" group of metakernels is a few GB; the complete set is several tens of GB, but I don't know how much of it is loaded at each run. I would say the 1-2 GB range is a safe bet.
Andrew Annex
@AndrewAnnex
hmm, well I think you just need to be more careful with the size; 10 GB could be over the limits hard-coded in cspice. The other simple thing to do is to kclear the kernel pool at the start and end of each of your 3 routines, unless they need to call each other.
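[One way to make that "kclear at the end of each routine" discipline hard to forget is a context manager. This is a sketch, not anything from the thread: it takes the load/clear functions as parameters so it runs without CSPICE installed; in real code you would pass spice.furnsh and spice.kclear.]

```python
from contextlib import contextmanager

@contextmanager
def kernel_pool(metakernel, load, clear):
    # In real code: with kernel_pool(mk, spice.furnsh, spice.kclear): ...
    load(metakernel)
    try:
        yield
    finally:
        clear()                    # runs even if the body raises

# Demo with list methods standing in for spice.furnsh / spice.kclear:
pool = []
with kernel_pool('bc_ops.tm', pool.append, pool.clear):
    assert pool == ['bc_ops.tm']   # kernels available inside the block
assert pool == []                  # pool cleared on exit
```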
Brian Carcich
@drbitboy
@kidpixo yes, Andrew has a good idea: call kclear at the end of each routine that does the FURNSH of the meta-kernel.
@kidpixo also, I find it useful to do something like this:
def some_spice_routine(arg1,arg2,...,kernels=[]):
  if kernels: list(map(spiceypy.furnsh,kernels))
  ...
  if kernels: list(map(spiceypy.unload,kernels))  # unload, not furnsh, on the way out
  return [...]
Brian Carcich
@drbitboy
that way, you can load the kernels once, before the call, and leave the kernels keyword argument empty (an empty list is falsy, so nothing is done); OR you can pass in any kernel(s) as a list (e.g. some_spice_routine(arg1,arg2,kernels=['metakernel.tm'])), and the routine FURNSHes the kernels and cleans up after itself at the end.
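[A runnable sketch of that pattern, with plain-Python stand-ins for spiceypy.furnsh/unload so it runs anywhere; the cleanup step uses unload, not a second furnsh.]

```python
loaded = []                        # stand-in kernel pool for this demo

def furnsh(k): loaded.append(k)    # plays the role of spiceypy.furnsh
def unload(k): loaded.remove(k)    # plays the role of spiceypy.unload

def some_spice_routine(arg1, kernels=()):
    if kernels: list(map(furnsh, kernels))
    result = arg1 * 2              # stand-in for the actual SPICE computation
    if kernels: list(map(unload, kernels))  # clean up only what we loaded
    return result

# Caller manages the pool itself: pass no kernels, the routine touches nothing.
furnsh('metakernel.tm')
assert some_spice_routine(21) == 42
assert loaded == ['metakernel.tm']
unload('metakernel.tm')

# Or let the routine manage its own kernels and clean up after itself:
assert some_spice_routine(21, kernels=['metakernel.tm']) == 42
assert loaded == []
```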
kidpixo
@kidpixo
@drbitboy nice! thanks
Andrew Annex
@AndrewAnnex
btw furnsh accepts Iterable data types so you can directly pass in a list of strings of kernel paths
kidpixo
@kidpixo
Hi there
I'm trying to do some spiceypy calculations using https://joblib.readthedocs.io/en/latest/parallel.html ; essentially I calculate all my sensor pixel corner vectors and then loop over all the observation times to get the coordinates at the target.
Obviously it isn't working.
I'm passing the furnished spiceypy instance to the function calculating the actual geometry of observation at a given time, but I believe that both thread-based and process-based parallelism don't inherit it and use a fresh, not-furnished spiceypy.
kidpixo
@kidpixo
any idea ? experience with joblib ?
kidpixo
@kidpixo
slightly adapted code
from joblib import Parallel, delayed
import multiprocessing
num_cores = multiprocessing.cpu_count()

output  = Parallel(n_jobs=num_cores, backend='threading')(delayed(spice4mertis_geometry_wrapper)(utc, k,et,v, target, frame, sensor, observer) for utc,et in TIS_UTC_time )
kidpixo
@kidpixo
data example
import numpy as np
utc, k,et,v, target, frame, sensor, observer = '2020-04-09T04:00:12.105','C22',639676881.2906493, np.array([-4.61646326e-03, 6.13704793e-04,9.99989156e-01]), 'MOON','IAU_MOON','MPO_MERTIS_TIS_SPACE','MPO'

A single run on those data returns :

dict(zip(head,spice4mertis_geometry_wrapper(utc,k,et,v,target,frame,sensor ,sensor_id,observer, spice)))

{'utc': '2020-04-09T04:00:12.105',
 'name': 'C22',
 'et': 639676881.2906493,
 'tarlon': -0.20023645299777082,
 'tarlat': -0.4657544517921964,
 'sublon': -0.09431312287105752,
 'sublat': 0.014900819804988915,
 'sunlon': -0.21279322760557254,
 'sunlat': -0.025399579268598888,
 'taralt': 747621.3385348794,
 'subalt': 0.2668455383644952,
 'sunalt': '12:02:52',
 'tardis': 7.201687663702408,
 'tarang': 28.21652025567335,
 'ltime': 25.240221527886003}

so, it works in a single, non-parallel run

kidpixo
@kidpixo
I think I am making a mess importing spiceypy and passing it around to function as variable....
Brian Carcich
@drbitboy
CSPICE, the C library underneath spiceypy, is not thread-safe.
Also, I am pretty sure it is not necessary to pass the spiceypy module, and importing the module should be adequater.
Brian Carcich
@drbitboy
  • adequate ;-)
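[Brian's point that importing the module is enough rests on Python's import cache: an import inside each function returns the same module object the main script already configured, so there is no need to pass spiceypy around as an argument. A quick stdlib demonstration, with json standing in for spiceypy:]

```python
import sys
import json                        # stand-in for spiceypy in the main script

def worker():
    import json as spice_local     # re-import inside the function ...
    return spice_local             # ... just hits the sys.modules cache

assert worker() is json            # same module object, not a fresh copy
assert sys.modules['json'] is json # cached once per process
```

The caveat, which is exactly the joblib problem discussed below: this sharing holds only within one process. Separate worker processes each run their own interpreter and re-import from scratch, so they start with an empty kernel pool.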
kidpixo
@kidpixo
right, thanks
kidpixo
@kidpixo

https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/Tutorials/pdf/individual_docs/45_development_plans.pdf

Develop SPICE 2.0: a re-implementation of the SPICE Toolkit from the ground up, providing thread-safe and object-oriented features
–This is the major NAIF undertaking, started in May 2017
–It is being implemented in C++11
–It is expected to take several years

kidpixo
@kidpixo

uhm I'm using joblib.
As stated here https://joblib.readthedocs.io/en/latest/parallel.html#thread-based-parallelism-vs-process-based-parallelism

By default joblib.Parallel uses the 'loky' backend module to start separate Python worker processes to execute tasks concurrently on separate CPUs.
[...]
When you know that the function you are calling is based on a compiled extension that releases the Python Global Interpreter Lock (GIL) during most of its computation then it is more efficient to use threads instead of Python processes as concurrent workers.
[...]
To hint that your code can efficiently use threads, just pass prefer="threads" as parameter of the joblib.Parallel constructor

I guess the loky backend would be ok, but it imports a fresh spiceypy for each process, not furnished.

kidpixo
@kidpixo

@drbitboy ok, now it works, ugly and inefficient as it is. Essentially I furnish the kernel at each loop iteration in the wrapper function and kclear afterward. I have no idea how this interacts with the furnished spiceypy in the main code ....

the for k,v in centers.items(): is a second loop over all the directions I need to calculate at each time.

def spice4mertis_geometry_wrapper(utc, k, et, v, target, frame, sensor, sensor_id, sensor_frame, observer):
    metakernel = 'PATH_TO_KERNEL'
    spice.furnsh(str(metakernel))
    out = [utc, k, et] + spice4mertis.core.geometry.geometry(et, v, target, frame, sensor, observer=observer)
    spice.kclear()
    return out

from joblib import Parallel, delayed
import multiprocessing
num_cores = multiprocessing.cpu_count()
# for k,v in centers.items():
output  = Parallel(n_jobs=num_cores, backend='loky')(delayed(spice4mertis_geometry_wrapper)(utc,k,et,v,target,frame,sensor, sensor_id,sensor_frame,observer) for utc,et in zip(utc_time.values,et_time))
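[Since loky reuses worker processes between tasks, one refinement of the furnsh-then-kclear-on-every-call wrapper is to furnish only when the pool in the current process is still empty. This is a sketch under stated assumptions: the injected load callable stands in for spiceypy.furnsh so the example runs without CSPICE, and in real code the module-level flag could instead be a check like spice.ktotal('ALL') == 0.]

```python
_furnished = False                 # module-level flag; each worker process gets its own

def ensure_furnished(metakernel, load):
    """Furnish the meta-kernel at most once per process.

    'load' stands in for spiceypy.furnsh; a real version could instead
    guard on spice.ktotal('ALL') == 0 rather than keeping a flag.
    """
    global _furnished
    if not _furnished:
        load(metakernel)
        _furnished = True

calls = []
for _ in range(3):                 # simulate three tasks reusing the same worker
    ensure_furnished('bc_ops.tm', calls.append)
assert calls == ['bc_ops.tm']      # furnished exactly once, not three times
```

This avoids re-parsing a large meta-kernel on every task while still keeping one independent kernel pool per worker process.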
Brian Carcich
@drbitboy
does the delayed wrapper routine have access to its thread-instance (or process-instance) information? we could run tests e.g. spice.ktotal() to see if one thread or process affects another. Also, if the [import spiceypy] was inside the scope of the delayed wrapper routine, would that affect how the module was operated?
kidpixo
@kidpixo

@drbitboy so:

does the delayed wrapper routine have access to its thread-instance (or process-instance) information?

I guess not, because it was not working. Passing the metakernel path and furnishing in the delayed wrapper works.

we could run tests e.g. spice.ktotal() to see if one thread or process affects another.

I'm writing this down for my tests, good idea, thanks.

Also, if the [import spiceypy] was inside the scope of the delayed wrapper routine, would that affect how the module was operated?

sorry, I don't understand the question, which kind of operation do you mean?

Brian Carcich
@drbitboy
def spice4mertis_...(...):
   import spiceypy as spice
   spice.ktotal('ALL')
   ...
kidpixo
@kidpixo
ok, I'll try , thanks
@AndrewAnnex and I are part of openplanetary.slack.com | maybe it would be useful to move this discussion there, or to https://forum.openplanetary.org/ , for future reference?
Andrew Annex
@AndrewAnnex
oh yeah, I don't really check this gitter frequently. I have never attempted to parallelize spice calls because of all the issues mentioned above, and spice calls have never been the bottleneck for me, but multiprocessing would presumably have the best chance of working if each process has a separate kernel pool. Doing the test @drbitboy suggested for modifying the kernel pool(s) would also be a very good idea, to make sure nothing nasty is happening.