Elliott Sales de Andrade
@QuLogic
Oh, you're right, I misread that; hard to see when it's not in proper code blocks.
titi-dev
@titi-dev
Thanks, let me check it out.
titi-dev
@titi-dev
@calum-chamberlain Error on write: NotImplementedError: Masked array writing is not supported. You can use np.array.filled() to convert the masked array to a normal array
Calum Chamberlain
@calum-chamberlain
So the data you tried to merge are not continuous, and SAC does not support this. Either follow the instructions in the error message and fill the gaps, or use a format that does support gaps, such as MiniSEED.
titi-dev
@titi-dev
@calum-chamberlain Thanks for the suggestions..
titi-dev
@titi-dev
@calum-chamberlain Sir, please suggest some solutions. How do I fill the gaps and merge the data successfully into MiniSEED files? When I tried your script on the mseed data, I got the same memory problem.
Calum Chamberlain
@calum-chamberlain
Morning,
  1. You do not need to fill gaps when writing to MiniSEED; if you go down this route, do not fill the gaps.
  2. If you want to write to SAC you will need to fill the gaps; in that case I would recommend reading the obspy docs on Stream.merge() and looking at the fill_value and method arguments.
  3. If you were able to get to the stage of writing to SAC before, then, provided you didn't try to fill the gaps, you shouldn't have the same memory error as before. It is hard to work out what is wrong without the script you ran and the error message. Assuming all you changed is the last line (from format="SAC" to format="MSEED"), I wouldn't expect an error.
  4. I'm curious how long these data are and why you are doing this. Maybe there is a better way to work within your machine's memory limits.
Calum Chamberlain
@calum-chamberlain
Note that if you do want to fill the gaps, you should make changes to both the st.merge() calls in the example provided.
titi-dev
@titi-dev
Hi, good morning. Actually I am running your script only. I am doing it for a tomography study. However, I checked my data set and there are gaps, so I need to fill the gaps with zeros. Is there any script which can efficiently perform this job?
This script doesn't work for the mseed file as well; same memory error.
Calum Chamberlain
@calum-chamberlain
The error you reported yesterday was coming from st.write(), which was not a memory error. That suggested that you could read and merge all the traces/streams using the script. Is this not the case? Did you get this error running something else?
Why do you need this long, single stream for your tomography study?
If you read the documentation for st.merge() you will see how to fill your gaps with zeros.
Calum Chamberlain
@calum-chamberlain
If you are getting the write error regarding masked numpy arrays (not a memory error) then you should do st.split().write(filename, format="MSEED")
79seismo
@79seismo
@calum-chamberlain I made a mistake with stage gain in my 3-component stationXML. All fine with ppsd.plot(). Thanks very much!
79seismo
@79seismo

@titi-dev Here's what I would do to merge miniseed volumes. Once you do this, you can convert to SAC files if you so desire.

import obspy
from obspy import read, read_inventory
from obspy.io.xseed import Parser
import matplotlib.pyplot as plt
import os, shutil, glob

# loop through miniseed files held in a directory called ABC
for my_mseed in glob.glob('ABC/*'):
    sgrams = read(my_mseed)
    trace = sgrams.select(station='my_station', channel='HHZ')
    trace_merge = trace.merge(method=1, fill_value='interpolate',
                              interpolation_samples=0)

Warning: You can merge using different methods + fill values. Please refer to the tutorial @calum-chamberlain mentioned in an earlier comment.

titi-dev
@titi-dev
@79seismo I tried this method also, but no output was obtained.
79seismo
@79seismo
@titi-dev do a trace.plot() after trace = sgrams .... step to see if you are actually reading data.
titi-dev
@titi-dev
@79seismo No sir, it is not plotting the whole stream but only one trace, i.e. 2018-01-02T17:00:00.005 to 2018-01-02T17:59:59.995, though many traces are present, starting at 2018-01-01T00:00:00.05 and ending at 2018-02-01T00:00:00.05.
titi-dev
@titi-dev
Sir, now it is reading the stream data one by one but not writing the merged traces to a single file.
Calum Chamberlain
@calum-chamberlain
The example @79seismo gave is designed to generate a merged trace for a single station and channel; the .select() does this. Hence only one trace is plotted.
If you want it to write something out you need to add a write command. The snippet provided does not do this, hence no file is written.
As far as I can tell, the snippet provided by @79seismo is mostly trying to show you how to use merge to fill gaps. You should try to understand the code people have shared. If you have not been through the obspy tutorials, you should do so.
titi-dev
@titi-dev
However, I did trace_merge.write("mmm", format='mseed') and it wrote only one trace... OK, thanks for the suggestion. Actually I am new to obspy as well as to Python, so I am trying hard to understand the obspy tutorials.
Calum Chamberlain
@calum-chamberlain
Yup, if you select only one station and channel then merge that you get a single trace. It sounds like this isn't what you want. Maybe don't use select.
Can you clarify what error you were getting with the previous example (see my comments before @79seismo ). Was the error a memory error or the error related to masked arrays?
titi-dev
@titi-dev
@calum-chamberlain When I am using your script for one day (merging 24 one-hour segments into a single file) it works fine. But when I choose the whole stream to be merged, I get an error like this: NotImplementedError: Masked array writing is not supported. You can use np.array.filled() to convert the masked array to a normal array
Calum Chamberlain
@calum-chamberlain
Yes, that is not a memory error. Try the suggestions I gave: using fill_value and method arguments on merge as shown by @79seismo, or do st.split().write(filename, format="MSEED")
The issue is that your data have gaps, but during merge they are made continuous using masked arrays (look at the numpy docs to learn more) which cannot be written to most (all?) seismological data formats. You therefore need to either split the data (likely not what you want) or fill the gaps.
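The masked-array mechanics described above can be seen directly in numpy (the values here are made up for illustration):

```python
import numpy as np

# after a plain merge, gap samples are masked rather than filled
merged = np.ma.masked_array([1.0, 2.0, 0.0, 0.0, 5.0],
                            mask=[False, False, True, True, False])

# most file writers reject masked arrays; filled() converts to a
# normal ndarray by substituting a concrete value for the gap samples
continuous = merged.filled(0.0)
```

This is exactly what the NotImplementedError is asking for: either substitute a fill value like this, or split the data back into unmasked pieces.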
titi-dev
@titi-dev
@calum-chamberlain st.split().write(filename, format="MSEED") worked fine. So far I have tested it on merging 5 days of data only. Let me check it on one-month data sets.
Calum Chamberlain
@calum-chamberlain
Remember that that does not fill your gaps.
Why are you merging many days of data? What code needs this?
titi-dev
@titi-dev
@calum-chamberlain Sir, I need to fill the gaps and make the data continuous for ambient noise tomography.
Calum Chamberlain
@calum-chamberlain
Then read what I said about filling gaps.
titi-dev
@titi-dev
ok
Calum Chamberlain
@calum-chamberlain
I only know MSNoise, but I would think it unusual to have a continuous multi-day dataset for noise. Usually people compute daily correlograms, as far as I know, using one day of data at a time. However, I don't do noise work; it just seems like an unusual way to do this!
titi-dev
@titi-dev
If you see your code, near line number 31 it is written st.merge(); what I did is st.merge(method=1, fill_value=0, interpolation_samples=0) and it gives the output. I do not know whether it fills the gaps or not. Is there any procedure to check it out? Thanks.
Calum Chamberlain
@calum-chamberlain
Plot it?
titi-dev
@titi-dev
Both the plots are the same.
Calum Chamberlain
@calum-chamberlain
Note that if you are doing any frequency-domain operations you probably want to detrend your data before filling with zeros. See the detrend method of Stream.
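Why detrend first: if the data ride on a trend or offset, zero-filled gaps introduce steps that leak energy across the spectrum. A numpy sketch of removing a linear trend from a synthetic signal (obspy's st.detrend('linear') does the equivalent per trace):

```python
import numpy as np

t = np.arange(200.0)
signal = 0.3 * t + 10.0 + np.sin(t / 7.0)  # trend + offset + oscillation

# least-squares line fit, then subtract: the residual sits around zero,
# so a zero fill no longer creates a large step at each gap edge
slope, intercept = np.polyfit(t, signal, 1)
detrended = signal - (slope * t + intercept)
```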
titi-dev
@titi-dev
ok sir.
Calum Chamberlain
@calum-chamberlain
And possibly taper (again, see the taper method). If you are not familiar with why you should do this, you might find the book Time Series Analysis and Inverse Theory by Gubbins informative.
You can check by doing .split() on the resultant stream and seeing if any gaps remain.
titi-dev
@titi-dev
@calum-chamberlain How to split? I do not have any idea.
Calum Chamberlain
@calum-chamberlain
stream.split()
Remember that there are two merge calls in the snippet of code I sent (which is less than 31 lines, so I don't know what you are referring to). You need to edit both of those to avoid memory errors.
Anyway, I'm signing off. If in doubt: read the docs
titi-dev
@titi-dev
ok sir
Thanks