mehdikuchi
@mehdikuchi
I have searched the documentation and found only three tfr analyses. Is it not implemented?
Eric Larson
@larsoner
@mehdikuchi psd_welch uses a Tukey window under the hood because it's the default for spectrogram, but we could add an option to set which window function to use. Feel free to open an issue on GitHub about adding this feature
mehdikuchi
@mehdikuchi
Thanks for the answer, that option for psd is fine too, but for now I need it in my tfr analysis, so I will open an issue for it.
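For context on the window defaults being discussed, here is a small self-contained SciPy sketch (this is the library MNE's Welch PSD builds on, not MNE's own code): scipy.signal.spectrogram defaults to a Tukey window, while scipy.signal.welch accepts an explicit window argument, which is the kind of option being requested.

```python
import numpy as np
from scipy import signal

sfreq = 256.0
t = np.arange(0, 4, 1 / sfreq)
x = np.sin(2 * np.pi * 10 * t)  # 10 Hz test tone

# scipy.signal.spectrogram defaults to window=('tukey', 0.25)
f_tuk, _, Sxx = signal.spectrogram(x, fs=sfreq)

# scipy.signal.welch lets you choose the window explicitly, e.g. Hann
f, Pxx = signal.welch(x, fs=sfreq, window='hann', nperseg=256)
peak_freq = f[np.argmax(Pxx)]  # the 10 Hz tone dominates the PSD
```

With nperseg=256 at 256 Hz sampling, the frequency resolution is 1 Hz, so the peak lands on the 10 Hz bin.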
cedricsimar
@cedricsimar
Hello there, I hope you're all safe!
After applying the inverse transform on EEG data, if I want to get the source activations of a specific area (let's say lateraloccipital-lh) - not the mean or pca_flip, but the actual activation through time - is there an easy way to get this information, or should I dive into the source code and modify the _gen_extract_label_time_course function manually to return the data unaltered?
Eric Larson
@larsoner
@cedricsimar you probably want stc.in_label(...).data
cedricsimar
@cedricsimar
@larsoner Thank you Eric for the fast reply, spot on what I was looking for! :-)
tharikjose
@tharikjose

Hello everyone,

I am having trouble reading a Nihon Kohden EEG file. I just used:
raw = mne.io.read_raw_nihon('Nihon/Nihon_Salvo_em_Tres_Montagens/DA7521RO.EEG')

but as output I get the following error:
Loading DA7521RO.EEG
Found 21E file, reading channel names.
Traceback (most recent call last):
  File "C:/Users/Sorrino/Documents/Faculdade/EESC/01 - Mestrado/Reading EEG/Reading Nihon.py", line 6, in <module>
    raw = mne.io.read_raw_nihon('Nihon/Nihon_Salvo_em_Tres_Montagens/DA7521RO.EEG')
  File "C:\Users\Sorrino\anaconda3\envs\mne\lib\site-packages\mne\io\nihon\nihon.py", line 46, in read_raw_nihon
    return RawNihon(fname, preload, verbose)
  File "<decorator-gen-217>", line 21, in __init__
  File "C:\Users\Sorrino\anaconda3\envs\mne\lib\site-packages\mne\io\nihon\nihon.py", line 311, in __init__
    header = _read_nihon_header(fname)
  File "C:\Users\Sorrino\anaconda3\envs\mne\lib\site-packages\mne\io\nihon\nihon.py", line 126, in _read_nihon_header
    _chan_labels = _read_21e_file(fname)
  File "C:\Users\Sorrino\anaconda3\envs\mne\lib\site-packages\mne\io\nihon\nihon.py", line 119, in _read_21e_file
    _chan_labels[idx] = name.strip()
IndexError: list assignment index out of range

I am using version 0.21 of mne, and the files are OK since they can be read in Nihon's own program.
Has anyone faced the same problem or could help me address it?

timonmerk
@timonmerk
Hey everyone, I also have a question. For plotting ECoG data, is there a way to specify the montage without a FreeSurfer file? In my case I am only provided with MNI-transformed coordinates; for plotting, I do not have the patient's individual/native FreeSurfer model. Is there a way to still follow this tutorial: https://mne.tools/stable/auto_tutorials/misc/plot_ecog.html and provide the plain MNI brain and channels, and then use the plot_alignment function? That would be really helpful!
ociepkam
@ociepkam

Hi everyone!
I have a problem loading data. I have EEGLAB files (example: https://ufile.io/f/js83q) and after:
mne.io.read_raw_eeglab('rest1_1_trigg.set')
I get:

Reading rest1_100_trigg.fdt

TypeError                                 Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)

<ipython-input-20-49d5d8853521> in <module>
----> 1 mne.io.read_raw_eeglab(os.path.join(folder_with_data, files_all[0]))

~\.conda\envs\mne\lib\site-packages\mne\io\eeglab\eeglab.py in read_raw_eeglab(input_fname, eog, preload, uint16_codec, verbose)
    219         .. versionadded:: 0.11.0
    220     """
--> 221     return RawEEGLAB(input_fname=input_fname, preload=preload,
    222                      eog=eog, verbose=verbose, uint16_codec=uint16_codec)
    223

<decorator-gen-197> in __init__(self, input_fname, eog, preload, uint16_codec, verbose)

~\.conda\envs\mne\lib\site-packages\mne\io\eeglab\eeglab.py in __init__(self, input_fname, eog, preload, uint16_codec, verbose)
    353
    354         # create event_ch from annotations
--> 355         annot = read_annotations(input_fname)
    356         self.set_annotations(annot)
    357         _check_boundary(annot, None)

~\.conda\envs\mne\lib\site-packages\mne\annotations.py in read_annotations(fname, sfreq, uint16_codec)
    665
    666     elif name.endswith('set'):
--> 667         annotations = _read_annotations_eeglab(fname,
    668                                                uint16_codec=uint16_codec)
    669

~\.conda\envs\mne\lib\site-packages\mne\io\eeglab\eeglab.py in _read_annotations_eeglab(eeg, uint16_codec)
    611     duration = np.zeros(len(onset))
    612     if len(events) > 0 and hasattr(events[0], 'duration'):
--> 613         duration[:] = [event.duration for event in events]
    614
    615     return Annotations(onset=np.array(onset) / eeg.srate,

ValueError: setting an array element with a sequence.

I am using version 0.20.8 of mne and Python 3.8.5.
Can you help me with this problem?

zhchsh19
@zhchsh19
Hey everyone, I have a question. In the example "Compute MNE inverse solution on evoked data in a mixed source space", "labels_vol" only contains seven substructures. How can I get other substructures?
AaronAngJW
@AaronAngJW

Hi there, can I ask what n_cycles actually does in:
tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,
           return_itc=True, decim=3, n_jobs=1)

The API mentions that it defaults to 7.0. But in what situation should I set n_cycles myself, and what exactly changes in the data when I do? I notice visible changes in the frequency plot when I change this value, but I'm not sure what it actually does to the data.

BonnieNg99
@BonnieNg99
Hello everyone, I am new here and I encountered a problem when reading .edf files with mne-python. I used this: sample_data_raw_file = os.path.join("filename.edf")
raw = mne.io.read_raw_edf(sample_data_raw_file)
My original file was around 80,000 KB, but after reading it with that code it became around 70 KB.
When I print(raw), the output is: <RawEDF | filename.edf, 34 x 1183000 (2366.0 s), ~67 kB, data not loaded>. The number of electrodes and the length of the EEG haven't changed, but the file size is a lot smaller.
Does anyone have this problem too?
mehdikuchi
@mehdikuchi
Hello BonnieNg99, in order to load the data when instantiating a raw object you should set preload to True
Adam Li
@adam2392
Hi, quick question on referencing. Does average referencing (or any referencing in MNE, for that matter) exclude the bad channels, or do I need to drop them a priori?
BonnieNg99
@BonnieNg99
@mehdikuchi thank you very much! This solves the problem!
BonnieNg99
@BonnieNg99
Has anyone used pyEDFlib to export EEG data as .edf files with https://gist.github.com/skjerns/bc660ef59dca0dbd53f00ed38c42f6be before? I have some trouble saving the resulting file. I always end up with a file that contains no information and says "Error, number of datarecords is 0, expected >0".
Eric Larson
@larsoner
@adam2392 I'm pretty sure that bad channels are dropped, but please do check and then open a PR to update the docs (or an issue mentioning that the docs should be updated) accordingly
@BonnieNg99 I think @cbrnr has talked about using pyedflib, see https://github.com/mne-tools/mne-python/issues/8060#issuecomment-679871961
zhchsh19
@zhchsh19
@zhchsh19 Hey everyone, I have a quick question. In the example "Compute MNE inverse solution on evoked data in a mixed source space", "labels_vol" only contains seven substructures. How can I get other substructures?
mehdikuchi
@mehdikuchi
hello all, is there a way to annotate our data segments channel-wise?
Eric Larson
@larsoner
@zhchsh19 the labels used depend on setup_volume_source_space earlier in the example
@mehdikuchi not currently, annotations apply to segments of time as a whole, not to individual channels
mehdikuchi
@mehdikuchi
@larsoner , Thanks a lot
@AaronAngJW :point_up: October 19, 2020 12:30 PM n_cycles specifies the number of cycles of the oscillatory signal that gets multiplied with your signal in each analysis window. For instance, if you specify 3 cycles per frequency, then it is guaranteed that within every analysis window the oscillatory signal completes its 3 cycles.
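To make that concrete, here is a toy NumPy sketch (a simplified stand-in, not MNE's actual implementation; see mne.time_frequency.morlet for the real one) showing that a larger n_cycles produces a longer wavelet, i.e. finer frequency resolution at the cost of temporal resolution:

```python
import numpy as np

def morlet_wavelet(sfreq, freq, n_cycles):
    """Build a simplified complex Morlet wavelet: a complex oscillation
    at `freq` multiplied by a Gaussian envelope whose width grows with
    n_cycles."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)  # temporal std dev
    half = np.arange(0.0, 5.0 * sigma_t, 1.0 / sfreq)
    t = np.r_[-half[::-1], half[1:]]  # symmetric time axis
    oscillation = np.exp(2.0 * np.pi * 1j * freq * t)
    envelope = np.exp(-t ** 2 / (2.0 * sigma_t ** 2))
    return oscillation * envelope

# more cycles -> longer wavelet -> coarser time / finer frequency resolution
w3 = morlet_wavelet(1000.0, 10.0, n_cycles=3)
w7 = morlet_wavelet(1000.0, 10.0, n_cycles=7)
```

Here w7 spans more samples than w3, which is the smearing in time (and sharpening in frequency) visible in the TFR plot when n_cycles is increased.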
Masataka_Wada
@masataka-wada
Hey everyone, I have a question.
I am trying to do source reconstruction from epoched .set file data.
I have followed the tutorial "Source alignment and coordinate frames" to fit EEG channel positions to individual MRI data.
However, the EEG sensor locations are displayed far from the 3D head model.
When I use the sample EEG data ('sample_audvis_raw.fif') and my individual MRI data, there is no such problem.
Additionally, channel positions in 2D space are obtained correctly from the epoched EEG data.
So I think there is a problem with the 3D information in our epoched EEG data, but I can't find the reason for this error.
Please give me some advice.
zhchsh19
@zhchsh19

@zhchsh19 the labels used depend on setup_volume_source_space earlier in the example

Yeah, "labels_vol" contains the names of seven substructures; where can I find the names of the other substructures?

AaronAngJW
@AaronAngJW
@mehdikuchi Ah thanks, so it's basically altering the resolution of the time/freq plot to your subjective preference
刘政(Barry Liu)
@BarryLiu-97
Hi, I was working through the tutorial "Working with ECoG data". When I plot the 3D figure of the brain, I get a black figure, but if I save a snapshot it works fine. I'm on Windows 10, PyCharm, Python 3.7, with a GeForce 1060.
image.png
And if I only want to plot the 3D view of this brain model, how do I do that?
Also, can this tutorial help me plot the electrode locations on the brain for sEEG analysis?
lindseypower
@lindseypower

basically however you process your actual data, you want to process your empty-room data the same way

@larsoner thanks for your response. I implemented what you suggested to process my empty-room data, but I am running into an issue because the empty-room data doesn't have digitization info. The MNE documentation suggests setting coord_frame='meg' for empty-room data, which allows it to run without error, but then I can't set destination as you suggested, and I'm not sure it is appropriate to use this coordinate frame considering I've reset the 'dev_head_t' info to transformed space. Do you have any suggestions for how to get around this? Thanks!

Eric Larson
@larsoner
@lindseypower you can also hack raw_erm.info['dig'] = raw_run.info['dig'] to make origin='auto' mode work. I indeed use coord_frame='head' to ensure I'm processing the same way
@BarryLiu-97 no idea why that's happening with Mayavi, can you try using the PyVista 3D backend instead with mne.set_3d_backend('pyvista') at the top of your script? and yes plot_alignment will plot sEEG locations as well (might need the bug fix from mne-tools/mne-python#8393, though)
@masataka-wada it's likely your channel locations are in millimeters instead of meters, you probably need to fix them using raw.get_montage and raw.set_montage
刘政(Barry Liu)
@BarryLiu-97
What if I just want to plot the MNI brain model? How do I do that?
刘政(Barry Liu)
@BarryLiu-97
@larsoner
Eric Larson
@larsoner
you can use mne.viz.Brain with the fsaverage subject if you just want a brain with nothing on it
Masataka_Wada
@masataka-wada

@masataka-wada it's likely your channel locations are in millimeters instead of meters, you probably need to fix them using raw.get_montage and raw.set_montage

@larsoner Thanks for your courteous reply. I checked the units of my channel locations and you are absolutely right! I really appreciate it!
Is there any option in raw.set_montage to convert millimeters to meters?
I can't find one.
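As far as I know there is no built-in unit-conversion option, but scaling the positions yourself before building the montage is straightforward. A minimal sketch, where the channel names and coordinates are made up for illustration:

```python
import numpy as np

# hypothetical channel positions as read from the EEG file, in millimeters
ch_pos_mm = {
    "Fz": np.array([0.0, 71.6, 51.7]),
    "Cz": np.array([0.0, -9.2, 88.2]),
}

# MNE expects head coordinates in meters, so divide by 1000 before
# passing the dict to e.g. mne.channels.make_dig_montage(ch_pos=...)
ch_pos_m = {name: pos / 1000.0 for name, pos in ch_pos_mm.items()}
```

The scaled dict can then be used to build a montage and applied with raw.set_montage.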

刘政(Barry Liu)
@BarryLiu-97
Hi, which version of mne should I use if I want to do some work on sEEG, like plotting the electrodes on the MNI brain model, or the activation map on the brain model? And are there other tutorials or information about sEEG in mne that I haven't found? I know the tutorial https://22794-1301584-gh.circle-artifacts.com/0/dev/auto_tutorials/misc/plot_ecog.html.
刘政(Barry Liu)
@BarryLiu-97
@larsoner
刘政(Barry Liu)
@BarryLiu-97
And if I use mne to plot the electrodes on an MNI brain model, what data do I need? I have the .edf data and a .txt file containing the ch_name and xyz coordinate information. I also have the pre-surgery fMRI and post-surgery CT.
刘政(Barry Liu)
@BarryLiu-97
Hi, I found something interesting yesterday when I plotted the 3D view of the brain model with the ECoG electrodes: the figure showed up black, although the screenshot was OK in Mayavi. I am using a laptop with a GeForce 1060, and I have one extra screen. When I set the extra screen as the main screen, the figure shows the black picture, whereas when I set the laptop's own screen as the main one, everything is OK. I wonder if anyone can explain this; if not, I will just ignore it.
刘政(Barry Liu)
@BarryLiu-97
I was thinking it may have something to do with the rendering ability of the GPU.
Urban Marhl
@UrbanM
Hey, I have one question. I'm working on reconstructing some auditory evoked data; does MNE-Python have a built-in function to fit 2 dipoles simultaneously on one data frame? I would like to fit one in the left and one in the right hemisphere. For fitting one dipole I used the function mne.fit_dipole.
AaronAngJW
@AaronAngJW
Question: if I set baseline=(1.0, 2.0) when generating epochs with mne.Epochs, and then use these epochs to generate an AverageTFR, and finally do a .plot or .plot_topo, do I still need to set baseline=(1.0, 2.0) when plotting the topo? Or does baseline=(None, None) mean the data are already baselined to (1.0, 2.0) from the start?
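As background on the baseline mechanics themselves, here is a toy NumPy sketch of mean-mode baseline correction (generic arithmetic, not MNE's implementation): once the mean over the baseline interval has been subtracted, applying the same correction again removes essentially nothing.

```python
import numpy as np

sfreq = 100.0
times = np.arange(-0.5, 3.0, 1 / sfreq)
rng = np.random.RandomState(0)
data = rng.randn(times.size) + 5.0  # signal with a constant offset

# baseline=(1.0, 2.0) in "mean" mode: subtract that interval's mean
mask = (times >= 1.0) & (times <= 2.0)
corrected = data - data[mask].mean()

# the mean over the baseline interval is now (numerically) zero, so
# re-applying the same correction would be a no-op
residual = corrected[mask].mean()
```

Note that MNE's TFR plotting also offers other baseline modes (e.g. ratio-based ones), where the arithmetic differs from this mean-subtraction sketch.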