
neuro_py.detectors

DetectDS

Bases: object

Class for detecting dentate spikes

Parameters:

basepath : str
    Path to the folder containing the data (required)
hilus_ch : int
    Channel number of the hilus signal (0 indexing) (required)
mol_ch : int
    Channel number of the mol signal (0 indexing) (required)
noise_ch : int, optional
    Channel number of the noise signal, or of a signal far from the dentate (0 indexing) (default: None)
lowcut : float, optional
    Low cut-off frequency for the signal filter (default: 10)
highcut : float, optional
    High cut-off frequency for the signal filter (default: 250)
filter_signal_bool : bool, optional
    If True, the signal will be filtered (default: True)
primary_threshold : float, optional
    Primary threshold for detecting dentate spikes (difference method only) (default: 5)
secondary_threshold : float, optional
    Secondary threshold for detecting dentate spikes (difference method only) (required)
primary_thres_mol : float, optional
    Primary threshold for detecting dentate spikes in the mol signal (default: 2)
primary_thres_hilus : float, optional
    Primary threshold for detecting dentate spikes in the hilus signal (default: 5)
min_duration : float, optional
    Minimum duration of a dentate spike, in seconds (default: 0.005)
max_duration : float, optional
    Maximum duration of a dentate spike, in seconds (default: 0.05)
filter_order : int, optional
    Order of the filter (default: 4)
filter_rs : int, optional
    Minimum stop-band attenuation of the Chebyshev type-II filter, in dB (default: 20)
method : str, optional
    Method for detecting dentate spikes: "difference" detects them from the difference between the hilus and mol signals; "seperately" detects them in the hilus and mol signals separately (default: "seperately")
clean_lfp : bool, optional
    If True, the LFP signal will be cleaned (default: False)
emg_threshold : float, optional
    Threshold on the EMG signal above which detected dentate spikes are removed (default: 0.9)

Attributes:

lfp : nelpy.AnalogSignalArray
    LFP signal
filtered_lfp : nelpy.AnalogSignalArray
    Filtered LFP signal
mol_hilus_diff : nelpy.AnalogSignalArray
    Difference between the hilus and mol signals
ds_epoch : nelpy.EpochArray
    EpochArray with the dentate spikes
peak_val : np.ndarray
    Peak values of the dentate spikes

Methods:

load_lfp
    Load the LFP signal
filter_signal
    Filter the LFP signal
get_filtered_lfp
    Get the filtered LFP signal
get_lfp_diff
    Get the difference between the hilus and mol signals
detect_ds_difference
    Detect dentate spikes from the difference between the hilus and mol signals
detect_ds_seperately
    Detect dentate spikes in the hilus and mol signals separately
save_ds_epoch
    Save the dentate spikes as a CellExplorer .mat file

Examples:

In an IDE or Python console

>>> from ds_swr.detection.detect_dentate_spike import DetectDS
>>> from neuro_py.io import loading
>>> channel_tags = loading.load_channel_tags(basepath)
>>> dds = DetectDS(
    basepath,
    channel_tags["hilus"]["channels"] - 1,
    channel_tags["mol"]["channels"] - 1
)
>>> dds.detect_ds()
>>> dds.save_ds_epoch()
>>> dds
<DetectDS at 0x17fe787c640: dentate spikes 5,769> of length 1:11:257 minutes

At the command line

$ python detect_dentate_spike.py Z:/Data/Can/OML22/day20
Source code in neuro_py/detectors/dentate_spike.py
class DetectDS(object):
    """
    Class for detecting dentate spikes

    Parameters
    ----------
    basepath : str
        Path to the folder containing the data
    hilus_ch : int
        Channel number of the hilus signal (0 indexing)
    mol_ch : int
        Channel number of the mol signal (0 indexing)
    noise_ch : int, optional
        Channel number of the noise signal or signal far from dentate (0 indexing)
    lowcut : float, optional
        Low cut frequency for the signal filter
    highcut : float, optional
        High cut frequency for the signal filter
    filter_signal_bool : bool, optional
        If True, the signal will be filtered
    primary_threshold : float, optional
        Primary threshold for detecting the dentate spikes (difference method only)
    secondary_threshold : float, optional
        Secondary threshold for detecting the dentate spikes (difference method only)
    primary_thres_mol : float, optional
        Primary threshold for detecting the dentate spikes in the mol signal
    primary_thres_hilus : float, optional
        Primary threshold for detecting the dentate spikes in the hilus signal
    min_duration : float, optional
        Minimum duration of the dentate spikes
    max_duration : float, optional
        Maximum duration of the dentate spikes
    filter_order : int, optional
        Order of the filter
    filter_rs : int, optional
        Minimum stop-band attenuation of the Chebyshev type-II filter, in dB
    method : str, optional
        Method for detecting the dentate spikes.
        "difference" for detecting the dentate spikes by difference between the hilus and mol signal
        "seperately" for detecting the dentate spikes by the hilus and mol signal separately
    clean_lfp : bool, optional
        If True, the LFP signal will be cleaned
    emg_threshold : float, optional
        Threshold for the EMG signal to remove dentate spikes


    Attributes
    ----------
    lfp : nelpy.AnalogSignalArray
        LFP signal
    filtered_lfp : nelpy.AnalogSignalArray
        Filtered LFP signal
    mol_hilus_diff : nelpy.AnalogSignalArray
        Difference between the hilus and mol signal
    ds_epoch : nelpy.EpochArray
        EpochArray with the dentate spikes
    peak_val : np.ndarray
        Peak value of the dentate spikes


    Methods
    -------
    load_lfp()
        Load the LFP signal
    filter_signal()
        Filter the LFP signal
    get_filtered_lfp()
        Get the filtered LFP signal
    get_lfp_diff()
        Get the difference between the hilus and mol signal
    detect_ds_difference()
        Detect the dentate spikes by difference between the hilus and mol signal
    detect_ds_seperately()
        Detect the dentate spikes by the hilus and mol signal separately
    save_ds_epoch()
        Save the dentate spikes as a CellExplorer .mat file

    Examples
    --------
    In an IDE or Python console

    >>> from ds_swr.detection.detect_dentate_spike import DetectDS
    >>> from neuro_py.io import loading
    >>> channel_tags = loading.load_channel_tags(basepath)
    >>> dds = DetectDS(
        basepath,
        channel_tags["hilus"]["channels"] - 1,
        channel_tags["mol"]["channels"] - 1
    )
    >>> dds.detect_ds()
    >>> dds.save_ds_epoch()
    >>> dds
    <DetectDS at 0x17fe787c640: dentate spikes 5,769> of length 1:11:257 minutes


    At the command line

    $ python detect_dentate_spike.py Z:/Data/Can/OML22/day20
    """

    def __init__(
        self,
        basepath: str,
        hilus_ch: int,
        mol_ch: int,
        noise_ch: Union[int, None] = None,
        lowcut: int = 10,
        highcut: int = 250,
        filter_signal_bool: bool = True,
        primary_threshold: Union[int, float] = 5,
        primary_thres_mol: Union[int, float] = 2,
        primary_thres_hilus: Union[int, float] = 5,
        min_duration: float = 0.005,
        max_duration: float = 0.05,
        filter_order: int = 4,
        filter_rs: int = 20,
        method: str = "seperately",
        clean_lfp: bool = False,
        emg_threshold: float = 0.9,
    ) -> None:
        # adding all the parameters to the class
        self.__dict__.update(locals())
        del self.__dict__["self"]
        # setting the type name
        self.type_name = self.__class__.__name__
        self.get_xml_data()

    def get_xml_data(self):
        """
        Load the XML file to get the number of channels, sampling frequency and shank to channel mapping
        """
        nChannels, fs, fs_dat, shank_to_channel = loading.loadXML(self.basepath)
        self.nChannels = nChannels
        self.fs = fs
        self.fs_dat = fs_dat
        self.shank_to_channel = shank_to_channel

    def load_lfp(self):
        """
        Load the LFP signal
        """

        lfp, timestep = loading.loadLFP(
            self.basepath,
            n_channels=self.nChannels,
            frequency=self.fs,
            ext="lfp",
        )

        if self.noise_ch is None:
            channels = [self.hilus_ch, self.mol_ch]
        else:
            channels = [self.hilus_ch, self.mol_ch, self.noise_ch]

        self.lfp = nel.AnalogSignalArray(
            data=lfp[:, channels].T,
            timestamps=timestep,
            fs=self.fs,
            support=nel.EpochArray(np.array([min(timestep), max(timestep)])),
        )
        if self.clean_lfp:
            self.lfp._data = np.array(
                [
                    clean_lfp(self.lfp.signals[0]),
                    clean_lfp(self.lfp.signals[1]),
                ]
            )

    def filter_signal(self):
        """
        Filter the LFP signal

        Returns
        -------
        np.ndarray
            Filtered LFP signal
        """
        if not hasattr(self, "lfp"):
            self.load_lfp()

        b, a = cheby2(
            self.filter_order,
            self.filter_rs,
            [self.lowcut, self.highcut],
            fs=self.fs,
            btype="bandpass",
        )
        return filtfilt(b, a, self.lfp.data)

    def get_filtered_lfp(self):
        if not hasattr(self, "lfp"):
            self.load_lfp()

        self.filtered_lfp = deepcopy(self.lfp)
        self.filtered_lfp._data = self.filter_signal()

    def get_lfp_diff(self):
        if self.filter_signal_bool:
            y = self.filter_signal()
        else:
            if not hasattr(self, "lfp"):
                self.load_lfp()
            y = self.lfp.data

        self.mol_hilus_diff = nel.AnalogSignalArray(
            data=y[0, :] - y[1, :],
            timestamps=self.lfp.abscissa_vals,
            fs=self.fs,
            support=nel.EpochArray(
                np.array([min(self.lfp.abscissa_vals), max(self.lfp.abscissa_vals)])
            ),
        )

    def detect_ds_difference(self):
        if not hasattr(self, "mol_hilus_diff"):
            self.get_lfp_diff()

        PrimaryThreshold = (
            self.mol_hilus_diff.mean()
            + self.primary_threshold * self.mol_hilus_diff.std()
        )
        SecondaryThreshold = (
            self.mol_hilus_diff.mean()
            + self.secondary_threshold * self.mol_hilus_diff.std()
        )
        bounds, self.peak_val, _ = nel.utils.get_events_boundaries(
            x=self.mol_hilus_diff.data,
            PrimaryThreshold=PrimaryThreshold,
            SecondaryThreshold=SecondaryThreshold,
            minThresholdLength=0,
            minLength=self.min_duration,
            maxLength=self.max_duration,
            ds=1 / self.mol_hilus_diff.fs,
        )
        # convert bounds to time in seconds
        timebounds = self.mol_hilus_diff.time[bounds]
        # add 1/fs to stops for open interval
        timebounds[:, 1] += 1 / self.mol_hilus_diff.fs
        # create EpochArray with bounds
        self.ds_epoch = nel.EpochArray(timebounds)

        # remove ds in high emg
        _, high_emg_epoch, _ = loading.load_emg(self.basepath, self.emg_threshold)
        if not high_emg_epoch.isempty:
            idx = find_intersecting_intervals(self.ds_epoch, high_emg_epoch)
            self.ds_epoch._data = self.ds_epoch.data[~idx]
            self.peak_val = self.peak_val[~idx]

    def detect_ds_seperately(self):
        if not hasattr(self, "filtered_lfp"):
            self.get_filtered_lfp()

        # min and max time width of ds (converted to samples for find_peaks)
        time_widths = [
            int(self.min_duration * self.filtered_lfp.fs),
            int(self.max_duration * self.filtered_lfp.fs),
        ]

        # detect ds in hilus
        PrimaryThreshold = (
            self.filtered_lfp.data[0, :].mean()
            + self.primary_thres_hilus * self.filtered_lfp.data[0, :].std()
        )

        peaks, properties = find_peaks(
            self.filtered_lfp.data[0, :],
            height=PrimaryThreshold,
            width=time_widths,
        )
        self.peaks = peaks / self.filtered_lfp.fs
        self.peak_val = properties["peak_heights"]

        # create EpochArray with bounds
        hilus_epoch = nel.EpochArray(
            np.array([properties["left_ips"], properties["right_ips"]]).T
            / self.filtered_lfp.fs
        )

        # detect ds in mol
        PrimaryThreshold = (
            self.filtered_lfp.data[1, :].mean()
            + self.primary_thres_mol * self.filtered_lfp.data[1, :].std()
        )

        peaks, properties = find_peaks(
            -self.filtered_lfp.data[1, :],
            height=PrimaryThreshold,
            width=time_widths,
        )
        mol_epoch_peak = peaks / self.filtered_lfp.fs
        # create EpochArray with bounds
        mol_epoch = nel.EpochArray(
            np.array([properties["left_ips"], properties["right_ips"]]).T
            / self.filtered_lfp.fs
        )

        # detect ds in noise channel
        if self.noise_ch is not None:
            PrimaryThreshold = (
                self.filtered_lfp.data[2, :].mean()
                + self.primary_thres_hilus * self.filtered_lfp.data[2, :].std()
            )

            peaks, properties = find_peaks(
                self.filtered_lfp.data[2, :],
                height=PrimaryThreshold,
                width=time_widths,
            )

            # create EpochArray with bounds
            noise_epoch = nel.EpochArray(
                np.array([properties["left_ips"], properties["right_ips"]]).T
                / self.filtered_lfp.fs
            )

        # remove hilus spikes that are not overlapping with mol spikes
        # first, find mol peaks that are within hilus epoch
        idx = in_intervals(mol_epoch_peak, hilus_epoch.data)
        mol_epoch._data = mol_epoch.data[idx]

        overlap = find_intersecting_intervals(
            hilus_epoch, mol_epoch, return_indices=True
        )
        self.ds_epoch = nel.EpochArray(hilus_epoch.data[overlap])
        self.peak_val = self.peak_val[overlap]
        self.peaks = self.peaks[overlap]

        # remove dentate spikes that are overlapping with noise spikes
        if self.noise_ch is not None:
            overlap = find_intersecting_intervals(
                self.ds_epoch, noise_epoch, return_indices=True
            )
            self.ds_epoch = nel.EpochArray(self.ds_epoch.data[~overlap])
            self.peak_val = self.peak_val[~overlap]
            self.peaks = self.peaks[~overlap]

        # remove ds in high emg
        _, high_emg_epoch, _ = loading.load_emg(self.basepath, self.emg_threshold)
        if not high_emg_epoch.isempty:
            idx = find_intersecting_intervals(self.ds_epoch, high_emg_epoch)
            self.ds_epoch._data = self.ds_epoch.data[~idx]
            self.peak_val = self.peak_val[~idx]
            self.peaks = self.peaks[~idx]

    def detect_ds(self):
        """
        Detect the dentate spikes based on the method provided
        """
        if self.method == "difference":
            # deprecated
            raise NotImplementedError
            # self.detect_ds_difference()
        elif self.method == "seperately":
            self.detect_ds_seperately()
        else:
            raise ValueError(f"Method {self.method} not recognized")

    def save_ds_epoch(self):
        """
        Save the dentate spikes as a CellExplorer .mat file
        """

        filename = os.path.join(
            self.basepath, os.path.basename(self.basepath) + ".DS2.events.mat"
        )
        data = {}
        data["DS2"] = {}
        data["DS2"]["detectorinfo"] = {}
        data["DS2"]["timestamps"] = self.ds_epoch.data
        data["DS2"]["peaks"] = self.peaks
        data["DS2"]["amplitudes"] = self.peak_val.T
        data["DS2"]["amplitudeUnits"] = "mV"
        data["DS2"]["eventID"] = []
        data["DS2"]["eventIDlabels"] = []
        data["DS2"]["eventIDbinary"] = []
        data["DS2"]["duration"] = self.ds_epoch.durations.T
        data["DS2"]["center"] = np.median(self.ds_epoch.data, axis=1).T
        data["DS2"]["detectorinfo"]["detectorname"] = "DetectDS"
        data["DS2"]["detectorinfo"]["detectionparms"] = []
        data["DS2"]["detectorinfo"]["detectionintervals"] = []
        data["DS2"]["detectorinfo"]["ml_channel"] = self.mol_ch
        data["DS2"]["detectorinfo"]["h_channel"] = self.hilus_ch
        if self.noise_ch is not None:
            data["DS2"]["detectorinfo"]["noise_channel"] = self.noise_ch

        savemat(filename, data, long_field_names=True)

    def get_average_trace(self, shank=None, window=[-0.15, 0.15]):
        """
        Get the average LFP trace around the dentate spikes

        Parameters
        ----------
        shank : int, optional
            Shank number of the hilus signal
        window : list, optional
            Window around the dentate spikes

        Returns
        -------
        np.ndarray
            Average LFP trace around the dentate spikes
        np.ndarray
            Time lags around the dentate spikes
        """

        lfp, _ = loading.loadLFP(
            self.basepath,
            n_channels=self.nChannels,
            frequency=self.fs,
            ext="lfp",
        )

        if shank is None:
            hilus_shank = [
                k for k, v in self.shank_to_channel.items() if self.hilus_ch in v
            ][0]
        else:
            hilus_shank = shank

        ds_average, time_lags = event_triggered_average_fast(
            signal=lfp[:, self.shank_to_channel[hilus_shank]].T,
            events=self.ds_epoch.starts,
            sampling_rate=self.fs,
            window=window,
            return_average=True,
        )
        return ds_average, time_lags

    def plot(self, ax=None, window=[-0.15, 0.15], channel_offset=9e4):
        """
        Plot the average LFP trace around the dentate spikes

        Parameters
        ----------
        ax : matplotlib.axes._subplots.AxesSubplot, optional
            Axis to plot the average LFP trace
        window : list, optional
            Window around the dentate spikes
        channel_offset : float, optional
            Offset between the channels

        Returns
        -------
        matplotlib.axes._subplots.AxesSubplot
            Axis with the average LFP trace
        """

        import matplotlib.pyplot as plt

        ds_average, time_lags = self.get_average_trace(window=window)

        if ax is None:
            fig, ax = plt.subplots(figsize=(5, 10))

        ax.plot(
            time_lags,
            ds_average.T - np.linspace(0, channel_offset, ds_average.shape[0]),
            alpha=0.75,
        )
        return ax

    def _detach(self):
        """Detach the data from the object to allow for pickling"""
        self.filtered_lfp = None
        self.lfp = None
        self.mol_hilus_diff = None

    def save(self, filename: str):
        """
        Save the DetectDS object as a pickle file

        Parameters
        ----------
        filename : str
            Path to the file where the DetectDS object will be saved

        Returns
        -------
        None

        """
        self._detach()
        with open(filename, "wb") as f:
            pickle.dump(self, f)

    @classmethod
    def load(cls, filename: str):
        """
        Load a DetectDS object from a pickle file

        Parameters
        ----------
        filename : str
            Path to the file where the DetectDS object is saved

        Returns
        -------
        DetectDS
            The loaded DetectDS object

        """
        with open(filename, "rb") as f:
            return pickle.load(f)

    def __repr__(self) -> str:
        address_str = " at " + str(hex(id(self)))

        if not hasattr(self, "ds_epoch"):
            return "<%s%s>" % (self.type_name, address_str)

        if self.ds_epoch.isempty:
            return "<%s%s: empty>" % (self.type_name, address_str)

        dentate_spikes = f"dentate spikes {self.ds_epoch.n_intervals}"
        dstr = f"of length {self.ds_epoch.length}"

        return "<%s%s: %s> %s" % (self.type_name, address_str, dentate_spikes, dstr)

    def __str__(self) -> str:
        return self.__repr__()

    def __len__(self) -> int:
        if not hasattr(self, "ds_epoch"):
            return 0
        return self.ds_epoch.n_intervals

    def __getitem__(self, key):
        if not hasattr(self, "ds_epoch"):
            raise IndexError("No dentate spikes detected yet")
        return self.ds_epoch[key]

    def __iter__(self):
        if not hasattr(self, "ds_epoch"):
            raise IndexError("No dentate spikes detected yet")
        return iter(self.ds_epoch)

    def __contains__(self, item):
        if not hasattr(self, "ds_epoch"):
            raise IndexError("No dentate spikes detected yet")
        return item in self.ds_epoch
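
The separate-channel detection above reduces to a simple recipe: threshold each trace at mean + k·std and keep only `scipy.signal.find_peaks` peaks whose width falls inside the allowed duration range. A self-contained sketch of that recipe on synthetic data (the sampling rate and event shapes here are illustrative, not values produced by this class):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1250.0  # illustrative LFP sampling rate
rng = np.random.default_rng(0)

# Synthetic "hilus" trace: unit-variance noise plus three brief bumps.
t = np.arange(int(10 * fs)) / fs
signal = rng.normal(0.0, 1.0, t.size)
for center in (2.0, 5.0, 8.0):
    idx = np.abs(t - center) < 0.01            # ~20 ms wide events
    signal[idx] += 8.0 * np.hanning(idx.sum())

# Threshold at mean + 5*std and require 5-50 ms peak widths,
# mirroring primary_thres_hilus / min_duration / max_duration.
threshold = signal.mean() + 5 * signal.std()
widths = [int(0.005 * fs), int(0.05 * fs)]     # durations in samples
peaks, props = find_peaks(signal, height=threshold, width=widths)
peak_times = peaks / fs                        # seconds, as in self.peaks
```

The `left_ips`/`right_ips` entries of `props`, divided by `fs`, give the event boundaries that the class packs into an EpochArray.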

_detach()

Detach the data from the object to allow for pickling

Source code in neuro_py/detectors/dentate_spike.py
def _detach(self):
    """Detach the data from the object to allow for pickling"""
    self.filtered_lfp = None
    self.lfp = None
    self.mol_hilus_diff = None

detect_ds()

Detect the dentate spikes based on the method provided

Source code in neuro_py/detectors/dentate_spike.py
def detect_ds(self):
    """
    Detect the dentate spikes based on the method provided
    """
    if self.method == "difference":
        # deprecated
        raise NotImplementedError
        # self.detect_ds_difference()
    elif self.method == "seperately":
        self.detect_ds_seperately()
    else:
        raise ValueError(f"Method {self.method} not recognized")

filter_signal()

Filter the LFP signal

Returns:

np.ndarray
    Filtered LFP signal

Source code in neuro_py/detectors/dentate_spike.py
def filter_signal(self):
    """
    Filter the LFP signal

    Returns
    -------
    np.ndarray
        Filtered LFP signal
    """
    if not hasattr(self, "lfp"):
        self.load_lfp()

    b, a = cheby2(
        self.filter_order,
        self.filter_rs,
        [self.lowcut, self.highcut],
        fs=self.fs,
        btype="bandpass",
    )
    return filtfilt(b, a, self.lfp.data)
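
For reference, `cheby2` designs a Chebyshev type-II band-pass whose second argument (`filter_rs` here) is the minimum stop-band attenuation in dB, and `filtfilt` applies it forward and backward for zero phase shift. A standalone sketch with illustrative numbers (not data from this class):

```python
import numpy as np
from scipy.signal import cheby2, filtfilt

fs = 1250.0                       # illustrative LFP sampling rate
lowcut, highcut = 10.0, 250.0     # stop-band edges of the band-pass
filter_order, filter_rs = 4, 20   # rs = min stop-band attenuation (dB)

b, a = cheby2(filter_order, filter_rs, [lowcut, highcut],
              fs=fs, btype="bandpass")

# In-band 100 Hz component plus large out-of-band 1 Hz drift.
t = np.arange(int(4 * fs)) / fs
x = np.sin(2 * np.pi * 100 * t) + 3.0 * np.sin(2 * np.pi * 1 * t)

y = filtfilt(b, a, x)  # zero-phase: events are not shifted in time
```

Because `filtfilt` runs the filter twice, the effective stop-band attenuation doubles (here to at least 40 dB), so the 1 Hz drift is nearly removed while the 100 Hz component passes with gain close to 1.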

get_average_trace(shank=None, window=[-0.15, 0.15])

Get the average LFP trace around the dentate spikes

Parameters:

shank : int, optional
    Shank number of the hilus signal (default: None)
window : list, optional
    Window around the dentate spikes (default: [-0.15, 0.15])

Returns:

np.ndarray
    Average LFP trace around the dentate spikes
np.ndarray
    Time lags around the dentate spikes

Source code in neuro_py/detectors/dentate_spike.py
def get_average_trace(self, shank=None, window=[-0.15, 0.15]):
    """
    Get the average LFP trace around the dentate spikes

    Parameters
    ----------
    shank : int, optional
        Shank number of the hilus signal
    window : list, optional
        Window around the dentate spikes

    Returns
    -------
    np.ndarray
        Average LFP trace around the dentate spikes
    np.ndarray
        Time lags around the dentate spikes
    """

    lfp, _ = loading.loadLFP(
        self.basepath,
        n_channels=self.nChannels,
        frequency=self.fs,
        ext="lfp",
    )

    if shank is None:
        hilus_shank = [
            k for k, v in self.shank_to_channel.items() if self.hilus_ch in v
        ][0]
    else:
        hilus_shank = shank

    ds_average, time_lags = event_triggered_average_fast(
        signal=lfp[:, self.shank_to_channel[hilus_shank]].T,
        events=self.ds_epoch.starts,
        sampling_rate=self.fs,
        window=window,
        return_average=True,
    )
    return ds_average, time_lags
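
The event-triggered average itself is easy to sketch independently: cut a fixed window of samples around each event and average the snippets. A minimal stand-in (not the `event_triggered_average_fast` implementation) on synthetic data:

```python
import numpy as np

def event_triggered_average(signal, event_idx, pre, post):
    """Average signal snippets aligned to event sample indices.

    pre/post are samples before/after each event; events too close
    to the edges of the recording are skipped.
    """
    snippets = [signal[i - pre:i + post]
                for i in event_idx
                if i - pre >= 0 and i + post <= signal.size]
    return np.mean(snippets, axis=0)

fs = 1250.0  # illustrative sampling rate
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, int(20 * fs))

# Inject a stereotyped 8-sample bump at known event times.
events = np.array([3.0, 7.5, 12.0, 16.0])
for ev in events:
    i = int(ev * fs)
    x[i:i + 8] += 5.0 * np.hanning(8)

avg = event_triggered_average(x, (events * fs).astype(int),
                              pre=int(0.05 * fs), post=int(0.1 * fs))
```

Averaging suppresses the noise by roughly the square root of the event count while the stereotyped bump survives at the alignment point.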

get_xml_data()

Load the XML file to get the number of channels, sampling frequency and shank to channel mapping

Source code in neuro_py/detectors/dentate_spike.py
def get_xml_data(self):
    """
    Load the XML file to get the number of channels, sampling frequency and shank to channel mapping
    """
    nChannels, fs, fs_dat, shank_to_channel = loading.loadXML(self.basepath)
    self.nChannels = nChannels
    self.fs = fs
    self.fs_dat = fs_dat
    self.shank_to_channel = shank_to_channel

load(filename) classmethod

Load a DetectDS object from a pickle file

Parameters:

filename : str
    Path to the file where the DetectDS object is saved (required)

Returns:

DetectDS
    The loaded DetectDS object

Source code in neuro_py/detectors/dentate_spike.py
@classmethod
def load(cls, filename: str):
    """
    Load a DetectDS object from a pickle file

    Parameters
    ----------
    filename : str
        Path to the file where the DetectDS object is saved

    Returns
    -------
    DetectDS
        The loaded DetectDS object

    """
    with open(filename, "rb") as f:
        return pickle.load(f)
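
The save/load pair above is a plain pickle round-trip, with the bulky signal attributes detached first so the file stays small. The same pattern in miniature (`Result` is a hypothetical stand-in for DetectDS):

```python
import os
import pickle
import tempfile

class Result:
    """Illustrative stand-in for a detector object, not DetectDS itself."""

    def __init__(self, n_events):
        self.n_events = n_events      # small, worth persisting
        self.lfp = "large array"      # heavy attribute, dropped on save

    def _detach(self):
        self.lfp = None               # detach bulky data before pickling

    def save(self, filename):
        self._detach()
        with open(filename, "wb") as f:
            pickle.dump(self, f)

    @classmethod
    def load(cls, filename):
        with open(filename, "rb") as f:
            return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "result.pkl")
Result(n_events=5769).save(path)
restored = Result.load(path)
```

Note that, as in the class above, `save` mutates the live object: after saving, the detached attributes are gone until recomputed.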

load_lfp()

Load the LFP signal

Source code in neuro_py/detectors/dentate_spike.py
def load_lfp(self):
    """
    Load the LFP signal
    """

    lfp, timestep = loading.loadLFP(
        self.basepath,
        n_channels=self.nChannels,
        frequency=self.fs,
        ext="lfp",
    )

    if self.noise_ch is None:
        channels = [self.hilus_ch, self.mol_ch]
    else:
        channels = [self.hilus_ch, self.mol_ch, self.noise_ch]

    self.lfp = nel.AnalogSignalArray(
        data=lfp[:, channels].T,
        timestamps=timestep,
        fs=self.fs,
        support=nel.EpochArray(np.array([min(timestep), max(timestep)])),
    )
    if self.clean_lfp:
        self.lfp._data = np.array(
            [
                clean_lfp(self.lfp.signals[0]),
                clean_lfp(self.lfp.signals[1]),
            ]
        )

plot(ax=None, window=[-0.15, 0.15], channel_offset=90000.0)

Plot the average LFP trace around the dentate spikes

Parameters:

ax : matplotlib.axes._subplots.AxesSubplot, optional
    Axis to plot the average LFP trace (default: None)
window : list, optional
    Window around the dentate spikes (default: [-0.15, 0.15])
channel_offset : float, optional
    Offset between the channels (default: 9e4)

Returns:

matplotlib.axes._subplots.AxesSubplot
    Axis with the average LFP trace

Source code in neuro_py/detectors/dentate_spike.py
def plot(self, ax=None, window=[-0.15, 0.15], channel_offset=9e4):
    """
    Plot the average LFP trace around the dentate spikes

    Parameters
    ----------
    ax : matplotlib.axes._subplots.AxesSubplot, optional
        Axis to plot the average LFP trace
    window : list, optional
        Window around the dentate spikes
    channel_offset : float, optional
        Offset between the channels

    Returns
    -------
    matplotlib.axes._subplots.AxesSubplot
        Axis with the average LFP trace
    """

    import matplotlib.pyplot as plt

    ds_average, time_lags = self.get_average_trace(window=window)

    if ax is None:
        fig, ax = plt.subplots(figsize=(5, 10))

    ax.plot(
        time_lags,
        ds_average.T - np.linspace(0, channel_offset, ds_average.shape[0]),
        alpha=0.75,
    )
    return ax

save(filename)

Save the DetectDS object as a pickle file

Parameters:

filename : str
    Path to the file where the DetectDS object will be saved (required)

Returns:

None
Source code in neuro_py/detectors/dentate_spike.py
def save(self, filename: str):
    """
    Save the DetectDS object as a pickle file

    Parameters
    ----------
    filename : str
        Path to the file where the DetectDS object will be saved

    Returns
    -------
    None

    """
    self._detach()
    with open(filename, "wb") as f:
        pickle.dump(self, f)

save_ds_epoch()

Save the dentate spikes as a CellExplorer .mat file

Source code in neuro_py/detectors/dentate_spike.py
def save_ds_epoch(self):
    """
    Save the dentate spikes as a CellExplorer .mat file
    """

    filename = os.path.join(
        self.basepath, os.path.basename(self.basepath) + ".DS2.events.mat"
    )
    data = {}
    data["DS2"] = {}
    data["DS2"]["detectorinfo"] = {}
    data["DS2"]["timestamps"] = self.ds_epoch.data
    data["DS2"]["peaks"] = self.peaks
    data["DS2"]["amplitudes"] = self.peak_val.T
    data["DS2"]["amplitudeUnits"] = "mV"
    data["DS2"]["eventID"] = []
    data["DS2"]["eventIDlabels"] = []
    data["DS2"]["eventIDbinary"] = []
    data["DS2"]["duration"] = self.ds_epoch.durations.T
    data["DS2"]["center"] = np.median(self.ds_epoch.data, axis=1).T
    data["DS2"]["detectorinfo"]["detectorname"] = "DetectDS"
    data["DS2"]["detectorinfo"]["detectionparms"] = []
    data["DS2"]["detectorinfo"]["detectionintervals"] = []
    data["DS2"]["detectorinfo"]["ml_channel"] = self.mol_ch
    data["DS2"]["detectorinfo"]["h_channel"] = self.hilus_ch
    if self.noise_ch is not None:
        data["DS2"]["detectorinfo"]["noise_channel"] = self.noise_ch

    savemat(filename, data, long_field_names=True)
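
The CellExplorer-style `.events.mat` written above can be read back with `scipy.io.loadmat`; passing `struct_as_record=False` and `squeeze_me=True` restores attribute-style access to the nested struct. A sketch of the round trip with dummy values (field names follow the code above):

```python
import os
import tempfile
import numpy as np
from scipy.io import savemat, loadmat

# Dummy DS2 struct with a subset of the fields written by save_ds_epoch.
data = {"DS2": {
    "timestamps": np.array([[0.10, 0.12], [0.50, 0.53]]),
    "peaks": np.array([0.11, 0.51]),
    "amplitudeUnits": "mV",
    "detectorinfo": {"detectorname": "DetectDS"},
}}

path = os.path.join(tempfile.mkdtemp(), "session.DS2.events.mat")
savemat(path, data, long_field_names=True)

# MATLAB structs come back as objects with attribute access.
mat = loadmat(path, squeeze_me=True, struct_as_record=False)
ds2 = mat["DS2"]
```

`ds2.timestamps`, `ds2.peaks`, and the nested `ds2.detectorinfo.detectorname` are then available exactly as a CellExplorer session would expect.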

bimodal_thresh(bimodal_data, max_thresh=np.inf, schmidt=False, max_hist_bins=25, start_bins=10, set_thresh=None, nboot=100, force_bimodal=False)

BimodalThresh: Find threshold between bimodal data modes (e.g., UP vs DOWN states) and return crossing times (UP/DOWN onset/offset times).

Parameters:

bimodal_data : array-like
    Vector of bimodal data (required)
max_thresh : float, optional
    Maximum threshold value (default: np.inf)
schmidt : bool, optional
    Use a Schmidt trigger with halfway points between the trough and the peaks (default: False)
max_hist_bins : int, optional
    Maximum number of histogram bins to try before giving up (default: 25)
start_bins : int, optional
    Minimum number of histogram bins for the initial histogram (default: 10)
set_thresh : float, optional
    Manually set your own threshold (default: None)
nboot : int, optional
    Number of bootstrap iterations for the dip test (default: 100)
force_bimodal : bool, optional
    If True, skip the bimodality test and proceed with threshold detection (default: False)

Returns:

Name Type Description
thresh float

Threshold value between modes

cross dict

Dictionary with keys: - 'upints': array of UP state intervals [onsets, offsets] - 'downints': array of DOWN state intervals [onsets, offsets]

bihist dict

Dictionary with keys: - 'bins': bin centers - 'hist': counts

diptest_result dict

Dictionary with keys: - 'dip': Hartigan's dip test statistic - 'p': p-value for bimodal distribution

Example

>>> data = np.concatenate([np.random.normal(0, 1, 1000),
...                        np.random.normal(5, 1, 1000)])
>>> thresh, cross, bihist, diptest_result = bimodal_thresh(data)

Notes

Python translation of BimodalThresh.m from MehrotraLevenstein_2023
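The crossing detection at the heart of this routine reduces to a `np.diff` on a boolean indicator. A minimal standalone sketch (hand-picked threshold and toy signal, NumPy only):

```python
import numpy as np

# Threshold-crossing sketch: each returned index is the last sample on the
# old side of the threshold, per np.diff's off-by-one convention.
signal = np.array([0.1, 0.2, 3.0, 3.2, 0.3, 0.1, 2.9, 3.1, 0.2])
thresh = 1.0

over = signal > thresh
cross_up = np.where(np.diff(over.astype(int)) == 1)[0]
cross_down = np.where(np.diff(over.astype(int)) == -1)[0]

print(cross_up)    # [1 5]: signal rises through thresh after these indices
print(cross_down)  # [3 7]: signal falls back below after these indices
```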

Source code in neuro_py/detectors/up_down_state.py
def bimodal_thresh(
    bimodal_data,
    max_thresh=np.inf,
    schmidt=False,
    max_hist_bins=25,
    start_bins=10,
    set_thresh=None,
    nboot=100,
    force_bimodal=False,
):
    """
    BimodalThresh: Find threshold between bimodal data modes (e.g., UP vs DOWN states)
    and return crossing times (UP/DOWN onset/offset times).

    Parameters
    ----------
    bimodal_data : array-like
        Vector of bimodal data
    max_thresh : float, optional
        Maximum threshold value (default: inf)
    schmidt : bool, optional
        Use Schmidt trigger with halfway points between trough and peaks (default: False)
    max_hist_bins : int, optional
        Maximum number of histogram bins to try before giving up (default: 25)
    start_bins : int, optional
        Minimum number of histogram bins for initial histogram (default: 10)
    set_thresh : float, optional
        Manually set your own threshold (default: None)
    nboot : int, optional
        Number of bootstrap iterations for dip test (default: 100)
    force_bimodal : bool, optional
        If True, skip bimodality test and proceed with threshold detection (default: False)

    Returns
    -------
    thresh : float
        Threshold value between modes
    cross : dict
        Dictionary with keys:
        - 'upints': array of UP state intervals [onsets, offsets]
        - 'downints': array of DOWN state intervals [onsets, offsets]
    bihist : dict
        Dictionary with keys:
        - 'bins': bin centers
        - 'hist': counts
    diptest_result : dict
        Dictionary with keys:
        - 'dip': Hartigan's dip test statistic
        - 'p': p-value for bimodal distribution

    Example
    -------
    >>> data = np.concatenate([np.random.normal(0, 1, 1000),
    ...                        np.random.normal(5, 1, 1000)])
    >>> thresh, cross, bihist, diptest_result = bimodal_thresh(data)

    Notes
    -----
    Python translation of BimodalThresh.m from MehrotraLevenstein_2023

    """

    # Initialize
    bimodal_data = np.array(bimodal_data).flatten()
    bimodal_data = bimodal_data[~np.isnan(bimodal_data)]

    # Run Hartigan's dip test for bimodality
    dip_stat, p_value = hartigan_diptest(bimodal_data, n_boot=nboot)
    diptest_result = {"dip": dip_stat, "p": p_value}

    # If not bimodal, return empty (unless forced)
    if p_value > 0.05 and not force_bimodal:
        cross = {"upints": np.array([]), "downints": np.array([])}
        hist_counts, bin_edges = np.histogram(bimodal_data, bins=start_bins)
        bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
        bihist = {"hist": hist_counts, "bins": bin_centers}
        return np.nan, cross, bihist, diptest_result

    # Remove data over max threshold
    bimodal_data = bimodal_data[bimodal_data < max_thresh]

    # Find histogram with exactly 2 peaks
    num_peaks = 1
    num_bins = start_bins

    while num_peaks != 2:
        hist_counts, bin_edges = np.histogram(bimodal_data, bins=num_bins)
        bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2

        # Find peaks (add zeros at edges for edge detection)
        padded_hist = np.concatenate([[0], hist_counts, [0]])
        peaks, _ = find_peaks(padded_hist, distance=1)
        peaks = np.sort(peaks) - 1  # Adjust for padding

        # Keep only top 2 peaks
        if len(peaks) > 2:
            peak_heights = hist_counts[peaks]
            top_2_idx = np.argsort(peak_heights)[-2:]
            peaks = np.sort(peaks[top_2_idx])

        num_peaks = len(peaks)
        num_bins += 1

        if num_bins >= max_hist_bins and set_thresh is None:
            print("Unable to find trough")
            cross = {"upints": np.array([]), "downints": np.array([])}
            bihist = {"hist": hist_counts, "bins": bin_centers}
            return np.nan, cross, bihist, diptest_result

    bihist = {"hist": hist_counts, "bins": bin_centers}

    # Find trough between peaks
    between_peaks = bin_centers[peaks[0] : peaks[1] + 1]
    between_hist = hist_counts[peaks[0] : peaks[1] + 1]

    # Find minimum (trough)
    trough_idx = np.argmin(between_hist)

    if set_thresh is not None:
        thresh = set_thresh
    else:
        thresh = between_peaks[trough_idx]

    # Schmidt trigger: use halfway points between trough and peaks
    if schmidt:
        thresh_up = thresh + 0.5 * (between_peaks[-1] - thresh)
        thresh_down = thresh + 0.5 * (between_peaks[0] - thresh)

        over_up = bimodal_data > thresh_up
        over_down = bimodal_data > thresh_down

        cross_up = np.where(np.diff(over_up.astype(int)) == 1)[0]
        cross_down = np.where(np.diff(over_down.astype(int)) == -1)[0]

        # Check for empty crossings before vstack
        if len(cross_up) == 0 or len(cross_down) == 0:
            cross = {
                "upints": np.array([]).reshape(0, 2),
                "downints": np.array([]).reshape(0, 2),
            }
            return thresh, cross, bihist, diptest_result

        # Delete incomplete (repeat) crossings
        all_crossings = np.vstack(
            [
                np.column_stack([cross_up, np.ones(len(cross_up))]),
                np.column_stack([cross_down, np.zeros(len(cross_down))]),
            ]
        )

        sort_order = np.argsort(all_crossings[:, 0])
        all_crossings = all_crossings[sort_order]

        up_down_switch = np.diff(all_crossings[:, 1])
        same_state = np.where(up_down_switch == 0)[0] + 1
        all_crossings = np.delete(all_crossings, same_state, axis=0)

        cross_up = all_crossings[all_crossings[:, 1] == 1, 0].astype(int)
        cross_down = all_crossings[all_crossings[:, 1] == 0, 0].astype(int)
    else:
        over_ind = bimodal_data > thresh
        cross_up = np.where(np.diff(over_ind.astype(int)) == 1)[0]
        cross_down = np.where(np.diff(over_ind.astype(int)) == -1)[0]

    # If only one crossing, return empty
    if len(cross_up) == 0 or len(cross_down) == 0:
        cross = {
            "upints": np.array([]).reshape(0, 2),
            "downints": np.array([]).reshape(0, 2),
        }
        return thresh, cross, bihist, diptest_result

    # Create interval arrays
    up_for_up = cross_up.copy()
    up_for_down = cross_up.copy()
    down_for_up = cross_down.copy()
    down_for_down = cross_down.copy()

    # Adjust for proper pairing
    if cross_up[0] < cross_down[0]:
        up_for_down = up_for_down[1:]
    if cross_down[-1] > cross_up[-1]:
        down_for_down = down_for_down[:-1]
    if cross_down[0] < cross_up[0]:
        down_for_up = down_for_up[1:]
    if cross_up[-1] > cross_down[-1]:
        up_for_up = up_for_up[:-1]

    # Ensure equal length for pairing
    min_len_up = min(len(up_for_up), len(down_for_up))
    min_len_down = min(len(down_for_down), len(up_for_down))

    # Check if pairing resulted in any valid intervals
    if min_len_up == 0 or min_len_down == 0:
        cross = {
            "upints": np.array([]).reshape(0, 2),
            "downints": np.array([]).reshape(0, 2),
        }
        return thresh, cross, bihist, diptest_result

    upints = np.column_stack([up_for_up[:min_len_up], down_for_up[:min_len_up]])
    downints = np.column_stack(
        [down_for_down[:min_len_down], up_for_down[:min_len_down]]
    )

    cross = {"upints": upints, "downints": downints}

    return thresh, cross, bihist, diptest_result

detect_up_down_states(basepath=None, st=None, nrem_epochs=None, region='ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX', min_dur=0.03, max_dur=0.5, percentile=20, bin_size=0.01, smooth_sigma=0.02, min_cells=10, save_mat=True, epoch_by_epoch=False, beh_epochs=None, show_figure=False, overwrite=False)

Detect UP and DOWN states in neural data.

UP and DOWN states are identified by computing the total firing rate of all simultaneously recorded neurons in bins of 10 ms, smoothed with a Gaussian kernel of 20 ms s.d. Epochs with a firing rate below the specified percentile threshold are considered DOWN states, while the intervals between DOWN states are classified as UP states. Epochs shorter than min_dur or longer than max_dur are discarded.
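The rule above can be sketched with plain NumPy/SciPy on a synthetic population rate. This replaces neuro_py's `find_interval` with a small run-length helper and mirrors the default bin/duration parameters; it is an illustration, not the library's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

bin_size = 0.01      # 10 ms bins
smooth_sigma = 0.02  # 20 ms Gaussian s.d.
min_dur, max_dur = 0.03, 0.5

# Synthetic population rate: a quiet stretch flanked by active stretches.
rng = np.random.default_rng(0)
rate = np.concatenate(
    [rng.poisson(20, 200), rng.poisson(1, 30), rng.poisson(20, 200)]
).astype(float)
rate = gaussian_filter1d(rate, sigma=smooth_sigma / bin_size)

below = rate < np.percentile(rate, 20)

# Start/stop bin indices of contiguous sub-threshold runs (stand-in for
# find_interval): pad, diff, and read off the edges.
edges = np.diff(np.r_[0, below.astype(int), 0])
down_bins = np.column_stack([np.where(edges == 1)[0], np.where(edges == -1)[0]])

down_times = down_bins * bin_size
durations = down_times[:, 1] - down_times[:, 0]
down_states = down_times[(durations >= min_dur) & (durations <= max_dur)]
print(down_states)  # the quiet stretch around 2.0-2.3 s survives the filter
```

UP states would then be the complement of these intervals within the NREM domain, as the function computes with `~down_state_epochs`.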

Parameters:

Name Type Description Default
basepath str

Base directory path where event files and neural data are stored.

None
st Optional[SpikeTrainArray]

Spike train data. If None, spike data will be loaded based on specified regions.

None
nrem_epochs Optional[EpochArray]

NREM epochs. If None, epochs will be loaded from the basepath.

None
region str

Brain regions for loading spikes. The first region is prioritized.

"ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX"
min_dur float

Minimum duration for DOWN states, in seconds.

0.03
max_dur float

Maximum duration for DOWN states, in seconds.

0.5
percentile float

Percentile threshold for determining DOWN states based on firing rate.

20
bin_size float

Bin size for computing firing rates, in seconds.

0.01
smooth_sigma float

Standard deviation for Gaussian kernel smoothing, in seconds.

0.02
min_cells int

Minimum number of neurons required for analysis.

10
save_mat bool

Whether to save the detected UP and DOWN states to .mat files.

True
epoch_by_epoch bool

Whether to perform detection epoch by epoch. If True, detection will be performed separately for each sleep epoch.

False
beh_epochs Optional[EpochArray]

Optional behavioral epochs to use for epoch-by-epoch detection. If None, sleep epochs will be loaded and used.

None
show_figure bool

Whether to display a figure showing firing rates during detected UP and DOWN states.

False
overwrite bool

Whether to overwrite existing .mat files when saving detected states.

False

Returns:

Type Description
Tuple[Optional[EpochArray], Optional[EpochArray]]

A tuple containing the detected DOWN state epochs and UP state epochs. Returns (None, None) if no suitable states are found or insufficient data is available.

Examples:

>>> down_state, up_state = detect_up_down_states(basepath="/path/to/data", show_figure=True)

From command line: $ python up_down_state.py /path/to/data

Notes

Detection method based on https://doi.org/10.1038/s41467-020-15842-4

Source code in neuro_py/detectors/up_down_state.py
def detect_up_down_states(
    basepath: Optional[str] = None,
    st: Optional[nel.SpikeTrainArray] = None,
    nrem_epochs: Optional[nel.EpochArray] = None,
    region: str = "ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX",
    min_dur: float = 0.03,
    max_dur: float = 0.5,
    percentile: float = 20,
    bin_size: float = 0.01,
    smooth_sigma: float = 0.02,
    min_cells: int = 10,
    save_mat: bool = True,
    epoch_by_epoch: bool = False,
    beh_epochs: Optional[nel.EpochArray] = None,
    show_figure: bool = False,
    overwrite: bool = False,
) -> Tuple[Optional[nel.EpochArray], Optional[nel.EpochArray]]:
    """
    Detect UP and DOWN states in neural data.

    UP and DOWN states are identified by computing the total firing rate of all
    simultaneously recorded neurons in bins of 10 ms, smoothed with a Gaussian kernel
    of 20 ms s.d. Epochs with a firing rate below the specified percentile threshold
    are considered DOWN states, while the intervals between DOWN states are classified
    as UP states. Epochs shorter than `min_dur` or longer than `max_dur` are discarded.

    Parameters
    ----------
    basepath : str
        Base directory path where event files and neural data are stored.
    st : Optional[nel.SpikeTrainArray], default=None
        Spike train data. If None, spike data will be loaded based on specified regions.
    nrem_epochs : Optional[nel.EpochArray], default=None
        NREM epochs. If None, epochs will be loaded from the basepath.
    region : str, default="ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX"
        Brain regions for loading spikes. The first region is prioritized.
    min_dur : float, default=0.03
        Minimum duration for DOWN states, in seconds.
    max_dur : float, default=0.5
        Maximum duration for DOWN states, in seconds.
    percentile : float, default=20
        Percentile threshold for determining DOWN states based on firing rate.
    bin_size : float, default=0.01
        Bin size for computing firing rates, in seconds.
    smooth_sigma : float, default=0.02
        Standard deviation for Gaussian kernel smoothing, in seconds.
    min_cells : int, default=10
        Minimum number of neurons required for analysis.
    save_mat : bool, default=True
        Whether to save the detected UP and DOWN states to .mat files.
    epoch_by_epoch : bool, default=False
        Whether to perform detection epoch by epoch. If True, detection will be performed separately for each sleep epoch.
    beh_epochs : Optional[nel.EpochArray], default=None
        Optional behavioral epochs to use for epoch-by-epoch detection. If None, sleep epochs will be loaded and used.
    show_figure : bool, default=False
        Whether to display a figure showing firing rates during detected UP and DOWN states.
    overwrite : bool, default=False
        Whether to overwrite existing .mat files when saving detected states.

    Returns
    -------
    Tuple[Optional[nel.EpochArray], Optional[nel.EpochArray]]
        A tuple containing the detected DOWN state epochs and UP state epochs.
        Returns (None, None) if no suitable states are found or insufficient data is available.

    Examples
    --------
    >>> down_state, up_state = detect_up_down_states(basepath="/path/to/data", show_figure=True)

    From command line:
    $ python up_down_state.py /path/to/data

    Notes
    -----
    Detection method based on https://doi.org/10.1038/s41467-020-15842-4
    """

    def _detect_states(bst_segment: nel.AnalogSignalArray, domain: nel.EpochArray):
        """Detect down/up states within a given domain using shared logic."""

        down_state_epochs = bst_segment.bin_centers[
            find_interval(
                bst_segment.data.flatten()
                < np.percentile(bst_segment.data.T, percentile)
            )
        ]
        if down_state_epochs.shape[0] == 0:
            return None, None

        durations = down_state_epochs[:, 1] - down_state_epochs[:, 0]
        down_state_epochs = down_state_epochs[durations > bin_size]

        down_state_epochs = (
            nel.EpochArray(data=down_state_epochs).merge(gap=bin_size * 2).data
        )
        durations = down_state_epochs[:, 1] - down_state_epochs[:, 0]
        down_state_epochs = down_state_epochs[
            ~((durations < min_dur) | (durations > max_dur)), :
        ]
        if down_state_epochs.shape[0] == 0:
            return None, None

        down_state_epochs = nel.EpochArray(data=down_state_epochs, domain=domain)

        up_state_epochs = ~down_state_epochs
        up_state_epochs = up_state_epochs.data
        # make sure up states are longer than bin size
        durations = up_state_epochs[:, 1] - up_state_epochs[:, 0]
        up_state_epochs = up_state_epochs[durations > bin_size]
        # merge nearby up states that are closer than 2*bin_size
        up_state_epochs = nel.EpochArray(data=up_state_epochs, domain=domain).merge(
            gap=bin_size * 2
        )

        return down_state_epochs, up_state_epochs

    # check for existence of event files
    if save_mat and not overwrite:
        filename_downstate = os.path.join(
            basepath, os.path.basename(basepath) + "." + "down_state" + ".events.mat"
        )
        filename_upstate = os.path.join(
            basepath, os.path.basename(basepath) + "." + "up_state" + ".events.mat"
        )
        if os.path.exists(filename_downstate) & os.path.exists(filename_upstate):
            down_state = loading.load_events(basepath=basepath, epoch_name="down_state")
            up_state = loading.load_events(basepath=basepath, epoch_name="up_state")
            return down_state, up_state

    # load brain states
    if nrem_epochs is None:
        state_dict = loading.load_SleepState_states(basepath)
        nrem_epochs = nel.EpochArray(state_dict["NREMstate"])

    if nrem_epochs.isempty:
        print(f"No NREM epochs found for {basepath}")
        return None, None

    # load spikes
    if st is None:
        st, _ = loading.load_spikes(basepath, brainRegion=region)

    # check if there are enough cells
    if st is None or st.isempty or st.data.shape[0] < min_cells:
        print(f"No spikes found for {basepath} {region}")
        return None, None

    # flatten spikes
    st = st[nrem_epochs].flatten()

    # bin and smooth
    bst = st.bin(ds=bin_size).smooth(sigma=smooth_sigma)

    if epoch_by_epoch:
        if beh_epochs is None:
            epoch_df = npy.io.load_epoch(basepath)
            epoch_df = npy.session.compress_repeated_epochs(epoch_df)
            epoch_df = epoch_df.query("environment == 'sleep'")
            beh_epochs = nel.EpochArray(epoch_df[["startTime", "stopTime"]].values)

        down_state_epochs = []
        up_state_epochs = []
        for ep in beh_epochs:
            domain = nrem_epochs & ep
            if domain.isempty:
                continue

            down_state_epochs_, up_state_epochs_ = _detect_states(bst[domain], domain)
            if down_state_epochs_ is None or up_state_epochs_ is None:
                continue

            down_state_epochs.append(down_state_epochs_.data)
            up_state_epochs.append(up_state_epochs_.data)

        if len(down_state_epochs) == 0 or len(up_state_epochs) == 0:
            print(f"No down states found for {basepath}")
            return None, None

        down_state_epochs = nel.EpochArray(
            data=np.concatenate(down_state_epochs), domain=nrem_epochs
        )
        up_state_epochs = nel.EpochArray(
            data=np.concatenate(up_state_epochs), domain=nrem_epochs
        )
    else:
        down_state_epochs, up_state_epochs = _detect_states(bst, nrem_epochs)
        if down_state_epochs is None or up_state_epochs is None:
            print(f"No down states found for {basepath}")
            return None, None

    # save to cell explorer mat file
    if save_mat:
        epoch_to_mat(down_state_epochs, basepath, "down_state", "detect_up_down_states")
        epoch_to_mat(up_state_epochs, basepath, "up_state", "detect_up_down_states")

    # optional figure to show firing rate during up and down states
    if show_figure:
        from matplotlib import pyplot as plt

        plt.figure()
        ax = plt.gca()
        psth = npy.process.compute_psth(st.data, down_state_epochs.starts, n_bins=500)
        psth.columns = ["Down states"]
        psth.plot(ax=ax)

        psth = npy.process.compute_psth(st.data, up_state_epochs.starts, n_bins=500)
        psth.columns = ["Up states"]

        psth.plot(ax=ax)
        ax.legend(loc="upper right", frameon=False)
        ax.axvline(0, color="k", linestyle="--")

        ax.set_xlabel("Time from state transition (s)")
        ax.set_ylabel("Firing rate (Hz)")

    return down_state_epochs, up_state_epochs

detect_up_down_states_bimodal_thresh(basepath=None, st=None, nrem_epochs=None, region='ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX', bin_size=0.01, smooth_sigma=0.02, min_cells=10, save_mat=True, epoch_by_epoch=False, beh_epochs=None, show_figure=False, overwrite=False, schmidt=False, nboot=100, force_bimodal=False)

Detect UP and DOWN states using bimodal_thresh on firing rate distribution.

Uses the same data loading and epoch-by-epoch logic as detect_up_down_states, but applies Hartigan's dip test and bimodal threshold detection instead of a fixed percentile. This is useful when UP/DOWN states form a clear bimodal distribution in the firing rate histogram.
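The trough heuristic can be sketched on synthetic bimodal data: take the two tallest local maxima of the firing-rate histogram and place the threshold at the count minimum between them. This hand-rolls the search on toy data rather than calling the documented routine, and skips the dip test entirely:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic bimodal sample: a low mode near 0 and a high mode near 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 1000), rng.normal(5, 1, 1000)])

hist, bin_edges = np.histogram(data, bins=20)
centers = (bin_edges[:-1] + bin_edges[1:]) / 2

# Pad with zeros so peaks at the histogram edges are detected too,
# then keep the two tallest local maxima.
peaks, _ = find_peaks(np.r_[0.0, hist, 0.0])
peaks = peaks - 1
if len(peaks) > 2:
    peaks = np.sort(peaks[np.argsort(hist[peaks])[-2:]])

trough = peaks[0] + np.argmin(hist[peaks[0] : peaks[1] + 1])
thresh = centers[trough]
print(round(float(thresh), 2))  # falls between the two modes
```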

Parameters:

Name Type Description Default
basepath str

Base directory path where event files and neural data are stored.

None
st Optional[SpikeTrainArray]

Spike train data. If None, spike data will be loaded based on specified regions.

None
nrem_epochs Optional[EpochArray]

NREM epochs. If None, epochs will be loaded from the basepath.

None
region str

Brain regions for loading spikes. The first region is prioritized.

"ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX"
bin_size float

Bin size for computing firing rates, in seconds.

0.01
smooth_sigma float

Standard deviation for Gaussian kernel smoothing, in seconds.

0.02
min_cells int

Minimum number of neurons required for analysis.

10
save_mat bool

Whether to save the detected UP and DOWN states to .mat files.

True
epoch_by_epoch bool

Whether to perform detection epoch by epoch.

False
beh_epochs Optional[EpochArray]

Optional behavioral epochs to use for epoch-by-epoch detection.

None
show_figure bool

Whether to display a figure showing firing rates during detected UP and DOWN states.

False
overwrite bool

Whether to overwrite existing .mat files when saving detected states.

False
schmidt bool

Use Schmidt trigger (hysteresis) for state transitions in bimodal_thresh.

False
nboot int

Number of bootstrap iterations for Hartigan's dip test. Reduce it (e.g., to 50) for very long recordings to improve performance.

100
force_bimodal bool

If True, skip the bimodality test and force threshold detection even if the distribution appears unimodal. Use with caution.

False

Returns:

Type Description
Tuple[Optional[EpochArray], Optional[EpochArray]]

A tuple containing the detected DOWN state epochs and UP state epochs. Returns (None, None) if no suitable states are found or insufficient data is available.

Source code in neuro_py/detectors/up_down_state.py
def detect_up_down_states_bimodal_thresh(
    basepath: Optional[str] = None,
    st: Optional[nel.SpikeTrainArray] = None,
    nrem_epochs: Optional[nel.EpochArray] = None,
    region: str = "ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX",
    bin_size: float = 0.01,
    smooth_sigma: float = 0.02,
    min_cells: int = 10,
    save_mat: bool = True,
    epoch_by_epoch: bool = False,
    beh_epochs: Optional[nel.EpochArray] = None,
    show_figure: bool = False,
    overwrite: bool = False,
    schmidt: bool = False,
    nboot: int = 100,
    force_bimodal: bool = False,
) -> Tuple[Optional[nel.EpochArray], Optional[nel.EpochArray]]:
    """
    Detect UP and DOWN states using bimodal_thresh on firing rate distribution.

    Uses the same data loading and epoch-by-epoch logic as `detect_up_down_states`,
    but applies Hartigan's dip test and bimodal threshold detection instead of a
    fixed percentile. This is useful when UP/DOWN states form a clear bimodal
    distribution in the firing rate histogram.

    Parameters
    ----------
    basepath : str
        Base directory path where event files and neural data are stored.
    st : Optional[nel.SpikeTrainArray], default=None
        Spike train data. If None, spike data will be loaded based on specified regions.
    nrem_epochs : Optional[nel.EpochArray], default=None
        NREM epochs. If None, epochs will be loaded from the basepath.
    region : str, default="ILA|PFC|PL|EC1|EC2|EC3|EC4|EC5|MEC|CTX"
        Brain regions for loading spikes. The first region is prioritized.
    bin_size : float, default=0.01
        Bin size for computing firing rates, in seconds.
    smooth_sigma : float, default=0.02
        Standard deviation for Gaussian kernel smoothing, in seconds.
    min_cells : int, default=10
        Minimum number of neurons required for analysis.
    save_mat : bool, default=True
        Whether to save the detected UP and DOWN states to .mat files.
    epoch_by_epoch : bool, default=False
        Whether to perform detection epoch by epoch.
    beh_epochs : Optional[nel.EpochArray], default=None
        Optional behavioral epochs to use for epoch-by-epoch detection.
    show_figure : bool, default=False
        Whether to display a figure showing firing rates during detected UP and DOWN states.
    overwrite : bool, default=False
        Whether to overwrite existing .mat files when saving detected states.
    schmidt : bool, default=False
        Use Schmidt trigger (hysteresis) for state transitions in bimodal_thresh.
    nboot : int, default=100
        Number of bootstrap iterations for Hartigan's dip test. Reduce it (e.g., to 50)
        for very long recordings to improve performance.
    force_bimodal : bool, default=False
        If True, skip the bimodality test and force threshold detection even if
        the distribution appears unimodal. Use with caution.

    Returns
    -------
    Tuple[Optional[nel.EpochArray], Optional[nel.EpochArray]]
        A tuple containing the detected DOWN state epochs and UP state epochs.
        Returns (None, None) if no suitable states are found or insufficient data is available.
    """

    # check for existence of event files
    if save_mat and not overwrite:
        filename_downstate = os.path.join(
            basepath,
            os.path.basename(basepath) + "." + "down_state" + ".events.mat",
        )
        filename_upstate = os.path.join(
            basepath,
            os.path.basename(basepath) + "." + "up_state" + ".events.mat",
        )
        if os.path.exists(filename_downstate) & os.path.exists(filename_upstate):
            down_state = loading.load_events(basepath=basepath, epoch_name="down_state")
            up_state = loading.load_events(basepath=basepath, epoch_name="up_state")
            return down_state, up_state

    # load brain states
    if nrem_epochs is None:
        state_dict = loading.load_SleepState_states(basepath)
        nrem_epochs = nel.EpochArray(state_dict["NREMstate"])

    if nrem_epochs.isempty:
        print(f"No NREM epochs found for {basepath}")
        return None, None

    # load spikes
    if st is None:
        st, _ = loading.load_spikes(basepath, brainRegion=region)

    # check if there are enough cells
    if st is None or st.isempty or st.data.shape[0] < min_cells:
        print(f"No spikes found for {basepath} {region}")
        return None, None

    # flatten spikes
    st = st[nrem_epochs].flatten()

    # bin and smooth
    bst = st.bin(ds=bin_size).smooth(sigma=smooth_sigma)

    def _detect_states_bimodal(
        bst_segment: nel.AnalogSignalArray, domain: nel.EpochArray
    ):
        """Detect down/up states using bimodal_thresh within a given domain."""

        # Get firing rate time series
        firing_rates = bst_segment.data.flatten()
        if firing_rates.size == 0:
            return None, None

        # Apply bimodal_thresh to the firing rates
        thresh, cross, bihist, diptest_result = bimodal_thresh(
            firing_rates, schmidt=schmidt, nboot=nboot, force_bimodal=force_bimodal
        )

        # If not bimodal or no threshold found
        if np.isnan(thresh):
            return None, None

        # Get bin centers (times)
        bin_centers = bst_segment.bin_centers

        # Extract downints and upints from cross
        downints = cross["downints"]  # indices into firing_rates array [n_intervals, 2]
        upints = cross["upints"]

        # Convert indices to time intervals using bin_centers
        if downints.size == 0:
            return None, None

        # Clip indices to valid range to prevent out-of-bounds access
        n_bins = len(bin_centers)
        downints_clipped = np.clip(downints.astype(int), 0, n_bins - 1)
        upints_clipped = (
            np.clip(upints.astype(int), 0, n_bins - 1)
            if upints.size > 0
            else np.array([], dtype=int).reshape(0, 2)
        )

        # Convert index intervals to time intervals
        # downints has shape [n, 2] where each row is [start_idx, end_idx]
        down_state_times = np.column_stack(
            [
                bin_centers[downints_clipped[:, 0]],
                bin_centers[downints_clipped[:, 1]],
            ]
        )
        down_state_epochs = nel.EpochArray(data=down_state_times, domain=domain)

        if upints.size == 0:
            # Generate up states as complement
            up_state_epochs = ~down_state_epochs
        else:
            up_state_times = np.column_stack(
                [
                    bin_centers[upints_clipped[:, 0]],
                    bin_centers[upints_clipped[:, 1]],
                ]
            )
            up_state_epochs = nel.EpochArray(data=up_state_times, domain=domain)

        return down_state_epochs, up_state_epochs

    if epoch_by_epoch:
        if beh_epochs is None:
            # Default to the session's sleep epochs, merging repeated epochs
            epoch_df = npy.io.load_epoch(basepath)
            epoch_df = npy.session.compress_repeated_epochs(epoch_df)
            epoch_df = epoch_df.query("environment == 'sleep'")
            beh_epochs = nel.EpochArray(epoch_df[["startTime", "stopTime"]].values)

        down_state_epochs = []
        up_state_epochs = []
        for ep in beh_epochs:
            # Restrict detection to NREM within this behavioral epoch
            domain = nrem_epochs & ep
            if domain.isempty:
                continue

            down_state_epochs_, up_state_epochs_ = _detect_states_bimodal(
                bst[domain], domain
            )
            if down_state_epochs_ is None or up_state_epochs_ is None:
                continue

            down_state_epochs.append(down_state_epochs_.data)
            up_state_epochs.append(up_state_epochs_.data)

        if len(down_state_epochs) == 0 or len(up_state_epochs) == 0:
            print(f"No down/up states found for {basepath}")
            return None, None

        down_state_epochs = nel.EpochArray(
            data=np.concatenate(down_state_epochs), domain=nrem_epochs
        )
        up_state_epochs = nel.EpochArray(
            data=np.concatenate(up_state_epochs), domain=nrem_epochs
        )
    else:
        down_state_epochs, up_state_epochs = _detect_states_bimodal(
            bst[nrem_epochs], nrem_epochs
        )
        if down_state_epochs is None or up_state_epochs is None:
            print(f"No down/up states found for {basepath}")
            return None, None

    # Save detected epochs to CellExplorer .mat event files
    if save_mat:
        epoch_to_mat(
            down_state_epochs,
            basepath,
            "down_state",
            "detect_up_down_states_bimodal_thresh",
        )
        epoch_to_mat(
            up_state_epochs,
            basepath,
            "up_state",
            "detect_up_down_states_bimodal_thresh",
        )

    # Optional figure: PSTHs of population firing rate aligned to
    # down- and up-state onsets
    if show_figure:
        from matplotlib import pyplot as plt

        plt.figure()
        ax = plt.gca()
        psth = npy.process.compute_psth(st.data, down_state_epochs.starts, n_bins=500)
        psth.columns = ["Down states"]
        psth.plot(ax=ax)

        psth = npy.process.compute_psth(st.data, up_state_epochs.starts, n_bins=500)
        psth.columns = ["Up states"]
        psth.plot(ax=ax)
        ax.legend(loc="upper right", frameon=False)
        ax.axvline(0, color="k", linestyle="--")

        ax.set_xlabel("Time from state transition (s)")
        ax.set_ylabel("Firing rate (Hz)")

    return down_state_epochs, up_state_epochs