neuro_py.spikes

BurstIndex_Royer_2012(autocorrs)

Calculate the burst index from Royer et al. (2012). The burst index ranges from -1 to 1, where: -1 indicates non-bursty behavior, and 1 indicates bursty behavior.

Parameters:

  • autocorrs (DataFrame, required): Autocorrelograms of spike trains, with time (in seconds) as index and correlation values as columns.

Returns:

  • list: List of burst indices, one per autocorrelogram column.

Notes

The burst index is calculated as: burst_idx = (peak - baseline) / max(peak, baseline)

  • Peak is calculated as the maximum of the autocorrelogram between 2-9 ms.
  • Baseline is calculated as the mean of the autocorrelogram between 40-50 ms.

Examples:

>>> burst_idx = BurstIndex_Royer_2012(autocorr_df)
Source code in neuro_py/spikes/spike_tools.py
def BurstIndex_Royer_2012(autocorrs: pd.DataFrame) -> list:
    """
    Calculate the burst index from Royer et al. (2012).
    The burst index ranges from -1 to 1, where:
    -1 indicates non-bursty behavior, and 1 indicates bursty behavior.

    Parameters
    ----------
    autocorrs : pd.DataFrame
        Autocorrelograms of spike trains, with time (in seconds) as index and
        correlation values as columns.

    Returns
    -------
    list
        List of burst indices for each autocorrelogram column.

    Notes
    -----
    The burst index is calculated as:
        burst_idx = (peak - baseline) / max(peak, baseline)

    - Peak is calculated as the maximum of the autocorrelogram between 2-9 ms.
    - Baseline is calculated as the mean of the autocorrelogram between 40-50 ms.

    Examples
    --------
    >>> burst_idx = BurstIndex_Royer_2012(autocorr_df)
    """
    # peak range 2 - 9 ms
    peak = autocorrs.loc[0.002:0.009].max()
    # baseline idx 40 - 50 ms
    baseline = autocorrs.loc[0.04:0.05].mean()

    burst_idx = []
    for p, b in zip(peak, baseline):
        if np.isnan(p) or np.isnan(b):
            burst_idx.append(np.nan)
            continue
        if p > b:
            burst_idx.append((p - b) / p)
        elif p < b:
            burst_idx.append((p - b) / b)
        else:
            burst_idx.append(np.nan)
    return burst_idx
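A minimal self-contained sketch of the computation on synthetic data, assuming 1 ms autocorrelogram bins (it mirrors the documented formula rather than importing neuro_py):

```python
import numpy as np
import pandas as pd

# Toy autocorrelograms: 1 ms bins spanning 0-59 ms, for two synthetic units.
bins = np.arange(0.0, 0.060, 0.001)
bursty = np.exp(-((bins - 0.004) ** 2) / (2 * 0.002**2))  # sharp peak near 4 ms
nonbursty = np.linspace(0.2, 1.0, len(bins))              # rises toward the baseline window
autocorrs = pd.DataFrame({"bursty": bursty, "nonbursty": nonbursty}, index=bins)

# Burst index as documented: (peak - baseline) / max(peak, baseline)
peak = autocorrs.loc[0.002:0.009].max()     # max over the 2-9 ms window
baseline = autocorrs.loc[0.04:0.05].mean()  # mean over the 40-50 ms window
burst_idx = (peak - baseline) / np.maximum(peak, baseline)
```

The bursty unit scores near 1 (early peak dwarfs the baseline), while the unit whose autocorrelogram keeps rising scores negative.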

get_spindices(data)

Spike timestamps and IDs from each spike train in a time-sorted DataFrame.

Parameters:

  • data (ndarray, required): Spike times for each spike train, where each element is an array of spike times for a neuron.

Returns:

  • DataFrame: Sorted spike times and the corresponding spikes' neuron IDs.

Examples:

>>> spike_trains = [np.array([0.1, 0.2, 0.4]), np.array([0.15, 0.35])]
>>> spikes = get_spindices(spike_trains)
Source code in neuro_py/spikes/spike_tools.py
def get_spindices(data: np.ndarray) -> pd.DataFrame:
    """
    Spike timestamps and IDs from each spike train in a time-sorted DataFrame.

    Parameters
    ----------
    data : np.ndarray
        Spike times for each spike train, where each element is an array of
        spike times for a neuron.

    Returns
    -------
    pd.DataFrame
        Sorted spike times and the corresponding spikes' neuron IDs

    Examples
    --------
    >>> spike_trains = [np.array([0.1, 0.2, 0.4]), np.array([0.15, 0.35])]
    >>> spikes = get_spindices(spike_trains)
    """
    spikes_id = np.repeat(np.arange(len(data)), [len(spk) for spk in data])

    spikes = pd.DataFrame({"spike_times": np.concatenate(data), "spike_id": spikes_id})
    spikes.sort_values("spike_times", inplace=True)
    return spikes
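A self-contained sketch of what the function produces, replicating the flatten-and-sort steps on the toy trains from the example above (no neuro_py import needed):

```python
import numpy as np
import pandas as pd

# Two toy spike trains (seconds): unit 0 fires three times, unit 1 twice.
spike_trains = [np.array([0.1, 0.2, 0.4]), np.array([0.15, 0.35])]

# Flatten into one time-sorted table, tagging each spike with its unit ID.
spike_id = np.repeat(np.arange(len(spike_trains)),
                     [len(spk) for spk in spike_trains])
spikes = pd.DataFrame({"spike_times": np.concatenate(spike_trains),
                       "spike_id": spike_id}).sort_values("spike_times")
```

The resulting table interleaves the two units in time order, with `spike_id` recording which unit fired each spike.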

select_burst_spikes(spikes, mode='bursts', isiBursts=0.006, isiSpikes=0.02)

Discriminate bursts versus single spikes based on inter-spike intervals.

Parameters:

  • spikes (ndarray, required): Array of spike times.
  • mode (str, default 'bursts'): Either 'bursts' or 'single'.
  • isiBursts (float, default 0.006): Maximum inter-spike interval for bursts, in seconds.
  • isiSpikes (float, default 0.02): Minimum inter-spike interval for single spikes, in seconds.

Returns:

  • ndarray: A boolean array indicating for each spike whether it matches the criterion.

Notes

Adapted from: http://fmatoolbox.sourceforge.net/Contents/FMAToolbox/Analyses/SelectSpikes.html

Source code in neuro_py/spikes/spike_tools.py
def select_burst_spikes(
    spikes: np.ndarray,
    mode: str = "bursts",
    isiBursts: float = 0.006,
    isiSpikes: float = 0.020,
) -> np.ndarray:
    """
    Discriminate bursts versus single spikes based on inter-spike intervals.

    Parameters
    ----------
    spikes : np.ndarray
        Array of spike times.
    mode : str, optional
        Either 'bursts' (default) or 'single'.
    isiBursts : float, optional
        Maximum inter-spike interval for bursts (default = 0.006 seconds).
    isiSpikes : float, optional
        Minimum inter-spike interval for single spikes (default = 0.020 seconds).

    Returns
    -------
    np.ndarray
        A boolean array indicating for each spike whether it matches the criterion.

    Notes
    -----
    Adapted from: http://fmatoolbox.sourceforge.net/Contents/FMAToolbox/Analyses/SelectSpikes.html
    """

    dt = np.diff(spikes)

    if mode == "bursts":
        b = dt < isiBursts
        # either next or previous isi < threshold
        selected = np.insert(b, 0, False, axis=0) | np.append(b, False)
    else:
        s = dt > isiSpikes
        # either next or previous isi > threshold
        selected = np.insert(s, 0, False, axis=0) & np.append(s, False)

    return selected
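A self-contained sketch of the `mode="bursts"` branch on a toy train (mirroring the ISI logic above, not calling the function itself):

```python
import numpy as np

# A 3-spike burst (ISIs of 4 and 5 ms) followed by two isolated spikes.
spikes = np.array([0.100, 0.104, 0.109, 0.300, 0.400])

isi_bursts = 0.006              # max ISI inside a burst (seconds)
dt = np.diff(spikes)            # inter-spike intervals
b = dt < isi_bursts
# A spike belongs to a burst if either its preceding or its following
# ISI is below threshold, hence the OR of the shifted ISI masks.
in_burst = np.insert(b, 0, False) | np.append(b, False)
```

Only the first three spikes are flagged: each has at least one adjacent ISI under 6 ms, while the last two spikes are isolated.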

spindices_to_ndarray(spikes, spike_id=None)

Convert spike times and spike IDs from a DataFrame into a list of arrays, where each array contains the spike times for a given spike train.

Parameters:

  • spikes (DataFrame, required): DataFrame containing 'spike_times' and 'spike_id' columns, sorted by 'spike_times'.
  • spike_id (list or ndarray, default None): List or array of spike IDs to search for in the DataFrame. If None, all spike IDs are used.

Returns:

  • List[ndarray]: A list of arrays, each containing the spike times for a corresponding spike train.

Examples:

>>> spike_trains = spindices_to_ndarray(spikes_df, spike_id=[0, 1, 2])
Source code in neuro_py/spikes/spike_tools.py
def spindices_to_ndarray(
    spikes: pd.DataFrame, spike_id: Union[List[int], np.ndarray, None] = None
) -> List[np.ndarray]:
    """
    Convert spike times and spike IDs from a DataFrame into a list of arrays,
    where each array contains the spike times for a given spike train.

    Parameters
    ----------
    spikes : pd.DataFrame
        DataFrame containing 'spike_times' and 'spike_id' columns, sorted by
        'spike_times'.
    spike_id : list or np.ndarray, optional
        List or array of spike IDs to search for in the DataFrame. If None, all
        spike IDs are used.

    Returns
    -------
    List[np.ndarray]
        A list of arrays, each containing the spike times for a corresponding
        spike train.

    Examples
    --------
    >>> spike_trains = spindices_to_ndarray(spikes_df, spike_id=[0, 1, 2])
    """
    if spike_id is None:
        spike_id = spikes.spike_id.unique()
    data = [spikes[spikes.spike_id == spk_i].spike_times.values for spk_i in spike_id]
    return data
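A self-contained round-trip sketch: build a spindices-style table for two units (as `get_spindices` would), then split it back into per-unit spike-time arrays the way this function does:

```python
import numpy as np
import pandas as pd

# Spindices table for two toy units, sorted by spike time.
trains = [np.array([0.1, 0.2, 0.4]), np.array([0.15, 0.35])]
spikes = pd.DataFrame({
    "spike_times": np.concatenate(trains),
    "spike_id": np.repeat(np.arange(len(trains)), [len(t) for t in trains]),
}).sort_values("spike_times")

# Group the flat table back into one spike-time array per unit.
recovered = [spikes[spikes.spike_id == i].spike_times.values
             for i in spikes.spike_id.unique()]
```

The recovered arrays match the original trains, so `get_spindices` and `spindices_to_ndarray` are inverses of each other on well-formed input.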