An Interacting Neuronal Network with Inhibition: theoretical analysis and perfect simulation

We study a purely inhibitory neural network model in which neurons are represented by their state of inhibition. The study we present here is partially based on the work of Cottrell \cite{Cot} and Fricker et al. \cite{FRST}. The spiking rate of a neuron depends only on its state of inhibition. When a neuron spikes, its state is replaced by a random new value, independently of everything else, and the inhibition states of the other neurons increase by a positive amount. Using the Perron-Frobenius theorem, we show the existence of a Lyapunov function for the process. Furthermore, we prove a local Doeblin condition which implies the existence of an invariant measure for the process. Finally, we extend our model to the case where the neurons are indexed by $ \mathbb{Z}. $ We construct a perfect simulation algorithm to show the recurrence of the process under certain conditions. To do this, we rely on the classical contour technique used in the study of contact processes, and assuming that the spiking rate takes values in the interval $[ \beta_* , \beta^* ], $ we show that there is a critical threshold for the ratio $ \delta= \frac{\beta_*}{\beta^* - \beta_*}$ above which the process is ergodic. \\ \textbf{Keywords}: spiking rate, interacting neurons, perfect simulation algorithm, classical contour technique.


Introduction
For the operation of a neural network, neurons excite and/or inhibit each other. Here, we study a model of a purely inhibitory neural network where neurons are represented by their inhibitory state. The study we present is partially based on the work of Cottrell [6]. Her model consists of $N$ interacting neurons described by their state of inhibition. In her work, a neuron spikes when its state reaches the value 0. When a neuron spikes, the state of inhibition of every other neuron increases by a non-negative deterministic constant $\theta$. The spiking neuron immediately receives a random inhibition, independently of everything else. In Cottrell's work the state of inhibition is simply the waiting time until the next spike.
In the present work we generalize Cottrell's model in several natural ways. In Cottrell's model, the next spiking time in the neural net is deterministic, and we lift this assumption: a random spiking time is more realistic than a deterministic one, since stochasticity is present throughout brain functioning. Secondly, to allow for more general models, we let the state of inhibition decrease at a general rate in between the successive spikes of the network, while in Cottrell's work the drift of the flow is equal to $-1$.
In the first part of this paper, we consider systems of $N$ interacting neurons in which any neuron can spike at any time. The spiking neuron takes a new random state of inhibition, and the others increase their inhibitory state by a deterministic quantity that we call the inhibition weight. This weight depends on the distance between the spiking neuron and the receiving neuron, so that a neuron located far away from the spiking neuron is less affected by the spike. The model thus presented extends Cottrell [6] and Fricker et al. [8] in two ways: the spiking time is no longer deterministic but random, and the drift of the process is no longer constant.
Firstly, we show the existence of a Lyapunov function that allows us to formulate a sufficient condition for non-evanescence of the process in the sense of Meyn and Tweedie [14], i.e. a condition ensuring that the process does not escape to infinity. To do so, we introduce a reproduction matrix $H$ and we suppose that the spectral radius of $H$ is strictly less than 1. The eigenvector associated with the spectral radius of $H$ allows us to construct a Lyapunov function for the process.
Secondly, we study the recurrence of the process relying on Doeblin conditions which we establish for the embedded chain sampled at the jump times, and we show the existence of an invariant probability measure for the process. We do this in the case where the distribution of the new states is absolutely continuous and the jump rate is bounded.
In a second part, we consider the case of an infinite number of neurons indexed by $\mathbb{Z}$ (see Comets et al. [4], Galves and Löcherbach [10], Galves et al. [11] and André [1]). The mean field behavior of such models has been studied by Cormier et al. [5] and Robert and Touboul [15], who were also interested in the stationary distributions of these processes. Ferrari et al. [7], considering an infinite system of interacting point processes with memory of variable length, investigated the conditions for the existence of a phase transition using the classical contour technique, based on the classical work of Griffeath [13] on the contact process. André [1] proves that the model described in [7] presents a metastable behavior, again relying on the contour technique of [13]. Following the ideas of Ferrari et al. [7], Galves et al. [9] and Griffeath [13], we construct a perfect simulation algorithm that allows us to show the recurrence of the process. Assuming that the spiking rate takes values in the interval $[\beta_*, \beta^*]$, we show that there is a critical threshold for the ratio $\delta = \frac{\beta_*}{\beta^* - \beta_*}$ above which the process is ergodic.
This paper is organized as follows. In Section 2 we describe the model and study the law of the first jump time of the process. The Foster-Lyapunov and Doeblin conditions are discussed in Section 3, where we obtain non-evanescence criteria and show the existence of a unique invariant probability measure of the process; this is our first main result. Finally, in Section 4, we present a perfect simulation algorithm and we simulate the law of the state of inhibition of a given neuron in its invariant regime.

Description of the model
In this paper we consider $N$ interacting neurons. For all $i \in \{1, \ldots, N\}$, $X^{i,N}_t$ describes the state of inhibition of neuron $i$ at time $t$. When neuron $i \in \{1, \ldots, N\}$ spikes,
• the current state of inhibition of neuron $i$ is replaced by a new value $Y^i$, drawn independently of everything else with distribution $F_i$; $Y^i$ is the new position of the jumping particle right after the jump;
• the state of inhibition of any neuron $j \neq i$ is increased by a positive value $W_{i \to j}$ at time $t$.
In between successive jumps of the system, each neuron $i$ follows the deterministic dynamic
$$\dot x^i_t = -\alpha_i(x^i_t), \qquad (2.1)$$
where $\alpha_i$ is positive on $(0, \infty)$, locally Lipschitz on $[0, \infty)$ and $\alpha_i(0) = 0$, so that the process cannot enter negative values. Let $\beta_i$ be a continuous, positive and decreasing rate function on $[0, \infty)$. We take $\beta_i$ decreasing so that the larger $x^i_t$ is, the lower the probability of spiking, and the smaller $x^i_t$ is, the higher the probability of spiking. $x^i_t(x^i)$ denotes the solution of equation (2.1) at time $t$ starting from $x^i$ at time 0.
We are thus led to consider the following piecewise deterministic Markov process (PDMP). For $i \in \{1, \ldots, N\}$, the state of inhibition of neuron $i$ at time $t$ is given by
$$X^{i,N}_t = x^i - \int_0^t \alpha_i(X^{i,N}_s)\,ds + \int_{[0,t] \times \mathbb{R}_+ \times \mathbb{R}_+} \mathbf{1}_{\{r \le \beta_i(X^{i,N}_{s-})\}} \big(y - X^{i,N}_{s-}\big)\, M^i(ds, dr, dy) + \sum_{j \neq i} \int_{[0,t] \times \mathbb{R}_+ \times \mathbb{R}_+} \mathbf{1}_{\{r \le \beta_j(X^{j,N}_{s-})\}}\, W_{j \to i}\, M^j(ds, dr, dy), \qquad (2.2)$$
where $M^i$ is a Poisson random measure with intensity $ds\,dr\,F_i(dy)$, and the $M^i$, $1 \le i \le N$, are independent. This model extends that of Goncalves et al. [12] to the multidimensional case.
Remark 2.1. For all $i \in \{1, \ldots, N\}$, $X^{i,N}_t$ can be interpreted as the inhibition state of neuron $i$ at time $t$, and $W_{j \to i}$ as the inhibition weight of neuron $j$ on neuron $i$. When $W_{i \to j} \le 0$, we say that neuron $i$ is excitatory for neuron $j$, and when $W_{i \to j} \ge 0$, we say that neuron $i$ is inhibitory for neuron $j$. In this paper we are interested in the case where neuron $i$ is inhibitory for neuron $j$, i.e. $W_{i \to j} \ge 0$.
Remark 2.2. Formula (2.2) is well-posed in the sense that there is no explosion of the process. Since $\beta_i(X^{i,N}_s) \le \beta_i(0)$ for all $i$, we deduce that $\int_0^t \beta_i(X^{i,N}_s)\,ds < \infty$, whence the non-explosion: almost surely, the process has only a finite number of jumps within each finite time interval.
The infinitesimal generator associated with this model is given by
$$G_N V(x) = -\sum_{i=1}^N \alpha_i(x_i)\, \frac{\partial V}{\partial x_i}(x) + \sum_{i=1}^N \beta_i(x_i) \int_0^\infty \Big[ V\big(x + (y - x_i) e_i + \sum_{j \neq i} W_{i \to j}\, e_j\big) - V(x) \Big]\, F_i(dy),$$
where $V$ is a smooth function and $e_i$ is the $i$-th unit vector. In other words, at each jump of the process a single neuron spikes: if it is neuron $i$, its state is replaced by $Y^i$ and every other neuron $j \neq i$ receives the inhibition weight $W_{i \to j} \ge 0$.

First jump time
Let $N^i_t$ be the counting process of the successive jumps of neuron $i$, that is,
$$N^i_t = \int_{[0,t] \times \mathbb{R}_+ \times \mathbb{R}_+} \mathbf{1}_{\{r \le \beta_i(X^{i,N}_{s-})\}}\, M^i(ds, dr, dy),$$
and let $S^i_1$ be the first spike time of neuron $i$, so that $S^i_1 = \inf\{t > 0 : N^i_t = 1\}$. Let $S_1$ be the first jump time of the state process $(X_t)$, that is, $S_1 = \min_i S^i_1$. For all $i$, let $t_0(x^i)$ be the time for neuron $i$ to hit 0 starting from $x^i$.
Proposition 2.3. For $t < \min_i t_0(x^i)$,
$$P(S_1 > t) = \exp\Big( -\sum_{i=1}^N \int_{x^i_t(x^i)}^{x^i} \frac{\beta_i(z)}{\alpha_i(z)}\, dz \Big). \qquad (2.4)$$
Proof. For all $t > 0$, $P(S_1 > t) = \exp\big(-\sum_{i=1}^N \int_0^t \beta_i(x^i_s(x^i))\,ds\big)$. Moreover, if $t < \min_i t_0(x^i)$, the change of variables $z = x^i_s(x^i)$, together with $dz = \dot x^i_s(x^i)\,ds = -\alpha_i(z)\,ds$, gives the result. □
Proposition 2.5.

With a positive probability, the first jump time is infinite if for all $i$, $t_0(x^i) = \infty$ and $\int_0^{x^i} \frac{\beta_i(z)}{\alpha_i(z)}\, dz < \infty$.
Proof. If for some $i$ the time $t_0(x^i)$ for neuron $i$ to hit 0 starting from $x^i$ is finite, then after time $t_0(x^i)$ neuron $i$ spikes at the constant rate $\beta_i(0) > 0$, so that $S_1 < \infty$ almost surely. By definition of $t_0(x^i)$, it is therefore sufficient that this does not hold and that for all $i$ the above integral is finite; in this case, by (2.4), $P(S_1 = \infty) = \lim_{t \to \infty} P(S_1 > t) > 0$. □
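The change-of-variables identity used in the proof of Proposition 2.3 (integrating the rate along the flow equals integrating $\beta_i/\alpha_i$ in space) can be checked numerically. The sketch below uses the illustrative choices $\alpha_i(x) = x$ and $\beta_i(x) = e^{-x}$, which are assumptions made for the example and not taken from the paper.

```python
import math

# Hypothetical choices (not the paper's): alpha(x) = x, beta(x) = exp(-x).
def alpha(x): return x
def beta(x): return math.exp(-x)

def flow(x0, t):
    # Explicit solution of dx/dt = -alpha(x) = -x starting from x0.
    return x0 * math.exp(-t)

def survival_direct(x0s, t, n=50000):
    # P(S_1 > t) = exp(-sum_i int_0^t beta(x_s^i) ds), left Riemann sum.
    dt = t / n
    total = 0.0
    for x0 in x0s:
        total += sum(beta(flow(x0, k * dt)) for k in range(n)) * dt
    return math.exp(-total)

def survival_time_change(x0s, t, n=50000):
    # Same quantity via z = x_s^i: int_{x_t}^{x_0} beta(z)/alpha(z) dz,
    # midpoint rule (the formula (2.4) of Proposition 2.3).
    total = 0.0
    for x0 in x0s:
        a, b = flow(x0, t), x0
        dz = (b - a) / n
        total += sum(beta(a + (k + 0.5) * dz) / alpha(a + (k + 0.5) * dz)
                     for k in range(n)) * dz
    return math.exp(-total)

print(survival_direct([1.0, 2.0], 1.0), survival_time_change([1.0, 2.0], 1.0))
```

Both evaluations of $P(S_1 > t)$ agree up to discretization error, which is the content of the change of variables in the proof.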
We finish this section with a simulation of the process starting from a fixed initial configuration $(x^1_0, \ldots, x^N_0)$. For this, we assume that for all $i$ the jump rate $\beta_i(x^i)$ is bounded and bounded below, that is, $0 < \beta_* \le \beta_i(x) \le \beta^* < \infty$ for all $x \ge 0$. The following variables will be used to write our simulation algorithm.
• $T = (T_1, T_2, \ldots)$, where $T_1 < T_2 < \cdots$ are the times of the successive proposed jumps for the whole system, to be accepted or rejected.
• $L$ is the label associated with $T$; it is either {sure} or {possible}.
• $P = (P_1, \ldots, P_N)$ is the vector of states of the $N$ neurons at a fixed instant.
• $I$ is the index of the neuron which spikes.
(4) We update the vector P and start the procedure again from (1).
We stop the procedure after a fixed finite number n of iterations.
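The accept/reject scheme behind the algorithm above can be sketched as follows. The concrete choices $\alpha_i(x) = x$, $\beta_i(x) = 1 + e^{-x}$ (so $\beta_* = 1$, $\beta^* = 2$), $F_i = \mathrm{Exp}(1)$ and a constant inhibition weight $\theta$ are illustrative assumptions, not the paper's:

```python
import math, random

# Illustrative parameters (assumptions, not from the paper).
BETA_MAX = 2.0   # beta^*, an upper bound on all jump rates
THETA = 0.5      # constant inhibition weight W_{i -> j}

def beta(x):
    # Bounded, decreasing rate: 1 <= beta(x) <= 2.
    return 1.0 + math.exp(-x)

def simulate(x0, n_events, rng):
    x = list(x0)
    N = len(x)
    t, jumps = 0.0, 0
    for _ in range(n_events):
        # Propose the next event time at the maximal total rate N * beta^*.
        dt = rng.expovariate(N * BETA_MAX)
        t += dt
        x = [xi * math.exp(-dt) for xi in x]      # flow dx/dt = -x in between
        i = rng.randrange(N)                      # candidate spiking neuron
        if rng.random() <= beta(x[i]) / BETA_MAX: # accept (thinning)
            y = rng.expovariate(1.0)              # new state Y ~ F = Exp(1)
            x = [xj + THETA for xj in x]          # inhibition weight to others
            x[i] = y
            jumps += 1
    return t, x, jumps

rng = random.Random(42)
t, x, jumps = simulate([1.0, 2.0], 200, rng)
print(jumps, [round(v, 3) for v in x])
```

Proposals accepted with probability $\beta_i(x)/\beta^*$ correspond to actual spikes; rejected proposals leave the state unchanged, exactly as in a thinning construction of the PDMP.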
Figure 2.1 shows typical trajectories of $X^{i,N}_t$ for $N = 2$ neurons. In both panels, $N = 2$ neurons, $n = 50$ iterations and $F_i \sim \mathrm{Exp}(1)$ for $i \in \{1, 2\}$. There are more jumps in Figure 2.1 (a) than in Figure 2.1 (b). In both panels, neuron $i = 1$ is plotted in blue and neuron $i = 2$ in red.

Foster-Lyapunov and Doeblin conditions
In this section, we give conditions for the non-evanescence of the process and show the existence of an invariant probability measure. We will see that under a spectral condition the process is ergodic, while otherwise it can be transient. In what follows, $K$ is a fixed compact set and we suppose that $\|\gamma_i\|_{\infty, K^c} < \infty$ for all $i$. We define the matrix of inhibition weights $W$ by $W_{ij} := W_{j \to i}$ for $i \neq j$ and $W_{ii} = 0$. We further assume that the matrix $W$ is irreducible in the sense that there exists an integer $p > 0$ such that $W^p$ has only positive entries. We introduce the reproduction matrix $H$, which is then also irreducible.
Suppose that $\rho(H) < 1$, where $\rho(H)$ is the largest eigenvalue of $H$, that is, its spectral radius. By the Perron-Frobenius theorem, there exists a left eigenvector $\kappa$ with positive entries associated to this eigenvalue $\rho$, that is, for all $i$,
$$\sum_j \kappa_j H_{ji} = \rho\, \kappa_i.$$
On the other hand, put
$$V(x) = \sum_{i=1}^N \kappa_i x_i \qquad (3.1)$$
and recall the expression of the infinitesimal generator $G_N$ given above. Replacing $V$ by its expression in $G_N V(x)$ yields a drift estimate for all $x \in K^c$. This computation leads us to introduce the following
Assumption 3.1. Let $\alpha > 0$. For all $i$, there exists $r_i$ such that $\alpha_i(x_i) \ge \alpha x_i$ for all $x_i \ge r_i$.
Corollary 3.2. Under Assumption 3.1, for all $x \in K^c$ such that $x_i \ge r_i$ for all $i$, we have the drift inequality $G_N V(x) \le -c$, where $c$ is a positive constant.
Definition 3.3. We call the process non-evanescent if there exists a compact set $K$ such that for all $x$, $P_x$-almost surely, $\limsup_{t \to \infty} \mathbf{1}_K(X_t) = 1$.
Theorem 3.4. Under Assumption 3.1, the process is non-evanescent.
Proof. $V(x)$ defined in (3.1) above is a norm-like function because the eigenvector $\kappa$ is positive. Indeed, we call $V : \mathbb{R}^N_+ \to \mathbb{R}$ a norm-like function if $V$ is a positive, measurable function and $V(x) \to \infty$ as $|x| \to \infty$. Condition (CD1) of Meyn and Tweedie [14] then implies the result. □
Example 3.5 (Mean-field interaction). Suppose we have $N$ neurons, that $\|\gamma_i\|_{\infty, K^c} < \infty$, and that $F_i = F$ and $W_{j \to i} = \theta$ for all $i, j$, for some fixed compact set $K$. In this case, $\rho(H) = \sup_{x \in K^c} \gamma(x) (E(Y) + (N-1)\theta)$ and the associated eigenvector is $\kappa = (1, \ldots, 1)$. The condition $\rho(H) < 1$ is therefore equivalent to $\sup_{x \in K^c} \gamma(x) < 1/(E(Y) + (N-1)\theta)$.
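The mean-field computation of Example 3.5 can be checked numerically. The sketch below assumes a reproduction matrix with constant diagonal entry $\gamma E(Y)$ and constant off-diagonal entry $\gamma \theta$; the numerical values of $N$, $\gamma$, $E(Y)$ and $\theta$ are illustrative, not the paper's.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper).
N, gamma, EY, theta = 5, 0.1, 1.0, 0.5

# Mean-field reproduction matrix: gamma*E(Y) on the diagonal,
# gamma*theta off the diagonal.
H = gamma * (EY * np.eye(N) + theta * (np.ones((N, N)) - np.eye(N)))

rho = max(abs(np.linalg.eigvals(H)))          # spectral radius
claimed = gamma * (EY + (N - 1) * theta)      # value stated in Example 3.5

print(rho, claimed)
# kappa = (1, ..., 1) is a left eigenvector: H^T kappa = rho * kappa.
kappa = np.ones(N)
print(np.allclose(H.T @ kappa, claimed * kappa))
```

With these values $\rho(H) = 0.1\,(1 + 4 \cdot 0.5) = 0.3 < 1$, so the spectral condition of the example holds.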
Example 3.6 (Torus). Suppose we have $N \ge 3$ neurons such that each neuron interacts with its two nearest neighbors (its left and right neighbors). Neuron 1 interacts with neurons 2 and $N$, and neuron $N$ interacts with neurons $N-1$ and 1, so that the neurons form a torus.

Doeblin condition
Theorem 3.8. Suppose moreover that $F_i(dy)$ is absolutely continuous and $\|\beta_i\|_\infty < \infty$ for all $i$. Then there exist $d \in (0,1)$ and a probability measure $\nu$ such that
$$Q^N(x, dy) \ge d\, \mathbf{1}_K(x)\, \nu(dy),$$
where $Q$ is the transition operator of the embedded chain $Z_n = X_{S_n}$ and $Q^N$ is its $N$-th iterate.
To prove the above result we fix a deterministic sequence of times $0 < s_1 < s_2 < \cdots < s_N$. In the sequel we work on the event $\{S_1 = s_1, \ldots, S_N = s_N,\ I_1 = 1, \ldots, I_N = N,\ Y^1 = y_1, \ldots, Y^N = y_N\}$. This means that the jumps are ordered so that neuron 1 jumps before neuron 2, and so on. Let $y = (y_1, \ldots, y_N)$, where $y_i$ is the new state of inhibition of neuron $i$ after its spike.
Let $t_k = s_k - s_{k-1}$, $1 \le k \le N$, be the inter-jump times of the $N$ neurons. Conditionally on this event, let $\Psi_{s_N}$ be the vector of states of the process at time $s_N$. We can define $\Psi_{s_N}$ as a function of the states $y_1, \ldots, y_N$, $\Psi_{s_N} : \mathbb{R}^N \to \mathbb{R}^N$, where for all $l \neq k$,
$$\psi^{k,l}_s(u) = x^k_s(u) + W_{l \to k}, \qquad (3.4)$$
and $x^k_s(u)$ denotes the solution of the deterministic dynamic $\dot x^k_s = -\alpha_k(x^k_s)$, $x^k_0 = u$.
Remark 3.9. By definition, $\Psi^k_{s_N}(y)$ depends only on $y_k$. Therefore, for all $i \neq j$, $\partial \Psi^i_{s_N} / \partial y_j = 0$.
Proof of Proposition 3.10. Let $J_{\Psi_{s_N}}(y)$ be the Jacobian matrix of $\Psi_{s_N}(y)$. By Remark 3.9, $J_{\Psi_{s_N}}(y)$ is diagonal, so that
$$\det(J_{\Psi_{s_N}}(y)) = \prod_{j=1}^N \frac{\partial \Psi^j_{s_N}}{\partial y_j}(y),$$
which is non-zero if and only if each factor is non-zero. Computing these factors for all $1 \le j \le N-1$ shows that $|\det(J_{\Psi_{s_N}}(y))| \neq 0$, so that $\Psi_{s_N}$ is a local diffeomorphism. Localizing, we may therefore conclude that for each $y$ there exists an open neighborhood $B$ of $y$ such that $\Psi_{s_N} : B \to \Psi_{s_N}(B)$ is a diffeomorphism. □
Proof of Theorem 3.8. Fix $\varepsilon > 0$. We work on the event $E$ on which, in particular, the index $I_n$ of the $n$-th jumping neuron equals $n$ for all $n \in \{1, \ldots, N\}$.
Given that the first jump takes place at time $S_1 = s_1$, the probability that the index $I_1$ of the first jump equals 1 is
$$P(I_1 = 1 \mid S_1 = s_1) = \frac{\beta_1(x^1_{s_1}(x^1))}{\sum_{j=1}^N \beta_j(x^j_{s_1}(x^j))}.$$
We want to compute $P(I_1 = 1, \ldots, I_N = N \mid S_1 = s_1, \ldots, S_N = s_N)$. To obtain a compact formula, using formula (3.4), we introduce the quantities giving the state of neuron $k$ at time $S_j$, depending on whether neuron $k$ jumped before or after time $S_j$, and we write $\Psi^k_{s_j-}$ for the state of neuron $k$ just before the $j$-th jump. As long as neuron $k$ has not yet jumped, it receives a quantity $W_{j \to k}$, for every $j \neq k$, from each neuron that jumped before it. So, knowing all the jump times of the other neurons, we obtain, for any Borel subset $B$ of $\mathbb{R}^N$, a lower bound of the form (3.6) on the event $E$. Following the arguments of Benaïm et al. [3], for any $t^* \le N\varepsilon$ there exist a ball $B_r(t^*)$ of radius $r$ centered at $t^*$ and an open set $I \subset \mathbb{R}^N$ such that the restriction of $\Psi_{s_N}$ to a suitable set $W_{s_N}$ is a diffeomorphism (see Benaïm et al. [3, Lemma 6.2]). This allows us to apply the change-of-variables theorem in inequality (3.6). The factor $\alpha'_j(x^j_s(\Psi^j_{s_{N-i}}(y)))$ is bounded above since $\alpha_j$ is globally Lipschitz. Then, for all $1 \le j \le N-1$, we obtain that for all $y \in W_{s_N}$, $c\, |\det(J_{\Psi_{s_N}}(y))|^{-1} \ge c' > 0$, and inequality (3.6) yields the minorization of Theorem 3.8.
Proof of Corollary 3.11. When $\beta_k$ is bounded and bounded below away from zero, the lower bound obtained in Theorem 3.8 holds on the whole state space, that is, without the indicator $\mathbf{1}_K$. This gives the global lower bound $Q^N(x, dy) \ge d\, \nu(dy)$ and thus the uniform ergodicity of the process. □
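The conclusion of Corollary 3.11 rests on the standard fact that a Doeblin minorization $Q(x, \cdot) \ge d\, \nu(\cdot)$ forces total-variation contraction at rate $1 - d$. A toy finite-state illustration (the kernel below is a made-up example, not the paper's chain):

```python
import numpy as np

# A 3-state stochastic kernel satisfying a Doeblin minorization.
Q = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

# Largest d with Q(x,.) >= d * nu(.): take nu proportional to column minima.
col_min = Q.min(axis=0)
d = col_min.sum()                      # here d = 0.2 + 0.2 + 0.2 = 0.6

mu = np.array([1.0, 0.0, 0.0])         # two different initial laws
pi = np.array([0.0, 0.0, 1.0])
for n in range(1, 8):
    mu, pi = mu @ Q, pi @ Q
    tv = 0.5 * np.abs(mu - pi).sum()   # total-variation distance
    assert tv <= (1 - d) ** n + 1e-12  # geometric decay (Doeblin bound)
print("Doeblin constant d =", d)
```

The assertion inside the loop is the uniform-ergodicity bound: whatever the two starting laws, their images under $Q^n$ are within $(1-d)^n$ in total variation.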

Perfect simulation
In this section, we consider a framework with infinitely many neurons indexed by $\mathbb{Z}$. We want to build a perfect simulation algorithm in order to show, in another way, the recurrence of our process under certain conditions. Let $V_{\cdot \to i} = \{j : W_{j \to i} \neq 0\}$ and $V_{i \to \cdot} = \{j : W_{i \to j} \neq 0\}$ be the incoming and outgoing neighborhoods of neuron $i$ (see Comets et al. [4] and Galves and Löcherbach [10]). We consider the case where each neuron has a finite number of neighbors. We assume throughout this section that for all $i$ the jump rate $\beta_i$ takes values in $[\beta_*, \beta^*]$, with $0 < \beta_* < \beta^* < \infty$. The following variables will be used to write the perfect simulation algorithm:
• $T$ is the time vector;
• $P$ is the matrix of states, where each row represents the states of the neurons at a fixed instant;
• $I$ is the vector of indices of the spiking neurons.
We fix a neuron $i \in \mathbb{Z}$ and, in what follows, we are interested in the state of $i$ at time 0 in the stationary regime, that is, assuming that the process starts from $-\infty$. To do so, we explore the past in order to determine all the sets of indices and times which affect the value of neuron $i$ at time 0.
To explain what we mean by this, consider the following example where the interactions are between nearest neighbors. In the figure, the red dots represent possible jumps and the blue stars represent sure jumps; sure and possible jumps are as in Algorithm 2.6. We fix a neuron $i \in \mathbb{Z}$ at time 0 and say that the clan of ancestors of neuron $i$ is initially reduced to neuron $i$ itself. Assume that the space of neurons is reduced to $i-1, i, i+1$. At time $T_1$, neuron $i+1$ makes a possible jump: we record the time $T_1$ and add neuron $i+1$ to the clan of ancestors of neuron $i$. At time $T_2$, neuron $i$ makes a possible jump: as neuron $i$ is already in the clan, the clan remains unchanged and we record the time $T_2$. At time $T_3$, neuron $i-1$ makes a sure jump: we record the time $T_3$ and the neuron $i-1$, but the clan remains unchanged. At time $T_4$, neuron $i+1$ makes a sure jump: as neuron $i+1$ belongs to the clan, we remove it, and only neuron $i$ remains in the clan. At time $T_5$, neuron $i$ makes a sure jump: as neuron $i$ belongs to the clan, we remove it and the clan becomes empty. Our algorithm stops the first time the clan becomes empty. In the following algorithm we work in the general case.
The set of neurons thus constructed is called the ancestor clan of neuron $i$ (see Galves and Löcherbach [10], Galves et al. [11]). The clan of ancestors is a process that evolves in time by successive jumps. We start with $C^i_0 = \{i\}$ and define below the updates of this process at the jump times. More precisely, we do the following: Algorithm 4.1 (Backward procedure).
(1) We simulate, for all $l \in \mathbb{Z}$, two Poisson processes $N^{l,s}_t$ and $N^{l,p}_t$ with respective intensities $\beta_*$ and $\beta^* - \beta_*$. The jump times of $N^{l,s}_t$ and $N^{l,p}_t$ after $n$ jumps are denoted $T^{l,s}_n$ and $T^{l,p}_n$ respectively.
(2) Let $i \in \mathbb{Z}$ be fixed and $T_1 = \inf\{T^{j,r}_1 > 0 : j \in V_{\cdot \to i} \cup \{i\},\ r \in \{p, s\}\}$, where $V_{\cdot \to i}$ is the incoming neighborhood of $i$.
If $T_1 = T^{i,s}_1$, we set $C^i_{T_1} = \emptyset$ and stop the algorithm. In this case we set $I_1 = i$.
(3) Suppose $T_n$ is the $n$-th jump time of $C^i_{T_n}$. If the next event is a possible jump of a neuron $j$ in the incoming neighborhood of the clan, we set $I_{n+1} = j$ and add $j$ to the clan; if it is a sure jump of a neuron of the clan, we remove that neuron. We stop the procedure at time $T^i_{stop} = \inf\{t > 0 : C^i_t = \emptyset\}$. To guarantee that the algorithm stops, it is necessary to find a criterion ensuring that $T^i_{stop} < \infty$; this is done in Theorem 4.7 below. The above algorithm is called the backward procedure.
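A minimal sketch of the backward procedure in the nearest-neighbor case. The clan-update rule (possible jumps of boundary neurons add a neuron, sure jumps of clan members remove one) follows the walkthrough above; the rate values passed to the function are illustrative assumptions.

```python
import random

# Backward (clan-of-ancestors) procedure on Z with nearest-neighbor
# interactions.  beta_low plays the role of beta_*, beta_high of beta^*.
def backward(beta_low, beta_high, rng, max_events=10**5):
    clan = {0}                  # start from the clan of neuron i = 0
    t = 0.0
    for _ in range(max_events):
        boundary = {j + e for j in clan for e in (-1, 1)} - clan
        r_sure = beta_low * len(clan)                    # sure jumps remove
        r_poss = (beta_high - beta_low) * len(boundary)  # possible jumps add
        t += rng.expovariate(r_sure + r_poss)
        if rng.random() < r_sure / (r_sure + r_poss):
            clan.remove(rng.choice(sorted(clan)))
        else:
            clan.add(rng.choice(sorted(boundary)))
        if not clan:
            return t            # T_stop: first time the clan is empty
    return float("inf")

rng = random.Random(7)
stops = [backward(3.0, 4.0, rng) for _ in range(50)]
print(sum(s < float("inf") for s in stops), "runs stopped out of", len(stops))
```

Here $\beta_* = 3$ and $\beta^* = 4$, so removals dominate additions and the clan dies out quickly, which is the regime in which the algorithm terminates.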
In the following we write a forward procedure for the process in the case where each neuron has a finite number of neighbors and $T^i_{stop} < \infty$. For this we define $N^i_{stop}$, the number of steps of the backward procedure, $C^i$, the union of all clans of ancestors up to $N^i_{stop}$, and $\partial_{ext}(C^i_t)$, the set of neurons not belonging to the clan of ancestors of neuron $i$ but interacting with at least one neuron of this clan.
In this algorithm, we rely on the previously obtained realizations of the processes $N^{i,s}_t$ and $N^{i,p}_t$. Algorithm 4.2 (Forward procedure).
(1) We initialize the set of sites for which the decision to accept or reject can be made. (2) At each possible jump time $T_n$ of neuron $I_{n-1}$, we decide, with probability $p = \big(\beta_{I_{n-1}}(P_{T_n - T_{n-1}}(P)) - \beta_*\big)/(\beta^* - \beta_*)$, to accept the presence of a spike of neuron $I_{n-1}$, and we update $P$ accordingly; otherwise, with probability $1 - p$, we reject the presence of a spike of neuron $I_{n-1}$ and keep $P_{T_n - T_{n-1}}(P)$.
We consider all the elements of $S^i$, always starting with the last element to have left the clan. The update of $S^i$ allows us to start the procedure again.
We stop the procedure when all the elements of $C^i$ have been filled in.
• The output of Algorithm 4.2 above is a sample of the process in its stationary state. For more on this, see [9, p. 21].
• For any site $(i, t) \in \mathbb{Z} \times \mathbb{R}_+$, $C^i_t$ is a Markov jump process taking values in the finite subsets of $\mathbb{Z}$ (see Galves et al. [11]), and its infinitesimal generator acts on test functions $g$.
Proposition 4.4. Let $\underline d_j = \min_{x_j} \beta_j(x_j)$, $\overline d_j = \max_{x_j} \beta_j(x_j)$ and $b_j = \sum_{k \to j} (\overline d_k - \underline d_k)$, where $\sum_{k \to j}$ denotes the sum over all neurons $k$ such that $W_{k \to j} \neq 0$. If $\sup_j b_j < \inf_j \underline d_j$, then for all $j$, $T^j_{stop}$ is finite almost surely.
Proof. Let $i$ be a fixed neuron and $C^i_t$ the clan of ancestors of neuron $i$ at time $t$. We set $b = \sup_j b_j$ and $d = \inf_j \underline d_j$. We construct a process $Z = (Z_t)_t$ such that for all $n$, $|C^i_{T_n}| \le |Z_n|$, where $Z_n = Z_{T_n}$.
We proceed as follows: when we add an element to the clan $C^i_t$, we also add an element to $Z_t$, and when we remove an element from $Z_t$, we also remove an element from the clan $C^i_t$. However, we may remove an element from the clan $C^i_t$ without removing one from $Z_t$, so the two processes do not always jump together. We may therefore couple $C^i_t$ with a classical birth-and-death chain having birth rate $b|Z_n|$ and death rate $d|Z_n|$.
We notice that $E(Z_1) = 2b/(b+d)$; hence, if $b < d$, then almost surely $\lim_{n \to \infty} Z_n = 0$ (see, for instance, Theorem 1 of Athreya and Ney [2]). This implies $\lim_{n \to \infty} |C^i_{T_n}| = 0$, and therefore $T^i_{stop}$ is finite almost surely. □
In this general case, where a neuron has a finite number of neighbors (possibly more than two) with which it interacts, we can say no more than Proposition 4.4. Thus, in the following, we place ourselves in the case where each neuron $i$ has exactly two neighbors, so that neuron $i$ interacts only with neurons $i+1$ and $i-1$. In other words, the incoming neighborhood of $i$ is $V_{\cdot \to i} = \{i+1, i-1\}$.
(1) We simulate, for all $l \in \mathbb{Z}$, two Poisson processes $N^{l,s}_t$ and $N^{l,p}_t$ with respective intensities $\beta_*$ and $\beta^* - \beta_*$. The jump times $T^{l,s}_n$ are the times of sure jumps (counted by $N^{l,s}_t$) and the jump times $T^{l,p}_n$ are the times of possible jumps (counted by $N^{l,p}_t$).
(2) Let $i \in \mathbb{Z}$ be fixed. If the first event is a sure jump of neuron $i$, we set $C^i_{T_1} = \emptyset$ and stop the algorithm, putting $I_1 = i$.
(3) Suppose $T_n$ is the $n$-th jump time of $C^i_{T_n}$. We update $C^i_t$ as before and start the procedure again. We stop the procedure at time $T^i_{stop}$. Indeed, the whole procedure makes sense only if $T^i_{stop} < \infty$ almost surely. Remark 4.6. The forward procedure is the same as in the first case, where each neuron has a finite number of neighbors.
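The comparison chain used in the proof of Proposition 4.4 can be simulated directly: from size $z > 0$, the embedded chain moves up with probability $b/(b+d)$ and down otherwise. The numerical values of $b$ and $d$ below are illustrative assumptions.

```python
import random

# Embedded birth-and-death chain from the proof of Proposition 4.4:
# from z > 0, go to z+1 with probability b/(b+d), to z-1 otherwise.
def extinct(b, d, rng, cap=10**4, max_steps=10**5):
    z, steps = 1, 0
    while 0 < z < cap and steps < max_steps:
        z += 1 if rng.random() < b / (b + d) else -1
        steps += 1
    return z == 0

rng = random.Random(123)
frac_sub = sum(extinct(1.0, 2.0, rng) for _ in range(2000)) / 2000   # b < d
frac_super = sum(extinct(2.0, 1.0, rng) for _ in range(2000)) / 2000 # b > d
print(frac_sub, frac_super)
```

When $b < d$ the chain is subcritical and essentially every run dies out, matching the proposition; when $b > d$, extinction starting from size 1 occurs with probability $d/b$ only, so a positive fraction of runs survives.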
The following theorem gives conditions for the finiteness of the extinction time of the process.
Theorem 4.7. Set $\delta = \frac{\beta_*}{\beta^* - \beta_*}$. There exists a critical value $0 < \delta_c < \infty$ such that:
• if $\delta > \delta_c$, then the extinction time is finite almost surely, that is, $P(\forall i,\ T^i_{stop} < \infty) = 1$;
• if $\delta < \delta_c$, then the extinction time is infinite with a positive probability, that is, $P(\forall i,\ T^i_{stop} = \infty) > 0$.
Proof. We first show that $T^i_{stop} < +\infty$ almost surely for sufficiently large $\delta$. We can bound $|C^i_t|$ (the cardinality of $C^i_t$) from above, almost surely for all $t \ge 0$, by $Z_t$, where $Z_0 = 1$ and $(Z_t)_{t \ge 0}$ is a branching process which jumps from $n$ to $n+1$ with rate $n(\beta^* - \beta_*)$ and from $n$ to $n-1$ with rate $n\beta_*$ (for more details, see Ferrari et al. [7]). We then observe that the occurrence of each of the triplets uru, urd and drd can be bounded above by $\delta$: the probability associated with uru or drd is $\frac{\delta}{1+2\delta}$ and that of urd is $\frac{\delta}{2+\delta}$. Similarly, the occurrence of each of dld, ulu and dlu can be bounded above by 1; the associated probability is $\frac{1}{1+\delta}$. We therefore obtain the following list of upper bounds:
uru occurs with probability at most $\delta$;
urd occurs with probability at most $\delta$;
drd occurs with probability at most $\delta$;
dru occurs with probability at most 1;
dld occurs with probability at most 1;
ulu occurs with probability at most 1;
dlu occurs with probability at most 1.
In the above list, we bounded the probability associated with dru, which equals $\frac{\delta}{3\delta} = \frac{1}{3}$, by 1. For a given contour with $4n$ edges, $n \ge 2$, its probability is therefore bounded above by $\delta^{N(drd)+N(uru)+N(urd)} = \delta^{\,n - N(dru)} \le \delta^{n/2}$. Moreover, for each triplet we have 4 possible choices: the first entry of a given triplet is always fixed by the previous triplet in the sequence, and for the first triplet $D_1$ the first entry is always u.
Then, for $n = 1$, the probability of occurrence of a contour of length 4 equals $P(D_1 = urd) = \frac{\delta}{2+\delta} \le \delta$. For $n = 2$, the probability of occurrence of a contour of length 8 is bounded in the same way.
Remark 4.9. In the above probabilities, we have not included the direction $D_4 = dlu$ because it is a certain direction: it is common to all possible paths and occurs with probability 1.
In this example, the distribution of the state of inhibition $X_0(i)$ in the stationary regime seems to be continuous although $F_i$ is discrete. We do not provide a proof here; this is outside the scope of this paper. We observe two local extrema at 1 and 2, which are linked to the jumps because of the Dirac masses. These extrema suggest that jumps are very frequent in this process.
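Combining the bounds above, under the counting stated in the proof (at most $4$ choices per triplet, probability at most $\delta^{n/2}$ for a contour with $4n$ edges), the total weight of all contours can be summed as a geometric series; this is a sketch of the final step, not a verbatim reproduction of the paper's computation:

```latex
\sum_{n \ge 1} \#\{\text{contours of length } 4n\}\; \delta^{n/2}
  \;\le\; \sum_{n \ge 1} 4^{n}\, \delta^{n/2}
  \;=\; \sum_{n \ge 1} \bigl(4\sqrt{\delta}\bigr)^{n}
  \;=\; \frac{4\sqrt{\delta}}{1 - 4\sqrt{\delta}}
  \;<\; \infty \qquad \text{whenever } \sqrt{\delta} < \tfrac{1}{4},
```

so for $\delta$ small enough the total probability that some contour occurs can be made strictly less than 1.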

Proposition 3.10. For all $1 \le k \le N$, let $\alpha_k$ be a globally Lipschitz function. For all $y \in \mathbb{R}^N_+$, there exists an open neighborhood $B$ of $y$ such that $\Psi_{s_N} : B \to \Psi_{s_N}(B)$ is a diffeomorphism.

Here $d = c'\, \lambda(B_r(t^*))$, where $\lambda(B_r(t^*))$ is the Lebesgue measure of the ball $B_r(t^*)$ and $\nu$ is the uniform measure on $I$. □

Corollary 3.11. If for all $k \le N$, $\beta_k$ is bounded and bounded below away from zero, then the process is recurrent.

Figure 4.3. Densities of $X_0(i)$.

Let $0 < S_1 < S_2 < \cdots$ be the instants of the successive jumps of the $N$ neurons. It is obvious that the embedded chain $Z_n := X_{S_n}$ is a Markov chain. Let $I_n$ be the index of the neuron which jumps at time $S_n$.

Proposition 3.7. Suppose that the assumptions of Proposition 2.5 hold. Then $(Z_n, I_n)$ is a Markov chain, with transition kernel $Q(x, dy)$.

(1) The neurons of the clan of ancestors $C^i_t$ jump up with jump rate $\sum_{j \in C^i_t} \cdots$