Rare event simulation for electronic circuit design

In this work, we propose an algorithm to simulate rare events for electronic circuit design. Our approach relies heavily on a careful use of importance sampling, which enables us to tackle probabilities of magnitude 10^−10. Not only can we compute very rare default probabilities, but we can also compute the quantile associated with a given default probability and its expected shortfall. We demonstrate the remarkable efficiency of our method on real circuits.


Introduction
We consider the problem of computing an expectation in the context of rare event simulation using a Monte Carlo approach. It is well known that, in such a framework, the accuracy of the Monte Carlo estimator is quite poor, because too few samples yield a nonzero value of the quantity of interest. Assume we want to compute E[ϕ(X)], where ϕ : R^d → R and X is a random vector with values in R^d. There are basically two well-known tools to improve the accuracy of the crude Monte Carlo estimator: importance sampling and stratified sampling. As we will see in Section 2, importance sampling can be implemented in an almost fully automated way, while stratification requires a good a priori knowledge of the realization space.
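To make the difficulty concrete, here is a minimal crude Monte Carlo sketch; the function h, the threshold and the sample sizes are illustrative stand-ins, not taken from this paper (in practice each evaluation of h is a full SPICE run). For an event of probability around 3 × 10^−5, even 10^5 samples produce only a handful of non-zero terms:

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma = 2, 4.0

def h(x):
    # illustrative stand-in for the circuit performance; in practice each
    # evaluation of h would be a full SPICE simulation
    return x.sum(axis=-1) / np.sqrt(x.shape[-1])  # h(X) ~ N(0, 1)

n = 100_000
X = rng.standard_normal((n, d))
hits = h(X) >= gamma            # P(h(X) >= 4) ~ 3.2e-5: expect only ~3 hits
p_hat = hits.mean()             # crude estimator, huge relative error
```

With so few non-zero samples, the 95% confidence interval is of the same order of magnitude as the estimate itself, which motivates the variance reduction techniques discussed below.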
As an application problem, we will study the efficient and accurate determination of the yield of an integrated circuit, through electrical circuit-level simulation, under stochastic constraints due to the manufacturing process. Several orders of magnitude of variance reduction can be achieved: for example, the millions or billions of MC runs necessary for a very small probability might be reduced to a few hundred or thousand runs. Section 5 presents quantitative results and speedup factors obtained with our importance sampling algorithm. The performance will be demonstrated on several integrated circuits, from SRAM bitcells to digital standard cells and analog IP blocks, as well as on some small toy circuits.
We first give an overview of the semiconductor industry, and of integrated circuits in particular. The notion of yield is explained, along with how it translates into the estimation of very low failure probabilities for some kinds of circuits. We will also briefly introduce the concept of analog circuit simulation used in our context.

EDA market and its segmentation
The past decades saw a considerable evolution of the manufacturing processes of integrated circuits (IC). The beginning of the twenty-first century has experienced the same trend, but with the emergence of connected objects, applications linked to mobility, onboard systems (especially in the automotive domain), the processing of mass data, and medical devices.
For circuit designers, it is necessary to reduce the time needed to put new products on the market, while satisfying the demands for increased performance and for reduced consumption, weight, size and cost. To these end-user constraints are added the problems of safety and security (aviation, motor vehicles), and of connectivity and adaptivity to more and more standards: WLAN (Wireless Local Area Network), Bluetooth, Wi-Fi (Wireless Fidelity). The result is an exponential growth in complexity of operation and in size (in number of transistors) of ICs. The time to market of a product is a crucial factor in its success. These systems of great complexity have to be designed and validated efficiently. The manufacturing costs being high, multiple attempts cannot be accepted.
The initial design of complex circuits relies in particular on standard cell libraries and blocks of intellectual property (IP) developed by specialized teams or companies. Simulation constitutes the initial step after the specification. From our perspective, we will consider only a limited number of segments of this market: memory circuit simulation and standard cells. The simulation of complete digital blocks (digital SoCs) or large AMS/RF circuits cannot be addressed with our techniques, due to the prohibitive CPU time required for each individual simulation. For our experiments, electrical simulations will be performed with the commercial SPICE simulator Eldo®. SPICE-type simulators (Simulation Program with Integrated Circuit Emphasis) are the industry-standard way to verify circuit operation at the transistor level before manufacturing an integrated circuit.

Principles of analog circuit simulation
The automatic analysis routines of a SPICE simulation program determine the numerical solution of a mathematical representation of the circuit under validation. After each circuit element has been modeled, the system of equations describing the complete circuit is determined, and topological constraints are imposed by the interconnection of the elements. The equations of the circuit generally form a nonlinear differential algebraic system of the form

Φ(u̇(t), u(t), t) = 0,     (1)

where Φ is a nonlinear vector function, u(t) is the vector of circuit variables (e.g. node voltages and charges or branch currents), and u̇(t) is the derivative of this vector with respect to time t.
The circuit analysis consists in determining the solution of Eq. (1) in the following different simulation domains: nonlinear quiescent point calculation (DC), time-domain large-signal solution of nonlinear differential algebraic equations (TRAN), linear small-signal frequency domain (AC), and noise analysis (NOISE). SPICE circuit simulation programs are presented in Ho et al. [1975], Vlach and Singhal [1993] and McCalla [1987].
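To give a concrete, deliberately minimal idea of what a DC analysis does, the sketch below computes the quiescent point of a hypothetical one-node circuit (a voltage source feeding a diode through a resistor) by applying Newton's method to its KCL equation; all component values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# KCL at the single node: the current through R must equal the diode current
# (v - Vs)/R + Is*(exp(v/Vt) - 1) = 0
Vs, R = 5.0, 1e3            # supply voltage [V], series resistance [ohm]
Is, Vt = 1e-12, 0.02585     # diode saturation current [A], thermal voltage [V]

def f(v):
    return (v - Vs) / R + Is * (np.exp(v / Vt) - 1.0)

def dfdv(v):
    return 1.0 / R + (Is / Vt) * np.exp(v / Vt)

v = 0.6                     # initial guess near a typical silicon diode drop
for _ in range(100):
    step = f(v) / dfdv(v)   # Newton update on the nodal equation
    v -= step
    if abs(step) < 1e-12:
        break
# v now holds the DC operating point of the node (about 0.57 V here)
```

A real simulator solves the same kind of nonlinear system, but with thousands of nodes and far more elaborate device models.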
Monte Carlo simulation in SPICE programs is performed through multiple model evaluations with randomly selected model process variables. These variables are denoted X in this paper. It is worth mentioning that the input parameters of the models are in general the various design parameters (geometry of transistors), the process parameters (random fluctuations due to the manufacturing process), and the environmental conditions (voltage conditions and temperatures). The electrical models of all major semiconductor manufacturers contain statistical models of process parameters, with estimates of central tendency characteristics such as the mean value and standard deviation, and even higher moments such as skewness or kurtosis. In the context of circuit simulation, the problems can live in very high dimension: the number of parameters is in general proportional to the number of transistors. In Section 5.2, circuit examples will be presented where the dimension of X ranges from a few tens to several thousand components.
A circuit performance may be any electrical characteristic extracted from the simulation. It will be denoted H = h(X). In our context, the simulator acts as a black box: we know nothing a priori about the intrinsic nature of the relation between the measured output and the statistical parameters. Typical measures of performance are the time delay for signal propagation across a circuit, the gain of an amplifier, or the oscillation frequency of an oscillator.

Yield losses and probability of failure
The yield of an Integrated Circuit (IC) plays a major role in the manufacturing cost of an Application-Specific IC (ASIC). This concept typically enters a manufacturing flow called the "Yield Learning Flow". In such a flow, the yield is defined as the proportion of the manufactured circuits that satisfy both the design engineers' and the process engineers' specifications. The yield of a manufactured product should increase from the initial design to large-volume production. Yield is therefore related to the cost of production. In nanoscale technologies, yield losses can be entirely caused by uncontrolled and unavoidable factors like random dopant fluctuation and line edge roughness (see Agarwal and Nassif [2008]). And each new technology node tends to make the problem more difficult, since the size of the transistors within ICs has shrunk exponentially, following Moore's Law. Over the past decades, typical MOS transistor channel lengths went from several micrometers (the Intel 8086 CPU launched in 1978 used a 3.2 µm process) to the tens of nanometers found in today's ICs. This empirical law predicted that the number of transistors would double every 2 years, due to shrinking transistor dimensions and additional technical improvements. As a consequence, some authors predicted that power consumption per unit area would remain constant, and that computer chip performance would roughly double every 18 months. The leading manufacturers continue to satisfy this empirical trend. In 2020, TSMC had already started mass producing 5 nanometer (nm) chips for companies including Apple, Huawei and Qualcomm. TSMC and Intel both have planned 2 nm products on their road-maps, with TSMC scheduling earliest production for 2023 and Intel for 2029. In May 2021, IBM announced the development of the first chip with 2 nm technology.
In our setting, it is more convenient to work with the probability of failure than with the yield (the probability of success). In some applications, it is much simpler to determine the yield of a single type of circuit which is repeated a very large number of times, say several million. A typical and important example is Static Random-Access Memory (SRAM) manufacturing yield. The elementary block in SRAM circuits is the bitcell; its role is to store one bit of data using latching circuitry (a flip-flop). The probability of failure is defined here as the probability of a wrong operation (read, write, or retention) occurring in an SRAM circuit. For such problems, the yield may be quantitatively assessed by considering an explicit value for the acceptable quality limit (AQL). For example, an AQL for 1-Megabit SRAM circuits could be 100 ppm. If production of one million such circuits is targeted, the manufacturing process cannot allow more than 100 defective bitcells globally. This means that the probability of failure of a single bitcell must be at most

100 defective bitcells / (10^6 circuits × 10^6 bits) = 10^−10.

This relation shows that yield drops exponentially with the number of elementary blocks, and imposes extremely small values for the failure probability whenever a large number of repetitions is used.
The practical problem of estimating failure probabilities for a large class of circuits is then translated into a general mathematical framework. The goal is to compute the expectation E[ϕ(X)], where the function ϕ writes as an indicator function ϕ(x) = 1_{h(x)≥γ} with x ∈ R^d, h a circuit performance of interest (evaluated through SPICE simulations), and γ an upper bound specified by the circuit designer. Note also the practical importance of the inverse problem, i.e. the estimation of the quantile γ = Γ(p) given the probability of failure 0 < p < 1, where the quantile function Γ is defined by

Γ(p) = inf{γ ∈ R : P(H ≥ γ) ≤ p}

in terms of the output measure distribution F_H. The quantile problem is fundamental in verification flows where the circuit designers have to validate the design of a complete library of standard cells. The circuits must satisfy prescribed specifications; for example, the time delay must be less than some upper bound, γ ≤ γ_u. We will show, through the examples, how this quantile estimator can be obtained in a fully automatic way with our algorithm.
The rest of the paper unfolds as follows. In Section 2, we present importance sampling in a Gaussian framework and further enhance it for very rare events. Then, in Section 3, we apply the methodology developed before to the computation of a default probability and its expected shortfall. In Section 4, we explain how importance sampling and stratified sampling can be coupled to improve the performance of our algorithm on some particularly demanding cases. Finally, Section 5 presents intensive numerical experiments on real electronic circuits.

General framework
Assume the random vector X has a density function g : R^d → R_+. Consider a family of density functions (f(•; θ))_{θ∈R^p}, parametrized by a finite dimensional parameter θ ∈ R^p and satisfying supp(g) ⊂ supp(f(•; θ)) for all θ ∈ R^p. Then, we can write

E[ϕ(X)] = E[ϕ(Y) g(Y)/f(Y; θ)],     (3)

where Y is a random vector with density function f(•; θ). From a practical point of view, one has to find the best density function f(•; θ) in order to maximize the accuracy of the Monte Carlo estimator. First, we need to make precise in which sense "best" has to be understood.
As the convergence of the Monte Carlo estimator is governed by the central limit theorem, it is quite natural to search for the density function f(•; θ) which minimizes the variance of the estimator:

Var(ϕ(Y) g(Y)/f(Y; θ)) = E[ϕ(Y)² g(Y)²/f(Y; θ)²] − E[ϕ(X)]².

Only the second moment depends on θ and, using Eq. (3) again, we obtain

E[ϕ(Y)² g(Y)²/f(Y; θ)²] = E[ϕ(X)² g(X)/f(X; θ)].

If the optimization problem min_θ E[ϕ(X)² g(X)/f(X; θ)] is well-posed (strongly convex for instance), we can solve this problem directly.

The Gaussian case
Assume the density function g is the standard Gaussian density in R^d. Consider the family of Gaussian distributions on R^d with identity covariance matrix, parametrized by their expectations: f(x; θ) = g(x − θ), θ ∈ R^d. Then, the importance sampling formula Eq. (3) writes as

E[ϕ(X)] = E[ϕ(X + θ) e^{−θ•X − |θ|²/2}].     (4)

The second moment can also be written in a simplified way,

v(θ) = E[ϕ(X + θ)² e^{−2θ•X − |θ|²}] = E[ϕ(X)² e^{−θ•X + |θ|²/2}],

where the last equality comes from the application of Eq. (4) with −θ.
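As a quick numerical sanity check of the mean-shift formula Eq. (4) on a toy problem (the function h, the threshold and the shift below are illustrative assumptions), the shifted estimator recovers the same probability as the crude one, with a drastically smaller variance:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
d, gamma = 2, 3.0
theta = np.full(d, 3.0 / np.sqrt(d))   # shift the mean toward the failure region

def h(x):
    return x.sum(axis=-1) / np.sqrt(x.shape[-1])  # h(X) ~ N(0, 1)

n = 200_000
X = rng.standard_normal((n, d))

# crude estimator of E[phi(X)]
crude = (h(X) >= gamma).mean()

# mean-shift estimator, Eq. (4): E[phi(X + theta) exp(-theta.X - |theta|^2/2)]
lr = np.exp(-X @ theta - 0.5 * theta @ theta)
shifted = ((h(X + theta) >= gamma) * lr).mean()

p_exact = 0.5 * erfc(gamma / sqrt(2.0))  # P(N(0,1) >= 3) ~ 1.35e-3
```

With 2 × 10^5 samples, the crude estimate is built from roughly 270 hits, while the shifted estimator makes the event occur for about half of the samples, hence its much tighter confidence interval.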
From this proposition, we can write

∇v(θ) = E[ϕ(X)² (θ − X) e^{−θ•X + |θ|²/2}],  ∇²v(θ) = E[ϕ(X)² (I_d + (θ − X)(θ − X)^T) e^{−θ•X + |θ|²/2}],

and the Hessian matrix is positive definite as soon as P(ϕ(X) ≠ 0) > 0. Hence, the value θ* minimizing the variance is uniquely defined by ∇v(θ*) = 0. As the Hessian matrix is tractable at roughly the same cost as the gradient itself, one can implement a Newton algorithm with optimal descent direction to minimize v.

Existing approaches in EDA
Importance Sampling has been used with great success for many decades in multiple scientific fields, including structural safety engineering, aeronautics, communication networks, finance and nuclear safety. The world of EDA is actually rather behind, comparatively. IS with Gaussian mixtures has notably been used in the context of SRAM simulation (see Kanj et al. [2006]).
The mathematics of the basic idea are rather straightforward, although some users have difficulties with the notion of sampling from a modified distribution. The performance of any IS strategy heavily depends on the chosen instrumental distribution, and the implementation must be very thorough: an incorrect choice of the instrumental distribution (as demonstrated in Hocevar et al. [1983]) or a bad implementation can lead to poor performance, and even to an increase of the variance.
However, at least compared to the field of mechanical engineering, the relation between the process parameters and the output is more often mildly non-linear than not. Given the difficulty, many approaches that yield only an approximation of the probability or quantile have been tried; Machine Learning techniques can help as well. None of these approaches can escape the curse of dimensionality (see for example McConaghy et al. [2013], Singhee and Rutenbar [2009]), and the validity of the approximated confidence intervals, when they can be estimated, is questionable.

Algorithm for the mean shift approach in the Gaussian case
We consider the framework detailed in Section 2.2, relying on the variance criterion to determine the optimal parameter θ*, uniquely defined by ∇v(θ*) = 0 thanks to Proposition 2.1. The idea of the algorithm is to replace all the expectations by empirical means and then to apply deterministic optimization techniques.
4. Compute the expectation E[ϕ(X)] by Monte Carlo using this new sample.

The number of samples n used for solving the optimization problem may differ from the number of samples m used in the Monte Carlo procedure. The way to balance the number of samples between these two steps has yet to be studied further. For n large enough, v_n inherits its strong convexity and differentiability from those of v.
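The two-stage procedure above can be sketched end to end on a toy problem (the function h, the threshold, the sample sizes and the damping rule are illustrative assumptions; the event is chosen moderately rare here so that the plain samples of stage 1 do hit it). Stage 1 minimizes the empirical criterion v_n by a damped Newton method; stage 2 runs an independent Monte Carlo with the fitted shift:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
d, gamma = 2, 2.0   # P(h(X) >= 2) ~ 2.3e-2: reachable by plain sampling

def h(x):
    return x.sum(axis=-1) / np.sqrt(x.shape[-1])

# stage 1: minimize v_n(theta) = (1/n) sum phi(X_i) exp(-theta.X_i + |theta|^2/2)
n = 50_000
X = rng.standard_normal((n, d))
phi = (h(X) >= gamma).astype(float)   # phi^2 = phi for an indicator

def grad_hess(theta):
    w = phi * np.exp(-X @ theta + 0.5 * theta @ theta)
    diff = theta - X
    grad = (w[:, None] * diff).mean(axis=0)
    hess = w.mean() * np.eye(d) + np.einsum('i,ij,ik->jk', w, diff, diff) / n
    return grad, hess

theta = np.zeros(d)
for _ in range(50):
    g, H = grad_hess(theta)
    step = np.linalg.solve(H, g)
    norm = np.linalg.norm(step)
    if norm > 1.0:          # damping: cap the Newton step length
        step /= norm
    theta -= step
    if norm < 1e-10:
        break

# stage 2: fresh samples, mean-shift estimator of Eq. (4) with the fitted theta
m = 50_000
Y = rng.standard_normal((m, d))
lr = np.exp(-Y @ theta - 0.5 * theta @ theta)
p_hat = ((h(Y + theta) >= gamma) * lr).mean()

p_exact = 0.5 * erfc(gamma / sqrt(2.0))   # ~ 2.275e-2 for this toy h
```

The optimization of stage 1 reuses one batch of samples for all Newton iterations, so its cost is dominated by the n evaluations of h, exactly as in the flow described above.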
The Sample Average Approximation estimator θ_n is known to converge to θ⋆ and to satisfy a central limit theorem under integrability conditions; we refer to Jourdain and Lelong [2009] and Rubinstein and Shapiro [1993] for a proof of this result. The convergence of the sequence (M_m(θ_n))_{m,n} is governed by the following theorem. (ii.) If the function ϕ is almost everywhere continuous and if a suitable integrability condition holds for some η > 0 on some compact neighbourhood of θ⋆, then a central limit theorem holds for M_m(θ_{n(m)}). Proof. The proof of (i.) can be found in Badouraly Kassim et al. [2015].
We only give the guidelines to prove (ii.).
From the standard central limit theorem and Proposition 2.3, the first term converges in distribution. Then, it is sufficient to monitor the difference between the quantities computed at θ_n and at θ⋆, on a set which is a subset of V for large enough n. As ϕ is almost everywhere continuous, we deduce that this difference a.s. goes to 0 when n tends to infinity. Let δ < η; the conditional independence combined with Hölder's inequality yields, for large enough n, a uniform bound which proves the uniform integrability of the family of differences. Then, we deduce that the second term in (9) tends to zero.
An immediate consequence of this result is that the accuracy of a Monte Carlo algorithm based on M m (θ n(m) ) can be monitored using the standard theory of confidence intervals.
A key issue from a practical and computational point of view is how to fix n(m) for a given value of m. The choice of n(m) has no impact on the convergence rate, but only on how close the empirical variance is to the optimal variance. Hence, we are actually more interested in the convergence of v_n(θ_n) to v(θ⋆) than in that of θ_n itself. Monitoring the convergence of v_n(θ_n) gives us a hint on how to choose n(m).

Proposition 2.5. Under the assumptions of Proposition
By the standard central limit theorem, √ n(v n (θ ⋆ ) − v(θ ⋆ )) converges in distribution to a normal random variable with mean 0 and variance (γ ⋆ ) 2 .
To handle the remaining term, we use a Taylor expansion around θ⋆, where θ̄_n is a point on the segment joining θ⋆ and θ_n. Since ∇v_n converges locally uniformly to ∇v (see Jourdain and Lelong [2009]), ∇v_n(θ̄_n) converges a.s. to ∇v(θ⋆) = 0. Then, we conclude from the convergence in distribution of √n(θ_n − θ⋆). As usual, if we consider an estimator γ²_n of (γ⋆)², we obtain a standardized statistic converging in distribution to N(0, 1), which can be used in practice to determine the magnitude of n. A natural estimator γ²_n can be computed online within the minimization algorithm. The convergence of γ²_n to (γ⋆)² ensues from the locally uniform strong law of large numbers for the sequence of random functions θ → ϕ⁴(X_i) e^{−2θ•X_i + |θ|²}; see Rubinstein and Shapiro [1993, Lemma A1] or Ledoux and Talagrand [1991, Corollary 7.10, page 189].

Using importance sampling for very rare event simulation
Assume the function ϕ writes as an indicator function ϕ(x) = 1_{h(x)≥γ} with x ∈ R^d, where γ makes the event almost never occur in practice: P(h(X) ≥ γ) is much smaller than 1/n. Then, the empirical updating formulae are not well defined and we will never manage to compute the mixture parameters straightaway. In such a situation, Kroese et al. [2006] suggest using a ladder with a finite number of levels to reach the level γ. Consider some levels γ_1, ..., γ_K = γ, where K is a priori unknown but computed online. In this scheme, the samples used at step k are drawn according to the importance sampling distribution obtained from step k − 1, which ensures that the level γ_k is never too rare.
In this section, we focus only on the mean-shift approach. The multilevel approach is based on a step-by-step update of the importance density, thus ensuring that the targeted event is never too rare. At step k, the samples are generated according to the importance sampling density computed at step k − 1. Before stating the algorithm, we need to extend Eq. (4) to a wider setting.
Consider two R^d-valued normal random vectors Y and Z with densities f(•; θ_Y) and f(•; θ_Z) respectively, for some θ_Y, θ_Z ∈ R^d. Then, from Eq. (3), we get

E[ϕ(Y)] = E[ϕ(Z) f(Z; θ_Y)/f(Z; θ_Z)] = E[ϕ(Z) e^{(θ_Y−θ_Z)•Z − (|θ_Y|²−|θ_Z|²)/2}],

where the last expression comes from expanding the density functions. The algorithm can then be written as follows.

Algorithm 2.6 (Multilevel importance sampling). Fix θ^(0) = 0 and k = 0.
1. Draw samples X^(k)_1, ..., X^(k)_n according to f(•; θ^(k)).
2. Using the previous samples, compute the ρ-quantile of h(X^(k)) and set it to γ_{k+1}, provided it is less than γ; otherwise set γ_{k+1} = γ.
3. Compute the variance criterion for the level γ_{k+1} and let θ^(k+1) be its minimizer.
4. If γ_{k+1} < γ, return to step 1 and proceed with k ← k + 1.
5. Draw new samples X̃_1, ..., X̃_m following the standard normal distribution and independent of the past, and approximate E[ϕ(X)] by the importance sampling estimator built with the last computed parameter.

Using intermediate levels has no theoretical impact on the convergence of the algorithm; it is a practical though essential trick to circumvent the difficulty of sampling a rare event.
If the levels (γ_k)_k are chosen so that the events {h(X^(k+1)_j) ≥ γ_k} are never too rare, then a fairly small number of samples n can be used. We know from Proposition 2.4 that the convergence rate of S_{m,n} is governed by √m, so the computational effort should be put on m.
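A compact sketch of the multilevel strategy on a toy problem may help (the function h, the target level, the sample sizes and the simple parameter update below are illustrative assumptions; in particular, the mean update is a cross-entropy-style stand-in for the Newton minimization of Section 2):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
d, gamma, rho = 2, 4.0, 0.1    # target P(h(X) >= 4) ~ 3.2e-5 for this toy h
n = 2_000                      # small batches suffice for the level search

def h(x):
    return x.sum(axis=-1) / np.sqrt(x.shape[-1])

theta = np.zeros(d)
gamma_k = -np.inf
while gamma_k < gamma:
    # draw under the current importance density N(theta, I_d)
    X = theta + rng.standard_normal((n, d))
    # next level: (1 - rho)-quantile of the performance, capped at gamma,
    # so the targeted event is never too rare at the current step
    gamma_k = min(np.quantile(h(X), 1.0 - rho), gamma)
    # move the mean toward the samples exceeding the current level
    theta = X[h(X) >= gamma_k].mean(axis=0)

# final step: fresh standard normal samples and the estimator of Eq. (4)
m = 100_000
Y = rng.standard_normal((m, d))
lr = np.exp(-Y @ theta - 0.5 * theta @ theta)
p_hat = ((h(Y + theta) >= gamma) * lr).mean()

p_exact = 0.5 * erfc(gamma / sqrt(2.0))   # ~ 3.17e-5
```

Note how each level keeps roughly a fraction ρ of the current batch in the tail, so a batch of only 2000 runs is enough to climb to an event of probability 3 × 10^−5; the computational effort is then spent on the final m samples, as discussed above.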

Computation of θ^(k+1)_n

The vector θ^(k+1)_n is defined as the solution of a minimization problem which is known to be strongly convex. Hence, the solution can be computed using the Newton algorithm with optimal descent direction, i.e. by finding the root of the gradient.
When n goes to infinity, the Hessian matrix converges to a matrix whose smallest eigenvalue is larger than a bound proportional to E[1_{h(X^(k+1))≥γ}]. Depending on the choice of the levels (γ_k)_k, this lower bound can become quite small, which may lead to a badly performing Newton algorithm: the smallest eigenvalue of the Hessian matrix measures the convexity of the function, and the less convex the function is, the slower the Newton algorithm converges. To circumvent this problem, we suggest considering an alternative characterization of θ^(k+1)_n. If we introduce a suitably renormalized function u^(k+1)_n, then θ^(k+1)_n can be seen as a root of ∇u^(k+1)_n(θ), and we can prove that the Hessian matrix ∇²u^(k+1)_n(θ) is uniformly lower bounded by I_d. We refer the reader to Jourdain and Lelong [2009] for more details on this subtle transformation of the original problem, which ensures a minimum convexity.
Note that, in practice, one needs to solve a linear system involving the Hessian at each iteration of the Newton algorithm. The Hessian being a dense matrix, it is more efficient to use an iterative algorithm such as the conjugate gradient algorithm.
The number d of input variables may also be large. This dimension is directly linked to the number of electronic devices in a given circuit, and specifically to the number of transistors. This dimension does not impact the theoretical convergence properties of the approach; however, as it grows, it becomes difficult to make sufficient progress during the multilevel algorithm. When the simulation budget is limited, we experienced non-convergence (and therefore failure) of the importance sampling algorithm when the number of variables d is close to 10^5. A simple idea is to look for the importance sampling parameter in a subspace {Aϑ : ϑ ∈ R^{d′}}, where A ∈ R^{d×d′} is a matrix with rank d′ ≤ d. In our situation, the matrix A needs to be identified automatically; it is a submatrix of the identity matrix obtained by removing the d − d′ columns corresponding to the unimportant variables. Our approach falls into the category of sequential deterministic methods: it produces the same subset on a given problem every time, by starting with a single solution (a variable subset) and iteratively adding or removing dimensions until some termination criterion is met. This approach is clearly suboptimal: there is no guarantee of finding the optimal subset (because we do not examine all possible subsets). We made some tests to validate the approach on large scale problems; see Section 5.1.
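A minimal sketch of the dimension reduction idea, under simplifying assumptions: the test function mimics the setting of Section 5.1 (only a few coordinates matter), and the ranking criterion, the conditional mean of the failed samples, is an illustrative choice rather than the exact selection rule used in the paper:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)
d, d_eff, gamma = 200, 5, 3.0
a = np.zeros(d)
a[:d_eff] = 1.0 / np.sqrt(d_eff)     # only the first 5 inputs are important

def h(x):
    return x @ a                      # h(X) ~ N(0, 1)

n = 20_000
X = rng.standard_normal((n, d))
fail = h(X) >= gamma                  # P ~ 1.3e-3: a few dozen failed samples

# rank the coordinates: E[X_j | failure] is ~0 for unimportant variables
score = np.abs(X[fail].mean(axis=0))
selected = np.argsort(score)[-d_eff:]           # keep the d' best coordinates

# the mean shift is searched only in span{e_j : j selected}; here we simply
# set it to the conditional mean of the selected coordinates
theta = np.zeros(d)
theta[selected] = X[fail][:, selected].mean(axis=0)

m = 100_000
Y = rng.standard_normal((m, d))
lr = np.exp(-Y @ theta - 0.5 * theta @ theta)
p_hat = ((h(Y + theta) >= gamma) * lr).mean()   # still unbiased for any theta

p_exact = 0.5 * erfc(gamma / sqrt(2.0))
```

An important safety property, visible in the last line, is that the importance sampling estimator remains unbiased for any fixed shift, so a suboptimal variable subset only degrades the variance, not the correctness of the estimate.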

Computing a default probability and its expected shortfall
In this section, we assume that the function ϕ is given by ϕ(x) = 1_{h(x)≥γ}, where h : R^d → R and γ ∈ R. We consider the problem of estimating both P(h(X) ≥ γ) and E[h(X)|h(X) ≥ γ], where X is a standard normal random vector in R^d. The quantity E[h(X)|h(X) ≥ γ] is called the conditional value at risk (CVaR) or expected shortfall, and we denote it by CVaR(h(X)). The CVaR can be seen as a measure of the dispersion of the distribution tail. In particular, a CVaR close to γ means that the failed circuits are very concentrated, which justifies the practitioners' habit of picking a single failed circuit at random when trying to analyse the reasons of a failure. We will give a practical example in Section 5.2.2.
The default probability is computed by applying the importance sampling approach developed in Section 2 to its standard Monte Carlo estimator.
Note that CVaR(h(X)) can be written

CVaR(h(X)) = E[h(X) 1_{h(X)≥γ}] / P(h(X) ≥ γ).

Based on Section 2.2, it is natural to consider the importance sampling version of the CVaR, in which both the numerator and the denominator are estimated with the mean-shift formula Eq. (4). Note that we could afford different importance sampling parameters for the numerator and the denominator, each being the solution of an optimization problem. Since we aim at computing both CVaR(h(X)) and P(h(X) ≥ γ), we will anyway have to compute the optimal θ for the denominator. Considering the computational cost of solving such an optimization problem, we believe it is not worth using two different importance sampling parameters in (12).
A standard estimator then writes as the ratio of the importance sampling estimators of the numerator and of the denominator. Although the estimations of P(h(X) ≥ γ) and CVaR(h(X)) are presented one after the other, in practice they are computed simultaneously using the same runs of the simulator. Thus, the two estimators are actually obtained for the cost of a single one.
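On a one-dimensional toy example where everything is known in closed form (h(x) = x, with a threshold and a shift chosen by hand for illustration), the joint estimation of the tail probability and of the CVaR from a single set of weighted samples looks as follows:

```python
import numpy as np
from math import erfc, exp, pi, sqrt

rng = np.random.default_rng(5)
gamma, theta = 3.0, 3.0        # illustrative: mean shift placed at the threshold
n = 200_000

X = rng.standard_normal(n)
lr = np.exp(-theta * X - 0.5 * theta**2)   # likelihood ratio of Eq. (4)
w = (X + theta >= gamma) * lr              # weighted indicator of failure

p_hat = w.mean()                           # IS estimator of P(h(X) >= gamma)
cvar_hat = ((X + theta) * w).mean() / p_hat   # same theta and the same runs
                                              # for numerator and denominator

# closed-form values for the standard normal tail
p_exact = 0.5 * erfc(gamma / sqrt(2.0))                        # ~ 1.35e-3
cvar_exact = exp(-gamma**2 / 2.0) / sqrt(2.0 * pi) / p_exact   # ~ 3.28
```

Note that cvar_hat is a ratio of two estimators built on the same weighted samples, hence obtained for the cost of one, exactly as described above.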
It is clear from the strong law of large numbers that CVaR_n(X) → CVaR(X) a.s. when n → ∞. Due to the random nature of an estimator, it has to come with a confidence interval, which relies on a central limit theorem.
Proposition 3.1. The following convergence holds, with the asymptotic variance σ² given in the proof.

Proof. To keep the notation of the proof as compact as possible, we introduce shorthand notation; when i = 1, we simply drop the index and, similarly, for θ = 0, we drop θ. From the standard multi-dimensional central limit theorem, the vector of empirical means satisfies a joint central limit theorem. This convergence in distribution, combined with Slutsky's theorem, yields that √n(CVaR_n(Y) − CVaR(Y)) converges in distribution to a normal random variable with mean 0 and variance σ².

Some comments on Proposition 3.1. It is interesting to analyse the convergence result stated in Proposition 3.1 in light of the behaviour of another estimator of CVaR(X). Let us introduce the estimator ĈVaR_n, in which the empirical tail probability in the denominator is replaced by the exact value P(Y > γ); this alternative estimator is unbiased. From the standard central limit theorem, we deduce a central limit theorem for ĈVaR_n(h(X)) with limiting variance σ̂². The limiting variance σ̂² is larger than the one appearing in (14) (at least for θ = 0) but, unlike ĈVaR_n(h(X)), the estimator CVaR_n(h(X)) is biased, which is precisely the effect induced by replacing P(Y > γ) by its empirical counterpart. The gap between the two limiting variances may become impressively large. Consider the following toy example with h(x) = x, X following a real-valued standard normal distribution and γ = 1.5, and take θ = 0. In this case, P(X > γ) ≈ 0.067, and we find σ² ≈ 2 whereas σ̂² ≈ 54.
In the case θ = 0, we can compute the bias.

Proposition 3.2. The estimator CVaR
Proof. We use the notation Y = h(X). We define T_n = Σ_{i=1}^n 1_{Y_i>γ}, and the tuple (Z_1, ..., Z_{T_n}) holds the values of (Y_1, ..., Y_n) for which Y_i > γ. With these new random variables, the computation simplifies, as the random variables Z_i 1_{T_n=t} are identically distributed. Let us compute the joint distribution of (Z_1, T_n): let z ∈ R and t ∈ {1, ..., n}. Then, from (18), we deduce the result.

Stratified sampling
Another well-known tool for rare event simulation is stratified sampling. This technique can lead to very impressive results, provided one has a good a priori knowledge of the strata and of their respective weights in the sampling.

Introduction to stratification
Consider a partition D_1, ..., D_I of R^d and assume we know how to sample according to L(X|X ∈ D_i) for i = 1, ..., I. Let p_i = P(X ∈ D_i). We assume that the strata are chosen such that p_i > 0 for all i. For a fixed 1 ≤ i ≤ I, let (X^i_j, 1 ≤ j ≤ N_i) be i.i.d. samples according to L(X|X ∈ D_i), where N_i is the number of samples in stratum i. We assume that the samples in two different strata are independent.
We know that

E[ϕ(X)] = Σ_{i=1}^I p_i E[ϕ(X)|X ∈ D_i].

Hence, we can use the following Monte Carlo estimator to approximate E[ϕ(X)]:

Σ_{i=1}^I p_i (1/N_i) Σ_{j=1}^{N_i} ϕ(X^i_j).

Assume the total number of samples is N. Then, the variance of this estimator is minimum for the allocation

N_i = N q_i with q_i = p_i v_i / Σ_{k=1}^I p_k v_k,

where v_i² = Var(ϕ(X^(i))) is the variance within stratum i. Using this optimal allocation yields an estimator with minimum variance v*²/N, with v* = Σ_{i=1}^I p_i v_i. However, this allocation formula can hardly be used in practice, as the v_i's are usually unknown. To overcome this difficulty, Etoré and Jourdain [2010] proposed an adaptive scheme to learn the (v_i)_{i=1,...,I} and then the (q_i)_{i=1,...,I}.
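A small self-contained illustration of these formulas (the integrand, the equiprobable strata and the pilot size are illustrative choices): equiprobable strata of the normal distribution, a pilot run to estimate the within-stratum dispersions v_i, and the resulting near-optimal allocation:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(6)
nd = NormalDist()

def phi(x):
    return x**2            # toy integrand: E[phi(X)] = 1 for X ~ N(0, 1)

I = 10
p = np.full(I, 1.0 / I)    # equiprobable strata delimited by normal quantiles

def sample_stratum(i, size):
    # sample L(X | X in D_i) by inverse transform on the uniform scale
    u = rng.uniform(i / I, (i + 1) / I, size)
    return np.array([nd.inv_cdf(v) for v in u])

# pilot run: estimate the within-stratum standard deviations v_i
v = np.array([np.std(phi(sample_stratum(i, 500))) for i in range(I)])

# optimal allocation: N_i proportional to p_i * v_i
N = 20_000
q = p * v / (p * v).sum()
Ni = np.maximum((q * N).astype(int), 2)

est = sum(p[i] * phi(sample_stratum(i, Ni[i])).mean() for i in range(I))
```

Here the tail strata receive most of the budget, since phi varies most there; in practice the v_i are not known and are learned adaptively, as in Etoré and Jourdain [2010].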

Importance sampling as a stratifying direction
The key difficulty in applying stratified sampling techniques is knowing which strata to use. Glasserman et al. [1999] suggested using the optimal importance sampling vector as a direction along which to stratify. Assume some optimal shift µ⋆ has already been computed. The vector u = µ⋆/|µ⋆| defines the direction orthogonal to the hyperplanes on which the strata are based. Consider an increasing sequence of levels (a_i)_{0≤i≤I} with a_0 = −∞ and a_I = +∞; we consider the partition

D_i = {x ∈ R^d : a_{i−1} ≤ u • x < a_i},  i = 1, ..., I.

Since u • X follows the standard normal distribution with values in R, the levels a_i are usually chosen as quantiles of the standard normal distribution. Then, simulating normal random vectors conditionally on being in a given stratum can easily be implemented by using the following result.
A natural approach is to proceed in two steps. 1. Importance sampling step: apply Algorithm 2.6 to compute an almost optimal change of measure with parameters (µ⋆, I_d).
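A sketch of this combination, under simplifying assumptions (the performance h, the shift µ⋆ and the sample sizes are illustrative; here µ⋆ is taken as known instead of being produced by Algorithm 2.6): the samples are drawn under N(µ⋆, I_d) conditionally on the stratum of u • (Y − µ⋆), and reweighted by the Gaussian likelihood ratio:

```python
import numpy as np
from statistics import NormalDist
from math import erfc, sqrt

rng = np.random.default_rng(7)
nd = NormalDist()
d, gamma, I = 2, 3.0, 10

def h(x):
    return x.sum(axis=-1) / np.sqrt(x.shape[-1])

mu = np.full(d, gamma / np.sqrt(d))   # assumed (near) optimal mean shift
u = mu / np.linalg.norm(mu)           # stratification direction

def sample_conditional(i, size):
    # Y ~ N(mu, I_d) given that u.(Y - mu) lies in the i-th equiprobable stratum
    xi = np.array([nd.inv_cdf(v) for v in rng.uniform(i / I, (i + 1) / I, size)])
    G = rng.standard_normal((size, d))
    G_perp = G - np.outer(G @ u, u)   # component orthogonal to u
    return mu + G_perp + np.outer(xi, u)

m = 2_000   # samples per stratum (uniform allocation, for simplicity)
est = 0.0
for i in range(I):
    Y = sample_conditional(i, m)
    lr = np.exp(-Y @ mu + 0.5 * mu @ mu)   # likelihood ratio g(Y)/f(Y; mu)
    est += (1.0 / I) * ((h(Y) >= gamma) * lr).mean()

p_exact = 0.5 * erfc(gamma / sqrt(2.0))    # ~ 1.35e-3
```

In this toy example the stratification direction is perfectly aligned with h, so the indicator is nearly constant within each stratum and almost all the residual variance comes from the likelihood ratio.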
Rare event simulation for electronic circuits

Large scale analytic problems
In this section, we validate the multilevel importance sampling approach on large scale problems. The test cases were generated automatically for the purpose of this test, and are not related to circuit simulation.
The response function is defined as h(x) = a • x_A + b • x_B, where x = (x_A, x_B) is a vector of independent standard normal N(0, 1) variables. The coefficients a and b are chosen such that max_{i∈A} |a_i| ≫ max_{i∈B} |b_i|. This makes the subset A the set of important variables; the remaining B components may be considered as noisy variables.
For our comparisons, we fixed the cardinality |A| = 10 and the vector a, and we gradually increased the dimension of the noisy components.The size of the batches is fixed at 1000 runs.

|B|     Prob.             CI@95%   Nb. Runs
1000    2.8039 × 10^−5    8.60%    5000
2000    2.9891 × 10^−5    8.91%    5000
10000   2.8239 × 10^−5    9.65%    10000
50000   2.9815 × 10^−5    9.83%    20200

The table above indicates that the approach depends only mildly on the input dimension of the problem. The differences, as the dimension grows, are mainly due to the difficulty of finding the 'optimal' subset of important variables during the search for the final mean shift. Increasing the batch size may help as well, because larger batches improve the accuracy of the dimension reduction algorithm.

Circuit examples
We systematically use the same settings for the importance sampling algorithm. The relative precision on the probability estimator is fixed at 10%, and the size of each batch is 1000 runs; the total number of runs is therefore a multiple of this size. In our tests, we report the "Speedup" as the ratio of the theoretical number of runs of plain MC over the effective number of runs needed to obtain the same relative accuracy (based on the Central Limit Theorem). For our experiments, electrical simulations are performed with the commercial SPICE simulators Eldo® and Eldo RF from Siemens EDA.

VCO LC Tank
This is an example of an analog IP block showing the IS algorithm for low probability estimation and quantile estimation. The circuit is a simple LC-tank VCO oscillating at 1.8 GHz (inspired by Craninckx and Steyaert [1997]). The circuit contains 28 Gaussian random variables. The oscillation frequency (FOSC) is obtained using the Steady-State analysis of Eldo RF. The phase noise (PNOISE) at 1 MHz offset from the carrier is also extracted, using the Steady-State noise analysis of Eldo RF. Both the oscillation frequency and the phase noise are strongly non-Gaussian: the distributions are heavily skewed (left and right respectively). This means that any kind of Gaussian extrapolation using a standard deviation estimated from a few hundred runs is completely wrong. The results of the ISMC analysis are reported in the following table. The Prob column indicates the tail probability and CI@95% the relative confidence interval at the 95% level. The Nb. Runs column gives the total number of runs for each estimator. The Speedup column finally provides the speedup factor over a plain Monte Carlo simulation (for the same accuracy).
Similarly, we give the results for the quantile estimation, where the tail probability is fixed at level 1.0 × 10^−4.
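For illustration, here is a minimal sketch of the mean-shift importance sampling estimator behind these probability figures; the response h (a plain sum of the inputs) and the hand-chosen shift are toy stand-ins for the circuit simulator and for the shift found by the algorithm:

```python
import numpy as np

def is_tail_prob(h, theta, gamma, n, d, seed=0):
    """Estimate P(h(X) >= gamma) for X ~ N(0, I_d) by sampling from the
    shifted law N(theta, I_d) and reweighting each sample by the
    likelihood ratio exp(-theta.x + |theta|^2 / 2)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)) + theta
    w = np.exp(-X @ theta + 0.5 * theta @ theta)
    return np.mean(w * (h(X) >= gamma))

# toy response: h(x) = sum(x), so h(X) ~ N(0, d); the exact tail
# probability for gamma = 8, d = 4 is P(N(0,1) >= 4) ~ 3.17e-5
d, gamma = 4, 8.0
theta = np.full(d, gamma / d)   # shift the mean into the failure region
p_hat = is_tail_prob(lambda X: X.sum(axis=1), theta, gamma, 100_000, d)
```

With the same 100 000 samples, plain MC would expect only about three failures; the shifted sampler places roughly half of its samples in the failure region and recovers the small probability through the weights.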

6T SRAM Bitcell
The circuit is a classical six-transistor SRAM bit cell designed in CMOS 45nm. A subset of the device model parameters carries variability information with Gaussian random variables. In this example, the total number of statistical parameters is 36 (6 per device). The cell is configured for measuring the write delay. The purpose of the Monte Carlo simulation is to estimate the probability that the write delay is larger than a certain threshold. This threshold may be used to define a "fail" behavior for yield estimation. Formally, the problem is to estimate the probability of failure and the associated confidence interval. This kind of low probability estimation is difficult (not to say impossible) with plain Monte Carlo, because the number of runs required to obtain a decent accuracy (in terms of confidence interval on the probability) is huge. In this particular example, the IS flow estimates a probability of 9.1893 × 10^−6 to within 9.99% (with 95% confidence level) in 8000 runs, whereas approximately 4 × 10^7 runs would be required with plain Monte Carlo.

Measure   Tail    Prob             CI@95%   Nb.Runs   Speedup
WDELAY    Right   9.1893 × 10^−6   9.99%    8000      5.2 × 10^3

We now give the details of the multilevel importance sampling phase for this circuit. Recall that intermediate levels are needed to reduce the difficulty of sampling a rare event: if the levels (γ_k)_k are chosen so that the events {h(X_j^(k+1)) ≥ γ_k} are never too rare, then a fairly small number of samples n can be used. We also provide some results on the CVaR computation for this circuit. As noted above, comparing the threshold γ and the CVaR may give a hint about the dispersion of the 'Failed' population of runs.
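A minimal sketch of this intermediate-level construction, with a toy response function and a cross-entropy-style shift update standing in for the paper's mean-shift search:

```python
import numpy as np

def adaptive_levels(h, gamma, d, n=1000, rho=0.1, seed=0):
    """Sketch of the intermediate-level idea: at each stage, draw a
    batch under the current mean-shift and take the next level as the
    (1 - rho)-quantile of the responses, capped at gamma, so that the
    event {h(X) >= gamma_k} is never too rare.  Updating the shift
    toward the mean of the elite samples is a cross-entropy-style
    stand-in, not the paper's exact search."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    levels = []
    for _ in range(50):                    # safety cap on the number of stages
        X = rng.standard_normal((n, d)) + theta
        y = h(X)
        g = min(np.quantile(y, 1.0 - rho), gamma)
        levels.append(g)
        theta = X[y >= g].mean(axis=0)     # move the shift toward failure
        if g >= gamma:
            break
    return levels, theta

# toy response: the levels climb to gamma in a handful of stages
levels, theta = adaptive_levels(lambda X: X.sum(axis=1), gamma=8.0, d=4)
```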

Measure   γ               CVaR            CI@95%   Nb.Runs   Speedup
WDELAY    5.3120 × 10^1   5.3388 × 10^1   10.1%    9000      5.1 × 10^3

It is important to note here that the CVaR is a by-product of the IS flow, which is mainly driven by the computation of the failure probability; the CVaR is computed from the same samples. In practice, one may pick a point among the failed runs as an accurate example for 'debugging', i.e. for understanding the reasons that lead to this failure. The analysis of the largest components of the mean-shift may further help to identify the variables, and therefore the electronic devices, at the origin of the typical failure point.
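The CVaR-from-the-same-samples idea can be sketched with the same toy setup (again, h and θ are illustrative stand-ins): the conditional expectation E[h(X) | h(X) ≥ γ] is a ratio of two weighted Monte Carlo means.

```python
import numpy as np

def is_cvar(h, theta, gamma, n, d, seed=0):
    """Estimate CVaR = E[h(X) | h(X) >= gamma] from the same
    mean-shifted samples and likelihood-ratio weights used for the
    failure probability: a ratio of two weighted means."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)) + theta
    w = np.exp(-X @ theta + 0.5 * theta @ theta)
    y = h(X)
    fail = y >= gamma
    return np.mean(w * y * fail) / np.mean(w * fail)

# toy response h(x) = sum(x): the exact CVaR at gamma = 8 with d = 4 is
# 2 * phi(4) / Q(4) ~ 8.45, only slightly above the threshold
d, gamma = 4, 8.0
theta = np.full(d, gamma / d)
cvar = is_cvar(lambda X: X.sum(axis=1), theta, gamma, 100_000, d)
```

A CVaR close to γ, as in the table above, suggests the failed population is concentrated just beyond the threshold, which is typical of thin tails.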

StdCell LH
This circuit is a flip-flop, or latch, extracted from a library of standard cells. A flip-flop can be used to store state information and is a fundamental storage element in the sequential logic of digital electronic systems such as computers. The number of random variables is fixed at 66. The "LAB3" output represents the measure of a delay (in seconds).

Measure   Tail    Prob             CI@95%   Nb.Runs   Speedup
LAB3      Right   1.2493 × 10^−9   8.05%    7000      6.8 × 10^7

The speedup here is quite impressive, but note that in general the testbenches used for such standard cells contain multiple output measures. Because the samples cannot be shared across the different estimators, the total number of runs will be a multiple of the number of computed quantiles and/or probabilities.

StdCell MUX4
This circuit, a multiplexer (or mux), is another fundamental block extracted from a library of standard cells. The number of random variables is fixed at 116. The output response chosen for this experiment is a measure of delay (in seconds) on a critical path; it is therefore a time-domain analysis.
We are interested here in the computation of quantiles at some predefined failure probabilities.

Measure   Failure Prob.      Quantile           CI@95%   Nb.Runs   Speedup
LAB179    1.34990 × 10^−3    2.55684 × 10^−10   0.72%    4000      3.2
-         9.86588 × 10^−10   2.78912 × 10^−10   0.22%    6000      3.9 × 10^6

As noted in the previous example, if multiple circuit outputs need to be characterized, then the speedup of 3.2 obtained for the first failure probability (approx. 1.3 × 10^−3) may not compensate for the total number of runs required to estimate each quantile individually. A plain Monte Carlo simulation may do the job, because all the quantiles can be computed on the same samples. This option is no longer available when very low failure probabilities are sought; in such cases, the IS algorithm prevails over plain MC.
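One standard way to read a quantile at a fixed tail probability off the same weighted samples can be sketched as below (toy response, hand-chosen shift; this is a common construction, not necessarily the paper's exact estimator):

```python
import numpy as np

def is_quantile(h, theta, p_target, n, d, seed=0):
    """Estimate the upper quantile q solving P(h(X) >= q) = p_target:
    sort the mean-shifted responses in decreasing order and return the
    value at which the cumulative likelihood-ratio mass of the right
    tail first reaches p_target."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)) + theta
    w = np.exp(-X @ theta + 0.5 * theta @ theta)
    y = h(X)
    order = np.argsort(y)[::-1]             # largest responses first
    tail_mass = np.cumsum(w[order]) / n     # estimate of P(h(X) >= y)
    k = np.searchsorted(tail_mass, p_target)
    return y[order][k]

# toy response h(x) = sum(x) ~ N(0, 4): the quantile at tail probability
# P(N(0,1) >= 4) ~ 3.167e-5 is exactly 8.0
d = 4
theta = np.full(d, 2.0)   # shift of the kind a previous probability run yields
q = is_quantile(lambda X: X.sum(axis=1), theta, 3.167e-5, 100_000, d)
```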

Memory Cell
This circuit represents a block of memory simulated together with its peripheral circuitry. The number of random variables is fixed at 2096. The simulation output here is the read delay, measured in seconds.

Measure   Tail    Prob             CI@95%   Nb.Runs   Speedup
READ10    Right   3.4506 × 10^−8   8.96%    9000      1.5 × 10^6

This last example is provided to demonstrate the effectiveness of the IS algorithm on a rather large problem. This is a time-domain analysis requiring several minutes for a single nominal simulation on a modern architecture. Our IS algorithm can run in fully parallel mode, taking advantage of compute farms.

Proposition 2.4. Let n(m) be an increasing function of m tending to infinity with m, and assume the assumptions of Proposition 2.3 hold. The following results hold. (i.) The estimator M_m(θ_n(m)) is an unbiased and convergent estimator of E[ϕ(X)];

Lemma 4.1. Let a < b be real numbers and X ∼ N(0, I_d). Let U be a random variable with uniform distribution on [0, 1] and Y ∼ N(0, I_d), both independent of X. Set Z = Φ^−1(Φ(a) + U(Φ(b) − Φ(a))), where Φ is the cumulative distribution function of the standard normal distribution. Then, uZ
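The construction in the lemma (whose conclusion is truncated in this excerpt) uses the standard inverse-CDF sampler for a standard normal restricted to [a, b]; a minimal sketch:

```python
from statistics import NormalDist

def truncated_normal(a, b, u):
    """Lemma 4.1 construction: for u in [0, 1],
    Z = Phi^{-1}(Phi(a) + u * (Phi(b) - Phi(a)))
    always lies in [a, b]; for U uniform on [0, 1] this is the usual
    inverse-CDF sampler for N(0, 1) conditioned on [a, b]."""
    nd = NormalDist()
    return nd.inv_cdf(nd.cdf(a) + u * (nd.cdf(b) - nd.cdf(a)))

z = truncated_normal(2.0, 3.0, 0.5)   # a point inside [2, 3]
```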

Table. The following table provides the first iterations of this exploration phase.