The direction cosines converged with a tolerance level of 0.005.

Task 7

y₁* = 0.1451 × √0.3 β = 0.07947β and y₂* = 0.9894 × √1.7 β = 1.29β. Using Equation 12.31, it can be shown that

0.0113564 − 9.594176 × 10⁻⁴ × 0.07947β − 2.74745 × 10⁻³ × 1.29β = 0

Solving, β = 3.14.

The reliability index converges with a tolerance level of 0.005. The reliability index for the correlated random variables case is considerably different from that observed for the uncorrelated case.

12.7 Reliability Evaluation Using Simulation

Reliability evaluation using sophisticated probability and statistical theories may not be practical for many practicing structural engineers. A simple simulation technique, however, makes it possible to calculate the risk or probability of failure without a working knowledge of the analytical techniques and with only a modest background in probability and statistics. Advances in computing power make simulation an attractive option for risk evaluation at the present time.

International experts agree that simulation can be an alternative for implementing the reliability-based design concept in practical design [19]. Lewis and Orav [20] wrote, "Simulation is essentially a controlled statistical sampling technique that, with a model, is used to obtain approximate answers for questions about complex, multi-factor probabilistic problems." They added, "It is this interaction of experience, applied mathematics, statistics, and computing science that makes simulation such a stimulating subject, but at the same time a subject that is difficult to teach and write about."

Theoretical simulation is usually performed numerically with the help of computers, allowing a more elaborate representation of a complicated engineering system than can be achieved by physical experiments, and it is often less expensive than physical models. It allows a designer to know the uncertainty characteristics being considered in a particular design, to use judgment to quantify randomness beyond what is considered in a typical codified design, to evaluate the nature of implicit or explicit performance functions, and to have control of the deterministic algorithm used to study the realistic structural behavior at the system level.

The method commonly used for this purpose is called the Monte Carlo simulation technique. In the simplest form of the basic simulation, each random variable in a problem is sampled several times to represent the underlying probabilistic characteristics. Solving the problem deterministically for each realization is known as a simulation cycle, trial, or run. Using many simulation cycles will give the probabilistic characteristics of the problem, particularly when the number of cycles tends to infinity. Using computer simulation to study the presence of uncertainty in the problem is an inexpensive experiment compared to laboratory testing. It also helps evaluate different design alternatives in the presence of uncertainty, with the goal of identifying the optimal solution.

12.7.1 Steps in Simulation

The Monte Carlo simulation technique has six essential elements [14]: (1) defining the problem in terms of all the random variables; (2) quantifying the probabilistic characteristics of all the random variables in terms of their PDFs and the corresponding parameters; (3) generating values of these random variables; (4) evaluating the problem deterministically for each set of realizations of all the random variables; (5) extracting probabilistic information from N such realizations; and (6) determining the accuracy and efficiency of the simulation. The success of implementing the Monte Carlo simulation in design will depend on how accurately each element is addressed. All these steps are discussed briefly in the following sections.

Step 1: Defining the problem in terms of all the random variables. The function to be simulated must be defined in terms of all the random variables present in the formulation. For example, if the uncertainty in the applied bending moment, Ma, at the midspan of a simply supported beam of span L, loaded with a uniform load w per unit length and a concentrated load P at the midspan, needs to be evaluated, the problem can be represented as

Ma = wL²/8 + PL/4    (12.32)

In this equation, if the span is assumed to be a known constant but w and P are random variables with specified statistical characteristics, then the applied moment is also a random variable. Its probabilistic characteristics can be evaluated using simulation.
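To make this step concrete, the following Python sketch propagates assumed statistics for w and P through Equation 12.32. It is only an illustration; the span, means, and standard deviations are hypothetical and not taken from the text, and w and P are treated as normal random variables purely for simplicity.

    import random

    span = 9.0                      # span L, m (assumed constant)
    w_mean, w_sd = 12.0, 1.5        # uniform load, kN/m (hypothetical normal)
    P_mean, P_sd = 50.0, 7.5        # midspan load, kN (hypothetical normal)

    N = 100_000                     # number of simulation cycles
    Ma = []
    for _ in range(N):
        w = random.gauss(w_mean, w_sd)
        P = random.gauss(P_mean, P_sd)
        Ma.append(w * span**2 / 8 + P * span / 4)   # Equation 12.32

    mean_Ma = sum(Ma) / N
    sd_Ma = (sum((m - mean_Ma) ** 2 for m in Ma) / (N - 1)) ** 0.5
    print(f"mean(Ma) = {mean_Ma:.1f} kN-m, sd(Ma) = {sd_Ma:.1f} kN-m")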

On the other hand, if the probability of failure of the same beam is of interest and MR denotes its bending moment capacity, the corresponding function to be simulated is

g( ) = MR − Ma = MR − (wL²/8 + PL/4)    (12.33)

In this case, MR is expected to be a random variable in addition to w and P. The probability of failure of the beam can be evaluated by studying cases where g( ) will be negative or where the applied moment is greater than the resisting moment.
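In code, this performance function could be written as the short sketch below. The failure criterion g < 0 is from the text; the function signature itself is simply one convenient choice.

    def g(MR, w, P, span):
        """Performance function of Equation 12.33: capacity minus demand."""
        Ma = w * span**2 / 8 + P * span / 4   # applied moment, Equation 12.32
        return MR - Ma                        # a negative value indicates failure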

Step 2: Quantifying the probabilistic characteristics of all the random variables. The uncertainties associated with most of the random variables used in structural engineering have already been quantified by their underlying distributions and the parameters needed to define them uniquely. The subject has been discussed in detail by Haldar and Mahadevan [14] and will not be discussed further here.

Step 3: Generating random numbers for all the variables. The generation of random numbers according to a specific distribution is the heart of Monte Carlo simulation. All modern computers have the capability to generate uniformly distributed random numbers between 0 and 1. The computer will produce the required number of uniform random numbers corresponding to an arbitrary seed value between 0 and 1. In most cases, these are known as pseudorandom numbers and provide a platform for all engineering simulations.

Since most random variables are not expected to be uniform between 0 and 1, it is necessary to transform a uniform random number ui between 0 and 1 into a random number with the appropriate statistical characteristics. The inverse transformation technique [14] is commonly used for this purpose. In this approach, the CDF of a random variable X, FX(xi), is equated to the generated random number ui. Thus

FX(xi) = ui,  or  xi = FX⁻¹(ui)

If X is a uniform random variable between a and b, and ui is a uniform random number between 0 and 1, then it can be shown that

xi = a + (b − a)ui

If X is a normal random variable with a mean of μX and a standard deviation of σX, then a normal random number xi corresponding to a uniform random number ui between 0 and 1 can be shown to be

xi = μX + σX Φ⁻¹(ui)

where Φ⁻¹ is the inverse of the CDF of a standard normal variable. Similarly, if X is a lognormal random variable with parameters λX and ζX, then xi can be generated according to the lognormal distribution as

xi = exp[λX + ζX Φ⁻¹(ui)]

Most computers will generate random numbers for commonly used distributions. If not, the above procedure can be used to generate random numbers for a specific distribution.
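A minimal sketch of the inverse transformation technique follows, using only the Python standard library (random.random supplies ui; statistics.NormalDist supplies Φ⁻¹). All distribution parameters below are assumed for illustration.

    import random
    from math import exp, log
    from statistics import NormalDist

    phi_inv = NormalDist().inv_cdf     # inverse CDF of the standard normal

    u = random.random()                # uniform random number between 0 and 1

    # Uniform between a and b: xi = a + (b - a) * ui
    a, b = 2.0, 5.0                    # assumed bounds
    x_uniform = a + (b - a) * u

    # Normal with mean mu and standard deviation sigma
    mu, sigma = 10.0, 2.0              # assumed parameters
    x_normal = mu + sigma * phi_inv(u)

    # Lognormal with parameters lam and zeta
    lam, zeta = log(10.0), 0.2         # assumed parameters
    x_lognormal = exp(lam + zeta * phi_inv(u))

    print(x_uniform, x_normal, x_lognormal)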

Step 4: Evaluating the problem deterministically for each set of realizations of all the random variables. N random numbers for each of the random variables present in the problem will give N sets of random numbers, each set representing a realization of the problem. Thus, deterministically solving the problem defined in Step 1 N times will give N sample points. The generated information will provide the uncertainty in the response variable. Using N sample points and standard procedures, all the necessary statistical information can be collected, as briefly discussed next.

Step 5: Extracting probabilistic information from N such realizations. Simulation can be used to evaluate the uncertainty in a response variable such as Ma in Equation 12.32. However, if the objective is only to estimate the probability of failure, the following procedure can be used.

If the value of g( ) in Equation 12.33 is negative, it indicates failure. Let Nf be the number of simulation cycles in which g( ) is negative and N the total number of simulation cycles. The probability of failure can be expressed as

pf = Nf / N    (12.40)
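Putting Steps 1 through 5 together for the beam example, a self-contained Python sketch might read as follows. It is purely illustrative: all means and standard deviations are hypothetical, and MR, w, and P are taken as normal only for simplicity.

    import random

    span = 9.0                        # span L, m (assumed constant)
    MR_mean, MR_sd = 350.0, 35.0      # moment capacity, kN-m (hypothetical)
    w_mean, w_sd = 12.0, 1.5          # uniform load, kN/m (hypothetical)
    P_mean, P_sd = 50.0, 7.5          # midspan load, kN (hypothetical)

    N = 1_000_000                     # total simulation cycles
    Nf = 0                            # cycles with g() < 0
    for _ in range(N):
        MR = random.gauss(MR_mean, MR_sd)
        w = random.gauss(w_mean, w_sd)
        P = random.gauss(P_mean, P_sd)
        g = MR - (w * span**2 / 8 + P * span / 4)   # Equation 12.33
        if g < 0:
            Nf += 1

    pf = Nf / N                       # Equation 12.40
    print(f"estimated pf = {pf:.2e}")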

Step 6: Determining the accuracy and efficiency of the simulation. The accuracy of the probability of failure estimated using Equation 12.40 is a major concern. The estimated probability of failure approaches the true value as N approaches infinity. When pf and/or N are small, a considerable amount of error is expected in the estimated value of pf. Haldar and Mahadevan [14] discussed the related issues in great detail; the following recommendation can be followed. In many structural engineering problems, the probability of failure could be smaller than 10⁻⁵, that is, on average only 1 out of 100,000 simulations would show a failure. At least 100,000 simulation cycles are therefore required even to observe this behavior. For a reasonable estimate, at least 10 times this minimum, that is, 1 million simulation cycles, is usually recommended to estimate a probability of failure of 10⁻⁵. Thus, if n random variables are present in a formulation to be simulated, n × 10⁶ random numbers are required. Simulation could therefore be cumbersome or tedious for structural reliability evaluation. However, simulation is routinely used to verify a new theoretical method.
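The recommendation above can be made concrete with a standard binomial sampling result (not from the original text): since Nf follows a binomial distribution, the coefficient of variation of the estimator pf = Nf/N is √[(1 − pf)/(N pf)].

    from math import sqrt

    def cov_pf(pf, N):
        """Coefficient of variation of the Monte Carlo estimate pf = Nf/N."""
        return sqrt((1.0 - pf) / (N * pf))

    # With pf = 1e-5 and N = 1e6 cycles (10 failures expected on average),
    # the estimate still carries roughly a 32% coefficient of variation.
    print(cov_pf(1e-5, 1_000_000))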

12.7.2 Variance Reduction Techniques

The discussion of simulation would not be complete without variance reduction techniques (VRTs). The concept behind simulation presented in the previous section is relatively simple; however, its application to structural engineering reliability analysis depends on the efficiency of the simulation. The attractiveness of the method improves greatly if the probability of failure can be estimated with a reduced number of simulation cycles, and this need led to the development of many VRTs. VRTs can be grouped in several ways [14]. One approach is to consider whether the variance reduction method alters the experiment by altering the input scheme, by altering the model, or by special analysis of the output. VRTs can also be grouped according to description or purpose (i.e., sampling methods, correlation methods, and special methods).

The sampling methods either constrain the sample to be representative or distort the sample to emphasize the important aspects of the function being estimated. Some of the sampling methods are systematic sampling, importance sampling, stratified sampling, Latin hypercube sampling, adaptive sampling, randomization sampling, and conditional expectation. The correlation methods employ strategies to achieve correlation between functions or different simulations to improve the efficiency. Some of the VRTs in correlation methods are common random numbers, antithetic variates, and control variates. Other special VRTs include partition of the region, random quadratic method, biased estimator, and indirect estimator. The VRTs can also be combined to further increase the efficiency of the simulation. The details of these VRTs cannot be presented here but can be found in Haldar and Mahadevan [14].
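As one concrete example of a correlation method, the sketch below applies antithetic variates to the beam problem: each set of uniform numbers ui is reused as 1 − ui, so the paired runs are negatively correlated and the combined estimate tends to have a smaller variance than the same number of independent runs. The beam statistics are the same hypothetical values used earlier, not values from the text.

    import random
    from statistics import NormalDist

    phi_inv = NormalDist().inv_cdf    # inverse CDF of the standard normal

    span = 9.0                        # span L, m (assumed constant)
    MR_mean, MR_sd = 350.0, 35.0      # hypothetical statistics, as before
    w_mean, w_sd = 12.0, 1.5
    P_mean, P_sd = 50.0, 7.5

    def g_from_uniforms(u1, u2, u3):
        """Performance function evaluated from three uniform numbers."""
        MR = MR_mean + MR_sd * phi_inv(u1)
        w = w_mean + w_sd * phi_inv(u2)
        P = P_mean + P_sd * phi_inv(u3)
        return MR - (w * span**2 / 8 + P * span / 4)

    N = 100_000                       # antithetic pairs (2N evaluations of g)
    Nf = 0
    for _ in range(N):
        u = [random.random() for _ in range(3)]
        if g_from_uniforms(*u) < 0:
            Nf += 1
        if g_from_uniforms(*(1.0 - ui for ui in u)) < 0:   # antithetic run
            Nf += 1

    pf = Nf / (2 * N)
    print(f"estimated pf = {pf:.2e}")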

The type of VRT that can be used depends on the problem under consideration. It is usually impossible to know beforehand how much efficiency can be improved using a given technique. In most cases, VRTs increase the efficiency in the reliability estimation by using a smaller number of simulation cycles. Haldar and Mahadevan [14] noted that VRTs increase the computational difficulty for each simulation, and a considerable amount of expertise may be necessary to implement them. The most desirable feature of simulation, its basic simplicity, is thus lost.

12.7.3 Simulation in Structural Design

As mentioned earlier, simulation can be an attractive alternative for estimating the reliability of a structural system. Simulation enables reliability estimation considering realistic nonlinear structural behavior, the location of a structural element in a complicated structural system, the correlation characteristics of random variables, etc., whereas reliability evaluation using a classical method like FORM essentially evaluates the reliability at the element level. Thus, simulation has many attractive features. It also has some deficiencies. Like other reliability methods, it cannot estimate the reliability if the reference or allowable values are not known. The outcome of the simulation can differ depending on the number of simulation cycles and the characteristics of the computer-generated random numbers. One fundamental drawback is the time or cost of simulation. Huh and Haldar [21] reported that simulating 100,000 cycles on a supercomputer (SGI Origin 2000) to estimate the reliability of a one-bay, two-story steel frame subjected to only 5 s of earthquake loading may take more than 23 h. Using an ordinary computer, it may take several years.

It is clear that the simulation approach provides a reasonable alternative to the commonly used codified approach. However, several issues still need to be addressed before it can be adopted in structural design: the efficiency and accuracy of the deterministic algorithm used in the simulations, appropriate quantification of randomness, definition of the statistical characteristics and performance functions, selection of reference or allowable values, evaluation of the correlation characteristics of random variables in complex systems, simulation of random variables versus random fields, simulation of multivariate random variables, system reliability, the effect of load combinations, time-dependent reliability, available software to implement the simulation-based concept, etc. The documentation of case studies will help in this endeavor.
