In October 1997, the Multidisciplinary Center for Earthquake
Engineering Research (formerly the National Center for Earthquake Engineering Research)
and the Federal Highway Administration (FHWA) sponsored a two-day workshop, Ground
Motion Methodologies for the Eastern United States, to evaluate ground motion modeling
methods applicable in the eastern U.S. The predictive methods were to be assessed for
their ability to produce time histories appropriate for use in engineering applications.
The intent of the workshop was not to rank various modeling methodologies, but rather to
evaluate the state-of-the-art for strong ground motion prediction in the region and the
variability of time histories from different modeling methods. Further, the workshop
served to introduce the participants to the concept of formal model validation and its
application to developing synthetic motions. This focus on practicality responded to the user
community's need to evaluate the credibility of synthetic time histories developed for
specific projects and the lack of criteria on which to base these evaluations.
Two issues were paramount in the evaluation of the time histories: the
peak amplitudes of the ground motion and the non-stationary character of the time history.
The models were assessed against the following measures:
- the ability of the methods to predict the amplitude of the ground motion (median and
variability) expressed as elastic response spectra;
- their ability to define the non-stationary characteristics of the time history expressed
as duration of the acceleration, velocity and displacement;
- whether synthetic time histories require further scaling and, if so, what the scaling
criteria should be;
- what means can be used to evaluate synthetic time histories to ensure they are
appropriate for use in engineering applications.
Ultimately, the workshop resulted in recommendations as to the
seismological community's ability to predict absolute levels of expected shaking and to
judge whether synthetic motions required subsequent modification.
The engineering community involved in the seismic assessment of
eastern U.S. (EUS) facilities looks to the seismological community to define ground motion
time histories for seismic evaluation of structures. In previous highway projects, time
histories developed by different groups have been significantly different in both
amplitude and waveform. The engineering community requires criteria to evaluate the
adequacy of synthetic ground motions, whether defined by time histories or response
spectra, or other ground motion characteristics. They also require guidance regarding use
of finite fault modeling for near-fault motions. Finally, they require a cost-effective
approach to develop motions for standard application.
Because of the scarcity of recorded EUS strong ground motions for
comparisons, the engineering community lacks measures against which to judge the
attributes of synthetic time histories. Currently available attenuation relations provide
estimates of the response spectral values, but for evaluating time histories, estimates of
the peak ground velocity and peak ground displacement are also needed. Additionally,
measures of non-stationarity are needed to check the synthetic time histories:
the duration of the acceleration (and/or velocity and displacement), the slope of a Husid
plot for the motion, and a recommendation on one- or two-sided displacement histories.
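The Husid plot referred to here is the normalized cumulative squared acceleration (proportional to Arias intensity) plotted against time; its slope describes how the motion's energy arrives, and the interval between fixed energy fractions (commonly 5% and 95%) is a standard significant-duration measure. A minimal sketch follows; the synthetic record and the threshold fractions are illustrative assumptions, not workshop data:

```python
import numpy as np

def husid_and_duration(acc, dt, lo=0.05, hi=0.95):
    """Normalized Husid plot and the significant duration between
    the `lo` and `hi` energy fractions of an acceleration record."""
    # Arias intensity is proportional to the time integral of acc^2;
    # the constant of proportionality cancels after normalization.
    cum = np.cumsum(acc ** 2) * dt
    husid = cum / cum[-1]
    t = np.arange(len(acc)) * dt
    t_lo = t[np.searchsorted(husid, lo)]  # first time husid >= lo
    t_hi = t[np.searchsorted(husid, hi)]  # first time husid >= hi
    return husid, t_hi - t_lo

# Illustrative synthetic record: decaying random noise, not a real motion
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = rng.standard_normal(t.size) * np.exp(-0.3 * t)
husid, d5_95 = husid_and_duration(acc, dt)
```

The same function applied to a synthetic time history and to a target record gives a direct duration-based check of non-stationarity.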
Practitioners also need cost-effective methods to develop ground
motions for use in typical applications. Site-specific modeling can be costly and
generally is warranted only for the analysis of critical facilities. A standard library of
time histories for EUS earthquakes is needed for use in engineering evaluations. This
library should include well documented motions for a few representative cases and
guidelines on acceptable methods of scaling them.
Validation and Simulation Studies
The goal of the MCEER/FHWA workshop was not to set target spectra
or other acceptability criteria, but rather to evaluate synthetic time histories resulting
from various predictive models. The workshop focus on methods of predicting strong ground
motion was the first step in addressing the needs of the engineering community by
assessing the capabilities of available numerical simulation procedures. An element of
this effort consists of a validation exercise for each modeling method to check model
calibrations and test parameter sensitivities. A suite of simulations from each method is
needed to estimate the median ground motion and the variability.
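Summarizing such a suite is straightforward under the conventional assumption that ground-motion amplitudes are lognormally distributed: the median is the exponential of the mean log amplitude, and the variability is the standard deviation of the logs. A sketch with invented numbers (the array shapes and values are assumptions for illustration):

```python
import numpy as np

def summarize_suite(sa):
    """Median and log-standard-deviation of spectral amplitudes
    across a suite of simulations.

    sa : array of shape (n_realizations, n_periods), all values > 0.
    Statistics are taken on ln(sa), the usual lognormal convention."""
    ln_sa = np.log(sa)
    median = np.exp(ln_sa.mean(axis=0))
    sigma_ln = ln_sa.std(axis=0, ddof=1)
    return median, sigma_ln

# Illustrative suite: 50 realizations at 10 spectral periods,
# drawn lognormally about a median of 0.2 with sigma_ln = 0.5
rng = np.random.default_rng(1)
suite = np.exp(rng.normal(np.log(0.2), 0.5, size=(50, 10)))
med, sig = summarize_suite(suite)
```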
This approach has been adopted in several recent studies including the
1990 Diablo Canyon Long Term Seismic Program (LTSP; an in-depth evaluation of the seismic
hazard and risk at the plant), the 1993 EPRI study of EUS ground motions, the 1995
Southern California Earthquake Center study of scenario earthquakes in southern
California, the 1996 Yucca Mountain (Nevada) study of scenario earthquakes, and the 1997
Yucca Mountain probabilistic seismic hazard analysis. The MCEER/FHWA workshop was
constructed using what was learned from these studies in terms of how to organize the
exercises (necessary constraints), how to validate the models, and how to compare the
results. Several participants in the workshop also contributed to one or more of these earlier studies.
The validation is intended to evaluate how well the models can predict
ground motion from a past earthquake. Each modeler estimates ground motion for a recorded
earthquake using source, path, and site parameters that are appropriate for the event and
optimizing other model parameters to provide the best data fit. Comparisons of the
predicted ground motions with the recorded motions result in model misfits to the data,
an important element of the uncertainty in future estimates of ground motion in postulated
earthquakes. Although comparisons against recordings from more than one earthquake are
needed to validate a model, a single-earthquake validation exercise was performed in this
workshop to demonstrate the concept and to provide a rough evaluation of the models' adequacy.
In the simulation exercise, the numerical models are used to predict
ground motion for a future earthquake. Select parameters (such as event magnitude, fault
geometry, station locations, site or path parameters) are fixed and multiple realizations
are performed that randomize the event-specific parameters that were optimized in the
validation exercise. The predicted ground motions from the alternative modeling methods
are summarized by the median ground motion and standard deviation (variability).
Treatment of Variability
The modeling methods used by the different groups include different
sets of source, path, and site parameters. To track the variability of the model
prediction, each model parameter must be declared as "fixed" or
"event-specific." Event-specific parameters are optimized to the best value for
each past earthquake considered in the validation exercise. Since the event-specific
parameters are unknown for future earthquakes, they must be randomized in the scenario
simulations. Fixed model parameters are not randomized in the scenario simulations because
the effect of the variability of these parameters is captured in the misfits to the
recorded data in the validation exercise (assuming enough earthquakes are included in the
validation to represent the variability).
For example, one model may assume that the stress-parameter of the
sub-events is constant for all earthquakes, whereas another model may assume that the
stress-parameter is event-specific. The first model may accurately predict the median
ground motion from a suite of past earthquakes (i.e., it is unbiased), but it will probably
have a poorer fit to individual earthquakes than the second model. When predicting
ground motions for a future earthquake, the first model would keep the sub-event
stress-parameter fixed, but the second model would have to specify a distribution of the
stress-parameter and then sample the distribution for a suite of simulations.
This leads to two types of variability of the predicted ground motions.
The variability from the misfit of the predicted ground motions to recorded ground motions
from past earthquakes is called "modeling variability." The modeling variability
reflects the limitations of the model to predict the ground motion even when all of the
event-specific parameters are known. In the context of the model, these variations are
unexplainable randomness. The modeling variability can only be computed by comparing
predicted ground motions to observed ground motions.
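Under the lognormal convention, the modeling variability is simply the standard deviation of the log residuals between recorded and predicted amplitudes, and the mean residual is the model bias. A sketch with invented numbers (the observed and predicted values are assumptions for illustration, not validation data):

```python
import numpy as np

def modeling_bias_and_sigma(observed, predicted):
    """Model bias and modeling variability from validation misfits.

    Residuals are taken in natural-log space; both inputs must be
    positive amplitude measures (e.g., PGA or spectral acceleration)."""
    resid = np.log(observed) - np.log(predicted)
    bias = resid.mean()              # > 0 means the model underpredicts
    sigma_modeling = resid.std(ddof=1)
    return bias, sigma_modeling

# Illustrative validation set: five stations from one recorded event
obs = np.array([0.21, 0.15, 0.30, 0.12, 0.25])   # hypothetical recorded PGA (g)
pred = np.array([0.18, 0.17, 0.26, 0.10, 0.28])  # hypothetical predictions (g)
bias, sigma_m = modeling_bias_and_sigma(obs, pred)
```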
The variability due to variations in event-specific parameters for
future earthquakes is called "parametric variability." This represents the
variability of the ground motion that results from varying the event-specific source
parameters. In contrast to the modeling variability, this source of variability is
understood. The parametric variability is computed using multiple realizations of the
simulation process that sample the range of event-specific parameters.
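A minimal Monte Carlo sketch of parametric variability follows. It assumes, for illustration only, a point-source scaling in which high-frequency amplitude varies as the 2/3 power of the sub-event stress parameter; because ln(amplitude) is then linear in ln(stress), the resulting parametric sigma should approach 2/3 of the assumed log-variability of the stress parameter:

```python
import numpy as np

def parametric_sigma(median_stress, sigma_ln_stress, n=10000, seed=2):
    """Parametric variability of a toy amplitude that scales as
    stress**(2/3) (an assumed point-source approximation).

    The event-specific stress parameter is sampled lognormally;
    the function returns the std of ln(amplitude) over the suite."""
    rng = np.random.default_rng(seed)
    stress = median_stress * np.exp(rng.normal(0.0, sigma_ln_stress, n))
    ln_amp = (2.0 / 3.0) * np.log(stress)   # constant terms drop out of the std
    return ln_amp.std(ddof=1)

# Illustrative values: median stress 100 bars, sigma_ln(stress) = 0.6,
# so the parametric sigma should be close to (2/3) * 0.6 = 0.4
sig_param = parametric_sigma(median_stress=100.0, sigma_ln_stress=0.6)
```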
To compare the variability of ground motion predictions from
alternative models, it is important to keep track of both the modeling variability and the
parametric variability. The total variability is the combination of the modeling and
parametric variability. In general, as more event-specific parameters are included in the
model, the modeling variability is shifted to parametric variability. Whether the total
variability goes up or down as more event-specific parameters are included depends on how
well the distribution of the event-specific parameters for the events used in the
validation agrees with the distribution assumed for those parameters in the simulations.
If a large enough sample of events is used in the validation, then the total variability
is unlikely to change as more event-specific parameters are used: the reduction in the
modeling variability is offset by a corresponding increase in the parametric variability.
There is, however, an advantage to shifting modeling variability to parametric
variability: the cause of the parametric variability is understood; whereas, the cause of
the modeling variability is not explicitly understood.
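When the two components are treated as independent, the combination is in quadrature, which also makes the trade-off described above explicit: shifting variability from modeling to parametric leaves the total unchanged only when the sum of squares is preserved. A sketch (the sigma values are invented for illustration):

```python
import math

def total_sigma(sigma_modeling, sigma_parametric):
    """Combine modeling and parametric variability in quadrature,
    assuming the two components are independent."""
    return math.hypot(sigma_modeling, sigma_parametric)

# Illustrative model: sigma_modeling = 0.5, sigma_parametric = 0.3
sigma_total = total_sigma(0.5, 0.3)

# Moving some modeling variability into the parametric component
# gives the same total only when the sum of squares is preserved:
sigma_shifted = total_sigma(0.4, math.sqrt(sigma_total ** 2 - 0.4 ** 2))
```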
The validation has two purposes. First, it is intended to determine if
the model predictions are unbiased on average. Second, it provides an estimate of the modeling variability.
In this workshop, we have only used a single event in the validation
exercise. A single event is not sufficient to evaluate the model bias on average, nor does
it provide an accurate estimate of the modeling variability. A full validation was beyond
the scope of the workshop. Some of the models have been validated for a larger number of
events in previous studies. When available, we have included these more comprehensive
validation results in these proceedings in addition to the single-event validation.
Treatment of Uncertainty
The variability discussed above is called "aleatory"
variability. It represents variability that is considered to be random. In addition to
aleatory variability, there is "epistemic" uncertainty. Epistemic uncertainty is
uncertainty due to insufficient data (lack of information). In ground motion modeling,
epistemic uncertainty results from uncertainty in the distributions of parameter values.
For a fixed model parameter, there is epistemic uncertainty on the best
fixed value due to the small number of earthquakes used in the validation. For an
event-specific model parameter, there is epistemic uncertainty in the probability density
function for the parameter. Using the example of sub-event stress parameter again, if it
is a fixed parameter, there is epistemic uncertainty in the best average value due to the
small number of earthquakes used in the validation. If it is an event-specific parameter,
then there is uncertainty both in the median value and standard deviation used to
represent the range of sub-event stress parameter values for future earthquakes.
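As a rough illustration of how the validation sample size drives this epistemic uncertainty, the standard error of the estimated log-median shrinks as the square root of the number of validation events; the sigma value below is an assumption, not a workshop result:

```python
import math

def epistemic_se_of_log_median(sigma_ln, n_events):
    """Standard error of the estimated log-median from a validation
    set of n_events earthquakes -- a rough proxy for the epistemic
    uncertainty in a fixed parameter's best value."""
    return sigma_ln / math.sqrt(n_events)

# With an assumed log-standard-deviation of 0.5, a single validation
# event leaves the median poorly constrained, while ten events cut
# the standard error by about a factor of three:
se_1 = epistemic_se_of_log_median(0.5, 1)
se_10 = epistemic_se_of_log_median(0.5, 10)
```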
In previous studies, epistemic uncertainty has not typically been
assessed for individual models, but rather it has been assessed by comparing the median
and variability of ground motions from alternative credible models. (Here, credible
implies that the model has an acceptably small model bias in the model validation.) This
approach also incorporates the uncertainty in the basic underlying physical model used in
the numerical process.
Because epistemic uncertainty is due to lack of data, as more data
become available, epistemic uncertainty will be reduced. The additional data will provide
constraints on the distribution of event-specific parameters and the alternative modeling
methods should produce more similar results as they are modified based on additional data.